RESEARCH Open Access
The SPARK Tool to prioritise questions for systematic reviews in health policy and systems research: development and initial validation
Elie A. Akl 1,2,3, Racha Fadlallah 2,4,5, Lilian Ghandour 6, Ola Kdouh 7, Etienne Langlois 8, John N. Lavis 3,9,10,11, Holger Schünemann 3,12 and Fadi El-Jardali 2,3,4,5*
Abstract
Background: Groups or institutions funding or conducting systematic reviews in health policy and systems research
(HPSR) should prioritise topics according to the needs of policymakers and stakeholders. The aim of this study was to
develop and validate a tool to prioritise questions for systematic reviews in HPSR.
Methods: We developed the tool following a four-step approach consisting of (1) the definition of the purpose and
scope of tool, (2) item generation and reduction, (3) testing for content and face validity, (4) and pilot testing of the
tool. The research team involved international experts in HPSR, systematic review methodology and tool development,
led by the Center for Systematic Reviews on Health Policy and Systems Research (SPARK). We followed an inclusive
approach in determining the final selection of items to allow customisation to the users' needs.
Results: The purpose of the SPARK tool was to prioritise questions in HPSR in order to address them in systematic
reviews. In the item generation and reduction phase, an extensive literature search yielded 40 relevant articles, which
were reviewed by the research team to create a preliminary list of 19 candidate items for inclusion in the tool. As part
of testing for content and face validity, input from international experts led to the refining, changing, merging and
addition of new items, and to organisation of the tool into two modules. Following pilot testing, we finalised the tool,
with 22 items organised in two modules – the first module including 13 items to be rated by policymakers and
stakeholders, and the second including 9 items to be rated by systematic review teams. Users can customise the
tool to their needs, by omitting items that may not be applicable to their settings. We also developed a user manual
that provides guidance on how to use the SPARK tool, along with signaling questions.
Conclusion: We have developed and conducted initial validation of the SPARK tool to prioritise questions for systematic
reviews in HPSR, along with a user manual. By aligning systematic review production to policy priorities, the tool will
help support evidence-informed policymaking and reduce research waste. We invite others to contribute with additional
real-life implementation of the tool.
Keywords: Systematic review, Health policy and systems research, Priority setting, Evidence-informed policymaking,
Health system strengthening, Development of a tool
* Correspondence: fe08@aub.edu.lb
2 Center for Systematic Reviews of Health Policy and Systems Research (SPARK), American University of Beirut, Beirut, Lebanon
3 Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, ON, Canada
Full list of author information is available at the end of the article
© The Author(s). 2017 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and
reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to
the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver
(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Akl et al. Health Research Policy and Systems (2017) 15:77
DOI 10.1186/s12961-017-0242-4
Content courtesy of Springer Nature, terms of use apply. Rights reserved.
Background
Health policy and systems research (HPSR) can strengthen
health systems, drive progress towards universal health
coverage and help deliver the promise of better health for
all [1–4]. Evidence from HPSR can help inform critical
health systems decisions, including who delivers health
services and where and how these services are financed
and organised [5–7]. It can also be used in the design and
evaluation of innovative health system interventions that
can help improve the quality of health services and reduce
health inequities [8].
Systematic reviews of HPSR can be of great help to
decision-makers as they constitute a more reliable and
robust source of evidence than individual studies, par-
ticularly when the findings of the individual studies are
complex or conflicting [9]. In addition to addressing the
effectiveness of policy options under consideration, they
can help clarify problems and their causes, and address
implementation, resource use, acceptability, feasibility
and impact on health equity [4, 10].
Groups or institutions funding or conducting systematic
reviews in HPSR should prioritise topics according to the
needs of policymakers and stakeholders [11, 12]. A priori-
tisation process can increase the likelihood that the best
available evidence informs health policy decision-making
[13, 14]. It can also promote optimal allocation of scarce
resources in order to pursue the review questions that are
likely to have a significant impact on knowledge, policy or
practice [15]. In addition, a carefully-planned and inclusive
priority setting process provides a platform for interaction
and trust building among diverse stakeholders, both of
which are important for the eventual uptake of research in
decision-making [16, 17].
A number of tools and approaches have been pub-
lished for the setting of research priorities [18, 19]. For
example, Viergever et al. [20] developed a nine-item
checklist that provides guidance on the planning of
research prioritisation processes. However, these tools
and approaches focus on setting priorities for health or
clinical research in general, with none specific to system-
atic reviews or HPSR. Some of the limitations hindering
their application to systematic reviews in HPSR include
their disease-driven orientation, lack of transparency in
the prioritisation process, inexplicit criteria for decision-
making, and time-consuming nature due to involvement
of multi-stage discussions or multiple iterations [18]. Im-
portantly, when HPSR is considered through technical,
disease-driven priority setting processes, it is systematic-
ally undervalued, thus contributing to fragmentation of
health systems research [21].
A tool to prioritise review questions in HPSR would
address the abovementioned gap. In addition, it could
help promote evidence-informed approaches to health
system reforms which, in turn, could contribute to
strengthened health systems and improved health out-
comes [22]. Therefore, the aim of this study was to
develop and validate a tool to prioritise questions for
systematic reviews in HPSR.
Methods
General approach
We followed a standard approach for instrument devel-
opment using the four steps described in the framework
by Kirshner and Guyatt [23]:
Step 1: Definition of the purpose and scope;
Step 2: Item generation and reduction;
Step 3: Testing for content and face validity;
Step 4: Pilot-testing.
The project team included researchers with expertise
in systematic review methodology, health policy and
systems research, and research tool development. The
project was led by the team of the Center for Systematic
Reviews on Health Policy and Systems Research
(SPARK) at the American University of Beirut. The Insti-
tutional Review Board at the American University of
Beirut approved the project.
Specific steps
Step 1: Definition of the purpose and scope
The research team defined the purpose and scope of the
tool based on internal discussions, and consultation with
a purposive sample of policymakers and other stake-
holders. The definition reflected the objective of the tool
to prioritise questions for systematic reviews in HPSR.
Step 2: Item generation and reduction
For item generation, we conducted a literature review to
capture any documents relevant to the objective of this
project. We used the following combination of terms to
search Medline and PubMed: ("priority setting" OR
"priority-setting" OR "setting of priorit*") AND (health).
We initially ran the search in June 2014 followed by an
updated search in March 2015. We also screened the
reference lists of relevant articles identified through the
search. The research team then abstracted from the
identified literature all potentially relevant items for
inclusion in the tool. For item reduction, the team mem-
bers created a preliminary list of candidate items by
removing obviously repetitive, redundant and irrelevant
items. We followed an inclusive approach in determining
the final selections of items to allow customisation to
the users' needs.
Step 3: Testing for content and face validity
In order to establish the content and face validity, we
sought input from content experts on the clarity of the
wording of items, the relevance of included items, the
need to include additional items and the potential
merging of items.
We sought input from three groups of content experts,
as detailed below.
Group 1: International experts in the field. We
shared the draft tool with six international experts
in health policy and systems research and systematic
review methodology. The draft tool contained the
preliminary list of candidate items alongside an
explanation for each item (Additional file 1). We
asked participants to rate their agreement on a
5-point scale (1, strongly disagree, to 5, strongly
agree) on whether or not each item should be
retained in the tool. In addition, participants had the
opportunity to suggest refinements and modifications
to each of the items as well as nominate new items
and suggest merging of items. We automatically
retained items rated favourably by at least half of the
participants. For the remaining items and for
additional items nominated by participants, final
decisions were made through consensus amongst the
research team members.
Group 2: Participants in a workshop on prioritising
questions for systematic reviews in health policy and
systems research at the 22nd Cochrane Colloquium
in Hyderabad, India. We grouped the revised items
(generated from group 1) into four domains, namely
problem, context, impact and technical, prior to
administering the tool to participants. We divided
participants into three focus groups, and asked each
to pick two domains for discussion. Then, we asked
participants to comment on the clarity and
comprehensiveness of the items within the selected
themes. Participants were then asked to reflect on
the tool as a whole. Members of the research team
took thorough notes of all the discussions.
Group 3: Participants in an interactive presentation
on the tool at the Third Global Symposium on
Health Systems Research held in Cape Town, South
Africa. The same version of the tool used for group
2 was presented to this group. The presentation was
followed by an open discussion about the tool and
its components. The research team used the
qualitative feedback from both groups 2 and 3 to
refine some of the items.
Step 4: Pilot testing
As part of pilot testing, we pre-tested the revised tool
through the interviewing of a purposive sample of three
international experts in the field of evidence-informed
policymaking, systematic review methodology and HPSR.
We conducted semi-structured interviews following a
brief guide developed by the team to elicit their input on
the clarity, readability and comprehensiveness of the items
and of the user manual. Then, we administered Module 1
of the revised tool to three policymakers (two from
Lebanon and one from South Africa). We asked the
policymakers to complete the module for two selected
review questions (once for each review question). Finally,
we asked them to reflect on the process.
We obtained final feedback on the general organisation
of the tool and the wording of the items from two separate
groups, (1) participants in a workshop on priority setting
at the 2017 Cochrane Canada meeting and (2) participants
in two consecutive webinars held by the Global Evidence
Synthesis Initiative.
Results
In the next section, we present the findings of each of
the four development steps as well as a description of
the current version of the tool and the user manual.
Step 1: Definition of the purpose and scope
The tool is intended to prioritise questions of HPSR in
order to address them in systematic reviews. HPSR is a
multidisciplinary field of research that investigates issues
such as how healthcare is financed, organised, delivered
and used; how health policies are prioritised, developed
and implemented; and how and why health systems do
or do not achieve health and wider social goals [24].
Ideally, the tool is used during formal processes such
as priority setting exercises. However, policymakers and
stakeholders can also use it on an individual basis, e.g.
when a formal process is not feasible. The tool needs to
be used independently for each review question being
considered for prioritisation.
Step 2: Item generation and reduction
We identified 40 relevant articles on previous priority set-
ting exercises, priority setting approaches and guidelines
on how to develop priority setting tools for research.
Members of the research team with expertise in system-
atic review methodology, and in health policy and systems
research, abstracted potentially relevant items from these
40 articles. Then, they reviewed these items and elimi-
nated those that were obviously repetitive, redundant or
unrelated to systematic reviews of HPSR. This created a
preliminary list of 19 candidate items along with explana-
tions of their meanings (Additional file 1).
Step 3: Testing for content and face validity
Group 1 involved 6 participants, group 2 involved 14
participants and group 3 involved more than 20 par-
ticipants. Participants included academic health re-
searchers, directors of research institutes/centres,
systematic review methodologists, members of health
professional associations and policymakers. Inputs from
participants led to iterative refinements of the items
and their wording.
Using the results of the quantitative and qualitative
feedback from participants, the research team held a
number of meetings and reached a consensus to:
Refine the wordings for some items, merge others
and add new ones. This brought the number of
items from 19 to 22. Additional file 2 shows the
detailed changes made to the initial list of 19 items
and to their meanings.
Split the tool into two modules. The first module
includes items relevant to policymakers and
stakeholders, while the second module includes
items relevant to systematic review teams.
Convert the revised list of items into declarative
statements. We opted for a 5-point scale with the
following anchors: "strongly disagree" (1), "disagree" (2),
"neither agree nor disagree" (3), "agree" (4),
"strongly agree" (5).
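The declarative-statement format with these anchors can be captured as a simple lookup table. The following is a minimal illustrative sketch only: the anchor labels come from the text, but the function name `describe_rating` and the code structure are assumptions, not part of the published tool.

```python
# Illustrative encoding of the 5-point agreement scale used to rate the
# declarative SPARK items. The anchor labels mirror those in the text.
LIKERT = {
    1: "strongly disagree",
    2: "disagree",
    3: "neither agree nor disagree",
    4: "agree",
    5: "strongly agree",
}

def describe_rating(item: str, rating: int) -> str:
    """Return a human-readable record of one rater's response to one item."""
    if rating not in LIKERT:
        raise ValueError("rating must be an integer from 1 to 5")
    return f"{item}: {rating} ({LIKERT[rating]})"

print(describe_rating("Addressing this question responds to national health priorities", 4))
# prints "Addressing this question responds to national health priorities: 4 (agree)"
```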
Step 4: Pilot testing
Based on the feedback from the three international
experts and consultations among the research team,
we refined and changed the wording for some of the
items, merged two items into one and added one
additional item, bringing the final number of items to
22 (Additional file 2). An average of 3 minutes was
required to complete Module 1 of the tool for each
review question.
The pilot testing confirmed the ease of use of the tool
and its relevance in prioritising review questions. Partici-
pants in the pilot testing made suggestions for the
rewording of a few items to enhance their clarity, but
they did not suggest additional items. The pilot testing
also revealed the need to assess the systematic review
team's available financial and human resources prior to
the prioritisation process. This would subsequently
inform the number of systematic reviews that the team
can conduct, thus allowing them to establish a plan to
translate the priorities to actual research.
Based on the final feedback on the tool, we developed
signaling questions for each item in order to minimise
variations in interpretation. We also reworded some of
the items to improve clarity. The discussions highlighted
the importance of keeping the use of the tool flexible in
terms of what items to include or omit.
The SPARK tool
In the current version of the tool, the 22 items are
organised in two modules. The first module includes 13
items relevant to policymakers and stakeholders, while the
second module includes 9 items relevant to systematic
review teams. The 22 items are presented in Box 1. The
complete tool, along with the signaling questions, is
presented in Additional file 3 as part of the user manual.
Users can customise the tool to their needs by omitting
items that may not be applicable to their settings.
Box 1 The 22 items included in the SPARK tool
Module 1 a (Relevance of question to policymakers and stakeholders)
1. Addressing this question responds to a problem that is of large burden
2. Addressing this question responds to a problem that is persistent
3. Addressing this question responds to the needs of the population
4. Addressing this question responds to the needs of decision-makers
5. Addressing this question responds to national health priorities
6. Addressing this question is a moral obligation
7. Addressing this question is expected to positively impact equity in health
8. Addressing this question is expected to positively impact population health
9. Addressing this question is expected to positively impact patient
experience of care
10. Addressing this question is expected to positively impact healthcare
expenditures
11. Addressing this question is expected to positively impact the overall
development of the country
12. Using the research evidence for this question is critical to inform
decision-making
13. Using the research evidence for this question is expected to be
supported by political actors
Module 2
(Appropriateness and feasibility for systematic review teams)
1. The question can be translated into an answerable systematic review
question
2. There are no available or adequate systematic reviews on this question
3. Primary studies are available for inclusion in the systematic review
4. There is adequate human capacity to undertake the systematic review
5. There is adequate operation/management capacity to undertake the
systematic review
6. The systematic review is feasible within the expected timeframe
7. Conducting the systematic review contributes to sustainable capacity
to conduct future reviews
8. Conducting the systematic review is a social responsibility
9. Conducting the systematic review does not raise any ethical concerns
a The item could relate to the problem when the question is not refined by the time of the priority setting exercise
The user manual
The user manual is divided into five sections, namely (1)
purpose of the SPARK tool, (2) components of the SPARK
tool, (3) preparatory work, (4) using the SPARK tool, and
(5) the SPARK tool (full version) (Additional file 3).
The recommended approach to administer the tool is
for policymakers and stakeholders to complete Module 1
in order to rank questions according to their relevance.
Module 2 is then applied to those relevant questions in
order to rank them according to the feasibility and
appropriateness of conducting a systematic review to
address them. The order of administration can be
reversed, for example, when there is a relatively large
number of questions to prioritise and a time constraint for
policymakers and stakeholders.
The use of the tool does not include assigning weights
to each item or to each module. However, the technical
team undertaking the prioritisation process may decide a
priori on different weightings for different items or for
the two respective modules. They may also define a
threshold score in order to consider the review question
a priority.
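As a minimal sketch of how such a weighting and threshold scheme could work, the snippet below computes a (optionally weighted) mean of 5-point item ratings per question, ranks the questions, and flags those meeting an a-priori threshold. The weights, threshold value, function names and example questions are all hypothetical assumptions; the SPARK tool itself prescribes no weighting or scoring formula.

```python
# Hypothetical scoring sketch: the SPARK tool assigns no weights, but a
# technical team may decide a priori on item weights and a threshold score.

def module_score(ratings, weights=None):
    """Weighted mean of 5-point ratings (1 = strongly disagree ... 5 = strongly agree)."""
    if weights is None:
        weights = [1.0] * len(ratings)  # default: all items weighted equally
    return sum(r * w for r, w in zip(ratings, weights)) / sum(weights)

def prioritise(questions, threshold=3.5):
    """Rank questions by score (highest first) and flag those meeting the threshold."""
    scored = [(q, module_score(r)) for q, r in questions.items()]
    scored.sort(key=lambda qs: qs[1], reverse=True)
    return [(q, round(s, 2), s >= threshold) for q, s in scored]

# Example: three hypothetical review questions, each rated on three items.
example = {
    "Q1: impact of user fees on access to care": [5, 4, 5],
    "Q2: task shifting in primary healthcare": [3, 3, 4],
    "Q3: effects of hospital accreditation": [2, 3, 2],
}
for question, score, meets_threshold in prioritise(example):
    print(question, score, meets_threshold)
```

Design note: keeping the weights optional mirrors the default (unweighted) use of the tool, while still letting a technical team plug in its own a-priori weights per item or per module.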
Discussion
In this article, we describe the development and initial
validation of a tool to prioritise questions for systematic
reviews in HPSR. The current version of the tool
consists of 22 items, in two modules. The first module
includes 13 items about question relevance (to be
answered by policymakers and stakeholders). These
items could also be framed around the problems when
the questions have not been refined by the time of the
priority setting exercise. The second module includes 9
items about the feasibility and appropriateness of
conducting a review (to be answered by systematic
review teams), typically only for those questions deemed
relevant by policymakers and stakeholders. Users can
customise the tool to their needs by omitting items that
may not be applicable to their settings. We also
developed a user manual that provides detailed guidance
on how to use the SPARK tool, along with signaling
questions. To our knowledge, this is the first tool
designed for the prioritisation of questions for
systematic reviews in HPSR.
Ideally, the use of Module 1 of the tool is performed
in a group setting, where policymakers and stakeholders
are physically together and can discuss the questions
(with subsequent refinement/addition of new questions),
rating them either individually or in a group. When it
is not feasible to have all policymakers and stakeholders
physically together, the rating can be performed
individually (e.g. by email or using a web-based survey).
The use of the tool assumes the existence of a pool of
potential questions (or problems) in need of prioritisation.
Therefore, preparatory work might be needed to generate
those questions (or problems). This can be in the form of a
literature review, surveys and informal consultations with
policymakers and stakeholders. In preparation for using
Module 1, it would be useful to prepare brief vignettes
containing background and contextual information on the
problem being addressed by each question of interest and
distribute these to policymakers [25]. Additionally, in
preparation for using Module 2, it would be ideal to
develop evidence maps of systematic reviews and of
primary studies addressing the questions of interest [26].
The mapping of systematic reviews would help in avoiding
duplication of efforts when a relevant, up-to-date and
sufficiently high-quality systematic review exists. The
mapping of primary studies would help in avoiding
questions that would result in empty systematic reviews.
As a key strength of this study, a multidisciplinary team
developed and validated the tool following a standard
methodology with the involvement of international experts
in HPSR, systematic review methodology and tool
development. We used a mix of surveys, qualitative
interviews and feedback from international experts to
enhance the validity of our findings. While some of the
items may not be applicable to all settings, we attempted to
address this by following an inclusive approach in
determining the final selection of items to allow
customisation to the users' needs. Nonetheless, the tool
could benefit from additional real-life testing in different
contexts to enhance its generalisability. In fact, we are plan-
ning to use the tool in priority setting exercises to identify
priority questions at both the national and regional level.
The SPARK tool will address the gap identified in the
scientific literature on setting priorities for systematic
reviews in the area of HPSR, as expressed by those
involved in evidence synthesis in the field of HPSR [24].
In addition, the tool will support evidence-informed
decision-making and practice by promoting the produc-
tion of policy-relevant systematic reviews. It will also
facilitate engaging policymakers and stakeholders in
prioritising review questions [22].
Using this tool is particularly relevant in the context of
low- and middle-income countries, where the capacity
of production of systematic reviews is limited and often
misaligned with policy needs and priorities [11, 27, 28].
The prioritisation can help channel limited resources to
areas of highest priority [27, 29]. Furthermore, by asses-
sing appropriateness of conducting systematic reviews,
the tool contributes to global efforts to reduce research
waste and avoid duplication of research efforts [30]. This
could particularly resonate with funding organisations.
For instance, as part of its efforts to minimise waste in
research, the National Institute for Health Research
requires systematic reviews of existing evidence as pre-
requisite for any new research [31].
While using both modules of the tool is required to
prioritise questions for systematic reviews, there are
cases where one could use only one of the two modules.
For example, one may opt to use Module 1 only to
generate national research priorities regardless of the
feasibility and appropriateness of conducting systematic
reviews. Additionally, in the setting of guideline
development, it could be used to inform the "priority
setting" domain in the guideline development checklist
[32], and the "priority of the problem" domain in the
GRADE Evidence to Decision tables [33]. Similarly,
Module 2 could be used to help decide on the
feasibility of a systematic review, e.g. when deciding
what questions to address in systematic review work
based on the results of a mapping exercise [26].
Finally, it is worth noting that priority setting is just a
first step in the knowledge framework [34]. Following a
priority setting exercise, it is important to document the
details of the prioritisation process to increase the
credibility and thus the acceptability of the final
products [20]. This should be followed up with evidence
synthesis, knowledge translation activities and impact
analysis [34], and will help with examining the degree to
which the priorities have been addressed in research, as
well as whether and how the research was used (or not)
in decision-making [20, 34].
Conclusion
The SPARK tool for prioritising questions for systematic
reviews in HPSR will address a gap in the scientific
literature. We believe the tool will be useful for groups or
institutions funding or conducting systematic reviews in
HPSR. Additionally, it will help support evidence-informed
policymaking and practice and reduce research waste by
aligning systematic review production to policy priorities.
We are currently experimenting with the tool at the SPARK
Center. We encourage people involved in health systems
and policy to use the tool and researchers in the field to
conduct further testing within their own contexts as a con-
tribution to refining the tool.
Additional files
Additional file 1: Preliminary list of 19 candidate items along with their
meanings. (PDF 90 kb)
Additional file 2: Iterative refinements of the items and their wording
through the development and validation process. (PDF 196 kb)
Additional file 3: User manual for the SPARK tool. (PDF 511 kb)
Abbreviations
GRADE: Grading of Recommendations, Assessment, Development and
Evaluations; HPSR: Health policy and systems research; SPARK: Center for
Systematic Reviews in Health Policy and Systems Research
Acknowledgements
We would like to thank all the participants who provided input on our tool.
Funding
This study was supported by the Alliance for Health Policy and Systems
Research, WHO, Geneva. Although one of the authors (EL) is employed by the
funder, the funder was not involved in the design of the study and collection,
analysis, and interpretation of data and in writing of the manuscript.
Availability of data and materials
All data generated or analysed during this study are included in this
published article and its Additional files.
Authors' contributions
EAA and FE contributed to conception and design, acquisition of data, analysis
and interpretation of data, and drafting of the manuscript. RF contributed to
design, acquisition of data, analysis and interpretation of data, and drafting and
finalising the manuscript. OK contributed to design, acquisition of data, and
analysis of data. LG contributed to interpretation of data and initial drafting of
the manuscript. EL, JL and HS contributed to interpretation of data, and critical
revision of the manuscript for important intellectual content. All authors read
and approved the final manuscript.
Ethics approval and consent to participate
This project was approved by the Institutional Review Board at the American
University of Beirut, Lebanon.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in
published maps and institutional affiliations.
Author details
1 Department of Internal Medicine, American University of Beirut, Beirut, Lebanon.
2 Center for Systematic Reviews of Health Policy and Systems Research (SPARK), American University of Beirut, Beirut, Lebanon.
3 Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, ON, Canada.
4 Department of Health Management and Policy, Faculty of Health Sciences, American University of Beirut, Beirut, Lebanon.
5 Knowledge to Policy (K2P) Center, American University of Beirut, Beirut, Lebanon.
6 Department of Clinical Epidemiology and Biostatistics, Faculty of Health Sciences, American University of Beirut, Beirut, Lebanon.
7 Primary Healthcare Department at the Ministry of Public Health, Beirut, Lebanon.
8 Alliance for Health Policy and Systems Research, World Health Organization, Avenue Appia 20, 1211 Geneva, Switzerland.
9 McMaster Health Forum, McMaster University, Hamilton, ON, Canada.
10 Centre for Health Economics and Policy Analysis, McMaster University, Hamilton, ON, Canada.
11 Department of Global Health and Population, Harvard T.H. Chan School of Public Health, Boston, MA, United States of America.
12 McMaster GRADE Centre and Department of Medicine, McMaster University, Hamilton, ON, Canada.
Received: 25 December 2016 Accepted: 16 August 2017
Akl et al. Health Research Policy and Systems (2017) 15:77 Page 6 of 7
Content courtesy of Springer Nature, terms of use apply. Rights reserved.
... Others have proposed tools and approaches for prioritizing topics or questions for systematic review. For example, Akl et al. developed the SPARK tool to prioritize questions for systematic reviews in health policy and systems research [14]. Also, Cochrane has produced a general guidance on the process of prioritizing topics for systematic reviews. ...
... Table 2 provides a description of the methods of development of the prioritization approaches. The most frequently reported step was reviewing the literature (n 5 5; 71%), whereas the least frequently reported step was the development of a user manual as part of the development process (n 5 1; 14%) [14]. Three studies (43%) followed a common pathway for development including conducting a literature review, stakeholder input (survey or interview), and pilot testing [14,28,31]. ...
... The most frequently reported step was reviewing the literature (n 5 5; 71%), whereas the least frequently reported step was the development of a user manual as part of the development process (n 5 1; 14%) [14]. Three studies (43%) followed a common pathway for development including conducting a literature review, stakeholder input (survey or interview), and pilot testing [14,28,31]. ...
Article
Objective: To systematically review the literature for proposed approaches and exercises conducted to prioritize topics or questions for systematic reviews and other types of evidence syntheses in any health-related area. Study design and setting: A systematic review. We searched Medline and CINAHL databases in addition to Cochrane website and Google Scholar. Teams of two reviewers independently screened the studies and extracted data. Results: We included 31 articles reporting on 29 studies: seven proposed approaches for prioritization and 25 conducted prioritization exercises (three did both). The included studies addressed the following fields: clinical (n=19; 66%), public health (n=10; 34%) and health policy and systems (n=8; 28%), with six studies (21%) addressing more than one field. We categorized prioritization into 11 steps clustered in 3 phases (pre-prioritization, prioritization and post-prioritization). Twenty-eight studies (97%) involved or proposed involving stakeholders in the priority setting process. These 28 studies referred to twelve stakeholder categories, most frequently to health care providers (n= 24; 86%) and researchers (n=21; 75%). A common framework of 25 prioritization criteria was derived, clustered in 10 domains. Conclusion: We identified literature that addresses different aspects of prioritizing topics or questions for evidence syntheses, including prioritization steps and criteria. The identified steps and criteria can serve as a menu of options to select from, as judged appropriate to the context.
... Using this SBR, prioritisation of work could then proceed with efficiency ( Figure 1) and in line with items 2-6 from module 2 of SPARK, a prioritisation tool for systematic reviews [48]. ...
... Using this SBR, prioritisation of work could then proceed with efficiency ( Figure 1) and in line with items 2-6 from module 2 of SPARK, a prioritisation tool for systematic reviews [48]. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 Figure 1: The process of systematic reviewing using a study-based register. ...
... Excessive updating wastes resource while inadequate updating could result in outdated or incomplete evidence being used [69]. While there are methods to detect if 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 updating a review could change the current conclusion/practice, almost all require an awareness of the available 'unused' relevant literature [48,, and some degree of screening and data checking to allow an informed decision. Within a well-constructed and maintained study register, this investment has already been made. ...
Preprint
Full-text available
BACKGROUND: Maintained study-based registers (SBRs) have, at their core, study records linked to, potentially, multiple other records such as references, data sets, standard texts and full text reports. Such registers can minimise and refine searching, de-duplicating, screening and acquisition of full text. SBRs can facilitate new review titles/updates and, within seconds, inform the team about the potential workload of each task. METHODS: We discuss advantages/disadvantages of SBRs and report a case of how such a register was used to develop a successful grant application and deliver results-reducing considerable redundancy of effort. RESULTS: SBRs saved time in question-setting and scoping and made rapid production of nine Cochrane systematic reviews possible. CONCLUSION: Whilst helping prioritise and conduct systematic reviews, SBRs improve quality. Those funding Information Specialists for literature reviewing could reasonably stipulate the resulting SBR to be delivered for dissemination and use beyond the life of the project.
... Cochrane is a global, independent, not-for-profit organisation with an international network of contributors who conduct and publish systematic reviews (termed Cochrane Reviews). To ensure the relevance of their reviews to health decision-making, Cochrane recently adopted strategic objectives related to the prioritisation of Cochrane Reviews [12] and a number of Cochrane groups have undertaken comprehensive priority-setting activities with stakeholders specific to their topic scope [13][14][15][16][17]. Importantly, considerable guidance exists to generate broadly scoped research priorities [18][19][20], but the methods and 'real-world' considerations to inform the subsequent formulation and selection of answerable systematic review questions are still developing [21,22]. ...
... Workshop part B: refining priority topics We undertook facilitated small group work to further refine the priority topics [15,33], inviting participants to explore the problem underpinning the priority, who it affects and offer potential solutions (Fig. 2, part B). Their reflections were used to inform the context, justification or background of a Cochrane Review, particularly important for complex reviews [21], and the commonly used population and intervention components of review inclusion criteria [34]. To do this, participants worked in small groups of up to five people, with a co-facilitator guiding the discussion, using a series of prompts ( Fig. 2, session B1, and Additional file 4, small group discussion facilitator template). ...
... As others have noted, turning stakeholder-generated priority topics into answerable, appropriate and feasible systematic questions is an iterative and collaborative process, usually conducted subsequent to any prioritisation activity and one that must inevitably include systematic review authors and editors [21,22,38]. We were unable to identify suitable guidance for this step, and therefore we developed an approach based on evidence mapping [31] and standard editorial processes of scope delineation and feasibility. ...
Article
Full-text available
Background: Priority-setting partnerships between researchers and stakeholders (meaning consumers, health professionals and health decision-makers) may improve research relevance and value. The Cochrane Consumers and Communication Group (CCCG) publishes systematic reviews in 'health communication and participation', which includes concepts such as shared decision-making, patient-centred care and health literacy. We aimed to select and refine priority topics for systematic reviews in health communication and participation, and use these to identify five priority CCCG Cochrane Reviews. Methods: Twenty-eight participants (14 consumers, 14 health professionals/decision-makers) attended a 1-day workshop in Australia. Using large-group activities and voting, participants discussed, revised and then selected 12 priority topics from a list of 21 previously identified topics. In mixed small groups, participants refined these topics, exploring underlying problems, who they affect and potential solutions. Thematic analysis identified cross-cutting themes, in addition to key populations and potential interventions for future Cochrane Reviews. We mapped these against CCCG's existing review portfolio to identify five priority reviews. Results: Priority topics included poor understanding and implementation of patient-centred care by health services, the fact that health information can be a low priority for health professionals, communication and coordination breakdowns in health services, and inadequate consumer involvement in health service design. The four themes underpinning the topics were culture and organisational structures, health professional attitudes and assumptions, inconsistent experiences of care, and lack of shared understanding in the sector. Key populations for future reviews were described in terms of social health characteristics (e.g. 
people from indigenous or culturally and linguistically diverse backgrounds, elderly people, and people experiencing socioeconomic disadvantage) more than individual health characteristics. Potential interventions included health professional education, interventions to change health service/health professional culture and attitudes, and health service policies and standards. The resulting five priority Cochrane Reviews identified were improving end-of-life care communication, patient/family involvement in patient safety, improving future doctors' communication skills, consumer engagement strategies, and promoting patient-centred care. Conclusions: Stakeholders identified priority topics for systematic reviews associated with structural and cultural challenges underlying health communication and participation, and were concerned that issues of equity be addressed. Priority-setting with stakeholders presents opportunities and challenges for review producers.
... Using this SBR, prioritisation of work could then proceed with efficiency ( Fig. 1) and in line with items 2-6 from module 2 of SPARK, a prioritisation tool for systematic reviews [48]. ...
... Excessive updating wastes resource while inadequate updating could result in outdated or incomplete evidence being used [69]. While there are methods to detect if updating a review could change the current conclusion/practice, almost all require an awareness of the available 'unused' relevant literature [48,, and some degree of screening and data checking to allow an informed decision. Within a well-constructed and maintained study register, this investment has already been made. ...
Article
Full-text available
Background: Maintained study-based registers (SBRs) have, at their core, study records linked to, potentially, multiple other records such as references, data sets, standard texts and full-text reports. Such registers can minimise and refine searching, de-duplicating, screening and acquisition of full texts. SBRs can facilitate new review titles/updates and, within seconds, inform the team about the potential workload of each task. Methods: We discuss the advantages/disadvantages of SBRs and report a case of how such a register was used to develop a successful grant application and deliver results—reducing considerable redundancy of effort. Results: SBRs saved time in question-setting and scoping and made rapid production of nine Cochrane systematic reviews possible. Conclusion: Whilst helping prioritise and conduct systematic reviews, SBRs improve quality. Those funding information specialists for literature reviewing could reasonably stipulate the resulting SBR to be delivered for dissemination and use beyond the life of the project.
... Nonetheless, considerable effort has been undertaken in the last several years to build a consensus on the defining features of HSG, HSG development strategies, tools to support decisions by health systems and policy leaders, and methods to enable the contextualization of recommendations [1,2,3,14,15,16,17]. ...
... In response, an international team of researchers and stakeholders in the HSG field created the Appraisal of Guidelines and REsearch and Evaluation -Health Systems (AGREE-HS). The AGREE-HS is a newly released tool to support the development, reporting and evaluation of HSG; studies completed to date have indicated that it is usable, reliable, and valid [8,14,15,18]. ...
Article
Full-text available
Abstract Health systems guidance (HSG) documents contain systematically developed statements or recommendations intended to address a health system challenge. The concept of HSG is fairly new and considerable effort has been undertaken to build tools to support the contextualization of recommendations. One example is the Appraisal of Guidelines for REsearch and Evaluation - Health Systems (AGREE-HS), created by international stakeholders and researchers, to assist in the development, reporting and evaluation of HSG. Here, we present the quality appraisal of 85 HSG documents published from 2012 to 2017 using the AGREE-HS. The AGREE-HS consists of five items (Topic, Participants, Methods, Recommendations, and Implementability), which are scored on a 7-point response scale (1=lowest quality; 7=highest quality). Overall, AGREE-HS item scores were highest for the 'Topic' and 'Recommendations' items (means above the mid-point of 4), while the 'Participants', 'Methods', and 'Implementability' items received lower scores. Documents without a specific health focus and those authored by the National Institute for Health and Care Excellence group, achieved higher AGREE-HS overall scores than their comparators. No statistically significant changes in overall scores were observed over time. This is the first time that the AGREE-HS has been applied, providing a current quality status report of HSG and identifying where improvements in HSG development and reporting can be made. Keywords: Health policy; Health systems; Health systems guidance; Health systems research; Quality appraisal.
... For example, is it likely that a replication will remain relevant to policy and practice for a useful length of time? Is it likely for replication results to lead to implementation by practitioners and policy makers?Priority setting tool (eg, SPARK tool,20 James Lind Alliance,21 CINARI 22 ) Yes / No Question 2. Is it likely that direct replication by repetition or conceptual replication by broadening or narrowing of the scope will address uncertainties, controversies, or the need for additional evidence related to: ...
... The ranking criteria were derived from a previous priority setting exercise conducted in the region 37 and complemented by additional criteria extracted from SPARK tool. 38 Each research question was ranked against the below set of criteria, on a 3-Likert scale (low, medium and high): ► Relevance: Is this question relevant to policy/community concerns? ► Urgency: Is the evidence on this question needed within the next 1-3 years? ...
Article
Full-text available
Introduction Strong primary health care (PHC) leads to better health outcomes, improves health equity and accelerates progress towards universal health coverage (UHC). The Astana Declaration on PHC emphasised the importance of quality care to achieve UHC. A comprehensive understanding of the quality paradigm of PHC is critical, yet it remains elusive in countries of the Eastern Mediterranean Region (EMR). This study used a multistep approach to generate a policy-relevant research agenda for strengthening quality, safety and performance management in PHC in the EMR. Methods A multistep approach was adopted, encompassing the following steps: scoping review and generation of evidence and gap maps, validation and ranking exercises, and development of an approach for research implementation. We followed Joanna Briggs Institute guidelines for conducting scoping reviews and a method review of the literature to build the evidence and gap maps. For the validation and ranking exercises, we purposively sampled 55 high-level policy-makers and stakeholders from selected EMR countries. We used explicit multicriteria for ranking the research questions emerging from the gap maps. The approach for research implementation was adapted from the literature and subsequently tailored to address the top ranked research question. Results The evidence and gap maps revealed limited production of research evidence in the area of quality, safety and performance management in PHC by country and by topic. The priority setting exercises generated a ranked list of 34 policy-relevant research questions addressing quality, safety and performance management in PHC in the EMR. The proposed research implementation plan involves collaborative knowledge generation with policy-makers along with knowledge translation and impact assessment. 
Conclusion Study findings can help inform and direct future plans to generate, disseminate and use research evidence to enhance quality, safety and performance management in PHC in EMR and beyond. Study methodology can help bridge the gap between research and policy-making.
... However, involving policymakers and stakeholders in setting priorities for research on health was highest among NGOs. Conducting priority setting is only the first step in KT and should be followed by evidence synthesis, development of KT products and impact assessment (18). Around half of respondents did not know whether a national health council that regulates funding priorities exists in their country. ...
Article
Full-text available
Background: Health research institutions in the Eastern Mediterranean Region (EMR) can play an integral role in promoting and supporting Knowledge Translation (KT). Assessing institutions' engagement in KT and bridging the "research- policy" gap is important in designing context-specific strategies to promote KT and informing funding efforts in the region. Aims: The objective of this study was to explore the engagement of EMR institutions in KT activities. Methods: A cross-sectional survey of institutions undertaking health research in the 22 EMR countries was undertaken. The survey covered institutional characteristics, institutional planning for research, national planning for health research, and knowledge management, translation and dissemination. Results: 575 institutions were contacted of which 223 (38.3%) responded. Half the sampled institutions reported conducting priority-setting exercises, with 60.2% not following a standardized approach. Less than half institutions reported frequently/ always (40.5%) involving policymakers and stakeholders in setting priorities for research on health. Only 26.5% of respondent institutions reported that they examine the extent to which health policymakers utilize their research results. Moreover, only 23.3% reported measuring the impact of their health research. Conclusions: There is still misalignment between national health research priorities and actual research production, and KT activities are still rarely undertaken by institutions in the EMR. National governments and international funding agencies are called to support research production and translation in the EMR. Institutions and researchers are also called to produce policy-relevant research and be responsive to the needs and priorities of policy-makers.
Article
It is widely recognised that the process of public health policy making (i.e., the analysis, action plan design, execution, monitoring and evaluation of public health policies) should be evidence-based, and supported by data analytics and decision-making tools tailored to it. This is because the management of health conditions and their consequences at the public health policy making level can benefit from the analysis of heterogeneous data, including health care device usage, physiological, cognitive, clinical and medication, personal, behavioural, lifestyle, occupational and environmental data. In this paper we present a novel approach to public health policy making in the form of an ontology, together with an integrated platform for realising this approach. Our solution is model-driven and makes use of big data analytics technology. More specifically, it is based on public health policy decision making (PHPDM) models that steer the public health policy decision making process by defining the data that need to be collected, the ways in which they should be analysed to produce evidence useful for public health policymaking, how this evidence may support or contradict various policy interventions (actions), and the stakeholders involved in the decision-making process. The resulting web-based platform has been implemented using Hadoop, Spark and HBase, and was developed in the context of EVOTION, a Horizon 2020 research programme on public health policy making for the management of hearing loss.
Thesis
Full-text available
Extended Abstract Summary Although narrative reviews remain important, overviewing the literature now often takes some form of systematic approach. Pivotal to being systematic is the searching and, with that, the role of Information Specialists. To offset a common criticism of the time-consuming nature of systematic reviewing, the Information Specialist must evolve real-world solutions for highly sensitive and specific searches and an efficient supply of complete, valid, and accessible data. This work describes the five-year evolution of a unique and powerful relational study-based register of randomised controlled trials (RCTs) (Paper 1). Meta-data from 19,964 RCTs have been extracted and a controlled language created to allow accurate classification and identification of only relevant studies for any given review (Paper 6). This advanced system almost eradicates the need for reviewers of trials to search for themselves, saving the usual waste in review preparation or grant application (Paper 4). The umbrella term 'meta-data' may include complete datasets – randomised trials' 'big data'. Although increasing numbers of individual patient datasets (IPD) exist, by far the most common data are the qualitative and quantitative information extracted – by hand or machine – from each study's set of publications. To be rigorous, this process of data extraction must be verifiable, with each tiny piece of data being traceable to its source. This should also prevent the continuous repetition of the same data extraction by successive generations of reviewers. Paper 2 describes pioneering work in creating an easy system to make this possible. Paper 3 calls for wide access to publicly funded datasets of extracted data from trials, and Paper 8 and Paper 9 describe why openness is important for reproducibility and how we could enhance the reproducibility of systematic reviews and make them a role model for other study designs.
Furthermore, a register working at this level of sophistication lends itself to semi-automation of the systematic reviewing process (Paper 4, Paper 5) and to novel uses of these data – including increasing the rigour of the methodology of the analyses of systematic reviews (Paper 5). These registers greatly facilitate new insights into research activity (presented in Paper 7). This paper reports patterns and trends that could support decisions about the future of the register, the process of systematic reviews and the direction of research overall. This work represents a step-change in the sophistication of the role of Information Specialists in systematic reviewing. The investment of effort over the last half-decade has resulted in a database with unparalleled functionality and completeness, with rich research potential, already relating to reliable, accurate datasets that can be supplied to any person or machine. The body of work presented in this thesis is a weave of four papers placed within 'background and developing novel methods' – although parts of these papers do also report results and conclusions. Those four 'background' papers lead to another two articles largely reporting results, and finally, three papers focus on 'conclusions and impact on policy'. Background and developing novel methods Paper 1: This introduces the idea of two types of registers to Information Science: (1) the reference-based register, based on the bibliographic data of separate, disconnected, multiple reports of a study; and (2) the study-based register, based on the entire data of one study, including its connected reports and its associated meta-data and bibliographic details – and, within this, (a) the automated study-based register, in which data and meta-data are widely available so that systematic reviews could start with meta-analysis.
The paper is the first to discuss the necessity, rationale, and steps for the development, utilization and maintenance of study-based registers, as well as the challenges and gains for organizations supporting systematic reviews. Finally, the paper presents an example of structured data in machine-readable XML and human-friendly tabular format, encouraging sharing of data, meta-data and the locations of extracted and tabulated data in the original reports. Paper 2: This follows the arguments from Paper 1 and describes three methods of locating data in the original reports. The paper, for the first time, compares the advantages and disadvantages of each method. The paper develops the argument to describe the practicalities of how actual tabular data records – including meta-data and the exact location of every small piece of qualitative and quantitative data – were created (work supported by HTA NIHR Programme grant HTA-14/27/02). Paper 2 ends with a call for open access sharing of this type of research data. Paper 4: This describes the use of a sophisticated trials register with a particular focus on saving time/effort/money. It describes and quantifies – including through a flow diagram – the processes by which tasks that usually take months to complete can be undertaken [better] in minutes through the use of a well-constructed and maintained study-based register. The paper discusses – and tries to quantify – the avoidable waste in the process of systematic reviewing and a radical approach to study search and screening. Paper 5: This describes the use of a sophisticated trials register with a particular focus on novel analysis and easy-to-use quantifiable means of increasing methodological rigour in network meta-analyses. High-grade registers are used to identify not only all relevant studies but also all relevant comparisons within those studies.
This work presents, for the first time, a simple mathematical formula that accurately predicts the number of potential comparisons within a single RCT or, more importantly, a network meta-analysis. For example, a single trial with two interventions generates one comparison; a three-arm trial, three; and an eight-arm trial no fewer than 28. Within the increasingly prevalent network meta-analyses, many arms exist for potential indirect comparisons, and the tested formula accurately enumerates this number. Those embarking on a network meta-analysis can pre-state which potential comparisons are of interest rather than doing this post hoc. Where a shortfall occurs in the number of comparisons actually utilised or reported, there is a considerable opportunity for the inclusion of bias, which can be at least partially guarded against by use of the simple pre hoc formula. Results Paper 6: This documents the detailed classification of all pharmacological interventions used in all schizophrenia RCTs. Data relating to interventions extracted from 19,964 RCTs were, for the first time, carefully categorised using a [necessarily] novel controlled language derived from the WHO ATC. This initiative now allows uniquely accurate searching for interventions, with resulting searches of ultra-high, pinpoint accuracy and no redundancy. Quantification of the workload involved in systematically reviewing an area or topic becomes noticeably more accurate, further magnified by the supply of full datasets. Paper 7: Using the curated register, I illustrate how new insights into publication, research and care can be gained from even relatively simple analysis of the now less confused body of trial evidence maintained within the study-based register. Conclusion and Impact on Policy Paper 3: To help the move toward full access to all data extracted from trials by people who are publicly funded, I planned, instigated, led and co-ordinated this international and senior collaborative authorship.
The paper encouraged the Cochrane Collaboration to develop global policy and take action regarding data sharing, referring to successful examples of such sharing from systematic reviews. This call did help move the argument forward within this largest producer of maintained reviews worldwide (Appendix A). Papers 8 and 9: Study-based registers can directly assist in the crisis over irreproducibility within research. Systematic review methods do have certain strengths because of the need to use two or three reviewers and through the development of automation. Unlike many who suggest adding new reproducibility tests into the systematic review process – increasing transparency but also making the process even more time-consuming – I discuss seven suggested strategies to enhance the reproducibility of systematic reviews: pre-registration, open methods, open data, collaboration, automation, reporting guidelines, and post-publication reviews. These two papers complement Paper 3's call for a data sharing policy in the Cochrane Collaboration. Furthermore, Papers 8 and 9 expand on the idea that, because systematic reviews are often updated and have existing protocols, and because relevant automation tools exist or are in development – allowing replication of processes in seconds – systematic reviews can be a role model of reproducibility for other research designs. References
Paper 1: Shokraneh F, Adams CE. Study-based registers of randomized controlled trials: Starting a systematic review with data extraction or meta-analysis. BioImpacts 2017; 7(4): 209-217. https://doi.org/10.15171/bi.2017.25
Paper 2: Shokraneh F, Adams CE. Increasing value and reducing waste in data extraction for systematic reviews: tracking data in data extraction forms. Systematic Reviews 2018; 6: 153. https://doi.org/10.1186/s13643-017-0546-z
Paper 3: Shokraneh F, Adams CE, Clarke M, Amato L, Bastian H, Beller E, et al. Why Cochrane should prioritise sharing data. BMJ 2018; 362: k3229.
https://doi.org/10.1136/bmj.k3229
Paper 4: Shokraneh F, Adams CE. Study-based registers reduce waste in systematic reviewing: discussion and case report. Systematic Reviews 2019; 8: 129. https://doi.org/10.1186/s13643-019-1035-3
Paper 5: Shokraneh F, Adams CE. A simple formula for enumerating comparisons in trials and network meta-analysis. F1000Research 2019; 8: 38. https://doi.org/10.12688/f1000research.17352.1
Paper 6: Shokraneh F, Adams CE. Classification of all pharmacological interventions tested in trials relevant to people with schizophrenia: A study-based analysis. Health Information and Libraries Journal 2021; https://doi.org/10.1111/hir.12366
Paper 7: Shokraneh F, Adams CE. Cochrane Schizophrenia Group's Study-Based Register of Randomized Controlled Trials: Development and Content Analysis. Schizophrenia Bulletin Open 2020; https://doi.org/10.1093/schizbullopen/sgaa061
Paper 8: Shokraneh F. Reducing waste and increasing value through embedded replicability and reproducibility in systematic review process and automation. Journal of Clinical Epidemiology 2019; 112: 98-99. https://doi.org/10.1016/j.jclinepi.2019.04.008
Paper 9: Shokraneh F. Reproducibility and replicability of systematic reviews. World Journal of Meta-Analysis 2019; 7(3): 66-71. http://dx.doi.org/10.13105/wjma.v7.i3.66
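The abstract's worked examples for Paper 5 (a two-arm trial gives 1 comparison, a three-arm trial 3, an eight-arm trial 28) correspond to the number of unordered pairs of arms, C(n, 2) = n(n−1)/2. As a rough illustration only (not the thesis's own code; the function name is mine), this can be sketched in Python:

```python
from math import comb

def potential_comparisons(arms: int) -> int:
    """Number of pairwise comparisons among trial arms: C(arms, 2) = arms*(arms-1)/2."""
    return comb(arms, 2)

# Worked examples quoted in the abstract:
print(potential_comparisons(2))  # 1  (two-arm trial)
print(potential_comparisons(3))  # 3  (three-arm trial)
print(potential_comparisons(8))  # 28 (eight-arm trial)
```

The same count extends to a network meta-analysis by summing over the arms of each included trial, which is what lets reviewers pre-state the comparisons of interest rather than enumerating them post hoc.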
Article
Full-text available
#### Summary points Healthcare decision making is complex. Decision-making processes and the factors (criteria) that decision makers should consider vary for different types of decisions, including clinical recommendations, coverage decisions, and health system or public health recommendations or decisions.1 2 3 4 However, some criteria are relevant for all of these decisions, including the anticipated effects of the options being considered, the certainty of the evidence for those effects (also referred to as quality of evidence or confidence in effect estimates), and the costs and feasibility of the options. Decision makers must make judgments about each relevant factor, informed by the best evidence that is available to them. Often, the processes that decision makers use, the criteria that they consider and the evidence that they …
Article
Full-text available
Background While health research is considered essential for improving health worldwide, it remains unclear how it is best organized to contribute to health. This study examined research that was part of a Ghanaian-Dutch research program that aimed to increase the likelihood that results would be used by funding research that focused on national research priorities and was led by local researchers. The aim of this study was to map the contribution of this research to action and examine which features of research and translation processes were associated with the use of the results. Methods Using Contribution Mapping, we systematically examined how 30 studies evolved and how results were used to contribute to action. We combined interviews with 113 purposively selected key informants, document analysis and triangulation to map how research and translation processes evolved and contributions to action were realized. After each case was analysed separately, a cross-case analysis was conducted to identify patterns in the association between features of research processes and the use of research. Results The results of 20 of the 30 studies were used to contribute to action within 12 months. The priority setting and proposal selection process led to the funding of studies which were from the outset closely aligned with health sector priorities. Research was most likely to be used when it was initiated and conducted by people who were in a position to use their results in their own work. The results of 17 out of 18 of these user-initiated studies were translated into action. Other features of research that appeared to contribute to its use were involving potential key users in formulating proposals and developing recommendations. Conclusions Our study underlines the importance of supporting research that meets locally-expressed needs and that is led by people embedded in the contexts in which results can be used. 
Supporting the involvement of health sector professionals in the design, conduct and interpretation of research appears to be an especially worthwhile investment.
Article
Full-text available
Background: National health research for development (R4D) platforms in lower income countries (LICs) are few. The Health Research Capacity Strengthening Initiative (HRCSI, 2008-2013) was a national systems-strengthening programme in Malawi involved in national priority setting, decision-making on funding, and health research actor mobilization. Methods: We adopted a retrospective mixed-methods evaluation approach, starting with information gleaned from reports (HRCSI and Malawian) and databases (HRCSI). A framework of a health research system (actors and components) guided report review and interview guide development. From a list of 173 individuals involved in HRCSI, 30 interviewees were selected within categories of stakeholders. Interviews were conducted face-to-face or via telephone/Skype over 1 month, documented with extensive notes. Analysis of emerging themes was iterative among co-evaluators, with synthesis according to the implementation stage. Results: Major HRCSI outputs included (1) National research priority-setting: through the production of themed background papers by Malawian health researchers and broad consultation, HRCSI led the development of a National Health Research Agenda (2012-2016), widely regarded as one of HRCSI's foremost achievements. (2) Institutional research capacity: there was an overwhelming view that HRCSI had produced a step-change in the number of high calibre scientists in Malawi and in fostering research interest among young Malawians, providing support for around 56 MSc and PhD students, and over 400 undergraduate health-related projects. (3) Knowledge sharing: HRCSI supported research dissemination through national and institutional meetings by sponsoring attendance at conferences and through close relationships with individuals in the print media for disseminating information. 
(4) Sustainability: From 2011-2013, HRCSI significantly improved research systems, processes and leadership in Malawi, but further strengthening was needed for HRCSI to be effectively integrated into government structures and sustained long-term. Overall, HRCSI carried out many components relevant to a national health research system coordinating platform, and became competent at managing over half of 12 areas of performance for research councils. Debate about its location and challenges to sustainability remain open questions. Conclusions: More experimentation in the setting-up of national health R4D platforms to promote country 'ownership' is needed, accompanied by evaluation processes that facilitate learning and knowledge exchange of better practices among key actors in health R4D systems.
Article
Full-text available
Background: Given the context-specific nature of health research prioritization and the obligation to effectively allocate resources to initiatives that will achieve the greatest impact, evaluation of priority setting processes can refine and strengthen such exercises and their outcomes. However, guidance is needed on evaluation tools that can be applied to research priority setting. This paper describes the adaption and application of a conceptual framework to evaluate a research priority setting exercise operating within the public health sector in Ontario, Canada. Methods: The Nine Common Themes of Good Practice checklist, described by Viergever et al. (Health Res Policy Syst 8:36, 2010) was used as the conceptual framework to evaluate the research priority setting process developed for the Locally Driven Collaborative Projects (LDCP) program in Ontario, Canada. Multiple data sources were used to inform the evaluation, including a review of selected priority setting approaches, surveys with priority setting participants, document review, and consultation with the program advisory committee. Results: The evaluation assisted in identifying improvements to six elements of the LDCP priority setting process. The modifications were aimed at improving inclusiveness, information gathering practices, planning for project implementation, and evaluation. In addition, the findings identified that the timing of priority setting activities and level of control over the process were key factors that influenced the ability to effectively implement changes. Conclusions: The findings demonstrate the novel adaptation and application of the 'Nine Common Themes of Good Practice checklist' as a tool for evaluating a research priority setting exercise. The tool can guide the development of evaluation questions and enables the assessment of key constructs related to the design and delivery of a research priority setting process.
Article
Full-text available
Background There is an increasing interest worldwide to ensure evidence-informed health policymaking as a means to improve health systems performance. There is a need to engage policymakers in collaborative approaches to generate and use knowledge in real world settings. To address this gap, we implemented two interventions based on iterative exchanges between researchers and policymakers/implementers. This article aims to reflect on the implementation and impact of these multi-site evidence-to-policy approaches implemented in low-resource settings. Methods The first approach was implemented in Mexico and Nicaragua and focused on implementation research facilitated by communities of practice (CoP) among maternal health stakeholders. We conducted a process evaluation of the CoPs and assessed the professionals’ abilities to acquire, analyse, adapt and apply research. The second approach, called the Policy BUilding Demand for evidence in Decision making through Interaction and Enhancing Skills (Policy BUDDIES), was implemented in South Africa and Cameroon. The intervention put forth a ‘buddying’ process to enhance demand and use of systematic reviews by sub-national policymakers. The Policy BUDDIES initiative was assessed using a mixed-methods realist evaluation design. Results In Mexico, the implementation research supported by CoPs triggered monitoring by local health organizations of the quality of maternal healthcare programs. Health programme personnel involved in CoPs in Mexico and Nicaragua reported improved capacities to identify and use evidence in solving implementation problems. In South Africa, Policy BUDDIES informed a policy framework for medication adherence for chronic diseases, including both HIV and non-communicable diseases. Policymakers engaged in the buddying process reported an enhanced recognition of the value of research, and greater demand for policy-relevant knowledge.
Conclusions The collaborative evidence-to-policy approaches underline the importance of iterations and continuity in the engagement of researchers and policymakers/programme managers, in order to account for swift evolutions in health policy planning and implementation. In developing and supporting evidence-to-policy interventions, due consideration should be given to fit-for-purpose approaches, as different needs in policymaking cycles require adapted processes and knowledge. Greater consideration should be provided to approaches embedding the use of research in real-world policymaking, better suited to the complex adaptive nature of health systems.
Article
Full-text available
Background: Health research is difficult to prioritize, because the number of possible competing ideas for research is large, the outcome of research is inherently uncertain, and the impact of research is difficult to predict and measure. A systematic and transparent process to assist policy makers and research funding agencies in making investment decisions is a permanent need. Methods: To obtain a better understanding of the landscape of approaches, tools and methods used to prioritize health research, I conducted a methodical review using the PubMed database for the period 2001-2014. Results: A total of 165 relevant studies were identified, in which health research prioritization was conducted. They most frequently used the CHNRI method (26%), followed by the Delphi method (24%), James Lind Alliance method (8%), the Combined Approach Matrix (CAM) method (2%) and the Essential National Health Research method (<1%). About 3% of studies reported no clear process and provided very little information on how priorities were set. A further 19% used a combination of expert panel interview and focus group discussion ("consultation process") but provided few details, while a further 2% used approaches that were clearly described, but not established as a replicable method. Online surveys that were not accompanied by face-to-face meetings were used in 8% of studies, while 9% used a combination of literature review and questionnaire to scrutinise the research options for prioritization among the participating experts. Conclusion: The number of priority setting exercises in health research published in PubMed-indexed journals is increasing, especially since 2010. These exercises are being conducted at a variety of levels, ranging from the global level to the level of an individual hospital. 
With the development of new tools and methods which have a well-defined structure - such as the CHNRI method, James Lind Alliance Method and Combined Approach Matrix - it is likely that the Delphi method and non-replicable consultation processes will gradually be replaced by these emerging tools, which offer more transparency and replicability. It is too early to say whether any single method can address the needs of most exercises conducted at different levels, or if better results may perhaps be achieved through combination of components of several methods.
Article
Full-text available
Those planning, managing and working in health systems worldwide routinely need to make decisions regarding strategies to improve health care and promote equity. Systematic reviews of different kinds can be of great help to these decision-makers, providing actionable evidence at every step in the decision-making process. Although there is growing recognition of the importance of systematic reviews to inform both policy decisions and produce guidance for health systems, a number of important methodological and evidence uptake challenges remain and better coordination of existing initiatives is needed. The Alliance for Health Policy and Systems Research, housed within the World Health Organization, convened an Advisory Group on Health Systems Research (HSR) Synthesis to bring together different stakeholders interested in HSR synthesis and its use in decision-making processes. We describe the rationale of the Advisory Group and the six areas of its work, and reflect on its role in advancing the field of HSR synthesis. We argue in favour of greater cross-institutional collaborations, as well as capacity strengthening in low- and middle-income countries, to advance the science and practice of health systems research synthesis. We advocate for the integration of quasi-experimental study designs in reviews of the effectiveness of health systems interventions and reforms. The Advisory Group also recommends adopting priority-setting approaches for HSR synthesis and increasing the use of findings from systematic reviews in health policy and decision-making.
Article
Full-text available
Systematic reviews of research are increasingly recognised as important for informing decisions across policy sectors and for setting priorities for research. Although reviews draw on international research, the host institutions and countries can focus attention on their own priorities. The uneven capacity for conducting research around the world raises questions about the capacity for conducting systematic reviews. A rapid appraisal was conducted of current capacity and capacity strengthening activities for conducting systematic reviews in low- and middle-income countries (LMICs). A systems approach to analysis considered the capacity of individuals nested within the larger units of research teams, institutions that fund, support, and/or conduct systematic reviews, and systems that support systematic reviewing internationally. International systematic review networks, and their support organisations, are dominated by members from high-income countries. The largest network comprising a skilled workforce and established centres is the Cochrane Collaboration. Other networks, although smaller, provide support for systematic reviews addressing questions beyond effective clinical practice, which require a broader range of methods. Capacity constraints were apparent at the levels of individuals, review teams, organisations, and system-wide. Constraints at each level limited the capacity at the levels nested within them. Skills training for individuals had limited utility if not allied to opportunities for review teams to practise the skills. Skills development was further constrained by language barriers, lack of support from academic organisations, and the limitations of wider systems for communication and knowledge management. All networks hosted some activities for strengthening the capacities of individuals and teams, although these were usually independent of core academic programmes and traditional career progression.
Even rarer were efforts to increase demand for systematic reviews and to strengthen links between producers and potential users of systematic reviews. Limited capacity for conducting systematic reviews within LMICs presents a major technical and social challenge to advancing their health systems. Effective capacity in LMICs can be spread through investing effort at multiple levels simultaneously, supported by countries (predominantly high-income countries) with established skills and experience.
Article
Full-text available
Research priority setting aims to gain consensus about areas where research effort will have wide benefits to society. While general principles for setting health research priorities have been suggested, there has been no critical review of the different approaches used. This review aims to: (i) examine methods, models and frameworks used to set health research priorities; (ii) identify barriers and facilitators to priority setting processes; and (iii) determine the outcomes of priority setting processes in relation to their objectives and impact on policy and practice. Medline, Cochrane, and PsycINFO databases were searched for relevant peer-reviewed studies published from 1990 to March 2012. A review of grey literature was also conducted. Priority setting exercises that aimed to develop population health and health services research priorities conducted in Australia, New Zealand, North America, Europe and the UK were included. Two authors extracted data from identified studies. Eleven diverse priority setting exercises across a range of health areas were identified. Strategies including calls for submission, stakeholder surveys, questionnaires, interviews, workshops, focus groups, roundtables, the Nominal Group and Delphi technique were used to generate research priorities. Nine priority setting exercises used a core steering or advisory group to oversee and supervise the priority setting process. None of the models conducted a systematic assessment of the outcomes of the priority setting processes, or assessed the impact of the generated priorities on policy or practice. A number of barriers and facilitators to undertaking research priority setting were identified. The methods used to undertake research priority setting should be selected based upon the context of the priority setting process and time and resource constraints. 
Ideally, priority setting should be overseen by a multi-disciplinary advisory group, involve a broad representation of stakeholders, utilise objective and clearly defined criteria for generating priorities, and be evaluated.