EVIDENT Guidance for Reviewing the
Evidence: a compendium of
methodological literature and websites
Dr Andrew Booth, BA Dipl Lib MSc PhD MCLIP School of Health & Related
Research(ScHARR), University of Sheffield, United Kingdom for and on behalf of
the EVIDENT Project
January 2015
Getting Started
Using the Research Literature
Reviewing the Literature
What does it mean to be systematic and transparent?
But what about time, quality, & money (resources)?
What are my choices?
   Summary Table of Evidence Products
   How Long have You Got?
An Overview of Review Types
   I need to see the overall picture…
      Mapping Review
      Scoping Review
      Review of Reviews
   I need to know what my options are for a particular decision…
      Evidence Briefing
   I need the evidence for a particular intervention fast…
      Evidence Summary
      Rapid Evidence Assessment
      Rapid Review
      Rapid Realist Review
   I need to build up a picture from existing related reviews…
      Umbrella Review
   I need to look at a specific topic in depth…
      Systematic Review of Quantitative Evidence
      Meta-Analysis
      Systematic Review of Qualitative Evidence
   I need to understand how an intervention or programme works…
      Systematic Review with Logic Model
      Realist Synthesis
Table 3 - Approach to realist review (from Rycroft-Malone, adapted from Pawson)
   What other choices are available?
      Framework Synthesis
      Narrative Synthesis
      Qualitative Comparative Analysis
Summary
References
Getting Started
So you have identified a priority issue relating to health and nutrition. What next? No doubt you will
want to identify a potential solution: not simply the first answer that comes to mind, but one that is
based upon the evidence. By evidence we mean potential solutions that have been evaluated in the
research literature, which we will use alongside local data and a knowledge of the attitudes and
values of our local population.
Research Literature + Local Data + Community Values = Evidence
Using the Research Literature
In the past a group of decision-makers might have come up with a “good idea” from a meeting or
discussion. The problem was that these good ideas would not usually have been evaluated: they
might only work in some settings but not others, they might work only some of the time, or they
might not even work at all! Even if a decision-maker had found a research study that looked at how
an idea worked in practice, s/he might have picked the first study that came to hand. S/he may even
have carefully selected a research study that supports what s/he wants to do. Clearly it is much
better to bring together a full set of studies, to weigh them all up alongside local data and
community values, and then make a decision on what, on balance, is the best way to tackle your
particular problem or issue. This process involves systematic approaches to reviewing the
literature.[1]
Reviewing the Literature
When someone reviews the literature they may still make similar mistakes to those made by
someone who picks just one article. They could select the first group of studies that comes to hand.
They could pick a group of studies that supports their own opinion. When someone is reviewing the
literature we want them to (i) search as well as they can for all studies on their particular topic, and,
(ii) find a group of studies that work together to provide an accurate representation of the topic and
any potential solutions.
In the past the author of a literature review often drew only on studies that were close at
hand.[2, 3] A review author might select only those studies that support their own opinion. A review would
be judged by the quality of the argument, not on the quality of the evidence. This is like awarding an
Olympic Gold Medal to the athlete who gave the best TV interview instead of presenting it to the
one who ran the fastest!
A well-conducted, rigorous literature review will seek to use all the relevant evidence and to provide
a full picture both in favour of and against each possible decision choice.[4] To this we can add two
further requirements: a well-conducted literature review must be systematic (so that people can
tell how much they can trust the review) and transparent (so that people can tell exactly what has
been done).
What does it mean to be systematic and transparent?
If a decision maker is to make a decision based upon the analysis of the literature that you have
produced they need to know that when you created the evidence product you took steps to
minimise bias. Bias refers to any systematic error that may result in a user of the research coming to
an inaccurate judgement about what the evidence says.[5] This would include selecting items that
only support the action being considered, excluding from the analysis any outcomes or results that
might result in the intervention or programme being viewed unfavourably, and using language in a
report that implies that an intervention or programme is more effective than it actually is. Because
the effects of all these types of bias are systematic they can be counterbalanced by stages of the
review process that are themselves systematic. For example, a systematic evidence product would
specify clear inclusion and exclusion criteria for potentially includable studies, would seek to report
all relevant outcomes whether positive or negative, and would frame the results and recommendations
in a way that is appropriate to the collective findings from the individual studies. Following a process
that has been documented beforehand, e.g. through a research proposal or review protocol,
minimises the likelihood of bias and, potentially, can increase the confidence of a reader in the
results and recommendations.
Related to the requirement to be systematic in the way the evidence product has been conducted is
the need to be transparent in the way that same product is reported. A user of research is able to
gain much more confidence in the quality of that report if they can clearly identify that the steps
through which the research has passed are reasonable and rigorous. For this reason, much attention
is currently focused on reporting standards for research studies, e.g. through the EQUATOR Network.
Transparency may be handled by producing a specific protocol for a substantive piece of work (e.g. a
systematic review). Alternatively, you might develop a standard operating procedure for a more
ephemeral series of products e.g. evidence briefings. In the latter case each product should be
checked for fidelity to the operating procedure and any deviations should be documented, together
with the likely implications for that specific product. Documenting the process or product as
completely as is feasible minimises the likelihood that the fitness for purpose of the evidence
product is challenged. It can also potentially increase the confidence of the reader in the results and
recommendations.
But what about time, quality, & money (resources)?
When you produce an evidence synthesis product you must make a trade-off between rigour,
timeliness and feasibility. These considerations are embodied in the Time-Quality-Money (TQM)
mnemonic.[1] Extending the Time taken to produce an output may offer the opportunity to improve
the Quality but it will increase the costs (Money) required for that product. Delivering a High-Quality
product within a narrowly constrained Timeframe may require that you expand the review team so
that more members are involved in production. However this may, in turn, result in further
challenges to the Quality by adding concerns about the consistency of processes and judgements
between members of a larger team of reviewers. Delivering a product within a limited Timeframe
without increasing the resources (Money) will invariably require that some compromises be
made to the Quality; for example, in searching fewer sources, in being more superficial when
assessing study quality, or in reducing the depth of analysis or the extent of an accompanying
interpretation.
What are my choices?
Fortunately, you have a wide choice of review methods from which you can select an appropriate
method for your own evidence review. Your final choice will be based upon:
1. The Type of Review Question you are asking
2. The Type and Quantity of Studies Available to Answer your Question
3. How your final Review is going to be used
4. The Skills, Resources and Expertise of Your Team
An Overview of Review Types
I need to see the overall picture…
Mapping Review
What is it?
A mapping review is “a secondary study that reviews articles related to a specific research topic”.¹ It
has three principal objectives: (i) to provide an overview of a research area to assess the existing
evidence,² (ii) to identify gaps in sets of primary studies, where new or better primary studies are
required, and (iii) to pinpoint specific knowledge gaps where more complete systematic literature reviews
might be required.[6, 7]
Ultimately, a mapping review aims at categorising, classifying and characterising patterns, trends or
themes in evidence production or publication.³ The main difference between a mapping study and a
systematic literature review is the formulation of the research questions and the analysis of the
available information.[7] According to Grant & Booth: “Mapping reviews can be distinguished
from scoping reviews (see below) because the subsequent outcome may involve either further
review work or primary research and this outcome is not known beforehand”.[8] Similarly, Anderson
describes a mapping review as a scoping review that focuses on examining the range and nature of
a broad topic area[9] (i.e. not a PICO question)[10]. In such mapping reviews the research question
is generic and usually relates to research trends. Because there is no specific PICO, with multiple
PICOs being accommodated by the broad topic area, the reviewers do not have a preconceived plan
to systematically review the literature. Essentially, researchers construct a nominal sampling
frame for a topic (e.g. what research has been conducted in the past 10 years) and then characterise the
literature located within that sampling frame.
When should I use this Method?
A mapping review is best used where a clear target for a more focused evidence product has not yet
been identified. Sensitisation to the field, in particular knowing where there is a critical mass of literature
and, equally importantly, where the gaps are, helps in the planning of future primary research or
synthesis work. Coding and categorisation of the evidence that has been retrieved, instead of
subjecting the literature to more detailed quality assessment and synthesis, helps in preparing for
follow-up review activities. Indeed it is not unusual to code and categorise a wider body of literature,
at least at a superficial level, and then to select a smaller subset for more detailed review. Of course
the coding and categorisation activity becomes even more useful if a team is planning a number of
reviews within the initial scope of the literature “map”. If mapping is carried out as a one-off activity
the review team may decide to code and categorise only a sample of studies, sufficient to highlight
the potential of the data and the study types within which they are included.

¹ Fernández-Diego, M., & González-Ladrón-de-Guevara, F. (2014). Potential and limitations of the
ISBSG dataset in enhancing software engineering research: A mapping review. Information and
Software Technology, 56(6), 527-544.

² Afzal, W., Torkar, R., & Feldt, R. (2008). A systematic mapping study on non-functional
search-based software testing. In: Proc. 20th Int. Conf. Softw. Eng. Knowl. Eng. (SEKE’08), Knowl.
Syst. Inst. Grad. Sch.

³ Petersen, K., Feldt, R., Mujtaba, S., & Mattsson, M. (2008). Systematic mapping studies in software
engineering. In: 12th Int. Conf. Eval. Assess. Softw. Eng., p. 1.
How is it done?
Petticrew and Roberts[6] suggest that a mapping review ‘‘involves a search of the literature to
determine what sorts of studies addressing the systematic review question have been carried out,
where they are published, in what databases they have been indexed, what sorts of outcomes they
have assessed, and in which populations.’’ Mapping reviews require a rigorous searching process as
well as detailed inclusion and exclusion criteria that are clearly defined in the research protocol and
presented in the results report.⁴
How long will it take?
The duration of a mapping review depends upon how much literature there is to be mapped and
how much detail is to be included in the coding. Typically coding focuses on the PICOS
(Population, Intervention, Comparison, Outcome and Study Type) characteristics.[11] Detailed coding
of subpopulations or coding of secondary outcomes can significantly increase the time taken. A
mapping review may take between 1 and 4 months. It is often followed by more focused review
activity.
Where Can I see an Example?
Harrison, M. B., Keeping-Burke, L., Godfrey, C. M., Ross-White, A., McVeety, J., Donaldson, V., ... &
Doran, D. M. (2013). Safety in home care: a mapping review of the international literature.
International Journal of Evidence-Based Healthcare, 11(3), 148-160.

Jones, R., Everson-Hock, E. S., Papaioannou, D., Guillaume, L., Goyder, E., Chilcott, J., ... & Swann, C.
(2011). Factors associated with outcomes for looked-after children and young people: a correlates
review of the literature. Child: Care, Health and Development, 37(5), 613-622.
Where Do I find Out More?
Grant, M. J., & Booth, A. (2009). A typology of reviews: an analysis of 14 review types and associated
methodologies. Health Information & Libraries Journal, 26(2), 91-108.

Paré, G., Trudel, M. C., Jaana, M., & Kitsiou, S. (2014). Synthesizing information systems knowledge:
A typology of literature reviews. Information & Management.

Petersen, K., Feldt, R., Mujtaba, S., & Mattsson, M. (2008). Systematic mapping studies in software
engineering. In: 12th Int. Conf. Eval. Assess. Softw. Eng., p. 1.
⁴ Budgen, D., Turner, M., Brereton, P., & Kitchenham, B. (2008). Using mapping studies in software
engineering. In: Proc. PPIG, pp. 195-204.
Scoping Review
What is it?
Scoping reviews aim “to map rapidly the key concepts underpinning a research area and the main
sources and types of evidence available, and can be undertaken as stand-alone projects in their own
right, especially where an area is complex or has not been reviewed comprehensively before”.[12]
‘Scoping studies are concerned with contextualizing knowledge in terms of identifying the current
state of understanding; identifying the sorts of things we know and do not know; and then setting
this within policy and practice contexts’. Certain characteristics of scoping reviews differentiate
them from other types of reviews:[8]
• Preliminary assessment of size and scope of available research literature
• Aims to identify nature and extent of research evidence (usually including ongoing research)
• Completeness of searching determined by time/scope constraints
• May include research in progress
• No formal quality assessment
• Typically tabular with some narrative commentary
• Characterizes quantity and quality of literature, perhaps by study design and other key features
• Attempts to specify a viable review
When should I use this Method?
The mention of “map” in the above definition may lead to potential confusion between a scoping
review and a mapping review. In this compendium we use the two terms precisely. A scoping
review seeks to establish the parameters for a planned review and to establish the likely quantity
and quality of the evidence to be reviewed. It has both a conceptual and a pragmatic function as a
preliminary to more intensive follow-up review activity. In contrast a mapping review within a broad
topic area seeks to establish where opportunities for review lie and where subsequent review
efforts, if any, might best be targeted. It can therefore be considered more exploratory and more
speculative than a scoping review which is often about operationalising detailed plans for a
proposed review. Typically, a scoping review does not seek to code and categorise the literature
retrieved, beyond considering whether particular bodies of literature lie within, or outside, the
scope of the proposed review. When conducting a scoping study a review team may sample
selectively, but representatively, from the literature and then extrapolate actual numbers of studies
to be included from the sample of studies that they have retrieved.
How is it done?
Levac[13] has extended Arksey & O’Malley's original scoping review methodology.[14] Levac
proposes six stages for those undertaking a scoping study:[13]
1. clarifying and linking the purpose and research question;
2. balancing feasibility with breadth and comprehensiveness of the scoping process;
3. using an iterative team approach to selecting studies;
4. extracting data;
5. incorporating a numerical summary and qualitative thematic analysis, reporting results, and
considering implications of study findings for policy, practice, or research;
6. incorporating consultation with stakeholders as a knowledge translation component of
scoping.
Lastly, Levac proposes other considerations for scoping methodologies in order to support the
further development of scoping studies within health research.[13] In 2013, Daudt[15] updated both
the Arksey[14] and Levac[13] frameworks for scoping reviews.
How long will it take?
A scoping review may take six months to conduct.[14] Pham reports durations of between two
weeks and 20 months across a sample of almost 350 scoping reviews.[16] In some cases a scoping
review acts as a preliminary to a systematic review and so the time taken scoping the literature may
be factored into the time taken to conduct the review (i.e. extending the duration more towards
eighteen months as opposed to a “standard” 12-month systematic review duration). The UK
Government identifies a variant, the quick scoping review, which refers to a “quick overview of
research undertaken on a (constrained) topic”[17], stating that it will typically take from 1 week to 2
months to complete, seeking to “determine the range of studies that are available on a specific
topic”.⁵ This ‘map’ of the existing literature is undertaken with limited resources (particularly time),
constrained by all or some of the following:
• Question: a delimited narrow focus (if a broad question, then a team will need to further limit
the search)
• Search: use few search sources (e.g. just one or two bibliographic databases); use only key
terms rather than an extensive search of all variants; if there are many existing recent reviews,
then a team should consider a map of research in those reviews
• Screen: use only electronically available abstracts and texts
• Map: use only easily available sources; provide only simple description with limited analysis
Where Can I see an Example?
Baylor C, Yorkston KM, Jensen MP, Truitt AR, Molton IR. Scoping review of common secondary
conditions after stroke and their associations with age and time post stroke. Top Stroke Rehabil.
2014 Sep-Oct;21(5):371-82. doi: 10.1310/tsr2105-371.
Mitton, C., Smith, N., Peacock, S., Evoy, B., & Abelson, J. (2009). Public participation in health care
priority setting: A scoping review. Health Policy, 91(3), 219-228.
King JL, Pomeranz JL, Merten JW (2014). Nutrition interventions for people with disabilities: a
scoping review. Disabil Health J. Apr;7(2):157-63. doi: 10.1016/j.dhjo.2013.12.003. Epub 2014 Jan 3.
Valaitis, R., Martin-Misener, R., Wong, S. T., MacDonald, M., Meagher-Stewart, D., Austin, P., &
Kaczorowski, J. (2012). Methods, strategies and technologies used to conduct a scoping literature
review of collaboration between primary care and public health. Primary health care research &
development, 13(03), 219-236.
⁵ Government Social Research. Rapid Evidence Assessment Toolkit.
www.gsr.gov.uk/professional_guidance/rea_toolkit/index.asp (last accessed 18 February 2015)
Where Do I find Out More?
Arksey, H., & O'Malley, L. (2005). Scoping studies: towards a methodological framework.
International Journal of social research methodology, 8(1), 19-32.
Armstrong, R., Hall, B. J., Doyle, J., & Waters, E. (2011). ‘Scoping the scope’ of a Cochrane review.
Journal of Public Health, 33(1), 147-150.
Daudt, H. M., Van Mossel, C., & Scott, S. J. (2013). Enhancing the scoping study methodology: a
large, inter-professional team’s experience with Arksey and O’Malley’s framework. BMC medical
research methodology, 13(1), 48.
Davis, K., Drey, N., & Gould, D. (2009). What are scoping studies? A review of the nursing literature.
International journal of nursing studies, 46(10), 1386-1400.
Hidalgo Landa, A., Szabo, I., Le Brun, L., Owen, I., & Fletcher, G. (2011). Evidence based scoping
reviews. The Electronic Journal of Information Systems Evaluation, 14(1), 46-52. Available online at
www.ejise.com
Levac, D., Colquhoun, H., & O’Brien, K. K. (2010). Scoping studies: advancing the methodology.
Implement Sci, 5(1), 1-9.
Pham, M. T., Rajić, A., Greig, J. D., Sargeant, J. M., Papadopoulos, A., & McEwen, S. A. (2014). A
scoping review of scoping reviews: advancing the approach and enhancing the consistency. Research
Synthesis Methods, 5(4), 371-385.
Rumrill, PD., Fitzgerald, SM., & Merchant, WR. (2010). Using scoping literature reviews as a means of
understanding and interpreting existing literature. Work: A Journal of Prevention, Assessment and
Rehabilitation, 35(3), 399-404.
Review of Reviews
What is it?
The review of reviews (overview of reviews)[18] seeks to enhance existing reviews by providing an
up-to-date synthesis of recent evidence and by evaluating the quality of the systematic reviews
included. Systematic reviews of systematic reviews are useful in managing evidence across broad
topics areas or in reviewing research intensive areas.[19, 20] The level of evidence is generally high
in systematic reviews. For this reason, systematic reviews are used widely to inform healthcare
policy and guidelines. However, systematic reviews, and the studies included in them, may be
subject to publication bias.[21] A review team can only review what is published and available in the
public domain. Furthermore, the delay in publication of primary studies and the further delay in
their inclusion in systematic reviews may mean that more recent research or conflicting evidence
has appeared since the review was published. In addition, reviews may provide information about
the effectiveness of interventions. However other aspects of the intervention, including how feasible
it is and how acceptable it is to users, may be equally important factors when making a decision.[21]
In view of acknowledged limitations of a review of reviews methodology, it is important to consider
all aspects of the validity of a systematic review before moving to a value judgement on its
usefulness.[21] Systematic reviews generate the highest level of evidence, as they synthesise eligible
primary research studies, and therefore are useful to direct future practice. However, it is recognised
that both expert opinion and original research also provide valuable evidence and may also be more
current.
When should I use this Method?
A review of reviews is used where there is good coverage of a topic by existing systematic reviews
yet, in contrast to an umbrella review, the reviews are more typically heterogeneous in terms of
their coverage of population-intervention pairs. For example the populations studied in included
reviews may be markedly different to the population where the intervention is to be implemented
or the context for the organisation of services may make comparability and applicability more
problematic. Typically the ready availability of preformatted data permits supplementary data
analysis; for example in assessing the methodological quality of the included studies or in mapping
the geographical distribution of the studies included in the reviews. A review of reviews can be used
where there has already been considerable research with numerous reviews available within a
particular topic area. The most notable limitation of the review of reviews method is that, by
definition, it will not pick up research outside of existing reviews.⁶
How is it done?
The methodology for a review of reviews typically follows that for a conventional systematic review.
However in this case the included studies are reviews, preferably systematic reviews, instead of
primary studies. The quality of included studies is typically assessed using AMSTAR, a measurement
tool for the assessment of multiple systematic reviews with good reliability and validity.[22] In some
cases the reviews are simply used as a source of included studies and these studies are assessed
using a single common approach to judging quality. A key step is therefore mapping the included
studies across the full set of reviews. A tool such as Epistemonikos (www.epistemonikos.org) may
be useful in this process.[23] Such an approach partially compensates for the fact that reviews are of
variable quality and a review team may have little basis for confidence in the original judgements on
the quality of included studies. Used less judgementally, a common quality assessment instrument
may help in integration. However, where these quality judgements are used to exclude studies, the
result may be a very different set of included studies from those in the original reviews.

⁶ Government Social Research. Rapid Evidence Assessment Toolkit.
www.gsr.gov.uk/professional_guidance/rea_toolkit/index.asp (last accessed 18 February 2015)
How long will it take?
A review of reviews may be quicker than other types of full systematic review and so may take as
little as three months. A key consideration is whether synthesis will take place at the level of the
reviews themselves (which could be within a three-month period) or whether synthesis will use all
the included primary studies (which could extend it to 6 months or more). A further issue is the
extent to which primary studies that have not previously been included in systematic reviews, perhaps
perhaps because they have been published more recently, are to be identified from supplementary
searches and then incorporated in the final overview.
Where Can I see an Example?
Greaves CJ, Sheppard KE, Abraham C, Hardeman W, Roden M, Evans PH, Schwarz P; IMAGE Study
Group. Systematic review of reviews of intervention components associated with increased
effectiveness in dietary and physical activity interventions. BMC Public Health. 2011 Feb 18;11:119.
doi: 10.1186/1471-2458-11-119.
McNeill, J., Lynn, F., & Alderdice, F. (2010). Systematic review of reviews: the public health role of
the midwife. School of Nursing & Midwifery, Queen’s University Belfast.
http://qub.ac.uk/schools/SchoolofNursingandMidwifery/Research/FileStore/Filetoupload,396468,en.pdf
Thangaratinam, S., & Jolly, K. (2010). Obesity in pregnancy: a review of reviews on the effectiveness
of interventions. BJOG: An International Journal of Obstetrics & Gynaecology, 117(11), 1309-1312.
Where Do I find Out More?
Lavis, J. N., Oxman, A. D., Grimshaw, J., Johansen, M., Boyko, J. A., Lewin, S., & Fretheim, A. (2009).
SUPPORT Tools for evidence-informed health Policymaking (STP) 7: Finding systematic
reviews. Health Research Policy and Systems, 7(Suppl 1), S7.
Pieper, D., Antoine, S. L., Morfeld, J. C., Mathes, T., & Eikermann, M. (2014). Methodological
approaches in conducting overviews: current state in HTA agencies. Research Synthesis
Methods, 5(3), 187-199.
Smith V, Devane D, Begley, CM, & Clarke M. (2011). Methodology in conducting a systematic review
of systematic reviews of healthcare interventions. BMC Medical Research Methodology, 11(1), 15.
White CM, Ip S, McPheeters M, et al. Using existing systematic reviews to replace de novo processes
in conducting Comparative Effectiveness Reviews. In: Agency for Healthcare Research and Quality.
Methods Guide for Comparative Effectiveness Reviews [posted September 2009]. Rockville, MD.
Available at: http://effectivehealthcare.ahrq.gov/healthInfo.cfm?infotype=rr&ProcessID=60.
Whitlock, E. P., Lin, J. S., Chou, R., Shekelle, P., & Robinson, K. A. (2008). Using existing systematic
reviews in complex systematic reviews. Annals of Internal Medicine, 148(10), 776-782.
I need to know what my options are for a particular decision…
Evidence Briefing
What is it?
An evidence briefing (evidence brief/policy brief) “brings together global research evidence (from
systematic reviews) and local evidence to inform deliberations about health policies and
programmes”.[24] [25] An evidence briefing/policy brief is distinguished most clearly from other
evidence products in that it begins with “explicit identification of a high-priority issue”.[26] The
evidence briefing then “summarises the best available evidence to clarify the size and nature of the
problem, describes the likely impacts of key options for addressing the problem, and informs
considerations about potential barriers to implementing the options and strategies for addressing
these barriers”.[27] This focus on addressing a particular issue is further reflected in the way it helps
to make clear “the trade-offs involved in selecting one option over others” together with any
“benefits from combining particular elements of the different options”.[26]
When should I use this Method?
The evidence briefing is an appropriate vehicle where you intend to explicitly signal one or more
courses of action from a list of options. A more neutral descriptive evidence product, the evidence
summary, highlights the relevant evidence to inform the question but in that product the course of
action is typically implicit. Although distinctions between these two types of evidence product are
by no means consistent within the literature for the purposes of this compendium we use “evidence
summary” to describe a descriptive product that summarises best evidence for a particular
intervention and “evidence briefing” to describe a multi-attribute document that summarises the
pros and cons of options for a particular decision. By analogy an evidence summary functions like an
administrator who neutrally gathers factual information to support the decision making process
whereas an evidence briefing is more of an adviser who helps in considering different options and
their consequences and in making the ultimate decision.
How is it done?
An evidence briefing addresses a high-priority issue and typically includes a concise, yet rich,
description of the context being addressed. It follows this by outlining the problem, costs and
consequences of options to address the issue, and key considerations relating to implementation. As
with other evidence products the evidence briefing employs systematic and transparent methods to
identify, select, and assess synthesised research evidence. However it typically expands its
perspective to include such considerations as quality, local applicability, and equity alongside
findings from the synthesised research evidence. One feature of the evidence briefing that reflects
its end purpose is use of a “graded-entry format”,[26, 28] that is the reader can drill down through
various levels of detail as circumstances require. The evidence briefing seeks to optimise both rigour
and relevance. It is fundamentally issue-led (demand-led)[29, 30] [31] rather than evidence-led.
Fuller details on the evidence briefing method can be found at: http://global.evipnet.org/SURE-Guides/
Table 1 - Process for production of an evidence briefing
Task
Agree on team to prepare evidence briefing and policy for authorship
Problem description and diagnosis
Outline problem and information needs
Identify and appraise evidence and other information
First draft describing problem
Internal review and revision of problem description
Policy options
Identify potential programmes or services to address problem and information needs (particularly
systematic reviews)
Identify and appraise evidence and other information
Agree on options (single elements/bundles of relevant programs/services and health systems
arrangements)
First draft describing options
Internal review and revision of options
Implementation strategies
Identify barriers to implementing policy options, strategies to address these barriers, and information
needs (particularly systematic reviews)
Identification and appraisal of evidence and other information
First draft describing implementation strategies
Internal review and revision of implementation strategies
Completion of the full evidence briefing
Draft title, cover page, key messages, executive summary, references, description of methods,
acknowledgements (including funders), conflicts of interests
External review of the draft evidence briefing
Revision of the full evidence briefing
Policy dialogue, informing and engaging stakeholders
Plan and run a policy dialogue
Agree on team to plan dialogue
Decide on the objectives of dialogue
Decide when dialogue will take place
Inform and engage stakeholders
Agree on team to plan and monitor efforts to inform and engage stakeholders
Decide which key stakeholders should be informed and engaged in preparing and using evidence briefing
Evaluation and publication of the evidence briefing
Finalise and publish the evidence briefing
How long will it take?
Evidence briefings are typically produced in days and weeks rather than the months or years
required to prepare a systematic review.[26]
Where Can I see an Example?
SURE evidence policy briefs
http://www.who.int/evidence/sure/policybriefs/en/
Where Do I find Out More?
Chambers, D., & Wilson, P. (2012). A framework for production of systematic review based briefings
to support evidence-informed decision-making. Systematic reviews, 1(1), 1-8.
Lavis, J. N., Permanand, G., Oxman, A. D., Lewin, S., & Fretheim, A. (2009). SUPPORT Tools for
evidence-informed health Policymaking (STP) 13: Preparing and using policy briefs to support
evidence-informed policymaking. Health Research Policy and Systems, 7(Suppl 1), S13.
Moat, K. A., Lavis, J. N., Clancy, S. J., El-Jardali, F., & Pantoja, T. (2014). Evidence briefs and
deliberative dialogues: perceptions and intentions to act on what was learnt. Bulletin of the World
Health Organization, 92(1), 20-28.
Rajabi, F. (2012). Evidence-informed health policy making: The role of Policy Brief. International
journal of preventive medicine, 3(9), 596.
Rosenbaum, S. E., Glenton, C., Wiysonge, C. S., Abalos, E., Mignini, L., Young, T., ... & Oxman, A. D.
(2011). Evidence summaries tailored to health policy-makers in low-and middle-income
countries. Bulletin of the World Health Organization, 89(1), 54-61.
I need the evidence for a particular intervention fast…
Evidence Summary
What is it?
An evidence summary is “a short summary of the best available evidence on a defined question,
with consideration of implications for further research. It aims to help policy makers use the best
available evidence in their decision-making about interventions”.[32]
NB. “Evidence summary” may also be used to refer to a brief outline of a single item of evidence, for
example a single systematic review.
When should I use this Method?
An evidence summary offers a neutral presentation of the best available evidence for an
intervention. Although an evidence summary may implicitly point towards a desired course of action
it achieves this simply by highlighting the relevant evidence to inform the question. Where the
intention is to explicitly signal one or more from a list of courses of action the evidence briefing (see
above) is the more appropriate vehicle. Although distinctions between the two types of evidence
product are by no means consistent within the literature for the purposes of this compendium we
use “evidence summary” to describe a descriptive product that summarises best evidence for a
particular intervention and “evidence brief” to describe more of a multi-attribute document that
summarises the pros and cons of options for a particular decision. Potentially, therefore,
information from a number of intervention-based evidence summaries could be incorporated into a
single evidence briefing to assist in an option appraisal of how best to achieve a desired outcome.
By analogy an evidence summary functions like an honest broker who neutrally gathers factual
information to support the decision making process whereas an evidence briefing is more of an
adviser who helps in considering different options and their consequences and in making the
ultimate decision.
How is it done?
An evidence summary follows a streamlined evidence production method (see Box 1).
Box 1 - Method for production of an Evidence Summary
1 Define the question.
2 Provide a justification for the evidence summary.
3 Specify the inclusion criteria.
4 Search for studies.
5 Review the studies.
6 Assess the intervention/s against the relevant criteria.
7 Consider the research gaps.
8 Find appropriate case studies.
How long will it take?
Evidence summaries are typically produced in days and weeks rather than the months or years
required to prepare a systematic review from scratch.[26] For this reason they are typically best
undertaken within a regular programme of evidence production.[31] Production is overseen by an advisory group, which therefore deals with a variety of evidence summaries at different stages of completion at any one time. Evidence summaries typically target high-level evidence (e.g.
summaries of systematic reviews)[33] and aim to identify the most significant and potentially
influential items of evidence. This enables the review team to build up a rapid picture of an
intervention and its likely effectiveness.
Where Can I see an Example?
The following template [Box 2] illustrates the essential features of an evidence summary.
Box 2 - Template for evidence summaries
Title [main heading]
An evidence summary [subheading]
<At beginning of summary, include statement:
This document summarises current evidence on [state the question], with implications for future
research.>
1 Why change is needed [or] The case for action [heading 1]
Brief statements to show why change is needed. Use cost or burden of disease data to show the size
of the problem and why it is important.
2 Review question(s) [heading 1]
State the review question(s).
3 Methods [heading 1]
Make it absolutely clear that you are using a “best available evidence” approach, not seeking
comprehensive coverage of all evidence on a topic.
Inclusion criteria for studies [heading 2]
Specify inclusion criteria for studies in a table using the PICOS headings: population, interventions,
comparisons, outcomes and study types.
Search strategy [heading 2]
Specify search strategy, resources searched and search terms. Specify the date last searched (this allows updating of the summary), e.g. “Searches were current as at [month and year]”.
4 Results [heading 1]
Summarise number of studies used for the evidence summary (e.g. how many systematic reviews
and how many economic evaluations, if any). Include references.
5 The evidence [heading 1]
Answer question. Start with a statement that shows level and quantity of evidence you found to
answer the question. Cite all references meeting your inclusion criteria. Summarise the evidence of
effectiveness and cost-effectiveness in dot points. Include the best available reference(s) for each
point in terms of strength of evidence. Clearly state interventions that didn’t work. Use a separate
heading for this if relevant.
Consider splitting the evidence into sections according to population groups, settings, determinants, risk factors and/or intervention types, whichever works best for the evidence you have and the messages you wish to convey.
If possible, summarise what is involved in the intervention in terms of frequency, duration, delivery method, participants (including age) and so on. <Consider using the TIDieR framework to help in implementation>
6 Case studies [heading 1]
Case studies may help to show the effectiveness of the intervention and aspects of implementation.
Link to resources for specific programmes where appropriate. Do not label them as “good practice”
unless they have been formally, favourably and rigorously evaluated.
7 Research gaps [heading 1]
Summarise research gaps using bullet points.
8 References [heading 1]
References for studies meeting inclusion criteria and/or those cited in the text. The Vancouver
system of referencing is more economical within the context of a short summary.
Acknowledgements: <List contributors other than the author(s), experts that have been consulted>
Date summary last updated: <insert date>
Suggested citation for this evidence summary:
<Author. Title: An evidence summary. Place: Department, Organisation; Date.>
For further information please contact:
<Address for further information with full contact details>
Example Evidence Summaries using the above format can be accessed from:
http://www.health.vic.gov.au/prevention/evidence/intervention-effectiveness.htm
Clark, R., Waters, E., Armstrong, R., Conning, R., Allender, S., & Swinburn, B. (2013). Evidence and
obesity prevention: developing evidence summaries to support decision making. Evidence & Policy: A
Journal of Research, Debate and Practice, 9(4), 547-556.
Where Do I find Out More?
Guidelines for evidence summaries for health promotion and disease prevention interventions (http://docs.health.vic.gov.au/docs/doc/Guidelines-for-evidence-summaries-for-health-promotion-and-disease-prevention).
The following format from the same team focuses on policy and practice recommendations. This
therefore equates more closely with an Evidence Briefing (see above).
Guidelines for evidence summaries for health promotion and disease prevention interventions - with implications for policy and practice. (http://docs.health.vic.gov.au/docs/doc/Guidelines-for-evidence-summaries-for-health-promotion-and-disease-prevention-interventions--with-implications-for-policy-and-practice)
Rapid Evidence Assessment
What is it?
Rapid Evidence Assessment (REA) is “a process that is faster and less rigorous than a full systematic review but more rigorous than ad hoc searching; it uses a combination of key informant interviews and targeted literature searches to produce a report in a few days or a few weeks”.7 REAs
provide a balanced assessment of what is already known about a policy or practice issue, by using
systematic review methods to search and critically appraise existing research. REAs aim to be
rigorous and explicit in method and thus systematic. However, necessarily, REAs make concessions
to the breadth or depth of the process by limiting particular aspects of the systematic review
process.[34] For example, the comprehensiveness of the search and other review stages may be
limited. Increasingly, health policy makers, clinicians and clients cannot wait the year or so required
for a full systematic review to deliver its findings. REAs can provide quick summaries of what is
already known about a topic or intervention. The Government Social Research Unit has produced an
REA toolkit which is recommended as a minimum standard for rapid evidence reviews.8
When should I use this Method?
Rapid Evidence Assessments can be undertaken in the following circumstances:[35]
When there is uncertainty about the effectiveness of a policy or service and there has been
some previous research.
When a policy decision is required within months and policy makers/researchers want to
make decisions based on the best available evidence within that time.
During policy development, when evidence of the likely effects of an intervention is required.
When a wide range of research exists on a topic but questions remain unanswered.
When a map of evidence in a topic area is required to determine whether there is any existing
evidence and to direct future research needs.
As a starting point. Ideally, an REA is undertaken to answer a particularly pressing policy concern. Once the immediate question is answered the REA can form the basis of a more detailed full systematic review. In such cases, an REA is best described as an ‘interim evidence assessment’.9
In these situations an REA can provide a quick synthesis of the available evidence by shortening the
conventional systematic review process.
By shortening conventional systematic review process REAs risk introducing bias.[36] Systematic
reviews also suffer from biases but limiting the process increases the risk of them occurring. For
example, limiting the search to published literature may introduce bias by excluding unpublished
material. Therefore, the need for the evidence to be provided rapidly should outweigh the risk of increased bias.

7. Better Evaluation Rapid Evidence Assessment http://betterevaluation.org/evaluation-options/rapid_evidence_assessment
8. Government Social Research. Rapid Evidence Assessment Toolkit. www.gsr.gov.uk/professional_guidance/rea_toolkit/index.asp (last accessed 18 February 2015)
9. Evidence Based Approaches to Reducing Gang Violence: A Rapid Evidence Assessment for Aston and Handsworth Operational Group, July 2004. http://www.civilservice.gov.uk/wp-content/uploads/2011/09/rea_gang_violence_tcm6-7377.pdf

REAs (along with all other review methods, especially those that use “rapid”
approaches) should record how they have been less comprehensive than a full systematic review.
They should also discuss the likely effect of bias that deviations from the conventional systematic
review method have caused. This ensures that those taking decisions are aware of limitations of the
evidence.
All review methods, including REAs, risk generating inconclusive findings that provide a weak answer
to the original question.[17] For example, there may not be studies of sufficient methodological
quality to address the question.
How is it done?
Several aspects of the systematic review process are limited in an REA to shorten the timescale. A
review team will limit some, but by no means all, of the following stages:[17]
The REA question: if the question is broad, the search needs to be further limited.
Searching: consider using less developed search strings rather than an extensive search of all variants. Where there are many existing recent reviews, then consider a review of reviews rather than of primary studies.
Screening stage: REAs can use ‘grey’ and print sources but less exhaustively than systematic reviews. An REA may use only electronically available abstracts and texts. However, this is inadvisable because of the increased risk of bias.
Mapping stage: if included at all, this often has to be limited in terms of the breadth of the initial evidence map.
Data extraction: extract only results and key data for simple quality assessment.
Quality appraisal: simple quality appraisal and/or synthesis of studies.
How long will it take?
Because an REA provides a quick overview of existing research on a (typically constrained) topic, and
a synthesis of the evidence provided by these studies to answer the REA question, it may be
completed within a 2 to 6 month timeframe. The speed at which the REA is undertaken depends on
how quickly the evidence is needed, the available resource to carry out the REA and the extent to
which reviewers are prepared to limit the systematic review process. Tight timescales in an REA
mean that if findings are inconclusive there is less time than in a systematic review to go back and
reformulate the question or inclusion criteria.
Where Can I see an Example?
Lambie-Mumford, H., Crossley, D., Jensen, E., Verbeke, M., & Dowler, E. (2014). Household food
security in the UK: a review of food aid-final report.
https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/283071/household-food-security-uk-140219.pdf
Manley, J., Gitter, S., & Slavchevska, V. (2012). How effective are cash transfer programmes at improving nutritional status? A rapid evidence assessment of programmes’ effects on anthropometric outcomes. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London.
McMurran, M. (2012). Individual-level interventions for alcohol-related violence: A rapid evidence assessment. Criminal Behaviour and Mental Health, 22(1), 14-28.
Underwood, L., Thomas, J., Williams, T., & Thieba, A. (2007). The effectiveness of interventions for
people with common mental health problems on employment outcomes: a systematic rapid evidence
assessment.
http://eprints.ioe.ac.uk/5264/3/Underwood2007TheeffectivenessofinterventionsReport.pdf
Where Do I find Out More?
Abrami, P. C., Borokhovski, E., Bernard, R. M., Wade, C. A., Tamim, R., Persson, T., ... & Surkes, M. A.
(2010). Issues in conducting and disseminating brief reviews of evidence. Evidence & Policy: A
Journal of Research, Debate and Practice, 6(3), 371-389.
Ganann, R., Ciliska, D., & Thomas, H. (2010). Expediting systematic reviews: methods and
implications of rapid reviews. Implementation Science, 5(1), 56.
GSR, 2009. Rapid evidence assessments toolkit [online]. London: Government Social Research Unit. http://www.civilservice.gov.uk/networks/gsr/resources-and-guidance/rapid-evidence-assessment
Thomas, J., Newman, M., & Oliver, S. (2013). Rapid evidence assessments of research to inform
social policy: taking stock and moving forward. Evidence & Policy: A Journal of Research, Debate and
Practice, 9(1), 5-27.
Rapid Review
What is it?
A rapid review is a brief synthesis and judgement of the available research evidence related to a specific question.10 The research evidence is drawn primarily from existing systematic reviews, meta-analyses and economic evaluations. Therefore, more recent published trials, trials in progress and unpublished (grey) literature are generally not included in the review. Rapid reviews are usually
conducted by senior researchers with expertise in the particular field of research. ‘Rapid reviews’
have emerged in response to the incompatibility between information needs of policy makers and
the time requirements to complete systematic reviews.[37] Rapid reviews provide a way to generate
similar types of knowledge synthesis as more comprehensive systematic reviews do. However they
attempt to accomplish this within an accelerated time period. Some critics question the validity of
rapid reviews.[36, 38, 39] Nevertheless rapid reviews simply represent an arbitrary point on a
continuum between comprehensiveness and timeliness for policy-relevant decisions.[40]
Potential confusion exists between a rapid review as a product and rapid review as a process. For
example, Khangura describes a rapid review process that results in the production of evidence
summaries as a product.[41] For this reason we choose to define a rapid review solely in terms of
being a type of evidence product. A rapid review is characterised either by (i) using pre-existing
summaries and syntheses in order to accelerate the process of assimilating individual primary
studies and/or (ii) explicitly sidestepping, or performing more superficially, one or more of the
accepted processes used in a systematic review to allow a review team to deliver a product within a
shortened timescale. Typically, rapid review processes might involve reducing the number of databases/sources searched, restricting the types of studies examined to secondary sources only, avoiding formal quality assessment or employing a light-touch approach to assessment (e.g. based only on study design), and using more descriptive and less analytical approaches to data synthesis and presentation. Definitions of a rapid review that simply focus on the speed/timescale of the process independent of the implications for quality are to be avoided.[42] A rapid review may transfer much
of the burden of interpretation of the synthesis from the review team to the reader. A review team
may focus only on more evident patterns or trends in the data, couched in cautious interpretation of
the data. They then leave the reader to come to a more nuanced understanding by paying detailed
attention to the data that they have summarised.
Table 2 - General comparison of rapid review versus systematic review approaches[41]

Aspect               | Rapid review                                           | Systematic review
Timeframe            | 4-12 weeks                                             | 6 months to 2 years
Question             | Question specified a priori (may include broad PICOS)  | Often a focused clinical question (focused PICOS)
Sources and searches | Sources are limited; sources/strategies are made explicit | Comprehensive sources searched and explicit strategies
Selection            | Criterion-based; uniformly applied                     | Criterion-based
Appraisal            | Rigorous; critical appraisal (SRs only)                | Rigorous; critical appraisal
Synthesis            | Descriptive summary/categorization of the data         | Qualitative summary +/- meta-analysis
Inferences           | Limited/cautious interpretation of the findings        | Evidence-based

10. Rapid reviews. http://www.health.vic.gov.au/prevention/evidence/rapid-reviews.htm
When should I use this Method?
A rapid review is used when the speed of the answer, and thus the window of opportunity within which results are delivered, is prioritised over the rigour of the answer. However, unless certain quality assurance measures are put in place, this advantage is lost by producing an invalid answer. A rapid review should not simply focus on the speed with which it is delivered;[42] it must transparently report the process used to produce a quick answer and attempt to assess the implications of this process for confidence in the review findings.
How is it done?
A rapid review is accomplished by fast-tracking one or more of the standard stages of a systematic review, e.g.:
By searching a smaller selection of databases
By restricting to a particular study type
By reviewing reviews
By data extracting direct into tables of results
By limiting the number of outcomes being included
By performing quality assessment at a study design level rather than appraising each
individual study
By limiting the amount of analysis and interpretation
How long will it take?
Generically a rapid review refers to any review that takes less time than a conventional systematic
review (e.g. less than 12 months), depending on which of the review stages are accelerated.
However typically a rapid review is conducted in a significantly shorter time scale (e.g. between four
and twelve weeks).
Where Can I see an Example?
Chaiyachati, K. H., Ogbuoji, O., Price, M., Suthar, A. B., Negussie, E. K., & Bärnighausen, T. (2014).
Interventions to improve adherence to antiretroviral therapy: a rapid systematic review. Aids, 28
(suppl 2), S187-204.
Loveday, H. P., Wilson, J. A., Kerr, K., Pitchers, R., Walker, J. T., & Browne, J. (2014). Association
between healthcare water systems and Pseudomonas aeruginosa infections: a rapid systematic
review. Journal of Hospital Infection,86(1), 7-15.
Riley, B., Norman, C. D., & Best, A. (2012). Knowledge integration in public health: a rapid review
using systems thinking. Evidence & Policy: A Journal of Research, Debate and Practice, 8(4), 417-431.
Where Do I find Out More?
Harker, J., & Kleijnen, J. (2012). What is a rapid review? A methodological exploration of rapid reviews in Health Technology Assessments. International Journal of Evidence-Based Healthcare, 10(4), 397-410.
Khangura, S., Konnyu, K., Cushman, R., Grimshaw, J., & Moher, D. (2012). Evidence summaries: the
evolution of a rapid review approach. Systematic reviews, 1(1), 1-9.
Schünemann HJ, Moja L. Reviews: Rapid! Rapid! Rapid! …and systematic. Syst Rev. 2015 Jan
14;4(1):4.
Watt, A., Cameron, A., Sturm, L., Lathlean, T., Babidge, W., Blamey, S., ... & Maddern, G. (2008).
Rapid reviews versus full systematic reviews: an inventory of current methods and practice in health
technology assessment. International Journal of Technology Assessment in Health Care, 24(02), 133-
139.
Watt, A., Cameron, A., Sturm, L., Lathlean, T., Babidge, W., Blamey, S., ... & Maddern, G. (2008).
Rapid versus full systematic reviews: validity in clinical practice?. ANZ journal of surgery, 78(11),
1037-1040.
Rapid Realist Review
What is it?
Rapid realist review methodology (RRR) is “a tool for applying a realist approach to a knowledge
synthesis process and producing a product that is useful to policy makers in responding to time-
sensitive and/or emerging issues where there is limited time and resources”.[37] Conventional
’realist reviews typically engage in a much longer exploration of the literature and ‘testing’ of
theories. Often realist syntheses present their results within a framework of theory development. In
contrast rapid realist reviews are located within the context of short-term evidence synthesis
projects.[41] Specifically, the RRR methodology seeks to combine the theory specification of a realist
review[43] with the clarification of boundaries typical of a scoping review.[14]
Applying the realist approach (asking what works for whom under what circumstances) when limited time and resources are available requires a methodology that can “generate a realist-based product that can incorporate research, theory, and practice knowledge” and thus meet the demands of real-time policy developers/evaluators. Rapid realist reviews seek to remain consistent with the recently published RAMESES (Realist And MEta-narrative Evidence Syntheses: Evolving Standards) publication standards for realist syntheses, which note that realist reviews need to be focused based on the time and resources provided as well as the questions that need to be answered.[44]
Rapid realist reviews seek to meet demand from policy makers for “knowledge syntheses that highlight possible interventions (I) that could be implemented within a specific context (C) that in turn interact with various mechanisms (M) and produce outcomes (O) of interest”.[37] This
pragmatic focus has required a change of emphasis away from reviews that focus on producing
transferable theories, to syntheses that focus on identifying groups of interventions related to
outcomes of interest for policy makers.[37] A review team is required to ‘work backwards’ from the
desired outcome to “families of interventions” (I) that can be implemented to produce those outcomes, supported by a theoretical understanding of the contexts (C) within, and mechanisms (M) by which, such interventions operate.[37] In doing so, the RRR methodology focuses less on the
development of theory that is transferable across domains, than on the distillation of theory-driven,
contextually relevant interventions that are likely to be associated with specific outcomes within a
particular set of parameters.[37]
When should I use this Method?
Realist synthesis, specifically designed for use in complex systems, is considered ideal for
investigating questions requiring depth of understanding. However the method lacks transparency
and therefore may not be reproducible. Use of a rapid method is indicated where a timely answer to a pressing health concern is more important than the rigour with which the insights are derived.
How is it done?
A rapid realist review is conducted in the following iterative steps.[37]
1. Describe the initial hypotheses, or relevant candidate theories
2. Construct a theoretical framework
3. Undertake a more thorough search of the literature for pertinent papers
4. Extract and synthesise the data based on the theoretical framework.
Conventional realist reviews and RRRs have key differences that allow policy makers to ensure that
an appropriate methodology is used to deliver the desired outcome. The RRR methodology is
explicitly designed to engage knowledge users and review stakeholders to define the research
questions, and thus to streamline the review process. Results focus on context-specific explanations
for what works within a particular set of parameters rather than seeking explanations to transfer
across contexts and populations. For policy makers faced with making difficult decisions in short
time frames for which there is sufficient (if limited) published research and practice-based evidence available, RRR is believed to offer a practical, outcomes-focused knowledge synthesis method.[37]
How long will it take?
Based on a published series by the same team, rapid realist syntheses were found to take between
two and six months, with the majority towards the top of this range.[37]
Where Can I see an Example?
Best, A., Greenhalgh, T., Lewis, S., Saul, J. E., Carroll, S., & Bitz, J. (2012). Large-system transformation
in health care: a realist review. Milbank Quarterly, 90(3), 421-456.
Durham, J., & Blondell, S. J. (2014). Research protocol: a realist synthesis of cross-border patient
mobility from low-income and middle-income countries. BMJ Open, 4(11), e006514.
Willis, C. D., Saul, J. E., Bitz, J., Pompu, K., Best, A., & Jackson, B. (2014). Improving organizational
capacity to address health literacy in public health: a rapid realist review. Public health, 128(6), 515-
524.
Where Do I find Out More?
Mijumbi, R. M., Oxman, A. D., Panisset, U., & Sewankambo, N. K. (2014). Feasibility of a rapid
response mechanism to meet policymakers’ urgent needs for research evidence about health
systems in a low income country: a case study. Implementation Science, 9(1), 114.
Pointing, S. B. (2014). Realist methodology in practice: translational findings from two realist
syntheses. Learning Communities: International Journal of Learning in Social Contexts, 14, 60-80.
Saul, J. E., Willis, C. D., Bitz, J., & Best, A. (2013). A time-responsive tool for informing policy making:
rapid realist review. Implementation Science, 8(1), 103.
Wilson MG, Lavis JN, Gauvin FP. Issue Brief: Developing a ‘Rapid-response’ Program for Health
System Decision-Makers in Canada. Hamilton, Canada: McMaster Health Forum, 7 March 2014.
http://hdl.handle.net/11375/14877
I need to build up a picture from existing related reviews…
Umbrella Review
What is it?
The label “umbrella review” is a relatively recent addition to the review typology. The Cochrane
Collaboration is currently seeking to assemble already existing reviews on the same topic, typically
performed by the same Review Groups, under umbrella reviews.[45] Essentially an umbrella review
is a cluster of existing systematic reviews on a shared topic. The objective of an ‘umbrella’ review is
to build upon an area that is well-covered by existing systematic reviews by synthesizing the
evidence from all relevant reviews to provide a single report which summarizes the current state of
knowledge on the topic. Such an umbrella review may be populated exclusively from systematic
review evidence or, alternatively, may also include randomised controlled trials that fall within the
broad scope of the umbrella review but that are not covered within one of the component reviews.
The inherent advantage of an umbrella review is that it may bring together many treatment
comparisons for the management of the same disease or condition.[45] Each comparison is
considered separately. Where technically possible and appropriate, meta-analyses are performed.
Ideally, given the breadth of scope and the desire to present coverage of a complete decision
problem, both benefits and harms should be placed side by side to enable the reviewer and the
reader to determine trade-offs between risks and benefits.
When should I use this Method?
The Umbrella review is indicated when a particular topic area is already well-covered by systematic
reviews and/or meta-analyses. Typically, the broad topic area will have been “split” into focused
populations and/or interventions. The umbrella review seeks to impose an overall coherence by
lumping these precise reviews together. Umbrella reviews are particularly valuable within health
technology assessments that aim to consider all management options and yet may commission
separate reviews of an individual treatment with specific outcomes. Within the Cochrane
Collaboration umbrella reviews seek to serve as a ‘friendly front end’[46] to the Cochrane Library,
offering the reader a quick overview (and exhaustive list) of the Cochrane reviews relevant to a
particular condition. An umbrella review is limited to only those interventions that have been
evaluated within a review. Nevertheless, such an overview can illuminate treatments currently being
used and the methods being used to provide much-needed evidence for health professionals, policy
makers and researchers.
Umbrella reviews are limited by the amount, quality and comprehensiveness of available
information in the primary studies.[47] Recently concern has been expressed that “patching
together pre-existing reviews is limited by different eligibility criteria, evaluation methods and
thoroughness of updating information across the merged reviews. Moreover, pre-existing reviews
may not cover all of the possible management options”.[45] It has been suggested that a more
efficient method is to commission a series of reviews around a shared methodology thereby picking
off individual topics and yet permitting relatively seamless integration. However the field is currently
a long way from such prospective commissioning.[45] For the moment, umbrella reviews offer
a mechanism by which a review team can identify the methodological weaknesses that make a
component review vulnerable to bias and compromise its validity.[48]
How is it done?
An umbrella review requires some overall structure to enable included reviews to be handled in a
common manner. An umbrella review is considerably quicker and easier if the component reviews
share a common methodology (e.g. all Cochrane Reviews). A database of reviews (e.g. the Database
of Abstracts of Reviews of Effects (DARE), Epistemonikos or PDQ) or a systematic review search
filter is typically used to identify and harvest literature at the review level. Mapping may be
performed at either a review or an individual study level.
How long will it take?
While no precise data are available on the typical duration of an umbrella review, it is likely to take
approximately the same time as a review of reviews. In some circumstances it may take
considerably less time (towards a three-month timeframe) if the included reviews are easily
comparable and/or share a common methodology and/or format.
Where Can I see an Example?
Labre, M. P., Herman, E. J., Dumitru, G. G., Valenzuela, K. A., & Cechman, C. L. (2012). Public health
interventions for asthma: an umbrella review, 1990–2010. American Journal of Preventive
Medicine, 42(4), 403-410.
Safron, M., Cislak, A., Gaspar, T., & Luszczynska, A. (2011). Effects of school-based interventions
targeting obesity-related behaviors and body weight change: a systematic umbrella
review. Behavioral Medicine, 37(1), 15-25.
Tsilidis, K. K., Kasimis, J. C., Lopez, D. S., Ntzani, E. E., & Ioannidis, J. P. (2015). Type 2 diabetes and
cancer: umbrella review of meta-analyses of observational studies. BMJ, 350, g7607.
Where Do I find Out More?
Ioannidis, J. P. (2009). Integration of evidence from multiple meta-analyses: a primer on umbrella
reviews, treatment networks and multiple treatments meta-analyses. Canadian Medical Association
Journal, 181(8), 488-493.
Pieper, D., Buechter, R., Jerinic, P., & Eikermann, M. (2012). Overviews of reviews often have limited
rigor: a systematic review. Journal of clinical epidemiology, 65(12), 1267-1273.
I need to look at a specific topic in depth
Systematic Review of Quantitative Evidence
What is it?
A simple definition of a systematic literature review is “a means of identifying, evaluating and
interpreting all available research relevant to a particular research question, or topic area, or
phenomenon of interest”.[49] However by focusing only on the cumulation and synthesis of
evidence such a definition might be seen to downplay the intellectual endeavour that goes into the
production of such a review. This analytical intent is better captured by: “A systematic review is a
summary of available research on a given topic that compares studies based on design and methods.
It summarizes the findings of each, and points out flaws or potentially confounding variables that
may have been overlooked. A critical analysis of each study is done in an effort to rate the value of
its stated conclusions. The research findings are then summarized, and a conclusion is provided”.11 A
key part of the systematic review method is to take reasonable procedures to minimise the effects
of bias in selecting and interpreting the included studies. A systematic review of quantitative
evidence may or may not include a meta-analysis (statistical pooling) of data extracted from the
included studies. The determining factor is the extent to which the data extracted from each study
are comparable, i.e. whether each included study measures the same outcome (e.g. pain) in a
similar-enough way.
When should I use this Method?
Reasons for undertaking a systematic review include:[49]
To summarise the existing evidence concerning the effectiveness of an intervention, programme
or policy
To identify gaps in existing research to inform areas for further investigation.
To provide a context within which to appropriately position new research activities
To examine the extent to which empirical evidence supports/refutes a theoretical hypothesis
To assist in the generation of new hypotheses
How is it done?
The systematic review of quantitative evidence addresses a research question by summarizing the
results of quantitative studies. Formal stages of question formulation, searching the literature,
developing inclusion and exclusion criteria, extracting data from included studies in a common
format and synthesising data are followed according to a pre-specified protocol.[1] Findings from
individual studies are aggregated to produce a ‘bottom line’ on the issue requiring evaluation. This
aggregation of findings is called evidence synthesis. The type of evidence synthesis is chosen to fit
the types(s) of data within the review. For example, a technique known as meta-analysis (see below)
is used if homogeneous quantitative evidence is assessed for clinical effectiveness. Narrative
summaries are used if quantitative data are not homogeneous. The purpose of a systematic review is
to sum up the best available research on a specific question. This is done by synthesizing the results
of several studies.
11 Institute for Research and Innovation in Social Services. Confidence through evidence
toolkit. http://toolkit.iriss.org.uk/glossary/systematic-review.html
A systematic review uses transparent procedures to find, evaluate and synthesize the results of
relevant research.[50] Procedures are explicitly defined in advance, to allow a review team to ensure
that the exercise is transparent and can be replicated.[51] This practice is also designed to minimize
bias. Studies included in a review are screened for quality, so that the findings of a large number of
studies can be combined. Peer review is a key part of the process: qualified independent researchers
scrutinise the author's methods and results.[52]
A systematic review must have: [52]
Clear inclusion/exclusion criteria
An explicit search strategy
Systematic coding and analysis of included studies
Meta-analysis (where possible)
How long will it take?
A conventional systematic review typically takes between 12 and 18 months.
Where Can I see an Example?
List of EPPI-Centre systematic reviews
http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=62 . Includes:
2013 The views of young people in the UK about obesity, body size, shape and weight: a systematic
review. http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=3394
2012 Communities that cook: a systematic review of the effectiveness and appropriateness of
interventions to introduce adults to home cooking http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=3322
2011 Childhood obesity and educational attainment: a systematic review
http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=2954
Where Do I find Out More?
Centre for Reviews & Dissemination (CRD). (2009). Systematic reviews: CRD's guidance for
undertaking reviews in health care. Centre for Reviews and Dissemination.
Gough D, Oliver S, Thomas J (2012) An Introduction to Systematic Reviews. London: Sage
Hansen, H., & Trifkovic, N. (2013). Systematic Reviews: Questions, Methods and Usage. DANIDA,
Udenrigsministeriet.
Hemingway, P., & Brereton, N. (2009). What is a systematic review? What is Series. Bandolier.
http://www.medicine.ox.ac.uk/bandolier/painres/download/whatis/syst-review.pdf
Langer, L., & Stewart, R. (2014). What have we learned from the application of systematic review
methodology in international development? A thematic overview. Journal of Development
Effectiveness, 6(3), 236-248.
Mays, N., Pope, C., & Popay, J. (2005). Systematically reviewing qualitative and quantitative evidence
to inform management and policy-making in the health field. Journal of health services research &
policy, 10(suppl 1), 6-20.
Sayers, A. (2007). Tips and tricks in performing a systematic review. British Journal of General
Practice, 57(538), 425-425.
Waddington, H., White, H., Snilstveit, B., Hombrados, J. G., Vojtkova, M., Davies, P., ... & Tugwell, P.
(2012). How to do a good systematic review of effects in international development: a tool
kit. Journal of development effectiveness, 4(3), 359-387.
Webb, C., & Roe, B. H. (Eds.). (2007). Reviewing research evidence for nursing practice: Systematic
reviews. Blackwell Publishing.
Meta-Analysis
What is it?
Meta-analysis is a statistical technique for combining the findings from independent studies.[53]
Good meta-analyses aim for complete coverage of all relevant studies, look for the presence of
heterogeneity, and explore the robustness of the main findings using sensitivity analysis.[54]
When should I use this Method?
Meta-analysis is most often used to assess the clinical effectiveness of healthcare interventions; it
does this by combining data from two or more randomised controlled trials.[53] To perform a meta-
analysis you either need to have studies that are measuring the same outcome in the same way or
for the ways of measuring outcomes to at least be similar enough to make such comparison
meaningful. In some cases outcomes can be mapped across different outcome scales or tools e.g.
pain measurement scores.
How is it done?
Meta-analysis of trials provides a precise estimate of treatment effect, giving due weight to the size
of the different studies included.[53] The validity of the meta-analysis depends on the quality of the
systematic review on which it is based.
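The inverse-variance weighting that gives ‘due weight to the size of the different studies’ can be sketched in a few lines. The numbers below are invented purely for illustration, and real reviews use dedicated software (e.g. RevMan) rather than hand-rolled code; this is a minimal sketch of the fixed-effect pooling principle only:

```python
import math

def fixed_effect_meta(effects, std_errors):
    """Fixed-effect inverse-variance pooling: each study is weighted
    by 1/SE^2, so larger, more precise studies count for more."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    # Cochran's Q: a basic statistic for heterogeneity between studies.
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
    return pooled, pooled_se, q

# Three hypothetical trials measuring the same outcome (e.g. a mean
# difference in pain score), each with its standard error.
pooled, se, q = fixed_effect_meta([-1.2, -0.8, -1.0], [0.40, 0.25, 0.50])
print(f"pooled effect {pooled:.2f} (SE {se:.2f}), Q = {q:.2f}")
```

Note how the pooled estimate sits closest to the most precise study; a Q value that is large relative to its degrees of freedom would signal heterogeneity and argue against this simple fixed-effect pooling.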
How long will it take?
A meta-analysis typically takes the time taken to conduct a standard systematic review plus some
additional time to conduct the analysis. Study outcomes are typically entered into a software
package and graphical summaries (Forest Plots) are produced showing how the results from
different studies lie in relation to each other. Interpretation of these plots may include an
investigation of the variation (heterogeeity) between the included trials and also an assessment of
whether publication bias is likely to have occurred. In this latter case the review team looks to see
whether a particular type of study, e.g. studies with small sample sizes and non-significant results,
is missing from the graphical display.
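The publication-bias check just described (are small studies with non-significant results missing?) is often formalised as Egger's regression test, a standard technique not named in this guidance: the standardised effect is regressed on precision, and an intercept far from zero suggests funnel-plot asymmetry. A stdlib-only sketch with made-up numbers:

```python
def egger_intercept(effects, std_errors):
    """Egger's test: regress effect/SE on 1/SE by least squares and
    return the intercept (roughly 0 for a symmetric funnel plot)."""
    xs = [1.0 / se for se in std_errors]                 # precision
    ys = [y / se for y, se in zip(effects, std_errors)]  # standardised effect
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx

# Hypothetical pattern: the least precise study reports the largest
# effect, the asymmetry that missing small null studies would produce.
intercept = egger_intercept([-1.5, -1.0, -0.5], [0.5, 0.3, 0.1])
print(f"Egger intercept = {intercept:.2f}")
```

In practice the intercept's standard error and a significance test are also computed, and the test has low power when only a handful of studies are available.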
Where Can I see an Example?
The Cochrane Library publishes a large number of systematic reviews with accompanying meta-
analyses. http://www.cochranelibrary.com/
Where Do I find Out More?
Borenstein, M., Hedges, L. V., Higgins, J. P., & Rothstein, H. R. (2011). Introduction to meta-analysis.
John Wiley & Sons.
Crombie, I. K., & Davies, H. T. (2009). What is meta-analysis? What is Series. Bandolier.
http://www.medicine.ox.ac.uk/bandolier/painres/download/whatis/meta-an.pdf
Egger, M., Smith, G. D., & Phillips, A. N. (1997). Meta-analysis: principles and procedures. BMJ,
315(7121), 1533-1537.
Rohwer, A., Garner, P., & Young, T. (2014). Reading systematic reviews to answer clinical questions.
Clinical Epidemiology and Global Health, 2(1), 39-46.
Systematic Review of Qualitative Evidence
What is it?
A systematic review of qualitative research (also known as qualitative evidence synthesis) is a method
for integrating or comparing the findings from qualitative studies. It looks for ‘themes’ or
‘constructs’ that lie in or across individual qualitative studies. “A qualitative synthesis uses
qualitative methods to synthesize existing qualitative studies to construct greater meaning through
an interpretive process …. it involves using a rigorous and methodologically grounded approach for
analysis that is filtered through an interpretive lens … deriving meaning from translation”.[55]
When should I use this Method?
Reasons for undertaking a systematic review of qualitative research include:
To complement existing evidence concerning the effectiveness of an intervention, programme
or policy with an understanding of how it might be received by patients, practitioners or the
wider community.
To explain why an intervention, programme or policy does not work as well as might have been
expected e.g. why adherence to a programme is poor or why practitioners deliver the
programme with poor fidelity;
To understand how contextual factors may interact with or interfere with the operation of an
intervention, programme or policy
To assist in the generation of new hypotheses
How is it done?
The stages of a qualitative evidence synthesis include:[56]
1. Formulate Research Question (and Protocol)
2. Search Databases (identify papers)
3. Screen Papers by title/abstract
4. Full text Review
5. Synthesis and Analysis of themes or findings from included papers
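As a toy illustration of step 3 (screening papers by title/abstract), the sketch below filters a handful of hypothetical records on title keywords. The records, criteria and field names are all invented; in practice screening decisions are human judgements, usually made independently by two reviewers:

```python
# Hypothetical bibliographic records, e.g. exported from a database search.
records = [
    {"id": 1, "title": "Barriers to stove uptake: a qualitative study"},
    {"id": 2, "title": "A randomised trial of improved stoves"},
    {"id": 3, "title": "Interviews on household fuel choices"},
]

# Illustrative inclusion terms signalling a qualitative study design.
include_terms = ["qualitative", "interview", "focus group"]

def screen(record):
    """Keep a record if its title mentions any qualitative-design term."""
    title = record["title"].lower()
    return any(term in title for term in include_terms)

shortlist = [r["id"] for r in records if screen(r)]
print(shortlist)
```

Records passing this first pass (here, 1 and 3) would then go forward to full-text review (step 4), with exclusions logged for the review's flow diagram.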
How long will it take?
Systematic reviews of qualitative evidence (qualitative evidence synthesis) typically take a similar
time to conduct as systematic reviews of quantitative research. However, this similarity hides
considerable variation between reviews that simply seek to aggregate qualitative information (e.g.
themes) which may be accomplished much more speedily, and more interpretative approaches that
seek to develop new insights.
Where Can I see an Example?
Glenton, C., Colvin, C. J., Carlsen, B., Swartz, A., Lewin, S., Noyes, J., & Rashidian, A. (2013). Barriers
and facilitators to the implementation of lay health worker programmes to improve access to
maternal and child health: qualitative evidence synthesis. The Cochrane Library.
Rehfuess, E. A., Puzzolo, E., Stanistreet, D., Pope, D., & Bruce, N. G. (2014). Enablers and barriers to
large-scale uptake of improved solid fuel stoves: a systematic review. Environmental health
perspectives, 122(2), 120-130.
Johnson, M., Everson-Hock, E., Jones, R., Woods, H. B., Payne, N., & Goyder, E. (2011). What are the
barriers to primary prevention of type 2 diabetes in black and minority ethnic groups in the UK? A
qualitative evidence synthesis. Diabetes Research and Clinical Practice, 93(2), 150-158.
Stanistreet, D., Puzzolo, E., Bruce, N. G., Pope, D., & Rehfuess, E. A. (2014). Factors influencing
household uptake of improved solid fuel stoves in low- and middle-income countries: a qualitative
systematic review. International Journal of Environmental Research and Public Health, 11(8), 8228-
8250.
Where Do I find Out More?
Gülmezoglu, A. M., Chandler, J., Shepperd, S., & Pantoja, T. (2013). Reviews of qualitative evidence:
a new milestone for Cochrane. The Cochrane database of systematic reviews, 11, ED000073.
Hannes, K., & Macaitis, K. (2012). A move to more systematic and transparent approaches in
qualitative evidence synthesis: update on a review of published papers. Qualitative Research, 12(4),
402-442.
Hannes, K., Booth, A., Harris, J., & Noyes, J. (2013). Celebrating methodological challenges and
changes: reflecting on the emergence and importance of the role of qualitative evidence in Cochrane
reviews. Syst Rev, 2, 84.
Lorenc, T., Pearson, M., Jamal, F., Cooper, C., & Garside, R. (2012). The role of systematic reviews of
qualitative evidence in evaluating interventions: a case study. Research Synthesis Methods, 3(1), 1-
10.
Major, C. H., & Savin-Baden, M. (2010). An introduction to qualitative research synthesis: Managing
the information explosion in social science research. Routledge.
Noyes J, Popay J, Pearson A, Hannes K, Booth A. Chapter 20: Qualitative research and Cochrane
reviews. In: Higgins JPT, Green S (editors), Cochrane Handbook for Systematic Reviews of
Interventions Version 5.1.0 (updated March 2011). The Cochrane Collaboration, 2011. Available
from www.cochrane-handbook.org.
I need to understand how an intervention or programme works…
Systematic Review with Logic Model
What is it?
Logic models have an established place in seeking to understand complex healthcare programmes as
a way of illustrating how a programme seeks to achieve its intended outcomes.[57] Logic models
may also be used to examine correlates and to describe connections between determinants of
outcomes. Recently logic models have been recognised as a valuable contribution to the
systematic review process. Logic models can be applied at different stages in a systematic review,
from informing a definition of scope through to providing a structure for data extraction, analysis
and interpretation.
When should I use this Method?
A systematic review with logic model can be used where you need to understand the conceptual
underpinnings of a particular intervention or programme. In particular you can use them to explore
causal links, effect mediators or moderators.[9] Logic models may also be used to direct the various
stages of the review process. They can “help justify narrowing the scope of a review, identify the
most relevant inclusion criteria, guide the literature search, and clarify interpretation of results when
drawing policy-relevant conclusions about review findings”.[57]
How is it done?
The logic model may be constructed a priori from an initial scoping of the literature and/or
consultation with stakeholders in which case it may be used subsequently as a framework for data
extraction and analysis. Alternatively, the logic model may emerge from the findings of the
systematic review with new data being used to explore, test or modify the relationships depicted in
a draft logic model.
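Whichever route is taken, the finished logic model is essentially a labelled causal chain from resources to outcomes. A minimal sketch follows; the stage names reflect common logic-model practice and the content is entirely hypothetical (loosely echoing an advice-service programme), not drawn from any included review:

```python
# A logic model links programme resources to intended outcomes in an
# explicit causal chain; reviewers can hang extracted data on each stage.
logic_model = {
    "inputs":              ["funding", "trained advisors"],
    "activities":          ["deliver welfare-advice sessions"],
    "outputs":             ["clients receive benefits advice"],
    "short_term_outcomes": ["increased household income"],
    "long_term_outcomes":  ["improved mental wellbeing"],
}

for stage, items in logic_model.items():
    print(f"{stage}: {', '.join(items)}")
```

Used as a data-extraction framework, each included study's findings would be coded against the stage (and causal link) they speak to, exposing gaps in the evidence along the chain.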
How long will it take?
If you are going to use a logic model at the beginning of the systematic review process you may need
an extra one to three months as a supplementary prequel to the review. Alternatively, if you are
using a logic model to facilitate the review process, e.g. to determine the data extraction process,
the existence of a logic model may accelerate the review. A key time factor is whether the logic
model already exists, whether you create it from a supplementary review process or whether you
generate the logic model through stakeholder involvement.
Where Can I see an Example?
Allmark, P., Baxter, S., Goyder, E., Guillaume, L., & Crofton-Martin, G. (2013). Assessing the health
benefits of advice services: using research evidence and logic model methods to explore complex
pathways. Health & Social Care in the Community, 21(1), 59-68.
Baxter, S. K., Blank, L., Woods, H. B., Payne, N., Rimmer, M., & Goyder, E. (2014). Using logic model
methods in systematic review synthesis: describing complex pathways in referral management
interventions. BMC Medical Research Methodology, 14(1), 62.
Turley, R., Saith, R., Bhan, N., Rehfuess, E., & Carter, B. (2013). Slum upgrading strategies involving
physical environment and infrastructure interventions and their effects on health and socio-
economic outcomes. The Cochrane Library.
Where Do I find Out More?
Logic Models in General
W.K. Kellogg Foundation Logic Model Development Website
https://www.wkkf.org/resource-directory/resource/2006/02/wk-kellogg-foundation-logic-model-
development-guide
W.K. Kellogg Foundation Logic Model Development Guide
http://www.smartgivers.org/uploads/logicmodelguidepdf.pdf
Funnell, S. C., & Rogers, P. J. (2011). Purposeful program theory: effective use of theories of change
and logic models (Vol. 31). John Wiley & Sons.
Methods of Logic Models In Systematic Reviews
Anderson, L. M., Petticrew, M., Rehfuess, E., Armstrong, R., Ueffing, E., Baker, P., ... & Tugwell, P.
(2011). Using logic models to capture complexity in systematic reviews. Research synthesis methods,
2(1), 33-42.
Anderson, L. M., Oliver, S. R., Michie, S., Rehfuess, E., Noyes, J., & Shemilt, I. (2013). Investigating
complexity in systematic reviews of interventions by using a spectrum of methods. Journal of clinical
epidemiology, 66(11), 1223-1229.
Baxter, S., Killoran, A., Kelly, M. P., & Goyder, E. (2010). Synthesizing diverse evidence: the use of
primary qualitative data analysis methods and logic models in public health reviews. Public
health, 124(2), 99-106.
Realist Synthesis
What is it?
Realist synthesis draws on a wide range of evidence to identify underlying causal
mechanisms and to explore how they work under what conditions, answering the question “What
works for whom under what circumstances?” rather than “What works?”.12 Specifically, it seeks to
‘unpack the mechanism’ of how complex programmes work (or why they fail) in particular contexts
and settings. Realism has roots in philosophy, the social sciences, and evaluation, but is a relatively
new methodology for the synthesis of evidence in healthcare and other policy arenas.[58] Realist
synthesis is a theory-driven, qualitative approach to synthesising qualitative, quantitative and
mixed-methods research evidence.[59] Most realist reviews focus
on interventions or programmes. While systematic reviews provide evidence on outcomes, a realist
review provides a method to understand what triggers particular behaviours, the consequences of
such behaviours, and what contextual factors affect those behaviours. In realist terms, these are
referred to as Context, Mechanisms and Outcomes (C-M-O configurations).[59]
Mechanisms refer to the variables in the decision-making process.[59] They include the beliefs,
values, desires and cognitive processes that influence why people choose to do what they do. These
mechanisms are influenced by the context. Context, in a realist review, generally refers to aspects of
the background, people and research setting that lead to the outcomes. Context is similar to
structure by incorporating social, cultural, historical or institutional aspects. Context either facilitates
or constrains the operation of an intervention or programme. Outcomes refer to expected or
unexpected intermediate (mediating) and final outcomes. Outcomes result from the interaction of
mechanisms and context.[59] The C-M-O configurations help ensure external validity, as they allow
the research team to extend their theory building to a level of abstraction for the theory/theories to
be useful in other contexts. The iterative approach to theory building and C-M-O configuring enables
a review team to confirm or refute their theories.[59]
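A C-M-O configuration is essentially a three-part record, which can be made concrete as a small data structure. The class and the example content below are illustrative inventions, not part of the realist literature:

```python
from dataclasses import dataclass

@dataclass
class CMOConfiguration:
    """A realist Context-Mechanism-Outcome configuration: in context C,
    mechanism M fires and produces outcome O."""
    context: str
    mechanism: str
    outcome: str

    def as_statement(self) -> str:
        """Render the configuration as a candidate theory statement."""
        return (f"In the context of {self.context}, "
                f"{self.mechanism}, leading to {self.outcome}.")

# A hypothetical configuration a review team might propose and then
# test against the included evidence.
cmo = CMOConfiguration(
    context="strong managerial support for a new guideline",
    mechanism="clinicians feel ownership of the change",
    outcome="sustained uptake of the guideline",
)
print(cmo.as_statement())
```

Recording each candidate theory in this shape makes the iterative confirm-or-refute cycle explicit: every included study is read for evidence bearing on one or more stated configurations.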
When should I use this Method?
Realist synthesis is believed, by its creators, to fill “an important methodological need…for a
synthesis method that can cope effectively with management and service delivery interventions”.
[58] In comparison with reviews of the effectiveness of clinical interventions, “the literature on
service interventions is epistemologically complex and methodologically diverse. As such, it presents
additional challenges for the reviewer”.[58] Realist review methods are not without their
difficulties. The iterative, flexible nature of realist reviews does not align well with protocol-driven,
standardized processes common to established systematic review methods.[37] Results from a
realist review are only generalisable if similar mechanisms work to generate outcomes of interest.
Completion of a realist review requires a high level of training and experience. Given the novelty of
the method such training and experience may not be found routinely in government or policy
development agencies, academic institutions, or community-based organizations.[37] Realist
reviews require considerable and sustained investment over time. This level of investment may not
always suit the time-sensitive demands of many policy decisions. In addition, due to their expansive
and exploratory nature, realist reviews can often suffer from ‘scope creep’.[37]
12 Better Evaluation. Realist synthesis. http://betterevaluation.org/evaluation-options/realistsynthesis
How is it done?
A realist synthesis follows similar stages to a conventional systematic review, but with some notable
differences:[60]
1. The focus of the synthesis is derived from a negotiation between stakeholders and reviewers and
therefore the extent of stakeholder involvement throughout the process is high.
2. The search and appraisal of evidence is purposive and theoretically driven with the aim of refining
theory.
3. Multiple types of information and evidence can be included.
4. The process is iterative.
5. The findings from the synthesis focus on explaining to the reader why (or not) the intervention
works and in what ways, to enable informed choices about further use and/or research
The realist approach involves identifying underlying causal mechanisms and exploring how they
work under what conditions.[60] The stages of their review included: defining the scope of the review
(concept mining and framework formulation); searching for and scrutinising the evidence; extracting
and synthesising the evidence; and developing the narrative, including hypotheses. See Table 1 from
Rycroft-Malone et al.[60]
Table 3 – Approach to realist review (from Rycroft-Malone[60], adapted from Pawson)

Stage: Define the scope of the review
  Action: Identify the question
    Activity: What is the nature and content of the intervention? What are the
    circumstances or context of its use? What are the policy intentions or
    objectives? What are the nature and form of its outcomes or impacts?
    Undertake exploratory searches to inform discussion with review stakeholders.
  Action: Clarify the purpose(s) of the review
    Activity: Theory integrity – does the intervention work as predicted?
    Theory adjudication – which theories around the intervention seem to fit
    best? Comparison – how does the intervention work in different settings,
    for different groups? Reality testing – how does the policy intent of the
    intervention translate into practice?
  Action: Find and articulate the programme theories
    Activity: Search for relevant ‘theories’ in the literature. Draw up a list
    of programme theories. Group, categorise or synthesise theories. Design a
    theoretically based evaluative framework to be ‘populated’ with evidence.
    Develop bespoke data extraction forms.

Stage: Search for and appraise the evidence
  Action: Search for the evidence
    Activity: Decide and define the purposive sampling strategy. Define search
    sources, terms and methods to be used (including cited reference searching).
    Set the thresholds for stopping searching at saturation.
  Action: Test of relevance
    Activity: Test relevance – does the research address the theory under test?
    Test rigour – does the research support the conclusions drawn from it by
    the researchers or the reviewers?

Stage: Extract and synthesise findings
  Action: Extract the results
    Activity: Extract data to populate the evaluative framework with evidence.
  Action: Synthesise findings
    Activity: Compare and contrast findings from different studies. Use
    findings from studies to address purpose(s) of review. Seek both
    confirmatory and contradictory findings. Refine programme theories in the
    light of evidence, including findings from analysis of study data.

Stage: Develop narrative
  Activity: Involve commissioners/decision makers in review of findings.
    Disseminate review with findings, conclusions and recommendations.

Rycroft-Malone et al. Implementation Science 2012 7:33 doi:10.1186/1748-5908-7-33
How long will it take?
Because it is an interpretive process a realist synthesis typically takes longer than a systematic
review on the same topic. This extra interpretive process may add as much as an additional six
months to a review topic. A key consideration is whether the realist review will seek to review the
entire body of literature related to a topic or whether, more typically, some selectivity is employed
in the sampling. For this reason some propose that attention is focused on particularly rich clusters
50
of related quantitative and qualitative papers that share a common study.[61-65] Others have
proposed a rapid realist review strategy (See above).[37]
Where Can I see an Example?
Best, A., Greenhalgh, T., Lewis, S., Saul, J. E., Carroll, S., & Bitz, J. (2012). Large-system transformation in health care: a realist review. Milbank Quarterly, 90(3), 421-456.
Greenhalgh, T., Kristjansson, E., & Robinson, V. (2007). Realist review to understand the efficacy of
school feeding programmes. BMJ: British Medical Journal, 335(7625), 858.
Pointing, S. B. (2014). Realist methodology in practice: translational findings from two realist
syntheses. Learning Communities: International Journal of Learning in Social Contexts, 14, 60-80.
Willis, C. D., Saul, J. E., Bitz, J., Pompu, K., Best, A., & Jackson, B. (2014). Improving organizational
capacity to address health literacy in public health: a rapid realist review. Public health, 128 (6), 515-
524.
Where Do I find Out More?
Pawson, R., Greenhalgh, T., Harvey, G., & Walshe, K. (2005). Realist review - a new method of systematic review designed for complex policy interventions. Journal of Health Services Research & Policy, 10(suppl 1), 21-34.
Saul, J. E., Willis, C. D., Bitz, J., & Best, A. (2013). A time-responsive tool for informing policy making:
rapid realist review. Implementation Science, 8(1), 103.
What other choices are available?
Framework Synthesis
Framework-based synthesis is thought to offer promise in addressing applied policy questions.[66] Reviewers choose a conceptual model likely to be suitable for the review question and use it as the basis of their initial coding framework. This framework is then modified in response to the evidence reported in the included studies. The final product is a revised framework that may include both modified factors and new factors not anticipated in the original model.
Figure 1 - Process of Best Fit Framework Synthesis[67]
'Best fit' framework-based synthesis may be especially suitable in addressing urgent policy
questions where the need for a more fully developed synthesis is balanced by the need for a quick
answer.[67]
Carroll, C., Booth, A., & Cooper, K. (2011). A worked example of "best fit" framework synthesis: A systematic review of views concerning the taking of some potential chemopreventive agents. BMC Medical Research Methodology, 11(1), 1-9.
Carroll, C., Booth, A., Leaviss, J., & Rick, J. (2013). “Best fit” framework synthesis: refining the
method. BMC medical research methodology, 13(1), 37.
Carroll, C., Rick, J., Leaviss, J., Fishwick, D., & Booth, A. (2013). A qualitative evidence synthesis of
employees’ views of workplace smoking reduction or cessation interventions. BMC public
health, 13(1), 1095.
Dixon-Woods, M. (2011). Using framework-based synthesis for conducting reviews of qualitative
studies. BMC medicine, 9(1), 39.
Dixon-Woods, M., McNicol, S., & Martin, G. (2012). Ten challenges in improving quality in healthcare: lessons from the Health Foundation's programme evaluations and relevant literature. BMJ Quality & Safety, 21(10), 876-884.
Narrative Synthesis
‘Narrative synthesis’ refers to an approach to the systematic review and synthesis of findings from multiple studies that relies primarily on the use of words and text to summarise and explain the findings of the synthesis.[68] Whilst narrative synthesis can involve the manipulation of statistical data, its defining characteristic is a textual approach to the process of synthesis that ‘tells the story’ of the findings from the included studies.[68] As used here, ‘narrative synthesis’ refers to a process of synthesis that can be used in systematic reviews addressing a wide range of questions, not only those relating to the effectiveness of a particular intervention. Narrative synthesis offers a general framework of selected narrative descriptions and ordering of primary evidence with commentary and interpretation, combined with specific tools and techniques that help to increase transparency and trustworthiness.[68] Narrative synthesis can be applied to reviews of quantitative and/or qualitative research.
Arai, L., Britten, N., Popay, J., Roberts, H., Petticrew, M., Rodgers, M., & Sowden, A. (2007). Testing
methodological developments in the conduct of narrative synthesis: a demonstration review of
research on the implementation of smoke alarm interventions. Evidence & Policy: A Journal of
Research, Debate and Practice, 3(3), 361-383.
Barnett-Page, E., & Thomas, J. (2009). Methods for the synthesis of qualitative research: a critical
review. BMC medical research methodology, 9(1), 59.
Lucas, P. J., Baird, J., Arai, L., Law, C., & Roberts, H. M. (2007). Worked examples of alternative
methods for the synthesis of qualitative and quantitative research in systematic reviews. BMC
medical research methodology, 7(1), 4.
Rodgers, M., Sowden, A., Petticrew, M., Arai, L., Roberts, H., Britten, N., & Popay, J. (2009). Testing methodological guidance on the conduct of narrative synthesis in systematic reviews: effectiveness of interventions to promote smoke alarm ownership and function. Evaluation, 15(1), 49-73.
Snilstveit, B., Oliver, S., & Vojtkova, M. (2012). Narrative approaches to systematic review and
synthesis of evidence for international development policy and practice. Journal of development
effectiveness, 4(3), 409-429.
Qualitative Comparative Analysis
Qualitative comparative analysis (QCA) is a mixed synthesis method that analyses complex causal connections, using Boolean logic to explain pathways to a particular outcome on the basis of a truth table.[69] The Boolean analysis of necessary and sufficient conditions for particular outcomes is based on the presence or absence of independent variables and outcomes in each primary study. Necessity and sufficiency are indicated when certain set relations exist: with necessity, the outcome is a subset of the causal condition; with sufficiency, the causal condition is a subset of the outcome.[70] QCA suits situations where there are too many “cases” for researchers to keep all the case knowledge “in their heads”, but too few cases or events for conventional statistical techniques (e.g. meta-analysis).[71]
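The set relations described above can be made concrete with a toy truth table. The following Python sketch is illustrative only (the conditions, cases and outcome are invented): a condition is treated as sufficient when the cases exhibiting it form a subset of the cases with the outcome, and as necessary when the cases with the outcome form a subset of those exhibiting the condition.

```python
# Toy QCA-style check of necessity and sufficiency over a truth table.
# Each row records presence (True) / absence (False) of conditions and
# the outcome in one case. All data are invented for illustration.

def is_sufficient(cases, condition, outcome):
    """Every case with the condition also shows the outcome."""
    return all(case[outcome] for case in cases if case[condition])

def is_necessary(cases, condition, outcome):
    """Every case with the outcome also has the condition."""
    return all(case[condition] for case in cases if case[outcome])

cases = [
    {"training": True,  "funding": True,  "success": True},
    {"training": True,  "funding": False, "success": True},
    {"training": False, "funding": True,  "success": False},
    {"training": False, "funding": False, "success": False},
]

print(is_sufficient(cases, "training", "success"))  # True
print(is_necessary(cases, "training", "success"))   # True
print(is_sufficient(cases, "funding", "success"))   # False
```

A full QCA would go on to minimise the truth table (Boolean minimisation) to identify the simplest combinations of conditions associated with the outcome; dedicated software exists for that step.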
Blackman, T., Wistow, J., & Byrne, D. (2013). Using Qualitative Comparative Analysis to understand
complex policy problems. Evaluation, 19(2), 126-140.
Onwuegbuzie, A. J., Leech, N. L., & Collins, K. M. (2012). Qualitative analysis techniques for the
review of the literature. The qualitative report, 17(28), 1-28.
Sager, F., & Andereggen, C. (2012). Dealing with complex causality in realist synthesis: the promise of qualitative comparative analysis. American Journal of Evaluation, 33(1), 60-78.
Thomas, J., O’Mara-Eves, A., & Brunton, G. (2014). Using qualitative comparative analysis (QCA) in
systematic reviews of complex interventions: a worked example. Systematic reviews, 3(1), 1-14.
Summary
As mentioned above, your choice of evidence synthesis product is determined by:
1. The Type of Review Question you are asking
2. The Type and Quantity of Studies Available to Answer your Question
3. How your final Review will be used
4. The Skills, Resources and Expertise of Your Team
None of these methodologies is completely fixed in its duration, and any of the suggested timeframes can be negotiated, with corresponding implications for quality and resources. Nevertheless, the relative complexity and rigour of the different methods is indicated by the suggested time chart, so it is not possible, for example, to take one of the methodologies located at the right-hand side of the chart and then conduct it within the time constraints indicated at the left-hand side. Indeed, it is preferable to choose a different label for the resultant evidence product (compare the rapid realist review with the realist synthesis) than to imply wrongly that the work has been conducted to the level indicated by a label recognised by the evidence synthesis community.
Furthermore, many review teams regard the evidence synthesis products covered in this report as points on a continuum. Within such a context, the individual methods used by a particular methodology can be considered simply a series of systematic approaches that constitute a toolbox from which to select judiciously. A review team and a commissioner of reviews can therefore use the options outlined in this compendium as a starting point for negotiation around a particular review, selecting certain deliverables to be included within the overall evidence synthesis product.
Finally, it is not uncommon for methods to be used in conjunction with one another: for example, an evidence briefing may be conducted within a time-limited window and then followed up with a more rigorous and considered systematic review product.
References
1. Booth A, Papaioannou D, Sutton A: Systematic Approaches to a Successful
Literature Review: Sage Publications Limited; 2012.
2. Mulrow CD: The medical review article: state of the science. Ann Intern Med
1987, 106(3):485-488.
3. McAlister FA, Clark HD, van Walraven C, Straus SE, Lawson FM, Moher D, Mulrow
CD: The medical review article revisited: has the science improved? Ann
Intern Med 1999, 131(12):947-951.
4. Cook DJ, Mulrow CD, Haynes RB: Systematic reviews: synthesis of best
evidence for clinical decisions. Ann Intern Med 1997, 126(5):376-380.
5. Last J, Spasoff R, Harris S, Thuriaux M (eds): A Dictionary of Epidemiology. International Epidemiological Association. New York: Oxford University Press; 2001.
6. Petticrew M, Roberts H: Systematic Reviews in the Social Sciences: A
practical guide. Oxford: Blackwell Publishing; 2006.
7. Keele S: Guidelines for performing systematic literature reviews in software engineering. EBSE Technical Report, Version 2.3; 2007.
8. Grant MJ, Booth A: A typology of reviews: an analysis of 14 review types and
associated methodologies. Health Info Libr J 2009, 26(2):91-108.
9. Anderson S, Allen P, Peckham S, Goodwin N: Asking the right questions:
scoping studies in the commissioning of research on the organisation and
delivery of health services. Health Research Policy and Systems 2008, 6.
10. Stone PW: Popping the (PICO) question in research and evidence-based
practice. Appl Nurs Res 2002, 15(3):197-198.
11. Moher D, Liberati A, Tetzlaff J, Altman DG, Group P: Preferred reporting items
for systematic reviews and meta-analyses: the PRISMA statement. J Clin
Epidemiol 2009, 62(10):1006-1012.
12. Mays N, Roberts E, Popay J: Synthesising research evidence. Studying the
organisation and delivery of health services: Research methods 2001:188-220.
13. Levac D, Colquhoun H, O'Brien K: Scoping studies: advancing the
methodology. Implementation Science 2010, 5(1):69.
14. Arksey H, O'Malley L: Scoping studies: towards a methodological framework.
International Journal of Social Research Methodology 2005, 8(1):19-32.
15. Daudt H, van Mossel C, Scott S: Enhancing the scoping study methodology: a
large, inter-professional team's experience with Arksey and O'Malley's
framework. BMC Medical Research Methodology 2013, 13(1):48.
16. Pham M, Rajić A, Greig J, Sargeant J, Papadopoulos A, McEwen S: A scoping review of scoping reviews: advancing the approach and enhancing the consistency. Res Synth Methods 2014, 5(4):371-385.
17. Rapid Evidence Assessment
Toolkit. [http://webarchive.nationalarchives.gov.uk/20140305122816/http://
www.civilservice.gov.uk/networks/gsr/resources-and-guidance/rapid-
evidence-assessment]
18. Banzi R, Moja L, Pistotti V, Facchini A, Liberati A: Conceptual frameworks and
empirical approaches used to assess the impact of health research: an
overview of reviews. Health Res Policy Syst 2011, 9:26.
19. Clarke M: Systematic review of reviews of risk factors for intracranial
aneurysms. Neuroradiology 2008, 50(8):653-664.
20. Williams C, Brunskill S, Altman D, Briggs A, Campbell H, Clarke M, Glanville J,
Gray A, Harris A, Johnston K et al: Cost-effectiveness of using prognostic
information to select women with breast cancer for adjuvant systemic
therapy. Health Technol Assess 2006, 10(34):iii-iv, ix-xi, 1-204.
21. McNeill J, Lynn F, Alderdice F: Systematic review of reviews: the public health role of the midwife. Belfast: School of Nursing & Midwifery, Queen's University Belfast; 2010.
22. Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA,
Boers M: AMSTAR is a reliable and valid measurement tool to assess the
methodological quality of systematic reviews. J Clin Epidemiol 2009,
62(10):1013-1020.
23. Rada G, Pérez D, Capurro D: Epistemonikos: a free, relational, collaborative,
multilingual database of health evidence. Studies in health technology and
informatics 2013, 192:486.
24. Nabudere H, Asiimwe D, Mijumbi R: Task shifting in maternal and child health
care: an evidence brief for Uganda. Int J Technol Assess Health Care 2011,
27(2):173-179.
25. Woyessa A, Hadis M, Kebede A: Human resource capacity to effectively
implement malaria elimination: a policy brief for Ethiopia. Int J Technol
Assess Health Care 2013, 29(2):212-217.
26. Lavis JN, Permanand G, Oxman AD, Lewin S, Fretheim A: SUPPORT Tools for
evidence-informed health Policymaking (STP) 13: Preparing and using
policy briefs to support evidence-informed policymaking. Health Res Policy
Syst 2009, 7 Suppl 1:S13.
27. Policy briefs. [http://www.who.int/evidence/assessing/sure/Publication/en/]
28. Lavis J, Davies H, Oxman A, Denis JL, Golden-Biddle K, Ferlie E: Towards
systematic reviews that inform health care management and policy-
making. J Health Serv Res Policy 2005, 10 Suppl 1:35-48.
29. Chambers D, Wilson P: A framework for production of systematic review
based briefings to support evidence-informed decision-making. Syst Rev
2012, 1:32.
30. Chambers D, Grant R, Warren E, Pearson S-A, Wilson P: Use of evidence from
systematic reviews to inform commissioning decisions: a case study.
Evidence & Policy: A Journal of Research, Debate and Practice 2012, 8(2):141-148.
31. Wilson PM, Farley K, Thompson C, Chambers D, Bickerdike L, Watt IS, Lambert
M, Turner R: Effects of a demand-led evidence briefing service on the uptake
and use of research evidence by commissioners of health services: protocol
for a controlled before and after study. Implement Sci 2015, 10:7.
32. Department of Health, Victoria: Guidelines for evidence summaries for health promotion and disease prevention interventions; 2012.
33. Clark R, Waters E, Armstrong R, Conning R, Allender S, Swinburn B: Evidence
and obesity prevention: developing evidence summaries to support
decision making. Evidence & Policy: A Journal of Research, Debate and Practice
2013, 9(4):547-556.
34. Blank L, Coster J, O'Cathain A, Knowles E, Tosh J, Turner J, Nicholl J: The appropriateness of, and compliance with, telephone triage decisions: a systematic review and narrative synthesis. J Adv Nurs 2012.
35. Rapid Evidence Assessment Toolkit.
[http://webarchive.nationalarchives.gov.uk/20140305122816/http://www.civi
lservice.gov.uk/networks/gsr/resources-and-guidance/rapid-evidence-
assessment]
36. Ganann R, Ciliska D, Thomas H: Expediting systematic reviews: methods and
implications of rapid reviews. Implementation Science 2010, 5(1):56.
37. Saul JE, Willis CD, Bitz J, Best A: A time-responsive tool for informing policy
making: rapid realist review. Implement Sci 2013, 8:103.
38. Watt A, Cameron A, Sturm L, Lathlean T, Babidge W, Blamey S, Facey K, Hailey D,
Norderhaug I, Maddern G: Rapid reviews versus full systematic reviews: an
inventory of current methods and practice in health technology
assessment. Int J Technol Assess Health Care 2008, 24(2):133-139.
39. Watt A, Cameron A, Sturm L, Lathlean T, Babidge W, Blamey S, Facey K, Hailey D,
Norderhaug I, Maddern G: Rapid versus full systematic reviews: validity in clinical practice? ANZ Journal of Surgery 2008, 78(11):1037-1040.
40. Riley B, Norman CD, Best A: Knowledge integration in public health: a rapid
review using systems thinking. Evidence & Policy: A Journal of Research, Debate
and Practice 2012, 8(4):417-431.
41. Khangura S, Konnyu K, Cushman R, Grimshaw J, Moher D: Evidence summaries:
the evolution of a rapid review approach. Systematic reviews 2012, 1(1).
42. Schünemann HJ, Moja L: Reviews: Rapid! Rapid! Rapid! …and systematic. Syst
Rev 2015, 4:4.
43. Pawson R, Greenhalgh T, Harvey G, Walshe K: Realist review--a new method of
systematic review designed for complex policy interventions. J Health Serv
Res Policy 2005, 10 Suppl 1:21-34.
44. Wong G, Greenhalgh T, Westhorp G, Buckingham J, Pawson R: RAMESES
publication standards: realist syntheses. BMC Med 2013, 11:21.
45. Ioannidis JP: Integration of evidence from multiple meta-analyses: a primer
on umbrella reviews, treatment networks and multiple treatments meta-
analyses. CMAJ 2009, 181(8):488-493.
46. Silva V, Grande AJ, Martimbianco AL, Riera R, Carvalho AP: Overview of
systematic reviews - a new type of study: part I: why and for whom? Sao
Paulo Med J 2012, 130(6):398-404.
47. Pieper D, Buechter R, Jerinic P, Eikermann M: Overviews of reviews often have
limited rigor: a systematic review. J Clin Epidemiol 2012, 65(12):1267-1273.
48. Viswanathan M, Ansari M, Berkman N, Chang S, Hartling L, McPheeters L,
Santaguida P, Shamliyan T, Singh K, Tsertsvadze A et al: Assessing the Risk of
Bias of Individual Studies in Systematic Reviews of Health Care
Interventions. In: Methods Guide for Comparative Effectiveness Reviews. Agency for Healthcare Research and Quality; 2012.
49. Kitchenham B: Procedures for performing systematic reviews. Keele, UK: Keele University; 2004:1-26.
50. Phelps SF, Campbell N: Systematic reviews in theory and practice for library
and information studies. Library and Information Research 2012, 36(112):6-15.
51. Briner RB, Denyer D: Systematic review and evidence synthesis as a practice
and scholarship tool. Handbook of evidence-based management: Companies,
classrooms and research 2012:112-129.
52. What is a systematic review?
[http://www.campbellcollaboration.org/what_is_a_systematic_review/]
53. Crombie IK, Davies HT: What is meta-analysis? London: Hayward Medical Communications; 2009.
54. Thornton A, Lee P: Publication bias in meta-analysis: its causes and
consequences. J Clin Epidemiol 2000, 53(2):207-216.
55. Major CH, Savin-Baden M: An Introduction to Qualitative Research Synthesis:
Managing the Information Explosion in Social Science Research: Routledge;
2009.
56. Salter C: Qualitative review: challenges and opportunities. Presentation for Dev UEA London. London; 2011.
57. Anderson LM, Petticrew M, Rehfuess E, Armstrong R, Ueffing E, Baker P, Francis
D, Tugwell P: Using logic models to capture complexity in systematic
reviews. Res Synth Methods 2011, 2(1):33-42.
58. Pawson R, Greenhalgh T, Harvey G, Walshe K: Realist Synthesis: an introduction. Manchester: University of Manchester; 2004.
59. Durham J, Blondell SJ: Research protocol: a realist synthesis of cross-border
patient mobility from low-income and middle-income countries. BMJ Open
2014, 4(11):e006514.
60. Rycroft-Malone J, McCormack B, Hutchinson AM, DeCorby K, Bucknall TK, Kent B,
Schultz A, Snelgrove-Clarke E, Stetler CB, Titler M et al: Realist synthesis:
illustrating the method for implementation research. Implement Sci 2012,
7:33.
61. Booth A, Harris J, Croot E, Springett J, Campbell F, Wilkins E: Towards a
methodology for cluster searching to provide conceptual and contextual
"richness" for systematic reviews of complex interventions: case study
(CLUSTER). BMC Med Res Methodol 2013, 13:118.
62. Jagosh J, Pluye P, Macaulay AC, Salsberg J, Henderson J, Sirett E, Bush PL, Seller R,
Wong G, Greenhalgh T et al: Assessing the outcomes of participatory
research: protocol for identifying, selecting, appraising and synthesizing
the literature for realist review. Implement Sci 2011, 6:24.
63. Jagosh J, Macaulay AC, Pluye P, Salsberg J, Bush PL, Henderson J, Sirett E, Wong G,
Cargo M, Herbert CP et al: Uncovering the benefits of participatory research:
implications of a realist review for health research and practice. Milbank Q
2012, 90(2):311-346.
64. Jagosh J, Pluye P, Wong G, Cargo M, Salsberg J, Bush PL, Herbert CP, Green LW,
Greenhalgh T, Macaulay AC: Critical reflections on realist review: insights
from customizing the methodology to the needs of participatory research
assessment. Res Synth Methods 2014, 5(2):131-141.
65. Jagosh J, Bush PL, Salsberg J, Macaulay AC, Greenhalgh T, Wong G, Cargo M, Green
LW, Herbert CP, Pluye P: A realist evaluation of community-based
participatory research: partnership synergy, trust building and related
ripple effects. BMC Public Health 2015, 15:725.
66. Dixon-Woods M: Using framework-based synthesis for conducting reviews
of qualitative studies. BMC Med 2011, 9:39.
67. Carroll C, Booth A, Cooper K: A worked example of "best fit" framework synthesis: A systematic review of views concerning the taking of some potential chemopreventive agents. BMC Medical Research Methodology 2011, 11.
68. Popay J, Roberts H, Sowden A, Petticrew M, Arai L: Guidance on the conduct of narrative synthesis in systematic reviews. A product from the ESRC Methods Programme; 2006.
69. Dixon-Woods M, Agarwal S, Jones D, Young B, Sutton A: Synthesising
qualitative and quantitative evidence: a review of possible methods. J
Health Serv Res Policy 2005, 10(1):45-53.
70. Jordan E, Javernick-Will A, Amadei B: A qualitative comparative analysis of
neighborhood recovery following Hurricane Katrina. International Journal of
Disaster Resilience in the Built Environment 2014, 5(4):391-412.
71. Ragin C: A qualitative comparative analysis of pension systems. The
comparative political economy of the welfare state 1994:320-345.
... The mapping review was chosen to address a defined practicerelated issue of RN preceptor competencies, rather than investigate a whole topic such as nursing clinical teaching, facilitation and supervision. A mapping review is a unique approach to identifying variations, contradictions and the unknown in large volumes of evidence (Booth, 2016;Cooper, 2016). It is important to note that the mapping review and scoping review methodology are often considered interchangeably; however, there are differences in methodology (Campbell et al., 2023). ...
... Mapping reviews provide general answers to broad questions and are useful for informing larger knowledge syntheses, such as a systematic review and/or primary research projects (Aveyard & Bradbury-Jones, 2019;Booth, 2016). A scoping review differs, seeking to answer broad questions with more detail. ...
... The first two authors completed the search of the published literature between March and July 2022. As a systematic review may be conducted as a next research step, the mapping review is systematic in its approach although it does not require a systematic review reporting method (Booth, 2016). ...
Article
Full-text available
Aims To review the contemporary international literature on nurse preceptor competencies and map the components and their descriptors. Review Methods A mapping review. Data Sources Articles reporting evidence‐based and validated Registered Nurse (RN) preceptor competencies published between 2013 and 2022 were identified. Open access databases such as PubMed and Google Scholar and the library healthcare databases Scopus and CINAHL were searched. The authors collaborated at each review stage that included screening, article selection, tabulation, mapping and preparation of findings. Results Seven quantitative studies were included. Three were based on existing nurse preceptor competency data sets and four were purposely developed using collaborative research methods. Each study validated findings through a survey of nurse stakeholders. Three key competencies shared across all studies were ‘facilitating teaching’, ‘being a role model’ and ‘evaluating student's performance’. The number of competency categories ranged from three to 10 and the accompanying item descriptors from 9 to 83. Although terminology describing data sets was inconsistent, similarity was seen across competency domains. Conclusion The contemporary nursing preceptor role is considered an emerging specialist education role. The results offer a set of validated preceptor competency descriptors, applicable to practice, that provide insight into ways employers may recruit, support and retain nurse preceptors. Implications for the Profession The mapped results provide a concise summary of nurse preceptor competency research internationally that can inform further development of RN preceptors. Impact This review addresses the lack of consensus around nursing preceptor competencies for clinical supervision of undergraduate nursing students. Seven competency domains were identified describing key preceptor role capabilities. 
The domains Facilitator’, ‘Role model’ and ‘Evaluator’ featured across the included studies: ‘More than 300 competency descriptors were reported’. Our review results could better prepare RN preceptors for their important role. Employers of RN preceptors could use the results to design performance competencies that may enhance nursing preceptorship. Reporting Method This review adheres to the PRISMA‐ScR EQUATOR guidelines as the recommended reporting method for mapping reviews.
... [42][43][44] We will incorporate guidance provided by authors in the fields of library science, medicine, epidemiology and environmental sciences. [45][46][47][48][49][50][51] We will use the Preferred Reporting Items for Systematic reviews and Meta-analyses for Reporting Literature Searches (PRISMA-S) extension to verify that each component of each search is completely reported and reproducible. 52 Studies will be included that are observational epidemiologic designs and examine disease prevalence, association with other disorders, influence of multiple inputs, social determinants and relationships to health outcomes. ...
... This scoping review will characterise the evidence by systematically collating and cataloguing the characteristics and extent of the available literature. 43 48 Such a catalogue may then become a comprehensive database of literature. 45 Establishing an evidence map of SPP and NSHC associations can help set priorities for future epidemiological studies and systematic reviews and the development of specific research questions. ...
Article
Full-text available
Introduction The increasing prevalence of coexisting health conditions poses a challenge to healthcare providers and healthcare systems. Spinal pain (eg, neck and back pain) and spinal pathologies (eg, osteoporotic fractures and degenerative spinal disease) exist concurrently with other non-spinal health conditions (NSHC). However, the scope of what associations may exist among these co-occurring conditions is unclear. Therefore, this scoping review aims to map the epidemiological literature that reports associations between spine-related pain and pathologies (SPPs) and NSHCs. Methods and analysis This scoping review will follow the JBI protocol and Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews. We will systematically search the literature using key words and MeSH terms for SPPs and NSHCs. Terminology/vocabulary for NSHCs will include those for communicable and non-communicable diseases as reported by WHO Global Burden of Disease reports. Five databases will be searched from inception: MEDLINE, EMBASE, APA PsycInfo, Scopus and Web of Science Core Collection. Papers published in English, in peer-reviewed journals, including measures of association between SPPs and NSHCs and using observational epidemiologic study designs will be included. Excluded will be studies of cadaveric, animal or health behaviours; studies with no measures of association and non-observational epidemiologic studies. Results will include the number of studies, the studies that have evaluated the measures of association and the frequency of the studied associations between SPPs and NSHCs. Results will be reported in tables and diagrams. Themes of comorbidities will be synthesised into a descriptive report. Ethics and dissemination This scoping review was deemed exempt from ethics review. 
This review will provide a comprehensive overview of the literature that reports associations between SPPs and NSHCs to inform future research initiatives and practices. Results will be disseminated through publication in peer-reviewed journals and research conferences. Registration details https://osf.io/w49u3 .
... As an approach to research evaluation, realist synthesis provides a means to understand the triggers for particular behaviours, the effects of such behaviours and what contextual factors influence them (Booth, 2015). In a realist approach, a programme theory (or theories) is the underlying assumption(s) about how an intervention may work and the impacts it is expected to have (Pawson et al., 2005). ...
... We aim to identify 'conceptually rich' or 'contextually thick' evidence and will therefore, use citation searching and author searching to identify clusters of related papers (Booth, 2015). It is likely that some relevant evidence may exist in unpublished form, and therefore, we will seek to maximize opportunities for identification of this literature through consultation with our PAG, stakeholders and communication with relevant policy, professional and third sector organizations. ...
Article
Full-text available
Aim To identify and characterize strategies, which contribute to the prevention of urinary tract infection (UTI) in older people living in care homes. Design The realist synthesis has four iterative stages to (1) develop initial programme theory; (2) search for evidence; (3) test and refine theory supported by relevant evidence and (4) formulate recommendations. Data from research articles and other sources will be used to explore the connection between interventions and the context in which they are applied in order to understand the mechanisms, which influence the outcomes to prevent UTI. Methods A scoping search of the literature and workshops with stakeholders will identify initial programme theories. These theories will be tested and refined through a systematic search for evidence relating to mechanisms that trigger prevention and recognition of UTI in older people in care homes. Interviews with key stakeholders will establish practical relevance of the theories and their potential for implementation. Discussion UTI is the most commonly diagnosed infection in care home residents. Evidence on the effectiveness of strategies to prevent UTI in long‐term care facilities does not address the practicality of implementing these approaches in UK care homes. The realist synthesis is designed to examine this important gap in evidence. Impact Our evidence‐informed programme theory will help inform programmes to improve practice to reduce the incidence of UTI in older people living in care homes and related research. Patient and public involvement will be crucial to ensuring that our findings reach carers and the public. Patient and public contribution Involvement of patient and public representatives is embedded throughout the study to ensure it is underpinned by multiple perspectives of importance to care home residents. 
Our co‐investigator representing patient and public involvement is a lay member of the team and will chair the Project Advisory Group, which has two additional lay members. This will help to ensure that our findings and resources reach carers and the public and represent their voice in our publications and presentations to professional and lay audiences.
... An umbrella review serves as a comprehensive document that provides a useful overview of reviews on a specific topic, including all relevant reviews [83][84][85]. Our study represents the first umbrella review conducted on risk factors for low birth weight (LBW). Additionally, significant publication bias, as determined by Egger's test, was observed in only three meta-analyses (periodontal disease, depression, anemia), while most of the others did not report significant publication bias. ...
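Egger's test, mentioned in the excerpt above, checks funnel-plot asymmetry by regressing each study's standardized effect on its precision; an intercept far from zero suggests small-study effects (possible publication bias). A minimal illustrative sketch, using entirely hypothetical effect sizes and standard errors and assuming `scipy` is available:

```python
import numpy as np
from scipy import stats

def eggers_test(effects, ses):
    """Egger's regression test for funnel-plot asymmetry:
    regress the standardized effect (effect/SE) on precision (1/SE).
    An intercept far from zero suggests small-study effects."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    y = effects / ses          # standardized effects
    x = 1.0 / ses              # precision
    res = stats.linregress(x, y)
    t = res.intercept / res.intercept_stderr
    p = 2 * stats.t.sf(abs(t), len(effects) - 2)   # H0: intercept == 0
    return res.intercept, p

# Hypothetical study-level effects and standard errors
intercept, p = eggers_test([0.2, 0.5, 0.3, 0.6, 0.4],
                           [0.10, 0.30, 0.15, 0.35, 0.20])
```

In practice the test has low power with few studies, which is why review authors typically report it alongside a visual funnel-plot inspection.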
Article
Full-text available
Background In this umbrella review, we systematically evaluated the evidence from meta-analyses and systematic reviews of maternal factors associated with low birth weight. Methods PubMed, Scopus, and Web of Science were searched to identify all relevant published studies up to August 2023. We included all meta-analyses (based on cohort, case-control, and cross-sectional studies) that examined the association between maternal factors (15 risk factors) and risk of LBW, regardless of publication date. A random-effects meta-analysis was conducted to estimate the summary effect size along with the 95% confidence interval (CI), 95% prediction interval, and heterogeneity (I²) in all meta-analyses. Hedges’ g was used as the effect size metric. The effects of small studies and excess significance biases were assessed using funnel plots and Egger’s test, respectively. The methodological quality of the included studies was assessed using the AMSTAR 2 tool. Results We included 13 systematic reviews comprising 15 meta-analyses, based on the inclusion criteria. The following 13 maternal factors were identified as risk factors for low birth weight: crack/cocaine (odds ratio [OR] 2.82, 95% confidence interval [CI] 2.26–3.52), infertility (OR 1.34, 95% CI 1.2–1.48), smoking (OR 2.00, 95% CI 1.76–2.28), periodontal disease (OR 2.41, 95% CI 1.67–3.47), depression (OR 1.84, 95% CI 1.34–2.53), anemia (OR 1.32, 95% CI 1.13–1.55), caffeine/coffee (OR 1.34, 95% CI 1.14–1.57), heavy physical workload (OR 1.87, 95% CI 1.00–3.47), lifting ≥ 11 kg (OR 1.59, 95% CI 1.02–2.48), underweight (OR 1.79, 95% CI 1.20–2.67), alcohol (OR 1.23, 95% CI 1.04–1.46), hypertension (OR 3.90, 95% CI 2.73–5.58), and hypothyroidism (OR 1.40, 95% CI 1.01–1.94). A significant negative association was also reported between antenatal care and low birth weight.
Conclusions This umbrella review identified drug use (such as crack/cocaine), infertility, smoking, periodontal disease, depression, caffeine and anemia as risk factors for low birth weight in pregnant women. These findings suggest that pregnant women can reduce the risk of low birth weight by maintaining good oral health, eating a healthy diet, managing stress and mental health, and avoiding smoking and drug use.
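The random-effects pooling described in the abstract above can be sketched in a few lines. This is an illustrative DerSimonian–Laird implementation with made-up study-level log odds ratios, not the authors' code:

```python
import numpy as np

def random_effects_pool(log_or, se):
    """DerSimonian-Laird random-effects pooling of log odds ratios.
    Returns the pooled OR, its 95% CI, tau^2 and I^2 (%)."""
    log_or, se = np.asarray(log_or, float), np.asarray(se, float)
    w = 1.0 / se**2                        # inverse-variance (fixed) weights
    fixed = np.sum(w * log_or) / np.sum(w)
    q = np.sum(w * (log_or - fixed)**2)    # Cochran's Q
    k = len(log_or)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)     # between-study variance
    w_star = 1.0 / (se**2 + tau2)          # random-effects weights
    mu = np.sum(w_star * log_or) / np.sum(w_star)
    se_mu = np.sqrt(1.0 / np.sum(w_star))
    i2 = (max(0.0, (q - (k - 1)) / q) * 100.0) if q > 0 else 0.0
    ci = np.exp([mu - 1.96 * se_mu, mu + 1.96 * se_mu])
    return float(np.exp(mu)), ci, tau2, i2

# Hypothetical log odds ratios and standard errors from four studies
pooled_or, ci, tau2, i2 = random_effects_pool(
    [0.5, 0.8, 0.3, 0.6], [0.20, 0.25, 0.15, 0.30])
```

The 95% prediction interval reported in such reviews additionally widens the CI by the between-study variance tau², reflecting where the effect of a new study would be expected to fall.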
... This review was informed by the Joanna Briggs Institute (JBI) umbrella review methodology which is designed to combine all types of synthesis of research evidence (Aromataris et al., 2020). An umbrella review builds upon an area of research well covered by existing systematic reviews and has been defined as a review of reviews (Booth, 2016). When describing primary reviews within this paper, we are referring to the reviews included in the umbrella review; the studies included in the primary reviews are primary/empirical studies. ...
Article
Full-text available
The lives of healthy siblings of a child with a long-term condition are often shaped by the family, type of illness, length of illness, age of the child, caregiver demands, and support provided to the family, ill sibling, and healthy sibling. While the experiences of healthy siblings are documented in the literature by parent proxy, literature on healthy siblings' self-reported experiences of living with a sibling who has a long-term condition remains scarce. Purpose This umbrella review aims to synthesize reviews on the self-reported experiences of healthy siblings of children living with a long-term condition. Eligibility criteria Published peer-reviewed reviews in the English language exploring the self-reported experiences of healthy siblings under 24 years old whose siblings are diagnosed with a long-term condition. Sample Using a developed search strategy, seven electronic databases (CINAHLPlus, Scopus, PubMed, PsycINFO, Cochrane Database of Systematic Reviews, Clinical Key, and Google Scholar) were searched from 2018 to December 2023. Eleven reviews met the inclusion criteria and were subjected to narrative synthesis. Results Four themes (adjusting to changes, wanting to help, living the ups and downs, living the changes) and eight subthemes were generated from the syntheses. Conclusion This is the first umbrella review undertaken on healthy siblings' self-reported experiences of living with a sibling who has a long-term condition. The impact of a long-term condition on healthy siblings suggests a need for healthcare providers and organisations to provide better emotional, psychological, and informational support to healthy siblings and their families. Implications Findings from this review will inform healthcare providers, organisations, researchers, and policymakers on the development of future clinical practices and research for healthy siblings.
... We performed a mapping review of the scientific literature to identify economic evaluations at different stages of the product development cycle for medical technologies. In a mapping review, a range of literature within a specified timeframe and a specified topic area is examined [9]. As this is a mapping review, no critical appraisal of included papers was performed [10]. ...
Article
Full-text available
Rationale Economic evaluations play an important role in the development and implementation of healthcare innovations. For pharmaceutical products, the methodologies used are laid down in guidelines, whereas for medical technologies the guidelines are not as stringent. The aim of this review was therefore to analyze what types of methodologies are used in economic evaluations of medical technologies. Methods We performed a mapping review to identify economic evaluations for medical technologies. We decided to limit our search to one year (2022) and included cost-utility and cost-effectiveness analyses in which health technologies were evaluated. For each included study we identified the main methodological characteristics. Results A total of 364 papers were included in the analysis: 268 (74%) contained cost-utility analyses and 91 (25%) cost-effectiveness analyses. A model was used in 236 (64%) analyses, while 117 analyses were trial-based evaluations. Probabilistic sensitivity analyses and/or bootstrapping were performed in 266 (73%) analyses. Deterministic sensitivity analyses were used in 306 (84%). Time horizon and perspective were underreported in 15–25% of the included studies. Conclusions This review shows the wide range of methodologies used in economic evaluations as well as the extent and rigor with which these methodologies are used. Many of the included papers did not use, or did not sufficiently report the use of, appropriate standard methods. This may lead to research waste and a delay in the successful implementation of valuable innovations, and may ultimately delay improvements in patient outcomes.
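Bootstrapping, one of the methods counted in the abstract above, is commonly used in trial-based evaluations to quantify uncertainty around the incremental cost-effectiveness ratio (ICER). A minimal sketch on simulated, entirely hypothetical patient-level data (not from the review):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200  # hypothetical number of patients per arm

# Simulated per-patient costs and QALYs for a new vs. standard technology
cost_new, qaly_new = rng.normal(12000, 3000, n), rng.normal(1.9, 0.4, n)
cost_std, qaly_std = rng.normal(9000, 2500, n), rng.normal(1.7, 0.4, n)

def bootstrap_icer(n_boot=2000):
    """Non-parametric bootstrap of the ICER: resample patients with
    replacement in each arm and recompute incremental cost and QALYs."""
    icers = np.empty(n_boot)
    for b in range(n_boot):
        i = rng.integers(0, n, n)          # resampled indices, new arm
        j = rng.integers(0, n, n)          # resampled indices, standard arm
        d_cost = cost_new[i].mean() - cost_std[j].mean()
        d_qaly = qaly_new[i].mean() - qaly_std[j].mean()
        icers[b] = d_cost / d_qaly
    return np.percentile(icers, [2.5, 50, 97.5])

lo, med, hi = bootstrap_icer()   # cost per QALY gained
```

In real evaluations the bootstrap replicates are usually plotted on the cost-effectiveness plane rather than summarized as a ratio alone, since the ICER is unstable when the QALY difference approaches zero.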
... While systematic, meta, and scoping reviews have been published, and offer in-depth analyses and specificity on the evidence on this topic [16][17][18], there remains a need for a wider-scope appraisal of the OAA literature in terms of study type, design, and intervention quality, as well as outcomes of interest. A mapping review provides a useful structured overview of relevant research by "categorizing, classifying, characterizing patterns, trends or themes in evidence production or publication" [19,20]. Despite increased interest and a growing body of literature on this topic, a mapping review has never been conducted to assess gaps for future work to address. ...
Article
Full-text available
The development and use of oral anticancer agents (OAAs) continue to grow, and supporting individuals on OAAs is now a priority as they find themselves taking these drugs at home with little professional guidance. This mapping review provides an overview of the current evidence concerning OAA-supportive adherence interventions, identifying potential gaps, and making recommendations to guide future work. Four large databases and the grey literature were searched for publications from 2010 to 2022. Quantitative, qualitative, mixed-method, theses/dissertations, reports, and abstracts were included, whereas protocols and reviews were excluded. Duplicates were removed, and the remaining publications were screened by title and abstract. Full-text publications were assessed and those meeting the inclusion criteria were retained. Data extracted included the year of publication, theoretical underpinnings, study design, targeted patients, sample size, intervention type, and primary outcome(s). In total, 3175 publications were screened, with 435 fully read. Of these, 314 were excluded and 120 retained. Of the 120 publications, 39.2% (n = 47) were observational studies, 38.3% (n = 46) were quasi-experimental, and 16.7% (n = 20) were experimental. Only 17.5% (n = 21) were theory-based. Despite the known efficacy of multi-modal interventions, 63.7% (n = 76) contained one or two modalities, 33.3% (n = 40) included three, and 3.3% (n = 4) contained four types of modalities. Medication adherence was measured primarily through self-report (n = 31) or chart review/pharmacy refills (n = 28). Given the importance of patient-tailored interventions, future work should test whether having four intervention modalities (behavioral, educational, medical, and technological) guided by theory can optimize OAA-related outcomes.
Article
Full-text available
Background Timely, appropriate, and equitable access to quality healthcare during pregnancy is proven to contribute to better health outcomes of birthing individuals and infants following birth. Equity is conceptualized as the absence of differences in healthcare access and quality among population groups. Healthcare policies are guides for front-line practices, and despite merits of contemporary policies striving to foster equitable healthcare, inequities persist. The purpose of this umbrella review is to identify prenatal healthcare practices, summarize how equities/inequities are reported in relation to patient experiences or health outcomes when accessing or using services, and collate equity reporting characteristics. Methods For this umbrella review, six electronic databases were searched (Medline, EMBASE, APA PsychInfo, CINAHL, International Bibliography of the Social Sciences, and Cochrane Library). Included studies were extracted for publication and study characteristics, equity reporting, primary outcomes (prenatal care influenced by equity/inequity) and secondary outcomes (infant health influenced by equity/inequity during pregnancy). Data was analyzed deductively using the PROGRESS-Plus equity framework and by summative content analysis for equity reporting characteristics. The included articles were assessed for quality using the Risk of Bias Assessment Tool for Systematic Reviews. Results The search identified 8065 articles and 236 underwent full-text screening. Of the 236, 68 systematic reviews were included with first authors representing 20 different countries. The population focus of included studies ranged across prenatal only (n = 14), perinatal (n = 25), maternal (n = 2), maternal and child (n = 19), and a general population (n = 8). 
Barriers to equity in prenatal care included travel and financial burden, culturally insensitive practices that deterred care engagement and continuity, and discriminatory behaviour that reduced care access and satisfaction. Facilitators to achieve equity included innovations such as community health workers, home visitation programs, conditional cash transfer programs, virtual care, and cross-cultural training, to enhance patient experiences and increase their access to, and use of health services. There was overlap across PROGRESS-Plus factors. Conclusions This umbrella review collated inequities present in prenatal healthcare services, globally. Further, this synthesis contributes to future solution and action-oriented research and practice by assembling evidence-informed opportunities, innovations, and approaches that may foster equitable prenatal health services to all members of diverse communities.
Article
Full-text available
Nature-based health interventions (NBHIs) are utilised to treat a range of physical and mental health conditions, and this rapid review sought to describe the breadth of instrumentation utilised to measure the effectiveness of NBHIs on the different domains of health and wellbeing. A total of 14,385 records were extracted from three databases, and a review of titles and abstracts and then of full text resulted in a final dataset of 167 articles that met the review criteria. NBHI settings were categorised as Garden/Horticulture, Blue Spaces, Urban Green Spaces, Wild Nature, and Camps/Residential. For each of these settings, major population groups included in the studies, health domains and outcomes addressed, as well as assessment tools used to measure NBHIs’ effectiveness were described and analysed in aggregate. A total of 336 measurement tools were utilised across the dataset, with only 29 being specifically designed to assess NBHIs. Most studies investigated mental health domains and measured the effectiveness of NBHIs to improve psychological factors and physical, behavioural, and healthy eating outcomes. Future research should interrogate how nature-based tools and outcome measurements could be used most effectively in NBHI settings.
Article
Background: The beneficial effect of physical activity in various health conditions is recognised, but the consistency and magnitude of its outcomes remain debated. Therefore, we aimed to chart the evidence of the association between physical activity and health outcomes in clinical and non-clinical populations. Methods: We conducted a meta-umbrella review using a semiquantitative and descriptive analysis. We searched PubMed/MEDLINE, PsycINFO, and CINAHL databases from inception to February 28, 2023, for umbrella reviews that evaluated the relationship between physical activity and health outcomes using validated methods to assess evidence levels. Two reviewers independently screened, extracted data from, and assessed the quality of the umbrella reviews. The overlap analysis of component meta-analyses within the umbrella reviews was performed using the Corrected Covered Area (CCA) method. To ensure consistency, pooled effect estimates were converted to equivalent odds ratios (eORs). Results: Sixteen umbrella reviews with a total of 130 statistically significant associations were included. The sole risk-demonstrating association, supported by convincing evidence, was between intensive sports and atrial fibrillation (eOR=1.64, 95%CI=1.10-2.43). The strongest protective associations, supported by convincing and highly suggestive evidence, were between any physical activity and the incidence of Parkinson's disease (eOR=0.66, 95%CI=0.57-0.78), Alzheimer's disease (eOR=0.62, 95%CI=0.52-0.72), cognitive decline (eOR=0.67, 95%CI=0.57-0.78), breast cancer incidence (eOR=0.87, 95%CI=0.84-0.90), endometrial cancer incidence (eOR=0.79, 95%CI=0.74-0.85), and between recreational physical activity and the incidence/mortality of cancer (eOR=0.70, 95%CI=0.60-0.83). The remaining associations demonstrated lower levels of evidence, while 60 (46.2%) of those exhibited multiple levels of evidence, displaying a lack of consistency.
Conclusion: Despite the inconsistent evidence across associations, the contribution of regular physical activity to maintaining both physical and mental health should not be underestimated, particularly when it comes to cognitive and cancer outcomes. The association between intensive sports and a potential risk of atrial fibrillation requires further consideration, though.
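The Corrected Covered Area used for the overlap analysis above has a simple closed form: CCA = (N − r) / (rc − r), where N is the total number of study inclusions across reviews, r the number of distinct primary studies, and c the number of reviews. A small illustrative sketch with a hypothetical inclusion matrix:

```python
def corrected_covered_area(citation_matrix):
    """Corrected Covered Area for primary-study overlap across reviews.
    Rows = distinct primary studies, columns = reviews; entries are 1
    when a review includes that study. CCA = (N - r) / (r*c - r)."""
    r = len(citation_matrix)                      # distinct primary studies
    c = len(citation_matrix[0])                   # reviews
    n = sum(sum(row) for row in citation_matrix)  # total inclusions
    return (n - r) / (r * c - r)

# Hypothetical 4-study x 3-review inclusion matrix
matrix = [
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
    [1, 0, 0],
]
cca = corrected_covered_area(matrix)   # (6 - 4) / (12 - 4) = 0.25
```

Values below 5% are conventionally read as slight overlap and above 15% as very high overlap, which is why umbrella reviews report the CCA alongside their pooled findings.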
Article
Full-text available
In this article, we provide a framework for analyzing and interpreting sources that inform a literature review or, as it is more aptly called, a research synthesis. Specifically, using Leech and Onwuegbuzie's (2007, 2008) frameworks, we delineate how the following four major source types inform research syntheses: talk, observations, drawings/photographs/videos, and documents. We identify 17 qualitative data analysis techniques that are optimal for analyzing one or more of these source types. Further, we outline the role that the following five qualitative data analysis techniques can play in the research synthesis: constant comparison analysis, domain analysis, taxonomic analysis, componential analysis, and theme analysis. We contend that our framework represents a first step in an attempt to help literature reviewers analyze and interpret literature in an optimally rigorous way.
Chapter
Comparative research is exploding with alternative methodological and theoretical approaches. In this book, experts in each one of these methods provide a comprehensive explanation and application of time-series, pooled, event history and Boolean methods to substantive problems of the welfare state. Each section of the book focuses on a different method, with a general introduction to the methods and then two papers using the method to deal with analysis concerning welfare state problems in a political economy perspective. Scholars concerned with methodology in this area cannot afford to overlook this book because it will help them keep up with proliferating methodologies. Graduate students in political science and sociology will find this book extremely useful in their careers.
Article
The author is currently conducting two rapid Realist Syntheses, one to identify the theoretical bases of closed-circuit television (CCTV) to reduce alcohol-related assault in the night-time economy, and the other to identify dimensions of evaluation to improve the effectiveness and efficiency of a number of services in northern Australia which address homelessness and alcohol-harm reduction. The CCTV project grew out of a "completed" Realist Evaluation; the homelessness and alcohol-harm project is the foundation for a future Realist Evaluation. This paper will examine how the Realist Synthesis protocols have been applied both retrospectively and to inform the future Realist Evaluation. Each evaluation aims to understand how specific interventions work, or don't work, using the explanatory structure of generative causation. Key findings are: that precise definitions of the programs' outcomes are crucial to retrospectively applying the Realist Synthesis methodology; that the realist methodology can embed a continuous improvement process in the funding organisation once these outcomes are defined, making research engagement more effective; that the outcomes (and causal mechanisms) lie at different systemic levels, both internal and external to the organisation; and that this last point is something people within funding organisations intuitively grasp, but have difficulty understanding.
Article
This article is about the use of systematic reviews as a research methodology in library and information studies (LIS). A systematic review is an attempt to gather all of the research on a given topic in order to answer a specific question. Systematic reviews have been used extensively in the health care field and have more recently found their way into the social sciences, including librarianship. Examples of the use of systematic reviews in LIS illustrate the benefits and challenges of using this methodology. Included is a brief description of how to conduct a review and a reading list for further information.
Chapter
Slums are densely populated, neglected parts of cities where housing and living conditions are exceptionally poor. In situ slum upgrading, at its basic level, involves improving the physical environment of the existing area, such as improving and installing basic infrastructure like water, sanitation, solid waste collection, electricity, storm water drainage, access roads and footpaths, and street lighting, as well as home improvements and securing land tenure. The objective was to explore the effects of slum upgrading strategies involving physical environment and infrastructure interventions on the health, quality of life and socio-economic wellbeing of urban slum dwellers in low- and middle-income countries (LMICs). Where reported, data were collected on the perspectives of slum dwellers regarding their needs, preferences for and satisfaction with interventions received. We searched for published and unpublished studies in 28 bibliographic databases including multidisciplinary (for example Scopus) and specialist databases covering health, social science, urban planning, environment and LMIC topics. Snowballing techniques included searching websites, journal handsearching, contacting authors and reference list checking.