REVIEW Open Access

Research impact: a narrative review

Trisha Greenhalgh1*, James Raftery2, Steve Hanney3 and Matthew Glover3
Abstract
Impact occurs when research generates benefits (health, economic, cultural) in addition to building the academic
knowledge base. Its mechanisms are complex and reflect the multiple ways in which knowledge is generated and
utilised. Much progress has been made in measuring both the outcomes of research and the processes and
activities through which these are achieved, though the measurement of impact is not without its critics. We
review the strengths and limitations of six established approaches (Payback, Research Impact Framework, Canadian
Academy of Health Sciences, monetisation, societal impact assessment, UK Research Excellence Framework) plus
recently developed and largely untested ones (including metrics and electronic databases). We conclude that (1)
different approaches to impact assessment are appropriate in different circumstances; (2) the most robust and
sophisticated approaches are labour-intensive and not always feasible or affordable; (3) whilst most metrics tend
to capture direct and proximate impacts, more indirect and diffuse elements of the research-impact link can and
should be measured; and (4) research on research impact is a rapidly developing field with new methodologies
on the horizon.
Keywords: Research impact, Knowledge translation, Implementation science, Research utilization, Payback Framework,
Monetisation, Research accountability, Health gains
Background
This paper addresses the question: "What is research impact and how might we measure it?" It has two main aims: first, to introduce the general reader to a new and somewhat specialised literature on the science of research impact assessment and, second, to contribute to the development of theory and the taxonomy of method in this complex and rapidly growing field of inquiry. Summarising evidence from previous systematic and narrative reviews [1–7], including new reviews from our own team [1, 5], we consider definitions of impact and
its conceptual and philosophical basis before reviewing
the strengths and limitations of different approaches to
its assessment. We conclude by suggesting where future
research on research impact might be directed.
Research impact has many definitions (Box 1). Its
measurement is important considering that researchers
are increasingly expected to be accountable and produce
value for money, especially when their work is funded
from the public purse [8]. Further, funders seek to
demonstrate the benefits from their research spending
[9] and there is pressure to reduce waste in research
[10]. By highlighting how (and how effectively) resources
are being used, impact assessment can inform strategic
planning by both funding bodies and research institutions [1, 11].
We draw in particular on a recent meta-synthesis of studies of research impact funded by the UK Health Technology Assessment Programme (HTA review) covering literature mainly published between 2005 and 2014 [1]. The HTA review was based on a systematic search of eight databases (including grey literature) plus hand searching and reference checking, and identified over 20 different impact models and frameworks and 110 studies describing their empirical applications (as single or multiple case studies), although only a handful had proven robust and flexible across a range of examples. The material presented in this summary paper, based on much more extensive work, is inevitably somewhat eclectic.
Four of the six approaches we selected as established were the ones most widely used in the 110 published empirical studies. Additionally, we included the Societal Impact Assessment, despite it being less widely used, because it has recently been the subject of a major EU-funded workstream (across a range of fields), and the UK Research Excellence Framework (REF; on which empirical work post-dated our review) because of the size and uniqueness of the dataset and its significant international interest. The approaches we selected as showing promise for the future were chosen more subjectively, on the grounds that there is currently considerable academic and/or policy interest in them.
* Correspondence: trish.greenhalgh@phc.ox.ac.uk
1 Nuffield Department of Primary Care Health Sciences, University of Oxford, Radcliffe Primary Care Building, Woodstock Rd, Oxford OX2 6GG, UK
Full list of author information is available at the end of the article

© 2016 Greenhalgh et al. Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Greenhalgh et al. BMC Medicine (2016) 14:78
DOI 10.1186/s12916-016-0620-8
Different approaches to assessing research impact
make different assumptions about the nature of research
knowledge, the purpose of research, the definition of
research quality, the role of values in research and its
implementation, the mechanisms by which impact is achieved, and the implications for how impact is measured (Table 1). Short-term proximate impacts are easier to attribute, whereas benefits from complementary assets (such as the development of research infrastructure, political support or key partnerships [8]) may accumulate over the longer term and are more difficult, sometimes impossible, to capture fully.
Knowledge is intertwined with politics and persuasion. If stakeholders agree on what the problem is and what a solution would look like, the research-impact link will tend to turn on the strength of research evidence in favour of each potential decision option, as depicted in column 2 of Table 1 [12]. However, in many fields (for example, public policymaking, social sciences, applied public health and the study of how knowledge is distributed and negotiated in multi-stakeholder collaborations) the links between research and impact are complex, indirect and hard to attribute (for an example, see Kogan and Henkel's rich ethnographic study of the Rothschild experiment in the 1970s, which sought and failed to rationalise the links between research and policy [13]). In policymaking, research evidence is rather more often used conceptually (for general enlightenment) or symbolically (to justify a chosen course of action) than instrumentally (feeding directly into a particular policy decision) [12, 14], as shown empirically by Amara et al.'s large quantitative survey of how US government agencies drew on university research [15]. Social science research is more likely to illuminate the complexity of a phenomenon than produce a simple, 'implementable' solution that can be driven into practice by incorporation into a guideline or protocol [16, 17], as was shown by Dopson and Fitzgerald's detailed ethnographic case studies of the implementation of evidence-based healthcare in healthcare organisations [18]. In such situations, the research-impact relationship may be productively explored using approaches that emphasise the fluidity of knowledge and the multiple ways in which it may be generated, assigned more or less credibility and value, and utilised (columns 3 to 6 in Table 1) [12, 19].
Box 1: Definitions of research impact

Impact is the effect research has beyond academia and consists of "benefits to one or more areas of the economy, society, culture, public policy and services, health, production, environment, international development or quality of life, whether locally, regionally, nationally or internationally" (paragraph 62) and as manifested in a wide variety of ways "including, but not limited to: the many types of beneficiary (individuals, organisations, communities, regions and other entities); impacts on products, processes, behaviours, policies, practices; and avoidance of harm or the waste of resources." (paragraph 63)
UK 2014 Research Excellence Framework [65]

"Health impacts" can be defined as changes in the healthy functioning of individuals (physical, psychological, and social aspects of their health), changes to health services, or changes to the broader determinants of health. "Social impacts" are changes that are broader than simply those to health noted above, and include changes to working systems, ethical understanding of health interventions, or population interactions. "Economic impacts" can be regarded as the benefits from commercialization, the net monetary value of improved health, and the benefits from performing health research.
Canadian Academy of Health Sciences [33] (p. 51)

Academic impact is "The demonstrable contribution that excellent research makes to academic advances, across and within disciplines, including significant advances in understanding, methods, theory and application." Economic and societal impact is "fostering global economic performance, and specifically the economic competitiveness of the UK, increasing the effectiveness of public services and policy, [and] enhancing quality of life, health and creative output."
Research Councils UK Pathways to Impact (http://www.rcuk.ac.uk/innovation/impacts/)

A research impact is "a recorded or otherwise auditable occasion of influence from academic research on another actor or organization. […] It is not the same thing as a change in outputs or activities as a result of that influence, still less a change in social outcomes. Changes in organizational outputs and social outcomes are always attributable to multiple forces and influences. Consequently, verified causal links from one author or piece of work to output changes or to social outcomes cannot realistically be made or measured in the current state of knowledge. […] However, secondary impacts from research can sometimes be traced at a much more aggregate level, and some macro-evaluations of the economic net benefits of university research are feasible. Improving our knowledge of primary impacts as occasions of influence is the best route to expanding what can be achieved here."
London School of Economics Impact Handbook for Social Scientists [66]
Table 1 Philosophical assumptions underpinning approaches to research impact

Perspectives (columns 2 to 6 as cited in the text): Positivist (column 2), Constructivist (column 3), Realist (column 4), Critical (column 5), Performative (column 6)

Assumptions about what [research] knowledge is:
- Positivist: Facts (especially statements on relationships between variables), independent of researchers and transferable to new contexts
- Constructivist: Explanations/interpretations of a situation or phenomenon, considering the historical, cultural and social context
- Realist: Studies of how people interpret external reality, producing statements on what works for whom in what circumstances
- Critical: Studies that reveal society's inherent conflicts and injustices and give people the tools to challenge their oppression
- Performative: Knowledge is brought into being and enacted in practice by actor-networks of people and technologies

Assumed purpose of research:
- Positivist: Predictive generalisations ('laws')
- Constructivist: Meaning: perhaps in a single, unique case
- Realist: Theoretical generalisation (what tends to work and why)
- Critical: Learning, emancipation, challenge
- Performative: To map the changing dynamics of actor-networks

Preferred research methods:
- Positivist: Hypothesis-testing; experiments; modelling and measurement
- Constructivist: Naturalistic inquiry (i.e. in real-world conditions)
- Realist: Predominantly naturalistic; may combine quantitative and qualitative data
- Critical: Participatory [action] research
- Performative: Naturalistic, with a focus on change over time and network [in]stability

Assumed way to achieve quality in research:
- Positivist: Hierarchy of preferred study designs; standardised instruments to help eliminate bias
- Constructivist: Reflexive theorising; consideration of multiple interpretations; dialogue and debate
- Realist: Abduction (what kind of reasoning by human actors could explain these findings in this context?)
- Critical: Measures to address power imbalances (ethos of democracy, conflict management); research capacity building in community partner(s)
- Performative: Richness of description; plausible account of the network and how it changes over time

Assumed relationship between science and values:
- Positivist: Science is inherently value-neutral (though research can be used for benign or malevolent motives)
- Constructivist: Science can never be value-neutral; the researcher's perspective must be made explicit
- Realist: Facts are interpreted and used by people who bring particular values and views
- Critical: Science must be understood in terms of what gave rise to it and the interests it serves
- Performative: Controversial; arguably, Actor-Network Theory is consistent with a value-laden view of science

Assumed mechanism through which impact is achieved:
- Positivist: Direct (new knowledge will influence practice and policy if the principles and methods of implementation science are followed)
- Constructivist: Mainly indirect (e.g. via interaction/enlightenment of policymakers and influencing the 'mindlines' of clinicians)
- Realist: Interaction between reasoning (of policymakers, practitioners, etc.) and resources available for implementing findings
- Critical: Development of critical consciousness; partnership-building; lobbying; advocacy
- Performative: 'Translations' (stable changes in the actor-network), achieved by actors who mobilise other actors into new configurations

Implications for the study of research impact:
- Positivist: 'Logic models' will track how research findings (transferable facts about what works) are disseminated, taken up and used for societal benefit
- Constructivist: Outcomes of social interventions are unpredictable; impact studies should focus on 'activities and interactions' to build relations with policymakers
- Realist: Impact studies should address variability in uptake and use of research by exploring context-mechanism-outcome-impact configurations
- Critical: Impact has a political dimension; research may challenge the status quo; some stakeholders stand to lose power, whereas others may gain
- Performative: For research to have impact, a re-alignment of actors (human/technological) is needed; focus on the changing 'actor-scenario' and how this gets stabilised in the network
Many approaches to assessing research impact combine a logic model (to depict input-activities-output-impact links) with a 'case study' description to capture the often complex processes and interactions through which knowledge is produced (perhaps collaboratively and/or with end-user input to study design), interpreted and shared (for example, through engagement activities, audience targeting and the use of champions, boundary spanners and knowledge brokers [20–24]). A nuanced narrative may be essential to depict the non-linear links between upstream research and distal outcomes and/or help explain why research findings were not taken up and implemented despite investment in knowledge translation efforts [4, 6].
Below, we describe six approaches that have proved
robust and useful for measuring research impact and
some additional ones introduced more recently. Table 2
lists examples of applications of the main approaches
reviewed in this paper.
Established approaches to measuring research impact
The Payback Framework
Developed by Buxton and Hanney in 1996 [25], the Payback Framework (Fig. 1) remains the most widely used approach. It was used by 27 of the 110 empirical application studies in the recent HTA review [1]. Despite its name, it does not measure impact in monetary terms. It consists of two elements: a logic model of the seven stages of research from conceptualisation to impact, and five categories to classify the 'paybacks': knowledge (e.g. academic publications), benefits to future research (e.g. training new researchers), benefits to policy (e.g. information base for clinical policies), benefits to health and the health system (including cost savings and greater equity), and broader economic benefits (e.g. commercial spin-outs). Two interfaces for interaction between researchers and potential users of research ('project specification, selection and commissioning' and 'dissemination') and various feedback loops connecting the stages are seen as crucial.
The elements and categories in the Payback Framework were designed to capture the diverse ways in which impact may arise, notably the bidirectional interactions between researchers and users at all stages in the research process from agenda setting to dissemination and implementation. The Payback Framework encourages an assessment of the knowledge base at the time a piece of research is commissioned, data that might help with issues of attribution (did research A cause impact B?) and/or reveal a counterfactual (what other work was occurring in the relevant field at the time?).
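To make the framework's structure concrete, here is a minimal Python sketch of how a funder might record paybacks against the five categories named above and tally them across case studies. The record format, field names and helper functions are ours for illustration; they are not part of the published framework.

```python
from dataclasses import dataclass, field

# The five payback categories, with the examples given in the text.
PAYBACK_CATEGORIES = [
    "knowledge",                                 # e.g. academic publications
    "benefits to future research",               # e.g. training new researchers
    "benefits to policy",                        # e.g. information base for clinical policies
    "benefits to health and the health system",  # e.g. cost savings, greater equity
    "broader economic benefits",                 # e.g. commercial spin-outs
]

@dataclass
class CaseStudy:
    """One project-level impact case study (hypothetical record format)."""
    project: str
    paybacks: dict = field(default_factory=dict)  # category -> list of claimed items

    def add(self, category: str, item: str) -> None:
        if category not in PAYBACK_CATEGORIES:
            raise ValueError(f"unknown payback category: {category}")
        self.paybacks.setdefault(category, []).append(item)

def cross_case_counts(cases: list) -> dict:
    """Count how many case studies claim at least one payback per category,
    the kind of cross-case tally used in the studies listed in Table 2."""
    return {cat: sum(1 for c in cases if c.paybacks.get(cat))
            for cat in PAYBACK_CATEGORIES}

# Example: a single case study with two claimed paybacks.
cs = CaseStudy("asthma-self-management-trial")
cs.add("knowledge", "peer-reviewed publication")
cs.add("benefits to policy", "cited in clinical guideline")
print(cross_case_counts([cs]))
```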
Applying the Payback Framework through case studies is labour intensive: researcher interviews are combined with document analysis and verification of claimed impacts to prepare a detailed case study containing both qualitative and quantitative information. Not all research groups or funders will be sufficiently well resourced to produce this level of detail for every project, nor is it always necessary to do so. Some authors have adapted the Payback Framework methodology to reduce the workload of impact assessment (for example, a recent European Commission evaluation populated the categories mainly by analysis of published documents [26]); nevertheless, it is not known how or to what extent such changes would compromise the data. Impacts may be short or long term [27], so (as with any approach) the time window covered by data collection will be critical.
Another potential limitation of the Payback Framework is that it is generally project-focused (commencing with a particular funded study) and is therefore less able to explore the impact of the sum total of activities of a research group that attracted funding from a number of sources. As Meagher et al. concluded in their study of ESRC-funded responsive mode psychology projects, "In most cases it was extremely difficult to attribute with certainty a particular impact to a particular project's research findings. It was often more feasible to attach an impact to a particular researcher's full body of research, as it seemed to be the depth and credibility of an ongoing body of research that registered with users" [28] (p. 170).
Similarly, the impact of programmes of research may
be greater than the sum of their parts due to economic
and intellectual synergies, and therefore project-focused
impact models may systematically underestimate impact.
Application of the Payback Framework may include supplementary approaches such as targeted stakeholder interviews to fully capture the synergies of programme-level funding [29, 30].
Table 2 Examples of applications of research impact assessment frameworks

Payback Framework

Kwan et al., 2007 [67] (Hong Kong)
Approach: Surveyed 205 projects funded by the Health and Health Services Research fund; used main Payback categories and framework processes.
Main findings: Between a third and a half of principal investigators claimed impact on policy, practice and health service benefit; liaison with potential users and participation in policy committees was significantly associated with achieving wider impacts.
Comment: Multivariate analysis of data enabled identification of factors associated with impact; however, the study relied solely on self-reported data from researchers.

Hanney et al., 2007 [7] (UK)
Approach: 16 case studies randomly selected from a wider survey of all projects funded by the NHS Health Technology Assessment (HTA) programme 1993–2003; survey data supplemented by documentary and bibliometric analysis and researcher interviews.
Main findings: Survey showed considerable impact in knowledge production (publications), changes in policy (73 % of projects) and behaviour (42 %); case studies showed diversity in levels and forms of impacts and ways in which they arose; studies commissioned for policy customers showed highest policy impact.
Comment: All case studies were written up around stages of Payback, which facilitated cross-case analysis; affirmed the value of agenda setting to meet needs of the healthcare system.

Scott et al., 2011 [68] (USA) (methods) and Madrillon Group, 2011 [69] (findings)
Approach: Assessed impact of the National Institutes of Health's (NIH) Mind-Body Interactions and Health programme; for centres and projects: documentary review, bibliometric and database analysis, interviews; impact of centres scored using Payback scales.
Main findings: Findings covered the programme as a whole, centres, and research projects; the study demonstrated that centres and projects had produced clear and positive impacts across all five Payback categories; for projects, 34 % claimed impact on policies, 48 % led to improved health.
Comment: Payback was adaptable to meet needs of the specific evaluation, covering different levels; assessment occurred too early to capture many of the 'latent' outcomes.

Hanney et al., 2013 [70] (UK)
Approach: Assessed impact of Asthma UK's portfolio of funding including projects, fellowships, professorial chairs and a new collaborative centre; surveys to 163 researchers, interviews, documentary analysis, 14 purposively selected case studies.
Main findings: Findings highlighted academic publications and considerable leverage of follow-on funding; each of the wider impacts (informing guidelines, product development, improved health) was achieved by only a small number of projects or fellowships, but with some significant examples, especially from chairs.
Comment: The charity used the findings to inform their research strategy, notably in relation to centres; many impacts were felt to be at an early stage.

Donovan et al., 2014 [71] (Australia)
Approach: Assessed impact of research funded by the National Breast Cancer Foundation; survey of 242 researchers, document analysis plus 16 purposively selected case studies; considered basic and applied research and infrastructure; cross-case analysis.
Main findings: Impacts included academic publications, research training, research capacity building, leveraged additional funding, changed policy (10 %, though 29 % expected to do so), new product development (11 %), changed clinical practice (14 %).
Comment: The charity considered that the findings would help to inform their research strategy; many projects were recently completed, hence the emphasis on expected impacts.

Wooding et al., 2014 [72] (Australia, Canada, UK)
Approach: 29 case studies randomly selected from cardiovascular/stroke research funders, scored using Payback categories; compared impact scores with features of research processes.
Main findings: Wide range of impacts; some projects scored very high, others very low; basic research had higher academic impacts, clinical had more impact beyond academia; engagement with practitioners/patients was linked to academic and wider impacts.
Comment: Payback enabled collection of data about a wide range of impacts plus processes/features of each project; this facilitated innovative analysis of factors associated with impact.

Research Impact Framework

Kuruvilla et al., 2007 [32] (UK)
Approach: Pilot study, 11 projects; used semi-structured interviews and document analysis, leading to a one-page 'researcher narrative' that was sent to the researcher for validation.
Main findings: Interviews with researchers allowed them to articulate and make sense of multiple impact channels and activities; the structured researcher narratives, which were objectively verifiable, facilitated comparison across projects.
Comment: Applied a wider range of impact categories than the Payback Framework; the approach was adaptable and acceptable to researchers; however, it was only a small pilot conducted in the researchers' group.

Canadian Academy of Health Sciences (CAHS) Framework

Montague and Valentim, 2010 [73] (Canada)
Approach: Applied the CAHS Framework to assess the impact of a large randomised trial of a new treatment for breast cancer; divided the impacts into proximate (e.g. changes in awareness) and more long-term (including changes in breast cancer mortality).
Main findings: Numerous impacts were documented at different levels of the CAHS Framework; findings suggested a direct link between publication of the trial, change in clinical practice and subsequent reduction in morbidity and mortality.
Comment: Published as an early worked example of how CAHS can inform the systematic documentation of impacts.

Adam et al., 2012 [74] (Catalonia)
Approach: Applied the CAHS Framework to assess the impact of clinical and health services research funded by the main Catalan agency; included bibliometric analysis, surveys to 99 researchers with 70 responses, interviews with researchers and decision-makers, an in-depth case study of translation pathways, as well as a focus on intended impacts.
Main findings: In the CAHS category of informing decision-making (by policymakers, managers, professionals, patients, etc.), 40 out of 70 claimed decision-making changes were induced by research results: 29 said changed clinical practice, 16 said organisational/policy changes; interaction in projects with healthcare and policy decision-makers was crucial.
Comment: The study provided both knowledge to inform the funding agency's subsequent actions and a basis on which to advocate for targeted research to fill knowledge gaps; the team noted limitations in relation to attribution, time lags and the counterfactual.

Graham et al., 2012 [75] (Canada)
Approach: Adapted and applied CAHS to assess the impact of research funded by a not-for-profit research and innovation organization in Alberta, Canada.
Main findings: After a formal adaptation phase, CAHS proved flexible and robust both retrospectively (to map pre-existing data) and prospectively (to track new programmes); some new categories were added.
Comment: Had a particular focus on developing data capture approaches for the many indicators identified; also a focus on how the research funding organisation could measure its own contribution to achieving health system impacts.

Cohen et al., 2015 [76] (Australia)
Approach: Adapted categories from Payback and CAHS; mixed-method sequential methodology; surveys and interviews of lead researchers (final sample of 50); data from surveys, interviews and documents collated into case studies which were scored by an expert panel using criteria from the UK Research Excellence Framework (REF).
Main findings: 19 of 50 cases had policy and practice impacts, with an even distribution of high, medium and low impact scores across the (REF-based) criteria of corroboration, attribution, reach and importance; showed that real-world impacts can occur from single intervention studies.
Comment: Innovative approach blending existing frameworks; limitations included not always being able to obtain documentary evidence to corroborate researcher accounts.

Monetisation Models

Johnston et al., 2006 [34] (USA)
Approach: Collated data on 28 Phase III clinical trials funded by the National Institute of Neurological Disorders and Stroke up to 2000; compared monetised health gains achieved by use of new healthcare interventions (measured in QALYs and valued at GDP per head) to investment in research, using cost-utility analyses and actual usage.
Main findings: $335 m research investment generated 470,000 QALYs 10 years post-funding; return on investment was 46 % per year.
Comment: Used a bottom-up approach to quantify health gains through individual healthcare interventions; assumed that all changes in usage were prompted by NIH Phase III trials; no explicit time-lag; highlights data difficulties in the bottom-up approach, as required data were only available for eight trials.

Access Economics, 2008 [39] (Australia)
Approach: Quantified returns from all Australian health R&D funding between 1992/3 and 2004/5; monetised health gains estimated as predicted DALYs averted in 2033–45 compared to 1993 (valued at willingness to pay for a statistical life-year).
Main findings: Return on investment of 110 % from private and public R&D; assumed that 50 % of health gains are attributable to R&D, of which 3.04 % is Australian R&D.
Comment: Top-down approach; high uncertainty and sensitivity of results to the 50 % assumption; forecasted future health gains.

Buxton et al., 2008 [38] (UK)
Approach: Estimated returns from UK public and charitably funded cardiovascular research 1975–1988; data from cost-utility studies and individual intervention usage; health gains expressed as monetised QALYs (valued at healthcare service opportunity cost) net of costs of delivery for the years 1986–2005.
Main findings: Internal rate of return of 9 % a year, plus a component added for non-health economic 'spill-over' effects of 30 %; assumed a 17-year lag between investment and health gains (based on guideline analysis of knowledge cycle time), and 17 % of health gains attributable to UK research.
Comment: Bottom-up approach; judgement on which interventions to include was required; explicit investigation of time-lag.

Deloitte Access Economics, 2011 [35] (Australia)
Approach: Applied the same methods as Access Economics (2008); quantified returns from National Health and Medical Research Council funding 2000–2010, focusing on five burdensome disease areas; monetised health gains estimated as predicted DALYs averted in 2040–50 compared to 2000, valued at willingness to pay for a statistical life-year.
Main findings: Return on investment ranged from 509 % in cardiovascular disease to 30 % for muscular dystrophy research; assumed that 50 % of health gains are attributable to R&D, of which 3.14 % was Australian R&D and 35 % of that is NHMRC; assumed a time lag of 40 years between investment and benefit.
Comment: Top-down approach; added a layer to the attribution problem (because it was a programme rather than the totality of research funding).

Societal Impact Assessment and Related Approaches

Spaapen et al., 2007 [46] (Netherlands)
Approach: Mainly a methodological report on the Sci-Quest Framework with brief case examples, including one in pharmaceutical sciences; proposed mixed-method case studies using qualitative methods, a quantitative instrument called contextual response analysis and quantitative assessment of financial interactions (grants, spin-outs, etc.); produced a bespoke Research Embedment and Performance Profile (REPP) for each project.
Main findings: Productive interactions (direct, indirect, financial) must happen for impact to occur; there are three social domains: science/certified knowledge, industry/market and policy/societal; the REPP in the pharmaceutical sciences example developed 15 benchmarks (five for each domain), scored on a 5-point scale.
Comment: Illustrates the 'performative' approach to impact (column 6 in Table 1); the ERiC (Evaluating Research in Context) programme focuses assessment on the context and is designed to overcome what were seen as the linear and deterministic assumptions of logic models, but is complex to apply.

Molas-Gallart and Tang, 2011 [77] (UK)
Approach: Applied the SIAMPI Framework to assess how social science research in a Welsh university supports local businesses; case study approach using two structured questionnaires, one for researchers and one for stakeholders.
Main findings: Authors found few, if any, examples of linear research-impact links but a mesh of formal and informal collaborations in which academics are providing support for the development of specific business models in emerging areas, many of which have not yet yielded identifiable impacts.
Comment: Good example from outside the medical field of how the SIAMPI Framework can map the processes of interaction between researchers and stakeholders.

UK Research Excellence Framework (secondary analyses of the REF impact case study database)

Hinrichs and Grant, 2015 [78] (UK)
Approach: Preliminary analysis of all 6679 non-redacted impact case studies in REF 2014, based mainly but not exclusively on automated text mining.
Main findings: Text mining identified 60 different kinds of impact and 3709 pathways to impact through which these had (according to the authors) been achieved; researchers' efforts to monetise health gains (e.g. as QALYs) appeared crude and speculative, though in some cases the evaluation team were able (with additional effort) to produce monetised estimates of return on investment.
Comment: Authors commented: "the information presented in the [REF impact] case studies was neither consistent nor standardised. There is potential to improve data collection and reporting processes for future exercises".

Greenhalgh and Fahy, 2015 [79] (UK)
Approach: Manual content analysis of all 162 impact case studies submitted to a single sub-panel of the REF, with detailed interpretive analysis of four examples of good practice.
Main findings: The REF impact case study format appeared broadly fit for purpose, but most case studies described 'surrogate' and readily verifiable impacts, e.g. changing a guideline; models of good practice were characterised by proactive links with research users.
Comment: Sample was drawn from a single sub-panel (public health/health services research), so findings may not be generalizable to other branches of medicine.

Realist Evaluation

Rycroft-Malone et al., 2015 [56] (UK)
Approach: In the national evaluation of first-wave Collaborations for Leadership in Applied Health Research and Care (CLAHRCs), qualitative methods (chiefly, a series of stakeholder interviews undertaken as the studies unfolded) were used to tease out actors' theories of change and explore how context shaped and constrained their efforts to both generate and apply research knowledge.
Main findings: Impact in the applied setting of CLAHRCs requires commitment to the principle of collaborative knowledge production, facilitative leadership and acknowledgement by all parties that knowledge comes in different forms; impacts are contingent and appear to depend heavily on how different partners view the co-production task.
Comment: Illustrates the realist model of research impact (column 4 in Table 1); the new framework developed for this high-profile national evaluation (Fig. 3) has yet to be applied in a new context.

Participatory Research Impact Model

Cacari-Stone et al., 2014 [60] (USA)
Approach: In-depth case study of policy-oriented participatory action research in a deprived US industrial town to reduce environmental pollution; mixed methods including individual interviews, focus groups, policymaker phone interviews, archival media and document review, and participant observation.
Main findings: Policy change occurred and was attributed to strong, trusting pre-existing community-campus relationships; dedicated funding for the participatory activity; respect for 'street science' as well as academic research; creative and effective use of these data in civic engagement activities; diverse and effective networking with inter-sectoral partners including advocacy organisations.
Comment: Illustrates the 'critical' model of research impact (column 5 in Table 1).
Research Impact Framework
The Research Impact Framework was the second most widely used approach in the HTA review of impact assessment, accounting for seven out of 110 applications [1], but in these studies it was mostly used in combination with other frameworks (especially Payback) rather than as a stand-alone approach. It was originally developed by and for academics who were interested in measuring and monitoring the impact of their own research. As such, it is a 'light touch' checklist intended for use by individual researchers who seek to "identify and select impacts from their work without requiring specialist skill in the field of research impact assessment" [31] (p. 136). The checklist, designed to prompt reflection and discussion, includes research-related impacts, policy and practice impacts, service (including health) impacts, and an additional 'societal impact' category with seven sub-categories. In a pilot study, its authors found that participating researchers engaged readily with the Research Impact Framework and were able to use it to identify and reflect on different kinds of impact from their research [31, 32]. Because of its (intentional) trade-off between comprehensiveness and practicality, it generally produces a less thorough assessment than the Payback Framework and was not designed to be used in formal impact assessment studies by third parties.
Canadian Academy of Health Sciences (CAHS) Framework
The most widely used adaptation of the Payback Framework is the CAHS Framework (Fig. 2), which informed six of the 110 application studies in the HTA review [33]. Its architects claim to have shaped the Payback Framework into a 'systems approach' that takes greater account of the various non-linear influences at play in contemporary health research systems. CAHS was constructed collaboratively by a panel of international experts (academics, policymakers, university heads), endorsed by 28 stakeholder bodies across Canada (including research funders, policymakers, professional organisations and government) and refined through public consultation [33]. The authors emphasise that the consensus-building process that generated the model was as important as the model itself.
CAHS encourages a careful assessment of context and the subsequent consideration of impacts under five categories: advancing knowledge (measures of research quality, activity, outreach and structure), capacity-building (developing researchers and research infrastructure), informing decision-making (decisions about health and healthcare, including public health and social care, decisions about future research investment, and decisions by public and citizens), health impacts (including health status, determinants of health, including individual risk factors and environmental and social determinants, and health system changes), and economic and social benefits (including commercialization, cultural outcomes, socioeconomic implications and public understanding of science).

For each category, a menu of metrics and measures (66 in total) is offered, and users are encouraged to draw on these flexibly to suit their circumstances. By choosing appropriate sets of indicators, CAHS can be used to track impacts within any of the four pillars of health research (basic biomedical, applied clinical, health services and systems, and population health), or within domains that cut across these pillars, and at various levels (individual, institutional, regional, national or international).
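Because CAHS works by drawing selectively on a menu of indicators, the selection step itself is easy to express in code. The sketch below assumes a simplified indicator record; the fields and the two example entries are ours for illustration and are not drawn from the framework's actual menu of 66 measures.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Indicator:
    """A simplified stand-in for one entry in the CAHS menu of metrics."""
    name: str
    category: str        # one of the five CAHS impact categories
    pillars: frozenset   # pillars of health research the indicator suits
    levels: frozenset    # levels at which it can be tracked

MENU = [
    Indicator("citations in clinical guidelines", "informing decision-making",
              frozenset({"applied clinical", "health services and systems"}),
              frozenset({"national", "international"})),
    Indicator("new researchers trained", "capacity-building",
              frozenset({"basic biomedical", "applied clinical",
                         "health services and systems", "population health"}),
              frozenset({"individual", "institutional"})),
]

def select_indicators(menu, pillar: str, level: str):
    """Pick the subset of the menu suited to a given pillar and level."""
    return [i for i in menu if pillar in i.pillars and level in i.levels]

print([i.name for i in select_indicators(MENU, "applied clinical", "national")])
```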
Fig. 1 The Payback Framework developed by Buxton and Hanney (reproduced under Creative Commons Licence from Hanney et al. [70])
Despite their differences, Payback and CAHS have much in common, especially in how they define impact and their proposed categories for assessing it. Whilst CAHS appears broader in scope and emphasises 'complex system' elements, both frameworks are designed as a pragmatic and flexible adaptation of the research-into-practice logic model. One key difference is that the CAHS category 'informing decision-making' incorporates both policy-level decisions and the behaviour of individual clinicians, whereas Payback collects data separately on individual clinical decisions on the grounds that, if they are measurable, decisions by clinicians to change behaviour feed indirectly into the 'improved health' category.
As with Payback (but perhaps even more so, since CAHS is in many ways more comprehensive), the application of CAHS is a complex and specialist task that is likely to be highly labour-intensive and hence prohibitively expensive in some circumstances.
Monetisation models
A significant innovation in recent years has been the development of logic models to monetise (that is, express in terms of currency) both the health and the non-health returns from research. Of the 110 empirical applications of impact assessment approaches in our HTA review, six used monetisation. Such models tend to operate at a much higher level of aggregation than Payback or CAHS, typically seeking to track all the outputs of a research council [34, 35], national research into a broad disease area (e.g. cardiovascular disease, cancer) [36–38], or even an entire national medical research budget [39].

Monetisation models express returns in various ways, including as cost savings, the money value of net health gains via cost per quality-adjusted life year (QALY) using the willingness-to-pay or opportunity cost established by NICE or similar bodies [40], and internal rates of return (return on investment as an annual percentage yield). These models draw largely from the economic evaluation literature and differ principally in terms of which costs and benefits (health and non-health) they include and in the valuation of seemingly non-monetary components of the estimation. A national research call, for example, may fund several programmes of work in different universities and industry partnerships, subsequently producing net health gains (monetised as the value of QALYs or disability-adjusted life-years), cost savings to the health service (and to patients), commercialisation (patents, spin-outs, intellectual property), leveraging of research funds from other sources, and so on.
Fig. 2 Simplified Canadian Academy of Health Sciences (CAHS) Framework (reproduced with permission of Canadian Academy of Health Sciences [33])
A major challenge in monetisation studies is that, in order to produce a quantitative measure of economic impact or rate of return, a number of simplifying assumptions must be made, especially in relation to the appropriate time lag between research and impact and what proportion of a particular benefit should be attributed to the funded research programme as opposed to all the other factors involved (e.g. social trends, emergence of new interventions, other research programmes occurring in parallel). Methods are being developed to address some of these issues [27]; however, whilst the estimates produced in monetised models are quantitative, those figures depend on subjective, qualitative judgements.
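The arithmetic behind such estimates can be made explicit. The sketch below is a minimal illustration, not a reproduction of any published model: every figure, including the value per QALY, the 17-year lag and the 17 % attribution share (both loosely echoing the Buxton et al. study in Table 2), is exactly the kind of subjective assumption the text describes.

```python
# Hypothetical figures throughout; each constant is an assumption, not data.
VALUE_PER_QALY = 25_000   # £, assumed willingness-to-pay threshold per QALY
ATTRIBUTION = 0.17        # assumed share of health gain attributable to the funder
LAG_YEARS = 17            # assumed lag between investment and health gains

def monetised_health_gain(qalys_gained: float, net_delivery_cost: float) -> float:
    """Net monetary benefit of attributable health gains (one year's flow)."""
    return ATTRIBUTION * (qalys_gained * VALUE_PER_QALY - net_delivery_cost)

def internal_rate_of_return(investment: float, annual_benefit: float,
                            benefit_years: int) -> float:
    """Rate r at which the lagged benefit stream exactly repays the investment.

    Solves 0 = -investment + sum(annual_benefit / (1+r)**t) for t over the
    benefit window, by bisection (net present value is decreasing in r).
    """
    def npv(r: float) -> float:
        return -investment + sum(
            annual_benefit / (1 + r) ** t
            for t in range(LAG_YEARS, LAG_YEARS + benefit_years)
        )
    lo, hi = -0.99, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid     # rate too low: benefits still outweigh investment
        else:
            hi = mid
    return (lo + hi) / 2

# Example: a £100m programme yielding 40,000 QALYs/year (costing £200m/year
# to deliver) for 20 years, starting after the assumed lag.
benefit = monetised_health_gain(qalys_gained=40_000, net_delivery_cost=2e8)
print(f"IRR ≈ {internal_rate_of_return(1e8, benefit, 20):.1%}")
```

Changing any one assumption (the attribution share especially) moves the headline percentage substantially, which is why the text stresses that these quantitative outputs rest on qualitative judgements.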
A key debate in the literature on monetisation of research impact addresses the level of aggregation. First applied to major research budgets in a 'top-down' or macro approach [39], whereby total health gains are apportioned to a particular research investment, the principles of monetisation are increasingly being used in a 'bottom-up' [34, 36–38] manner to collect data on specific project or programme research outputs. The benefits of new treatments and their usage in clinical practice can be built up to estimate returns from a body of research. By including only research-driven interventions and using cost-effectiveness or cost-utility data to estimate incremental benefits, this method goes some way to dealing with the issue of attribution. Some impact assessment models combine a monetisation component alongside an assessment of processes and/or non-monetised impacts, such as environmental impacts and an expanded knowledge base [41].
Societal impact assessment
Societal impact assessment, used in social sciences and public health, emphasises impacts beyond health and is built on constructivist and performative philosophical assumptions (columns 3 and 6 in Table 1). Some form of societal impact assessment was used in three of the 110 empirical studies identified in our HTA review. Its protagonists distinguish the social relevance of knowledge from its monetised impacts, arguing that the intrinsic value of knowledge may be less significant than the varied and changing social configurations that enable its production, transformation and use [42].
An early approach to measuring societal impact was developed by Spaapen and Sylvain in the early 1990s [43], and subsequently refined by the Royal Netherlands Academy of Arts and Science [44]. An important component is self-evaluation by a research team of the relationships, interactions and interdependencies that link it to other elements of the research ecosystem (e.g. nature and strength of links with clinicians, policymakers and industry), as well as external peer review of these links. Spaapen et al. subsequently conducted a research programme, Evaluating Research in Context (ERiC) [45], which produced the Sci-Quest model [46]. Later, they collaborated with researchers (who had led a major UK ESRC-funded study on societal impact [47]) to produce the EU-funded SIAMPI (Social Impact Assessment Methods through the study of Productive Interactions) Framework [48].
Sci-Quest was described by its authors as a 'fourth-generation' approach to impact assessment, the previous three generations having been characterised, respectively, by measurement (e.g. an unenhanced logic model), description (e.g. the narrative accompanying a logic model) and judgement (e.g. an assessment of whether the impact was socially useful or not). Fourth-generation impact assessment, they suggest, is fundamentally a social, political and value-oriented activity and involves reflexivity on the part of researchers to identify and evaluate their own research goals and key relationships [46].
Sci-Quest methodology requires a detailed assessment of the research programme in context and the development of bespoke metrics (both qualitative and quantitative) to assess its interactions, outputs and outcomes, which are presented in a unique Research Embedment and Performance Profile, visualised in a radar chart. SIAMPI uses a mixed-methods case study approach to map three categories of productive interaction: direct personal contacts, indirect contacts such as publications, and financial or material links. These approaches have theoretical elegance, and some detailed empirical analyses were published as part of the SIAMPI final report [48]. However, neither approach has had significant uptake elsewhere in health research, perhaps because both are complex and resource-intensive and do not allow easy comparison across projects or programmes.
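Since a REPP is essentially a set of scored benchmarks displayed on a radar chart, it is straightforward to mock one up. The sketch below assumes 15 benchmarks scored on a 5-point scale across the three social domains described in the Sci-Quest work (see Table 2); all benchmark scores and labels here are invented for illustration.

```python
import math
import matplotlib.pyplot as plt

# Invented example: 15 benchmarks (five per domain) scored on a 1-5 scale.
domains = {
    "science": [4, 5, 3, 4, 4],   # science/certified knowledge
    "market":  [2, 3, 2, 1, 3],   # industry/market
    "policy":  [5, 4, 4, 3, 5],   # policy/societal
}
labels = [f"{d}-{i + 1}" for d, ss in domains.items() for i in range(len(ss))]
scores = [s for ss in domains.values() for s in ss]

# Spread the benchmarks around the circle; repeat the first point to close
# the polygon.
angles = [2 * math.pi * i / len(scores) for i in range(len(scores))]
angles += angles[:1]
scores += scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, scores)
ax.fill(angles, scores, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels, fontsize=7)
ax.set_ylim(0, 5)
ax.set_title("Mock Research Embedment and Performance Profile")
plt.show()
```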
Whilst extending impact to include broader societal categories is appealing, the range of societal impacts described in different publications, and the weights assigned to them, vary widely; much depends on the researchers' own subjective ratings. An attempt to capture societal impact (the Research Quality Framework) in Australia in the mid-2000s was planned but later abandoned following a change of government [49].
UK Research Excellence Framework
The 2014 REF, an extensive exercise to assess UK universities' research performance, allocated 20 % of the total score to research impact [50]. Each institution submitted an impact template describing its strategy and infrastructure for achieving impact, along with several four-page impact case studies, each of which described a programme of research, claimed impacts and supporting evidence. These narratives, which were required to follow a linear and time-bound structure (describing research undertaken between 1993 and 2013, followed by a description of impact occurring between 2008 and 2013), were peer-reviewed by an intersectoral assessment panel representing academia and research users (industry and policymakers) [50]. Other countries are looking to emulate the REF model [51].
An independent evaluation of the REF impact assessment process by RAND Europe (based on focus groups, interviews, survey and documentary analysis) concluded that panel members perceived it as fair and robust and valued the intersectoral discussions, though many felt the somewhat crude scoring system (in which most case studies were awarded 3, 3.5 or 4 points) lacked granularity [52]. The 6679 non-redacted impact case studies submitted to the REF (1594 in medically-related fields) were placed in the public domain (http://results.ref.ac.uk) and provide a unique dataset for further analysis.
In its review of the REF, the members of Main Panel A, which covered biomedical and health research, noted that "International MPA [Main Panel A] members cautioned against attempts to 'metricise' the evaluation of the many superb and well-told narrations describing the evolution of basic discovery to health, economic and societal impact" [50].
Approaches with potential for the future
The approaches in this section, most of which have been
recently developed, have not been widely tested but may
hold promise for the future.
Electronic databases
Research funders increasingly require principal investigators to provide an annual return of impact data on an online third-party database. In the UK, for example, Researchfish® (formerly MRC e-Val but now described as a 'federated system' with over 100 participating organisations) allows funders to connect outputs to awards, thereby allowing aggregation of all outputs and impacts from an entire funding stream. The software contains 11 categories: publications, collaborations, further funding, next destination (career progression), engagement activities, influence on policy and practice, research materials, intellectual property, development of products or interventions, impacts on the private sector, and awards and recognition.
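Aggregation across a funding stream, as described above, is essentially a roll-up of per-award returns over those 11 categories. The sketch below is a hypothetical simplification: the record format and award identifiers are invented, and this is not the Researchfish® schema or API.

```python
from collections import Counter

# The 11 output categories listed in the text.
CATEGORIES = {
    "publications", "collaborations", "further funding",
    "next destination", "engagement activities",
    "influence on policy and practice", "research materials",
    "intellectual property", "development of products or interventions",
    "impacts on the private sector", "awards and recognition",
}

def aggregate_stream(annual_returns):
    """Roll per-award annual returns up to funding-stream level.

    Each entry is assumed to look like:
    {"award": str, "category": str, "items": int}
    """
    totals = Counter()
    for entry in annual_returns:
        if entry["category"] not in CATEGORIES:
            raise ValueError(f"unrecognised category: {entry['category']}")
        totals[entry["category"]] += entry["items"]
    return totals

stream = [
    {"award": "AWARD-001", "category": "publications", "items": 4},
    {"award": "AWARD-002", "category": "publications", "items": 2},
    {"award": "AWARD-002", "category": "influence on policy and practice",
     "items": 1},
]
print(aggregate_stream(stream))
```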
Provided that researchers complete the annual return consistently and accurately, such databases may overcome some of the limitations of one-off, resource-intensive case study approaches. However, the design (and business model) of Researchfish® is such that the only funding streams captured are from organisations prepared to pay the membership fee, thereby potentially distorting the picture of whose input accounts for a research team's outputs.
Researchfish® collects data both 'top-down' (from funders) and 'bottom-up' (from individual research teams). A comparable US model is the High Impacts Tracking System, a web-based software tool developed by the National Institute of Environmental Health Sciences; it imports data from existing National Institutes of Health databases of grant information as well as the texts of progress reports and notes of programme managers [53].
Whilst electronic databases are increasingly mainstreamed in national research policy (Researchfish® was used, for example, to populate the Framework on Economic Impacts described by the UK Department of Business, Innovation and Skills [54]), we were unable to identify any published independent evaluations of their use.
Realist evaluation
Realist evaluation, designed to address the question 'what works for whom in what circumstances', rests on the assumption that different research inputs and processes in different contexts may generate different outcomes (column 4 in Table 1) [55]. A new approach, developed to assess and summarise impact in the national evaluation of UK Collaborations for Leadership in Applied Health Research and Care, is shown in Fig. 3 [56]. Whilst considered useful in that evaluation, it was resource-intensive to apply.
Contribution mapping
Kok and Schuit describe the research ecosystem as a complex and unstable network of people and technologies [57]. They depict the achievement of impact as shifting and stabilising the network's configuration by mobilising people and resources (including knowledge in material forms, such as guidelines or software) and enrolling them in changing 'actor scenarios'. In this model, the focus is shifted from attribution to contribution; that is, to the activities and alignment efforts of different actors (linked to the research and, more distantly, unlinked to it) in the three phases of the research process (formulation, production and extension; Fig. 4). Contribution mapping, which can be thought of as a variation on the Dutch approaches to societal impact assessment described above, uses in-depth case study methods but differs from more mainstream approaches in its philosophical and theoretical basis (column 6 in Table 1), in its focus on processes and activities, and in its goal of producing an account of how the network of actors and artefacts shifts and stabilises (or not). Its empirical application to date has been limited.
The SPIRIT Action Framework
The SPIRIT Action Framework, recently published by Australia's Sax Institute [58], retains a logic model structure but places more emphasis on engagement and capacity-building activities in organisations and acknowledges the messiness of, and multiple influences on, the policy process (Fig. 5). Unusually, the logic model focuses not on the research but on the receiving organisation's need for research. We understand that it is currently being empirically tested, but evaluations have not yet been published.

Fig. 5 The SPIRIT Action Framework (reproduced under Creative Commons Attribution Licence from [58], Fig. 1, p. 151)
Fig. 3 Realist model of research-service links and impacts in CLAHRCs (reproduced under UK non-commercial government licence from [56])

Fig. 4 Kok and Schuit's 'contribution mapping' model (reproduced under Creative Commons Attribution Licence 4.0 from [57])

Participatory research impact model
Community-based participatory research is predicated on a critical philosophy that emphasises social justice and the value of knowledge in liberating the disadvantaged from oppression (column 5 in Table 1) [59]. Cacari-Stone et al.'s model depicts the complex and contingent relationship between a community-campus partnership and the policymaking process [60]. Research impact is depicted in synergistic terms as progressive strengthening of the partnership and its consequent ability to influence policy decisions. The paper introducing the model includes a detailed account of its application (Table 2) but, beyond that, it has not yet been empirically tested.
Discussion
This review of research impact assessment, which has sought to supplement rather than duplicate more extended overviews [1–7], prompts four main conclusions. First, one size does not fit all. Different approaches to measuring research impact are designed for different purposes. Logic models can be very useful for tracking the impacts of a funding stream from award to quantitised (and perhaps monetised) impacts. However, when exploring less directly attributable aspects of the research-impact link, narrative accounts of how these links emerged and developed are invariably needed.
Second, the perfect is the enemy of the good. Producing detailed and validated case studies, with a full assessment of context and all major claims independently verified, takes work and skill. There is a trade-off between the quality, completeness and timeliness of the data informing an impact assessment, on the one hand, and the cost and feasibility of generating such data, on the other. It is no
accident that some of the most theoretically elegant approaches to impact assessment have (ironically) had limited influence on the assessment of impact in practice.

[Fig. 5 The SPIRIT Action Framework (reproduced under Creative Commons Attribution Licence from [58], Fig. 1, p. 151)]
Third, warnings from critics that focusing on short-
term, proximal impacts (however accurately measured)
could create a perverse incentive against more complex
and/or politically sensitive research whose impacts are
likely to be indirect and hard to measure [61–63] should
be taken seriously. However, as the science of how to
measure intervening processes and activities advances, it
may be possible to use such metrics creatively to support
and incentivise the development of complementary
assets of various kinds.
Fourth, change is afoot. Driven by both technological
advances and the mounting economic pressures on the
research community, labour-intensive impact models
that require manual assessment of documents, researcher
interviews and a bespoke narrative may be overtaken in the
future by more automated approaches. The potential for
'big data' linkage (for example, supplementing Researchfish®
entries with bibliometrics on research citations) may be
considerable, though its benefits are currently speculative
(and the risks unknown).
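As a minimal sketch of what such linkage might look like in practice, the Python fragment below enriches a hypothetical Researchfish-style export of publication DOIs with citation counts retrieved from the public Crossref REST API. The file name and 'doi' column are assumptions made for illustration; the real export format may well differ.

```python
# Sketch: linking a hypothetical outcomes-database export to bibliometrics.
import csv
import requests

def crossref_citation_count(doi):
    """Return Crossref's citation count for a DOI, or None if unavailable."""
    resp = requests.get("https://api.crossref.org/works/" + doi, timeout=10)
    if resp.status_code != 200:
        return None
    return resp.json()["message"].get("is-referenced-by-count")

with open("researchfish_export.csv") as f:   # hypothetical export file
    records = list(csv.DictReader(f))        # assumed to contain a 'doi' column

for record in records:
    record["citations"] = crossref_citation_count(record["doi"])

# The linked records could then feed citation-based indicators alongside
# the researcher-reported outcomes already captured in the database.
```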
Conclusions
As the studies presented in this review illustrate,
research on research impact is a rapidly growing inter-
disciplinary field, spanning evidence-based medicine (via
sub-fields such as knowledge translation and implemen-
tation science), health services research, economics, in-
formatics, sociology of science and higher education
studies. One priority for research in this field is an as-
sessment of how far the newer approaches that rely on
regular updating of electronic databases are able to pro-
vide the breadth of understanding about the nature of
the impacts, and how they arise, that can come from the more established and more 'manual' approaches. Future
research should also address the topical question of
whether research impact tools could be used to help target
resources and reduce waste in research (for example, to
decide whether to commission a new clinical trial or a
meta-analysis of existing trials); we note, for example, the
efforts of the UK National Institute for Health Research in
this regard [64].
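One way such targeting might work, sketched below with invented data, is a cumulative meta-analysis: pooling existing trials in chronological order to check whether the evidence base already yields a precise answer before a further trial is commissioned. This illustrates the general idea only; it is not the method used in the NIHR work cited at [64].

```python
# Hedged illustration: fixed-effect cumulative meta-analysis on made-up data.
import math

# (effect estimate on the log scale, standard error), in chronological order
trials = [(-0.10, 0.20), (-0.25, 0.15), (-0.18, 0.12), (-0.22, 0.10)]

w_sum = wx_sum = 0.0
for i, (effect, se) in enumerate(trials, start=1):
    w = 1 / se**2                      # inverse-variance weight
    w_sum += w
    wx_sum += w * effect
    pooled = wx_sum / w_sum            # pooled estimate after trial i
    pooled_se = math.sqrt(1 / w_sum)
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"after trial {i}: pooled={pooled:.3f} (95% CI {lo:.3f} to {hi:.3f})")

# If the interval is already narrow and excludes no effect, commissioning a
# meta-analysis rather than another trial may reduce research waste.
```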
Once methods for assessing research impact have been
developed, it is likely that they will be used. As the range
of approaches grows, the challenge is to ensure that the
most appropriate one is selected for each of the many
different circumstances in which (and the different pur-
poses for which) people may seek to measure impact. It
is also worth noting that existing empirical studies have
been undertaken primarily in high-income countries and
relate to health research systems in North America, Eur-
ope and Australasia. The extent to which these
frameworks are transferable to low- or middle-income
countries or to the Asian setting should be explored
further.
Competing interests
TG was Deputy Chair of the 2014 Research Excellence Framework Main Panel A
from 2012 to 2014, for which she received an honorarium for days worked
(in common with all others on REF panels). SH received grants from various
health research funding bodies to help develop and test the Payback
Framework. JR is a member of the NIHR HTA Editorial Board, on paid
secondment. He was principal investigator in a study funded by the NIHR HTA
programme which reviewed methods for measuring the impact of the health
research programmes and was director of the NIHR Evaluation, Trials and
Studies Coordinating Centre to 2012. MG declares no conflict of interest.
All authors have completed the unified competing interest form at http://
www.spp.pt/UserFiles/file/APP_2015/Declaracao_ICMJE_nao_editavel.pdf
(available on request from the corresponding author) and declare (1) no
financial support for the submitted work from anyone other than their
employer; (2) no financial relationships with commercial entities that might
have an interest in the submitted work; (3) no spouses, partners, or children
with relationships with commercial entities that might have an interest in the
submitted work; and (4) no non-financial interests that may be relevant to the
submitted work.
Authors' contributions
JR was principal investigator on the original systematic literature review and
led the research and writing for the HTA report (see Acknowledgements), to
which all authors contributed by bringing different areas of expertise to an
interdisciplinary synthesis. TG wrote the initial draft of this paper and all
co-authors contributed to its refinement. All authors have read and
approved the final draft.
Acknowledgements
This paper is largely but not entirely based on a systematic review funded by
the NIHR HTA Programme, grant number 14/72/01, with additional material
from TG's dissertation from the MBA in Higher Education Management at
UCL Institute of Education, supervised by Sir Peter Scott. We thank Amanda
Young for project management support to the original HTA review and
Alison Price for assistance with database searches.
Author details
1 Nuffield Department of Primary Care Health Sciences, University of Oxford, Radcliffe Primary Care Building, Woodstock Rd, Oxford OX2 6GG, UK. 2 Primary Care and Population Sciences, Faculty of Medicine, University of Southampton, Southampton General Hospital, Southampton SO16 6YD, UK. 3 Health Economics Research Group (HERG), Institute of Environment, Health and Societies, Brunel University London, UB8 3PH, UK.
Received: 26 February 2016 Accepted: 27 April 2016
References
1. Raftery J, Hanney S, Greenhalgh T, Glover M, Young A. Models and applications for measuring the impact of health research: update of a systematic review for the Health Technology Assessment Programme. Health Technol Assess. 2016 (in press).
2. Penfield T, Baker MJ, Scoble R, Wykes MC. Assessment, evaluations, and definitions of research impact: A review. Res Eval. 2013:21–32.
3. Milat AJ, Bauman AE, Redman S. A narrative review of research impact
assessment models and methods. Health Res Policy Syst. 2015;13:18.
4. Grant J, Brutscher P-B, Kirk SE, Butler L, Wooding S. Capturing Research Impacts: A Review of International Practice. Documented Briefing. RAND Corporation; 2010.
5. Greenhalgh T. Research impact in the community based health sciences:
what would good look like? (MBA Dissertation). London: UCL Institute of
Education; 2015.
6. Boaz A, Fitzpatrick S, Shaw B. Assessing the impact of research on policy: A literature review. Sci Public Policy. 2009;36(4):255–70.
7. Hanney S, Buxton M, Green C, Coulson D, Raftery J. An assessment of the impact of the NHS Health Technology Assessment Programme. Health Technol Assess. 2007;11(53).
8. Hughes A, Martin B. Enhancing Impact: The value of public sector R&D. CIHE & UK-IRC; 2012. Available at www.cbr.cam.ac.uk/pdf/Impact%20Report.
9. Anonymous. Rates of return to investment in science and innovation: A report prepared for the Department for Business, Innovation and Skills. Accessed 17.12.14 on https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/333006/bis-14-990-rates-of-return-to-investment-in-science-and-innovation-revised-final-report.pdf. London: Frontier Economics; 2014.
10. Glasziou P, Altman DG, Bossuyt P, Boutron I, Clarke M, Julious S, et al. Reducing waste from incomplete or unusable reports of biomedical research. Lancet. 2014;383(9913):267–76.
11. Guthrie S, Wamae W, Diepeveen S, Wooding S, Grant J. Measuring research:
a guide to research evaluation frameworks and tools. Arlington, VA: RAND
Corporation; 2013.
12. Weiss CH. The many meanings of research utilization. Public Administration Review. 1979;39(5):426–31.
13. Kogan M, Henkel M. Government and research: the Rothschild experiment
in a government department. London: Heinemann Educational Books; 1983.
14. Smith K. Beyond evidence based policy in public health: The interplay of
ideas: Palgrave Macmillan; 2013.
15. Amara N, Ouimet M, Landry R. New evidence on instrumental, conceptual, and symbolic utilization of university research in government agencies. Sci Commun. 2004;26(1):75–106.
16. Swan J, Bresnen M, Robertson M, Newell S, Dopson S. When policy meets practice: colliding logics and the challenges of 'mode 2' initiatives in the translation of academic knowledge. Organ Stud. 2010;31(9-10):1311–40.
17. Davies H, Nutley S, Walter I. Why 'knowledge transfer' is misconceived for applied social research. J Health Serv Res Policy. 2008;13(3):188–90.
18. Dopson S, Fitzgerald L. Knowledge to action? Evidence-based health care in
context: Oxford University Press; 2005.
19. Gabbay J, Le May A. Practice-based evidence for healthcare: Clinical
mindlines. London: Routledge; 2010.
20. Lomas J. Using 'linkage and exchange' to move research into policy at a Canadian foundation. Health Affairs (Project Hope). 2000;19(3):236–40.
21. Lomas J. The in-between world of knowledge brokering. BMJ. 2007;334(7585):129–32.
22. Bero LA, Grilli R, Grimshaw JM, Harvey E, Oxman AD, Thomson MA. Closing the gap between research and practice: an overview of systematic reviews of interventions to promote the implementation of research findings. BMJ. 1998;317(7156):465–8.
23. Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J. A systematic review of
barriers to and facilitators of the use of evidence by policymakers. BMC
Health Serv Res. 2014;14:2.
24. Long JC, Cunningham FC, Braithwaite J. Bridges, brokers and boundary
spanners in collaborative networks: a systematic review. BMC Health Serv
Res. 2013;13:158.
25. Buxton M, Hanney S. How can payback from health services research be assessed? J Health Serv Res Policy. 1996;1(1):35–43.
26. Expert Panel for Health Directorate of the European Commission's Research and Innovation Directorate General. Review of Public Health Research Projects Financed under the Commission's Framework Programmes for Health Research. Downloaded from https://ec.europa.eu/research/health/pdf/review-of-public-health-research-projects-subgoup1_en.pdf on 12.8.15. Brussels: European Commission; 2013.
27. Hanney SR, Castle-Clarke S, Grant J, Guthrie S, Henshall C, Mestre-Ferrandiz J, Pistollato M, Pollitt A, Sussex J, Wooding S. How long does biomedical research take? Studying the time taken between biomedical and health research and its translation into products, policy, and practice. Health Res Policy Syst. 2015;13:1.
28. Meagher L, Lyall C, Nutley S. Flows of knowledge, expertise and influence: a
method for assessing policy and practice impacts from social science research. Res Eval. 2008;17(3):163–73.
29. Guthrie S, Bienkowska-Gibbs T, Manville C, Pollitt A, Kirtley A, Wooding S. The impact of the National Institute for Health Research Health Technology Assessment programme, 2003–13: a multimethod evaluation. 2015.
30. Klautzer L, Hanney S, Nason E, Rubin J, Grant J, Wooding S. Assessing policy and practice impacts of social science research: the application of the Payback Framework to assess the Future of Work programme. Res Eval. 2011;20(3):201–9.
31. Kuruvilla S, Mays N, Pleasant A, Walt G. Describing the impact of health research: a Research Impact Framework. BMC Health Serv Res. 2006;6:134.
32. Kuruvilla S, Mays N, Walt G. Describing the impact of health services and policy research. J Health Serv Res Policy. 2007;12 Suppl 1:23–31.
33. Canadian Academy of Health Sciences: Making an Impact, A Preferred
Framework and Indicators to Measure Returns on Investment in Health
Research. Downloadable from http://www.cahs-acss.ca/wp-content/
uploads/2011/09/ROI_FullReport.pdf. Ottawa: CAHS; 2009.
34. Johnston SC, Rootenberg JD, Katrak S, Smith WS, Elkins JS. Effect of a US National Institutes of Health programme of clinical trials on public health and costs. Lancet. 2006;367(9519):1319–27.
35. Deloitte Access Economics. Returns on NHMRC funded Research and Development. Commissioned by the Australian Society for Medical Research. Sydney, Australia; 2011.
36. de Oliveira C, Nguyen HV, Wijeysundera HC, Wong WW, Woo G, Grootendorst P, et al. Estimating the payoffs from cardiovascular disease research in Canada: an economic analysis. CMAJ Open. 2013;1(2):E83–90.
37. Glover M, Buxton M, Guthrie S, Hanney S, Pollitt A, Grant J. Estimating the
returns to UK publicly funded cancer-related research in terms of the net
value of improved health outcomes. BMC Med. 2014;12:99.
38. Buxton M, Hanney S, Morris S, Sundmacher L, Mestre-Ferrandiz J, Garau M, Sussex J, Grant J, Ismail S, Nason E. Medical research: what's it worth? Estimating the economic benefits from medical research in the UK. London: UK Evaluation Forum (Academy of Medical Sciences, MRC, Wellcome Trust); 2008.
39. Access Economics. Exceptional returns: the value of investing in health R&D
in Australia: Australian Society for Medical Research; 2008.
40. National Institute for Health and Care Excellence (NICE). Guide to the methods of technology appraisal. Accessed at https://www.nice.org.uk/article/pmg9/resources/non-guidance-guide-to-the-methods-of-technology-appraisal-2013-pdf on 21.4.16. London: NICE; 2013.
41. Roback K, Dalal K, Carlsson P. Evaluation of health research: measuring costs
and socioeconomic effects. Int J Preventive Med. 2011;2(4):203.
42. Bozeman B, Rogers JD. A churn model of scientific knowledge value: Internet researchers as a knowledge value collective. Res Policy. 2002;31(5):769–94.
43. Spaapen J, Sylvain C. Societal Quality of Research: Toward a Method for the
Assessment of the Potential Value of Research for Society: Science Policy
Support Group; 1994.
44. Royal Netherlands Academy of Arts and Sciences. The societal impact of
applied research: towards a quality assessment system. Amsterdam: Royal
Netherlands Academy of Arts and Sciences; 2002.
45. ERiC: Evaluating Research in Context. Evaluating the societal relevance of academic research: A guide. Den Haag: Science System Assessment Department, Rathenau Instituut; 2010.
46. Spaapen J, Dijstelbloem H, Wamelink F. Evaluating research in context: a method for comprehensive assessment. 2nd ed. The Hague: COS; 2007.
47. Molas-Gallart J, Tang P, Morrow S. Assessing the non-academic impact of grant-funded socio-economic research: results from a pilot study. Res Eval. 2000;9(3):171–82.
48. Spaapen J. Social Impact Assessment Methods for Research and Funding Instruments Through the Study of Productive Interactions (SIAMPI): Final report on social impacts of research. Amsterdam: Royal Netherlands Academy of Arts and Sciences; 2011.
49. Donovan C. The Australian Research Quality Framework: A live experiment in capturing the social, economic, environmental, and cultural returns of publicly funded research. N Dir Eval. 2008;118:47–60.
50. Higher Education Funding Council. Research Excellence Framework 2014:
Overview report by Main Panel A and Sub-panels 1 to 6. London: HEFCE.
Accessed 1.2.15 on http://www.ref.ac.uk/media/ref/content/expanel/
member/Main Panel A overview report.pdf; 2015.
51. Morgan B. Research impact: Income for outcome. Nature. 2014;511(7510):S72–5.
52. Manville C, Guthrie S, Henham M-L, Garrod B, Sousa S, Kirtley A, Castle-
Clarke S, Ling T: Assessing impact submissions for REF 2014: An evaluation.
Downloaded from http://www.hefce.ac.uk/media/HEFCE,2014/Content/
Pubs/Independentresearch/2015/REF,impact,submissions/REF_assessing_
impact_submissions.pdf on 11.8.15. Cambridge: RAND Europe; 2015.
53. Drew CH, Pettibone KG, Ruben E. Greatest HITS: A new tool for tracking impacts at the National Institute of Environmental Health Sciences. Res Eval. 2013;22(5):307–15.
54. Medical Research Council. Economic Impact report 2013-14. Downloaded from http://www.mrc.ac.uk/documents/pdf/economic-impact-report-2013-14/ on 18.8.15. Swindon: MRC; 2015.
55. Pawson R. The science of evaluation: a realist manifesto: Sage; 2013.
56. Rycroft-Malone J, Burton C, Wilkinson J, Harvey G, McCormack B, Baker R, Dopson S, Graham I, Staniszewska S, Thompson C, et al. Collective action for knowledge mobilisation: a realist evaluation of the Collaborations for Leadership in Applied Health Research and Care. Health Services and Delivery Research. Southampton (UK): NIHR Journals Library; 2015;3(44).
57. Kok MO, Schuit AJ. Contribution mapping: a method for mapping the
contribution of research to enhance its impact. Health Res Policy Syst. 2012;
10:21.
58. Redman S, Turner T, Davies H, Williamson A, Haynes A, Brennan S, et al. The SPIRIT Action Framework: A structured approach to selecting and testing strategies to increase the use of research in policy. Soc Sci Med. 2015;136–137:147–55.
59. Jagosh J, Macaulay AC, Pluye P, Salsberg J, Bush PL, Henderson J, et al. Uncovering the Benefits of Participatory Research: Implications of a Realist Review for Health Research and Practice. Milbank Quarterly. 2012;90(2):311–46.
60. Cacari-Stone L, Wallerstein N, Garcia AP, Minkler M. The Promise of Community-Based Participatory Research for Health Equity: A Conceptual Model for Bridging Evidence With Policy. American Journal of Public Health. 2014:e1–e9.
61. Kelly U, McNicoll I. Through a glass, darkly: Measuring the social value of
universities. Downloaded from http://www.campusengage.ie/sites/default/
files/resources/80096 NCCPE Social Value Report (2).pdf on 11.8.15. 2011.
62. Hazelkorn E. Rankings and the reshaping of higher education: The battle for
world-class excellence: Palgrave Macmillan; 2015.
63. Nowotny H. Engaging with the political imaginaries of science: Near misses and future targets. Public Underst Sci. 2014;23(1):16–20.
64. Anonymous. Adding value in research. London: National Institute for Health
Research. Accessed 4.4.16 on http://www.nets.nihr.ac.uk/about/adding-
value-in-research; 2016.
65. Higher Education Funding Council for England: 2014 REF: Assessment
framework and guidance on submissions. Panel A criteria. London (REF 01/
2012): HEFCE; 2012.
66. LSE Public Policy Group. Maximizing the impacts of your research: A
handbook for social scientists. http://www.lse.ac.uk/government/research/
resgroups/LSEPublicPolicy/Docs/LSE_Impact_Handbook_April_2011.pdf.
London: LSE; 2011.
67. Kwan P, Johnston J, Fung AY, Chong DS, Collins RA, Lo SV. A systematic
evaluation of payback of publicly funded health and health services
research in Hong Kong. BMC Health Serv Res. 2007;7:121.
68. Scott JE, Blasinsky M, Dufour M, Mandai RJ, Philogene GS. An evaluation of the Mind-Body Interactions and Health Program: assessing the impact of an NIH program using the Payback Framework. Res Eval. 2011;20(3):185–92.
69. The Madrillon Group. The Mind-Body Interactions and Health Program
Outcome Evaluation. Final Report. Bethesda, Maryland: Report prepared for
Office of Behavioral and Social Sciences Research, National Institutes of
Health; 2011.
70. Hanney SR, Watt A, Jones TH, Metcalf L. Conducting retrospective impact analysis to inform a medical research charity's funding strategies: the case of Asthma UK. Allergy Asthma Clin Immunol. 2013;9:17.
71. Donovan C, Butler L, Butt AJ, Jones TH, Hanney SR. Evaluation of the impact of National Breast Cancer Foundation-funded research. Med J Aust. 2014;200(4):214–8.
72. Wooding S, Hanney SR, Pollitt A, Grant J, Buxton MJ. Understanding factors
associated with the translation of cardiovascular research: a multinational
case study approach. Implement Sci. 2014;9:47.
73. Montague S, Valentim R. Evaluation of RT&D: from 'prescriptions for justifying' to user-oriented guidance for learning. Res Eval. 2010;19(4):251–61.
74. Adam P, Solans-Domènech M, Pons JM, Aymerich M, Berra S, Guillamon I, et al. Assessment of the impact of a clinical and health services research call in Catalonia. Res Eval. 2012;21(4):319–28.
75. Graham KER, Chorzempa HL, Valentine PA, Magnan J. Evaluating health research impact: Development and implementation of the Alberta Innovates Health Solutions impact framework. Res Eval. 2012;21:354–67.
76. Cohen G, Schroeder J, Newson R, King L, Rychetnik L, Milat AJ, et al. Does
health intervention research have real world policy and practice impacts:
testing a new impact assessment tool. Health Res Policy Syst. 2015;13:3.
77. Molas-Gallart J, Tang P. Tracing 'productive interactions' to identify social impacts: an example from the social sciences. Res Eval. 2011;20(3):219–26.
78. Hinrichs S, Grant J. A new resource for identifying and assessing the
impacts of research. BMC Med. 2015;13:148.
79. Greenhalgh T, Fahy N. Research impact in the community based health
sciences: an analysis of 162 case studies from the 2014 UK Research
Excellence Framework. BMC Med. 2015;13:232.