OPINION  Open Access

ISRIA statement: ten-point guidelines for an effective process of research impact assessment
Paula Adam1*, Pavel V. Ovseiko2, Jonathan Grant3, Kathryn E. A. Graham4, Omar F. Boukhris5, Anne-Maree Dowd6, Gert V. Balling7, Rikke N. Christensen7, Alexandra Pollitt3, Mark Taylor8, Omar Sued9, Saba Hinrichs-Krapels3, Maite Solans-Domènech1, Heidi Chorzempa4, for the International School on Research Impact Assessment (ISRIA)
Abstract

As governments, funding agencies and research organisations worldwide seek to maximise both the financial and non-financial returns on investment in research, the way the research process is organised and funded is coming under increasing scrutiny. There are growing demands and aspirations to measure research impact (beyond academic publications), to understand how science works, and to optimise its societal and economic impact. In response, a multidisciplinary practice called research impact assessment is rapidly developing. Given that the practice is still in its formative stage, systematised recommendations or accepted standards for practitioners (such as funders and those responsible for managing research projects) across countries or disciplines to guide research impact assessment are not yet available.

In this statement, we propose initial guidelines for a rigorous and effective process of research impact assessment applicable to all research disciplines and oriented towards practice. This statement systematises expert knowledge and practitioner experience from designing and delivering the International School on Research Impact Assessment (ISRIA). It brings together insights from over 450 experts and practitioners from 34 countries, who participated in the school during its 5-year run (from 2013 to 2017), and shares a set of core values from the school's learning programme. These insights are distilled into ten-point guidelines, which relate to (1) context, (2) purpose, (3) stakeholders' needs, (4) stakeholder engagement, (5) conceptual frameworks, (6) methods and data sources, (7) indicators and metrics, (8) ethics and conflicts of interest, (9) communication, and (10) community of practice.

The guidelines can help practitioners improve and standardise the process of research impact assessment, but they are by no means exhaustive and require evaluation and continuous improvement. The prima facie effectiveness of the guidelines is based on the systematised expert and practitioner knowledge of the school's faculty and participants, derived from their practical experience and research evidence. The current knowledge base has gaps in terms of geographical and scientific discipline as well as stakeholder coverage and representation. The guidelines can be further strengthened through evaluation and continuous improvement by the global research impact assessment community.

Keywords: Research impact assessment, Evaluation, Science policy, Science of science, Responsible research and innovation, Guidelines, International School on Research Impact Assessment (ISRIA)
* Correspondence: padam@gencat.cat
Equal contributors
1 Agency for Health Quality and Assessment of Catalonia (AQuAS), Carrer de Roc Boronat, 81, ES-08005 Barcelona, Spain
Full list of author information is available at the end of the article
Adam et al. Health Research Policy and Systems (2018) 16:8. DOI 10.1186/s12961-018-0281-5
Background
Governments, funding agencies and research organisations all over the globe increasingly seek to maximise societal and economic returns on investment in research by shaping research policy and practice. For example, in the European Union's Horizon 2020 research and innovation programme, excellent science, industrial leadership and societal challenges are three mutually reinforcing priorities [1], and the Responsible Research and Innovation approach [2] within the 'Science with and for Society' programme aims to 'better align both the process and outcomes of R&I [research and innovation] with the values, needs and expectations of European society' [3]. In Canada, the Prime Minister's mandate letter to the Minister of Innovation, Science and Economic Development stresses the importance of focusing on results that benefit Canadians [4], and the Policy on Results for all federal governmental departments sets out accountability for performance information and evaluation [5]. In Australia, the National Innovation and Science Agenda makes a commitment to introduce, for the first time, 'clear and transparent measures of non-academic impact and industry engagement when assessing university research performance' [6]. Inevitably, the way the research process in all scientific domains is organised and funded is coming under increasing scrutiny. There are growing aspirations for science policy to be formulated on the basis of a scientific understanding of how science works and how to optimise its impact [7-9]. There are also important initiatives to scientifically measure and study science [10-12], as well as critical views on how research is shaped and performed [13-18].
In response to such growing demands and aspirations, the practice of research impact assessment (RIA) has been rapidly developing. Whereas interest in assessing research impact and developing evidence-based science policy is not new [19-25], early analyses mainly examined innovation processes and research outputs, such as publications, citations, and grants, using bibliometric and econometric techniques. More recently, research funders have developed an interest in measuring research impact beyond academia. For example, in the case of the 2014 Research Excellence Framework (REF) of the Higher Education Funding Council for England, impact was defined as 'any effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia' [26]. RIA uses a multitude of methods from social science disciplines to examine the research process with a view to maximising its societal and economic impacts, such as intellectual property, spin-out companies, health outcomes, public understanding and acceptance, policy-making, sustainable development, social cohesion, gender equity, cultural enrichment, and other benefits.
In Europe, North America, Australia, and other countries around the world, RIA is already being institutionalised within national research and innovation systems. Many government agencies and research organisations are starting to use RIA as a practical tool for decision-making in scientific strategy, for demonstrating accountability to research funders, or even for allocating research resources. We anticipate that the use of RIA will intensify and spread to other regions and countries.
In the European Union, evaluations serve to create a crucial evidence base for the implementation of research and innovation programmes and are legally required for all framework programmes. Past programmes have been evaluated [27], the current programmes are being monitored [28], and, based on the interim evaluation of Horizon 2020, recommendations have been developed on how to maximise the impact of future research and innovation [29]. Likewise, the League of European Research Universities recommends that universities embrace the societal impact agenda and develop transparent reward systems for all kinds of impact [30].

Among all European countries, RIA is most developed in the United Kingdom, with practice-defining contributions ranging from the development of conceptual tools such as the Payback Framework [31-34] to the introduction of non-academic impact assessment on the national scale, as in the case of the 2014 REF [26, 35, 36]. A wealth of new resources is being put in place and made openly accessible to identify and assess the impact of research both at the level of organisations and nationally [36-44], with great potential to explore methodological challenges and novel aspects of RIA such as time lags in translation [45, 46], the gender equity pathway to maximise research impact [47-49], or the relative valuation of different kinds of research impact by the general public, specific patient groups and researchers [50].

In Spain, RIA has been used in the context of health sciences programmes [51, 52] and networks [53], with the aim of improving and testing applications of various methods and frameworks. A comprehensive health research assessment system is being institutionalised by mandate of the Catalan Strategic Plan for Health Research and Innovation (PERIS). This assessment system (named SARIS) builds on the global lessons learned from RIA.

In the Netherlands, the strategy is to focus on assessing the research process as a means to facilitate impact through so-called productive interactions, i.e. 'exchanges between researchers and stakeholders in which knowledge is produced and valued that is both scientifically robust and socially relevant' [54, 55]. Universities, funding agencies and academic organisations have jointly developed a common assessment system, the Standard Evaluation Protocol (SEP) [56], which includes relevance to society as one of the three main assessment criteria.

In the United States, where innovation studies, research evaluation and the science of science first emerged [19], the National Science Foundation makes funding decisions on the basis of two major criteria, 'intellectual merit' and 'broader impacts' [57]. The National Institutes of Health and the National Science Foundation are leading efforts to create a repository of data and tools to assess the impact of federal investments in research called STAR METRICS® [58]. There are also many other federal and institutional efforts, such as the Evaluation of Large Initiatives project [59] and the Becker Medical Library Model for Assessment of Research [60].

In Canada, the Canadian Academy of Health Sciences (CAHS) has adapted the Payback Framework to measure returns on investment in health research nationwide, and this has been subsequently adapted to the provincial context [61-66]. A number of national and provincial research funders have introduced assessment of 'relevance' [67]. In doing so, relevance is considered not only as a necessary condition for impact, but also as a value in itself [67].

In Australia, the application of impact assessment has mainly been focused on the health domain or on sustainable development research in agriculture [68-76]. The Commonwealth Scientific and Industrial Research Organisation has developed an impact model and a case study approach spanning agriculture and fisheries, health, industry and defence, and the natural environment [77]. In line with the National Innovation and Science Agenda, a pilot was conducted in 2017 and preparations are currently underway to introduce a national engagement and impact assessment spanning all research fields in 2018. The national assessment will examine how universities are translating their research into economic, social and other benefits 'and encourage greater collaboration between universities, industries and other end-users of research' [78].

There is also a growing number of examples of RIA spreading to countries such as Argentina [79], Brazil [80], Guatemala [81], Hong Kong [82], Indonesia [83], Iran [84], and Qatar [85].
In designing and implementing RIA, researchers and practitioners worldwide face many challenges. Thus, developing standards and recommendations based on systematised expert knowledge and making them openly accessible can be a compelling way to guide researchers and practitioners on how to effectively design and implement RIA. Many methodological challenges in RIA are well known to experts and have already been discussed in technical reports and policy papers [86, 87] (Box 1). Moreover, recently, there have been a number of important recommendations in peer-reviewed journals regarding different aspects of the research process. The Lancet series on 'Research: Reducing Waste and Increasing Value' [13] provides recommendations regarding the research process, including research priority setting [88]; design, conduct and analysis [89]; regulation and management [90]; inaccessible research [91]; and incomplete or unusable research [92]. The Leiden Manifesto for Research Metrics elaborates principles for metrics-based evaluation of research outputs [93], and the Metric Tide elaborates on the role of metrics in research assessment and management [94]. A manifesto for reproducible science puts forward measures to optimise the scientific process with regard to methods, reporting and dissemination, reproducibility, and evaluation and incentives [15].
Box 1 Five common methodological challenges in RIA [86]

Time lags: how do we assess the impact of research if it usually takes a long time for impact to occur? When is the right timing?

Attribution and contribution: how do we attribute particular impacts to particular research projects and researchers (and vice versa) if research is often incremental and collaborative?

Marginal differences: how do we distinguish between high and low impact if there is no shared understanding of impact or assessment standards yet?

Transaction costs: how do we ensure that the benefits of RIA outweigh its costs if the assessment process can be costly and burdensome?

Unit of assessment: how do we determine an appropriate unit of assessment if research can be multi-disciplinary and multi-impactful?
Yet, the challenges faced during the design and implementation of RIA by practitioners based within funding organisations or institutions responsible for managing a portfolio of research are not well addressed in the current literature. We believe that standards and recommendations to guide research programme managers and other practitioners on how to effectively design and conduct RIA would prove useful both for practical applications and for establishing a common language to facilitate mutual learning in the global community of practice. Here, we propose initial guidelines by systematising expert and practitioner knowledge from designing and delivering the International School on Research Impact Assessment (ISRIA) (http://theinternationalschoolonria.com).
Development of the ISRIA statement

ISRIA is a community of experts and practitioners from different organisations and research systems. For the past five years, we have been engaged with ISRIA in designing, delivering and applying the school's learning programme in practice. The school was founded in 2013 based on the recognition that research programme managers and other practitioners faced challenges that were unaddressed at that time and, we believe, still are. Namely, debate on RIA lacks a focus on practitioner needs; there is a perceived mutually exclusive relationship between different methods, models and approaches; there is a nascent but too diffuse community of practice; and there is a need to build international capacity, share practice and develop standards. ISRIA recognises the growing need for practical skills and aims to fulfil it. It defines RIA as 'a growing field of practice that is interested in science and innovation, research ecosystems and the effective management and administration of research funding' [95].

The school's learning programme is underpinned by a set of six core values that guide participants to develop and implement their own RIA plan (Fig. 1). The programme stands on the recognition that, beyond technical challenges, there are also global, local, cultural and other contextual challenges. Each edition of ISRIA took place in a different country and had participants from diverse cultures and from many research disciplines. The experience and cultural competence gained through the application of the school's learning programme in different contexts has generated a wealth of expert knowledge and practical skills that support the formulation of these guidelines.
The ISRIA statement brings together insights from over 450 scholars and practitioners representing 34 countries from five continents, who participated in five international editions and seven regional ISRIA courses and workshops in 2013-2017 (Fig. 2). The largest proportion of the school's participants comes from Europe, North America, Australia, South America, and the Middle East. The best covered research area is health, followed by education, energy and environment. The highest represented stakeholders are research funding agencies, charities, government and academia.
The authors distilled these insights into recommendations and agreed on ten-point guidelines by consensus. The ten-point guidelines relate to (1) context, (2) purpose, (3) stakeholders' needs, (4) stakeholder engagement, (5) conceptual frameworks, (6) methods and data sources, (7) indicators and metrics, (8) ethics and conflicts of interest, (9) communication, and (10) community of practice (Fig. 3). The guidelines are oriented towards research practitioners and policy-makers in funding organisations, healthcare organisations, universities, research organisations, government agencies, industry and charities wishing to develop a process of RIA in any scientific domain and at any level of assessment.

Fig. 1 Six core values underpinning ISRIA
Ten-point guidelines for an effective process of RIA
1. Analyse your context
Context analysis helps understand the internal and
external environment in which research takes place and
is being assessed. An enhanced understanding of the
research environment illuminates why particular
research is conducted, to what extent it can contribute
to the wider research field, how it is relevant to the
needs of potential research users, and which RIA
methods and indicators to employ.
The importance of the internal research environment is such that, in the United Kingdom's REF, it forms a major element of the overall quality profile awarded to each submission. The research environment is assessed in terms of its 'vitality and sustainability' using the following data and information: research strategy; staff and students; equality and diversity; research income, infrastructure and facilities; and collaboration and contribution to the discipline [26]. Analysis of the research environment can help benchmark and assess the strengths and weaknesses of the given research environment. Many countries provide detailed national data on higher education and research organisations, and there are also international comparisons and rankings [96-98] that can be useful (although in interpreting these rankings it is important to understand how they have been developed and the strengths and weaknesses of the approach used).
Fig. 2 International School on Research Impact Assessment: events and participants, 2013-2017. Black dots indicate international editions, regional courses and workshops; orange areas indicate countries represented by faculty and participants
Fig. 3 Ten-point guidelines for an effective process of research impact assessment
Analysis of the external research environment can also help identify relevant macro-environmental factors and trends that may affect, or be affected by, the research undertaken in a particular country or context. These are often conceptualised as PESTLE (political, economic, social, technological, legal and environmental), STEEPLED (social, technological, economic, environmental, political, legal, ethical and demographic), or SPELIT (social, political, economic, legal, intercultural and technical) [99]. For example, while developing its own home-grown scientific and research management talent pool, Qatar's research and development enterprise draws heavily on the international workforce and expertise. Hence, the macro-environmental factors influencing recruitment and international collaboration are particularly important for Qatar's research and development enterprise.
2. Reflect continuously on your purposes
Continuous reflection on the purposes of RIA and one's relationship to the research being assessed helps refine the assessment questions and methodology. The purposes of RIA include advocacy, accountability, analysis and allocation (Fig. 4) [86].
Advocacy. An advocacy approach to RIA is used when there is a need to 'make the case' for research, e.g. to demonstrate the returns of science, alleviate concerns about its value, or raise awareness and obtain more support. An advocacy approach is particularly relevant to addressing policy decision-makers when funding cycles are changing and more research investments are needed, in the context of austerity when research funding needs to be protected, or when there is a need to inform public opinion. When RIA is undertaken to demonstrate the value of research to society and how science can help grow the economy, economic return on investment approaches are usually employed to estimate impact on GDP, tax revenues, net value added, jobs created and other returns. For example, a number of United Kingdom studies focus on the internal rate of return, including spill-overs, time lag, percentage of attribution and health gain (monetarised measures of quality of life gained), net savings for the health system and net health gains [37-40].
Analysis. A robust analytical approach should ideally underpin all other 'As', particularly when an understanding of how science works is required in order to optimise its returns. This often involves understanding the barriers to and facilitators of impact, identifying dysfunctions within research programmes, as well as highlighting opportunities to add more value to research during its planning and execution. For example, a series of 'Retrosight' studies in different fields of health research has used detailed case studies to examine how individual pieces of research generate different kinds of impact over a 10-20 year timeframe, as well as characteristics of projects, teams and institutions that are associated with that impact [100-102]. Key lessons that emerged from these studies included the importance of engaging with non-academic stakeholders during the research, and the value of particular skills in a research team, such as working across boundaries and being able to think strategically about pathways to impact [103].
Accountability. An accountability approach to RIA is used to ensure accountability to tax-payers, donors and society for research funding. With the increasing pressure to reduce public spending, there is a greater emphasis on transparency, efficiency, value to the public and a return for the investment made by the public, private and charitable sectors in research. For example, Australia's national research evaluation framework, Excellence in Research for Australia (ERA) [104], is considered to be one of the primary mechanisms that Government, public and private sectors have to account for their expenditure on the higher education research sector. An independent review of the benefits of ERA found that, while improving accountability, transparency and policy-making, ERA helps to increase the social rate of return from research, generate cost savings, increase university revenue and enhance economic activity [105].
Fig. 4 The 'Four As' of research impact assessment: advocacy, analysis, accountability and allocation. Adapted from [86]
Allocation. An allocation approach to RIA is used to incentivise research excellence by providing economic rewards through the allocation of resources. Allocation of resources is generally based on the assessment of several dimensions of research quality, impact and environment using explicit criteria. In the United Kingdom, non-academic impact was included in the national REF process for the first time in 2014, with a weighting of 20% for non-academic impact, meaning that 20% of approximately £1.6 billion of quality-related research funding allocated to higher education institutions annually (roughly £320 million per year) is allocated on the basis of non-academic impact.
3. Identify stakeholders and their needs
Attention to stakeholders and their needs is important
for the success of any RIA. Identifying and analysing
stakeholders and their needs helps prioritise stakeholder
interests, develop engagement strategies and determine
RIA requirements.
Stakeholders are the people and organisations with an interest in the outcome of a given RIA. For RIA to influence practice, it needs to address stakeholders' interests, beliefs and behaviour. This is particularly true in the public sector, where 'success' for public organisations (and certainly survival) depends on satisfying key stakeholders 'according to their definition of what is valuable' [106]. Stakeholders' needs are further influenced by their country's or community's cultural values, associated with the geographical location, traditions, language and religion, by political and organisational rules of behaviour, and by personal socio-demographic characteristics.
Different stakeholders play different roles in the research process, operate in different contexts, possess different types of information, and therefore value different aspects of RIA. For example, according to their role in the research process, stakeholders can be classified into research funders, research participants, researchers, research users and research beneficiaries. Whereas research funders are usually concerned about demonstrating an effective use of resources, improving resource allocation and formulating evidence-based science policy, researchers are usually interested in demonstrating research outputs and impacts over time, promoting personal or institutional research agendas, and making the case for new resources.
Various theoretical approaches can be used to identify stakeholders, determine their salience, and prioritise the levels of attention that they require in the RIA process. For example, stakeholders' salience can be determined on the basis of their power, the legitimacy of their relationship and the urgency of their claim [107]. Stakeholder analysis and prioritisation can be further assisted by the use of power versus interest grids, stakeholder influence diagrams, problem-frame stakeholder maps, and the participation planning matrix [106, 108, 109]. A power versus interest grid, also known as the Mendelow matrix [110], is one of the most frequently used methods of stakeholder analysis (Fig. 5). The information required for stakeholder analysis can be gathered from organisation- and programme-level strategic plans, annual reports, governance and management board papers and minutes, websites, surveys, interviews, previous evaluations and other documents.
4. Engage with key stakeholders early on
Engaging with stakeholders early and throughout the
process of RIA can help ensure the social robustness of
RIA and make real advances in how science is shaped.
Developing interpersonal engagement skills and cultural
competence can further facilitate an effective translation
of RIA into practice.
It is argued that, in recent decades, a social contract between science and society has been redrawn to include not only production of scientifically reliable knowledge by scientists (Mode 1), but also a transparent and participatory process of knowledge production characterised by researchers' engagement with research users and other stakeholders during research design and implementation (Mode 2) [111-113]. Such Mode 2 knowledge production is likely to result in 'socially robust' knowledge and therefore more effective translation of knowledge into practice [112]. Mode 2 knowledge production is also more efficient because it presupposes direct adoption of research findings and innovations and requires less dissemination and knowledge mobilisation.
Fig. 5 Power versus interest grid (the Mendelow matrix). Adapted from [110]
Many research funders promote engagement with stakeholders as a means of co-creating and enhancing future research impact at different stages of the research process, including design, implementation and evaluation. For example, the European Union's Responsible Research and Innovation approach aims to engage all societal actors in the research and innovation process [3]. The National Institute for Health Research (NIHR) in England promotes involving patients and the public in health research, not only as research participants, but also as research users 'advising [NIHR] about what research should be funded and helping to design research studies' [114]. The assessment system SARIS in Catalonia includes, hand-in-hand with the evaluation, engagement with stakeholders as a means of enhancing research impact [115].
Effective translation of RIA into practice can be further facilitated by developing interpersonal engagement skills and cultural competence. Knowing one's own and one's team's preferences and biases well is as important as knowing how to engage with stakeholders from diverse cultures and backgrounds who may have a different set of values, preferences and biases. Interpersonal skills and cultural competencies are required to engage with such stakeholders without compromising the robustness and rigour of RIA. According to the American Evaluation Association, cultural competence requires 'awareness of self, reflection on one's own cultural position, awareness of others' positions, and the ability to interact genuinely and respectfully with others' [116].
5. Choose conceptual frameworks critically
Conceptual frameworks can support RIA by reducing the complexity of the phenomenon under investigation for the purposes of data collection, organisation and analysis. Frameworks can also help address major methodological challenges and support comparisons of research impact across different disciplines, institutions and countries.
Research impact is a complex and often unpredictable phenomenon, which makes the task of assessing it difficult [117]. Conceptual frameworks can help make this task easier in a number of ways. First, RIA practitioners can use readily available conceptual frameworks to reduce the complexity of the phenomenon under investigation for the purposes of data collection, organisation and analysis. Second, frameworks can help address major methodological challenges of RIA, such as attribution (assigning the right impacts to a specific piece of research or vice versa), time lag (determining the time for impact and the right timing to engage in a RIA) and the counterfactual (examining what would have happened if the given piece of research did not occur). Third, frameworks can allow transparent, longitudinal and quantifiable comparisons of research impact across different disciplines, institutions and countries. Finally, frameworks can facilitate communication of the results of RIA to stakeholders and the public in a clear and accessible manner. Yet, because frameworks deliberately reduce the complexity of the phenomenon under investigation, they need to be selected critically and transparently.
As stated in the founding values (Fig. 1), ISRIA does not advocate for any specific framework, but recommends choosing frameworks critically, in a way that fits the context and purpose of a given RIA exercise, and explicitly stating the limitations of the chosen framework. There are a number of literature reviews to help practitioners understand the advantages and limitations of different conceptual frameworks and approaches [87, 117-121]. For example, the Payback Framework has been widely used for an understanding of the research process and pathways to impact in the United Kingdom and many other countries. The CAHS model [62], an adaptation of the Payback Framework, has been widely used in Canada because it aims to provide consistency and comparability while remaining flexible for interpretation at different levels tailored to the Canadian context. The CAHS model has also been used in Spain for communication, advocacy and formative purposes [51, 52].
6. Use mixed methods and multi-data sources
RIA is best approached using a combination of mixed
methods and a variety of data sources. Triangulating
methods and data sources can enhance the robustness
and trustworthiness of the assessment.
Unlike basic science, which strives to conduct valid and
reliable research with generalisable findings, RIA strives to
understand research impact from the perspectives of
certain stakeholders. Given the applied nature of RIA, the
value of RIA can be increased by enhancing the
robustness of methods and data as well as ensuring the
trustworthiness of findings and recommendations. An
effective way to do so is to triangulate different methods
and data, i.e. to use more than one method and data
source to develop rich accounts of research impact. If
these accounts point to the same result, then it is deemed
to be trustworthy. RIA practitioners are not expected to
be experts in all methods, but need to understand the
advantages and disadvantages, scope and limitations of
different methods in order to gather data and choose
methods that address the stakeholder assessment
questions in the most effective and efficient way.
Different design approaches have different strengths and weaknesses, and the selection of methods implies trade-offs between structured and purposive designs, stratified and random experiments [122]. Case studies provide powerful narratives that can be easily understood by stakeholders and the general public, but on the other hand they are costly, time consuming, burdensome and might be perceived as subjective. Surveys allow the collection of a large amount of data from a wide range of stakeholders with a relatively low burden on key informants, but potential limitations include sampling errors, low response rates and inadequate questionnaire validation in different contexts. Bibliometric approaches are well established, have a broad range of data sources available and are capable of providing robust quantitative analysis, but caution must be exercised in using non-normalised indicators, especially outside the natural and health sciences. Further, they also tell us little about impact beyond academia.
In choosing the appropriate mix of methods, practitioners face the crucial decision of which methods to mix and how many of them. The number of methods is usually determined by the number of questions requiring different data and by the saturation point when adding more methods does not improve triangulation results. Before collecting data ex novo, it is important to map available internal data and external sources. Conducting complex analyses in-house might not be cost effective, as certain types of analysis, particularly those requiring specialist technical expertise (e.g. bibliometrics or economic returns), can be effectively contracted out. It is also important to consider cost implications and practicality issues to ensure RIA is affordable, cost-effective and efficient. Finally, it might be relevant to analyse whether the potential results can address the type of messages expected or needed by stakeholders and end-users of the research.
7. Select indicators and metrics responsibly
The misuse of quantitative indicators and metrics can lead to gaming and unintended negative results. Any quantitative indicators and metrics need to be used responsibly relative to the context and in support of other types of evidence. When using specialised methodologies such as bibliometrics or econometrics, it is advisable to follow critical recommendations from experts.
A key concern for measuring impact is reflected in a statement attributed to Albert Einstein, 'not everything that counts can be counted'; in other words, there is a risk of focusing on what is measurable rather than what is important. This is particularly relevant when taking account of the context of each given impact assessment. On the other hand, indicators and metrics can be used to think through what counts as evidence, demonstrating whether impact occurred or not. Indicators provide signals of impact, but do not provide a comprehensive assessment of the full range of impacts or the many factors that contributed to those impacts. The intention is to use indicators and metrics as one line of evidence to make better decisions. The misuse of quantitative measurement can lead to unintended negative results, such as the pressure to 'publish or perish' at all costs or excessive self-citation in research. To avoid such unintended behaviours, indicators and metrics need to be selected responsibly. Namely, it is recommended that a balanced set (menu) of indicators and metrics is used to answer the stakeholder assessment questions that focus on their impacts of interest.
Measuring impact is a practice that requires measurement expertise and a transparent participatory process to ensure that recommended indicators are valid, reliable and socially robust. Using a mix of quantitative and qualitative measures can help understand the 'what' but also the 'how' and 'why' of the impacts that occurred. As was noted in the recent Metric Tide report on the role of metrics in research assessment, carefully selected indicators can complement decision-making, but a 'variable geometry' of expert judgement, quantitative indicators and qualitative measures that respect research diversity will be required [94]. Indicator expert panels and Delphi surveys [123] can be used to take into account the opinions of a diverse sample of experts in the selection of the best impact indicators and metrics. Involving lay members of the public, stakeholders and research end-users in the development and selection of indicators can increase the social robustness of selecting indicators as well as provide a balanced set of perspectives. Selecting sets of indicators and metrics that conform to best practice criteria, such as Focused, Appropriate, Balanced, Robust, Integrated, Cost Effective (FABRIC), will also help ensure proper use and quality [124]. Finally, the many cautions for measuring impact can be addressed by establishing mitigating strategies prior to implementation (Table 1).
Table 1 Measurement cautions and mitigating strategies
Caution: only selecting available indicators. Mitigating strategy: identify a menu of aspirational indicators and data sources.
Caution: measuring too many things. Mitigating strategy: select a key set of indicators.
Caution: using only lagging indicators. Mitigating strategy: balance with leading indicators.
Caution: double counting. Mitigating strategy: look at contributions from different stakeholders.
Caution: focusing on the indicator. Mitigating strategy: focus on the programme change.

Alberta Innovates provides a case example of integrating measurement into its impact assessments. It is a Canadian-based, publicly funded provincial research and innovation funding organisation mandated to improve the social and economic well-being of Albertans. It uses a standardised Research to Impact Framework for health sciences that guides the selection of indicators and use of mixed methods in impact assessment [66]. Assessments are conducted at the programme, portfolio, organisational and system levels. The framework was designed to answer stakeholder questions and identify their impacts using the five CAHS impact categories. A mixed methods and multi-data source approach is used to assess the impacts of its investments across the funding cycle. Impact measures are collected annually, additional indicators and measures are collected through annual scheduled evaluations, and impact case studies are conducted retrospectively in assessing and communicating impact. Table 2 provides a sample of impact indicators.
Recently, there has been an explosion of commercially available tools and platforms for reporting impact metrics in a standardised manner, with many research funders developing their own. In choosing between readily available reporting tools and developing new ones, the following criteria can be used to evaluate their effectiveness against the ideal system proposed by Wooding et al. [125]:

Capturing the full range of impact and benefits;
Allowing aggregation of impacts as well as their disaggregated reporting;
Valuating different types of impacts in a common currency;
Ensuring a low burden on researchers and having low administration costs;
Capturing and comparing information fairly across different grants or types of research;
Providing timely information while allowing time for impact to occur.
8. Anticipate and address ethical issues and conflicts of interest

Undertaking RIA and implementing its recommendations may raise ethical issues and create conflicts of interest at both personal and organisational levels. Anticipating and addressing such ethical issues and conflicts of interest can help maximise the social value of RIA.
When contacting researchers and organisations for information, it is important to state the purpose of RIA and to consider how they may perceive the RIA practitioner's aims and intentions. Researchers may often be asked to report back to funders on the results of their work; if their information is likely to affect future funding, they should be made aware of this. Nevertheless, even if RIA is carried out independently with no implications for further funding, this reassurance should also be made explicit. Moreover, RIA practitioners should be mindful of the burden they may place on researchers and organisations while emphasising the importance and need for them to report accurately and comprehensively for good quality RIA studies to take place. A significant commitment of time and effort to provide the required information or to participate in RIA studies may also create a conflict of commitment.
Academic or professional recognition, funding and
other direct, indirect, actual or potential personal and
organisational benefits associated with certain RIA
projects and commissions may raise further ethical
issues and create perverse incentives for biased
assessment in favour of RIA commissioners or some
other stakeholders that run counter to the public good.
Therefore, undertaking RIA purely as a technocratic
exercise may lead to conflicts of interest between
individuals and organisations undertaking it on the one
hand, and the wider society on the other. In line with
the values of ISRIA, such conflicts of interest should be
anticipated and addressed to maximise the social value
of RIA. Given that RIA is undertaken by individuals
from different organisations and professions, in
anticipating and addressing ethical issues and conflicts
of interest, RIA practitioners should follow their
organisational and professional ethical regulations and
codes of practice, exercise their personal judgement and
be aware of their own personal cognitive biases.
Different organisations and professions have different
ethical regulations and codes of practice for disclosing and
avoiding conflicts of interest with regard to employment,
funding, remuneration, hospitality, consultancy, intellectual
property, paid governance and advisory roles, paid
membership of speakers' panels, and other benefits that
may impair the objectivity and impartiality of RIA as well
as create conflicts of commitment. For example, the
American Evaluation Association considers honesty/
integrity and responsibilities for general and public welfare
as some of the most important guiding principles for the
profession of evaluation, in particular with regards to the
scope of evaluation and its results, costs, methodological
limitations, changes to the project plans, objectivity, underlying interests and values, freedom of information, and maintaining a balance between client needs and other needs [126].

Table 2 Sample of impact indicators in health research
Capacity-building: leveraged funding, research tools and methods, use of facilities and resources, career trajectory of researchers.
Advancing knowledge: bibliometrics, engagements, esteem measures, collaborations and partnerships.
Informing decision-making: influence on policies, practices, products, processes and behaviours (both in health and the determinants of health).
Health: medical and health interventions, health quality indicators, health status.
Economic and social benefits: intellectual property and licensing, spin-outs, economic returns, jobs, economic diversity and productivity.
Social engagement: public involvement, dissemination, engagement with relevant patient or commissioning groups, culture and creativity.
9. Communicate results through multiple channels
A comprehensive and diversified communication strategy
can facilitate effective translation of RIA results into
practice. Different stakeholders can be reached most
effectively using different communication channels and
messages tailored according to their needs and knowledge
uptake capacities.
Effective translation of RIA results into practice depends on an effective communication strategy and skills as much as anything else. Understanding how different stakeholders are best approached, and tailoring messages based on RIA results according to their needs, context and preferred communication means, is imperative. Whereas a detailed RIA report may be an effective communication strategy for one group of stakeholders, others would better appreciate a summary of the key messages in lay terms, an executive summary, or a peer-reviewed academic publication. For example, to support active public involvement in health research, the United Kingdom national advisory group INVOLVE recommends that all research applications and reports include a plain English summary, 'a brief summary that has been written for members of the public and an interested audience rather than specialists', written clearly and simply, without jargon and with an explanation of any technical terms that have to be included [127]. Even within the RIA community, it is worth noting that terms may be used differently from country to country or in different research disciplines, making it important to carefully define concepts and terms and avoid overly technical language as far as possible.
Communication is also crucial for effective engagement throughout the RIA process. Communication of your RIA plan, process and findings is likely to be strengthened by the use of visualisation tools such as infographics, diagrams, charts and other visual aids. A recent example of innovative visualisation of RIA results uses infographics, alluvial and chord diagrams, word clouds, heat maps and impact wheels, synthesising complex data to reveal where research has had a societal impact [36]. With the increasing use of the web and social media by research stakeholders, impact assessment results can be rapidly communicated through a variety of media, including research blogs, social networks and web feeds. Such communication channels are also useful tools for establishing and maintaining international networks, supporting collaboration, and building communities of practice around particular areas and approaches.
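As a minimal example of the kind of visual synthesis described above, the sketch below draws a simple horizontal bar chart of (invented) counts of documented impact examples per CAHS-style category; real RIA communications would typically use richer graphics such as chord diagrams or impact wheels.

# Illustrative visual summary of RIA findings: a bar chart of the
# number of documented impact examples per category. Counts invented.
import matplotlib.pyplot as plt

categories = ["Capacity-building", "Advancing knowledge",
              "Informing decision-making", "Health",
              "Economic and social benefits"]
counts = [12, 30, 9, 7, 5]  # hypothetical numbers of impact examples

fig, ax = plt.subplots(figsize=(8, 4))
ax.barh(categories, counts)
ax.set_xlabel("Documented impact examples")
ax.set_title("Where did the assessed research have impact? (illustrative)")
fig.tight_layout()
fig.savefig("impact_summary.png")  # embed in reports, blogs or infographics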
10. Share your learning with the RIA community
Major scientific advances and impact on policy and
practice are often achieved through the mutual learning
of scholars and practitioners. For RIA to continue
developing its methods and grow its evidence base, it is
important for scholars and practitioners to share their
learning with the RIA community of practice.
As a multidisciplinary field of practice, RIA is sustained by the empirical knowledge and practical skills of the community of practice. The latter is 'a group of people who share a concern, set of problems, or a passion about a topic, and who deepen their knowledge and expertise in this area by interacting on an ongoing basis' [128]. Sharing learning with the RIA community of practice can be done in several ways. First, publication in peer-reviewed journals allows shared learning and ensures its quality and trustworthiness through peer review. Open access publication has the additional benefits of increased visibility, citation, usage and attention [129]. Second, participation in conferences, workshops and other similar events allows quicker sharing of learning as well as the development of trust and collaboration between event participants. Third, the internet provides the quickest opportunity to share knowledge and data through websites and blogs as well as to establish collaboration between scholars and practitioners across many countries through social networks (https://www.linkedin.com/groups/5180935) and social media (https://twitter.com/resimpactschool). Finally, professional associations such as the American Evaluation Association (http://www.eval.org/), professional societies such as the International Society for Scientometrics and Informetrics (http://issi-society.org), and professional training and development networks such as ISRIA (http://theinternationalschoolonria.com/) offer platforms for interaction and mutual learning through regional or international courses, conferences, workshops and thematic groups where practitioners can discuss and learn about specific topics.
Conclusions
The guidelines can help practitioners improve the process of RIA, but they are by no means exhaustive and require evaluation and continuous improvement. The prima facie effectiveness of the guidelines is based on the systematised expert and practitioner knowledge of the school's faculty and participants, derived from their practical experience and research evidence. The current knowledge base has gaps in terms of geographical and scientific areas as well as stakeholder coverage and representation. With that in mind, we invite readers to put these guidelines into practice, develop them further, and strengthen them through evaluation and continuous improvement. We also encourage the sharing of experience and cultural competence gained through implementing these guidelines in new contexts. In doing so, we hope these guidelines facilitate the further development of a global RIA community of practice.
Abbreviations
CAHS: Canadian Academy of Health Sciences; ERA: Excellence in Research for Australia; ISRIA: International School on Research Impact Assessment; NIHR: National Institute for Health Research; REF: Research Excellence Framework; RIA: Research Impact Assessment; SARIS: Sistema d'Avaluació de la Recerca i Innovació en Salut (System of Evaluation of Health Research)
Acknowledgements
The authors gratefully acknowledge the ISRIA faculty, invited speakers, panellists,
alumni and participants who contributed to ISRIA activities and provided valuable
insights for the elaboration of the statement. In particular, the authors would like
to acknowledge and remember Cy Frank, who was passionate about research
impact and a great advocate for the science of science. He was the scientific
director of the second edition of ISRIA held in Banff, Canada, in 2014. The authors
thank Neda Ahchieva, Imma Guillamon, Bhawna Singh, Joyce A. Talabong, Leonie
Van Drooge, and Adam Kamenetzky for their support and collaboration, Marianne
Siem for her expert assistance with graphic design, and Catherine Cooper for her
support in assembling the legacy of ISRIA resources on the website.
Funding
This paper received no specific grant from any funding agency in the public,
commercial or not-for-profit sectors. PVO is supported by the National
Institute for Health Research (NIHR) Biomedical Research Centre, Oxford, and
by the European Union's Horizon 2020 research and innovation programme
under grant agreement No. 709517. Article processing charges were funded
by AQuAS.
Availability of data and materials
From October 2017, an archive of the materials used and presented in the five editions of ISRIA is available on the ISRIA web page (www.theinternationalschoolonria.com).
Disclaimer
The views expressed are those of the authors and not necessarily those of the authors' representative organisations, funders or sponsors.
Authors' contributions
All authors contributed towards the elaboration of the statement and
writing of the paper. PA, JG and KG founded ISRIA, elaborated the ISRIA
values, and conceived of the structure, learning approach and lessons of the
ISRIA programme. PVO and PA conceived the paper and led the drafting of
the manuscript. All authors read and approved the final manuscript.
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
PA, JG, KEAG, AP, OFB, A-MD, GVB, RNC, OS, SHK and MT are members of the
ISRIA Steering Committee. The authors all received reimbursement of travel
expenses and honoraria to attend ISRIA events. ISRIA received sponsorship
support from Researchfish®, and Beverley Sherbon of Researchfish®
participated in ISRIA 2016 as faculty.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in
published maps and institutional affiliations.
Author details
1 Agency for Health Quality and Assessment of Catalonia (AQuAS), Carrer de Roc Boronat, 81, ES-08005 Barcelona, Spain. 2 Radcliffe Department of Medicine, University of Oxford, John Radcliffe Hospital, Oxford OX3 9DU, United Kingdom. 3 The Policy Institute, King's College London, Strand Campus, London WC2R 2LS, United Kingdom. 4 Alberta Innovates, 10104-103 Avenue NW, Edmonton, AB T5J 4A7, Canada. 5 Qatar National Research Fund, PO Box 5825, Doha, Qatar. 6 Commonwealth Scientific and Industrial Research Organisation, PO Box 883, Kenmore, Brisbane 4069, Australia. 7 Novo Nordisk Foundation, Tuborg Havnevej 19, DK-2900 Hellerup, Denmark. 8 National Institute for Health Research, Central Commissioning Facility, Grange House 15, Church Street, Twickenham TW1 3NL, United Kingdom. 9 Fundación Huésped, Pasaje A. Peluffo 3932, Buenos Aires C1202ABB, Argentina.
Received: 29 July 2017 Accepted: 10 January 2018
References
1. European Parliament and the Council. Regulation (EU) No 1291/2013 of the
European Parliament and of the Council of 11 December 2013 establishing
Horizon 2020 – the Framework Programme for Research and Innovation
(2014-2020) and repealing Decision No 1982/2006/EC Text with EEA
relevance. http://data.europa.eu/eli/reg/2013/1291/oj. Accessed 8 Dec 2017.
2. 2014 Italian Presidency of the Council of the European Union. Rome
Declaration on Responsible Research and Innovation in Europe. https://ec.
europa.eu/research/swafs/pdf/rome_declaration_RRI_final_21_November.
pdf. Accessed 8 Dec 2017.
3. European Commission. Science With and For Society. https://ec.europa.eu/
programmes/horizon2020/en/h2020-section/science-and-society. Accessed
8 Dec 2017.
4. Trudeau J. Minister of Innovation, Science and Economic Development
Mandate Letter. http://pm.gc.ca/eng/minister-innovation-science-and-
economic-development-mandate-letter. Accessed 8 Dec 2017.
5. Government of Canada. Policy on Results. https://www.tbs-sct.gc.ca/pol/
doc-eng.aspx?id=31300. Accessed 8 Dec 2017.
6. Australian Government. National Innovation and Science Agenda Report.
https://www.innovation.gov.au/page/national-innovation-and-science-
agenda-report. Accessed 16 Jul 2017.
7. Science of Science Policy. Advancing Science Policy through Science.
https://archive.is/sVXly. Accessed 21 Jan 2018.
8. Fealing KH. The Science of Science Policy: A Handbook. Stanford: Stanford
Business Books; 2011.
9. Centre for Science and Policy. Research and Policy Engagement. http://
www.csap.cam.ac.uk/. Accessed 22 Jan 2018.
10. Lane J. Let's make science metrics more scientific. Nature. 2010;464(7288):488–9.
11. Lane J, Bertuzzi S. Research funding. Measuring the results of science
investments. Science. 2011;331(6018):678–80.
12. Smith R. Measuring the social impact of research: difficult but necessary. BMJ.
2001;323(7312):528.
13. Macleod MR, Michie S, Roberts I, Dirnagl U, Chalmers I, Ioannidis JP, et al.
Biomedical research: increasing value, reducing waste. Lancet.
2014;383(9912):101–4.
14. The Economist. How Science Goes Wrong. 2013. http://www.economist.
com/news/leaders/21588069-scientific-research-has-changed-world-now-it-
needs-change-itself-how-science-goes-wrong. Accessed 8 Dec 2017.
15. Munafò MR, Nosek BA, Bishop DVM, Button KS, Chambers CD, Percie du Sert N,
et al. A manifesto for reproducible science. Nature Human Behaviour.
2017;1:0021.
16. Macilwain C. Science economics: what science is really worth. Nature.
2010;465(7299):682–4.
17. Unknown quantities. Nature. 2010;465(7299):665–6.
18. Must try harder. Nature. 2012;483(7391):509.
19. Science – The Endless Frontier. A Report to the President by Vannevar Bush,
Director of the Office of Scientific Research and Development, July 1945.
https://www.nsf.gov/od/lpa/nsf50/vbush1945.htm. Accessed 8 Dec 2017.
20. Sherwin CW, Isenson RS. Project hindsight. A Defense Department study of
the utility of research. Science. 1967;156(3782):1571–7.
21. Comroe Jr JH, Dripps RD. Scientific basis for the support of biomedical
science. Science. 1976;192(4235):105–11.
22. Griliches Z. The Search for R&D Spillovers. Scand J Econ. 1992;94:S29–47.
23. Kostoff RN. Research impact assessment. Principles and applications to
proposed, ongoing, and completed projects. Invest Radiol. 1994;29(9):864–9.
24. Pielke R. In Retrospect: Science – The Endless Frontier. Nature. 2010;466(7309):
922–3.
25. Marjanovic S, Hanney S, Wooding S. A Historical Reflection on Research
Evaluation Studies, Their Recurrent Themes and Challenges. http://www.
rand.org/pubs/technical_reports/TR789.html. Accessed 8 Dec 2017.
26. Higher Education Funding Council for England. Assessment Framework and
Guidance on Submissions. http://www.ref.ac.uk/2014/pubs/2011-02/.
Accessed 22 Jan 2018.
27. European Commission. Past Evaluations: Evaluation of FP7. https://ec.europa.
eu/research/evaluations/index_en.cfm?pg=fp7. Accessed 8 Dec 2017.
28. European Commission. Horizon 2020 Evaluations. https://ec.europa.eu/
research/evaluations/index_en.cfm?pg=h2020evaluation. Accessed 8 Dec 2017.
29. European Commission. High Level Group on Maximising Impact of EU
Research and Innovation Programmes. https://ec.europa.eu/research/
evaluations/index_en.cfm?pg=hlg. Accessed 8 Dec 2017.
30. van den Akker W, Spaapen J. Productive Interactions: Societal Impact of
Academic Research in the Knowledge Society. https://www.leru.org/
publications/productive-interactions-societal-impact-of-academic-research-
in-the-knowledge-society. Accessed 22 Jan 2018.
31. Buxton M, Hanney S. How can payback from health research be assessed?
J Health Serv Res Policy. 1996;1(1):35–43.
32. Hanney SR, Gonzalez-Block MA, Buxton MJ, Kogan M. The utilisation of
health research in policy-making: concepts, examples and methods of
assessment. Health Res Policy Syst. 2003;1:2.
33. Hanney SR, Grant J, Wooding S, Buxton MJ. Proposed methods for
reviewing the outcomes of health research: the impact of funding by
the UK's 'Arthritis Research Campaign'. Health Res Policy Syst. 2004;2:4.
34. Donovan C, Hanney S. The 'Payback Framework' explained. Res Eval.
2011;20(3):181–3.
35. Greenhalgh T, Fahy N. Research impact in the community-based health
sciences: an analysis of 162 case studies from the 2014 UK Research
Excellence Framework. BMC Med. 2015;13:232.
36. King's College London, Digital Science. The Nature, Scale and Beneficiaries
of Research Impact: An Initial Analysis of Research Excellence Framework
(REF) 2014 Impact Case Studies. http://www.hefce.ac.uk/media/HEFCE,2014/
Content/Pubs/Independentresearch/2015/Analysis,of,REF,impact/Analysis_
of_REF_impact.pdf. Accessed 8 Dec 2017.
37. Glover M, Buxton M, Guthrie S, Hanney S, Pollitt A, Grant J. Estimating the
returns to UK publicly funded cancer-related research in terms of the net
value of improved health outcomes. BMC Med. 2014;12:99.
38. Sussex J, Feng Y, Mestre-Ferrandiz J, Pistollato M, Hafner M, Burridge P, et al.
Quantifying the economic impact of government and charity funding of
medical research on private research and development funding in the
United Kingdom. BMC Med. 2016;14:32.
39. Health Economics Research Group, Office of Health Economics, RAND
Europe. Medical Research: What's It Worth? https://www.ohe.org/
publications/medical-research-whats-it-worth. Accessed 8 Dec 2017.
40. Haskel J, Hughes A, Bascavusoglu-Moreau E. The Economic Significance of
the UK Science Base: A Report for the Campaign for Science and
Engineering. http://www.sciencecampaign.org.uk/resource/UKScienceBase.
html. Accessed 8 Dec 2017.
41. Medical Research Council. Outputs, Outcomes and Impact of MRC Research:
2014/15 Report. https://www.mrc.ac.uk/successes/outputs-report/. Accessed
8 Dec 2017.
42. Research Councils UK. Research Outcomes Overview. http://www.rcuk.ac.uk/
research/researchoutcomes/. Accessed 8 Dec 2017.
43. Hinrichs S, Grant J. A new resource for identifying and assessing the
impacts of research. BMC Med. 2015;13:148.
44. Ovseiko PV, Oancea A, Buchan AM. Assessing research impact in academic
clinical medicine: a study using Research Excellence Framework pilot impact
indicators. BMC Health Serv Res. 2012;12:478.
45. Hanney SR, Castle-Clarke S, Grant J, Guthrie S, Henshall C, Mestre-
Ferrandiz J, et al. How long does biomedical research take? Studying
the time taken between biomedical and health research and its
translation into products, policy, and practice. Health Res Policy Syst.
2015;13:1.
46. Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the
question: understanding time lags in translational research. J R Soc
Med. 2011;104(12):510–20.
47. Ovseiko PV, Greenhalgh T, Adam P, Grant J, Hinrichs-Krapels S, Graham KE,
et al. A global call for action to include gender in research impact
assessment. Health Res Policy Syst. 2016;14:50.
48. Ovseiko PV, Edmunds LD, Pololi LH, Greenhalgh T, Kiparoglou V, Henderson LR.
Markers of achievement for assessing and monitoring gender equity in
translational research organisations: a rationale and study protocol. BMJ Open.
2016;6(1):e009022.
49. Kalpazidou Schmidt E, Cacace M. Addressing gender inequality in science:
the multifaceted challenge of assessing impact. Res Eval. 2017;26(2):102–14.
50. Pollitt A, Potoglou D, Patil S, Burge P, Guthrie S, King S, et al. Understanding
the relative valuation of research impact: a best–worst scaling experiment
of the general public and biomedical and health researchers. BMJ Open.
2016;6(8):e010916.
51. Adam P, Solans-Domènech M, Pons JMV, Aymerich M, Berra S, Guillamon I,
et al. Assessment of the impact of a clinical and health services research call
in Catalonia. Res Eval. 2012;21(4):319–28.
52. Solans-Domènech M, Adam P, Guillamón I, Permanyer-Miralda G, Pons JM,
Escarrabill J. Impact of clinical and health services research projects on
decision-making: a qualitative study. Health Res Policy Syst. 2013;11:15.
53. Aymerich M, Carrion C, Gallo P, Garcia M, López-Bermejo A, Quesada M, et al.
Measuring the payback of research activities: A feasible ex-post evaluation
methodology in epidemiology and public health. Soc Sci Med. 2012;75(3):505–10.
54. Spaapen J, Dijstelbloem H, Wamelink F. Evaluating Research in Context: A
Method for Comprehensive Assessment. Second edn. The Hague: Consultative
Committee of Sector Councils for Research and Development (COS); 2007.
55. Spaapen J, van Drooge L. Introducing 'productive interactions' in social
impact assessment. Res Eval. 2011;20(3):211–8.
56. Association of Universities in the Netherlands (VSNU), Netherlands
Organisation for Scientific Research (NWO), Royal Netherlands Academy of Arts
and Sciences (KNAW). Standard Evaluation Protocol 2015-2021: Protocol for
Research Assessments in the Netherlands. https://www.knaw.nl/nl/actueel/
publicaties/standard-evaluation-protocol-2015-2021. Accessed 8 Dec 2017.
57. National Science Foundation. Broader Impacts – Improving Society.
https://www.nsf.gov/od/oia/special/broaderimpacts/. Accessed 8 Dec 2017.
58. US Department of Health and Human Services. STAR METRICS® – Science
and Technology for America's Reinvestment: Measuring the EffecTs of
Research on Innovation, Competitiveness and Science. https://www.
starmetrics.nih.gov/. Accessed 8 Dec 2017.
59. Trochim WM, Marcus SE, Mâsse LC, Moser RP, Weld PC. The evaluation of
large research initiatives. Am J Eval. 2008;29(1):8–28.
60. Sarli CC, Dubinsky EK, Holmes KL. Beyond citation analysis: a model for
assessment of research impact. J Med Libr Assoc. 2010;98(1):17–23.
61. Buxton MJ, Schneider WL. Assessing the Payback from AHFMR-funded
Research. Edmonton: Alberta Heritage Foundation for Medical Research; 1999.
62. Frank C, Nason E. Health research: measuring the social, health and
economic benefits. CMAJ. 2009;180(5):528–34.
63. Panel on Return on Investment in Health Research. Making an Impact: A
Preferred Framework and Indicators to Measure Returns on Investment in
Health Research. Ottawa: Canadian Academy of Health Sciences; 2009.
64. Caddell AJ, Hatchette JE, McGrath PJ. Examining the impact of health
research facilitated by small peer-reviewed research operating grants in a
women's and children's health centre. BMC Res Notes. 2010;3:107.
65. Montague S, Valentim R. Evaluation of R&D: from prescriptions for justifying
to user-oriented guidance for learning. Res Eval. 2010;19(4):251–61.
66. Graham KER, Chorzempa HL, Valentine PA, Magnan J. Evaluating health
research impact: Development and implementation of the Alberta
Innovates Health Solutions impact framework. Res Eval. 2012;21(5):354–67.
67. Dobrow MJ, Miller FA, Frank C, Brown AD. Understanding relevance of
health research: considerations in the context of research impact
assessment. Health Res Policy Syst. 2017;15:31.
68. Shah S, Ward JE. Outcomes from NHMRC public health research project
grants awarded in 1993. Aust N Z J Public Health. 2001;25(6):556–60.
69. Clay MA, Donovan C, Butler L, Oldenburg BF. The returns from
cardiovascular research: the impact of the National Heart Foundation of
Australia's investment. Med J Aust. 2006;185(4):209–12.
70. Kingwell BA, Anderson GP, Duckett SJ, Hoole EA, Jackson-Pulver LR,
Khachigian LM, et al. Evaluation of NHMRC funded research completed in
1992, 1997 and 2003: gains in knowledge, health and wealth. Med J Aust.
2006;184(6):282–6.
71. Kalucy EC, Jackson-Bowers E, McIntyre E, Reed R. The feasibility of
determining the impact of primary health care research projects using the
Payback Framework. Health Res Policy Syst. 2009;7:11.
72. Reed RL, Kalucy EC, Jackson-Bowers E, McIntyre E. What research impacts do
Australian primary health care researchers expect and achieve? Health Res
Policy Syst. 2011;9:40.
73. Schapper CC, Dwyer T, Tregear GW, Aitken M, Clay MA. Research
performance evaluation: the experience of an independent medical
research institute. Aust Health Rev. 2012;36(2):218–23.
74. Milat AJ, Laws R, King L, Newson R, Rychetnik L, Rissel C, et al. Policy and
practice impacts of applied research: a case study analysis of the New
South Wales Health Promotion Demonstration Research Grants Scheme
2000–2006. Health Res Policy Syst. 2013;11:5.
75. Donovan C, Butler L, Butt AJ, Jones TH, Hanney SR. Evaluation of the impact
of National Breast Cancer Foundation-funded research. Med J Aust. 2014;
200(4):214–8.
76. Cohen G, Schroeder J, Newson R, King L, Rychetnik L, Milat AJ, et al. Does
health intervention research have real world policy and practice impacts:
testing a new impact assessment tool. Health Res Policy Syst. 2015;13:3.
77. Commonwealth Scientific and Industrial Research Organisation (CSIRO). Our
Impact. https://www.csiro.au/en/About/Our-impact. Accessed 8 Dec 2017.
78. Australian Research Council. Engagement and Impact Assessment. http://www.
arc.gov.au/engagement-and-impact-assessment. Accessed 8 Dec 2017.
79. Angelelli P, Gordon A, Di Marzo E, Peirano F, Moldovan P, Codner D.
Investigación científica e innovación tecnológica en Argentina: Impacto de
los fondos de la Agencia Nacional de Promoción Científica y Tecnológica
[Scientific research and technological innovation in Argentina: impact of the
funds of the National Agency for Scientific and Technological Promotion].
https://publications.iadb.org/handle/11319/382?locale-attribute=pt#sthash.
gLmho4ez.dpuf. Accessed 8 Dec 2017.
80. Angulo-Tuesta A, Santos LMP. Evaluation of the impact of maternal and
neonatal morbidity and mortality research funded by the Ministry of Health
in Brazil. Res Eval. 2015;24(4):355–68.
81. Brambila C, Ottolenghi E, Marin C, Bertrand JT. Getting results used:
evidence from reproductive health programmatic research in Guatemala.
Health Policy Plann. 2007;22(4):234–45.
82. Kwan P, Johnston J, Fung AY, Chong DS, Collins RA, Lo SV. A systematic
evaluation of payback of publicly funded health and health services
research in Hong Kong. BMC Health Serv Res. 2007;7:121.
83. Probandari A, Widjanarko B, Mahendradhata Y, Sanjoto H, Cerisha A, Nungky
S, et al. The path to impact of operational research on tuberculosis control
policies and practices in Indonesia. Glob Health Action. 2016;9(1):29866.
84. Yazdizadeh B, Majdzadeh R, Janani L, Mohtasham F, Nikooee S, Mousavi A, et al.
An assessment of health research impact in Iran. Health Res Policy Syst. 2016;14:56.
85. Grant J, Culbertson S, Al-Khater L, Al-Heidous A, Pollitt A, Castle-Clarke S,
et al. QNRF Impact Measurement Framework. Doha: QNRF; 2013.
86. Morgan Jones M, Grant J, et al. Making the grade: methodologies for assessing
and evidencing research impact. In: Dean A, Wykes M, Stevens H, editors. Seven
Essays on Impact. DESCRIBE project report for JISC. Exeter: University of Exeter;
2013. p. 25–43.
87. Guthrie S, Wamae W, Diepeveen S, Wooding S, Grant J. Measuring Research:
A Guide to Research Evaluation Frameworks and Tools. http://www.rand.
org/pubs/monographs/MG1217.html. Accessed 8 Dec 2017.
88. Chalmers I, Bracken MB, Djulbegovic B, Garattini S, Grant J, Gulmezoglu AM,
et al. How to increase value and reduce waste when research priorities are
set. Lancet. 2014;383(9912):156–65.
89. Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D,
et al. Increasing value and reducing waste in research design, conduct, and
analysis. Lancet. 2014;383(9912):166–75.
90. Al-Shahi Salman R, Beller E, Kagan J, Hemminki E, Phillips RS, Savulescu J,
et al. Increasing value and reducing waste in biomedical research regulation
and management. Lancet. 2014;383(9912):176–85.
91. Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gotzsche PC. Increasing
value and reducing waste: addressing inaccessible research. Lancet.
2014;383(9913):257–66.
92. Glasziou P, Altman DG, Bossuyt P, Boutron I, Clarke M, Julious S, et al.
Reducing waste from incomplete or unusable reports of biomedical
research. Lancet. 2014;383(9913):267–76.
93. Hicks D, Wouters P, Waltman L, de Rijcke S, Rafols I. Bibliometrics: The
Leiden Manifesto for Research Metrics. Nature. 2015;520(7548):429–31.
94. Wilsdon J, Allen L, Belfiore E, Campbell P, Curry S, Hill S, et al. The metric tide:
report of the independent review of the role of metrics in research assessment
and management. http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/
Independentresearch/2015/The,Metric,Tide/2015_metric_tide.pdf.
Accessed 8 Dec 2017.
95. The International School on Research Impact Assessment. What is Research Impact
Assessment (RIA)? http://theinternationalschoolonria.com/whatisRIA.php.
Accessed 8 Dec 2017.
96. Times Higher Education. World University Rankings. https://www.
timeshighereducation.com/world-university-rankings. Accessed 8 Dec 2017.
97. ShanghaiRanking. Global Ranking of Academic Subjects. http://www.
shanghairanking.com/. Accessed 8 Dec 2017.
98. Centre for Science and Technology Studies. The CWTS Leiden Ranking.
http://www.leidenranking.com/. Accessed 8 Dec 2017.
99. Saldaña J, Mallette LA. Environmental coding: a new method using the
SPELIT environmental analysis matrix. Qual Inq. 2017;23(2):161–7.
100. Wooding S, Hanney S, Buxton M, Grant J. The Returns from Arthritis
Research. Volume 1: Approach, Analysis and Recommendations. www.rand.
org/content/dam/rand/pubs/monographs/2004/RAND_MG251.pdf.
Accessed 8 Dec 2017.
101. Wooding S, Hanney S, Pollitt A, Buxton M, Grant J. Project Retrosight.
Understanding the Returns from Cardiovascular and Stroke Research: The
Policy Report. www.rand.org/content/dam/rand/pubs/monographs/2011/
RAND_MG1079.pdf. Accessed 8 Dec 2017.
102. Wooding S, Pollitt A, Castle-Clarke S, Cochran G, Diepeveen S, Guthrie S,
et al. Mental Health Retrosight: Understanding the Returns from Research
(Lessons from Schizophrenia): Policy Report. https://www.rand.org/pubs/
research_reports/RR325.html. Accessed 8 Dec 2017.
103. Guthrie S, Kirtley A, Garrod B, Pollitt A, Grant J, Wooding S. A 'DECISIVE' Approach
to Research Funding: Lessons from Three Retrosight Studies. https://www.rand.
org/pubs/research_reports/RR1132.html. Accessed 8 Dec 2017.
104. Australian Research Council. Excellence in Research for Australia.
http://www.arc.gov.au/excellence-research-australia. Accessed 8 Dec 2017.
105. ACIL Allen Consulting. Benefits Realisation Review of Excellence in Research
for Australia: Final Report. http://www.arc.gov.au/sites/default/files/filedepot/
Public/ERA/Benefits%20realisation%20review.pdf. Accessed 8 Dec 2017.
106. Bryson JM. What to do when stakeholders matter. Public Manage Rev.
2004;6(1):21–53.
107. Mitchell RK, Agle BR, Wood DJ. Toward a theory of stakeholder
identification and salience: defining the principle of who and what really
counts. Acad Manage Rev. 1997;22(4):853–86.
108. Eden C, Ackermann F. Making Strategy: The Journey of Strategic
Management. London: Sage; 1998.
109. Bryson JM, Patton MQ, Bowman RA. Working with evaluation stakeholders: a
rationale, step-wise approach and toolkit. Eval Program Plann. 2011;34(1):1–12.
110. Price D, editor. The Principles and Practice of Change. Basingstoke: Palgrave
Macmillan; 2009.
111. Gibbons M, Limoges C, Nowotny H, Schwartzman S, Scott P, Trow M. The
New Production of Knowledge: The Dynamics of Science and Research in
Contemporary Societies. London: Sage; 1994.
112. Gibbons M. Science's new social contract with society. Nature.
1999;402(6761 Suppl):C81–4.
113. Nowotny H, Scott P, Gibbons M. Re-Thinking Science: Knowledge and the
Public in an Age of Uncertainty. Cambridge: Polity Press; 2001.
114. National Institute for Health Research (NIHR). Patients and the Public.
https://www.nihr.ac.uk/patients-and-public/. Accessed 8 Dec 2017.
115. Agency for Health Quality and Assessment of Catalonia (AQuAS).
Convocatòries PERIS 2016-2020 [PERIS 2016-2020 calls]. http://aquas.gencat.cat/ca/projectes/mes_
projectes/avaluacio_convocatories_recerca/PERIS/. Accessed 8 Dec 2017.
116. American Evaluation Association. American Evaluation Association Public
Statement on Cultural Competence in Evaluation. http://www.eval.org/
ccstatement. Accessed 8 Dec 2017.
117. Milat AJ, Bauman AE, Redman S. A narrative review of research impact
assessment models and methods. Health Res Policy Syst. 2015;13:18.
118. Greenhalgh T, Raftery J, Hanney S, Glover M. Research impact: a narrative
review. BMC Med. 2016;14:78.
119. Banzi R, Moja L, Pistotti V, Facchini A, Liberati A. Conceptual frameworks and
empirical approaches used to assess the impact of health research: an
overview of reviews. Health Res Policy Syst. 2011;9:26.
120. Hanney S, Greenhalgh T, Blatch-Jones A, Glover M, Raftery J. The impact on
healthcare, policy and practice from 36 multi-project research programmes:
findings from two reviews. Health Res Policy Syst. 2017;15:26.
121. Buxton M, Hanney S, Jones T. Estimating the economic value to societies of
the impact of health research: a critical review. Bull World Health Organ.
2004;82(10):733–9.
122. Davies R, Mayne J, Befani B, Forss K, Stame N, Stern E. DFID Working Paper
38. Broadening the Range of Designs and Methods for Impact Evaluations.
https://www.gov.uk/dfid-research-outputs/dfid-working-paper-38-
broadening-the-range-of-designs-and-methods-for-impact-evaluations.
Accessed 8 Dec 2017.
123. Gagliardi AR, Simunovic M, Langer B, Stern H, Brown AD. Development of
quality indicators for colorectal cancer surgery, using a 3-step modified
Delphi approach. Can J Surg. 2005;48(6):441–52.
Adam et al. Health Research Policy and Systems (2018) 16:8 Page 15 of 16
124. Treasury HM, Cabinet Office, National Audit Office, Audit Commission, Office
for National Statistics. Choosing the Right FABRIC: A Framework for
Performance Information. London: TSO; 2001.
125. Wooding S, Nason E, Starkey T, Hanney S, Grant J. Mapping the Impact:
Exploring the Payback of Arthritis Research. Cambridge: RAND Europe; 2009.
126. American Evaluation Association. American Evaluation Association Guiding
Principles for Evaluators. http://www.eval.org/p/cm/ld/fid=51. Accessed
8 Dec 2017.
127. INVOLVE. Plain English Summaries. http://www.invo.org.uk/resource-centre/
plain-english-summaries/. Accessed 8 Dec 2017.
128. Wenger E, McDermott RA, Snyder W. Cultivating Communities of Practice: A
Guide to Managing Knowledge. Boston: Harvard Business School Press; 2002.
129. Wang X, Liu C, Mao W, Fang Z. The open access advantage considering
citation, article usage and social media attention. Scientometrics.
2015;103(2):555–64.
We accept pre-submission inquiries
Our selector tool helps you to find the most relevant journal
We provide round the clock customer support
Convenient online submission
Thorough peer review
Inclusion in PubMed and all major indexing services
Maximum visibility for your research
Submit your manuscript at
www.biomedcentral.com/submit
Submit your next manuscript to BioMed Central
and we will help you at every step:
Adam et al. Health Research Policy and Systems (2018) 16:8 Page 16 of 16
... The process of moving knowledge into use is rarely straightforward, and it poses additional challenges for evaluating research impacts. Significant time lags can occur between a project's end and the adoption of new practices or other types of change (Adam et al., 2018;Penfield et al., 2014;. It takes time to translate research results into usable formats, and for organisations to be ready to integrate new knowledge into practice (Greenhalgh et al., 2016;Oliver et al., 2014;Pedersen et al., 2020). ...
... It takes time to translate research results into usable formats, and for organisations to be ready to integrate new knowledge into practice (Greenhalgh et al., 2016;Oliver et al., 2014;Pedersen et al., 2020). These challenges make it difficult to attribute change directly to a research finding or project (Adam et al., 2018;Penfield et al., 2014;. Research is only one of several factors that create societal change. ...
... Unanticipated or emergent impacts are also more likely to be identified through open-ended qualitative inquiry (Meagher and Martin, 2017;. Impact evaluation should also incorporate experiences of societal partners who were involved in, or benefited from, the research (Adam et al., 2018;Gunn and Mintrom, 2017). Partners can confirm that impacts occurred, add richness about the impacts they experienced, and describe new impacts not previously documented by researchers. ...
Article
Full-text available
Universities, researchers and funders are increasingly asking how research contributes to positive changes in society and the environment, and seeking ways to document and describe impacts consistently across diverse disciplines and organisational scales. The societal impacts framework presented in this pilot study uses a combination of impact goal and impact descriptor frameworks to elucidate the societal impacts of research. The framework blends elements of assessment-driven and mission-driven reporting frameworks, and was administered online to volunteers from one interdisciplinary environmental research institute. The 12 projects in the pilot study addressed 15 of the 17 UN Sustainable Development Goals, and all 12 projects reported impacts in two or more of six impact descriptor categories. We also identified an impact subcategory dealing with changes to higher education practice. Combining two types of impacts frameworks – societal goals and descriptors of changes – allowed us to understand how the research projects contributed to broad societal goals, not just that they addressed the goals. Responses from study participants indicated a good fit between the framework and their research efforts. However, we found that the online reporting tool, in its current form, was not effective in eliciting full and accurate reports from all participants. We reflect upon how to improve data collection in the future, as well as on opportunities for additional tests of the framework in new contexts.
... [8] Research impact is the contribution of research beyond academia [9] ; governments, funding agencies, and research organizations are interested in maximizing the socioeconomic returns on research investment by shaping policies, decisions, and practices. [10] Research impact assessment (RIA) is already being institutionalized in health research and innovation systems around the world, such as in Europe, North America, and Australia. [10] Payback, Research Impact Framework, Canadian Academy of Health Sciences, Monetization, Social Impact Assessment, and UK Research Excellence Framework are six established approaches to RIA. [9] RIA is being used as a practical tool for decision making, research resource allocation, and determining accountability to research funders by many research institutes and funding agencies. ...
... [10] Research impact assessment (RIA) is already being institutionalized in health research and innovation systems around the world, such as in Europe, North America, and Australia. [10] Payback, Research Impact Framework, Canadian Academy of Health Sciences, Monetization, Social Impact Assessment, and UK Research Excellence Framework are six established approaches to RIA. [9] RIA is being used as a practical tool for decision making, research resource allocation, and determining accountability to research funders by many research institutes and funding agencies. [11] In Iran, the Ministry of Health and Medical Education had launched a research activity evaluation system around 20 years ago. ...
Article
Full-text available
Background Research impact assessment is already being institutionalized in health research and innovation systems. In developing countries, there are many different research assessment models which have focused more on research output in academic levels and less on impact. Objective The aim of this study is designing an Iranian impact-oriented model of research and technology evaluation. Method This is a mixed study. In the quantitative part, by reviewing the literature, a list of research impact indicators that existed were gathered, reviewed, and scored by participants on importance, relevance, and measurability via a 5-point Likert scale. All indicators with a mean score equal to or greater than 3.5 entered the qualitative part, which were discussed in depth by engaging key stakeholders regarding their validity and feasibility through focus groups, interviews, and expert panels. Results The Iranian research impact evaluation model was developed with four main pillars (including input and process, output, outcome, and impact), four areas (stewardship, advancing knowledge and translation, technology, and impact), and 30 indicators through key stakeholders participation in the Iranian health research system. Conclusions This model has been introduced as the first model designed to evaluate the impact of health research and can be one of the most important tools for allocating limited funding resources while maximizing the desired impact of research in the community.
... Beyond economic influence, Kuruvilla et al. highlight the formulation of four central measures that can be used to assess the effects of medical and health studies: research-related, policy, service (health and intersectoral), and societal impacts [14]. Other models have been developed by the International School on Research Impact Assessment (ISRIA) [15], the Canadian Academy of Health Sciences (CAHS) [16], and the U.K. higher education institutes [17]. ...
Article
Full-text available
Background Research is the basis of advancement in health and wellbeing in modern societies. Our study aims to examine the funding policy of the Israel National Institute for Health Policy Research (NIHP), a national foundation responsible for assessing the impact of the national Health Insurance Law on health services in Israel. The study aims to evaluate the studies funded from 2010 to 2020, considering their publication in scientific literature and other channels that may influence decision-makers. We compare findings to a previous internal examination of studies funded by the NIHP during 1996–2014. Our paper presents an approach for measuring the impact of health policy research. Methods All 378 studies funded by NIHP during the specified years were identified. Objective data were gathered by investigating scientific literature across three datasets: Web of Science (WOS), PubMed, and Google Scholar, including journal impact factor, quarterly index, and citation metrics. Concurrently, a questionnaire was developed to collect additional and subjective data from principal investigators of the funded research projects. Results In the final sample of 364 studies funded by NIHP from 2010 to 2020, after 11 were cancelled, and 3 were duplicates. 436 publications were retrieved in peer-reviewed journals. The average time elapsed from funding to scientific publication was 4.65 years. Metric parameters for the top publications of 231 funded studies with at least one publication in peer-reviewed journals revealed an average journal impact factor of 5.97 points and an average of 7.82 citations according to WOS and 14 citations according to Google Scholar. A comparison to 459 funded studies from 1996 to 2014 found a twofold increase in the impact factor. Nearly half of the principal investigators reported some influence on policy processes in the questionnaires, and the majority of the studies were also reported in popular media outlets. Conclusions The study provides an overview of the quality and potential influence of studies funded by NIHP, dedicated to supporting research in the field of health policy in Israel. Some of the findings are supported by results from similar inquiries. Several recommendations are introduced to enhance the quality and impact of the funded studies.
... Inclusive, na tentativa de tornar mais eficiente a ação da pesquisa estadual, os IPs da APTA estão inseridos no Sistema Nacional de Pesquisa Agropecuária (SNPA), constituído pela EMBRAPA, pelas Organizações Estaduais de Pesquisa Agropecuária -OEPAS, por universidades e institutos de pesquisa de âmbito federal e estadual, além de outras organizações públicas e privadas, direta ou indiretamente vinculadas à atividade de pesquisa agropecuária (Pereira & Castro, 2017). Nesse cenário, a avaliação do impacto da pesquisa torna-se um elemento essencial para demonstrar o valor da pesquisa e, em função dos resultados, advogar pela continuidade da estrutura de pesquisa, bem como para a alocação de novos investimentos (Adam et al., 2018). ...
Article
Research conducted by Universities of Applied Sciences (UASs) is frequently driven by professional practice where researchers are challenged with finding solutions to real-life problems. These real-life solutions are significantly enhanced by the participation of stakeholders. Through this inclusion and the resulting interactions, activities, and knowledge transfer, between the stakeholder and research(ers), impacts occur at a micro level. Micro impacts are the normal impacts that occur during the research process through interactions between researchers and stakeholders, that facilitate an unexpected and unplanned effect, be it positive or negative (Lykke et al. 2023, Mapping Research Activities and Societal Impact by Taxonomy of Indicators: Uniformity and Diversity across Academic Fields, Journal of Documentation, 79: 1049–70). Contribution analysis has been recognized as a viable method for evaluating micro impacts. One recognized contribution analysis framework is Kok and Schuit’s (2012, Contribution Mapping: A Method for Mapping the Contribution of Research to Enhance Its Impact, Health Research Policy and Systems, 10: 21) Contribution Mapping. It is also one of the frameworks acknowledged as conforming to several of the recommendations for evaluating UAS research impact. However, to do justice to the nature of Practice Oriented research, a new framework is needed. The purpose of this article is to test how Contribution Mapping works in real-life to answer the question: What can we learn from Contribution Mapping as an impact evaluation tool for a future UAS specific research impact evaluation framework? This article will examine the specificity of UAS research, the relevance of Contribution Mapping for evaluating UAS research, and the theoretical and practical implications of Contribution Mapping. Through inductive analysis conducted on information gleaned from interviews and focus groups, observations, challenges, and limitations are identified, and modifications suggested to take into consideration for a new framework.
Article
The field of research impact assessment (RIA) has seen remarkable growth over the past three decades. Increasing numbers of RIA frameworks have been developed and applied by research funders and new technologies can capture some research impacts automatically. However, RIAs are too different to draw comparable conclusions about what type of methods, data or processes are best suited to assess research impacts of different kinds, or how funders should most efficiently implement RIAs. To usher in the next era of RIA and mature the field, future RIA methodologies should become more transparent, standardized and easily implementable. Key to these efforts is an improved understanding of how to practically implement and report on RIA at the funder-level. Our aim is to address this gap through two major contributions. First, we identify common items across existing best practice guidelines for RIA, creating a preliminary reporting checklist for standardized RIA reporting. Next, we systematically reviewed studies examining funders’ assessment of biomedical grant portfolios to examine how funders reported the results of their RIAs across the checklist, as well as the operational steps funders took to perform their RIA and the variation in how funders implemented the same RIA frameworks. We compare evidence on current RIA practices with the reporting checklist to identify good practice for RIA reporting, gaps in the evidence base for future research, and recommendations for future effective RIA.
Article
Upon recent establishment of the Irish technological university, there is impetus for better understanding of “impact” from a more comprehensive perspective regarding not only research, but also research equivalent activities, teaching and service engagement. The current research explored such understanding from one newly developed Irish technological university regarding the nature of impacts that are both being made and are possible in light of the recently developed sector; and how these acquiesce with existing perspectives surrounding their current and evolving identity. Twenty-six interviews were conducted in this context and resulting data analysed thematically. Further to the emergence of five themes regarding identity, teaching as the “bread-and-butter”, alumni, interdisciplinarity and the nature of “our” research – participants confirmed the framework of impact established as part of the literature review and helped facilitate the development of a list of 157 impact indicators and evidence. Implications are discussed in light of findings and theory.
Article
Full-text available
The Independent Review of the Role of Metrics in Research Assessment and Management was set up in April 2014 to investigate the current and potential future roles that quantitative indicators can play in the assessment and management of research. Its report, ‘The Metric Tide’, was published in July 2015 and is available below. The review was chaired by James Wilsdon, professor of science and democracy at the University of Sussex, supported by an independent and multidisciplinary group of experts in scientometrics, research funding, research policy, publishing, university management and research administration. Through 15 months of consultation and evidence-gathering, the review looked in detail at the potential uses and limitations of research metrics and indicators, exploring the use of metrics within institutions and across disciplines. The main findings of the review include the following: There is considerable scepticism among researchers, universities, representative bodies and learned societies about the broader use of metrics in research assessment and management. Peer review, despite its flaws, continues to command widespread support as the primary basis for evaluating research outputs, proposals and individuals. However, a significant minority are enthusiastic about greater use of metrics, provided appropriate care is taken. Carefully selected indicators can complement decision-making, but a ‘variable geometry’ of expert judgement, quantitative indicators and qualitative measures that respect research diversity will be required. There is legitimate concern that some indicators can be misused or ‘gamed’: journal impact factors, university rankings and citation counts being three prominent examples. The data infrastructure that underpins the use of metrics and information about research remains fragmented, with insufficient interoperability between systems. Analysis concluded that that no metric can currently provide a like-for-like replacement for REF peer review. In assessing research outputs in the REF, it is not currently feasible to assess research outputs or impacts in the REF using quantitative indicators alone. In assessing impact in the REF, it is not currently feasible to use quantitative indicators in place of narrative case studies. However, there is scope to enhance the use of data in assessing research environments. The review identified 20 recommendations for further work and action by stakeholders across the UK research system. They propose action in the following areas: supporting the effective leadership, governance and management of research cultures; improving the data infrastructure that supports research information management; increasing the usefulness of existing data and information sources; using metrics in the next REF; and coordinating activity and building evidence. These recommendations are underpinned by the notion of ‘responsible metrics’ as a way of framing appropriate uses of quantitative indicators in the governance, management and assessment of research. 
Responsible metrics can be understood in terms of the following dimensions: Robustness: basing metrics on the best possible data in terms of accuracy and scope Humility: recognising that quantitative evaluation should support – but not supplant – qualitative, expert assessment Transparency: keeping data collection and analytical processes open and transparent, so that those being evaluated can test and verify the results Diversity: accounting for variation by field, and using a range of indicators to reflect and support a plurality of research and researcher career paths across the system Reflexivity: recognising and anticipating the systemic and potential effects of indicators, and updating them in response. text from: http://www.hefce.ac.uk/pubs/rereports/year/2015/metrictide/
Article
Full-text available
Background With massive investment in health-related research, above and beyond investments in the management and delivery of healthcare and public health services, there has been increasing focus on the impact of health research to explore and explain the consequences of these investments and inform strategic planning. Relevance is reflected by increased attention to the usability and impact of health research, with research funders increasingly engaging in relevance assessment as an input to decision processes. Yet, it is unclear whether relevance is a synonym for or predictor of impact, a necessary condition or stage in achieving it, or a distinct aim of the research enterprise. The main aim of this paper is to improve our understanding of research relevance, with specific objectives to (1) unpack research relevance from both theoretical and practical perspectives, and (2) outline key considerations for its assessment. ApproachOur approach involved the scholarly strategy of review and reflection. We prepared a draft paper based on an exploratory review of literature from various fields, and gained from detailed and insightful analysis and critique at a roundtable discussion with a group of key health research stakeholders. We also solicited review and feedback from a small sample of expert reviewers. Conclusions Research relevance seems increasingly important in justifying research investments and guiding strategic research planning. However, consideration of relevance has been largely tacit in the health research community, often depending on unexplained interpretations of value, fit and potential for impact. While research relevance seems a necessary condition for impact – a process or component of efforts to make rigorous research usable – ultimately, relevance stands apart from research impact. Careful and explicit consideration of research relevance is vital to gauge the overall value and impact of a wide range of individual and collective research efforts and investments. To improve understanding, this paper outlines four key considerations, including how research relevance assessments (1) orientate to, capture and compare research versus non-research sources, (2) consider both instrumental versus non-instrumental uses of research, (3) accommodate dynamic temporal-shifting perspectives on research, and (4) align with an intersubjective understanding of relevance.
Research
Full-text available
Societal impact is high on the agenda of universities and will be even higher in the years to come. Governments are increasingly asking institutes of higher education how public money spent on research is improving society.The societal challenges of our time are enormous, ranging from global issues (climate change, renewable energy, sustainable agricultural production), to more regional concerns (inequality, migration and democratic systems). Universities play a significant role in addressing these challenges, but could become even more important when they when they exchange mostly linear views on the relation between research and society for a more iterative perspective in which societal impact is the outcome of the productive interactions between different stakeholders that jointly address challenges.
Article
Full-text available
Background We sought to analyse the impacts found, and the methods used, in a series of assessments of programmes and portfolios of health research consisting of multiple projects. Methods We analysed a sample of 36 impact studies of multi-project research programmes, selected from a wider sample of impact studies included in two narrative systematic reviews published in 2007 and 2016. We included impact studies in which the individual projects in a programme had been assessed for wider impact, especially on policy or practice, and where findings had been described in such a way that allowed them to be collated and compared. ResultsIncluded programmes were highly diverse in terms of location (11 different countries plus two multi-country ones), number of component projects (8 to 178), nature of the programme, research field, mode of funding, time between completion and impact assessment, methods used to assess impact, and level of impact identified.Thirty-one studies reported on policy impact, 17 on clinician behaviour or informing clinical practice, three on a combined category such as policy and clinician impact, and 12 on wider elements of impact (health gain, patient benefit, improved care or other benefits to the healthcare system). In those multi-programme projects that assessed the respective categories, the percentage of projects that reported some impact was policy 35% (range 5?100%), practice 32% (10?69%), combined category 64% (60?67%), and health gain/health services 27% (6?48%).Variations in levels of impact achieved partly reflected differences in the types of programme, levels of collaboration with users, and methods and timing of impact assessment. Most commonly, principal investigators were surveyed; some studies involved desk research and some interviews with investigators and/or stakeholders. Most studies used a conceptual framework such as the Payback Framework. One study attempted to assess the monetary value of a research programme?s health gain. Conclusion The widespread impact reported for some multi-project programmes, including needs-led and collaborative ones, could potentially be used to promote further research funding. Moves towards greater standardisation of assessment methods could address existing inconsistencies and better inform strategic decisions about research investment; however, unresolved issues about such moves remain.
Book
El objetivo general de la evaluación ha sido verificar el grado de cumplimiento de las metas establecidas para el Programa de Modernización Tecnológica III, analizar los desvíos y sus posibles causas, medir el impacto del Programa y extraer enseñanzas que permitan realizar ajustes en el diseño y el proceso de implementación. El enfoque metodológico se oriento a la evaluación de impacto y se focalizó en el análisis de diversos indicadores de resultados.
Article
This project explores the impacts arising from cardiovascular and stroke research funded 15-20 years ago and attempts to draw out aspects of the research, researcher or environment that are associated with high or low impact. The project is a case study-based review of 29 cardiovascular and stroke research grants, funded in Australia, Canada and UK between 1989 and 1993. The case studies focused on the individual grants but considered the development of the investigators and ideas involved in the research projects from initiation to the present day. Grants were selected through a stratified random selection approach that aimed to include both high- and low-impact grants. The key messages are as follows: 1) The cases reveal that a large and diverse range of impacts arose from the 29 grants studied. 2) There are variations between the impacts derived from basic biomedical and clinical research. 3) There is no correlation between knowledge production and wider impacts 4) The majority of economic impacts identified come from a minority of projects. 5) We identified factors that appear to be associated with high and low impact. This article presents the key observations of the study and an overview of the methods involved. It has been written for funders of biomedical and health research and health services, health researchers, and policy makers in those fields. It will also be of interest to those involved in research and impact evaluation.
Article
This study examines the impacts arising from neuroscience and mental health research going back 20-25 years, and identifies attributes of the research, researchers or research setting that are associated with translation into patient benefit, in the particular case of schizophrenia. The study combined two methods: forward-tracing case studies to examine where scientific advances of 20 years ago have led to impact today; and backward-tracing perspectives to identify the research antecedents of today's interventions in schizophrenia. These research and impact trails are followed principally in Canada, the UK and the USA. The headline findings are as follows: The case studies and perspectives support the view that mental health research has led to a diverse and beneficial range of academic, health, social and economic impacts over the 20 years since the research was undertaken.Clinical research has had a larger impact on patient care than basic research has over the 20 years since the research was undertaken.Those involved in mental health research who work across boundaries are associated with wider health and social benefits.Committed individuals, motivated by patient need, who effectively champion research agendas and/or translation into practice are key in driving the development and implementation of interventions.This study provides an overview of the methods and presents the full set of findings, with the policy provocations they raise, and an emerging research agenda. It has been written for funders of biomedical and health research and health services, health researchers, and policymakers in those fields. It will also be of interest to those involved in research and impact evaluation.
Article
The Retrosight approach consists of looking at research that was conducted in the past and, using Payback case studies, tracing that research through to the present day to understand both the extent to which the research has had impacts, within academia and more widely, and how these impacts came about. RAND Europe has conducted three studies based on this approach in different research fields: arthritis research, cardiovascular research and mental health research. Each drew out a set of observations and recommendations for policymakers and research funders in those research fields. By reviewing and comparing the findings of the three studies, we have identified eight lessons which combine to provide a "DECISIVE" approach to biomedical and health research funding: Different skills: Fund researchers with more than just research skills-individuals are key when it comes to translation of research into wider impact. Engaged: Support your researchers to engage with non-academic stakeholders to help their work have a wider impact. Clinical: For greater impact on patient care within 10-20 years, fund clinical rather than basic research. Impact on society: If you want to have a wider impact, don't just fund for academic excellence. Size: Bigger isn't necessarily better when it comes to the size of a research grant. International: For high academic impact, fund researchers who collaborate internationally and support them to do so. Variety: Simple metrics will only capture some of the impact of your research. Expectations: Most broader social and economic impact will come from just a few projects.
Article
The analysis of the reasons behind the persistent under-representation of women in senior positions in science is well-developed. In contrast, the assessment of the impact of policies addressing the problem suffers from a lack of evidence and an oversimplification of approaches. Based on the assessment of 125 programs for gender equality implemented in research organizations in Europe, North America, and Australia, we argue that holistic approaches and multidimensional frames of reference are needed for impact assessment, also to improve program design and policy. Our analysis shows that the problem of gender inequality is rooted in so many and interrelated factors that program impact assessment has to be multidimensional and complex. Having a conceptual approach grounded in the notion of complexity as a point of departure, the article presents an innovative impact assessment tool, pointing to effective ways to assess the impact of gender equality programs. Key words: impact assessment; gender equality programs; research organizations; Europe; North America; Australia.