Journal of Science Policy & Governance | Workshop Report: Research Assessment in Funding Agencies
Workshop on Research Assessment Practices
in Indian Funding Agencies
Bhattacharjee Suchiradipta, Moumita Koley, and Jahnab Bharadwaj
DST-Centre for Policy Research, Indian Institute of Science, Bengaluru, Karnataka, India
DOI hyperlink: https://doi.org/10.38126/JSPG220110
Corresponding author: b.suchiradipta@gmail.com
Keywords: research assessment; science funding; research excellence; SCOPE framework
Executive Summary: Major funding agencies in India largely determine the national research agenda. They remain essential stakeholders in research assessment and fund a significant number of projects across the nation. The Department of Science and Technology, Ministry of Science and Technology, Government of India convened a workshop on April 21, 2022, to understand how the funding agencies assess research projects, where these agencies stand in addressing the inherent challenges of evaluating impactful research, and how to ensure a responsible research culture. The workshop had two objectives: to understand the current research assessment practices of India's funding agencies and to explore the adoption of broad-based assessment criteria beyond journal-based metrics, incorporating national priorities, Sustainable Development Goal (SDG) targets, and the societal impact of research into research assessment frameworks. This report discusses the workshop's objectives and structure, each component of the workshop and its intended outcomes, and policy recommendations for funding agencies in the research ecosystem.
The intended audiences for this report are funding agencies, constituents of national and state universities, internal funding committees, and those who want to acquire a broader perspective on existing research assessment practices, look beyond quantitative journal indicator-based metrics, and make existing assessment practices more effective and inclusive. This report aims to assist in developing research assessment agendas that balance local relevance and globalization.
I. Introduction
Numerous global concerns, such as climate change,
agricultural sustainability, and renewable power
transitions, require immediate attention from the
international scientific community to safeguard
humanity's future. The United Nations' Sustainable Development Goals (SDGs), adopted in 2015 as part of the 2030 Agenda for Sustainable Development (United Nations Organization 2022), provide roadmaps to address these challenges while promoting socioeconomic and planetary well-being.
While research to understand these changing
circumstances has been ongoing, most of these
studies are undertaken by researchers from the
Global North. Moreover, funding agencies in
countries such as India do not actively incentivize
high-risk, high-reward research to address these
wicked challenges (Bhattacharya & Packalen 2020).
One significant challenge in science funding is establishing responsible research assessment practices that set equitable assessment metrics and research cultures globally.
Currently, most organizations assess research using bibliometric indicators such as the journal impact factor and the h-index (Wouters 2014, 47-66). These metrics were initially helpful and informative but became overexploited as their use proliferated. The journal
impact factor (JIF), for example, was introduced to
help librarians choose the relevant journals for their
respective universities, not to assess research quality
(McKiernan et al. 2019). Institutions, however,
utilize such metrics without considering the purpose
of the evaluation. Indian academic systems are not
an exception. Quantitative metrics, like the number
of publications, JIF, and citation index, are significant
in evaluating institutions and individuals.
Over-emphasizing and abusing these metrics,
particularly JIF, has led to the quality of research
being evaluated based solely on quantitative factors.
In addition to the overuse of standards intended to support qualitative evaluation, quantitative metrics are misused as proxies for research impact
and quality. Consequently, niche research focusing
on societal impacts and SDGs is frequently
overlooked, as are merits including research
originality, plausibility, and soundness (Aksnes,
Langfeldt, and Wouters 2019). Qualitative
assessments evaluate research proposals for breadth
of impact, contributions to science and society,
intellectual merit, and epistemic and disciplinary
differences, thereby providing robust, transparent,
diverse, and reflexive assessments (Langfeldt,
Reymert, and Aksnes 2021; Taylor and Francis
2023). Quantitative metrics, therefore, need to be
complemented with qualitative metrics that evaluate
novelty, scientific value, research integrity, potential
for innovation, and societal outcomes.
While there have been dialogues for reform, the
debate over effective research assessment metrics
has yet to gain traction. It is essential for India to set
responsible assessment priorities amid calls for
increased science funding, the establishment of the
National Research Foundation, and efforts to
increase private sector participation and Gross
Expenditure on R&D (GERD). Responsible metrics
movements around the globe, including the Leiden
Manifesto (Hicks et al. 2015) and DORA Declaration
(DORA 2022), are crucial in spreading awareness of
holistic metrics for evaluating research. India's draft fifth Science, Technology, and Innovation (STI) Policy (DST 2020) also calls for a broad-based approach to research assessment to advance the national research agenda. Radical, ambitious research reforms will require significant support from the research assessment framework, and funding agencies' institutional capacities to incorporate and integrate new criteria into their present assessment processes will also need to be assessed.
Funding agencies continue to be significant
stakeholders in research assessment evaluations.
They fund multitudes of research projects by
institutions and academics around the nation and
play a pivotal role in determining the national
research agenda. One significant challenge for
funding agencies and research universities is to
support high-quality research aligned with national
and SDG priorities (Kraemer-Mbula 2020, 79-81).
Accordingly, it is vital to comprehend how these
organizations craft the research agenda and conduct
assessments. This need prompted the workshop on
Research Assessment Practices in Indian Funding
Agencies. The workshop aimed to understand where
Indian funding agencies stand in addressing these
issues and ensuring a responsible research culture.
The Department of Science and Technology, Ministry
of Science and Technology, Government of India,
convened the workshop. Officials from the country's major science funding agencies - the Department of Science and Technology (DST), Department of Biotechnology (DBT), Council of Scientific and Industrial Research (CSIR), Indian Council of Medical Research (ICMR), and Science and Engineering Research Board (SERB) - participated in the discourse.
II. Workshop structure
i. Structure of the workshop
The one-day research assessment workshop was
designed to explore Indian funding agencies’ current
research assessment practices. It was structured to
understand the assessment practices used by these
agencies, as well as to recognize their strengths and
weaknesses. The first half of the workshop was
conducted through interactive exercises and
discussions among participants. These participants
were scientists from the represented agencies and
were actively involved in research assessment
activities. The participating scientists were
nominated by their funding organizations based on
their understanding of the research evaluation
standards utilized by each agency. The second half of
the workshop was a panel discussion on the workshop outcomes, involving senior leaders from these agencies who
engaged in dialogue with funding agency
stakeholders about the future of research
assessment. The structure of the workshop is
detailed below.
ii. Session one: welcome session
Welcome address by Dr. Akhilesh Gupta,
Senior Adviser and Head, Policy
Coordination and Program
Management (PCPM) Division,
Department of Science & Technology
(DST), Ministry of Science &
Technology (MoS&T), Government of
India (GoI)
Introduction to the workshop
iii. Session two: hands-on workshop on research
assessment practices of national funding agencies
Introduction to the SCOPE Framework
Activity one: What do you value about the
entity you seek to evaluate?
Activity two: The Balancing Act - Quantitative vs. Qualitative Evaluation
Activity three: Who, How, and
What—Exploring the Weaknesses
Activity four: Evaluating the evaluation
Concluding Remarks
Vote of Thanks
iv. Session Three: panel discussion—the future of
research assessment in Indian academia
Introduction
Summary of session two findings
Remarks by Guest of Honor, Panel Chair, and
Panel members.
Guest of Honor: Dr. Srivari Chandrasekhar,
Secretary, DST, MoS&T, GoI
Panel Chair: Dr. Akhilesh Gupta, Senior Adviser and Head, PCPM Division, DST, MoS&T, GoI
Open discussions and comments by
attendees
Vote of thanks
III. Workshop proceedings
The workshop started with a welcome speech and an
introductory talk that outlined the workshop’s
objectives. It particularly emphasized the
importance of an engaged discussion with
participants from funding agencies about research
assessment practices (session one, thirty minutes).
After the inaugural session, a brief presentation of
the SCOPE framework (INORMS 2021) led into the
main activity session (session two, three hours).
The SCOPE framework, developed by the
International Network of Research Management
Societies (INORMS) Research Evaluation Group
(REG), provides a five-step approach to designing a
robust and responsible research assessment
framework. SCOPE is an acronym defined as follows:
S- Start with what you value; C- Context
considerations; O- Options for evaluating; P- Probe
deeply; and E- Evaluate your evaluation. The
framework emphasizes integrating academic rigor into research management and assessment practices and was used to structure the workshop with funding
agencies. The framework is further explained in the
appendix.
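The five stages can be read as an ordered checklist that evaluation designers work through before choosing any indicator. As a purely illustrative sketch, the stage names and the three guiding principles below are taken from the SCOPE material described here and in the appendix, while the prompt questions and the data structure itself are hypothetical additions for clarity, not part of the framework:

```python
# Illustrative sketch only: the five SCOPE stage names and the three guiding
# principles come from the INORMS REG material cited above; the prompt
# questions and this checklist structure are hypothetical additions.

SCOPE_STAGES = [
    ("Start with what you value", "What do we actually value about the entity being evaluated?"),
    ("Context considerations", "Who and what is the evaluation for, and at what level?"),
    ("Options for evaluating", "Which qualitative and quantitative options match those values?"),
    ("Probe deeply", "What could be gamed, discriminatory, or an unintended consequence?"),
    ("Evaluate your evaluation", "Did the evaluation serve its purpose, and what should change?"),
]

GUIDING_PRINCIPLES = [
    "Evaluate only where necessary",
    "Evaluate with the evaluated",
    "Draw on evaluation expertise",
]


def print_scope_checklist() -> None:
    """Print the stages in order as a simple planning checklist."""
    for number, (stage, prompt) in enumerate(SCOPE_STAGES, start=1):
        print(f"Stage {number}: {stage}\n  Prompt: {prompt}")
    print("Principles: " + "; ".join(GUIDING_PRINCIPLES))


if __name__ == "__main__":
    print_scope_checklist()
```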
Activities conducted during the workshop were
designed within the SCOPE framework to help the
organizers understand the subjective perception of
research assessment processes in the funding
agencies. The participants were handed four activity
worksheets with dot exercises and open-ended
questions. All the activity worksheets were retrieved
after the session, and the responses were analyzed to
understand the assessment practices, processes, objectives, and their perceived effectiveness as reported by the representatives of the funding agencies.
After lunch, the third session began with an opening
statement from Dr. Srivari Chandrasekhar, Secretary,
DST. He discussed the state of the current research
assessment within Indian funding agencies. He also
spoke on funding organizations’ obligation to
develop a responsible research assessment
ecosystem in India with contextualization based on
India’s national and regional priorities. Dr. Akhilesh
Gupta, Senior Adviser and Head, DST, presided over
the panel discussion, which included senior
representatives and science administrators from
national funding agencies. The summaries of
outcomes from activities in Session Two were
outlined in the panel discussion to give an overview
of the various research evaluation processes used by
different funding agencies in the country. A vote of
thanks followed the hour-long hybrid panel discussion.
IV. Discussion
Various structural components were utilized during
the workshop, the most significant being the SCOPE
framework. This framework involves five phases of
value-driven strategies for research evaluation.
However, to suit the Indian funding agency context,
only four steps of the SCOPE framework were
utilized. Activities and exercises created based on
this framework facilitated the discussions on how to
evaluate research beyond quantitative metrics and
indicators. The participating scientists quickly
engaged with the framework and provided
perceptive remarks on the present systems' evaluation ethos, structures, and procedures for evaluating research, as well as on adaptive improvements to them. Furthermore, they offered constructive
criticisms and recommendations for potential
changes to the framework for research assessment
in their funding agencies.
The workshop's primary objective was to
understand how India's funding agencies conduct
intramural and extramural research assessments. In
the first activity, participants were asked open- and
closed-ended questions to gain insight into the
entities they evaluate. The questions prompted
participants to explore how research proposal
assessment mechanisms function, including the
super-values, values, and sub-values prioritized by
the evaluators, as well as other variables that
influence funding decisions. The second activity
aimed to investigate how research assessment
frameworks can adapt to the evolving requirements
of balancing quantitative and qualitative indicators.
The third activity explored the strengths and
weaknesses of the current research assessment
framework. Questions targeted exploitable loopholes
in the current evaluation framework and how
funding agencies support risk-taking proposals.
Finally, activity four gathered input as to how the
community could develop a responsible and
adequate research assessment framework.
Participants completed each activity in
approximately thirty minutes using a pen and
writing pad. Organizers clarified the activities for participants whenever needed.
The panel discussion, which included senior officials
from national funding agencies and science
administrators, was one of the workshop’s
distinctive features and allowed for an exciting
discussion on the future of research assessment. Dr.
Srivari Chandrasekhar opened the discussion by
greeting participants and remarking on the existing
assessment framework in India's research
ecosystem. He briefly outlined the inadequacies in
the evaluation process from the funding agency and
the applicant's viewpoints. Reflecting on the growing
importance of alternate evaluation systems, he
highlighted the need for stakeholder dialogues to
develop a better research assessment framework
compatible with the Indian research ecosystem.
Following the remarks, the moderators presented an
overview of the responses from the workshop
activities. This gave participants and panel members
an understanding of the varied evaluation
techniques within the Indian research ecosystem.
Panel members then spoke, prompting an excellent
debate and interaction that helped participants
reflect on the deficiencies in the present assessment
system and alternative recommendations that may
be incorporated for a balanced assessment. The panelists agreed that the first step should be to identify research and assessment techniques on a case-by-case basis rather than applying the same criteria to all calls for proposals; they cautioned that a one-size-fits-all evaluation approach would render any recommendations for research assessment ineffective.
V. Key takeaways
The workshop had four main activities with four
essential questions that the participants individually
considered.
i. What do you value about the entity you seek to
evaluate?
1.1) Who decides the assessment framework in your
research program?
In DST and SERB, the assessment framework is generally decided jointly by the grants management team and the external peer review committee, although in a few cases it is determined solely by either the external committee or the program division's grants management team. In DBT, CSIR, and ICMR, it is jointly
decided by the internal grants management team
and the external peer review committee.
1.2) How frequently are the evaluation committees
and frameworks revised in your research program?
Generally, the selection committee is revised every
three years, but certain programs of ICMR revise
annually. The revision of assessment frameworks is
more varied, even within individual agencies. Minor
edits happen annually if needed, while major
restructuring happens every three to five years.
1.3) In your research program/organization, what do
you look for when assessing research/researchers?
When funding new research, funding agencies look
for the proposed research's societal impact,
interdisciplinarity, translational aspects, alignment
to national missions and goals, and policy-level
deliverables.
These super-values, at a granular level, are assessed
through the lens of the research leadership of the
applicants, their ability to attract extramural
funding, industry and stakeholder networks, the
methodology of the proposed research, and its innovation potential and scalability.
1.4) On a scale of 1-9, with 9 being the highest score and 1 being the lowest, how important are the following factors while assessing an applicant/project?
Particulars | Average score
Number of publications/patents/research projects | 6
Journal impact factors | 6
h-index | 6
Educational background | 6
Affiliation | 5
Professional experiences | 7
Research background (overall) | 7
Research background (in proposed research area) | 8

Table 1: Average score for each research assessment criterion reported during activity one.
The findings indicate that while research quality is
essential, quantitative metrics are still valued highly
in assessment criteria.
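The averages in Table 1 come from pooling the individual 1-9 ratings on the activity worksheets and taking the per-criterion mean. A minimal sketch of that aggregation is shown below; the criterion names follow Table 1, but the participant ratings are invented placeholders, not the actual workshop responses.

```python
# Minimal sketch of the Table 1 aggregation: each participant rates every
# criterion on a 1-9 scale, and the rounded per-criterion mean is reported.
# The ratings below are invented placeholders, NOT the actual workshop data.
from statistics import mean

CRITERIA = [
    "Number of publications/patents/research projects",
    "Journal impact factors",
    "h-index",
    "Educational background",
    "Affiliation",
    "Professional experiences",
    "Research background (overall)",
    "Research background (in proposed research area)",
]

# One dict per participant worksheet: criterion -> rating (1-9).
worksheets = [
    dict(zip(CRITERIA, [6, 7, 5, 6, 5, 7, 7, 8])),
    dict(zip(CRITERIA, [6, 5, 7, 6, 5, 7, 7, 8])),
]

averages = {c: round(mean(w[c] for w in worksheets)) for c in CRITERIA}
for criterion, score in averages.items():
    print(f"{criterion}: {score}")
```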
ii. The Balancing Act - Quantitative vs. Qualitative
Evaluation
To make the project assessment framework adaptive
to changing needs, DST, SERB, ICMR, and CSIR focus
on national priorities and encourage a
problem-solving approach in the research proposals
they fund. Expert committees are also constituted with the relevant expertise to identify proposals that address these priorities. Disciplinary contexts, and the differences within them, are deeply integrated into the evaluation.
DBT focuses on the translational potential of the
proposed research, and the grants management
team and review committee keep track of the
evaluation parameters employed in the US, UK, EU, and elsewhere and try to adapt them to national priorities.
iii. Who, how, and what—exploring the weaknesses
This activity focused on three themes: loopholes in
the funding process that can be manipulated or
sidestepped, how funding agencies accommodate
risk-taking projects, and how they encourage
collaboration and team science.
The workshop identified several loopholes in the
funding process, including contacting the reviewer
for favorable results, hiding negative results and
conflicts of interest, and false claims of expertise.
Blacklisting candidates is the only measure funding
agencies currently use to discourage such behavior.
Only SERB has a dedicated call for proposals for high-risk research (SERB-SUPRA: Scientific and Useful Profound Research Advancement). Other funding
agencies only fund high-risk research on a
case-by-case basis, depending on recommendations
from expert committees. Funding agencies
otherwise do not actively or passively encourage
risk-taking proposals.
While official provisions do not explicitly incentivize collaboration, each agency has calls specifically aimed at collaborative projects. Again, SERB has schemes that are
specifically for multi-institutional, collaborative
projects.
iv. Evaluating the evaluation
The earlier activities examined the national funding agencies' assessment processes and evaluated their strengths and weaknesses. Activity four gathered recommendations, based on those insights, to make the review system more efficient. Some thoughts and recommendations that
emerged from the discussion are as follows:
1) Assessment frameworks and review committees
should be periodically reviewed. The assessment
framework needs to be examined more frequently,
and changes based on global and national priorities
must be incorporated. Similarly, the review
committee needs to be reviewed more regularly.
Membership should include more diverse
stakeholders from industry and impacted
communities, not just senior academics.
2) Inclusion of international experts in review
committees would be a welcome change to integrate
a global outlook in funded research and bring a
broader perspective to the committee.
3) The evaluation format needs to include qualitative
metrics that address ethics, societal impact,
translational value, conflict of interest, and novelty.
4) The assessment framework should be more
inclusive and customized to provide opportunities to
currently disadvantaged researchers. There is a need
to develop a better research ecosystem and connect
all the national and local stakeholders, including
research institutions, universities, industries, and
society.
5) Proposals should undergo double-anonymous
review. This procedure will minimize the existing
halo effect on researchers from well-known
institutions.
VI. Policy recommendations
Key policy recommendations coming out of the
deliberations, especially the panel discussion with
science administrators from the national funding
agencies, are noted below:
1) Introduce a double-anonymous review system to
remove bias from the review process.
2) Conduct workshops for reviewers on how to assess proposals better in light of changing research and assessment priorities. Workshops for researchers on how to write better proposals are also necessary.
3) Introduce accountability and monitoring of fund
utilization to understand the scientific and social
contribution of research funding.
4) Improve the selection of review committee
experts by seeking individuals with extensive
knowledge in the discipline or subdiscipline they are
reviewing and who are ethical and honest in their
review process.
5) Critically evaluate how projects are deemed successful, especially high-risk projects. At present, publications are the only outcome used to measure success. In the case of high-risk projects, however, a researcher may not produce any paper until the end of the proposed period. These aspects must be acknowledged and integrated into the assessment process to encourage better science and innovation.
6) Themes identified for funding calls should align with India's national priorities, rather than those of international agencies or American and European countries.
7) A complex but flexible assessment framework is needed that considers the diversity of disciplines and sub-disciplines, inclusivity, the need for interdisciplinarity and team science, and the novelty and innovativeness of the outcome.
8) Collaboration is the key to delivering socially
impactful research. To ensure resources - both
financial and human - are utilized optimally, a
collaboration between premier institutions and Tier
II/III institutions must be encouraged through
research funding.
9) The project evaluation mechanism should address
the bias towards premier institutions and promote
diversity across institutions, geographical regions, gender, communities, and so on.
10) To expand the pool of reviewers, retired
scientists can be involved in the system and act as
mentors to current researchers. They can be
incentivized with honoraria for their engagement.
VII. Conclusions
Indian research funding agencies play a central role
in supporting and financing scientific and
technological research in India. They significantly
shape the country's research culture through the
policies and programs they promote or discourage.
This workshop aimed to comprehend their research
evaluation procedures, identify the strengths and
limitations of the current process, and initiate a
conversation about the required improvements to
advance superior research and innovation.
The commonly used research evaluation process in
India consists of a quantitative, metric-based
screening process followed by an expert peer review.
Even though peer review introduces a qualitative
aspect to the evaluation process, it is not free of
institutional bias and overreliance on metrics such
as h-index, JIF, publication count, and the funding a
researcher previously secured. Moreover, the
present system only encourages risk-taking and
collaborative research projects in specific
circumstances. Additionally, the limited capacity of funding agencies to process a large volume of applications, combined with a small pool of over-burdened reviewers, creates inequities in the overall outcomes of funding decisions.
At this point, the Indian research funding agencies
should go beyond their static evaluation criteria and
focus on research excellence, integrate diversity and
equity into research, encourage the social impact of
science, and foster innovation. However, comprehensive evaluation frameworks and review panels that consider these factors are still lacking. Capacity
building of reviewers is also necessary to enhance
their understanding of what responsible research
assessment is, the nuances of a more responsible
qualitative assessment, and how to integrate the
learnings in their capacity as reviewers. Institutions
and funding agencies should appoint grant
management teams to improve the research funding
ecosystem in the country.
Together, the policy suggestions generated from this
workshop can provide valuable guidance for the
funding agencies to act and implement the necessary
changes.
Appendix A: Overview of the SCOPE framework:
a five-stage process for evaluating research
responsibly
The activities conducted during the workshop were
designed within the SCOPE framework, a five-step
responsible research evaluation approach developed
by the International Network of Research
Management Societies (INORMS) Research
Evaluation Group (REG) (INORMS 2021). By creating
a practical and feasible five-stage process that allows
for the development of better value-driven research
evaluation approaches, the INORMS REG has
attempted to address the issue of creating a
responsible research evaluation framework and
effectively putting it into practice.
The five stages of the framework are as follows:
START with what you value
CONTEXT considerations
OPTIONS for evaluating
PROBE deeply
EVALUATE your evaluation
Three basic concepts guide the five stages of SCOPE:
Evaluate only where necessary
Evaluate with the evaluated
Draw on evaluation expertise
Appendix B: Activity worksheets
i. Activity one
1) Who decides the assessment framework in your
research program?
Internally by funding agency
External committee
Both
2) How frequently are the Selection Committee and
Assessment Framework revised in your research
program?
Every Year
Every 3 Years
Every 5 Years
More than 5 Years
3) In your research program/organization, what do
you look for as super values, values, sub-values?
4) How do you arrive at the final decision on the
selection of a project proposal? (Please write the
steps involved)
5) What do you specifically look for in the project
applicant’s profile? Please rate the following options
according to their importance on a scale of 1-9, with 9 being the highest and 1 being the lowest:
The number of publications/patents/research projects, journal impact factors, h-index, educational background, affiliation, professional experiences, research background (overall), research background (in proposed research area), and any other (please specify).
ii. Activity two
1) What steps are taken to make the project
assessment framework evolve/adaptive to changing
needs?
iii. Activity three
1) In your opinion, how can the assessment
framework be side-stepped by the applicants?
2) How do you accommodate risk-taking project
proposals?
3) How do you encourage collaborative projects?
iv. Activity Four
1) What changes would you suggest in the existing
assessment framework of your research
division/organization?
Appendix C: Workshop organizers and panelists
The names and affiliations of the individuals
associated with this workshop at the time of the
event are included below.
i. Workshop organizers
Dr. Akhilesh Gupta, Senior Adviser and Head, PCPM Division,
Department of Science & Technology,
Ministry of Science & Technology, Govt. of
India
Bhattacharjee Suchiradipta, Senior Policy
Fellow, DST-Centre for Policy Research,
Indian Institute of Technology, Delhi, New
Delhi, India
Moumita Koley, Post-Doctoral Fellow
DST-Centre for Policy Research, Indian
Institute of Science, Bengaluru, Karnataka,
India
Dr. Rabindra Panigrahy, Scientist E, PCPM
Division, Department of Science &
Technology, Ministry of Science &
Technology, Govt. of India
ii. Panel discussion
Guest of Honor: Dr. Srivari Chandrasekhar, Secretary,
Department of Science & Technology, Ministry of
Science & Technology, Govt. of India
Panel chair: Dr. Akhilesh Gupta, Senior Adviser and
Head, PCPM Division, Department of Science &
Technology, Ministry of Science & Technology, Govt.
of India
Panel members:
Dr. Anita Gupta, Head, Technology Missions
Division (Energy, Water & all Other),
Department of Science & Technology,
Ministry of Science & Technology, Govt. of
India
Dr. Sanjeev Varshney, Head, Division of
International Cooperation, Department of
Science & Technology, Ministry of Science &
Technology, Govt. of India
Dr. Nisha Mendiratta, Head, Climate Change
Program, Department of Science &
Technology, Ministry of Science &
Technology, Govt. of India
Dr. S. K. Tiwari, Chief Scientist, CSIR-NBRI,
Lucknow, Uttar Pradesh
Dr. M. Mohanty, Scientist E, Earth &
Atmospheric Sciences Division, Science and
Engineering Research Board (SERB), New
Delhi
Dr. Nabendu Chatterjee, Scientist G and Head, Basic Medical Sciences (BMS) Division, ICMR-NIIH, Mumbai
Dr. Sarah Sabu Cherian, Scientist G, ICMR
National Institute of Virology (NIV), Pune
References
Aksnes, Dag W., Liv Langfeldt, and Paul Wouters. 2019. "Citations, Citation Indicators, and Research Quality: An Overview of Basic Concepts and Theories." Sage Open 9 (1): 2158244019829575.
Declaration on Research Assessment. 2022. “San
Francisco Declaration on Research Assessment.”
sfdora.org, February 23, 2022.
https://sfdora.org/read/
Department of Science and Technology. 2020. "Draft 5th National Science, Technology, and Innovation Policy." Department of Science and Technology, Ministry of Science and Technology, Government of India. https://dst.gov.in/draft-5th-national-science-technology-and-innovation-policy-public-consultation
Hicks, Diana, Paul Wouters, Ludo Waltman, Sarah de
Rijcke, and Ismael Rafols. 2015. “Bibliometrics:
The Leiden Manifesto for Research Metrics.”
Nature 520, no. 7548 (April 1, 2015): 429–31.
https://doi.org/10.1038/520429a.
INORMS Research Evaluation Group. 2021. "The SCOPE Framework: A five-stage process for evaluating research responsibly." inorms.net/research-evaluation-group. Accessed July 10, 2022. https://inorms.net/wp-content/uploads/2021/11/21655-scope-guide-v9-1636013361_cc-by.pdf
Bhattacharya, Jay, and Mikko Packalen. 2020. "Stagnation and Scientific Incentives." National Bureau of Economic Research, Cambridge, MA. https://doi.org/10.3386/w26752.
Kamps, Rick, Rita Brandão, Bianca Bosch, Aimee Paulussen, Sofia Xanthoulea, Marinus Blok, and Andrea Romano. 2017. "Next-Generation Sequencing in Oncology: Genetic Diagnosis, Risk Prediction and Cancer Classification." International Journal of Molecular Sciences 18 (2): 308.
Kraemer-Mbula, Erika. 2020. "Gender Diversity and the Transformation of Research Excellence." In Transforming Research Excellence: New Ideas from the Global South, edited by Erika Kraemer-Mbula, Robert Tijssen, Matthew L. Wallace, and Robert McLean, 79-91. African Minds. https://www.africanminds.co.za/wp-content/uploads/2019/12/AMT-Research-Excellence-FINAL-WEB-02012020.pdf
Langfeldt, Liv, Ingvild Reymert, and Dag W. Aksnes. 2021.
“The role of metrics in peer assessments.”
Research Evaluation, 30(1): 112–126.
https://doi.org/10.1093/reseval/rvaa032.
McKiernan, Erin C., Lesley A. Schimanski, Carol Muñoz Nieves, Lisa Matthias, Meredith T. Niles, and Juan P. Alperin. 2019. "Meta-Research: Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations." eLife. https://doi.org/10.7554/eLife.47338
Taylor and Francis. 2023. "The use of metrics in research assessment: share your views." Editor Resources. Accessed 12.02.2023. https://editorresources.taylorandfrancis.com/peersupport/the-use-of-metrics-in-research-assessment/.
United Nations Organization. 2022. Sustainable
development. Accessed July 10, 2022.
https://sdgs.un.org/goals
Wouters, Paul. 2014. "The citation: From culture to infrastructure." In Beyond Bibliometrics: Harnessing Multidimensional Indicators of Scholarly Impact, edited by Blaise Cronin and Cassidy R. Sugimoto, 47-66. The MIT Press.
Bhattacharjee Suchiradipta is a social scientist with a Ph.D. in Agricultural Extension. Her work revolves
around understanding what it takes to sustainably transition to better food systems in low- and
middle-income economies. She focuses on theory building and providing evidence for sustainable food
system transformation from an agricultural innovation systems perspective in the context of climate change.
Moumita Koley is a scientist who worked in the wet lab, synthesized new biologically active compounds, and
designed novel synthetic routes using metals and enzymes as catalysts. Now, she is exploring a few questions:
how to make the research ecosystem more responsible, how to make research respond to local problems,
and how to drive and fund the research that matters most.
Jahnab Bharadwaj is currently engaged as a Project Intern for the DORA Community Engagement Grant funded project "Exploring current research assessment practices in Indian academia." He completed his Master's in Sociology from the Delhi School of Economics in 2022.