Article

Performance-Based University Research Funding Systems

Authors:
Diana Hicks

Abstract

The university research environment has been undergoing profound change in recent decades, and performance-based research funding systems (PRFSs) are one of the many novelties introduced. This paper seeks to draw general lessons from the accumulated experience with PRFSs that can serve to enrich our understanding of how research policy and innovation systems are evolving. The paper also links the PRFS experience with the public management literature, particularly new public management, and with the understanding of public sector performance evaluation systems. PRFSs were found to be complex, dynamic systems, balancing peer review and metrics, accommodating differences between fields, and involving lengthy consultation with the academic community and transparency in data and results. Although the importance of PRFSs seems to rest on their distribution of universities’ research funding, this is something of an illusion; the literature agrees that it is the competition for prestige created by a PRFS that creates powerful incentives within university systems. The literature suggests that under the right circumstances a PRFS will enhance control by professional elites. Because PRFSs aim for excellence, they may compromise other important values such as equity or diversity, and they will not serve the goal of enhancing the economic relevance of research.

... Scientific research is a driving force of innovation designed to expand the frontiers of human knowledge and improve economic and social progress. However, research policy and exploration of promising research directions are shaped in part by the decisions of funding bodies, such as governments and universities, as well as for-profit and nonprofit private entities, to fund these research studies [1,2]. Determining which proposed research projects are funded based on impact remains a dynamic process that involves a combination of peer review and quantitative research metrics [2,3]. The funding decision process requires transparency in the way public research funds are allocated based on peer review and metrics and can ensure reproducibility through increased use of publicly available data for responsible decision-making [4,5]. ...
... The pursuit of scientific research is intricately tied to the progress of human society, and it is shaped by decisions made by various funding bodies, such as governmental organizations, universities, and both forprofit and nonprofit private entities, who provide financial support for these research studies [1,2]. The National Institutes of Health (NIH), for instance, allocated a budget of 33.34 billion dollars towards scientific research in 2022 [18], highlighting the significant investment made in this area. ...
Preprint
Full-text available
Scientific research is propelled by the allocation of funding to different research projects based in part on the predicted scientific impact of the work. Data-driven algorithms can inform funding decisions by identifying likely high-impact studies using bibliometrics. Compared to standardized citation-based metrics alone, we utilize a machine learning pipeline that analyzes high-dimensional relationships among a range of bibliometric features to improve the accuracy of predicting high-impact research. Random forest classification models were trained using 28 bibliometric features calculated from a dataset of 1,485,958 publications in medicine to retrospectively predict whether a publication would become high-impact. For each random forest model, the balanced accuracy score was above 0.95 and the area under the receiver operating characteristic curve was above 0.99. The strong performance of the proposed models shows that machine learning is a promising approach to support funding decision-making for medical research.
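A minimal sketch of the kind of pipeline this preprint describes is shown below. It is not the authors' code: the input file, feature names, label definition, and evaluation choices are assumptions for illustration only.

```python
# Minimal sketch of a random-forest pipeline for predicting high-impact
# publications from bibliometric features. File name, feature names, the
# label definition and the evaluation choices are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score, roc_auc_score

# Hypothetical table: one row per publication, bibliometric features plus a
# binary label marking whether the paper later became "high-impact".
df = pd.read_csv("publications_bibliometrics.csv")
features = [c for c in df.columns if c != "high_impact"]   # e.g. 28 features
X, y = df[features], df["high_impact"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = RandomForestClassifier(
    n_estimators=500, class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
prob = clf.predict_proba(X_test)[:, 1]
print("balanced accuracy:", balanced_accuracy_score(y_test, pred))
print("ROC AUC:", roc_auc_score(y_test, prob))
```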
... Systems of accreditation (Sanz-Menéndez and Cruz-Castro 2019) and of habilitation (Abramo and D'Angelo 2015) also involve evaluations at the individual level. In addition, evaluations could be applied to a whole institution such as performance-based research funding systems (Hicks 2012;Zacharewicz et al. 2019). ...
... Finally, in the New Zealand PBRF, the individual classification of researchers is carried out as a mechanism for the competitive allocation of budgets to universities. It can be better understood as a performance-based research funding system, closer to the European cases described by Hicks (2012) and others. ...
... The first three characteristics are essential for placing any system in a historical and institutional context and identifying policy transfer processes. The remaining six characteristics are related to the structure and governance of evaluation systems (Whitley and Gläser 2007; Hicks 2012). The frequency of calls, the number of levels or categories, and size can help explain features related to the structure, while the benefits provided, disciplinary organization, and quotas enable us to analyze governance. ...
Article
Evaluation procedures play a crucial role in science and technology systems, particularly within academic career structures. This article focuses on an approach to evaluation that has gained prominence in Latin America over the past four decades. This scheme assesses the individual performance of academics based on their academic activities and outputs and assigns them a ‘researcher category’, which carries prestige and, in many cases, additional monthly rewards. Initially implemented in higher education contexts with limited research focus, these systems aimed to bolster knowledge production by involving more academics in research. In this study, we define National Researcher Categorization Systems (NRCSs) and distinguish them from other evaluation systems. Subsequently, we present a comparative analysis of NRCSs in seven countries, identifying common trends. Additionally, we discuss categorization systems within the broader context of strategies employed to incentivize academic research, and we explore the potential structural effects that arise when NRCSs assume a central role in a science system. Through our research, we have identified a family of systems in five countries (Mexico, Argentina, Uruguay, Paraguay, and Panama) that share a common history and structure. Furthermore, we emphasize that NRCSs may reinforce a traditional model of the academic researcher, potentially impeding the development of professional profiles aligned with research directed toward social objectives. In summary, our study sheds light on NRCSs, providing insights into their nature, comparative analysis across countries, and implications within the broader academic research landscape.
... To be defined as such, PBRF systems must have the following characteristics (Hicks, 2012; Zacharewicz et al., 2019): ...
... From this point of view, the main risks that have been identified concern (Geuna, 2001; Laudel, 2006; Butler, 2010; Sandström et al., 2014; van den Besselaar et al., 2017): i) an "opportunistic" selection by researchers of the topics on which to start projects, conditioned by the likelihood of success in competitive funding; ii) a tendency to pursue research lines that are already consolidated and shared by the academic community, rather than more uncertain and heterodox ones; iii) the need for public institutions to set up increasingly complex, intrusive and costly evaluation systems; iv) the spread of opportunistic behaviour among researchers seeking increasingly scarce resources; v) excessive competitiveness; vi) a potential decline in aspects that do not enter the formulas, for example those concerning teaching. In addition, there is the danger of triggering cumulative, self-reinforcing phenomena, in particular the "Matthew effect", and consequent equity problems (Hicks, 2012), above all to the detriment of more disadvantaged institutions or territories, and because of the difficulty for those excluded from funding to re-enter the competition. ...
... These effects, whether real or potential, are very difficult to measure and to keep under control, and they are related to the funding mechanism in use. The most common mechanisms (OECD, 1990; Hicks, 2012; Cruz-Castro et al., 2011; Zacharewicz et al., 2019) are: a) non-competitive or ordinary institutional funding, that is, the funding that national or local government allocates to institutions to support their activities, in particular expenditure on facilities and staff costs, especially in research systems where researchers are public employees; this funding is generally assigned without going through project selection, leaving institutions a certain autonomy over allocation decisions; it is generally based on a historical allocation criterion, referring to past levels of resources obtained, which vary in relation to changes in the values used for the calculation (e.g. number of faculty); b) funding based on negotiation, resting on a set of intermediate modes associated with bargaining between the government and research organisations, where both aspects of historical allocation and elements related to the organisation's performance can play a role in determining the amount of funding; c) premium-based institutional funding, directed at institutions but allocated on the basis of a calculation system, called a formula, whose composition is based on a set of input and output indicators (e.g. ...
Book
Full-text available
The book traces the main stages of third mission and social impact assessment, in the context of the (recent) history of higher education and research assessment in Italy, starting with a review of the transformation of the relationship between the University and the society. Several social drives are leading the university to a series of transformations, at the national, European and international level. The requirement to use transparency criteria and the associated returns of research spending (in accountability) has become a fundamental issue in informing government, business, citizens and society on the achieved results. The introduction of social impact assessment frameworks is also advancing research agendas towards socially relevant domains, to provide solutions in international competitiveness, social welfare, sustainability and other grand challenges. A terminological and conceptual shift is occurring from a traditional concept of technology transfer and third mission towards a broader meaning of knowledge exchange and co-creation between universities and extra-academic actors and the impact generated. However, this shift towards a transdisciplinary, trans-epistemic and inclusive evaluation framework poses new challenges to capture this complexity, requiring the adoption of new methods and tools presented in this book. The international literature and debate on practices are vital combined with trial-and-error approaches, community involvement, and targeted pilot studies. Beyond controversies, resistances, and easily contrived enthusiasms, this is what this volume tries to investigate.
... In the era of neoliberalism, competition was widely introduced to the research funding system, such that performance-based block funding was allocated to universities and competitive grants to research projects based on either performance or 'promises of performance'. These competition-based funding reforms developed separately and in various forms globally from the mid-1980s, such as in Europe (Jongbloed and Lepori, 2015), Australasia (Hicks, 2012), and Asia (Shin et al., 2020;Shin and Lee, 2015). Although non-competitive funding methods, such as on a historical basis or via negotiation, remain in use (Auranen and Nieminen, 2010), the proportion of competitive funding has risen worldwide. ...
... For example, performance-based block funding comprised 13% of Finland's funding to universities in 2019, up from only 0.3% a decade before (Mathies et al., 2020). Through such performance-based funding reforms, funding agencies intended to increase both productivity and excellence in research per unit of investment (Hicks, 2012), which has proven fruitful in some contexts (e.g. Italy; Cattaneo et al., 2016). ...
... Second, some have criticised competitive research funding schemes for disadvantaging early-career academics, leading to further stratification of the academic labour force. Certain funding rules, such as highlighting past accomplishments, force early-career academics to rely on prestigious scholars to successfully secure funding opportunities (Laudel, Chapter 16 in this Handbook), thus bestowing further power on established scholarly elites (Dougherty and Natow, 2020;Hicks, 2012). Such rules may also somewhat restrict early-career researchers' academic mobility, as they must maintain connections with colleagues and be ready to dynamically participate in others' projects. ...
... Block funding is a core feature of RFP in the conservative and social-democratic regimes (Reale, 2017). Since the early 2000s, however, reforms have included performance-based institutional funding (PIBIF) within block funding, such as in the Netherlands, where the formula considers PhD defences, and in Austria and Germany, where indicators also include external research funding (Hicks, 2012). In Norway, 60% of the funding was allocated as block funding, 25% based on education outcomes and 15% on research performance (Frølich et al., 2010). The Norwegian model inspired the 2006 Danish model (Hicks, 2012). Although block funding is smaller in the liberal regimes, Australia, New Zealand, and the UK rely on performance-based research evaluation frameworks (Auranen & Nieminen, 2010; Hicks, 2012) that include institutions' publications, citations and research income and, since the mid-2000s, indicators related to the economic impacts of academic research (Chubb & Read, 2018). The UK 2014 Research Excellence Framework, for instance, includes a 20% weighting for the demonstration of research impacts outside academia (Luukkonen & Thomas, 2013). ...
... Sweden's PREA is at university level, with no formal national-level research evaluation, despite a national, indicators-based, performance-based research funding system (Hicks 2012). Without a national structure to evaluate research performance and internationally benchmark the universities, bespoke, organization-level self-evaluation arrangements are conducted by Swedish universities, involving an 'enabling' PREA that draws upon narratives and some indicators. ...
... These four PREAs broadly capture Steering I and II and Enabling I and II from our earlier ideal types (see Table 2). As we have noted, they capture a sufficient variety of PREA characteristics to represent known forms of varied research evaluation arrangements globally (see Hicks 2012; Kolarz et al. 2019; Zacharewicz et al. 2019). Covering these four contexts, representing all ideal types, enables us to assert that if quality-related selection effects occur for this field across these four contexts, the same pattern would hold true even if we selected to study other contexts for the same field, given they likely mirror the same range of ideal PREA types we are already covering. ...
Article
Full-text available
This paper contributes to understanding the effects of research governance on global scientific fields. Using a highly selective comparative analysis of four national governance contexts, we explore how governance arrangements influence the dynamics of global research fields. Our study provides insights into second-level governance effects, moving beyond previous studies focusing primarily on effects on research organizations rooted in national contexts. Rather than study the more than 100 countries across which our selected CERN-based particle physics global research field operates, we explore conditions for changing the dynamics of global research fields and examine mechanisms through which change may occur. We then predict minimal effects on the epistemic choices and research practices of members of the four local knowledge networks despite variations in governance arrangements, and hence no second-level effects. We assert that a research field’s independence from governance depends on its characteristics and the relative importance to researchers of notions of research quality. This paper contributes methodologically and has practical implications for policymakers. It suggests governance arrangements affect the epistemic choices and research practices of the local knowledge networks only when certain conditions are met. Policymakers should consider the context and characteristics of a field when designing governance arrangements and policy.
... Research quality criteria are the foundation of university research evaluation systems. However, while those systems are intended to evaluate diverse research fields having different research patterns and quality standards, they tend to adopt universalistic criteria which follow the most dominant research conventions (Gulbrandsen 2000;Hicks 2012) but do not reflect the complex nature of research quality and thus exclude relevant aspects of research (Ochsner 2022). There are several efforts to expand the concept of research quality in evaluations (Andersen 2013;Franssen 2022;Hug, Ochsner, and Daniel 2013;Ochsner, Hug, and Daniel 2013), but they focus on traditional research fields (STEM & SSH) and do not include the field of Creative Arts (CA) which has traditionally been located beyond the context of university research. ...
... Specifically, we focus on the quality criteria used in the evaluation of the arts within performance-based research funding systems (PRFSs), the national systems used to evaluate research outputs and allocate research funding (see, e.g. Hicks 2012). While there is ample research on Art Education and its place and role in Higher Education Institutes (see, e.g. ...
Article
Full-text available
This paper investigates research quality criteria in the Creative Arts (CA). The CA has been introduced into the higher education and research sector over the last three decades. It is thus a relatively new research field and there is little empirical knowledge on how outputs in this field should be evaluated. Our study applies a mixed-method approach to assess the relevance of quality criteria used in performance-based research funding systems (PRFSs) in 10 countries. The results of a qualitative analysis of interviews with artists-academics (N = 67) and Joint Correspondence Analysis show that when art is evaluated in the context of academic research, both the traditional indicators of artistic quality as well as the cognitive and research-related aspects of the arts are believed to be significant. The JCA analysis also showed that the majority of our respondents found both extrinsic quality criteria (related to reputation and prestige) and intrinsic criteria (related to cognition and development) relevant.
... The evaluation of the research performance of the universities and respective funding allocation based on their performance, so-called performance-based research funding systems (PBRFSs), have become widespread worldwide (Dougherty et al., 2016;Hicks, 2012;Pinar & Horne, 2022;Zacharewicz et al., 2019). It has been argued that the PBRFSs emerged based on the new public management reforms to increase public accountability (Hicks, 2012;Leišytė, 2016), and these funding systems aim to allocate resources to more efficient research institutes, provide accountability for public investment and establish reputational yardsticks (REF, 2022;Zacharewicz et al., 2019). ...
Article
Full-text available
Performance-based research funding systems (PBRFSs) have been used to distribute research funding selectively, increasing the accountability and efficiency of public money. Two such recent evaluations in England, under the Research Excellence Framework (REF), took place in 2014 and 2021 and assessed the research environment, outputs and impact of research. Even though various aspects of the REF have been examined, there has been limited research on how the performance of universities and disciplines changed between the two evaluation periods. This paper assesses whether there has been convergence or divergence in research quality across universities and subject areas between 2014 and 2021. We find absolute convergence between universities in all three research elements evaluated: universities that performed relatively worse in the 2014 REF experienced higher growth in their performance between 2014 and 2021. There was also absolute convergence in the research environment and impact across different subject areas, but no significant convergence in the quality of research outputs across disciplines. Our findings also highlight absolute convergence in research quality within universities (between different disciplines in a given university) and within disciplines (between universities in a given subject).
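The convergence result described above is the kind of finding usually obtained from a beta-convergence regression, in which growth in a quality score is regressed on its initial level and a significantly negative slope indicates absolute convergence. The sketch below illustrates that logic; the data file and column names are hypothetical, not the paper's own data or code.

```python
# Illustrative beta-convergence test: regress growth in a REF quality score
# on its initial (2014) level; a significantly negative slope indicates
# absolute convergence. Column names and the data file are assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("ref_scores.csv")        # hypothetical: one row per university
df["growth"] = np.log(df["score_2021"]) - np.log(df["score_2014"])

X = sm.add_constant(np.log(df["score_2014"]))
model = sm.OLS(df["growth"], X).fit(cov_type="HC1")  # robust standard errors
print(model.summary())                    # convergence if the slope is < 0
```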
... With this model, output-based research productivity has been used in the evaluation of scientists at HEIs. This has had profound consequences for both HEIs and scientists, and many HEIs have switched to a performance-based research evaluation system [38]. It can be stated that this system is employed by almost all HEIs. ...
... The evaluation of research productivity has become a prominent focus for countries worldwide due to several factors, including international benchmarking systems, rankings, performance-based funding schemes, and the pressures of academic capitalism. Higher education scholars such as Altbach [75], Hicks [38], and Slaughter and Rhoades [76] have shed light on these pertinent issues. In this study, we focused on the effects of scientists' families and childhood periods on research productivity, which are partially neglected in studies on higher education. ...
Article
Full-text available
In the past decades, awareness of the concept of research productivity at higher education institutions has grown, which has led to an increase in the number of studies dealing with the subject. Such studies mostly deal with correlations between research productivity and organizational elements, gender, age, professional experience, and alma mater characteristics. To add an innovative dimension to the existing studies, this study focuses on the interaction between scientists' research productivity and their childhood period and childhood setting. In this context, the aim of our study is to examine the effects of cultural, economic, and social capital on research productivity, considering both scientists' current status and that of their parents during the scientists' childhood. The data were collected from 9,499 faculty members through a survey questionnaire which included items on cultural, economic, and social capital. The data on research productivity of the participants were taken from the Web of Science. The major findings of the study are as follows: (a) Turkish scientists have relatively low levels of both their parents' cultural capital during childhood and their own current cultural capital, and they mostly come from families of lower-middle economic level; (b) they have medium-level social capital; (c) cultural and social capital together can account for 69% of research productivity, with the most important items being childhood objectified cultural capital, current embodied cultural capital and parents' embodied cultural capital during childhood; (d) among social capital structures, relational social capital is the strongest predictor of research productivity; and (e) economic capital is not a significant predictor of research productivity. We believe that our findings contribute to research on higher education by uncovering new relationships between these structures.
... The growth of the 'impact agenda' has taken at least three forms: (1) the introduction of impact as an implicit, and sometimes explicit, selection criterion for research funding (Bozeman and Boardman 2009;Bozeman and Youtie 2017;Chubb and Watermeyer 2017); (2) direct funding support for non-academic engagement and knowledge exchange activity (Ulrichsen 2015;Johnson 2022;Durrant and MacKillop 2022) and; (3) the introduction of impact as an assessment criterion for allocating public funding to a university (Smith, Ward and House 2011;Hicks 2012). The expansion of academic researchers' roles to include planning and delivery of impact affects multiple stages of the research process (Collini 2012;Watermeyer 2016;Power 2018). ...
... Martin (2011) argues that while social and economic impact of research can be assessed after the fact, the methodologies that produce robust results are often time- and labour-intensive and unsuited to operation at the scale that would facilitate the evaluation of an entire national research system. In countries with performance-based research funding systems (Hicks 2012), this introduces a substantial methodological dilemma. ...
Article
Full-text available
Although ex post evaluation of impact is increasingly common, the extent to which research impacts emerge largely as anticipated by researchers, or as the result of serendipitous and unpredictable processes, is not well understood. In this article, we explore whether predictions of impact made at the funding stage align with realized impact, using data from the UK’s Research Excellence Framework (REF). We exploit REF impact cases traced back to research funding applications, as a dataset of 2,194 case–grant pairs, to compare impact topics with funder remits. For 209 of those pairs, we directly compare their descriptions of ex ante and ex post impact. We find that impact claims in these case–grant pairs are often congruent with each other, with 76% showing alignment between anticipated impact at funding stage and the eventual claimed impact in the REF. Co-production of research, often perceived as a model for impactful research, was a feature of just over half of our cases. Our results show that, contrary to other preliminary studies of the REF, impact appears to be broadly predictable, although unpredictability remains important. We suggest that co-production is a reasonably good mechanism for addressing the balance of predictable and unpredictable impact outcomes.
... Professors are expected, among others, to have a larger number of high-quality research works as compared to lower-rank academics. Nowadays most academic stakeholders tend to associate high-quality research with works published in journals indexed in Web of Science (WoS) or Scopus (Hicks, 2012). Indexed publications are those used to make world-university performance rankings. ...
... Other side-effects observed were discouraging research diversification, interdisciplinary and innovative research (Abramo et al., 2018;Hicks, 2012;Rafols et al., 2012;Wilsdon, 2015); and tilting time and energies from teaching to research activities (Enders et al., 2015;De Philippis, 2021). ...
Article
Full-text available
This article aims to explore the effects of a Ukrainian policy reform, introducing Scopus and WoS publication requirements for professorship, on the publication behaviour and research performance of professors. Our analysis reveals a better scientific profile, at the time of promotion, of those who obtained professorship after the reform as compared to those who obtained it before. Also, we observe a bandwagon effect, since the research performance gap between the two observed cohorts decreased after the introduction of the publication requirements. The statistical difference-in-differences tests revealed that, in general, the incentive to produce more indexed publications worked. Nevertheless, it did not always lead to higher research performance. Evidently, in several cases, the increase in research output was obtained at the expense of research impact. The effects of the reform could be far greater if combined with initiatives aimed at assessing Ukrainian professors' performance regularly and extending the requirements and assessment to the impact of research.
... Scientific research has traditionally been evaluated primarily based on scientific papers, which constitute science's most visible and measurable output (Geuna & Martin, 2003; Hicks, 2012). Academics and research institutions are evaluated and ranked based on a variety of publishing performance criteria (Hirsch, 2005; Narin & Hamilton, 1996), which involves the allocation of research funds as well as the assignment of academic roles (Geuna & Martin, 2003; Hicks, 2012). A massive literature has focused on the accuracy of modern management and performance metrics, such as productivity, citation indexes, and peer review (Anninos, 2014; Basu, 2006; Werner, 2015). ...
Preprint
Full-text available
This study explores how scientometric data and indicators are used to transform science systems in a selection of countries in the Middle East and North Africa region. I propose that scientometric-based rules inform such transformation. First, the research shows how research managers adopt scientometrics as 'global standards'. I also show how several scientometric data and indicators are adopted following a 'glocalization' process. Finally, I demonstrate how research managers use this data to inform decision-making and policymaking processes. This study contributes to a broader understanding of the usage of scientometric indicators in the context of assessing research institutions and researchers based on their publishing activities. Related to these assessments, I also discuss how such data transforms and adapts local science systems to meet so-called 'global standards'.
... Based on the available literature, we have identified the following proposed indicators to model academic research translation: contributions to science and knowledge exchange through publications, citations, intellectual property (IP) disclosures, and patent awards; contributions to public health represented by patent citations in FDA approvals, ClinicalTrials.gov records with results, and contributions to Clinical Practice Guidelines through references; and contributions to economics through active licenses, start-ups, and licenses generating income (Hicks, 2012; Lanjouw & Schankerman, 2004; Luke et al., 2018; Vernon et al., 2021). ...
... Currently, publications and citations are commonly used for the evaluation of scientific productivity and translation of knowledge within an academic research environment. Numerous iterations of these metrics, from the number of high-impact journals to complex citation bibliometrics, have been proposed (Bornmann et al., 2008; Hicks, 2012; Hicks et al., 2015). Ultimately, these single indices are not sufficient to describe the value of science to society. ...
Article
Full-text available
The aim of this study is to profile academic institutions (n = 127) based on publications, citations in the top 10% of journals, patent citations in Food and Drug Administration (FDA) approvals, clinical trials with uploaded results, contributions to clinical practice guidelines, awarded patents, start-ups, and licenses generating income, in response to the Association of University Technology Managers (AUTM) Licensing Activity Survey: Fiscal Years 2011–2015. Latent variable modeling (LVM) was conducted in Mplus v.8.1; specifically, latent profile analysis (LPA) was utilized to predict institutional profiles of research, which were compared with the 2015 Carnegie Classification System ranks. Multivariate regression of profile assignment on research expenditure and income generated by licensure was used to show concurrent validity. The LPA resulted in three profiles as the most parsimonious model. A Mantel-Haenszel test of trend against the Carnegie Classification found a positive and significant association among institution rankings (r = 0.492, χ²(1) = 26.69, p < 0.001). Profile assignment significantly predicted differences in research expenditure and income generated by licensure. Classifying academic institutions into improving, mobilizing and thriving translational research profiles allows for a universal metric of the translation of science from basic or bench to practice or policy.
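The profiling step can be illustrated with a rough open-source analog. The study itself ran latent profile analysis in Mplus v8.1; the sketch below substitutes a Gaussian mixture model from scikit-learn as a stand-in, and the indicator names and input file are assumptions for illustration.

```python
# Rough open-source analog of the latent profile analysis described above,
# using a Gaussian mixture model (the study itself used LPA in Mplus v8.1).
# Indicator names and the input file are illustrative assumptions.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.mixture import GaussianMixture

df = pd.read_csv("autm_indicators.csv")   # hypothetical: one row per institution
indicators = ["publications", "top10_citations", "fda_patent_citations",
              "trials_with_results", "guideline_contributions",
              "patents", "startups", "licenses_with_income"]
X = StandardScaler().fit_transform(df[indicators])

# Compare 1-5 profile solutions by BIC, mirroring the search for the most
# parsimonious model; the paper settled on three profiles.
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    print(k, "profiles, BIC =", round(gm.bic(X), 1))

df["profile"] = GaussianMixture(n_components=3, random_state=0).fit_predict(X)
```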
... A distinction is sometimes made between evaluation exercises that focus on research quality and/or those that also consider the consequences for research funding allocation (Hicks, 2012). In this paper, we focus on research evaluation issues, presenting an analysis of REF2021 and sharing our relevant experiences of the B&M sub-panel. ...
... In arguing for the efficient allocation of funding for research, the European Commission (in 2012) called for Member States to 'introduce or enhance competitive funding through calls for proposals and organisational assessments as the main modes of allocating public funds to research and innovation, introducing legislative reforms if necessary'. These approaches aimed to stimulate research productivity, in terms of its volume, quality and socioeconomic impact (Geuna and Martin, 2003; Hicks, 2012). ...
Article
Full-text available
Evaluating research is an established part of the research process, as funding agencies and governments seek to raise its quality and performance. The United Kingdom's Research Excellence Framework 2021 (REF2021) was the eighth formal assessment of research in UK universities. In Business and Management Studies (B&M), Sub-Panel 17, 108 universities submitted 16,038 research outputs and 539 impact case studies covering the period 2014-2020. Submissions were assessed by a panel of academic researchers and research users, nominated by a range of academic constituencies. The outcome was that the quality of UK research in B&M has continued to improve since REF2014: the quality profile for REF2021 had 79% of research assessed as 3* (internationally excellent) or 4* (world-leading). The paper explains and reports on our experiences of the peer review process, analyses the outcomes and discusses the state of research within the discipline. Subsequently, we consider the wider implications of the REF process, its methodologies and impacts, contributing to the debate about research quality in universities. The paper concludes with support for peer review and expresses caution against the automation of research quality assessment.
... The Swedish model, along with those implemented in Poland, the Slovak Republic, and Belgium, uses article citations as one of the main inputs, while Denmark, Finland, and Norway use the number of publications [8]. In many countries (USA, UK, Italy, and Australia) special assessment systems have been developed and they are constantly being improved and adapted to the requirements of the times [9]. ...
... Citations and publication count as a measure of scientific contribution: performance-based university research funding systems have been implemented in many European countries over the last few years [9]. The most common model is to use peer review procedures, but several countries have implemented metrics-based ex-post funding models, including Sweden. ...
Article
Full-text available
Evaluating the effectiveness of research activities is one of the topical issues in the higher education system. Despite this, studies that extensively assess the outputs of research units within a university are rare. The main goals of this study are (1) the development of a comprehensive methodology for assessing the research performance of such units and (2) testing this methodology by comparing the performance indicators of 37 research institutes and centers. Both quantitative and qualitative research methods were applied in this study. The research results can benefit government bodies, allowing them to make decisions about the allocation or reallocation of funding, and top management in higher education, for benchmarking and internal performance evaluation of research institutes and centers. This article contributes to the theoretical basis of research performance evaluation at HEIs and puts forward a step-by-step methodology.
... 2. University employer. This includes funding reallocated from national competitive (e.g., performance-based research funding: Hicks, 2012) or non-competitive block research grants, from teaching income, investments and other sources that are allocated for research in general rather than equipment, time, or specific projects. 3. Other university (e.g., as a visiting researcher on a collaborative project). ...
Article
Full-text available
Evaluating the effects of some or all academic research funding is difficult because of the many different and overlapping sources, types, and scopes. It is therefore important to identify the key aspects of research funding so that funders and others assessing its value do not overlook them. This article outlines 18 dimensions through which funding varies substantially, as well as three funding records facets. For each dimension, a list of common or possible variations is suggested. The main dimensions include the type of funder of time and equipment, any funding sharing, the proportion of costs funded, the nature of the funding, any collaborative contributions, and the amount and duration of the grant. In addition, funding can influence what is researched, how and by whom. The funding can also be recorded in different places and has different levels of connection to outputs. The many variations and the lack of a clear divide between “unfunded” and funded research, because internal funding can be implicit or unrecorded, greatly complicate assessing the value of funding quantitatively at scale. The dimensions listed here should nevertheless help funding evaluators to consider as many differences as possible and list the remainder as limitations. They also serve as suggested information to collect for those compiling funding datasets.
... share of highly cited papers) by amount of financial input would be obtained through these modalities rather than through traditional institutional block funding. Several justifications are advanced to explain this hypothesis (Geuna 2001;OECD 2002;Tapper and Salter 2003;Hicks 2012;Zacharewicz et al. 2019). ...
Article
Full-text available
Over the last decades, most EU countries have profoundly reshaped their public research funding systems by shifting from traditional institutional block funding towards more project-based mechanisms. The main rationale underlying this evolution builds on the assumption that project funding fosters research performance through the introduction of competitive allocation mechanisms. In contrast with the general increase of project funding, evidence is mixed regarding a positive effect of competitive funding mechanisms on research performance: some studies find a positive impact, others a negative one or no impact. Differences also appear across studies regarding the research actors, funding streams, and research outputs considered. This article integrates these different approaches through a multilevel design gathering funding inputs for 10 countries and 148 universities between 2011 and 2019 and assesses their impact on the quantity and quality of publications. Results highlight no impact of national and university-level competitive funding mechanisms on universities' highly cited publications and no clear effect on the quantity of publications.
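A multilevel design of this kind is commonly estimated as a mixed-effects regression with universities nested in countries. The sketch below illustrates one such specification; it is not the article's model, and the data file, variable names, and nesting choices are assumptions.

```python
# Sketch of a multilevel (mixed-effects) specification in the spirit of the
# design described above: university-year publication output regressed on
# competitive-funding shares, with universities nested in countries.
# Variable names and the data file are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("funding_panel.csv")   # hypothetical: university x year panel

model = smf.mixedlm(
    "log_publications ~ national_competitive_share + "
    "university_competitive_share + log_total_funding + C(year)",
    data=df,
    groups=df["country"],                 # country-level random intercepts
    re_formula="1",
    vc_formula={"university": "0 + C(university)"},  # universities within countries
)
result = model.fit()
print(result.summary())
```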
... A portfolio of quality publications in reputable journals is an asset that scholars are proud to showcase and build their careers on. Although the definition of quality varies from one country and institution to another, quantitative quality criteria dominate other aspects of scholarship (Pontika et al., 2022), and impact factor, acceptance rate, and number of citations are frequently used as quality indicators to assess scientific productivity (Hicks, 2012; Ma & Ladisch, 2019). The economics schools in the US may require publications in the top five economics journals for tenure, while many UK business schools require publications in 3- and 4-star journals in the ABS rankings. ...
Article
Full-text available
Purpose
This study takes advantage of newly released journal metrics to investigate whether local journals with more qualified boards have lower acceptance rates, based on data from 219 Turkish national journals and 2,367 editorial board members.
Design/methodology/approach
This study argues that journal editors can signal their scholarly quality by publishing in reputable journals. Conversely, editors publishing inside articles in affiliated national journals would send negative signals. The research predicts that high (low) quality editorial boards will conduct more (less) selective evaluation and their journals will have lower (higher) acceptance rates. Based on the publication strategy of editors, four measures of board quality are defined: number of board inside publications per editor (INSIDER), number of board Social Sciences Citation Index publications per editor (SSCI), inside-to-SSCI article ratio (ISRA), and board citation per editor (CITATION). Predictions are tested by correlation and regression analysis.
Findings
Low-quality board proxies (INSIDER, ISRA) are positively, and high-quality board proxies (SSCI, CITATION) are negatively associated with acceptance rates. Further, we find that receiving a larger number of submissions, greater women representation on boards, and Web of Science and Scopus (WOSS) coverage are associated with lower acceptance rates. Acceptance rates for journals range from 12% to 91%, with an average of 54% and a median of 53%. Law journals have a significantly higher average acceptance rate (68%) than other journals, while WOSS journals have the lowest (43%). Findings indicate some of the highest acceptance rates in the Social Sciences literature, including competitive Business and Economics journals that traditionally have low acceptance rates.
Limitations
Research relies on local context to define the publication strategy of editors. Findings may not be generalizable to mainstream journals and core science countries where emphasis on research quality is stronger and editorial selection is based on scientific merit.
Practical implications
Results offer useful insights into editorial management of national journals and allow us to make sense of local editorial practices. The importance of scientific merit for selection to national journal editorial boards is particularly highlighted for sound editorial evaluation of submitted manuscripts.
Originality/value
This is the first attempt to document a significant relation between acceptance rates and editorial board publication behavior.
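The board-quality analysis can be illustrated with a short sketch that correlates and regresses acceptance rates on the four proxies named above. It is not the study's code; the data file, control variables, and column names are assumptions.

```python
# Illustrative version of the acceptance-rate analysis: correlate and regress
# journal acceptance rates on the four board-quality proxies defined above
# (INSIDER, SSCI, ISRA, CITATION). Column names and controls are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("turkish_journals.csv")   # hypothetical: one row per journal

print(df[["acceptance_rate", "INSIDER", "SSCI", "ISRA", "CITATION"]].corr())

model = smf.ols(
    "acceptance_rate ~ INSIDER + SSCI + ISRA + CITATION + "
    "submissions + women_share + woss_indexed",
    data=df,
).fit(cov_type="HC1")
print(model.summary())   # expectation: INSIDER, ISRA > 0; SSCI, CITATION < 0
```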
... Research citation and academic evaluation are symbiotic (Hicks, 2012). It has become commonplace to compare research performance among institutions worldwide and against industry benchmarks to rank countries and institutions (Marginson, 2013). ...
Article
Retracted articles by Indian scholars have received significant attention in recent times. However, no comprehensive study has been conducted to analyse the citations of retracted papers authored by Indian researchers. This study aimed to assess the citations to retracted works published between 2001 and 2020, pre- and post-retraction. The study found an increase in retractions over time, with empirical data suggesting that the number of retractions has increased significantly, from 72 papers between 2001 and 2010 to 365 papers between 2011 and 2020. Duplication (n = 128) and plagiarism (n = 119) were the primary reasons for retraction. Notably, 90% of the retracted articles continued to receive citations after retraction. Among the retracted papers, eight received more than 50 post-retraction citations, 39 received 20 to 50 citations, 347 received one to 19 citations, and 43 were not cited at all post-retraction. There was an overall 8% decrease in citations after retraction. Retractions were observed across journals of varying impact factors, with a higher number of retractions observed in journals with an impact factor of less than 5 (n = 286; 65%). Furthermore, smaller research teams of two to five authors accounted for 72% of the total retractions.
... During the past decade, many nations around the world such as Australia, the Czech Republic, Finland, Norway, Poland, Turkey, the UK, and many others have chosen to implement performance-based research funding (Aagaard, 2015;Hicks, 2012;Kulczycki, 2017;Tonta, 2017) and incentive schemes (Franzoni et al., 2011;Quan et al., 2017). Such incentives are mostly related to the publication activities of researchers (Rochmyaningsih, 2019), which are traditionally analyzed by using multidisciplinary bibliographic data sources like the Web of Science, Scopus, Google Scholar, Dimensions and Crossref. ...
Article
Full-text available
The Arabic Citation Index (ARCI) was launched in 2020. This article provides an overview of the scientific literature contained in this new database and explores its possible usage in research evaluation. As of May 2022, ARCI had indexed 138,283 scientific publications published between 2015 and 2020. ARCI's coverage is characterised using the metadata available in scientific publications. First, I investigate the distributions of the indexed literature at various levels (research domains, countries, languages, open access). Articles make up nearly all the indexed documents, with a share of 99% of ARCI. The Arts & Humanities and Social Sciences fields have the highest concentration of publications. Most indexed journals are published in Egypt, Algeria, Iraq, Jordan, and Saudi Arabia. About 8% of publications in ARCI are published in languages other than Arabic. Second, I use an unsupervised machine learning model, LDA (Latent Dirichlet Allocation), and the text mining algorithm of VOSviewer to uncover the main topics in ARCI. These methods provide a better understanding of ARCI's thematic structure. Next, I discuss how ARCI can complement global standards in the context of a more inclusive research evaluation. Finally, I suggest a few research opportunities after discussing the findings of this study. Peer review: https://www.webofscience.com/api/gateway/wos/peer-review/10.1162/qss_a_00261
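The LDA step can be illustrated with a minimal sketch. It is not the author's pipeline: the input file and field names are assumptions, the number of topics is arbitrary, and Arabic-specific preprocessing (normalisation, stop words) is omitted for brevity.

```python
# Minimal sketch of topic modelling with LDA over indexed records, as in the
# thematic analysis described above. The input file, field names and the
# number of topics are illustrative assumptions; Arabic-specific
# preprocessing is omitted for brevity.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = pd.read_csv("arci_records.csv")["abstract"].fillna("")  # hypothetical

vectorizer = CountVectorizer(max_df=0.95, min_df=10)
dtm = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=20, random_state=0)
lda.fit(dtm)

# Print the top terms per topic to label the thematic structure.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-10:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```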
... Governmental attempts to make universities and colleges more effective and efficient have been a recurrent feature of reform initiatives throughout the world during the last decades (Christensen 2011; Enders, de Boer, and Weyer 2013). Although reforms tend to be country specific, there are also a number of commonalities among them, including the ambitions of providing institutions with more autonomy, streamlining institutional governance, developing new incentive structures for universities, introducing accountability schemes and performance targets, and aligning organisational structures to strategic aims (Capano 2011; Frølich, Christensen, and Stensaker 2019; Hicks 2012; Thomas et al. 2020). ...
Article
Full-text available
Many recent higher education reforms worldwide have been legitimated by their potential impact on the performance of universities and colleges. However, we know less about the actual impact of the changes implemented. This article examines the extent to which research performance can be associated with specific organizational characteristics at the department level. The analysis is based on Norwegian university departments, where high- and low-performing departments have been selected as cases for further investigations. The policy context is the organizational reform in Norway from 2016 onwards aiming at reorganizing the higher education landscape through institutional mergers. The key findings indicate that there are few distinct departmental characteristics associated with research performance, such as elected or appointed leadership, single or multi-campus organization, or departmental size. However, the study reveals that highly productive individuals do matter and suggests that cultural dimensions and working conditions may be interesting factors to pursue in further research.
... Research financing systems which are results-based do not usually differentiate their assessment approach with regards to disciplines or research fields (Hicks, 2012), though there are significant differences between disciplines, and there is also the so-called non-academic impact (Bastow et al., 2014). Public agencies financing research and research organisations bear a great responsibility for a more comprehensive impact of the studies which they support financially. ...
Chapter
Full-text available
This book provides analysis of current trends in research evaluation worldwide and compares the research assessment and innovation ecosystems in Austria, Bulgaria, the Czech Republic, Hungary, Lithuania, the Netherlands, Poland and Slovenia. It argues that in each country the research assessment system is interdependent with the national innovation system and the overall institutional governance/enforcement. The lead author, Albena Vutsova, has over 20 years of experience in research assessment both in Bulgaria and at European institutions (incl. JRC) and has been a member of the European Network for Research Evaluation in the Social Sciences and the Humanities. The three authors work at Sofia University, Bulgaria.
... Digital technologies have not only disrupted the centuries-long publishers-academia relationship but also significantly affected the process and practice of measurement of academic achievements. Following technological transformations, many countries have introduced performance-based research funding systems (Hicks, 2012;Kerr, 1975;Moed et al., 1995) or research assessment exercises (Elton, 2000). It is believed that such systems have fostered a "publish or perish" culture across the globe and increased the pressure on researchers to produce research outputs (Cronin and La Barre, 2004;De Rond, 2005;Ecklund et al., 2012;Seeber et al., 2019). ...
Preprint
Full-text available
Books are an important output in many fields of research. However, they pose a significant challenge for research assessment systems, partly because of the limited availability of information to support the assessment of books. To inform book assessment practices, I present a systematic examination of the ISBN Manual and the Global Register of Publishers (GRP). I evaluate the extent to which these two sources can be used to determine the genre and publisher of a book as well as the country in which a book was issued. My analysis focuses on books submitted to the research assessment systems in Lithuania and the UK from 2008 to 2020. I show how the ISBN Manual captures the complex interactions between publishers, their imprints, and other organisations active in academic publishing, revealing the pitfalls of measuring books' quality by their publisher's status. The results also indicate that the ISBN standard provides no basis for the book genres mandated by research assessment systems in some countries. Finally, I demonstrate how the ISBN Manual and metadata accumulated in the GRP are convenient tools for designers of research assessment systems and are suitable for identifying ISBN registrants and performing bibliometric analysis.
... This can create tension between 'pure' and 'applied' science and between mission- and curiosity-driven research. The problem is exacerbated by funders needing to demonstrate 'value for money' in short timeframes; in recent years funders have required that funded science research should demonstrate 'impact' [18,19]. This is predominantly measured in terms of citations, financial outcomes such as spin-out companies and new products that contribute to national gross domestic product, or the creation of a specific piece of policy [20-22]. An increased focus on minimizing risk reduces opportunities for 'blue skies' exploratory research and innovation approaches [23]. ...
Article
Full-text available
The ‘early modern’ (Renaissance) workshop was predicated on the idea that informal, open-ended cooperation enables participants to experience difference and develop new insights, which can lead to new ways of thinking and doing. This paper presents the insights that emerged from a conversation event that brought wide-ranging voices together from different domains in science, and across the arts and industry, to consider science leadership as we look to the future in a time of interlocking crises. The core theme identified was a need to regain creativity in science: in the methods of scientific endeavours, in the way science is produced and communicated, and in how science is experienced in society. Three key challenges for re-establishing a culture of creativity in science emerged: (i) how scientists communicate what science is and what it is for, (ii) what scientists value, and (iii) how scientists create and co-create science with and for society. Furthermore, the value of open-ended and ongoing conversation between different perspectives as a means of achieving this culture was identified and demonstrated.
... Research management includes processes related to scientific (knowledge production) and social (introduction of scientific knowledge into practice) dynamics (Hicks, 2012; Saghafian, Austin, & Traub, 2015; Uwizeye et al., 2022). Thus, the boundaries between administrative and scientific fields are blurred. ...
Article
Full-text available
The implementation of vocational higher education at The Army Polytechnic (Poltekad) has used the Triple Helix approach: universities, the private sector, and the government. In its development, Poltekad contributes to the direction of the Army's technological innovation. Observations of Poltekad research results show that they have yet to be utilized optimally in fulfilling the Army's equipment and weapons needs. In the last three years, 2020–2022, 163 final research assignments were recorded by Poltekad students, along with 40 research projects by Poltekad lecturers. However, only eight lecturers' projects have become research models with defense development potential for the Army. This study uses a qualitative research method, specifically a case study. The results show that Poltekad has not been able to play a role as a research university, but is still limited to its function as a vocational education institution in the technology field. The allocation of research funds for Poltekad is still relatively small compared to the benefits of research in supporting the development of Army armaments. The research themes of lecturers and students do not entirely follow the needs of Indonesian Army weaponry technology. The absence of an external monitoring and evaluation team for Poltekad research programs, limited educational laboratory facilities, and a shortage of qualified lecturers acting as researchers are among the reasons why the utilization of Poltekad research results cannot be optimized and carried downstream into the production activities of the Defense Industry in Indonesia, as stated in the Poltekad vision. Poltekad, as a research university, needs to pay attention to elements such as 1) budgeting, 2) research programs, 3) monitoring and evaluation, 4) researcher competence, 5) facilities, 6) information and communication systems, and 7) scientific publications. Downstreaming of research themes and results should be carried out according to needs and developments.
... It characterizes the university's ability to generate knowledge at the cutting edge of science and to produce high-quality outputs. Research productivity has been linked to research funding opportunities (Hicks, 2012) and career progression in academia (Carr et al., 2021) across the world. It is an important factor in determining "individual research performance and academic rank" (Abramo et al., 2011, p. 915). ...
Article
Full-text available
The purpose of this research study was to examine challenges experienced by academics at Stellenbosch University that hindered their research productivity during the COVID-19 pandemic, involving 248 academics who completed an online questionnaire. A qualitative analysis of open-ended responses revealed five themes that characterized the extent to which the COVID-19 pandemic impacted these academics' research productivity: Online Teaching, Increase in Research Productivity, No Difference to Research Productivity, Reduced Research Productivity, and No Research Productivity. A mixed methods analysis revealed that only 25% of academics were not adversely affected by online teaching in terms of research productivity. Two-thirds of the academics experienced either a reduction in productivity or reported no research productivity at all. Compared to academics who reported an increase in productivity, academics who reported no research productivity at all tended to be women, not to hold a professor position, not to have a doctorate degree, to have less experience as academics, and to have access at home to a tablet but not to cellphone data.
... Later, Smith et al. [39] opened a new line of analysis of performance-based systems with an exploration of the Research Excellence Framework aimed at changing the education funding system. Hicks [40] then explained the impact of prestige on education investment. They were followed by Agyemang and Broadbent [41], who demonstrated the negative impact of internal university processes on the effectiveness of the AFUAR reform. ...
... Both can be understood as enactments of a different accountability relationship between PROs and national governments, which are part of the more general trend of the rise of New Public Management in the public sector (de Boer, Enders and Schimank 2007; Weingart and Maasen 2007; Bleiklie et al. 2011; Musselin 2021). While the nature and use of national research evaluation systems and performance-based funding systems vary widely between OECD countries (Whitley and Gläser 2007; Thomas et al. 2020), governments have generally moved away from providing unconditional core funding to PROs, and instead developed allocation models tied to performance and policy goals (Hicks 2012). At the same time, research funding has increasingly been distributed through competition for grants arranged by national and international funding bodies, decreasing the relative share of block funding (Whitley et al., 2018). ...
Article
Full-text available
Public research organizations respond to external pressures from national research evaluation systems, performance-based funding systems and university rankings by translating them into internal goals, rules and regulations and by developing organizational identities, profiles and missions. Organizational responses have primarily been studied at the central organizational level, and research on the steering of research has primarily focused on the impacts of performance-based funding systems. However, research evaluation exercises may also have a formative impact, especially below the central organizational level. This paper uses a case study of a research unit of a biomedical research school in the Netherlands to explore the organizational response to a relatively critical external assessment report. It shows that the participation in the Dutch research evaluation cycle legitimated the formation of a new organizational identity for the research unit, which functions as a frame that suggests to staff members a new interpretation of the type of research that is at the core of what the research unit does. We identify three additional steering mechanisms that support the enactment of the organizational identity: steering by resource allocation, by suggesting and by re-organizing. We, furthermore, explore the epistemic effects – the direction and conduct of research – of the organizational response, through interview data in combination with a bibliometric analysis.
... Resource-seeking behaviours ("academic capitalism": Slaughter & Leslie, 2001) are long-established norms in several major research countries (Johnson & Hirt, 2011; Metcalfe, 2010). Research funding is now primarily awarded for achievements (i.e., performance-based funding: Hicks, 2012) or future promise, through competitive grants (OECD, 2014). This is supplemented by incentives to seek finance from industry and other non-academic sources for research with non-academic benefits (Laudel, 2005). ...
Article
Full-text available
Whilst funding is essential for some types of research and beneficial for others, it may constrain academic choice and creativity. Thus, it is important to check whether it ever seems unnecessary. Here we investigate whether funded UK research tends to be higher quality in all fields and for all major research funders. Based on peer review quality scores for 113,877 articles from all fields in the UK’s Research Excellence Framework (REF) 2021, we estimate that there are substantial disciplinary differences in the proportion of funded journal articles, from Theology and Religious Studies (16%+) to Biological Sciences (91%+). The results suggest that funded research is likely to be higher quality overall, for all the largest research funders, and for 30 out of 34 REF Units of Assessment (disciplines or sets of disciplines), even after factoring out research team size. There are differences between funders in the average quality of the research supported, however. Funding seems particularly associated with higher research quality in health-related fields. The results do not show cause and effect and do not take into account the amount of funding received but are consistent with funding either improving research quality or being won by high quality researchers or projects. Peer Review https://www.webofscience.com/api/gateway/wos/peer-review/10.1162/qss_a_00254
... Over the last two decades, journals in different fields have increasingly required authors to disclose their contributions as part of the research paper (Larivière et al., 2021). As authorship is a proxy for scientific productivity (Cronin, 2001), coupled with an increasing trend to institute metric-guided evaluative mechanisms for governing academia (e.g., Abbott et al., 2010; Hicks, 2012; Wilsdon et al., 2015), there is a need to validate different models for the allocation of author credit against the information presented in such author contribution statements. Although Hagen (2010, 2013) has evaluated several models for allocating author credit against perceived author credit scores, no study to date has validated such models against the author contribution statements of scientific articles. ...
Article
Full-text available
This paper explores the relationship between an author's position in the bylines of an article and the research contributions they have made to analyze the validity of five bibliometric counting methods (arithmetic, fractional, geometric, harmonic, and harmonic parabolic author credit allocation) in the field of Chemical Biology. By classifying the tasks found in the author contribution statements of articles published in Nature Chemical Biology according to a three-tiered scheme, it was possible to divide the authors into three types: core-layer authors, middle-layer authors, and outer-layer authors. When ordering the authorships according to the position in the bylines, there is a distinct u-shaped distribution for the share of authors involved in writing the paper or designing the research (i.e., core authors) and for the average number of tasks performed by each author. The harmonic parabolic model best distributes author credit according to the observed empirical data. It also outperforms the other models in predicting which authors are core authors and which are not. The harmonic parabolic model should be the preferred choice for bibliometric exercises in chemical biology and fields with similar practices regarding authorship order.
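The author-credit models named here follow standard formulas from the bibliometric literature. As a rough illustration, the Python sketch below implements four of them under one common formulation, with authors indexed by byline position; the harmonic parabolic variant, which produces the u-shaped first-and-last-author pattern described above, is left out because its exact weighting is model-specific.

```python
def author_credit(n_authors: int, model: str = "fractional") -> list[float]:
    """Return illustrative credit shares for byline positions 1..n_authors.

    Common formulations from the bibliometric literature; the harmonic
    parabolic model discussed in the abstract is not reproduced here.
    """
    positions = range(1, n_authors + 1)
    if model == "fractional":       # equal split among all authors
        weights = [1.0] * n_authors
    elif model == "arithmetic":     # credit falls linearly with position
        weights = [float(n_authors + 1 - i) for i in positions]
    elif model == "geometric":      # credit halves with each later position
        weights = [2.0 ** (n_authors - i) for i in positions]
    elif model == "harmonic":       # credit proportional to 1 / position
        weights = [1.0 / i for i in positions]
    else:
        raise ValueError(f"unknown model: {model}")
    total = sum(weights)
    return [w / total for w in weights]


if __name__ == "__main__":
    for m in ("fractional", "arithmetic", "geometric", "harmonic"):
        print(m, [round(s, 3) for s in author_credit(4, m)])
```

For a four-author paper, for example, this yields equal shares of 0.25 under fractional counting but shares of 0.48, 0.24, 0.16 and 0.12 under harmonic counting.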
... Historically, universities have been organized as heterogeneous and highly decentralized entities (Cohen et al., 1972), perhaps primarily held together by their names and administrative units. The construction of actors has recently received assistance from various evaluations, including ratings, rankings, and other performance-based assessments, which have spread as prominent tools with which to scrutinize entire universities across Europe (Hicks, 2012). During such assessments, the quality of educational programmes, the number of doctoral degrees, and the impact of journal publications are aggregated and evaluated, along with the amount of funding that whole universities attract independently through their scientists. ...
... First, government funding has become clustered across fewer institutions (Jongbloed & Lepori, 2015), and while it remains the main source of research income for universities, third-party funding has increased in relative terms (Geuna, 2001; Lepori et al., 2007). Second, governance and attribution mechanisms of public funding have moved from core funding (block grants) to competitive allocation systems (Cocos & Lepori, 2020; Hicks, 2012; Lepori et al., 2007). Some countries, such as the UK, have also started to allocate funding based on research impact, i.e., the contribution of research to social value and national priorities (Watermeyer, 2016). ...
Article
Full-text available
The dynamics of basic and applied research at universities and in industry have steadily changed since the 1980s, with the private sector reducing its investments in science and universities experiencing significant remodelling in the governance of their funding. While studies have focussed on documenting these changes in industry, less attention has been paid to the trajectories of basic and applied research in universities. This work contributes to filling this gap by looking at the evolution of publicly funded research patented by universities between 1978 and 2015. First, we adopt a critical perspective on the basic versus applied dichotomy and identify patents according to three typologies of research: basic, mission-oriented, and applied research. Second, we describe the evolution of these three typologies in universities compared to industry. Our results show that over the years, patents from publicly funded academic research have become more oriented towards pure basic research, with mission-oriented basic research and pure applied research decreasing from the late 1990s. These results complement and extend the literature on basic and applied research dynamics in the private sector. By introducing mission-oriented research as a type of basic research with consideration of use, the work problematises the basic and applied research dichotomy and provides insights into the evolution of academic research focus, offering a more complex picture of how university research contributes to industry and broader social value creation.
... The choice of counting method can make a clear difference even on a national scale (Aksnes et al., 2012; Sivertsen et al., 2019), and so it is important to consider the issue carefully for important applications. National performance-based funding systems (PBFS) (Hicks, 2012) are high-profile examples where the choice of counting method can have substantial policy, financial and reputational impacts. Full counting may be preferred when reputation or funding is at stake, in the belief that collaborative research is good and should be incentivised (Bloch & Schneider, 2016), but the influence of the decision should still be assessed. ...
Article
Full-text available
Collaborative research causes problems for research assessments because of the difficulty in fairly crediting its authors. Whilst splitting the rewards for an article amongst its authors has the greatest surface-level fairness, many important evaluations assign full credit to each author, irrespective of team size. The underlying rationales for this are labour reduction and the need to incentivise collaborative work because it is necessary to solve many important societal problems. This article assesses whether full counting changes results compared to fractional counting in the case of the UK's Research Excellence Framework (REF) 2021. For this assessment, fractional counting reduces the number of journal articles to as little as 10% of the full counting value, depending on the Unit of Assessment (UoA). Despite this large difference, allocating an overall grade point average (GPA) based on full counting or fractional counting gives results with a median Pearson correlation within UoAs of 0.98. The largest changes are for Archaeology (r=0.84) and Physics (r=0.88). There is a weak tendency for higher scoring institutions to lose from fractional counting, with the loss being statistically significant in 5 of the 34 UoAs. Thus, whilst the apparent over-weighting of contributions to collaboratively authored outputs does not seem too problematic from a fairness perspective overall, it may be worth examining in the few UoAs in which it makes the most difference.
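Full and fractional counting differ only in how an institution's share of each co-authored article is computed. A minimal Python sketch of the two rules is given below, using invented article records and institution names; it is only meant to make concrete the difference that the abstract quantifies for REF 2021.

```python
from collections import defaultdict

# Hypothetical records: each article maps institutions to their number of authors.
articles = [
    {"uni_A": 2, "uni_B": 1},   # three authors, two at uni_A
    {"uni_A": 1},               # single-authored
    {"uni_B": 3, "uni_C": 1},   # four authors across two institutions
]

full = defaultdict(float)        # each contributing institution counts the article once
fractional = defaultdict(float)  # institutions share the article by author numbers

for counts in articles:
    n_authors = sum(counts.values())
    for institution, k in counts.items():
        full[institution] += 1
        fractional[institution] += k / n_authors

print(dict(full))        # uni_A: 2.0, uni_B: 2.0, uni_C: 1.0
print(dict(fractional))  # uni_A: ~1.67, uni_B: ~1.08, uni_C: 0.25
```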
... The key areas where this increase in regulatory burden is particularly apparent are the domains of performance-based governance and quality assurance. In Australia, national performance-based governance has become particularly extensive when it comes to the steering of university-based research, comprising national policy instruments such as performance-based funding mechanisms (Hicks, 2012; Woelert & McKenzie, 2018), the national research evaluation scheme, and the regular national research grant scheme, each of which has its own comprehensive set of rules and reporting requirements. The national performance-based research funding system, for example, was introduced with the ambition of rewarding universities' productivity and doing so in a relatively efficient manner, by tying funding allocations to specified results or 'outputs'. ...
Article
Full-text available
Anecdotal evidence suggests that there is growing concern about increasing administrative burden within universities around the world. At the same time, the literature explicitly devoted to the issue of administrative burden within universities remains relatively scant. Drawing on various bodies of literature and reflections on the situation at Australian universities, this paper (a) presents a conceptualisation of what constitutes administrative burden, considering its organisational implications for universities, (b) interrogates a range of potential drivers of administrative burden, and (c) outlines avenues for both future research into and for practical responses to the issue. The specific contributions of this paper include, first, showing that administrative burden can impact universities’ core activities not only indirectly but also more directly, and second, illustrating that some of the key changes that were meant to make universities more efficient may have inadvertently increased levels of administrative burden.
... The disadvantage is that it could encourage researchers to publish in lower-quality journals to increase their number of published articles (Butler, 2003). Therefore, certain countries also incorporate the Thomson Reuters impact factor in their calculations, while other countries incorporate citation information (Hicks, 2012). Wallin (2005) also noted that there is not enough scientific evidence to use quantitative bibliometrics, beyond a doubt, to assess research quality. ...
Technical Report
Full-text available
Water scarcity remains one of the challenges the world currently faces. For the period 2016 to 2020 the World Economic Forum identified the water crisis as one of the top five global risks. The World Economic Forum defines a water crisis as "a significant decline in the available quality and quantity of fresh water, resulting in harmful effects on human health and/or economic activity" (World Economic Forum, 2020). Water scarcity already has a negative impact on a quarter of the world's population (World Resources Institute, 2019). The situation in South Africa is also dire, with the country facing a projected 17% water deficit by 2030. Challenges the water sector faces include recurrent droughts, insufficient water infrastructure maintenance and investment, inequalities in access to water, a decline in water quality and a lack of skilled water engineers (DWS, 2018). The Department of Water and Sanitation (DWS) is now known as the Department of Human Settlements, Water and Sanitation (DHSWS). This reality requires the implementation and management of innovative solutions, technologies and processes carried out by highly skilled individuals. It has led to the development of South Africa's Water Research, Development and Innovation (RDI) Roadmap (2015–2025) through a partnership between the Department of Science and Innovation (DSI) and the Water Research Commission (WRC). The DHSWS endorses the venture as the implementation plan for the National Water Resource Strategy II (NWRS2), in particular the chapters on Research and Innovation (Chapter 14) and Water Sector Skills and Capacity (Chapter 15) (WRC, 2015). There are also linkages between the Roadmap and the Industrial Policy Action Plan 2017–2021 (WRC, 2018). The goal of the Water RDI Roadmap is to provide a high-level planning tool to facilitate and guide the refocusing of research, the reprioritisation of funds, the synergising of existing initiatives and the ring-fencing of new resources to address the challenges in the water innovation system. Seven thematic clusters form the focus of the Roadmap and were developed during 2014 and 2015. In brief, the water community was divided into four sectors: Agriculture, Industry, Public Sector and Environmental Protection. In the first set of workshops participants identified a list of needs in the respective water sectors. The list of needs was reviewed, and interventions were identified, during a second set of workshops. Lastly, participants grouped the reviewed needs and interventions into seven clusters, which formed the basis of the programme of work in the Water RDI Roadmap.
Chapter
In this study, we focus on the central question of how Turkey managed the expansion of its higher education system in terms of public policies and public interest over the past 20 years. We analyzed public policy documents, academic papers, and written records such as news websites and newspaper columns to evaluate how the recent expansion in Turkish higher education evolved and how it impacted the quality of Turkish universities. Our findings indicated that during the third expansion era starting in the early 2000s, the Turkish higher education system was successful in immensely improving access to higher education. The government's openness to expansion and the pressure from members of Parliament representing different areas of Turkey were effective in this process. On the other hand, after the expansion period, the quality of instruction, student services, and research was impacted negatively in many newly founded universities. In conclusion, from a citizen-centered public policy making perspective, the relevant literature and the written records from different sources show that the expansion was a correct policy decision; however, all stakeholders of the higher education system had questions about how the process was managed. Keywords: Higher education, Public policy, Policy analysis, Expansion, Universities, Policy decision, Stakeholders, Citizen-centered, Government, Education system
Article
The conditions of mainstream research funding constrain risky, novel research. However, alternative grants are emerging. We study grantees of a double-blinded funding scheme targeting risky, novel research: The Villum Experiment (VE). Without prompting, scientists juxtaposed the experience of performing research under these conditions with that of performing research funded by mainstream grants: fun and less fun. The conditions of VE felt less intrusive and appealed to their self-perceptions and idealized views of scientific work, which shaped how they conducted the funded research. This paper makes three contributions: (1) it reaffirms that how researchers experience grant conditions affects whether a scheme affords what it intends, (2) it highlights that the affordances of research funding are relative to other concurrent funding options, and (3) it shows that small, more broadly allocatable grants can afford scientists a protected space for autonomous research, usually associated with elusive tenure positions or European Research Council (ERC) grants.
Article
Scientific advisory boards are frequently established to provide scientific insights and advice to policymakers. Advisory board appointing bodies often state that research excellence and scientific seniority are the main grounds on which advisory board members are selected. Many authors have pointed out that there is more to giving good scientific advice than just being an expert for a specific research field. The aim of this study is to analyse if and how research excellence correlates with the probability of being appointed as a scientific advisory board member. We collected data for scientific advisory boards from both the USA and Germany. We use logit regression models to analyse how research excellence correlates with the probability of appointment to a scientific advisory board. Our results suggest that research excellence is insignificant or even correlates negatively with the probability of being appointed to a scientific advisory board.
Book
What is the point of publishing in the humanities? This Element provides an answer to this question. It builds on a unique set of quantitative and qualitative data to understand why humanities scholars publish. It looks at basic characteristics such as publication numbers, formats, and perceptions, and at differences between national academic settings, alongside the influences of the UK's Research Excellence Framework and the German Exzellenzinitiative. The data involve a survey of more than 1,000 humanities scholars and social scientists in the UK and Germany, allowing for a comprehensive comparative study, and a series of qualitative interviews. The resulting critique provides scholars and policy makers with an accessible and critical work about the particularities of authorship and publishing in the humanities. And it gives an account of the problems and struggles of humanities scholars in their pursuit of contributing to discourse and of being recognised for their intellectual work.
Chapter
Policy ideas are seldom original; they are often borrowed from other jurisdictions and sometimes adapted to the local context. This chapter reviews research on policy diffusion within the U.S. higher education context, with a focus on the diffusion of higher education policies across states. In addition to reviewing existing literature on U.S. higher education policy diffusion, we present findings from a study that examines newspaper language related to state-level performance-based funding policies for higher education. We focus on two states that considered performance-based funding but were among the few states that did not implement this higher education funding approach. By examining these two states, we are able to explore reasons for non-implementation and shed light on the phenomenon of resistance to policy diffusion.
Article
Full-text available
This study aims to analyze Korean research trends in ‘Art’ and ‘Design’. In particular, it concentrates on identifying factors that significantly affect the citation counts of Korean ‘Art’ and ‘Design’ articles. All 18,872 ‘Art’ and ‘Design’ related articles published in 40 KCI-accredited journals from 2002 to 2016, together with the citing articles published from 2002 to 2021, formed the context of this study. Four journal-level, three article-level and three author-level factors were examined as potential determinants of the citation counts. Reflecting the differences in the scope and strength of research impact between ‘Art’ and ‘Design’ research, two separate regression tests based on the citations of the ‘Art’ and ‘Design’ fields produced dissimilar results. To increase research impact in the ‘Art’ field, it is advantageous to publish articles in journals that maintain higher research impact and have a more diverse topical scope. To exert higher research impact in the ‘Design’ field, it is better to publish articles in Korean and to study popular topics. The main contributions of this study are, first, to expand the limited bibliometric focus to research impact in Korean ‘Art’ and ‘Design’ research and, second, to make the first attempt to estimate the influence of various determinants on the research impact of Korean ‘Art’ and ‘Design’ articles using predictive modeling.
Preprint
Full-text available
Science is a cumulative activity, which can manifest itself through the act of citing. Citations are also central to research evaluation, thus creating incentives for researchers to cite their own work. Using a dataset containing more than 63 million articles and 51 million disambiguated authors, this paper examines the relative importance of self-citations and self-references in the scholarly communication landscape, their relationship with the age and gender of authors, as well as their effects on various research evaluation indicators. Results show that self-citations and self-references evolve in different directions throughout researchers' careers, and that men and older researchers are more likely to self-cite. Although self-citations have, on average, a small to moderate effect on authors' citation rates, they highly inflate citations for a subset of researchers. A comparison of the abstracts of cited and citing papers to assess the relatedness of different types of citations shows that self-citations are more similar to each other than other types of citations, and therefore more relevant. However, researchers who self-reference more tend to include less relevant citations. The paper concludes with a discussion of the role of self-citations in scholarly communication.
Article
Full-text available
The aim of this article is to clarify the nature of the management style most suited to the emergence of networked governance. The paradigms of traditional public administration and new public management sit uncomfortably with networked governance. In contrast, it is argued the public value management paradigm bases its practice in the systems of dialogue and exchange that characterize networked governance. Ultimately, the strength of public value management is seen to rest on its ability to point to a motivational force that does not solely rely on rules or incentives to drive public service practice and reform. People are, it suggests, motivated by their involvement in networks and partnerships, that is, their relationships with others formed in the context of mutual respect and shared learning. Building successful relationships is the key to networked governance and the core objective of the management needed to support it.
Article
Full-text available
We use the principal-agent model as a focal theoretical frame for synthesizing what we know, both theoretically and empirically, about the design and dynamics of the implementation of performance management systems in the public sector. In this context, we review the growing body of evidence about how performance measurement and incentive systems function in practice and how individuals and organizations respond and adapt to them over time, drawing primarily on examples from performance measurement systems in public education and social welfare programs. We also describe a dynamic framework for performance measurement systems that takes into account strategic behavior of individuals over time, learning about production functions and individual responses, accountability pressures, and the use of information about the relationship of measured performance to value added. Implications are discussed and recommendations derived for improving public sector performance measurement systems.
Article
Full-text available
Conventional liberal frameworks – in which power is seen as the property of states, and repressive in character, and market and state exclude each other – are unable to comprehend the recent changes in liberal government, including the government of systems and institutions in higher education. Neo-liberal government rests on self-managing institutions and individuals, in which free agents are empowered to act on their own behalf but are steered from a distance by policy norms and rules of the game. In the universities, government-created markets and quasi-markets have been used to advance both devolution and central control simultaneously, and national government and institutional management are increasingly implicated in each other. These issues are explored in relation to recent higher education literature, and empirically, the latter by examining the changes in the Australian higher education system in the last decade. The Australian system provides an example of a quasi-market in which the development of a stronger institutional management, the introduction of government-institution negotiations over educational profiles, and the new systems of competitive bidding, performance management and quality assessment have all been used to steer academic work and to install a process of continuous self-transformation along modern neo-liberal lines. Following a change of government in 1996 there has been some movement from a quasi-market to a more fully developed economic market, but no relaxation of government control.
Article
Full-text available
Many countries have introduced evaluations of university research, reflecting global demands for greater accountability. This paper compares methods of evaluation used across twelve countries in Europe and the Asia-Pacific region. On the basis of this comparison, and focusing in particular on Britain, we examine the advantages and disadvantages of performance-based funding in comparison with other approaches to funding. Our analysis suggests that, while initial benefits may outweigh the costs, over time such a system seems to produce diminishing returns. This raises important questions about its continued use.
Article
Full-text available
This paper describes the development of research evaluation in Spain. It assumes that research evaluation, R&D policy and programme evaluation are embedded in the development of an R&D system and are characterised by general Spanish policy-making. Research evaluation, in a context of delegation and as a self-organising system for research actors guaranteed by the state, has been strongly developed in the last few years; R&D policy and programme evaluation is less institutionalised. The explanation is linked to the sequence of reforms of the R&D system and to the set-up of the first Spanish science and technology policy. The support of the European Commission (HCM contract CHRX-CT93-0240) and the Spanish National R&D Plan (projects SEC 93-0688 and SEC 94-0796) is acknowledged.
Book
The establishment of national systems of retrospective research evaluations is one of the most significant of recent changes in the governance of science. In many countries, state attempts to manage public science systems and improve their quality have triggered the institutionalisation of such systems, which vary greatly in their methods of assessing research performance, and consequences for universities. The contributions to this volume discuss, inter alia, the birth and development of research evaluation systems as well as the reasons for their absence in the United States, the responses by universities and academics to these new governance regimes, and their consequences for the production of scientific knowledge. By integrating new theoretical approaches with country studies and studies of general phenomena such as university rankings and bibliometric evaluations, the book shows how these novel state steering mechanisms are changing the organisation of scientific knowledge production and universities in different countries. In combining the latest research with an overview of trends in the changing governance of research, the book is essential not only for scholars engaged in higher education research, science policy studies, and the sociology of science but also for policy makers and analysts from science policy and higher education policy as well as university managers and senior scientists.
Article
This paper reports the results of empirical research designed to explore the impact of research selectivity on the work and employment of academic economists in U.K. universities. Research selectivity is seen as part of the general trend toward managerialism in higher education in both the U.K. and abroad. Managerialism based on performance indicators and hierarchical control has been contrasted with collegiate control-based or informal peer review. However, analysis of the academic labor process has idealized collegiate relations at the expense of professional hierarchies and intellectual authority relations. We argue that in the U.K., there has evolved a mainstream economics which is located within a well-defined neoclassical core. We find that the existence of lists of core mainstream journals which are believed to count most in the periodic ranking exercise poses a serious threat to academic freedom and diversity within the profession, institutionalizing the control which representatives of the mainstream exercise over both the academic labor process and job market. In this way, managerialism combines with peer review to outflank resistance to new forms of controlling academic labor at the same time as reinforcing disciplinary boundaries through centralized systems of bureaucratic standardization and control.
Article
Technical and political difficulties involved in allocating block infrastructure research grants to institutions in large, post-binary higher education systems are identified through consideration of the operation of the British Research Assessment Exercise (RAE) and the Australian Composite Index used for allocation of the Research Quantum. These two mechanisms use distinctively different assessment approaches but each provides the basis for annual allocations of substantial public funds. Both have generated considerable controversy, involved their institutions and their staffs in considerable additional work and effort, and led to various unintended consequences, especially affecting institutional and researcher behaviour. While the RAE and the Research Quantum have limited objectives, comparison with other national systems of research assessment raises the possibility of the design of assessment systems that could perform a variety of purposes, including allocation of block research grants, quality assurance and public accountability, informing national research policy, and international benchmarking.
Article
Given that the current Research Assessment Exercise (RAE 2001) has been completed, it is an appropriate time to explore the impact of the RAEs upon the character of British higher education. This timeliness is reinforced by the earlier publication of HEFCE's own ‘Review of Research’ (September 2000) and the report from the House of Commons’ Select Committee on Science and Technology (April 2000), with a report due in April 2003 from the Joint Funding Bodies (under the auspices of Gareth Roberts). We are therefore in a period of review and consultation, which may culminate in a new assessment regime or, as its severest critics would hope, even its demise. While our analysis genuflects to these contemporary developments, it is constructed within a framework that interprets the RAE process as constituting a continuous struggle for the control of the production of high-status knowledge.
Article
A system of research assessment was developed and implemented in Hong Kong during the period from 1991 to 1994 as an input to the assessment of the public recurrent funding allocations of the territory's higher education institutions and as an extension of the University Grants Committee's other quality assurance activities. Refinements were subsequently introduced for the next two assessment exercises in 1996 and 1999. This paper describes the evolution of the process, identifies some significant differences from that in the UK on which it was modelled, and evaluates the 1999 research assessment exercise, in particular the application in that context of the Carnegie Foundation's definitions of research and research-related scholarly activities, viz. the scholarships of "discovery, integration, application and teaching".
Article
Year-on-year trends in research outputs show increases in research activity as the date of the research assessment exercise—in New Zealand the Performance-Based Research Fund (PBRF)—looms. Moreover, changes with time in the number and types of conference presentation indicate that the vehicle of publication is also being influenced by the PBRF. Within New Zealand business schools, relating the published journal articles to the Australian Business Deans Council rankings list shows a trend towards more publications of lower rank, raising doubts about whether the rhetoric about the PBRF raising the quality of research is really justified. This ‘drive’ towards increasing numbers of research outputs is also fostered by an increasing trend towards co-authorship in publishing across all disciplines. Keywords: Research outputs, Research publications, Research quality, Research collaboration, Author collaboration, Performance-based research fund (PBRF)
Chapter
Many recent discussions of state science and technology policies in OECD economies since the end of the Second World War have distinguished a number of distinct phases or ‘paradigms’ (Ruivo 1994) in state-science relations, which reflected both different perceptions of the role of scientific research in industrialised societies and changes in the size and complexity of the public science systems (see e.g. Brooks 1990; Freeman and Soete 1997: 374-395; Martin 2003).
Article
The article draws on two research projects to explore the implications of policy change in the UK for academic identities within a predominantly communitarian theoretical perspective. It focuses on biological scientists and science policies. It examines the impacts of changes upon the dynamic between individuals, disciplines and universities within which academic identities are formed and sustained and upon individual and collective values central to academic identity, namely the primacy of the discipline in academic working lives and academic autonomy. Challenges to these have been strong but they have retained much of their normative power, even if the meaning of academic autonomy has changed. Communitarian theories of academic identity may need to be modified in the contemporary environment but they do not need to be abandoned.
Article
Since 1980, national university departmental ranking exercises have developed in several countries. This paper reviews exercises in the U.S., U.K. and Australia to assess the state-of-the-art and to identify common themes and trends. The findings are that the exercises are becoming more elaborate, even unwieldy, and that there is some retreat from complexity. There seems to be a movement towards bibliometric measures. The exercises also seem to be effective in enhancing university focus on research strategy.
Article
Australia’s share of publications in the Science Citation Index (SCI) has increased by 25% in the last decade. The worrying aspect associated with this trend is the significant decline in citation impact Australia is achieving relative to other countries. It has dropped from sixth position in a ranking of 11 OECD countries in 1988, to 10th position by 1993, and the distance from ninth place continues to widen. The increased publication activity came at a time when publication output was expected to decline due to pressures facing the higher education sector, which accounts for over two-thirds of Australian publications. This paper examines possible methodological and contextual explanations of the trends in Australia’s presence in the SCI, and undertakes a detailed comparison of two universities that introduced diverse research management strategies in the late 1980s. The conclusion reached is that the driving force behind the Australian trends appears to lie with the increased culture of evaluation faced by the sector. Significant funds are distributed to universities, and within universities, on the basis of aggregate publication counts, with little attention paid to the impact or quality of that output. In consequence, journal publication productivity has increased significantly in the last decade, but its impact has declined.
Article
Nobel Prizes are an important indicator of research excellence for a country. Spain has not had a science Nobel Prize winner since 1906, although its gross domestic product (GDP) is high, its research and development (R&D) investments, in monetary terms, are high, and its conventional bibliometric parameters are fairly good. Spanish research produces many sound papers that are reasonably cited but does not produce top-cited publications. This absence of top-cited publications suggests that important achievements are scarce and, consequently, explains the absence of Nobel Prize awards. I argue that this negative research trend in Spain is caused by the extensive use of formal research evaluations based on the number of publications, impact factors, and journal rankings. These formal evaluations were introduced to establish a national salary bonus that mitigated the lack of research incentives in universities. When the process was started, the results were excellent, but it has now been kept in place too long and should be replaced by methods that determine the actual interest of the research. However, this replacement requires greater involvement of universities in stimulating research.
Chapter
We analyze Italy's recent research evaluation exercise (VTR) as a salient example in discussing some internationally relevant issues emerging from the evaluation of research in economics. We claim that evaluation and its criteria, together with its linkage to research institutions' financing, are likely to affect the direction of research in a problematic way. As the Italian case documents, it is specifically economists who adopt unorthodox paradigms or pursue less diffused topics of research that should be concerned about research evaluation and its criteria. After outlining the recent practice of economic research in Italy and highlighting the relevant scope for pluralism that traditionally characterizes it, we analyze the publications submitted for evaluation to the VTR. By comparing these publications to all the entries in the EconLit database authored by economists located in Italy, we find a risk that the adopted ranking criteria may lead to disregarding historical methods in favor of quantitative and econometric methods, and heterodox schools in favor of mainstream approaches. Finally, by summarizing the current debate in Italy, we claim that evaluation should not be refused by heterodox economists, but rather that a reflection on the criteria of evaluation should be put forward at an international level in order to establish fair competition among research paradigms, thus, preserving pluralism in the discipline.
Article
In this paper, we describe the development of a methodology and an instrument to support a major research funding allocation decision by the Flemish government. Over the last decade, and in parallel with the decentralization and the devolution of Belgian federal policy authority towards the various regions and communities in the country, science and technology policy has become a major component of regional policy making. In the Flemish region, there has been an increasing focus on basing the funding allocation decisions that originate from this policy decentralization on "objective, quantifiable and repeatable" decision parameters. One of the data sources and indicator bases that have received ample attention in this evolution is the use of bibliometric data and indicators. This has now led to the creation of a dedicated research and policy support staff, called "Steunpunt O&O Statistieken," and the first-time application of bibliometric data and methods to support a major inter-university funding allocation decision. In this paper, we analyze this evolution. We show how bibliometric data have for the first time been used to allocate 93 million Euro of public research money between 6 Flemish universities for the fiscal year 2003, based on Web of Science SCI data provided to "Steunpunt O&O Statistieken" via a license agreement with Thomson-ISI. We also discuss the limitations of the current approach, which was based on inter-university publication and citation counts. We provide insights into future adaptations that might make it more representative of the total research activity at the universities involved (e.g., by including data for the humanities) and of its visibility (e.g., by including impact measures). Finally, based on our current experience and interactions with the universities involved, we speculate on the future of the specific bibliometric approach that has now been adopted.
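As a purely illustrative sketch of this kind of allocation key, the Python snippet below splits a fixed budget across universities in proportion to a weighted combination of their shares of publications and citations. The university names, counts and the 50/50 weighting are invented, and the actual Flemish formula involves more parameters than this.

```python
def allocate(budget: float, indicators: dict, w_pub: float = 0.5, w_cit: float = 0.5) -> dict:
    """Split a budget in proportion to weighted publication and citation shares.

    Purely illustrative: the real Flemish allocation key uses additional
    parameters and data definitions.
    """
    total_pub = sum(d["publications"] for d in indicators.values())
    total_cit = sum(d["citations"] for d in indicators.values())
    shares = {
        uni: w_pub * d["publications"] / total_pub + w_cit * d["citations"] / total_cit
        for uni, d in indicators.items()
    }
    return {uni: round(budget * share, 2) for uni, share in shares.items()}


# Hypothetical counts for three universities sharing a 93 million Euro budget.
example = {
    "University 1": {"publications": 4200, "citations": 52000},
    "University 2": {"publications": 3100, "citations": 36000},
    "University 3": {"publications": 1500, "citations": 12000},
}
print(allocate(93_000_000, example))
```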
Article
In December 2003, seventeen years after the first UK research assessment exercise, Italy started its first-ever national research evaluation, with the aim of evaluating, using the peer review method, the excellence of the national research production. The evaluation involved 20 disciplinary areas, 102 research structures, 18,500 research products and 6,661 peer reviewers (1,465 from abroad); it had a direct cost of 3.55 million Euros and a duration spanning over 18 months. The introduction of ratings based on the ex post quality of output, rather than on ex ante respect for parameters and compliance, is an important leap forward for the national research evaluation system toward meritocracy. From the bibliometric perspective, the national assessment offered the unprecedented opportunity to perform a large-scale comparison of peer review and bibliometric indicators for an important share of the Italian research production. The present investigation takes full advantage of this opportunity to test whether peer review judgements and (article and journal) bibliometric indicators are independent variables and, in the negative case, to measure the sign and strength of the association. The outcomes allow us to advocate the use of bibliometric evaluation, suitably integrated with expert review, for the forthcoming national assessment exercises, with the goal of shifting from the assessment of research excellence to the evaluation of average research performance without a significant increase in expenses.
Article
The Polish Parliament this month began voting on legislation creating a new national agency charged with distributing competitive grants for frontier research. The proposed National Center for Science (NCN), to be located in Krakow, is meant to be free from political pressures and would use an international peer-review system modeled on those of the European Research Council and the U.S. National Science Foundation. NCN would also earmark at least 20% of its budget to grants for scientists under age 35. Michal Kleiber, president of the Polish Academy of Sciences, sees in NCN the type of reform the country's scientific community needs.
Article
This article outlines the evolution of international scientific production in Spain over the last 25 years, a period characterised by steady growth in research production. The following stages in this process are identified in accordance with some of the factors that predominated at different times. From 1974 to 1982 production increased due to causes endogenous to the scientific system itself, as scientists brought their work into line with the patterns which characterised research in other industrialised countries. From 1982 to 1991 the prioritisation of R&D by government administrative bodies represented a constant stimulus, implemented through a set of legal measures, investments and the creation of posts for new researchers. From 1989 to the present the creation of the Comisión Nacional de Evaluación de la Actividad Investigadora (National Commission for the Evaluation of Research Activity, CNEAI) and the research incentive system have provided a further stimulus, which has led to the maintenance of, and an increase in, the rate of research production in spite of the net decrease in the monetary value of research grants awarded during the last period analysed. Other special characteristics of Spanish research, such as its dependence on the public sector and its essentially academic nature, are discussed.
Article
In previous research on the impact of the Research Assessment Exercise on heterodox economics and heterodox economists in the UK, the author concluded that reliance on Diamond List journals to rank departments would drive economics departments to discriminate positively in their hiring, promotion and research strategies in favour of mainstream economists and their research, in order to maintain or improve their ranking. As a consequence, the author predicted there would be no or only a token presence of heterodox economists in an increasing number of departments. Whether the conclusions still hold and the predictions materialise is the subject of this paper.
Fosse Hansen, H., 2009. Research Evaluation: Methods, Practice, and Experience. Danish Agency for Science, Technology and Innovation (downloaded on June 9, 2011 from http://en.fi.dk/research/rearch-evaluation-methods-practice-and-experience/Research%20Evaluation%20Methods-%20Practice-%20and%20Experience.pdf/).
Sastry, T., Bekhradnia, B., 2006. Using Metrics to Allocate Research Funds: A short evaluation of alternatives to the Research Assessment Exercise. Higher Education Policy Institute, Oxford.
Sivertsen, G., 2009. A bibliometric funding model based on a national research information system. Presented at ISSI 2009, Rio de Janeiro, Brazil (downloaded on June 9, 2011 from www.issi2009.org/agendas/issiprogram/activity.php?lang=en&id=108).
Rafols, I., Leydesdorff, L., O'Hare, A., Nightingale, P., Stirling, A., 2011. How journal rankings can suppress interdisciplinarity: the case of innovation studies in business and management, May.
Rodríguez-Navarro, A., 2009. Sound research, unimportant discoveries: research, universities, and formal evaluation of research in Spain. Journal of the American Society for Information Science and Technology 60, 1845–1858.
Anonymous, 2010. The ratings game. Nature 464 (7285), 7.
Australian Research Council (ARC), 2009. ERA 2010 Submission Guidelines (downloaded on June 9, 2011 from www.arc.gov.au/era/key docs10.htm).
Dolan, C., 2007. Feasibility Study: the Evaluation and Benchmarking of Humanities Research in Europe. HERA D4.2.1. Arts and Humanities Research Council, United Kingdom.
European Commission, 2010. Assessing Europe's University-Based Research. EUR 24187 EN, Science in Society 2008 Capacities, 1.
Lipsett, A., 2007. RAE Selection Gets Brutal. Times Higher Education Supplement 1779, p. 1.
New Zealand, Tertiary Education Commission, 2010. Performance-Based Research Fund. www.tec.govt.nz/Funding/Fund-finder/Performance-Based-Research-Fund-PBRF-/ (accessed 09.06.11).
Hicks, D., Wang, J., 2009. Towards a Bibliometric Database for the Social Sciences and Humanities—A European Scoping Project. Final Report on Project for the European Science Foundation.
Higher Education Funding Council for England (HEFCE), 1997. The Impact of the 1992 Research Assessment Exercise on Higher Education Institutions in England, No. M6/97. Higher Education Funding Council for England, Bristol (downloaded on June 9, 2011 from www.hefce.ac.uk/pubs/hefce/1997/m6 97.htm).
Martin, B.R., Whitley, R., 2010. The UK Research Assessment Exercise: a case of regulatory capture? In: Whitley, R., Gläser, J., Engwall, L. (Eds.), Reconfiguring Knowledge Production: Changing Authority Relationships in the Sciences and their Consequences for Intellectual Innovation. Oxford University Press, Oxford, UK, pp. 51–80 (Chapter 2).
Carr, K., 2011. Improvements to Excellence in Research for Australia. Press release, Senator the Hon Kim Carr, Minister for Innovation, Industry, Science and Research, 30 May 2011. http://minister.innovation.gov.au/Carr/MediaReleases/Pages/IMPROVEMENTSTOEXCELLENCEINRESEARCHFORAUSTRALIA.aspx (accessed 09.06.11).
Butler, L., 2010. Impacts of performance-based research funding systems: a review of the concerns and the evidence. Chapter 4 in OECD, Performance-based Funding for Public Research in Tertiary Education Institutions: Workshop Proceedings. OECD Publishing.
Poland, Ministry of Science and Higher Education, 2010. General Principles of Parametric Evaluation of Scientific Institutions, http://translate.google.com/translate?hl=en&sl=pl&u=http://www.nauka.gov.pl/finansowanie/finansowanie-nauki/dzialalnosc-statutowa/&ei=lDG7S-KLEJGu9gSdlZCFCA&sa=X&oi=translate&ct=result&resnum=7&ved=0CCQQ7gEwBg&prev=/search%3Fq%3Dfinansowania%2Bbada%25C5%2584%2Bstatutowych%26hl%3Den%26sa%3DG%26rlz%3D1G1GGLQ ENUS375 (accessed 09.06.11).
Luwel, M., 2010. Performance-based R&D institutional funding in Flemish Universities. Presented at the OECD Workshop, 21 June, Paris.