Article

The Research Excellence Framework and the ‘impact agenda’: are we creating a Frankenstein monster?


Abstract

In pursuit of public accountability, the mechanisms for assessing research performance have become more complicated and burdensome. In the United Kingdom, the Research Assessment Exercise (RAE) evolved from an initially simple framework to something much more complex and onerous. As the RAE now gives way to the Research Excellence Framework (REF), ‘impact assessment’ is being added to the process. Impact comes in numerous forms, however, so its assessment is far from straightforward. While the Higher Education Funding Council for England is initially proposing a relatively simple methodology, the history of the RAE suggests that this approach will over time become ever more sophisticated. Yet if the ‘costs’ of an elaborate system for assessing ‘research excellence’ and its impact then exceed the benefits, the time may have come to re-examine whether a dual-support system still represents the optimum way of funding university research.


... The nature of change in R4D creates significant evaluation challenges (Douthwaite and Hoffecker 2017), and one of the critical challenges is the nonlinear and uncertain pathways to achieving impact (Martin 2011). Research activities interact in different ways within complex systems, and often have many feedback loops and unexpected results (Roling 2011). ...
... Step 6: Assess the VfM of each project using a rubric and peer/expert review. In the research community, peer review is an important approach for assessing quality, where it is held that experts from the relevant fields of research are best placed to judge the quality or value of research (Ofir et al. 2016; Bozeman and Youtie 2017; IDRC 2017). Peer review is not without critique; however, GCRF and Newton are embedded in the wider system of UK research assessment, where there are significant concerns that assessment allows for subversive control and influence by government over research (Martin 2011). Peer review is viewed as an essential mechanism to limit this (ibid.). ...
... Care needs to be taken when feeding back results to individual projects. BEIS and Delivery Partners are unlikely to want to incentivise perverse behaviour in projects merely to score well in a tokenistic fashion, as has been seen in some other research assessment exercises (Martin 2011; Hollow 2013). Rather, BEIS and Delivery Partners want to facilitate genuine, meaningful learning around the VfM of R4D. ...
Article
Research for development (R4D) funding is increasingly expected to demonstrate value for money (VfM). However, the dominant positivist approaches to evaluating VfM, such as cost-benefit analysis, do not fully account for the complexity of R4D funds and risk undermining efforts to contribute to transformational development. This paper posits an alternative approach to evaluating VfM, using the UK's Global Challenges Research Fund and the Newton Fund as case studies. Based on a constructivist approach to valuing outcomes, this approach applies a collaboratively developed rubric-based peer review to a sample of projects. This is more appropriate for the complexity of R4D interventions, particularly when considering uncertain and emergent outcomes over a long timeframe. The approach could be adapted to other complex interventions, demonstrating that our options are not merely "CBA or the highway" and that there are indeed alternative routes to evaluating VfM. Supplementary information: The online version contains supplementary material available at 10.1057/s41287-022-00565-7.
... The evaluation of science policy has been a concern for researchers in recent decades (Cozzens 1997; Georghiou 1998; Salter and Martin 2001; Shapira and Kuhlmann 2003; Martin 2011; Feller 2017) and for policy makers (European Commission 2018, 2019 and 2020; OECD 1998). ...
... The above makes a SO evaluation relevant for the Spanish Government, but in the meantime it is of interest to review the literature on evaluations of similar policy instruments published to date, the focus of the next section: Martin-Sardesai, 2017; Barros, 2018; Lewis, 2018; Williams, 2018; Woelert, 2018; Hussey, 2019; the Czech Republic (Good, 2015; Vanecek, 2014); Denmark (Carter, 2016); Finland (Mathies, 2019); Germany (Kehm, 2013; Gaehtgens, 2015; IEKE, 2017; Knie and Simon, 2019); Italy (Cattaneo, 2016); New Zealand (Buckle, 2018; Lewis, 2018); Slovakia (Pisar, 2017); Sweden (Lundequist, 2010); and the UK (Moed, 2008; Hicks, 2009; Martin, 2011, 2013; Rosli, 2016; Watermeyer, 2016; Lewis, 2018; Hall and Martin, 2018; Pinar, 2019). From the survey that the OECD (2014) carried out among officials of 40 governments, a set of results emerged, enumerated as "effects of the CoE instrument". ...
... Science, technology and innovation policy have proven hard to evaluate (Cozzens 1997; Georghiou 1998; Salter and Martin 2001; Shapira and Kuhlmann 2003; Martin 2011; Feller 2017). On the accountability aspects, the general political move towards New Public Management (explained above) influenced science policy by placing emphasis upon accountability (Georghiou 1998), which has continued and been reinforced through the establishment of more complicated and burdensome mechanisms (Martin 2011). In recent years the accountability aspects of science policy have become more prominent with movements such as open science or citizen science, which involve concerns different from those relevant to my study. ...
Thesis
The research herein aims to generate learning for the Spanish government about the effects that the "Severo Ochoa" Centres of Excellence Programme has produced, and to give an insider's systematic critical reflection on the policy environment. The Programme was launched by the Spanish government as a high-profile science policy instrument. The research uses programme theory and realistic evaluation approaches rooted in qualitative methods and analysis. As far as the author is aware, this is the first time a realistic evaluation approach has been used to evaluate science policy. The literature review focuses on approaches to science policy evaluation in general, and science policy evaluation in Spain specifically. It includes approaches to rethinking science policy evaluation, expands on theory-based, realistic evaluation and finishes with a focus on learning from evaluation for policy making, institutions and deliberative policy analysis. The empirical research chapters first explore policymakers' conceptual framework of impact at the time the programme was launched, and then analyse the programme's effects in the first 20 awarded centres. The effects are explained through a realist lens, identifying context-mechanism-outcome configurations. Finally, the research conclusions identify what worked well in the policy instrument. All findings will be made available to the ministry in charge of science policy in the Spanish Government to enable policy learning from the research.
... Some of the existing studies found that the PBRFSs in the United Kingdom (UK), Italy and South Korea led to increased performance and funding differences across universities. For instance, the Research Assessment Exercise in 2008 (RAE 2008) and the Research Excellence Framework in 2014 (REF 2014) increased the concentration of funding among larger universities in the UK (Marques et al., 2017; Martin, 2011; Pinar, 2020; Torrance, 2020). Pinar and Unlu (2020a) showed that the inclusion of a new criterion in REF 2014 (i.e., the evaluation of the non-academic impact of research) led to an increased quality-related research funding gap among universities. In other words, the differences in performance-based funding allocated to universities based on their performances in the environment and output categories were lower than the differences in funding allocated based on their performances in the impact category. ...
... It has been argued that the research assessment system in the United Kingdom aimed to increase the concentration of high-quality researchers in a handful of universities (Buckle et al., 2020) and would lead to a concentration of research funding in a few research-intensive universities (Martin, 2011; Torrance, 2020). In other words, previous research argued that the quality difference between better and worse performers was expected to increase, resulting in increased research-quality funding differences between institutions, as these funds are distributed based on the REF results. ...
Article
Full-text available
Performance-based research funding systems (PBRFSs) have been used to distribute research funding selectively, increasing the accountability and efficiency of public money. The two most recent such evaluations in the UK, the Research Excellence Framework (REF) exercises of 2014 and 2021, assessed the research environment, outputs and impact of research. Even though various aspects of the REF have been examined, there has been limited research on how the performance of universities and disciplines changed between the two evaluation periods. This paper assesses whether there has been convergence or divergence in research quality across universities and subject areas between 2014 and 2021. It finds an absolute convergence between universities in all three research elements evaluated: universities that performed relatively worse in the REF in 2014 experienced higher growth in their performance between 2014 and 2021. There was also an absolute convergence in the research environment and impact across different subject areas, but no significant convergence in the quality of research outputs across disciplines. Our findings also highlight an absolute convergence in research quality within universities (between different disciplines in a given university) and within disciplines (between universities in a given subject).
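The convergence test described in this abstract is, in essence, a beta-convergence regression: growth in a university's score between the two exercises is regressed on the (log) initial level, and a negative slope means weaker 2014 performers grew faster. A minimal sketch in Python, using illustrative grade-point averages rather than actual REF data:

```python
import numpy as np

# Illustrative REF grade-point averages per university (not real data).
gpa_2014 = np.array([2.4, 2.7, 3.0, 3.2, 3.4, 3.6])
gpa_2021 = np.array([2.9, 3.0, 3.2, 3.3, 3.5, 3.6])

# Absolute (beta) convergence: regress growth on the log of the initial level.
growth = np.log(gpa_2021) - np.log(gpa_2014)
beta, intercept = np.polyfit(np.log(gpa_2014), growth, 1)

# A negative beta indicates convergence: the lower the 2014 score,
# the higher the subsequent growth.
print(f"beta = {beta:.3f} -> {'convergence' if beta < 0 else 'divergence'}")
```

In practice one would also test the significance of beta and repeat the regression within universities and within disciplines, as the paper does.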
... Promising, demonstrating and documenting impact outside academia is now a major part of the research policy infrastructure (Collini 2012; Penfield et al. 2014; Greenhalgh et al. 2016). The 'impact agenda' (Martin 2011; Watermeyer 2016) has spread across research systems, featuring in countries such as the USA, the Netherlands, Italy, Sweden, Australia, New Zealand, and many others. ...
... Social impact is difficult to assess and measure, particularly compared to economic impact, where there are more established methodologies (Bozeman and Boardman 2009; Bozeman and Youtie 2017). Martin (2011) argues that while the social and economic impact of research can be assessed after the fact, the methodologies that produce robust results are often time- and labour-intensive and unsuited to operation at the scale that would facilitate the evaluation of an entire national research system. In countries with performance-based research funding systems (Hicks 2012), this introduces a substantial methodological dilemma. ...
Article
Full-text available
Although ex post evaluation of impact is increasingly common, the extent to which research impacts emerge largely as anticipated by researchers, or as the result of serendipitous and unpredictable processes, is not well understood. In this article, we explore whether predictions of impact made at the funding stage align with realized impact, using data from the UK's Research Excellence Framework (REF). We exploit REF impact cases traced back to research funding applications, yielding a dataset of 2,194 case–grant pairs, to compare impact topics with funder remits. For 209 of those pairs, we directly compare their descriptions of ex ante and ex post impact. We find that impact claims in these case–grant pairs are often congruent with each other, with 76% showing alignment between anticipated impact at the funding stage and the eventual claimed impact in the REF. Co-production of research, often perceived as a model for impactful research, was a feature of just over half of our cases. Our results show that, contrary to other preliminary studies of the REF, impact appears to be broadly predictable, although unpredictability remains important. We suggest that co-production is a reasonably good mechanism for addressing the balance of predictable and unpredictable impact outcomes.
... The Research Excellence Framework (REF) and its predecessors, the Research Assessment Exercise (RAE) and the Research Selectivity Exercise (RSE), have exerted a profound impact on research autonomy, the distribution of funding and prestige, and the working conditions of scholars in the United Kingdom for decades (McNay, 2022; Schäfer, 2018). They have introduced a focus on topics with easily measurable scientific, social, and economic impact, while open-ended, explorative research has been devalued and often left unfunded (Martin, 2011; Thorpe et al., 2018; Watermeyer and Chubb, 2019; Watermeyer and Hedgecoe, 2016). Despite the REF's, RAE's, and RSE's aim to improve the competitiveness of the UK research system (Tight, 2019), no substantial improvements were found in previous studies. ...
... Where available, research is mostly restricted to management and business research (Lee et al., 2013; Stockhammer et al., 2021; Tourish and Willmott, 2015). Furthermore, critical studies on the impact of assessments on the academic field indicate a loss of research diversity (Hamann, 2016; Lee, 2007; McNay, 2022; Martin, 2011), yet these studies refrain from investigating the impact of research assessments on research diversity and topic structure empirically. We address this research gap by combining natural language processing (NLP) (topic modeling) with multiple factor analysis (MFA). ...
Article
Full-text available
Our study investigates the impact of the British Research Assessment Exercise in 2008 and the Research Excellence Framework in 2014 on the diversity and topic structure of UK sociology departments from the perspective of habitus-field theory. Empirically, we train a Latent Dirichlet Allocation (LDA) topic model on 819,673 abstracts stemming from the journals in which British sociologists submitted at least one paper in the Research Assessment Exercise 2008 or Research Excellence Framework 2014. We then employ the trained model on the 4,822 papers submitted in the Research Assessment Exercise 2008 and 2014. Finally, we apply multiple factor analysis to project the properties of the departments in the topic space. Our topic model uncovers generally low levels of research diversity. Topics with global reach related to political elites, demography, knowledge transfer, and climate change are on the rise, whereas locally constrained research topics on social problems and different dimensions of social inequality become less prevalent. Additionally, some of the declining topics are becoming more aligned with elite institutions and high ratings. Furthermore, we see that the associations between different funding bodies, topics covered, and specialties among sociology departments changed from 2008 to 2014. Nonetheless, topics aligned with different societal elites are found to be associated with high Research Assessment Exercise/Research Excellence Framework scores, while social engineering topics, postcolonial- and cultural-related topics, as well as more abstract topics, are related to lower Research Assessment Exercise/Research Excellence Framework scores.
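As a rough sketch of the pipeline this abstract describes (train a topic model on a large corpus, infer topic mixtures for the submitted papers, then project departments into the topic space), the following uses scikit-learn; the corpus is a toy stand-in, and PCA serves as a crude proxy for the multiple factor analysis step, which would normally use a dedicated MFA implementation:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation, PCA

# Toy stand-in corpus; the study trains on 819,673 journal abstracts.
abstracts = [
    "social inequality and class mobility in urban communities",
    "climate change policy and political elites",
    "demography migration and population change",
    "knowledge transfer between universities and industry",
]

# Step 1: fit an LDA topic model on the abstract corpus.
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Step 2: infer topic mixtures for papers submitted to RAE 2008 / REF 2014.
submitted = vec.transform(["political elites and climate change policy"])
print(lda.transform(submitted))  # per-paper topic distribution

# Step 3: project aggregated topic profiles into a low-dimensional space
# (PCA here as a simple stand-in for multiple factor analysis).
print(PCA(n_components=1).fit_transform(lda.transform(X)))
```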
... 990). This was not unique to science, but part of a general government policy demanding that public spending adhere to the principles of 'value for money' (Martin, 2011; Rhodes, 1994). Initially, the assessment focused on a few key publications from each evaluated unit, but after critique of this selective approach, the scheme was extended. ...
... An estimate made as early as 2006 found that the expenses (including the time spent by universities preparing for the exercise) amounted to a figure in the order of £100 million (Sastry and Bekhradnia, 2006: 5). The increasing demands and complexities of the assessment have led to it being described as a Frankenstein's monster (Martin, 2011). Despite the increasing costs of the evaluations, the exercise has always relied mainly on collegial evaluation (peer review) exercised by an academic elite. ...
Article
Full-text available
In this article, we problematize the notion that the continuously growing use of bibliometric evaluation can be effectively explained by 'neoliberal' ideology. A prerequisite for our analysis is an understanding of neoliberalism as denoting either a more limited set of concrete principles for the organization of society (the narrow interpretation) or a hegemonic ideology (the broad interpretation). This conceptual framework, as well as a brief history of evaluative bibliometrics, provides an analytical framing for our approach, in which four national research evaluation systems are compared: Norway, Russia, Sweden, and the United Kingdom. On the basis of an analysis of the rationales for implementing these systems, as well as their specific design, we discuss the existence or non-existence of neoliberal motivations and rationales. Overall, we find that a relatively homogeneous academic landscape, with a high degree of centralization and government steering, appears to be a common feature of countries implementing national evaluation systems relying on bibliometrics. Such characteristics, we argue, may not be inductively understood as neoliberal but as indications of national states exercising strong political steering of their research systems. Consequently, if used without further clarification, 'neoliberalism' is a concept too broad and diluted to be useful when analyzing the development of research evaluation and bibliometric measures over the past half-century.
... These motives, and especially accountability and incentive requirements, were associated with policy reforms in many countries which granted increased autonomy to publicly funded universities. An early influential example is the UK's Research Assessment Exercise introduced in the 1980s, subsequently renamed the Research Excellence Framework; on its early development, see Martin (2011). ...
... Cost-benefit analysis of these schemes has been another area of investigation; see Hazledine and Kurniawan (2005) and Geuna and Piolatto (2016). Other areas of enquiry have focussed on the metric devised to measure research quality, the assessment method (whether peer review or bibliometric methods), and compliance costs. Extensive discussions of these issues include Aksnes and Taxt (2004), OECD (2010), Martin (2011), de Boer et al. (2015), Wilsdon (2015), Baccini and De Nicolao (2016), Aksnes, Langfeldt, and Wouters (2019, pp. 6-7), and Kolarz, Arnold, Dijkstal, Nielsen, and Farla (2019). ...
Article
This paper reviews changes in New Zealand universities since the introduction of the Performance Based Research Fund (PBRF) in 2003, and evaluates changes in relation to the stated objectives. This stocktake of research findings is in part a response to the official report of the Review Panel, which made no attempt to review evidence of performance. A key objective was to achieve an improvement in research quality. It is suggested that improvements have been related closely to the incentives created by the scheme, and achieved by considerable staff turnover. The present stocktake of the changed nature of universities and the details of the evaluation process suggests that substantial simplifications could usefully be made while maintaining incentives that are at the heart of any PBRF.
... A key principle of these efforts has been the use of "competition for funding", with the expectation that competition can improve system-level efficiency through the generation of outcome incentives. As a result, major national research evaluation exercises have been established (Edler et al., 2012; Martin, 2011) and external/extramural funding has rapidly increased while internal/intramural funding has been reduced (de Boer et al., 2007; Herbst, 2007; Hicks, 2012; Lewis, 2015). Although a positive impact of funding is observed on the productivity of research organizations (Bolli & Somogyi, 2011; Cattaneo et al., 2016) and scientists (Jacob & Lefgren, 2011; Lee & Bozeman, 2005), the evidence of research system-level productivity gains as a result of higher levels of competition in the allocation of research funding is mixed (Auranen & Nieminen, 2010; Himanen et al., 2009). ...
... Research is therefore undertaken in "quasi-market" conditions, since research actors compete for and access (also through collaboration) different funding sources. This trend has resulted in an increasing emphasis on the evaluation of research outputs, thus driving the emergence of metrics and major national evaluation programs (Edler et al., 2012; Martin, 2011; Reed et al., 2021; Wilsdon et al., 2015). ...
Article
Full-text available
Academic research often draws on multiple funding sources. This paper investigates whether complementarity or substitutability emerges when different types of funding are used. Scholars have examined this phenomenon at the university and scientist levels, but not at the publication level. This gap is significant, since acknowledgement sections in scientific papers indicate that publications are often supported by multiple funding sources. To address this gap, we examine the extent to which different funding types are jointly used in publications, and to what extent certain combinations of funding are associated with higher academic impact (citation count). We focus on three types of funding accessed by UK-based researchers: national, international, and industry. The analysis builds on data extracted from all UK cancer-related publications in 2011, thus providing a 10-year citation window. Findings indicate that, although there is complementarity between national and international funding in terms of their co-occurrence (where these are acknowledged in the same publication), when we evaluate funding complementarity in relation to academic impact (employing the supermodularity framework), we find no evidence of such a relationship. Rather, our results suggest substitutability between national and international funding. We also observe substitutability between international and industry funding.
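The supermodularity test mentioned here reduces to a simple inequality over the four funding configurations: two funding types are complements with respect to impact if publications acknowledging both achieve more than the sum of each alone, i.e. f(1,1) + f(0,0) >= f(1,0) + f(0,1). A minimal sketch with invented citation counts and assumed column names:

```python
import pandas as pd

# Invented publication-level data: funding acknowledgements and citations.
df = pd.DataFrame({
    "national":      [1, 1, 0, 0, 1, 0, 1, 0],
    "international": [1, 0, 1, 0, 1, 1, 0, 0],
    "citations":     [18, 18, 15, 6, 22, 14, 20, 5],
})

# Mean citation count for each of the four funding configurations.
f = df.groupby(["national", "international"])["citations"].mean()

# Supermodularity (complementarity) holds if the combined configuration
# beats the sum of the single-source ones; the reverse pattern indicates
# substitutability, as the paper reports for these funding types.
lhs = f[(1, 1)] + f[(0, 0)]
rhs = f[(1, 0)] + f[(0, 1)]
print("complementarity" if lhs >= rhs else "substitutability")
```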
... Furthermore, according to Tijssen (2003), excellence is a comparative expression denoting superiority in terms of quality or quantity, driven by researchers' need to demonstrate return on investment to the funders of research activities (Martin, 2011). While a common definition of excellence remains elusive, excellence is perceived not only as an utmost indication of performance but also as the motivating power for progressive policies with high levels of national competition (Tijssen et al., 2002; Rodríguez Navarro, 2011; Vertesy and Tarantola, 2012). ...
... The need to confirm the quality of research has in turn necessitated the advancement of the concept of research excellence. Martin (2011) opines that the number and diversity of methods applied to measure research excellence have increased over the years and have become progressively more sophisticated, hence the use of bibliometric concepts to evaluate research excellence. Bibliometrics measures the quantity, quality and visibility of research output, and the production and use of scientific literature. ...
Research
Research excellence (RE) is a relatively new concept that has been gaining traction among scholars, government agencies, and funders. This paper seeks to unravel research excellence issues in order to promote RE in Kenya. The study anchoring this paper assessed the top papers produced in Kenya to gauge the country's RE. Data were obtained from Clarivate Analytics' Web of Science (WoS) Core Collection databases, the Essential Science Indicators, and the Journal Citation Reports.
... These reforms to the traditional institutional characteristics of universities (promoted by the impact agenda) are not free from criticism (Maassen and Stensaker, 2011): critics point to the institutional control that this type of requirement exerts (Rebora and Turri, 2013), to the sophistication (complexity) with which such systems will operate in the future, to the work overload of researchers, to the rising cost of research, and to the growth of perverse incentives in the scientific world (Martin, 2011). Also debated is the capacity to predict the type of impact or benefit a study will achieve, considering that no obvious impacts are guaranteed and that indirect impacts exist ... Chinese researchers, faced with the absence of criteria and protocols to support a new national scientific information system that would include alternative and independent metrics and balance the globalization and local relevance of studies according to the type and field of research (Zhang and Sivertsen, 2020). ...
... In most Latin American countries, research and development (R&D) activities are financed mainly by the public sector (except in Mexico and Chile) and carried out by academia (Arias and Zuluaga, 2014). It is worth mentioning that in Chile a self-financing system prevails, in which academics must request the full cost of the planned research from funding agencies; a similar case is the United States, where universities receive basic funding for teaching but not for research, whereas in Europe dual-support systems have been adopted, which fund universities (resources they can administer freely) as well as specific research projects and programmes (subject to public accountability) (Martin, 2011). Whichever system dominates, the success of scientific evaluation will depend on the capacity to create and implement a policy based on measurements that neither undermine research nor encroach on the time every researcher needs to generate solid knowledge (Gunn and Mintrom, 2016b). ...
Article
Full-text available
This article seeks to identify criteria and indicators for scientific evaluation that can improve the way funding agencies, academic institutions and other groups assess the quality and impact of research. To this end, a literature review is conducted, based on articles about research evaluation policies and international agendas implemented in recent years (mainly in the United Kingdom, the United States, Australia, China and Latin America). The results indicate that there is no single method of scientific evaluation, since no indicator is absolute. Each research project involves different actors that must be considered, and research must be valued in its context. A mixed evaluation system is recommended, incorporating both quantitative and qualitative criteria, while recognizing the limits and scope of each, as well as of each discipline.
... In fact, publicly funded scholarly research is held accountable to produce Mode II knowledge related to "users" in industry by the UK's "Research Excellence Framework". The academics' reaction: "A Frankenstein Monster!" In 1999, Anne Huff gave her presidential address at the Academy of Management's annual meeting, where she called upon colleagues to at least consider Mode 1.5 research, combining theory and practice perspectives. The audience responded with polite applause but also plenty of eye-rolls. ...
... Likewise, one of Roy Bhaskar's charges against Western philosophy was precisely its obsession with epistemology over ontology, which, in his view, impeded action and ultimately the solution of problems [6]. The so-called impact agenda, which has concurrently become a hallmark of Western higher education institutions, has institutionalized a utilitarian view of knowledge production through evaluation regimes [7]. From the viewpoint of traditional higher education, this clashes with an emphasis on pedagogy over ideology. ...
Article
Full-text available
(1) Background: This conceptual paper departs from the background of how Higher Education represents a critical component of the continuation of Western civilisation and culture. Specifically, the paper addresses the knowledge gap of what an emphasis on outcome/impact does to pedagogy at Western universities. (2) Methods: Methodologically, the paper subdivides the educational process into four discrete phases so as to reflect upon by whom and on what premises the pedagogy happens (teaching, research, funding, and curriculum formation). (3) Research findings: The presented argument suggests that universities can focus on educating students for its own sake or as a means to an end. The current impact agenda prioritizes achieving specific goals at the expense of exploratory research, leading to a different definition of research success. This could result in only end-goal-focused individuals being successful and the curriculum being changed to align with their impact ambitions, the unintended consequence being that Higher Education stops being a genuine mechanism for education and instead becomes inadvertent indoctrination. (4) Conclusions: Only by having student benefit as the primary focus of pedagogy (process view) can the inter-generational feedback loop be safeguarded, regardless of how noble other sentiments may appear to be for related practical purposes (end-product view).
... The author states that it is often impossible to connect research and "impact" outcomes in a maze of complex social interactions and serendipitous turns. So, a cruder approach may be necessary (Martin, 2011). Molas-Gallart and Tang (2011) encourage "contribution not attribution" evaluations, while impact need not be conceived of purely in terms of economic returns but can embody broader public value in the form of social, cultural, environmental and economic benefits. ...
Article
Full-text available
Understanding production types helps define the impact of different knowledge areas. Information from the Sucupira postgraduate database was taken. Cluster and correspondence analyses were conducted to determine the behavior of different areas. Social Sciences (SS), Humanities (H), and Letters, Literature & Arts (LLA) were responsible for almost all productions, except for those in scientific journals and events. All areas have a high interaction with the business sector. SS, H, and LLA showed more work for governments (local, state, or federal), which funded products such as computer/mobile apps, books, and chapters. Funding related to art and culture is varied. Demand for maps came from SEEG (System for Estimates of Emissions and Removal of Greenhouse Gases) and WRI (World Resources Institute). The technical, artistic, and cultural sectors cannot be excluded from the evaluation, as they are part of knowledge and have a political and socioeconomic impact.
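Correspondence analysis of a knowledge-area by production-type contingency table, of the sort this abstract describes, can be computed directly from the SVD of the standardized residuals; a compact sketch with invented counts rather than Sucupira data:

```python
import numpy as np

# Invented contingency table: rows are knowledge areas, columns are
# production types (e.g. books/chapters, government reports, apps).
N = np.array([[120, 80, 10],
              [100, 60, 5],
              [20, 10, 90]], dtype=float)

P = N / N.sum()                          # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)      # row and column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals

# Principal coordinates from the SVD: areas and production types that are
# close on the leading axis tend to co-occur.
U, sv, Vt = np.linalg.svd(S, full_matrices=False)
row_coords = (U * sv) / np.sqrt(r)[:, None]
col_coords = (Vt.T * sv) / np.sqrt(c)[:, None]
print(row_coords[:, 0], col_coords[:, 0], sep="\n")
```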
... Therefore, it seems worthwhile to look into some performance measurement practices in business schools situated in the RoW. For instance, while absolutely not free from criticism and downsides (e.g., Martin, 2011; Sivertsen, 2017), ...
Article
Full-text available
With reference to recent debate about the increasing "an A is an A" mentality at business schools, I provide evidence on the prevalence of this mentality in North America versus other regions of the world (RoW). The evidence presented is derived from the data selection procedures employed in conducting systematic reviews of management research since a focus on specific journals in this selection can be seen as an artifact of the "an A is an A" mentality. My findings suggest that this mindset is more widespread in North American business schools and less so elsewhere. This implies that in order to find remedies against the detrimental effects of the "an A is an A" mentality, North American business school leaders and academics might find inspiration in other countries. In addition, I suggest that a part of the solution could also be directing PhD students towards a more inclusive selection of journals and articles in reviews of management research.
... This last point is crucial for those working in the social sciences and humanities, where the creation of knowledge follows less linear paths and has less obviously tangible impacts. Impact in these fields is far more likely to take the form of gradual 'knowledge creep' rather than a simple, direct and instantaneous causative effect (Martin, 2011; Watermeyer, 2019). ...
Article
Full-text available
The evaluation of research to allocate government funding to universities is now common across the globe. The Research Excellence Framework, introduced in the UK in 2014, marked a major change by extending assessment beyond the ‘quality’ of published research to include its real-world ‘impact’. Impact submissions were a key determinant of the £4 billion allocated to universities following the exercise. The case studies supporting claims for impact are therefore a high stakes genre, with writers keen to make the most persuasive argument for their work. In this paper we examine 800 of these ‘impact case studies’ from disciplines across the academic spectrum to explore the rhetorical presentation of impact. We do this by analysing authors’ use of hyperbolic and promotional language to embroider their presentations, discovering substantial hyping with a strong preference for boosting the novelty and certainty of the claims made. Chemistry and physics, the most abstract and theoretical disciplines of our selection, contained the most hyping items with fewer as we move along the hard/pure – soft/applied continuum as the real-world value of work becomes more apparent. We also show that hyping varies with the type of impact, with items targeting technological, economic and cultural areas the most prolific.
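Analyses of hyping like the one described here typically count occurrences of items from a hype lexicon and normalize by text length before comparing disciplines; a minimal sketch, with a tiny illustrative lexicon rather than the authors' actual item list:

```python
import re
from collections import Counter

# Tiny illustrative hype lexicon; the study's inventory is far larger.
HYPE = {"novel", "unique", "groundbreaking", "first", "unprecedented"}

def hype_rate(text: str) -> float:
    """Hype items per 1,000 words of an impact case study."""
    words = re.findall(r"[a-z]+", text.lower())
    hits = sum(n for w, n in Counter(words).items() if w in HYPE)
    return 1000 * hits / max(len(words), 1)

cases = {
    "physics":   "A groundbreaking, unprecedented detector, the first of its kind.",
    "sociology": "The study informed local housing policy over a decade.",
}
for discipline, text in cases.items():
    print(discipline, round(hype_rate(text), 1))
```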
... Since the 1990s, governments have increasingly desired to show the public the value of public expenditures and have thus pressured funding agencies to demonstrate that the research they fund can impact daily lives (Martin, 2011). This emphasis on research impact also aligned with theoretical development from the triple- to quadruple-helix innovation framework, which highlighted the role of the media-based and culture-based public in driving innovation in democratic societies (Carayannis and Campbell, 2009). ...
... Defining the societal impact of one's research is difficult, can be confusing, and is even contentious. Attempts to measure societal impact have also been controversial, with at least one scholar referring to societal impact metrics as a Frankenstein monster (Martin, 2011). Part of the problem is that research impact can be measured at different stages of the research process, starting from the initiation of the project through its completion, followed by awareness, use and application of the research findings in subsequent studies, and beyond the academic world, in the influence that scholars who take such research into the world can have on firms, governments and transnational institutions, as they are invited to consult or sit on boards or on policy-making bodies. ...
Article
The purpose of this JIBS editorial is to outline the vision and mission of the new JIBS Societal Impact Advisory Committee (SIAC). We define “societal impact” as research that has potential effects outside academia, for example, on communities, economies, environments, and other actors. We propose that societal impact is especially important for international business (IB) research but also particularly challenging, given the cross-national dimensions of IB and the differing social, economic, and political preferences faced by MNEs across the contexts in which they operate. We reference the Responsible Research in Business and Management (RRBM) movement and other professional association initiatives as potential sources of inspiration and guidance for understanding the societal impact of IB research generally and the vision and mission of the SIAC in particular. We outline some implications for IB scholarship to improve its societal impact and conclude by describing SIAC’s roles and responsibilities as related to JIBS authors, editors, reviewers, and the broader IB scholarly and practice communities.
... As Bornmann (2013) says, research's social impact often takes years to become apparent, and in many cases it is hard to identify the chain of cause and effect between research and its influence. Furthermore, sundry authors claim that expected social impact differs widely by research area: an engineer's scientific work may be anticipated to have a different impact than the work of a sociologist or historian (Martin, 2011; Molas-Gallart et al., 2002). The same applies in fields like health and economics, whose expected social impacts may differ considerably. ...
Article
Full-text available
This paper analyses the scientific activity related to open science in Spain and its influence on public policy from a bibliometric perspective. For this purpose, Spanish centres' projects and publications on open science from 2010 to 2020 are studied. Subsequently, policy documents using papers related to open science are analysed to study their influence on policymaking. A total of 142 projects and 1491 publications are analysed, 15% of which are mentioned in policy documents. The publications cited in policy documents display high proportions of international collaboration, open access publication and publication in first-quartile journals. The findings underline governments' leading role in the implementation of open science policies and the funding of open science research. The same government agencies that promote and fund open science research are shown to use that research in their institutional reports, a process known as knowledge flow feedback. Other non-academic actors are also observed to make use of the knowledge produced by open science research, showing how the open science movement has crossed the boundaries of academia.
... Altmetrics, such reflections infer, "might mainly reflect the public interest and discussion of scholarly works rather than their societal impact" (Tahamtan and Bornmann, 2020: 1; cf. also Nicholas et al., 2020: 269; Regan and Henchion, 2019: 485). They seem to be designed mainly "for research managers to judge whether the researchers have had satisfactory broader impacts" (Holbrook, 2019: 6; cf. also Martin, 2011). And if research administrators start to mandate a certain (now measurable) degree of societal reception, this may "breed resentment [against altmetrics] by forcing compliance" (Holbrook, 2019: 4). ...
Article
Full-text available
Many discussions of serendipitous research discovery stress its unfortunate immeasurability. This unobservability may be due to paradoxes that arise out of the usual conceptualizations of serendipity, such as "accidental" versus "goal-oriented" discovery, or "useful" versus "useless" finds. Departing from a different distinction drawn from information theory (bibliometric redundancy and bibliometric variety), this paper argues otherwise: serendipity is measurable, namely with the help of altmetrics, but only if the condition of highest bibliometric variety, or randomness, obtains. Randomness means that the publication is recommended without any biases of citation counts, journal impact, publication year, author reputation, semantic proximity, etc. Thus, serendipity must be at play in a measurable way if a paper is recommended randomly, and if users react to that recommendation (observable via altmetrics). A possible design for a serendipity-measuring device would be a Twitter bot that regularly recommends a random scientific publication from a huge corpus and captures the user interactions via altmetrics. Besides its implications for the concept of serendipity, this paper also contributes to a better understanding of altmetrics' use cases: not only do altmetrics serve the measurement of impact, the facilitation of impact, and the facilitation of serendipity, but also the measurement of serendipity.
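The proposed device amounts to an unbiased uniform sampler over a publication corpus; a minimal sketch, where post_recommendation is a hypothetical stand-in for whatever platform call would publish the message and collect the altmetric reactions:

```python
import random

# Stand-in corpus of publication identifiers (the proposal envisages a
# huge corpus, e.g. all DOIs in a bibliographic database).
corpus = ["10.1000/alpha.1", "10.1000/beta.2", "10.1000/gamma.3"]

def pick_random_publication(pubs: list[str]) -> str:
    """Highest bibliometric variety: uniform sampling, unweighted by
    citation counts, journal impact, recency, or author reputation."""
    return random.choice(pubs)

def post_recommendation(doi: str) -> None:
    # Hypothetical platform call; user reactions to the post would then
    # be captured via altmetrics as the serendipity signal.
    print(f"Today's randomly selected paper: https://doi.org/{doi}")

post_recommendation(pick_random_publication(corpus))
```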
... In fact, such an environment incentivises researchers to maximise the impact of their scientific work. On the one hand, scientific impact is a multifaceted concept, encompassing various dimensions including, among others, the plausibility [2,3], originality [4,5], scientific value [6,7,8], and societal value [9,10,11] of scientific publications. On the other hand, current academic evaluation practices mostly operationalise scientific impact in terms of bibliometric impact, i.e., the number of citations that published scientific work receives from other publications [12,13,14]. ...
Preprint
Full-text available
We examine the innovation of researchers with long-lived careers in Computer Science and Physics. Despite the epistemological differences between these disciplines, we consistently find that a researcher's most innovative publication occurs earlier than expected if innovation were distributed at random across the sequence of publications in their career, and is accompanied by a peak year in which researchers publish other work that is more innovative than average. Through a series of linear models, we show that the innovation achieved by a researcher during their peak year is higher when it is preceded by a long period of low productivity. These findings are in stark contrast with the dynamics of academic impact, which researchers are incentivised to pursue through high productivity and incremental (less innovative) work by the currently prevalent paradigms of scientific evaluation.
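The 'earlier than expected at random' claim can be checked with a simple permutation argument: under the null, innovation scores are exchangeable across the career, so the peak publication is equally likely to fall at any position. A sketch with an invented innovation series:

```python
import random

# Invented innovation scores for one career, in publication order.
career = [0.2, 0.9, 0.3, 0.4, 0.1, 0.5, 0.3, 0.2, 0.4, 0.1]
n = len(career)

# Observed normalized position of the most innovative publication (0 = first).
observed = career.index(max(career)) / (n - 1)

# Monte Carlo null: place the peak uniformly at random and see how often
# it lands at least as early as observed.
trials = 100_000
early = sum(random.randrange(n) / (n - 1) <= observed for _ in range(trials))
print(f"observed position {observed:.2f}, p-value ~= {early / trials:.3f}")
```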
... While it is a lofty ideal, there has been an ongoing debate within academia about how impact strategies shape impact practices and ultimately affect the production of academic knowledge (de Jong & Balaban, 2022). The 'impact agenda', as it has become known across many universities, ultimately influences what academic research does and doesn't get funded (de Jong & Balaban, 2022; Martin, 2011). Zheng et al. (2021) analysed 6882 case studies submitted to the UK's REF and Australia's EI (Engagement and Impact assessment) to determine the main types of societal impact employed by academic researchers in these countries. ...
Chapter
Full-text available
In this chapter, Boyd describes the staging and evaluation of the ‘Finding Home’ exhibition within the context of university ‘impact agendas’. The notion of societal impact is critiqued before the tasks involved in staging the ‘Finding Home’ exhibition are detailed. The findings from the exhibition’s evaluation, which included 100 visitor surveys and 31 phone interviews with exhibition audiences, are also presented in this chapter. The chapter concludes with reflections on the labour involved in bringing a research exhibition to multiple publics.
... Academic research impact evaluation is defined as the assessment of any effect due to research at the academic level (1). Different factors, such as individual, institutional and international collaborations, can affect research impact. ...
... Recently interdisciplinarity has also been promoted as a way to increase public accountability (Huutoniemi 2015), or seen to enhance the impact of academic work (Lariviere et al. 2015). The latter is particularly interesting in light of the operationalization of "impact" as an assessment criterion in the national research evaluation framework in the UK, which influences the distribution of public funding to universities (Martin 2011; McKenna 2015). The presumed superiority of interdisciplinary knowledge also implicitly informs many efforts to distinguish what is epistemologically special about interdisciplinary research (Faber and Shepper 1997; Fuller 2004), efforts to develop taxonomies of interdisciplinary research (Aboelela et al. 2007; Huutoniemi et al. 2010; Klein 2010b; Salter and Hearn 1996), and work examining the everyday dynamics of interdisciplinary interactions in small-group research settings (Hackett and Rhoten 2009; Jeffrey 2003; Scerri 2000; Stokols 2003). ...
... Readings (1996, p. 8) pointed to the influence of the French-American historian Jacques Barzun in the late 1960s, who called for universities to build up an administrative capacity, populated by non-academic bureaucrats, to exercise proper civil service authority and thus keep the organisations and their operations in check (Barzun, 1969). A parallel and perhaps not unrelated process is connected to globalisation, which took off in earnest in the 1990s and created what several analysts have identified as an international market of students, researchers and funding (Altbach & McGill Peterson, 2007; Wildavsky, 2010), in whose wake a major international and local effort of evaluation and ranking has been built up (Hazelkorn, 2011; Hallonsten, 2021), sufficiently administratively demanding to warrant the claim that this has created a 'Frankenstein monster' of evaluation (Martin, 2011). Needless to say, this development should be visible in the changes in volume of university administrations, along with the rise of managerialism and the economisation of higher education and academic research demonstrated in other recent works (Berman, 2012; Münch, 2014). ...
Article
Full-text available
Swedish universities and colleges have received a substantial funding increase since the turn of the millennium, as part of continued policies of expanding the admission of students to higher education to broader layers of the population and strengthening Swedish public research and development to increase the competitiveness of the Swedish knowledge-based economy. In this article, publicly available statistics are used to trace how this increase in funding has been used by the sector. Comparing figures on income (base grant for research, third-party funding and base grant for education) with statistics on personnel and student enrolment, as well as data on actual expenditure, the article draws some conclusions that are used to discuss some common misunderstandings and erroneous beliefs, including claims of a 'depletion' of the base grant for research and an uninhibited growth in the number of administrative staff, which are common themes in the Swedish and international debate over higher education.
... Second, creation of societal impact as an academic activity is a relatively new concept and, often, is conceived outside of rigorous academic practice, with the result that there are no norms and standards of what represents legitimate and quality activity in impact creation (Holbrook and Frodeman 2011;Ma et al. 2020). Bozeman and Boardman (2009) critique a notion of research impact that assumes that it is created by the knowledge creator (what Martin (2011) refers to as the attribution problem). Knowledge that creates impact draws on a range of existing knowledge. ...
Article
Over the last decade, the idea of societal impact resulting from publicly funded research has changed from being a relatively fringe concern related to high-technology entrepreneurship and spin-off companies to becoming an increasingly important public policy concern. This has stimulated academic science policy research to investigate the impact of research and conceptualize how knowledge created in academic contexts can be coupled to real-world problems or needs. Most of the work in this stream of research focuses on ex post impacts, that is, the impacts created by individual research activities or research strands after their completion, and likewise there has been much research on ex post impact evaluation. However, ex ante impact evaluations have become increasingly important for funding decisions, yet little is known about how to evaluate impact when considering research project proposals. In this article, we propose a conceptual framework to evaluate the ex ante impact of research, based on the idea that a research proposal is a 'promise' to achieve impact. We suggest that evaluators could assess social impact promises by considering two elements, namely the activities that couple the applicants' knowledge with non-academic users, and the interdependency and consistency of such activities throughout the overall project proposal. We ultimately propose an analytical framework for refining our 'openness' conceptual framework in future empirical research.
... For years, there has been a common belief that it is not possible to make objective assessments of the impact of scientific research within the SSH, and that it is necessary to rely mainly on the subjective opinions of expert panels, presenting results that are often a source of doubt and controversy (Martin, 2011). It is well established that the basis for building research excellence consists of two separate elements of assessment, i.e. 'research quality' and 'research impact'; the evaluation of the latter element mainly consists in the analysis of case studies qualitatively assessed by field experts working in panels (Grant et al., 2010). ...
Article
Full-text available
Motivation: Assessment of research impact in the business and management field is more difficult than in the Science, Technology, Engineering, and Mathematics (STEM) disciplines, and it is therefore justified to improve the approaches, methods and tools used in this field and in the social sciences in general. Methodological research concerning such assessment is quite a challenge, as it is not easy to identify useful assessment methods, indicators and evaluation criteria for carrying out objective processes of conceptualizing and measuring research impact. Efforts to create conditions for obtaining reliable results of research impact assessment are accompanied by the growing interest of scientists and of the public institutions sponsoring their studies. Aim: The article aims to indicate the current main methodological trends in assessing the impact of research in business and management. Results: The paper presents the results of bibliometric research enabling the identification of leading study centers and main methodological solutions, which may be a source of progress in the field of research on systems and methods of research impact assessment in business and management. This is especially important for the scientific community and public sponsors from countries that are currently starting to implement impact assessment systems. It is worth drawing on the experience, good practices and vast resources of knowledge related to evaluation systems and models of knowledge exchange between academia and non-academic stakeholders.
... Considering these frameworks' influence on the performance evaluation of researchers and universities, they have been targeted with criticism and concern due to their potential effects on academic freedom (Martin 2011; Smith et al. 2011; McGettigan 2013; Bandola-Gill 2019; Johnson and Orr 2020). Academic freedom refers to the independence of academics to carry out academic work (research, teaching, and service) without external pressures and interference (Robinson and Moulton 2001; Poff 2012). ...
Article
Full-text available
Academic freedom is critical for the sound production and dissemination of new knowledge. However, the growing emphasis that research funders have placed on the societal impact of research has concerned some scholars, particularly with regard to its potential impact on their academic freedom. These concerns can be about pressures to pursue research with immediate applications, about scientific impartiality, and about reduced investment in fundamental research. However, we argue that these concerns can also relate to the ever-growing pressure to publish experienced by most academics (the so-called 'publish or perish' culture). Understanding the dynamic between academic freedom and the impact agenda would be incomplete, we argue, without accounting for the effects of the publish or perish culture in academia. For this purpose, we first investigated the justification for academic freedom and the function it is supposed to perform. Our analysis then examined the relationship between academic freedom and the impact agenda on the fundamental level, with a focus on societal impact, knowledge mobilization, and accountability in the use of public funds. Finally, this discussion paper highlighted the effects of the publish or perish culture in academia as they contradict the shared values of academic freedom and the impact agenda. Ultimately, these effects pose a serious threat to academic freedom by questioning its underlying justification and function. We conclude that addressing the effects of the publish or perish culture has more urgency and significance for academics in order to protect academic freedom.
... The increasingly competitive nature of the profession is becoming a more prominent topic with far-reaching policy, economic, and societal implications, largely due to the emergence of the now well-known 'publish or perish' dynamics (McGrail, Rickard and Jones 2006; Backes-Gellner and Schlinghoff 2010). Although this is still an ongoing debate, it has been noted in the literature that part of this can be due to evaluation schemes, which might have created perverse incentives to gamify the system (Martin 2011; Stephan 2012), leading to the maximization of indicators with the sole goal of securing funding and not necessarily leading to good science (Young 2015); indeed, although it is unlikely that this is the sole cause, the rates of worldwide innovation have been steadily decreasing (Huebner 2005). Simultaneously, it has been shown that funding tends to be concentrated in 'scientific elites' (Larivière et al. 2010), causing further issues, as it has also been shown that such concentration of resources tends to yield diminishing returns (Mongeon et al. 2016). ...
Article
Securing research funding is essential for all researchers. The standard evaluation method for competitive grants is through evaluation by a panel of experts. However, the literature notes that peer review has inherent flaws and is subject to biases, which can arise from differing interpretations of the criteria, the impossibility for a group of reviewers to be experts in all possible topics within their field, and the role of affect. As such, understanding the dynamics at play during panel evaluations is crucial to allow researchers a better chance at securing funding, and also for the reviewers themselves to be aware of the cognitive mechanisms underlying their decision-making. In this study, we conduct a case study based on application and evaluation data for two social sciences panels in a competitive state-funded call in Portugal. Using a mixed-methods approach, we find that qualitative evaluations largely resonate with the evaluation criteria, and the candidate’s scientific output is partially aligned with the qualitative evaluations, but scientometric indicators alone do not significantly influence the candidate’s evaluation. However, the polarity of the qualitative evaluation has a positive influence on the candidate’s evaluation. This paradox is discussed as possibly resulting from the occurrence of a halo effect in the panel’s judgment of the candidates. By providing a multi-methods approach, this study aims to provide insights that can be useful for all stakeholders involved in competitive funding evaluations.
... Although academic freedom and individuality are still championed in Europe (Capano and Pritoni 2020), academics' freedom has been threatened in recent years by uncertainties over acquiring research funding, which has led academics to pursue more conservative research to secure even the most precarious post-doctoral positions (Miller and Feldman 2014). Meanwhile, the implementation of impact-focused evaluation schemes has created perverse incentives to 'game' performance indicators, potentially further curtailing academic freedom (Gunn and Mintrom 2016; Martin 2011). Despite the availability of some centralised funding opportunities, the academic profession in Europe is increasingly perceived as highly precarious (Lempiäinen 2015). ...
Article
This paper investigates the research agendas of academics in Asia and Europe with reference to cultural influences rooted in the two continents. Unlike studies of the influence of culture on research that focus on only one or a few countries, this study explores the relationship between cultural dimensions and research agendas at the continental level, across Europe and Asia. The study uses general linear modelling with interaction terms to identify how cultural dimensions influence research agendas and how their influence differs between Europe and Asia. Hofstede's cultural dimensions model and the Multidimensional Research Agendas Inventory-Revised scale are adopted as measures of cultural dimensions and research agenda-setting, respectively. The results show that culture influences several aspects of research agenda-setting in both Asia and Europe, but that these dynamics are not always identical across continents. These findings are relevant both for academics studying the cultural dynamics of science and for policymakers who need to consider these cultural dimensions when striving to promote specific research agendas.
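As a purely illustrative sketch of this kind of specification (not the authors' exact model), a general linear model with a continent interaction can be written as:

$$y_i = \beta_0 + \beta_1 c_i + \beta_2 a_i + \beta_3 (c_i \times a_i) + \varepsilon_i$$

where $y_i$ is academic $i$'s score on a research-agenda dimension, $c_i$ is a Hofstede cultural-dimension score, $a_i$ indicates continent (say, 1 for Asia, 0 for Europe), and the interaction coefficient $\beta_3$ captures how the cultural effect differs between the two continents. A non-zero $\beta_3$ corresponds to the finding that these dynamics are 'not always identical across continents'.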
... While the basic budget was calculated mainly on the basis of student numbers, a system of budgeting based on output evaluation was introduced for the allocation of research resources. The Research Assessment Exercise (RAE), now known as the Research Excellence Framework (REF), has been conducted every third or fourth year (Martin, 2011; Sousa & Brennan, 2014). Results from the RAE/REF have been used to allocate funds based on rankings. ...
Chapter
In the German higher education (HE) sector, third-party funding plays a prominent role in the valorization of the performance of academics and universities. In the last three decades, policies centered on competitive third-party funding have led to a significant vertical stratification of higher education institutions (HEIs). This paper analyzes the relationship between funding-induced vertical stratification and the evolution toward a post-Humboldtian organization that favors research over teaching. Conceptually, the UK is used as a reference country for analyzing the more recent developments in Germany. Based on data from three successive surveys (Carnegie-1992, CAP-2007, and APIKS-2018), we observe a continuous evolution toward prioritizing research over teaching, together with a rising administrative workload for German academics over time. We associate this trend with a funding-related dissolution of the research-teaching nexus at the individual and organizational levels, using a 'toppled-T' multilevel research design. The analysis shows a clear differentiation between research-oriented, well-funded German HEIs (universities and universities of applied sciences, UAS) at the top of the status hierarchy and more teaching-focused HEIs at the bottom. We also indicate that the higher research preference of academics in high-status HEIs is accompanied by a higher administrative workload but not by more time for research.

Keywords: Vertical stratification; Valorization of performance; Third-party funding; Research-teaching nexus; (Post-)Humboldtian system; Higher education
... While the basic budget was calculated mainly on the basis of student numbers, a system of budgeting based on output evaluation was introduced for the allocation of research resources. The Research Assessment Exercise (RAE), now known as the Research Excellence Framework (REF), has been conducted every third or fourth year (Martin, 2011; Sousa & Brennan, 2014). Results from the RAE/REF have been used to allocate funds based on rankings. ...
Chapter
Full-text available
Similar trends have been shaping higher education systems across Europe. First, in the modern university, Humboldtian values such as the unity of teaching and research framed the organisation of higher education institutions (HEIs). More recently, under the ideological influence of both the knowledge economy/society and neoliberalism, European systems have been compelled to demonstrate the utility of the knowledge they produce, while being made accountable to society through the imposition of an audit culture. This context leads to a stratification of institutions and academics, in which the knowledge produced, usually measured by the number of publications, is an essential feature in determining the most prestigious institutions and academics. At present, the time European academics dedicate to their main roles differs, with some dedicating more time to teaching and others more time to research. This distinction is expected to have a direct impact on research outputs. Furthermore, personal characteristics such as gender and seniority are acknowledged to affect the number of research outputs. This chapter sheds light on the effects of time organisation (time dedicated to teaching and to research) and of academics' individual characteristics (gender and seniority) on research outputs, placing Portugal in a comparative perspective with six other countries: Finland, Germany, Lithuania, Slovenia, Sweden and Turkey. Findings confirm that prioritising one of academics' roles influences research outputs, with relevant variations by gender and seniority more than between countries.

Keywords: Time organisation; Academics' trade-offs; Gender; Seniority; Teaching time; Research time
... Exits are researchers who leave a university, either transferring to another NZ university or moving outside the NZ university system. [1] On the development of PBRFSs see, for example, de Boer et al. (2015), Hicks (2012), Kolarz et al. (2019), OECD (2010) and Wilsdon et al. (2015). Examples of assessments and critical evaluations of these schemes include Adams and Gurney (2010), Broadbent (2010), Creedy (2019a, 2020), Buckle et al. (2021), Checchi et al. (2019), Hare (2003), Martin (2011), Payne and Roberts (2010) and Woelert and McKenzie (2018). [2] On the use of these terms in the cross-country growth literature, see Quah (1993). These types of convergence are examined in detail in . ...
Article
Full-text available
The introduction of performance-based research funding systems (PBRFS) in many countries has generated new information on their impacts. Recent research has considered whether such systems generate convergence or divergence of research quality across universities and academic disciplines. However, little attention has been given to the processes determining research quality changes. This paper utilises anonymised longitudinal researcher data over 15 years of the New Zealand PBRFS to evaluate whether research quality changes are characterised by convergence or divergence, and the processes determining those dynamics. A unique feature is the use of longitudinal data to decompose changes in researcher quality into contributions arising from entry and exit and from the quality transformations of retained researchers, and to assess their impacts on the convergence or divergence of research quality across universities and disciplines. The paper also identifies how researcher dynamics vary systematically between universities and disciplines, providing new insights into the effects of these systems.
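To make the logic of such a decomposition concrete, here is a stylized sketch under simplified assumptions (not the paper's exact specification). Let $T_t$ denote the total research quality score of a university's researchers in period $t$, partitioned into stayers ($S$), entrants ($N$) and exits ($X$). Because every researcher present in either period belongs to exactly one of these groups, the change in total quality decomposes exactly as:

$$T_{t+1} - T_t = (T^{S}_{t+1} - T^{S}_{t}) + T^{N}_{t+1} - T^{X}_{t}$$

where the first term is the quality transformation of retained researchers, the second the contribution of entry, and the third the loss from exit. Comparing the relative size of the three terms across universities and disciplines is what allows entry, exit and transformation to be distinguished as drivers of convergence or divergence.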
Book
What is the point of publishing in the humanities? This Element provides an answer to that question. It builds on a unique set of quantitative and qualitative data to understand why humanities scholars publish. It looks at basic characteristics such as publication numbers, formats, and perceptions, and at differences between national academic settings, alongside the influences of the UK's Research Excellence Framework and the German Exzellenzinitiative. The data comprise a survey of more than 1,000 humanities scholars and social scientists in the UK and Germany, allowing for a comprehensive comparative study, together with a series of qualitative interviews. The resulting critique provides scholars and policymakers with an accessible and critical work on the particularities of authorship and publishing in the humanities, and it gives an account of the problems and struggles of humanities scholars in their pursuit of contributing to discourse and being recognised for their intellectual work.
Article
In this article, we present a conceptual framework for studying research impact focusing on the foundations that need to be in place to accelerate an observable change of policy, practice or behaviour. The article investigates the relationship between micro-impacts and societal change, and how smaller impacts scale into larger cascades of end effects and value creation. We define micro-impacts as interactions and connections where information is exchanged between a researcher or research group and external audiences, stakeholders or co-producers. Micro-impacts are elements in highly complex causal relations between research activities and larger societal macroshifts. We argue that even though these causal relations are complex, micro-impacts are tangible and observable and should be integrated in research evaluations as constitutive elements of causal impact relations leading to larger macroshifts. We suggest a working model for studying micro-impacts and for reflecting on the causality of impacts by drawing on contributions from philosophy of causation. A proper understanding of causation is a prerequisite for eventually understanding and capturing research impact, which itself is a prerequisite for responsible research assessment and planning.
Article
Full-text available
This conceptual paper scrutinises the 'research impact' of Western governments' impact agenda, in terms of what it is doing to the research process as a whole. Tourism studies, with its specific intricacies and disputed disciplinary status, is the focal point, but the argument extends to the entire research ecosystem. Specifically, the paper addresses changes to the training regime of early career researchers; the survivor bias created when impact claims become the basis of scholarly career progression; the new workload that accounting for and narrating research impact claims imposes on scholars; the challenges of identifying and articulating impact claims in the first place; and, not least, the power dimensions and political conflicts over who has the authority to label impact claims as beneficial. The discussion focuses on the short-, medium- and long-term consequences of these changes for scholarly lives. It concludes that, whilst the vulnerabilities created for the authority of [tourism] research claims are real, these developments also represent a viable opportunity to reassess, revalue and acknowledge parts of the research process that were normalised and/or trivialised in the past.
Article
Evaluation is ubiquitous in current (academic) science, to the extent that it is relevant to talk about an evaluation regime. How did it become this way? And what does it mean for scientists, groups, organizations, and fields? Picking up on the inspiring debate in a previous issue of this journal, four articles in this special section go deeper into the causes and consequences of the current evaluation regime in (academic) science, contributing new insights as well as opening important new routes for further investigation. This introductory essay provides a background and framework for the special section and points out some key takeaways from the articles included.
Article
Full-text available
Publication activity in modern society is presented as a driving force of scientific and technological development and as an indicator in university management reporting. The article is devoted to the study of approaches to determining the monetary value of a scientific publication, taking into account the different behavioural motives of researchers and higher education teachers as authors. The methodological basis of the study comprises concepts of the creation of public and financial goods, concepts of scientific productivity and academic remuneration, neoclassical and neoinstitutional economic theories, and approaches to the implementation of state policy in the field of science and education with respect to stimulating scientific publications. The research methods are critical and comparative analysis, encompassing three groups of methods: (a) the study of authors' direct and indirect motives, and of the traps involved in publishing scientific papers; (b) asset valuation methods; and (c) a combination of socio-economic approaches to the monetary valuation of research results. Asset valuation approaches are adapted to assess a scientific publication's value from the position of the author as beneficiary. The theoretical and practical significance of the research lies in its contribution to the value dimension of scientific publications for their authors under conditions of academic capitalism, with potential opportunities to receive monetary income from the results of their research.
Chapter
This chapter attends to the ways in which participants in health and related sciences—subjects often described as ‘practice professions’ because of their clinical focus (Boore, 1996)—attempted to develop a sense of belonging within their academic communities. It explores the challenges that participants encountered to belonging and the strategies they developed in trying to overcome these barriers. Drawing on literature which highlights the uncertain status of doctoral students within academic hierarchies (Morris & Wisker, 2011; Wisker et al., 2010), this chapter explores how participants understood their position within their academic communities and how this awareness shaped their sense of belonging. Further, attention is paid to how different aspects of academic cultures influenced individuals’ ability to feel a sense of belonging within their departmental community. In particular, this chapter explores the impact of neoliberalised academic working practices, such as expectations of research productivity, on women doctoral students and how they viewed an academic career. This chapter also examines how gendered academic cultures shaped the lived experiences of women doctoral students in health sciences, attending to the ways in which gendered dynamics such as ‘lad culture’ and ‘banter’ influenced participants’ experiences of doctoral study and contributed to feelings of marginalisation (Jackson & Sundaram, 2020; Phipps & Young, 2013). Using concepts of legitimate peripheral participation (Lave & Wenger, 1991; Teeuwsen et al., 2014), I explore some of the challenges to legitimacy that participants experienced, but also the strategies they devised in order to develop feelings of legitimacy, validity and, ultimately, belonging.
Preprint
Full-text available
Impact assessment research has developed theory-based approaches to trace the societal impact of scientific research and technological innovation. Impact assessment typically starts from the perspective of a research investment, organisation, or project. Research users, the non-academic actors involved in knowledge production, translation, and application, are well represented in many of these approaches. Research users are usually positioned as contributors to research, recipients of research outputs, or beneficiaries of research-driven outcomes. This paper argues that impact assessment would benefit from a more comprehensive understanding and analysis of research valorisation processes from the user perspective. The first half of the paper reviews key impact assessment literature to identify how research users are positioned and portrayed in relation to valorisation processes. In the second half, we use the results of this review to propose a set of principles to guide a systematic approach to constructing user perspectives on research impact. We suggest four concepts for the operationalisation of this approach. The paper concludes that the addition of a more comprehensive research user perspective on research valorisation would complement and enhance existing impact assessment approaches.
Article
Ratings and rankings are omnipresent and influential in contemporary society. Individuals and organizations respond strategically to the incentives set by rating systems. We use academic publishing as a case study to examine organizational variation in responses to influential metrics. The Journal Impact Factor (JIF) is a prominent metric linked to the value of academic journals, as well as to the career prospects of researchers. Since scholars, institutions, and publishers alike have strong interests in affiliating with high-JIF journals, strategic behaviors to 'game' the JIF metric are prevalent. Strategic self-citation is a common tactic employed to inflate JIF values. Based on empirical analyses of academic journals indexed in the Web of Science, we examine the institutional characteristics conducive to strategic self-citation for JIF inflation. Journals disseminated by for-profit publishers, with lower JIFs, published in academically peripheral countries, and with more recent founding dates were more likely to exhibit JIF-inflating self-citation patterns. The findings reveal the importance of status and institutional logics in influencing metrics-gaming behaviors, as well as how metrics can affect work outcomes in different types of institutions. While quantitative rating systems affect many of those being evaluated, certain types of people and organizations are more prone to being influenced by rating systems than others.
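For reference, the standard definition of the metric at issue: a journal's impact factor for year $y$ is

$$\mathrm{JIF}_y = \frac{C_y(y-1) + C_y(y-2)}{N_{y-1} + N_{y-2}}$$

where $C_y(y-k)$ is the number of citations received in year $y$ by items the journal published in year $y-k$, and $N_{y-k}$ is the number of citable items published in year $y-k$. Because citations from a journal to its own recent articles count toward the numerator, coordinated self-citation inflates the ratio directly, which is the mechanism behind the gaming behaviour examined here.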
Article
Full-text available
This thesis focuses on my work as a professional member of staff in one academic School in a higher-status UK university (Midtown). Specifically, it explores the process of tackling the constraints to collaboration between professional university and academic staff through the medium of action research, using the case and location of my work, widening participation (WP). The research was motivated by my desire to understand why academics often appeared reluctant to engage with WP work, and by my interest in action research as a mutually supportive approach to delivering the WP agenda. The research, therefore, was informed by action research principles of collaboration, co-construction of knowledge and action for social change, and involved me and three academics. There were two phases to the research, encompassing two aspects of WP: access to higher education (HE), in the form of 'taster' sessions for secondary schools; and participation in HE, during which phase the three academics experimented with more inclusive forms of pedagogy when teaching undergraduates. Empirical data included: meeting notes, teaching observations, lesson plans, session feedback (academic co-researchers and pupils), research project evaluation, co-researcher interviews and my research diary notes. Data analysis was thematic and based on action research principles and the principles of inclusive pedagogy. Insights generated included how pedagogic considerations are common to improving both the access and participation elements of WP, and how four disparate individuals overcame considerable constraints to evolve into a collaborative collective during the research. More broadly, the research contributes to knowledge by furthering understanding of how university-based professionals and academics might work more effectively in partnership in arenas such as WP. The research involved a transformative process of surfacing professional and academic anxieties and accepting the differences that hindered collaborative cross-boundary working. Through affording the time and space needed to address the institutional and relational hierarchies, the action research approach provided opportunities to co-produce effective taught sessions and to understand what was needed to engage students at both the access and the participation stages. I argue that for HE professionals whose work involves collaboration with academics, pursuing action research principles opens communicative spaces, enabling mutual learning and development across the academic/professional divide and developing more inclusive and richer working relationships which yield better outcomes for staff and for students.
Article
In 2014, UK higher education institutions implemented a new system for assessing the quality of research, the Research Excellence Framework (REF), and took the opportunity to introduce 'impact beyond academia' as a 'new' assessment criterion. Transformation- and innovation-oriented R&I policies in Norway and the Netherlands are roughly similar in both underlying ideas and timing. Prompted by this convergence, this article tackles the discursive and performative construction of 'societal impact' as a metamorphic criterion that is constantly changing, transforming, and evolving. Using data from policy documents from the UK, the Netherlands, and Norway from 2014 onwards, the comparative semantic analysis draws on theories of speech acts and performativity to reveal the dual (normalising and norming) effect of this discursive device as used by R&I policymakers. The resulting typology, based on four criteria (terminology, positive and negative valences, oikonomia of knowledge, and policy slogan), sets the ground for exploring further dimensions of the challenges of evaluating societal impact.
Article
Full-text available
The diffusion of evaluation systems based on research excellence has confronted scholars with the dilemma of how to combine the different activities and roles characterizing the academic profession. Besides research, other types of knowledge transfer and academic citizenship, i.e., the service activities and roles carried out on behalf of the university within and outside organizational boundaries, are cornerstones of universities' functioning that allow them to thrive, and they need to be valued. This study investigates the complementarity, substitution, and independence effects between the various types of knowledge transfer and academic citizenship in a sample of 752 Italian academics working in business schools. We collected data by combining different sources, including CVs, publication records, and national datasets. Multivariate path analysis was employed to measure covariances between knowledge transfer and academic citizenship. We contribute to the debate on academic citizenship by showing that public and discipline-based service are complementary to knowledge transfer activities, while institutional service is independent of knowledge transfer. Remarkably, journal papers are research outcomes complementary to most academic activities, and the same holds true for dissemination at workshops and conferences. Running counter to the dominant rhetoric, this study testifies to the likelihood of faculty being "all-round" professionals. We show that activities and roles are influenced by academics' previous pathways and research grants, and we discuss the need to value academic citizenship in performance measurement systems.
Article
Bibliometric and scientometric analyses are widely used in university headquarters, across multiple disciplines, in companies, and in governments. We therefore need further research and expertise on how such analysis can be used in practice. In this study, we focus on the role of bibliometric analysis in evidence-based policymaking (EBPM). We divide the types of analysis into descriptive, predictive, and explorative, each playing a different role in EBPM processes. To discuss the role of scientometrics in EBPM, we illustrate a case of hydrogen energy technologies. We derive four propositions based on arguments about the evidence and prerequisites necessary for the analysis: (1) a strict distinction between policy evidence and policy reason; (2) the application of the relevant type of analysis to each unit process of policymaking; (3) multi-layered expertise encompassing data and algorithms, domain knowledge, and an understanding of policy context and social issues; and (4) a knowledge system to archive data, algorithms, and results. This paper contributes broadly to transdisciplinary bibliometric research, and specifically to scientometric research and science-based policymaking.
Article
Full-text available
Impact is at one and the same time an object of derision and acclaim, anxiety and confidence. It is a troubled terrain, discussed from quite different directions, and there seems little prospect of developing a common conversation between those who traverse it. This reflective paper seeks to outline a common core of questions that define the impact of impact. While it offers no answers to them, it establishes the grounds on which the debate can at least be taken forward in the future. These questions are: What is impact? Impact for whom? What are the domains in which it is displayed? What are its indicators? How is it measured?
Article
Full-text available
Science, technology and innovation (STI) policy aimed at technological advance, international competitiveness and wealth creation underpins the regulation of publicly funded research. Familiar quantitative evaluative 'metrics' fit snugly with these economic objectives. A re-imagined STI policy embraces wider intellectual, social, cultural, environmental and economic returns, using qualitative measures and processes to capture research outcomes.
Article
Full-text available
Social impact of research is difficult to measure. Attribution problems arise because of the often long time-lag between research and a particular impact, and because impacts are the consequences of multiple causes. Furthermore, there is a lack of robust measuring instruments. We aim to overcome these problems through a different approach to evaluation where learning is the prime concern instead of judging. We focus on what goes on between researchers and other actors, and so narrow the gap between research and impact, or at least make it transparent. And by making the process visible, we are able to suggest indicator categories that arguably lead to more robust measuring instruments. We propose three categories of what we refer to as ‘productive interactions’: direct or personal interactions; indirect interactions through texts or artefacts; and financial interactions through money or ‘in kind’ contributions.
Article
Full-text available
Understanding the impact of research is important for funding bodies in accounting for funds, advocating for additional resources and learning how better to achieve their aims. The Health Research Board (HRB) has funded research in Ireland for over 20 years. We analysed eight examples of HRB grants from between 10 and 15 years earlier, using the Payback Framework to catalogue the impacts. These ranged from world-class academic articles and new clinical assays through to improvements in recovery time for acute myocardial infarction and the development of a drug company worth over €5 million. Here we first describe the study, then examine the role of the Payback Framework in research impact assessment, including the impacts made by the HRB study itself following its completion in 2008. We discuss how that study has contributed to the further development of research impact assessment methods that could be used by the HRB and others.
Article
Full-text available
Papers in this special issue were developed at an international workshop on ‘State of the Art in Assessing Research Impact’, hosted by the Health Economics Research Group at Brunel University. The workshop debated what constitutes state-of-the-art methods for assessing the ‘impact’ (or broader societal returns) of research. Metrics-only approaches employing economic data and science, technology and innovation indicators were found to be behind the times: best practice combines narratives with relevant qualitative and quantitative indicators to gauge broader social, environmental, cultural and economic public value. Limited consultation between policy-makers and the research evaluation community has led to a lack of policy-learning from international developments. Little engagement between research evaluation specialists and the academic community has cast ‘impact’ as the height of philistinism: yet ‘impact’ is a strong weapon for making an evidence-based case to governments and research funders for enhanced financial support, and ‘the state of the art’ is suited to the characteristics of all research fields (including the humanities, creative arts and social sciences) in their own terms.
Article
Full-text available
In a parallel paper, we have outlined a methodology for assessing the comparative scientific performance of large basic research facilities (and their associated user groups) working in the same specialty, and applied this method of ‘converging partial indicators’ to an evaluation of the contributions to science made by a number of radio telescopes. In this paper, we employ this methodology to evaluate the scientific performance of various optical telescopes — in particular, the 2.5-metre Isaac Newton Telescope, operated as a central facility by the Royal Greenwich Observatory in South-East England. For several years, this was Britain's only major optical telescope, as well as being the largest such instrument in Europe. We compare its performance over the last decade with that of three American telescopes of similar size. This paper has three aims: first, to ascertain whether the method of converging partial indicators, originally applied to radio astronomy, provides a more general policy tool that can be extended to other specialties; second, to determine just how successful each optical telescope has been in producing new astronomical knowledge over the past decade; and, third, to discuss whether our results on the comparative scientific performance of the Isaac Newton Telescope may have any implications for British astronomy policy in general.
Article
Full-text available
This paper describes the first direct application of the Payback Framework (PF) in the United States for an evaluation of the Mind-Body Interactions and Health Program, a trans-National Institutes of Health program funded over a ten-year period beginning in 1999. The program funded 15 research centers and 44 investigator-initiated research projects. We present results from an initial planning study and describe how we selected the PF as a conceptual framework for an outcome evaluation of the program. We outline the overall design for the outcome evaluation study and describe how we adapted the PF with reference to the initial phase of the study, focusing on the 15 research centers.
Article
Full-text available
Evaluation of university-based research already has a reasonably long tradition in the UK, but proposals to revise the framework for national evaluation aroused controversy in the academic community because they envisage assessing more explicitly than before the economic, social and cultural ‘impact’ of research as well as its scientific quality. Using data from the 2009 public consultation on the proposals for a Research Excellence Framework, this paper identifies three main lines of controversy: the threats to academic autonomy implied in the definition of expert review and the delimitation of reviewers, the scope for boundary-work in the construction of impact narratives and case studies, and the framing of knowledge translation by the stipulation that impact ‘builds on’ research. Given the behaviour-shaping effects of research evaluation, the paper demonstrates how the proposed changes could help embed impact considerations among the routine reflexive tools of university researchers and enhance rather than restrict academic autonomy at the level of research units. It also argues that the REF could constitute an important dialogical space for negotiating science–society relations in an era of increasing heteronomy between academia, state and industry. But the paper raises doubts about whether the proposed operationalisation of impact is adequate to evaluate the ways that research and knowledge translation are actually carried out.

Highlights:
► Evaluating the impact of research is perceived to threaten academic autonomy.
► UK proposals give researchers new ways to narrativise research purpose and relevance.
► UK proposals can create a dialogical space for managing science–society relations.
► The ‘impact builds on research’ formula cannot capture multiplex knowledge translations.
► Consultation on UK proposals became a public forum for professional boundary-work.
Article
Full-text available
Many countries have introduced evaluations of university research, reflecting global demands for greater accountability. This paper compares methods of evaluation used across twelve countries in Europe and the Asia-Pacific region. On the basis of this comparison, and focusing in particular on Britain, we examine the advantages and disadvantages of performance-based funding in comparison with other approaches to funding. Our analysis suggests that, while initial benefits may outweigh the costs, over time such a system seems to produce diminishing returns. This raises important questions about its continued use.
Article
Full-text available
In this paper we present the results of a study on the potentialities of “bibliometric” (publication and citation) data as tools for university research policy. In this study bibliometric indicators were calculated for all research groups in the Faculty of Medicine and the Faculty of Mathematics and Natural Sciences at the University of Leiden. Bibliometric results were discussed with a number of researchers from the two faculties involved. Our main conclusion is that the use of bibliometric data for evaluation purposes carries a number of problems, both with respect to data collection and handling, and with respect to the interpretation of bibliometric results. However, most of these problems can be overcome. When used properly, bibliometric indicators can provide a “monitoring device” for university research-management and science policy. They enable research policy-makers to ask relevant questions of researchers on their scientific performance, in order to find explanations of the bibliometric results in terms of factors relevant to policy.
Article
This exploratory study of 57 large bankruptcies and 57 matched survivors examined the dynamics of major corporate failure. Prior research was used to guide selection of the four major constructs studied: domain initiative, environmental carrying capacity, slack, and performance. What emerges is a clear portrayal of a protracted process of decline, aptly portrayed by prior theorists, and modeled here, as a downward spiral. In the firms studied, significant features of the downward spiral included early weaknesses in slack and performance, extreme and vacillating strategic actions, and abrupt environmental decline. An elaboration of the last two stages of decline is also presented, based on the findings from this study. The downward-spiral model is then illustrated with a case example. The study sheds light on major debates and dilemmas in the fields of organization theory and strategy regarding why major firms fail.
Article
This paper applies the SIAMPI approach, which focuses on the concept of productive interactions, to the identification of the social impact of research in the social sciences. An extensive interview programme with researchers in a Welsh university research centre was conducted to identify the productive interactions and the perceived social impacts. The paper argues that an understanding of and focus on the processes of interaction between researchers and stakeholders provides an effective way to study social impact and to deal with the attribution problem common to the evaluation of the social impact of research. The SIAMPI approach thereby differentiates itself from other forms of impact assessment and evaluation methods. This approach is particularly well-suited to the social sciences, where research is typically only one component of complex social and political processes.
Article
The UK Economic and Social Research Council funded exploratory evaluation studies to assess the wider impacts on society of various examples of its research. The Payback Framework is a conceptual approach previously used to evaluate impacts from health research. We tested its applicability to social sciences by using an adapted version to assess the impacts of the Future of Work (FoW) programme. We undertook key informant interviews, a programme-wide survey, user interviews and four case studies of selected projects. The FoW programme had significant impacts on knowledge, research and career development. While some principal investigators (PIs) could identify specific impacts of their research, PIs generally thought they had influenced policy in an incremental way and informed the policy debate. The study suggests progress can be made in applying an adapted version of the framework to the social sciences. However, some impacts may be inaccessible to evaluation, and some evaluations may occur too early or too late to capture the impact of research on a constantly changing policy environment.
Article
Long-term changes in knowledge production can produce mismatches between the research that society requires and the research that society produces — what we term ‘relevance gaps’. This paper explores what can be done to close them. The paper argues that current structures for governing research are often inappropriate, damage the reputation and value system of the academy, and produce a widespread perception that much research is irrelevant. New ways are needed to address how disciplinary value judgements and the structure of peer review influence the direction of academic research. Alternatives to current peer-review practices and guidelines for funding agencies are proposed.
Article
Reviews the literature on the Hawthorne effect (HE) which originated out of the studies at the Hawthorne Works of the Western Electric Company. This effect is generally defined as the problem in field experiments that Ss' knowledge that they are in an experiment modifies their behavior from what it would have been without the knowledge. An examination of the Hawthorne studies conducted 50 yrs ago does not reveal this "effect" probably because there were so many uncontrolled variables. HE is inconsistently described in contemporary psychology textbooks, and there is lack of agreement on how the effect is mediated. Controls for the HE in current field research (mostly in education) took several forms, each designed for different purposes. In 13 studies designed to produce HEs, only 4 using adult Ss were successful. It is suggested that most persons in any clearly identified situation define the context for their behavior and respond accordingly; the necessity to ascertain Ss' view of the experiment requires different procedures than those typically used to control for HEs in the past. It is concluded that better articulation of how to adapt postexperimental questioning procedures to a diversity of experimental settings is needed.
Article
Rod Rhodes is Professor of Politics and Head of Department at the University of York and editor of Public Administration. He thanks Neil Carter (York) and Janice McMillan (Robert Gordon) for their comments on an early version.
Article
This article critically reviews the literature on the economic benefits of publicly funded basic research. In that literature, three main methodological approaches have been adopted — econometric studies, surveys and case studies. Econometric studies are subject to certain methodological limitations but they suggest that the economic benefits are very substantial. These studies have also highlighted the importance of spillovers and the existence of localisation effects in research. From the literature based on surveys and on case studies, it is clear that the benefits from public investment in basic research can take a variety of forms. We classify these into six main categories, reviewing the evidence on the nature and extent of each type. The relative importance of these different forms of benefit apparently varies with scientific field, technology and industrial sector. Consequently, no simple model of the economic benefits from basic research is possible. We reconsider the rationale for government funding of basic research, arguing that the traditional ‘market failure’ justification needs to be extended to take account of these different forms of benefit from basic research. The article concludes by identifying some of the policy implications that follow from this review.
Article
This paper looks at the issues surrounding the measurement of university research performance by examining the controversial evaluation exercise undertaken in Britain in 1985–1986 by the University Grants Committee (UGC). The economic and political background to the higher education debate in Britain is briefly discussed, followed by an account of the conceptual, technical and philosophical issues raised by the UGC exercise. A number of research performance indicators proposed by the UGC and various critics of its evaluation are listed according to the aspect of performance which each indicator is purported to measure, and weaknesses and inconsistencies of each indicator are highlighted. “Performance” is shown to be a complex concept for which no objective indicators exist, while the context and process through which indicators of performance are arrived at, and the subsequent use to which they are put, are judged to be as important as the information which each indicator conveys. Finally, a number of policy issues arising out of the debate are analysed, and suggestions are made for government and universities to consider in order to avoid a repeat of the mistakes of the UGC exercise. This is seen to be particularly important in the light of more recent government proposals for university research and the higher education system in general.
Article
As the costs of certain types of scientific research have escalated and as growth rates in overall national science budgets have declined, so the need for an explicit science policy has grown more urgent. In order to establish priorities between research groups competing for scarce funds, one of the most important pieces of information needed by science policy-makers is an assessment of those groups' recent scientific performance. This paper suggests a method for evaluating that performance. After reviewing the literature on scientific assessment, we argue that, while there are no simple measures of the contributions to scientific knowledge made by scientists, there are a number of ‘partial indicators’ — that is, variables determined partly by the magnitude of the particular contributions, and partly by ‘other factors’. If the partial indicators are to yield reliable results, then the influence of these ‘other factors’ must be minimised. This is the aim of the method of ‘converging partial indicators’ proposed in this paper. We argue that the method overcomes many of the problems encountered in previous work on scientific assessment by incorporating the following elements: (1) the indicators are applied to research groups rather than individual scientists; (2) the indicators based on citations are seen as reflecting the impact, rather than the quality or importance, of the research work; (3) a range of indicators are employed, each of which focusses on different aspects of a group's performance; (4) the indicators are applied to matched groups, comparing ‘like’ with ‘like’ as far as possible; (5) because of the imperfect or partial nature of the indicators, only in those cases where they yield convergent results can it be assumed that the influence of the ‘other factors’ has been kept relatively small (i.e. the matching of the groups has been largely successful), and that the indicators therefore provide a reasonably reliable estimate of the contribution to scientific progress made by different research groups. In an empirical study of four radio astronomy observatories, the method of converging partial indicators is tested, and several of the indicators (publications per researcher, citations per paper, numbers of highly cited papers, and peer evaluation) are found to give fairly consistent results. The results are of relevance to two questions: (a) can basic research be assessed? (b) more specifically, can significant differences in the research performance of radio astronomy centres be identified? We would maintain that the evidence presented in this paper is sufficient to justify a positive answer to both these questions, and hence to show that the method of converging partial indicators can yield information useful to science policy-makers.
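In schematic form (our gloss on the method, not the authors' notation), each partial indicator $I_k$ for a research group can be written as

$$I_k = f_k(C, u_k), \qquad k = 1, \dots, K$$

where $C$ is the group's underlying contribution to scientific knowledge and $u_k$ represents the ‘other factors’ contaminating indicator $k$. Since the $u_k$ differ across indicators (publication counts, citations per paper, highly cited papers, peer evaluation), convergent results across all $K$ indicators suggest that the common signal $C$, rather than the idiosyncratic noise, is driving the measurements.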
Article
The author regards development of Australia's ill-fated Research Quality Framework (RQF) as a “live experiment” in determining the most appropriate approach to evaluating the extra-academic returns, or “impact,” of a nation's publicly funded research. The RQF was at the forefront of an international movement toward richer qualitative, contextual approaches that aimed to gauge the wider economic, social, environmental, and cultural benefits of research. Its construction and implementation sent mixed messages and created confusion about what impact is, and how it is best measured, to the extent that this bold live experiment did not come to fruition.
Article
This article examines the origins and evolution of the field of science policy and innovation studies (SPIS). Like other studies in this Special Issue, it seeks to systematically identify the key intellectual developments in the field over the last 50 years by analysing the publications that have been highly cited by other researchers. The analysis reveals how the emerging field of SPIS drew upon a growing range of disciplines in the late 1950s and 1960s, and how the relationship with these disciplines evolved over time. Around the mid-1980s, substantial parts of SPIS started to coalesce into a more coherent field centred on the adoption of an evolutionary (or neo-Schumpeterian) economics framework, an interactive model of the innovation process, and (a little later) the concept of ‘systems of innovation’ and the resource-based view of the firm. The article concludes with a discussion of whether SPIS is perhaps in the early stages of becoming a discipline.
Article
We investigated a consecutive series of children with chronic enterocolitis and regressive developmental disorder. 12 children (mean age 6 years [range 3-10], 11 boys) were referred to a paediatric gastroenterology unit with a history of normal development followed by loss of acquired skills, including language, together with diarrhoea and abdominal pain. Children underwent gastroenterological, neurological, and developmental assessment and review of developmental records. Ileocolonoscopy and biopsy sampling, magnetic-resonance imaging (MRI), electroencephalography (EEG), and lumbar puncture were done under sedation. Barium follow-through radiography was done where possible. Biochemical, haematological, and immunological profiles were examined. Onset of behavioural symptoms was associated, by the parents, with measles, mumps, and rubella vaccination in eight of the 12 children, with measles infection in one child, and otitis media in another. All 12 children had intestinal abnormalities, ranging from lymphoid nodular hyperplasia to aphthoid ulceration. Histology showed patchy chronic inflammation in the colon in 11 children and reactive ileal lymphoid hyperplasia in seven, but no granulomas. Behavioural disorders included autism (nine), disintegrative psychosis (one), and possible postviral or vaccinal encephalitis (two). There were no focal neurological abnormalities and MRI and EEG tests were normal. Abnormal laboratory results were significantly raised urinary methylmalonic acid compared with age-matched controls (p=0.003), low haemoglobin in four children, and a low serum IgA in four children. We identified associated gastrointestinal disease and developmental regression in a group of previously normal children, which was generally associated in time with possible environmental triggers.
Article
The "Hawthorne effect" is often mentioned as a possible explanation for positive results in intervention studies. It is used to cover many phenomena, not only unwitting confounding of variables under study by the study itself, but also behavioral change due to an awareness of being observed, active compliance with the supposed wishes of researchers because of special attention received, or positive response to the stimulus being introduced. At times, the term seems to be used as a social equivalent to "placebo effect". In social research, there is much critical literature indicating that, in general, the term "Hawthorne effect" should be avoided. Instead of referring to the ambiguous and disputable Hawthorne effect when evaluating intervention effectiveness, researchers should introduce specific psychological and social variables that may have affected the outcome under study but were not monitored during the project, along with the possible effect on the observed results.
Martin, Ben R and Richard Whitley 2010. The UK Research Assessment Exercise: a case of regulatory capture? In Reconfiguring Knowledge Production: Changing Authority Relationships in the Sciences and their Consequences for Intellectual Innovation, eds R Whitley, J Gläser and L Engwall, pp. 51–80. Oxford: Oxford University Press.
Morgan, Oliver 2002. Master of science fiction. The Observer, 20 October 2002. <http://www.guardian.co.uk/business/2002/oct/20/interviews.politics>, last accessed 12 July 2011.
Davis, Caroline 2002. Analysis: UK's spin-offs reap £3 billion reward. Times Higher Education, 25 October 2002. <http://www.timeshighereducation.co.uk/story.asp?storyCode=172516&sectioncode=26>, last accessed 12 July 2011.

DEST, Department of Education, Science and Training 2006. Research Quality Framework: Assessing the Quality and Impact of Research in Australia: the Recommended RQF. Canberra: DEST.
Martin, Ben R 1997. Evaluating science and scientists: factors affecting the acceptance of evaluation results. In Evaluating Science and Scientists: an East-West Dialogue on Research Evaluation in Post-Communist Europe, eds M S Frankel and J Cave, pp. 28–45.
Sastry, Tom and Bahram Bekhradnia 2006. Using Metrics to Allocate Research Funds. Oxford: Higher Education Policy Institute.

Scott, Jack E, Margaret Blasinsky, Mary Dufour, Rachel Mandal and Stephane Philogene 2011. Assessing the impact of an NIH program using the Research Payback Framework. Research Evaluation, 20(3), September, 185–192.
Martin, Ben R and Puay Tang 2006. The Economic and Social Benefits of Publicly Funded Basic Research. Report to the Office of Science and Innovation, Department of Trade and Industry. Brighton: SPRU.
Goodhart, Charles A E 1975. Monetary relationships: a view from Threadneedle Street. In Papers in Monetary Economics, Vol. I. Sydney: Reserve Bank of Australia.
Grant, Jonathan, Philipp-Bastian Brutscher, Susan Ella Kirk, Linda Butler and Steven Wooding 2009. Capturing Research Impacts: a Review of International Practice. Report prepared for the Higher Education Funding Council for England. Cambridge: RAND Europe.
Spaapen, Jack and Leonie van Drooge 2011. The role of productive interactions in the assessment of social impact of research. Research Evaluation, 20(3), September, 211–218.
Tichy, Noel M and Mary Anne Devanna 1986. The Transformational Leader. New York: Wiley.
Martin, Ben R 2007. Assessing the impact of basic research on society and the economy. Invited presentation at the FWF-ESF International Conference on 'Science Impact: Rethinking the Impact of Basic Research on Society and the Economy', Vienna, 11 May 2007.
Bekhradnia, Bahram 2009. Proposals for the Research Excellence Framework: a Critique. Oxford: Higher Education Policy Institute.

Bernal, John D 1939. The Social Function of Science. London: Routledge.

Brewer, John 2011a. Viewpoint: from public impact to public value. Methodological Innovations Online, 6(1), 9–12.
Chrystal, K Alec and Paul D Mizen 2001. Goodhart's Law: its origins, meaning and implications for monetary policy. Prepared for the Festschrift in honour of Charles Goodhart held on 15–16 November 2001 at the Bank of England.

Collins, Harry 2011. Measures, markets and information. Methodological Innovations Online, 6(1), 3–6.
Irvine, John and Ben R Martin 1984. What direction for basic scientific research? In Science and Technology Policy in the 1990s and Beyond, eds M Gibbons, P Gummett and B M Udgaonkar, pp. 67–98.
Martin, Ben R, Ammon Salter, Diana Hicks, Keith Pavitt, Jacqueline Senker, Margaret Sharp and Nick von Tunzelmann 1996. The Relationship between Publicly Funded Basic Research and Economic Performance: a SPRU Review. London: HM Treasury.