Article

Do Rankings Reflect Research Quality?


Abstract

Publication and citation rankings have become major indicators of the scientific worth of universities and determine to a large extent the career of individual scholars. Such rankings do not effectively measure research quality, which should be the essence of any evaluation. These quantity rankings are not objective; two citation rankings, based on different samples, produce entirely different results. For that reason, an alternative ranking is developed as a quality indicator, based on membership on academic editorial boards of professional journals. It turns out that the ranking of individual scholars based on that measure is far from objective. Furthermore, the results differ markedly, depending on whether research quantity or quality is considered. Thus, career decisions based on rankings are dominated by chance and do not reflect research quality. We suggest that evaluations should rely on multiple criteria. Public management should return to approved methods such as engaging independent experts who in turn provide measurements of research quality for their research communities.


... In addition to these measures of prestige, the achievements of editorial board members can create benchmarks for research excellence and can be used to evaluate both individual and institutional performance (Frey & Rost, 2010; Hardin, Beauchamp, Liano, & Hill, 2006; Lahiri & Kumar, 2012; Lu, Li & Wu, 2018). The editors of the top journals are individuals who have a higher academic level of influence and act as gatekeepers for scientific studies in their subject field. ...
... The definitions of each rating are as follows: a journal rating of 4 or 4* indicates that the journal is an elite world journal; 3-rated journals are highly regarded; 2-rated journals publish research of an acceptable standard; and the last class of journals, 1, publishes research of a more modest standard (Hussain, 2013; Thomas et al., 2009). These journals on the ABS list have also been considered leading journals in previous department rankings (Andrikopoulos & Economou, 2015; Frey & Rost, 2010; Lee, Cronin, McConnell, & Dean, 2010). ...
... Among different journals, there are various editor positions, including the editor, managing editor, board member, advisory editor, and more. The lack of uniformity in the terms used for similar positions across journals can lead to problems, both in assessing the editor's influence and in assessing institutional quality (Frey & Rost, 2010). Additionally, within an editorial board, editor titles reflect different academic reputations and standings. ...
... In theory, editorial board members probably obtained their positions because of their high research achievements. Some researchers believe that the number of editorial board members of a given university can reflect the impact of research output of that university (Frey and Rost 2010). Therefore, it may be meaningful to analyse the relationship between universities' editorial board representation and the impact of research output from their universities. ...
... Using the number of editorial board members as an indicator of university ranking is expanding. In fact, an increasing number of sub-disciplines of economics and management, including marketing, international business, and tourism management, have introduced university rankings based on the number of editorial board members (Chan, Fung and Lai 2005; Frey and Rost 2010; Law, Leung and Buhalis 2010; Urbancic 2005; Urbancic 2011). However, similar studies on disciplines such as science, engineering, agriculture, and medicine remain relatively scarce. ...
... Several researchers have examined the correlation between university rankings based on the number of editorial board members and research output. Some studies found a positive correlation (Chan and Fok 2003; Frey and Rost 2010; Gibbons and Fish 1991; Kaufman 1984; Urbancic 2005), whereas others did not (Burgess and Shaw 2010; Chan, Fung and Lai 2005). Hence, there is a lack of convergent results across studies of the various disciplines. ...
Article
This study uses quantile regression models to explore the relationship between SCI (Science Citation Index) editorial board representation and the research output of universities in the field of computer science. Quantile regression allows investigation of how the relationship between editorial board representation and research output varies. A total of 447 journals and 14,442 editorial board members were analysed. The results suggest that the number of editorial board members is positively and significantly related to both the quantity (number of articles) and the impact (total number of citations and citations per paper) of the research output from their respective universities. A deeper analysis using quantile regression indicates that the relationship between the number of editorial board members and research output is stronger when the university is at the higher quantiles of the conditional research output distribution. In addition, to speculate on possible mechanisms behind the relationship between editorial board representation and research output, two exploratory studies based on two small samples were conducted at the individual and journal levels, respectively. © 2003, Faculty of Computer Science & Information Technology, University of Malaya.
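The quantile regression used in the study above minimizes an asymmetric "pinball" (check) loss rather than squared error, which is what lets it describe the upper and lower parts of the conditional research output distribution separately. A minimal sketch of the principle in Python, using synthetic data rather than the study's dataset: for a constant-only model, the pinball-loss minimizer coincides with the empirical quantile.

```python
import numpy as np

def pinball_loss(y, pred, tau):
    """Asymmetric check loss: residuals above the prediction are
    weighted by tau, residuals below it by (1 - tau)."""
    r = y - pred
    return np.mean(np.where(r >= 0, tau * r, (tau - 1) * r))

rng = np.random.default_rng(0)
# Synthetic, skewed "research output" values (not data from the study).
y = rng.exponential(scale=10.0, size=5000)

tau = 0.75
# Minimize the pinball loss over a grid of constant predictions; the
# minimizer should land on (approximately) the empirical 0.75 quantile.
grid = np.linspace(y.min(), y.max(), 2000)
losses = [pinball_loss(y, c, tau) for c in grid]
best = grid[int(np.argmin(losses))]

print(best, np.quantile(y, tau))  # the two values nearly agree
```

A full quantile regression replaces the constant with a linear predictor and minimizes the same loss, which is why fits at different values of tau can reveal the stronger association at the upper quantiles reported above.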
... The factors of research inputs and research outputs should be taken into consideration. Public management should return to approved methods such as engaging independent experts who in turn provide measurements of research quality for their research communities (Bruno & Katja, 2010). ...
... They considered expertise from within individual universities of greater help than that of third-party analysts. They argued that public management, especially university management, should stop the mass euphoria over rankings and return to approved methods, such as engaging independent experts who in turn provide measurements of research quality applicable to their specific research community (Bruno & Katja, 2010). ...
... The opinions of experts may indeed be influenced by subjective elements, narrow mindedness, and limited cognitive horizons. These shortcomings may result in conflicts of interest, unawareness of quality, or a negative bias against young scientists or newcomers to a particular field (Bruno & Katja, 2010). ...
Article
Full-text available
Publication in a ranked journal has been considered an indicator of scholarship among individuals. The relationship between a ranked journal and an individual's research has remained unexamined for the last two decades, yet in recent years most institutes and individual authors have attached tremendous importance to these rankings. This research explores the relationship between these rankings and the evaluation of an individual author. The study uses multiple literature reviews to examine the methods for evaluating an author and a journal. It also explores existing flaws in these methods and relates them to the interrelations that may exist between journal rankings and an author's research. The research concludes with the design of a flowchart illustrating the interrelations between current research trends and journal rankings. In its conclusion, the study draws on suggestions made by renowned scholars, relates them to the identified flaws, and incorporates them into a new flowchart showing the changes taking place in the interrelations among the methods.
... The factors of research inputs and research outputs should be taken into consideration. Public management should return to approved methods such as engaging independent experts who in turn provide measurements of research quality for their research communities (Bruno & Katja, 2010). ...
... Expertise from individual universities would be of great help rather third party analyst seemed a better option to them. They argued that public management, especially university management, should stop the mass euphoria of rankings and return to approved methods, such as engaging independent experts who in turn provide measurements of research quality that is applicable to their specific research community (Bruno & Katja, 2010). ...
... The opinions of experts may indeed be influenced by subjective elements, narrow mindedness, and limited cognitive horizons. These shortcomings may result in conflicts of interest, unawareness of quality, or a negative bias against young scientists or newcomers to a particular field (Bruno & Katja, 2010). ...
... Which removes structural differences across disciplinary areas, such as a high number of authors in the chemical sciences, and allows us to compare the relative data of each department. Like the previous outputs, these indicators detail the quantity and quality of inputs expended to produce the research activity [22]. In particular, the numbers of teaching staff in various academic positions indicate the quantity of inputs used by a given institution, while the salary levels indicate the quality of inputs. ...
... In consequence, the higher research ... There are no significant differences between rural and urban areas in the relative weights attributed to these outputs. With a higher rate of publications before tenure; see, for example, [48] for a discussion of age and tenure. ...
Article
This paper studies the efficiency of American educational diffusion and research productivity along two distinctions: urban vs. rural areas and public vs. private universities. On the geographical dimension, knowledge diffusion appears homogeneous across the American territory, whereas research productivity is more heterogeneous: American research efficiency decreased by 7 percentage points, owing to some rural university locations. Universities in urban areas favor educational quality through highly selective student admission criteria, contrary to those located in more rural areas. Finally, public universities show higher educational efficiency, favoring educational quality over research productivity: the lower research efficiency of public institutions stems from difficulties in managing several campuses, in comparison with private institutions, which are all single-campus.
... There are three conventional ways of assessing journal quality: (i) subjective (perceptual), (ii) objective (citation-based) and (iii) a combination thereof (hybrid). All three feature well-known methodological limitations [9][10][11]. Recently, a fourth approach has gained momentum-the 'meta'-ranking approach-which, like the hybrid approach, is intended to provide a balanced view by delivering a composite journal ranking [cf. ...
... Yet, despite these advancements, extensive discussions of the underlying methodological issues raise concerns about sole reliance on citation-based analysis in journal ranking exercises. This is because important work may be considered "common knowledge" and is sometimes left uncited, with acknowledgement given to other work, or because citation counts frequently reflect mere fashion and herding within the academic community, which implies that citing does not necessarily signal influence [9,22,23]. There are also problems of selective citation and the opportunity for self- and mutual citation, a poor association between the quality of a journal and that of the individual articles in it, as well as possible subjectivity in analyses based on the objective citation data [5,24,25]. ...
Technical Report
Full-text available
The question of how to assess research outputs published in journals is now a global concern for academics. Numerous journal ratings and rankings exist, some featuring perceptual and peer-review-based journal ranks, some focusing on objective information related to citations, some using a combination of the two. This research consolidates existing journal rankings into an up-to-date and comprehensive list. Existing approaches to determining journal rankings are significantly advanced with the application of a new classification approach, ‘random forests’, and data envelopment analysis. As a result, a fresh look at a publication’s place in the global research community is offered. While our approach is applicable to all management and business journals, we specifically exemplify the relative position of ‘operations research, management science, production and operations management’ journals within the broader management field, as well as within their own subject domain.
... Kotiaho et al. (1999) found that names from unfamiliar languages lead to a geographical bias against non-English speaking countries. Small changes in measurement techniques and classifications can have considerable consequences for the position in rankings (Ursprung and Zimmer, 2006;Frey and Rost, 2010). ...
... Adler and Harzing, 2009). This holds in particular as far as the ranking of individuals is concerned (Frey and Rost, 2010). It may even be argued that the number of rankings should be augmented so that each individual one loses its importance (Osterloh and Frey, 2009). ...
Article
Full-text available
Background: Research rankings based on bibliometrics today dominate governance in academia and determine careers in universities. Method: Analytical approach to capture the incentives of users of rankings and of suppliers of rankings, both on an individual and an aggregate level. Result: Rankings may produce unintended negative side effects. In particular, rankings substitute the "taste for science" with a "taste for publication." We show that the usefulness of rankings rests on several important assumptions challenged by recent research. Conclusion: We suggest as alternatives careful socialization and selection of scholars, supplemented by periodic self-evaluations and awards. The aim is to encourage controversial discourses in order to contribute meaningfully to the advancement of science.
... Despite the popularity of rankings and their increasing impact on decision making in higher education, their methods are often criticised as being flawed and lacking the due methodological diligence that would be adequate to match the complexity of processes like teaching and learning or of research publications (cf. Frey, Rost 2010, Adler, Harzing 2009, Albers 2009, Devinney, Dowling & Perm-Ajchariyawong 2008). ...
... But this, critics claim, is often not the case in existing rankings (cf. Frey, Rost 2010, Albers 2009). ...
Article
Full-text available
As a result of an action research project, innovative changes were made to the running of Monash University Sunway campus' Problem Based Learning (PBL) sessions. The timeline for this action research was four years: from the conception of the idea and evaluation in 2007, through implementation and reassessment in 2009, to the final review of outcomes in 2010. The main objective of this action research paper is to compare and evaluate the single-day PBL session against the two-day sessions offered to medical students. A secondary objective is to propose solutions for more effective resource utilisation within our institution. Mixed-method approaches were used in the collection of data. The results showed that the new single-day approach to PBL delivery was favourably accepted by both students and tutors. This single-day PBL delivery, initiated by the Malaysian campus, was positively received by students and tutors and adopted by the campuses in both Australia and Malaysia.
... Despite the popularity of rankings and their increasing impact on decision making in higher education, their methods are often criticised as being flawed and lacking the due methodological diligence that would be adequate to match the complexity of processes like teaching and learning or of research publications (cf. Frey, Rost 2010, Adler, Harzing 2009, Albers 2009, Devinney, Dowling & Perm-Ajchariyawong 2008). ...
... But this, critics claim, is often not the case in existing rankings (cf. Frey, Rost 2010, Albers 2009). ...
Article
Full-text available
The development of the teaching evaluation has followed a longitudinal action research approach. Step-by-step measures and strategies were taken by the Medical Education Unit (MEU) to improve the quality of teaching and validate the evaluation documentation. These measures formed part of the accreditation of the medical curriculum. The main objectives of the teaching evaluation are to provide a forum for students to give feedback on the quality of the teaching provided within the School of Medicine and to enable tutors to receive feedback on their performance. Evaluation also provides a medium for tutors to identify areas of need regarding their Continuing Professional Development (CPD). In 2007, the MEU looked at all aspects of course evaluation and student experience by using Pendleton's feedback tool and a Likert scale. In 2009, evaluation questionnaires were created in Optical Mark Reader (OMR) format through Form and Label Integrated Printing System (FLIPS) software. In 2011, standard operating procedures were created for disseminating the feedback to staff. To conclude, this presentation will focus on the development of the teaching evaluation and the measures taken by the MEU to improve the quality of the evaluation process, which has made a positive impact on the School.
... Consistent with this idea, many research indicators have been developed (see, for example [2][3][4]). It is unclear, however, whether the indicators currently used accurately measure all that governments and research administrators need to know, or whether such indicators are always correctly interpreted and applied by governments and research administrators [5][6][7]. ...
... In fact, researchers value these papers. However, the larger society, which pays for the research, is interested in tangible evidence of progress, in both technological and basic research, not in the intermediate steps (this is a short description of a complex problem; see, for example [5,21,22]). Many of these intermediate steps may be considered exploratory research that leads nowhere. ...
Article
Full-text available
A Kuhnian approach to research assessment requires us to consider that the important scientific breakthroughs that drive scientific progress are infrequent and that the progress of science does not depend on normal research. Consequently, indicators of research performance based on the total number of papers do not accurately measure scientific progress. Similarly, those universities with the best reputations in terms of scientific progress differ widely from other universities in terms of the scale of investments made in research and in the higher concentrations of outstanding scientists present, but less so in terms of the total number of papers or citations. This study argues that indicators for the 1% high-citation tail of the citation distribution reveal the contribution of universities to the progress of science and provide quantifiable justification for the large investments in research made by elite research universities. In this tail, which follows a power law, the number of the less frequent and highly cited important breakthroughs can be predicted from the frequencies of papers in the upper part of the tail. This study quantifies the false impression of excellence produced by multinational papers, and by other types of papers that do not contribute to the progress of science. Many of these papers are concentrated in and dominate lists of highly cited papers, especially in lower-ranked universities. The h-index obscures the differences between higher- and lower-ranked universities because the proportion of h-core papers in the 1% high-citation tail is not proportional to the value of the h-index.
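The h-index criticised above is easy to compute from a list of citation counts, and a toy example makes the abstract's point concrete: two citation profiles with very different high-citation tails can share the same h. A short sketch in Python, with synthetic counts that are not data from the study:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Two hypothetical universities: one with a heavy high-citation tail,
# one with a flat distribution. Both have h = 5.
uni_a = [250, 180, 90, 6, 5, 4, 3, 2]
uni_b = [8, 7, 6, 6, 5, 4, 3, 2]

print(h_index(uni_a), h_index(uni_b))  # both print as 5
```

Because the index saturates at the rank where counts drop below the rank, the papers in the extreme tail (250, 180, 90 citations) contribute no more to h than modestly cited ones, which is exactly why the abstract argues the h-index obscures differences between higher- and lower-ranked universities.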
... However, this practice has been strongly criticized for several years (Seglen, 1997; Moed and Van Leeuwen, 1996; Laband and Tollison, 2003; Starbuck, 2005; Oswald, 2007; Singh et al., 2007; Adler and Harzing, 2009; Frey and Rost, 2010; Baum, 2011; Macdonald and Kam, 2011; Mingers and Willmott, 2013; Alberts, 2013; Osterloh and Frey, 2014; Wilsdon et al., 2015; Martin, 2016; Larivière et al., 2016; Berg, 2016; Callaway, 2016; Waltman, 2016; Wang et al., 2017), even by Eugene Garfield, the inventor of the impact factor (Garfield, 1973). The San Francisco Declaration on Research Assessment (DORA, 2012), which has been endorsed by many leading institutions, clearly states: "Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist's contributions, or in hiring, promotion, or funding decisions." ...
Article
Publications in top journals today have a powerful influence on academic careers although there is much criticism of using journal rankings to evaluate individual articles. We ask why this practice of performance evaluation is still so influential. We suggest this is the case because a majority of authors benefit from the present system due to the extreme skewness of citation distributions. “Performance paradox” effects aggravate the problem. Three extant suggestions for reforming performance management are critically discussed. We advance a new proposal based on the insight that fundamental uncertainty is symptomatic for scholarly work. It suggests focal randomization using a rationally founded and well-orchestrated procedure.
... Gibbons and Fish [66] confirm this idea: 'Certainly, the more editorial boards an economist is on, the more prestigious the economist.' Consequently, serving on an editorial board can be regarded as indicative of scholarly quality among one's peers [67]. In our sample, we counted the number of positions that a professor held as editor or board member of an academic journal. ...
Article
Full-text available
This paper examines how gender proportions at the workplace affect the extent to which individual networks support the career progress (i.e. time to promotion). Previous studies have argued that men and women benefit from different network structures. However, the empirical evidence about these differences has been contradictory or inconclusive at best. Combining social networks with tokenism, we show in a longitudinal academic study that gender-related differences in the way that networks affect career progress exist only in situations where women are in a token position. Our empirical results further show that women not in severely underrepresented situations benefit from the same network structure as men.
... The dominant framework has been to use counts of direct citations between publications, journals, and research fields in order to quantify the relationships between them and, for example, to rank authors [22], publications [17], universities [46], and institutions [8]. As these methods work reasonably well [2], and apart from a few exceptions [12,41], improvements to these methods have focused on correcting technical flaws [15,44]. However, most previous research has ignored an intrinsic conceptual issue with methods based on simply counting citations: they neglect the fact that scientific publications build not only on the information created in the publications they cite, but on the whole body of literature beneath this first layer of publications. ...
Preprint
In all of science, the authors of publications depend on the knowledge presented by previous publications. Thus they "stand on the shoulders of giants", and there is a flow of knowledge from previous publications to more recent ones. The dominant paradigm for tracking this flow of knowledge is to count the number of direct citations, but this neglects the fact that beneath the first layer of citations there is a full body of literature. In this study, we go underneath the "shoulders" by investigating the cumulative knowledge creation process in a citation network of around 35 million publications. In particular, we study stylized models of persistent influence and diffusion that take into account all the possible chains of citations. When we study the persistent influence values of publications and their citation counts, we find that publications related to Nobel Prizes (i.e., Nobel papers) rank higher in terms of persistent influence than in terms of citations, and that the most outperforming publications are typically early works leading to the hot research topics of their time. The diffusion model reveals a significant variation in the rates at which different fields of research share knowledge. We find that these rates have been increasing systematically for several decades, which can be explained by the increase in publication volumes. Overall, our results suggest that analyzing cumulative knowledge creation on a global scale can be useful in estimating the type and scale of the scientific influence of individual publications and entire research areas, as well as yielding insights that could not be discovered by using direct citation counts alone.
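The core idea of crediting indirect citation chains, not just direct citations, can be sketched with a toy model. This is our own simplification, not the authors' exact formulation: on a small citation DAG, each chain of length k ending at a paper contributes a geometrically damped weight, so a paper gains extra credit when its citers are themselves cited.

```python
from functools import lru_cache

# Toy citation DAG (hypothetical papers): cited_by[p] lists the papers
# that cite p directly.
cited_by = {
    "A": ["B", "C"],   # A is cited by B and C
    "B": ["C"],        # B is cited by C
    "C": [],           # C is not cited
}

ALPHA = 0.5  # damping: a citation chain of length k contributes ALPHA**(k-1)

@lru_cache(maxsize=None)
def persistent_influence(paper):
    """Sum over all citation chains ending at `paper`: each direct
    citation counts 1, and each citer passes on ALPHA times its own
    (recursively computed) influence."""
    return sum(1 + ALPHA * persistent_influence(c) for c in cited_by[paper])

# A has 2 direct citations plus the indirect chain C -> B -> A at weight 0.5.
print(persistent_influence("A"))  # 2.5
```

With ALPHA = 0 this reduces to plain citation counting; raising ALPHA shifts credit toward papers that sit at the root of long chains, which is the kind of reordering the abstract reports for Nobel papers. A real computation over 35 million publications would use topological ordering over the full network rather than memoized recursion.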
... World university rankings are subject to some controversy: subjectivity in the choice of indicators, arbitrary attribution of weights, excessive consideration of the reputational factor, variability in the definition of the classification of results between ordinal/numerical representation and the effective quality of university education [16][17][18][19][20][21]. ...
Chapter
International rankings are an important communication tool that allows the comparison of universities according to combinations of different, appropriately weighted parameters. The causes of the widespread diffusion of this information tool are to be found in the internationalization of the university system and in the massive increase in the demand for and supply of diversified university education. The purpose of the rankings is to give external stakeholders synthetic, comparable, immediately readable information on a university institution. However, the informational value of a ranking is often not accompanied by a careful examination of the performance indicators that underlie it and of their weighting. The aim of this study is to examine in depth the methodological aspects of the main global rankings, highlighting the strengths and weaknesses of these tools, and to compare the statistical positions of the universities that occupy the top places in the rankings.
... However, a satisfied customer can be the foundation of any successful business, resulting in continuous purchases, product loyalty and helpful word-of-mouth advertisement. From the above illustration, customer satisfaction and customer loyalty are not clear-cut, as incidences of customer defection are possible [15]. ...
Conference Paper
A loyalty program is a reward scheme for customers purchasing from a supermarket store. Frequent customers hold loyalty cards that accumulate award points every time they purchase products from the store. The loyalty reward system encourages customers to purchase a business's goods and services. Businesses have to invest money, time and resources to market their products and services, and most companies face the challenge of maintaining their customers' satisfaction and loyalty. Clubcards are the prominent loyalty cards used to retain customers and attain maximum profits. This paper aims to analyse and test whether the loyalty card, as a promotional tool, can increase satisfaction and loyalty between buyers and sellers, using Tesco plc as a case study. To address this aim, we conducted both qualitative and quantitative research strategies, which is often necessary for gathering information and opinions from different places.
... Suggested academic quality indicators include student entry criteria, program completion rates, proportion of graduates entering employment upon graduation, professional training, higher degrees, and the average starting salaries of graduates. Frey and Rost [13] concluded that publications and citations were not suitable indicators of scientific institutional worth. Their results suggest that multiple criteria should be implemented when assessing institutions for quality or for career decisions. ...
Article
Full-text available
Introduction Concerns about reproducibility and impact of research urge improvement initiatives. Current university ranking systems evaluate and compare universities on measures of academic and research performance. Although often useful for marketing purposes, the value of ranking systems when examining quality and outcomes is unclear. The purpose of this study was to evaluate usefulness of ranking systems and identify opportunities to support research quality and performance improvement. Methods A systematic review of university ranking systems was conducted to investigate research performance and academic quality measures. Eligibility requirements included: inclusion of at least 100 doctoral granting institutions, be currently produced on an ongoing basis and include both global and US universities, publish rank calculation methodology in English and independently calculate ranks. Ranking systems must also include some measures of research outcomes. Indicators were abstracted and contrasted with basic quality improvement requirements. Exploration of aggregation methods, validity of research and academic quality indicators, and suitability for quality improvement within ranking systems were also conducted. Results A total of 24 ranking systems were identified and 13 eligible ranking systems were evaluated. Six of the 13 rankings are 100% focused on research performance. For those reporting weighting, 76% of the total ranks are attributed to research indicators, with 24% attributed to academic or teaching quality. Seven systems rely on reputation surveys and/or faculty and alumni awards. Rankings influence academic choice yet research performance measures are the most weighted indicators. There are no generally accepted academic quality indicators in ranking systems. Discussion No single ranking system provides a comprehensive evaluation of research and academic quality. 
Utilizing a combined approach of the Leiden, Thomson Reuters Most Innovative Universities, and SCImago ranking systems may provide institutions with more effective feedback for research improvement. Rankings which extensively rely on subjective reputation and "luxury" indicators, such as award-winning faculty or alumni who are high-ranking executives, are not well suited for academic or research performance improvement initiatives. Future efforts should better explore measurement of university research performance through comprehensive and standardized indicators. This paper could serve as a general literature citation when one or more university ranking systems are used in efforts to improve academic prominence and research performance.
... In all cases results (available on request) are similar. See also Frey and Rost [91] for a discussion on the appropriate measures of research quality and quantity. ...
Article
In recent years, rankings published in newspapers or in available technical reports have become ever more numerous, covering many aspects of higher education but often with conflicting results, because universities' performances depend on the set of variables considered and on the methods of analysis employed. This study measures the efficiency of Italian higher education using both parametric and non-parametric techniques and uses the results to provide guidance to university managers and policymakers regarding the most appropriate method for their needs. The findings reveal that, on average and across the macro-areas of the country, the level of efficiency does not change significantly among estimation approaches, although these approaches produce different rankings. This may have important implications, as rankings have a strong impact on academic decision-making and behaviour, on the structure of institutions, and also on students and graduate recruiters.
... In addition, a review of research productivity often includes an assessment of the number and achievements of a faculty member's highly qualified personnel. The system for assessing research may have its flaws (Donovan, 2007;Feist, 1997;Frey & Rost, 2010;Gibb, 2012;Sahel, 2011;van Gunsteren, 2015) but at least there is a system that is dependent upon review by qualified peers. Indicators of teaching productivity are not as easily quantified. ...
... One approach that has been used relates to assessing the citations of works, where it is argued that papers with higher citations are more impactful and thus influence the development of the domain (Arnold et al., 2003; Schaffer et al., 2006). Citation analysis is the most common objective approach to ranking authors, articles and journals, used in several business disciplines such as marketing (Baumgartner and Pieters, 2003; Chan et al., 2012, 2017; Jaffe, 1997; Jobber and Simpson, 1988; Leone et al., 2012; Leong, 1989; Schlegelmilch and Oberseder, 2010; Wang et al., 2015), economics and finance (Chan et al., 2013; Chen and Huang, 2007; Frey and Rost, 2010; Mabry and Sharplin, 1985; Pinkowitz, 2002), information technology (Deng and Lin, 2012; Willcocks et al., 2008), and operations research (Davarzani et al., 2016; Petersen et al., 2011; Vokurka, 1996). It is claimed that citations are free from biases associated with perceptual evaluations of impactful works (Jobber and Simpson, 1988; Zupic and Čater, 2015), and thus citations are an effective measure of the scientific impact among authors and journals. ...
Article
Full-text available
Sustainability requires that consumers and organisations consider how their activities impact on the natural environment. The initial marketing discussion of ‘sustainability’ as we now define it centred on green consumer behaviour, and the literature in this area has continued to grow. This paper analyses 677 journal articles with a green consumer focus that appeared in 34 leading marketing, psychology and environmental journals between 1975 and 2014. The most influential articles, authors, and institutions are identified using citation analysis. An examination of trends in the topics researched, over eight five-year periods, identified behavioural intentions, demographics and marketing strategy as the top three subjects in the domain. Overall, the results show that green consumer research is a multidisciplinary domain that has been explored across a diverse range of issues and contexts, with researchers dispersed globally, ensuring that sustainability continues to be an area of interest within the consumer domain.
... A comprehensive survey of the pitfalls of research evaluation and a plan for objectivity in evaluation metrics is presented by Retzer and Jurasinski (2009). Another aspect under extensive examination is how to evaluate the actual quality of a scientific work, which is a rather multifaceted and complicated task [Andersen, 2013; Frey and Rost, 2010] that comprises more than simple publication and citation counts [Ochsner et al., 2014]. In this article, we propose a set of indexes that can be used to evaluate multiple facets of an author's research potential. We penalize the impact of a work as time passes and focus on the changes of these indexes across time in order to remove the cumulative bias, and instead of ranking authors or classifying them as good or bad, we cluster authors of similar potential into groups. ...
Article
Full-text available
In today's complex academic environment the process of performance evaluation of scholars is becoming increasingly difficult. Evaluation committees often need to search several repositories in order to deliver their evaluation summary report for an individual. However, it is extremely difficult to infer performance indicators that pertain to the evolution and the dynamics of a scholar. In this paper we propose a novel computational methodology based on unsupervised machine learning that can act as an important tool in the hands of committees evaluating individual scholars. The suggested methodology compiles a list of several key performance indicators (features) for each scholar and monitors them over time. All these indicators are used in a clustering framework which groups the scholars into categories by automatically discovering the optimal number of clusters using clustering validity metrics. A profile of each scholar can then be inferred through the labeling of the clusters with the used performance indicators. These labels can ultimately act as the main profile characteristics of the individuals that belong to that cluster. Our empirical analysis places emphasis on the “rising stars” who demonstrate the biggest improvement over time across all of the key performance indicators (KPIs), and the method can also be employed for the profiling of scholar groups.
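The clustering step described in this abstract can be sketched with a plain k-means over per-scholar KPI vectors. Everything here is illustrative: the scholar names, KPI values, and the choice of plain k-means with a fixed k are assumptions, since the abstract does not specify the exact algorithm or how the validity metrics pick the number of clusters.

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on lists of KPI vectors, standard library only."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each scholar goes to the nearest centre.
        for i, p in enumerate(points):
            assign[i] = min(range(k), key=lambda c: math.dist(p, centers[c]))
        # Update step: each centre becomes the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centers[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return assign, centers

# Hypothetical KPI vectors (papers/year, citations/year, h-index).
scholars = {
    "A": [2, 10, 4], "B": [3, 12, 5], "C": [2, 11, 4],      # steady performers
    "D": [9, 80, 15], "E": [10, 90, 16], "F": [8, 85, 14],  # "rising stars"
}
names = list(scholars)
labels, _ = kmeans([scholars[n] for n in names], k=2)
```

On this toy data the two well-separated groups are recovered regardless of the random initialisation; labelling each cluster with its dominant KPI profile then yields the "rising stars" group.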
... Editors make decisions about which papers to send out for review, which referees to ask for comments, requirements for additional analysis, and which papers to ultimately publish. These decisions serve as a check on the correctness of submitted papers, but they also let other scientists, administrators, and funding agencies know what is considered novel, important, and worthy of study (e.g. Brown 2014; Frey and Rost 2010; Katerattanakul et al. 2005; Weingart 2005). Highly ranked journals thus exert considerable influence on the direction in which scientific disciplines move. ...
Article
Full-text available
The ranking of scientific journals is important because of the signal it sends to scientists about what is considered most vital for scientific progress. Existing ranking systems focus on measuring the influence of a scientific paper (citations)—these rankings do not reward journals for publishing innovative work that builds on new ideas. We propose an alternative ranking based on the proclivity of journals to publish papers that build on new ideas, and we implement this ranking via a text-based analysis of all published biomedical papers dating back to 1946. In addition, we compare our neophilia ranking to citation-based (impact factor) rankings; this comparison shows that the two ranking approaches are distinct. Prior theoretical work suggests an active role for our neophilia index in science policy. Absent an explicit incentive to pursue novel science, scientists underinvest in innovative work because of a coordination problem: for work on a new idea to flourish, many scientists must decide to adopt it in their work. Rankings that are based purely on influence thus do not provide sufficient incentives for publishing innovative work. By contrast, adoption of the neophilia index as part of journal-ranking procedures by funding agencies and university administrators would provide an explicit incentive for journals to publish innovative work and thus help solve the coordination problem by increasing scientists’ incentives to pursue innovative work.
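A toy version of such a neophilia-style measure can be sketched as the share of a journal's papers that build on recently coined terms. The term list, birth years, and five-year window below are all hypothetical; the authors' actual text-based analysis of the biomedical literature is far more elaborate.

```python
def neophilia_index(journal_papers, term_birth_year, window=5):
    """Share of a journal's papers that build on 'young' ideas:
    a paper counts if it uses a term first seen fewer than
    `window` years before the paper appeared."""
    hits = 0
    for year, terms in journal_papers:
        if any(year - term_birth_year[t] < window
               for t in terms if t in term_birth_year):
            hits += 1
    return hits / len(journal_papers)

# Hypothetical corpus: first year each term appeared anywhere.
birth = {"crispr": 2012, "pcr": 1985, "gwas": 2005}
journal_a = [(2014, {"crispr"}), (2014, {"pcr"}), (2015, {"crispr", "gwas"})]
journal_b = [(2014, {"pcr"}), (2015, {"gwas"}), (2015, {"pcr"})]
```

Here `journal_a` scores 2/3 (two of three papers use a term younger than five years) while `journal_b` scores 0, illustrating how such a ranking rewards early adoption of new ideas independently of citation counts.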
... In the same issue, Sgroi and Oswald (2013) discuss how peer review panels should combine output counts with citations to generate an overall view of quality. Other literature has looked at research assessment more generally, investigating accusations of bias (Clerides et al., 2011), bias related to the use of bibliometrics (Wang et al. [2016] and references therein) and a lack of stability in rankings when the balance of quality/quantity weightings and citation systems changes (Frey and Rost, 2010). We have nothing to add on the issue of indicator aggregation and hence on the relative weight that should be put on the two types of simple bibliometrics that we propose. ...
Article
Many countries perform research assessment of universities, although the methods differ widely. Significant resources are invested in these exercises. Moving to a more mechanical, metrics-based system could therefore create very significant savings. We evaluate a set of simple, readily accessible metrics by comparing real Economics departments to three possible benchmarks of research excellence: a fictitious department composed exclusively of former Nobel Prize winners, actual world-leading departments, and reputation-based rankings of real departments. We examine two types of metrics: publications weighted by the quality of the outlet and citations received. The publication-based metric performs better at distinguishing the benchmarks if it requires at least four publications over a six year period and allows for a top rate for a very small set of elite reviews. Cumulative citations received over two six-year review periods appear to be somewhat more consistent with our three benchmarks than within-period citations, although within-period citations still distinguish quality. We propose a simple evaluation process relying on a composite index with a journal-based and a citations-based component. We also provide rough estimates of the cost: assuming that all fields of research would be amenable to a similar approach, we obtain a total cost of about £12M per review period.
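The proposed composite of a journal-based component and a citations-based component can be sketched as a weighted sum of two normalised scores. The cap of four quality-weighted publications echoes the threshold mentioned in the abstract, but the 50/50 weights and the citation benchmark are placeholder assumptions, not the paper's calibrated values.

```python
def composite_score(pub_points, citations, cite_norm, w_pub=0.5):
    """Composite of a journal-quality publication score and a citation
    score, each normalised to [0, 1] before weighting (illustrative)."""
    pub = min(pub_points / 4.0, 1.0)       # cap at 4 quality-weighted outputs
    cit = min(citations / cite_norm, 1.0)  # normalise by a field benchmark
    return w_pub * pub + (1 - w_pub) * cit

# A department with 3 quality-weighted outputs and 150 citations,
# against a hypothetical field benchmark of 200 citations.
score = composite_score(pub_points=3, citations=150, cite_norm=200)
```

Capping both components keeps a single extreme value (a citation superstar, say) from dominating the index, which matches the paper's motivation for requiring a minimum publication count before citations are weighed in.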
... However, interested readers may refer to Dyson et al. [49] for an excellent discussion of some of the pitfalls usually faced by researchers in several application areas and the possible protocols to follow to avoid them. Citation counts may at times be more a fashion within the academic community than a true indicator of the impact of the journal [50][51][52][53]. Citation-based analyses can also be biased due to selective citations or self- and mutual citations, which render the association between the quality of a journal and that of an individual article in it rather uninformative [53][54][55]. ...
Article
Given the growing emphasis on research productivity in management schools of India over the years, the present authors developed a composite indicator (CI) of research productivity, using the directional-benefit-of-doubt (directional-BOD) model. Specifically, we examined the overall research productivity of the schools and their respective faculty members during the 1968-2014 and 2004-2014 periods. There are four key findings. First, the relative weights of the journal tier, total citations, author h-index, number of papers, impact factor, and journal h-index varied from high to low, in that order, in estimating the CI of a faculty member. Second, public and private schools were seemingly similar in research productivity. However, faculty members at the Indian Institutes of Technology (IITs) outperformed those at the Indian Institutes of Management (IIMs). Third, faculty members who earned their doctoral degrees from foreign schools were more productive than those with similar degrees from Indian schools. Among those trained in India, however, alumni of IITs were more productive than those of IIMs. Finally, the IIMs at Ahmedabad and Bangalore and the Indian School of Business, Hyderabad placed more faculty than other schools on the list of the top 5% of researchers during 2004-2014. These findings indicate a shift in priority from merely training managers to generating impactful knowledge by at least two of the three established public schools, and call further attention to improving the quality of doctoral training in India in general and at IIMs in particular. Five suggestions for improving research productivity are offered.
... Measuring and quantifying prestige have been discussed in several articles. Frey and Rost (2010) compared three types of university ranking, based on the number of articles, the number of citations, and membership on editorial boards or in academic associations. The authors indicated that these rankings are not compatible with each other and suggested the use of multiple measurements. ...
Article
Having combined data on Quebec scientists’ funding and journal publication, this paper tests the effect of holding a research chair on a scientist’s performance. The novelty of this paper is the use of a matching technique to understand whether holding a research chair contributes to better scientific performance. This method compares two different sets of regressions conducted on different data sets: one with all observations and another with only the observations of the matched scientists. Chair and non-chair scientists are deemed matched with each other when they have the closest propensity score in terms of gender, research field, and amount of funding. The results show that holding a research chair is a significant determinant of scientific productivity in the complete data set. However, when only matched scientists are kept in the data set, holding a Canada research chair has a significant positive effect on scientific performance, but other types of chairs do not. In other words, in the case of two scientists who are similar in terms of gender, research funding, and research field, only holding a Canada research chair significantly affects scientific performance.
... The evaluation of university departments as well as scientists based on their publication record has become standard in many scientific fields (see, e.g., Graber, Launov and Wälde, 2008; Schulze, Warning and Wiermann, 2008), even though academics have also been critical of various rankings of journals, departments, and individual scientists (see, e.g., Oswald, 2007; Frey and Rost, 2010). In Germany, though, the public evaluation of scientists based on publication records is a relatively recent phenomenon, especially in the social sciences. ...
Article
Quantitative measures of research output, especially bibliometric measures, have not only been introduced within research funding systems in many countries, but they are also increasingly used in the media to construct rankings of universities, faculties and even individual scientists. In almost all countries, in which significant attempts have been made to quantify research output, parts of the scientific community have criticized the specific procedures used or even protested against them. In 2012, a significant fraction of German business scholars has even opted out of the most important German research ranking for business and economics which is conducted by the Germany's leading business daily Handelsblatt. Using this example, we show that observed resistance to change can consistently be explained by observable factors related to individual cost and benefits of the concerned researchers. We present empirical evidence consistent with the hypothesis that those scholars for whom the costs of a change in evaluation methods exceed the expected benefits are more likely to boycott the ranking exercise.
... Such policies have led administrations to become overzealous and preoccupied with the numbers, which in the end may not mean the production of quality. The essence of research is uncovering the hidden in a quest for truth, which the numbers obscure (Frey and Rost, 2010). A culture of research means that students ask questions about everything and are inquisitive. ...
Article
Full-text available
The purpose of this paper is to give insight into the quality of research done particularly at the MPhil and PhD level, and into the factors related to system, structure and culture that leaders can use to enhance research quality. The study is based on extensive and thorough interviews conducted with researchers at different universities in Islamabad, Pakistan. The research revealed some flaws in the existing systems, structures and culture of education in Pakistan; leaders (HEC and other concerned institutes) could take actions to modify current research systems, structure and culture to enhance the quality of research in Pakistan. The research is designed to focus on research quality enhancement, and certain recommendations were made to improve that quality, particularly for academic research. The intention is to improve the quality of research in educational institutes, particularly those offering MPhil and PhD programs in different specializations. Qualitative research on factors for improving research quality is limited. Moreover, this study recommends improving the quality of research by improving current systems, structure and culture.
... Another criticism of citation counting is that it does not consider differences among scientific fields,16,17 as the measure has the same scale regardless of the knowledge field to which it is applied. ...
Article
Full-text available
The high volume of health information creates a need for processes and tools to select, evaluate and disseminate relevant information to health professionals in clinical practice. To introduce an index of the clinical relevance of information and to show that it is different from existing measures. A conceptual model of knowledge translation was developed to explain the need for a new index, whose application was verified by an exploratory study with two (quantitative and qualitative) phases. The Clinical Relevance of Information Index (CRII) was defined employing descriptive statistical analyses of assessments performed by health professionals. The model and the CRII were applied in a primary healthcare context. The CRII was applied to 4574 relevance assessments of 194 evidence synopses. The assessments were performed by 41 family physicians in 2008. The CRII value of each synopsis was compared with the number of citations received by its corresponding research paper and with the level of evidence of the study, presenting weak correlation with both. The CRII captures aspects of information not considered by other indices. It can be a parameter for information providers, institutions, editors, as well as health and information professionals targeting knowledge translation.
Article
Full-text available
This study aims to provide an introductory overview of the bibliometric methods frequently used to examine academic knowledge products and production processes. It first gives concise theoretical background on bibliometric methods and the need for them, and describes the bibliometric data found in academic publications and how these data are collected and curated. It then introduces the concepts of publication count, citation count, the h-index, bibliographic coupling and co-citation network analyses developed using social network analysis, co-word networks, co-authorship networks, thematic maps, and three-field plots. These concepts are illustrated with examples of bibliometric analysis and visualization applied to the topic of “academic entrepreneurship”.
Article
Full-text available
Today ethics is embodied not only in day-to-day life, but also in the communication that surrounds it. The study of communication in professional communities makes it possible to determine the relationship between declared and practically embodied values in work. Ethical attitudes are not only postulates embedded in ethical codes, but also principles of interaction embodied in the construction of the information space and in decision-making. The features of modern communications influence the way professional ethics is structured, which, in turn, affects its content and practical implementation. Communication through the Internet makes scientific work performative, filling it with symbols and labels. Increasingly, communication practices have to be carried out around indicators, and thus communication becomes a conductor of neoliberal reforms in scientific work. A consequence of modern forms of communication is therefore the forced utilitarianism of ethics associated with the need to compete in the “scientific market”. The article suggests possible ways to overcome the contradictions of communicative transformations of professional values.
Article
In academic communication, editors exert a significant influence on a journal’s mission and content. We examined how the composition of editorial board members, in particular diversity in terms of institution, is related to journal quality. Our sample comprised 6916 editors who were affiliated with 246 economics journals. Using the Stirling Index of Diversity, we provide a single numeric index (DI) to measure the diversity of institutions, composed of variety, balance, and disparity. We then relate it to journal quality, as reflected in three widely used indices in economics: the five-year impact factor, the Association of Business Schools’ (ABS) journal quality guide, and the eigenfactors. The results show that academic journals in the field of economics are heavily dominated by US institutions, although in terms of geographic distribution there are more institutions in Europe than in North America. Surprisingly, we found that the diversity of editorial board members in terms of institution is negatively related to ABS ranking, but unrelated to the five-year impact factor and the eigenfactors. When we removed the US journals from the sample, however, institutional diversity had a significant positive relationship with the five-year impact factor. Our study extends the scarce knowledge on the composition of editorial teams and their relevance to journal quality by studying the correlation between the institutional diversity index and three different journal quality indices. The implication of this study is that more effort is needed to increase diversity in the composition of editorial teams in order to ensure transparency and promote equity.
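The Stirling Index of Diversity mentioned above combines variety, balance, and disparity; one common form sums d_ij · p_i · p_j over pairs of categories, where p_i is a category's share and d_ij the disparity between categories. A minimal sketch, with hypothetical institutions and a toy country-based distance function (the paper's actual disparity measure is not reproduced here):

```python
def stirling_diversity(counts, distance):
    """Stirling-style index: sum of d_ij * p_i * p_j over unordered
    pairs of categories. `counts` maps category -> headcount;
    `distance` maps a pair of categories to their disparity."""
    total = sum(counts.values())
    cats = list(counts)
    di = 0.0
    for i, a in enumerate(cats):
        for b in cats[i + 1:]:
            di += distance(a, b) * (counts[a] / total) * (counts[b] / total)
    return di

# Hypothetical board of 10 editors across three institutions.
country = {"MIT": "US", "Harvard": "US", "LSE": "UK"}

def inst_distance(a, b):
    # Toy disparity: institutions in different countries are "further apart".
    return 1.0 if country[a] != country[b] else 0.5

di = stirling_diversity({"MIT": 5, "Harvard": 3, "LSE": 2}, inst_distance)
```

A board concentrated in one institution yields DI = 0 (no pairs), while spreading editors across disparate institutions raises it, which is why the index captures more than a simple count of distinct affiliations.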
Article
Evaluating the scholarly reputation of journals has become one of the key concerns and research focuses in academia. The scholarly performance of a journal’s editorial team helps to enhance the journal’s academic impact. This paper develops an editorial team scholarly index from the new perspective of journal editorship, combining the editors’ scholarly performance and the editors’ titles (e.g., associate, assistant) to provide an alternative indicator for evaluating academic journal reputation. This index is useful to measure and rank journals, especially new journals. The paper classifies journal editorial teams and evaluates academic journals using data for 738 members of editorial teams for 21 well-known journals in the field of library and information science. The study concludes that the new index has a significantly positive relationship with journal reputation and shows that the journals’ rankings according to the new index are neither far away from nor uselessly close to the four baseline indicators traditionally measuring journal reputation. Finally, the research finds that there are significant positive correlations between journal reputation and the new index when three different levels of titles of editors are considered, and a comparative empirical analysis of the title levels is provided.
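An editorial-team index of the kind described above can be sketched as a title-weighted average of the editors' scholarly scores. The title weights and h-index values below are illustrative assumptions, not the paper's fitted parameters.

```python
# Hypothetical weights: more senior editorial titles count for more.
TITLE_WEIGHTS = {"editor-in-chief": 1.0, "associate": 0.7, "assistant": 0.4}

def editorial_team_index(team):
    """Title-weighted average of editors' scholarly scores (e.g. h-index).
    `team` is a list of (title, score) pairs."""
    num = sum(TITLE_WEIGHTS[title] * score for title, score in team)
    den = sum(TITLE_WEIGHTS[title] for title, _ in team)
    return num / den

# A hypothetical four-person editorial team.
team = [("editor-in-chief", 40), ("associate", 25),
        ("associate", 30), ("assistant", 10)]
idx = editorial_team_index(team)
```

Because the index depends only on the current editors' records, it can be computed for a brand-new journal with no citation history, which is exactly the use case the abstract highlights.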
Article
Full-text available
From the two premises that (1) economies are complex systems and (2) the accumulation of knowledge about reality is desirable, I derive the conclusion that pluralism with regard to economic research programs is a more viable position to hold than monism. To substantiate this claim an epistemological framework of how scholars study their objects of inquiry and relate their models to reality is discussed. Furthermore, it is argued that given the current institutions of our scientific system, economics self-organizes towards a state of scientific unity. Since such a state is epistemologically inferior to a state of plurality, critical intervention is desirable.
Article
Full-text available
Purpose The purpose of this paper is to empirically investigate whether an individual’s knowledge, skills and capabilities (human capital) are reflected in their compensation. Design/methodology/approach Data are drawn from university academics in the Province of Ontario, Canada, earning more than CAD$100,000 per annum. Data on academics’ human capital are drawn from Research Gate. The authors construct a regression analysis to examine the relationship between human capital and salary. Findings The analyses performed indicate a positive association between academic human capital and academic salaries. Research limitations/implications This study is limited in that it measures an academic’s human capital solely through their research outputs as opposed to also considering their teaching outputs. Continuing research needs to be conducted in different country contexts and using negative proxies of human capital. Practical implications This study will create awareness about the value of human capital and its contribution towards improving organisational structural capital. Social implications The study contributes to the literature on human capital in accounting and business by focussing on the economic relevance of individual level human capital. Originality/value The study contributes to the literature on human capital in accounting and business by focussing on the economic relevance of individual level human capital. It will help create awareness of the importance of valuing human capital at the individual level.
Article
Full-text available
In recent years, growing attention has been dedicated to the assessment of research’s social impact. While prior research has often dealt with the results of research, the last decade has begun to generate knowledge on assessing the social impact of health research. However, this knowledge is scattered across different disciplines, research communities, and journals. Therefore, this paper analyzes, through an interdisciplinary systematic review, the heterogeneous picture research has drawn in past years, with a focus on the social impact of health research on different stakeholders. By consulting major research databases, we analyzed 53 key journal articles bibliographically and thematically. We argue that adopting a multi-stakeholder perspective could be an evolution of the existing methods used to assess the impact of research. After presenting a model to assess the social impact of health research from a multi-stakeholder perspective, we suggest implementing three practices in the research process: a multi-stakeholder workshop on the research agenda; a multi-stakeholder supervisory board; and a multi-stakeholder review process.
Article
Full-text available
Data sets of publication meta data with manually disambiguated author names play an important role in current author name disambiguation (AND) research. We review the most important data sets used so far, and compare their respective advantages and shortcomings. From the results of this review, we derive a set of general requirements to future AND data sets. These include both trivial requirements, like absence of errors and preservation of author order, and more substantial ones, like full disambiguation and adequate representation of publications with a small number of authors and highly variable author names. On the basis of these requirements, we create and make publicly available a new AND data set, SCAD-zbMATH. Both the quantitative analysis of this data set and the results of our initial AND experiments with a naive baseline algorithm show the SCAD-zbMATH data set to be considerably different from existing ones. We consider it a useful new resource that will challenge the state of the art in AND and benefit the AND research community.
Chapter
In the course of the last thirty years, science has enjoyed a remarkable quantitative boom. For example, the total number of substances registered in the Chemical Abstracts Service Registry File (CAS RF) was about 8 million at the end of 1985, but reached 104 million by the end of 2015. Yet some qualitative aspects of science lag ever further behind this quantitative boom. The x–y–z coordinates of atoms in molecules, for instance, are presently known for no more than 1 million substances. For the majority of substances registered in the CAS RF, we do not know much about their properties, how they react with other substances, or what purpose they could serve. The Gmelin Institute for Inorganic Chemistry and the Beilstein Institute for Organic Chemistry, which had systematically gathered and extensively published such information since the nineteenth century, were closed in 1997 (Gmelin) and 1998 (Beilstein). The number of scientific papers published annually increases, but the value of the information they bring falls. The growth of sophisticated ‘push-button’ apparatuses allows easier preparation of publications while furnishing ready-to-publish data. Articles can thus be compiled by merely combining different measurements, usually without any idea of what it is all about or what end it may serve. The driving force behind the production of an ever growing number of scientific papers is the need of authors to distinguish themselves in order to be well considered when seeking financial support. Money and fame are distributed to scientists according to their publication and citation scores. While the number of publications is clearly a quantitative criterion, much hope has been placed on citations, which promised to serve as an adequate measure of genuine scientific value, i.e., of the quality of the scientific work. Whether, and why, these hopes were not fulfilled is discussed in detail in our contribution.
The special case of the Journal of Thermal Analysis and Calorimetry is discussed in more detail.
Article
It is inevitable that the ‘publish or perish’ paradigm has implications for the quality of published research, because it leads to scientific output being evaluated on quantity rather than quality. The pressure to continually publish results in the creation of predatory journals operating without quality peer review. Moreover, the citation records of papers do not reflect their scientific quality but merely amplify the impact of their quantity. The growth of sophisticated ‘push-button’ technologies allows for easier preparation of publications while furnishing ready-to-publish data. Articles can thus be compiled merely by combining various measurements, usually without thought to their significance or the purpose they may serve. Moreover, any deep-rooted theory that contravenes mainstream assumptions is unwelcome because it challenges often long-established practice. The driving force behind the production of an ever growing number of scientific papers is the need for authors to be recognised in order to be seriously considered when seeking financial support. Funding and fame are distributed to scientists according to their publication and citation scores. While the number of publications is clearly a quantitative criterion, much hope has been placed on citation analysis, which promised to serve as an adequate measure of genuine scientific value, i.e. of the quality of the scientific work.
Article
Along with the advance of the internet and the fast updating of information, it is now much easier to search for and acquire scientific publications. To identify high-quality articles in the ocean of papers, many ranking algorithms have been proposed. One of these is the famous PageRank algorithm, originally designed to rank web pages in online systems. In this paper, we introduce a preferential mechanism into the PageRank algorithm that, when aggregating resource from different nodes, enhances the effect of similar nodes. The new method is validated on data from the American Physical Society journals. The results indicate that the similarity-preferential mechanism improves the performance of the PageRank algorithm in terms of ranking effectiveness as well as robustness against malicious manipulations. Though our method is applied only to citation networks in this paper, it can naturally be used in many other real systems, such as designing search engines for the World Wide Web and revealing leaderships in social networks.
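For reference, the plain PageRank that this paper modifies can be sketched on a small citation graph with the standard library alone; the similarity-preferential aggregation itself is not reproduced here, and the four-paper graph is a toy example.

```python
def pagerank(links, d=0.85, iters=100):
    """Plain PageRank on a citation graph {paper: [papers it cites]}."""
    nodes = list(links)
    n = len(nodes)
    rank = {p: 1.0 / n for p in nodes}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in nodes}
        for p, cited in links.items():
            if cited:
                share = d * rank[p] / len(cited)
                for q in cited:
                    new[q] += share
            else:
                # Dangling node (cites nothing): spread its rank evenly.
                for q in nodes:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

# Toy citation network: A and B both cite C, C cites D.
cites = {"A": ["C"], "B": ["C"], "C": ["D"], "D": []}
r = pagerank(cites)
```

Paper C, cited twice, outranks the uncited A and B; ranks always sum to 1 because each node redistributes its full damped rank every iteration.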
Article
Full-text available
This paper analyses the interrelationship between perceived journal reputation and its relevance for academics’ work. Based on a survey of 705 members of the German Economic Association (GEA), we find a strong interrelationship between perceived journal reputation and relevance where a journal’s perceived relevance has a stronger effect on its reputation than vice versa. Moreover, past journal ratings conducted by the Handelsblatt and the GEA directly affect journals’ reputation among German economists and indirectly also their perceived relevance, but the effect on reputation is more than twice as large as the effect on perceived relevance. In general, citations have a non-linear impact on perceived journal reputation and relevance. While the number of landmark articles published in a journal (as measured by the so-called H-index) increases the journal’s reputation, an increase in the H-index even tends to decrease a journal’s perceived relevance, as long as this is not simultaneously reflected in a higher Handelsblatt and/or GEA rating. This suggests that a journal’s relevance is driven by average article quality, while reputation depends more on truly exceptional articles. We also identify significant differences in the views on journal relevance and reputation between different age groups.
Article
Full-text available
In a "publish-or-perish culture", the ranking of scientific journals plays a central role in assessing performance in the current research environment. With a wide range of existing methods and approaches to deriving journal rankings, meta-rankings have gained popularity as a means of aggregating different information sources. In this paper, we propose a method to create a consensus meta-ranking using heterogeneous journal rankings. Using a parametric model for paired comparison data we estimate quality scores for 58 journals in the OR/MS community, which together with a shrinkage procedure allows for the identification of clusters of journals with similar quality. The use of paired comparisons provides a flexible framework for deriving a consensus score while eliminating the problem of data missingness.
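A common parametric model for paired-comparison data of the kind described above is the Bradley-Terry model; whether it is the one used in the paper is an assumption here, and the shrinkage and clustering steps are not reproduced. A minimal sketch with the classic minorization-maximization update:

```python
import numpy as np

def bradley_terry(wins, n_iter=500):
    """Bradley-Terry quality scores via the MM update.

    wins[i, j] = number of times item i was ranked above item j.
    Returns scores normalized to sum to 1.
    """
    n = wins.shape[0]
    pairings = wins + wins.T            # n_ij: total comparisons of i and j
    total_wins = wins.sum(axis=1)       # w_i
    p = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        denom = (pairings / (p[:, None] + p[None, :])).sum(axis=1)
        p = total_wins / denom
        p /= p.sum()                    # fix the scale (model is scale-free)
    return p

# Toy data: journal 0 beats journal 1 in 8 of 10 comparisons, etc.
wins = np.array([[0, 8, 9],
                 [2, 0, 7],
                 [1, 3, 0]], dtype=float)
bt_scores = bradley_terry(wins)
```

Under this model a journal's score depends only on the win/loss record, so heterogeneous source rankings can be reduced to pairwise "which journal ranks higher" counts before fitting.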
Article
In this study, we introduce the concept of legitimacy to the rigor-relevance debate and investigate empirically how rigor, relevance, and legitimacy are related to the process of knowledge dissemination within a research field. We argue that this analysis has been a missing piece in the debate on rigor and relevance when institutional logics about what constitutes both elements lead researchers to act according to what they perceive to be appropriate behavior in the research field. We draw on insights from the micro and macro levels of institutional theory to show how researchers aiming to bestow legitimacy on their own work conform to these “rules of the game.” Using meta-analytical techniques, we focus on the field of strategic entrepreneurship and analyze how rigor- and relevance-related characteristics of studies in this field are linked to their legitimacy and therefore to the impact they have in the research community.
Article
Full-text available
This study examined the relationships among perceived editorial responsiveness, perceived journal quality, and review time of submissions for authors in mainland China. Online review data generated by authors who have experienced the submission process in 10 Chinese academic journals were collected. The results of Spearman correlation analysis show that Chinese authors' perceived responsiveness of an editorial office is positively correlated with perceived quality of the journal, and the total review time does not affect perceptions of the quality of a journal and its editorial responsiveness.
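The Spearman correlation used in this study is simply the Pearson correlation computed on ranks. A minimal, dependency-free sketch (the function names `rank` and `spearman` are illustrative, not from the paper):

```python
def rank(xs):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # Group equal values so they share an averaged rank.
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rho: Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

Because it operates on ranks, the coefficient captures any monotone association between, say, perceived responsiveness and perceived quality, without assuming linearity.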
Article
Full-text available
Numerous studies published in the academic literature address the issue of journal quality assessment. However, little has been done to compare the factors that influence perceptions of journal quality across disciplines. From the viewpoint of Chinese authors, this study explored the factors influencing author perceptions of journal quality in computer science and technology as well as in library and information science. Our empirical findings indicate that author-perceived journal quality in these two fields is significantly positively correlated with the impact factor and not significantly correlated with technical delay or the immediacy index. Slightly different results are found between the two fields regarding the effects of editor service, editorial delay, and acceptance rates.
Article
This paper analyzes whether formal collaboration in the form of coauthorship enhances paper quality in financial research. Analyzing all papers presented at DGF annual meetings from 1996 to 2005, we report the following major findings: First, coauthored papers are of superior quality compared to single-authored papers. This holds for two quality proxies, publication probability and publication quality (as measured by the original and the updated Jourqual rating). Second, the capability of scholars, measured by a citation measure derived from citations in Google Scholar, proves to be an additional important factor in explaining paper quality. Third, the methodology employed in a paper (e.g., empirical analysis, theoretical analysis) does not systematically affect paper quality. However, it is important to differentiate between empirical and theoretical papers: whereas coauthorship proves to be a quality-enhancing factor for empirical papers, this does not hold for theoretical papers. Fourth, the origin of the data is a crucial determinant of publication success for empirical papers. In particular, papers that exclusively analyze data from Germany are published in less reputable journals.
Article
The paper addresses the issue of characterization and classification of universities in the European system, by using the recently developed Aquameth dataset. Preliminary cluster analysis based on structural variables identifies systematic differences in size. However, this structural distinction is associated with differences in strategic orientation of universities (towards research or towards teaching, respectively) only in a few countries. In most European countries there are no discernible differences across universities along these dimensions. The paper argues that countries in which universities are more differentiated according to research or teaching dimensions have implemented differentiation policies through a variety of policy instruments. In turn, these countries also are ranked high in international rankings of universities. This suggests a structural linkage between the poor performance of European universities in research-based rankings and the lack of differentiation. Copyright Beech Tree Publishing.
Article
The article reviews the roles played by the Department of Education and the National Research Foundation in South Africa in defining the meaning of scholarship and in evaluating and funding it. The ideas that inform policy and practice include: the view that scholarship must serve the requirements of the national economy in becoming more globally competitive; attempts to manage and direct knowledge production; redressing apartheid’s legacies; and a positivist discourse. I argue in favour of diversity in the pursuit of knowledge and greater consistency with national historical development. The tensions between science and technology and the humanities, and between disciplinary-based and applied scholarship are also highlighted. They lie at the heart of the status of ‘other knowledges’ in relation to more ‘traditional’ scholarship. The political and practical implications of these analyses are raised for debate.
Article
After a short sketch of the history of modern business schools in the German-speaking countries, their four major fields of activity are considered: (i) academic teaching, (ii) scientific research, (iii) consulting, and (iv) executive education. While teaching was traditionally dominant, research has gained importance in recent decades, not only in Economics but also in Management departments. With respect to consulting, we distinguish between consulting for governments by economists and consulting for private companies by professors of management. Executive education is mainly a domain of management (and law) departments; economists play only a minor role in this area. We conclude by discussing some of the ethical questions with which Economics and Management departments are confronted today.