Journal of the American Society for Information Science and Technology

Published by Association for Information Science and Technology (ASIS&T)
Online ISSN: 1532-2890
Print ISSN: 1532-2882
Citation distributions differ by level of aggregation, as well as by time period. Citations for all papers in these sets are tabulated at the end of 2006. The left panel uses double logarithmic scales; the right panel shows the same data on a linear scale. A, Distribution of the number of citations to all scientific articles indexed in the Web of Science between 1955 and 2006. B, Distribution of citations to all scientific articles published in calendar year 2000. Note that the tail decays faster because high-impact papers have not yet had enough time to accumulate citations. C, Distribution of citations to all scientific articles published in the year 2000 in the field of “Hematology.” Note that the median number of citations is significantly (p < 0.001) higher in hematology than the median number of citations overall. D, Distribution of citations to all scientific articles published in the year 2000 in the journal Circulation (which is classified in the field of “Hematology”). The median number of citations to papers published in Circulation is significantly higher (p < 0.001) than the median number of citations to papers in hematology. At the aggregation level of D, we find that almost all of the data are consistent with a discrete lognormal distribution. Thus, the global distribution in A is likely a mixture of discrete lognormal distributions.
Goodness of fit of the latent normal model to journal citation distribution data. A, Comparison of the model to data for articles published in the steady-state period (1970–1998) for the journal Circulation. We cannot reject hypothesis H1 (p1 = 0.4). B, Plot of residuals against the independent variable, n, for the journal Circulation. For journals where hypothesis H1 cannot be rejected (p > 0.05), the residuals are uncorrelated with the number of citations. C, Comparison of the model to data for articles published in the steady-state period (1996–1998) of the journal Science. In this case, p1 is near zero, indicating that we can reject hypothesis H1 with high confidence. D, Plot of residuals against the independent variable, n, for the journal Science. For journals where hypothesis H1 is rejected (p < 0.05), the residuals are correlated. In this particular case, the model under-predicts the number of uncited articles. Whereas for the “true” model of all journal citation distributions we would expect 5% (109) of the journals to yield p < 0.05, we observe that 10% (229) of the journals yield p < 0.05. For the purpose of comparison, the dashed lines in A and C indicate best-fit normal distributions.
Hypothesis H1 tends to be rejected more often for high-volume journals. We divide the journals in our analysis into quintiles according to two attributes: the number of papers per annum, Npa, and the best-fit mean of the discrete lognormal model, μDLN. Most (70%) of the journals for which hypothesis H1 is rejected are in the top quintile of Npa. This is to be expected to some extent, since the test has more statistical power to detect even small deviations from the model for larger N (see Appendix). However, among the journals in the top quintile of Npa, we find that 57% of the journals for which hypothesis H1 is rejected are in the top quintile of μDLN. That is, large, highly cited journals are the most likely to fail the test.
Hypothesis H3 is rejected for seven journals. In the set of journals for which hypothesis H1 is rejected at αFDR, some tests fail because the parameters of the discrete lognormal distribution actually vary slightly in time. Panel A shows the mean of the discrete lognormal distribution as a function of time for The Astrophysical Journal (Ap. J.). The error bars are intended to show the “width” of the distribution, that is, the standard deviation σDLN, as opposed to the estimated error. For The Astrophysical Journal, none of the individual years is inconsistent with the discrete lognormal model: ∀y ∈ {1958, 1959, …, 1988}: p > 0.05. However, when the data from all years in the steady-state period (shaded) are aggregated, p is low enough for hypothesis H1 to be rejected with high confidence. In Panel B, we see that for the Journal of the American Chemical Society (JACS) there are 4 years out of 20 for which p < 0.05. This number of rejections is sufficient to reject hypothesis H3 at p3 < 0.001. Thus, a time-varying mean is not sufficient to explain the deviations from the model expectations for JACS. For purposes of estimating the ultimate number of citations that papers will receive, the heuristic method for determining a “steady state” is adequate. However, when the number of papers is large enough for the test to be very sensitive, we see that the distribution is actually a mixture of discrete lognormal distributions with a time-varying mean.
Figure A1. Power analysis of our testing procedure. We draw data from a mixture of two normal distributions with different means but identical variance. 25% (A), 50% (B), and 75% (C) of the points in these samples were drawn from the normal with the larger mean. Nsample is the number of data points in each sample considered and Δμ is the difference in the means of the distributions. The power of the testing procedure depends on both the specific nature of the deviation from the null hypothesis and the number of points in the dataset. It is clear that a distribution resulting from a mixture of lognormal distributions separated by only one standard deviation is essentially impossible to detect.
A central issue in evaluative bibliometrics is the characterization of the citation distribution of papers in the scientific literature. Here, we perform a large-scale empirical analysis of journals from every field in Thomson Reuters' Web of Science database. We find that only 30 of the 2,184 journals have citation distributions that are inconsistent with a discrete lognormal distribution at the rejection threshold that controls the False Discovery Rate at 0.05. We find that large, multidisciplinary journals are over-represented in this set of 30 journals, leading us to conclude that, within a discipline, citation distributions are lognormal. Our results strongly suggest that the discrete lognormal distribution is a globally accurate model for the distribution of the "eventual impact" of scientific papers published in a single-discipline journal in a single year, provided that year is sufficiently removed from the present.
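As a concrete illustration of the model named above (a minimal sketch, not the authors' code), the following fits a discrete lognormal, defined here as P(N = n) = F(n + 1) - F(n) with F a lognormal CDF, to a vector of integer citation counts by maximum likelihood; the optimizer, starting values, and synthetic data are assumptions for illustration.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def dln_pmf(n, mu, sigma):
    """Discrete lognormal: P(N = n) = F(n + 1) - F(n); F(0) = 0 covers uncited papers."""
    F = lambda x: stats.lognorm.cdf(x, s=sigma, scale=np.exp(mu))
    n = np.asarray(n, dtype=float)
    return F(n + 1.0) - F(n)

def fit_dln(citations):
    """Maximum-likelihood estimates of (mu, sigma) from integer citation counts."""
    c = np.asarray(citations)
    def nll(p):
        pmf = dln_pmf(c, p[0], abs(p[1]))
        return -np.sum(np.log(np.clip(pmf, 1e-300, None)))
    mu, sigma = minimize(nll, x0=[1.0, 1.0], method="Nelder-Mead").x
    return mu, abs(sigma)

# Synthetic check: flooring a lognormal sample yields exactly this discrete model.
rng = np.random.default_rng(0)
sample = np.floor(rng.lognormal(mean=2.0, sigma=1.2, size=2000)).astype(int)
print(fit_dln(sample))  # should recover approximately (2.0, 1.2)
```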
 
The explosion of disaster health information results in information overload among response professionals. The objective of this project was to determine the feasibility of applying semantic natural language processing (NLP) technology to address this overload. The project characterizes the concepts and relationships commonly used in disaster health-related documents on influenza pandemics, as the basis for adapting an existing semantic summarizer to the domain. Methods include human review and semantic NLP analysis of a set of relevant documents, followed by a pilot test in which two information specialists use the adapted application for a realistic information-seeking task. According to the results, the ontology of influenza epidemics management can be described via a manageable number of semantic relationships that involve concepts from a limited number of semantic types. Test users demonstrated several ways to engage with the application to obtain useful information. This suggests that existing semantic NLP algorithms can be adapted to support information summarization and visualization in influenza epidemics and other disaster health areas. However, additional research is needed in the areas of terminology development (as many relevant relationships and terms are not part of existing standardized vocabularies), NLP, and user interface design.
 
Automatic document categorization is an important research problem in Information Science and Natural Language Processing. Many applications, including Word Sense Disambiguation and Information Retrieval in large collections, can benefit from such categorization. This paper focuses on the automatic categorization of documents from the biomedical literature into broad discipline-based categories. Two different systems are described and contrasted: CISMeF, which uses rules based on human indexing of the documents with the Medical Subject Headings® (MeSH®) controlled vocabulary in order to assign metaterms (MTs), and Journal Descriptor Indexing (JDI), based on human categorization of about 4,000 journals and statistical associations between journal descriptors (JDs) and textwords in the documents. We evaluate and compare the performance of these systems against a gold standard of humanly assigned categories for one hundred MEDLINE documents, using six measures selected from trec_eval. The results show that for five of the measures performance is comparable, and for one measure JDI is superior. We conclude that these results favor JDI, given the significantly greater intellectual overhead involved in human indexing and in maintaining a rule base for mapping MeSH terms to MTs. We also note a JDI method that associates JDs with MeSH indexing rather than textwords, and suggest that it may be worthwhile to investigate whether this statistical JDI method and the rule-based CISMeF might be combined and evaluated to determine whether they are complementary.
 
An experiment was performed at the National Library of Medicine® (NLM®) in word sense disambiguation (WSD) using the Journal Descriptor Indexing (JDI) methodology. The motivation is the need to solve the ambiguity problem confronting NLM's MetaMap system, which maps free text to terms corresponding to concepts in NLM's Unified Medical Language System® (UMLS®) Metathesaurus®. If the text maps to more than one Metathesaurus concept at the same high confidence score, MetaMap has no way of knowing which concept is the correct mapping. We describe the JDI methodology, which is ultimately based on statistical associations between words in a training set of MEDLINE® citations and a small set of journal descriptors (assigned by humans to journals per se) assumed to be inherited by the citations. JDI is the basis for selecting the best meaning, which is correlated to UMLS semantic types (STs) assigned to ambiguous concepts in the Metathesaurus. For example, the ambiguous string "transport" has two meanings: "Biological Transport," assigned the ST Cell Function, and "Patient transport," assigned the ST Health Care Activity. A JDI-based methodology can analyze text containing "transport" and determine which ST receives a higher score for that text, which then returns the associated meaning, presumed to apply to the ambiguity itself. We then present an experiment in which a baseline disambiguation method was compared to four versions of JDI in disambiguating 45 ambiguous strings from NLM's WSD Test Collection. Overall average precision for the highest-scoring JDI version was 0.7873, compared to 0.2492 for the baseline method, and average precision for individual ambiguities was greater than 0.90 for 23 of them (51%), greater than 0.85 for 24 (53%), and greater than 0.65 for 35 (79%). On the basis of these results, we hope to improve the performance of JDI and test its use in applications.
 
One of the most significant recent advances in health information systems has been the shift from paper to electronic documents. While research on automatic text and image processing has taken separate paths, there is a growing need for joint efforts, particularly for electronic health records and biomedical literature databases. This work aims at comparing text-based versus image-based access to multimodal medical documents using state-of-the-art methods of processing text and image components. A collection of 180 medical documents containing an image accompanied by a short text describing it was divided into training and test sets. Content-based image analysis and natural language processing techniques are applied individually and combined for multimodal document analysis. The evaluation consists of an indexing task and a retrieval task based on the "gold standard" codes manually assigned to corpus documents. The performance of text-based and image-based access, as well as combined document features, is compared. Image analysis proves more adequate for both the indexing and retrieval of the images. In the indexing task, multimodal analysis outperforms both independent image and text analysis. This experiment shows that text describing images can be usefully analyzed in the framework of a hybrid text/image retrieval system.
 
We describe the use of a domain-independent methodology to extend a natural language processing (NLP) application, SemRep (Rindflesch, Fiszman, & Libbus, 2005), based on the knowledge sources afforded by the Unified Medical Language System (UMLS®) (Humphreys, Lindberg, Schoolman, & Barnett, 1998) to support the area of health promotion within the public health domain. Public health professionals require good information about successful health promotion policies and programs that might be considered for application within their own communities. Our effort seeks to improve access to relevant information for the public health profession, to help those in the field remain an information-savvy workforce. NLP and semantic techniques hold promise to help public health professionals navigate the growing ocean of information by organizing and structuring this knowledge into a focused public health framework paired with a user-friendly visualization application as a way to summarize results of PubMed searches in this field of knowledge.
 
Many information portals are adding social features with hopes of enhancing the overall user experience. Invitations to join and welcome pages that highlight these social features are expected to encourage use and participation. While this approach is widespread and seems plausible, the effect of providing and highlighting social features remains to be tested. We studied the effects of emphasizing social features on users' response to invitations, their decisions to join, their willingness to provide profile information, and their engagement with the portal's social features. The results of a quasi-experiment found no significant effect of social emphasis in invitations on receivers' responsiveness. However, users receiving invitations highlighting social benefits were less likely to join the portal and provide profile information. Social emphasis in the initial welcome page for the site also was found to have a significant effect on whether individuals joined the portal, how much profile information they provided and shared, and how much they engaged with social features on the site. Unexpectedly, users who were welcomed in a social manner were less likely to join and provided less profile information; they also were less likely to engage with social features of the portal. This suggests that even in online contexts where social activity is an increasingly common feature, highlighting the presence of social features may not always be the optimal presentation strategy.
 
Many recent studies on MEDLINE-based information seeking have shed light on scientists' behaviors and associated tool innovations that may improve efficiency and effectiveness. Few if any studies, however, examine scientists' problem-solving uses of PubMed in actual contexts of work and corresponding needs for better tool support. Addressing this gap, we conducted a field study of novice scientists (14 upper level undergraduate majors in molecular biology) as they engaged in a problem solving activity with PubMed in a laboratory setting. Findings reveal many common stages and patterns of information seeking across users as well as variations, especially variations in cognitive search styles. Based on findings, we suggest tool improvements that both confirm and qualify many results found in other recent studies. Our findings highlight the need to use results from context-rich studies to inform decisions in tool design about when to offer improved features to users.
 
724 journals related in their citing patterns with cosine ≥ 0.3 and with factor loadings ≤ −0.1 or ≥ 0.1, colored in accordance with the 12-factor solution (Kamada & Kawai, 1989).
Visualization of the 12 main dimensions of 565 (of the 1,157) journals included in the A&HCI 2008; factor loadings ≥0.2 or ≤ −0.2.
The distribution of Humanities PhD graduates in the United States in 2008. Source: U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, Integrated Postsecondary Data System; accessed via the National Science Foundation's online integrated science and engineering resources data system, WebCASPAR (at https://webcaspar.nsf.gov/).
Departmental structure of the UCLA Humanities section (according to shared faculty among teaching programs). Source: UCLA General Catalog 2010–2011, available at http://www.registrar.ucla.edu/archive/catalog/2010-11/uclageneralcatalog10-11.pdf
Twelve factors distinguished by factor analysis (Varimax; SPSS v18.0).
Using the Arts & Humanities Citation Index (A&HCI) 2008, we apply mapping techniques previously developed for mapping journal structures in the Science and Social Science Citation Indices. Citation relations among the 110,718 records were aggregated at the level of the 1,157 journals specific to the A&HCI, and we ask whether a cognitive structure can be reconstructed and visualized from these journal structures. Both cosine normalization (bottom up) and factor analysis (top down) suggest a division into approximately twelve subsets. The relations among these subsets are explored using various visualization techniques. However, we were not able to retrieve this structure using the ISI Subject Categories, including the 25 categories that are specific to the A&HCI. We discuss options for validation, such as validation against the categories of the Humanities Indicators of the American Academy of Arts and Sciences and the panel structure of the European Reference Index for the Humanities (ERIH), and we compare our results with the curriculum organization of the Humanities Section of the College of Letters and Sciences of UCLA as an example of institutional organization.
 
This paper challenges recent research (Evans, 2008) reporting that the concentration of cited scientific literature increases with the online availability of articles and journals. Using Thomson Reuters' Web of Science, the present paper analyses changes in the concentration of citations received (two- and five-year citation windows) by papers published between 1900 and 2005. Three measures of concentration are used: the percentage of papers that received at least one citation (cited papers); the percentage of papers needed to account for 20, 50 and 80 percent of the citations; and, the Herfindahl-Hirschman index. These measures are used for four broad disciplines: natural sciences and engineering, medical fields, social sciences, and the humanities. All these measures converge and show that, contrary to what was reported by Evans, the dispersion of citations is actually increasing.
 
In a recent presentation at the 17th International Conference on Science and Technology Indicators, Schneider (2012) criticised the proposal of Bornmann, de Moya Anegon, and Leydesdorff (2012) and Leydesdorff and Bornmann (2012) to use statistical tests in order to evaluate research assessments and university rankings. We agree with Schneider's proposal to add statistical power analysis and effect-size measures to research evaluations, but disagree that these procedures should replace significance testing. Accordingly, effect-size measures were added to the Excel sheets that we provide online for testing performance differences between institutions in the Leiden Ranking and the SCImago Institutions Ranking.
 
Hirsch has introduced the h-index to quantify an individual's scientific research output by the largest number h of a scientist's papers that received at least h citations. In order to take into account the highly skewed frequency distribution of citations, Egghe proposed the g-index as an improvement of the h-index. I have worked out 26 practical cases of physicists from the Institute of Physics at Chemnitz University of Technology and compare the h and g values. It is demonstrated that the g-index discriminates better between different citation patterns. This can also be achieved by evaluating Jin's A-index which reflects the average number of citations in the h-core and interpreting it in conjunction with the h-index. h and A can be combined into the R-index to measure the h-core's citation intensity. I have also determined the A and R values for the 26 data sets. For a better comparison, I utilize interpolated indices. The correlations between the various indices as well as with the total number of papers and the highest citation counts are discussed. The largest Pearson correlation coefficient is found between g and R. Although the correlation between g and h is relatively strong, the arrangement of the data set is significantly different, depending on whether they are put into order according to the values of either h or g.
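For concreteness, here is a short sketch (not from the article) computing h, g, A, and R from a list of per-paper citation counts, following the definitions above; note that Egghe's g-index formally permits padding the list with zero-cited papers, which this toy version omits, so g is capped at the number of papers.

```python
import numpy as np

def hgar(citations):
    """Return (h, g, A, R) for a list of per-paper citation counts."""
    c = np.sort(np.asarray(citations))[::-1]        # sort descending
    ranks = np.arange(1, len(c) + 1)
    h = int(np.max(ranks[c >= ranks], initial=0))   # h papers with >= h citations
    cum = np.cumsum(c)
    g = int(np.max(ranks[cum >= ranks ** 2], initial=0))  # top g papers hold >= g^2 citations
    A = cum[h - 1] / h if h else 0.0                # mean citations in the h-core
    R = float(np.sqrt(cum[h - 1])) if h else 0.0    # R = sqrt(h * A)
    return h, g, A, R

print(hgar([10, 8, 5, 4, 3]))  # (4, 5, 6.75, 5.196...)
```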
 
Co-occurrence matrices, such as co-citation, co-word, and co-link matrices, have been used widely in the information sciences. However, confusion and controversy have hindered the proper statistical analysis of this data. The underlying problem, in our opinion, involved understanding the nature of various types of matrices. This paper discusses the difference between a symmetrical co-citation matrix and an asymmetrical citation matrix as well as the appropriate statistical techniques that can be applied to each of these matrices, respectively. Similarity measures (like the Pearson correlation coefficient or the cosine) should not be applied to the symmetrical co-citation matrix, but can be applied to the asymmetrical citation matrix to derive the proximity matrix. The argument is illustrated with examples. The study then extends the application of co-occurrence matrices to the Web environment where the nature of the available data and thus data collection methods are different from those of traditional databases such as the Science Citation Index. A set of data collected with the Google Scholar search engine is analyzed using both the traditional methods of multivariate analysis and the new visualization software Pajek that is based on social network analysis and graph theory.
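The distinction can be made concrete with a toy sketch (invented data): the columns of an asymmetrical citation matrix are citation profiles, and the cosine between columns yields the proximity matrix; applying the cosine to a symmetrical co-citation matrix would instead conflate co-occurrence counts with profiles.

```python
import numpy as np

# Asymmetrical citation matrix: rows are citing documents, columns are cited items.
C = np.array([[1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 1, 1, 1],
              [1, 0, 1, 1]], dtype=float)

norms = np.linalg.norm(C, axis=0)
proximity = (C.T @ C) / np.outer(norms, norms)  # cosine between cited items' profiles
print(np.round(proximity, 2))
```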
 
The h-index provides us with nine natural classes, which can be written as a matrix of three vectors. The three vectors are: X = (X1, X2, X3), indicating the publication distribution over the h-core, the h-tail, and the uncited publications, respectively; Y = (Y1, Y2, Y3), denoting the citation distribution over the h-core, the h-tail, and the so-called "excess" citations (above the h-threshold), respectively; and Z = (Z1, Z2, Z3) = (Y1 - X1, Y2 - X2, Y3 - X3). The matrix V = (X, Y, Z)^T constructs a measure of academic performance, in which the nine numbers can all be provided with meanings in different dimensions. The "academic trace" tr(V) of this matrix follows naturally and contributes a unique indicator of total academic achievement, summarizing and weighting the accumulation of publications and citations. This measure can also be used to combine the advantages of the h-index and the Integrated Impact Indicator (I3) into a single number with a meaningful interpretation of the values. We illustrate the use of tr(V) for the cases of two journal sets, two universities, and ourselves as two individual authors.
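A minimal sketch of the construction, under the common convention (an assumption here, not stated in the abstract) that Y1 = h^2 counts the citations within the h-square and Y3 the excess citations of the h-core above h^2; with the definitions above, tr(V) = X1 + Y2 + Z3.

```python
import numpy as np

def academic_trace(citations):
    """tr(V) for V = (X, Y, Z)^T, built from a list of per-paper citation counts."""
    c = np.sort(np.asarray(citations))[::-1]
    ranks = np.arange(1, len(c) + 1)
    h = int(np.max(ranks[c >= ranks], initial=0))
    core, tail = c[:h], c[h:]
    X = np.array([h, np.sum(tail > 0), np.sum(tail == 0)], dtype=float)
    Y = np.array([h ** 2, tail.sum(), core.sum() - h ** 2], dtype=float)  # assumption: Y1 = h^2
    Z = Y - X
    V = np.vstack([X, Y, Z])
    return float(np.trace(V))  # = X1 + Y2 + Z3

print(academic_trace([10, 8, 5, 4, 3, 1, 0]))  # h = 4, so 4 + 4 + 10 = 18
```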
 
We introduce a novel methodology for mapping academic institutions based on their journal publication profiles. We believe that journals in which researchers from academic institutions publish their works can be considered as useful identifiers for representing the relationships between these institutions and establishing comparisons. However, when academic journals are used for research output representation, distinctions must be introduced between them, based on their value as institution descriptors. This leads us to the use of journal weights attached to the institution identifiers. Since a journal in which researchers from a large proportion of institutions published their papers may be a bad indicator of similarity between two academic institutions, it seems reasonable to weight it in accordance with how frequently researchers from different institutions published their papers in this journal. Cluster analysis can then be applied to group the academic institutions, and dendrograms can be provided to illustrate groups of institutions following agglomerative hierarchical clustering. In order to test this methodology, we use a sample of Spanish universities as a case study. We first map the study sample according to an institution's overall research output, then we use it for two scientific fields (Information and Communication Technologies, as well as Medicine and Pharmacology) as a means to demonstrate how our methodology can be applied, not only for analyzing institutions as a whole, but also in different disciplinary contexts.
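A hedged sketch of the pipeline with hypothetical counts: an IDF-style weight stands in for the frequency-based journal weighting described above, and agglomerative clustering groups the weighted institution profiles.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

# Hypothetical institution-by-journal publication counts (rows: institutions).
M = np.array([[12, 0, 3, 5],
              [10, 1, 2, 6],
              [0, 8, 7, 1],
              [1, 9, 6, 0]], dtype=float)

# Down-weight journals in which many institutions publish (IDF-like assumption).
df = np.count_nonzero(M, axis=0)
weights = np.log(M.shape[0] / df)
profiles = M * weights

Z = linkage(profiles, method="average", metric="cosine")
print(Z)  # pass Z to scipy.cluster.hierarchy.dendrogram to draw the tree
```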
 
In this article, we analyze the citations to articles published in 11 biological and medical journals from 2003 to 2007 that employ author-choice open access models. Controlling for known explanatory predictors of citations, only 2 of the 11 journals show positive and significant open access effects. Analyzing all journals together, we report a small but significant increase in article citations of 17%. In addition, there is strong evidence to suggest that the open access advantage is declining by about 7% per year, from 32% in 2004 to 11% in 2007.
 
This article statistically analyses how the citation impact of articles deposited in the Condensed Matter section of the preprint server ArXiv (hosted by Cornell University), and subsequently published in a scientific journal, compares to that of articles in the same journal that were not deposited in that archive. Its principal aim is to further illustrate and roughly estimate the effect of two factors, 'early view' and 'quality bias', upon differences in citation impact between these two sets of papers, using citation data from Thomson Scientific's Web of Science. It presents estimates for a number of journals in the field of condensed matter physics. In order to discriminate between an 'open access' effect and an early view effect, longitudinal citation data were analysed covering a time period as long as 7 years. Quality bias was measured by calculating ArXiv citation impact differentials at the level of individual authors publishing in a journal, taking co-authorship into account. The analysis provided evidence of a strong quality bias and early view effect. Correcting for these effects, there is, in a sample of six condensed matter physics journals studied in detail, no sign of a general 'open access advantage' for papers deposited in ArXiv. The study does provide evidence that ArXiv accelerates citation, because ArXiv makes papers available earlier rather than because it makes them freely available.
 
In a recent paper entitled "Inconsistencies of Recently Proposed Citation Impact Indicators and how to Avoid Them," Schreiber (2012, at arXiv:1202.3861) proposed (i) a method to assess tied ranks consistently and (ii) fractional attribution to percentile ranks in the case of relatively small samples (e.g., for n < 100). Schreiber's solution to the problem of how to handle tied ranks is convincing, in my opinion (cf. Pudovkin & Garfield, 2009). The fractional attribution, however, is computationally intensive and cannot be done manually for even moderately large batches of documents. Schreiber attributed scores fractionally to the six percentile rank classes used in the Science and Engineering Indicators of the U.S. National Science Board, and thus missed, in my opinion, the point that fractional attribution at the level of hundred percentiles (or, equivalently, quantiles as the continuous random variable) is only a linear, and therefore much less complex, problem. Given the quantile values, the non-linear attribution to the six classes or any other evaluation scheme is then a question of aggregation. A new routine based on these principles (including Schreiber's solution for tied ranks) is made available as software for the assessment of documents retrieved from the Web of Science (at http://www.leydesdorff.net/software/i3).
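The linearity argument is easy to see in code: quantile values with fractional treatment of ties are computed in one pass, and any subsequent attribution to six (or other) rank classes is a separate aggregation step. The formula below is an illustrative rendering of the tied-rank solution, not Schreiber's exact routine.

```python
import numpy as np

def quantile_values(citations):
    """Percentile of each paper in its reference set, counting half of the
    papers tied at the same value as ranked below (fractional tie handling)."""
    c = np.asarray(citations)
    below = np.array([np.sum(c < x) for x in c], dtype=float)
    ties = np.array([np.sum(c == x) for x in c], dtype=float)
    return 100.0 * (below + 0.5 * ties) / len(c)

q = quantile_values([0, 2, 2, 5, 9, 9, 30])
print(np.round(q, 1))
# Non-linear step: aggregate the quantile values into any class scheme afterwards,
# e.g. the top-10% class: print(np.sum(q >= 90))
```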
 
This article examines the relationship between acquaintanceship and coauthorship patterns in a multi-disciplinary, multi-institutional, geographically distributed research center. Two social networks are constructed and compared: a network of coauthorship, representing how researchers write articles with one another, and a network of acquaintanceship, representing how those researchers know each other on a personal level, based on their responses to an online survey. Statistical analyses of the topology and community structure of these networks point to the importance of small-scale, local, personal networks predicated upon acquaintanceship for accomplishing collaborative work in scientific communities.
 
The bibliometric measure impact factor is a leading indicator of journal influence, and impact factors are routinely used in making decisions ranging from selecting journal subscriptions to allocating research funding to deciding tenure cases. Yet journal impact factors have increased gradually over time, and moreover impact factors vary widely across academic disciplines. Here we quantify inflation over time and differences across fields in impact factor scores and determine the sources of these differences. We find that the average number of citations in reference lists has increased gradually, and this is the predominant factor responsible for the inflation of impact factor scores over time. Field-specific variation in the fraction of citations to literature indexed by Thomson Scientific's Journal Citation Reports is the single greatest contributor to differences among the impact factors of journals in different fields. The growth rate of the scientific literature as a whole, and cross-field differences in net size and growth rate of individual fields, have had very little influence on impact factor inflation or on cross-field differences in impact factor.
 
Changes in impact factor from 1994 to 2005. Panel (a) is a log-log plot of 1994 impact factor against 2005 impact factor for the 4,300 journals that were listed in every year from 1994 to 2005 in the JCR dataset. Shading indicates density of points, with darker tones representing higher density. Panel (b) plots the rank-order distribution of impact factors for these 4,300 journals from 1994 to 2005. The progression of darkening shade indicates years, with the lightest shade representing 1994 and the darkest 2005. Note that the highest- and lowest-scoring journals do not fall within the scales of the plot.
Differences in citation patterns across fields. This figure illustrates the differences in c, p, v, and IF across fields. Panel (a) shows differences in c, the average number of items cited per paper. Panel (b) shows differences in p, the fraction of citations to papers published in the two previous calendar years. Panel (c) shows differences in v, the fraction of citations to papers published in JCR-listed journals, and panel (d) shows differences in IF, the weighted impact factor. Fields are categorized and mapped as in Rosvall and Bergstrom (2008).
Calculating v_t. The top panel gives the schematic for calculating v_t for the entire dataset, and the bottom panel gives the schematic for specific fields. Details are provided in the text.
Table showing the results of hierarchical partitioning.
The impact factor of an academic journal for any year is the number of times the average article published in that journal in the previous two years is cited in that year. From 1994 to 2005, the average impact factor of journals listed by the ISI increased by an average of 2.6 percent per year. This paper documents this growth and explores its causes.
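For reference, the two-year impact factor described above reduces to a simple ratio; a sketch with hypothetical tallies (the dictionary layout and journal name are invented):

```python
# cites[year][journal]: citations received in `year` to the journal's articles
# from the two previous years; items[year][journal]: citable items published.
def impact_factor(cites, items, journal, year):
    return cites[year][journal] / (items[year - 1][journal] + items[year - 2][journal])

cites = {2005: {"J. Example": 480}}
items = {2003: {"J. Example": 110}, 2004: {"J. Example": 90}}
print(impact_factor(cites, items, "J. Example", 2005))  # 480 / 200 = 2.4
```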
 
Using the CD-ROM version of the Science Citation Index 2010 (N = 3,705 journals), we study the (combined) effects of (i) fractional counting on the impact factor (IF) and (ii) transformation of the skewed citation distributions into a distribution of 100 percentiles and six percentile rank classes (top-1%, top-5%, etc.). Do these approaches lead to field-normalized impact measures for journals? In addition to the two-year IF (IF2), we consider the five-year IF (IF5), the respective numerators of these IFs, and the number of Total Cites, counted both as integers and fractionally. These various indicators are tested against the hypothesis that the classification of journals into 11 broad fields by PatentBoard/National Science Foundation provides statistically significant between-field effects. Using fractional counting the between-field variance is reduced by 91.7% in the case of IF5, and by 79.2% in the case of IF2. However, the differences in citation counts are not significantly affected by fractional counting. These results accord with previous studies, but the longer citation window of a fractionally counted IF5 can lead to significant improvement in the normalization across fields.
 
The launching of Scopus and Google Scholar, and methodological developments in Social Network Analysis, have made many more indicators for evaluating journals available than the traditional Impact Factor, Cited Half-life, and Immediacy Index of the ISI. In this study, these new indicators are compared with one another and with the older ones. Do the various indicators measure new dimensions of the citation networks, or are they highly correlated among themselves? Are they robust and relatively stable over time? Two main dimensions are distinguished, size and impact, which together shape influence. The H-index combines the two dimensions and can also be considered an indicator of reach (like Indegree). PageRank is mainly an indicator of size, but has important interactions with centrality measures. The SCImago Journal Rank (SJR) indicator provides an alternative to the Journal Impact Factor, but its computation is less straightforward.
 
In this paper we distinguish between top-performance and lower-performance groups in the analysis of the statistical properties of bibliometric characteristics of two large sets of research groups. We find intriguing differences between top-performance and lower-performance groups, but also between the two sets of research groups. These latter differences are particularly interesting, as they may indicate the influence of research management strategies. Lower-performance groups have a larger scale-dependent cumulative advantage than top-performance groups. We also find that, regardless of performance, larger groups have fewer uncited publications. We introduce a simple model in which processes at the micro level lead to the observed phenomena at the macro level. Top-performance groups are, on average, more successful over the entire range of journal impact. We fit our findings into a concept of hierarchically layered networks. In this concept, the network of research groups constitutes a layer one hierarchical step higher than the basic network of publications connected by citations. The cumulative size advantage of citations received by a group looks like preferential attachment in the basic network, in which highly connected nodes (publications) increase their connectivity faster than less connected nodes. But in our study it is size that causes an advantage. In general, the larger a group (node in the research group network), the more incoming links this group acquires, in a non-linear, cumulative way. Moreover, top-performance groups are about an order of magnitude more efficient at creating linkages (i.e., receiving citations) than the lower-performance groups.
 
The ISI Impact Factors suffer from a number of drawbacks, among them the statistics (why should one use the mean and not the median?) and the incomparability among fields of science because of systematic differences in citation behavior among fields. Can these drawbacks be counteracted by counting citation weights fractionally instead of using whole numbers in the numerators? (i) Fractional citation counts are normalized in terms of the citing sources and thus would take into account differences in citation behavior among fields of science. (ii) Differences in the resulting distributions can be tested statistically for their significance at different levels of aggregation. (iii) Fractional counting can be generalized to any document set, including journals or groups of journals, and thus the significance of differences among both small and large sets can be tested. A list of fractionally counted Impact Factors for 2008 is available online at http://www.leydesdorff.net/weighted_if/weighted_if.xls. The between-group variance among the thirteen fields of science identified in the U.S. Science and Engineering Indicators is not statistically significant after this normalization. Although citation behavior differs considerably between disciplines, the reflection of these differences in fractionally counted citation distributions could not be used as a reliable instrument for the classification.
 
The aggregated citation relations among journals included in the Science Citation Index provide us with a huge matrix which can be analyzed in various ways. Using principal component analysis or factor analysis, the factor scores can be used as indicators of the position of the cited journals in the citing dimensions of the database. Unrotated factor scores are exact, and the extraction of principal components can be made stepwise since the principal components are independent. Rotation may be needed for the designation, but in the rotated solution a model is assumed. This assumption can be legitimated on pragmatic or theoretical grounds. Since the resulting outcomes remain sensitive to the assumptions in the model, an unambiguous classification is no longer possible in this case. However, the factor-analytic solutions allow us to test classifications against the structures contained in the database. This will be demonstrated for the delineation of a set of biochemistry journals.
 
10,330 journals similar in their citing patterns at cosine > 0.2; 12 colors (clusters; Q = 0.575). This map can be viewed directly in VOSViewer via WebStart at http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/journals11/citing_all.txt&label_size=1.0&label_size_variation=0.3. Using single-level refinement (in Pajek): Q = 0.5747; with multi-level refinement: Q = 0.5750. The number of clusters is 12 in either case (Blondel et al., 2008).
Overlay maps 2011 comparing journal publication portfolios from 2006 to 2010 between the Science and Technology Policy Research Unit (SPRU) at the University of Sussex (on the left; N = 148; available at http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/journals11/a.txt&label_size=1.35) and the London Business School (on the right; N = 343; available at http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/journals11/b.txt&label_size=1.35).
2,552 references in 155 documents published by SPRU authors 2006–2010, mapped as citing patterns; Rao-Stirling diversity is 0.214. This map can be viewed directly in VOSViewer via WebStart at http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/journals11/txt&label_size=1.35.
Local map of 60 journals most frequently cited by the Journal of the American Society for Information Science and Technology in 2011; cosine > 0.2; Q = 0.389; 3 communities. This map can be viewed directly in VOSViewer via WebStart at http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/journals11/map.txt&network=http://www.leydesdorff.net/journals11/net.txt&n_lines=3000&label_size=1.35.
Using the "Analyze Results" option of the Web of Science, one can directly generate overlays onto global journal maps of science. The maps are based on the 10,000+ journals contained in the Journal Citation Reports (JCR) of the Science and Social Sciences Citation Indices (2011). The disciplinary diversity of a retrieval is measured in terms of Rao-Stirling "quadratic entropy" (Izsák & Papp, 1995). Since this indicator of interdisciplinarity is normalized between 0 and 1, interdisciplinarity can be compared among document sets and across years, cited or citing. The colors used for the overlays are based on Blondel, Guillaume, Lambiotte, and Lefebvre's (2008) community-finding algorithms operating on the relations among journals included in the JCR. The results can be exported from VOSViewer with different options such as proportional labels, heat maps, or cluster density maps. The maps can also be web-started or animated (e.g., using PowerPoint). The "citing" dimension of the aggregated journal-journal citation matrix was found to provide a more comprehensive description than the matrix based on the cited archive. The relations between local and global maps and their different functions in studying the sciences in terms of journal literatures are further discussed: local and global maps are based on different assumptions and can be expected to serve different explanatory purposes.
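The Rao-Stirling measure referred to above has a compact form, Δ = Σ_ij p_i p_j d_ij, with p the portfolio proportions and d_ij = 1 − similarity; a sketch with invented proportions and similarities (some formulations sum only over i < j, which halves the value):

```python
import numpy as np

def rao_stirling(p, similarity):
    """Quadratic entropy: sum over category pairs of p_i * p_j * (1 - s_ij)."""
    p = np.asarray(p, dtype=float)
    d = 1.0 - np.asarray(similarity)   # distance; the diagonal is zero
    return float(p @ d @ p)

# Hypothetical portfolio over three journal clusters and their cosine similarities.
p = [0.5, 0.3, 0.2]
S = np.array([[1.0, 0.4, 0.1],
              [0.4, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
print(round(rao_stirling(p, S), 3))  # 0.456
```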
 
The aggregation of the web performance (page count and visibility) of internal university units could constitute a more precise indicator than the overall web performance of the universities and, therefore, be of use in the design of university web rankings. In order to test this hypothesis, a longitudinal analysis of the internal units of the Spanish university system was conducted over the course of 2010. For the 13,800 URLs identified, page count and visibility were calculated using the Yahoo API. The internal values obtained were aggregated by university and compared with the values obtained from the analysis of the universities' general URLs. The results indicate that, although the correlations between general and internal values are high, internal performance is low in comparison to general performance, and that the two give rise to different performance rankings. The conclusion is that the aggregation of unit performance is of limited use due to the low levels of internal development of the websites, and so its use is not recommended for the design of rankings. Despite this, the internal analysis enabled the detection of, amongst other things, a low correlation between page count and visibility due to the widespread use of subdirectories and problems accessing certain content.
 
In the process of scientific research, many information objects are generated, all of which may remain valuable indefinitely. However, artifacts such as instrument data and associated calibration information may have little value in isolation; their meaning is derived from their relationships to each other. Individual artifacts are best represented as components of a life cycle that is specific to a scientific research domain or project. Current cataloging practices do not describe objects at a sufficient level of granularity nor do they offer the globally persistent identifiers necessary to discover and manage scholarly products with World Wide Web standards. The Open Archives Initiative's Object Reuse and Exchange data model (OAI-ORE) meets these requirements. We demonstrate a conceptual implementation of OAI-ORE to represent the scientific life cycles of embedded networked sensor applications in seismology and environmental sciences. By establishing relationships between publications, data, and contextual research information, we illustrate how to obtain a richer and more realistic view of scientific practices. That view can facilitate new forms of scientific research and learning. Our analysis is framed by studies of scientific practices in a large, multi-disciplinary, multi-university science and engineering research center, the Center for Embedded Networked Sensing (CENS).
 
Based on the complete set of firm data for Sweden (N = 1,187,421; November 2011), we analyze the mutual information among the geographical, technological, and organizational distributions in terms of synergies at regional and national levels. Mutual information in three dimensions can become negative and thus indicate a net export of uncertainty by a system or, in other words, synergy in how knowledge functions are distributed over the carriers. Aggregation at the regional level (NUTS3) of the data organized at the municipal level (NUTS5) shows that 48.5% of the regional synergy is provided by the three metropolitan regions of Stockholm, Gothenburg, and Malmö/Lund. Sweden can be considered as a centralized and hierarchically organized system. Our results accord with other statistics, but this Triple Helix indicator measures synergy more specifically and quantitatively. The analysis also provides us with validation for using this measure in previous studies of more regionalized systems of innovation (such as Hungary and Norway).
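A minimal sketch of the synergy measure on toy firm records, using T = H_x + H_y + H_z − H_xy − H_xz − H_yz + H_xyz; as noted above, negative T indicates synergy. The record values are invented.

```python
import numpy as np
from collections import Counter

def H(events):
    """Shannon entropy (bits) of the empirical distribution of `events`."""
    counts = np.array(list(Counter(events).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def mutual_information_3d(x, y, z):
    """T = Hx + Hy + Hz - Hxy - Hxz - Hyz + Hxyz (negative values: synergy)."""
    return (H(x) + H(y) + H(z)
            - H(list(zip(x, y))) - H(list(zip(x, z))) - H(list(zip(y, z)))
            + H(list(zip(x, y, z))))

# Toy firm records: geographical region, technology class, organization size.
geo  = ["A", "A", "B", "B", "A", "B"]
tech = ["t1", "t2", "t1", "t2", "t1", "t2"]
org  = ["s", "l", "s", "l", "l", "s"]
print(round(mutual_information_3d(geo, tech, org), 3))
```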
 
The aggregated journal-journal citation matrix, based on the Journal Citation Reports (JCR) of the Science Citation Index, can be decomposed by indexers and/or algorithmically. In this study, we test the results of two recently available algorithms for the decomposition of large matrices against two content-based classifications of journals: the ISI Subject Categories and the field/subfield classification of Glaenzel & Schubert (2003). The content-based schemes allow for the attribution of more than a single category to a journal, whereas the algorithms maximize the ratio of within-category citations over between-category citations in the aggregated category-category citation matrix. By adding categories, indexers generate between-category citations, which may enrich the database, for example, in the case of interdisciplinary developments. The consequent indexer effects are more significant in sparse areas of the matrix than in denser ones. Algorithmic decompositions, on the other hand, are more heavily skewed towards a relatively small number of categories, whereas this is deliberately counteracted in the case of content-based classifications. Because of the indexer effects, science policy studies and the sociology of science should be careful when using content-based classifications, which are made for bibliographic disclosure and not for the purpose of analyzing latent structures in scientific communications. Despite the large differences among them, the four classification schemes enable us to generate surprisingly similar maps of science at the global level. Erroneous classifications are cancelled out as noise at the aggregate level, but may disturb the evaluation locally.
 
In bibliometrics, the association of "impact" with central-tendency statistics is mistaken. Impacts add up, and citation curves should therefore be integrated instead of averaged. For example, the journals MIS Quarterly and JASIST differ by a factor of two in terms of their respective impact factors (IFs), but the journal with the lower IF has the higher impact. Using percentile ranks (e.g., top-1%, top-10%, etc.), an integrated impact indicator (I3) can be based on the integration of the citation curves, after normalization of the citation curves to the same scale. The results across document sets can then be compared as percentages of the total impact of a reference set. The total number of citations should not be used instead, because the shape of the citation curves is then not appreciated. I3 can be applied to any document set and any citation window. The results of the integration (summation) are fully decomposable in terms of journals or institutional units such as nations, universities, etc., because percentile ranks are determined at the paper level. In this study, we first compare I3 with IFs for the journals in two ISI Subject Categories ("Information Science & Library Science" and "Multidisciplinary Sciences"). The LIS set is additionally decomposed in terms of nations. Policy implications of this possible paradigm shift in citation impact analysis are specified.
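A simplified, quantile-based sketch of the idea (not the study's exact operationalization): each paper contributes its percentile in a reference set, and contributions are summed (integrated) rather than averaged; the tie handling below is an assumption.

```python
import numpy as np

def i3(doc_set, reference):
    """Integrated impact: sum of each paper's percentile in the reference set."""
    ref = np.sort(np.asarray(reference))
    def percentile(c):
        below = np.searchsorted(ref, c, side="left")   # papers cited less than c
        ties = np.sum(ref == c)                        # papers tied at c
        return 100.0 * (below + 0.5 * ties) / len(ref)
    return sum(percentile(c) for c in doc_set)

reference = [0, 0, 1, 2, 3, 5, 8, 13, 21, 34]   # invented reference set
print(i3([5, 21], reference))                   # 55.0 + 85.0 = 140.0
```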
 
The recently proposed fractional scoring scheme is used to attribute publications to percentile rank classes. It is shown that in this way uncertainties and ambiguities in the evaluation of percentile ranks do not occur. Using fractional scoring, the total score of all papers exactly reproduces the theoretical value.
 
Percentile distributions for four universities. The violin plots show a marker for the median of the percentiles, a box indicating the interquartile range, and spikes extending to the upper- and lower-adjacent values. Furthermore, plots of the estimated kernel density are overlaid.
Percentile distributions for four universities broken down by publication year. The solid line in the figure indicates an average citation impact of 50. The cross in the box designates the median for one university in one year. The outer margins of the box indicate the first quartile (25% of the values) and the third quartile (75% of the values). The median of the percentiles for each university across all years is given in the labels; furthermore, the universities from which a university differs statistically significantly in its citation performance are named there.
Summary percentile statistics for four universities
According to current research in bibliometrics, percentiles (or percentile rank classes) are the most suitable method for normalising the citation counts of individual publications in terms of the subject area, the document type, and the publication year. Up to now, bibliometric research has concerned itself primarily with the calculation of percentiles. This study suggests how percentiles can be analysed meaningfully for an evaluation study. Publication sets from four universities are compared with each other to provide sample data. These suggestions take into account, on the one hand, the distribution of percentiles over the publications in the sets (here: universities) and, on the other hand, concentrate on the range of publications with the highest citation impact, that is, the range which is usually of most interest in the evaluation of scientific performance.
 
Via the Internet, information scientists can obtain cost-free access to large databases in the hidden or deep web. These databases are often structured far more than the Internet domains themselves. The patent database of the U.S. Patent and Trade Office is used in this study to examine the science base of patents in terms of the literature references in these patents. University-based patents at the global level are compared with results when using the national economy of the Netherlands as a system of reference. Methods for accessing the on-line databases and for the visualization of the results are specified. The conclusion is that 'biotechnology' has historically generated a model for theorizing about university-industry relations that cannot easily be generalized to other sectors and disciplines.
 
Betweenness centrality of 164 journals in the citation-impact environment of Cognitive Science in 2004; cosine ≥0.2.
Betweenness centrality of 43 journals in the vector space of the citation-impact environment of Social Networks in 2004 (cosine ≥0.2).
Betweenness centrality of Nanotechnology between clusters of journals in applied physics and chemistry, in 2001.
The development of betweenness centrality of Nanotechnology in its citation-impact environment during the period 1996–2006.
The dynamic analysis of structural change in the organization of the sciences methodologically requires the integration of multivariate and time-series analysis. Structural change (e.g., interdisciplinary development) is often an objective of government interventions. Recent developments in multi-dimensional scaling (MDS) enable us to distinguish the stress originating in each time-slice from the stress originating from the sequencing of time-slices, and thus to locally optimize the trade-offs between these two sources of variance in the animation. Furthermore, visualization programs like Pajek and Visone allow us to show not only the positions of the nodes, but also their relational attributes, like betweenness centrality. Betweenness centrality in the vector space can be considered an indicator of interdisciplinarity. Using this indicator, the dynamics of the citation impact environments of the journals Cognitive Science, Social Networks, and Nanotechnology are animated and assessed in terms of interdisciplinarity among the disciplines involved.
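Betweenness centrality itself is a standard network measure; a sketch on a hypothetical journal citation environment (names and edges invented), using networkx:

```python
import networkx as nx

# Hypothetical citation-impact environment (edges: cosine >= 0.2, say).
G = nx.Graph()
G.add_edges_from([
    ("Cognitive Science", "Cognition"),
    ("Cognitive Science", "Neural Networks"),
    ("Cognition", "Memory & Cognition"),
    ("Neural Networks", "IEEE Trans. Neural Netw."),
    ("Cognitive Science", "AI Magazine"),
])

# A journal bridging otherwise weakly connected fields scores high.
bc = nx.betweenness_centrality(G, normalized=True)
for journal, score in sorted(bc.items(), key=lambda kv: -kv[1]):
    print(f"{score:.3f}  {journal}")
```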
 
Many studies on coauthorship networks focus on network topology and network statistical mechanics. This article takes a different approach by studying micro-level network properties, with the aim of applying centrality measures to impact analysis. Using coauthorship data from 16 journals in the field of library and information science (LIS) with a time span of twenty years (1988-2007), we construct an evolving coauthorship network and calculate four centrality measures (closeness, betweenness, degree, and PageRank) for authors in this network. We find that the four centrality measures are significantly correlated with citation counts. We also discuss the usability of centrality measures in author ranking, and suggest that centrality measures can be useful indicators for impact analysis.
 
Top rankings for different YouTube channel types.
The h-index can be a useful metric for evaluating a person's output of Internet media. Here we advocate and demonstrate the adaptation of the h-index and the g-index to the top video content creators on YouTube. The h-index for Internet video media is based on videos and their view counts. The index h is defined as the number of videos with ≥ h × 10^5 views. The index g is defined as the number of videos with ≥ g × 10^5 views on average. When compared to a video creator's total view count, the h-index and g-index better capture both productivity and impact in a single metric.
 
This paper aims to identify whether different weighted PageRank algorithms can be applied to author citation networks to measure the popularity and prestige of a scholar from a citation perspective. Information Retrieval (IR) was selected as the test field, and data from 1956-2008 were collected from the Web of Science (WOS). Weighted PageRank scores, with citation and publication counts as weight vectors, were calculated on author citation networks. The results indicate that both the popularity rank and the prestige rank were highly correlated with the weighted PageRank scores. Principal Component Analysis (PCA) was conducted to detect relationships among these different measures. For capturing prize winners within the IR field, the prestige rank outperformed all the other measures.
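One plausible rendering of a citation-weighted PageRank (a sketch; the paper's exact weighting scheme may differ): edge weights carry citation frequencies between authors, and the teleportation vector is biased by each author's citation counts.

```python
import networkx as nx

# Hypothetical author citation network: edge (a, b, w) = author a cited b, w times.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("Salton", "Robertson", 3),
    ("Robertson", "Salton", 5),
    ("van Rijsbergen", "Salton", 4),
    ("Robertson", "van Rijsbergen", 2),
])

# Citation counts (invented) bias the teleportation; networkx normalizes them.
citations = {"Salton": 9, "Robertson": 3, "van Rijsbergen": 2}
pr = nx.pagerank(G, alpha=0.85, personalization=citations, weight="weight")
print({author: round(score, 3) for author, score in pr.items()})
```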
 
The possibilities of using the Arts & Humanities Citation Index (A&HCI) for journal mapping have not been sufficiently recognized because of the absence of Journal Citation Reports (JCR) for this database. A quasi-JCR for the A&HCI (2008) was constructed from the data contained in the Web of Science and is used for the evaluation of two journals as examples: Leonardo and Art Journal. The maps on the basis of the aggregated journal-journal citations within this domain can be compared with maps including references to journals in the Science Citation Index and Social Science Citation Index. Art journals are cited by (social) science journals more than by other art journals, but these journals draw upon one another in terms of their own references. This cultural impact in terms of being cited is not found when documents with a topic such as "digital humanities" are analyzed. This community of practice functions more as an intellectual organizer than as a journal.
 
We propose using the technique of weighted citation to measure an article's prestige. The technique allocates a different weight to each reference by taking into account the impact of the citing journals and the citation time intervals. Weighted citation captures prestige, whereas citation counts capture popularity. We compare the variances of the popularity and prestige values for articles published in the Journal of the American Society for Information Science and Technology from 1998 to 2007, and find that the majority have comparable status.
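
As an illustration only, a sketch under assumed forms: an exponential decay over the citation time interval and a per-journal impact score. Both the decay function and its half-life are our assumptions, not the paper's specification.

```python
import math

def weighted_citation(cites, half_life=5.0):
    """Sum of (citing-journal impact) * exponential time decay.

    `cites`: list of (impact_of_citing_journal, years_since_publication)
    pairs. The exponential form and the 5-year half-life are illustrative
    assumptions, not the paper's weighting function.
    """
    lam = math.log(2) / half_life
    return sum(impact * math.exp(-lam * dt) for impact, dt in cites)

# Hypothetical article cited three times:
print(round(weighted_citation([(3.2, 1.0), (1.1, 4.0), (7.5, 8.0)]), 2))
```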
 
This paper aims to review the fiercely discussed question of whether the ranking of Wikipedia articles in search engines is justified by the quality of the articles. After an overview of current research on information quality in Wikipedia, a summary of the extended discussion on the quality of encyclopedic entries in general is given. On this basis, a heuristic method for evaluating Wikipedia entries is developed and applied to Wikipedia articles that scored highly in a search engine retrieval effectiveness test, and the results are compared with the relevance judgments of jurors. In all search engines tested, Wikipedia results are unanimously judged better by the jurors than other results in the corresponding results positions. Relevance judgments often roughly correspond with the results from the heuristic evaluation. Cases in which high relevance judgments are not in accordance with the comparatively low scores from the heuristic evaluation are interpreted as an indicator of a high degree of trust in Wikipedia. One of the systemic shortcomings of Wikipedia lies in its necessarily incoherent user model. A further tuning of the suggested criteria catalogue, for instance a different weighting of the supplied criteria, could serve as a starting point for a user-model-differentiated evaluation of Wikipedia articles. Approved methods for the quality evaluation of reference works are thus applied to Wikipedia articles and integrated with the question of search engine evaluation.
 
We present a theoretical and empirical analysis of a number of bibliometric indicators of journal performance. We focus on three indicators in particular: the Eigenfactor indicator, the audience factor, and the influence weight indicator. Our main finding is that the latter two indicators can be regarded as special cases of the first. We also find that the three indicators can be characterized in terms of two properties, which we refer to as insensitivity to field differences and insensitivity to insignificant journals. The empirical results that we present illustrate our theoretical findings. We also show empirically that the differences between the various indicators of journal performance are quite substantial.
 
The use of Pearson's correlation coefficient in Author Cocitation Analysis was compared with Salton's cosine measure in a number of recent contributions. Unlike the Pearson correlation, the cosine is insensitive to the number of zeros. However, one has the option of applying a logarithmic transformation in correlation analysis. Information calculus is based on the logarithmic transformation and provides non-parametric statistics. Using this methodology, one can cluster a document set in a precise way and express the differences in terms of bits of information. The algorithm is explained and applied to the data set that was made the subject of this discussion.
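
The sensitivity to zeros can be made concrete with a small sketch (the cocitation vectors are hypothetical): appending zeros to both vectors leaves the cosine unchanged but shifts the Pearson correlation, because the means change.

```python
import numpy as np
from scipy.stats import pearsonr

def cosine(x, y):
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

# Hypothetical cocitation profiles of two authors:
x = np.array([5.0, 3.0, 0.0, 2.0])
y = np.array([4.0, 1.0, 1.0, 2.0])

# Append zeros to both profiles (e.g., authors cocited with neither):
x2 = np.concatenate([x, np.zeros(6)])
y2 = np.concatenate([y, np.zeros(6)])

print(cosine(x, y), cosine(x2, y2))            # identical
print(pearsonr(x, y)[0], pearsonr(x2, y2)[0])  # shifted by the added zeros
```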
 
Cosine-normalized representation of the asymmetrical citation matrix (Pajek; Kamada & Kawai, 1989).
Jaccard-index-based representation of the co-citation matrix using total citations for the normalization (Pajek; Kamada & Kawai, 1989).
Relation between the Jaccard index and Salton's cosine in the case of the asymmetrical citation matrix (N = (24 * 23)/2 = 276).
The debate about which similarity measure one should use for the normalization in the case of Author Co-citation Analysis (ACA) is further complicated when one distinguishes between the symmetrical co-citation (or, more generally, co-occurrence) matrix and the underlying asymmetrical citation (occurrence) matrix. In the Web environment, retrieving the original citation data is often not feasible. In that case, one should use the Jaccard index, preferably after adding the total numbers of citations (occurrences) on the main diagonal. Unlike Salton's cosine and the Pearson correlation, the Jaccard index abstracts from the shape of the distributions and focuses only on the intersection and the sum of the two sets. Since the correlations in the co-occurrence matrix may partially be spurious, this property of the Jaccard index can be considered an advantage in this case.
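
A minimal sketch of the recommended computation, assuming a symmetrical co-citation matrix C whose main diagonal has been set to the total citations (occurrences), so that J_ij = C_ij / (C_ii + C_jj - C_ij); the matrix below is hypothetical.

```python
import numpy as np

# Hypothetical 3-author co-citation matrix: off-diagonal entries are
# co-citation counts; the main diagonal holds each author's total
# citations (occurrences), as recommended above.
C = np.array([[50.0, 12.0,  3.0],
              [12.0, 30.0,  8.0],
              [ 3.0,  8.0, 20.0]])

totals = np.diag(C)
# Jaccard: intersection / union = C_ij / (C_ii + C_jj - C_ij).
J = C / (totals[:, None] + totals[None, :] - C)
print(np.round(J, 3))
```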
 
A new method for visualizing the relatedness of scientific areas is developed, based on measuring the overlap of researchers between areas. It is found that closely related areas have a high propensity to share a larger number of common authors. A methodology for comparing areas of vastly different sizes and for handling name homonymy is constructed, allowing for the robust deployment of this method on real data sets. A statistical analysis of the probability distributions of the common-author overlap that accounts for noise is carried out, along with the production of network maps with weighted links proportional to the overlap strength. This is demonstrated in two case studies, complexity science and neutrino physics, where the level of relatedness of the areas within each field is expected to differ greatly. The results returned by this method closely match the intuitive expectation that the broad, multidisciplinary area of complexity science possesses areas that are weakly related to each other, while the much narrower area of neutrino physics shows very strongly related areas.
 
Google's PageRank has created a new synergy in information retrieval for a better ranking of Web pages. It ranks documents depending on the topology of the graph and the weights of the nodes. PageRank has significantly advanced the field of information retrieval and keeps Google ahead of competitors in the search engine market. It has also been deployed in bibliometrics to evaluate research impact, yet few of these studies focus on the important impact of the damping factor (d) on the rankings. This paper studies how varying the damping factor in the PageRank algorithm can provide additional insight into the ranking of authors in an author co-citation network. Furthermore, we propose weighted PageRank algorithms. We select the 108 most highly cited authors in the information retrieval (IR) area from the 1970s to 2008 to form the author co-citation network, and we calculate the ranks of these 108 authors based on PageRank with the damping factor ranging from 0.05 to 0.95. In order to test the relationships between these different measures, we compare the PageRank and weighted PageRank results with citation ranking, the h-index, and centrality measures. We find that, in our author co-citation network, citation rank is highly correlated with PageRank under different damping factors and with the different weighted PageRank algorithms; citation rank and PageRank are not significantly correlated with the centrality measures; and the h-index is not significantly correlated with the centrality measures.
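
A hedged sketch of this kind of parameter sweep on a small hypothetical co-citation network, correlating PageRank under each damping factor with hypothetical citation counts (Spearman's rho); the real analysis uses 108 authors and the weighted variants.

```python
import networkx as nx
from scipy.stats import spearmanr

# Hypothetical author co-citation network and citation counts.
G = nx.Graph([("A", "B"), ("A", "C"), ("B", "C"),
              ("C", "D"), ("D", "E"), ("C", "E")])
citations = {"A": 90, "B": 60, "C": 150, "D": 30, "E": 40}

nodes = list(G)
for d in (0.05, 0.25, 0.50, 0.75, 0.95):
    pr = nx.pagerank(G, alpha=d)
    rho, _ = spearmanr([pr[n] for n in nodes],
                       [citations[n] for n in nodes])
    print(f"d = {d:.2f}: Spearman rho vs. citation counts = {rho:.2f}")
```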
 
International co-authorship relations and university-industry-government ("Triple Helix") relations have hitherto been studied separately. Using Japanese (ISI) publication data for the period 1981-2004, we were able to study both kinds of relations in a single design. In the Japanese file, 1,277,823 articles with at least one Japanese address were attributed to the three sectors, and we additionally know whether these papers were co-authored internationally. Using the mutual information in three and four dimensions, respectively, we show that the Japanese Triple Helix system has continuously been eroded at the national level. However, since the middle of the 1990s, international co-authorship relations have contributed to a reduction of the uncertainty. In other words, the national publication system of Japan has developed a capacity to retain surplus value generated internationally. In a final section, we compare these results with an analysis based on similar data for Canada. A relative uncoupling of local university-industry relations because of international collaborations is indicated in both national systems.
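
For readers less familiar with the measure, a minimal sketch of the mutual information in three dimensions, T_uig = H_u + H_i + H_g - H_ui - H_ug - H_ig + H_uig, where negative values indicate a net reduction of uncertainty at the system level; the 2x2x2 contingency cube below is hypothetical.

```python
import numpy as np

def H(p):
    """Shannon entropy in bits of a probability array."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical 2x2x2 contingency cube of publication counts over the
# university (u), industry (i), and government (g) dimensions.
counts = np.array([[[120, 30], [25, 10]],
                   [[ 40, 15], [20,  5]]], dtype=float)
p = counts / counts.sum()

T = (H(p.sum(axis=(1, 2))) + H(p.sum(axis=(0, 2))) + H(p.sum(axis=(0, 1)))
     - H(p.sum(axis=2)) - H(p.sum(axis=1)) - H(p.sum(axis=0))
     + H(p))
print(f"T_uig = {T:.4f} bits")
```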
 
It is shown that under certain circumstances, in particular for small datasets, the recently proposed citation impact indicators I3(6PR) and R(6,k) behave inconsistently when additional papers or citations are taken into consideration. Three simple examples are presented in which the indicators fluctuate strongly and the ranking of the scientists in the evaluated group is sometimes completely mixed up by minor changes in the database. The erratic behavior is traced to the specific way in which weights are attributed to the six percentile rank classes, specifically for tied papers. For 100 percentile rank classes the effects will be less serious. For the 6 classes it is demonstrated that a different way of assigning weights avoids these problems, although the non-linearity of the weights for the different percentile rank classes can still lead to (much less frequent) changes in the ranking. This behavior is not undesirable, because it can be used to correct for differences in citation behavior between fields. Remaining deviations from the theoretical value R(6,k) = 1.91 can be avoided by a new scoring rule, fractional scoring. Previously proposed consistency criteria are amended by a further property of strict independence, at which a performance indicator should aim.
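
The theoretical value R(6,k) = 1.91 can be checked directly, assuming the standard six percentile rank classes (bottom 50%, 50-75%, 75-90%, 90-95%, 95-99%, and top 1%) weighted 1 through 6:

```python
# Expected score per paper under the six percentile rank classes,
# with class shares 50%, 25%, 15%, 5%, 4%, 1% and weights 1..6.
shares  = [0.50, 0.25, 0.15, 0.05, 0.04, 0.01]
weights = [1, 2, 3, 4, 5, 6]
print(round(sum(s * w for s, w in zip(shares, weights)), 2))  # 1.91
```

Deviations from this expectation arise when tied citation counts straddle class boundaries, which is what the fractional scoring rule corrects.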
 
In addition to science citation indicators of journals, such as impact and immediacy, social network analysis provides a set of centrality measures, such as degree, betweenness, and closeness centrality. These measures are first analyzed for the entire set of 7,379 journals included in the Journal Citation Reports of the Science Citation Index and the Social Sciences Citation Index 2004, and then also in relation to local citation environments, which can be considered proxies of specialties and disciplines. Betweenness centrality is shown to be an indicator of the interdisciplinarity of journals, but only in local citation environments and after normalization, because otherwise the influence of degree centrality (size) overshadows the betweenness-centrality measure. The indicator is applied to a variety of citation environments, including policy-relevant ones such as biotechnology and nanotechnology.
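
As a rough illustration only, a sketch of computing betweenness centrality on a small hypothetical citation environment; note that networkx's pair-count normalization is not the degree-based correction described above, which is therefore not reproduced here.

```python
import networkx as nx

# Hypothetical local journal citation environment (directed citations).
G = nx.DiGraph([("J1", "J2"), ("J2", "J1"), ("J1", "J3"), ("J3", "J4"),
                ("J4", "J3"), ("J2", "J4"), ("J5", "J1"), ("J4", "J5")])

# Pair-count normalization only; the study's additional correction for
# degree centrality (size) is a separate step not shown here.
bc = nx.betweenness_centrality(G, normalized=True)
print(sorted(bc.items(), key=lambda kv: -kv[1]))
```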
 