Chapter

Metodología bibliométrica para evaluar la actividad científica [Bibliometric methodology for evaluating scientific activity]

Article
Full-text available
Process mining is a powerful technique for management and business intelligence that has attracted growing interest from the scientific community in recent years, with a noticeable increase in research in this area of knowledge. In the present work, the methodology applied comprises a set of bibliometric indicators that made it possible to quantify, visualize and evaluate the results of scientific production on process mining. The analysis, carried out on the Scopus database for the period 2002-2017, showed that the education sector, represented by universities, is the most prominent, and that Europe constitutes the core of scientific research on the topic, with the Netherlands as the country with the strongest research output on the subject examined. Regarding document type, the publication of scientific articles stands out over other research outputs, and 2015 was the most productive year. In the Americas, the United States is the leading country. This study allows us to conclude that highly industrialized countries sustain a steady scientific production on the subject in question, and that a trend is beginning to emerge in Latin America and the Caribbean to include process mining as a business process management strategy.
Article
Full-text available
Bibliometrics is a tool for carrying out trend studies, which are considered one of the services of business intelligence. Today, bibliometrics constitutes a valid instrumental area for the evaluation of scientific activity, used to determine the individual and institutional scientific performance of the scientific and academic community, scientific leadership, and emerging areas of knowledge; these aspects are considered by international programs when funding research and innovation projects, as well as by institutional policies when awarding positions and validating doctorates. The evaluative dimension of bibliometrics is framed by three variables: scientific production, scientific collaboration and scientific impact. Along these lines, the present research drew on bibliometrics, as a business intelligence tool, through a case study applied to a Cuban university. The bibliometric methodology for evaluating scientific activity, developed by the Institute of Scientific and Technological Information of Cuba, was applied. The university's scientific production was compiled from the Scopus bibliographic database for the period 2005-2012, and Bibexcel and Ucinet 6.1 were used for matrix processing and data visualization. The results showed that the most productive professors are doctors of science, with the male gender predominating. It is concluded that the application of bibliometrics as a method for trend analysis proved very useful for decision-making by the science and technology directorate of the Cuban university. Keywords: Bibliometrics. Business Intelligence. Decision-Making.
Article
Full-text available
Drawing on Scopus data, this study set out to portray the main trends and characteristics of the scientific production of Latin America and the Caribbean during the period 1996-2007. To that end, production and citation, publication profiles and scientific collaboration were analysed, with special emphasis on the region's ten largest producers. The analysis revealed a substantial increase in regional scientific production in recent years, a growth observed in almost every country and led by Brazil, which accounted for 50% of total production in 2007. The region as a whole has increased its share of world production and citation, with citation growing even faster; however, the visibility of regional science, in terms of mean citations per document, remained below the world average, and it was the smaller countries that achieved the best averages. In short, scientific activity has been characterized by intense international collaboration, which has also grown significantly, and although intraregional collaboration has experienced even greater growth, its figures remain modest.
Article
Full-text available
A psychometric test was constructed and validated to determine stress. The test, denominated the stress vulnerability questionnaire, was based on the psychometric battery of the systemic approach method for evaluating stress. An initial questionnaire composed of 63 items and 3 subscales was used to conduct a pilot study. After depuration, the final form contained 39 items and was subjected to a study of reliability and validity. A trifactorial structure coinciding with the previous design was determined. The internal consistency was 0.92 according to Cronbach's alpha and 0.90 according to the Spearman-Brown coefficient. A test-retest correlation of 0.97 was attained, and the questionnaire also correlated significantly with the external criteria of the vulnerability-neuroticism scale of Eysenck's test, Spielberger's trait anxiety, and the clinical criterion. It was concluded that the stress vulnerability questionnaire is a valid and reliable instrument to measure vulnerability to stress based on the systemic approach method.
Article
Full-text available
Based on the foundation laid by the h-index we introduce and study the R- and AR-indices. These new indices eliminate some of the disadvantages of the h-index, especially when they are used in combination with the h-index. The R-index measures the h-core’s citation intensity, while AR goes one step further and takes the age of publications into account. This allows for an index that can actually increase and decrease over time. We propose the pair (h, AR) as a meaningful indicator for research evaluation. We further prove a relation characterizing the h-index in the power law model.
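The indices summarized above can be sketched directly from their published definitions: R is the square root of the total citations received by the h-core, and AR additionally divides each h-core paper's citations by its age in years. A minimal illustration (function names are ours, not the authors'):

```python
from math import sqrt

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
    return h

def r_index(citations):
    """Square root of the total citations received by the h-core."""
    cs = sorted(citations, reverse=True)
    return sqrt(sum(cs[:h_index(citations)]))

def ar_index(citations, ages):
    """Like R, but each h-core paper's citations are divided by its age
    in years, so the index can actually decrease over time."""
    pairs = sorted(zip(citations, ages), reverse=True)
    h = h_index(citations)
    return sqrt(sum(c / a for c, a in pairs[:h]))
```

Because the age terms shrink as publications grow older, AR can fall even when citation totals do not, which is what makes the pair (h, AR) informative.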
Article
Full-text available
During the past decades, journal impact data obtained from the Journal Citation Reports (JCR) have gained relevance in library management, research management and research evaluation. Hence, both information scientists and bibliometricians share the responsibility towards the users of the JCR to analyse the reliability and validity of its measures thoroughly, to indicate pitfalls and to suggest possible improvements. In this article, ageing patterns are examined in ‘formal’ use or impact of all scientific journals processed for the Science Citation Index (SCI) during 1981-1995. A new classification system of journals in terms of their ageing characteristics is introduced. This system has been applied to as many as 3,098 journals covered by the Science Citation Index. Following an earlier suggestion by Glänzel and Schoepflin, a maturing and a decline phase are distinguished. From an analysis across all subfields it has been concluded that ageing characteristics are primarily specific to the individual journal rather than to the subfield, while the distribution of journals in terms of slowly or rapidly maturing or declining types is specific to the subfield. It is shown that the cited half life (CHL), printed in the JCR, is an inappropriate measure of decline of journal impact. Following earlier work by Line and others, a more adequate parameter of decline is calculated taking into account the size of annual volumes during a range of fifteen years. For 76 per cent of SCI journals the relative difference between this new parameter and the ISI CHL exceeds 5 per cent. The current JCR journal impact factor is proven to be biased towards journals revealing a rapid maturing and decline in impact. Therefore, a longer term impact factor is proposed, as well as a normalised impact statistic, taking into account citation characteristics of the research subfield covered by a journal and the type of documents published in it. When these new measures are combined with the proposed ageing classification system, they provide a significantly improved picture of a journal's impact compared with that obtained from the JCR.
Article
Full-text available
Personal observations and reflections on scientific collaboration and its study, past, present, and future, containing new material on motives for collaboration, and on some of its salient features. Continuing methodological problems are singled out, together with suggestions for future research.
Article
Full-text available
This paper gives an outline of a new bibliometric database based upon all articles published by authors from the Netherlands, and processed during the time period 1980–1993 by the Institute for Scientific Information (ISI) for the Science Citation Index (SCI), Social Science Citation Index (SSCI) and Arts & Humanities Citation Index (A&HCI). The paper describes various types of information added to the database: data on articles citing the Dutch publications; detailed citation data on ISI journals and subfields; and a classification system of publishing main organizations appearing in the addresses. Moreover, an overview is given of the types of bibliometric indicators that were constructed. Their relationship to indicators developed by other researchers in the field is discussed. Finally, two applications are given in order to illustrate the potential of the database and of the bibliometric indicators derived from it. The first represents a synthesis of classical macro-indicator studies on the one hand, and bibliometric analyses of research groups or institutes on the other. The second application gives for the first time a detailed analysis of a country's publication output per institutional sector.
Article
Full-text available
With the ready accessibility of bibliometric data and the availability of ready-to-use tools for generating bibliometric indicators for evaluation purposes, there is the danger of inappropriate use. Here we present standards of good practice for analyzing bibliometric data and presenting and interpreting the results. Comparisons drawn between research groups as to research performance are valid only if (1) the scientific impact of the research groups or their publications is examined using box plots, Lorenz curves, and Gini coefficients to represent the distribution characteristics of the data (in other words, going beyond the usual arithmetic mean value), (2) different reference standards are used to assess the impact of research groups, and the appropriateness of the reference standards undergoes critical examination, and (3) statistical analyses comparing citation counts take into consideration that citations are a function of many influencing factors besides scientific quality.
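For point (1), the distributional view of citation impact can be sketched with the standard sample formulas, here an illustrative computation of Lorenz-curve points and the Gini coefficient from a list of citation counts:

```python
def lorenz_points(values):
    """Cumulative share of total citations held by the bottom k papers
    (the y-coordinates of the Lorenz curve, starting at 0)."""
    vs = sorted(values)
    total = sum(vs)
    acc, points = 0, [0.0]
    for v in vs:
        acc += v
        points.append(acc / total)
    return points

def gini(values):
    """Sample Gini coefficient: 0 = perfectly even distribution,
    values near 1 = citations concentrated on very few papers."""
    vs = sorted(values)
    n, total = len(vs), sum(vs)
    cum = sum((i + 1) * v for i, v in enumerate(vs))
    return (2 * cum) / (n * total) - (n + 1) / n
```

A group whose mean citation rate is driven by one blockbuster paper will show a high Gini coefficient and a sharply bowed Lorenz curve, which is exactly the information a bare arithmetic mean hides.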
Conference Paper
Full-text available
Introduction and motivation. The evaluation of scientific activity and technological innovation has become the norm in industrialized societies. Attesting to this fact is the regular publication of technical reports that have documented such developments since the end of the 1970s. In addition to the traditional functions of evaluation (certification and detection of scientific excellence), we now see its importance as an added value in decision-making processes that involve science and technology, and as a tool used in strategically advancing systems of R&D and innovation. Evaluation plays a key role in building scientific and technological potential, making it essential for social well-being and economic competitiveness. For these reasons, Scientific Policy and Scientometrics are closely linked, and the assessment of science, technology and innovation at all levels calls for tools that permit measurement in various dimensions (Rinia, 2000). At present, politicians place great emphasis on innovation as a collective process of interaction and mutual instruction among a group of actors that form part of the system of science and technology. From the standpoint of innovation, political intervention can be justified to overcome institutional paralysis and promote energetic incentives for cooperation, learning, and adaptive conduct among all the members of the system. Such actions have two objectives. On the one hand, they attempt to resolve the "systemic failures" that reflect deficiencies in interaction aimed toward technological development (Laranja, Uyarra and Flanagan, 2007); on the other hand, they can enhance the efficiency of the system by lending it an architecture with the power of distribution of technological information and knowledge (David and Foray, 1995).
Article
Full-text available
Scientometrics cannot offer a simple consistent method for measuring the scientific eminence of individuals. The h-index method introduced by Hirsch was found applicable only for evaluating publications of senior scientists with similar publishing features. Some simple methods – using the number of citations and journal papers, and the number of citations obtained by the most frequently cited papers – are suggested and tested to demonstrate the advantages and disadvantages of such indexes. The results indicate that, when calculating scientometric indexes for individuals, self-citations should be excluded and the effect of the different bibliometric features of the field should be taken into account. The correctness of the indexes used for evaluating journal papers of individuals should also be investigated at the individual level.
Article
Full-text available
Designing fair and unbiased metrics to measure the “level of excellence” of a scientist is a very significant task because they recently also have been taken into account when deciding faculty promotions, when allocating funds, and so on. Despite criticism that such scientometric evaluators are confronted with, they do have their merits, and efforts should be spent to arm them with robustness and resistance to manipulation. This article aims at initiating the study of the coterminal citations—their existence and implications—and presents them as a generalization of self-citations and of co-citation; it also shows how they can be used to capture any manipulation attempts against scientometric indicators, and finally presents a new index, the f index, that takes into account the coterminal citations. The utility of the new index is validated using the academic production of a number of esteemed computer scientists. The results confirm that the new index can discriminate those individuals whose work penetrates many scientific communities.
Article
Full-text available
The authors apply a new bibliometric measure, the h-index (Hirsch, 2005), to the literature of information science. Faculty rankings based on raw citation counts are compared with those based on h-counts. There is a strong positive correlation between the two sets of rankings. It is shown how the h-index can be used to express the broad impact of a scholar's research output over time in more nuanced fashion than straight citation counts.
Chapter
Full-text available
This paper covers the differences between two separate bibliometric approaches, labelled ‘descriptive’ versus ‘evaluative’, or top down versus bottom up. The most important difference between these two approaches is found in the level of validity of the underlying research output. Whilst the publications in a top down approach, having a descriptive character, are collected by following general characteristics of these publications (such as country names, or fields), the consequence is that findings from such studies have a ‘meaning’ that is limited with respect to actual research assessment. On the other hand, in a bottom up approach the publications are collected from individual oeuvres of scientists, including a process of verification by the researchers involved. This procedure contributes significantly to the validity of the publication material, and consequently research assessment procedures can be based on the results of this type of bibliometric analyses. A strong focus in the paper will be on the actual application of bibliometric analysis within research assessment procedures, in particular within the UK and the Netherlands.
Chapter
Full-text available
After a review of developments in the quantitative study of science, particularly since the early 1970s, I focus on two current main lines of ‘measuring science’ based on bibliometric analysis. With the developments in the Leiden group as an example of daily practice, the measurement of research performance and, particularly, the importance of indicator standardisation are discussed, including aspects such as interdisciplinary relations, collaboration, ‘knowledge users’. Several important problems are addressed: language bias; timeliness; comparability of different research systems; statistical issues; and the ‘theory-invariance’ of indicators. Next, an introduction to the mapping of scientific fields is presented. Here basic concepts and issues of practical application of these ‘science maps’ are addressed. This contribution is concluded with general observations on current and near-future developments, including network-based approaches, necessary ‘next steps’ are formulated, and an answer is given to the question ‘Can science be measured?’
Article
Full-text available
Citations support the communication of specialist knowledge by allowing authors and readers to make specific selections in several contexts at the same time. In the interactions between the social network of (first-order) authors and the network of their reflexive (that is, second-order) communications, a sub-textual code of communication with a distributed character has emerged. The recursive operation of this dual-layered network induces the perception of a cognitive dimension in scientific communication. Citation analysis reflects on citation practices. Reference lists are aggregated in scientometric analysis using one (or sometimes two) of the available contexts to reduce the complexity: geometrical representations (‘mappings’) of dynamic operations are reflected in corresponding theories of citation. For example, a sociological interpretation of citations can be distinguished from an information-theoretical one. The specific contexts represented in the modern citation can be deconstructed from the perspective of the cultural evolution of scientific communication.
Book
This book is written for members of the scholarly research community, and for persons involved in research evaluation and research policy. More specifically, it is directed towards the following four main groups of readers: – All scientists and scholars who have been or will be subjected to a quantitative assessment of research performance using citation analysis. – Research policy makers and managers who wish to become conversant with the basic features of citation analysis, and about its potentialities and limitations. – Members of peer review committees and other evaluators, who consider the use of citation analysis as a tool in their assessments. – Practitioners and students in the field of quantitative science and technology studies, informetrics, and library and information science. Citation analysis involves the construction and application of a series of indicators of the ‘impact’, ‘influence’ or ‘quality’ of scholarly work, derived from citation data, i.e. data on references cited in footnotes or bibliographies of scholarly research publications. Such indicators are applied both in the study of scholarly communication and in the assessment of research performance. The term ‘scholarly’ comprises all domains of science and scholarship, including not only those fields that are normally denoted as science – the natural and life sciences, mathematical and technical sciences – but also social sciences and humanities.
Article
The University of São Paulo compares favourably with a middle-ranking research university in the United States. Peer review at its School of Medicine is basically honest. However, in today's world of intense specialization, most medical subdisciplines are simply too small and close-knit to allow for objective peer review from within. Moreover, the biomedical sciences have come to the point where there is no real separation between the disciplines. Most university faculty members are poorly paid public servants who have a job for life and a salary paid by the government. In São Paulo, Brazil, state universities receive around 10% of the state taxes; sooner or later they will have to have something to show for it. Following three decades of very low wages, the present system pays salaries to non-productive staff and results in poor peer review of academic productivity. Our School of Medicine receives substantial support through a private foundation set up for that purpose. An efficient use of resources should include a budgetary system that provides the faculty staff with incentives to excel in academic activities. This has been obtained by complementing the low wages through the awarding of fellowships, grants and prizes (on top of salary) in order to raise the ceiling up to international standards. Spending part of the budget on complementing university faculty salaries would require special programs, including the adoption of routines for regular assessment of academic productivity. Reliable performance indicators permitted the classification of faculty staff into quality categories in order to differentially complement their salaries according to productivity. The selection of individuals is based strictly on merit, with explicit guidelines helping the faculty to understand the criteria by which their requests will be judged. Budgetary priorities should be intended to enhance the ability to deliver high-quality health care to patients and to provide medical education as a whole, and to combine a more equitable distribution of clinical and basic research. Thus, several parameters were defined for monitoring productivity in health-related areas, including the activities of the Clinical Hospital's medical labour force, in addition to the formal academic career at the School of Medicine. The trends for the performance indicators presented here can be useful to inform the decision process about policies directed at health science research, medical education and public health priorities, because the parameters employed were intended to extend the analyses of the results of academic activity (frequently based on publication outputs) to its educational impact and the quality of health care to patients. This evaluation program, and the resulting focused investments, have played an important role in boosting morale in the academic community by offering the right combination of pressure and incentives to allow dramatic improvement in productivity. The implementation of the evaluation method proposed here should, thanks to its objective approach, free Deans and managers of the usual political issues involved in decision-making.
Article
The publications by the Spanish scientists recorded in eight international databases in the years 1978 and 1983 are retrieved. Science indicators able to give a perception of the scientific productivity, the institutions involved, the habits of publishing in foreign or domestic journals and co-authorship are presented. The changes observed in these indicators in the two analysed years are examined and the trend in the evolution of the Spanish science is shown. The time delay in recording items by the databases and coverage of the Spanish journals are also studied.
Article
An interpretation of citation practice in scientific literature is offered which regards citation of a document as an act of symbol usage. By examining the language of the text around the footnote number, the particular idea the citing author is associating with the cited document may be determined. The document is viewed as symbolic of the idea expressed in the text. This analysis was done for a sample of very highly cited documents in chemistry. A high degree of uniformity is revealed in the association of specific concepts with specific documents. These documents may be seen, in Leach's terms, as 'standard symbols' for particular ideas, methods, and experimental data in chemical science. Some implications of these findings for the social determination of scientific knowledge (conceived as a dialogue among citing authors on the 'meaning' of earlier texts), and the relationship between cited documents as concept symbols and Kuhn's exemplars, are discussed.
Article
The paper discusses the strengths and limitations of ‘metrics’ and peer review in large-scale evaluations of scholarly research performance. A real challenge is to combine the two methodologies in such a way that the strength of the first compensates for the limitations of the second, and vice versa. It underlines the need to systematically take into account the unintended effects of the use of metrics. It proposes a set of general criteria for the proper use of bibliometric indicators within peer-review processes, and applies these to a particular case: the UK Research Assessment Exercise (RAE). Copyright , Beech Tree Publishing.
Article
The Barnaby Rich effect is defined as a high output of scientific writings accompanied by complaints on the excessive productivity of other authors.
Article
On-line interactive literature searching systems have “come of age” and have revolutionized information retrieval techniques. They are now widely used for subject-oriented searching. Much more than subject information is available in most of the data bases currently available, such as author names, corporate affiliations, journal titles, and CODEN. These are useful for bibliometric-type studies, that is, quantitative analysis of the bibliographic features of a body of literature. Several examples are given, including journal comparison studies, corporate affiliation studies, and statistical studies. Inconsistencies and errors in data bases become important, and the searcher must be alert to their existence. Indexing policies of the different data bases must also be taken into consideration.
Article
A quantitative estimate is made of the magnitude of the world's scientific and technical journal literature problem. Using a number of basic sources of statistical information, a composite picture is established to show such things as the total volume, linguistic and national origins, breakdown by subject field, and degree of coverage by the abstracting and indexing services.
Article
The g-index is introduced as an improvement of the h-index of Hirsch to measure the global citation performance of a set of articles. If this set is ranked in decreasing order of the number of citations received, the g-index is the (unique) largest number g such that the top g articles together received at least g² citations. We prove the unique existence of g for any set of articles, and we have that g ≥ h. The general Lotkaian theory of the g-index is presented, and we show that g = ((α − 1)/(α − 2))^((α − 1)/α) · T^(1/α), where α > 2 is the Lotkaian exponent and T denotes the total number of sources. We then present the g-index of the (still active) Price medallists for their complete careers up to 1972 and compare it with the h-index. It is shown that the g-index inherits all the good properties of the h-index and, in addition, better takes into account the citation scores of the top articles. This yields a better distinction between, and ordering of, the scientists from the point of view of visibility.
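The defining condition of the g-index lends itself to a direct computation. A minimal sketch based on the definition in the abstract (the function name is illustrative):

```python
def g_index(citations):
    """Largest g such that the top g most-cited papers together
    received at least g*g citations (g restricted to the paper count)."""
    cs = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(cs, start=1):
        total += c
        if total >= i * i:
            g = i
    return g
```

For the citation list [10, 5, 3, 2, 1] the h-index is 3, while the cumulative totals 10, 15, 18, 20 satisfy the g² condition up to g = 4, illustrating g ≥ h: the g-index rewards the heavy citation weight of the top papers that the h-index ignores.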
Article
A Cumulative Advantage Distribution is proposed which models statistically the situation in which success breeds success. It differs from the Negative Binomial Distribution in that lack of success, being a non-event, is not punished by increased chance of failure. It is shown that such a stochastic law is governed by the Beta Function, containing only one free parameter, and this is approximated by a skew or hyperbolic distribution of the type that is widespread in bibliometrics and diverse social science phenomena. In particular, this is shown to be an appropriate underlying probabilistic theory for the Bradford Law, the Lotka Law, the Pareto and Zipf Distributions, and for all the empirical results of citation frequency analysis. As side results one may derive also the obsolescence factor for literature use. The Beta Function is peculiarly elegant for these manifold purposes because it yields both the actual and the cumulative distributions in simple form, and contains a limiting case of an inverse square law to which many empirical distributions conform.
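The "success breeds success" mechanism can be illustrated with a toy simulation (our sketch, not the paper's model: the paper derives the Beta-function-based cumulative advantage distribution analytically, whereas this just shows the qualitative effect that prior success raises the chance of further success while lack of success is not penalised):

```python
import random

def cumulative_advantage(n_papers, n_citations, seed=0):
    """Distribute citations one at a time, where each new citation goes
    to paper i with probability proportional to (its current count + 1).
    The +1 keeps uncited papers eligible, so failure is a non-event
    rather than a penalty. Returns the final citation counts."""
    rng = random.Random(seed)
    counts = [0] * n_papers
    for _ in range(n_citations):
        weights = [c + 1 for c in counts]
        i = rng.choices(range(n_papers), weights=weights)[0]
        counts[i] += 1
    return counts
```

Running this with many papers typically yields the skewed, hyperbolic-looking count distributions that the abstract associates with the Bradford, Lotka, Pareto and Zipf laws.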
Article
We investigated how citations from documents labeled by the Institute for Scientific Information (ISI) as “editorial material” contribute to the impact factor of academic journals in which they were published. Our analysis is based on records corresponding to the documents classified by the ISI as editorial material published in journals covered by the Social Sciences Citation Index between 1999 and 2003 (50,273 records corresponding to editorial material published in 2,374 journals). The results appear to rule out widespread manipulation of the impact factor by academic journals publishing large amounts of editorial material with many citations to the journal itself as a strategy to increase the impact factor.
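As a reminder of the quantity at stake, the two-year journal impact factor can be sketched as follows. This is a simplified illustration with hypothetical data shapes; the actual ISI/JCR computation applies specific document-type rules (e.g. which items count as "citable" in the denominator), which is precisely why citations to editorial material matter:

```python
def impact_factor(citations_to_journal, citable_items, year):
    """Two-year impact factor: citations received in `year` to items the
    journal published in the two preceding years, divided by the number
    of citable items published in those years.

    citations_to_journal: dict (citing_year, cited_pub_year) -> count
    citable_items: dict pub_year -> number of citable items
    """
    prev = (year - 1, year - 2)
    cites = sum(citations_to_journal.get((year, y), 0) for y in prev)
    items = sum(citable_items.get(y, 0) for y in prev)
    return cites / items
```

Citations from editorial material enter the numerator, while editorial material itself is usually excluded from the denominator, which is the asymmetry the study examines.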
Article
Because of the widespread use of citations in evaluation, we tend to think of them primarily as a form of colleague recognition. This interpretation neglects rhetorical factors that shape patterns of citations. After reviewing sociological theories of citation, this paper argues that we should think of citations first as rhetoric and second as reward. Some implications of this view for quantitative modeling of the citation process are drawn.
Article
The scope and significance of the field of informetrics is defined and related to the earlier fields of bibliometrics and scientometrics. The phenomena studied by informetricians are identified. The major contributors to the field in the past are described and current emphases are related to the contributions in this Special Issue.
Article
This paper introduces the Hirsch spectrum (h-spectrum) for analyzing the academic reputation of a scientific journal. The h-spectrum is a novel tool based on the Hirsch (h) index. It is easy to construct: considering a specific journal in a specific interval of time, the h-spectrum is defined as the distribution of the h-indexes associated with the authors of the journal's articles. This tool makes it possible to define a reference profile of the typical author of a journal, to compare different journals within the same scientific field, and to provide a rough indication of the prestige/reputation of a journal in the scientific community. An h-spectrum can be associated with every journal. Ten specific journals in the Quality Engineering/Quality Management field are analyzed in order to preliminarily investigate the characteristics of the h-spectrum.
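The construction described above can be sketched directly (function names and input shape are illustrative; in practice each author's citation record would come from a bibliographic database):

```python
from collections import Counter

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
    return h

def h_spectrum(author_citations):
    """h-spectrum of a journal over some time window.

    author_citations: dict mapping each contributing author to the list
    of citation counts of that author's publications.
    Returns a Counter: h value -> number of authors with that h-index."""
    return Counter(h_index(cs) for cs in author_citations.values())
```

The resulting distribution is the "reference profile" of the journal's typical author: two journals in the same field can then be compared by where the mass of their spectra sits.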
Article
As the costs of certain types of scientific research have escalated and as growth rates in overall national science budgets have declined, so the need for an explicit science policy has grown more urgent. In order to establish priorities between research groups competing for scarce funds, one of the most important pieces of information needed by science policy-makers is an assessment of those groups' recent scientific performance. This paper suggests a method for evaluating that performance. After reviewing the literature on scientific assessment, we argue that, while there are no simple measures of the contributions to scientific knowledge made by scientists, there are a number of ‘partial indicators’ — that is, variables determined partly by the magnitude of the particular contributions, and partly by ‘other factors’. If the partial indicators are to yield reliable results, then the influence of these ‘other factors’ must be minimised. This is the aim of the method of ‘converging partial indicators’ proposed in this paper. We argue that the method overcomes many of the problems encountered in previous work on scientific assessment by incorporating the following elements: (1) the indicators are applied to research groups rather than individual scientists; (2) the indicators based on citations are seen as reflecting the impact, rather than the quality or importance, of the research work; (3) a range of indicators are employed, each of which focusses on different aspects of a group's performance; (4) the indicators are applied to matched groups, comparing ‘like’ with ‘like’ as far as possible; (5) because of the imperfect or partial nature of the indicators, only in those cases where they yield convergent results can it be assumed that the influence of the ‘other factors’ has been kept relatively small (i.e. the matching of the groups has been largely successful), and that the indicators therefore provide a reasonably reliable estimate of the contribution to scientific progress made by different research groups. In an empirical study of four radio astronomy observatories, the method of converging partial indicators is tested, and several of the indicators (publications per researcher, citations per paper, numbers of highly cited papers, and peer evaluation) are found to give fairly consistent results. The results are of relevance to two questions: (a) can basic research be assessed? (b) more specifically, can significant differences in the research performance of radio astronomy centres be identified? We would maintain that the evidence presented in this paper is sufficient to justify a positive answer to both these questions, and hence to show that the method of converging partial indicators can yield information useful to science policy-makers.