Scientific and technical knowledge is cumulative and international in nature. The collection, storage, processing, and dissemination of information is now of strategic national and international importance, and such information facilitates socioeconomic development within countries. Governments are establishing national information policies to manage their accumulating data more effectively, and international information cooperation is also under way. International programmes for cooperation in the field of data collection and dissemination are described. International professional groups are also fostering international information cooperation.
This paper reviews a selection of international collaborative efforts in the production of information services and attempts to characterize modes of cooperation. Information systems specifically discussed include: the International Nuclear Information System (INIS); Nuclear Science Abstracts (NSA); EURATOM; AGRIS; AGRINDEX; Information Retrieval Limited (IRL); the International Food Information Service (IFIS); Chemical Abstracts Service (CAS); MEDLARS; and TITUS. Three methods of international information transfer are discussed: commercial transactions; negotiated (bilateral) barter arrangements; and contribution to internationally managed systems. Technical, economic, and professional objectives support the rationale for international cooperation. It is argued that economic and political considerations, as much as improved technology or information transfer, will determine the nature of collaboration in the future.
The issue of duplicate publications has received a great deal of attention in the medical literature, but much less in the information science community. This paper analyzes the prevalence and scientific impact of duplicate publications across all fields of research between 1980 and 2007, using a definition of duplicate papers based on their metadata. It shows that, in all fields combined, the prevalence of duplicates is about one in two thousand papers, and is higher in the natural and medical sciences than in the social sciences and humanities. A very high proportion (>85%) of these papers are published in the same year or one year apart, which suggests that most duplicate papers were submitted simultaneously. Furthermore, duplicate papers are generally published in journals with impact factors below the average of their field and obtain fewer citations. This paper provides clear evidence that the prevalence of duplicate papers is low and, more importantly, that the scientific impact of such papers is below average.
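A metadata-based definition of a duplicate can be sketched in a few lines. The field names, normalisation and one-year window below are illustrative assumptions, not the paper's exact operationalisation:

```python
def normalise(s):
    """Lower-case and strip punctuation so superficial differences don't mask a match."""
    return "".join(ch for ch in s.lower() if ch.isalnum() or ch.isspace()).split()

def is_duplicate(rec_a, rec_b, max_year_gap=1):
    """Flag two records as duplicates when the title, first author and
    publication year (within a small gap) all coincide."""
    same_title = normalise(rec_a["title"]) == normalise(rec_b["title"])
    same_author = normalise(rec_a["first_author"]) == normalise(rec_b["first_author"])
    close_years = abs(rec_a["year"] - rec_b["year"]) <= max_year_gap
    return same_title and same_author and close_years
```

For example, two records differing only in punctuation, capitalisation and a one-year gap would be flagged, while the same pair three years apart would not.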
The aggregated journal-journal citation matrix derived from the Journal Citation Reports 2001 can be decomposed into a unique subject classification by using the graph-analytical algorithm of bi-connected components. This technique was recently incorporated in software tools for social network analysis. The matrix can be assessed in terms of its decomposability using articulation points, which indicate overlap between the components. The articulation points of this set did not exhibit a next-order network of 'general science' journals. However, the clusters differ in size and in terms of the internal density of their relations. A full classification of the journals is provided in an Appendix. The clusters can also be extracted and mapped for visualization.
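Bi-connected components are delimited by articulation points, i.e. nodes whose removal disconnects part of the network. The abstract notes the computation was done with social-network-analysis software; purely as an illustration, here is a Hopcroft-Tarjan sketch in plain Python on a toy adjacency dict (the node names are invented, not JCR journals):

```python
def articulation_points(adj):
    """Return the articulation (cut) points of an undirected graph given as
    an adjacency dict {node: set(neighbours)}, via Hopcroft-Tarjan DFS."""
    disc, low, parent, aps = {}, {}, {}, set()
    timer = [0]

    def dfs(u):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v not in disc:
                parent[v] = u
                children += 1
                dfs(v)
                low[u] = min(low[u], low[v])
                # u is a cut point if some subtree cannot reach above u
                if parent.get(u) is not None and low[v] >= disc[u]:
                    aps.add(u)
            elif v != parent.get(u):
                low[u] = min(low[u], disc[v])
        # a DFS root is a cut point iff it has two or more children
        if parent.get(u) is None and children > 1:
            aps.add(u)

    for node in adj:
        if node not in disc:
            parent[node] = None
            dfs(node)
    return aps
```

On two triangles sharing a single node, that shared node is the only articulation point; the two triangles are the bi-connected components.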
The increasing flood of documentary information through the Internet and other information sources challenges the developers of information retrieval systems. It is not enough that an IR system is able to make a distinction between relevant and non-relevant documents. The reduction of information overload requires that IR systems provide the capability of screening the most valuable documents out of the mass of potentially or marginally relevant documents. This paper introduces a new concept-based method to analyze the text characteristics of documents at varying relevance levels. The results of the document analysis were applied in an experiment on query expansion (QE) in a probabilistic IR system.
This article investigates how consistent different newspapers are in their choice of words when writing about the same news events. News articles on the same news events were taken from three Finnish newspapers and compared in regard to their central concepts and the words representing those concepts in the news texts. Consistency figures were calculated for each set of three articles (the total number of sets was sixty). Inconsistency in words and concepts was found between news articles from different newspapers. The mean value of consistency calculated on the basis of words was 65%; this, however, depended on article length. For short news wires consistency was 83%, while for long articles it was only 47%. At the concept level, consistency was considerably higher, ranging from 92% to 97% between short and long articles. The articles also represented three categories of topic (event, process and opinion). Statistically significant differences in consistency were found in regard to length but not in regard to the categories of topic. We argue that this inconsistency of expression is a clear sign of a retrieval problem and that query expansion based on semantic relationships can significantly improve retrieval performance on free-text sources.
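The abstract does not give the study's exact consistency formula; one plausible word-level measure (overlap of distinct words relative to the smaller vocabulary) can be sketched as:

```python
def word_consistency(text_a, text_b):
    """Share of distinct words common to two texts, relative to the smaller
    vocabulary. An illustrative overlap measure, not necessarily the one
    used in the study."""
    words_a = set(text_a.lower().split())
    words_b = set(text_b.lower().split())
    if not words_a or not words_b:
        return 0.0
    return len(words_a & words_b) / min(len(words_a), len(words_b))
```

Two short wire-style sentences about the same event typically share most content words, giving a figure in the range the study reports for short items.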
In this paper we give a synoptic view of the growing text processing technology of information extraction (IE), whose function is to extract information about a pre-specified set of entities, relations or events from natural language texts and to record this information in structured representations called templates. Here we describe the nature of the IE task, review the history of the area from its origins in AI work in the 1960s and 1970s to the present, discuss the techniques being used to carry out the task, describe application areas where IE systems are or soon will be at work, and conclude with a discussion of the challenges facing the area. What emerges is a picture of an exciting new text processing technology with a host of new applications, both on its own and in conjunction with other technologies such as information retrieval, machine translation and data mining.
In this paper we report on a theoretical model of structured document indexing and retrieval based on Dempster-Shafer's Theory of Evidence. This includes a description of our model of structured document retrieval, the representation of structured documents, the representation of individual components, how components are combined, details of the combination process, and how relevance is captured within the model. We also present a detailed account of an implementation of the model, and an evaluation scheme designed to test the effectiveness of our model. Finally we report on the details and results of a series of experiments performed to investigate the characteristics of the model.
The evaluation of an implication by Imaging is a logical technique developed in the framework of modal logic. Its interpretation in the context of a "possible worlds" semantics is very appealing for IR. In 1989, Van Rijsbergen suggested its use for solving one of the fundamental problems of logical models in IR, the evaluation of the implication d --> q (where d and q are respectively a document and a query representation). Since then, others have tried to follow that suggestion proposing models and applications, though without much success. Most of these approaches had as their basic assumption the consideration that "a document is a possible world". We propose instead an approach based on a completely different assumption: "a term is a possible world". This approach enables the exploitation of term-term relationships which are estimated using an information theoretic measure.
Large numbers of Africans still living in rural areas are considerably influenced by oral tradition, and a great deal of information can therefore be obtained through this form of communication. However, this kind of material has been largely neglected by librarians in Africa. Although a few centres exist in some countries where the oral tradition is collected, organised and disseminated, a number of obstacles prevent these centres from achieving their aims. The activities of some of these centres are discussed and some of the associated problems highlighted, with proposals made for their solution. The paper concludes that librarians in Africa must place a greater emphasis on oral tradition as a supplement to documentary sources. Oral tradition is an integral part of Africa's heritage and it would be criminal to let it disappear.
The implementation of hierarchic agglomerative methods of cluster analysis for large datasets is very demanding of computational resources when implemented on conventional computers. The ICL Distributed Array Processor (DAP) allows many of the scanning and matching operations required in clustering to be carried out in parallel. Experiments are described using the single linkage and Ward's hierarchical agglomerative clustering methods on both real and simulated datasets. Clustering runs on the DAP are compared with the most efficient algorithms currently available implemented on an IBM 3083 BX. The DAP is found to be 2.9-7.9 times as fast as the IBM, the exact degree of speed-up depending on the size of the dataset, the clustering method, and the serial clustering algorithm that is used. An analysis of the cycle times of the two machines is presented which suggests that further, very substantial speed-ups could be obtained from array processors of this type if they were to be based on more powerful processing elements.
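The DAP hardware is not reproducible here, but the single linkage method itself is simple to state. A naive serial O(n^3) sketch of the agglomerative procedure the experiments parallelised (pure Python, invented sample points):

```python
def single_linkage(points, n_clusters):
    """Naive single-linkage agglomerative clustering on n-dimensional
    points (tuples). Repeatedly merges the two clusters whose closest
    members are nearest, until n_clusters remain."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    clusters = [[p] for p in points]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # single linkage: inter-cluster distance is the minimum
                # pairwise distance between members
                d = min(dist(p, q) for p in clusters[i] for q in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i].extend(clusters[j])
        del clusters[j]
    return clusters
```

The repeated all-pairs distance scans inside the loop are exactly the operations that a parallel array processor can evaluate simultaneously.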
Thirty-seven years as a member of the Editorial Board of a learned journal is a remarkable record in any circumstances. To have served in that capacity in a field which has undergone such rapid and spectacular development as library and information management, and to have exerted upon it a consistently constructive influence for so long, is cause for celebration beyond the normal. It is fitting, therefore, that his colleagues should offer this Festschrift issue of the Journal of Documentation to Geoffrey Woledge as a tribute not only to his part in maintaining the high academic standards which Aslib has always prescribed for its premier journal, but also to his distinguished career in librarianship which has had a lasting effect upon many aspects of our professional scene. Having worked harmoniously with him as my principal guide and mentor for some fifteen of the twenty-eight years of my directorship of Aslib, I am delighted to be associated with this acknowledgement of his work.
Purpose - The purpose of this article is to explore the concept of information culture, and to demonstrate its utility when considering information management in organisations. Design/methodology/approach - Case studies were conducted of organisations with similar functions, located in regions likely to have different cultural dimensions. Findings - The findings show that different values and attitudes to information are influencing factors of the information culture in the organisations studied. Practical implications - Knowledge and understanding of the features of information culture will assist with addressing the challenges of organisational information management in this globalised age. Originality/value - This research adds to the body of knowledge about information culture, in particular national dimensions.
Synopsis journals have been suggested in recent years as a possible solution to some of the problems of scholarly journal publishing. In a synopsis journal, the conventional printed version contains a one- or two-page summary of the paper, possibly including one or two diagrams, tables or references. The full paper appears in microfiche or miniprint directly from the typescript, or it is archived and photocopies are made available on request. Typically, the full paper has the conventional layout of a scholarly paper with a short abstract at the start, but the synopsis as such does not appear in the full paper. (Miniprint is printing in reduced size, usually with four or nine typescript pages on one printed page. A magnifying glass is required to read it.)
Purpose - This paper aims to discuss the history of online searching through the views of one of its pioneers.
Design/methodology/approach - The paper presents, and comments on, the recollections of Jim Hall, one of the earliest UK-based operators of, and writers on, online retrieval systems.
Findings - The paper gives an account of the development of online searching in the UK during the 1960s and 1970s.
Originality/value - The paper presents the perspective of one of the pioneers of online searching.
Searching behaviour in a university library is studied using a holistic approach, encompassing the use of bibliographic tools and shelf browsing. The present study is designed as the first half of a 'before and after' study to permit the evaluation of the impact of a future online catalogue on users' searching behaviour. A combined methodology was devised: searchers were encouraged to talk aloud during their search, and this information, together with some probing and real time expert interpretation, enabled the experimenter to record the searching activity on a highly structured observation form. The study reveals the extent of subject searching activity, and suggests that this may have been underestimated in previous studies. The analysis of expressed topics, search formulation strategy and documents retrieved reveals the adaptive nature of the subject searching process, whereby the user adapts to the structure of the available tools. The information retrieval task in a traditional library system is tailored by the system to a single, one dimensional, sequential process. It is suggested that a major obstacle to subject searching effectiveness may lie in the lack of interaction between the different possible approaches in the searching process: the indexing language, the classification, and the titles. It is to be hoped that a future online searching environment will encourage a more truly interactive approach to subject searching.
Recent work has shown that potentially useful predictions of the circulation of library materials can be made which do not require very restrictive assumptions about underlying probability distributions. In the same spirit, we here consider one of the classic problems of bibliometrics, viz. predicting the number of 'new' journals carrying 'relevant' articles in the future, using both established parametric approaches and the newer, empirical methods.
The establishment of the British Library (BL) from the Parry Report of 1967, through Dainton in 1969, to the White Paper of 1971 and the Act of 1972, was contemporaneous with the formation of the polytechnics. The latter had their origins in the White Paper of 1966, A Plan for Polytechnics. The majority of the polytechnics were formed in 1970/71. Their libraries have grown, without central government assistance, to respectable size.
The objective of the paper is to amalgamate theories of text retrieval from various research traditions into a cognitive theory for information retrieval interaction. Set in a cognitive framework, the paper outlines the concept of polyrepresentation applied to both the user's cognitive space and the information space of IR systems. The concept seeks to represent the current user's information need, problem state, and domain work task or interest in a structure of causality. Further, it implies that we should apply different methods of representation and a variety of IR techniques of different cognitive and functional origin simultaneously to each semantic full-text entity in the information space. The cognitive differences imply that by applying cognitive overlaps of information objects, originating from different interpretations of such objects through time and by type, the degree of uncertainty inherent in IR is decreased. Polyrepresentation and the use of cognitive overlaps are associated with, but not identical to, data
Classification, indexing and abstracting can all be regarded as summarisations of the content of a document. A model of text comprehension by indexers (including classifiers and abstractors) is presented, based on task descriptions which indicate that the comprehension of text for indexing differs from normal fluent reading in respect of: operational time constraints, which lead to text being scanned rapidly for perceptual cues to aid gist comprehension; comprehension being task oriented rather than learning oriented, and being followed immediately by the production of an abstract, index, or classification; and the automaticity of processing of text by experienced indexers working within a restricted range of text types. The evidence for the interplay of perceptual and conceptual processing of text under conditions of rapid scanning is reviewed. The allocation of mental resources to text processing is discussed, and a cognitive process model of abstracting, indexing and classification is described.
The Sterling C. Evans Library, Texas A & M University, holds over 3,000,000 microforms. As many of the Evans microform collections are not catalogued, access to them can be perplexing to patrons. To ease that problem, the microtext staff created Guide to the Microform Collections in the Sterling C. Evans Library, which describes the microform materials currently housed in six different departments of the Library. Entries are arranged alphabetically by title and are identified by format. The Guide allows patrons to examine a scope note for each set, to discover indexes which enable efficient use of various sets, and to search for microform materials by subject. Call numbers for microform materials, locations, indexes with appropriate call numbers, and subject headings are integral parts of each listing. In addition to describing the current collection, the Guide provides an effective means of assessing collection strengths or weaknesses. The article presents information on selecting materials for inclusion in the Guide, content and form of entries, and updating the Guide.
The idea of conceptual mapping goes back to the semantic differential and conceptual clustering. Using multivariate statistical techniques, one can map a dispersion of texts onto another dispersion of their content indicators, such as keywords. The resulting configurations of texts/indicators differ from one another according to their meaning, expressed in terms of co-ordinates of a semantic field. We suggest that by using principal component analysis, one can design a user-friendly semantic space which can be navigated. Further, to learn the names of embedded magnitudes in semantic space, the idea of conceptual clustering is used in a broader context. This is a two-mode statistical approach, grouping both documents and their index terms at the same time. By observing the agglomerations of narrower, related terms over a corpus, one arrives at broader, more general thesaurus entries which denote and conceptualise the major dimensions of semantic space.
This paper argues for precision in citing. It proposes an approach to citation analysis that would encourage that precision, and contrasts this approach with one based on inferred motivations for citing. It argues that it is misguided to classify citation-signallers by the nature of the motivations inferred to underlie their use, the inference process being both philosophically and methodologically less satisfactory than the approach this paper proposes; and that, moreover, it is unnecessary, since certain formal characteristics of citation-signallers of themselves provide ample means of classification.
In the post-war years 1945-50, university and other large research libraries were confronted both by new problems and new opportunities. First, university libraries had to provide for greatly increased student populations, swelled by returning ex-servicemen and women; secondly, the supply of foreign books was uncertain, unreliable and subject to the bureaucratic delays of import controls; and thirdly, the atmosphere of post-war reconstruction called for new and more structured approaches to the provision of scientific information. For their effective resolution, these challenges required group consideration and communal action. Amongst academic librarians, there was a widespread but ill-focused feeling that if the problems of the day were to be successfully tackled and the opportunities seized, the Library Association was not the most suitable medium through which to address them. It was evident that public library affairs had achieved an overwhelming dominance in its collective attitudes and actions. The University and Research Section, the principal channel through which academic libraries input their views, cut little ice with the powerful Council of the Association. Indeed, the Section was at loggerheads with the Council over several matters and itself was far from united. Although it could still be said to represent the university library interest, in the fast-growing post-war educational scene its membership had become more diffuse and its purposes less distinct. A number of librarians had come to believe that there was a positive need for an authoritative body that could speak for large national and university libraries and represent their collective views to governmental and other organizations.
An indexing language is made more accessible to searchers and indexers by the presence of entry terms or near-synonyms. This paper first presents an evaluation of existing entry terms and then presents and tests a strategy for creating entry terms. The key tools in the evaluation of the entry terms are documents already indexed into the Medical Subject Headings (MeSH) and an automatic indexer. If the automatic indexer can better map the title to the index terms with the use of entry terms than without entry terms, then the entry terms have helped. Sensitive assessment of the automatic indexer requires the introduction of measures of conceptual closeness between the computer and human output. With the tools described in this paper, one can systematically demonstrate that certain entry terms have ambiguous meanings. In the selection of new entry terms another controlled vocabulary or thesaurus, called the Systematized Nomenclature of Medicine (SNOMED), was consulted. An algorithm for mapping terms from SNOMED to MeSH was implemented and evaluated with the automatic indexer. The new SNOMED-based entry terms did not help indexing but did show how new concepts might be identified which would constitute meaningful amendments to MeSH. Finally, an improved algorithm for combining two thesauri was applied to the Computing Reviews Classification Structure (CRCS) and MeSH. CRCS plus MeSH supported better indexing than did MeSH alone.
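The evaluation hinges on whether entry terms let an automatic indexer map titles to preferred headings. A toy stand-in for that idea (the matching here is plain substring lookup, and the vocabulary is a hypothetical example, not actual MeSH or SNOMED data):

```python
def index_title(title, preferred_terms, entry_terms=None):
    """Map a title to controlled-vocabulary headings, optionally routing
    through entry terms (near-synonyms) first. A deliberately simple
    stand-in for the automatic indexer described in the paper."""
    entry_terms = entry_terms or {}
    text = title.lower()
    hits = set()
    # entry terms redirect everyday phrasing to the preferred heading
    for phrase, heading in entry_terms.items():
        if phrase in text:
            hits.add(heading)
    # direct matches on the preferred headings themselves
    for heading in preferred_terms:
        if heading.lower() in text:
            hits.add(heading)
    return hits
```

In this toy setting, a title phrased in lay terms is only indexed correctly once the entry term is present, which mirrors the paper's test of whether an entry term "has helped".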
The Subject-Object Relationship Interface model (SORI) described in this paper is a novel approach that displays many of the structures necessary to map between the conceptual level and the external level in a database management system, which is an information-oriented view of data. The model embodies a semantic synthesiser, which is based on an algorithm that maps the syntactic representation of a tuple or a record onto a semantic representation. This is based on table-driven semantics which are embedded in the database model. The paper introduces a technique for translating tuples into natural language sentences, and discusses a system that has been fully implemented in PROLOG.
The aim of legal deposit is to ensure the preservation of and access to a nation’s intellectual and cultural heritage over time. There is a global trend towards extending legal deposit to cover digital publications in order to maintain comprehensive national archives. However, including digital publications in legal deposit regulations is not enough to ensure the long-term preservation of these publications. Indeed, there are many practical difficulties associated with the entire deposit process. Concepts, principles and practices that are accepted and understood in the print environment, such as publication, publisher, place of publication and edition, may have new meanings or no longer be appropriate in a networked environment. Mechanisms for identifying, selecting and depositing digital material either do not exist or are inappropriate for some kinds of digital publication. A great deal of work is being done on developing digital preservation strategies, although it is still at an early stage. National and other deposit libraries are at the forefront of research and development in this area, often in partnership with other libraries, publishers and technology vendors. Most of this activity is of a technical nature. There is some work on developing policies and strategies for managing digital resources. However, not all management issues or users’ needs are being addressed.
Study of information retrieval (IR) interaction from the viewpoint of an appropriate discipline of human communication, such as semiotics, should be useful. Application of semiotic categories to IR reveals that the basic distinction in the retrieval interaction is between the two particular types of "language games" (speech acts) known as "denotations" and "prescriptions". The denotative act in IR is needed to transmit information from the database to the user of the system. The prescriptive act, however, can be used to "invent" new connections between documents that constitute documentation systems and, thus, to create new knowledge. The research project being carried out by the present author applies semiotic concepts and tools to the IR systems design problem. IR systems design practice is viewed as a social practice in which the main disjunction is between the two conflicting acts of denotation and prescription. It is the aim of the reported project to balance these two conflicting language games within the framework of the Okapi experimental information retrieval system.
The study of popular reading habits is in many ways an important one. While the reading habits of the elite form the leading edge of intellectual thought, the vast majority of humanity have had, in the past as well as the present, different habits and aims. Popular literature has been bought right from the beginning by its readers, but from the seventeenth century there has been an interest in it from above, and from the nineteenth century some attempt to study it in detail. In order to recover the reading habits of a real community (Ulster) between 1700 and 1900, a number of methodologies were examined, and the conclusion was reached that a full examination of contemporary evidence was of the utmost importance. Of great use were several advertisements specifically aimed at the unsophisticated reader, dating from the mid-eighteenth to the mid-nineteenth century. The material recovered from these agreed well with other evidence. In addition, a contemporary eighteenth-century classification of the physical types of popular reading material was found.
This paper provides an introduction to the use of n-grams in textual information systems, where an n-gram is a string of n, usually adjacent, characters extracted from a section of continuous text. Applications that can be implemented efficiently and effectively using sets of n-grams include spelling error detection and correction, query expansion, information retrieval with serial, inverted and signature files, dictionary look-up, text compression, and language identification.
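For concreteness, here is a minimal sketch of n-gram extraction together with one of the listed applications, approximate string matching for spelling-error detection. The edge padding and the Dice similarity measure are common illustrative choices, not prescriptions from the paper:

```python
def char_ngrams(text, n=3, pad=" "):
    """Extract the overlapping character n-grams of a text, padded at the
    edges so that word boundaries are represented."""
    padded = pad * (n - 1) + text + pad * (n - 1)
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def ngram_similarity(a, b, n=2):
    """Dice coefficient on the bigram sets of two strings: a robust way
    to detect near-miss spellings without exact matching."""
    ga, gb = set(char_ngrams(a, n)), set(char_ngrams(b, n))
    return 2 * len(ga & gb) / (len(ga) + len(gb))
```

A misspelling such as "recieve" shares most of its bigrams with "receive", so it scores far higher than an unrelated word would; the same idea underlies n-gram dictionary look-up and language identification.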
Purpose - To review critically the applicability of Grounded Theory. Design/methodology/approach - Two perspectives are used: that of the author's personal experience and that of the internal pros and cons of Grounded Theory. Findings - Grounded Theory is called into question regarding problems with pre-understanding, with everyday knowledge, with disconnection of context, and with coding procedure. Practical implications - It is important to think twice before using Grounded Theory in spite of its promising features at the outset. Originality/value - Empirically and theoretically founded critique of Grounded Theory.
This paper describes research that aims to define the information needs of mobile individuals, to implement a mobile information system that can satisfy those needs, and finally to evaluate the performance of that system with end-users.
The Knowledge Warehouse project, an exercise in collecting, storing and re-using the electronic versions of published text, is briefly described. Some of the factors affecting library and scholarly use of electronic archives and electronically delivered documents are discussed.
Purpose - This philosophical essay aims to explore the concept of information science.
Design/methodology/approach - The philosophical argumentation is composed of five phases. It is based on clarifying the meanings of the field's basic concepts: “data”, “information” and “knowledge”.
Findings - The study suggests that the name of the field, “information science”, should be changed to “knowledge science”.
Originality/value - The paper offers reflections on the explored phenomena of information science.
This is a longer version, with additional material, of the biography contributed to the Introductory Volume of the new (2nd) edition of the Bibliographic Classification, the first parts of which should appear this year. It goes into Bliss's private as well as his professional life and shows for the first time in print the reasons why he devoted himself first to librarianship, and later to a life of scholarship, particularly to the study of classification and the production of an entirely new general scheme.
The exhaustivity of document descriptions and the specificity of index terms are usually regarded as independent. It is suggested that specificity should be interpreted statistically, as a function of term use rather than of term meaning. The effects on retrieval of variations in term specificity are examined, experiments with three test collections showing in particular that frequently-occurring terms are required for good overall performance. It is argued that terms should be weighted according to collection frequency, so that matches on less frequent, more specific, terms are of greater value than matches on frequent terms. Results for the test collections show that considerable improvements in performance are obtained with this very simple procedure.
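The proposed collection-frequency weighting, later known as inverse document frequency, can be sketched directly from the abstract: weight each term by how few documents it occurs in, and score a match by summing the weights of matched terms, so that matches on rarer, more specific terms count for more. The log base and the additive scoring function below are illustrative choices:

```python
import math

def idf_weights(documents):
    """Collection-frequency weighting: terms occurring in fewer documents
    receive higher weights, log(N / document frequency)."""
    n = len(documents)
    df = {}
    for doc in documents:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    return {term: math.log(n / count) for term, count in df.items()}

def match_score(query, doc, weights):
    """Score a document by summing the weights of the query terms it
    matches, favouring less frequent, more specific terms."""
    doc_terms = set(doc)
    return sum(weights.get(t, 0.0) for t in set(query) if t in doc_terms)
```

With this scheme, a match on a term appearing in one document out of four outweighs a match on a term appearing in three of the four, which is the paper's core argument in miniature.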
Although many large systems have by-passed the problem by employing ‘natural language’, compound words remain a difficulty in thesaurus construction. In the past, rules have been devised which attempted to approach the problem via syntax, but these were not altogether satisfactory. Instead, it is proposed that the major criteria for handling compound words should rest upon their orthography (i.e. physical form), lexicography (dictionary definition) and semantics, with special attention being given to the possible occurrence of homographs—words which differ in meaning, but share a common form. The suggestions contained in BS 5723, Guidelines for the establishment and development of monolingual thesauri, are assessed in relation to these criteria. BS 5723 is criticized for failing to pay sufficient attention to the requirements of mechanized systems, and for its partial failure in not recording the divergent needs of pre- and post-coordinate systems.
The bibliography of Cook's voyages is both lengthy and complicated, and, in spite of their far-reaching importance, their historical and geographical significance, and their considerable literary influence, it has never yet been attempted in its entirety. ‘L'immortel Cook’ was honoured almost as much in France as he was in England, but no satisfactory account exists of the French translations of his works. Sir Maurice Holmes's Introduction to the bibliography of Captain James Cook, R.N., London, Edwards, 1936, is excellent for the original editions, but does not attempt to include translations. Of great value, too, is the Bibliography of Captain James Cook, R.N., F.R.S., circumnavigator, published in 1928 by the Public Library of New South Wales. This is the catalogue of what must have been a remarkably fine exhibition to celebrate the bicentenary of Cook's birth, but it does not, of course, pretend to include items which were not available for display. The only other bibliography specifically devoted to Cook is the one by James Jackson prepared for the centenary of Captain Cook's death and published in the Bulletin de la Société de Géographie, 1879. This must be used with great caution. It has the appearance of having been compiled from entries sent in by various owners and put together without sufficient examination. At all events, while it naturally contains a very large number of French editions, many of them appear twice or even three times in slightly different disguises.
As every librarian knows only too well, there has been in recent years, particularly since the war, a vast increase in the amount of periodical literature published. Scarcely a day passes which does not herald the advent of yet another new title; mercifully for the librarian, many of these are fated to expire prematurely, but a high proportion flourish and live to form an important contribution to every conceivable branch of knowledge. As with the periodicals themselves, so with the bibliographical tools which seek to bring some sort of order to this journalistic chaos, until there can be few European countries in which there has been no attempt to undertake at least one of several possible bibliographical enterprises. To enumerate comprehensively every published union-catalogue, catalogue, bibliography, or press-guide would itself be a task resulting in the compilation of yet another bibliography of considerable size, while a critical assessment of the value of each such work would be not only impossible but verging on the impertinent. Each has been compiled with a special purpose in view, and the fact that an individual foreign publication may be of little practical use to the British librarian does not necessarily detract from its value in its native country.
In writing about reference books it is difficult to recommend one rather than another unless one knows the exact purpose for which it is needed. This article, therefore, is a survey of the contents and scope of some important catalogues, union lists, and bibliographies of periodicals, and does not attempt to say that one is better or more useful than another. Each librarian can only decide that for himself.
The focus of the citation analysis reported is the information exchange between the Danish library-information profession and LIS communities in other countries. Consideration is given to the diffusion of ideas and innovations from foreign countries into the Danish LIS world. Citation evidence is also used to shed light on structural characteristics of the LIS periodical literature and other communication media and some of the communication patterns characterising the LIS field in Denmark. The raw material for the citation analysis was gathered by the manual citation counting method and not drawn from computerised citation databases. A key finding is that a surprisingly large proportion of the references cited by Danish LIS authors belong to the so-called ‘hidden’ category — denoting cited references embedded in the text of journal papers — and this observation is developed further. Journals and books (monographs) are the publication formats most frequently relied on by LIS authors. It was found that the majority of the citations are to relatively recent materials. Next to Danish material, publications in English and produced in the United States and in Great Britain are those most heavily relied on by the Danish LIS community. Ranking of journals by number of citations shows that a very small number of journals accounts for the majority of journal citations. On the whole, the works cited point to a definite interest in public libraries and issues relating to the planning, structure and legislation of public libraries. Works on research and academic libraries and on theoretical aspects of LIS did not attract the same number of citations.
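The journal-ranking finding above — a very small number of journals accounting for the majority of journal citations — can be illustrated with a short sketch. The journal names and citation counts below are invented for demonstration and are not data from the study.

```python
# Illustrative sketch (invented data): ranking journals by citation
# count and measuring how concentrated the citations are.
from collections import Counter

# Hypothetical tally produced by a manual citation count
cited = Counter({
    "Bogens Verden": 120,
    "Journal of Documentation": 60,
    "College & Research Libraries": 30,
    "Libri": 15,
    "Scientometrics": 10,
    "Other journals (combined)": 25,
})

total = sum(cited.values())
top_three = sum(n for _, n in cited.most_common(3))
print(f"Top 3 journals account for {top_three / total:.0%} of citations")
# → Top 3 journals account for 81% of citations
```

With these invented figures, three titles supply over four-fifths of all journal citations, the kind of skewed distribution the abstract describes.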
– The purpose of this paper is to discuss the Namibian liberation struggle, 1966‐1990, as an information war rather than a military conflict, so as to explore the dimensions of information activity under conditions of conflict. This builds on the idea, expressed by participants in earlier struggles of this kind, that the contest for “hearts and minds” is more significant than the armed confrontation that accompanies it.
– A model that incorporates information and communication activity by both contestants, at their command centres, in the field and in the media, was elaborated in a previous paper using data from a number of conflicts, mainly in Southern and Central Africa. The present paper focuses on the Namibian struggle so as to examine the capacity of the model to assist in explaining the outcomes of the conflict. Using published sources, printed archive material and oral testimony, the range of information inputs, the incidence of suppression of information and information outputs are set out in the pattern provided by the model. This shows how both sides used covert intelligence gathering, secret communication, propaganda and disinformation accompanied by censorship and the suppression of critical comment by force to further their political/military aims.
– Whilst South Africa and its Namibian military structures were generally successful in armed confrontation with the forces of the chief liberation organisation (SWAPO), they were not able to bring the conflict to a successful military conclusion. This was because SWAPO's attention to the diplomatic war, based on strong and consistent information flows, convinced the United Nations and other allies to press for a negotiated solution. Once this was agreed, the success of the liberation movement's news and education campaigns in attaching the people to the cause of liberation was revealed by SWAPO's overwhelming success in free elections in 1989.
– It is important to establish that the war in Namibia was much more a clash of information‐related activities directed at hearts and minds than it was of guns and bombs. When this is demonstrated, we can perhaps learn from the fact that the contestant most effectively committed to waging war by peaceful means was victorious.
The information presented in this paper is derived from a census conducted in March 1981, and published in 1983. Subjective observation of special libraries in the interim would suggest an overall retrenchment, particularly severe in smaller units. During the decade 1972–81, there appears to have been a net decrease in the number of special library and information service units, though probably a small growth overall in the number of staff in the special library sector, particularly those in qualified posts. Special librarians, as a group, are less likely to hold professional LIS qualifications than their counterparts in public and academic libraries, but are more likely to have a degree in another field. There has been a considerable growth in the number of staff with qualifications of all types in special libraries during the decade. Women constitute a majority of the staff in posts of all types in special libraries though less so than in academic and public libraries. Nevertheless, there has been a significant growth in the number and percentage of females occupying professional posts in special libraries during the decade, a trend which seems likely to continue, despite a much higher ‘wastage’ of females in the profession. Whereas female special librarians are more likely than males to have a formal LIS qualification, they are less likely to hold non-LIS degrees. The bulk of ‘information science’ posts are in special libraries and the majority of these are in industry and commerce. Nearly 70% of all special library posts are in the south east of England—a situation that has not changed during the decade.
Perhaps the first whisper of the British Library (hereinafter BL) may be found in the report of the Parry Committee which recommended the formation of a national policy in regard to libraries and the provision of information. This was swiftly followed by the Dainton Committee report, a White Paper, and finally the British Library Act, which came into force on 1 July 1973 when the Board of the new BL formally took over responsibility for the library departments (excepting Prints and Drawings) and the Science Reference Library from the Trustees of the British Museum, plus the National Lending Library for Science and Technology and the former National Central Library. To this weighty nucleus were added the major responsibilities of the former Office of Scientific and Technical Information, in April 1974, forming the basis of a new Research and Development Department, and the British National Bibliography, in August 1974, as the foundation of the new Bibliographic Services Division. The way for this very considerable re-shaping of the country's library resources had been thoroughly prepared by a body familiarly known as BLOC (British Library Organizing Committee) between January and July 1973. There are a number of accounts of the creation of the new library which do not differ in substance. Later developments can be studied from the series of annual reports which provide the most authoritative data available, although it should be noted that the statistics provided are not always comparable from year to year.
The annual reports of British university libraries all tell similar stories in the period under review: each year was described as a period of financial constraint, with hopes expressed that future years would bring an improvement. This never proved to be the case as on the whole the situation gradually worsened, the only respite being occasional periods of level funding. In line with university budgets, those for libraries more than doubled in actual terms, but this did not keep pace with inflation in the cost of books and periodicals or in salaries. In many cases the number of books bought, the number of periodical titles purchased, and the number of staff employed all fell. Student numbers did not increase greatly, but many libraries reported very substantial increases in borrowing and library use. Services like online information retrieval were hardly known in 1977, but were in great demand by 1987. At the beginning of the period only a few libraries had any computerisation, but by the end, almost all had automated, many with integrated online systems which had required substantial capital investment. Had it not been for this investment in automation, university libraries would certainly not have been able to cope with the increased levels of demand with their reduced staffing. In fact it was in the whole area of information technology that the most exciting changes took place in university libraries in the past decade. As a result library users have seen great improvements in services, in spite of the cries of anguish from university librarians at the severity of cuts imposed upon them.
A study was carried out to assess the correlation between scores achieved by academic departments in the UK in the 1992 Research Assessment Exercise, and the number of citations received by academics in those departments for articles published in the period 1988–1992, using the Institute for Scientific Information’s citation databases. Only those papers first authored by academics identified from the Commonwealth Universities Yearbook were examined. Three subject areas (Anatomy, Genetics and Archaeology) were chosen to complement Library and Information Management, which had already been the subject of such a study. It was found that in all three cases, there is a statistically significant correlation between the total number of citations received, or the average number of citations per member of staff, and the Research Assessment Exercise score. Surprisingly, the strongest correlation was found in Archaeology, a subject noted for its heavy emphasis on monographic literature and with relatively low citation counts. The results make it clear that citation counting provides a robust and reliable indicator of the research performance of UK academic departments in a variety of disciplines, and the paper argues that for future Research Assessment Exercises, citation counting should be the primary, but not the only, means of calculating Research Assessment Exercise scores.
A citation study was carried out on all 217 academics who teach in UK library and information science schools. These authors between them received 622 citations in Social SciSearch for articles they had published between 1988 and the present. The results were ranked by department, and compared to the ratings awarded to the departments in the 1992 Universities Funding Council Research Assessment Exercise. Using the Spearman Rank Order Correlation coefficient, it was found that there is a statistically significant correlation between the number of citations received by a department in total, or the average number of citations received in the department per academic, and the Research Assessment Exercise rating. The paper concludes that this provides further independent support for the validity of citation counting, even when using just the first authors as a search tool for cited references. It also concludes that the cost and effort of the Research Assessment Exercise may not be justified when a simpler and cheaper alternative, namely a citation counting exercise, could be undertaken. Finally, it concludes that the University of North London would probably have benefited from being included in the 1992 Research Assessment Exercise.
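The Spearman rank-order correlation used in these two studies can be sketched in a few lines: rank both variables (averaging ranks over ties) and take the Pearson correlation of the rank vectors. The department citation totals and RAE ratings below are invented for illustration only; they are not the study's data.

```python
# Spearman rank-order correlation, computed from first principles.

def ranks(values):
    """Assign ranks (1 = smallest), averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented example: citation totals vs. RAE ratings for six departments
citations = [622, 410, 250, 180, 95, 40]
rae_score = [5, 5, 4, 3, 3, 2]
print(round(spearman(citations, rae_score), 3))  # → 0.971
```

A rho near 1 indicates that departments rank in nearly the same order on both measures, which is the kind of agreement the studies report; significance testing of rho against the null hypothesis of no association is a separate step not shown here.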