Article
To read the full-text of this research, you can request a copy directly from the authors.

Abstract

When we look up information on the WWW, we hope to find information that is correct, appropriate in quantity for our purposes, and written at a level we can understand. Unfortunately, very often one of these criteria will not be met. A young person looking for information on some aspect of physics may well be frustrated on finding a complex formula whose understanding requires higher mathematics. In other cases, information may be much too voluminous or too brief. This seems to indicate that we need presentations of material at various levels of detail and complexity. But the most important question of all, and the one we discuss in this paper, is: how do we know that what we read is actually true? We analyse this problem in the introductory section. We show that it is impossible to expect “too much”. We argue that some improvements can be made, particularly if the domain is restricted. We then examine certain types of geographical information. Detailed research shows that some quantitative measurements, such as the area of a country or the height of its highest mountain, can be verified even when different sources disagree: by explaining why the discrepancies occur, and by trusting numbers that are identical across very different databases.
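The cross-checking strategy the abstract describes — trusting a figure when independent databases agree and flagging it for investigation when they diverge — can be sketched in a few lines. This is a minimal illustration; the database names and area values below are hypothetical placeholders, not real measurements:

```python
# Sketch: cross-check one quantitative fact (a country's area in km^2)
# across several independent sources and decide whether to trust it.
# Source names and values are hypothetical placeholders.

def assess(values, tolerance=0.01):
    """Return ('trusted', mean) if all sources agree within a relative
    tolerance, else ('disputed', spread) so the discrepancy can be
    investigated and explained (e.g. differing border definitions)."""
    lo, hi = min(values), max(values)
    if (hi - lo) / hi <= tolerance:
        return "trusted", sum(values) / len(values)
    return "disputed", hi - lo

# Three hypothetical databases reporting the area of the same country:
reports = {"db_a": 83879, "db_b": 83871, "db_c": 83858}
status, detail = assess(list(reports.values()))
print(status)  # agreement within 1% -> "trusted"
```

The tolerance threshold is itself a judgment call: as the abstract notes, small discrepancies often have an explanation (tidal zones, disputed borders, rounding conventions) rather than indicating that one source is simply wrong.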

... Since salinity changes because of the tides, a definition would be good to have, but there is no universally accepted one! [3]. ...
Article
Full-text available
We argue in this paper that while the WWW holds large collections of information, its reliability is always in doubt, regardless of whether the information comes from a company server, from an institution founded for some specific purpose, or even from a system like Wikipedia: certain aspects are often omitted, distorted, or not dealt with in sufficient depth. In this paper, first attempts and ideas on how to organize such a large body of very diverse material so as to assure high quality and reliability are discussed. We also present some preliminary but encouraging results.
Chapter
This study examines the profiles of Critical Online Reasoning (COR) among young professionals in their first year of training using log data collected during the COR assessment with unrestricted Internet access. Participants were hypothesized to exhibit heuristic or systematic COR patterns, which were mostly confirmed by latent profile analyses, where group 1 (systematic searchers) showed a higher total number of web page views and a longer dwell time than group 2 (heuristic searchers). Group 1 also visited a higher proportion of trustworthy websites, stayed longer, and had a higher COR performance. Consistent with the heuristic approach, group 2 participants placed more trust in different types of media and had a lower self-reported need for cognition compared to group 1. The study thus advocates the need for a data-triangulated framework consisting of objective log data including credibility of sources used, performance, and self-report data to discern meaningful Internet learning profiles.
Article
Full-text available
With the rapid penetration of the Web into all areas of society, there is also an increasing number of warning voices claiming that the Web endangers creative work: it encourages plagiarism, spreads half-truths, causes loss of memorizing ability, reduces the ability to read complex material, and provides so many distractions that coherent thinking is prevented, while networks of pseudo-friends eat up valuable productive time. One early target of complaints were search engines, with which we "build up a distorted reality"; this was followed by researchers who seemed to show rather negative effects of new (social) media on "reading with understanding", and it has culminated in a number of publications describing negative effects of many aspects of the Internet, including scathing attacks on e-learning, such as the German book by Manfred Spitzer, "Digital Dementia: How we make sure that all of us are getting stupid". In this paper we discuss how the Web both supports and stifles creative activities. We report on our experiences, many based on a project that was part of the "Sparkling Science" initiative of the Austrian Federal Ministry for Science and Research. We show that some claims can be validated, while others are exaggerated.
Article
Full-text available
Over the past few years the amount of data immediately available to the consumer has grown rapidly, owing to the growth of the Web as an environment for information exchange and creation. Data creation on the Internet keeps increasing because the Web allows publishers to release content without any governing standards. Although the consumer has access to this abundance of information, the lack of standards has led to quality problems at various levels. There has been much advancement in search engine technology for searching through these large amounts of content and retrieving relevant, high-quality information. However, not all information returned is relevant to its context, and these information quality issues have made it more difficult for the consumer to find quality information. Two barriers to the retrieval of relevant information have been identified: the sheer volume of information, and its quality. This paper therefore addresses some of the issues of information quality on the Web and evaluates a number of frameworks in order to identify their common elements, differences, and missing elements. A summary of the most common information quality elements is presented as a basis for a more comprehensive view of the information quality frameworks available for managing and implementing quality strategies on the Web.
Article
This study highlights how the auto-complete search algorithm offered by the search tool Google can produce suggested terms that could be viewed as racist, sexist, or homophobic. Google was interrogated by entering different combinations of question words and identity terms, such as ‘why are blacks…’, in order to elicit auto-completed questions. A total of 2,690 questions were elicited and then categorised according to the qualities they referenced. Certain identity groups were found to attract particular stereotypes or qualities. For example, Muslims and Jewish people were linked to questions about aspects of their appearance or behaviour, while white people were linked to questions about their sexual attitudes. Gay and black identities appeared to attract higher numbers of negatively stereotyping questions. The article concludes by questioning the extent to which such algorithms inadvertently help to perpetuate negative stereotypes.
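The elicitation procedure described above — pairing question words with identity terms to form autocomplete prompts — can be sketched as follows. This is a hypothetical illustration: the word lists are neutral placeholders (the study used different and far larger lists), and the actual suggestions would have to be fetched from the live search service rather than generated locally:

```python
from itertools import product

# Hypothetical, neutral example lists; the study used many more terms.
question_words = ["why are", "why do", "how do"]
identity_terms = ["teachers", "programmers", "students"]

# Build every prompt prefix that would be typed into the search box
# to elicit auto-completed questions (one prompt per combination).
prompts = [f"{q} {t}" for q, t in product(question_words, identity_terms)]

print(len(prompts))  # 3 question words x 3 terms = 9 prompts
print(prompts[0])    # "why are teachers"
```

Each prompt would then be submitted to the autocomplete endpoint and the returned suggestions categorised by hand, as in the study.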
Article
This study investigates how Chinese students make credibility assessments of web-based information for their research, and which evaluation criteria they employ. Our findings indicate that presumed credibility, reputed credibility, and surface credibility have a stronger impact on undergraduate students than on graduate students in credibility assessment. Graduate students tend to value experienced credibility more than undergraduate students do. Undergraduate students rely predominantly on the author's name, reputation, and affiliation, as well as on website reputation, for their credibility evaluation. In contrast, graduate students focus more than undergraduate students on information accuracy and quality. Similarities and differences in credibility assessment between American students and Chinese students are also discussed.
Chapter
This article addresses a general problem in media sociology – how to understand the media both as an internal production process and as a general frame for categorizing the social world – with specific reference to a version of this problem in recent work on media within Bourdieu’s field-based tradition of research (work previously reviewed by Rodney Benson in Theory and Society 28). It argues that certain problems arise in reconciling this work’s detailed explanations of the media field’s internal workings (and the interrelations of that field’s workings with those of other fields) with the general claims made about the “symbolic power” of media in a broader sense. These problems can be solved, the author argues, by adopting the concept of meta-capital developed by Bourdieu himself in his late work on the state, and by returning to the wider framework of symbolic systems and symbolic power that was important in Bourdieu’s social theory before it became dominated by field theory. Media, it is proposed, have meta-capital over the rules of play, and over the definition of capital (especially symbolic capital), that operate within a wide range of contemporary fields of production. This level of explanation needs to be added to specific accounts of the detailed workings of the media field. The conclusion points to questions for further work, including on the state’s relative strength and the media’s meta-capital, which must be pursued through detailed empirical work on a global comparative basis.
Article
In the Web, making judgments of information quality and authority is a difficult task for most users because overall, there is no quality control mechanism. This study examines the problem of the judgment of information quality and cognitive authority by observing people's searching behavior in the Web. Its purpose is to understand the various factors that influence people's judgment of quality and authority in the Web, and the effects of those judgments on selection behaviors. Fifteen scholars from diverse disciplines participated, and data were collected combining verbal protocols during the searches, search logs, and postsearch interviews. It was found that the subjects made two distinct kinds of judgment: predictive judgment, and evaluative judgment. The factors influencing each judgment of quality and authority were identified in terms of characteristics of information objects, characteristics of sources, knowledge, situation, ranking in search output, and general assumption. Implications for Web design that will effectively support people's judgments of quality and authority are also discussed.