Patient-Oriented Cancer Information on the Internet: A Comparison of Wikipedia and a Professionally Maintained Database

Bruce and Ruth Rappaport Faculty of Medicine, Technion-Israel Institute of Technology, Haifa, Israel.
Journal of Oncology Practice 09/2011; 7(5):319-23. DOI: 10.1200/JOP.2010.000209
Source: PubMed


A wiki is a collaborative Web site, such as Wikipedia, that can be freely edited. Because of a wiki's lack of formal editorial control, we hypothesized that the content would be less complete and accurate than that of a professional peer-reviewed Web site. In this study, the coverage, accuracy, and readability of cancer information on Wikipedia were compared with those of the patient-oriented National Cancer Institute's Physician Data Query (PDQ) comprehensive cancer database.
For each of 10 cancer types, medically trained personnel scored PDQ and Wikipedia articles for accuracy and presentation of controversies by using an appraisal form. Reliability was assessed by using interobserver variability and test-retest reproducibility. Readability was calculated from word and sentence length.
Evaluators were able to rapidly assess articles (18 minutes/article), with a test-retest reliability of 0.71 and interobserver variability of 0.53. For both Web sites, inaccuracies were rare, affecting less than 2% of the information examined. PDQ was significantly more readable than Wikipedia: Flesch-Kincaid grade level 9.6 versus 14.1. There was no difference in depth of coverage between PDQ and Wikipedia (29.9 and 34.2, respectively; maximum possible score 72). Controversial aspects of cancer care were relatively poorly discussed in both resources (2.9 and 6.1 for PDQ and Wikipedia, respectively, not significant; maximum possible score 18). A planned subanalysis comparing common and uncommon cancers demonstrated no difference.
Although the wiki resource had similar accuracy and depth as the professionally edited database, it was significantly less readable. Further research is required to assess how this influences patients' understanding and retention.
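The readability comparison above rests on the Flesch-Kincaid grade level, which estimates the US school grade needed to understand a text from average sentence length and average syllables per word. The following sketch illustrates the standard formula; the naive vowel-group syllable counter is an assumption for illustration, not the validated tool the study's authors used, so scores from it will only approximate published figures.

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, trimming a silent trailing 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Longer sentences and polysyllabic medical terminology both push the grade level up, which is why an encyclopedia article written by and for specialists can land several grades above a database deliberately edited for patients.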



Available from: Yaacov Richard Lawrence
    • "Several studies have attempted to examine the accuracy and completeness of health and drug information in Wikipedia articles [4-8]. In general, health-related articles in Wikipedia have been found to be fairly accurate, but the information is often incomplete [4-8]. Wikipedia content guidelines state that the content in articles should be based on reliable, published sources [9]."
    ABSTRACT: References from drug-related Wikipedia articles and a drug information database were compared. Drugs in Food and Drug Administration (FDA) MedWatch alerts from January-July 2013 were searched in Wikipedia and Lexicomp to compare reference types and to assess the time for drug safety information to be incorporated into Wikipedia articles. Wikipedia most commonly cited peer-reviewed journal articles (49.2%) and news articles (12.0%). MedWatch citations were incorporated into Wikipedia on average in 5.9 days. Wikipedia cited various sources but may not be a reliable, up-to-date resource for drug safety information.
    Article · Jul 2015 · Journal of the Medical Library Association (JMLA)
    • "Overall, the studies that have examined the writing style and readability of Wikipedia articles have generally found that they are at least as easy to read as their online and offline counterparts (Elia, 2009); however, results were varied when specific knowledge domains were investigated (Korosec et al., 2010; Rajagopalan et al., 2010, 2011). Wikipedia's writing style was found to be inconsistent (West & Williamson, 2009), especially concerning international topics (Dalby, 2007). "
    ABSTRACT: Wikipedia may be the best-developed attempt thus far to gather all human knowledge in one place. Its accomplishments in this regard have made it a point of inquiry for researchers from different fields of knowledge. A decade of research has thrown light on many aspects of the Wikipedia community, its processes, and its content. However, due to the variety of fields inquiring about Wikipedia and the limited synthesis of the extensive research, there is little consensus on many aspects of Wikipedia's content as an encyclopedic collection of human knowledge. This study addresses the issue by systematically reviewing 110 peer-reviewed publications on Wikipedia content, summarizing the current findings, and highlighting the major research trends. Two major streams of research are identified: the quality of Wikipedia content (including comprehensiveness, currency, readability, and reliability) and the size of Wikipedia. Moreover, we present the key research trends in terms of the domains of inquiry, research design, data source, and data gathering methods. This review synthesizes scholarly understanding of Wikipedia content and paves the way for future studies.
    Article · Feb 2015 · Journal of the Association for Information Science and Technology
    • "On the one hand, the crowd allows for a wider range of expertise and preferences to be leveraged in group decision‐making, suggesting that 'collective wisdom' might make evaluation more accurate, reducing information frictions and providing greater efficiency in financing decisions. For example, studies on forecasting have supported the idea that experts are no more accurate than informed amateurs (Tetlock, 2005), while examination of Wikipedia has shown that it compares favorably on many dimensions to expert‐created content (Clauson, Polen, Boulos, & Dzenowagis, 2008; Giles, 2005; Rajagopalan et al., 2011). Further, recent research on Wikipedia has shown that while there may be political biases in certain articles (Greenstein & Zhu, 2012), as these articles are more heavily edited by the crowd, they ultimately become less biased than similar work produced by the experts at Encyclopedia Britannica (Greenstein & Zhu, 2014)."
    ABSTRACT: In fields as diverse as technology entrepreneurship and the arts, crowds of interested stakeholders are increasingly responsible for deciding which innovations to fund, a privilege that was previously reserved for a few experts, such as venture capitalists and grant‐making bodies. Little is known about the degree to which the crowd differs from experts in judging which ideas to fund, and, indeed, whether the crowd is even rational in making funding decisions. Drawing on a panel of national experts and comprehensive data from the largest crowdfunding site, we examine funding decisions for proposed theater projects, a category where expert and crowd preferences might be expected to differ greatly. We instead find substantial agreement between the funding decisions of crowds and experts. Where crowds and experts disagree, it is far more likely to be a case where the crowd is willing to fund projects that experts may not. Examining the outcomes of these projects, we find no quantitative or qualitative differences between projects funded by the crowd alone, and those that were selected by both the crowd and experts. Our findings suggest that crowdfunding can play an important role in complementing expert decisions, particularly in sectors where the crowds are end users, by allowing projects the option to receive multiple evaluations and thereby lowering the incidence of "false negatives."
    Article · Jan 2015 · Management Science