
Distributed Medical Image Analysis and Diagnosis through Crowd-Sourced Games: A Malaria Case Study

Electrical Engineering Department, University of California Los Angeles, Los Angeles, California, United States of America.
PLoS ONE (Impact Factor: 3.53). 05/2012; 7(5):e37245. DOI: 10.1371/journal.pone.0037245
Source: PubMed

ABSTRACT: In this work we investigate whether the innate visual recognition and learning capabilities of untrained humans can be used to conduct reliable microscopic analysis of biomedical samples for diagnosis. For this purpose, we designed entertaining digital games interfaced with artificial learning and processing back-ends, and demonstrate that for binary medical diagnostic decisions (e.g., infected vs. uninfected), crowd-sourced games can approach the accuracy of medical experts. Specifically, using non-expert gamers, we report diagnosis of malaria-infected red blood cells with an accuracy that is within 1.25% of the diagnostic decisions made by a trained medical professional.
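As a rough illustration of the aggregation step, the minimal Python sketch below combines hypothetical per-cell gamer votes by simple majority; the games' actual learning and processing back-end is more elaborate, and every name and data value here is invented for illustration only.

```python
# Minimal sketch: aggregating binary crowd labels by majority vote.
# All cell IDs and votes below are hypothetical.
from collections import Counter

def majority_vote(labels):
    """Return the most common binary label (True = infected)."""
    return Counter(labels).most_common(1)[0][0]

# Hypothetical gamer responses for three red blood cell images.
crowd_labels = {
    "cell_001": [True, True, False, True, True],
    "cell_002": [False, False, False, True, False],
    "cell_003": [True, False, True, True, False],
}

for cell_id, labels in crowd_labels.items():
    diagnosis = "infected" if majority_vote(labels) else "uninfected"
    print(f"{cell_id}: {diagnosis} "
          f"({labels.count(True)}/{len(labels)} votes infected)")
```

With enough independent votes per cell, even simple majority voting suppresses individual gamer errors, which is the intuition behind approaching expert-level accuracy from non-expert labels.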

  • Source
    ABSTRACT: The development of tools in computational pathology to assist physicians and biomedical scientists in the diagnosis of disease requires access to high-quality annotated images for algorithm learning and evaluation. Generating high-quality expert-derived annotations is time-consuming and expensive. We explore the use of crowdsourcing for rapidly obtaining annotations for two core tasks in computational pathology: nucleus detection and nucleus segmentation. We designed and implemented crowdsourcing experiments using the CrowdFlower platform, which provides access to a large set of labor channel partners that access and manage millions of contributors worldwide. We obtained annotations from four types of annotators and compared concordance across these groups. We obtained: crowdsourced annotations for nucleus detection and segmentation on a total of 810 images; annotations using automated methods on 810 images; annotations from research fellows for detection and segmentation on 477 and 455 images, respectively; and expert pathologist-derived annotations for detection and segmentation on 80 and 63 images, respectively. For the crowdsourced annotations, we evaluated performance across a range of contributor skill levels (1, 2, or 3). The crowdsourced annotations (4,860 images in total) were completed in only a fraction of the time and cost required for obtaining annotations using traditional methods. For the nucleus detection task, the research fellow-derived annotations showed the strongest concordance with the expert pathologist-derived annotations (F-M = 93.68%), followed by the crowdsourced contributor levels 1, 2, and 3 and the automated method, which showed relatively similar performance (F-M = 87.84%, 88.49%, 87.26%, and 86.99%, respectively). For the nucleus segmentation task, the crowdsourced contributor level 3-derived annotations, research fellow-derived annotations, and automated method showed the strongest concordance with the expert pathologist-derived annotations (F-M = 66.41%, 65.93%, and 65.36%, respectively), followed by contributor levels 2 and 1 (60.89% and 60.87%, respectively). When the research fellows were used as a gold standard for the segmentation task, all three contributor levels of the crowdsourced annotations significantly outperformed the automated method (F-M = 62.21%, 62.47%, and 65.15% vs. 51.92%). Aggregating multiple annotations from the crowd to obtain a consensus annotation resulted in the strongest performance for the crowdsourced segmentation. For both detection and segmentation, crowdsourced performance is strongest with small images (400 x 400 pixels) and degrades significantly with the use of larger images (600 x 600 and 800 x 800 pixels). We conclude that crowdsourcing to non-experts can be used for large-scale labeling microtasks in computational pathology and offers a new approach for the rapid generation of labeled images for algorithm development and evaluation.
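    As a rough sketch of the F-M (F-measure) concordance metric reported above: crowd detections are matched to reference detections, then precision, recall, and their harmonic mean are computed. The greedy matching rule and the 10-pixel radius below are assumptions for illustration, not the study's protocol, and all point data are hypothetical.

    ```python
    # Minimal sketch: F-measure concordance for nucleus detection.
    # Each crowd point is greedily matched to the nearest unmatched
    # reference point within an assumed pixel radius.
    import math

    def f_measure(crowd_pts, ref_pts, radius=10.0):
        unmatched = list(ref_pts)
        tp = 0
        for cx, cy in crowd_pts:
            hit = next((p for p in unmatched
                        if math.hypot(p[0] - cx, p[1] - cy) <= radius), None)
            if hit is not None:
                tp += 1
                unmatched.remove(hit)  # each reference point matches once
        precision = tp / len(crowd_pts) if crowd_pts else 0.0
        recall = tp / len(ref_pts) if ref_pts else 0.0
        if precision + recall == 0:
            return 0.0
        return 2 * precision * recall / (precision + recall)

    # Hypothetical (x, y) detections on one image.
    crowd = [(12, 15), (40, 42), (80, 81)]
    expert = [(10, 14), (41, 40), (120, 130)]
    print(f"F-M = {f_measure(crowd, expert):.2%}")  # F-M = 66.67%
    ```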
  • Source
    ABSTRACT: Feature tracking and 3D surface reconstruction are key enabling techniques for computer-assisted minimally invasive surgery. One of the major bottlenecks related to training and validation of new algorithms is the lack of large amounts of annotated images that fully capture the wide range of anatomical/scene variance in clinical practice. To address this issue, we propose a novel approach to obtaining large numbers of high-quality reference image annotations at low cost in an extremely short period of time. The concept is based on outsourcing the correspondence search to a crowd of anonymous users from an online community (crowdsourcing) and comprises four stages: (1) feature detection, (2) correspondence search via crowdsourcing, (3) merging multiple annotations per feature by fitting Gaussian finite mixture models, (4) outlier removal using the result of the clustering as input for a second annotation task. On average, 10,000 annotations were obtained within 24 h at a cost of $100. The annotation of the crowd after clustering and before outlier removal was of expert quality with a median distance of about 1 pixel to a publicly available reference annotation. The threshold for the outlier removal task directly determines the maximum annotation error, but also the number of points removed. Our concept is a novel and effective method for fast, low-cost and highly accurate correspondence generation that could be adapted to various other applications related to large-scale data annotation in medical image computing and computer-assisted interventions.
    International Journal of Computer Assisted Radiology and Surgery 04/2015; DOI: 10.1007/s11548-015-1168-3 · 1.66 Impact Factor
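    A minimal sketch of stage (3), merging multiple crowd annotations per feature: crowd clicks are fit with a Gaussian mixture model and the component means are taken as the merged correspondences. scikit-learn's GaussianMixture stands in for the authors' Gaussian finite mixture fitting; all data and parameters below are illustrative, and stage (4) outlier removal is not shown.

    ```python
    # Minimal sketch: merging repeated crowd annotations by fitting a
    # Gaussian mixture; component means serve as merged feature positions.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Hypothetical crowd clicks: 20 noisy annotations around each of two
    # true feature locations, plus a few outlier clicks.
    true_points = np.array([[100.0, 120.0], [300.0, 250.0]])
    clicks = np.vstack(
        [p + rng.normal(0, 2.0, size=(20, 2)) for p in true_points]
        + [rng.uniform(0, 400, size=(3, 2))]
    )

    gmm = GaussianMixture(n_components=2, random_state=0).fit(clicks)
    print("merged annotations (component means):")
    print(np.round(gmm.means_, 1))
    ```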
  • Source
    ABSTRACT: Objectives. We sought to explore the feasibility of using a crowdsourcing study to promote awareness about automated external defibrillators (AEDs) and their locations. Methods. The Defibrillator Design Challenge was an online initiative that asked the public to create educational designs that would enhance AED visibility, which took place over 8 weeks, from February 6, 2014, to April 6, 2014. Participants were encouraged to vote for AED designs and share designs on social media for points. Using a mixed-methods study design, we measured participant demographics and motivations, design characteristics, dissemination, and Web site engagement. Results. Over 8 weeks, there were 13,992 unique Web site visitors; 119 submitted designs and 2,140 voted. The designs were shared 48,254 times on Facebook and Twitter. Most designers/voters reported that they participated to contribute to an important cause (44%) rather than to win money (0.8%). Design themes included empowerment, location awareness, objects (e.g., wings, lightning, batteries, lifebuoys), and others. Conclusions. The Defibrillator Design Challenge engaged a broad audience to generate AED designs and foster awareness. This project provides a framework for using design and contest architecture to promote health messages. (Am J Public Health. Published online ahead of print October 16, 2014: e1-e7. doi: 10.2105/AJPH.2014.302211).
    American Journal of Public Health 10/2014; DOI: 10.2105/AJPH.2014.302211 · 4.23 Impact Factor