Article

Abstract

We often find ourselves in a trade-off between what a model predicts and understanding why it made that prediction. High-risk medical segmentation tasks are no different: we want to interpret how well the model has learned from the image features, irrespective of its accuracy. We propose image-specific fine-tuning to make a deep learning model adaptive to specific medical imaging tasks. Experimental results reveal that: (a) the proposed model is more robust at segmenting previously unseen objects (a negative test dataset) than state-of-the-art CNNs; (b) image-specific fine-tuning with the proposed heuristics significantly enhances segmentation accuracy; and (c) our model yields accurate results with fewer user interactions and less user time than conventional interactive segmentation methods. The model correctly classified 'no polyp' or 'no instrument' images from the Kvasir-SEG and Kvasir-Instrument datasets, despite the absence of negative samples in the training data.
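
As a rough illustration of what image-specific fine-tuning at inference time could look like, the sketch below adapts a pretrained segmentation network to a single test image. The abstract does not specify the procedure, so everything here is an assumption: a PyTorch binary-segmentation model, user scribbles as the per-image supervision signal, and a cross-entropy loss restricted to scribbled pixels. The function name and tensor shapes are illustrative, not the authors' implementation.

```python
# Hedged sketch (assumptions, not the paper's method): fine-tune a pretrained
# PyTorch segmentation model on one test image using sparse user scribbles.
import torch
import torch.nn.functional as F


def image_specific_finetune(model, image, scribbles, steps=20, lr=1e-4):
    """Adapt a pretrained segmentation model to a single test image.

    image:     (1, C, H, W) float tensor.
    scribbles: (1, H, W) long tensor with values {0: background, 1: foreground,
               255: unlabeled} marking user-provided hints.
    """
    model.train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(image)                        # assumed shape (1, 2, H, W)
        # Supervise only the scribbled pixels; unlabeled pixels are ignored.
        loss = F.cross_entropy(logits, scribbles, ignore_index=255)
        loss.backward()
        optimizer.step()
    model.eval()
    with torch.no_grad():
        prediction = model(image).argmax(dim=1)      # (1, H, W) final mask
    return prediction
```

Under these assumptions, a predicted mask with few or no foreground pixels would correspond to the 'no polyp' / 'no instrument' outcome described above, although the paper's actual decision rule is not given here.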

Article
Background and aims: Publicly available databases containing colonoscopic imaging data are valuable resources for artificial intelligence (AI) research. Currently, little is known about the number and content of these databases. This review aimed to describe the availability, accessibility and usability of publicly available colonoscopic imaging databases, focusing on polyp detection, polyp characterization and quality of colonoscopy.

Methods: A systematic literature search was performed in MEDLINE and Embase to identify AI studies published after 2010 that describe publicly available colonoscopic imaging datasets. Second, a targeted search using Google's Dataset Search, Google Search, GitHub and Figshare was performed to identify datasets directly. Datasets were included if they contained data on polyp detection, polyp characterization or quality of colonoscopy. To assess the accessibility of datasets, the following categories were defined: open access, open access with barriers, and regulated access. To assess the potential usability of the included datasets, essential details of each dataset were extracted using a checklist derived from the CLAIM checklist.

Results: We identified 22 datasets with open access, 3 with open access with barriers and 15 with regulated access. The 22 open-access databases contained 19,463 images and 952 videos. Nineteen of these databases focused on polyp detection, localization and/or segmentation, six on polyp characterization and three on quality of colonoscopy. Only half of these databases have been used by other researchers to develop, train or benchmark their AI systems. Although technical details were generally well reported, important details such as polyp and patient demographics and the annotation process were underreported in almost all databases.

Conclusion: This review provides greater insight into the public availability of colonoscopic imaging databases for AI research. Incomplete reporting of important details limits the ability of researchers to assess the usability of the current databases.