Conference Paper

Automated Mobile Image Acquisition of Macroscopic Dermatological Lesions

... The present work is part of a larger project, DermAI, which aims to improve the existing Teledermatology processes between Primary Care Units (PCU) and Dermatology Services in the Portuguese National Health Service (NHS) for skin lesion referral. Through the use of Artificial Intelligence (AI) and Computer Vision, we envision two major goals: (a) to support doctors in Primary Care Units through the development of a mobile application that fosters image acquisition standardization [9]; and (b) to assist dermatologists in the referral process for booking specialist consultations in the hospital through the adequate prioritization of cases. Improving the prioritization of dermatology consultations is particularly relevant in the Portuguese scenario, given the shortage of specialists in the NHS and the long waiting lists for this type of consultation. ...
Article
Full-text available
Teledermatology has developed rapidly in recent years and is nowadays an essential tool for early diagnosis. In this work, we aim to improve existing Teledermatology processes for skin lesion diagnosis by developing a deep learning approach for risk prioritization, using a dataset of retrospective referral requests from the Portuguese National Health System. Given the high complexity of this task, we propose a new prioritization pipeline guided and inspired by domain knowledge. We explored automatic lesion segmentation and tested different learning schemes, namely hierarchical classification and curriculum learning approaches, optionally including additional patient metadata. The final priority level prediction can then be obtained by combining the predicted diagnosis with a baseline priority level that accounts for explicit expert knowledge. In both the differential diagnosis and prioritization branches, lesion segmentation with a 30% tolerance for contextual information was shown to improve classification when compared with a flat baseline model trained on original images; furthermore, the addition of patient information was not beneficial for most experiments. Curriculum learning delivered better results than a flat or hierarchical approach. The combination of diagnosis information and a knowledge map created in collaboration with dermatologists, together with the baseline priority level, achieved interesting results (best macro F1 of 43.93% on a validated test set), paving the way for new data-centric and knowledge-driven approaches.
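Two of the steps described in this abstract lend themselves to a concrete illustration: cropping the lesion with a 30% tolerance margin around its segmentation mask, and combining the predicted diagnosis with an expert baseline priority. The sketch below is a hypothetical rendering of those steps; the function names, the knowledge-map entries, and the max-combination rule are assumptions for illustration, not the authors' implementation.

import numpy as np

def crop_with_tolerance(image: np.ndarray, mask: np.ndarray, tolerance: float = 0.3) -> np.ndarray:
    """Crop the lesion bounding box, enlarged by `tolerance` to keep contextual information."""
    ys, xs = np.where(mask > 0)                       # pixels belonging to the segmented lesion
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    pad_y, pad_x = int((y1 - y0) * tolerance), int((x1 - x0) * tolerance)
    y0, y1 = max(0, y0 - pad_y), min(image.shape[0], y1 + pad_y)
    x0, x1 = max(0, x0 - pad_x), min(image.shape[1], x1 + pad_x)
    return image[y0:y1, x0:x1]

# Illustrative knowledge map: each diagnosis maps to an expert-derived priority level.
KNOWLEDGE_MAP = {"melanoma": 3, "basal_cell_carcinoma": 2, "nevus": 1}

def final_priority(predicted_diagnosis: str, baseline_priority: int) -> int:
    """Combine the predicted diagnosis with the referral's baseline priority (assumed rule: take the maximum)."""
    return max(KNOWLEDGE_MAP.get(predicted_diagnosis, 1), baseline_priority)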
... Digital cameras and cameras embedded in mobile devices keep increasing the quality and resolution of the images captured [1,2], enabling powerful solutions in different areas of human life [3,4]. The number of studies combining healthcare and technology is currently growing, contributing to a new era of medicine centred on patient empowerment [5]. ...
Article
Full-text available
Healthcare treatments might benefit from advances in artificial intelligence and technological equipment such as smartphones and smartwatches. The cameras in these devices, combined with increasingly robust and precise pattern recognition techniques, can facilitate the estimation of wound area and other telemedicine measurements. Telemedicine is currently vital to maintaining the quality of treatments delivered remotely. This study proposes a method for measuring the wound area with mobile devices. The proposed approach relies on a multi-step process consisting of image capture, conversion to grayscale, blurring, application of a threshold with segmentation, identification of the wound region, dilation and erosion of the detected wound section, extraction of accurate data related to the image, and measurement of the wound area. The proposed method was implemented with the OpenCV framework. It thus offers healthcare systems a way to investigate and treat people with skin-related diseases. The proof-of-concept was performed with a static dataset of camera images on a desktop computer. After validating the approach's feasibility, we implemented the method in a mobile application that allows for communication between patients, caregivers, and healthcare professionals.
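Since the abstract spells out the OpenCV-based steps (grayscale conversion, blurring, thresholding with segmentation, morphological cleaning, and area measurement), a minimal sketch of such a pipeline is given below. The Otsu threshold, the kernel size, the largest-contour assumption, and the pixel-to-centimetre scale are illustrative choices, not the paper's exact parameters.

import cv2
import numpy as np

def estimate_wound_area(image_path: str, pixels_per_cm: float) -> float:
    """Return an approximate wound area in cm^2 from a single photograph."""
    image = cv2.imread(image_path)                                 # image capture (from file here)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)                 # conversion to grayscale
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)                    # blurring
    _, thresh = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)  # threshold with segmentation
    kernel = np.ones((5, 5), np.uint8)
    cleaned = cv2.dilate(cv2.erode(thresh, kernel), kernel)        # erosion and dilation of the wound section
    contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    wound = max(contours, key=cv2.contourArea)                     # assume the largest region is the wound
    area_px = cv2.contourArea(wound)
    return area_px / (pixels_per_cm ** 2)                          # measurement of the wound area in cm^2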
... In summary, detailed control of image quality and adequacy should be considered an extremely important factor when designing a mobile application intended for trap-based insect monitoring. In fact, promising results have recently been reported for different healthcare solutions [22][23][24] that embed AI to effectively support the user in the handheld image acquisition process. However, similar approaches have not yet been applied in viticulture. ...
Article
Full-text available
The increasingly alarming impacts of climate change are already apparent in viticulture, with unexpected pest outbreaks among the most concerning consequences. The monitoring of pests is currently done by deploying chromotropic and delta traps, which attract insects present in the production environment and allow human operators to identify and count them. While the monitoring of these traps is still mostly done through visual inspection by the winegrowers, smartphone image acquisition of those traps is starting to play a key role in assessing the pests' evolution, as well as enabling remote monitoring by taxonomy specialists to better assess the onset of outbreaks. This paper presents a new methodology that embeds artificial intelligence into mobile devices to support hand-held image capture of insect traps deployed in vineyards for pest detection. Our methodology combines different computer vision approaches that improve several aspects of image capture quality and adequacy, namely: (i) image focus validation; (ii) shadows and reflections validation; (iii) trap type detection; (iv) trap segmentation; and (v) perspective correction. A total of 516 images were collected, divided into three different datasets, and manually annotated in order to support the development and validation of the different functionalities. By following this approach, we achieved an accuracy of 84% for focus detection, accuracies of 80% and 96% for shadows/reflections detection (for delta and chromotropic traps, respectively), and a mean Jaccard index of 97% for trap segmentation.
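Of the capture-quality checks listed in this abstract, image focus validation is the simplest to illustrate. The sketch below uses a variance-of-Laplacian blur measure, a common choice for this task; the paper does not state that this is its method, and the threshold value is an assumption rather than the reported operating point.

import cv2

def is_in_focus(image_path: str, threshold: float = 100.0) -> bool:
    """Flag an image as in focus when its sharpness score exceeds a chosen threshold."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()   # variance of the Laplacian as a sharpness score
    return sharpness >= threshold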