Poster Abstract: A Machine Learning Approach for Identifying
Mosquito Breeding Sites via Drone Images
Akarshani Amarasinghe, Chathura Suduwella, Charith Elvitigala, Lasith Niroshan, Rangana
Jayashanka Amaraweera, Kasun Gunawardana, Prabash Kumarasinghe, Kasun De Zoysa,
Chamath Keppetiyagama
University of Colombo, School of Computing, Sri Lanka.
akarshani@scorelab.org, cps@ucsc.cmb.ac.lk, {charitha,lasith}@scorelab.org,
{rja,kgg,jpk,kasun,chamath}@ucsc.cmb.ac.lk
ABSTRACT
Dengue is one of the deadliest and fastest spreading diseases in Sri Lanka. The female Aedes mosquito is the dengue vector, and these mosquitoes breed in clear, non-flowing water. Public Health Inspectors (PHIs) are tasked with detecting and eliminating such water collection areas. However, they face the problem of detecting potential breeding sites in hard-to-reach areas.
With recent technological developments, drones have become one of the most cost-effective unmanned vehicles for accessing places that humans cannot reach.
This paper presents a novel approach for identifying mosquito breeding areas via drone images through the distinct coloration of those areas by applying the Histogram of Oriented Gradients (HOG) algorithm. Using the HOG algorithm, we detect potential water retention areas in drone images.
CCS CONCEPTS
• Computing methodologies → Supervised learning by classification; • Computer systems organization → Robotics;
KEYWORDS
Dengue, Drone Systems, Mosquito Breeding Sites
ACM Reference Format:
Akarshani Amarasinghe, Chathura Suduwella, Charith Elvitigala, Lasith
Niroshan, Rangana Jayashanka Amaraweera, Kasun Gunawardana, Prabash
Kumarasinghe, Kasun De Zoysa, Chamath Keppetiyagama. 2017. Poster
Abstract: A Machine Learning Approach for Identifying Mosquito Breeding
Sites via Drone Images. In Proceedings of 15th ACM Conference on Embedded
Networked Sensor Systems (SenSys’17). ACM, New York, NY, USA, 2 pages.
https://doi.org/10.1145/3131672.3136986
1 INTRODUCTION
Good health is a fundamental expectation of all human beings, and life expectancy is an indicator of a country's development. The spread of deadly diseases such as Dengue
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
SenSys'17, November 6–8, 2017, Delft, The Netherlands
© 2017 Association for Computing Machinery.
ACM ISBN 978-1-4503-5459-2/17/11. . . $15.00
https://doi.org/10.1145/3131672.3136986
can claim thousands of human lives per year. According to the World Health Organization (WHO), 70% of dengue-infected persons are from South East Asia and the Western Pacific [5]. The Latin American, Caribbean, African and Eastern Mediterranean regions have also been severely affected by Dengue in the last decade [5]. Dengue is a viral infection transmitted by the bite of an infected female Aedes mosquito [5]. Nowadays, dengue has become a global threat and has spread across urban and semi-urban areas with tropical and subtropical climates in many countries. Unfortunately, there is no specific treatment for dengue fever [2].
Female Aedes mosquitoes target places with stagnant water for breeding. However, there can be places with stagnant water that a person cannot easily reach or identify (e.g., roof gutters, water tanks, inaccessible rooftops and cement materials which are capable of retaining water). Because water is retained there for long periods, those areas become covered with lichens and algae [1]. According to our observations, the lichens and algae in those places can be distinguished from their surroundings by their dark color [1]. This color characteristic is utilized for the identification of mosquito breeding sites in unreachable places.
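The dark-coloration cue above can be illustrated with a minimal sketch (not from the paper; the threshold value and toy image are assumptions for illustration):

```python
# Illustrative sketch: flag unusually dark regions of a grayscale drone image,
# since lichen/algae-covered water retention areas appear darker than their
# surroundings. The 0.35 threshold is a made-up value for this toy example.
import numpy as np

def dark_region_mask(gray, threshold=0.35):
    """Boolean mask of pixels darker than `threshold` (gray values in [0, 1])."""
    return gray < threshold

def dark_fraction(gray, threshold=0.35):
    """Fraction of the image flagged as dark; a crude cue for PHI follow-up."""
    return dark_region_mask(gray, threshold).mean()

# Toy image: a bright rooftop with one dark 3x3 patch of lichen-like pixels
roof = np.full((10, 10), 0.9)
roof[2:5, 2:5] = 0.1
print(dark_fraction(roof))  # 9 dark pixels out of 100 -> 0.09
```

A real pipeline would of course operate on actual drone imagery and a calibrated threshold; this only demonstrates the color-based intuition.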
A drone, or Unmanned Aerial Vehicle (UAV), appeals to a wide audience due to its small size and maneuverability compared with other unmanned equivalents. Accordingly, a drone can reach and perceive places that a human cannot. Therefore, this work presents a solution for identifying possible mosquito breeding sites through a drone management system, by analyzing drone images of hard-to-reach places in urban and semi-urban areas with tropical and subtropical climates, as guidance for the PHIs to suppress dengue mosquito breeding.
2 HOG FEATURES FOR WATER RETENTION
AREA IDENTIFICATION
According to our review, there are only two existing approaches for water retention area identification: Suduwella et al.'s [7] and Amarasinghe et al.'s [6]. The main problem with both approaches is that the color of some roads closely resembles the color of water retention areas, so some roads have also been marked as possible water retention areas in the output image. Furthermore, in Suduwella et al.'s methodology [7], the final results depend on the drone camera tilt angle and the effects of shadows.
As a solution to the aforementioned problems, and as an improvement of Amarasinghe et al.'s method, we decided to detect water retention areas through a machine learning approach. The HOG algorithm is used to extract features from the dataset, since it is one of the major object detection methods [3]. We generated 660 positive features and 140 negative features using the HOG feature extraction methodology. We then generated and trained three classifiers using Support Vector Classification (SVC) on those positive and negative features under different gamma values (kernel = 'rbf', cost function C = 1). Finally, we analyzed those classifiers by determining their capability of detecting possible water retention areas.
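The pipeline above (HOG features fed to rbf-kernel SVCs trained under several gamma values) can be sketched as follows. This is not the authors' code: the patch sizes, HOG parameters, and synthetic stand-in data are assumptions; real positive patches would be cropped from drone images of water retention areas.

```python
# Sketch of the paper's pipeline: HOG feature extraction + SVC classifiers
# with kernel='rbf', C=1, trained under different gamma values.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

rng = np.random.default_rng(42)
PATCH = (64, 64)

def hog_feature(patch):
    # 9-orientation HOG over 8x8-pixel cells, 2x2-cell blocks (common defaults,
    # assumed here; the paper does not state its HOG parameters)
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

# Synthetic stand-in data: "positive" = textured patch (lichen-like surface),
# "negative" = featureless patch. Replace with real labeled drone crops.
positives = [rng.random(PATCH) for _ in range(30)]
negatives = [np.full(PATCH, 0.5) for _ in range(30)]
X = np.array([hog_feature(p) for p in positives + negatives])
y = np.array([1] * 30 + [0] * 30)

train_idx = list(range(0, 20)) + list(range(30, 50))   # 20 of each class
test_idx = list(range(20, 30)) + list(range(50, 60))   # held-out 10 of each

# Three classifiers under the paper's gamma values, kernel='rbf', C=1
scores = {}
for gamma in (0.01, 1, 100):
    clf = SVC(kernel="rbf", C=1, gamma=gamma)
    clf.fit(X[train_idx], y[train_idx])
    scores[gamma] = clf.score(X[test_idx], y[test_idx])

best_gamma = max(scores, key=scores.get)
print(f"best gamma: {best_gamma}, accuracy: {scores[best_gamma]:.2f}")
```

On real data the comparison across gamma values corresponds to the evaluation reported in Section 3.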
3 PRELIMINARY EVALUATION AND FUTURE
WORK
The approach mainly focuses on identifying water retention areas that a person cannot access. The evaluation was carried out on 100 images captured from different areas in Sri Lanka over several drone flights. We then applied the HOG feature detection algorithm to each image with three different gamma values and observed whether it identified the possible water retention areas.
For the final result we analyzed the True Positive (TP), True Negative (TN), False Positive (FP) and False Negative (FN) counts. Here, a TP means the HOG feature detection algorithm detects a possible water retention area and such an area is indeed present in the image; TN, FP and FN are defined analogously in this context.
Based on those values, recall and precision were derived (Table 1,
Figure 1).
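The two metrics derive from the counts above in the standard way; a minimal sketch (the TP/FP/FN counts below are made-up placeholders, not the paper's actual counts):

```python
# Recall and precision from TP/FN/FP counts, as used in the evaluation.
def recall(tp, fn):
    # fraction of real water retention areas the detector found
    return tp / (tp + fn)

def precision(tp, fp):
    # fraction of detections that were real water retention areas
    return tp / (tp + fp)

# Hypothetical counts for illustration only
tp, fp, fn = 50, 5, 3
print(f"recall={recall(tp, fn):.2%}, precision={precision(tp, fp):.2%}")
```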
Gamma Value   Recall    Precision
0.01          90.56%    84.01%
1             94.33%    91.14%
100           72.64%    82.51%
Table 1: Recall and Precision Values for Three Different Gamma Values for 100 Images
Figure 1: The Line Chart for Recall and Precision Values for Three Different Gamma Values
According to Table 1 and Figure 1, a gamma value of 1 yields the highest recall and precision values. Furthermore, as the gamma value moves away from 1, whether decreasing to 0.01 or increasing to 100, both recall and precision decrease. Therefore, the most adequate gamma value is 1 when the kernel is 'rbf' and the cost function is 1.
We compared our suggested approach with gamma equal to 1 (since it shows the highest recall and precision) against Amarasinghe et al.'s approach (Figure 2). The suggested approach shows comparatively higher recall and precision values.
With this paper we are publishing a database containing possible water retention areas [4]. We also expect to train on those positive and negative features using other machine learning algorithms to obtain more accurate results. Furthermore, we plan to create another database consisting of other mosquito breeding sites such as coconut shells and tyres.
Figure 2: The Bar Chart for the Comparison Results with
Other Approaches
ACKNOWLEDGMENTS
Special thanks go to the University Grants Commission (UGC), Sri
Lanka.
REFERENCES
[1] 2017. Algae, Lichen, Moss can eat roof, concrete or siding. (2017). Retrieved August 14, 2017 from http://wparc.net/algae-lichen-moss-eating-roof/
[2] 2017. Dengue fever - Treatment - Mayo Clinic. (2017). Retrieved August 14, 2017 from http://www.mayoclinic.org/diseases-conditions/dengue-fever/diagnosis-treatment/treatment/txc-20345589
[3] 2017. Histogram of Oriented Gradients | Learn OpenCV. (2017). Retrieved August 14, 2017 from http://www.learnopencv.com/histogram-of-oriented-gradients/
[4] 2017. scorelab/D4D-Drone-4-Dengue, GitHub. (2017). Retrieved August 14, 2017 from https://github.com/scorelab/D4D---Drone-4-Dengue/tree/master/d4d-data/detecting_water_retention_areas
[5] 2017. WHO | What is dengue and how is it treated? (2017). Retrieved August 02, 2017 from http://www.who.int/features/qa/54/en/
[6] Akarshani Amarasinghe, Chathura Suduwella, Lasith Niroshan, Charith Elvitigala, Kasun De Zoysa, and Chamath Keppetiyagama. 2017. Suppressing Dengue via a Drone System. Advances in ICT for Emerging Regions (ICTer), 2017 Sixteenth International Conference on. IEEE. (2017).
[7] Chathura Suduwella, Akarshani Amarasinghe, Lasith Niroshan, Charith Elvitigala, Kasun De Zoysa, and Chamath Keppetiyagama. 2017. Identifying Mosquito Breeding Sites via Drone Images. Proceedings of the 3rd Workshop on Micro Aerial Vehicle Networks, Systems and Applications (2017), 27-30. https://doi.org/10.1145/3086439.3086442