Future Internet (Review)
Computer Vision for Fire Detection on UAVs—From Software to Hardware
Seraphim S. Moumgiakmas, Gerasimos G. Samatas and George A. Papakostas *
Citation: Moumgiakmas, S.S.; Samatas, G.G.; Papakostas, G.A. Computer Vision for Fire Detection on UAVs—From Software to Hardware. Future Internet 2021, 13, 200. https://doi.org/10.3390/fi13080200
Academic Editors: Remus Brad and Arpad Gellert
Received: 17 June 2021; Accepted: 29 July 2021; Published: 31 July 2021
Computer Science Department, International Hellenic University, 65404 Kavala, Greece;
semoumg@cs.ihu.gr (S.S.M.); gesamat@cs.ihu.gr (G.G.S.)
* Correspondence: gpapak@cs.ihu.gr; Tel.: +30-2510-462-321
Abstract: Fire hazard is a condition with potentially catastrophic consequences. Artificial intelligence, through Computer Vision and in combination with UAVs, has contributed dramatically to identifying and avoiding this risk in a timely manner. This work is a literature review on UAVs that use Computer Vision in order to detect fire. The survey covers the last decade and records the types of UAVs, the hardware and software used and the proposed datasets. The search was executed through the Scopus database. The research showed that multi-copters were the most common type of vehicle and that the combination of an RGB with a thermal camera was part of most applications. In addition, the trend in the use of Convolutional Neural Networks (CNNs) is increasing. Over the last decade, many applications and a wide variety of hardware and methods have been implemented and studied, and many efforts have been made to avoid the risk of fire effectively. The fact that state-of-the-art methodologies continue to be researched leads to the conclusion that the need for a more effective solution continues to arouse interest.
Keywords: UAV; Computer Vision; fire detection; wildfire; smoke
1. Introduction
Unmanned Aerial Vehicles (UAVs) have been at the center of many studies in recent years. They are aerial robotic vehicles capable of high speeds and of carrying heavy loads. Unlike ground-based robots, their maneuverability lets them avoid obstacles more easily or fly above them, which makes them ideal for many applications, such as infrastructure monitoring and inspection, earth science, defense and security, agriculture and applications of environmental interest [1]. Research on UAVs has led to the development and design of different types of vehicles that differ in a variety of characteristics, such as weight, size, mode of operation and flight, engine type and the mechanisms involved in the vehicle.
Two main categorizations of UAVs concern their weight and their flight mode based on aerodynamic design. Based on their weight [2], UAVs can be distinguished, according to the characteristics of Table 1, into Super Heavy, Heavy, Medium, Light and Micro. This categorization has an impact on further categorizations of UAVs, such as the variation in the volume or type of engine that powers their system.
Table 1. Classification based on weight.

Type          Weight
Super Heavy   W > 2000 kg
Heavy         200 kg < W ≤ 2000 kg
Medium        50 kg < W ≤ 200 kg
Light         5 kg < W ≤ 50 kg
Micro         W ≤ 5 kg
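For illustration, the thresholds of Table 1 translate directly into code. The sketch below is a trivial helper whose boundaries are exactly those of the table; the function name is ours, not from the literature.

```python
def classify_uav_by_weight(weight_kg: float) -> str:
    """Map a UAV weight to the classes of Table 1."""
    if weight_kg > 2000:
        return "Super Heavy"
    if weight_kg > 200:
        return "Heavy"
    if weight_kg > 50:
        return "Medium"
    if weight_kg > 5:
        return "Light"
    return "Micro"

print(classify_uav_by_weight(3.5))  # -> "Micro" (a typical small multicopter)
```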
Another classification concerns their mechanical design. Based on this, there are four main categories: Fixed wing, Flapping wing, Multicopter and Single rotor [3–5].
Fixed wing: Models in this category use wings with one or more propellers to move through the environment, and a runway is mandatory for takeoff and landing. Course changes depend on a combination of movable surfaces and thrust. Compared to the other types of UAVs, these models can travel at high speed carrying heavy payloads [6], but, due to the design of their wings, they cannot easily adapt to wind conditions.
Flapping wing: This category includes vehicles that share many features with the UAVs of the previous category. The difference lies in the mechanism inside the wings, which facilitates changes in the vehicle’s direction and helps increase lift. Their increased maneuverability makes them more flexible than fixed-wing models in cases of strong winds. Models in this category also require a runway for takeoff and landing.
Multicopter: This category of UAVs differs in many respects from the previous two. A multicopter operates only on its rotors, as it has no wings at all. Multicopters are capable of vertical takeoff and landing and do not require any kind of runway. The rotors are mounted horizontally on the main body of the vehicle, and the models have increased stability, adapt to flight conditions and can constantly change their speed.
Single rotor: This category includes models with a single central rotor and a tail rotor. They share the same features as multicopters, except that they are less stable.
The various UAV classifications show that every category has its own pros and cons. As a result, the suitable platform varies and depends on the application. Applications using UAV capabilities are many and differ in their structure and purpose. More specifically, different types of UAVs have been used in applications such as agriculture, inspection, delivery of supplies, rescue operations, surveying, filming, military operations and disaster or hazard identification.
One application that belongs to the group of implementations related to hazard monitoring and prevention is fire detection. With the help of UAVs, fires can be detected early or even before they start. Because UAVs move through the air above obstacles, they have a larger optical range that covers a large area; however, as the height increases, the resolution of the sensors decreases. UAVs flying at low heights can detect fires more easily over areas such as forests or residential areas. Based on the above and on a proposed communication system or network, UAVs can detect a fire early, before it becomes hazardous to the environment. UAVs are very effective because they are suitable for supervision and monitoring applications and because of their flexibility and free movement in the aerial environment.
In addition, in many cases, Synthetic Vision Systems (SVSs) are used. SVSs provide a computer-generated visualization of the environment, through the sensors, according to the path that the UAV follows or its position. These systems are very useful when the vehicles are operated remotely by an operator, but their use in autonomous navigation missions is also important and feasible. SVSs provide an augmented visualization of non-physical constraints and exo/ego-centered views [7,8].
The camera with which the UAV is equipped plays a very important role in the successful achievement of early fire detection. Without the necessary sensor equipment, such an application becomes impossible. Of course, it is not only the sensor that is very important, but also the way in which the risk is identified, the type of classification and all the procedures related to transferring the data to a server.
The goal of this research was to examine the use of UAVs and state-of-the-art Computer Vision techniques to improve efforts to detect and prevent fires. Fire is a very dangerous situation for humans and animals as well as for the environment itself. Therefore,
the research in this field has to keep evolving in order to prevent any kind of unplanned
or uncontrolled fires. The study presents the Computer Vision models, the corresponding
UAVs and the mission processes and materials for detecting fires. This literature review
presents the stages and equipment that can be included in a fire detection and classification
plan. The main contribution of the review is the screening of the most-used types of UAVs,
cameras, Computer Vision models and AI algorithms. Furthermore, the key components
and techniques which improve the quality of the mission are presented. In addition,
frameworks, software and the main datasets are also presented. It is noted here that the
above emerged after a study of published scientific works of the last decade.
The present work is organized as follows: Section 1 is the introduction, which presents core concepts of UAVs. Section 2 describes fire detection using Computer Vision on UAVs and, more specifically, the related work and the fire detection framework. Section 3 describes the research methods used to construct this review, the way the search was executed and the early statistics of the research. Section 4 presents the taxonomy of the hardware and software/methods, and its datasets subsection lists the datasets used in the literature. Section 5 contains the discussion on the results of the literature review. Finally, Section 6 summarizes the conclusions of the literature review.
2. Fire Detection Using Computer Vision on UAVs
This section describes the big picture of the Computer Vision methods applied to UAVs in order to detect fire. More specifically, the related work of the field is described, and the framework of the UAV fire detection workflow is laid out.
2.1. Related Work
In recent years, there have been significant research projects on fire detection algorithms on UAV platforms using Computer Vision methods. Firstly, fire detection is a standalone research field, with a lot of proposed applications and algorithms aiming at the best accuracy for the early detection of a fire. Images and videos are used for data acquisition. It has been proven that, when the data are forwarded through a Machine Learning algorithm, the prediction is more accurate than using a bare sensor [9]. These results become even better using more advanced Artificial Intelligence algorithms. When the proposed fire detection models went deeper, using Convolutional Neural Networks, the accuracy increased and showed a lot of potential [10]. There are powerful Computer Vision models, such as the Visual Geometry Group (VGG) network or GoogleNet, with tremendous accuracy in image classification [11,12]. The various models are continuously improving and obtain better scores on benchmarks such as the ImageNet Large Scale Visual Recognition Challenge [13]. Moreover, there are literature reviews focusing on UAVs and Computer Vision in different fields, foremost in navigation [14].
2.2. Fire Detection Framework
Prior to beginning the mission, the navigation plan and the appropriate algorithms and models must be selected and implemented in the system. The procedures performed through the camera consist of taking photos or videos and preprocessing them, including image segmentation. Then, fire detection and feature extraction follow:
Preprocessing: The purpose of Computer Vision is to analyze the information coming from the image in order to perform the appropriate processes to locate the fire. After image acquisition, preprocessing consists of procedures related to image enhancement or to verifying the image’s suitability for the respective method. Procedures related to image optimization through preprocessing are noise reduction, to reduce or remove the image’s noise; normalization, to change the range of pixel intensity values; and scaling, to resize the image.
Segmentation: Once the image has been obtained and the appropriate optimization procedures have been performed through preprocessing, the pixels that describe the object of interest must be separated from the rest of the image information. In the case of fire, segmentation isolates the pixels associated with fire. The way segmentation is done differs between methods; more specifically, these methods can be based on color, motion or even intensity.
Fire Detection and Feature Extraction: Through feature extraction, the appropriate operations are applied to the segmented image in order to analyze it and identify the key points of interest. The image is then passed to a trained model in order to find the patterns that confirm or reject the presence of fire.
In the next step, in the case of a positive result from the artificial intelligence model, the system sends an alarm via the UAV or the ground support station to the fire protection personnel for further action. Figure 1 shows the aforementioned flowchart.
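To make the pipeline concrete, the following is a minimal sketch of the classical workflow just described, using OpenCV. The HSV thresholds and the minimum-area test are hypothetical illustration values, not parameters from the reviewed papers; a trained model would replace the final decision stage.

```python
import cv2
import numpy as np

def detect_fire(frame: np.ndarray, min_area: float = 500.0) -> bool:
    # Preprocessing: denoise and normalize the acquired frame.
    frame = cv2.GaussianBlur(frame, (5, 5), 0)
    frame = cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX)

    # Segmentation: keep pixels whose color is fire-like (a color-based method).
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lower, upper = np.array([0, 120, 180]), np.array([35, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)

    # Detection: measure the fire-colored blobs; a trained classifier
    # would normally replace this simple area test.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return any(cv2.contourArea(c) >= min_area for c in contours)

# A positive result would trigger the alarm step described above.
```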
At this point, it is worth noting that the methodology described above is the basic way of fire detection. Deep learning techniques involved in this process have greatly simplified the segmentation and feature extraction stages by replacing classical algorithms [15].
Figure 1. Fire Detection with UAV using Computer Vision Framework.
3. Literature Review
This section consists of two parts. The scope and the research criteria are presented first. The second part contains the taxonomy of the results regarding the hardware and software used in UAVs for fire detection purposes. Figure 2 shows the flow diagram followed using the PRISMA (http://prisma-statement.org/ (accessed on 31 July 2021)) methodology.
3.1. Research Execution
This paper is a systematic literature review (SLR), which is a form of secondary research. The first step of this review method is to address the research questions.
3.1.1. Research Questions
Q1. What is the suitable hardware for fire detection with UAV?
Q2. What methods are used for image processing to detect the fire after the images/video acquisition?
Q3. What is the current framework for fire detection using UAV?
Q4. What datasets are used to evaluate the models’ accuracy?
Figure 2. Adopted PRISMA flow diagram 2020.
3.1.2. Research Database
The database used was Scopus (https://www.scopus.com/ (accessed on 31 July 2021)), which is a very reliable database [16]. In order to answer the above questions, the following search query was designed:
“Computer Vision” OR “video tracking” OR “image restoration” OR “image analysis”
OR “image processing” OR “object detection”
AND
“UAV” OR “unmanned aircraft system” OR “UAS” OR “aerial robotics” OR “au-
tonomous aerial vehicle” OR “unmanned aerial vehicles” OR “unmanned combat aerial
vehicle” OR “UCAV”
AND
“Wildfire” OR “firefighting” OR “fire fight” OR “firefight” OR “conflagration” OR
“fire” OR “smoke”
AND
Publication Year > 2010
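For reproducibility, the query above can also be executed programmatically. The sketch below assumes access to the Scopus API through the pybliometrics package (an API key is required on first use); the TITLE-ABS-KEY encoding shown is one possible translation of the query, not the exact string used in this review.

```python
from pybliometrics.scopus import ScopusSearch

query = (
    'TITLE-ABS-KEY("Computer Vision" OR "video tracking" OR "image restoration" '
    'OR "image analysis" OR "image processing" OR "object detection") '
    'AND TITLE-ABS-KEY("UAV" OR "unmanned aircraft system" OR "UAS" '
    'OR "aerial robotics" OR "autonomous aerial vehicle" OR "unmanned aerial vehicles" '
    'OR "unmanned combat aerial vehicle" OR "UCAV") '
    'AND TITLE-ABS-KEY("wildfire" OR "firefighting" OR "fire fight" OR "firefight" '
    'OR "conflagration" OR "fire" OR "smoke") '
    'AND PUBYEAR > 2010'
)

search = ScopusSearch(query)                       # runs the query against Scopus
print(f"{search.get_results_size()} documents found")
```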
3.2. Research Early Statistics
The query was executed on 25 March 2021, and 72 documents emerged. All the papers were reviewed in order to obtain the necessary information, and early statistics were compiled. First of all, the VOSviewer (https://www.vosviewer.com/ (accessed on 31 July 2021)) software tool was used. It is a tool for constructing and visualizing bibliographic couplings. These couplings are presented in Figure 3, according to the country of origin. A filter for the top-7 most-referred countries was applied, and eight countries are visible. This clustering technique shows each country’s impact on publications related to fire detection using UAVs [17].
More specifically, the cluster connections show that Canada has a huge impact. According to Figure 4, whose data were taken from the Statista platform (https://www.statista.com/ (accessed on 31 July 2021)), this great impact is due to the major forest fire problem Canada has been facing for decades. The average number of wildfires in Canada is 6704 fires per year, bringing the total number of wildfires in the last two decades to 134,082. Figure 5, whose data were also taken from Statista, shows the areas burned by forest fires. It is clear that Canada has been facing a major fire problem for many years, and its contribution to solving this problem through UAVs and Computer Vision is huge. In addition, Greece, Cyprus and Spain are suffering from deforestation through wildfires.
Moreover, based on Figure 3, Canada, France, China, Greece and Australia are bibliographically neighboring. This fact shows that there is a strong citation relationship between these countries, which reference common papers within the same cluster. Conversely, the USA and Austria show two different approaches to UAV fire detection, setting their papers apart from what is published in the other countries [18].
Figure 3. Top-7 bibliographic couplings between countries.
Figure 4. Number of forest fires in Canada from 2000 to 2019.
Finally, the number of publications was analyzed via Scopus in order to reveal any trend over the last decade. Figure 6 shows an exponential increase from 2016 until 2019. This increase is due to the integration of deep learning through CNN models into the field of Computer Vision; compared to classical algorithms, their implementation is an easier process. In 2020, there was a slight decrease, which continues further into 2021. The exponential growth until 2019 is due to the development of Computer Vision and, more specifically, of Machine Learning methods. The years 2020 and 2021 are pandemic years due to the COVID-19 disease. Moreover, the search was executed in the first trimester of 2021, so there were fewer publications.
Figure 5. Burnt areas by forest fires in Canada from 2000 to 2019.
Figure 6. Fire Detection using UAV trendline.
4. Taxonomy
4.1. Hardware
4.1.1. UAVs
Initially, in terms of vehicle type, 17 types of UAVs were mentioned specifically. The types of UAVs were three: drones, fixed-wing and single-rotor. It should be noted here that the drones category includes all vehicles that have more than one propeller mounted horizontally on the main body of the vehicle. More specifically, there were ten applications with drones: seven with quadcopters [19–25], three with hexacopters [15,26,27] and one with an octacopter [28]. In addition, there were four applications with fixed-wing UAVs [29–32], while single-rotor UAVs were limited to one application [33]. Finally, one application [34] was implemented with two UAVs, one fixed-wing and one quadcopter.
The application implemented via a single-rotor UAV (helicopter) is one of the oldest applications in the review (2011). In addition, two applications with fixed-wing UAVs are also among the oldest implementations (2011, 2012). The two newer applications with fixed-wing UAVs (2019, 2020) used these vehicles because their mission was to supervise a large area, which required great vehicle autonomy. The applications with the octacopter and hexacopters were part of stereovision systems. The rest of the applications were implemented via quadcopters.
The results of the research lead to the conclusion that the implementations of the last decade regarding UAVs and Computer Vision in the field of fire detection were realized for the largest part (70.58%) with drones. Figure 7 shows the tree-map of the UAV types used in the reviewed applications.
Compared to other types of UAVs, multicopters are capable of holding firmly above a point in hover mode. This capability provides 360° visual contact from the camera. Other than that, they do not require a runway to take off and land. A disadvantage of this type of UAV is its reduced autonomy, compared to fixed-wing vehicles, which have a long flight duration [4].
Figure 7. UAV type tree-map based on number of applications.
4.1.2. Cameras
Computer Vision, in most applications and studies, is based on the use of cameras. In some applications, more than one camera, or even additional sensors, were used to help capture images from the UAV. During the review of the scientific studies, some categories of cameras emerged:
Visible Spectrum: This category includes all devices whose input images are visible to the human eye. In the present review, this category includes RGB 4K, HD, monocular, 3D and CCD-technology devices, panorama and stereo-depth devices, webcams, mirrorless digital and optical cameras. Of course, all these cameras have different principles of operation and different results in terms of quality, but they all work in visible light.
IR Systems: Infrared waves can be displayed through IR sensors. These systems can be categorized into three types: short-wave (SWIR), middle-wave (MWIR) and long-wave infrared (LWIR). MWIR and LWIR sensors can integrate a passive temperature sensor that detects temperature differences in an environment and then displays the difference in an image (thermal cameras). Many devices of this type, including SWIR, can also present a black-and-white image. This makes it harder to detect temperature changes in the image but easier to recognize image features. As a result, in many cases, the image is not completely clear in terms of the morphology of its content compared to an RGB camera. Their operation is not limited to daytime; they can also offer night-vision capabilities. Most of the time, such a sensor is combined with a simple camera, so that the images taken are not limited to thermal infrared radiation but also include visible light [35]. Additionally, MWIR and LWIR sensors sense emitted thermal radiation and do not need an IR light source, unlike SWIR sensors, which require such a source.
Multispectral/Hyperspectral: These types of cameras capture the information they receive using multiple wavelengths of light. They are devices for obtaining millions of pieces of spectral information for each pixel of the image [36,37]. More specifically, multispectral cameras use 3–10 broad bands, while hyperspectral cameras use hundreds of narrow bands of wavelengths of light. Hyperspectral cameras provide detailed spectral information and are used for image acquisition with high spatial and spectral resolutions [38]. In contrast, multispectral cameras, although less detailed in their output due to their coarser spectral sampling per pixel, are used in applications where real-time information exchange is required. This is achieved due to the smaller size of the information being processed [39], compared to the size of the hyperspectral camera information [38].
In addition to the camera types described above, some detectors were included in certain applications regarding Computer Vision. More specifically, an ultraviolet (UV) detector was used to detect flames through smoke. Through this detector, the danger is identified from the ultraviolet radiation created during the fire [19]. Obviously, there are applications and equipment for ultraviolet and visible light, as well as for short, middle and long infrared wavelengths.
Figure 8 shows the total percentages of cameras per type used in the applications included in the review. Out of the total number of applications, 38 mentioned which type of camera was used.
Figure 8. The total percentages of cameras per type used in the applications.
In addition to the above, other types of hardware were used in the applications included in the review. As shown in Table 2, the hardware implemented in each application depended on the purpose of each mission. The most prevalent type of microprocessor was the ARM Cortex, while there were also applications based on single-board computers and, more specifically, the Raspberry Pi. In addition, the DJI Manifold on-board computer for drones and the Odroid-XU4 system were used. Finally, the implemented integrated circuits (ICs) were of the Field-Programmable Gate Array (FPGA) type.
Table 2. Types of hardware.

Hardware              Reference
Raspberry Pi          [23,40]
ARM                   [41,42]
FPGA                  [43,44]
Smartphones/Tablets   [45,46]
IMU                   [19,40]
GPS                   [19,26,29,40,41,47]
GNSS                  [47]
Bluetooth             [19]
On-board computers    [20,30,48]
Concerning communication and monitoring purposes, Bluetooth modules, smartphones and tablets were used. As regards navigation and positioning systems, the main module was GPS. Furthermore, one application was implemented with a proposed mobile station called D-RTK 2 through the Global Navigation Satellite System (GNSS). Finally, some applications used a combination of GPS and an Inertial Measurement Unit (IMU) for the vehicle’s maneuvers and positioning.
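As an illustration of how such positioning information can be consumed on a companion computer, the sketch below polls an autopilot for its GPS position over MAVLink so a fire alert can be geotagged. The pymavlink package and a telemetry stream on UDP port 14550 are assumptions for this example, not details from the reviewed papers.

```python
from pymavlink import mavutil

# Assumed telemetry link; the address/port depend on the actual setup.
link = mavutil.mavlink_connection("udp:127.0.0.1:14550")
link.wait_heartbeat()                       # block until the autopilot is seen

msg = link.recv_match(type="GLOBAL_POSITION_INT", blocking=True)
lat, lon = msg.lat / 1e7, msg.lon / 1e7     # fields arrive in 1e-7 degrees
alt_m = msg.relative_alt / 1000.0           # millimeters above the home position
print(f"Fire alert location: {lat:.6f}, {lon:.6f} at {alt_m:.1f} m AGL")
```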
4.2. Software/Method
Artificial intelligence models for image classification and recognition could not be missing from such applications. Kinaneva, D., Hristov, G., Raychev, J. and Zahariev, P. present in their paper an object detector for smoke and fire detection based on Faster R-CNN [34]. The corresponding ROC curve shows great results, above 90%. It is noted here that the prediction results for fire detection were slightly better than those for smoke detection. As expected, a lot of images were required for training and testing purposes. In order for the training to be more efficient, the images have to be different but similar. Sometimes, the model performance is reduced if the images fail to match the quality and quantity criteria. The aforementioned object detector was applied by the same scientific team to a UAV platform for early forest fire detection [29]. Faster R-CNN again showed great performance, and a threshold of 90% accuracy was applied. The threshold was used as a proposed trigger: when the possibility of fire detection was over 90%, the system recognized an emergency and sent a notification to the authorities. Further object detectors used in this specific area are the You Only Look Once (YOLO) model and the Single Shot MultiBox Detector (SSD).
Three applications used YOLOv3 [48–50]. YOLOv3 shows great adaptability, achieving very high performance; when using precision as a metric, the model’s accuracy was over 90%. However, in the application of Anim Hossain, Youmin M. Zhang and Masuda Akter Tonima [48], observation of the recall and F1 metrics reveals a YOLO weakness, with very low results (50%). They managed to overcome this hurdle by creating a novel model for flame and smoke signatures using a combination of a proposed local binary pattern alongside an Artificial Neural Network (ANN) and a Support Vector Machine (SVM). The SSD [40], in this specific application, was a group of MobileNets. The reason for this architecture was to benefit from the MobileNet characteristics, achieving great prediction results with low latency.
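As a rough illustration of how such a single-stage detector is deployed, the sketch below runs YOLOv3-style inference with OpenCV's DNN module. The configuration and weight files are placeholders for a network trained on fire/smoke classes, and the 0.5 confidence threshold is an illustrative value (the trigger described above used 90%).

```python
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3-fire.cfg", "yolov3-fire.weights")
out_layers = net.getUnconnectedOutLayersNames()

frame = cv2.imread("aerial_frame.jpg")
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)

detections = []
for output in net.forward(out_layers):
    for row in output:          # row: [cx, cy, w, h, objectness, class scores...]
        scores = row[5:]
        class_id = int(np.argmax(scores))
        confidence = float(row[4] * scores[class_id])
        if confidence > 0.5:
            detections.append((class_id, confidence))
print(f"{len(detections)} fire/smoke candidates found")
```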
There are also other CNN approaches to the fire detection problem. Kyrkou, C. with Theocharides, T. [51], Qiao, L. with Zhang, Y. and Qu, Y. [52] and Nguyen, A. with Nguyen, H., Tran, V., Pham, H. and Pestana, J. [40] used pre-trained CNN models and, more specifically, VGG-16, ResNet34, ResNet50, U-Net and MobileNet. The advantage of a pre-trained CNN model is that, when it is applied to an experimental problem, it can achieve great prediction results. However, when such models are used for real-world problems, their results suffer regarding the number of false negatives or positives. Furthermore, Kyrkou, C. and Theocharides, T. selected another approach, creating new CNN models instead of using pre-trained versions of popular CNNs [20,51]. These are ERNet and EmergencyNet. Both models were designed for low power consumption and low computational needs, matching the special characteristics of UAVs. Additionally, they were trained on the same dataset, AIDER, and achieved similar results to traditionally powerful CNNs such as VGG-16 and ResNet50, but with less computational cost. More specifically, ERNet achieved an average accuracy of 90.1% with 18.7 ms latency, compared to VGG-16 and ResNet50 with 91.9% (346 ms) and 90.2% (257 ms), respectively. The EmergencyNet F1 score was 95.7% with 57 × 10⁶ FLOPS, compared with VGG-16 and ResNet50 with 96.4% (17,620 × 10⁶ FLOPS) and 96.1% (4533 × 10⁶ FLOPS). Besides the above, fuzzy systems were also used in [41]. That approach fused the images from two cameras and reduced noise from vibrations; as a result, improved fire detection accuracy was obtained. In addition, a proposed Optimal Residual Network-Based Features Extraction algorithm (O-RNBFE) for feature extraction and a Latent Variable Support Vector Machine (LVSVM) for classification can be used for fire detection. Such a combination of methods was also used for a Generative Adversarial Network (GAN) enhancement with a U-Net [52], which also achieved good results.
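To ground the pre-training discussion, the following is an illustrative transfer-learning sketch (not the setup of any specific reviewed paper): a ResNet-50 pre-trained on ImageNet is adapted to a binary fire/no-fire classification task with torchvision.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for param in model.parameters():                # freeze the pre-trained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)   # new head: fire vs. no fire

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# ...a standard training loop over a labeled fire-image dataset follows...
```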
In addition to artificial intelligence models, some software was used. More specifically, two applications used the well-known MATLAB computing environment with appropriate Computer Vision packages [26,53]. In addition, DroneDeploy [54] and PIX4D [27] were used as drone-mapping software to assist the fire detection procedure. Finally, the open-source Robot Operating System (ROS) [19,49] for drone navigation, as well as Node-RED [29] for programming event-driven applications, were also selected.
Moreover, some methods and algorithms were used for feature extraction. The Gray-Level Co-occurrence Matrix (GLCM) can perform texture analysis and extract features from images [53]. Other methods include the Spatial and Geometric Histograms (SGH) descriptor, which is a feature descriptor for three-dimensional (3D) local surfaces [54,55]; the visual descriptor Local Binary Pattern (LBP) [48], which is a texture operator for image classification [49]; and a novel method, the Forest Fire Detection Index (FFDI) [56], which was first developed by Henry Cruz, Martina Eckert, Juan Meneses and José-Fernán Martínez [57].
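Both of the texture descriptors named above are available in scikit-image; the sketch below extracts a small GLCM/LBP feature vector from an 8-bit grayscale image. The parameter values are illustrative, not those of the cited works.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

def texture_features(gray: np.ndarray) -> np.ndarray:
    """gray: 8-bit grayscale image (values 0-255)."""
    # GLCM: co-occurrence statistics at distance 1, horizontal direction.
    glcm = graycomatrix(gray, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast")[0, 0]
    energy = graycoprops(glcm, "energy")[0, 0]

    # LBP: histogram of uniform patterns (8 neighbors, radius 1).
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    return np.concatenate(([contrast, energy], hist))

# The resulting vector can feed a classifier such as an SVM or an ANN.
```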
In terms of APIs, in addition to OpenGL, TensorFlow Object Detection was used in four applications [29,30,34,58] and the OpenCV library in two applications [29,50]. Of particular interest is the THEASIS system, which is a stand-alone proposed platform for the early detection of big wildfires and was implemented in three applications [22,30,34].
4.3. Datasets
Every machine learning application has a common hurdle to overcome: the acquisition of a proper dataset. A rule of thumb is that the more data are used to train the model, the better its accuracy will be. When the dataset is big, the various Machine Learning models and, especially, Artificial Neural Networks tend to perform better, achieving better prediction accuracy. In addition, it is desirable that the contents of the various items be diverse, ensuring better training results. As a result of the above, the quantity and the quality of the different dataset elements are major concerns for all researchers and, in order to create a prediction model, some ML methods require image data. Fire detection prediction models belong to this category. The images are obtained directly as raw/pre-processed images or from video frames. In this section, the datasets used in the reviewed papers are presented in the form of a table, so that future researchers have quick access and can use it as a reference guide.
There are many approaches to fire-detection datasets. In some papers, the images were obtained directly from the experimental customized UAV. The benefit of this approach is that it is easy to obtain the desired content and apply customized criteria. However, these data are not public, and most of the owners do not share them. As a result, only a few publicly available databases exist. Furthermore, some of them consist of images and others of videos capturing a scene involving a fire. Another approach is to obtain images or videos from various websites after a thorough search. The websites http://www.forestryimages.org/ (accessed on 31 July 2021) and http://www.wildlandfire.com (accessed on 31 July 2021) have images of forests and some wildfires. Additionally, the well-known Flickr (https://www.flickr.com/ (accessed on 31 July 2021)) is a way to obtain visual data when specific search criteria are applied.
A more efficient way to achieve optimal model training is to develop a dataset especially for fire detection. In some papers, the databases included fire alongside other categories [20,51,59], while others contain only fire images/videos [40,60]. In some cases, the raw data were an image, a video or a combination of the two. The aforementioned datasets are presented in Table 3. Finally, in another paper [61], the dataset was a combination of web-searched images with a proposed method of synthesized image datasets [62].
Table 3. Fire detection datasets.

Dataset         Number of Images/Videos   Resolution              Year        Reference
Corsican Fire   1135                      1024 × 768              2017        [60]
Dyntex          650 videos                720 × 576               2010        [63]
YUPENN          60                        original aspect ratio   2015        [64]
Maryland        10 videos                 varies                  2010        [65]
Foggia          31 videos                 varies                  2015        [66]
Aider           520 fire images           varies                  2019/2020   [20,51]
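As a practical note, once one of the image datasets above has been downloaded, it can be prepared for training in a few lines. The sketch below assumes the common class-per-folder layout (e.g., data/fire/ and data/no_fire/), which is an assumption about local organization, not a property of the datasets themselves.

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),      # simple augmentation for diversity
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("data", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

for images, labels in loader:               # ready for the training loop
    break
```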
5. Discussion
Fires are catastrophic events that are constantly being studied and researched. As a result, more effective ways to prevent them are being discovered. The present review, which was carried out in order to record all the aforementioned methods, techniques, software and hardware, presents a complete set of possible solutions for future researchers. Initially, the most-used UAV models were drones. Such vehicles, in addition to their mechanical advantages, such as their weight and vertical takeoff and landing, also have a tremendous capacity for fast maneuvers and for a high degree of integration of various components and materials. Furthermore, drones can also hover. In this state of flight, the vehicle can stay stationary at one point and monitor a specific area, which increases its flying and image-taking capabilities. Of course, in such a case, due to their structure, these vehicles can easily integrate more than one camera with a more evenly distributed weight, compared to other types of vehicles.
In terms of cameras, RGB cameras, with various resolutions and architectures, alongside thermal cameras, hold key positions in this type of application. Through thermal cameras, a sharp change in temperature can be detected, even if there are visual obstructions, which makes it easier to detect fires [67].
A very important role, of course, is played by the ability of the vehicle to send information about its location via GPS. Many implementations have used this module, as well as various others, such as GNSS or IMUs. Without this kind of technique, it would not be possible to identify the vehicle’s location and position; in that case, a real-world fire detection application would be useless, and such an implementation would serve only research purposes.
Fire detection itself is done through AI models. The development of new artificial intelligence models leads to applications with high accuracy. More specifically, Convolutional Neural Networks (CNNs) have been applied to more and more Computer Vision applications. CNNs are gradually replacing the classic Computer Vision algorithms, due to their great accuracy and their ease of implementation. For this reason, CNNs, such as Faster R-CNN, YOLOv3 and VGG-16, have the largest number of applications compared to classical Computer Vision algorithms or even other artificial intelligence models, such as plain neural networks, fuzzy systems and Support Vector Machines (SVMs).
In addition, the increased interest in UAVs and Computer Vision leads to constantly improved versions of platforms and software, which are used more and more in applications of this type. Some of them target construction, the mapping of the UAV’s environment or its path planning. Others have been developed to achieve higher accuracy of the models or algorithms, as well as to improve the quality of the images and the data transmission capabilities. The constant development in both hardware and software shows that the interest of researchers and engineers remains unwavering, as does the need to achieve greater accuracy in applications where UAVs and Computer Vision are used.
The datasets used are always among the most important elements of the training process of an AI model. It has been observed that there are datasets consisting of a large number of diverse photographs, covering the fires themselves as well as the image filters and the environment. As already mentioned, CNNs are constantly evolving and appearing in more and more implementations. This leads to the construction and use of large datasets, because CNNs seem to perform better with large datasets. The more successful the training of a model is, the better its performance in practical applications. When the application concerns a dangerous situation, the effectiveness of each model acquires an ethical dimension with respect to humans and the fauna and flora of the planet.
Finally, after researching the applications, it was observed that the models achieve a high degree of accuracy in detecting fire in general, while the ways of transmitting information, such as the location or details about the fire or the vehicle, have reached a satisfactory level but continue to evolve. However, one problem that persists in this area is vehicle autonomy. Drones have many advantages that help in the early and effective detection of fire but lack endurance, even when hovering. In addition, CNN models have high computational costs and, when the calculations are done on-board, power consumption matters even more. This fact, in combination with a battery-powered UAV, results in reduced vehicle autonomy.
Interesting future research might be to compare CNN models, as they are now the most accurate and the most used in implementations, in terms of computational cost. Such research would provide valuable information for a careful comparison of the models on the computational side, combining power consumption and processing power. In this way, the best models can emerge based on the relationship between computational cost, performance and autonomy.
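A first step toward such a comparison can be sketched with off-the-shelf tools. The following illustrative snippet measures parameter counts and average CPU latency for a few candidate backbones with PyTorch; it is a crude proxy for the proposed study, not a rigorous benchmark.

```python
import time
import torch
from torchvision import models

candidates = {
    "VGG-16": models.vgg16(weights=None),
    "ResNet-50": models.resnet50(weights=None),
    "MobileNetV2": models.mobilenet_v2(weights=None),
}
x = torch.randn(1, 3, 224, 224)             # one 224x224 RGB frame

for name, model in candidates.items():
    model.eval()
    params = sum(p.numel() for p in model.parameters())
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(10):                 # average over a few runs
            model(x)
        latency_ms = (time.perf_counter() - start) / 10 * 1000
    print(f"{name}: {params / 1e6:.1f} M params, {latency_ms:.0f} ms/frame (CPU)")
```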
6. Conclusions
The present work is the product of a literature review of 72 scientific publications. These publications were Computer Vision implementations with UAVs in the field of fire detection, and their time frame concerned the last decade. From the beginning of 2016 until the end of 2019, there was a sharp increase in fire detection applications. It should be noted that the period in which the downward trend of publications begins coincides with the beginning of the COVID-19 pandemic.
In this literature review, different types of UAVs, Computer Vision AI models, sensor types as well as integrated hardware are presented. After a thorough study of the papers, a comparison was made between the various vehicles, models and methods, and a proposal for the most efficient solution in fire detection applications was presented. In terms of vehicles, multicopters seem to be the most suitable for such applications. Their vertical takeoff/landing and hovering capabilities, in combination with their stability and their ability to remain in a constant position, allow a 360° view of the operational area. In the field of sensors, the combination of visible-light and IR sensors increases the effectiveness of early detection of fire or smoke. Additionally, very important auxiliary systems integrated in the vehicle are GPS and IMUs, which provide information about the location of the vehicle. Furthermore, with regard to the applied Computer Vision methods, when the calculations are made on-board, ERNet provides high accuracy and low power consumption compared to the other high-accuracy models. When the calculations are done at a ground-based station, VGG-16 constitutes an effective solution, because its high performance, although it implies big computational costs, can be afforded there. In any case, CNN models achieve higher performance. Finally, ROS is ideal for path planning and autonomous navigation, and Pix4D for photogrammetry and mapping. Both are software applications that automate the essential processes of each mission and provide a larger amount of necessary information.
Funding: This research received no external funding.
Data Availability Statement: Not applicable; the study does not report any data.
Acknowledgments: This work was supported by the MPhil program “Advanced Technologies in Informatics and Computers”, hosted by the Department of Computer Science, International Hellenic University, Greece.
Conflicts of Interest: The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
CCD Charge-Coupled Device
CNN Convolutional Neural Network
FFDI Forest Fire Detection Index
FPGA Field-Programmable Gate Array
GNSS Global Navigation Satellite System
GPS Global Positioning System
HD High Definition
LBP Local Binary Pattern
RGB Red-Green-Blue
ROS Robot Operating System
SVM Support Vector Machine
SVS Synthetic Vision System
UAS Unmanned Aerial Systems
UAV Unmanned Aerial Vehicle
UCAV Unmanned Combat Aerial Vehicle
UV Ultra Violet
YOLO You Only Look Once
References
1. Mitka, E.; Mouroutsos, S.G. Classification of Drones. Am. J. Eng. Res. (AJER) 2017, 6, 36–41.
2. Arjomandi, M.; Agostino, S.; Mammone, M.; Nelson, M.; Zhou, T. Classification of Unmanned Aerial Vehicles. Available online: https://www.e-education.psu.edu/geog892/node/5 (accessed on 31 July 2021).
3. Mueller, T.J. Fixed and Flapping Wing Aerodynamics for Micro Air Vehicle Applications; American Institute of Aeronautics and Astronautics: Reston, VA, USA, 2001.
4. Boon, M.; Drijfhout, A.; Tesfamichael, S. Comparison of a Fixed-Wing and Multi-Rotor UAV for Environmental Mapping Applications: A Case Study. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 47. [CrossRef]
5. Fennelly, L.J.; Perry, M.A. Unmanned Aerial Vehicle (Drone) Usage in the 21st Century. In The Professional Protection Officer; Elsevier: Amsterdam, The Netherlands, 2020; pp. 183–189.
6. Alladi, T.; Chamola, V.; Sahu, N.; Guizani, M. Applications of blockchain in unmanned aerial vehicles: A review. Veh. Commun. 2020, 23, 100249. [CrossRef]
7. McCarley, J.S.; Wickens, C.D. Human Factors Concerns in UAV Flight. Available online: https://ininet.org/human-factors-concerns-in-uav-flight.html (accessed on 31 July 2021).
8. Torres, O.; Ramirez, J.; Barrado, C.; Tristancho, J. Synthetic vision for Remotely Piloted Aircraft in non-segregated airspace. In Proceedings of the 2011 IEEE/AIAA 30th Digital Avionics Systems Conference, Seattle, WA, USA, 16–20 October 2011; p. 5C4-1.
9. Çetin, A.E.; Dimitropoulos, K.; Gouverneur, B.; Grammalidis, N.; Günay, O.; Habiboğlu, Y.H.; Töreyin, B.U.; Verstockt, S. Video Fire Detection—Review. Digit. Signal Process. 2013, 23, 1827–1843. [CrossRef]
10. Frizzi, S.; Kaabi, R.; Bouchouicha, M.; Ginoux, J.M.; Moreau, E.; Fnaiech, F. Convolutional Neural Network for Video Fire and Smoke Detection. In Proceedings of the IECON 2016-42nd Annual Conference of the IEEE Industrial Electronics Society, Florence, Italy, 23–26 October 2016; pp. 877–882.
11. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
12. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
13. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [CrossRef]
14. Kanellakis, C.; Nikolakopoulos, G. Survey on Computer Vision for UAVs: Current Developments and Trends. J. Intell. Robot. Syst. 2017, 87, 141–168. [CrossRef]
15. Chen, Y.; Zhang, Y.; Xin, J.; Yi, Y.; Liu, D.; Liu, H. A UAV-Based Forest Fire Detection Algorithm Using Convolutional Neural Network. In Proceedings of the 2018 37th Chinese Control Conference (CCC), Wuhan, China, 25–27 July 2018; pp. 10305–10310.
16. Martín-Martín, A.; Orduna-Malea, E.; Thelwall, M.; López-Cózar, E.D. Google Scholar, Web of Science, and Scopus: A Systematic Comparison of Citations in 252 Subject Categories. J. Inf. 2018, 12, 1160–1177. [CrossRef]
17. Zhao, D.; Strotmann, A. Analysis and Visualization of Citation Networks. Synth. Lect. Inf. Concepts Retr. Serv. 2015, 7, 1–207. [CrossRef]
18. Van Eck, N.J.; Waltman, L. Software Survey: VOSviewer, a Computer Program for Bibliometric Mapping. Scientometrics 2010, 84, 523–538. [CrossRef]
19. Esfahlani, S.S. Mixed Reality and Remote Sensing Application of Unmanned Aerial Vehicle in Fire and Smoke Detection. J. Ind. Inf. Integr. 2019, 15, 42–49. [CrossRef]
20. Kyrkou, C.; Theocharides, T. Deep-Learning-Based Aerial Image Classification for Emergency Response Applications Using Unmanned Aerial Vehicles. arXiv 2019, arXiv:1906.08716.
21. Bilgilioglu, B.B.; Ozturk, O.; Sariturk, B.; Seker, D.Z. Object Based Classification of Unmanned Aerial Vehicle (UAV) Imagery for Forest Fires Monitoring. Fresenius Environ. Bull. 2019, 28, 1011.
22. Aspragathos, N.; Dogkas, E.; Koutmos, P.; Paterakis, G.; Souflas, K.; Thanellas, G.; Xanthopoulos, N.; Lamprinou, N.; Psarakis, E.Z.; Sartinas, E. THEASIS System for Early Detection of Wildfires in Greece: Preliminary Results from Its Laboratory and Small Scale Tests. In Proceedings of the 2019 29th Annual Conference of the European Association for Education in Electrical and Information Engineering (EAEEIE), Ruse, Bulgaria, 4–6 September 2019; pp. 1–6.
23. Chamoso, P.; González-Briones, A.; De La Prieta, F.; Corchado, J.M. Computer Vision System for Fire Detection and Report Using UAVs. Available online: http://ceur-ws.org/Vol-2146/paper95.pdf (accessed on 31 July 2021).
24. Saadat, M.N.; Husen, M.N. An Application Framework for Forest Fire and Haze Detection with Data Acquisition Using Unmanned Aerial Vehicle. In Proceedings of the 12th International Conference on Ubiquitous Information Management and Communication, Langkawi, Malaysia, 5–7 January 2018; pp. 1–7.
25. Almeida, M.; Azinheira, J.R.; Barata, J.; Bousson, K.; Ervilha, R.; Martins, M.; Moutinho, A.; Pereira, J.C.; Pinto, J.C.; Ribeiro, L.M. Analysis of Fire Hazard in Campsite Areas. Fire Technol. 2017, 53, 553–575. [CrossRef]
26. Ciullo, V.; Rossi, L.; Pieri, A. Experimental Fire Measurement with UAV Multimodal Stereovision. Remote Sens. 2020, 12, 3546. [CrossRef]
27. Shin, J.I.; Seo, W.W.; Kim, T.; Park, J.; Woo, C.S. Using UAV Multispectral Images for Classification of Forest Burn Severity—A Case Study of the 2019 Gangneung Forest Fire. Forests 2019, 10, 1025. [CrossRef]
28. Ciullo, V.; Rossi, L.; Toulouse, T.; Pieri, A. Fire Geometrical Characteristics Estimation Using a Visible Stereovision System Carried by Unmanned Aerial Vehicle. In Proceedings of the 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV), Singapore, 18–21 November 2018; pp. 1216–1221.
29. Georgiev, G.D.; Hristov, G.; Zahariev, P.; Kinaneva, D. Forest Monitoring System for Early Fire Detection Based on Convolutional Neural Network and UAV Imagery. In Proceedings of the 2020 28th National Conference with International Participation (TELECOM), Sofia, Bulgaria, 29–30 October 2020; pp. 57–60.
30. Kinaneva, D.; Hristov, G.; Raychev, J.; Zahariev, P. Early Forest Fire Detection Using Drones and Artificial Intelligence. In Proceedings of the 2019 42nd International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, 20–24 May 2019; pp. 1060–1065.
31. Nebiker, S.; Eugster, H.; Flückiger, K.; Christen, M. Planning and Management of Real-Time Geospatial UAS Missions within a Virtual Globe Environment. Available online: https://pdfs.semanticscholar.org/e165/7605fa9ef451e2152ba012688ed47b992ac4.pdf (accessed on 31 July 2021).
32. He, Q.; Chu, C.H.H.; Camargo, A. Architectural Building Detection and Tracking under Rural Environment in Video Sequences Taken by Unmanned Aircraft System (UAS). In Proceedings of the International Conference on Image Processing, Computer Vision, and Pattern Recognition (IPCV), Las Vegas, NV, USA, 16–19 July 2012; p. 1.
33. Royo Chic, P.; Pastor Llorens, E.; Solé, M.; Lema Rosas, J.M.; López Rubio, J.; Barrado Muxí, C. UAS Architecture for Forest Fire Remote Sensing. Available online: https://upcommons.upc.edu/handle/2117/16549 (accessed on 31 July 2021).
34. Kinaneva, D.; Hristov, G.; Raychev, J.; Zahariev, P. Application of Artificial Intelligence in UAV Platforms for Early Forest Fire Detection. In Proceedings of the 2019 27th National Conference with International Participation (TELECOM), Sofia, Bulgaria, 30–31 October 2019; pp. 50–53.
35. Giordan, D.; Adams, M.S.; Aicardi, I.; Alicandro, M.; Allasia, P.; Baldo, M.; De Berardinis, P.; Dominici, D.; Godone, D.; Hobbs, P.; et al. The Use of Unmanned Aerial Vehicles (UAVs) for Engineering Geology Applications. Bull. Eng. Geol. Environ. 2020, 79, 3437–3481. [CrossRef]
36. Liang, H. Advances in Multispectral and Hyperspectral Imaging for Archaeology and Art Conservation. Appl. Phys. A 2012, 106, 309–323. [CrossRef]
37. White, R.A.; Bomber, M.; Hupy, J.P.; Shortridge, A. UAS-GEOBIA Approach to Sapling Identification in Jack Pine Barrens after Fire. Drones 2018, 2, 40. [CrossRef]
38. Qin, J.; Chao, K.; Kim, M.S.; Lu, R.; Burks, T.F. Hyperspectral and Multispectral Imaging for Evaluating Food Safety and Quality. J. Food Eng. 2013, 118, 157–171. [CrossRef]
39. Akhloufi, M.A.; Toulouse, T.; Rossi, L.; Maldague, X. Multimodal Three-Dimensional Vision for Wildland Fires Detection and Analysis. In Proceedings of the 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA), Montreal, QC, Canada, 28 November–1 December 2017; pp. 1–6.
40. Nguyen, A.; Nguyen, H.; Tran, V.; Pham, H.X.; Pestana, J. A Visual Real-Time Fire Detection Using Single Shot MultiBox Detector for UAV-Based Fire Surveillance. In Proceedings of the 2020 IEEE Eighth International Conference on Communications and Electronics (ICCE), Phu Quoc Island, Vietnam, 13–15 January 2021; pp. 338–343.
41. Sherstjuk, V.; Zharikova, M.; Dorovskaja, I.; Sheketa, V. Assessing Forest Fire Dynamics in UAV-Based Tactical Monitoring System. In International Scientific Conference—Intellectual Systems of Decision Making and Problem of Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2020; pp. 285–301.
42. Khachumov, V.M.; Portnov, E.M.; Fedorov, P.A.; Kasimov, R.A.; Linn, A.N. Development of an Accelerated Method for Calculating Streaming Video Data Obtained from UAVs. In Proceedings of the 2020 8th International Conference on Control, Mechatronics and Automation (ICCMA), Moscow, Russia, 6–8 November 2020; pp. 212–216.
43. Giitsidis, T.; Karakasis, E.G.; Gasteratos, A.; Sirakoulis, G.C. Human and Fire Detection from High Altitude UAV Images. In Proceedings of the 2015 23rd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing, Turku, Finland, 4–6 March 2015; pp. 309–315.
44. Amanatiadis, A.; Bampis, L.; Karakasis, E.G.; Gasteratos, A.; Sirakoulis, G. Real-Time Surveillance Detection System for Medium-Altitude Long-Endurance Unmanned Aerial Vehicles. Concurr. Comput. Pract. Exp. 2018, 30, e4145. [CrossRef]
45. Fuentes, S.; Tongson, E.J.; De Bei, R.; Gonzalez Viejo, C.; Ristic, R.; Tyerman, S.; Wilkinson, K. Non-Invasive Tools to Detect Smoke Contamination in Grapevine Canopies, Berries and Wine: A Remote Sensing and Machine Learning Modeling Approach. Sensors 2019, 19, 3335. [CrossRef]
46. Athanasis, N.; Themistocleous, M.; Kalabokidis, K.; Chatzitheodorou, C. Big Data Analysis in UAV Surveillance for Wildfire Prevention and Management. In European, Mediterranean, and Middle Eastern Conference on Information Systems; Springer: Berlin/Heidelberg, Germany, 2018; pp. 47–58.
47. Shao, Z.; Li, Y.; Deng, R.; Wang, D.; Zhong, X. Three-Dimensional-Imaging Thermal Surfaces of Coal Fires Based on UAV Thermal Infrared Data. Int. J. Remote Sens. 2021, 42, 672–692. [CrossRef]
48. Hossain, F.A.; Zhang, Y.M.; Tonima, M.A. Forest Fire Flame and Smoke Detection from UAV-Captured Images Using Fire-Specific Color Features and Multi-Color Space Local Binary Pattern. J. Unmanned Veh. Syst. 2020, 8, 285–309. [CrossRef]
49. Raveendran, R.; Ariram, S.; Tikanmäki, A.; Röning, J. Development of Task-Oriented ROS-Based Autonomous UGV with 3D Object Detection. In Proceedings of the 2020 IEEE International Conference on Real-Time Computing and Robotics (RCAR), Asahikawa, Japan, 28–29 September 2020; pp. 427–432.
50. Meng, L.; Peng, Z.; Zhou, J.; Zhang, J.; Lu, Z.; Baumann, A.; Du, Y. Real-Time Detection of Ground Objects Based on Unmanned Aerial Vehicle Remote Sensing with Deep Learning: Application in Excavator Detection for Pipeline Safety. Remote Sens. 2020, 12, 182. [CrossRef]
51. Kyrkou, C.; Theocharides, T. EmergencyNet: Efficient Aerial Image Classification for Drone-Based Emergency Monitoring Using Atrous Convolutional Feature Fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 1687–1699. [CrossRef]
52. Qiao, L.; Zhang, Y.; Qu, Y. Pre-Processing for UAV Based Wildfire Detection: A Loss U-Net Enhanced GAN for Image Restoration. In Proceedings of the 2020 2nd International Conference on Industrial Artificial Intelligence (IAI), Shenyang, China, 23–25 October 2020; pp. 1–6.
53. Hossain, F.A.; Zhang, Y.; Yuan, C.; Su, C.Y. Wildfire Flame and Smoke Detection Using Static Image Features and Artificial Neural Network. In Proceedings of the 2019 1st International Conference on Industrial Artificial Intelligence (IAI), Shenyang, China, 22–26 July 2019; pp. 1–6.
54. Garcia Millan, V.E.; Rankine, C.; Sanchez-Azofeifa, G.A. Crop Loss Evaluation Using Digital Surface Models from Unmanned Aerial Vehicles Data. Remote Sens. 2020, 12, 981. [CrossRef]
55. Rajagopal, A.; Ramachandran, A.; Shankar, K.; Khari, M.; Jha, S.; Lee, Y.; Joshi, G.P. Fine-Tuned Residual Network-Based Features with Latent Variable Support Vector Machine-Based Optimal Scene Classification Model for Unmanned Aerial Vehicles. IEEE Access 2020, 8, 118396–118404. [CrossRef]
56. Jiao, Z.; Zhang, Y.; Xin, J.; Yi, Y.; Liu, D.; Liu, H. Forest Fire Detection with Color Features and Wavelet Analysis Based on Aerial Imagery. In Proceedings of the 2018 Chinese Automation Congress (CAC), Xi’an, China, 30 November–2 December 2018; pp. 2206–2211.
57. Cruz, H.; Eckert, M.; Meneses, J.; Martínez, J.F. Efficient Forest Fire Detection Index for Application in Unmanned Aerial Systems (UASs). Sensors 2016, 16, 893. [CrossRef] [PubMed]
58. Nagaraj, K.; Sadashiva, T.G.; Ramani, S.K.; Iyengar, S.S. Image Feature Based Smoke Recognition in Mines Using Monocular Camera Mounted on Aerial Vehicles. In Proceedings of the 2017 2nd International Conference on Emerging Computation and Information Technologies (ICECIT), Tumakuru, India, 15–16 December 2017; pp. 1–6.
59. Zheng, J.; Cao, X.; Zhang, B.; Huang, Y.; Hu, Y. Bi-Heterogeneous Convolutional Neural Network for UAV-Based Dynamic Scene Classification. In Proceedings of the 2017 Integrated Communications, Navigation and Surveillance Conference (ICNS), Herndon, VA, USA, 18–20 April 2017; pp. 5B4-1–5B4-12.
60. Toulouse, T.; Rossi, L.; Campana, A.; Celik, T.; Akhloufi, M.A. Computer Vision for Wildfire Research: An Evolving Image Dataset for Processing and Analysis. Fire Saf. J. 2017, 92, 188–194. [CrossRef]
61. Kamilaris, A.; van den Brink, C.; Karatsiolis, S. Training Deep Learning Models via Synthetic Data: Application in Unmanned Aerial Vehicles. In International Conference on Computer Analysis of Images and Patterns; Springer: Berlin/Heidelberg, Germany, 2019; pp. 81–90.
62. Kar, A.; Prakash, A.; Liu, M.Y.; Cameracci, E.; Yuan, J.; Rusiniak, M.; Acuna, D.; Torralba, A.; Fidler, S. Meta-Sim: Learning to Generate Synthetic Datasets. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27–28 October 2019; pp. 4551–4560.
63. Péteri, R.; Fazekas, S.; Huiskes, M.J. DynTex: A Comprehensive Database of Dynamic Textures. Pattern Recognit. Lett. 2010, 31, 1627–1632. [CrossRef]
64. Derpanis, K.G.; Lecce, M.; Daniilidis, K.; Wildes, R.P. Dynamic Scene Understanding: The Role of Orientation Features in Space and Time in Scene Classification. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 1306–1313.
65. Shroff, N.; Turaga, P.; Chellappa, R. Moving Vistas: Exploiting Motion for Describing Scenes. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 1911–1918.
66. Foggia, P.; Saggese, A.; Vento, M. Real-Time Fire Detection for Video-Surveillance Applications Using a Combination of Experts Based on Color, Shape, and Motion. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 1545–1556. [CrossRef]
67. Gade, R.; Moeslund, T.B. Thermal cameras and applications: A survey. Mach. Vis. Appl. 2014, 25, 245–262. [CrossRef]