
Artificial Intelligence based Missile Guidance System

Darshan Diwani (1), Archana Chougule (2), Debajyoti Mukhopadhyay (3)
(1, 2) Sanjay Ghodawat University, Kolhapur, India
(3) WIDiCoReL Research Lab, Mumbai University, Mumbai, India
(1) darshan@enverto.in  (2) chouguleab@gmail.com  (3) debajyoti.mukhopadhyay@gmail.com
Abstract - In the 20th and 21st centuries, wars have been won by the nations with superior air power, for example the U.S. invasions of Afghanistan and Iraq. Such wars caused many civilian deaths, since the missiles heavily used in them were not equipped with autonomy and intelligence. Given the escalating cost of a missile and the potential damage an intruding aircraft can cause, there is a need for a missile guidance system with the autonomy and intelligence to choose and track specified targets on its own, or to choose which of many targets to hit. The major contribution of this paper in this direction is to automate the detection of a target object and the identification of its exact location. The paper proposes the use of artificial intelligence for identification of the object and its location, and explains how this information can be used for exact automated positioning of the missile.
Keywords—Intelligence in missiles, intelligence in guidance systems, object detection for UAVs, YOLO
I. INTRODUCTION
Recent enhancements in Artificial Intelligence, such as the deployment of intelligent agents (IA), hold the promise of improving the performance of guidance systems and giving them intelligence. The word agent denotes an entity that evaluates different options and makes a suitable choice on its own, without human intervention. Intelligent agents are software-based entities that exhibit this behavior. They are distinguished by general qualities such as independence, autonomy and social ability. Such agents fall into the category of Artificial Intelligence and have the capacity to solve complex problems on their own. Intelligent agents can be classified by their roles, for example connection agents, informational agents and priority agents. An informational agent provides access to a huge collection of information sources. A connection agent draws given information and passes it to the users of the system. Researchers have made efforts to use intelligent agents to automate and guide missile vision systems [11], [12]. This paper introduces improved vision guidance for priority-based object tracking and missile direction using the popular YOLO algorithm. The algorithm helps to improve the accuracy of missile target identification and to avoid mis-hits. Detailed information on the YOLO algorithm is given below.
A. YOLO Algorithm
YOLO is a fast, cutting-edge object detection algorithm; the name is an abbreviation of You Only Look Once [6]. The YOLO algorithm applies a neural network to the entire image at once, rather than dividing the image and applying the network to sub-images. The biggest benefit of using YOLO is its speed: it can process 45 frames per second with ease. When trained on real photos of objects and tested on artwork, YOLO beats top detection methods such as R-CNN by a wide margin in accuracy. YOLO is also capable of learning generalized object representations.
B. Working of YOLO Algorithm
Unlike region-based approaches such as the region convolutional neural network (R-CNN), which performs detection on many candidate regions and therefore produces predictions many times for different regions of an image frame, YOLO behaves like a fully convolutional neural network (FCNN): it feeds the image frame to the network once and produces the predictions as output.
Fig. 1. Working of YOLO Algorithm
2020 7th International Conference on Signal Processing and Integrated Networks (SPIN)
978-1-7281-5475-6/20/$31.00 ©2020 IEEE 873
The object bounding box may be larger than a single grid cell in many cases. Because of this, object detection must be reframed as a single regression problem, straight from image pixels to bounding box coordinates and class probabilities. A single CNN predicts, in parallel, N bounding boxes for detected objects and a class probability matrix for those N boxes. The YOLO algorithm is trained on full images and directly optimizes overall detection performance. This unified model has vast benefits over traditional methods of object detection. First, YOLO is extremely fast. Since object detection is reframed as a single regression problem, no complicated pipeline is needed: the convolutional network is simply run on a new image at test time to predict detections. This allows streaming video frames to be processed directly with a latency below 30 milliseconds. Second, the YOLO algorithm reasons globally about the image when making predictions. Sliding-window algorithms and region-proposal-based methods divide the original image into smaller pieces before processing; unlike them, YOLO sees the whole image during training and testing, so it implicitly encodes contextual information about objects and their appearance. Fast R-CNN, a top and currently popular object detection method, mistakes background patches in an image for objects because it cannot see the larger context. YOLO performs much better in this respect, making 50% fewer background errors than Fast R-CNN. YOLO also learns generalized representations of objects and their labels, so it produces few or no additional errors when applied to unexpected images. YOLO divides the entire image into an N x N grid and then obtains the bounding boxes to be drawn around objects, along with predicted probabilities for each of these object regions. The strategy used to obtain these probability maps in YOLO is logistic regression. The obtained bounding boxes are then ranked by the associated probabilities of the object regions. YOLO uses independent logistic classifiers for object class prediction, i.e., to obtain the detected object's label. The YOLO algorithm accepts the entire image frame at once as input and predicts the object bounding box coordinates and object class probabilities for these boxes. The working of YOLO is shown in Figure 1.
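The grid assignment described above, where the cell containing an object's center is responsible for detecting it, can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code; the grid size and frame dimensions are assumed for the example.

```python
def responsible_cell(cx, cy, img_w, img_h, n=7):
    """Return the (row, col) of the N x N grid cell containing the
    object's center (cx, cy), i.e., the cell responsible for
    detecting that object in YOLO."""
    col = min(int(cx * n / img_w), n - 1)
    row = min(int(cy * n / img_h), n - 1)
    return row, col

# An object centered at (320, 240) in a 640x480 frame falls in the
# middle cell of a 7 x 7 grid.
print(responsible_cell(320, 240, 640, 480))  # -> (3, 3)
```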
II. LITERATURE SURVEY
Artificial neural networks have been used as the preferred choice for intelligent missile guidance, including proportional navigation guidance [9]. The use of video streams to acquire and locate targets has the potential to reduce cost compared to the use of active sensors. One such technique presents initial results for a system that uses visual sensing to locate and point at the target [10]. The system uses foreground segmentation to identify new objects or elements in a video stream frame. It then applies Speeded Up Robust Features (SURF) feature detection and a LAN-based IP camera module to determine their position in the real world [1]. Based on an analysis of the flight attributes of a toy rocket, the system can generate the altitude and launch angles that allow a toy missile to intercept the target. A real-time, state-of-the-art object detection and tracking technique for video stream frames has been developed by combining object detection and tracking in a dynamic Kalman model [3]. At the object detection stage, the object of interest is automatically detected from a saliency map computed via the image background cue at each frame; at the tracking stage, a Kalman filter is deployed to obtain a raw prediction of the object state, which is further refined via an on-board detector combined with the saliency map and visual information between two successive frames of the video stream [4]. Compared with existing methods, that approach does not require any manual initialization for tracking, runs much faster than trackers of the same category, and obtains competitive performance on a large number of image sequences. A comprehensive analysis illustrates the effectiveness and exceptional performance of the approach.
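The Kalman predict/update cycle used at the tracking stage can be illustrated with a minimal one-dimensional constant-velocity filter. This is an illustrative sketch, not the code of [3] or [4]; the process and measurement noise values are assumed.

```python
def kalman_1d(z_measurements, dt=1.0, q=1e-3, r=0.25):
    """Track position with a constant-velocity Kalman filter.
    State x = [position, velocity]; z_measurements are noisy positions.
    q is the process noise, r the measurement noise (assumed values)."""
    x = [z_measurements[0], 0.0]           # initial state
    P = [[1.0, 0.0], [0.0, 1.0]]           # state covariance
    out = []
    for z in z_measurements:
        # Predict: x <- F x with F = [[1, dt], [0, 1]], P <- F P F^T + Q
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with a measurement of position only (H = [1, 0])
        s = P[0][0] + r                    # innovation covariance
        k = [P[0][0] / s, P[1][0] / s]     # Kalman gain
        y = z - x[0]                       # innovation
        x = [x[0] + k[0] * y, x[1] + k[1] * y]
        P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
             [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
        out.append(x[0])
    return out

# Smooths a noisy track of an object moving about one unit per frame.
track = kalman_1d([0.1, 0.9, 2.2, 2.8, 4.1, 5.0])
```

In the full tracker the filter's prediction seeds the detector's search region in the next frame, which is what removes the need for manual re-initialization.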
Real-time, state-of-the-art object detection is essential for a vast number of applications of Unmanned Aerial Vehicles (UAVs), such as exploration and surveillance, search-and-rescue, and infrastructure survey. In recent years, Convolutional Neural Networks (CNNs) have stood out as a prominent class of techniques for identifying image content, and are widely considered in the computer vision community to be the standard approach for most such problems [2], [8]. However, object detection algorithms based on CNNs are too complex to run on ordinary processors and are exceedingly computationally demanding; they typically require powerful Graphics Processing Units (GPUs), which draw high power and add considerable weight, especially for a lightweight and low-cost drone. It is therefore preferable to move the computation to an off-board computing cloud, where an R-CNN can be applied to detect hundreds of objects in real time.
III. DESIGN AND SETUP
This section describes the design of the missile guidance system. It covers hardware and software details, the quadcopter system used for streaming, the object detection technique, the rocket motor ignition method and the overall system architecture.
A. Scope of Work
The goal is to design a prototype missile guidance system that autonomously selects targets to hit based on the priorities given to it. The guidance system is tested using a UAV (Unmanned Aerial Vehicle) equipped with toy missiles. The guidance system is designed using the YOLO algorithm. The UAV is built on top of a Naza flight controller, with an IP camera as its payload that streams real-time video to a ground station where the intelligent agent runs.
B. Hardware and Software Used
We use YOLO v3 to detect objects in the image frames received from the camera streaming module carried by the UAV. The objects detected in each frame are analyzed against the priorities given to the intelligent agent, which then chooses targets to hit, if any. The missile prototype is built using Rocket Candy (R-Candy) propellant. A spark plug connected to a NodeMCU/Arduino is used to ignite the missile, again based on commands from the intelligent agent.
Hardware: F450 quadcopter frame, 2200 mAh 4S lithium polymer battery, 4 x 930KV brushless DC motors, 4 x SimonK electronic speed controllers, Arduino Mega, nichrome wire, IP camera.
Software:
Arduino IDE: This integrated development environment is used to program the missile triggering module, which is responsible for igniting the nichrome wire connected to the Rocket Candy motor of the missile.
Spyder IDE: This integrated development environment is used to program the intelligent agent in Python, together with the YOLO deep learning algorithm.
C. Unmanned Aerial Vehicle
Unmanned aerial vehicles (UAVs) are a category of aerial vehicle that can fly without a human pilot on board. Typically, an unmanned vehicle system consists of the aircraft, its payloads and a ground station that controls various aspects of the aircraft. The UAV is built with a NAZA M Lite flight controller, which allows us to test the missile guidance system aerially in real time.
Fig. 2. Quadcopter System Architecture
A quadcopter has four motors fixed on a symmetric frame, with each arm aligned at ninety degrees in the X configuration. Two motors rotate clockwise, while the other two rotate counterclockwise to create the opposing torque required to stay stable. Figure 2 shows the system architecture of the designed quadcopter: each of its components and how the quadcopter works.
D. Camera Streaming Module
An IP camera is a camera unit that sends and receives data over a local area network (LAN) or over the internet. Processing the video stream on the UAV itself requires a large on-board processor, which increases the overall system cost considerably. In this project, the use of an on-board IP camera makes it easy to bring the live stream to the ground control station for further video processing.
Network configuration is a comparatively simple process for most of these devices; generally the setup is as easy as connecting to the Wi-Fi network. Some camera models require a basic understanding of internet technology to get them running, but most can be used as plug-and-play devices. Most camera modules these days come with their own setup tools, and integrating such cameras has become easy because of the documentation they provide.
Fig. 3. Camera calibration setup, showing the defined axes, the
target, and the central point of the captured image
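The calibration setup in Fig. 3 relates a pixel offset from the captured image's central point to a direction along the defined axes. The sketch below illustrates this with a simple pinhole model; the frame size and the horizontal/vertical fields of view are assumed values, not the actual camera's calibration.

```python
import math

def pixel_to_angles(px, py, img_w=640, img_h=480,
                    hfov_deg=60.0, vfov_deg=45.0):
    """Estimate azimuth/elevation (degrees) of a pixel relative to the
    image center using a pinhole camera model. FOV values are assumed."""
    fx = (img_w / 2) / math.tan(math.radians(hfov_deg / 2))
    fy = (img_h / 2) / math.tan(math.radians(vfov_deg / 2))
    az = math.degrees(math.atan((px - img_w / 2) / fx))
    el = math.degrees(math.atan((img_h / 2 - py) / fy))  # image y points down
    return az, el

# The central point of the captured image maps to (0, 0): straight ahead.
print(pixel_to_angles(320, 240))  # -> (0.0, 0.0)
```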
E. Video Processing and Object Detection
The video stream obtained from the camera module is the input to the object detection module, which in this case is YOLO, the network we use for detecting objects. Object detection consists of identifying the locations in the image frame at which certain objects are present, as well as labeling those objects. Alternative methods, like R-CNN and its variants, use a pipeline that processes this task in multiple stages. This makes them slow and hard to optimize, because every stage must be trained separately. The biggest benefit of using YOLO is its speed: it can process 45 frames per second with ease. When it is
trained on real photos of objects and tested on artwork, YOLO beats top detection methods like R-CNN by a wide margin in accuracy. In mAP (mean Average Precision) measured at 0.5 Intersection over Union (IoU), YOLO v3 is on par with Focal Loss but about 4x faster. Moreover, we can easily trade off between accuracy and speed by altering the overall size of the model. The YOLO algorithm accepts the entire image frame at once as input and predicts the object bounding box coordinates and object class probabilities for these boxes. Camera calibration is used to find the exact location of the object in the image, as shown in Figure 3. Figure 4 shows a comparison of the YOLO algorithm with other algorithms.
Fig. 4. YOLO V/s Others
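The IoU threshold used in the mAP comparison above measures the overlap between a predicted box and a ground-truth box. A minimal illustration (boxes given as (x1, y1, x2, y2) corners; not the authors' code):

```python
def iou(a, b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A prediction shifted by half a box width overlaps its ground truth
# with IoU 1/3, so it would not count as correct at the 0.5 threshold.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # -> 0.3333...
```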
F. Rocket Motor Ignition Module
Arduino is a widely used open-source electronics platform based on easy-to-use hardware and software. Arduino boards can read inputs from various sensors and react to those inputs by producing output signals. Arduino is easily programmed using its open-source IDE. The missile prototype, which carries the rocket fuel, is ignited by this triggering module, built around an Arduino-based microcontroller. The target estimates, if any, obtained by the YOLO algorithm are sent to this module. The system uses a nichrome-wire ignition system: after receiving the coordinates from the feature extraction module, the Arduino sends a high signal to a relay, which ignites the nichrome wire connected to the rocket fuel. Figure 5 shows the pinout for the rocket motor ignition module.
Fig. 5. Rocket Motor Ignition Module PinOut
G. System Architecture
Figure 6 shows the overall system architecture of the proposed system: each component, how the system works, and the flow through the system. The video stream is taken from the IP camera and passes through pre-processing stages that enhance the features of the video frames. The processed frames are then passed to the object detection module, which in our case is the YOLO algorithm itself. YOLO analyzes every frame and tries to detect objects using the trained model. YOLO then passes the detected objects to the intelligent agent, which checks the tagged labels and, based on their priorities, signals the rocket motor ignition module.
Fig. 6. Block diagram of system architecture
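The agent's priority check in this flow can be sketched as a label-to-priority lookup over the detections of a frame. This is an illustrative sketch under assumed labels, priorities and a confidence threshold, not the authors' implementation; in the real system the chosen detection's coordinates would be forwarded to the ignition module.

```python
# Assumed priority table: higher number = higher-risk target.
PRIORITIES = {"tank": 3, "truck": 2, "car": 1}
CONFIDENCE_THRESHOLD = 0.5  # assumed; ignore weak detections

def choose_target(detections):
    """Pick the highest-priority, then highest-confidence detection.
    Each detection is (label, confidence, bounding_box)."""
    candidates = [d for d in detections
                  if d[0] in PRIORITIES and d[1] >= CONFIDENCE_THRESHOLD]
    if not candidates:
        return None  # no valid target: do not signal the ignition module
    return max(candidates, key=lambda d: (PRIORITIES[d[0]], d[1]))

frame = [("car", 0.9, (10, 10, 50, 50)),
         ("tank", 0.7, (80, 40, 160, 90)),
         ("tank", 0.4, (200, 40, 260, 90))]  # below threshold, ignored
print(choose_target(frame)[0])  # -> tank
```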
IV. IMPLEMENTATION OF THE INTELLIGENT AGENT
This section discusses how the intelligent agent is implemented, from training and testing through feature extraction.
A. Training of the System
Every deep learning task needs training of the system, which in turn requires a dataset to work on. The system is implemented using images from Google's Open Images V4 dataset [7] and the publicly available COCO dataset. Open Images is a huge dataset with almost 500 classes of objects and their labels, and it also contains bounding box annotations for these objects. The open-source code, named Darknet, is a neural network framework written in CUDA and C.
B. Train-Test Split
As with every machine learning algorithm, the data must be split into a training set and a test set in order to evaluate results.
1. Training set: a random portion of the dataset used to train the model. Depending on the requirement, 70-90% of the data is randomly chosen for this set.
2. Test set: a random portion of the dataset used to test the model. Depending on the requirement, 10-30% of the data is randomly chosen for this set.
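The split described above can be sketched as follows; the 80/20 ratio and the fixed seed are assumed for the example, not values taken from the paper.

```python
import random

def train_test_split(items, train_fraction=0.8, seed=42):
    """Randomly partition items into a training set and a test set."""
    rng = random.Random(seed)   # fixed seed for reproducibility
    shuffled = items[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

train, test = train_test_split(list(range(100)))
print(len(train), len(test))  # -> 80 20
```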
Fig. 7. Prototype of UAV
C. Data Annotation
Data annotation is a technique used in machine learning and computer vision to label data in such a way that a machine can understand it. This step is usually done by humans using data annotation software that stores the large amount of data generated. The bounding box, the most commonly used technique for image annotation, highlights an object in the image to make it recognizable to machines, which are trained to learn from these data and give relevant output. The annotated images are used as datasets in machine learning when building an AI-based model that can work by itself, using deep learning, to help humans perform various tasks without human intervention.
Fig. 8. Prototype of Guidance System
D. Feature Extraction and Recognition
We unify the separate components of object detection into a single neural network. Our network uses features from the entire image to predict each bounding box. It also predicts all bounding boxes across all classes for an image simultaneously. This means our network reasons globally about the full image and all the objects in it. The YOLO design enables end-to-end training and real-time speeds while maintaining high average precision. Our system divides the input image into an S x S grid. If the center of an object falls into a grid cell, that grid cell is responsible for detecting that object. Prototypes of the UAV and the missile guidance system were developed for testing the proposed approach, as shown in Figures 7 and 8.
V. RESULTS
Autonomous missile guidance systems can be helpful in modern wars to minimize unwanted destruction. The proposed system aims to lower this destruction by providing an easy-to-train intelligent agent that chooses the target to hit without any human intervention. The proposed methodology is based on a priority-trained intelligent agent. The system processes video frames and detects objects in real time. The detected objects are then passed to the intelligent agent, which chooses the high-risk target to hit based on the priorities given to it during training and labeling of the objects. Figure 9 shows an example of an object detected using the proposed approach and the developed object tracking system for UAVs.
VI. FUTURE SCOPE
In order to achieve more accuracy in detecting objects from a distance, ZigBee can be used instead of Wi-Fi. ZigBee's high accuracy and quick response can make this system more powerful.
VII. CONCLUSION
An Artificial Intelligence based missile guidance system was successfully executed using deep learning and artificial intelligence. The method takes a video stream as input, detects the objects in each video frame in real time, and then decides on one high-risk target among many to hit.
Fig. 9. Object Detection Frame
REFERENCES
[1] Jangwon Lee, Jingya Wang and David Crandall. Real-time object detection for unmanned aerial vehicles based on cloud-based convolutional neural networks. International Research Journal of Engineering and Technology, 2018.
[2] Kit Axelrod, Ben Itzstein and Michael West. A self-targeting missile system using computer vision. International Journal of Innovative Research in Science, Engineering and Technology, 02, 2016.
[3] Pengfei Zhu, Longyin Wen, Xiao Bian, Haibin Ling and Qinghua Hu. Vision meets drones: A challenge. International Journal of Computer Science and Network, 04, 2018.
[4] V. Krishnabrahmam, N. Bharadwaj and K. N. Swamy. Guided missile with intelligent agent. Defence Science Journal, 50(1):25-30, 2009.
[5] Yuanwei Wu, Yao Sui and Guanghui Wang. Vision-based real-time aerial object localization and tracking for UAV sensing system. International Journal of Innovative Research in Computer and Communication Engineering, 07, 2018.
[6] Joseph Redmon, Santosh Divvala, Ross Girshick and Ali Farhadi. You Only Look Once: Unified, real-time object detection. arXiv:1506.02640v5 [cs.CV], 9 May 2016.
[7] Google Open Images Dataset, https://opensource.google/projects/open-images-dataset
[8] Jangwon Lee, Jingya Wang, David Crandall, Selma Sabanovic and Geoffrey Fox. Real-time object detection for unmanned aerial vehicles based on cloud-based convolutional neural networks. School of Informatics and Computing, Indiana University, Bloomington, IN 47408, USA.
[9] Arvind Rajagopalan, Farhan A. Faruqi and D. (Nanda) Nandagopal. Intelligent missile guidance using artificial neural networks. Artificial Intelligence Research, 4(1), 2015.
[10] Kit Axelrod, Ben Itzstein and Michael West. A self-targeting missile system using computer vision. Experimental Robotics Major Project, University of Sydney.
[11] Qiang Gao, Yijie Zou, Jianhua Zhang, Sheng Liu, Zhen Xie and Shengyong Chen. Missile vision guidance based on adaptive image filtering. 2015 IEEE International Conference on Information and Automation. DOI: 10.1109/ICInfA.2015.7279500.
[12] Bahaaeldin Gamal Abdelaty, Mohamed Abdallah Soliman and Ahmed Nasr Ouda. Reducing human effort of the optical tracking of anti-tank guided missile targets via embedded tracking system design. American Journal of Artificial Intelligence, 2(2):30-35, 2018. DOI: 10.11648/j.ajai.20180202.13.
... Unmanned combat platforms supported by artificial intelligence technology are developing strongly. UAV-borne weapons and equipment with high sensing and strong strike capabilities have become the key to changing the battlefield pattern [1]. Among them, the high-precision navigation and positioning system is the core for unmanned combat platforms. ...
Article
Full-text available
Scene-matching navigation is one of the essential technologies for achieving precise navigation in satellite-denied environments. Selecting suitable-matching areas is crucial for planning trajectory and reducing yaw. Most traditional selection methods of suitable-matching areas use hierarchical screening based on multiple feature indicators. However, these methods rarely consider the interrelationship between different feature indicators and use the same set of screening thresholds for different categories of images, which has poor versatility and can easily cause mis-selection and omission. To solve this problem, a suitable-matching areas’ selection method based on multi-level saliency is proposed. The matching performance score is obtained by fusing several segmentation levels’ salient feature extraction results and performing weighted calculations with the sub-image edge density. Compared with the hierarchical screening methods, the matching performance of the candidate areas selected by our algorithm is at least 22.2% higher, and it also has a better matching ability in different scene categories. In addition, the number of missed and wrong selections is significantly reduced. The average matching accuracy of the top three areas selected by our method reached 0.8549, 0.7993, and 0.7803, respectively, under the verification of multiple matching algorithms. Experimental results show this paper’s suitable-matching areas’ selection method is more robust.
... The guidance law is of importance for the missiles to meet the demand of the military task [1−3]. In view that the external threats may exist to destroy the operation [4], the regular guidance law under such situation cannot satisfy the strict requirements [5,6]. Therefore, it is necessary to make the guidance strategy more flexible and intelligent [7]. ...
Article
The guidance strategy is an extremely critical factor in determining the striking effect of the missile operation. A novel guidance law is presented by exploiting the deep reinforcement learning (DRL) with the hierarchical deep deterministic policy gradient (DDPG) algorithm. The reward functions are constructed to minimize the line-of-sight (LOS) angle rate and avoid the threat caused by the opposed obstacles. To attenuate the chattering of the acceleration, a hierarchical reinforcement learning structure and an improved reward function with action penalty are put forward. The simulation results validate that the missile under the proposed method can hit the target successfully and keep away from the threatened areas effectively.
... It is the current study direction of all the researchers, programmers and developers to transform the human society into an intelligent society that can make its maximum decisions without human involvement. Artificial intelligence is being studied and tested in almost all fields of life in the form of intelligent transportation system (Hasan et al., 2019), intelligent military communication and detection systems (Le et al., 2020;Zou et al., 2019;Fu et al., 2020;Allahham et al., 2020;Diwani et al., 2020;Jeong et al., 2020), medical department (Tiwari et al., 2019;Fu, 2019;Aljurayfani et al., 2019), fighting crime and terrorism (Ionescu et al., 2020), smart animal care (Cheng, 2019) and many more. But currently, it is at its very early stage of development due to the lack of understanding and trust of people on its data acquiring, reasoning and execution capability. ...
Article
Physical layer security (PLS) has proven to be a potential solution for enhancing the security performance of future 5G networks, which promises to fulfill the demands of increasing user traffic. Preventing eavesdroppers from overhearing and stealing useful information in such high traffic environments is as challenging as eliminating them from the network. The goal of this survey is to present a comprehensive study of the latest PLS works proposed to enhance the security performance in different 5G technologies. The survey starts by first giving a detailed introduction and overview of existing surveys that explicitly or partially discuss PLS in 5G and its emerging technologies. Many researchers have presented a number of PLS schemes, using either a separate technology such as Multiple-input-multiple-output (MIMO), Millimeter Wave (mmWave), Radio frequency (RF), Non-orthogonal multiple access (NOMA), Visible light communication (VLC), etc., or a combination of two or more technologies, for securing each field of future 5G networks such as Heterogeneous networks (HetNets), Device-to-Device (D2D), Internet-of-Things (IoT), Cognitive radio network (CRN), Unmanned Aerial Network (UAV), etc. After summarizing the existing surveys, we present a detailed overview on the PLS research works performed till now in HetNets, with respect to its different underlaying technologies, as well as in other emerging 5G technologies. Then, optimization ontology is presented that discusses different security metrics used for measuring PLS performance. Different from rest of the surveys, our survey includes a comprehensive discussion regarding the proposed PLS techniques based on artificial intelligence and machine learning techniques, especially highlighting the works performed using reinforcement learning and deep learning algorithms, allowing us to understand how artificial intelligence can help to achieve better PLS. 
Towards the end, we discuss numerous challenges being encountered in practical implementation of PLS techniques, and propose different interesting areas that can be opted as future research direction.
Chapter
Full-text available
Proponents of artificial intelligence boast its many promises, including the potential for creativity. Whether the realities of artificial cognition align with these promises, however, remains hotly debated. In this chapter, we explore the role of artificial intelligence in creative problem solving through the lens of cognition. Through this lens, we advance the argument that, at present, creative problem solving remains a distinctly human capability. Specifically, we examine how artificial cognition can and cannot engage in each stage of creative problem solving, as well as the underlying mechanisms of divergent and convergent thinking. Although we find little evidence to support the creativity of artificial cognition, we advance several ways in which artificial cognition can augment human cognition to enhance creative problem solving.
Article
Full-text available
Planning is a critical part of project management work, that requires estimates of effort for a given project. Given the importance of meeting delivery deadlines while maintaining quality levels, the imperative to monitor and control the evolution of projects and the uncertainty generated by estimation, the need to create methods to solve these issues has arisen, which has aroused the interest of companies dedicated to software production. Researchers have developed machine learning algorithms, which allow a more accurate prediction of the effort to adjust the planning. Recently, techniques have been defined in the software industry, where artificial intelligence and algorithmic models are combined for effort estimation. This article presents the state of the art in the use of artificial neural networks for this purpose. A compilation of academic papers where neural networks are combined with hybrid algorithms based on the behavior of animals and insects for network learning was carried out, demonstrating the trend of its utilization to optimize effort estimation during early planning in software development projects.
Chapter
In this century, every government in the world is more concern about forest degradation and deforestation. Nowadays, the world population is increasing at a rapid rate. There are many countries which are economically developed but suffering from adverse effects of pollution. So, it has become essential for growing a sufficient number of trees for feeding oxygen to such a huge population. But with the time passing by, due to deforestation, the oxygen level is depleting to a great extent. As trees are the most important source of oxygen, so there is a need to keep a track of the number of trees in a particular area. A manual survey of tracking trees is practically impossible and costly. In this proposed approach, the web-based software is developed to count the number of trees in a particular area of town. The detection and counting of trees are done using TensorFlow Object Detection API to train dataset of Google Earth Image using Faster Region Convolutional Neural Network (Faster-RCNN) and Single Shot Multi-Box Detector (SSD) with InceptionV2 technique. This technique will save a large amount of manual work for monitoring/counting number of trees.
Article
Reducing the human role in the firing process of physical military systems is a way to improve overall system performance and meet operational requirements, especially for the anti-tank guided missile (ATGM). In second-generation ATGM systems, the human operator is responsible for following the target until the missile strikes it (manual target tracking). ATGM performance is measured by achieving an acceptable flight trajectory with minimum miss distance, the distance between the center of the target and the impact point. This paper is dedicated to the design and implementation of an embedded tracking system capable of dealing with slow-moving objects, carried out as a step toward reducing the human operator's role during operation and upgrading the second-generation ATGM system to a third-generation (automatic target tracking) system. The present work takes advantage of System on Chip (SoC) technology, including embedded Linux systems, in real-time computer vision applications. The nonlinear flight simulation model of the intended missile system is presented in a MATLAB environment. The tracking algorithm is written in Python with the aid of the OpenCV library and implemented on an embedded Raspberry Pi (RPi) system. A hardware-in-the-loop experimental test is carried out to evaluate and validate the proposed methodology, achieving the overall system requirement with an acceptable flight trajectory and minimum miss distance.
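The miss-distance metric defined above can be sketched for a trajectory sampled at discrete time steps: take the minimum Euclidean distance between sampled missile positions and the target center. A real evaluation would interpolate between samples for the true closest approach; this discrete version is a simplification.

```python
import math

def miss_distance(trajectory, target):
    """Minimum Euclidean distance between sampled missile positions
    and the target center -- the discrete form of the metric above."""
    return min(math.dist(p, target) for p in trajectory)
```

Usage: for a fly-by path `[(10, 5), (5, 2), (1, 0.5), (-3, -1)]` against a target at the origin, the closest sample is `(1, 0.5)`.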
Article
In this paper we present a large-scale visual object detection and tracking benchmark, named VisDrone2018, aiming at advancing visual understanding tasks on the drone platform. The images and video sequences in the benchmark were captured over various urban and suburban areas of 14 different cities across China from north to south. Specifically, VisDrone2018 consists of 263 video clips and 10,209 images (no overlap with video clips) with rich annotations, including object bounding boxes, object categories, occlusion, truncation ratios, etc. With an intensive amount of effort, our benchmark has more than 2.5 million annotated instances in 179,264 images/video frames. Being the largest such dataset ever published, the benchmark enables extensive evaluation and investigation of visual analysis algorithms on the drone platform. In particular, we design four popular tasks with the benchmark, including object detection in images, object detection in videos, single object tracking, and multi-object tracking. All these tasks are extremely challenging in the proposed dataset due to factors such as occlusion, large scale and pose variation, and fast motion. We hope the benchmark will largely boost research and development in visual analysis on drone platforms.
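Benchmarks like this distribute annotations as comma-separated text files. A minimal parser sketch follows; the field order (left, top, width, height, score, category, truncation, occlusion) matches the VisDrone toolkit's documented format, but treat it as an assumption rather than a guaranteed specification.

```python
def parse_annotation(line):
    """Parse one annotation line (assumed VisDrone-style CSV:
    left, top, width, height, score, category, truncation, occlusion).
    Returns the box converted to (x1, y1, x2, y2) corner form."""
    left, top, w, h, score, cat, trunc, occ = (
        int(v) for v in line.strip().split(",")[:8])
    return {"bbox": (left, top, left + w, top + h),
            "score": score, "category": cat,
            "truncation": trunc, "occlusion": occ}
```

Converting width/height boxes to corner form up front simplifies later IoU-based evaluation.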
Article
The paper focuses on the problem of vision-based obstacle detection and tracking for unmanned aerial vehicle navigation. A real-time object localization and tracking strategy from monocular image sequences is developed by effectively integrating the object detection and tracking into a dynamic Kalman model. At the detection stage, the object of interest is automatically detected and localized from a saliency map computed via the image background connectivity cue at each frame; at the tracking stage, a Kalman filter is employed to provide a coarse prediction of the object state, which is further refined via a local detector incorporating the saliency map and the temporal information between two consecutive frames. Compared to existing methods, the proposed approach does not require any manual initialization for tracking, runs much faster than the state-of-the-art trackers of its kind, and achieves competitive tracking performance on a large number of image sequences. Extensive experiments demonstrate the effectiveness and superior performance of the proposed approach.
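The predict/update cycle at the tracking stage can be illustrated with a minimal constant-velocity Kalman filter in one dimension (the cited work uses a richer dynamic model fused with saliency detection; the matrices below are textbook choices, not the paper's).

```python
import numpy as np

# Constant-velocity Kalman filter, state = [position, velocity], dt = 1.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])               # we only measure position
Q = np.eye(2) * 1e-4                     # process noise (assumed small)
R = np.array([[1.0]])                    # measurement noise

def kalman_track(measurements):
    x = np.zeros((2, 1))                 # initial state guess
    P = np.eye(2) * 100.0                # large initial uncertainty
    estimates = []
    for z in measurements:
        # Predict: coarse guess of where the object moved.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update: refine the prediction with the new measurement.
        y = np.array([[z]]) - H @ x            # innovation
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates

# Target moving at 2 units/frame; the filter locks on within a few frames.
est = kalman_track([2.0 * t for t in range(1, 21)])
```

In the paper's pipeline the update step is further refined by a local saliency-based detector rather than a raw position measurement.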
Article
Missile guidance systems using the Proportional Navigation (PN) guidance law are limited in performance in supporting a wide class of engagement scenarios with varying mission and target parameters. To surpass this limitation, the authors propose an Artificial Neural Network (ANN) to substitute for PN guidance. The ANN-based system enables learning, adaptation, and faster throughput, and thus equips the guidance system with capability akin to intelligent biological organisms. This improvement could remove the barrier of limitations on allowable mission scope. In this paper, a Multi-Layer Perceptron (MLP) has been selected to implement the ANN-based approach for replacing PN guidance. Attempts to replace PN guidance using an MLP are limited in the literature and warrant greater attention due to significant theoretical development in the MLP field in recent times. It is shown in this paper that the MLP-based guidance law can effectively substitute for PN across a wide range of engagement scenarios with variations in initial conditions. A foundational argument to justify using an MLP to substitute for PN is provided. Besides this, the design, training, and simulation-based testing approach for an MLP to replace PN has been devised and described. The potential for faster throughput is possible as the MLP nodes process information in parallel when generating PN-like guidance commands. The results clearly demonstrate the potential of MLP in future applications to effectively replace and thus upgrade a wide spectrum of modern missile guidance laws.
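The PN law that the MLP learns to imitate commands lateral acceleration proportional to the line-of-sight (LOS) rate: a = N · Vc · dλ/dt, with navigation gain N and closing velocity Vc. A minimal 2-D sketch (function and variable names are mine, not the paper's):

```python
def pn_acceleration(nav_gain, closing_velocity, los_rate):
    """Classic PN law: commanded lateral acceleration a = N * Vc * dlambda/dt
    (LOS rate in rad/s)."""
    return nav_gain * closing_velocity * los_rate

def los_rate(missile, target, missile_v, target_v):
    """LOS angular rate for a 2-D engagement. With relative position
    r = (rx, ry) and relative velocity v = (vx, vy), the LOS angle is
    lambda = atan2(ry, rx), so dlambda/dt = (rx*vy - ry*vx) / |r|^2."""
    rx, ry = target[0] - missile[0], target[1] - missile[1]
    vx, vy = target_v[0] - missile_v[0], target_v[1] - missile_v[1]
    return (rx * vy - ry * vx) / (rx * rx + ry * ry)
```

For a missile at the origin closing head-on at 100 m/s on a target 1000 m ahead that drifts sideways at 10 m/s, the LOS rate is 0.01 rad/s, and with N = 3 the commanded acceleration is 3 m/s². An MLP substitute would be trained to map the same inputs (LOS rate, closing geometry) to this command.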
Conference Paper
A novel target tracking algorithm based on adaptive image filtering is proposed to realize missile vision guidance. According to the specific conditions of the application, we propose an adaptive filter, a light and effective motion model, and a fusion algorithm. Small target scale at the start, large scale variation during the process, and changing complex backgrounds can all be handled effectively by our algorithm. Simulated experiments show that our algorithm performs well, with a 15% distance error ratio and more than 70% overlap on average. The real-time processing speed is faster than 300 FPS.
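The flavor of an adaptive tracking filter can be sketched with an alpha-beta tracker whose position gain grows when the innovation (prediction residual) is large, so the filter reacts quickly to maneuvers and smooths noise otherwise. This is a generic stand-in for the paper's (unspecified) adaptive filtering scheme; the gain schedule is an assumption.

```python
def adaptive_ab_track(measurements, alpha=0.5, beta=0.1):
    """Alpha-beta tracker with a residual-driven position gain.
    State is (position x, velocity v), dt = 1 per measurement."""
    x, v = measurements[0], 0.0
    out = []
    for z in measurements[1:]:
        xp = x + v                            # predict
        r = z - xp                            # innovation
        a = min(0.9, alpha * (1 + abs(r)))    # adapt gain to residual size
        x = xp + a * r                        # correct position
        v = v + beta * r                      # correct velocity
        out.append(x)
    return out
```

On a steadily moving target (measurements 0, 1, ..., 10) the estimate converges toward the true track within a few steps.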
Article
Guided missiles involve the use of a conventional deviated pursuit course such as the proportional navigation algorithm and its variants, which is optimal when the speed advantage of the guided missile is very high and the target maneuvering is minimal. Against present-day aircraft, which employ fly-by-wire technology for high maneuverability and high speed, missiles need to have a much higher speed advantage or to use a combination of artificial intelligence and modern control algorithms. Results of simulation of pursuit and evasion with an autonomous intelligent agent incorporated in the control loop are presented.
Vision meets drones: A challenge
  • Pengfei Zhu
  • Longyin Wen
  • Xiao Bian
  • Haibin Ling
  • Qinghua Hu
Pengfei Zhu, Longyin Wen, Xiao Bian, Haibin Ling, and Qinghua Hu. Vision meets drones: A challenge. International Journal of Computer Science and Network, 04, 2018.
A self-targeting missile system using computer vision
  • Kit Axelrod
  • Ben Itzstein
  • Michael West
Kit Axelrod, Ben Itzstein, and Michael West. A self-targeting missile system using computer vision. International Journal of Innovative Research in Science, Engineering and Technology, 02, 2016.
Intelligent missile guidance using artificial neural networks
  • Arvind Rajagopalan
  • Farhan A. Faruqi