Chapter

Cloud-Based Infiltration Detection System for Military Purposes

Authors:
  • Independent Researcher

Abstract

Massive military manpower is deployed on borders to keep a vigilant eye on possible infiltration from neighboring countries. This traditional approach is prone to errors because of human factors. To make border surveillance more effective, countries have installed CCTV cameras on borders, but such systems are generally passive in nature and require human operators to keep an eye on the captured video footage. This chapter describes a cloud-based approach for infiltration detection in border defense environments. The processing of video data in the cloud and the real-time response are the factors that make the system suitable for military purposes. As far as affordability is concerned, governments can easily bear the expense of establishing a private cloud for implementing visual surveillance. This chapter presents the authors' pertinent research in the field of visual surveillance as well as other state-of-the-art breakthroughs in this area. The chapter is of multi-disciplinary significance in the fields of cloud computing, video and image processing, behavioral sciences, and defense studies.

References
Article
Full-text available
Limitations imposed by the traditional practice in financial institutions of running risk analysis on the desktop mean many rely on models which assume a "normal" Gaussian distribution of events, which can seriously underestimate the real risk. In this paper, we propose an alternative service which uses the elastic capacities of Cloud Computing to escape the limitations of the desktop and produce accurate results more rapidly. Business Intelligence as a Service (BIaaS) in the Cloud takes a dual-service approach to computing risk and pricing for financial analysis. The first type of BIaaS service uses three APIs to simulate the Heston Model to compute risks and asset prices, and computes the volatility (unsystematic risks) and the implied volatility (systematic risks), which can be tracked at any time. The second type of BIaaS service uses two APIs to provide business analytics for stock market analysis and presents results in a visualised format, so that stakeholders without prior knowledge can understand them. A full case study with two sets of experiments is presented to support the validity and originality of BIaaS. Three additional examples are used to support the accuracy of the predicted stock index movement obtained with the Heston Model and its associated APIs. We describe the deployment architecture, together with examples and results which show how our approach improves risk and investment analysis while maintaining accuracy and efficiency and improving performance over desktops.
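The Heston dynamics and the service APIs are not reproduced in the abstract; purely as a hedged illustration of the kind of computation such a risk service would run, below is a minimal Monte Carlo sketch of the Heston stochastic volatility model in Python. All parameter values, the Euler full-truncation scheme, and the value-at-risk read-out are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def heston_paths(s0=100.0, v0=0.04, mu=0.05, kappa=2.0, theta=0.04,
                 xi=0.3, rho=-0.7, T=1.0, steps=252, n_paths=10_000, seed=0):
    """Euler (full-truncation) Monte Carlo simulation of the Heston model:
        dS = mu*S dt + sqrt(v)*S dW1
        dv = kappa*(theta - v) dt + xi*sqrt(v) dW2,  corr(dW1, dW2) = rho
    Parameter values are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    s = np.full(n_paths, s0)
    v = np.full(n_paths, v0)
    for _ in range(steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        v_pos = np.maximum(v, 0.0)              # full truncation keeps variance non-negative
        s *= np.exp((mu - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
        v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
    return s

if __name__ == "__main__":
    terminal = heston_paths()
    print("mean terminal price:", terminal.mean())
    print("1% value-at-risk of simple return:", np.quantile(terminal / 100.0 - 1.0, 0.01))
```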
Article
Full-text available
The Cloud Computing Business Framework (CCBF) is proposed to help organisations achieve good Cloud design, deployment, migration and services. There are four key areas to be addressed: (i) Classification; (ii) Organisational Sustainability Modelling (OSM); (iii) Service Portability; and (iv) Linkage. The focus of each area is described, and we explain how each fits into the CCBF and how they work together. The process that leads to the CCBF is supported by literature and case studies, where examples in each CCBF key area are used to illustrate its effectiveness and contributions to organisations adopting it. The CCBF has been used in several organisations, offering added value and positive impact.
Article
Full-text available
Action recognition has become a very important topic in computer vision, with many fundamental applications in robotics, video surveillance, human-computer interaction, and multimedia retrieval, among others, and a large variety of approaches have been described. The purpose of this survey is to give an overview and categorization of these approaches. We concentrate on approaches that aim at classification of full-body motions, such as kicking, punching, and waving, and we categorize them according to how they represent the spatial and temporal structure of actions; how they segment actions from an input stream of visual data; and how they learn a view-invariant representation of actions.
Article
Full-text available
Detection, tracking, and understanding of moving objects of interest in dynamic scenes have been active research areas in computer vision over the past decades. Intelligent visual surveillance (IVS) refers to an automated visual monitoring process that involves analysis and interpretation of object behaviors, as well as object detection and tracking, to understand the visual events of the scene. The main tasks of IVS include scene interpretation and wide-area surveillance control. Scene interpretation aims at detecting and tracking moving objects in an image sequence and understanding their behaviors. In the wide-area surveillance control task, multiple cameras or agents are controlled in a cooperative manner to monitor tagged objects in motion. This paper reviews recent advances and future research directions for these tasks. The article consists of two parts: the first part surveys image enhancement, moving object detection and tracking, and motion behavior understanding. The second part reviews wide-area surveillance techniques based on the fusion of multiple visual sensors, camera calibration and cooperative camera systems. Keywords: behavior understanding, cooperative camera systems, image interpretation, motion detection, object tracking, wide-area surveillance.
Conference Paper
Full-text available
A framework for modeling and recognition of temporal activities is proposed. The modeling of sets of exemplar activities is achieved by parameterizing their representation in the form of principal components. Recognition of spatio-temporal variants of modeled activities is achieved by parameterizing the search in the space of admissible transformations that the activities can undergo. Experiments on recognition of articulated and deformable object motion from image motion parameters are presented.
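The abstract does not spell out the parameterization, so the following is only a minimal sketch, under the assumption that each exemplar activity is flattened into a fixed-length feature vector and a principal-component basis is fitted over the set of exemplars; the helper names and the SVD-based PCA are illustrative, not the paper's implementation.

```python
import numpy as np

def fit_activity_basis(exemplars, n_components=5):
    """Fit a principal-component basis over a set of exemplar activities.

    exemplars: array of shape (n_exemplars, n_features), each row a flattened
               activity representation (e.g. stacked motion parameters over
               time) -- an illustrative assumption.
    Returns the mean vector and the top principal components (as rows)."""
    X = np.asarray(exemplars, dtype=float)
    mean = X.mean(axis=0)
    # SVD of the centred data; rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(activity, mean, components):
    """Coordinates of a new activity in the exemplar subspace."""
    return components @ (np.asarray(activity, dtype=float) - mean)
```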
Article
Full-text available
In this article, we present Knight, an automated surveillance system deployed in a variety of real-world scenarios ranging from railway security to law enforcement. We also discuss the challenges of developing surveillance systems, present some solutions implemented in Knight that overcome these challenges, and evaluate Knight's performance in unconstrained environments.
Article
Full-text available
A view-based approach to the representation and recognition of human movement is presented. The basis of the representation is a temporal template: a static vector-image where the vector value at each point is a function of the motion properties at the corresponding spatial location in an image sequence. Using aerobics exercises as a test domain, we explore the representational power of a simple, two-component version of the templates: the first value is a binary value indicating the presence of motion, and the second value is a function of the recency of motion in the sequence. We then develop a recognition method that matches temporal templates against stored instances of views of known actions. The method automatically performs temporal segmentation, is invariant to linear changes in speed, and runs in real time on standard platforms.
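As a hedged sketch of such a two-component temporal template, the Python fragment below maintains a binary motion-energy image and a recency-weighted motion-history image with linear decay; the differencing threshold, the decay schedule and the use of simple frame differencing are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def update_templates(prev_mhi, frame, prev_frame, tau=30, diff_thresh=25):
    """One update step of a two-component temporal template.

    frame, prev_frame: greyscale frames as uint8 arrays.
    prev_mhi: motion-history image from the previous step (float array,
              zeros for the first call).
    Returns (mei, mhi): a binary motion-energy image and a recency-weighted
    motion-history image. Threshold and decay are illustrative choices."""
    motion = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) > diff_thresh
    mei = motion.astype(np.float32)                  # binary "motion present" component
    mhi = np.where(motion, float(tau),               # recent motion gets the maximum value
                   np.maximum(prev_mhi - 1.0, 0.0))  # older motion decays linearly
    return mei, mhi
```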
Conference Paper
Visual surveillance systems are becoming heavier day by day in terms of the number of cameras to manage. Managing such systems is a challenge, given that most of them are still managed manually. To save cost and to increase reliability in response-critical applications, visual surveillance is shifting towards cloud-based solutions. This paper proposes a framework for implementing a cloud-based visual surveillance system for human activity recognition. The system works in two ways: video captured by a CCTV camera can be live-streamed to the cloud in real time, or stored videos can be sent to the cloud for video content analysis (VCA). The paper also advocates some improvements in practices and laws pertaining to the cloud in order to make such applications widespread.
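A minimal sketch of the two ingestion modes is given below, assuming a hypothetical cloud VCA endpoint (`VCA_URL` and its API are placeholders, not part of the proposed framework): one helper live-streams JPEG-encoded frames from a camera, the other uploads a stored video file for offline analysis.

```python
import cv2        # OpenCV for camera capture and JPEG encoding
import requests   # simple HTTP client for the hypothetical cloud endpoint

VCA_URL = "https://vca.example.org/analyze"   # hypothetical cloud VCA service

def stream_live(camera_index=0, max_frames=100):
    """Mode 1: live-stream frames from a CCTV/webcam feed to the cloud."""
    cap = cv2.VideoCapture(camera_index)
    try:
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            _, jpeg = cv2.imencode(".jpg", frame)
            requests.post(VCA_URL, data=jpeg.tobytes(),
                          headers={"Content-Type": "image/jpeg"}, timeout=5)
    finally:
        cap.release()

def upload_stored(path):
    """Mode 2: send a stored video file to the cloud for offline VCA."""
    with open(path, "rb") as f:
        requests.post(VCA_URL, files={"video": f}, timeout=60)
```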
Article
Motion segmentation is a crucial step for video analysis and has many applications. This paper proposes a method for motion segmentation based on the construction of a statistical background model. The variance and covariance of pixels are computed to construct the model of the scene background. We perform average frame differencing with this model to extract the objects of interest from the video frames. Morphological operations are used to smooth the object segmentation results. The proposed technique is adaptive to a dynamically changing background caused by changes in lighting conditions and in the scene itself, and it can relearn the background to adapt to these variations. An immediate advantage of the proposed method is its high processing speed of 30 frames per second on large (high-resolution) videos. We compared the proposed method with five other popular object segmentation methods in order to prove the effectiveness of the proposed technique. Experimental results demonstrate the novelty of the proposed method in terms of various performance parameters. The method can segment the video stream in real time when the background changes, lighting conditions vary, and even in the presence of clutter and occlusion.
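The exact statistical model is not given in the abstract; the sketch below illustrates the general idea with a per-pixel running mean/variance background model, slow relearning of the background, and morphological smoothing of the foreground mask. The learning rate, thresholds and kernel size are assumptions for illustration, not the paper's values.

```python
import cv2
import numpy as np

class RunningBackground:
    """Per-pixel running mean/variance background model for greyscale frames
    (an illustrative sketch, not the paper's exact formulation). Pixels far
    from the mean, measured in standard deviations, are labelled foreground."""

    def __init__(self, first_frame, alpha=0.01, k=2.5):
        f = first_frame.astype(np.float32)
        self.mean = f
        self.var = np.full_like(f, 25.0)   # initial variance guess (assumption)
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        f = frame.astype(np.float32)
        diff = f - self.mean
        fg = (np.abs(diff) > self.k * np.sqrt(self.var)).astype(np.uint8) * 255
        # Relearn the background slowly so lighting changes are absorbed.
        self.mean += self.alpha * diff
        self.var = (1 - self.alpha) * self.var + self.alpha * diff * diff
        # Morphological opening/closing to smooth the segmentation mask.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)
        return cv2.morphologyEx(fg, cv2.MORPH_CLOSE, kernel)
```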
Conference Paper
In this paper, we describe a novel template-matching-based approach for recognition of different human activities in a video sequence. We model the background in the scene using a simple statistical model and extract the foreground objects present in the scene. The matching templates are constructed using motion history images (MHI) and spatial silhouettes for recognizing activities like walking, standing, bending, sleeping and jogging in a video sequence. Experimental results demonstrate that the proposed method can recognize these activities accurately on the standard KTH database as well as on our own database.
Conference Paper
Human activity recognition is a challenging area of research because of its various potential applications in visual surveillance. A spatio-temporal template-matching-based approach for activity recognition is proposed in this paper. We model the background in a scene using a simple statistical model and extract the foreground objects in the scene. Spatio-temporal templates are constructed using motion history images (MHI) and object shape information for different human activities in a video, such as walking, standing, bending, sleeping and jumping. Experimental results show that the method can recognize these multiple activities for multiple objects with accuracy and speed.
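As an illustration of matching such spatio-temporal templates against stored references, the sketch below describes each template by the Hu moments of its MHI and silhouette and classifies by nearest neighbour; the descriptor and distance are assumed for the example and are not necessarily the papers' exact choices.

```python
import cv2
import numpy as np

def template_descriptor(mhi, silhouette):
    """Describe a spatio-temporal template by the Hu moments of its MHI and
    silhouette (an illustrative choice of descriptor)."""
    h1 = cv2.HuMoments(cv2.moments(mhi.astype(np.float32))).ravel()
    h2 = cv2.HuMoments(cv2.moments(silhouette.astype(np.float32))).ravel()
    return np.concatenate([h1, h2])

def classify(mhi, silhouette, reference):
    """Nearest-neighbour match of a test template against stored references.

    reference: dict mapping activity label (e.g. 'walking', 'bending')
               to a descriptor vector built with template_descriptor."""
    d = template_descriptor(mhi, silhouette)
    return min(reference, key=lambda label: np.linalg.norm(d - reference[label]))
```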
Conference Paper
Object tracking is an important task in video processing because of its variety of applications in visual surveillance, human activity monitoring and recognition, traffic flow management, etc. Multiple object detection and tracking in outdoor environments is a challenging task because of the problems raised by poor lighting conditions and variations in human pose, shape, size, clothing, etc. This paper proposes a novel technique for detection and tracking of multiple human objects in a video. A classifier is trained for object detection using Haar-like features from a training image set. Human objects are detected with the help of this trained detector and are tracked using a particle filter. The experimental results show that the proposed technique can detect and track multiple humans in a video adequately fast in the presence of poor lighting conditions and variations in pose, shape, size, clothing, etc., and that it can handle a varying number of human objects in a video at various points in time.
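A hedged sketch of the detection stage is shown below using OpenCV's pre-trained full-body Haar cascade as a stand-in for the paper's own Haar-feature classifier; the cascade file and detection parameters are assumptions for illustration. The detected boxes would then seed a particle filter such as the one sketched after the next abstract.

```python
import cv2

# OpenCV's pre-trained full-body Haar cascade stands in here for the paper's
# own trained classifier (an assumption for illustration only).
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_fullbody.xml")

def detect_people(frame):
    """Return bounding boxes (x, y, w, h) of detected human figures."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)  # mild help under poor lighting
    return cascade.detectMultiScale(gray, scaleFactor=1.05, minNeighbors=3,
                                    minSize=(40, 80))
```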
Article
In this paper we investigate object tracking in video sequences by using the potential of particle filtering to process features from video frames. A particle filter (PF) and a Gaussian sum particle filter (GSPF) are developed based upon multiple information cues, namely colour and texture, which are described with highly nonlinear models. The algorithms rely on likelihood factorisation as a product of the likelihoods of the cues. We demonstrate the advantages of tracking with multiple independent complementary cues compared to tracking with individual cues. The advantages are increased robustness and improved accuracy. The performance of the two filters is investigated and validated over both synthetic and natural video sequences.
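The following is a minimal bootstrap particle-filter step that mirrors the described likelihood factorisation by weighting particles with a product of independent cue likelihoods (for example a colour cue and a texture cue); the random-walk motion model and the caller-supplied cue functions are illustrative assumptions, not the paper's specific models.

```python
import numpy as np

def particle_filter_step(particles, weights, cue_likelihoods, motion_std=5.0, rng=None):
    """One bootstrap particle-filter step with a product of independent cue
    likelihoods, echoing the likelihood factorisation described above.

    particles: (N, d) array of state hypotheses (e.g. object centre x, y).
    weights:   (N,) normalised weights from the previous step.
    cue_likelihoods: list of callables, each mapping particles -> (N,) likelihoods."""
    rng = rng or np.random.default_rng()
    n = len(particles)
    # 1. Resample according to the previous weights.
    idx = rng.choice(n, size=n, p=weights)
    particles = particles[idx]
    # 2. Propagate with a simple random-walk motion model (assumption).
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # 3. Weight by the product of the independent cue likelihoods.
    w = np.ones(n)
    for cue in cue_likelihoods:
        w *= cue(particles)
    w = np.maximum(w, 1e-12)
    weights = w / w.sum()
    # 4. Posterior mean as the state estimate.
    return particles, weights, (weights[:, None] * particles).sum(axis=0)
```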
Article
Visual analysis of human motion is currently one of the most active research topics in computer vision. This strong interest is driven by a wide spectrum of promising applications in many areas such as virtual reality, smart surveillance, perceptual interface, etc. Human motion analysis concerns the detection, tracking and recognition of people, and more generally, the understanding of human behaviors, from image sequences involving humans. This paper provides a comprehensive survey of research on computer-vision-based human motion analysis. The emphasis is on three major issues involved in a general human motion analysis system, namely human detection, tracking and activity understanding. Various methods for each issue are discussed in order to examine the state of the art. Finally, some research challenges and future directions are discussed.
Conference Paper
In this paper, we present a method for detecting and tracking rigid moving objects in a monocular image sequence. The originality of this method lies in a state modelling of this estimation problem, which is solved in a unified way. This hybrid estimation problem leads to nonlinear state equations that are solved by particle filtering. A particle filter is set up for each shape model (mode). It estimates the motion and position parameters, tracks the object in the sequence, and also computes at each time step the probability of each mode.
Conference Paper
This paper describes an end-to-end method for extracting moving targets from a real-time video stream, classifying them into predefined categories according to image-based properties, and then robustly tracking them. Moving targets are detected using the pixel-wise difference between consecutive image frames. A classification metric is applied to these targets with a temporal consistency constraint to classify them into three categories: human, vehicle or background clutter. Once classified, targets are tracked by a combination of temporal differencing and template matching. The resulting system robustly identifies targets of interest, rejects background clutter, and continually tracks over large distances and periods of time despite occlusions, appearance changes and cessation of target motion.
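A short sketch of the two ingredients named in the abstract follows: pixel-wise differencing of consecutive frames for detection, and normalised cross-correlation template matching for tracking. The thresholds, minimum area and OpenCV-based implementation are assumptions for illustration rather than the authors' exact method.

```python
import cv2

def detect_by_differencing(prev_gray, gray, thresh=25, min_area=200):
    """Detect moving targets from the pixel-wise difference of consecutive
    greyscale frames (threshold values are illustrative assumptions)."""
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

def track_by_template(gray, template):
    """Re-locate a previously detected target by normalised cross-correlation
    template matching; returns the best-matching top-left corner."""
    result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    return max_loc
```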
Article
Visual surveillance in dynamic scenes, especially for humans and vehicles, is currently one of the most active research topics in computer vision. It has a wide spectrum of promising applications, including access control in special areas, human identification at a distance, crowd flux statistics and congestion analysis, detection of anomalous behaviors, and interactive surveillance using multiple cameras, etc. In general, the processing framework of visual surveillance in dynamic scenes includes the following stages: modeling of environments, detection of motion, classification of moving objects, tracking, understanding and description of behaviors, human identification, and fusion of data from multiple cameras. We review recent developments and general strategies of all these stages. Finally, we analyze possible research directions, e.g., occlusion handling, a combination of two- and three-dimensional tracking, a combination of motion analysis and biometrics, anomaly detection and behavior prediction, content-based retrieval of surveillance videos, behavior understanding and natural language description, fusion of information from multiple sensors, and remote surveillance.
Article
Pfinder is a real-time system for tracking people and interpreting their behavior. It runs at 10 Hz on a standard SGI Indy computer and has performed reliably on thousands of people in many different physical locations. The system uses a multiclass statistical model of color and shape to obtain a 2D representation of the head and hands in a wide range of viewing conditions. Pfinder has been successfully used in a wide range of applications including wireless interfaces, video databases, and low-bandwidth coding.
Article
In this paper we describe our work on 3-D model-based tracking and recognition of human movement from real images. Our system has two major components. The first component takes real image sequences acquired from multiple views and recovers the 3-D body pose at each time instant. The pose-recovery problem is formulated as a search problem and entails finding the pose parameters of a graphical human model for which its synthesized appearance is most similar to the actual appearance of the real human in the multi-view images. Currently, we use a best-first search technique and chamfer matching as a fast similarity measure between synthesized and real edge images. The second component of our system deals with the representation and recognition of human movement patterns. The recognition of human movement patterns is treated as a classification problem involving the matching of a test sequence with several reference sequences representing prototypical activities. A variation of dynamic time warping is used to perform this matching.
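Chamfer matching between synthesized and real edge images can be sketched as below: the distance transform of the real edge map gives, at every pixel, the distance to the nearest real edge, and the score is the mean of that distance over the synthesized edge pixels. This is a standard formulation used here purely for illustration; the paper's exact variant may differ.

```python
import cv2
import numpy as np

def chamfer_distance(real_edges, synth_edges):
    """Chamfer similarity between a real edge image and a synthesized edge
    image: average distance from each synthesized edge pixel to the nearest
    real edge pixel. Both inputs are binary uint8 images (255 on edges)."""
    # Distance transform of the *non-edge* pixels gives, at every location,
    # the distance to the closest real edge.
    dist = cv2.distanceTransform(cv2.bitwise_not(real_edges), cv2.DIST_L2, 3)
    ys, xs = np.nonzero(synth_edges)
    if len(xs) == 0:
        return np.inf
    return dist[ys, xs].mean()
```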