Article

Modeling of the 3D-view geometry based motion detection system for determining trajectory and angle of the unguided fighter aircraft-rocket

Authors:
  • Indonesian Air Force Academy

Abstract

The motion detection process of a fighter-aircraft rocket is important in the Aerial Weapon Scoring System (AWSS). Currently, available AWSS is based merely on the final position of the rocket in the shooting range, and safety regulations allow observation only from far away from the shooting range. This paper presents a 3D-view geometry based motion detection system to detect the trajectory of the unguided aircraft rocket in the shooting range. In the developed model, not only the final position is observed; the trajectory and angle of the rocket are also captured. The technique uses a 3CCD digital video camera placed safely outside and close to the shooting range. The camera is directed at a central point at a certain altitude above the ground, placed perpendicular to the source of the rocket shot. The 3D-view geometry system creates a similar point of view between the observing camera and the computer screen while observing the rocket in the shooting range. From the processed video, the motion detection system generates two important marks for the final calculation (i.e., the explosion image and the captured rocket image), computes the rocket angle from its trajectory, and then quickly calculates the position based on the Euclidean distance. The results can serve as the basis for strategic information that is very useful to report to the pilot and for producing a more complete analysis of aircraft-rocket firing exercises.
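
The paper's detection pipeline is not reproduced here, but the two final-stage computations the abstract names are simple to illustrate. The sketch below is a minimal, hypothetical example: given the screen-space mark of the captured rocket image and the mark of the explosion, it derives the rocket's impact angle from the trajectory segment between them and the Euclidean distance of the explosion from a reference central point. All names and coordinates are assumptions for illustration, not the authors' data.

```python
import math

# Hypothetical marks extracted by the motion detection stage (not the
# authors' actual data): (x, y) positions in a calibrated ground plane.
rocket_mark = (12.0, 48.0)      # last captured rocket-image position
explosion_mark = (20.0, 30.0)   # detected explosion position
central_point = (0.0, 0.0)      # reference point the camera is aimed at

def impact_angle_deg(p_rocket, p_explosion):
    """Angle of the final trajectory segment, measured from the ground axis."""
    dx = p_explosion[0] - p_rocket[0]
    dy = p_explosion[1] - p_rocket[1]
    return math.degrees(math.atan2(dy, dx))

def euclidean_distance(p, q):
    """Plain Euclidean distance between two 2D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

angle = impact_angle_deg(rocket_mark, explosion_mark)
miss = euclidean_distance(explosion_mark, central_point)
print(f"impact angle: {angle:.1f} deg, distance from central point: {miss:.1f} m")
```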


... To get the angle, the position of the explosion, and the trajectory of the rocket, we used the algorithm of the latest research by Infantono and Wahyudi [2] and Infantono et al. [7], as seen in Figure 5. Infantono et al. [7] presented a model of a 3D-view geometry based motion detection system for observing the unguided aircraft rocket in the shooting range. Their proposed model generated a similar perception between two points of view (i.e., the observing camera and the computer screen) by transforming the view of the Z-axis using several proposed formulas. ...
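
The excerpt above mentions transforming the Z-axis view so that the camera and the on-screen view share the same perception, but it does not quote the formulas. As a rough illustration only, the sketch below applies a standard pinhole-style perspective transform that maps a 3D point in the shooting range to 2D screen coordinates; the focal length and the sample points are assumptions, not the authors' formulas.

```python
import numpy as np

def project_to_screen(point_3d, focal_length=800.0):
    """Standard pinhole-style perspective divide: a 3D point (x, y, z) in
    camera coordinates maps to screen coordinates by scaling x and y with
    focal_length / z. This stands in for the paper's Z-axis view transform."""
    x, y, z = point_3d
    if z <= 0:
        raise ValueError("point must be in front of the camera (z > 0)")
    return np.array([focal_length * x / z, focal_length * y / z])

# Two hypothetical trajectory samples at different depths: the deeper
# point projects closer to the image center, as perspective requires.
print(project_to_screen((2.0, 1.0, 10.0)))   # -> [160.  80.]
print(project_to_screen((2.0, 1.0, 40.0)))   # -> [40. 20.]
```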
... The block diagram of the 3D-view geometry motion detection system [7]. ...
... Recent research was conducted by Infantono et al. [6]. They used a computer vision algorithm and managed to extract the coordinates of the rocket explosion in 3D. ...
... The results obtained had several output parameters, including the position and the distance from the central point, the heading of the aircraft, and the angle of the rocket. The model used in the study is illustrated in Fig. 2 (the advanced model of the Weapon Scoring System developed by Infantono et al. [6]). Their work [6] proposed this method to overcome the problem of obtaining information about the angle of the rocket. A novel AR-based way to display the explosion point, final angle, and trajectory in air-to-ground rocket firing was introduced in this paper to improve the quality and safety of the exercise. ARoket was designed using the markerless AR technique [7] and developed with the Vuforia software [8]. ...
Conference Paper
Full-text available
The firing exercise of an air-to-ground rocket should run safely, and its result and evaluation should be reported quickly. Generally, Weapon Impact Scoring Systems (WISS) display the rocket impact point on a PC monitor at a fixed location. This can create a gap in the real-time information expected by all safety components of the exercise. In this paper, we propose a novel system to visualize the results of the rocket firing exercise anytime, anywhere, and in real time. The system, called ARoket, was developed based on markerless Augmented Reality and runs on an Android-based smartphone. ARoket was integrated with the current rocket detection method, image subtraction, and 3D-view geometry. It displays how the rocket hits the target, reporting important data such as the position of the rocket explosion and its angle when hitting the target area, which are extracted from a cloud database and visualized in 3D. The proposed system succeeded in displaying the rocket trajectory on the firing range based on the WISS output sent to the internet cloud database. Interchangeable maps installed in the Military Strategic Desk represent the firing range and its environment. Using smartphones in different places, all officers related to the firing exercise can watch their device screens to observe the rocket hitting the firing range and evaluate the results quickly. The proposed system can contribute to improving the overall quality and safety level of the exercise, in military and other contexts.
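
The abstract describes a pipeline in which the WISS output (explosion position and impact angle) is pushed to a cloud database and then pulled by the AR client. The sketch below is a hypothetical illustration of that hand-off; the record fields, the endpoint URL, and the use of JSON over HTTP are all assumptions, since the paper does not specify its cloud interface.

```python
import json
import urllib.request

# Hypothetical scoring record produced by the WISS after a detection pass.
# Field names are illustrative only; the paper does not publish its schema.
record = {
    "exercise_id": "demo-001",
    "explosion_position_m": {"x": 20.0, "y": 30.0},
    "impact_angle_deg": -66.0,
    "timestamp_utc": "2020-01-01T00:00:00Z",
}

def push_to_cloud(rec, url="https://example.invalid/awss/records"):
    """POST one scoring record as JSON to a (hypothetical) cloud endpoint,
    from which AR clients such as ARoket could later fetch it."""
    req = urllib.request.Request(
        url,
        data=json.dumps(rec).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# push_to_cloud(record)  # requires a reachable endpoint
```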
Conference Paper
We present a motion detection algorithm based on a change detection filter matrix derived from the Discrete Cosine Transform. Recently, a Fourier reconstruction scheme has shown good results for motion detection; however, its computational cost is a major drawback. We revisit the problem and achieve a two-orders-of-magnitude speedup over the previous algorithm with better performance. The proposed algorithm runs at about 800 frames per second for VGA-resolution images on consumer hardware by using only integer matrix multiplication and the symmetric property of the change detection filter matrix. In addition, our algorithm is fundamentally robust to sudden illumination changes because it works on edge information. We verify our algorithm on challenging datasets that contain strong and sudden illumination changes.
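
The abstract does not reproduce the filter construction, so the following is only a schematic sketch of the general idea, assuming an orthonormal DCT-II basis: build a spatial-domain filter matrix that keeps a band of DCT coefficients, apply it separably to the frame difference, and threshold the response. The band selection and threshold are placeholder choices, not the paper's.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix C, so a 2D block transforms as C @ X @ C.T."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (m + 0.5) * k / n)
    c[0, :] = np.sqrt(1.0 / n)
    return c

def band_filter(n, lo, hi):
    """Spatial-domain filter H = C.T @ diag(mask) @ C keeping DCT bands lo..hi.
    H is symmetric, mirroring the symmetry property the paper exploits."""
    c = dct_matrix(n)
    mask = np.zeros(n)
    mask[lo:hi] = 1.0
    return c.T @ np.diag(mask) @ c

n = 8
H = band_filter(n, lo=1, hi=6)          # drop DC so the response is edge-based
prev = np.random.default_rng(0).random((n, n))
curr = prev.copy()
curr[3:5, 3:5] += 0.8                   # simulate a moving patch

# Filter the frame difference separably and threshold it to flag change.
response = H @ (curr - prev) @ H.T
motion_mask = np.abs(response) > 0.1    # placeholder threshold
print(motion_mask.astype(int))
```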
Indoor functional objects exhibit large view and appearance variations and are thus difficult to recognize with the traditional appearance-based classification paradigm. In this paper, we present an algorithm to parse indoor images based on two observations: i) functionality is the most essential property defining an indoor object, e.g. "a chair to sit on"; ii) the geometry (3D shape) of an object is designed to serve its function. We formulate the nature of object function as a stochastic grammar model. This model characterizes a joint distribution over the function-geometry-appearance (FGA) hierarchy. The hierarchical structure includes a scene category, functional groups, functional objects, functional parts, and 3D geometric shapes. We use a simulated annealing MCMC algorithm to find the maximum a posteriori (MAP) solution, i.e. a parse tree. We design four data-driven steps to accelerate the search in the FGA space: i) group the line segments into 3D primitive shapes, ii) assign functional labels to these 3D primitive shapes, iii) fill in missing objects/parts according to the functional labels, and iv) synthesize 2D segmentation maps and verify the current parse tree with the Metropolis-Hastings acceptance probability. The experimental results on several challenging indoor datasets demonstrate that the proposed approach not only significantly widens the scope of indoor scene parsing algorithms from segmentation and 3D recovery to functional object recognition, but also yields improved overall performance.
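
The FGA model itself is far too large to sketch, but the search procedure the abstract names, simulated annealing over a Metropolis-Hastings acceptance test, has a standard generic form. The toy below anneals a one-dimensional score function as a stand-in for the parse-tree posterior; the state, score, proposal, and cooling schedule are all placeholders.

```python
import math
import random

def score(x):
    """Toy stand-in for the (log) posterior of a parse tree."""
    return -(x - 3.0) ** 2

def simulated_annealing(steps=5000, t0=2.0, seed=42):
    rng = random.Random(seed)
    x = 0.0
    for i in range(steps):
        temperature = t0 / (1 + i)            # cooling schedule (placeholder)
        proposal = x + rng.gauss(0.0, 0.5)    # symmetric random-walk proposal
        # Metropolis-Hastings acceptance: always take improvements, and
        # accept worse states with a probability that shrinks as we cool.
        accept_logprob = (score(proposal) - score(x)) / max(temperature, 1e-9)
        if math.log(rng.random() + 1e-300) < accept_logprob:
            x = proposal
    return x

print(simulated_annealing())  # converges near the maximizer x = 3
```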
Article
In existing methods of moving object detection using image recognition technology, the whole obtained image is processed to detect a moving object in the picture. However, when a moving portion of the scene is hidden by obstacles, recognizing the moving object can be difficult. In this study, we propose a novel method of discriminating moving objects such as a person or a vehicle. We use only a narrow and tall area of the video, called the Strip Frame Image, for detecting moving portions. By using this method, we can obtain the patterns of moving objects while avoiding the obstacles in a picture. We then classify them by DP matching against previously stored reference patterns in the database for all possible classes (person, bicycle, car, and bus). In this paper, we compare several variations of an algorithm used to detect and classify objects passing laterally in front of a security camera. We test both a speed-independent and a speed-based method for constructing patterns for the passing objects. The results show that the relatively simple method of pattern classification by DP matching can be successfully applied to classifying graphic objects of a certain degree of complexity. Finally, the non-homogeneity of people's patterns and their consequent frequent misclassification is addressed by not producing reference patterns for people and differentiating them from correctly classified bicycles by the DP distance to the first candidate.
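
DP matching here is the classic dynamic-programming alignment better known as dynamic time warping. The sketch below is a minimal generic DTW distance between two 1D patterns plus a nearest-reference classifier; the reference patterns and class names are placeholders, not the paper's database.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-programming (DTW) distance between two 1D sequences:
    the minimal accumulated |a[i]-b[j]| cost over monotonic alignments."""
    n, m = len(a), len(b)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return d[n, m]

# Placeholder reference patterns, e.g. width profiles extracted from the
# strip frame image as an object passes (not the paper's actual data).
references = {
    "bicycle": np.array([0.1, 0.4, 0.9, 0.4, 0.1]),
    "car":     np.array([0.2, 0.8, 0.8, 0.8, 0.2]),
}

observed = np.array([0.1, 0.3, 0.5, 0.9, 0.5, 0.2])
best = min(references, key=lambda c: dtw_distance(observed, references[c]))
print("classified as:", best)
```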
This paper presents a new approach for multi-view object class detection. Appearance and geometry are treated as separate learning tasks with different training data. Our approach uses a part model which discriminatively learns the object appearance with spatial pyramids from a database of real images, and encodes the 3D geometry of the object class with a generative representation built from a database of synthetic models. The geometric information is linked to the 2D training data and makes it possible to perform an approximate 3D pose estimation for generic object classes. The pose estimation provides an efficient method to evaluate the likelihood of groups of 2D part detections with respect to a full 3D geometry model in order to disambiguate and prune 2D detections and to handle occlusions. In contrast to other methods, neither tedious manual part annotation of training images nor explicit appearance matching between synthetic and real training data is required, which results in high geometric fidelity and increased flexibility. On the 3D Object Category datasets CAR and BICYCLE, the current state-of-the-art benchmark for 3D object detection, our approach outperforms previously published results for viewpoint estimation.
Conference Paper
Following the results of moving object detection research on video sequences, this paper proposes a new method to detect moving objects based on background subtraction. First, we establish a reliable background updating model based on statistics and use a dynamic optimization threshold method to obtain a more complete moving object. Then, morphological filtering is introduced to eliminate noise and solve the background disturbance problem. Finally, contour projection analysis is combined with shape analysis to remove the effect of shadow, so the moving human body is accurately and reliably detected. The experimental results show that the proposed method runs quickly and accurately and is suitable for real-time detection.
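
The abstract outlines a standard background-subtraction chain: maintain a statistical background model, threshold the frame difference, then clean the mask morphologically. The sketch below is a generic version of that chain using a running-average background and a simple adaptive threshold; the update rate, threshold rule, and structuring element are placeholder choices, not the paper's.

```python
import numpy as np
from scipy import ndimage

def detect_moving(frame, background, alpha=0.05, k=2.5):
    """One step of a generic background-subtraction pipeline:
    1) difference against the running-average background,
    2) adaptive threshold from the difference statistics,
    3) morphological opening to suppress isolated noise pixels,
    4) slow background update with learning rate alpha."""
    diff = np.abs(frame - background)
    threshold = diff.mean() + k * diff.std()          # simple dynamic threshold
    mask = diff > threshold
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    background = (1 - alpha) * background + alpha * frame
    return mask, background

# Synthetic demo: a bright square "object" appears over a static scene.
rng = np.random.default_rng(1)
background = rng.random((64, 64)) * 0.1
frame = background + rng.normal(0, 0.01, (64, 64))
frame[20:30, 20:30] += 0.9

mask, background = detect_moving(frame, background)
print("detected pixels:", int(mask.sum()))  # roughly the 10x10 object area
```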
Article
When an observer moves through the world, he or she must detect moving objects in order to avoid or intercept them. Accomplishing this task presents a problem for the visual system, because the motion of the observer causes the images of nearly all objects in the scene to move across the retina. We tested observers' abilities to detect a moving object when its angle of motion deviated from the radial optic flow pattern generated by observer motion in a straight line. To test whether global information is important for this task, we compared the results for a radial pattern with those for a deformation pattern. The results show that observer accuracy depends on the global pattern of the optic flow. In addition, we tested the effects of the duration of the trial, the number of objects, the eccentricity of the moving object and the speed of the observer.
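
The deviation task in this study has a simple geometric core: under pure forward translation, each static point's image motion is radial from the focus of expansion, so an object is detectable when its motion direction departs from that radial direction. The sketch below is a generic illustration of that geometry, not the authors' stimulus code; the positions and velocities are made up.

```python
import numpy as np

def radial_direction(point, foe=(0.0, 0.0)):
    """Unit vector of the optic flow a static point would have under pure
    forward observer translation: radially outward from the focus of expansion."""
    v = np.asarray(point, dtype=float) - np.asarray(foe, dtype=float)
    return v / np.linalg.norm(v)

def deviation_deg(point, velocity, foe=(0.0, 0.0)):
    """Angle between a point's actual image motion and the radial flow
    direction; a nonzero deviation signals an independently moving object."""
    r = radial_direction(point, foe)
    v = np.asarray(velocity, dtype=float)
    cos_a = np.clip(np.dot(r, v) / np.linalg.norm(v), -1.0, 1.0)
    return np.degrees(np.arccos(cos_a))

print(deviation_deg((3.0, 4.0), (3.0, 4.0)))   # 0.0: consistent with self-motion
print(deviation_deg((3.0, 4.0), (-4.0, 3.0)))  # 90.0: clearly an object
```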