Conference Paper

... Camera [11][12][13], LiDAR [14][15][16][17][18], and acoustic [19][20][21] sensors are widely used to extract information about the environment. Transloading work can last for a few days, so the scanning system needs to work in both the daytime and nighttime. ...
... Remote Sens. 2022, 14, 5069 ...
Article
Full-text available
This paper considers the problem of determining the time-varying location of a nearly full hatch during cyclic transloading operations. Hatch location determination is a necessary step for automation of transloading, so that the crane can safely operate on the cargo in the hatch without colliding with the hatch edges. A novel approach is presented and evaluated by using data from a light detection and ranging (LiDAR) mounted on a pan-tilt unit (PT). Within each cycle, the hatch area is scanned, the data is processed, and the hatch corner locations are extracted. Computations complete less than 5 ms after the LiDAR scan completes, which is well within the time constraints imposed by the crane transloading cycle. Although the approach is designed to solve the challenging problem of a full hatch scenario, it also works when the hatch is not full, because in that case the hatch edges can be more easily distinguished from the cargo data. Therefore, the approach can be applied during the whole duration of either loading or unloading. Experimental results for hundreds of cycles are presented to demonstrate the ability to track the hatch location as it moves and to assess the accuracy (standard deviation less than 0.30 m) and reliability (worst case error less than 0.35 m).
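The corner-extraction step can be conveyed with a toy sketch: scan a height profile for large discontinuities, which mark the hatch coaming against the cargo surface. The threshold and profile values here are invented for illustration; this is not the paper's algorithm.

```python
# Toy sketch (not the paper's algorithm): locate hatch-edge candidates in a
# 1D LiDAR height profile by flagging large height discontinuities.
def find_edges(profile, jump=0.5):
    """Indices where consecutive height samples differ by more than `jump` m."""
    return [i for i in range(1, len(profile))
            if abs(profile[i] - profile[i - 1]) > jump]

# Invented profile: deck at 10 m, cargo surface inside the hatch near 8 m.
profile = [10.0, 10.0, 10.0, 8.0, 8.1, 8.0, 8.1, 10.0, 10.0]
edges = find_edges(profile)  # discontinuities at the two hatch edges
```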
... Besides that, LiDAR is impervious to changing illumination and environmental conditions. A more detailed description of the detection algorithm and the detection results of LiDAR sensors for sUAV detection is given in [15]. ...
... State-of-the-art techniques for detecting drones include acoustic [3,4], optical camera [5,6], lidar [7,8], and radar [9][10][11]. Despite the wide range of sensing practices, there are several trade-offs associated with each method. ...
Article
Full-text available
This paper investigates the use of micro-Doppler spectrogram signatures of flying targets, such as drones and birds, to aid in their remote classification. Using a custom-designed 10-GHz continuous wave (CW) radar system, measurements from different scenarios on a variety of targets were recorded to create datasets for image classification. Time/velocity spectrograms generated for micro-Doppler analysis of multiple drones and birds were used for target identification and movement classification using TensorFlow. Using support vector machines (SVMs), the results showed an accuracy of about 90% for drone size classification, about 96% for drone vs. bird classification, and about 85% for individual drone and bird distinction between five classes. Different characteristics of target detection were explored, including the landscape and behavior of the target.
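The velocity axis of such spectrograms follows directly from the Doppler relation for a monostatic CW radar; a minimal sketch of that arithmetic (the target speed is an invented example value):

```python
# Monostatic CW radar Doppler shift: f_d = 2 * v * f0 / c.
def doppler_shift_hz(v_mps, f0_hz=10e9, c_mps=3.0e8):
    """Doppler frequency shift for a target with radial speed v_mps."""
    return 2.0 * v_mps * f0_hz / c_mps

# A blade tip moving at 75 m/s toward the 10-GHz radar:
fd = doppler_shift_hz(75.0)  # 5000.0 Hz
```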
... Another technique, departing from the typical sequence of track-after-detect, is to leverage motion information by searching for minor 3D details in the 360° LiDAR scans of the scene and analyzing the trajectory of the tracked object to classify UAVs and non-UAV objects by identifying typical movement patterns [22], [23]. ...
Preprint
With the increasing use of drones across various industries, the navigation and tracking of these unmanned aerial vehicles (UAVs) in challenging environments (such as GNSS-denied environments) have become critical issues. In this paper, we propose a novel method for a ground-based UAV tracking system using a solid-state LiDAR, which dynamically adjusts the LiDAR frame integration time based on the distance to the UAV and its speed. Our method fuses two simultaneous scan integration frequencies for high accuracy and persistent tracking, enabling reliable estimates of the UAV state even in challenging scenarios. The use of the Inverse Covariance Intersection method and Kalman filters allows for better tracking accuracy and can handle challenging tracking scenarios. We have performed a number of experiments to evaluate the performance of the proposed tracking system and identify its limitations. Our experimental results demonstrate that the proposed method achieves comparable tracking performance to the established baseline method, while also providing more reliable and accurate tracking when only one of the frequencies is available or unreliable.
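As a flavor of the fusion step, here is a scalar covariance-intersection rule with a fixed weight; the paper uses Inverse Covariance Intersection on full state vectors, so this is only a simplified illustration:

```python
def ci_fuse(x1, p1, x2, p2, w=0.5):
    """Fuse two scalar estimates (x, variance p) by covariance intersection.
    A fixed weight w is an assumption; CI normally optimizes w."""
    inv_p = w / p1 + (1.0 - w) / p2
    p = 1.0 / inv_p
    x = p * (w * x1 / p1 + (1.0 - w) * x2 / p2)
    return x, p

# Two estimators that disagree; equal variances give the midpoint.
x, p = ci_fuse(1.0, 1.0, 3.0, 1.0)  # x = 2.0, p = 1.0
```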
... In [54], a LADAR (laser detection and ranging) system based on a laser with a peak power of 700 kW is described, which increases the operating range up to 2 km. In [55], an interesting experimental test campaign was carried out with a 3D LiDAR system mounted on a land vehicle to determine the probability of detection for mini drones. The results showed that, with sensors with a maximum operating range of 100 m, a high detection success rate is achievable for targets within 30 m. ...
Article
Full-text available
In recent years, the drone market has had a significant expansion, with applications in various fields (surveillance, rescue operations, intelligent logistics, environmental monitoring, precision agriculture, inspection and measuring in the construction industry). Given their increasing use, the issues related to safety, security and privacy must be taken into consideration. Accordingly, the development of new concepts for countermeasures systems, able to identify and neutralize a single (or multiples) malicious drone(s) (i.e., classified as a threat), has become of primary importance. For this purpose, the paper evaluates the concept of a multiplatform counter-UAS system (CUS), based mainly on a team of mini drones acting as a cooperative defensive system. In order to provide the basis for implementing such a system, we present a review of the available technologies for sensing, mitigation and command and control systems that generally comprise a CUS, focusing on their applicability and suitability in the case of mini drones.
... To improve the performance, instead of looking only for the target ball, both the drone and the ball are detected as it provides a larger target than the ball alone. Most of the research in detection of drones with a 3D LiDAR focuses on ground-based systems, such as in [25], [26], [27]. This greatly simplifies the problem as the LiDAR can be stationary and consecutive pointclouds do not require alignment. ...
Article
Full-text available
In this paper, a novel approach for autonomously catching fast flying objects is presented, as inspired by the Mohamed Bin Zayed International Robotics Challenge (MBZIRC) 2020. In this competition, an autonomous Unmanned Aerial Vehicle (UAV) was used to intercept a ball carried by a fast flying drone. The presented solution utilizes a 3D LiDAR sensor for quick and robust target detection. The trajectory of the target is estimated and predicted to select a suitable interception position. The interceptor UAV is navigated into the interception position to safely approach the target. The interception position is frequently being adjusted based on the updated estimation and prediction of the target’s motion to ensure that the ball is caught in the dedicated onboard net. After a successful interception is detected, the UAV lands in a designated landing area. The proposed concept was intensively tested and refined in demanding outdoor conditions with strong winds and varying perception conditions to achieve the robustness required by both the demanding application and the competition. In the MBZIRC 2020 competition, our solution scored second place in Challenge 1 and first place in a combined Grand Challenge. This manuscript will provide a detailed description of the applied methods and an evaluation of our approach with data collected from real-world experiments. In addition, we present achievements of our R&D towards the transition from the MBZIRC competition to an autonomous drone interceptor, which was the main motivation of this challenge.
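The prediction step above can be reduced to a sketch: extrapolate the estimated target state forward to choose an interception position. A constant-velocity model is assumed here purely for illustration; the competition system's estimator and predictor are more elaborate.

```python
def predict_position(pos, vel, t):
    """Constant-velocity extrapolation of a 3D target state t seconds ahead."""
    return [p + v * t for p, v in zip(pos, vel)]

# Invented state: ball at (0, 0, 10) m moving at (2, 0, -1) m/s,
# interception planned 2 s ahead:
intercept = predict_position([0.0, 0.0, 10.0], [2.0, 0.0, -1.0], 2.0)
```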
... In one study [241], the authors proposed an algorithm for detecting small UAVs and generating 3D coordinates by employing LiDAR. In another [101], detecting small UAVs was assessed using a LiDAR system mounted on a vehicle. ...
Article
Full-text available
Recognizing the various and broad range of applications of unmanned aerial vehicles (UAVs) and unmanned aircraft systems (UAS) for personal, public and military applications, recent un-intentional malfunctions of uncontrollable UAVs or intentional attacks on them divert our attention and motivate us to devise a protection system, referred to as a counter UAV system (CUS). The CUS, also known as a counter-drone system, protects personal, commercial, public, and military facilities and areas from uncontrollable and belligerent UAVs by neutralizing or destroying them. This paper provides a comprehensive survey of the CUS to describe the key technologies of the CUS and provide sufficient information with which to comprehend this system. The first part starts with an introduction of general UAVs and the concept of the CUS. In the second part, we provide an extensive survey of the CUS through a top-down approach: i) the platform of CUS including ground and sky platforms and related networks; ii) the architecture of the CUS consisting of sensing systems, command-and-control (C2) systems, and mitigation systems; and iii) the devices and functions with the sensors for detection-and-identification and localization-and-tracking actions and mitigators for neutralization. The last part is devoted to a survey of the CUS market with relevant challenges and future visions. From the CUS market survey, potential readers can identify the major players in a CUS industry and obtain information with which to develop the CUS industry. A broad understanding gained from the survey overall will assist with the design of a holistic CUS and inspire cross-domain research across physical layer designs in wireless communications, CUS network designs, control theory, mechanics, and computer science, to enhance counter UAV techniques further.
... For pre-defined sliding boxes, even though the increase in the IoU slows when the number of boxes increases, the execution time still increases greatly. Accordingly, we selected five boxes with the scales ((14,9), (17,13), (19,9), (24,12), (27,19)) for this stage to balance the detection accuracy and efficiency. Compared with RPN [21], our approach increased IoU by 21.0%, with the number of candidate boxes reduced from nine to five. ...
Article
Full-text available
Real-time detection of small unmanned aerial vehicle (SUAV) targets in SUAV surveillance systems has become a challenge due to their high mobility, sudden bursts, and small sizes. In this study, we used infrared sensors and Convolutional Neural Networks (CNN)-based detectors to achieve the real-time detection of SUAV targets. Existing object detectors generally suffer from a computational burden or low detection accuracy on small targets, which limits their practicality and further application in SUAV surveillance systems. To solve these problems, we developed a real-time SUAV target detection algorithm based on deep residual networks. In order to improve the sensitivity to small targets, a laterally connected multi-scale feature fusion approach was proposed to fully combine the context features and semantic features. A densely paved pre-defined box with geometric analysis was used for single-stage prediction. Compared with the state-of-the-art object detectors, the proposed method achieved superior performance with respect to average-precision and frames-per-second. As the training set was limited, to improve generalization, we investigated the benefits introduced by data augmentation and data balancing, and proposed a weighted augmentation approach. The proposed approach improved the robustness of the detector and the overall accuracy.
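The IoU criterion used to select and evaluate the pre-defined candidate boxes is the standard one; a minimal sketch, with boxes assumed to be given as (x1, y1, x2, y2) corner tuples:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

overlap = iou((0, 0, 2, 2), (1, 1, 3, 3))  # 1 / 7
```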
... A laser-scanner-based LR was developed to cope with the problems of EO/IR systems, which struggle to determine 3D coordinates; RF radars also have disadvantages, as they struggle to provide high position accuracy and intuitive detection results. [13][14][15][16] Figure 4 shows an LR system capable of detecting a drone approaching at a distance of about 180 meters. ...
... Inter-vehicle distance estimation is a crucial part of ADAS. Distance estimation methods can be divided into two major classes: sensor-based [9], [12] and vision-based [10], [21] systems. Sensor-based systems use sensors, such as RADAR and LIDAR [19], to accurately provide the distance information of a target vehicle. ...
Article
Full-text available
Advanced driver assistance systems (ADAS) based on monocular vision are rapidly becoming a popular research subject. In ADAS, inter-vehicle distance estimation from an in-car camera based on monocular vision is critical. At present, related monocular-vision methods for measuring the absolute distance of vehicles ahead suffer from accuracy problems: the ranging accuracy is low, and the deviation of the ranging result between different types of vehicles is large and easily affected by changes in the attitude angle. To improve the robustness of a distance estimation system, an improved method for estimating the distance of a monocular vision vehicle based on the detection and segmentation of the target vehicle is proposed in this study to address the vehicle attitude angle problem. The angle regression model (ARN) is used to obtain the attitude angle information of the target vehicle. The dimension estimation network determines the actual dimensions of the target vehicle. Then, a 2D base vector geometric model is designed in accordance with the image analytic geometric principle to accurately recover the back area of the target vehicle. Lastly, “area–distance” modeling based on the principle of camera projection is performed to estimate distance. Experimental results on the real-world computer vision benchmark, KITTI, indicate that our approach achieves superior performance compared with other existing published methods for different types of vehicles (including front and sideway vehicles).
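The “area–distance” idea rests on projected pixel area scaling with 1/Z² under the pinhole model. A bare sketch with invented numbers; the paper's full model additionally uses the attitude angle and the recovered back area:

```python
import math

def area_distance_m(focal_px, real_area_m2, pixel_area_px2):
    """Range from projected area: Z = f * sqrt(A_real / A_pixels)."""
    return focal_px * math.sqrt(real_area_m2 / pixel_area_px2)

# Invented example: 1000 px focal length; a 2.7 m^2 vehicle back area
# projecting to 6750 px^2 puts the vehicle at about 20 m.
z = area_distance_m(1000.0, 2.7, 6750.0)
```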
... • Low visibility due to small dimensions. • Reduced RADAR signature [2][3][4][5][6]. • Reduced acoustic signature. ...
Article
Full-text available
With the advent of micro-sized unmanned helicopters and airplanes weighing under 250 g, certain needs for measurement instruments and setups are emerging. The authors have identified the lack of micro-motor measurement instruments, and more specifically of a static thrust gauge device. For this reason, a scale to measure static motor thrust was devised and further developed as a laboratory setup. The motors subject to testing with the described instrument are brushed micro-sized coreless electric motors, well suited for micro-drones with a total weight under 250 g. The scale fits different sizes of micro-motors, and appropriate control of the input voltage is provided. The instrument simultaneously measures the voltage applied to the motor and the current it consumes, along with the static thrust the motor generates. The scale is versatile: it measures both pusher and tractor propeller configurations. Pusher propellers in micro-drones have gained significant attention lately due to their better efficiency characteristics.
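Since the scale logs voltage, current, and thrust together, a static efficiency figure falls out directly; the numbers below are invented for illustration, not taken from the paper's measurements:

```python
def thrust_efficiency_g_per_w(thrust_g, volts, amps):
    """Static thrust efficiency in grams per watt from simultaneous readings."""
    return thrust_g / (volts * amps)

# A micro-motor producing 30 g of thrust at 4.0 V and 1.5 A (6 W input):
eff = thrust_efficiency_g_per_w(30.0, 4.0, 1.5)  # 5.0 g/W
```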
... Furthermore, initial investigations consider the application of scanning LiDAR (light detection and ranging) systems [8], [10]. ...
... With regard to the hardware, we based the design of the vehicle on past experiences with the development of an airborne laser scanning system (Schatz, 2008). Currently we use the vehicle for several research topics in the area of object detection and tracking (Becker et al., 2016; Borgmann et al., 2017, 2018; Hammer et al., 2018), mobile mapping and change detection (Gehrung et al., 2018). In the later part of this paper, we present the use case of pedestrian detection, which uses several capabilities of the vehicle. ...
Article
Full-text available
In this paper we present a versatile multi-sensor vehicle which is used in several research projects. The vehicle is equipped with various sensors in order to cover the needs of different research projects in the area of object detection and tracking, mobile mapping and change detection. We show an example for the capabilities of this vehicle by presenting camera- and LiDAR-based pedestrian detection methods. Besides this specific use case, we provide a more general in-depth description of the vehicle’s hard- and software design and its data-processing capabilities. The vehicle can be used as a sensor carrier for mobile mapping, but it also offers hardware and software components to allow for an adaptable onboard processing. This enables the development and testing of methods related to real-time applications or high-level driver assistance functions. The vehicle’s hardware and software layout result from several years of experience, and our lessons learned can help other researchers set up their own experimental platform.
Chapter
Drones have become essential tools in a wide range of industries, including agriculture, surveying, and transportation. However, tracking unmanned aerial vehicles (UAVs) in challenging environments, such as cluttered or GNSS-denied environments, remains a critical issue. Additionally, UAVs are being deployed as part of multi-robot systems, where tracking their position can be essential for relative state estimation. In this chapter, we evaluate the performance of a multi-scan integration method for tracking UAVs in GNSS-denied environments using a solid-state LiDAR and a Kalman Filter (KF). We evaluate the algorithm’s ability to track a UAV in a large open area at various distances and speeds. Our quantitative analysis shows that while “tracking by detection” using a constant-velocity model is the only method that consistently tracks the target, integrating multiple scan frequencies using a KF achieves lower position errors and represents a viable option for tracking UAVs in similar scenarios.
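A minimal one-dimensional constant-velocity Kalman filter conveys the tracking-by-detection baseline described above. The noise parameters below are assumptions, and the chapter's filter runs on full 3D states; this is only a sketch of the mechanism.

```python
class KF1D:
    """Minimal 1D constant-velocity Kalman filter (illustrative sketch;
    the process/measurement noise values are assumptions)."""
    def __init__(self, x=0.0, v=0.0, p=1.0, q=0.01, r=0.1):
        self.x, self.v = x, v
        self.P = [[p, 0.0], [0.0, p]]  # covariance of (position, velocity)
        self.q, self.r = q, r

    def predict(self, dt):
        pxx, pxv, pvv = self.P[0][0], self.P[0][1], self.P[1][1]
        self.x += self.v * dt
        self.P[0][0] = pxx + 2 * dt * pxv + dt * dt * pvv + self.q
        self.P[0][1] = self.P[1][0] = pxv + dt * pvv
        self.P[1][1] = pvv + self.q

    def update(self, z):
        pxx, pxv, pvv = self.P[0][0], self.P[0][1], self.P[1][1]
        s = pxx + self.r            # innovation variance
        kx, kv = pxx / s, pxv / s   # Kalman gains
        y = z - self.x              # innovation
        self.x += kx * y
        self.v += kv * y
        self.P[0][0] = (1 - kx) * pxx
        self.P[0][1] = self.P[1][0] = (1 - kx) * pxv
        self.P[1][1] = pvv - kv * pxv

# Track a target drifting at 1 m/s with one detection per second:
kf = KF1D()
for t in range(1, 21):
    kf.predict(1.0)
    kf.update(float(t))
```

After twenty cycles the estimated position and velocity settle close to the true values (20 m, 1 m/s).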
Article
The reliable detection and neutralization of unmanned aircraft systems (UAS), known as counter UAS (cUAS), is pivotal in restricted air spaces. The application of deep learning (DL) classifiers on electro-optical (EO) sensor data is promising for cUAS, but it introduces three key challenges. Specifically, DL-based cUAS (i) produces point estimates at test-time with no associated measure of uncertainty (softmax outputs produced by typical DL models are often overconfident predictions, resulting in unreliable measures of uncertainty), (ii) easily triggers false positive detections for birds and other aerial wildlife, and (iii) cannot accurately characterize out-of-distribution (OOD) input samples. In this work, we develop an epistemic uncertainty quantification (UQ) framework, which utilizes the advantages of DL while simultaneously producing uncertainty estimates on both in-distribution and OOD input samples. In this context, in-distribution samples refer to testing samples collected according to the same data generation process as the training data, and OOD samples refer to in-distribution samples that are intentionally perturbed in order to shift the distribution of the testing set away from the distribution of the training set. Our framework produces a distributive estimate of each prediction, which accurately expresses UQ, as opposed to a point estimate produced by standard DL. Through evaluation on a custom field-collected dataset consisting of images captured from EO sensors, and in comparison to prior cUAS baselines, we show that our framework effectively expresses low and high uncertainty on in-distribution and OOD samples, respectively, while retaining accurate classification performance.
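One common scalar proxy for predictive uncertainty is the entropy of the class distribution; this is only a generic illustration of an uncertainty score, not the distributional framework the paper develops:

```python
import math

def predictive_entropy(probs):
    """Shannon entropy (in nats) of a predictive class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

confident = predictive_entropy([1.0, 0.0])  # 0.0: no uncertainty
uncertain = predictive_entropy([0.5, 0.5])  # ln(2): maximal for two classes
```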
Article
Full-text available
The hyperloop transportation system has emerged as an innovative next-generation transportation system. In this system, a capsule-type vehicle inside a sealed near-vacuum tube moves at 1000 km/h or more. Not only must this transport tube span over long distances, but it must be clear of potential hazards to vehicles traveling at high speeds inside the tube. Therefore, an automated infrastructure anomaly detection system is essential. This study sought to confirm the applicability of advanced sensing technology such as Light Detection and Ranging (LiDAR) in the automatic anomaly detection of next-generation transportation infrastructure such as hyperloops. To this end, a prototype two-dimensional LiDAR sensor was constructed and used to generate three-dimensional (3D) point cloud models of a tube facility. A technique for detecting abnormal conditions or obstacles in the facility was used, which involved comparing the models and determining the changes. The design and development process of the 3D safety monitoring system using 3D point cloud models and the analytical results of experimental data using this system are presented. The tests on the developed system demonstrated that anomalies such as a 25 mm change in position were accurately detected. Thus, we confirm the applicability of the developed system in next-generation transportation infrastructure.
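The model-comparison step can be sketched as a brute-force nearest-neighbor test: flag scan points farther than a tolerance from every reference point. The tolerance and point values below are invented; the deployed system compares full 3D point cloud models.

```python
def changed_points(reference, scan, tol=0.01):
    """Flag scan points farther than `tol` (m) from every reference point.
    Brute-force sketch of model-comparison anomaly detection."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [p for p in scan
            if all(d2(p, r) > tol * tol for r in reference)]

# Invented example: one point displaced by 25 mm is flagged as an anomaly.
reference = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
scan = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.025)]
anomalies = changed_points(reference, scan)
```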
Article
Vehicles have evolved into synthetic engineering systems owing to the developments in mechanical, electronic, and information technologies. Recently, driving assistance systems have gained considerable attention and are installed in vehicles under development by manufacturers. An important technology for the driving assistance systems is intervehicle distance measurement, for which several technologies have been proposed through hardware and software developments. The intervehicle distance measurement generally requires auxiliary sensor systems, complex algorithms, and high costs. An intervehicle distance estimation method is proposed, which calculates the distance using images obtained through a camera installed in the vehicle. For calculating the distance, the proposed method does not need any other physical sensors or information about the roads on which the vehicle moves. For estimating the intervehicle distance, the proposed method takes advantage of the fact that the width between two adjacent lanes on the road is constant. The difference in the distance calculated using the proposed method and the actual distance is less than 7%.
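The constant-lane-width trick is similar triangles under the pinhole model; a bare sketch, with the focal length and widths invented for illustration:

```python
def lane_distance_m(focal_px, lane_width_m, lane_width_px):
    """Range at which a lane of known width projects to the observed pixel width."""
    return focal_px * lane_width_m / lane_width_px

# Invented numbers: 1200 px focal length, a 3.5 m lane seen 140 px wide.
d = lane_distance_m(1200.0, 3.5, 140.0)  # 30.0 m
```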
Article
Full-text available
Unmanned aerial vehicles (UAVs) flown by adversaries are an emerging asymmetric threat to homeland security and the military. To help address this threat, we developed and tested a computationally efficient UAV detection algorithm consisting of horizon finding, motion feature extraction, blob analysis, and coherence analysis. We compare the performance of this algorithm against two variants, one using the difference image intensity as the motion features and another using higher-order moments. The proposed algorithm and its variants are tested using field test data of a group 3 UAV acquired with a panoramic video camera in the visible spectrum. The performance of the algorithms was evaluated using receiver operating characteristic curves. The results show that the proposed approach had the best performance compared to the two algorithmic variants.
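The difference-image motion feature reduces, at its simplest, to thresholded absolute frame differences. This sketch covers only that primitive; the paper's pipeline adds horizon finding, blob analysis, and coherence analysis on top, and the threshold here is an assumption.

```python
def motion_mask(prev, curr, thresh=20):
    """Binary mask of pixels whose intensity changed by more than `thresh`."""
    return [[1 if abs(c - p) > thresh else 0 for p, c in zip(rp, rc)]
            for rp, rc in zip(prev, curr)]

# Tiny invented 2x2 frames: one pixel brightens sharply between frames.
mask = motion_mask([[10, 10], [10, 10]], [[10, 50], [10, 10]])
```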
Conference Paper
Full-text available
People detection and tracking is a key component for robots and autonomous vehicles in human environments. While prior work mainly employed image or 2D range data for this task, in this paper, we address the problem using 3D range data. In our approach, a top-down classifier selects hypotheses from a bottom-up detector, both based on sets of boosted features. The bottom-up detector learns a layered person model from a bank of specialized classifiers for different height levels of people that collectively vote into a continuous space. Modes in this space represent detection candidates that each postulate a segmentation hypothesis of the data. In the top-down step, the candidates are classified using features that are computed in voxels of a boosted volume tessellation. We learn the optimal volume tessellation as it enables the method to stably deal with sparsely sampled and articulated objects. We then combine the detector with tracking in 3D for which we take a multi-target multi-hypothesis tracking approach. The method neither needs a ground plane assumption nor relies on background learning. The results from experiments in populated urban environments demonstrate 3D tracking and highly robust people detection up to 20 m with equal error rates of at least 93%.
Article
Laser imaging systems are prominent candidates for detection and tracking of small unmanned aerial vehicles (UAVs) in current and future security scenarios. Laser reflection characteristics for laser imaging (e.g., laser gated viewing) of small UAVs are investigated to determine their laser radar cross section (LRCS) by analyzing the intensity distribution of laser reflection in high resolution images. For the first time, LRCSs are determined in a combined experimental and computational approach by high resolution laser gated viewing and three-dimensional rendering. An optimized simple surface model is calculated taking into account diffuse and specular reflectance properties based on the Oren–Nayar and the Cook–Torrance reflectance models, respectively.
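The diffuse half of such a surface model, in its simplest limit (Oren–Nayar with roughness sigma = 0 reduces to Lambertian), is a one-liner; this is illustration only, not the optimized model fitted in the paper:

```python
import math

def lambertian_reflectance(albedo, incidence_deg):
    """Lambertian BRDF with cosine foreshortening: (rho / pi) * cos(theta)."""
    return albedo / math.pi * math.cos(math.radians(incidence_deg))

# Invented values: albedo 0.5 at 60 degrees incidence.
r = lambertian_reflectance(0.5, 60.0)  # 0.25 / pi
```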
Article
Convolutional network techniques have recently achieved great success in vision based detection tasks. This paper introduces the recent development of our research on transplanting the fully convolutional network technique to detection tasks on 3D range scan data. Specifically, the scenario is the vehicle detection task from the range data of a Velodyne 64E lidar. We propose to present the data in a 2D point map and use a single 2D end-to-end fully convolutional network to predict the objectness confidence and the bounding boxes simultaneously. By carefully designing the bounding box encoding, it is able to predict full 3D bounding boxes even using a 2D convolutional network. Experiments on the KITTI dataset show the state-of-the-art performance of the proposed method.
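The 2D point-map encoding amounts to binning 3D points by elevation and azimuth, range-image style. A sketch under stated assumptions: the bin sizes below are invented, not the Velodyne 64E's actual angular resolution.

```python
import math

def to_point_map(points, h_res_deg=0.5, v_res_deg=1.0):
    """Project 3D lidar points into a sparse 2D (row, col) point map keyed
    by elevation/azimuth bins (bin sizes are illustrative assumptions)."""
    grid = {}
    for x, y, z in points:
        az = math.degrees(math.atan2(y, x))          # azimuth -> column
        el = math.degrees(math.atan2(z, math.hypot(x, y)))  # elevation -> row
        key = (int(el // v_res_deg), int(az // h_res_deg))
        grid[key] = (x, y, z)
    return grid

# Two invented points landing in distinct cells of the map:
grid = to_point_map([(10.0, 0.0, 0.0), (10.0, 0.1, 0.5)])
```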
Conference Paper
Unmanned Aircraft Systems (UAS) are being used commonly for video surveillance, providing valuable video data and reducing the risks associated with human operators. Thanks to its benefits, the UAS traffic is nearly doubling every year. However, the risks associated with the UAS are also growing. According to the FAA, the volume of air traffic will grow steadily, doubling in the next 20 years. Paired with the exponential growth of the UAS traffic, the risk of collision is also growing as well as privacy concerns. An effective UAS detection and/or tracking method is critically needed for air traffic safety. This research is aimed at developing a system that can identify/detect a UAS, which will subsequently enable counter measures against UAS. The proposed system will identify a UAS through various methods including image processing and mechanical tracking. Once a UAS is detected, a countermeasure can be employed along with the tracking system. In this research, we describe the design, algorithms, and implementation details of the system as well as some performance aspects. The proposed system will help keep the malicious or harmful UAS away from the restricted or residential areas.
Article
The paper presents new techniques for automatic segmentation, classification, and generic pose estimation of ships and boats in laser radar imagery. Segmentation, which primarily involves elimination of water reflections, is based on modeling surface waves and comparing the expected water reflection signature to the ladar intensity image. Shape classification matches a parametric shape representation of a generic ship hull with parameters extracted from the range image. The extracted parameter vector defines an instance of a geometric 3D model which can be registered with the range image for precision pose estimation. Results show that reliable automatic acquisition, aim point selection and realtime tracking of maritime targets is feasible even for erratic sensor and target motions, temporary occlusions, and evasive target maneuvers.
Article
A novel technique is presented for rapid partitioning of surfaces in range images into planar patches. The method extends and improves Pavlidis' algorithm (1976), proposed for segmenting images from electron microscopes. The new method is based on region growing where the segmentation primitives are scan line grouping features instead of individual pixels. We use a noise variance estimation to automatically set thresholds so that the algorithm can adapt to the noise conditions of different range images. The proposed algorithm has been tested on real range images acquired by two different range sensors. Experimental results show that the proposed algorithm is fast and robust.
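The scan-line grouping with noise-adaptive thresholds can be sketched as a per-scan-line least-squares fit whose acceptance threshold scales with an estimated noise sigma. The sigma, multiplier, and points below are invented; the paper estimates the noise variance from the range image itself.

```python
def fits_line(points, noise_sigma=0.01, k=3.0):
    """Accept a scan-line segment as a planar primitive when the worst
    least-squares residual stays below k * noise_sigma (sketch)."""
    n = len(points)
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    worst = max(abs(y - (my + slope * (x - mx))) for x, y in zip(xs, ys))
    return worst < k * noise_sigma

flat = fits_line([(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)])
bent = fits_line([(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.5)])
```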