An Investigation on the Radar Signatures of Small Consumer Drones

Chenchen J. Li, Student Member, IEEE, and Hao Ling, Fellow, IEEE
Fig. 1. (a) Drone ISAR collection setup. (b) DJI Phantom 2 with GoPro camera [11].
Abstract—Radar signatures of several small consumer drones are investigated by laboratory measurement. The drones are rotated on a turntable, and backscattered data are collected at two different frequency bands. The data are post-processed into inverse synthetic aperture radar (ISAR) images. The effects of frequency, aspect, polarization, dynamic blade rotation, camera mount, and drone types are presented.

Index Terms—Radar imaging, inverse synthetic aperture radar, radar measurements, radar cross-sections.
I. INTRODUCTION
Small consumer drones have been rapidly gaining
popularity. Their various uses include aerial photography,
surveying, mapping, and package delivery. The proliferation
of these small drones has raised much recent interest in their
regulation and monitoring [1-5].
A potential way to detect and identify drones is to use
ground-based radar. One fundamental issue that needs to be
addressed is what the radar cross section (RCS) of a small
consumer drone is. In this paper, we conduct laboratory
measurements of several small consumer drones and report on
their radar signatures versus frequency, aspect, polarization,
etc. We present the radar signatures in the form of two-
dimensional (2-D) inverse synthetic aperture radar (ISAR)
images, as they provide not only information about the
strength of the radar cross section of a target, but also the
spatial locations of where the dominant scattering on the drone
comes from. While ISAR imaging is a standard technique for
radar diagnostics and larger military drones have been
extensively studied [6-8], we believe this is the first ISAR
measurement study for these consumer-type drones. We
present their radar signatures in detail. Two particularly
important questions that need to be addressed regarding
consumer drones are: (i) will the small size and low
reflectivity of the plastic body result in very low radar cross
section, and (ii) will the spinning blades of drones result in
significant dynamic signature features similar to other
rotorcraft [5, 9, 10]. Both of these questions will be examined.
This paper is organized as follows. The measurement setup
and image formation algorithm are first described. The
resulting ISAR images are then presented and the scattering
features are discussed. We begin with a baseline scenario
before deviating from this scenario to illustrate the effects of
frequency, aspect, polarization, dynamic blade rotation,
camera mount, and drone types.
II. MEASUREMENT SETUP AND POST-PROCESSING
Multi-frequency, multi-aspect, monostatic backscattered
data are measured from a drone mounted on a turntable. Fig.
1(a) shows the measurement setup. A vector network analyzer
(Agilent N5230A) is used to collect S11 data. Depending on
the frequency range, either a Ku-band standard gain horn
(Narda 4609, 12-18 GHz) or a dual-ridged horn (TDK HORN-
0118, 1-18 GHz) is used. Background subtraction is used to
reduce the horn input mismatch and background clutter. Fig.
1(b) shows one of the target drones, the DJI Phantom 2 [11],
with zero azimuth angle (AZ) and zero elevation angle (EL)
defined as the frontal view. Data are collected at two
frequency bands, 12-15 GHz and 3-6 GHz.
Post-processing the measured frequency response data vs.
aspect yields the sinogram (i.e. range profiles vs. aspect) and
the corresponding ISAR image snapshots. The sinogram is
obtained through the inverse Fourier transform of the
frequency response at each aspect after a Hamming window is
applied. A 2-D ISAR image of the drone is obtained by using
the k-space formulation [12].
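To make this step concrete, a minimal Python sketch of the sinogram computation is given below. The array names and shapes (a hypothetical matrix s11 holding one swept-frequency S11 trace per turntable angle, and an empty-scene trace s11_bg for background subtraction) are illustrative assumptions, not the authors' processing code.

```python
import numpy as np

def sinogram(s11, s11_bg, n_fft=1024):
    """Range profiles vs. aspect from swept-frequency S11 data.

    s11    : (n_angles, n_freqs) complex S11 sweeps, one row per aspect
    s11_bg : (n_freqs,) empty-scene sweep for background subtraction
    """
    # Background subtraction suppresses horn input mismatch and clutter.
    e = s11 - s11_bg[np.newaxis, :]
    # Hamming window over frequency reduces down-range sidelobes.
    e = e * np.hamming(e.shape[1])[np.newaxis, :]
    # Inverse FFT over frequency yields a range profile at each aspect.
    return np.fft.ifft(e, n=n_fft, axis=1)
```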
Fig. 2. DJI Phantom 2 baseline scenario. (a) Sinogram (down-range in cm vs. azimuth in degrees). (b)-(f) ISAR images at EL = 0° and AZ = 8°, 45°, 90°, 135°, and 172°, respectively; the maximum RCS values are -13.8, -19.4, -9.3, -18.8, and -16.3 dBsm.
The band-limited, finite-angle data are processed via a 2-D inverse Fourier transform into a down-range vs. cross-range image as follows:

$$I(x, y) = \iint E_s(f, \phi)\, e^{j(k_x x + k_y y)}\, dk_x\, dk_y \qquad (1)$$

$$k_x = \frac{4\pi f}{c}\cos\phi \approx \frac{4\pi f}{c}, \qquad k_y = \frac{4\pi f}{c}\sin\phi \approx \frac{4\pi f_c}{c}\phi \qquad (2)$$

In the above expressions, $x$ is the down-range, $y$ is the cross-range, $f$ is the frequency, $\phi$ is the incremental sweep angle about a central (AZ, EL) view of the target, $E_s(f, \phi)$ is the backscattered field as a function of frequency and angle, and $c$ is the speed of light. Since the collected data are uniformly sampled in frequency and angle, a polar reformatting is applied to interpolate the data onto a uniform $k_x$-$k_y$ grid first.
The ISAR image can then be obtained via a 2-D inverse fast
Fourier transform (IFFT) of the backscattered field. A 2-D
Hamming window is applied to the interpolated data before
the IFFT to reduce image sidelobes. Each ISAR snapshot is
obtained using a 12.7° angular swath for the data from 12-15
GHz and 38.1° for the data from 3-6 GHz. These angular
windows are chosen based on the narrow-band, small-angle approximation shown in the second part of Eq. (2) (where $f_c$ is the center frequency) to achieve an equal down-range and cross-range resolution of 5 cm (without windowing). It should
be pointed out that due to the large angular swath used,
especially for the low-frequency band, non-persistent
scatterers within the angular window may not be as well
focused as persistent ones. High-resolution spectral estimation
algorithms [13, 14] may be used to mitigate such difficulty.
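As a quick sanity check (ours, not from the letter), these swath choices follow from the standard formulas: down-range resolution $c/(2B)$ for bandwidth $B$, and cross-range resolution $c/(2 f_c \Delta\phi)$ under the small-angle approximation of Eq. (2). Both bands evaluate to 5 cm:

```python
import math

c = 3e8  # speed of light (m/s)
for f_lo, f_hi, swath_deg in [(12e9, 15e9, 12.7), (3e9, 6e9, 38.1)]:
    bw = f_hi - f_lo                    # 3 GHz bandwidth in both bands
    fc = 0.5 * (f_lo + f_hi)            # band center frequency
    dphi = math.radians(swath_deg)      # angular swath
    down_range = c / (2 * bw)           # down-range resolution
    cross_range = c / (2 * fc * dphi)   # cross-range resolution
    print(f"{f_lo / 1e9:g}-{f_hi / 1e9:g} GHz: "
          f"{100 * down_range:.1f} cm x {100 * cross_range:.1f} cm")
# Both lines print 5.0 cm x 5.0 cm, matching the quoted swaths.
```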
Finally, an 18 cm-radius calibration sphere is measured to
calibrate the results in terms of absolute RCS in dBsm.
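Putting Eqs. (1) and (2) together, the image-formation chain can be sketched as follows. This is a simplified illustration rather than the authors' implementation: SciPy's griddata stands in for the polar reformatting, and the input field is assumed to be already calibrated against the sphere.

```python
import numpy as np
from scipy.interpolate import griddata

def isar_image(es, freqs, phis, n_grid=256):
    """k-space ISAR image from monostatic swept-frequency turntable data.

    es    : (n_phi, n_freq) complex backscattered field E_s(f, phi)
    freqs : (n_freq,) swept frequencies (Hz)
    phis  : (n_phi,) aspect angles (rad) spanning one angular window
    """
    c = 3e8
    pp, ff = np.meshgrid(phis, freqs, indexing='ij')  # polar sample grid
    kx = (4 * np.pi * ff / c) * np.cos(pp)            # Eq. (2), exact form
    ky = (4 * np.pi * ff / c) * np.sin(pp)

    # Polar reformatting: interpolate the polar k-space samples onto a
    # uniform kx-ky grid (real and imaginary parts separately).
    kxg, kyg = np.meshgrid(np.linspace(kx.min(), kx.max(), n_grid),
                           np.linspace(ky.min(), ky.max(), n_grid))
    pts = (kx.ravel(), ky.ravel())
    eg = (griddata(pts, es.real.ravel(), (kxg, kyg),
                   method='linear', fill_value=0.0)
          + 1j * griddata(pts, es.imag.ravel(), (kxg, kyg),
                          method='linear', fill_value=0.0))

    # 2-D Hamming window to reduce image sidelobes, then 2-D IFFT (Eq. (1)).
    win = np.outer(np.hamming(n_grid), np.hamming(n_grid))
    img = np.fft.fftshift(np.fft.ifft2(eg * win))
    return 20 * np.log10(np.abs(img) + 1e-12)         # magnitude in dB
```

Linear gridding plus a windowed IFFT is the simplest realization; the high-resolution spectral estimators cited above [13], [14] would replace the window-and-IFFT stage when better focusing of non-persistent scatterers is needed.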
III. MEASUREMENT RESULTS: DJI PHANTOM 2
A. Baseline Scenario
First, we examine the results for a baseline scenario, viz.
DJI Phantom 2 from 12-15 GHz with the blades stationary,
without a camera mounted, using vertical polarization on
transmit and receive, and azimuth scan at zero elevation angle.
The resulting sinogram is shown in Fig. 2(a). The majority of
the backscattered signal is confined within a 35 cm range
extent. This agrees with the diagonal width of the drone.
Additional returns beyond 17.5 cm in down-range are likely
due to multiple scattering, but they are not prominent.
Figs. 2(b)-(f) show the ISAR images at AZ = 8°, 45°, 90°,
135°, and 172°, respectively. These angles are defined by the
central AZ angle of each angular window. Thus, Fig. 2(b) is
the ISAR image at 8° to the left of the exact frontal view. The
geometrical outline of the drone in its proper orientation is
overlaid onto each ISAR image for comparison. In addition,
the highest RCS level is marked in each figure. Due to the
small size of the drone, there are fewer than 7 resolution cells
in either the down-range or cross-range dimensions over the
drone. Through the sequence of images, five main scattering
mechanisms are revealed. The strongest scattering feature is
shown in Fig. 2(d) where the AZ angle is at 90° (or the
broadside view of the drone). Here, the strongest scattering is
located at the center of the drone and can be attributed to the
battery pack of the drone, which is a rectangular cuboid with a
“bulge” at the tail end. At the broadside view, the largest
surface area of the battery pack is perpendicular to the radar
line of sight (RLOS). At -9.3 dBsm, this is the highest RCS
level over all azimuth angles. Overall, the battery pack return
is prominent at the cardinal angles and much weaker
elsewhere. The four other scattering mechanisms are due to
the four drone motors. Their returns are visible except when
shadowed. The full ISAR movie can be found online [15]. It is
clear that the ISAR images are more insightful than the
sinogram since they reveal the 2-D spatial locations of the
scattering features.
B. Effect of Rotating Blades
Next, we deviate from the baseline scenario by repeating
the measurement with the plastic blades rotating at their
minimum speed (which does not create sufficient lift for flight).
There are no observable differences from the baseline scenario
in either the sinogram or the ISAR images over all azimuth
angles. A side-by-side ISAR image comparison with the
baseline scenario at 102° AZ is shown in Fig. 3 to illustrate
this observation. Thus, the spinning blades do not create any
significant dynamic features in the radar signature relative to
Fig. 3. Effect of rotating blades (EL = 0°, AZ = 102°). (a) Baseline scenario (static blades); max = -14.6 dBsm. (b) Rotating-blades scenario; max = -15.3 dBsm.
Fig. 4. Effects of horizontal polarization (EL = 0°, AZ = 6°). (a) Baseline scenario (vertical polarization); max = -13.5 dBsm. (b) Horizontal-polarization scenario; max = -15.3 dBsm.
Fig. 5. Effects of a mounted camera (EL = 0°, AZ = 60°). (a) Baseline scenario (no camera); max = -12.0 dBsm. (b) Mounted-camera scenario, with the camera return indicated by an arrow; max = -12.0 dBsm.
Fig. 6. Effects of frequency change (EL = 0°, AZ = 194°). (a) Baseline scenario (12-15 GHz); max = -17.2 dBsm. (b) 3-6 GHz scenario; max = -27.5 dBsm.
Fig. 7. Elevation scan (AZ = 0°). (a) ISAR image centered at -90° elevation; max = -9.0 dBsm. (b) ISAR image centered at 0° elevation; max = -9.6 dBsm.
the drone body. This is consistent with the fact that the
stationary blades were not visible in the ISAR images in the
static-blade scenario.
C. Effects of Polarization
We change the polarization from vertical to horizontal on
transmit and receive. A side-by-side ISAR image comparison
with the baseline scenario at 6° AZ is shown in Fig. 4. By switching to horizontal polarization, the return strength from the battery pack has decreased, while the return strength from the drone motors has increased. Of note, the plastic blades
(stationary or spinning) of the drone are still not visible under
horizontal polarization.
D. Effects of a Mounted Camera
Next, we mount a GoPro HERO4 camera to the base of the
drone. A side-by-side ISAR image comparison with the
baseline scenario at 60° AZ is shown in Fig. 5. The camera
return is indicated by the arrow in Fig. 5(b). In fact, this aspect
is where the mounted camera is most prominent. For most
azimuth angles, it is found that the results are not significantly
different from those of the without-camera case. The camera is
mounted below the battery pack. Thus, its return, in general,
will coalesce with the battery pack returns when illuminated at
a zero elevation angle.
E. Effects of Frequency Change
Next, we change the frequency range in the measurement
from 12-15 GHz to 3-6 GHz. A side-by-side ISAR image
comparison with the baseline scenario at 194° AZ is shown in
Fig. 6. The maximum RCS in Fig. 6(b) has decreased by 10.3
dB in comparison to Fig. 6(a). When averaged over all
azimuth angles, the maximum RCS level is 11.6 dB lower at
the 3-6 GHz band in comparison to that at the 12-15 GHz band.
Multiple scattering also appears more prominent at the lower
frequency band.
F. Elevation Scan
Finally, an elevation scan of the drone is collected at the
zero azimuth angle at 12-15 GHz. Vertical polarization is used
on transmit and receive. Fig. 7(a) shows an ISAR image with
the angular window centered at -90° EL. This corresponds to
the scenario where the radar observes the drone flying by
directly overhead and results in a side-view image of the
drone. Fig. 7(b) shows an ISAR image with the angular
window centered at 0° EL. Unlike the previous azimuth-scan
scenarios where the length and width of the drone are
captured, these elevation-scan images capture the length and
height of the drone.
IV. MEASUREMENT RESULTS: LARGER DRONES
Next, we examine two larger drones: the 3DR Solo (shown
in Fig. 8(a) with a 46 cm diagonal width [16]) and the DJI
Inspire 1 (shown in Fig. 8(d) with a 56 cm diagonal width
[17]). We present the results of each at both the 12-15 GHz
and 3-6 GHz bands, using vertical polarization on transmit and
receive, azimuth scan at zero elevation angle, and with the
blades stationary.
Fig. 8. (a) 3DR Solo [16]. (b) Solo ISAR image at 90° AZ, 12-15 GHz; max = -14.1 dBsm. (c) Solo ISAR image at 90° AZ, 3-6 GHz; max = -24.2 dBsm. (d) DJI Inspire 1 [17]. (e) Inspire 1 ISAR image at 270° AZ, 12-15 GHz; max = -3.0 dBsm. (f) Inspire 1 ISAR image at 270° AZ, 3-6 GHz; max = -13.7 dBsm. All ISAR images at EL = 0°.
A. 3DR Solo
The resulting ISAR images for the 3DR Solo at broadside
are shown in Figs. 8(b) and 8(c) for 12-15 GHz and 3-6 GHz,
respectively. The ISAR image shows that the dominant returns
are due to the drone battery pack and its four motors. It is clear
from the image that the Solo is larger than the Phantom 2. The
maximum RCS shown at the broadside view, -14.1 dBsm, is
the highest RCS level over all azimuth angles.
Interestingly, despite the larger size, this is about 5 dB lower
than the Phantom 2. We believe this is due to the shape of the
Solo’s body and battery pack. Similar to the Phantom 2, the
overall RCS is weaker by approximately 10 dB in the 3-6 GHz
band in comparison to that at 12-15 GHz. Multiple scattering
also appears more prominent in the lower band.
B. DJI Inspire 1
The ISAR images of the DJI Inspire 1 at broadside are
shown in Figs. 8(e) and 8(f) for 12-15 GHz and 3-6 GHz,
respectively. In this case, in addition to the drone battery pack
and rotor motors, the horizontal frame of the Inspire 1 has a
large contribution to the drone return. The large size of the
drone is also reflected in the ISAR images. The maximum
RCS at the broadside view, -3 dBsm, is again the highest RCS
level over all azimuth angles and is due to the drone battery
pack. Analogous to the other two drones, in the 3-6 GHz band,
the overall RCS has decreased by about 10 dB and multiple
scattering is more prominent.
V. CONCLUSION
In this paper, we have presented a measurement study of the
radar signatures of several consumer drones. The results show
that the non-plastic portions of the drones (battery pack,
motors, carbon fiber frame, etc.) dominate their radar
signatures. It has also been shown that the plastic drone blades
do not contribute a significant return (while stationary or
spinning). While the overall RCS level is low, the resulting
ISAR images reveal the size and geometrical outlines of each
drone, which could enable drone detection and identification.
The ISAR images presented were formed from turntable data collected under idealized conditions. However, it would be feasible to
collect such data from an actual drone in flight. By using
motion compensation (both translation and rotation) [18, 19],
it would be possible to form 2-D ISAR images of the drone for
classification. This topic is currently under study.
ACKNOWLEDGMENT
The authors would like to thank UAV Direct, Liberty Hill,
TX for providing the drones used in our measurement.
REFERENCES
[1] “Amazon unveils futuristic plan: delivery by drone,” CBS News, 60 Minutes: Overtime, Dec. 2013 [Online]. Available: http://www.cbsnews.com/news/amazon-unveils-futuristic-plan-delivery-by-drone
[2] Federal Aviation Administration, “Registration and marking
requirements for small unmanned aircraft,” FAA-2015-7396, Dec. 2015.
[3] Advanced Radar Technologies, Madrid, Spain, “ART drone sentinel product brief,” May 2015 [Online]. Available: http://www.advancedradartechnologies.com/products-services/art-drone-sentinel
[4] F. Fioranelli, M. Ritchie, H. Griffiths, and H. Borrion, “Classification of loaded/unloaded micro-drones using multistatic radar,” Electron. Lett., vol. 51, no. 22, pp. 1813-1815, 2015.
[5] J. Bric, “Imaging a BQM-74E target drone using coherent radar cross
section measurements,” Johns Hopkins APL Technical Digest, vol. 18,
no. 3, pp. 365-376, 1997.
[6] L. To, A. Bati, and D. Hillard, “Radar cross section measurements of
small unmanned air vehicle systems in non-cooperative field
environments,” in Proc. 2009 European Conference on Antennas and
Propagation, Berlin, Germany, pp. 3637-3641, Mar. 2009.
[7] Efield AB, Kista, Sweden, “RCS simulation of a predator drone,” Jan. 2010 [Online]. Available: http://www.efieldsolutions.com/example_rcs_predator.pdf
[8] P. Molchanov, K. Egiazarian, J. Astola, R. I. A. Harmanny,
“Classification of small UAVs and birds by micro-Doppler signatures,”
in Proc. 2013 European Radar Conference, Nuremberg, Germany, pp.
172-175, Oct. 2013.
[9] P. Pouliguen, L. Lucas, F. Muller, S. Quete, and C. Terret, "Calculation
and analysis of electromagnetic scattering by helicopter rotating blades,"
IEEE Trans. Antennas Propagat., vol. 50, pp. 1396-1408, Oct. 2002.
[10] M. Bell and R. A. Grubbs, “JEM modeling and measurement for radar
target identification,” IEEE Trans. Aerosp. Elect. Syst., vol. 29, pp. 73-
87, Jan. 1993.
[11] DJI, http://store.dji.com/product/phantom-2.
[12] M. Soumekh, Synthetic Aperture Radar Signal Processing with
MATLAB Algorithms, New York: Wiley, 1999.
[13] I. J. Gupta, “High-resolution radar imaging using 2-D linear prediction,”
IEEE Trans. Antennas and Propag., vol. 42, pp. 31-37, Jan. 1994.
[14] J. W. Odendaal, E. Barnard, and C. W. I. Pistorius, “Two-dimensional
superresolution radar imaging using the MUSIC algorithm,” IEEE
Trans. Antennas and Propag., vol. 42, pp. 1386-1391, Oct. 1994.
[15] http://users.ece.utexas.edu/~ling/DroneISARMovie.gif.
[16] 3D Robotics, https://3drobotics.com/.
[17] DJI, http://store.dji.com/product/inspire-1.
[18] J. Holzner, U. Gebhardt, and P. Berens, “Autofocus for high resolution
ISAR imaging,” in Proc. European Conference on Synthetic Aperture
Radar (EUSAR 2010), Aachen, Germany, pp. 720-723, June 2010.
[19] C. J. Li and H. Ling, “Wide-angle ISAR imaging of vehicles,” in Proc.
2015 European Conference on Antennas and Propagation, Lisbon,
Portugal, pp. 1-2, Apr. 2015.