
The Impact of Adverse Weather Conditions on Autonomous Vehicles: Examining how rain, snow, fog, and hail affect the performance of a self-driving car
Shizhe Zang (The University of Sydney); Ming Ding, David Smith, Paul Tyler, Thierry Rakotoarivelo, Mohamed Ali (Data61, CSIRO)



Recently, the development of autonomous vehicles and intelligent driver assistance systems has drawn a significant amount of attention from the general public. One of the most critical issues in the development of autonomous vehicles and driver assistance systems is their poor performance under adverse weather conditions such as rain, snow and fog. However, up to now there has been no study which provides readers with a systematic and unified review of the weather effect on the various types of sensors used in autonomous vehicles. In this paper, we first present a literature review of the impact of adverse weather conditions on state-of-the-art sensors, such as light radar (LIDAR), the global positioning system (GPS), camera and radar. Then we propose a new model to characterize the rain effect on millimeter-wave (MMW) radar, which considers both the rain attenuation and the backscatter effects. Our simulation results show that the detection range of the MMW radar can be reduced by up to 55% under severe rainfall conditions. Moreover, the rain backscatter effect is significantly different for targets with different radar cross-section areas.
Recently, the development of autonomous vehicles and intelligent driver assistance systems has drawn a significant amount of attention from the public. In August 2013, a Mercedes-Benz S-Class vehicle called “Bertha” drove autonomously without human intervention for about 100 km from Mannheim to Pforzheim, Germany. Later, in June 2016, Google had tested their fleet in autonomous mode over a total of 2,777,585 km. The fleet included the Audi TT, Toyota Prius, Lexus RX450h and Google's own prototype vehicles. This advanced technology is not only able to improve road safety, but also to relieve the burden of many tasks performed by a driver. These include the following:
o Self-steering with lane recognition;
o Distance maintenance in platooning vehicles;
o Self-parking;
o Automatic braking systems with pedestrian recognition;
o Notification of traffic lights, signs, and so on.
Since the on-road driving environment is very complex and
dynamic, especially for the navigation and control systems, most
of the current autonomous vehicles are equipped with different
types of sensors. This enables systems to take advantage of their
respective strengths and obtain more accurate awareness of the
environment by means of fusion techniques. These sensors
include camera, light radar (LIDAR), radar, GPS and sonar.
Currently, one of the most critical issues in the development
of autonomous vehicles and driver assistance systems is their
poor performance under adverse weather conditions, such as
rain, snow and fog. In the case of bad weather, human vision is
degraded and proper functioning of driver assistance systems
becomes even more essential to drivers. Unfortunately, like
the human vision, these sensors are also negatively impacted
by adverse weather conditions. For example, rainy and foggy conditions cause significant degradation of the camera and LIDAR [1]. Consequently, inaccurate information from sensors
can lead to wrong decisions and in turn car crashes. Therefore,
research on the sensor performance under adverse weather
conditions is particularly urgent for the development of
autonomous vehicles.
Research on the sensor degradation due to bad weather
conditions has emerged in recent decades. For example, the
authors of [2] focused on the performance of LIDAR under
various weather phenomena. The authors of [3] studied the
rain effect on cameras. The authors of [4] and [5] described
propagation effect on millimeter-wave (MMW) radar under
various weather conditions. However, up to now there has been no study which provides readers with a systematic and unified review of the weather effect on the various types of sensors used in autonomous vehicles.
Since radar can measure the radial distance and velocity of remote objects very precisely, the market for driver assistance systems based on millimeter-wave radar sensor technology is
gaining momentum. According to [4], the most serious source
for radar signal attenuation is rain. In [5], it is stated that
backscatter also contributes significantly to the impact of
rain on millimeter-wave radar because the droplet sizes are
comparable to the radar wavelength. The attenuation effect
reduces the received power of useful signals and the backscatter
effect increases the interference at the receiver. Therefore, the combined effect of these two factors should be carefully considered.
In this paper, we first present a literature review of the influence of adverse weather conditions on state-of-the-art sensors such as LIDAR, GPS, camera and radar. Then, we propose a new model to characterize the rain effect on the MMW radar, which considers both the rain attenuation and the backscatter effects. Our simulation results show that the detection range of the MMW radar can be reduced by up to 55% under severe rainfall conditions. Moreover, for targets with different radar cross-section areas, the rain backscatter effect is significantly different.
Figure 1 Diagram of various sensors on an autonomous vehicle
The rest of the article is organized as follows. Section II
reviews the weather effects on LIDAR, camera and radar.
Section III presents our mathematical modeling of the rain effect
on the MMW radar. Section IV presents simulation results
and provides implementation considerations of MMW radar
according to the simulation results. Finally, Section V
concludes the paper.
This section reviews different sensing technologies for
autonomous vehicles and their respective issues under a
variety of adverse weather conditions (e.g. rain, fog, snow, or
space weather). These considered sensing technologies
include LIDAR, video camera, GPS, and radar.
Note that besides the aforementioned sensors, sonar is also popular equipment for navigation and ranging on autonomous vehicles. Sonar is able to operate in adverse weather with little interference due to its short measurement range of less than 10 m [6]. Due to its short detection range and limited usage in autonomous cars, it is not considered in the following review.
LIDAR

Since light radar (LIDAR) sensors are able to provide outstanding angular resolution and highly accurate range measurements, in recent years they have been proposed as an essential part of high-performance perception systems for advanced driver assistance functions.
The working principle of LIDAR is to first transmit a laser pulse and then measure the time until it receives the bounced, or reflected, pulse signal from a target object. The distance between the LIDAR device and the target is thus equal to half the round-trip time multiplied by the speed of light.
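This time-of-flight principle can be sketched in a few lines of Python (the pulse timing below is an illustrative value, not from the paper):

```python
# Sketch of the LIDAR time-of-flight principle described above:
# distance = (round-trip time x speed of light) / 2.
C = 299_792_458.0  # speed of light, m/s

def lidar_range(round_trip_time_s: float) -> float:
    """Return the target distance in metres from a pulse round-trip time."""
    return 0.5 * round_trip_time_s * C

# A pulse returning after roughly 667 ns corresponds to a target ~100 m away.
print(lidar_range(667e-9))
```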
Clemens et al. [1] proposed that the criterion for object
detection at the receiver should be that the voltage of the
reflected pulse is greater than a noise voltage threshold. The
authors of [7] gave formulas about receiving power and signal-
to-noise-ratio (SNR) of LIDAR with an avalanche photodiode
(APD) receiver. They also studied the relationship among peak
return detection, false alarm rate and SNR. From the
formulation of LIDAR in [1], the receiving power is affected
by an extinction coefficient α and a backscattering coefficient
β from different weather conditions. In [2], Rasshofer et al.
presented the mathematical formulas to calculate these
coefficients under rainy, foggy and snowy conditions,
respectively. According to [2] and [7], these coefficients are
determined by the distributions of particle diameters and sizes, which are related to the rainfall and snowfall rates.
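The attenuation side of this dependence can be illustrated with a two-way Beer-Lambert sketch; the model and the extinction coefficients below are illustrative assumptions, not the exact formulations of [1] or [2]:

```python
import math

# Illustrative sketch (not the exact model of [1]/[2]): two-way Beer-Lambert
# attenuation of a LIDAR return, P(R) = P0 * exp(-2 * alpha * R),
# where alpha is the weather-dependent extinction coefficient (1/m).

def received_fraction(alpha_per_m: float, range_m: float) -> float:
    """Fraction of the clear-air return power surviving two-way extinction."""
    return math.exp(-2.0 * alpha_per_m * range_m)

# Dense fog (large alpha) attenuates far more than rain at the same range;
# the alpha values here are placeholders chosen only to show the trend.
for label, alpha in [("rain", 0.001), ("fog", 0.02)]:
    print(label, received_fraction(alpha, 100.0))
```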
In [7], it was mentioned that fog produces extinction and backscattering coefficients that are higher than those of snowfall and rainfall. For rain and snow, the extinction coefficients show no frequency selectivity. As a result, for a LIDAR with a wavelength of 900 nm, fog has the largest impact on its detection ability. Under rainy conditions,
if the raindrop is very close in distance to the laser emitter,
there will be a high chance of false detection. In other words, if the laser beam intersects a particle that generates a “burst of light” like a small surface, there will be a peak return similar to the one from an object on the road.
Some commercial products already have automatic fog correction. Similarly, raindrops and snowflakes can be filtered out by the use of pixel-oriented evaluation [3]. Pixel-oriented evaluation involves saving sequential measurement values from each individual spot in each scan, with a separate counter started for each spot. Erroneous measurements can then be filtered out by repeatedly examining the reported spot. However, no study quantifies how much accuracy these techniques can guarantee in adverse weather conditions.
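A minimal sketch of the pixel-oriented evaluation idea is given below; the history length, tolerance and median-consensus rule are illustrative assumptions rather than the exact procedure of [3]:

```python
from collections import deque

# Hedged sketch of "pixel oriented evaluation": keep a short history of range
# measurements per scan spot and reject a reading that disagrees with the
# recent consensus (e.g. a close raindrop return). Thresholds are assumptions.

class SpotFilter:
    def __init__(self, history: int = 5, tolerance_m: float = 0.5):
        self.buf = deque(maxlen=history)
        self.tol = tolerance_m

    def accept(self, range_m: float) -> bool:
        """Return True if the measurement agrees with this spot's history."""
        self.buf.append(range_m)
        if len(self.buf) < self.buf.maxlen:
            return True  # not enough history yet to judge
        median = sorted(self.buf)[len(self.buf) // 2]
        return abs(range_m - median) <= self.tol

spot = SpotFilter()
readings = [20.1, 20.0, 20.2, 20.1, 3.5]  # last reading: a close raindrop
print([spot.accept(r) for r in readings])
```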
Video Camera

Vehicle on-board cameras provide drivers with crucial visual signals and information for safe driving. The information can help detect pavement markings, road signs or hazards such as obstacles. These systems provide a good service in more typical weather conditions, especially in clear daylight.
However, such systems should not degrade in severe weather
and the camera system should continue to provide helpful
information under these conditions, especially as the driver is
under an increased workload. Bad weather reduces the scene
contrast and visibility, and hence this subsection will analyze
the performance of camera based systems in the three most
frequent adverse weather conditions: snow, rain and fog.
Rainy and Snowy Conditions
Most algorithms used in outdoor vision systems assume that
intensities of image pixels are proportional to scene brightness.
However, dynamic weather (rain and snow) introduces sharp
intensity fluctuations in images and videos, which degrade the
qualities of images and videos, and thus violate this basic
assumption [8]. For example, raindrops in the air can create a raindrop pattern on the image, which decreases the image intensity and blurs the edges of other patterns behind it [3]. Heavy snow in the air can increase the image intensity and obscure the edges of object patterns in the image or video so that the objects cannot be recognized. Technologies to deal with this
issue can be classified into two main categories: real-time processing and post-processing.
In terms of real-time processing, Garg and Nayar [8] showed
that by appropriately selecting camera parameters, one could
reduce (and sometimes remove) the effect of rain, without
appreciably altering the appearance of the scene. This is done
during image acquisition and does not require any post-
processing. The work first analyzed the visibility of rain and then presented a method that automatically sets the camera parameters (exposure time, F-number and focus setting) to remove or reduce the effect of rain based on that analysis.
Apart from their influence on the image, raindrops and
snowflakes can affect the camera directly. In snowy conditions, cool temperatures affect a camera system through optical and mechanical disruptions. An un-shielded camera can be easily damaged in such conditions. If the on-board camera is a powered rotation camera such as an auto tracker, ice formed from residual moisture from rain or sleet can cause the camera to be locked in place and prevent rotation. If there is moisture around a camera below freezing point, frost can cover the camera's lens and prevent the viewer from seeing any activity besides the crystalline patterns of the snow. These issues can be solved by a self-heating camera, which generates heat during its operation to avoid frozen moisture inside or on its lens. Rainy conditions affect the system through electrical and optical disruptions. If the system is not waterproof, it can be damaged by a short circuit from raindrops. Optically, raindrops on the lens can change the focus of the camera. As a result, the part of the image affected by raindrops will be out of focus and blurred. These effects can lead to the failure of image processing such as pattern recognition.
Foggy Condition
Under foggy conditions, moisture such as condensation on a lens has a similar impact on image processing as snowy conditions. Furthermore, fog negatively influences perception and creates potentially dangerous situations, as it can reduce the contrast of the image and increase the difficulty of pattern edge recognition [1]. In the case of an observed fog scene, the frequency components are concentrated at zero frequency, whereas in a scene without fog one finds a broadly spread spectrum. Sharp edges are modeled by different low and high frequencies, whereas smooth edges are characterized only by low frequencies.
From [9], the current technologies to remove fog effects are
of two types: fog correction and fog removal. Fog correction is
based on the correction of contrast level, for example, color
correction. In a fog removal process, the fog level over an image
is estimated and removed. The authors of [9] reviewed state-of-the-art image enhancement and restoration methods for improving the
quality and visibility level of an image through several aspects:
image acquisition, estimation, enhancement, restoration and
compression. They also compared these approaches through
implementation of the methods while using the same parameter
values for critical analysis.
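As a minimal illustration of the fog-correction family, the sketch below applies a global contrast stretch that remaps a narrow (fog-compressed) grey-level band back to the full [0, 255] scale; the real enhancement and restoration methods surveyed in [9] are far more elaborate:

```python
# Minimal sketch of "fog correction" by contrast stretching: a foggy frame
# occupies a narrow band of grey levels, so remap [min, max] to [0, 255].
# The sample pixel values are illustrative.

def contrast_stretch(pixels: list[int]) -> list[int]:
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return pixels[:]  # flat image: nothing to stretch
    return [round((p - lo) / (hi - lo) * 255) for p in pixels]

foggy_row = [120, 130, 140, 150]   # fog compresses the grey-level range
print(contrast_stretch(foggy_row))  # -> [0, 85, 170, 255]
```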
Global Positioning System (GPS)
The GPS can provide real time location of a vehicle, which
is the core functionality for any navigation system combined
with a digital map. In theory, local weather condition does not
affect the accuracy of GPS positioning, as it is designed for all-weather conditions. The GPS signal frequency of around 1575 MHz is chosen because it is a frequency "window" for signal propagation that is mostly unaffected by the weather [10].
However, vehicle GPS does suffer from some performance
degradation on rainy or snowy days. For example, if a GPS
module is installed inside a car attached on the windscreen, the
wipers running across the windshield will intermittently block
reception and make it difficult for a GPS device to detect a
complete navigation data string. Since GPS receives location
signal from different satellites, it is important for the receiver to
have a clear line of sight to the sky. When a wiper passes between the GPS receiver and the sky, the receiver is momentarily blocked and has trouble receiving the bits of the string that arrive during the pass. Therefore, the GPS may not decode the received string properly and is likely to give inaccurate
information. On the other hand, if the GPS is mounted outside a car, raindrops will affect the receiving frequency of the GPS antenna and also attenuate the signal.
Although GPS is not affected much by local weather, it can
be influenced by space weather. Irregularities in the ionospheric
layer of the Earth’s atmosphere can at times lead to rapid fading
in received signal power levels due to destructive interference in
multi-path signals. This phenomenon, referred to as ionospheric scintillation, can lead to a receiver being unable to track one or more visible satellites for short periods of time. Other factors
affecting GPS accuracy include radio interference from other
research satellites and multi-path fading from nearby
environment [10]. One way to improve the GPS accuracy is to
enhance the receiver tracking threshold, especially by means of
external velocity aiding from an inertial measurement unit
(IMU). This has already been used in some luxury cars. Other
techniques include redesigning the receiver antenna and signal
processing procedures.
Radar

Over the last decade, radar-based driver assistance and active safety systems have found wide application with nearly all vehicle manufacturers around the world. The basic principle of this kind of radar is range and velocity detection of moving objects.
According to the American and European standards, the frequency bands of automotive radar are around 24 and 77 GHz [11], and radar operating in these bands is called millimeter-wave (MMW) radar. Compared to traditional microwave radar systems, millimeter-wave radar is able to provide higher resolution. In microwave systems, transmission
loss is mainly caused by the free-space loss. However, in the
millimeter-wave spectrum, additional loss factors come into
play, such as rain, snow and mist in the transmission medium
[4]. According to [4], the most serious source for radar signal
attenuation is rain. In [5], it is stated that backscatter also
contributes significantly to the impact by rain on millimeter-
wave radar. This is because the droplet sizes are comparable to
the radar wavelength. The attenuation effect reduces the
received power of useful signals and the backscatter effect
increases the interference at the receiver. In the event of snow or mist, MMW radars are also affected in the form of attenuation and backscatter. From [12], the mathematical models for the attenuation and backscatter of snow and mist are the same as those of the rain model, though mist and snow have different attenuation and backscatter coefficients, whose calculation methods are also studied there. In the next section,
we will investigate the rain attenuation effect, the backscatter
effect and their combined effect in more detail.
Rain Effect on MMW Radar

As mentioned above, rain effects on MMW radar can be classified into two types: attenuation and backscatter. Mathematical models of both effects, the receiver noise and the combined effect will be described in the following subsections.
The Attenuation Effect

From [4], the received signal power under rain can be written as

$$P_r = \frac{P_t\,G^2\,\lambda^2\,\sigma\,F^4}{(4\pi)^3\,R^4}\cdot 10^{-0.2\,\gamma_r R},\qquad(1)$$

where $R$ is the distance between the radar and the target (in km for the attenuation term); $P_r$ is the signal power at the receiver; $\lambda$ is the radar wavelength; and the other variables are explained in Table 1. The parameters $F$ (multipath coefficient) and $\gamma_r$ (rain attenuation coefficient, in dB/km) need to be calculated by other formulas in [4], and $\eta$ (rain backscatter coefficient) needs to be calculated by using the formula for the Marshall-Palmer distribution in [13].
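A numerical sketch of this received-power calculation follows; the transmit power, gain and attenuation coefficient below are illustrative assumptions, since several Table 1 entries are not legible in this copy:

```python
import math

# Sketch of Equation (1): classic radar range equation scaled by a two-way
# rain attenuation term 10 ** (-0.2 * gamma * R_km), with gamma in dB/km.
# All numeric parameter values here are illustrative assumptions.

def received_power(pt_w, gain, wavelength_m, rcs_m2, range_m,
                   gamma_db_per_km=0.0, multipath=1.0):
    """Received power (W) with two-way rain attenuation and multipath factor."""
    free_space = (pt_w * gain**2 * wavelength_m**2 * rcs_m2 * multipath**4) \
                 / ((4 * math.pi)**3 * range_m**4)
    rain_loss = 10 ** (-0.2 * gamma_db_per_km * (range_m / 1000.0))
    return free_space * rain_loss

# 77 GHz radar (wavelength ~3.9 mm), sedan RCS 15.85, 100 m range:
clear = received_power(10.0, 100.0, 3.9e-3, 15.85, 100.0)
rainy = received_power(10.0, 100.0, 3.9e-3, 15.85, 100.0, gamma_db_per_km=10.0)
print(rainy / clear)  # rain attenuation reduces the received power
```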
The Backscatter Effect
According to formula 4.1 in [14], the detection process also depends on the signal-to-interference-plus-noise ratio (SINR). Maintaining the SINR above a certain threshold is vital for reliable detection. From formula 4.20 in [14], the relationship between the power intensity of the target signal and that of the backscatter signal is characterized by

$$\frac{P_r}{P_b} = \frac{8\,\sigma}{\pi\,\theta^2\,R^2\,c\,\tau\,\eta},\qquad(2)$$

where $P_r$ and $P_b$ are the power intensities of the target and backscatter signals, respectively; $c$ is the speed of light; and the other parameters are explained in Table 1.
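The target-to-backscatter ratio can be sketched as follows; the beamwidth, pulse duration and backscatter coefficient are illustrative assumptions rather than the paper's exact Table 1 values:

```python
import math

# Sketch of Equation (2): ratio of target return to rain-backscatter return,
# P_r / P_b = 8 * sigma / (pi * theta**2 * R**2 * c * tau * eta).
# Numeric parameter values below are illustrative assumptions.

C = 299_792_458.0  # speed of light, m/s

def signal_to_backscatter(rcs_m2, beamwidth_rad, range_m, tau_s, eta_per_m):
    """P_r / P_b for a target of RCS sigma inside rain of reflectivity eta."""
    clutter = (math.pi / 8) * beamwidth_rad**2 * range_m**2 * C * tau_s * eta_per_m
    return rcs_m2 / clutter

# A pedestrian (RCS 1) suffers far more from backscatter than a sedan (15.85):
for rcs in (15.85, 1.0):
    print(rcs, signal_to_backscatter(rcs, math.radians(3), 100.0, 1e-7, 1e-6))
```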
$P_t$, Transmission Power
$G$, Antenna Gain
$f$, Radar Frequency
$\tau$, Pulse Duration (s)
$\theta$, Antenna Beamwidth: 2 - 4
$\sigma$, Radar Cross-Section of Target: Sedan: 15.85, Pedestrian: 1
$F_N$, Receiver Noise Figure
$B$, Receiver Filter Bandwidth: 20 MHz - 4 GHz
$T$, Thermal Temperature: 293 Kelvin

Table 1 Simulation parameter values
Receiver Noise
From [15], the receiver noise for a MMW radar is given by

$$N = k\,T\,B\,F_N,\qquad(3)$$

where $N$ is the receiver thermal noise in watts, $k$ is Boltzmann's constant, and the other variables are explained in Table 1.
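Equation (3) is a one-liner in code; the temperature follows Table 1 (293 K), while the linear noise figure of 10 (i.e. 10 dB) is an illustrative assumption:

```python
# Sketch of Equation (3): receiver thermal noise N = k * T * B * F_N, with
# Boltzmann's constant k, temperature T, bandwidth B and linear noise
# figure F_N. The noise figure value used below is an assumption.

K_BOLTZMANN = 1.380649e-23  # J/K

def receiver_noise(temp_k: float, bandwidth_hz: float, noise_figure: float) -> float:
    """Receiver thermal noise power in watts."""
    return K_BOLTZMANN * temp_k * bandwidth_hz * noise_figure

# A wider filter bandwidth means proportionally more noise (the trade-off
# discussed later for radar bandwidth selection):
print(receiver_noise(293.0, 20e6, 10.0))
print(receiver_noise(293.0, 4e9, 10.0))
```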
The Combined Effect on MMW Radar
If we treat the rain backscatter effect as interference at the receiver, the SINR can be calculated as $\mathrm{SINR} = P_r/(P_b + N)$, where $P_r$ is the signal power at the receiver, and $P_b$ and $N$ are the backscatter signal power and the receiver thermal noise, respectively. Without loss of generality, we assume the areas of the radar receiver and transmitter are the same. Therefore, $P_b$ is the backscatter power appearing in Equation (2) and can be calculated by using Equation (1) and Equation (2). Obviously, the signal-to-noise ratio (SNR) at the receiver can be written as $\mathrm{SNR} = P_r/N$.
Simulation Results and Implementation Considerations

In this section, four general scenarios and implementation aspects are considered, which are described as follows.
Scenario 1: The radar detection range of a sedan is calculated versus different rainfall rates (0 - 400 mm/hr) with fixed SINRs (10 dB, 13 dB and 20 dB). The effect of multi-path is also considered.
Scenario 2: The target of radar is changed to a pedestrian and
the rest are the same as Scenario 1.
Scenario 3: Comparison between the rain attenuation and
backscatter effects is performed under different rainfall rates (0
- 400mm/hr) with fixed distance (100m) for detection of a sedan.
Both SNR and SINR versus rainfall rate are plotted.
Scenario 4: The target of the radar is changed to a pedestrian and the fixed distance is changed to 50 m. The rest of the scenario is the same as Scenario 3.
The parameter values used in our simulations are
summarized in Table 1.
Scenario 1: Radar Detection Range of a Sedan
versus Rainfall Rate
As mentioned above, for reliable target detection, there is a
requirement for the receiver SINR. The SINR for radar detection
is related to the probability of detection and the false alarm rate. A typical radar system will operate with a detection probability of 0.9 and a small probability of false alarm. The required SINR for millimeter-wave radar was suggested to be 13.2 dB in [16]. Therefore, in our simulation, the minimum required SINR for the combined effect is set to 13 dB with the multi-path effect, and to 10, 13 and 20 dB without the multi-path effect.
Different detection ranges can be obtained by solving the SINR equation for different rainfall rates. Since the maximum recorded rainfall rate in the world is 401 mm/hr, the range of rainfall rate is set to 0 - 400 mm/hr.
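The range-solving step can be sketched with a simple bisection; the SINR model below is a simplified stand-in (received power falling as $1/R^4$ with rain attenuation against a fixed noise floor), not the paper's full model:

```python
import math

# Sketch: find the largest range R at which SINR(R) still meets a threshold.
# The SINR model here is a crude placeholder with assumed constants.

def sinr_db(range_m: float, rain_gamma_db_per_km: float) -> float:
    p_r = 1.0 / range_m**4 * 10 ** (-0.2 * rain_gamma_db_per_km * range_m / 1000)
    noise = 1e-12  # assumed fixed noise floor
    return 10 * math.log10(p_r / noise)

def detection_range(threshold_db: float, gamma: float,
                    lo: float = 1.0, hi: float = 1000.0) -> float:
    """Bisection on the monotonically decreasing SINR(R)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if sinr_db(mid, gamma) >= threshold_db:
            lo = mid
        else:
            hi = mid
    return lo

# Heavier rain shrinks the achievable range at the same 13 dB requirement:
print(detection_range(13.0, 0.0), detection_range(13.0, 50.0))
```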
The simulation results are shown in Figure 2a with multi-
path effect and Figure 2b without multi-path effect. From Figure
2a, it can be seen that the multi-path effect (causing the variance
of the detection distance) is less observable with the increase of
the rainfall rate. This is because the high rainfall rate tends to
attenuate the detection distance severely and lead to the
reduction of the variance caused by multi-path. From Figure 2b,
we can observe that the detection range decreases faster with
rainfall rate at a higher SINR requirement. For example, when
SINR = 20dB, the detection range decreases 17% at severe rain
condition (50mm/hr) and 33% at extreme condition (400mm/hr).
However, when SINR = 10dB, the reduction is 9% for the severe
condition (50mm/hr) and 20% for the extreme condition
(400mm/hr), respectively. For normal detection requirement
(SINR = 13dB), the drop is around 11% for the severe condition
(50mm/hr) and 22% for the extreme condition (400mm/hr),
respectively. Therefore, we can conclude that severe and
extreme rainfall conditions have a significant impact on the radar
detection range.
Scenario 2: Radar Detection Range of a Pedestrian
versus Rainfall Rate
The difference between this scenario and the previous one is the radar cross-section (RCS), due to the change of target from a sedan to a pedestrian. The RCS of the target may
be characterized by the following factors: material of target
surface, the size of the target, incident and reflected angles of
radar wave, the polarization of transmitted and received
radiation, etc. From Subsection III-A, it can be seen that the
receiving power of radar is proportional to the target radar cross
section area. Since the radar cross section area of a pedestrian is
less than a sedan (see Table 1), the detection range is expected
to be shorter. This is demonstrated in the simulation results in
this subsection. We plot the simulation results in Figure 3a with
multi-path and Figure 3b without multi-path. From Figure 3a, the multi-path effect in this pedestrian scenario diminishes faster than in Scenario 1 due to the difference in the RCS. From Figure
3b, it can be seen that the detection range is reduced by 26% at
severe rainy condition (50mm/hr) and 55% at extreme rainy
condition (400mm/hr) when the SINR is at 13dB. Compared to Scenario 1, a target with a lower RCS value experiences a faster decrease in detection range at the same SINR requirement. In other words, rainfall has a more severe impact on the detection range of a target with a small RCS.
Scenario 3: Radar Receiver SNR and SINR of a
Sedan versus Rainfall Rate
In this scenario, the distance between the target and the radar
is fixed and we consider the effect of rainfall rate on the
attenuation and backscatter effects. As mentioned before, the
SNR formulates the case where only rain attenuation effect is
considered. SINR formulates the case where the combined
effects of rain attenuation and backscatter are concerned. The
receiver SNR and SINR are plotted in the same graph versus the
rainfall rate. The distances are set to 50, 100 and 150 m. The
results are shown in Figure 4a without multi-path effect.
From this figure, we can see that the gap between SNR and
SINR increases with rainfall rate. In other words, the rain
backscatter effect becomes severe in heavy rain conditions. For
example, when the rainfall rate is 50mm/hr, the degradation of
SINR due to attenuation is 1.11dB and 0.78dB due to
backscatter. At a rainfall rate of 100mm/hr, the reduction due to
attenuation increases to 1.55dB and 1.06dB due to backscatter.
At a rainfall rate of 400mm/hr, the SINR degrades by 2.98dB due to attenuation and 1.64dB due to backscatter. Therefore, for a sedan, the attenuation effect is always greater than the backscatter effect. Also, for different
measurement distances at the same rainfall rate, the gap between
SNR and SINR increases with the decrease of distance. This is
due to the fact that backscatter from shorter distance experiences
less attenuation and produces stronger interference when the
target distance is also shorter. Overall, from Figure 4a, we can
conclude that rainfall can influence radar detection significantly
especially at heavy and extreme conditions.
Scenario 4: Radar Receiver SNR and SINR of a Pedestrian versus Rainfall Rate
This scenario is similar to Scenario 3 except that the target
is changed from a sedan to a pedestrian. As a result, the radar
cross-section area of the target is also changed. The distance is
set to 50m since the radar cross section area of pedestrian is
smaller than vehicle’s. The results are shown in Figure 4b. The
results are similar to those of Scenario 3 except that the rain
backscatter effect is much larger than the rain attenuation effect.
For example, when the rainfall rate is 100 mm/hr, the SINR
degradation due to the rainfall attenuation is 0.56dB and 2.82dB
due to backscatter. Therefore, the backscatter effect dominates
the decrease of SINR. This is because the radar cross-section
area of a pedestrian is much smaller than a sedan.
Figure 2 Radar Detection Range of Sedan versus Rainfall Rate: (a) with multi-path effect; (b) without multi-path effect
Figure 3 Radar Detection Range of Pedestrian versus Rainfall Rate: (a) with multi-path effect; (b) without multi-path effect
Figure 4 Radar Receiver SNR and SINR versus Rainfall Rate: (a) sedan; (b) pedestrian
Implementation Aspects of Millimeter-wave Radar
From the literature review and simulation results, we conclude that there are three implementation considerations for millimeter-wave radar on autonomous vehicles.
The first aspect is the transmission power of the radar, which
is dependent on the function and detection range of radar. For
example, for a forward looking long range radar, its transmission
power is higher than the short range rear parking aid radar. High
transmission power not only provides long detection range in
good weather, but also provides high SINR, especially in
adverse weather conditions. However, constantly using a radar in high-power transmission mode is not energy efficient, and adaptive transmission power should be adopted based on the function range and weather conditions. For instance, if a rainy condition is detected by the car (this technology is already mature for automatic windscreen wipers), the system should also increase the transmission power of the radar to reach the required SINR.
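This adaptive-power idea can be sketched as a simple selection loop; the power levels and the SINR prediction below are crude placeholders, not a real radar controller:

```python
import math

# Hedged sketch: pick the lowest transmit power whose predicted SINR still
# clears the 13 dB requirement for the currently sensed rainfall rate.
# Power levels, margins and the SINR model are illustrative assumptions.

POWER_LEVELS_W = [1.0, 2.0, 5.0, 10.0]  # assumed selectable levels

def predicted_sinr_db(pt_w: float, rain_mm_per_hr: float) -> float:
    # Placeholder model: SINR grows with log power, shrinks with rainfall.
    return 10 * math.log10(pt_w) + 20.0 - 0.05 * rain_mm_per_hr

def choose_power(rain_mm_per_hr: float, required_db: float = 13.0) -> float:
    for p in POWER_LEVELS_W:
        if predicted_sinr_db(p, rain_mm_per_hr) >= required_db:
            return p
    return POWER_LEVELS_W[-1]  # saturate at maximum power

print(choose_power(0.0), choose_power(200.0))  # heavier rain -> more power
```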
The second aspect is the bandwidth of the radar. As high bandwidth provides high resolution, the radar has the ability to distinguish objects such as humans and vehicles and also to give more accurate estimates of the target distance and speed.
Nonetheless, from Equation (3), high bandwidth also causes
noise issues for the radar receiver. Therefore, there is a trade-off
in radar bandwidth and it is important to find the optimal
bandwidth for the radar receiver based on the function
requirements. If the vehicle is also equipped with LIDAR which
provides high resolution speed and distance measurement, then
a low bandwidth radar will be a better choice to reduce the noise
at the receiver.
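This trade-off can be put in numbers: range resolution improves as $c/(2B)$ while thermal noise grows linearly with $B$ as in Equation (3). The two bandwidths below match Table 1's filter-bandwidth range; the noise figure is an illustrative assumption:

```python
# The bandwidth trade-off: resolution c / (2 * B) versus noise k * T * B * F_N.

K = 1.380649e-23   # Boltzmann's constant, J/K
C = 299_792_458.0  # speed of light, m/s

def range_resolution_m(bandwidth_hz: float) -> float:
    """Nominal pulse-compression range resolution for bandwidth B."""
    return C / (2 * bandwidth_hz)

def noise_w(bandwidth_hz: float, temp_k: float = 293.0, nf: float = 10.0) -> float:
    """Thermal noise power as in Equation (3); nf is an assumed noise figure."""
    return K * temp_k * bandwidth_hz * nf

# Going from 20 MHz to 4 GHz sharpens resolution but raises the noise floor:
for b in (20e6, 4e9):
    print(b, range_resolution_m(b), noise_w(b))
```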
The third aspect is the radar beamwidth. Obviously, a wider beamwidth and a larger number of beams could help a radar detect more objects around the vehicle. However, it will also consume more power and lead to greater rain backscatter interference
from Equation (2). As a result, it is also crucial to find the optimal beamwidth for the radar based on the functional requirements.
In conclusion, this paper provides a literature review of the influence of adverse weather conditions on state-of-the-art sensors such as LIDAR, GPS, camera, and radar. Furthermore, we proposed a new model to characterize the rain effect on millimeter-wave radar, which considers both the attenuation and backscatter effects. The simulation results show that the detection range of a millimeter-wave radar can be reduced by up to 55% under severe rainfall conditions. Moreover, for a smaller target with a smaller radar cross-section area, the backscatter effect plays a more significant role, causing additional performance degradation.
[1] C. Dannheim, C. Icking, M. Mader, and P. Sallis, “Weather detection in vehicles by means of camera and lidar systems,” in Proc. 6th Int. Conf. Computational Intelligence, Communication Systems and Networks (CICSyN), May 2014, pp. 186–191.
[2] R. H. Rasshofer, M. Spies, and H. Spies, “Influences of weather phenomena on automotive laser radar systems,” Advances in Radio Science, vol. 9, pp. 49–60, 2011.
[3] H. Kurihata, T. Takahashi, I. Ide, Y. Mekada, H. Murase, Y. Tamatsu, and T. Miyahara, “Rainy weather recognition from in-vehicle camera images for driver assistance,” in Proc. IEEE Intelligent Vehicles Symposium, June 2005, pp. 205–210.
[4] G. P. Kulemin, “Influence of propagation effects on millimeter wave radar operation,” in Proc. SPIE Conf. Radar Sensor Technology, vol. 3704, Apr. 1999, pp. 170–178.
[5] H. B. Wallace, “Millimeter-wave propagation measurements at the Ballistic Research Laboratory,” IEEE Transactions on Geoscience and Remote Sensing, vol. 26, no. 3, pp. 253–258, May 1988.
[6] B. Yamauchi, “All-weather perception for man-portable robots using ultra-wideband radar,” in Proc. IEEE Int. Conf. Robotics and Automation (ICRA), May 2010, pp. 3610–3615.
[7] L. Hespel, N. Riviere, T. Huet, B. Tanguy, and R. Ceolato, “Performance evaluation of laser scanners through the atmosphere with adverse condition,” pp. 818606–818606-15, 2011.
[8] K. Garg and S. K. Nayar, “When does a camera see rain?” in Proc. 10th IEEE Int. Conf. Computer Vision (ICCV’05), vol. 2, Oct. 2005, pp. 1067–1074.
[9] G. Yadav, S. Maheshwari, and A. Agarwal, “Fog removal techniques from images: A comparative review and future directions,” in Proc. Int. Conf. Signal Propagation and Computer Technology (ICSPCT), July 2014, pp. 44–52.
[10] E. D. Kaplan and C. J. Hegarty, Eds., Understanding GPS: Principles and Applications. Artech House, 2005.
[11] D. Johnson, “Experimental comparison of two automotive radars for use on an autonomous vehicle,” in Proc. 2nd Int. Conf. Wireless Broadband and Ultra Wideband Communications (AusWireless 2007), Aug. 2007, p. 28.
[12] V. N. Pozhidaev, “Estimation of attenuation and backscattering of millimeter radio waves in meteorological formations,” Journal of Communication Technology and Electronics, vol. 55, no. 11, pp. 1223–1230, 2010.
[13] J. Huang, S. Jiang, and X. Lu, “Rain backscattering properties and effects on the radar performance at mm wave band,” International Journal of Infrared and Millimeter Waves, vol. 22, no. 6, pp. 917–922, 2001.
[14] S. A. Hovanessian, Introduction to Sensor Systems. Artech House.
[15] J. C. Toomay and P. J. Hannen, Radar Principles for the Non-Specialist, ser. Radar, Sonar, Navigation and Avionics. Institution of Engineering and Technology, 2004.
[16] ACFR, University of Sydney, “Chapter 10: Detection of signal in noise,” [Online; accessed 28-August-2016].
Shizhe Zang received the B.S. degree (University Medal, first
class Hons.) in electrical engineering (telecommunication) from
the University of Sydney, Australia. He received a summer
scholarship from the School of Medicine, University of Sydney,
in 2014 and published a literature review about telehealth
reimbursement in Australia in the Internal Medicine Journal. He
was a summer scholar at NICTA in 2015 and at Data61, CSIRO,
in 2016. He is currently a Ph.D. candidate at the University of
Sydney. His current research focuses on millimeter-wave
communication in heterogeneous networks.
Ming Ding (M’12) received the B.S. and M.S. degrees (with first
class Hons.) in electronics engineering from Shanghai Jiao
Tong University (SJTU), Shanghai, China, and the Doctor of
Philosophy (Ph.D.) degree in signal and information processing
from SJTU, in 2004, 2007, and 2011, respectively. He has
authored more than 30 papers in IEEE journals and
conferences, all in recognized venues, and about 20 3GPP
standardization contributions, as well as a Springer book
Multipoint Cooperative Communication Systems: Theory and
Applications. Also, as the first inventor, he holds 15 CN, 7 JP, 3
US, 2 KR patents and co-authored another 100+ patent
applications on 4G/5G technologies. He is currently a senior
research scientist in network measurements and modelling team
in data61, CSIRO. His research interests include B3G, 4G, and
5G wireless communication networks, synchronization, MIMO
technology, cooperative communications, heterogeneous
networks, device-to-device communications, and modelling of
wireless communication systems.
David Smith received the B.E. degree in electrical engineering
from the University of New South Wales, Australia, in 1997, and
the M.E. (research) and Ph.D. degrees in telecommunications
engineering from the University of Technology Sydney, in 2001
and 2004, respectively. He is a Senior Research Scientist with
Data61, CSIRO (previously NICTA) and an Adjunct Fellow
with the Australian National University (ANU). He has been
with NICTA since 2004 (Data61 since 2016) and with ANU
since 2004. He has published over 100 technical refereed
papers and made various contributions to IEEE standardization
activity. His research interests are in wireless body area
networks, game theory for distributed networks, mesh networks,
5G networks, disaster tolerant networks, radio propagation,
MIMO wireless systems, space-time coding, antenna design, and
also in distributed optimization for smart grid.
Paul Tyler is currently a Senior Research Engineer with
Data61’s Network Research Group. Paul has been with Data61
(and formerly NICTA) since 2004, bringing ICT research to the
transport and infrastructure domain. For the past 4 years, Paul
has been providing project management and technical
knowledge to Transport for NSW to deploy the Cooperative
Intelligent Transport Initiative (CITI), a connected vehicles trial
in the Illawarra region. Prior to this, he worked on
projects such as traffic state estimations from loop detectors at
signalised roundabouts, investigations into adaptive traffic
control systems and video tracking of vehicles. Prior to
Data61, Paul worked at the Australian Nuclear Science and
Technology Organisation (ANSTO), fulfilling a scientific
computing and research engineering role with a particle
accelerator. Paul has considerable experience in systems
engineering, computer systems administration and software
development as well as performing project management roles.
Paul has a PhD in Computer Science and a Bachelor of Science
both from the University of Sydney.
Thierry Rakotoarivelo completed his Ph.D. in cotutelle with the
University of New South Wales (UNSW, Australia) and the
Institut National Polytechnique de Toulouse (INPT, France). He
worked on peer-to-peer mechanisms to discover and utilise
Quality-of-Service enhanced alternate paths on the Internet. His
thesis received the “Prix Léopold Escande” award from INPT.
He was a Senior Researcher with NICTA (National ICT
Australia), where he worked on protocols and frameworks for
large-scale distributed testbeds (i.e., design, provisioning,
control/orchestration, and instrumentation/measurement). He
also worked on tools to enable reproducible experiments on
these testbeds. Currently, he is a Senior Research Scientist with
the Networks Group at Data61, CSIRO.
Mohamed Ali Kaafar is the Group Leader of the Networks
Group and a Senior Principal Researcher at Data61. His main
research interests are in the area of data Privacy, Networks
Security and Performance modelling. He holds the position of
visiting professor of the Chinese Academy of Science (CAS). He
was previously a research leader and a principal researcher at
the Mobile Networks Systems group at NICTA and a researcher
in the Privatics team at INRIA, France. Prof. Kaafar obtained
a Ph.D. in Computer Science from the University of Nice Sophia
Antipolis at INRIA, France. He has published over 200 scientific
peer-reviewed papers, with several publications in the
prestigious ACM SIGCOMM and IEEE INFOCOM venues. Prof.
Kaafar is also a member of the editorial board of the Privacy
Enhancing Technologies Symposium and Journal (PETS and
PoPETS). In 2015, he was appointed as the editor of the
IEEE Internet Computing special issue on Small Wearables and currently
serves as the associate editor of the ACM Transactions on
Modeling and Performance Evaluation of Computing Systems.
He is also a member of several technical committees, including the
ACM International Conference on emerging Networking
Experiments and Technologies (CoNEXT), ACM Internet
Measurement Conference (IMC), and WWW. He is the General
Chair of the Passive and Active Measurement (PAM) Conference 2017.