Autonomous Cars: Technical Challenges and
Solution to Blind Spots Issue
Hrishikesh M. Thakurdesai and Dr. Jagannath Aghav
College of Engineering, Wellesley Rd, Shivajinagar, Pune, Maharashtra 411005
Abstract. The automotive industry is shifting towards a future in which the role of the driver becomes smaller and smaller, ending with vehicles that are totally driver-less. Designing a fully driver-less car (DC), or self-driving car, is among the most challenging automation projects of the present day, since it requires automating the complex perception and decision making involved in driving a heavy, fast-moving vehicle in public. There are many scenarios in which self-driving cars still cannot perform like human drivers, and many technical, non-technical, ethical and moral challenges remain to be addressed. Furthermore, two recent accidents caused by the self-driving cars of Uber (crash in Arizona, March 2018) and Tesla (crash in California, March 2018) have raised public concern about the readiness and safety of these cars. It is therefore necessary to analyze the current challenges and issues in DCs more carefully. In this paper, we investigate various technical challenges and scenarios in which DCs still face issues. We also address the issue of blind spots and propose a system to tackle it. Before self-driving cars go live on the road, we must overcome these challenges and work on the technology barriers so that DCs become safe and trustworthy.
Keywords: Self-Driving Cars · Machine Learning · LIDAR · Robot Car
1 Introduction to Self Driving Cars
A self-driving car, also known as an "Autonomous Car (AC)" or "Robot Car", has the capability to sense its environment and move on its own. A DC carries various sensors and cameras, such as Radar (Radio Detection and Ranging), LIDAR (Light Detection and Ranging) and SONAR (Sound Navigation and Ranging), to understand the surrounding environment. Radar is a detection system that uses radio waves to determine parameters such as the distance, speed or angle of an object. SONAR uses sound waves to determine the presence of an object. LIDAR, which uses pulsed laser light for detection, is considered the eye of the self-driving car. These sensors and other actuators generate a huge amount of data in real time, which is processed by a central computer to make driving decisions. DCs promise huge benefits in various domains, notably military and surveillance applications. These vehicles will be extremely useful for
elderly and disabled people, and even for children, as they will be safe and secure without human intervention [34]. DCs will reduce the number of accidents caused by human error in driving (94% of all accidents are due to human error). Other benefits include fuel efficiency, car availability for everyone (e.g., people without a licence, small children, the elderly) and efficient parking. Experimental work on driverless cars has been going on since the 1920s. In 1977, the first truly automated car was built by the Tsukuba Mechanical Engineering Laboratory in Japan; it travelled at a speed of 30 kilometres per hour guided by two cameras. In 2015, the US government gave clearance to test DCs on public roads. In 2017, the Audi A8 became the first Level 3 automated car, travelling at speeds up to 60 kilometres per hour using Audi AI. Waymo, Google's self-driving project, had completed around 16,000,000 kilometres of road testing by October 2018, and in December 2018 it became the first company to launch a fully autonomous taxi service in the US.
1.1 Key Components of a Self-Driving Car
One of the major components of a self-driving car is the LIDAR (Light Detection and Ranging), considered the eye of the vehicle. The main objective of LIDAR is to create a 3D map of the surrounding world in real time. It emits laser beams, invisible to human eyes, and measures the time they take to come back after hitting nearby objects. This calculation gives the distances of the surrounding objects, and together with their identification it helps guide the car in how to drive. LIDARs are capable of producing high-resolution images with minute detail and exact distances. In addition to LIDAR, a DC also uses video and other cameras to identify traffic lights and road signs, and to maintain a safe distance from other vehicles and pedestrians. Radar sensors are used to monitor the position of nearby vehicles, especially in bad weather; these sensors are also used in adaptive cruise control. Cameras can only provide an image of an object but cannot measure its depth, which is why LIDAR and radar are needed. LIDAR itself sometimes fails in foggy conditions, and radar is used in those cases to perceive the surroundings.
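Both LIDAR and radar are time-of-flight sensors: the distance to an object follows directly from the round-trip time of the emitted pulse. A minimal sketch of that calculation (illustrative only; the function and constant names here are our own, not from any vendor API):

```python
# Time-of-flight ranging, as used by LIDAR and radar: the pulse travels
# to the object and back, so the one-way distance is half of
# (propagation speed * round-trip time).

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_round_trip(round_trip_s: float) -> float:
    """Distance in metres to the object that reflected the pulse."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

# A laser pulse returning after 200 nanoseconds indicates an object
# roughly 30 metres away.
print(round(distance_from_round_trip(200e-9), 2))  # 29.98
```

The same formula applies to SONAR with the speed of sound in place of the speed of light, which is one reason ultrasonic sensors are practical only at short range.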
A GPS (Global Positioning System) antenna on top of the DC gives the car's position on the road [13]. Another major component of a DC is the central computer unit, which collects data from all the sensors and manipulates the steering, acceleration and monitoring controls (refer to Fig. 1 for the working of these components).
Moving a self-driving car from one point to another involves perception of the environment, path planning and controlled movement along the path. Environment perception includes tracking objects and lanes and identifying the vehicle's own position, achieved using various medium- and short-range sensors and cameras. Radars have proven more effective than cameras for object tracking in vehicles. Autonomous vehicles also use LIDARs: one LIDAR mounted on top of the car gives a long-range, 360-degree view of the surroundings, and the data obtained from this device is used to maintain speed and to apply the brakes when necessary. Navigation and path planning are done mainly using
Fig. 1. Components of a driver-less car (Source: The Economist, "How does a self-driving car work?")
GPS. Paths are initially calculated based on the destination and later changed dynamically based on road conditions such as blockages or traffic. Accelerometers and gyroscopes are used along with GPS because satellite signals may not reach tunnels or underground roads. The sensor data is then processed to take actions such as lane keeping, braking, maintaining distance, stopping and overtaking. Various control systems, electronic units and CAN networks are used for this purpose.
1.2 Levels of Automation
As per the standards organization SAE International (Society of Automotive Engineers), there are six levels of driving automation. These are defined based on the degree of automation involved in driving, ranging from "No Automation" to "Full Automation" (refer to Fig. 2 for a summary of the levels).
Level 0: No automation. All operations, such as steering control, braking and speed control, are performed by the human driver.
Level 1: This level provides some driver-assistance functions, e.g., the brakes are applied more forcefully if a vehicle suddenly appears in front on the road. Still, the main tasks such as steering, braking and monitoring are handled by the driver alone.
Level 2: Partial automation. Most companies are currently working at this level, where the driver gets assistance with steering and acceleration but must remain attentive at all times, ready to take control in safety-critical situations.
Level 3: This level adds monitoring assistance to steering and braking. Vehicles at this level use LIDAR (Light Detection and Ranging) sensors as the eye of the car. Human attention is not required when the speed is moderate (around 37 miles per hour) and conditions are safe, but at higher speeds and in unusual scenarios the driver's control still plays a critical role. Major players in the industry such as Audi and Ford have announced the launch of Level 3 cars in 2018-2019.
Level 4: Known as high automation. Here the vehicle is capable of handling steering, braking, acceleration and monitoring. Human attention is required only in certain critical scenarios, such as complex traffic jams or merging into highway traffic.
Level 5: Complete automation. Absolutely no human attention is required for driving; even complex situations such as traffic jams are handled by the vehicle, allowing the driver to sit back without paying attention. Research at this level is still ongoing, and many technical and moral challenges must be addressed before these cars are made available to the public.
2 Technical Challenges in Self-Driving Cars
Extensive research is going on in the field of self-driving cars, and all the big companies in the world, such as Google, BMW, Audi and Uber, are constantly working to make these cars available to the general public [3]. In 2018, Waymo launched a commercial taxi service in four suburbs of Phoenix, Arizona: Chandler, Tempe, Mesa and Gilbert. The day is not far when the entire world will travel driverless. Still, there are many technical, legal and moral challenges to address before self-driving cars can be made available all over the world [2,6,7]. It is necessary to develop trust among people for user acceptance and to make the cars ready to hit the roads [14,15,17]. It is therefore essential to analyze the current technical challenges faced by DCs so that we can contribute to enhancing their safety and mobility. Below is a list of some key challenges to be addressed before putting DCs live on the road [1,4,5].
Fig. 2. Levels of Automation
2.1 Unpredictable Road Conditions
Road conditions vary from place to place. Some roads are smooth, with proper lane markings, whereas others are badly deteriorated, potholed or mountainous [31]. It is challenging for a driverless car to drive itself on roads without signs and lane markings [33]. Lane markings can also disappear under sudden snowfall; humans can still infer lanes from the natural curves of the road, but DCs may not. Even roads with a little standing water can confuse a DC.
Bosch has announced a system designed to give DCs a feel for road conditions. Advance information about wet or snowy roads will help an AV decide exactly where it can drive autonomously. The system aims to increase driving safety and the availability of autonomous driving functions.
2.2 Unusual Traffic Conditions
DCs can be put on the roads only when they can handle all sorts of traffic conditions. Roads will carry both autonomous and human-driven vehicles. Situations may arise in which humans break the traffic rules, creating unexpected conditions the DC is not trained to handle [12]. In dense traffic, even a few centimetres of movement matter; if the DC waits for the traffic to clear on its own, it may wait indefinitely. Deadlock conditions may also arise in which every car is waiting for the others and no one moves ahead.
2.3 Radar Interference
As stated above, a DC uses radars and sensors, mounted on the top and body of the car, for navigation. The DC emits radio waves that strike an object and reflect back, and the reflection time is used to calculate the distance between the car and the object. When hundreds of cars on the road use this technology, a given car will not be able to distinguish its own waves from those emitted by other cars. Even with a range of frequency bands, these can be insufficient for a large number of vehicles.
2.4 A Challenge at Crosswalk
Some situations are very easy for humans but not for machines, e.g., driving between two pedestrians on a crosswalk and then taking an immediate left turn. This is routine for humans but very difficult for a machine, which lacks human perception and cannot read faces. This is an instance of Moravec's Paradox, where skills trivial for humans are difficult to engineer into machines. The algorithms used in today's DCs will become the driving policy of the future. Humans are not actually good drivers, as they drive with emotions: one survey found that at a particular crosswalk, black pedestrians had to wait longer than white ones. A human driver decides whether to yield at a crosswalk based on a pedestrian's age or speed, but such judgments are challenging for a DC to make before taking its next driving step.
2.5 Bugs in the Software System and Intrusion Attacks
Since DCs use machine-learning algorithms that learn from experience, we can never be 100% confident about any outcome where human safety is involved. Even a small error in the code can have huge consequences [8]. The death of Elaine Herzberg illustrates the point: on 18 March 2018 she became the first pedestrian killed by an autonomous car, struck in Tempe, Arizona by an Uber car operating in self-driving mode. As a result, Uber suspended self-driving car testing in Arizona. The death has been attributed to defects in the DC's system, whether missing lines of code or an external attack that caused sensor data to be ignored. DCs can also suffer intrusion attacks, such as denial of service, that affect the working of the system [32]. Research is ongoing into Intrusion Detection Systems (IDS) for self-driving cars [22].
2.6 Need for Extensive Classification
Machine learning and AI systems learn from datasets and past experience to detect objects such as cyclists, pedestrians, road markings and signs. Hence, if something is missing from the dataset on which the car is trained, there is a risk of misinterpretation and malfunction. For example, a STOP sign dropped at the side of the road, which is not meant to be obeyed, may confuse a DC. Other examples are a person walking with a bicycle in hand, or a speedboat coming off a river and landing on the road. The "zombie kangaroo costume" challenge is of the same kind: a small child wearing a kangaroo or Halloween costume is too unusual a case to be included in the risk database, so the vehicle will either categorize it wrongly or freeze and wait for human instructions.
It is therefore important to work on an extensive classifier that can identify unusual scenarios and take the appropriate actions, matching human capabilities.
2.7 Issues Due to Co-existence of Human-Driven and Self-Driving Cars
At some point we will face a scenario in which the roads carry a considerable number of both driverless and human-driven cars [9,10]. This can lead to many problems, as drivers from different countries follow different etiquette. For example, Indian drivers signal a turn with a hand gesture, which a DC may not recognize. It is therefore essential to develop a universal signalling language that everyone can follow, to avoid failures.
2.8 Platooning Problem
Platooning means vehicles driving together, very close to one another. This reduces road-space consumption and wind resistance, thereby reducing fuel consumption [16,20]. Alongside these advantages there is a risk of crashes, especially if a human driver tries to merge into the platoon space. One solution is to maintain dedicated lanes for DCs, but this is not practical everywhere.
2.9 Sharing Cost Challenge
At present the cost of a DC is high, so it is predicted that most autonomous vehicles will be shared [18,19]. Efficient methods must therefore be developed for picking up passengers so that the other passengers in the car do not have to wait long. Shared cars will also increase per-vehicle occupancy and decrease the number of cars on the road, thereby helping to reduce traffic.
2.10 Making the DC Cost Effective
As stated above, a DC uses LIDAR and various cameras and sensors, which are very costly. The cost of LIDAR in particular drives up the overall cost of a DC: at present a DC setup can cost up to $80,000, of which the LIDAR alone can range from $30,000 to $70,000. One strategy for reducing cost is to use a LIDAR with fewer lasers; Audi has claimed that a LIDAR with only four lasers will be sufficient for safe driving on highways. Further, companies such as "Valeo" and "Ibeo" are researching LIDARs costing less than $1,000.
2.11 Providing Prediction Power and Judgment-Call Capability to DCs
To drive safely on the roads, a vehicle must see, interpret and predict the behaviour of humans [11]. The car must understand when it is in another car's blind spot. Human drivers can pass through highly congested areas because eye contact with other humans helps them predict state of mind: we can tell whether a person is distracted, has not noticed the vehicle's movement, or is inattentive to a lane change. This prediction capability has to be built into DCs to make them safe for humans. Sometimes a driver must make an instant decision when an object suddenly appears in front. The easiest response is instant braking, but that may lead to being hit from behind, which can cause a major accident on the highway. Another situation is deciding whether to hit a child on the road or an empty box blocking the lane on the side [21]. These scenarios are familiar to humans, who can make the right calls; the call taken by a DC may be technically correct yet fail in the real world. Clearly, extensive trial-and-error experimentation is required to ensure that DCs can handle such tasks.
2.12 Challenge of Identifying Animals That Do Not Stay on the Ground
A DC needs to identify and respond to dozens of animals, such as horses, moose and kangaroos. Wild animals can suddenly jump into the DC's path, and kangaroos in particular pose a unique problem: most identification systems use the ground as a reference to locate an object, but kangaroos hop off the ground, making it hard for the system to track their position and predict where they will land. According to the National Roads and Motorists' Association of Australia, 80% of animal collisions in the country involve kangaroos, with more than 16,000 kangaroo-vehicle strikes each year; these accidents also generate millions of dollars in insurance claims. Major companies such as Volvo are taking the issue seriously: Volvo's safety engineers have been filming kangaroos and have used the footage to develop a kangaroo-detection system for their vehicles.
3 The Issue of Blind Spots in Self-Driving Cars
"Blind spots" are the areas around a car that are not directly visible in the rear and side mirrors. A driver can see these areas with a little extra effort, such as turning the head to the sides. Blind spots also include areas hidden by the body of the car, including the pillars that join the roof to the body [30]. Like human-driven cars, DCs suffer from blind-spot issues. A DC uses LIDAR as its eye to obtain an overall 360-degree 3D view of the surroundings, and object detection is done using the LIDAR sensors, so the blind-spot area depends on the number of sensors used. Uber recently ran into serious trouble because of this issue [36]: its self-driving car struck a woman in Arizona, resulting in her death, and the blind spot caused by reducing the sensors from five to one is cited as a reason [35]. In 2016, Uber switched from Ford's autonomous cars to Volvo vehicles, which led to large changes in the sensor design: the LIDARs were reduced from five to just one, mounted on top of the roof, while the radar sensors were increased from seven to ten to compensate. Removing LIDARs reduced the cost but enlarged the blind spots; a single roof-mounted LIDAR leaves a blind-spot area low to the ground all around the car, so there is a chance of an accident if an object in that area goes undetected. It is therefore necessary to design a system that guides the autonomous vehicle while changing lanes or taking turns.
3.1 Current Solutions to the Blind Spot Problem
For human-driven cars, a number of solutions exist for eliminating blind spots. The simplest is to adjust the car's mirrors. In heavy vehicles, blind spots are reduced through special seat designs, and special blind-spot mirrors give the driver a wider view. Some high-end cars use electronic systems that detect cars in the blind spots and show a warning on the mirrors using LED lights. Vision-based blind spot detection (BSD) systems exist for daytime and nighttime driver assistance, using lasers and cameras [23,24,25]. In autonomous vehicles, LIDAR is used to see the surrounding world, and LIDARs at the sides of the car reduce the blind spots; but the cost of LIDAR is huge, with the LIDAR units alone costing tens of thousands of dollars, which drives up the overall cost of the driverless car. If we reduce the number of LIDARs to one, mounted on the top, some areas around the car may fall outside its visibility, and a small object such as a cyclist can remain undetected. Uber's 2018 accident in Arizona, in which one of its cars struck a pedestrian who died of her injuries and which forced Uber to terminate testing in the state, happened for this reason.
Further, even if we detect the object using sensors instead of LIDAR, the driverless car does not get enough assistance for the next steps (whether to reduce or increase speed), which a human driver settles easily using eye contact. Hence it is necessary to develop a blind-spot detection system that is cost effective and also assists the vehicle with driving actions.
3.2 Proposed System for Blind Spot Detection
We propose a novel approach using machine learning to tackle the problem of blind spots. The aim is to reduce accidents caused by poor visibility and thereby improve human safety. The BSD system consists of two modules. The first, the Object Detector Module, uses sensors to identify any object in the blind-spot area of the vehicle [26,27,28]. The second, the Assistance Module, uses machine-learning techniques to estimate the relative speed of that object. Based on the speed calculation, assistance is given to the DC: if the object in the blind spot is moving at roughly the same speed as our DC, the DC has to vary its own speed so that the object leaves the blind-spot area [29], after which it can change lanes or take the turn; if the detected object is relatively slow or fast, our DC simply waits for it to move out.
Note that we use no extra LIDAR for detection, so the solution does not add significant cost to the design. The use of machine learning to calculate relative speed is essential to avoid long waits: a car behind ours may travel at roughly our speed and therefore hold its distance, and our car cannot wait indefinitely for it to change speed. A human driver would judge that car's speed and vary his own to move it out of the blind spot, but this is not so simple for a machine. We also use speed rather than distance because vehicles at the same speed maintain the same distance for a long time, whereas the relative speed gives instant assistance.
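The paper's Assistance Module estimates relative speed with machine learning; as a simplified stand-in, relative speed can be approximated from two successive position readings of the tracked object. The sketch below is a finite-difference approximation, not the paper's estimator, and all names are hypothetical:

```python
def relative_speed(prev_offset_m: float, curr_offset_m: float, dt_s: float) -> float:
    """Relative speed of the tracked object with respect to our car,
    from its longitudinal offset (object position minus ours, in metres)
    at two successive instants dt_s seconds apart.
    Positive: moving faster than us; negative: slower;
    near zero: pacing us and lingering in the blind spot.
    """
    return (curr_offset_m - prev_offset_m) / dt_s

# The object holds a 3 m gap behind us over 0.5 s: it is pacing us.
print(relative_speed(-3.0, -3.0, 0.5))   # 0.0
# The gap grows from 3 m to 4 m in 0.5 s: it is falling behind at 2 m/s.
print(relative_speed(-3.0, -4.0, 0.5))   # -2.0
```

This sign convention matches the algorithm below: a sustained near-zero value is exactly the case where waiting on distance alone would stall indefinitely.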
3.3 Proposed Algorithm for the BSD System
Step 1: Detect any object in the blind-spot area using the Object Detection Module, which contains sensors to detect the presence of an object.
Step 2: Calculate the relative speed of the object using machine-learning techniques (refer to the output in Fig. 3).
Step 3: If the relative speed is zero (the object is moving at the same speed), decrease our speed slowly so that the object moves out of the blind spot.
Step 4: If the relative speed is negative (the object is slow with respect to our DC), increase the speed and give the appropriate indicator.
Step 5: If the relative speed is positive (the object is fast with respect to our DC), wait for the car to move out without any effort.
Step 6: Once the blind-spot-area detector gives a green signal, indicating that it is safe to move, give the proper indicator (left or right) and change the lane or take the turn.
Step 7: Perform the above steps continuously while driving.
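Steps 1-6 above can be sketched as a single decision function. This is an illustrative outline only: the sensor interface is abstracted into two inputs, and the EPSILON band for "same speed" is our assumption, not a value from the paper (which leaves relative-speed estimation to the machine-learning module):

```python
# Assumed threshold: relative speeds within this band are treated as
# "moving at the same speed" (Step 3). Not specified in the paper.
EPSILON = 0.5  # m/s

def bsd_action(object_in_blind_spot: bool, relative_speed_mps: float) -> str:
    """Map the detector output and relative speed to a driving action,
    following Steps 3-6 of the proposed algorithm."""
    if not object_in_blind_spot:
        # Step 6: blind spot clear -> indicate and change lane / take turn.
        return "indicate_and_move"
    if abs(relative_speed_mps) <= EPSILON:
        # Step 3: object pacing us -> slowly vary our speed to shed it.
        return "slow_down"
    if relative_speed_mps < 0:
        # Step 4: object slower than us -> speed up and indicate.
        return "speed_up_and_indicate"
    # Step 5: object faster than us -> let it pass without any effort.
    return "wait_for_it_to_pass"

print(bsd_action(True, 0.2))    # slow_down
print(bsd_action(False, 0.0))   # indicate_and_move
```

Step 7 corresponds to calling this function on every perception cycle; the returned action names are placeholders for commands to the vehicle's control systems.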
Fig. 3. Vehicle speed estimation (Source: Zheng Tang, vehicle speed estimation for the NVIDIA AI City Challenge Workshop at CVPR 2018)
4 Conclusion
The development of self-driving cars is one of the most active research topics in industry. There remain many challenges and issues because of which self-driving cars cannot yet perform like humans. In this paper we addressed some of the key challenges in this area and then explained the issue of blind spots, proposing a system for self-driving cars that can reduce accidents and improve human safety. We believe that if the safety and legal issues are addressed, the day is not far when driverless cars will be on the road for the general public.
References
1. Rasheed Hussain, Sherali Zeadally: Autonomous Cars: Research Results, Issues and Future Challenges. In: IEEE Communications Surveys & Tutorials, pp. 1-1 (10 September 2018)
2. R. Okuda, Y. Kajiwara, K. Terashima: "A survey of technical trend of ADAS and autonomous driving", In: Proceedings of Technical Program - 2014 International Symposium on VLSI Technology, Systems and Application (VLSI-TSA), pp. 1-4 (April 2014)
3. T. Kanade, C. Thorpe, W. Whittaker: "Autonomous land vehicle project at CMU", In: Proceedings of the 1986 ACM Fourteenth Annual Conference on Computer Science (CSC '86), New York, NY, USA, pp. 71-80, ACM (1986)
4. “Biggest Challenges in Driverless cars”,
5. I. Barabás, A. Todoruţ, N. Cordoş, A. Molea: "Current challenges in autonomous driving", In: IOP Conference Series: Materials Science and Engineering, 252 012096 (2017)
6. Kanwaldeep Kaur, Giselle Rampersad: "Trust in driverless cars: Investigating key factors influencing the adoption of driverless cars", Volume 48, Pages 87-96 (April-June 2018)
7. Jan De Bruyne, Jarich Werbrouck: "Merging self-driving cars with the law", In: Computer Law & Security Review, Volume 34, Issue 5, Pages 1150-1153 (October 2018)
8. Nynke E. Vellinga: "From the testing to the deployment of self-driving cars: Legal challenges to policymakers on the road ahead", In: Computer Law & Security Review, Volume 33, Issue 6, Pages 847-863 (December 2017)
9. Eric R. Teoh, David G. Kidd: "Rage against the machine? Google's self-driving cars versus human drivers", In: Journal of Safety Research, Volume 63, Pages 57-60 (December 2017)
10. "Uber Self Driving car fatality", In: New Scientist, Volume 237, Issue 3170, Pages 5-57 (24 March 2018)
11. Meixin Zhu, Xuesong Wang, Yinhai Wang: "Human-like autonomous car-following model with deep reinforcement learning", In: Transportation Research Part C: Emerging Technologies, Volume 97, Pages 348-368 (December 2018)
12. Wen-Xing Zhu, H.M. Zhang: "Analysis of mixed traffic flow with human-driving and autonomous cars based on car-following model", In: Physica A: Statistical Mechanics and its Applications, Volume 496, Pages 274-285 (15 April 2018)
13. Yassine Zein, Mohamad Darwiche, Ossama Mokhiamar: "GPS tracking system for autonomous vehicles", Alexandria Engineering Journal, available online (13 November 2018)
14. Nadia Adnan, Shahrin Md Nordin, Mohamad Ariff bin Bahruddin, Murad Ali: "How trust can drive forward the user acceptance to the technology? In-vehicle technology for autonomous vehicle", In: Transportation Research Part A: Policy and Practice, Volume 118, Pages 819-836 (December 2018)
15. Paul Marks: "Autonomous cars ready to hit our roads", In: New Scientist, Volume 213, Issue 2858, Pages 19-20 (31 March 2012)
16. Martin Lád, Ivo Herman, Zdeněk Hurák: "Vehicular platooning experiments using autonomous slot cars", In: IFAC-PapersOnLine, Volume 50, Issue 1, Pages 12596-12603 (July 2017)
17. Yi-Ching Lee, Jessica H. Mirman: "Parents' perspectives on using autonomous vehicles to enhance children's mobility", In: Transportation Research Part C: Emerging Technologies, Volume 96, Pages 415-431 (November 2018)
18. Riccardo Iacobucci, Benjamin McLellan, Tetsuo Tezuka: "Modeling shared autonomous electric vehicles: Potential for transport and power grid integration", In: Energy, Volume 158, Pages 148-163 (1 September 2018)
19. Josiah P. Hanna, Michael Albert, Donna Chen, Peter Stone: "Minimum Cost Matching for Autonomous Carsharing", In: IFAC-PapersOnLine, Volume 49, Issue 15, Pages 254-259 (2016)
20. Jinke Yu, Leonard Petnga: "Space-based Collision Avoidance Framework for Autonomous Vehicles", In: Procedia Computer Science, Volume 140, Pages 37-45 (2018)
21. Patricia Böhm, Martin Kocur, Murat Firat, Daniel Isemann: "Which Factors Influence Attitudes Towards Using Autonomous Vehicles", In: Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications Adjunct (AutomotiveUI), Pages 141-145 (2017)
22. Khattab M. Ali Alheeti, Anna Gruebler, Klaus D. McDonald-Maier: "An intrusion detection system against malicious attacks on the communication network of driverless cars", In: 2015 12th Annual IEEE Consumer Communications and Networking Conference (CCNC) (2015)
23. Guiru Liu, Mingzheng Zhou, Lulin Wang, Hai Wang, Xiansheng Guo: "A blind spot detection and warning system based on millimeter wave radar for driver assistance", ScienceDirect, Volume 135, Pages 353-365 (April 2017)
24. B.F. Wu, H.Y. Huang, C.J. Chen, Y.H. Chen, C.W. Chang, Y.L. Chen: "A vision-based blind spot warning system for daytime and nighttime driver assistance", Comput. Electr. Eng., 39 (April) (2013), pp. 846-862
25. M.W. Park, K.H. Jang, S.K. Jung: "Panoramic vision system to eliminate driver's blind spots using a laser sensor and cameras", Int. J. ITS Res., 10 (2012), pp. 101-114
26. Y.L. Chen, B.F. Wu, H.Y. Huang, C.J. Fan: "A real-time vision system for nighttime vehicle detection and traffic surveillance", IEEE Trans. Ind. Electron., 58 (5) (2011), pp. 2030-2044
27. Y.C. Kuo, N.S. Pai, Y.F. Li: "Vision-based vehicle detection for a driver assistance system", Comput. Math. Appl., 61 (2011), pp. 2096-2100
28. C.T. Chen, Y.S. Chen: "Real-time approaching vehicle detection in blind-spot area", Proceedings IEEE International Conference on Intelligent Transport Systems (2009), pp. 1-6
29. Vicente Milanes, David F. Llorca, Jorge Villagra, Joshua Perez, Carlos Fernandez, Ignacio Parra, Carlos Gonzalez, Miguel A. Sotelo: "Intelligent automatic overtaking system using vision for vehicle detection", Expert Syst. Appl., 39 (2012), pp. 3362-
30. Y.H. Cho, B.K. Han: "Application of slim A-pillar to improve driver's field of vision", Int. J. Auto. Technol., 11 (2010), pp. 517-524
31. Scott A. Cohen, Debbie Hopkins, “Autonomous vehicles and the future of urban tourism”, Annals of Tourism Research, Volume 74, January 2019, pp. 33-42
32. Jin Cui, Lin Shen Liew, Giedre Sabaliauskaite, Fengjun Zhou, “A Review on Safety Failures, Security Attacks, and Available Countermeasures for Autonomous Vehicles”, Ad Hoc Networks, In Press (available online 7 December 2018)
33. Lanhang Ye, Toshiyuki Yamamoto, “Impact of dedicated lanes for connected and autonomous vehicle on traffic flow throughput”, Physica A: Statistical Mechanics and its Applications, Volume 512, 15 December 2018, pp. 588-597
34. Jonas Meyer, Henrik Becker, Patrick M. Bösch, Kay W. Axhausen, “Autonomous vehicles: The next jump in accessibilities”, Research in Transportation Economics, Volume 62, June 2017, pp. 80-91
35. Kieren McCarthy, “Uber self-driving car death riddle: Was LIDAR blind spot to blame?”, Emergent Tech, 28 Mar 2018
36. Keith Naughton, “Uber’s Fatal Crash Revealed a Self-Driving Blind Spot: Night Vision”, Bloomberg, May 29, 2018