Autonomous Cars: Technical Challenges and
Solution to Blind Spots Issue
Hrishikesh M. Thakurdesai and Dr. Jagannath Aghav
College of Engineering, Wellesley Rd, Shivajinagar, Pune, Maharashtra 411005
thakurdesaihm17.is@coep.ac.in
jva.comp@coep.ac.in
Abstract. The automotive industry is shifting towards a future in which the role of the driver becomes smaller and smaller, ending in fully driver-less operation. Designing a fully driver-less car (DC), or self-driving car, is among the most challenging automation projects of the present day, since it requires automating the complex perception and decision making involved in driving a heavy, fast-moving vehicle in public. There are many scenarios in which self-driving cars still cannot perform like human drivers, and many technical, non-technical, ethical and moral challenges remain to be addressed. Furthermore, two recent accidents caused by the self-driving cars of Uber (crash in Arizona, March 2018) and Tesla (crash in California, March 2018) have raised public concern about the readiness and safety of these cars. It is therefore necessary to analyze the current challenges and issues in DCs more carefully. In this paper, we investigate various technical challenges and scenarios in which DCs still face issues. We also address the issue of blind spots and propose a system to tackle it. Before self-driving cars go live on the road, these challenges and technology barriers must be overcome so that DCs can be made safe and trustworthy.
Keywords: Self-Driving Cars · Machine Learning · LIDAR · Robot Car
1 Introduction to Self Driving Cars
A self-driving car, also known as an “Autonomous Car (AC)” or “Robot Car”, has the capability to sense its environment and move ahead on its own. A DC contains various sensors and cameras, such as Radar (Radio Detection and Ranging), LIDAR (Light Detection and Ranging) and SONAR (Sound Navigation and Ranging), to understand the surrounding environment. Radar is a detection system that uses radio waves to determine parameters such as the distance, speed or angle of an object. SONAR uses sound waves to detect the presence of objects. LIDAR, considered the eye of the self-driving car, uses pulsed laser light for detection. These sensors and other actuators generate a huge amount of data in real time, which is processed by a central computer to make driving decisions. DCs have huge benefits in various domains, mainly in military and surveillance applications. These vehicles will be extremely useful for
elderly and disabled people, as well as children, since they will be safe and secure without human intervention [34]. DCs will reduce the number of accidents caused by human error in driving (94% of all accidents are due to human error). Other benefits include fuel efficiency, car availability for everyone (e.g. a person without a licence, a small child, an elderly person) and efficient parking. Experimental work on driverless cars has been going on since the 1920s. In 1977, the first truly automated car was built by Tsukuba Labs in Japan; it travelled at a speed of 30 kilometres per hour with the help of two cameras. In 2015, the US government gave clearance to test DCs on public roads. In 2017, the Audi A8 became the first Level 3 automated car, travelling at 60 kilometres per hour using Audi AI. By October 2018, Google Waymo had completed around 16,000,000 kilometres of road testing with its autonomous cars. Waymo is also the first company to launch a fully autonomous taxi service in the US, starting in December 2018.
1.1 Key Components in a Self-Driving Car
One of the major components of a self-driving car is the LIDAR (Light Detection and Ranging), which is considered the eye of the vehicle. The main objective of LIDAR is to create a 3D map of the surrounding world in real time. It emits laser beams that are invisible to human eyes and measures the time they take to come back after hitting nearby objects. This calculation gives the distances of the surrounding objects, along with their identification, and hence helps guide the car in how to drive. LIDARs are capable of giving high-resolution images with minute details and exact distances. In addition to LIDAR, a DC also uses video and other cameras to identify traffic lights and road signs, and to maintain a safe distance from other vehicles and pedestrians. Radar sensors are used to monitor the positions of nearby vehicles, especially in bad weather conditions; these sensors are also used in Adaptive Cruise Control. Cameras can only give the image of an object, not its depth; hence we need LIDAR and RADAR. LIDAR sometimes fails in foggy conditions, and RADAR is used in those cases to perceive the surroundings.
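The ranging principle shared by these sensors is a simple time-of-flight computation: distance is half the round trip travelled by the pulse. A minimal sketch (the function name and sample values are ours, for illustration):

```python
# Time-of-flight ranging: a sensor emits a pulse and measures the
# round-trip time until the reflection returns. The distance is half
# the round-trip path travelled at the speed of light.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def time_of_flight_distance(round_trip_seconds: float) -> float:
    """Distance (in metres) to the object that reflected the pulse."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A reflection arriving after 200 nanoseconds corresponds to an object
# roughly 30 metres away.
print(round(time_of_flight_distance(200e-9), 2))  # 29.98
```

The same formula applies to RADAR and SONAR, with the speed of sound substituted in the latter case.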
There is a GPS (Global Positioning System) antenna on the top of the DC which gives the car's position on the road [13]. Another major component of a DC is the central computer unit, which collects data from all sensors and manipulates the steering, acceleration and monitoring controls (refer to Fig. 1 for a summary of how the components work).
Moving a self-driving car from one point to another involves perception of the environment, planning of a path, and controlled movement along that path. Environment perception includes tracking of objects and lanes and identification of the car's own position. This is achieved using various medium- and short-range sensors and cameras. Radars have proven more effective than cameras for object tracking in vehicles. Autonomous vehicles also use LIDARs: one LIDAR mounted on top of the car gives a long-range, 360-degree view of the surroundings. The data obtained from this device is used to maintain speed and to apply brakes when necessary. Navigation and path planning are mainly done using
Fig. 1. Components of a Driver-less Car (Source: The Economist, “How does a self-driving car work”)
GPS. These paths are initially calculated based on the destination and later changed dynamically according to road scenarios such as blockages or traffic. Accelerometers and gyroscopes are used along with GPS, since satellite signals may not reach into tunnels or underground roads. Data processing is performed for actions such as lane keeping, braking, maintaining distance, stopping and overtaking. Various control systems, electronic units and CAN networks are used for this purpose.
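The perceive-plan-act cycle described above can be sketched as a simple control loop; all class and function names below are illustrative placeholders of ours, not the interfaces of any real production system:

```python
# A highly simplified autonomous-driving loop: sense the environment,
# plan the next maneuver, then actuate braking/steering.
# All classes and values are hypothetical stand-ins for real subsystems.
from dataclasses import dataclass

@dataclass
class Perception:
    obstacle_ahead: bool
    lane_offset_m: float  # lateral offset from the lane centre, metres

def sense() -> Perception:
    # In a real car this would fuse LIDAR, radar, camera and GPS data.
    return Perception(obstacle_ahead=False, lane_offset_m=0.3)

def plan(p: Perception) -> dict:
    # Brake fully if something is ahead; otherwise steer back to centre.
    if p.obstacle_ahead:
        return {"brake": 1.0, "steer": 0.0}
    return {"brake": 0.0, "steer": -p.lane_offset_m * 0.5}

def act(command: dict) -> None:
    # Stand-in for the electronic control units on the CAN network.
    print(f"brake={command['brake']}, steer={command['steer']:.2f}")

act(plan(sense()))  # prints: brake=0.0, steer=-0.15
```

A real system runs this loop many times per second, with far richer perception and planning stages.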
1.2 Levels of Automation
As per the standards organization SAE International (Society of Automotive Engineers), there are six levels of driving automation. These are defined based on
the degree of automation involved in driving, ranging from “No Automation” to “Full Automation” (refer to Fig. 2 for a summary of the levels).
Level 0 : No automation. All operations, such as steering control, braking and speed control, are performed by the human driver.
Level 1 : This level involves some driver-assistance functions, e.g. the brakes are applied more powerfully if a vehicle suddenly appears ahead on the road. Still, most of the main tasks, such as steering, braking and monitoring, are handled by the driver.
Level 2 : Partial automation. Most companies are currently working at this level, where the driver gets assistance with steering and acceleration. The driver must always remain attentive and monitor the situation, ready to take control in case of safety-critical issues.
Level 3 : This level adds monitoring assistance to the steering and braking assistance. Vehicles at this level use LIDAR (Light Detection and Ranging) sensors as the eye of the car. This level does not require human attention when the speed is moderate (around 37 miles per hour) and conditions are safe, but at higher speeds and in unusual scenarios the driver's control still plays a critical role. Major industry players such as Audi and Ford announced the launch of Level 3 cars in 2018-2019.
Level 4 : Known as high automation. Here the vehicle is capable of handling steering, braking, acceleration and monitoring. Human attention is required only in certain critical scenarios, such as complex traffic jams or merging onto highways.
Level 5 : Complete automation. Absolutely no human attention is required for driving; all complex situations, such as traffic jams, are handled by the vehicle, allowing the driver to sit back without paying attention. Research at this level is still ongoing, and there are many technical and moral challenges to be addressed before these cars are made available to the public.
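For reference, the six levels can be captured as a small enumeration (an illustrative sketch of ours based on the summary above, not an official SAE API):

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """The six SAE driving-automation levels as summarized above."""
    NO_AUTOMATION = 0           # human performs all driving tasks
    DRIVER_ASSISTANCE = 1       # e.g. emergency brake assistance
    PARTIAL_AUTOMATION = 2      # steering/acceleration assist, driver monitors
    CONDITIONAL_AUTOMATION = 3  # car monitors too, driver on standby
    HIGH_AUTOMATION = 4         # human needed only in critical scenarios
    FULL_AUTOMATION = 5         # no human attention required

def driver_must_monitor(level: SAELevel) -> bool:
    """Below Level 3, the human must monitor the environment at all times."""
    return level < SAELevel.CONDITIONAL_AUTOMATION

print(driver_must_monitor(SAELevel.PARTIAL_AUTOMATION))  # True
```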
2 Technical Challenges in Self-Driving Cars
Extensive research is going on in the field of self-driving cars, and all the big companies in the world, such as Google, BMW, Audi and Uber, are constantly working to make these cars available to the common public [3]. In 2018, Google Waymo launched a commercial taxi service in Arizona, US, in four suburbs of Phoenix: Chandler, Tempe, Mesa and Gilbert. The day is not far off when the entire world will travel driverless. Still, there are many technical, legal and moral challenges that need to be addressed before self-driving cars can be made available all over the world [2,6,7]. It is necessary to develop trust among people, both for user acceptance and to make the cars ready to hit the roads [14,15,17]. Hence it is essential to work in this field by analyzing the current technical challenges faced by DCs, so that we can contribute to enhancing their safety and mobility. Below is a list of some key challenges that must be addressed before putting a DC live on the road [1,4,5].
Fig. 2. Levels of Automation
2.1 Unpredictable Road Conditions
Road conditions vary from place to place. Some roads are very smooth and have proper lane markings, whereas others are badly deteriorated, potholed or mountainous [31]. It is challenging for a driverless car to drive itself on a road with no signs or lane markings [33]. Lane markings can also disappear under a quick snowfall; humans can still identify lanes from the natural curves of the road, but DCs may not. Even roads with a little water or flooding can confuse a DC.
Bosch has announced a system designed to give DCs a feel for road conditions. Advance information about wet roads or snow will help the AV decide exactly where it can drive autonomously. This system aims to increase driving safety and the availability of autonomous driving functions.
2.2 Unusual Traffic Conditions
DCs can be put on the roads only when they can handle all sorts of traffic conditions. Roads will contain other autonomous vehicles as well as human-driven vehicles. Situations may arise in which humans break the traffic rules, creating unexpected conditions that the DC is not trained to handle [12]. In dense traffic, even a few centimeters of movement matter a lot. If the DC waits for the traffic to clear on its own, it may have to wait indefinitely. Deadlock conditions may also arise, in which all cars are waiting for the others and none moves ahead.
2.3 Radar Interference
As stated above, a DC uses radars and sensors for navigation, mounted on the top and on the body of the car. The DC emits radio waves which strike an object and reflect back, and the time taken for the reflection is used to calculate the distance between the car and the object. When hundreds of cars on the road use this technology, a given car will not be able to distinguish its own waves from the waves emitted by other cars. Even with a range of frequency bands for these waves, the bands can still be insufficient when the number of vehicles is large.
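One commonly discussed mitigation is to tag each car's emissions with its own pseudorandom code and correlate the received signal against that code, so that only the car's own reflections stand out. The toy illustration below shows the idea; it is our sketch, not a description of any deployed radar:

```python
import random

# Each car modulates its pulses with its own pseudorandom +/-1 code.
# Correlating received samples against our own code keeps our echoes
# (correlation 1.0) while another car's different code averages out.
random.seed(42)
N = 256
our_code = [random.choice((-1, 1)) for _ in range(N)]
other_code = [random.choice((-1, 1)) for _ in range(N)]

def normalized_correlation(received, reference):
    return sum(r * c for r, c in zip(received, reference)) / len(reference)

own_echo = normalized_correlation(our_code, our_code)        # exactly 1.0
interference = normalized_correlation(other_code, our_code)  # near 0
print(own_echo)  # 1.0
```

With longer codes, the residual correlation between different cars shrinks further, which is why code length trades off against update rate in such schemes.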
2.4 A Challenge at Crosswalk
Some situations are very easy for humans to handle but not for machines, e.g. driving the car between two people walking on the crosswalk and then taking an immediate left turn. This situation is routine for humans but very difficult for a machine, which lacks human perception and cannot read human faces. This is an instance of Moravec's Paradox, whereby tasks that are easy for humans are difficult to teach or engineer into a machine. The algorithms used in today's DCs will become the policy of the future. Humans are not actually good drivers, as they drive with emotions: one survey found that at a specific crosswalk, black pedestrians waited longer than white ones. A human driver decides whether to yield at a crosswalk based on a pedestrian's age or speed, but it is challenging for a DC to analyze this and decide on its next driving step.
2.5 Bugs in the software system and Intrusion attacks
As DCs use machine learning algorithms that learn from experience, we cannot be 100% confident about any result where human safety is involved. Even a small error in the code can have huge consequences [8]. The death of Elaine Herzberg illustrates the point: on 18 March 2018 she became the first pedestrian killed by an autonomous car, struck in Tempe, Arizona by an Uber car operating in self-driving mode. As a result, Uber suspended testing of its self-driving cars in Arizona. This death was evidently caused by defects in the DC's system; such defects can be missing lines of code or an external attack on the system that caused sensor data to be ignored. DCs can also suffer from intrusion attacks, such as denial of service, which can affect the working of the system [32]. Research is ongoing on Intrusion Detection Systems (IDS) for self-driving cars [22].
2.6 Need for an extensive classifier
Machine learning and AI systems learn from datasets and past experience, which enables the detection of various objects such as cyclists, pedestrians, road markings and signs. Hence, if something is missing from the dataset on which the car is trained, there is a risk of misinterpretation resulting in malfunction. For example, a STOP sign dropped at the side of the road, which is not supposed to be obeyed, may confuse a DC. Other examples are a person walking with a bicycle in hand, or a speedboat that has come out of a river and landed on the road. The “zombie kangaroo costume” challenge is another example, in which a small child wears a kangaroo or Halloween costume. Such situations are too unusual to be included in the risk database, so the vehicle will either categorize them wrongly or freeze and wait for human instructions.
Hence it is important to work on an extensive classifier that can identify unusual scenarios and take the appropriate actions, matching human capabilities.
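A standard way to let a classifier admit “I don't know” for such out-of-distribution objects is to threshold its confidence and fall back to cautious behaviour rather than forcing a known category; a minimal sketch with made-up class scores:

```python
# If the classifier's best softmax probability is below a threshold,
# label the object "unknown" and defer to cautious driving behaviour.
# The class list and scores below are made up for illustration.
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

CLASSES = ["pedestrian", "cyclist", "stop_sign", "vehicle"]

def classify_with_rejection(scores, threshold=0.7):
    probs = softmax(scores)
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return "unknown"  # e.g. a child in a kangaroo costume
    return CLASSES[best]

print(classify_with_rejection([4.0, 0.1, 0.2, 0.3]))  # pedestrian
print(classify_with_rejection([1.0, 0.9, 1.1, 1.0]))  # unknown
```

The threshold trades off false alarms against missed unusual objects, and in a safety-critical setting the "unknown" branch would trigger slowing down or handing over control.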
2.7 Issues due to co-existence of human-driven and self-driving cars
At some point, we are going to come across a scenario in which there are considerable numbers of both driverless cars and human-driven cars on the road [9,10]. This may lead to many problems, as drivers from different countries have different etiquettes. For example, Indian drivers may show a hand to signal a turn on the road, which may not be recognized by a DC. Hence it is essential to develop some sort of universal signaling language that can be followed by all, to avoid failures.
2.8 Platooning Problem
Platooning means vehicles driving very close together. It reduces road-space consumption and wind resistance, resulting in lower fuel consumption [16,20]. Along with these advantages, there is also a risk of car crashes, especially if a human driver tries to merge into the platoon space. One solution is to maintain dedicated lanes for DCs, but this is not practical in all areas.
2.9 Sharing Cost Challenge
At present, the cost of a DC is high, so it is predicted that most autonomous vehicles will be shared [18,19]. Hence efficient methods must be developed for picking up passengers on the road, so that the other passengers in the car do not have to wait for a long time. Shared cars will also help increase per-vehicle occupancy and decrease the number of cars on the road, thereby helping to reduce traffic.
2.10 Making the DC Cost Effective
As stated above, a DC uses LIDAR and various cameras and sensors, which are very costly. In particular, the cost of the LIDAR is huge and inflates the overall cost of the DC. At present, a DC setup can cost up to $80,000, of which the LIDAR alone can range from $30,000 to $70,000. One strategy to reduce cost is to use a LIDAR with fewer lasers; Audi has claimed that a LIDAR with only four lasers will be sufficient for safe driving on highways. Further, companies such as “Valeo” and “Ibeo” are researching how to build LIDARs for less than $1,000.
2.11 Providing prediction power and judgment-call capability to the DC
To drive safely on the roads, it is essential to see, interpret and predict the behaviour of humans [11]. The car must understand when it is in the blind spot of another car. Human drivers are able to pass through highly congested areas by making eye contact with other humans; eye contact also helps us predict state of mind, e.g. we can identify whether a person is distracted or has not noticed the vehicle's movement, or whether another driver is not attentive to a lane change. This prediction capability has to be built into DCs to make them safe for humans. Sometimes the driver has to take an instant decision when a particular object suddenly appears in front. The easiest solution is instant braking, but it may lead to a hit from behind, which can result in a serious accident on the highway. Another situation is deciding whether to hit a child on the road or an empty box at the side that is blocking the lane [21]. These scenarios are familiar to humans, so they can make the right calls; the call taken by a DC may be technically correct but may fail in the real world. Obviously, extensive trial-and-error experimentation is required to make sure that DCs can handle these tasks.
2.12 Challenge of identifying animals that do not stay on the ground
A DC needs to identify and respond to dozens of animals, such as horses, moose and kangaroos. Wild animals can suddenly jump into the DC's path, and animals like kangaroos represent a unique problem. Most identification systems use the ground as a reference to locate an animal, but kangaroos hop off the ground, making it challenging for the system to identify their location and where they will land. As per the National Roads and Motorists' Association of Australia, 80% of animal collisions in the country are with kangaroos, with more than 16 thousand kangaroo strikes by vehicles each year. These accidents also generate millions of dollars in insurance claims. Major companies such as Volvo are taking this issue seriously: Volvo's safety engineers have started filming kangaroos and have used this data to develop a kangaroo detection system in their vehicles.
3 Issue of Blind Spots in self driving car
“Blind spots” are the areas around the car that are not directly visible in the rear and side mirrors. These areas can be seen by the driver with a little extra effort, such as turning the head to the sides. Blind spots also include areas that are not visible because of the body of the car, including the pillars that join the roof to the body [30]. Like human-driven cars, DCs also suffer from blind-spot issues. A DC uses LIDAR as an eye to get an overall 360-degree 3D view of the surroundings, and object detection is done using the LIDAR sensors; hence the blind-spot area depends on the number of sensors used. Uber recently ran into big trouble because of this issue [36]: its self-driving car struck a woman in Arizona, resulting in her death. The reason for this accident is said to be a blind spot caused by the reduction of LIDAR sensors from five to one [35]. In 2016, Uber shifted to Volvo cars instead of Ford's autonomous cars, which led to large changes in the sensor design: the LIDARs were reduced from five to just one, mounted on top of the roof, and to compensate, the RADAR sensors were increased from seven to ten. Removing LIDARs reduced the cost but increased the blind spots: a single LIDAR on top leaves a blind-spot area low to the ground all around the car, so there is a chance of an accident if an object in the blind-spot area is not detected. Therefore, it is necessary to design a system that will guide the autonomous vehicle while changing lanes or taking turns.
3.1 Current Solutions to Blind Spots Problem
For human-driven cars, a number of solutions exist for eliminating blind spots. The simplest way to reduce blind spots is to adjust the mirrors of
the car. In heavy vehicles, blind spots are eliminated using special seat designs. Special blind-spot mirrors have been developed that give the driver a wide view. Some high-end cars also use electronic systems that detect cars in the blind spots and show a warning on the mirrors using an LED light. Vision-based Blind Spot Detection (BSD) systems exist for daytime and nighttime driver assistance, making use of lasers and cameras [23,24,25]. At present, autonomous vehicles use LIDAR to see the world around them, with LIDARs at the sides of the car to reduce the blind spots. But the cost of LIDAR is huge, which increases the overall cost of the driverless car; the LIDARs alone can cost up to $80,000. If we reduce the number of LIDARs to one (mounted on the top), there can be areas around the car that are not visible, and a small object such as a cyclist can remain undetected. In 2018, Uber had an accident in Arizona for exactly this reason: one of its cars struck a pedestrian woman, who died, and as a result Uber had to terminate testing in Arizona.
Further, even if we detect an object using sensors instead of LIDAR, the driverless car does not get enough assistance for its next steps (whether to reduce or increase speed), which a human driver resolves easily using eye contact. Hence it is necessary to develop a system for blind-spot detection that is cost effective and also assists the vehicle with driving actions.
3.2 Proposed System for Blind Spot Detection
We propose a novel approach using machine learning to tackle the problem of blind spots. The aim of this approach is to reduce accidents caused by limited visibility and thereby improve human safety. The BSD system consists of two modules. The first module, the Object Detector Module, uses sensors to identify any object in the blind-spot area of the vehicle [26,27,28]. The second module, the Assistance Module, uses machine learning techniques to identify the relative speed of that object. Based on the speed calculation, assistance can be given to the DC. If the object in the blind spot is moving at roughly the same speed as our DC, the DC has to vary its speed so that the object moves out of the blind-spot area [29]; after that, it can change lanes or take the turn. If the detected object is moving relatively slowly or quickly, our DC waits for that vehicle to move out.
Note that we do not use any extra LIDAR for detection, so the solution does not add significant cost to the design. Here, the use of machine learning to calculate relative speed is essential to avoid long waits. A scenario may occur in which the car behind ours also moves at roughly the same speed and hence maintains the same distance from our car; our car cannot wait indefinitely for that car to vary its speed. A human driver would judge the speed of that car and vary their own speed to take it out of the blind spot, but this is not so simple for a machine. Also, we take “speed” into account instead of distance, because vehicles moving at the same speed maintain the same distance for a long time, whereas calculating the relative speed gives instant assistance.
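In the simplest case, the relative speed the Assistance Module needs can be estimated from two successive range readings of the blind-spot sensor. The paper envisions machine learning for this step, so the finite-difference version below is only an illustrative baseline of ours:

```python
# Estimate the relative speed of an object in the blind spot from two
# consecutive distance measurements taken dt seconds apart.
# A positive result means the object is closing in (moving faster than
# us); near zero means it is pacing us and will linger in the blind spot.

def relative_speed(d_prev: float, d_curr: float, dt: float) -> float:
    """Relative speed in m/s from two range readings (metres)."""
    if dt <= 0:
        raise ValueError("dt must be positive")
    return (d_prev - d_curr) / dt

# The object was 12.0 m away, and 0.5 s later it is 10.5 m away:
# it is gaining on us at 3.0 m/s.
print(relative_speed(12.0, 10.5, 0.5))  # 3.0
```

A learned estimator can improve on this by smoothing noisy range readings over many frames, which is what motivates the machine learning component.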
3.3 Proposed Algorithm for BSD system
Step 1: Detect the object in the blind-spot area using the Object Detection Module, which contains sensors to detect the presence of an object.
Step 2: Calculate the relative speed of the object using machine learning techniques (refer to the output in Fig. 3).
Step 3: If the relative speed is zero (moving at the same speed), decrease our speed slowly so that the object moves out of the blind spot.
Step 4: If the relative speed is negative (moving slowly with respect to our DC), increase our speed and give the appropriate indicator.
Step 5: If the relative speed is positive (moving fast with respect to our DC), wait for the car to move out without any effort.
Step 6: Once the blind-spot area detector gives a green signal, indicating it is safe to move, give the proper indicator (left or right) and change the lane or take the turn.
Step 7: Perform the above steps continuously while driving.
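The seven steps above can be sketched as a decision function. The sensor interface, the action names and the small dead-band used to treat near-zero relative speed as “same speed” are our illustrative assumptions:

```python
# Decision logic for the proposed BSD system. Given whether an object is
# present in the blind spot and its relative speed (m/s, positive means
# faster than us), return the maneuver for this control cycle.

SAME_SPEED_BAND = 0.5  # |relative speed| below this counts as "same speed"

def bsd_decision(object_detected: bool, relative_speed_mps: float,
                 lane_change_requested: bool) -> str:
    if not object_detected:
        # Step 6: blind spot clear -- indicate and change lane if desired.
        return "indicate_and_change_lane" if lane_change_requested else "cruise"
    if abs(relative_speed_mps) < SAME_SPEED_BAND:
        return "decrease_speed"   # Step 3: object is pacing us, shed it
    if relative_speed_mps < 0:
        return "increase_speed"   # Step 4: object is slower, pull ahead
    return "wait"                 # Step 5: object is faster, let it pass

print(bsd_decision(True, 0.1, True))   # decrease_speed
print(bsd_decision(False, 0.0, True))  # indicate_and_change_lane
```

In a running vehicle this function would be called on every control cycle (Step 7), with the detection flag and speed estimate supplied by the two modules described above.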
Fig. 3. Vehicle speed estimation (Source: Zheng Tang, vehicle speed estimation for the NVIDIA AI City Challenge Workshop at CVPR 2018)
4 Conclusion
The development of self-driving cars is one of the most active research topics in the industry. Still, there are many challenges and issues because of which self-driving cars are not yet ready to perform like humans. In this paper, we have addressed some of the key challenges in this area and then explained the issue of blind spots. We have also proposed a system for self-driving cars that can be used to reduce accidents and improve human safety. We believe that the day is not far off when driverless cars will be on the road for the general public, provided we address the main safety and legal issues.
References
1. Rasheed Hussain, Sherali Zeadally: Autonomous Cars: Research Results, Issues
and Future Challenges. In:IEEE Communications Surveys & Tutorials, pp. 1-1 (10
September 2018)
2. R. Okuda, Y. Kajiwara, and K. Terashima: A survey of technical trend of adas
and autonomous driving,In: Proceedings of Technical Program - 2014 International
Symposium on VLSI Technology, Systems and Application (VLSI-TSA), pp. 14,
April 2014.
3. T. Kanade, C. Thorpe, and W. Whittaker: Autonomous land vehicle project at CMU, In: Proceedings of the 1986 ACM Fourteenth Annual Conference on Computer Science, CSC '86 (New York, NY, USA), pp. 71-80, ACM, 1986.
4. “Biggest Challenges in Driverless cars”, https://9clouds.com/blog/what-are-the-biggest-driverless-car-problems/
5. I Barabs, A Todoru, N Cordo, A Molea: “Current challenges in autonomous driving”,
In: IOP Conference Series: Materials Science and Engineering, 252 012096 (2017)
6. Kanwaldeep Kaur, Giselle Rampersad: “Trust in driverless cars: Investigating key factors influencing the adoption of driverless cars”, Volume 48, Pages 87-96 (April-June 2018)
7. Jan De Bruyne, Jarich Werbrouck: “Merging self-driving cars with the law”,In:
Computer Law & Security Review, Volume 34, Issue 5, Pages 1150-1153 (October
2018)
8. Nynke E. Vellinga: “From the testing to the deployment of self-driving cars: Legal challenges to policymakers on the road ahead”, In: Computer Law & Security Review, Volume 33, Issue 6, Pages 847-863 (December 2017)
9. Eric R. Teoh, David G. Kidd: “Rage against the machine? Google's self-driving cars versus human drivers”, In: Journal of Safety Research, Volume 63, Pages 57-60 (December 2017)
10. “Uber Self Driving car fatality”, In:NewScientist, Volume 237, Issue 3170 Pages
5-57 (24 March 2018)
11. Meixin Zhu, Xuesong Wang, Yinhai Wang: “Human-like autonomous car-following
model with deep reinforcement learning”, In:Transportation Research Part C:
Emerging Technologies, Volume 97, Pages 348-368 (December 2018)
12. Wen-Xing Zhu, H.M.Zhang: “Analysis of mixed traffic flow with human-driving and
autonomous cars based on car-following model”,In:Physica A: Statistical Mechanics
and its Applications, Volume 496, Pages 274-285 (15 April 2018)
13. Yassine Zein,Mohamad Darwiche, Ossama Mokhiamar:“GPS tracking system for
autonomous vehicles”,Alexandria Engineering Journal Available online (13 Novem-
ber 2018)
14. Nadia Adnan, Shahrin Md Nordin, Mohamad Ariff bin Bahruddin, Murad Ali: “How trust can drive forward the user acceptance to the technology? In-vehicle technology for autonomous vehicle”, In: Transportation Research Part A: Policy and Practice, Volume 118, Pages 819-836 (December 2018)
15. Paul Marks:“Autonomous cars ready to hit our roads”, In: New Scientist,Volume
213, Issue 2858, Pages 19-20 (31 March 2012)
16. Martin Lád, Ivo Herman, Zdeněk Hurák: “Vehicular platooning experiments using autonomous slot cars”, In: IFAC-PapersOnLine, Volume 50, Issue 1, Pages 12596-12603 (July 2017)
17. Yi-Ching Lee, Jessica H. Mirman: “Parents' perspectives on using autonomous vehicles to enhance children's mobility”, In: Transportation Research Part C: Emerging Technologies, Volume 96, Pages 415-431 (November 2018)
18. Riccardo Iacobucci, Benjamin McLellan, Tetsuo Tezuka: “Modeling shared autonomous electric vehicles: Potential for transport and power grid integration”, In: Energy, Volume 158, Pages 148-163 (1 September 2018)
19. Josiah P.Hanna, Michael Albert, Donna Chen, Peter Stone: “Minimum Cost
Matching for Autonomous Carsharing”, In:IFAC-PapersOnLine, Volume 49, Issue
15, Pages 254-259 (2016)
20. Jinke Yu, Leonard Petng: “Space-based Collision Avoidance Framework for Au-
tonomous Vehicles”,In: Procedia Computer Science, Volume 140, Pages 37-45 (2018)
21. Patricia Böhm, Martin Kocur, Murat Firat, Daniel Isemann: “Which Factors Influence Attitudes Towards Using Autonomous Vehicles”, In: AutomotiveUI, Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications Adjunct, Pages 141-145 (2017)
22. Khattab M. Ali Alheeti, Anna Gruebler, Klaus D. McDonald-Maier: “An intrusion
detection system against malicious attacks on the communication network of driver-
less cars”,In: 2015 12th Annual IEEE Consumer Communications and Networking
Conference (CCNC)(2015).
23. Guiru Liu, Mingzheng Zhou, Lulin Wang, Hai Wang, Xiansheng Guo, “A blind spot
detection and warning system based on millimeter wave radar for driver assistance”,
ScienceDirect, Volume 135, April 2017, Pages 353-365
24. B.F. Wu, H.Y. Huang, C.J. Chen, Y.H. Chen, C.W. Chang, Y.L. Chen “A vision-
based blind spot warning system for daytime and nighttime driver assistance” Com-
put. Electr. Eng., 39 (April) (2013), pp. 846-862
25. M.W. Park, K.H. Jang, S.K. Jung Panoramic vision system to eliminate driver’s
blind spots using a laser sensor and cameras Int. J. ITS Res., 10 (2012), pp. 101-114
26. Y.L. Chen, B.F. Wu, H.Y. Huang, C.J. FanA real-time vision system for nighttime
vehicle detection and traffic surveillance IEEE Trans. Ind. Electron., 58 (5) (2011),
pp. 2030-2044
27. Y.C. Kuo, N.S. Pai, Y.F. Li, Vision-based vehicle detection for a driver assistance
system Comput. Math. Appl., 61 (2011), pp. 2096-2100
28. C.T. Chen, Y.S. Chen, Real-time approaching vehicle detection in blind-spot area
Proceedings IEEE International Conference Intelligence Transport System (2009),
pp. 1-6
29. Vicente Milanes, David F. Llorca, Jorge Villagra, Joshua Perez, Carlos Fernandez,
Ignacio Parra, Carlos Gonzalez, Miguel A. Sotelo, Intelligent automatic overtaking
system using vision for vehicle detection Expert Syst. Appl., 39 (2012), pp. 3362-
3373
30. Y.H. Cho, B.K. HanApplication of slim A-pillar to improve driver’s field of vision
Int. J. Auto. Technol., 11 (2010), pp. 517-524
31. Scott A.Cohen, Debbie Hopkins, “Autonomous vehicles and the future of urban
tourism”, Annals of Tourism Research, Volume 74, January 2019, Pages 33-42
32. Jin Cui, Lin Shen Liew, Giedre Sabaliauskaite, Fengjun Zhou, “A Review on Safety
Failures, Security Attacks, and Available Countermeasures for Autonomous Vehi-
cles”, Ad Hoc Networks Available online 7 December 2018, In Press, Accepted
Manuscript
33. Lanhang Ye, Toshiyuki Yamamoto, “Impact of dedicated lanes for connected and
autonomous vehicle on traffic flow throughput”, Physica A: Statistical Mechanics
and its Applications Volume 512, 15 December 2018, Pages 588-597
34. Jonas Meyer, Henrik Becker, Patrick M. Bösch, Kay W. Axhausen: “Autonomous vehicles: The next jump in accessibilities”, In: Research in Transportation Economics, Volume 62, Pages 80-91 (June 2017)
35. Kieren McCarthy:“Uber self-driving car death riddle: Was LIDAR blind spot to
blame?”, Emergent Tech, 28 Mar 2018
36. Keith Naughton: “Uber's Fatal Crash Revealed a Self-Driving Blind Spot: Night Vision”, In: Bloomberg, May 29, 2018