Unreliable Pedestrian Detection and Driver Alerting
in Intelligent Vehicles
Mary L. Cummings, Senior Member, IEEE, and Ben Bauchwitz
Abstract—Vehicles with advanced driving assist systems that
automatically steer, accelerate and brake are popular, but
associated with increased driver distraction. This distraction
coupled with unreliable autonomous system performance leads to
vehicles that may be at higher risk for striking pedestrians. To this
end, this study tested three consumer vehicles in two different
model classes in a pedestrian crossing scenario. In 120 trials, one
model never detected the pedestrian, nor alerted the driver. In 123
trials, the other model vehicles almost always detected the
pedestrian, but in 35% of trials, alerted the driver too late. These
cars were not consistent internally or with one another in
pedestrian detection and response, and only sparingly sounded
any warnings. These intelligent vehicles also detected the
pedestrian earlier if there were no established lane lines,
suggesting that in well-marked areas, typically the case for
established crossings, pedestrians may be at increased risk of a
possible conflict. This research demonstrates that artificial
intelligence can lead to unreliable vehicle behaviors and warnings
in pedestrian detection, potentially catching drivers off guard.
These results further indicate industry needs to do more testing of
intelligent systems, regulators should reevaluate the self-
certification approval process, and that more fundamental work is
needed in academia around the performance and quality of
technologies with embedded neural networks.
Index Terms—Pedestrian detection, self-driving, driving assist,
computer vision, driver alerting
This research was partially funded by a US Department of Transportation University Transportation Center grant through the University of North Carolina’s Collaborative Sciences Center for Road Safety. Mary L. Cummings is a professor of Mechanical Engineering, Electrical and Computer Engineering, and Computer Science at George Mason University, Fairfax, VA 22030 USA (e-mail: cummings@gmu.edu). Ben Bauchwitz is with Duke University, Durham, NC 27705 USA (e-mail: benjamin.bauchwitz@duke.edu).
I. INTRODUCTION
Vehicles with partially-automated driving systems,
including those with Advanced Driver Assist Systems
(ADAS) that leverage embedded artificial intelligence
(AI) to laterally and longitudinally control a vehicle (i.e., lane
keeping, accelerating, braking, lane changing, passing and
navigation) are increasingly prevalent. Whether these systems
actually improve overall safety has not been definitively
established, as they are often misused by drivers [1]. ADAS-
equipped vehicles from all major manufacturers require a
significant amount of driver supervision and input, despite
advertisements of self and hands-free driving. An increasing
number of accidents indicates that these systems may be brittle
and suffer from vulnerabilities that are not well understood [2].
ADAS systems rely on computer vision-based lane tracking systems to maintain lateral control [3], and may combine computer vision with other sensing modalities to maintain space from other vehicles and detect obstacles [4-6]. However,
recent research demonstrates that ADAS systems perform
inconsistently in lateral and longitudinal control, including
obstacle detection, warning and mitigations [7-9].
Inconsistent autonomous system performance and sensor
blind spots raise the concern that self-driving and partially-
automated vehicles may be at a higher risk for striking
pedestrians. For example, a fatal 2018 collision between an
experimental Uber self-driving car and a pedestrian was a major
setback for the industry [10]. While pedestrian detection
systems have made important strides in the past decade, recent
studies have shown that environmental conditions like darkness
can reduce the effectiveness of such systems [11]. Other failure
modes include pedestrian occlusion by other objects or
insufficiently-trained object detection models [12].
Pedestrian deaths continue to rise every year [13] and
pedestrian-centric urban environments have much more road
feature diversity than restricted access highways. This diversity
can impair autonomous vehicle perception, especially if
underlying neural networks are undertrained and do not account
for various lighting and environmental factors, like rain, which
also causes problems for laser ranging systems like LIDAR.
Previous studies have shown that autonomous car lane
detection performance suffers in complex scenes like those that
contain occlusions [14], including pedestrians [15] and traffic
cones [7].
In addition to problems with AI-enabled object and lane
detection, there are also concerns about a driver’s ability to take
over if an autonomous driving system fails to detect an object.
Such systems have been associated with increased driver distraction [16, 17], and such distraction was cited as the primary reason for the Uber self-driving death. Moreover, several recent studies found
that a concerning number of drivers do not understand how their
ADAS systems work [18, 19]. Thus, it is also imperative to
understand how and when these cars communicate the presence
of pedestrians to drivers, especially if there are known gaps in
sensor capabilities and drivers are likely distracted.
To this end, this study examined how three cars each from
two different ADAS-equipped models compared when
presented with a near-miss pedestrian scenario. The goal was to
determine whether the cars, in automatic driving mode with no
driver hands on the steering wheel, detected the pedestrian and
performed any mitigating actions. In addition, because of
known issues between lane tracking and pedestrian detection
[15], whether any possible interaction existed between lane line
quality and pedestrian detection was examined. How far in
advance the driver was warned, whether the cars initiated
braking, and how much variability was exhibited by each of the
two groups of cars were also measured. Such results are critical
in determining the risk that vehicles with AI-enabled computer vision present to pedestrians, whether infrastructure like lane quality is a factor in pedestrian safety in the presence of AI-equipped vehicles, and whether more oversight of such systems is needed.
II. METHODS
The three cars in each of the two 2021 model classes (Models
A and B) of vehicles used in the tests were obtained through a
car-sharing platform, as they represent typical cars driven by
actual consumers. The two models were selected because they
had very similar camera vision-based systems for pedestrian
detection (Tables 1 and 2), they both were equipped with radar,
they could both be driven hands-free for periods of time, and
they advertised pedestrian detection and avoidance as a safety
feature. Specific vehicle information can be provided upon
request.
A. Procedure
Each trial consisted of driving one of the six vehicles on a
1500-foot section of track (Fig. 1) at the North Carolina Center
for Automotive Research (NCCAR), a closed test track facility.
The roadway is 40 feet wide, which was then separated into
three 13-foot-wide lanes. All cars experienced regular lane line
trials, which were standard 10 ft x 6 in lines with 30 ft
longitudinal spacing, as specified in the Manual on Uniform
Traffic Control Devices [20].
They also experienced a degraded lane line condition in which, beginning 600 ft from the pedestrian, the longitudinal marks were replaced by shorter, irregularly spaced, sideways C-shaped lines (Fig. 2). By mixing lateral and longitudinal markings, these lines were intended to confuse the line-based features used by AI-based computer vision systems. All trials for this test were conducted with the sun altitude > 40 degrees above the horizon. Each of the six cars experienced 40 randomized and counterbalanced trials: 20 in the defined lane condition (standard 10 ft x 6 in white lines with 30 ft longitudinal spacing [20]) and 20 in the degraded lane setting.
For each trial, the vehicle started in a defined lane and
accelerated to a speed of 40 mph, with both adaptive cruise
control and automatic steering activated at this point. This
speed was chosen since previous research determined that to
meaningfully reduce pedestrian deaths, pedestrian detection
systems would need to operate at or above this number [21].
Once at 40 mph, the driver took his hands off the wheel for
the duration of the test. Traffic cones were positioned on either edge of the roadway 350 ft prior to the pedestrian rig. As soon as the vehicle passed these cones, the pedestrian began crossing the road at a speed of 5 fps to approximate typical walking speed [22]. This was designed to result in a “near miss”
scenario, where the vehicle would come within approximately
2 s of colliding with the pedestrian, but the pedestrian would be
just out of the vehicle’s path even if the vehicle did not brake.
The ADAS system was reset between each trial at the locations
marked start in Fig. 1.
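As a concrete check on this near-miss design, the short sketch below recomputes the encounter geometry from the parameters above (40 mph approach, cones 350 ft before the crossing, 5 fps pedestrian speed, 40-ft roadway). It is an illustrative calculation only, not part of the study's instrumentation.

```python
# Near-miss geometry sketch using the stated trial parameters.
MPH_TO_FPS = 5280 / 3600            # 1 mph = ~1.467 ft/s

vehicle_speed = 40 * MPH_TO_FPS     # ~58.7 ft/s at 40 mph
cone_distance = 350.0               # ft from the cones to the pedestrian path
pedestrian_speed = 5.0              # ft/s crossing speed

time_to_rig = cone_distance / vehicle_speed         # ~6.0 s
pedestrian_travel = pedestrian_speed * time_to_rig  # ~29.8 ft

# Starting at the left road edge (-20 ft), the pedestrian ends up
# ~9.8 ft right of the roadway centerline -- just beyond a centered
# vehicle's 13-ft lane, producing the intended near miss.
print(f"time to rig: {time_to_rig:.1f} s")
print(f"pedestrian travel: {pedestrian_travel:.1f} ft")
```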
The same driver, an experienced test driver, completed all
tests. In all experiments, a second experimenter sat in the
passenger seat to monitor the status of the recording devices and
ensure that cruise control speed was set at the correct level. A
third experimenter controlled the movement of the pedestrian
test target.
TABLE I
TECHNICAL DETAILS OF MODEL A

Lane Keeping Assist: Model A vehicles detect lane markers on the road with a camera mounted on the front windshield. The vehicle, lane markings and immediate surrounding objects are visualized on a screen inside the vehicle. The vehicle can independently maintain lane position but requires periodic driver input. When the driver fails to provide timely input, it alerts the driver with a visual warning.

Forward Collision Warning: A visual warning message appears on a screen inside the vehicle and is accompanied by an aural warning and steering wheel vibration. The vehicle may trigger automatic emergency braking depending on proximity to the obstacle.

Automatic Emergency Braking: A visual warning message appears on a screen inside the vehicle and is accompanied by an aural warning. The vehicle automatically brakes as much as necessary to avoid the obstacle, up to a full stop.
TABLE II
TECHNICAL DETAILS OF MODEL B

Lane Keeping Assist: Lane markers on the road are detected with a camera mounted on the front windshield. The vehicle, lane markings, and immediate surrounding objects are visualized on a screen inside the vehicle. The vehicle can independently maintain lane position but requires periodic driver input. When the driver fails to provide timely input or the system otherwise fails to maintain lane position, it alerts the driver with a visual and audible warning.

Forward Collision Warning: A visual warning message appears on a screen inside the vehicle and is accompanied by an aural warning. The vehicle may trigger automatic emergency braking or automatic emergency steering depending on proximity to the obstacle.

Automatic Emergency Braking: A visual warning message appears on a screen inside the vehicle and is accompanied by an aural warning. The vehicle automatically brakes as much as necessary to avoid the obstacle, up to a full stop, and augments with steering if braking action is insufficient to avoid the obstacle.
The pedestrian target consisted of a black Seasons™ rubber composite inflatable pedestrian (Fig. 3). The pedestrian was 72 in tall, dressed in a plain long sleeve shirt and blue jeans, and had a total inflated mass of 3 kg. It was continuously re-inflated to minimize changes in appearance due to air loss. The rig
moving the pedestrian was supported by two 8-foot wooden
posts placed on either side of the roadway and secured using
114 pounds of concrete. A 75-foot Keeper™ 3/8-inch bungee
cord was suspended between the two posts, also secured to the
ground on either side. The cord tension provided just enough
sag so that the pedestrian would touch the ground while
suspended from the cord without hovering or bending. A nylon
rope connected to the top of the pedestrian’s head was pulled by an
operator on the side of the track. The operator manually pulled
the pedestrian during each trial, using a digital clock to pace the
movement so that the pedestrian traveled at the intended speed.
Fig. 1. Test environment. The defined lane condition is
marked in white while the degraded lane condition is marked
in yellow
Fig. 2. Degraded lane marking appearance
Fig. 3. Pedestrian test environment layout
B. Data Collection Instruments
Several measures were collected during each trial. First, in
all experiments continuous video data was captured from
GoPro Hero 7 Black cameras covering three areas of interest:
(1) the driver’s face and hands, (2) the road, and (3) the console
readout displaying ADAS information. The road-facing camera
was mounted in the center of the dashboard set back three
inches from the front of the windshield. The driver-facing
camera was mounted at a position specific to each model and
standardized using dashboard
landmarks. The position of
the console-facing camera
varied depending on where
information was displayed in
each vehicle model.
For vehicles with ADAS
information presented on the
center console (Model B),
this camera was mounted on
the sun roof just behind the
first row of seats using a
suction mount. For vehicles
with ADAS information
presented behind the steering
wheel (Model A), this camera
was mounted on the steering
column. It was placed along
the center line of the steering column as far forward as possible,
with the bottom of the camera’s sensor resting on the bottom lip
of the console alcove. This placement allowed the camera to
capture a full view of the console without blocking the driver’s
view of the console.
Cameras were set to prohibit auto-exposure adjustment so
that light capture was as similar as possible across trials. The
video capture was set to 25 fps with a 1920 x 1440 pixel image
size. Zoom was set to the highest level of lens resolution
available. Each camera was electronically connected to a
SyncBac Pro radio frequency synchronizer device which
augments synchronized videos with timecodes.
III. RESULTS
A. Model A
For the 120 trials for Model A in both the defined and degraded lane conditions, none resulted in pedestrian detection, alerting, or any braking maneuvers. No vehicle collided with the pedestrian target in any test, as designed.
B. Model B
Model B vehicles experienced 123 trials (three extra trials were run due to concerns about possibly lost data), and no vehicle collided with the pedestrian target in any test. During these trials, nine behaviors were observed as the vehicles approached the pedestrian (Table 3). All icons were displayed on the standard LCD console in
the car. The cars provided a visual alert of the pedestrian in 99%
of cases but only braked for 85% of these events. In 3 cases (2.4%
of all trials), the cars automatically accelerated toward the
pedestrian after initially braking. Some, but not all, of these
behaviors occurred in concert, i.e., a hands-on-wheel (HOW) alert
and aural urgent alarm could sound in quick succession, but there
was no predictable sequence or pattern of the combination of
visual and/or aural alerting in either lane line condition.
Figure 4 shows the counts of events in Table 3 for the defined
and degraded lane conditions except for pedestrian
visualizations, discussed in the next section. The typical vehicle
response to the crossing pedestrian was to present a pedestrian icon on the dash display and then automatically brake 2.0-2.5 s later. As seen in Fig. 4, there were no other consistent actions or alerts between the cars, discussed in more detail below.
Fig. 4. Model B event counts by individual car & lane condition.

TABLE III
MODEL B BEHAVIORS

Behavior | Count
Pedestrian visualization icon | 122
Automatic braking | 103
Standard alert sound | 25
Standard hands-on-wheel icon | 25
Urgent alert sound | 20
Forward collision warning icon | 8
Urgent hands-on-wheel icon | 3
Automatic acceleration | 3
Takeover alert icon | 1
C. Model B Braking Events
Cars B1 and B2 braked upon pedestrian detection in almost all
trials (98% for B1 and 100% for B2), but B3 only braked in about
60% of all trials. Table 4 shows the mean and one standard
deviation values for the reduction in speed, as well as the severity
of the deceleration in g. The average deceleration for a light
passenger car braking for some kind of obstacle in the roadway is
-0.23g [23], so for the decelerations in Table 4, none were statistically different from -0.23g except for B3 in the degraded condition (p = .0042). This means that B3, on average, braked harder than the other vehicles, but only in the degraded lane condition. For reference, the threshold for hard braking is a deceleration beyond -0.35g [24], so it is likely such braking would be noticed by the driver.
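The paper does not state which test produced the p = .0042 comparison against the -0.23g reference, so the sketch below simply illustrates one plausible form of that comparison, a one-sample t-test, using placeholder deceleration values rather than the study data.

```python
# Illustrative one-sample comparison against the reference deceleration.
# The per-trial values are placeholders, not the study data.
import numpy as np
from scipy import stats

REFERENCE_G = -0.23     # typical obstacle-braking deceleration [23]
HARD_BRAKING_G = -0.35  # decelerations beyond this are hard braking [24]

b3_degraded = np.array([-0.31, -0.42, -0.28, -0.55, -0.30])  # hypothetical

t_stat, p_val = stats.ttest_1samp(b3_degraded, REFERENCE_G)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")
print("hard-braking trials:", int(np.sum(b3_degraded < HARD_BRAKING_G)))
```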
Using a Kruskal-Wallis H non-parametric test due to non-
normal data, speed reduction was statistically different between
defined and degraded lanes for all cars (H(1) = 9.943, p = .002,
alpha = .01). The Kruskal-Wallis H test comparing the three
cars yielded a non-significant test (H(2) = 1.040, p = .594, alpha
= .01). While Model B cars comparatively exhibited the same
reduction in speed overall, they exhibited different speed
reductions depending on the lane condition, with greater (i.e.,
safer) speed reduction in the degraded lanes condition.
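For readers unfamiliar with the test, a minimal sketch of the Kruskal-Wallis comparisons reported here follows; the speed-reduction samples are placeholders standing in for the per-trial data.

```python
# Kruskal-Wallis sketch with placeholder speed-reduction samples (mph).
from scipy import stats

defined = [-9.8, -10.2, -8.9, -11.0, -9.5]      # hypothetical defined-lane trials
degraded = [-11.5, -12.1, -10.9, -13.0, -11.2]  # hypothetical degraded-lane trials

H, p = stats.kruskal(defined, degraded)  # lane-condition comparison, df = 1
print(f"H(1) = {H:.3f}, p = {p:.3f}")

# Three-car comparison (df = 2), again with placeholder samples.
b1, b2, b3 = [-10.4, -9.9, -10.8], [-9.7, -9.8, -10.1], [-10.5, -10.4, -9.6]
H3, p3 = stats.kruskal(b1, b2, b3)
print(f"H(2) = {H3:.3f}, p = {p3:.3f}")
```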
As noted previously, only B3 exhibited aggressive
deceleration in the aggregate and only in the degraded lane line
condition. Only 4 events in this category exceeded the -0.35g
threshold for a hard braking, with the most extreme (-0.55g) at
106 ft (2.2s) from the pedestrian. This would be considered an
emergency braking event. In contrast, this same car under the
defined lane line condition did not brake at all for the three
closest detections of the pedestrian (13, 18 and 20 ft). These
detection distances are discussed in more detail in the next
section.
TABLE IV
AVERAGE SPEED REDUCTIONS AND DECELERATIONS (±1 STD. DEV.)

Car | Avg. Speed Reduction (mph), Degraded | Avg. Deceleration (g), Defined | Avg. Deceleration (g), Degraded
B1 | -10.41 (2.38) | -.22 (.09) | -.22 (.05)
B2 | -9.75 (1.48) | -.24 (.09) | -.20 (.03)
B3 | -10.45 (6.00) | -.27 (.10) | -.34 (.10)
D. Pedestrian Visual Alert
Figure 5 shows the first detected position of the pedestrian
target, relative to the vehicle, on the car’s display. Longitudinal
distance is the distance in feet from the car’s bumper to the
pedestrian, and was measured from the synced camera views
using 10-ft markers spaced every 40 ft. Lateral distance is defined
relative to the vehicle centerline (which was centered on the
roadway), with -20 ft at the far-left edge of the road and +20 ft
at the far-right edge of the road. Lateral positions were
determined by reviewing the videos of each run that captured
the internal display pedestrian depiction as well as the position
of the pedestrian.
Since the pedestrian target crossed the road from left to right,
an early detection of the pedestrian occurs with larger
longitudinal distances and negative lateral distances (closest to
the left edge, where the pedestrian began its movement). Small
longitudinal distances and positive lateral distances indicate
that the pedestrian was detected late.
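This coordinate convention can be turned into rough time margins at the nominal speeds. The sketch below does so for B2's earliest (373 ft) and shortest (55 ft) detections reported later in this section; the lateral positions paired with them here are assumed for illustration.

```python
# Convert a first-detection position (longitudinal ft to the bumper,
# lateral ft from the roadway centerline) into rough time margins.
VEHICLE_FPS = 40 * 5280 / 3600  # ~58.7 ft/s at 40 mph
PED_FPS = 5.0                   # pedestrian crossing speed, ft/s
LEFT_EDGE = -20.0               # pedestrian start position, ft

def detection_margins(longitudinal_ft, lateral_ft):
    """Return (time for car to reach the crossing, time the pedestrian
    had already been in the roadway when first visualized)."""
    return (longitudinal_ft / VEHICLE_FPS,
            (lateral_ft - LEFT_EDGE) / PED_FPS)

# An early detection (373 ft out, pedestrian near the left edge) vs. a
# late one (55 ft out, pedestrian right of center); lateral values are
# assumed here for illustration.
print(detection_margins(373, -18))  # ~(6.4 s, 0.4 s)
print(detection_margins(55, 5))     # ~(0.9 s, 5.0 s)
```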
Table 5 provides summary statistics for where each car
visualized the pedestrian in each lane marking condition. Cars
B1 and B2 were largely consistent in where they reported
pedestrian detections via the icons, detecting the pedestrian
slightly earlier (larger longitudinal values and negative lateral
values). In three of the six conditions in Table 5, B1 and B2 had
a median detection distance of 211 ft, which suggests this was
the center of the intended design envelope. B2 had the earliest
visualized detection at 373 ft, and was overall the most
consistent performer, but its shortest detection distance was 55
ft, less than a second from crossing the pedestrian path. Car B3
detected the pedestrian much later, first detecting the pedestrian
when the vehicle was about 100 feet further down the road and
the pedestrian was 10-20 feet further along its path.
When comparing the estimated marginal means for both the
lateral and longitudinal distance cases in Table 5 through
pairwise testing using a Bonferroni adjustment, cars B1 and B2
were statistically no different from one another, but car B3 was different (p < .001 for all comparisons, with an alpha = .017 for a familywise error correction). The longitudinal comparison between defined and degraded lanes was not statistically significant at p = .026, but it was for the lateral data (p < .001). This means that, on average, the pedestrian was detected earlier in its crossing path, left of center at -3.8 ft, in the degraded lane lines condition, but was not detected, on average, until right of center at 1.1 ft in the defined lane line condition.
Fig. 5. First pedestrian detection on the Model B internal
vehicle display for each test trial. X denotes a degraded lane
condition trial, while o denotes a defined lane trial. The
pedestrian crossed left to right.
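A minimal sketch of this pairwise procedure follows, with placeholder detection distances; plain two-sample t-tests stand in for the paper's estimated-marginal-means comparisons, and the .017 alpha falls out of dividing .05 across the three car pairs.

```python
# Bonferroni-adjusted pairwise comparisons with placeholder data.
from itertools import combinations
from scipy import stats

detections = {  # hypothetical longitudinal detection distances (ft)
    "B1": [211, 240, 195, 222, 230],
    "B2": [211, 230, 205, 218, 225],
    "B3": [110, 95, 130, 120, 105],
}
pairs = list(combinations(detections, 2))
alpha = 0.05 / len(pairs)  # = .017 familywise correction

for a, b in pairs:
    t_stat, p_val = stats.ttest_ind(detections[a], detections[b])
    verdict = "different" if p_val < alpha else "no difference"
    print(f"{a} vs {b}: p = {p_val:.4f} -> {verdict} at alpha = {alpha:.3f}")
```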
E. Aural Warnings
It has long been established that safety-critical warnings
should be dual coded, i.e., that they occur on at least two
sensory axes like visual and aural or aural and haptic [25].
However, aural warnings were present in only 37% of trials, and in only 2% did they co-occur with a visual warning, meaning that in the other 35%, there were intervals of 2 or more seconds between the visual and aural alerts. Figure 4 illustrates that only B2
consistently signaled an aural warning with the pedestrian
visualization and braking (19 out of 20 times), but only in the
degraded lane line condition. Similarly, B1 also only activated
an aural warning with braking and a pedestrian visualization in
the degraded lane lines condition, but only for 6 out of 20
events. However, neither B1 nor B2 signaled an urgent aural warning for the driver to take over in the degraded condition, and they triggered that warning only once and thrice, respectively, in the defined condition. Curiously, B3, which never triggered an
aural warning with the pedestrian visualization, signaled the
urgent warning 44% of the time in the degraded lane condition
and 36% in the defined lanes condition.
TABLE V
MEAN LATERAL & LONGITUDINAL DISTANCES (FT) TO
PEDESTRIAN VISUALIZATIONS. NEGATIVE NUMBERS MEAN THE
PEDESTRIAN WAS DETECTED TO THE LEFT OF THE CENTERLINE.
F. Initial Warnings in High-Risk Scenarios
Warnings that signal to the potentially distracted driver that
there is a person in the forward field are especially critical to
bringing the human back into the driving loop. However,
warnings must occur in a time frame that allow a human the
ability to perceive and respond to them [26]. In this experiment,
warnings that occurred at less than 159 ft collision distance, the distance needed to stop a car traveling at 40 mph, assuming a
human perception and response time of 1.5s [27], indicate
extremely high risk. For warnings that occurred less than 71 ft
from the pedestrian path, it would be nearly impossible for
drivers to bring the car to a stop, so swerving would be the only
mitigating action.
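The two thresholds can be reconstructed from first principles: an 88-ft reaction distance (1.5 s at 40 mph [27]) plus roughly 71 ft of braking distance. The braking figure below assumes a peak deceleration of about 0.76 g, a common dry-pavement value that the paper does not state explicitly, so treat the sketch as an approximation.

```python
# Reconstructing the 159-ft and 71-ft thresholds at 40 mph.
G_FPS2 = 32.2                   # gravitational acceleration, ft/s^2
speed = 40 * 5280 / 3600        # 40 mph -> ~58.7 ft/s

reaction = speed * 1.5                    # ~88 ft covered before braking begins
braking = speed**2 / (2 * 0.76 * G_FPS2)  # ~70 ft to stop (assumed 0.76 g)

print(f"reaction: {reaction:.0f} ft")
print(f"braking:  {braking:.0f} ft")
print(f"total:    {reaction + braking:.0f} ft")  # ~159 ft
```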
Table 6 shows those initial visual or aural warnings that
signaled the driver to the presence of a pedestrian inside this
high-risk window. 35% of events (43/123) occurred in this
window, with 28% (34/123) in the 71-159 ft range where
braking to a stop before the pedestrian was still possible and 7%
(9/123) in the 0-70 ft range where braking was impossible. It is
notable that for these critical warnings, only 12% were aural
warnings, which is important because human reaction times are
faster with an aural warning as compared to a visual one [26].
This is especially noteworthy because a hands-free driver may
not be looking at the screen and so any displayed pedestrian
icon could easily be missed.
Table 6 also shows that B3 was responsible for the bulk of the
high-risk first warning events. Table 5 indicates that the average
detection distances for B3’s trials in both lane conditions were
well below the 159 ft threshold. The majority (67%) of these
high-risk first warnings occurred in the defined lanes condition
and again, the majority (but not all) of these cases happened to
B3. The only event with no pedestrian icon displayed on the
monitor happened to B3 in the degraded lane lines condition.
At 1.6 s from crossing the pedestrian path, the car triggered the
urgent aural warning along with the forward collision warning. At 1.3
s the driver was shown the visual takeover warning.
The most urgent alert in Model B cars was the frontal collision
warning, which gives the driver visual and aural cues. As seen
in Fig. 4, there were very few frontal collision warnings (8 out
of 123, 6.5%). Car B1 never provided this warning and B2 only
signaled this once at 87 ft. The average distance where it
triggered for all cars was 84 ft (±10 ft), with the maximum at
105 ft and the minimum at 72 ft. Thus, this warning gave the
driver 1.2-1.8 s to successfully detect the condition and
respond, which would require the driver to pay perfect attention
and have very fast reflexes.
TABLE VI
HIGH-RISK (71 FT - 159 FT) AND LIKELY IMPOSSIBLE RECOVERY (<71 FT) ALERTING EVENTS

Event | 71-159 ft (1.2-2.7 s) | <71 ft (<1.2 s) | Total
First alert, visual N (B1/B2/B3) | 29 (4/4/21) | 9 (1/1/7) | 38
First alert, visual N (defined/degraded lanes) | 20/9 | 8/1 | 38
First alert, aural N (B1/B2/B3) | 0/0/5 | 0/0/0 | 5
First alert, aural N (defined/degraded lanes) | 1/4 | 0/0 | 5
Fig. 6. Distances between car and pedestrians when automated
acceleration events occurred
G. Automatic Acceleration Events
As noted in Table 3, there were three events where a Model B
car automatically accelerated in the presence of a pedestrian
after initially braking. The acceleration events were relatively mild, ranging from 0.1g to 0.13g, with resulting speed changes between 2 and 4 mph. All of the events occurred in the
defined lane quality condition, and each car experienced one
such event. These events are visualized in Fig. 6. For example,
B3 displayed the pedestrian visualization at 219 ft. The car then
executed an automated braking maneuver 171 ft from the
pedestrian, but then accelerated at 92 ft from the pedestrian,
even though it was detected. At that point the driver was also
presented with an alarm and a forward collision warning. None
of the three events were similar in the ordering or even the
nature of the events in the timeline. The only similarity across
all three events was that they occurred in the defined lanes
condition. This finding is unexpected since driving with lane
lines represents the ideal operational domain that the car is
expected to handle.
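A back-of-envelope check (below) shows that the reported 0.1-0.13 g accelerations, paired with 2-4 mph speed gains, imply roughly one to one-and-a-half seconds of sustained acceleration toward the pedestrian; the pairings of acceleration and speed change are assumed for illustration.

```python
# Implied duration of the automatic acceleration events.
G_FPS2 = 32.2
MPH_TO_FPS = 5280 / 3600

for accel_g, dv_mph in [(0.10, 2), (0.13, 4)]:  # assumed pairings
    duration = (dv_mph * MPH_TO_FPS) / (accel_g * G_FPS2)
    print(f"{accel_g:.2f} g, +{dv_mph} mph -> ~{duration:.1f} s of acceleration")
```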
IV. DISCUSSION
This study was designed to assess the variability of pedestrian
detection and driver alerting of ADAS-equipped cars that rely
upon computer vision. In addition, we also investigated whether
lane markings made a difference for two different car models.
The results revealed wide discrepancies across different kinds
of cars with onboard autonomy, and also significant variation
within individual vehicles of the same type.
One set of cars, Model A, never detected the pedestrian target,
despite having that stated capability and under ideal conditions
of a bright sunny day with no traffic or other feature diversity
that would be typical in an actual urban environment. The lack
of any detection or risk mitigation suggests that if the driver was
even slightly inattentive, the outcome could have been
disastrous.
The other set of cars, Model B, performed relatively better,
detecting the pedestrian in almost all scenarios. However,
despite the presentation of 123 nearly identical scenarios, the
three different cars in the Model B cluster did not exhibit
consistent performance. Cars B1 and B2 were more consistent
on where they first identified the crossing pedestrian as well as
executing automatic braking. However, these two cars were not
consistent with each other or with themselves in when or where
they provided the driver with aural and visual alerting.
To minimize risk, detection and alerting should have
happened in the upper left section of Fig. 5 (i.e., prior to the
pedestrian moving into the vehicle’s path) to give the driver
several seconds to see the pedestrian and slow down. However,
even the two Model B cars that detected pedestrians at more
than 240 ft from the car (4 s gap) still had very late detections
(Table 6). This variability makes it extremely difficult to
conduct thorough testing and pinpoint the problem, which is a
known challenge with computer vision systems that rely on
neural networks [28].
It is also quite noteworthy that more than a third of initial warning events occurred at distances where it is highly unlikely
that drivers would be able to effectively respond, especially if
they were distracted or had their hands off the steering wheel.
Any hesitation by the pedestrian, even with an attentive but hands-off driver in these scenarios, could have been fatal, as
recent research has shown that takeover time for people in cars
with advanced driving assist systems can range from 1.8 to 6.1
seconds [29].
It should also be noted that, while such events were rare, each of the Model B cars mildly accelerated before the pedestrian cleared the vehicle's path. In two cases, the car accelerated after
the car clearly detected the pedestrian, and in the third case,
accelerated after an automatic braking maneuver but before the
pedestrian icon was displayed. This behavior should be cause
for concern since it occurred in the best possible testing
conditions (defined lane lines, sunny day, dedicated test track),
and occurred at least once in all Model B vehicles.
Car B3 was a statistical anomaly in terms of where it detected
the pedestrian, providing significantly less advanced warning
than the other two vehicles. It typically alerted the driver to a
pedestrian well inside the 159 ft threshold required for an
attentive human to bring the car to an emergency stop. Car B3
almost always (92%) detected the pedestrian after crossing in
front of the vehicle, as opposed to the B1 and B2 which usually
detected the pedestrian before it crossed the centerline. Car B3
did provide a few additional urgent aural warnings other than
the pedestrian visualization, but these occurred inside the 159 ft
threshold where they would have failed to induce the driver to
successfully brake. Furthermore, B3 did not brake as often as the other two cars, and it applied aggressive braking only in the degraded lane condition, which was expected given the late detections; such braking was absent in the defined lane line condition.
Given B3’s unreliable and inadequate automatic braking, late
longitudinal detections and late warnings, it was distinctly less
safe than the other two Model B cars.
Another goal of this research was to determine whether the
degraded lane lines had a measurable impact on pedestrian
detection performance. They statistically appeared to influence
the magnitude of automatic braking, with the cars slowing
down more when the lane lines were degraded. In addition, B1
and B2 both consistently experienced the earliest pedestrian
detections in the degraded lanes condition (Table 5), and B3
appropriately aggressively braked for the pedestrian, but only
in the degraded lane condition (Table 4). These findings suggest that the cars’ computer vision systems detected pedestrians earlier, and in the case of B3 braked aggressively, when not tracking established lane lines. The presence of regular and
intermittent lane lines may decrease the vehicle vision system’s
prior expectation of pedestrians in the environment, delaying
pedestrian detection and action when the scene has fewer urban
characteristics. Thus, it is possible that active lane tracking
causes a blind spot for computer vision systems for obstacles
on the road’s edge.
With the expansion of urban highways and other multi-lane
high speed roads in many metropolitan areas, pedestrians and
other vulnerable road users at the road’s edge may be at
increased risk of a possible conflict even in areas with simple,
well-defined lane markings. Making matters worse, drivers may
associate roads with crisp white lines to be low-risk areas of
operation as they appear easier for the vehicle to interpret, and
so may be more inclined to divert their attention for longer
periods of times. Several studies have shown that drivers tend
to pay less attention in normal circumstances when using
ADAS [17, 29]. This is made significantly worse by the lack of
reliable coupled aural and visual warnings that would be critical
in alerting a distracted driver to a possible pedestrian.
These results also add to the growing body of literature [29]
that questions whether hands-free driving should be encouraged
by manufacturers in the presence of uncertain autonomous
vehicles behaviors, especially any environment where
pedestrians could be present. If vehicles cannot effectively warn drivers with enough time to react, then any claim that an ADAS system improves safety is, at best, performative.
A. Limitations
Testing actual vehicles on dedicated closed tracks is difficult
due to resource constraints, so results from such experiments
are typically limited by their small sample size. These results do not capture the diversity of outcomes representative of a full vehicle fleet. However, given that this study was designed to determine
the degree of variation for pedestrian detection systems, it
serves as a baseline for future studies.
Despite using the same make of car within each model class,
variation in software version was one potential confound across
the three vehicles. Given the current preference for over-the-air software updates by many automakers, it was not possible to guarantee that all vehicles were running the exact same software version, though no vehicle experienced a major
software update that added or removed features or overtly
modified the pedestrian response or lane detection systems.
Additionally, both models allowed users to set various
preferences in the vehicles, none of which had any direct
relation to the pedestrian detection systems. While the
researchers standardized all externally modifiable settings, it
was not possible to guarantee that some settings internal to the
vehicle software were consistent across all cars.
Another limitation of pedestrian detection testing is the use
of an inflatable soft pedestrian target. There are no standards for
pedestrian testing in the United States, which would fall under
the US government’s New Car Assessment Program (NCAP).
The Government Accountability Office (GAO) has been critical of the US
National Highway Traffic Safety Administration’s lack of
progress for these standards, which were proposed in 2015 but
never finalized [30]. The European Union has a standard for
pedestrian testing that includes using belt-driven systems (Euro
NCAP TB29) that move soft pedestrians along the road at the
ground level. However, these systems were beyond the reach of
this research effort.
The inflatable pedestrian target used in this study had
dimensions typical of the average American male (i.e., within
one standard deviation) and thus closely visually resembled a
pedestrian from distances of 100 feet or greater. The cable and
pulley system for moving the pedestrian was composed of a thin
overhead wire which was only visible within 180 feet, by which
point the pedestrian had already crossed the center line.
The pedestrian was covered in generic clothing (blue jeans
and a grey long sleeve cotton shirt) to achieve similar radar
reflectivity to a real pedestrian. That Model B cars reliably indicated the pedestrian via the pedestrian icon further supports that the target was typically perceived as such. Still,
the lack of realistic locomotion patterns such as swinging arms
and legs, could have been a confound at close distances.
Moreover, the difference in mass between the pedestrian target
and a real human could have affected its radar profile in Model
A cars despite negligible differences in reflectivity.
This testing occurred under ideal conditions during a sunny
day, so the results cannot be generalized to other operational
domains. Pedestrian detection systems typically perform much
worse at night [31], where 75% of pedestrian fatalities occur
[11, 12], so these results likely underestimate the actual risk.
V. CONCLUSION
These results raise an important issue surrounding the
predictability of an autonomous system that leverages neural
networks for decision making. While Model A never detected
the pedestrian and Model B almost always detected the
pedestrian, it should not be automatically assumed that one
system is better than the other. Indeed, Model B accelerated
towards the pedestrian in three cases, and the three Model B
cars were not consistent with each other or even internally. This
is especially problematic if Model B drivers learn to trust their
cars and assume they will always avoid the pedestrian, whereas
Model A drivers may remain vigilant if they expect they will
need to intervene. Model B drivers may then be surprised and
unprepared to take over in scenarios where an unexpected and
dangerous autonomous action occurs, like in the automatic
acceleration events. Such surprises in autonomous vehicles that seem to perform well are precursors for latent failures, priming the driver, as well as nearby stakeholders, for potentially deadly outcomes.
This issue of surprising behaviors more broadly speaks to the critical problems with significant variation both between cars of different models and within a single model or even a single vehicle. In current ADAS-equipped cars on US
roadways, there are a number of behaviors that can significantly
vary, potentially catching drivers off guard (like surprise
accelerations). In the aviation domain, there is a long history of operators becoming complacent and performing worse with imperfect or unreliable automation as compared to no automation or obviously bad automation [32-34], so it is critical that the autonomous
surface transportation community incorporate these lessons as
quickly as possible and improve driver monitoring and alerting.
Given these results and in light of manufacturers’ claims that
their ADAS features improve safety, the current vehicle
regulatory approval process of self-certification should be
questioned. More research is needed to determine “good
enough” performance standards for cars that embed
probabilistic reasoning, since it is clear from these results that
despite manufacturers’ promises of improved safety, vehicle
pedestrian detection can be highly variable, if it happens at all.
Furthermore, while this study only examined a single
pedestrian, further work is needed to determine how such
computer vision systems respond to multiple pedestrians.
This work supports the GAO recommendations for US NCAP
pedestrian testing specifications [30]. In addition, more testing
is needed to examine the impact of different lighting conditions
and different road markings, since this effort demonstrated that
lane markings could negatively influence outcomes. Also, more
testing is needed for pedestrian scenarios that contain multiple
vehicles, especially in light of the recent Cruise self-driving car
that struck a pedestrian after she was struck by a human-driven
car [35].
Lastly, while these results point to various issues that industry
and regulatory agencies need to consider, there are many
fundamental issues with artificial intelligence that should be
addressed by academia. The performance of neural networks is typically measured by prediction accuracy on artificial or highly-curated data sets. There needs to be more rigorous
testing of algorithms before they are published to ensure the
limitations of such approaches are well documented. Moreover,
it is possible that these cars’ computer vision systems originally
worked as intended but over time, they degraded due to model
drift. Thus, more work needs to be done to determine how and
why such model drift occurs in neural networks and how to
mitigate this problem.
There are many other basic issues that could be the source of
the problems highlighted in this study including poor quality
data and inappropriate modeling, so much more core research
is needed in the development and maintenance of high-quality
and trustworthy neural networks in computer vision systems.
ACKNOWLEDGMENTS
All testing was completed at NCCAR. Luisa Silva and
Yunseon (Chloe) Kang helped with the data analysis. Tate
Staples, Matthew Spores, and Tianxin Shen assisted in the
experiments. Neither author has any financial conflicts of
interest for this research.
REFERENCES
[1] H. Kim, M. Song, and Z. R. Doerzaph, "Real-World Use of
Automated Driving Systems and Their Consequences: A
Naturalistic Driving Data Analysis," Virginia Tech
Transportation Institute, Blacksburg, VA, VTTI-00-029,
November 2020.
[2] NHTSA, "Summary Report: Standing General Order on Crash
Reporting for Level 2 Advanced Driver Assistance Systems,"
Department of Transportation, Washington, DC, DOT HS 813
325, 2022.
[3] S. Waykole, N. Shiwakoti, and P. Stasinopoulos, "Review on
Lane Detection and Tracking Algorithms of Advanced Driver
Assistance System," Sustainability, vol. 13, no. 20, p. 11417,
2021.
[4] J. Wei, J. He, Y. Zhou, K. Chen, Z. Tang, and Z. Xiong,
"Enhanced Object Detection With Deep Convolutional Neural
Networks for Advanced Driving Assistance," IEEE Transactions
on Intelligent Transportation Systems, vol. 21, no. 4, pp. 1572–1583, 2020.
[5] A. Ziebinski, R. Cupek, D. Grzechca, and L. Chruszczyk,
"Review of advanced driver assistance systems (ADAS)," in
International Conference of Computational Methods in Sciences
and Engineering 2017 (ICCMSE-2017), 2017, vol. 1906, p.
120002, doi: 10.1063/1.5012394.
[6] J. Kim, D. S. Han, and B. Senouci, "Radar and Vision Sensor
Fusion for Object Detection in Autonomous Vehicle
Surroundings," in IEEE Tenth International Conference on
Ubiquitous and Future Networks (ICUFN), Prague, 2018, pp. 76-
78.
[7] M. L. Cummings and B. Bauchwitz, "Safety Implications of
Variability in Autonomous Driving Assist Performance," IEEE
Intelligent Transportation Systems, vol. 23, no. 8, pp. 12039-
12049, 2022.
[8] AAA, "AAA Testing Finds Inconsistencies with Driving
Assistance Systems," American Automobile Association,
Dearborn, MI, May 22 2022.
[9] A. Gross, "Consumer Skepticism Toward Autonomous Driving
Features Justified," vol. 2022, ed. Heathrow, FL: American
Automobile Association, 2022.
[10] M. Laris, "Fatal Uber crash spurs debate about regulation of
driverless vehicles," in Washington Post, ed. Washington DC:
Nash Holdings, 2018.
[11] J. B. Cicchino, "Effects of automatic emergency braking systems
on pedestrian crash risk," Accident Analysis & Prevention, 2022,
doi: 10.1016/j.aap.2022.106686.
[12] R. N. Rajaram, E. Ohn-Bar, and M. M. Trivedi, "An Exploration
of Why and When Pedestrian Detection Fails," in IEEE 18th
International Conference on Intelligent Transportation Systems,
Gran Canaria, Spain, 2015, pp. 2335–2340.
[13] K. Macek, "Pedestrian Traffic Fatalities by State: 2022
Preliminary Data," Governors Highway Safety Association,
Washington DC, 2023.
[14] Q. Zou, H. Jiang, Q. Dai, Y. Yue, L. Chen, and Q. Wang, "Robust
Lane Detection from Continuous Driving Scenes Using Deep
Neural Networks," IEEE Transactions on Vehicular Technology,
vol. 1, pp. 41-54, 2019, doi: 10.1109/TVT.2019.2949603.
[15] Q. Huang and J. Liu, "Practical limitations of lane detection
algorithm based on Hough transform in challenging scenarios,"
International Journal of Advanced Robotic Systems, pp. 1-13,
2021, doi: 10.1177/172988142110087.
[16] N. Dunn, T. Dingus, and S. Soccolich, "Understanding the Impact
of Technology: Do Advanced Driver Assistance and Semi-
Automated Vehicle Systems Lead to Improper Driving
Behavior?," AAA Foundation for Traffic Safety, Washington
DC, 2019.
[17] IIHS, "Drivers let their focus slip as they get used to partial
automation," Insurance Institute for Highway Safety, Arlington,
VA, Nov. 19 2020. [Online]. Available:
https://www.iihs.org/news/detail/drivers-let-their-focus-slip-as-
they-get-used-to-partial-automation
[18] J.D. Power, "J.D. Power 2022 Mobility Confidence Index Study,"
Troy, MI, 2022.
[19] A. S. Mueller, J. B. Cicchino, and J. V. Calvanelli, "Habits,
attitudes, and expectations of regular users of partial driving
automation systems," Insurance Institute for Highway Safety,
Arlington, VA, October 2022.
[20] FHWA (2022). Manual on Uniform Traffic Control Devices.
[21] J. S. Jermakian and D. S. Zuby, "Primary Pedestrian Crash
Scenarios: Factors Relevant to the Design of Pedestrian
Detection Systems," Insurance Institute for Highway Safety,
Arlington, VA, 2011.
[22] M. Schimpl et al., "Association between walking speed and age
in healthy, free-living individuals using mobile accelerometry
Across-sectional study," PLoS ONE, vol. 6, 2011.
[23] E. Roenitz, A. Happer, R. Johal, and R. Overgaard,
"Characteristic Vehicular Deceleration for Known Hazards," SAE
Transactions, vol. 108, no. 6, Journal of Passenger Cars, Part 1,
pp. 272-289, 1999, doi: 10.4271/1999-01-0098.
W. W. Wierwille, S. E. Lee, M. DeHart, and M. Perel, "Test road experiment on imminent warning rear lighting and signaling," Human Factors, vol. 48, pp. 615–626, 2006.
[25] M. S. Wogalter, Handbook of Warnings. Philadelphia, PA:
Lawrence Erlbaum Associates, 2006.
[26] C. D. Wickens, J. D. Lee, Y. Liu, and S. G. Becker, An
Introduction to Human Factors Engineering, 2nd ed. Upper
Saddle River, New Jersey: Pearson Education, Inc., 2004.
[27] NHTSA, (2018). Speed-Measuring Device Operator Training.
[Online] Available:
https://www.nhtsa.gov/sites/nhtsa.gov/files/documents/core_part
icipant_manual-smd-2018.pdf
[28] M. A. Flournoy, A. Haines, and G. Chefitz, "Building Trust
through Testing," WestExec Advisors., Washington DC, 2020.
[29] P. Gershon, B. Mehler, and B. Reimer, "Driver response and
recovery following automation initiated disengagement in real-
world hands-free driving," Traffic Injury Prevention, 2023, doi:
10.1080/15389588.2023.2189990.
[30] GAO, (2020). Pedestrian Safety: NHTSA Needs to Decide
Whether to Include Pedestrian Safety Tests in Its New Car
Assessment Program, Washington DC.
[31] IIHS. "Few vehicles excel in new nighttime test of pedestrian
autobrake." https://www.iihs.org/news/detail/few-vehicles-
excel-in-new-nighttime-test-of-pedestrian-autobrake (accessed
April 12, 2023).
[32] C. Wickens, B. Clegg, A. Vieane, and A. Sebok, "Complacency
and Automation Bias in the Use of Imperfect Automation,"
Human Factors, vol. 57, 2015, doi: 10.1177/0018720815581940.
[33] R. Parasuraman and D. H. Manzey, "Complacency and bias in
human use of automation: an attentional integration.," Human
Factors, vol. 52, no. 3, pp. 381-410, 2010.
[34] E. Rovira, K. McGarry, and R. Parasuraman, "Effects of
imperfect automation on decision making in a simulated
command and control task," Human Factors, vol. 49, no. 1, pp.
76-87, 2007.
[35] J. White, H. Jin, and D. Shepardson. "GM's Cruise may face fines
for 'misleading' regulator over accident." Reuters.
https://www.reuters.com/world/us/gm-cruise-could-face-state-
fines-over-oct-2-pedestrian-accident-2023-12-04/ (accessed 10
Dec., 2023).
Mary (Missy) Cummings (SM’03)
received her Ph.D. in Systems Engineering
from the University of Virginia in 2004. She
is a professor in the George Mason
University Mechanical, Electrical and
Computer Engineering and Computer
Science Departments. In October 2021, Prof.
Cummings resigned from a Tier 1
automotive supplier to become the Senior
Safety Advisor to the National Highway Traffic Safety
Administration.
Ben Bauchwitz is a graduate student in the
Duke University Department of Computer
Science. He received a Bachelor’s degree in
Brain and Cognitive Science from the
Massachusetts Institute of Technology. His
research focuses on evaluating safety and
establishing performance bounds for
autonomous systems.