Archived project

Multilevel Traffic Simulation with Cognitive Basis (MULSIMCO) 2014-2018

Goal: Funded by the Academy of Finland, the project MULSIMCO produced new knowledge of driver behavior in car-following.

Researchers from Transportation Engineering at Aalto University and the University of Helsinki Traffic Research Unit studied car-following behavior in a 3D VR driving simulator and in field experiments.

The results showed that an individual driver's desired time gap correlated strongly with accelerative behavior: drivers keeping shorter time gaps accelerated and braked more forcefully. Together, the time gap and accelerations can be described as driving intensity, a characteristic of the individual's driving style. The time gap is also tightly connected with observation behavior: drivers who keep a longer "safety margin" glance at the leading vehicle less often.

These dependencies were modelled with a computational model that uses machine learning methods to describe a driver's cognition. Some microsimulation results were obtained (but not yet published), and development of the nanosimulation model for multi-agent traffic scenarios is ongoing.

In addition to empirical findings, the project supported substantial infrastructure development across several open-source projects (see https://github.com/jampekka?tab=repositories). These include:
- TRUsas: an integrated signal-acquisition system for laboratory and field measurements of multidimensional physiological data
- webtrajsim: the 3D VR driving simulator environment used in the experiments
- NSLR: the segmentation algorithm used for eye-movement event detection
- SCvideonaxu: a video-annotation tool

The computational driver model developed in this project forms the basis of a new project (the UPP-PERFORMANCE consortium between the University of Helsinki and the National Defence University of Finland), where it will be extended and applied to modelling Situation Awareness in complex dynamic control tasks.

The project supported researcher mobility between Aalto University and Chalmers Tekniska Högskola, and between the University of Helsinki and the University of Leeds and Université de Nantes.

Date: 1 September 2014 - 31 August 2018


Project log

Jami Pekkanen
added a research item
This thesis is an inquiry into how humans use their imperfect perception, limited attention and action under uncertainty to successfully conduct time-critical tasks. This is done in four studies. The experimental task in the first two is car following while being visually distracted. The third study presents a new method for analyzing eye-movement signals recorded in challenging recording environments. The method is applied in the fourth study to examine drivers' gaze strategies in curve driving. Using a driving simulator, Study I finds strong experimental evidence that drivers increase their headway to the leading vehicle in response to distraction. This finding is in line with traffic psychological theory and recent car following models and is used to experimentally evaluate quantitative forms of the models. Study I also finds indication that drivers' attention is affected by changes in the headway. Study II replicates the results of Study I using both virtual reality and a real car, and presents a computational model of drivers' cognitive processes. The model assumes that drivers use a stochastic internal representation of the environment for action selection and attention allocation. The model's behavior replicates the empirically observed connections between headway and attention. Study III develops a new method for analyzing eye-movement signals based on segmented linear regression and hidden Markov model classification. The method attains state-of-the-art performance in both gaze signal denoising and oculomotor event classification. Study IV measures drivers' eye-movements in curve driving and finds that drivers seem to track the road surface with their gaze, but can also steer successfully when the road is presented only as sparse waypoints. When waypoints are randomly omitted, gaze still seeks locations where a point would be expected. 
This result is difficult to explain with current models of steering and gaze behavior, and suggests that drivers likely employ an internal representation for steering. The findings are discussed in relation to theoretical questions regarding internal representations in control of action, attention allocation based on uncertainty and formalization of traffic psychological theories. Some future directions for modeling tasks with more complex control and attention allocation processes are outlined.
Otto Lappi
added a research item
Objective: To present a structured, narrative review highlighting research into human perceptual-motor coordination that can be applied to Automated Vehicle (AV)-Human ‘transitions’. Background: Manual control of vehicles is made possible by the coordination of perceptual-motor behaviours (gaze and steering actions), where active feedback loops enable drivers to respond rapidly to ever-changing environments. AVs will change the nature of driving to periods of monitoring followed by the human driver taking over manual control. The impact of this change is currently poorly understood. Method: We outline an explanatory framework for understanding control transitions based on models of human steering control. This framework can be summarised as a perceptual-motor loop that requires i) calibration and ii) gaze and steering coordination. A review of the current experimental literature on transitions is presented in the light of this framework. Results: The success of transitions is often measured using reaction times; however, the perceptual-motor mechanisms underpinning steering quality remain relatively unexplored. Conclusion: Modelling the coordination of gaze and steering, and the calibration of perceptual-motor control will be crucial to ensure safe and successful transitions out of automated driving. Application: This conclusion poses a challenge for future research on AV-Human transitions. Future studies need to provide an understanding of human behaviour which will be sufficient to capture the essential characteristics of drivers re-engaging control of their vehicle. The proposed framework can provide a guide for investigating specific components of human control of steering, and potential routes to improving manual control recovery.
Otto Lappi
added a research item
In car driving, gaze typically leads the steering when negotiating curves. The aim of the current study was to investigate whether drivers also use this gaze-leads-steering strategy when time-sharing between driving and a visual secondary task. Fourteen participants drove an instrumented car along a motorway while performing a secondary task: looking at a specified visual target as long and as much as they felt it was safe to do so. They made six trips, and in each trip the target was at a different location relative to the road ahead. They were free to glance back at the road at any time. Gaze behaviour was measured with an eye tracker, and steering corrections were recorded from the vehicle's CAN bus. Both in-car 'Fixation' targets and outside 'Pursuit' targets were used. Drivers often used a gaze-leads-steering strategy, glancing at the road ahead 200-600 ms before executing steering corrections. However, when the targets were less eccentric (requiring a smaller change in glance direction relative to the road ahead), the reverse strategy, in which glances to the road ahead followed steering corrections with 0-400 ms latency, was clearly present. The observed use of strategies can be interpreted in terms of predictive processing: The gaze-leads-steering strategy is driven by the need to update the visual information and is therefore modulated by the quality/quantity of peripheral information. Implications for steering models are discussed.
Jami Pekkanen
added a research item
We present a computational model of intermittent visual sampling and locomotor control in a simple yet representative task of a car driver following another vehicle. The model has a number of features that take it beyond the current state of the art in modelling natural tasks, and driving in particular. First, unlike most control theoretical models in vision science and engineering-where control is directly based on observable (optical) variables-actions are based on a temporally enduring internal representation. Second, unlike the more sophisticated engineering driver models based on internal representations, our model explicitly aims to be psychologically plausible, in particular in modelling perceptual processes and their limitations. Third, unlike most psychological models, it is implemented as an actual simulation model capable of full task performance (visual sampling and longitudinal control). The model is developed and validated using a dataset from a simplified car-following experiment (N = 40, in both three-dimensional virtual reality and a real instrumented vehicle). The results replicate our previously reported connection between time headway and visual attention. The model reproduces this connection and predicts that it emerges from control of action uncertainty. Implications for traffic psychological models and future developments for psychologically plausible yet computationally rigorous models of full natural task performance are discussed.
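The architecture described above (an internal estimate of headway that drifts between glances, with a new glance triggered when the estimate's uncertainty grows too large) can be conveyed with a minimal sketch. This is a deliberate simplification for illustration, not the published model; all variable names and parameter values are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch of uncertainty-driven intermittent sampling:
# the follower controls toward a desired time headway using a noisy
# internal estimate, and glances at the lead vehicle only when the
# estimate's uncertainty exceeds a threshold.
dt, T = 0.1, 60.0                  # timestep and trial length (s)
desired_thw = 2.0                  # desired time headway (s)
obs_noise, drift = 0.1, 0.05       # perceptual noise; uncertainty growth rate
uncertainty_threshold = 0.4        # glance when estimated sd exceeds this

true_thw = 2.0
est_thw, est_sd = 2.0, 0.0
glances = 0
for step in range(int(T / dt)):
    # Proportional control acts on the *estimated* headway
    accel = 0.5 * (est_thw - desired_thw)
    true_thw += -accel * dt + rng.normal(0.0, 0.02)  # lead-vehicle variability
    # Between glances the estimate is extrapolated and its uncertainty grows
    est_thw += -accel * dt
    est_sd += drift * dt
    if est_sd > uncertainty_threshold:  # sample: glance at the lead vehicle
        est_thw = true_thw + rng.normal(0.0, obs_noise)
        est_sd = obs_noise
        glances += 1
```

With these (made-up) parameters the simulated driver settles into roughly periodic glances several seconds apart while holding headway near the desired value, reproducing the qualitative headway-attention coupling the abstract describes.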
Otto Lappi
added a research item
The authors present an approach to the coordination of eye movements and locomotion in naturalistic steering tasks. It is based on recent empirical research, in particular, in driver eye movements, that poses challenges for existing accounts of how we visually steer a course. They first analyze how the ideas of feedback and feedforward processes and internal models are treated in control theoretical steering models within vision science and engineering, which share an underlying architecture but have historically developed in very separate ways. The authors then show how these traditions can be naturally (re)integrated with each other and with contemporary neuroscience, to better understand the skill and gaze strategies involved. They then propose a conceptual model that (a) gives a unified account of the coordination of gaze and steering control, (b) incorporates higher-level path planning, and (c) draws on the literature on paired forward and inverse models in predictive control. Although each of these (a–c) has been considered before (also in the context of driving), integrating them into a single framework and the authors’ multiple waypoint identification hypothesis within that framework are novel. The proposed hypothesis is relevant to all forms of visually guided locomotion. http://doi.org/10.1037/bul0000150
Otto Lappi
added a research item
We introduce a conceptually novel method for eye-movement signal analysis. The method is general in that it does not place severe restrictions on sampling frequency, measurement noise or subject behavior. Event identification is based on segmentation that simultaneously denoises the signal and determines event boundaries. The full gaze position time-series is segmented into an approximately optimal piecewise linear function in O(n) time. Gaze feature parameters for classification into fixations, saccades, smooth pursuits and post-saccadic oscillations are derived from human labeling in a data-driven manner. The range of oculomotor events identified and the powerful denoising performance make the method useable for both low-noise controlled laboratory settings and high-noise complex field experiments. This is desirable for harmonizing the gaze behavior (in the wild) and oculomotor event identification (in the laboratory) approaches to eye movement behavior. Denoising and classification performance are assessed using multiple datasets. Full open source implementation is included.
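The core segmentation idea above (fitting the gaze trace with a piecewise-linear function whose breakpoints double as candidate event boundaries) can be illustrated with a short sketch. The recursive endpoint-split below is a classic Ramer-Douglas-Peucker style simplification, used here only to convey the idea; it is not the NSLR algorithm itself, which uses a different, approximately optimal O(n) segmentation.

```python
import numpy as np

def segment_piecewise_linear(t, x, tol):
    # Recursively split the trace at the point of maximum deviation from
    # the chord between segment endpoints, until every segment fits
    # within tol (Ramer-Douglas-Peucker style illustration).
    def split(i, j, out):
        chord = x[i] + (x[j] - x[i]) * (t[i:j + 1] - t[i]) / (t[j] - t[i])
        dev = np.abs(x[i:j + 1] - chord)
        k = int(np.argmax(dev))
        if dev[k] > tol and 0 < k < j - i:
            split(i, i + k, out)
            out.append(i + k)
            split(i + k, j, out)
    inner = []
    split(0, len(t) - 1, inner)
    return [0] + inner + [len(t) - 1]

# Synthetic "gaze" trace: a fixation, a saccade-like jump, another fixation,
# with a small wobble standing in for measurement noise.
t = np.linspace(0.0, 1.0, 200)
x = np.where(t < 0.5, 0.0, 5.0) + 0.05 * np.sin(40 * t)
idx = segment_piecewise_linear(t, x, tol=0.5)
# Piecewise-linear interpolation through the breakpoints denoises the trace.
x_hat = np.interp(t, t[idx], x[idx])
```

The breakpoint the sketch places at the jump is the kind of boundary that, in the full method, would be handed to the hidden Markov model classifier to be labelled as a saccade.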
Otto Lappi
added 3 research items
Variation in longitudinal control in driving has been discussed in both traffic psychology and transportation engineering. Traffic psychologists have concerned themselves with “driving style”, a habitual form of behavior marked by its stability, and its basis in psychological traits. Those working in traffic microsimulation have searched for quantitative ways to represent different driver-car systems in car following models. There has been unfortunately little overlap or theoretical consistency between these literatures. Here, we investigated relationships between directly observable measures (time headway, acceleration and jerk) in a simulated driving task where the driving context, vehicle and environment were controlled. We found individual differences in the way a trade-off was made between close but jerky vs. far and smooth following behavior. We call these “intensive” and “calm” driving, and suggest this trade-off can serve as an indicator of a possible latent factor underlying driving style. We posit that pursuing such latent factors for driving style may have implications for modelling driver heterogeneity across various domains in traffic simulation.
In this paper we present and qualitatively analyze an expert driver’s gaze behaviour in natural driving on a real road, with no specific experimental task or instruction. Previous eye tracking research on naturalistic tasks has revealed recurring patterns of gaze behaviour that are surprisingly regular and repeatable. Lappi (doi: 10.1016/j.neubiorev.2016.06.006) identified in the literature seven “qualitative laws of gaze behaviour in the wild”: recurring patterns that tend to go together, the more so the more naturalistic the setting, all of them expected in extended sequences of fully naturalistic behaviour. However, no study to date has observed all of them in a single experiment. Here, we wanted to do just that: present observations supporting all the “laws” in a single behavioural sequence by a single subject. We discuss the laws in terms of unresolved issues in driver modelling and open challenges for experimental and theoretical development.
Car following (CF) models used in traffic engineering are often criticized for not incorporating “human factors” well known to affect driving. Some recent work has addressed this by augmenting the CF models with the Task-Capability Interface (TCI) model, by dynamically changing driving parameters as function of driver capability. We examined assumptions of these models experimentally using a self-paced visual occlusion paradigm in a simulated car following task. The results show strong, approximately one-to-one, correspondence between occlusion duration and increase in time headway. The correspondence was found between subjects and within subjects, on aggregate and individual sample level. The long time scale aggregate results support TCI-CF models that assume a linear increase in time headway in response to increased distraction. The short time scale individual sample level results suggest that drivers also adapt their visual sampling in response to transient changes in time headway, a mechanism which isn’t incorporated in the current models.
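The approximately one-to-one correspondence reported above can be written as a simple linear adaptation rule. The function name, symbols, and slope below are our own illustration of that reading of the TCI-CF assumption, not a formula from the paper.

```python
# Hedged illustration of the reported ~one-to-one relationship between
# self-paced occlusion duration and the increase in time headway.
def adapted_headway(base_thw_s, occlusion_s, slope=1.0):
    """Linear TCI-style adaptation: headway grows roughly one-to-one
    with the time the driver spends not looking at the road."""
    return base_thw_s + slope * occlusion_s

# A driver with a 1.5 s baseline headway who occludes vision for 1 s
# would be expected to settle near a 2.5 s headway.
print(adapted_headway(1.5, 1.0))  # → 2.5
```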
Otto Lappi
added a project goal