PROCEEDINGS of the Eighth International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design
the Queuing Network Model uses a strategy in which the eyes fixate on the closest unattended visual
field with a target feature (Lim & Liu, 2004). These static values can be replaced with dynamic
values that reflect specific display characteristics. For instance, the Itti & Koch salience model
has been combined with the Distract-R driver model to predict driver distraction (Lee, 2014), where
the salience model captures bottom-up influences (e.g., visual salience) and the driver model captures
top-down influences (e.g., goals) on driving behavior. Integrating the present application into
these driver models will allow better simulation of driving with secondary tasks. Specifically,
such integration makes it possible to account for value- or expectation-driven (top-down) attention,
which might dominate the search for objects of interest in familiar displays, in addition to the
saliency of visual objects (bottom-up). It will help address the important fact that saliency is
not the only force governing glance duration and visual search.
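To make this combination concrete, the interplay of bottom-up and top-down influences could be sketched as a weighted sum of two priority maps. This is a minimal illustration under stated assumptions, not the actual Itti & Koch or Distract-R implementation; the maps, the weight parameter, and the `combined_priority` function are hypothetical.

```python
import numpy as np

# Minimal sketch (hypothetical, not the actual model code): combine a
# normalized bottom-up saliency map with a normalized top-down expectation
# map into a single attention-priority map.

def combined_priority(saliency, expectation, w_top_down=0.5):
    """Weighted sum of normalized maps; w_top_down lies in [0, 1]."""
    def normalize(m):
        m = np.asarray(m, dtype=float)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
    return (1.0 - w_top_down) * normalize(saliency) + w_top_down * normalize(expectation)

# Example: a 2x2 display. The top-left region is most salient, but a driver
# familiar with the display expects the target at the bottom-right.
saliency = [[0.9, 0.1], [0.1, 0.2]]
expectation = [[0.1, 0.1], [0.1, 0.9]]

priority = combined_priority(saliency, expectation, w_top_down=0.7)
# With a dominant top-down weight, search is directed to the expected
# location rather than the most salient one.
target = np.unravel_index(np.argmax(priority), priority.shape)  # -> (1, 1)
```

In this sketch, raising `w_top_down` shifts the predicted first fixation from the most salient region toward the expected one, which is the behavior the integrated driver models would need to reproduce for familiar displays.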
The current model captures important features of salience-driven attention, and so is an example
of a simple model that represents a specific human function and can address a limited set of
design issues (Rasmussen, 1983). It addresses the issue of misplaced salience based on a well-
established theory of visual perception and attention, and it has been validated on several test
datasets from other domains. As such, this tool represents an interactive design guideline
conveniently available on the web that can help designers address one contributor to visual
distraction. Nevertheless, further validation with vehicle display designs is required to fully
demonstrate the utility of this tool. Input from potential end-users (i.e., designers) will also help
refine the tool to better support the design process.
REFERENCES
Bylinskii, Z., Judd, T., Durand, F., Oliva, A., & Torralba, A. (2012). MIT saliency benchmark.
Henderson, J. M. (2003). Human gaze control during real-world scene perception. Trends in
Cognitive Sciences, 7(11), 498-504.
Henderson, J. M., & Hollingworth, A. (1999). High-level scene perception. Annual Review of
Psychology, 50(1), 243-271.
Huang, L., & Pashler, H. (2007). A Boolean map theory of visual attention. Psychological
Review, 114(3), 599.
Itti, L., & Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of
visual attention. Vision Research, 40(10), 1489-1506.
Judd, T., Ehinger, K., Durand, F., & Torralba, A. (2009). Learning to predict where humans
look. Proceedings of the IEEE International Conference on Computer Vision, 2106-2113.
Klauer, S. G., Dingus, T. A., Neale, V. L., Sudweeks, J. D., & Ramsey, D. J. (2006). The impact
of driver inattention on near-crash/crash risk: An analysis using the 100-car naturalistic
driving study data. Technical Report No. DOT HS 810 594. Washington, DC: National
Highway Traffic Safety Administration (NHTSA).
Land, M., Mennie, N., & Rusted, J. (1999). The roles of vision and eye movements in the control
of activities of daily living. Perception, 28(11), 1311-1328.