Bruce N. Walker

Georgia Institute of Technology, Atlanta, Georgia, United States

Publications (121) · 31.14 Total impact

  • Myounghoon Jeon · Bruce N. Walker · Thomas M. Gable
    ABSTRACT: Research has suggested that interaction with an in-vehicle software agent can improve a driver's psychological state and increase road safety. The present study explored the possibility of using an in-vehicle software agent to mitigate the effects of driver anger on driving behavior. After either anger or neutral mood induction, 60 undergraduates drove in a simulator with two types of agent intervention. Results showed that both speech-based agents not only enhanced driver situation awareness and driving performance, but also reduced drivers' anger levels and perceived workload. Regression models showed that a driver's anger influences driving performance measures, mediated by situation awareness. The practical implications include guidelines for the design of social interaction with in-vehicle software agents.
    Article · Nov 2015 · Applied Ergonomics
  • ABSTRACT: Driving research has recently seen a surge in the collection and use of physiological measurements. This use of physiological data is often part of an attempt to either measure cognitive load or detect affective states. While these measures are becoming more popular, many driving researchers remain unsure of the best methods for data collection and analysis. This discussion panel will center on a question-and-answer session between the audience and experts in the collection and analysis of physiological measures in driving research, with the aims of helping researchers increase their productivity in this space and providing a forum for frank discussion of methods in the area.
    Article · Sep 2015
  • Michael A. Nees · Bruce N. Walker
    ABSTRACT: An experiment examined performance with sonifications—a general term for nonspeech auditory displays—as a function of working memory encoding and the demands of three different types of interference tasks. Participants encoded the sonifications as verbal representations, visuospatial images, or auditory images. After encoding, participants engaged in brief verbal, visuospatial, or auditory interference tasks before responding to point estimation queries about the sonifications. Results were expected to show selective impact on sonification task performance when the interference task demands matched the working memory encoding strategy, but instead a pattern of general working memory interference emerged in addition to auditory modal interference. In practical applications, results suggested that performance with auditory displays will be impacted by any interference task, though auditory tasks likely will cause more interference than verbal or visuospatial tasks.
    Conference Paper · Oct 2014
  • ABSTRACT: There is growing concern within our academic community that academia has become a less-than-optimal option for new graduates. As our discipline is strongest when there is an appropriate balance between academia and industry, maintaining a strong academic workforce remains critical. However, apprehension exists among students regarding the viability of academic careers. Of specific concern are the very high expectations for tenure. Although such expectations may be accurate for some high-performing institutions, a more accurate depiction of the variety of academic positions is needed. This panel will allow for an open discussion between those interested in academic careers and panelists with a wide range of academic experiences. Although tenure will be a major component of the discussion, interactions will also include best practices and tips for academic success.
    Article · Oct 2014
  • ABSTRACT: Displaying multiple variables or data sets within a single sonification has been identified as a challenge for the field of auditory display research. We discuss our recent study evaluating the usability of a sonification that contains multiple variables presented in a way that encouraged perception across multiple auditory streams. We measured listener comprehension of weather sonifications that include the variables of temperature, humidity, wind speed, wind direction, and cloud cover. Listeners could accurately identify trends in five concurrent variables presented together in a single sonification. This demonstrates that it is indeed possible to include multiple variables together within an auditory stream, and thus a greater number of variables within a sonification. (A minimal code sketch of one such multi-variable mapping follows this entry.)
    Article · Oct 2014
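    The following minimal Python sketch illustrates the kind of multi-variable mapping described above. The specific mappings (temperature to tone pitch, wind speed to tremolo rate, cloud cover to the loudness of a noise stream) and all parameter values are illustrative assumptions, not the design evaluated in the study, which used five weather variables.

        # weather_sonify.py - illustrative sketch, not the published system.
        # Three normalized weather series are rendered as concurrent streams:
        # temperature -> tone pitch, wind speed -> tremolo rate,
        # cloud cover -> loudness of a broadband noise stream.
        import numpy as np
        import wave

        SR = 44100  # audio sample rate (Hz)

        def sonify(temps, winds, clouds, seconds_per_sample=1.0):
            """Each argument is a sequence of values normalized to 0..1."""
            n = int(SR * seconds_per_sample)
            t = np.arange(n) / SR
            rng = np.random.default_rng(0)
            chunks = []
            for temp, wind, cloud in zip(temps, winds, clouds):
                freq = 220 + 660 * temp                                    # 220-880 Hz
                tone = np.sin(2 * np.pi * freq * t)
                trem = 0.5 * (1 + np.sin(2 * np.pi * (1 + 7 * wind) * t))  # 1-8 Hz
                noise = cloud * rng.normal(0.0, 0.1, n)
                chunks.append(0.5 * tone * trem + noise)
            return np.concatenate(chunks)

        def write_wav(path, signal):
            pcm = (np.clip(signal, -1, 1) * 32767).astype(np.int16)
            with wave.open(path, "wb") as f:
                f.setnchannels(1)        # mono
                f.setsampwidth(2)        # 16-bit samples
                f.setframerate(SR)
                f.writeframes(pcm.tobytes())

        # Example: a day that warms up while the wind rises and clouds clear.
        write_wav("weather.wav", sonify(temps=[0.2, 0.5, 0.8],
                                        winds=[0.1, 0.4, 0.7],
                                        clouds=[0.9, 0.5, 0.1]))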
  • Michael A. Nees · Thomas M. Gable · Myounghoon Jeon · Bruce N. Walker
    ABSTRACT: We describe work-in-progress prototypes of auditory displays for fuel efficiency driver interfaces (FEDIs). Although research has established that feedback from FEDIs can have a positive impact on driver behaviors associated with fuel economy, the impact of FEDIs on driver distraction has not been established. Visual displays may be problematic for providing this feedback; it is precisely during fuel-consuming behaviors that drivers should not divert attention away from the driving task. Auditory displays offer a viable alternative to visual displays for communicating information about fuel economy to the driver without introducing visual distraction.
    Conference Paper · Jun 2014
  • ABSTRACT: Auditory display research for driving has mainly examined a limited range of tasks (e.g., collision warnings, cell phone tasks). In contrast, the goal of this project was to evaluate the effectiveness of enhanced auditory menu cues in a simulated driving context. The advanced auditory cues of ‘spearcons’ (compressed speech cues) and ‘spindex’ (a speech-based index cue) were predicted to improve both menu navigation and driving. Two experiments used a dual-task paradigm in which users selected songs on the vehicle’s infotainment system. In Experiment 1, 24 undergraduates played a simple, perceptual-motor ball-catching game (the primary task; a surrogate for driving) and navigated through an alphabetized list of 150 song titles, rendered as an auditory menu, as a secondary task. The menu was presented either in the typical visual-only manner, enhanced with text-to-speech (TTS), or enhanced with TTS plus one of three types of additional auditory cues. In Experiment 2, 34 undergraduates performed the same secondary task while driving in a simulator. In both experiments, performance on both the primary task (success rate in the game or driving performance) and the secondary task (menu search time) was better with the auditory menus than with no sound. Perceived workload scores, as well as user preferences, favored the enhanced auditory cue types. These results show that adding audio, and enhanced auditory cues in particular, can allow a driver to operate the menus of in-vehicle technologies more efficiently while driving more safely. Results are discussed in terms of multiple resources theory. (A minimal sketch of spearcon generation follows this entry.)
    Article · May 2014 · International Journal of Human-Computer Interaction
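    To make the spearcon technique above concrete: a spearcon is typically produced by time-compressing a text-to-speech rendering of a menu item until it is no longer comprehensible as speech, while leaving pitch unchanged. The minimal sketch below assumes a pre-rendered TTS clip on disk and uses librosa's phase-vocoder time stretch; the compression factor and file names are illustrative assumptions, not the cues used in these experiments.

        # spearcon_sketch.py - illustrative sketch of spearcon generation.
        import librosa
        import soundfile as sf

        def make_spearcon(tts_wav_path, out_path, rate=2.5):
            """Time-compress spoken menu-item audio by `rate`, pitch preserved."""
            y, sr = librosa.load(tts_wav_path, sr=None)        # keep native rate
            fast = librosa.effects.time_stretch(y, rate=rate)  # phase vocoder
            sf.write(out_path, fast, sr)

        # Example: a song title spoken by any TTS engine, then compressed.
        make_spearcon("bohemian_rhapsody_tts.wav",
                      "bohemian_rhapsody_spearcon.wav")

    A spindex cue, by contrast, is a brief spoken rendering of each item's initial letter ("A", "B", ...), played as the user scrolls, so listeners can skim to the right alphabetical region of a long list before attending to full item names.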
  • Masoud Gheisari · Javier Irizarry · Bruce N. Walker

    Conference Paper · May 2014
  • ABSTRACT: In this study we present the results of evaluating the sonification protocol of a new assistive product that aims to help visually impaired people perceive their surroundings through sounds organized into different cognitive profiles. The evaluation was carried out with 17 sighted and 11 visually impaired participants. The experiment spanned both virtual and real environments and was divided into four virtual-reality-based tests and one real-life test. Finally, four participants became experts through longer, deeper training and then took part in a focus group at the end of the process. Both quantitative and qualitative results showed that the proposed system can effectively represent the spatial configuration of objects through sounds. However, important limitations were found in the sample used (some important demographic characteristics were intercorrelated, preventing segregated analysis), in the usability of the most complex profile, and in the particular difficulties faced by totally blind participants relative to sighted and low-vision ones.
    Article · May 2014
  • Myounghoon Jeon · Bruce N. Walker · Jung-Bin Yim
    ABSTRACT: The aim of this paper was to explore the effects of specific emotions on subjective judgment, driving performance, and perceived workload. Traditional driving behavior research has focused on cognitive aspects such as attention, judgment, and decision making. Psychological findings have indicated that affective states also play a critical role in a user’s rational, functional, and intelligent behaviors. Most applied emotion research has concentrated on the simple valence and arousal dimensions. However, recent findings have indicated that different emotions may have different impacts, even when they share the same valence or arousal. To identify more specific affective effects, seventy undergraduate participants drove in a vehicle simulator under three different road conditions, with one of the following induced affective states: anger, fear, happiness, or neutral. We measured their subjective judgments of driving confidence, risk perception, and safety level after affect induction; four types of driving errors (lane keeping, traffic rules, aggressive driving, and collisions) while driving; and the electronic NASA-TLX after driving. Induced anger clearly showed negative effects on subjective safety level and led to degraded driving performance compared to neutral and fear. Happiness also degraded driving performance compared to neutral and fear. Fear did not have any significant effect on subjective judgment, driving performance, or perceived workload. Results suggest that we may need to take emotions and affect into account to construct a naturalistic and generic driving behavior model. To this end, a specific-affect approach is needed, beyond the sheer valence and arousal dimensions. Given that workload results were similar across affective states, examining affective effects may also require a different approach than the perceived workload framework alone. The present work is expected to guide emotion detection research and to help develop an emotion regulation model and adaptive interfaces for drivers.
    Article · May 2014 · Transportation Research Part F: Traffic Psychology and Behaviour
  • Article · Apr 2014 · Journal of the Audio Engineering Society
  • ABSTRACT: This paper presents a survey of existing sonification systems used to represent visual scenes, analyzes their characteristics, and proposes a taxonomy of this set of algorithms and devices. The classification is non-exclusive, since many parameters of any sonification procedure act as independent variables and can be recombined in other sets. Although many of these algorithms have been proposed in the field of assistive technology, and most of the examples come from that field, we focus only on auditory aspects, avoiding analysis in terms of mobility, rehabilitation, or subjective perception. We propose two main categories for classifying every sonification algorithm, the psychoacoustic and the artificial, plus a third that mixes properties of the two. We use classic paradigms such as the pitch, piano, and point transforms, as well as some new subsets. A final summary of 25 different assistive products is given, classified according to our scheme. (A minimal sketch of the pitch-transform paradigm follows this entry.)
    Article · Mar 2014 · Journal of the Audio Engineering Society
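    To make the "pitch transform" paradigm above concrete, here is a minimal sketch in the spirit of systems such as The vOICe: the image is scanned column by column, each row is assigned a sine-wave frequency (higher rows sound higher), and pixel brightness sets that sine's amplitude. All parameter choices are illustrative assumptions rather than any surveyed product's actual design.

        # pitch_transform_sketch.py - illustrative sketch of a pitch transform.
        import numpy as np

        SR = 22050        # audio sample rate (Hz)
        COL_DUR = 0.05    # seconds of audio per image column

        def pitch_transform(image):
            """image: 2-D float array, values 0..1, row 0 = top of the scene."""
            rows, cols = image.shape
            freqs = np.geomspace(4000.0, 200.0, rows)  # top row high, bottom low
            t = np.arange(int(SR * COL_DUR)) / SR
            bank = np.sin(2 * np.pi * np.outer(freqs, t))  # one sine per row
            # Each column becomes a brightness-weighted mix of the sine bank.
            return np.concatenate([image[:, c] @ bank for c in range(cols)])

        # Example: a bottom-left-to-top-right diagonal edge yields a rising sweep.
        signal = pitch_transform(np.eye(32)[::-1])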
  • Myounghoon Jeon · Bruce N. Walker · Thomas M. Gable
    ABSTRACT: Research has suggested that emotional states have critical effects on various cognitive processes, which are important components of situation awareness (Endsley, 1995b). Evidence from driving studies has also emphasized the importance of driver situation awareness for performance and safety. However, to date, little research has investigated the relationship between emotional effects and driver situation awareness. In our experiment, 30 undergraduates drove in a simulator after induction of either anger or neutral affect. Results showed that an induced angry state can degrade driver situation awareness as well as driving performance compared to a neutral state. However, the angry state did not have an impact on participants' subjective judgment or perceived workload, which might imply that the effects of anger occurred below their level of conscious awareness. One reason participants failed to compensate for their performance deficits might be that they were not aware of the severe impact of emotion on driving performance.
    Article · Feb 2014 · Presence: Teleoperators & Virtual Environments
  • Conference Paper: Auditory weather reports

    Conference Paper · Jan 2014
  • Michael A. Nees · Bruce N. Walker
    ABSTRACT: Dual-process accounts of working memory have suggested distinct encoding processes for verbal and visual information in working memory, but encoding for nonspeech sounds (e.g., tones) is not well understood. This experiment modified the sentence-picture verification task to include nonspeech sounds, with a complete factorial examination of all possible stimulus pairings. Participants studied simple stimuli (pictures, sentences, or sounds) and encoded the stimuli verbally, as visual images, or as auditory images. Participants then compared their encoded representations to verification stimuli (again pictures, sentences, or sounds) in a two-choice reaction time task. With some caveats, the encoding strategy appeared to be as important as, or more important than, the external format of the initial stimulus in determining the speed of verification decisions. Findings suggested that: (1) auditory imagery may be distinct from verbal and visuospatial processing in working memory; (2) visual perception, but not visual imagery, may automatically activate concurrent verbal codes; and (3) the effects of hearing a sound may linger for some time despite recoding in working memory. We discuss the role of auditory imagery in dual-process theories of working memory.
    Article · Nov 2013 · Journal of Cognitive Psychology
  • Richard Swette · Keenan R. May · Thomas M. Gable · Bruce N. Walker
    ABSTRACT: Three novel interfaces for navigating a hierarchical menu while driving were experimentally evaluated. The prototypes utilized redundant visual and auditory feedback (multimodal) and were compared to a conventional direct-touch interface. All three multimodal prototypes employed an external touchpad, separate from the infotainment display, to afford simple eyes-free gesturing. Participants performed a basic driving task while concurrently using these prototypes to perform menu selections. Mean lateral lane deviation, eye movements, secondary task speed, and self-reported workload were assessed for each condition. Of all conditions, swiping the touchpad to move one-by-one through menu items yielded significantly smaller lane deviations than direct touch. In addition, in the serial swipe condition, the same amount of time spent looking at the prototype was distributed over a longer interaction time. The remaining multimodal conditions allowed users to feel around a pie or list menu to find touchpad zones corresponding to menu items, allowing for either exploratory browsing or shortcuts. This approach, called GRUV, was ineffective compared to serial swiping and direct touch, possibly due to its uninterruptible interaction pattern and overall novelty. The proposed explanation for the performance benefits of the serial swiping condition is that it afforded flexible subtasking and incremental progress, in addition to providing multimodal output.
    Conference Paper · Oct 2013
  • ABSTRACT: In-vehicle technologies can create dangerous situations through driver distraction. In recent years, research has focused on driver distraction from communications technologies, but other tasks, such as scrolling through a list of songs or names, can also carry high attention demands. Research has revealed that the use of advanced auditory cues for in-vehicle technology interaction can decrease cognitive demand and improve driver performance compared to a visual-only system. This paper discusses research investigating the effects of applying advanced auditory cues to a search task on a mobile device while driving, focusing particularly on visual fixation. Twenty-six undergraduates performed a search task through a list of 150 songs on a cell phone while performing the lane change task and wearing eye-tracking glasses. Eye-tracking data, performance, workload, and preferences were collected for six conditions. Compared to no sound, visual fixation time on driving and user preferences were significantly higher with the advanced auditory cue of spindex. Results suggest greater visual availability for driving when the spindex cue is applied to the search task, and provide further evidence that these advanced auditory cues can lessen distraction from driving while using mobile devices to search for items in lists.
    Conference Paper · Oct 2013
  • Jared M. Batterman · Jonathan H. Schuett · Bruce N. Walker
    ABSTRACT: In this paper we address the lack of accessibility in fantasy sports for visually impaired users and discuss the accessible fantasy sports system that we have designed using auditory displays. Fantasy sports are a fun and social activity in which users make decisions about their fantasy teams, which earn points from real athletes' weekly performances and compete against other users' fantasy teams. Fantasy players manage their teams by making informed decisions using statistics drawn from real sports data. These statistics are usually presented online in a spreadsheet layout; however, online fantasy sports are often inaccessible to screen readers due to the use of Flash on most sites. Our current system, described in this paper, utilizes auditory display techniques such as auditory alerts, earcons, spearcons, general text-to-speech, and auditory graphs to present sports statistics to visually impaired fantasy users. The current version of the system was designed based on feedback from fantasy sports users gathered during a series of think-aloud walkthroughs. (A minimal sketch of an auditory graph follows this entry.)
    Conference Paper · Oct 2013
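    As an illustration of the auditory-graph component mentioned above, the sketch below maps a data series to tone pitch over time, the basic frequency-mapping approach common in auditory-graph work. The frequency range, tone duration, and example data are illustrative assumptions, not this system's actual design.

        # auditory_graph_sketch.py - illustrative sketch of an auditory graph.
        # Each data point becomes a short tone; higher values sound higher.
        import numpy as np

        SR = 22050       # audio sample rate (Hz)
        TONE_DUR = 0.25  # seconds per data point

        def auditory_graph(values, f_lo=220.0, f_hi=880.0):
            """Map a data series onto tone pitches between f_lo and f_hi."""
            v = np.asarray(values, dtype=float)
            span = v.max() - v.min()
            norm = (v - v.min()) / (span if span else 1.0)  # rescale to 0..1
            t = np.arange(int(SR * TONE_DUR)) / SR
            return np.concatenate(
                [np.sin(2 * np.pi * (f_lo + (f_hi - f_lo) * x) * t) for x in norm])

        # Example: hypothetical weekly fantasy point totals for one player.
        signal = auditory_graph([12, 18, 9, 24, 21, 30])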
  • ABSTRACT: The human factors discipline has always benefited from a strong connection between industry and academia. However, the increasing need for an educated industry workforce has created a potential concern about maintaining a viable academic workforce. Students, in particular, have previously voiced apprehension regarding academic careers when compared to industry options. The balance between industry and academia should be preserved. Therefore, to aid in this equilibrium, an open discussion centered on student inquiries about early academic careers is needed to maintain an understanding of the current academic environment. Specifically, the most beneficial interaction may be discussion between those interested in academia and those currently engaged in multiple facets of early academic careers.
    Article · Sep 2013
  • Jonathan H. Schuett · Bruce N. Walker
    ABSTRACT: When the goal of an auditory display is to provide inference or intuition to a listener, it is important for researchers and sound designers to gauge users' comprehension of the display to determine if they are, in fact, receiving the correct message. This paper discusses an approach to measuring listener comprehension in sonifications that contain multiple concurrently presented data series. We draw from situation awareness research that has developed measures of comprehension within environments or scenarios, based on our view that an auditory scene is similar to a virtual or mental representation of the listener's environment.
    Conference Paper · Sep 2013