Brianna J. Tomlinson’s research while affiliated with Georgia Institute of Technology and other places


Publications (15)


The importance of incorporating risk into human-automation trust
  • Article

October 2021 · 79 Reads · 27 Citations · Theoretical Issues in Ergonomics Science

Brianna J. Tomlinson · Bruce N. Walker

A key psychological component of interactions in both human-human and human-automation relationships is trust. Although trust has repeatedly been conceptualized as having a component of risk, the role risk plays, as well as which elements of risk impact trust (e.g., perceived risk, risk-taking propensity), has not been clearly explained. A review of the foundational theories of trust makes clear that trust is only needed when risk exists or is perceived to exist, in both human-human and human-automation contexts. Within the limited research that has explored human-automation trust and risk, it has been found that the presence of risk and a participant’s perceived situational risk impact their behavioural trust in the automation. In addition, perceived relational risk has a strong negative relationship with trust. We provide an enhanced model of trust to demonstrate how risk interacts with trust, incorporating these distinct perceived risks as well as risk-taking propensity. The model identifies the unique interactions of these components with trust, based on both the theory reviewed and the studies that have explored aspects of these relationships. Guidelines are provided for improving the study of human-automation trust via the incorporation of risk.


Identifying and evaluating conceptual representations for auditory-enhanced interactive physics simulations

March 2021 · 22 Reads · 2 Citations · Journal on Multimodal User Interfaces

Interactive simulations are tools that can help students understand and learn about complex relationships. While most simulations are primarily visual, largely for historical reasons, sound can be used to add to the experience. In this work, we evaluated sets of audio designs for two different, but contextually and visually similar, simulations. We identified key aspects of the audio representations and the simulation content which needed to be evaluated, and compared designs across the two simulations to understand which auditory designs could generalize to other simulations. To compare the designs and explore how audio affected a user’s experience, we measured preference (through usability, user experience, and open-ended questions) and interpretation accuracy for different aspects of the simulation (including the main relationships and control feedback). We suggest important characteristics to represent through audio in future simulations, provide sound design suggestions, and address how overlap between visual and audio representations can support learning opportunities.


Spotlights and Soundscapes: On the Design of Mixed Reality Auditory Environments for Persons with Visual Impairment

April 2020 · 69 Reads · 32 Citations · ACM Transactions on Accessible Computing

Brianna J. Tomlinson · Xiaomeng Ma · [...] · Bruce N. Walker

For persons with visual impairment, forming cognitive maps of unfamiliar interior spaces can be challenging. Various technical developments have converged to make it feasible, without specialized equipment, to represent a variety of useful landmark objects via spatial audio, rather than solely dispensing route information. Although such systems could be key to facilitating cognitive map formation, high-density auditory environments must be crafted carefully to avoid overloading the listener. This article recounts a set of research exercises with potential users, in which the optimization of such systems was explored. In Experiment 1, a virtual reality environment was used to rapidly prototype and adjust the auditory environment in response to participant comments. In Experiment 2, three variants of the system were evaluated in terms of their effectiveness in a real-world building. This methodology revealed a variety of optimization approaches and recommendations for designing dense mixed-reality auditory environments aimed at supporting cognitive map formation by visually impaired persons.
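The design space here can be made concrete with a small example. Below is a minimal Python sketch of rendering one landmark as a spatialized beacon, assuming a simplified 2D model: constant-power stereo panning derived from the landmark's bearing, plus a linear distance roll-off that bounds each landmark's audible "spotlight". The function name, parameters, and roll-off are illustrative assumptions; the system described in the article used a richer mixed-reality spatial audio environment.

```python
import math

def landmark_gains(listener_xy, listener_heading_rad, landmark_xy,
                   max_audible_m=20.0):
    """Stereo gains for one landmark beacon (illustrative sketch only).

    Returns (left, right) gains in [0, 1]; (0, 0) means the landmark
    is outside its audible "spotlight" radius.
    """
    dx = landmark_xy[0] - listener_xy[0]
    dy = landmark_xy[1] - listener_xy[1]
    distance = math.hypot(dx, dy)
    if distance > max_audible_m:
        return 0.0, 0.0  # too far away to sound

    # Bearing of the landmark relative to where the listener is facing.
    azimuth = math.atan2(dy, dx) - listener_heading_rad
    # Map bearing to a pan position in [-1, +1]; the left/right sign
    # convention is chosen arbitrarily for this sketch.
    pan = max(-1.0, min(1.0, math.sin(azimuth)))
    # Constant-power panning: total loudness stays roughly constant
    # as the landmark sweeps across the stereo field.
    left = math.cos((pan + 1.0) * math.pi / 4.0)
    right = math.sin((pan + 1.0) * math.pi / 4.0)
    # Simple linear distance roll-off within the spotlight.
    attenuation = 1.0 - distance / max_audible_m
    return left * attenuation, right * attenuation
```

Constant-power panning is a deliberate choice in this sketch: it keeps perceived loudness stable as a landmark moves across the stereo field, which matters when several beacons sound at once in a dense auditory environment.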




Accessible Video Calling: Enabling Nonvisual Perception of Visual Conversation Cues

November 2019 · 36 Reads · 20 Citations · Proceedings of the ACM on Human-Computer Interaction

Nonvisually Accessible Video Calling (NAVC) is a prototype that detects visual conversation cues in a video call and uses audio cues to convey them to a user who is blind or low-vision. NAVC uses audio cues inspired by movie soundtracks to convey Attention, Agreement, Disagreement, Happiness, Thinking, and Surprise. When designing NAVC, we partnered with people who are blind or low-vision through a user-centered design process that included need-finding interviews and design reviews. To evaluate NAVC, we conducted a user study with 16 participants. The study provided feedback on the NAVC prototype and showed that the participants could easily discern some cues, like Attention and Agreement, but had trouble distinguishing others. The accuracy of the prototype in detecting conversation cues emerged as a key concern, especially in avoiding false positives and in detecting negative emotions, which tend to be masked in social conversations. This research identified challenges and design opportunities in using AI models to enable accessible video calling.
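As a rough sketch of the pipeline the abstract describes, the Python fragment below maps detected cue events to audio cues while filtering out low-confidence detections, echoing the paper's concern about false positives. The cue labels, file names, threshold, and data structures are illustrative assumptions, not details of the NAVC prototype.

```python
from dataclasses import dataclass

# Hypothetical cue-to-sound mapping; the actual NAVC cue sounds were
# designed by the authors and are not reproduced here.
CUE_SOUNDS = {
    "attention": "attention.wav",
    "agreement": "agreement.wav",
    "disagreement": "disagreement.wav",
    "happiness": "happiness.wav",
    "thinking": "thinking.wav",
    "surprise": "surprise.wav",
}

@dataclass
class CueEvent:
    label: str         # e.g. "agreement"
    confidence: float  # detector confidence in [0, 1]

def sounds_for_events(events, threshold=0.8):
    """Keep only high-confidence cues to limit false positives, then
    map each surviving cue to its audio file (threshold is assumed)."""
    return [CUE_SOUNDS[e.label] for e in events
            if e.label in CUE_SOUNDS and e.confidence >= threshold]
```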


Sonic Information Design for Science Education

November 2018 · 40 Reads · 9 Citations · Ergonomics in Design: The Quarterly of Human Factors Applications

The PhET project is a collection of over 130 interactive simulations (or “sims”) designed to teach physics concepts to students from elementary to university levels. The sims rely heavily on visual representation, making them inaccessible to students with disabilities, including those with visual impairments. We present the theory, methods, and process behind our audio design and provide example mapping strategies from two of the simulations. We compare physical, abstract, and musical mapping strategies, noting the strengths of each. We conclude with design recommendations that have arisen in our work and which we think would benefit the field at large.
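To make the comparison of mapping strategies concrete, here is a minimal Python sketch contrasting a continuous "physical" mapping of a simulation value onto frequency with a "musical" mapping that quantizes the same value onto a pentatonic scale. The frequency range, scale choice, and function names are illustrative assumptions, not the PhET sound designs themselves.

```python
import math

A4 = 440.0  # reference pitch in Hz

def physical_mapping(value, vmin, vmax, f_lo=200.0, f_hi=1000.0):
    """Map a simulation value onto a continuous frequency range.

    Interpolates exponentially so equal value steps sound like
    equal pitch steps.
    """
    t = (value - vmin) / (vmax - vmin)
    return f_lo * (f_hi / f_lo) ** t

MAJOR_PENTATONIC = [0, 2, 4, 7, 9]  # semitone offsets within an octave

def musical_mapping(value, vmin, vmax, octaves=2):
    """Quantize the same value onto a pentatonic scale, trading
    precision for pleasantness over long listening sessions."""
    t = (value - vmin) / (vmax - vmin)
    steps = len(MAJOR_PENTATONIC) * octaves
    i = min(int(t * steps), steps - 1)
    semitones = (12 * (i // len(MAJOR_PENTATONIC))
                 + MAJOR_PENTATONIC[i % len(MAJOR_PENTATONIC)])
    return A4 * 2 ** (semitones / 12.0)
```

The musical mapping deliberately trades resolution for listenability: constraining output to a consonant scale is less fatiguing when a student listens repeatedly or for extended periods.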


Design and Evaluation of a Multimodal Science Simulation

October 2018 · 20 Reads · 4 Citations

We present a multimodal science simulation, including visual and auditory displays (descriptions, sound effects, and sonifications). The design of each modality is described, as well as its evaluation with learners with and without visual impairments. We conclude with challenges and opportunities at the intersection of multiple modalities.


BUZZ: An Auditory Interface User Experience Scale

April 2018 · 141 Reads · 36 Citations

Auditory user interfaces (AUIs) have been developed to support data exploration, increase engagement with arts and entertainment, and provide an alternative to visual interfaces. Standard measures of usability such as the SUS [4] and UMUX [8] can help with comparing baseline usability and user experience (UX), but the overly general nature of the questions can be confusing to users and can present problems in interpretation of the measures when evaluating an AUI. We present an efficient and effective alternative: an 11-item Auditory Interface UX Scale (BUZZ), designed to evaluate interpretation, meaning, and enjoyment of an AUI.
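For orientation, scoring an instrument like this is straightforward; the Python sketch below averages an 11-item response set on the 1-to-5 agreement anchors quoted in the Citations section further down. It is a minimal illustration under those assumptions only; the published scale defines the actual items and any reverse-keyed scoring.

```python
def buzz_score(responses):
    """Average an 11-item, 5-point Likert response set into one score.

    A minimal scoring sketch only: the published BUZZ instrument
    defines the actual items, anchors, and any reverse-keyed items.
    """
    if len(responses) != 11:
        raise ValueError("BUZZ has 11 items")
    if not all(1 <= r <= 5 for r in responses):
        raise ValueError("responses must be on a 1-5 scale")
    return sum(responses) / len(responses)
```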


Spindex and Spearcons in Mandarin: Auditory Menu Enhancements Successful in a Tonal Language

June 2017 · 11 Reads · 2 Citations

Auditory displays have been used extensively to enhance visual menus across diverse settings and for various reasons. While standard auditory displays can be effective and help users across these settings, they often consist of text-to-speech cues, which can be time-intensive to use. Advanced auditory cues, including spindex and spearcon cues, have been developed to address this slow feedback. While these cues are most often used in English, they have also been applied to other languages; research on their use in tonal languages, where tone may affect their usability, is lacking. The current research investigated the use of spindex and spearcon cues in Mandarin to determine their effectiveness in a tonal language. The results suggest that the cues can be effectively applied and used in a tonal language by untrained novices. This opens the door to future use of the cues in languages spoken by a large portion of the world’s population.
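For readers unfamiliar with the cues: a spearcon is a text-to-speech phrase compressed in time until it no longer sounds like speech yet remains identifiable, and a spindex is a brief cue for an item's initial character. Below is a minimal sketch of spearcon creation, assuming the pydub audio library and a prerecorded TTS clip; the 2.5x speed factor is an illustrative assumption, not a value from the paper.

```python
from pydub import AudioSegment
from pydub.effects import speedup

def make_spearcon(tts_wav_path, speed=2.5):
    """Compress a text-to-speech clip in time to create a spearcon.

    pydub's speedup drops small chunks of audio, shortening the clip
    without shifting its pitch. The 2.5x factor is an assumption for
    illustration; real designs tune compression per phrase.
    """
    tts = AudioSegment.from_wav(tts_wav_path)
    return speedup(tts, playback_speed=speed)

# A spindex cue is simpler still: a very short clip of just the item's
# first character (e.g. "B"), played before or instead of the full name.
```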


Citations (14)


... In this paper, we aim at providing a structured analysis of the dimensions of a performer's competence, willingness and external factors and evaluate the feasibility of each possible interdependence relationship. The final decision of which interdependence is better for a certain task is left to the user (and trustor) to decide, as this mainly depends on their perceived risk (Fahnenstich, Rieger, and Roesler 2023; Hoesterey and Onnasch 2023; Stuck, Holthausen, and Walker 2021; Stuck, Tomlinson, and Walker 2022; Wagner, Robinette, and Howard 2018) of trusting and, sometimes, of not trusting, see e.g. Mehrotra et al. (2024). ...

Reference:

Interdependence and trust analysis (ITA): a framework for human-machine team design
The importance of incorporating risk into human-automation trust
  • Citing Article
  • October 2021

Theoretical Issues in Ergonomics Science

... Also, sound is conveyed by simple headphones in stereo, which by adjusting the volume and the relative volume between the two earpieces allows for a 2D representation of the particles. Following the authors' original work (Lahav & Levy, 2011; Levy & Lahav, 2012), PhET, a University of Colorado based project specialising in interactive simulations, has started developing similar sonified models (Moore, 2015; PhET, n.d.; Tomlinson et al., 2019, 2021; Winters et al., 2019). This research paper builds on a long process of research and development work by the Authors and their colleagues. ...

Identifying and evaluating conceptual representations for auditory-enhanced interactive physics simulations
  • Citing Article
  • March 2021

Journal on Multimodal User Interfaces

... For example, Mack et al.'s [60] recent literature survey of accessibility research within the HCI community extensively uses the term accessibility, but the authors never provide a clear definition of it. Likewise, other literature studies, e.g., the one by Brulé et al. [15], or a co-word analysis that addresses accessibility research by Sarsenbayeva et al. [83], center accessibility within their work, but do so without provision of a definition, once more highlighting the need to develop language to comprehensively reflect on accessibility in the context of digital technology. ...

Review of Quantitative Empirical Evaluations of Technology for People with Visual Impairments
  • Citing Conference Paper
  • April 2020

... In scientific visualization, where data variations are highly irregular and hard to differentiate visually, sonification improved user perception [12]. Research findings demonstrated that visual learning materials augmented with audio feedback enriched learners' experiences [13]. Sonification facilitated visual perception and helped users overcome challenges in visual representations [11]. ...

Auditory Display in Interactive Science Simulations: Description and Sonification Support Interaction and Enhance Opportunities for Learning
  • Citing Conference Paper
  • April 2020

... For blind people, existing landmark enhancement systems have focused on providing auditory feedback [6,7,17,22,50]. For example, Balata et al. [6,7] designed a system that generates landmark-enhanced navigation instructions via audio feedback, revealing that users preferred the landmark-enhanced instructions over conventional turn-by-turn instructions. May et al. [50] developed an auditory environment in mixed reality to simulate a virtual space, presenting landmarks through spatial audio to facilitate mental map formation. ...

Spotlights and Soundscapes: On the Design of Mixed Reality Auditory Environments for Persons with Visual Impairment
  • Citing Article
  • April 2020

ACM Transactions on Accessible Computing

... With regard to implementation issues, there is a need to provide 1) subtitles in the native language of each user and 2) SignWriting (Hersh et al., 2020). Next, in the research "Accessible Video Calling: Enabling Nonvisual Perception of Visual Conversation Cues" (Shi et al., 2019), the authors create a prototype of non-visually accessible video calls (NAVC). This is a tool that uses AI to detect visual conversation cues and convey them through audio cues inspired by movie soundtracks, expressing attention, agreement, disagreement, happiness, and surprise, among other emotions, to visually impaired users. ...

Accessible Video Calling: Enabling Nonvisual Perception of Visual Conversation Cues
  • Citing Article
  • November 2019

Proceedings of the ACM on Human-Computer Interaction

... Also, sound is conveyed by simple headphones in stereo, which by adjusting the volume and the relative volume between the two earpieces allows for a 2D representation of the particles. Following the authors' original work (Lahav & Levy, 2011; Levy & Lahav, 2012), PhET, a University of Colorado based project specialising in interactive simulations, has started developing similar sonified models (Moore, 2015; PhET, n.d.; Tomlinson et al., 2019, 2021; Winters et al., 2019). This research paper builds on a long process of research and development work by the Authors and their colleagues. ...

Sonic Information Design for Science Education
  • Citing Article
  • November 2018

Ergonomics in Design The Quarterly of Human Factors Applications

... Sonification has emerged as a powerful tool in facilitating learning for BLV students [19,39], offering innovative ways to access and comprehend multimodal data [4,20,28,36]. Studies have shown that sonification supports various learning domains [10,31], including physics [35,41], astronomy [24,27,42,49], computer science [3,32,37] and mathematics [1,5,23,30,46]. The advancements in sonification technology and research pave the way for continued exploration and development of effective strategies to optimize learning outcomes for BLV students. ...

Design and Evaluation of a Multimodal Science Simulation
  • Citing Conference Paper
  • October 2018

... One item asks "How did the sound feedback affect your interaction as a whole?", where users rate the sound feedback's impact on the interaction on a range from "Very negative" (1) to "Very positive" (5). While these last questions have been created ad hoc for our experiments, points from P5 to P10 are taken with little to no adaptation from the BUZZ scale [23], which is commonly used to rate auditory stimulation in HRI. In this scale, users are asked to rate the degree to which they agree with a particular statement using a score from 1 ("strongly disagree") to 5 ("strongly agree"). ...

BUZZ: An Auditory Interface User Experience Scale
  • Citing Conference Paper
  • April 2018

... For example, in driving research, English spearcons have proven to be more effective than visual menus (Jeon et al., 2015) and earcon-based menus (Walker et al., 2013) for navigating in-vehicle information systems. Although most studies have used English spearcons, recent studies using spearcons of tonal languages such as Mandarin (Gable, Tomlinson, Cantrell, & Walker, 2017) and Cantonese (Li et al., 2017) showed that native speakers can use them effectively to navigate vehicle menus or monitor vital signs, respectively. ...

Spindex and Spearcons in Mandarin: Auditory Menu Enhancements Successful in a Tonal Language
  • Citing Conference Paper
  • June 2017