Figure 1

Experiment 1's ERP responses (left) and scalp topography plots (right), showing statistically significant differences across time and across electrodes, respectively. Left: ERP waveforms averaged across the frontal (pink) and parietal (green) electrodes, with deflections labeled N1, P2, P3a, and P3b. The shaded areas between the two waveforms indicate time regions that differ significantly. Right: Scalp topographies of the EEG activity evoked by verbal commands and auditory icons in time ranges A and B. Electrodes that differ significantly are marked with white dots.
Source publication
Design recommendations for notifications are typically based on user performance and subjective feedback. In comparison, there has been surprisingly little research on how designed notifications might be processed by the brain for the information they convey. The current study uses EEG/ERP methods to evaluate auditory notifications that were design...
Contexts in source publication
Context 1
... were computed for each participant, and for every electrode, by extracting an epoch of EEG activity around the notification presentation. The presentation onset of the notifications was the trigger event for an epoch that consisted of 500 ms of baseline activity pre-trigger and 1000 ms of brain response post-trigger. All epochs that belonged to either verbal commands or auditory icons were mean-averaged for each electrode. We further grouped the frontal and parietal electrodes into two separate groups for visualization (see Figures 1 and 2, right). These group-averaged waveforms depict distinct ERP components (i.e., N1, P2, P3a, and P3b) that serve as established neural correlates for perceptual and cognitive mechanisms. With regard to auditory information processing, they relate to detection (N1), discrimination (P2), attentional capture (P3a), and context-updating ...
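As a concrete illustration of this pipeline, the following is a minimal sketch in MNE-Python of the epoching and per-condition averaging described in the excerpt. The library choice, the variable names (`raw`, `events`), and the event IDs are assumptions made for illustration; the excerpt does not state which analysis software the authors used.

```python
# Minimal epoching/averaging sketch (MNE-Python), following the excerpt:
# 500 ms pre-trigger baseline, 1000 ms post-trigger, mean-averaged per condition.
# `raw` and `events` are hypothetical placeholders for a preprocessed recording
# and its notification-onset triggers.
import mne

event_id = {"verbal_command": 1, "auditory_icon": 2}  # assumed trigger codes

epochs = mne.Epochs(
    raw, events, event_id=event_id,
    tmin=-0.5, tmax=1.0,            # epoch window around notification onset
    baseline=(-0.5, 0.0),           # 500 ms pre-trigger baseline correction
    preload=True,
)

# Mean-average all epochs per condition, for every electrode (the ERP).
evoked_verbal = epochs["verbal_command"].average()
evoked_icon = epochs["auditory_icon"].average()

# Group electrodes for visualization, as in Figures 1 and 2
# (group definitions from Context 2 below).
frontal = ["F5", "F3", "F1", "Fz", "F2", "F4", "F6",
           "FC5", "FC3", "FC1", "FC2", "FC4", "FC6"]
evoked_verbal.copy().pick(frontal).plot()  # frontal-group waveform
```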
Context 2
... EEG/ERP activity elicited by verbal commands and auditory icons was similar in general morphology, latency, and scalp distribution in the anterior-posterior dimension for both Experiments 1 and 2 (Figs. 1 and 2). Statistically significant differences were revealed in the EEG/ERP activity generated by auditory icons and verbal commands in the frontal as well as the parietal electrodes. The frontal group of electrodes is: F5, F3, F1, Fz, F2, F4, F6, FC5, FC3, FC1, FC2, FC4, FC6. The parietal group of electrodes is: P5, P3, P1, Pz, P2, P4, P6, CP5, CP3, CP1, CPz, CP2, CP4, ...
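The excerpt reports which electrodes differed significantly but does not spell out the statistical procedure. One common way to test for condition differences jointly across time and electrodes is a cluster-based spatio-temporal permutation test; the sketch below shows what that could look like in MNE-Python, reusing the hypothetical `epochs` object from the previous sketch. This is an assumed analysis choice, not the method reported by the authors.

```python
# Sketch: spatio-temporal cluster permutation test between the two notification
# types over the frontal and parietal electrode groups listed above.
# Assumes the hypothetical `epochs` object from the previous sketch.
import numpy as np
import mne
from mne.stats import spatio_temporal_cluster_test

frontal = ["F5", "F3", "F1", "Fz", "F2", "F4", "F6",
           "FC5", "FC3", "FC1", "FC2", "FC4", "FC6"]
parietal = ["P5", "P3", "P1", "Pz", "P2", "P4", "P6",
            "CP5", "CP3", "CP1", "CPz", "CP2", "CP4"]
picks = frontal + parietal

sub = epochs.copy().pick(picks)
verbal = sub["verbal_command"].get_data()   # (n_trials, n_channels, n_times)
icon = sub["auditory_icon"].get_data()

# The cluster test expects (observations, times, channels).
X = [np.transpose(verbal, (0, 2, 1)), np.transpose(icon, (0, 2, 1))]
adjacency, _ = mne.channels.find_ch_adjacency(sub.info, ch_type="eeg")

t_obs, clusters, cluster_p, _ = spatio_temporal_cluster_test(
    X, adjacency=adjacency, n_permutations=1000, tail=0
)
significant = [c for c, p in zip(clusters, cluster_p) if p < 0.05]
```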
Context 3
... we believe that verbal commands capture attention more readily than auditory icons. P3a amplitudes are indicative of an involuntary orienting response to surprising and novel events [57]. While the larger P3a amplitude for verbal commands was not significant in Experiment 1 (Figure 1), it was for the participants of Experiment 2. We believe that this was because the professional truck drivers understood the verbal commands and their operational implications more readily, which increased the potential of verbal commands in capturing attention (Figure ...
Context 4
... amplitudes are indicative of an involuntary orienting response to surprising and novel events [57]. While the larger P3a amplitude for verbal commands was not significant in Experiment 1 (Figure 1), it was for the participants of Experiment 2. We believe that this was because the professional truck drivers understood the verbal commands and their operational implications more readily, which increased the potential of verbal commands in capturing attention (Figure 2). ...
Citations
... This aligns with Yatani et al. [89], who found that handheld tactile maps combining tactile feedback with audio instructions offer superior spatial orientation compared to audio-only feedback. Additionally, the study revealed differences in the effectiveness of verbal audio vs. auditory icons, aligning with the findings of Glatz et al. [41], who found auditory icons to be more effective for conveying contextual information, while verbal audio was better for urgent requests. Further, a comparison of auditory, visual, and combined audio-visual feedback showed that combining audio and visual feedback improved participants' situation awareness more than visual feedback alone [66]. ...
The introduction of Highly Automated Vehicles (HAVs) has the potential to increase the independence of blind and visually impaired people (BVIPs). However, ensuring safety and situation awareness when exiting these vehicles in unfamiliar environments remains challenging. To address this, we conducted an interactive workshop with N=5 BVIPs to identify their information needs when exiting an HAV and evaluated three prior-developed low-fidelity prototypes. The insights from this workshop guided the development of PathFinder, a multimodal interface combining visual, auditory, and tactile modalities tailored to BVIPs' unique needs. In a three-factorial within-between-subject study with N=16 BVIPs, we evaluated PathFinder against an auditory-only baseline in urban and rural scenarios. PathFinder significantly reduced mental demand and maintained high perceived safety in both scenarios, while the auditory baseline led to lower perceived safety in the urban scenario compared to the rural one. Qualitative feedback further supported PathFinder's effectiveness in providing spatial orientation during exiting.
... They found that audio-visual feedback led to an increase in participants' SA compared to a visual-only representation of relevant traffic objects. This study also distinguished between the effectiveness of verbal audio and auditory icons, aligning with Glatz et al. [39], who found that auditory icons are effectively perceived for contextual information while verbal audio is more suitable for time-critical information. For BVIPs, Brinkley et al. [13] developed a prototype that enhances their SA inside HAVs through the use of audible location cues and spatial audio. ...
... In particular, three participants imagined less critical information to be bothersome when conveyed through auditory cues. However, they also mentioned that when it comes to critical situations or important information, auditory verbal feedback is preferred over tactile feedback as audio was assumed to convey more context information, which aligns with Glatz et al. [39]. According to the participants, important information includes that the vehicle arrived at its destination and notifications of dangerous situations, such as a bike lane next to the arrived vehicle. ...
Highly Automated Vehicles offer a new level of independence to people who are blind or visually impaired. However, due to their limited vision, gaining knowledge of the surrounding traffic can be challenging. To address this issue, we conducted an interactive, participatory workshop (N=4) to develop an auditory interface and OnBoard - a tactile interface with expandable elements - to convey traffic information to visually impaired people. In a user study with N=14 participants, we explored usability, situation awareness, predictability, and engagement with OnBoard and the auditory interface. Our qualitative and quantitative results show that tactile cues, similar to auditory cues, are able to convey traffic information to users. In particular, there is a trend that participants with reduced visual acuity showed increased engagement with both interfaces. However, the diversity of visual impairments and individual information needs underscores the importance of a highly tailored multimodal approach as the ideal solution.
... A concurrent auditory task can interfere with and even mask the target auditory warning (Nees and Walker, 2011). Moreover, various types of auditory warnings have different susceptibilities to the interference of concurrent auditory tasks (Glatz et al., 2018). For example, Vilimek and Hempel (2005) and Bonebright and Nees (2009) found that speech interferes with concurrent auditory memory tasks, whereas auditory icons and earcons do not. ...
With the era of automated driving approaching, designing an effective auditory takeover request (TOR) is critical to ensure automated driving safety. The present study investigated the effects of speech-based (speech and spearcon) and non-speech-based (earcon and auditory icon) TORs on takeover performance and subjective preferences. The potential impact of the non-driving-related task (NDRT) modality on auditory TORs was considered. Thirty-two participants were recruited in the present study and assigned to two groups, with one group performing the visual N-back task and another performing the auditory N-back task during automated driving. They were required to complete four simulated driving blocks corresponding to four auditory TOR types. The earcon TOR was found to be the most suitable for alerting drivers to return to the control loop because of its advantageous takeover time, lane change time, and minimum time to collision. Although participants preferred the speech TOR, it led to relatively poor takeover performance. In addition, the auditory NDRT was found to have a detrimental impact on auditory TORs. When drivers were engaged in the auditory NDRT, the takeover time and lane change time advantages of earcon TORs no longer existed. These findings highlight the importance of considering the influence of auditory NDRTs when designing an auditory takeover interface. The present study also has some practical implications for researchers and designers when designing an auditory takeover system in automated vehicles.
... Is it ethical to design systems with access to early and primitive information-processing systems of targeted users? It might seem sensible to design notification displays that alert drowsy drivers by exploiting physical properties that signal the approach of threat (e.g., looming intensities) [11]. But where should we stop? ...
While Human-Computer Interaction (HCI) has contributed to demonstrating that physiological measures can be used to detect cognitive changes, engineering and machine learning will bring these to application in consumer wearable technology. For HCI, many open questions remain, such as: What happens when this becomes a cognitive form of personal informatics? What goals do we have for our daily cognitive activity? How should such a complex concept be conveyed to users to be useful in their everyday life? How can we mitigate potential ethical concerns? These issues move beyond physiologically controlled interactions, such as BCIs, to a time when we have new data about ourselves. This workshop will be the first to directly address the future of Cognitive Personal Informatics (CPI), by bringing together design, BCI and physiological data, ethics, and personal informatics researchers to discuss and set the research agenda in this inevitable future before it arrives.
... Apart from a small number of specific methodological keywords referring to the type of analysis employed in the article, all keywords could be grouped into these categories. It is interesting to note that only four generic keywords "BCI" (42 times), "EEG" (49), "fNIRS" (17), and "human-computer interaction" (13) occur more than 10 times, showing the large variety of topics covered by these articles. ...
... Articles grouped by category: HCI evaluation ([56], [55], [93], [66], [102], [79], [3], [5], [63], [91], [19], [118], [83], [26], [44], [31], [92], [132], [49], [14], [71], [12], [25], [78], [47], [142], [133], [134], [28]); explicit control ([35], [72], [119], [145], [151], [101], [99], [114], [51], [52], [33], [150], [98], [106], [76], [90], [70], [69], [103], [39], [84], [65], [95], [36]); implicit open loop ([121], [131], [108], [58], [140], [122], [2], [107], [7], [109], [27], [73], [112]); implicit closed loop ([128], [126], [130], [1], [111], [152], [147], [115], [15]); neurofeedback ([45], [43], [54], [81], [6]); mental state assessment ([138], [77], [50], [23], [154], [61], [157], [139], [110], [125], [11], [88], [34], [148], [48], [158], [96]); other ([127], [123], [104], [85], [21], [136], [87], [41], [38], [10], [4], [24], [97]). Systems which are able to bring all components together show that using brain signals in runtime can yield substantial usability improvements [1,152] or unlock completely novel kinds of applications [103]. ...
... al.[49]) Middleware/Communication: For interactive applications or distributed recording setups, this attribute reports how the different parts communicate to exchange data, triggers, commands, and so on."We wrote a custom Java bridge program to connect the headset to the Android OS and Unity application on the Game tablet. ...
In human-computer interaction (HCI), there has been a push towards open science, but to date, this has not happened consistently for HCI research utilizing brain signals due to unclear guidelines to support reuse and reproduction. To understand existing practices in the field, this paper examines 110 publications, exploring domains, applications, modalities, mental states and processes, and more. This analysis reveals variance in how authors report experiments, which creates challenges to understand, reproduce, and build on that research. It then describes an overarching experiment model that provides a formal structure for reporting HCI research with brain signals, including definitions, terminology, categories, and examples for each aspect. Multiple distinct reporting styles were identified through factor analysis and tied to different types of research. The paper concludes with recommendations and discusses future challenges. This creates actionable items from the abstract model and empirical observations to make HCI research with brain signals more reproducible and reusable.
... Underpinning the work already underway at the intersection of CUI and Auto-UI, these communities share interests in multimodal interaction evaluation [10,20,24], multitasking and interruptions as interaction paradigms [5,8,11,22,23], modeling mental workload [8,15,24], and mixed-methods approaches to research ranging from physiological sensing [9,10,15] to in-the-wild observation [2,6]. We aim to bring together the shared goals and compare the different approaches of these communities, establishing a community of practice that can share resources and expertise to better understand automotive conversational user interfaces. ...
This work aims to connect the Automotive User Interfaces (Auto-UI) and Conversational User Interfaces (CUI) communities through discussion of their shared view of the future of automotive conversational user interfaces. The workshop aims to encourage creative consideration of optimistic and pessimistic futures, encouraging attendees to explore the opportunities and barriers that lie ahead through a game. Considerations of the future will be mapped out in greater detail through the drafting of research agendas, by which attendees will get to know each other's expertise and networks of resources. The two-day workshop, consisting of two 90-minute sessions, will facilitate greater communication and collaboration between these communities, connecting researchers to work together to influence the futures they imagine in the workshop.
... Tasks that require large amounts of working memory are more difficult to process, thus resulting in smaller ERP amplitudes. Hence, ERPs can be measured to assess the mental vigilance towards auditory cues [147] or to detect vocabulary gaps [339,340]. ...
... [Table columns: Step No., Cooking Step, Audio, Video, Contour] Investigating the mean cooking times between both conditions shows longer cooking times for in-situ assistance (M = 236.3, ...
In today's society, our cognition is constantly influenced by information intake, attention switching, and task interruptions. This increases the difficulty of a given task, adding to the existing workload and leading to compromised cognitive performances. The human body expresses the use of cognitive resources through physiological responses when confronted with a plethora of cognitive workload. This temporarily mobilizes additional resources to deal with the workload at the cost of accelerated mental exhaustion.
We predict that recent developments in physiological sensing will increasingly create user interfaces that are aware of the user’s cognitive capacities, hence able to intervene when high or low states of cognitive workload are detected. In this thesis, we initially focus on determining opportune moments for cognitive assistance. Subsequently, we investigate suitable feedback modalities in a user-centric design process which are desirable for cognitive assistance. We present design requirements for how cognitive augmentation can be achieved using interfaces that sense cognitive workload.
We then investigate different physiological sensing modalities to enable suitable real-time assessments of cognitive workload. We provide empirical evidence that the human brain is sensitive to fluctuations in cognitive resting states, hence making cognitive effort measurable. Firstly, we show that electroencephalography is a reliable modality to assess the mental workload generated during the user interface operation. Secondly, we use eye tracking to evaluate changes in eye movements and pupil dilation to quantify different workload states. The combination of machine learning and physiological sensing resulted in suitable real-time assessments of cognitive workload. The use of physiological sensing enables us to derive when cognitive augmentation is suitable.
Based on our inquiries, we present applications that regulate cognitive workload in home and work settings. We deployed an assistive system in a field study to investigate the validity of our derived design requirements. Finding that workload is mitigated, we investigated how cognitive workload can be visualized to the user. We present an implementation of a biofeedback visualization that helps to improve the understanding of brain activity. A final study shows how cognitive workload measurements can be used to predict the efficiency of information intake through reading interfaces. Here, we conclude with use cases and applications which benefit from cognitive augmentation.
This thesis investigates how assistive systems can be designed to implicitly sense and utilize cognitive workload for input and output. To do so, we measure cognitive workload in real-time by collecting behavioral and physiological data from users and analyze this data to support users through assistive systems that adapt their interface according to the currently measured workload. Our overall goal is to extend new and existing context-aware applications by the factor cognitive workload. We envision Workload-Aware Systems and Workload-Aware Interfaces as an extension in the context-aware paradigm. To this end, we conducted eight research inquiries during this thesis to investigate how to design and create workload-aware systems.
Finally, we present our vision of future workload-aware systems and workload-aware interfaces. Due to the scarce availability of open physiological data sets, reference implementations, and methods, previous context-aware systems were limited in their ability to utilize cognitive workload for user interaction. Together with the collected data sets, we expect this thesis to pave the way for methodical and technical tools that integrate workload-awareness as a factor for context-aware systems.
... Since eye movements are inevitable during reading, EOG enables us to filter the noise generated by muscles to create a clean recording of the actual brain responses. To record EOG, the electrodes were placed on the right and left canthi as well as above and below the left eye, as suggested by prior work [13], using adhesive tape for medical use. We chose four electrodes from our setup (right eye: FT10; left eye: above: FT9, side: O2, below: O1), which are least likely to show responses to language processing, i.e., those with the greatest distance from the central parietal area [25]. ...
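For illustration, here is a brief sketch of how such repurposed electrodes could be handled in MNE-Python: the four channels named in the excerpt are re-labeled as EOG and ocular activity is regressed out of the EEG. The regression-based correction is an assumption about one way to "filter the noise"; the cited work may have corrected artifacts differently, and `raw` is again a hypothetical recording.

```python
# Sketch: mark the four repurposed electrodes from the excerpt as EOG channels
# and regress ocular activity out of the EEG (MNE-Python). The regression-based
# correction is an assumed choice, not necessarily the cited work's method.
import mne
from mne.preprocessing import EOGRegression

eog_chs = ["FT10", "FT9", "O2", "O1"]  # electrode roles as listed in the excerpt
raw.set_channel_types({ch: "eog" for ch in eog_chs})

raw.load_data()
raw.set_eeg_reference("average")       # regression assumes referenced data

model = EOGRegression(picks="eeg", picks_artifact="eog").fit(raw)
raw_clean = model.apply(raw)           # EEG with ocular activity regressed out
```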
The pervasive availability of media in foreign languages is a rich resource for language learning. However, learners are forced to interrupt media consumption whenever comprehension problems occur. We present BrainCoDe, a method to implicitly detect vocabulary gaps through the evaluation of event-related potentials (ERPs). In a user study (N=16), we evaluate BrainCoDe by investigating differences in ERP amplitudes during listening and reading of known words compared to unknown words. We found significant deviations in N400 amplitudes during reading and in N100 amplitudes during listening when encountering unknown words. To evaluate the feasibility of ERPs for real-time applications, we trained a classifier that detects vocabulary gaps with an accuracy of 87.13% for reading and 82.64% for listening, identifying eight out of ten words correctly as known or unknown. We show the potential of BrainCoDe to support media learning through instant translations or by generating personalized learning content.
... To assess the induced cognitive load while one devises a verb in the verb task, we present an oddball stimulus as a probe and record the subsequent brain activity with an electroencephalogram (EEG). By averaging multiple response measurements into an event-related potential (ERP), a high signal-to-noise ratio can be achieved (e.g., Glatz, Krupenia, Bülthoff, & Chuang, 2018; Squires, Squires, & Hillyard, 1975; Van der Heiden et al., 2018; Wester et al., 2008). Verb generation is a mental process that involves different brain areas over time (Abdullaev & Posner, 1998; Bijl et al., 2007). ...
In this study we evaluate how cognitive load affects susceptibility to auditory signals. Previous research has used the frontal P3 (fP3) event-related potential response to auditory novel stimuli as an index for susceptibility to auditory signals. This work demonstrated that tasks that induce cognitive load, such as visual and manual tasks, reduced susceptibility. It is, however, unknown whether cognitive load without visual or manual components also reduces susceptibility. To investigate this, we induced cognitive load by means of the verb generation task, in which participants need to think about a verb that matches a noun. The susceptibility to auditory signals was measured by recording the event-related potential in response to a successively presented oddball probe stimulus at three different inter-stimulus intervals: 0 ms, 200 ms, or 400 ms after the offset of the noun from the verb generation task. An additional control baseline condition, in which the oddball response was probed without a verb generation task, was also included. Results show that the cognitive load associated with the verb task reduces the fP3 response (and associated auditory signal susceptibility) compared to baseline, independent of presentation interval. This suggests that not only visual and motor processing, but also cognitive load without visual or manual components, can reduce susceptibility to auditory signals and alerts.