Figure 2
Six examples of clusters of dipoles (blue) and their mean position (red), their projected scalp activity, and power spectral density (insets, from left to right), derived from EEG recordings in the driving simulator. First row: cortical dipoles likely associated with auditory processing (left) and motor response generation (right), respectively. Second row: non-cortical dipoles associated with muscle activity (left) and with eye movements and blinks (right). Third row: non-cortical dipoles due to electrical line noise (left) and unresolved variance in the EEG recording (right).

Source publication
Conference Paper
Full-text available
In this study, we employ EEG methods to clarify why auditory notifications, which were designed for task management in highly automated trucks, resulted in different performance behavior when deployed in two different test settings: (a) student volunteers in a lab environment, (b) professional truck drivers in a realistic vehicle simulator. Behavi...

Context in source publication

Context 1
... collection, signal processing, and statistical analysis: Data pre-processing and analysis were performed offline with MATLAB (The MathWorks, Natick, MA) scripts based on EEGLAB v.14, an open-source environment for processing electrophysiological data [9]. The following steps were performed on the EEG data prior to analyzing the ERPs of stimuli and responses [3]. First, the data were downsampled to 250 Hz to reduce computational costs. Next, a high-pass filter (cut-off = 0.5 Hz) was applied to remove slow drifts, 50 Hz electrical line noise from the environment was removed using the CleanLine algorithm, and bad channels were removed using the ASR algorithm. Next, all electrodes were re-referenced to their common average, and each participant's dataset was separately submitted to an Adaptive Mixture ICA (AMICA) to decompose the continuous data into source-resolved activity [10]. On these learned independent components (ICs), equivalent current dipole model estimation was performed by using an MNI Boundary Element Method head model to fit an equivalent dipole to the scalp projection pattern of each independent component. ICs whose dipoles were located outside the brain were excluded, as were those with a residual variance of over 15%. Within each participant group, ICs were clustered into 30 clusters using k-means, based on their mean power spectra, topography, and equivalent dipole location. Figure 2 provides examples of dipole clusters with either cortical (first row) or non-cortical origins (e.g., muscle and eye activity (second row), electrical activity from environmental sources (third row)). Non-cortical components were identified on the basis of their power spectral density, scalp topography, and location in a volumetric brain model [19]. As might be expected, more non-cortical dipole components were found in participants who performed the experiment in the driving simulator (N=15) than in those from the psychophysical experiment (N=14). In other words, EEG recordings were contaminated by the activity of more non-cortical components in the driving simulator environment than in the psychophysical laboratory. Non-cortical dipole clusters were removed from the EEG recording and the remaining EEG activity was subjected to comparative analysis for the two participant ...
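For orientation, the pipeline above maps onto standard EEGLAB calls roughly as follows. This is a minimal sketch, assuming EEGLAB v14 with the CleanLine, clean_rawdata (ASR), AMICA, and dipfit plugins installed; the file name and several parameter values are illustrative placeholders, not the authors' exact settings.

% Minimal EEGLAB sketch of the preprocessing steps described above.
% Assumes EEGLAB v14 with the CleanLine, clean_rawdata (ASR), AMICA,
% and dipfit plugins on the MATLAB path; 'subject01.set' is a placeholder.
EEG = pop_loadset('subject01.set');

% 1. Downsample to 250 Hz to reduce computational cost
EEG = pop_resample(EEG, 250);

% 2. High-pass filter (0.5 Hz cut-off) to remove slow drifts
EEG = pop_eegfiltnew(EEG, 0.5, []);

% 3. Remove 50 Hz line noise (and its harmonic) with CleanLine
EEG = pop_cleanline(EEG, 'LineFrequencies', [50 100]);

% 4. Remove bad channels with ASR, leaving the other cleaning stages off
EEG = clean_artifacts(EEG, 'BurstCriterion', 'off', 'WindowCriterion', 'off');

% 5. Re-reference all electrodes to their common average
EEG = pop_reref(EEG, []);

% 6. Decompose into source-resolved activity with AMICA, then load the
%    learned decomposition back into the EEG structure
runamica15(EEG.data, 'outdir', './amicaout/');
mod = loadmodout15('./amicaout/');
EEG.icaweights = mod.W;
EEG.icasphere  = mod.S;

% 7. Fit an equivalent current dipole to each IC scalp map (after
%    configuring the MNI BEM head model via pop_dipfit_settings);
%    reject ICs with residual variance > 15% or dipoles outside the brain
EEG = pop_multifit(EEG, [], 'threshold', 15, 'rmout', 'on');

% ICs were then clustered across participants (k = 30) on power spectra,
% topography, and dipole location, which in EEGLAB is done at the STUDY
% level (std_preclust / pop_clust).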

Similar publications

Article
Full-text available
People normally know what they want to communicate before they start speaking. However, brain indicators of communication are typically observed only after speech act onset, and it is unclear when any anticipatory brain activity prior to speaking might first emerge, along with the communicative intentions it possibly reflects. Here, we investigated...

Citations

... interaction between humans and machines can be established via a microphone (based on speech) to provide more natural, convenient, and efficient speech-based communication (Chuang et al. 2017; Munir et al. 2019). Since a smart home is made up of connected smart objects, a controller should be embedded in each object of the smart home, such as lights, sound, and doors. ...
Article
Full-text available
Nowadays, various interfaces are used to control smart home appliances. Interaction between humans and smart home appliances may be based on input devices such as a mouse, keyboard, microphone, or webcam. The interaction between humans and machines can be established via speech, using a microphone as one of the input modes. Speech-based human-machine interaction is a more natural way of communicating than other types of interfaces. Existing speech-based interfaces in the smart home domain suffer from problems such as limiting users to a fixed set of pre-defined commands, not supporting indirect commands, requiring a large training set, or depending on specific speakers. To address these challenges, we propose several approaches in this paper. We exploit an ontology as a knowledge base to support indirect commands and remove restrictions on how users express commands. Moreover, Long Short-Term Memory (LSTM) has been exploited to detect spoken commands more accurately. Additionally, due to the lack of Persian voice commands for interacting with smart home appliances, a dataset of speaker-independent Persian voice commands for communicating with a TV, media player, and lighting system has been designed, recorded, and evaluated in this research. The experimental results show that the LSTM-based voice command detection system performed almost 1.5% and 13% more accurately than the Hidden Markov Model-based one, in scenarios ‘with’ and ‘without’ ontology, respectively. Furthermore, using the ontology in the LSTM-based method improved system performance by about 40%.
... Car manufacturers and IVI systems that try to limit drivers' ability to perform secondary tasks are often circumvented by users, who in turn adopt far riskier modes of interaction [30]. The results of this research indicate that some tasks can be extremely difficult to perform within the driving environment using visual interaction alone, depending on the driver's ability to multitask [32]. These tasks include 'Text Entry' as well as 'Multi-Layered Menu Selection' on a touchscreen-based device, as suggested by Kujala and Grahn [33]. ...
Article
Full-text available
Methods of information presentation in the automotive space have been evolving continuously in recent years. As technology pushes forward the boundaries of what is possible, automobile manufacturers are trying to keep up with current trends. Traditionally, the often-long development and quality-control cycles of the automotive sector ensured slow yet steady progress. However, the exponential advancement in mobile and hand-held computing seen in the last 10 years has put immense pressure on automobile manufacturers to catch up. For this reason, we now see manufacturers exploring new techniques for in-vehicle interaction (IVI) that were ignored in the past. However, recent attempts have either simply extended the interaction model already used in mobile or handheld computing devices, or increased visual-only presentation of information with limited expansion to other modalities (i.e. audio or haptics). This is also true for system interaction, which generally happens within complex driving environments, making the driver's primary task (driving) even more challenging. Essentially, there is an inherent need to design and research IVI systems that complement and natively support a multimodal interaction approach, providing all the necessary information without increasing the driver's cognitive load, or at a bare minimum his/her visual load. In this research we focus on a key element of IVI systems, touchscreen interaction, by developing prototype devices that can complement the conventional visual and auditory modalities in a simple and natural manner. Instead of adding primitive touch feedback cues that increase redundancy or complexity, we approach the issue by looking at the current requirements of interaction and complementing the existing system with natural and intuitive input and output methods that are less affected by environmental noise than traditional multimodal systems.
... With increasing automation in automotive contexts, it is increasingly relevant to investigate different cognitive states, e.g. attention, task engagement, and cognitive workload [14,49,53,58]. However, current work always focuses on detecting cognitive workload during the task of driving and on examining the effects of the present situation or the environment, e.g. ...
... Working memory load (also called cognitive workload) can be understood as the amount of mental resources used to execute a particular task [18,33] (in our case the VCT and ACT). Cognitive-workload-related changes in the EEG are associated with changes in theta-band power (4-7 Hz) at frontal brain areas and in alpha-band power (8-14 Hz) at parieto-occipital brain areas [4,7,44]. We hypothesized that additional cognitive demands (e.g. ...
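As a concrete illustration of these band-power measures, the following minimal MATLAB sketch estimates theta (4-7 Hz) and alpha (8-14 Hz) band power for a single channel with Welch's method; the synthetic signal, sampling rate, and window settings are illustrative assumptions, not the cited studies' setup.

% Minimal sketch: estimate theta (4-7 Hz) and alpha (8-14 Hz) band power
% for one EEG channel using Welch's method. The synthetic signal and all
% parameter values here are illustrative assumptions.
fs = 250;                                   % sampling rate in Hz (assumed)
t  = (0:10*fs-1)'/fs;                       % 10 s of data
x  = sin(2*pi*6*t) + 0.5*sin(2*pi*10*t) + randn(size(t));  % toy signal

[pxx, f] = pwelch(x, hamming(2*fs), fs, [], fs);  % PSD with 2 s windows

thetaPow = trapz(f(f >= 4 & f <= 7),  pxx(f >= 4 & f <= 7));   % theta power
alphaPow = trapz(f(f >= 8 & f <= 14), pxx(f >= 8 & f <= 14));  % alpha power

% Frontal theta (e.g. at Fz) is typically expected to rise, and
% parieto-occipital alpha (e.g. at Pz, Oz) to fall, as workload increases.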
Conference Paper
Full-text available
Autonomous driving provides new opportunities for the use of time during a car ride. One such important scenario is working. We conducted a neuroergonomic study to compare three configurations of a car interior (based on lighting, visual stimulation, and sound) regarding their potential to support productive work. We assessed participants' concentration, performance, and workload with subjective, behavioral, and EEG measures while they carried out two different concentration tasks during simulated autonomous driving. Our results show that a configuration with large-area, bright light with high blue components and reduced visual and auditory stimuli promotes performance, quality, efficiency, and concentration, and lowers cognitive workload. Increased visual and auditory stimulation paired with linear, darker light with very few blue components resulted in lower performance, reduced subjective concentration, and higher cognitive workload, but did not differ from a normal car configuration. Our multi-method approach thus reveals possible car interior configurations for an ideal workspace.
... On the other hand, an increasing number of automobiles are equipped with multimodal interfaces [37,39,40], including voice-based interfaces with which drivers do not need to pay visual attention to the interface [7,23,29,42]. Though this kind of interface could decrease a driver's mental workload in terms of visual attention, the workload on the visuospatial sketchpad while manipulating a voice-based interface remains unknown. In particular, conventional measurement tools cannot measure the cognitive workload on working memory while using voice commands, because these methods mainly rely on drivers' visual actions, including elapsed time with line-of-sight deviation. Therefore, it is important to measure the cognitive workload of WM in the case of driving and operating voice command interfaces. ...
... They reported that the workload of voice commands was comparatively small among various conventional interfaces, but even a voice-based system could induce visual-manual interactions with the system that may be distracting [31,41,42]. Chuang et al. acquired electroencephalograms of professional drivers and novice students as they dealt with auditory notifications in a high-fidelity driving simulator [7]. Their results indicated that, surprisingly, professional drivers responded with slower reaction times and less sensitivity to the auditory stimuli. ...
... Their results indicated that, surprisingly, professional drivers responded with slower reaction times and less sensitivity to the auditory stimuli. The authors discussed various causes, such as age-related factors and habituation to the environment [7]. ...
Conference Paper
The goal of this study is to quantify the cognitive workload of visuospatial components when operating voice-based interfaces. In particular, we aim to quantify users' visuospatial workload when they operate voice commands while driving, and then compare it with the workload reported when participants simultaneously used graphical interfaces. We used a quantitative measurement method to evaluate the workload on the visuospatial sketchpad, employing a dual task of a pattern span test and usage of a target interface. The results indicated that even voice command interfaces affect performance on the pattern span test, despite the independence of the sketchpad and the phonological loop. We also quantitatively found that employing familiar words and their combinations for drivers could reduce the workload of voice-based operations.
... This approach has been used to evaluate how different auditory notifications support different cognitive processes: for example, verbal notifications promoted better discrimination from background distractors while auditory icons promoted better context-updating [11]. It has also been used to account for individual differences in behavioral performance in response to auditory notifications presented across different test environments [6]. Finally, EEG/ERP activity has also been proposed as a potential input modality for automobiles, in order to support faster predictions of braking intentions than manual braking [16]. ...
Conference Paper
Full-text available
Looming sounds can be an ideal warning notification for emergency braking. This agrees with studies that have consistently demonstrated preferential brain processing for looming stimuli. This study investigates and demonstrates that looming sounds can similarly benefit emergency braking when managing a vehicle with adaptive cruise control (ACC). Specifically, looming auditory notifications induced faster emergency braking times relative to a static auditory notification. Next, we compare the event-related potential (ERP) evoked by a looming notification relative to its static equivalent. Looming notifications evoke a smaller fronto-central N2 amplitude than their static equivalents. Thus, we infer that looming sounds are consistent with the visual experience of an approaching collision and hence induce a corresponding performance benefit. Subjective ratings indicate no significant differences in perceived workload across the notification conditions. Overall, this work suggests that auditory warnings should have physical properties congruent with the visual events that they warn of.
... For this purpose, we measured the EEG activity of naïve participants (Experiment 1) and professional truck drivers (Experiment 2). The current EEG dataset has been previously analyzed for differences between the two participant groups and has shown that both groups respond to these notifications similarly as a whole [7]. While professional truck drivers responded more slowly in general, this was not due to fundamental differences in brain responses to the auditory notifications. ...
... Clusters containing such non-cortical activity were determined based on their power spectrum, their scalp topography, and their dipole location in a volumetric brain model. These non-cortical activity clusters, present across the group of participants, were removed from the EEG data (for examples, see [7]). Finally, this EEG data for cortical activity was back-projected to the sensor level and analyzed for potential differences between verbal commands and auditory icons. ...
Conference Paper
Full-text available
Design recommendations for notifications are typically based on user performance and subjective feedback. In comparison, there has been surprisingly little research on how designed notifications might be processed by the brain for the information they convey. The current study uses EEG/ERP methods to evaluate auditory notifications that were designed to cue long-distance truck drivers for task-management and driving conditions, particularly for automated driving scenarios. Two experiments separately evaluated naive students and professional truck drivers for their behavioral and brain responses to auditory notifications, which were either auditory icons or verbal commands. Our EEG/ERP results suggest that verbal commands were more readily recognized by the brain as relevant targets, but that auditory icons were more likely to update contextual working memory. Both classes of notifications did not differ on behavioral measures. This suggests that auditory icons ought to be employed for communicating contextual information and verbal commands, for urgent requests.
... This approach has been used to evaluate how different auditory notifications support different cognitive processes: for example, verbal notifications promoted better discrimination from background distractors while auditory icons promoted better context-updating [11]. It has also been used to account for individual differences in behavioral performance in response to auditory notifications presented across different test environments [6]. Finally, EEG/ERP activity has also been proposed as a potential input modality for automobiles, in order to support faster predictions of braking intentions than manual braking [16]. ...
... Auditory in-vehicle notifications: EEG data were collected from thirty participants who responded to auditory notifications designed to alert drivers to changes in driving conditions or to cue them to perform certain tasks [5]. This dataset is sub-divided into one part that was collected in a highly controlled psychophysics laboratory and another that was collected in a virtual-reality vehicle simulator. ...
Conference Paper
Full-text available
It is increasingly viable to measure the brain activity of mobile users, as they go about their everyday business in their natural world environment. This is due to: (i) modern signal processing methods, (ii) lightweight and cost-effective measurement devices, and (iii) a better, albeit incomplete, understanding of how measurable brain activity relates to mental processes. Here, we address how brain activity can be measured in mobile users and how this contrasts with measurements obtained under controlled laboratory conditions. In particular, we will focus on electroencephalography (EEG) and will cover: (i) hardware and software implementation, (ii) signal processing techniques, (iii) interpretation of EEG measurements. This will consist of hands-on analyses of real EEG data and a basic theoretical introduction to how and why EEG works.
Article
As partially automated driving vehicles are set to be mass-produced, there is an increased necessity to research situations where such partially automated vehicles become unable to drive. Automated vehicles at SAE Level 3 cannot avoid a take-over between the human driver and the vehicle system. Therefore, how the system alerts a human driver is essential in situations where control must be taken over from the vehicle's autonomous driving system. The present study delivered take-over transition alerts to human drivers using diverse combinations of visual, auditory, and haptic modalities and analyzed the drivers' brainwave data. To investigate differences in the indexes according to the take-over transition alert type, the independent variable of this study, the nonparametric Kruskal-Wallis test was performed, with Mann-Whitney as a follow-up test. Moreover, the pre/post-warning difference in each index was investigated, and the results were reflected in ranking effective warning combinations and their resulting scores. The visual-auditory-haptic warning scored the highest in terms of various EEG indexes, making it the most effective type of take-over transition alert. Unlike most preceding studies, which analyze human drivers' response times or vehicle behavior after a take-over alert, this study investigates drivers' brainwaves after the take-over warning.