Blinking is a natural, user-induced response that paces visual information processing. This study investigates whether blinks are viable markers for segmenting continuous electroencephalography (EEG) activity in order to infer cognitive demands in ecologically valid work environments. We report blink-related EEG measures of participants who performed auditory tasks while standing, walking on grass, or completing an obstacle course. Blink-related EEG activity discriminated between different levels of cognitive demand during walking. Both behavioral parameters (e.g., blink duration and head motion) and blink-related EEG activity varied with walking condition. A larger occipital N1 was observed during walking relative to standing and traversing an obstacle course, reflecting differences in bottom-up visual perception. In contrast, the amplitudes of top-down components (N2, P3) decreased significantly with increasing walking demands, reflecting narrowing attention. This is consistent with blink-related spectral activity: Theta power increased and Alpha power decreased with increasing demands of the walking task. This work presents a novel and robust analytical approach for evaluating the cognitive demands experienced in natural work settings, obviating the need for artificial task manipulations for data segmentation.
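The blink-based segmentation described above can be illustrated with a minimal sketch: detect blink onsets on a vertical EOG channel by simple thresholding, then cut fixed-length EEG epochs around each blink. The threshold, refractory period, and epoch window below are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

def blink_events(eog, fs, thresh=100e-6, refractory=0.3):
    # Detect blink onsets as upward threshold crossings on a vertical EOG
    # trace; `thresh` (volts) and the refractory period (s) are illustrative.
    above = eog > thresh
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    keep, last = [], -np.inf
    for s in onsets:
        if (s - last) / fs >= refractory:  # suppress double detections
            keep.append(s)
            last = s
    return np.array(keep, dtype=int)

def blink_locked_epochs(eeg, events, fs, tmin=-0.2, tmax=0.8):
    # Cut fixed-length segments (channels x samples) around each blink,
    # discarding events too close to the recording edges.
    pre, post = int(-tmin * fs), int(tmax * fs)
    good = events[(events >= pre) & (events + post <= eeg.shape[-1])]
    return np.stack([eeg[:, s - pre:s + post] for s in good])
```

Averaging the resulting epochs across trials would yield blink-related potentials analogous to stimulus-locked ERPs.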
Modern living and working environments increasingly require the concurrent execution of locomotion and sensory processing, most often in the visual domain. Many job profiles involve the presentation of visual information while walking, for example in warehouse logistics, where a worker has to walk to the correct aisle to pick up a package while being presented with visual information about the next order via data glasses. Similar use cases can be found in manufacturing, for example on car assembly lines where the next steps are presented via augmented-reality headsets while walking at a slow pace. Given the overall scarcity of cognitive resources available for deployment to either cognitive or motor processes, task performance decrements have been found when load increases in either domain. Interestingly, walking has also shown beneficial effects on peripheral contrast detection and the inhibition of visual stream information. Taking these findings into account, we conducted a study comprising the detection of single visual targets (Landolt Cs) within a broad range of the visual field (-40° to +40° visual angle) while either standing, walking, or walking with concurrent perturbations. We used questionnaire (NASA-TLX), behavioral (response times and accuracy), and neurophysiological (ERPs and ERSPs) data to quantify the effects of cognitive-motor interference. The study was conducted in a Gait Real-time Analysis Interactive Laboratory (GRAIL), using a 180° projection screen and a swayable and tiltable dual-belt treadmill. Questionnaire and behavioral measures showed common patterns: subjective physical workload increased and behavioral performance decreased with increasing stimulus eccentricity and motor complexity.
Electrophysiological results also indicated decrements in stimulus processing with higher stimulus eccentricity and movement complexity (P3, Theta), but revealed a benefit of unperturbed walking for earlier sensory components (N1pc/N2pc, N2) when processing more peripheral stimuli. These findings suggest that walking without impediments can enhance the visual processing of peripheral information and thereby aid the perception of non-foveal sensory content. Our results may also prompt a re-evaluation of previous findings on cognitive-motor interference, as increased motor complexity might not always impede cognitive processing and performance.
Removing power line noise and other frequency-specific artifacts from electrophysiological data without affecting neural signals remains a challenging task. Recently, an approach was introduced that combines spectral and spatial filtering to effectively remove line noise: Zapline. This algorithm, however, requires manual selection of the noise frequency and of the number of spatial components to remove during spatial filtering. Moreover, it assumes that the noise frequency and spatial topography are stable over time, which is often not warranted. To overcome these issues, we introduce Zapline-plus, which allows adaptive and automatic removal of frequency-specific noise artifacts from magneto-/electroencephalography (M/EEG) and LFP data. To achieve this, our extension first segments the data into periods (chunks) in which the noise is spatially stable. Then, for each chunk, it searches for peaks in the power spectrum and finally applies Zapline. The exact noise frequency around the target frequency is also determined separately for every chunk to accommodate fluctuations of the peak noise frequency over time. The number of components to be removed by Zapline is determined automatically using an outlier detection algorithm. Finally, the frequency spectrum after cleaning is analyzed for suboptimal cleaning, and, if necessary, parameters are adapted before the process is re-run. The software creates a detailed plot for monitoring the cleaning. We highlight the efficacy of the different features of our algorithm by applying it to four openly available data sets: two EEG sets containing both stationary and mobile task conditions, and two magnetoencephalography sets containing strong line noise.
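The outer adaptive loop of the chunked cleaning procedure might be sketched as follows. The Zapline core (the spectral-plus-spatial filtering step) is abstracted behind a callback; the chunk length, the frequency search band, and all function names are illustrative assumptions, not the actual Zapline-plus implementation.

```python
import numpy as np
from scipy.signal import welch

def find_line_peak(chunk, fs, f_min=45.0, f_max=55.0):
    # Average the Welch spectrum across channels, then pick the strongest
    # peak inside the expected line-noise band (illustrative band limits).
    f, pxx = welch(chunk, fs=fs, nperseg=min(chunk.shape[-1], int(2 * fs)), axis=-1)
    mean_pxx = pxx.mean(axis=0)
    band = (f >= f_min) & (f <= f_max)
    return float(f[band][np.argmax(mean_pxx[band])])

def clean_in_chunks(data, fs, chunk_sec=30.0, zapline_core=None):
    # Outer adaptive loop: segment the (channels x samples) recording into
    # chunks assumed spatially stable, estimate the noise frequency per
    # chunk, and hand each chunk to a Zapline-style core cleaner.
    n_ch, n_samp = data.shape
    step = int(chunk_sec * fs)
    cleaned = np.empty_like(data)
    for start in range(0, n_samp, step):
        chunk = data[:, start:start + step]
        f_noise = find_line_peak(chunk, fs)
        if zapline_core is not None:
            chunk = zapline_core(chunk, fs, f_noise)  # spectral + spatial filtering
        cleaned[:, start:start + chunk.shape[-1]] = chunk
    return cleaned
```

In the real algorithm, the per-chunk component count is chosen by outlier detection on component scores, and the whole process re-runs with adapted parameters if the cleaned spectrum still shows residual peaks.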
Electroencephalography (EEG) is a non-invasive technique for recording the electrical activity of cortical neurons using electrodes placed on the scalp. It has become a promising avenue for research beyond state-of-the-art EEG studies conducted under static conditions. EEG signals are always contaminated by artifacts and other physiological signals, and artifact contamination increases with the intensity of movement. Over the last decade (since 2010), researchers have started to implement EEG measurements in dynamic setups to increase the ecological validity of their studies. Many different methods are used to remove non-brain activity from the EEG signal, yet there are no clear guidelines on which method should be used in dynamic setups and for specific movement intensities. Currently, the most common methods for removing artifacts in movement studies are based on independent component analysis (ICA). However, the choice of artifact removal method depends on the type and intensity of movement, which affects both the characteristics of the artifacts and the EEG parameters of interest. When dealing with EEG under non-static conditions, special care must be taken as early as the design phase of an experiment, and software and hardware solutions must be combined to achieve sufficient removal of unwanted signals. We provide recommendations for the use of each method depending on the intensity of movement and highlight the advantages and disadvantages of each. However, given the current gap in the literature, further development and evaluation of methods for artifact removal from EEG data during locomotion is needed.
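As an illustration of the ICA-based artifact removal mentioned above, the following sketch uses scikit-learn's FastICA (many EEG pipelines instead use Infomax or AMICA) and flags components by their correlation with a simultaneously recorded artifact reference, such as an EOG or accelerometer channel. The function name and correlation threshold are assumptions for illustration, not a standard pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

def remove_artifact_components(eeg, reference, n_components=None, corr_thresh=0.6):
    # eeg: (n_samples, n_channels); reference: (n_samples,) artifact proxy,
    # e.g. an EOG or accelerometer channel recorded alongside the EEG.
    ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
    sources = ica.fit_transform(eeg)          # (n_samples, n_components)
    # Flag components whose time course tracks the artifact reference.
    corr = np.array([abs(np.corrcoef(s, reference)[0, 1]) for s in sources.T])
    sources[:, corr > corr_thresh] = 0.0      # zero out flagged components
    return ica.inverse_transform(sources)     # back-project remaining sources
```

In practice, component selection usually combines several criteria (topography, spectrum, time course) rather than a single correlation threshold.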
Advances in Mobile Brain/Body Imaging (MoBI) technology allow for real-time measurement of human brain dynamics during everyday, natural, real-life situations. This special issue, Time to Move, brings together a collection of experimental papers, targeted reviews, and opinion articles that lay out the latest MoBI findings. A wide range of topics across different fields is covered, including art, athletics, virtual reality, and mobility. What unites these diverse topics is the common goal of enhancing and restoring human abilities by reaching a better understanding of how cognition is implemented in the brain-body relationship. The breadth and novelty of the paradigms and findings reported here position MoBI as a new frontier in the field of human cognitive neuroscience.
A sedentary lifestyle in nursing home residents is often accompanied by reduced life-space mobility, which in turn affects satisfaction with life. One reason for this may be a limited ability to find one’s way around the care facility and its environment. Spatial orientation exercises might reduce these problems if they are integrated into an adequate cognitive-motor training. Therefore, we integrated six novel, target group-specific spatial orientation exercises into an established multicomponent cognitive-motor group training for nursing home residents and evaluated its feasibility. Forty nursing home residents (mean age: 87.3 ± 7 years) participated in the spatial orientation cognitive-motor training (45–60 min, twice a week over a period of 12 weeks). The main outcomes included the feasibility criteria (adherence, completion time, acceptance, instructions, motor performance, materials/set-up, complexity) and first measurements of mobility and satisfaction with life (SPPB [Short Physical Performance Battery], SWLS [Satisfaction with Life Scale]). Adherence increased over time. This increase was associated with the adaptations and modifications of the spatial orientation exercises made to meet the participants’ requirements. A positive trend was discerned for mobility and life satisfaction when comparing pre- and post-training data. In summary, the feasibility analysis revealed that future interventions should consider that (a) instructions for demanding spatial tasks should be accompanied by an example task, (b) trainers should be encouraged to adjust task complexity and materials on an individual basis, (c) acceptance of the training should be promoted among nursing staff, and (d) surroundings with as little disturbance as possible should be selected for training.
Until recently, neural assessments of gross motor coordination could not reliably handle active tasks, particularly in realistic environments, and offered only a narrow understanding of motor-cognition. Applying a comprehensive neuroergonomic approach using optical mobile neuroimaging, we demonstrated the broader capability of ecologically relevant neural evaluations for the “difficult-to-diagnose” Developmental Coordination Disorder (DCD), a motor-learning deficit affecting 5-6% of children, with lifelong complications. We confirmed that DCD is not an intellectual but a motor-cognitive disability, as gross motor/complex tasks revealed neuro-hemodynamic deficits and dysfunction within the right middle and superior frontal gyri of the prefrontal cortex. Furthermore, by incorporating behavioral performance, aberrant patterns of neural efficiency in these regions were revealed in children with DCD, specifically during motor tasks. Lastly, we provide a framework for evaluating disorder impact in real-world contexts to identify those for whom interventional approaches are most needed, opening the door for precision therapies.
The retrosplenial complex (RSC) plays a crucial role in spatial orientation by computing heading direction and translating between distinct spatial reference frames based on multi-sensory information. While invasive studies allow investigating heading computation in moving animals, established non-invasive analyses of human brain dynamics are restricted to stationary setups. To investigate the role of the RSC in heading computation of actively moving humans, we used a Mobile Brain/Body Imaging approach synchronizing electroencephalography with motion capture and virtual reality. Data from physically rotating participants were contrasted with rotations based only on visual flow. During physical rotation, varying rotation velocities were accompanied by pronounced wide-frequency-band synchronization in the RSC as well as the parietal and occipital cortices. In contrast, the visual flow rotation condition was associated with pronounced alpha-band desynchronization, replicating previous findings from desktop navigation studies and notably absent during physical rotation. These results suggest an involvement of the human RSC in heading computation based on visual, vestibular, and proprioceptive input and suggest revisiting traditional findings of alpha desynchronization in areas of the navigation network during spatial orientation in movement-restricted participants.
The parallel execution of two motor tasks can lead to performance decrements in either one or both of the tasks. Age-related declines can further magnify the underlying competition for cognitive resources. However, little is known about the neural dynamics underlying motor resource allocation during dual-task walking. To better understand motor resource conflicts, this study investigated sensorimotor brain rhythms in younger and older adults using a dual-task protocol. Time-frequency data from two independent component motor clusters were extracted from electroencephalography data recorded during sitting and walking with an additional task requiring manual responses. Button press-related desynchronization in the alpha and beta frequency ranges was analyzed for the impact of age (< 35 years, ≥ 70 years) and motor task (sitting, walking). Button press-related desynchronization in the beta band was more pronounced for older participants, and both age groups demonstrated less pronounced desynchronization in both frequency bands during walking compared to sitting. Older participants revealed smaller power modulations between sitting and walking, and less pronounced changes in beta and alpha suppression were associated with greater slowing of walking speed. Our results indicate age-specific allocation strategies during dual-task walking as well as interdependencies of concurrently performed motor tasks, reflected in modulations of sensorimotor rhythms.
The repeated use of navigation assistance systems leads to decreased processing of the environment. Previous studies demonstrated that auditory references to landmarks in navigation instructions can improve incidental spatial knowledge acquisition when driving a single route through an unfamiliar virtual environment. Based on these results, three experiments were conducted to investigate the generalizability and ecological validity of incidental landmark and route knowledge acquisition induced by landmark-based navigation instructions. In the first experiment, spatial knowledge acquisition was tested after watching an interactive video showing the navigation of a real-world urban route. A second experiment investigated incidental spatial knowledge acquisition during assisted navigation when participants walked through the same real-world urban environment. The third experiment tested the acquired spatial knowledge two weeks after participants had walked through the real-world environment. All experiments demonstrated better performance in a cued-recall task for participants navigating with landmark-based navigation instructions as compared to standard instructions. Different levels of detail in the landmark-based instructions impacted landmark recognition depending on the delay between navigation and test. The results replicated improved landmark and route knowledge when using landmark-based navigation instructions, emphasizing that auditory landmark augmentation enhances incidental spatial knowledge acquisition and that this enhancement generalizes to real-life settings. This research paves the way for navigation assistants that, instead of impairing spatial knowledge acquisition, incidentally foster the acquisition of landmark and route knowledge during everyday navigation.
Spatial navigation is a complex cognitive process based on multiple senses that are integrated and processed by a wide network of brain areas. Previous studies have revealed the retrosplenial complex (RSC) to be modulated in a task-related manner during navigation. However, these studies restricted participants’ movement to stationary setups, which might have impacted heading computations due to the absence of vestibular and proprioceptive inputs. Here, we present evidence of human RSC theta oscillations (4–8 Hz) in an active spatial navigation task in which participants ambulated from one location to several other points while the position of a landmark and the starting location were updated. The results revealed pronounced theta power in the RSC during heading changes but not during translational movements, indicating that physical rotations induce human RSC theta activity. This finding provides potential evidence of head-direction computation in the RSC of healthy humans during active spatial navigation.
Objective: We demonstrate and discuss the use of mobile electroencephalography (EEG) for neuroergonomics. The technical state of the art as well as measures and cognitive concepts are systematically addressed. Background: Modern work is increasingly characterized by information processing. Therefore, the examination of mental states, mental load, and cognitive processing during work is becoming increasingly important for ergonomics. Results: Mobile EEG makes it possible to measure mental states and processes under real-life conditions and can be used for various research questions in cognitive neuroergonomics. Besides measures in the frequency domain, which have a long tradition in the investigation of mental fatigue, task load, and task engagement, new approaches such as blink-evoked potentials render event-related analyses of the EEG possible even during unrestricted behavior. Conclusion: Mobile EEG has become a valuable tool for evaluating mental states and mental processes objectively during work. The main advantage of this technique is that working environments do not have to be changed while brain functions are measured systematically at work; moreover, the workflow is unaffected by such neuroergonomic approaches.
Coupling behavioral measures and brain imaging in naturalistic, ecological conditions is key to comprehend the neural bases of spatial navigation. This highly-integrative function encompasses sensorimotor, cognitive, and executive processes that jointly mediate active exploration and spatial learning. However, most neuroimaging approaches in humans are based on static, motion-constrained paradigms and they do not account for all these processes, in particular multisensory integration. Following the Mobile Brain/Body Imaging approach, we aimed to explore the cortical correlates of landmark-based navigation in actively behaving young adults, solving a Y-maze task in immersive virtual reality. EEG analysis identified a set of brain areas matching state-of-the-art brain imaging literature of landmark-based navigation. Spatial behavior in mobile conditions additionally involved sensorimotor areas related to motor execution and proprioception usually overlooked in static fMRI paradigms. Expectedly, we located a cortical source in or near the posterior cingulate, in line with the engagement of the retrosplenial complex in spatial reorientation. Consistent with its role in visuo-spatial processing and coding, we observed an alpha power desynchronization while participants gathered visual information. We also hypothesized behavior-dependent modulations of the cortical signal during navigation. Despite finding few differences between the encoding and retrieval phases of the task, we identified transient time-frequency patterns attributed, for instance, to attentional demand, as reflected in the alpha/gamma range, or memory workload in the delta/theta range. We confirmed that combining mobile high-density EEG and biometric measures can help unravel the brain structures and the neural modulations subtending ecological landmark-based navigation.
Learning to navigate uncharted terrain is a key cognitive ability that emerges as a deeply embodied process, with eye movements and locomotion proving most useful for sampling the environment. We studied healthy human participants during active spatial learning of room-scale virtual reality (VR) mazes. In the invisible maze task, participants wearing a wireless EEG headset were free to explore their surroundings, given only the objective to build and foster a mental spatial representation of their environment. Spatial uncertainty was resolved by touching otherwise invisible walls that were briefly rendered visible inside VR, similar to finding one's way in the dark. We showcase the capabilities of Mobile Brain/Body Imaging (MoBI) using VR, demonstrating several analysis approaches based on general linear models (GLMs) to reveal behavior-dependent brain dynamics. After confirming spatial learning via drawn sketch maps, we used motion capture to image spatial exploration behavior, which described a shift from initial exploration to subsequent exploitation of the mental representation. Using independent component analysis, the current work specifically targeted oscillations in response to wall touches, reflecting isolated spatial learning events arising in deep posterior EEG sources located in the retrosplenial complex. Single-trial regression identified significant modulation of alpha oscillations by immediate, egocentric exploration behavior: alpha power decreased when participants encountered novel walls, as well as with increasing walking distance between subsequent touches. We conclude that alpha oscillations play a prominent role during the egocentric evidencing of allocentric spatial hypotheses.
Action is a medium of collecting sensory information about the environment, which in turn is shaped by architectural affordances. Affordances characterize the fit between the physical structure of the body and capacities for movement and interaction with the environment, thus relying on sensorimotor processes associated with exploring the surroundings. Central to sensorimotor brain dynamics, the attentional mechanisms directing the gating function for sensory signals share neuronal resources with the motor-related processes necessary for inferring the external causes of sensory signals. Such a predictive coding approach suggests that sensorimotor dynamics are sensitive to architectural affordances that support or suppress specific kinds of actions for an individual. However, how architectural affordances relate to the attentional mechanisms underlying the gating function for sensory signals remains unknown. Here we demonstrate that event-related desynchronization of alpha-band oscillations in parieto-occipital and medio-temporal regions covaries with architectural affordances. Source-level time-frequency analysis of data recorded in a motor-priming Mobile Brain/Body Imaging experiment revealed strong event-related desynchronization of the alpha band originating from the posterior cingulate complex, the parahippocampal region, and the occipital cortex. Our results first contribute to the understanding of how the brain resolves architectural affordances relevant to behaviour. Second, they indicate that the alpha band originating from the occipital cortex and parahippocampal region covaries with architectural affordances before participants interact with the environment, whereas during the interaction the posterior cingulate cortex and motor areas dynamically reflect the affordable behaviour. We conclude that sensorimotor dynamics reflect behaviour-relevant features in the designed environment.
Spatial navigation is one of the fundamental cognitive functions central to survival in most animals. Studies in humans investigating the neural foundations of spatial navigation traditionally use stationary, desktop protocols, revealing the hippocampus, parahippocampal place area (PPA), and retrosplenial complex to be involved in navigation. However, brain dynamics while freely navigating the real world remain poorly understood. To address this issue, we developed a novel paradigm, the Audiomaze, in which participants freely explore a room-sized virtual maze while EEG is recorded synchronized to motion capture. Participants (n = 16) were blindfolded and explored different mazes, each in three successive trials, using their right hand as a probe to ‘feel’ for virtual maze walls. When their hand neared a virtual wall, they received directional noise feedback. Evidence for spatial learning includes a shortening of the time spent and an increase in movement velocity as the same maze was repeatedly explored. Theta-band EEG power in or near the right lingual gyrus, the posterior portion of the PPA, decreased across trials, potentially reflecting spatial learning. Effective connectivity analysis revealed directed information flow from the lingual gyrus to the midcingulate cortex, which may indicate an updating process that integrates spatial information with future action. In conclusion, we found behavioral evidence of navigational learning in a sparse-AR environment, together with a neural correlate of navigational learning near the lingual gyrus.
Conducting neuroscience research in the real world remains challenging because of movement- and environment-related artifacts as well as a lack of control over stimulus presentation. The present study overcame these restrictions by using mobile electroencephalography (EEG) and data-driven analysis approaches during a real-world navigation task. During assisted navigation through an unfamiliar city environment, participants received either standard or landmark-based auditory navigation instructions, while EEG data were recorded continuously. Saccade- and blink-events as well as gait-related EEG activity were extracted from sensor-level data. Brain activity associated with the navigation task was identified by subsequent source-based cleaning of non-brain activity and unfolding of overlapping event-related potentials. Navigators who received landmark-based instructions, compared to those receiving standard instructions, showed blink-related brain potentials with higher amplitudes at fronto-central leads in a time window starting 300 ms after blinks, which was accompanied by improved spatial knowledge acquisition tested in follow-up spatial tasks. Replicating the improved spatial knowledge acquisition of previous experiments, the present study revealed eye-movement-related brain potentials pointing to the involvement of higher cognitive processes and increased processing of incoming information during periods of landmark-based instruction. The study thus revealed neuronal correlates underlying visuo-spatial information processing during assisted real-world navigation, providing a new analysis approach for neuroscientific research on freely moving participants in uncontrollable real-world environments.
Detecting and correcting incorrect body movements is an essential part of everyday interaction with one's environment. The human brain has a constant monitoring system that controls and adjusts our actions according to our surroundings. However, when our brain's predictions about a planned action do not match the sensory inputs resulting from that action, cognitive conflict occurs. Much is known about cognitive conflict in 1D/2D environments; however, less is known about the role of movement characteristics in cognitive conflict in 3D environments. Hence, we devised an object selection task in a virtual reality environment to test how the velocity of hand movements impacts brain responses. From a series of analyses of EEG recordings synchronized with motion capture, we found that the velocity of the participants' hand movements modulated the brain's processing of proprioceptive feedback during the task and induced a prediction error negativity. Additionally, this prediction error negativity originates in the anterior cingulate cortex and is itself modulated by the ballistic phase of the hand's movement. These findings suggest that velocity is an essential component of integrating hand movements with visual and proprioceptive information during interactions with real and virtual objects.
Conducting neuroscience research in the real world remains challenging because of movement- and environment-related artifacts as well as a lack of control over stimulus presentation. The present study demonstrated that it is possible to investigate the neuronal correlates underlying visuo-spatial information processing during real-world navigation. Using mobile EEG allowed for the extraction of saccade- and blink-related potentials as well as gait-related EEG activity. In combination with source-based cleaning of non-brain activity and unfolding of overlapping event-related activity, the brain activity of naturally behaving humans was revealed even in a complex and dynamic city environment. Keywords: natural cognition, mobile EEG, blink-related potentials, saccade-related potentials, gait artifacts
The augmentation of landmarks in auditory navigation instructions has been shown to improve incidental spatial knowledge acquisition during assisted navigation. Here, two driving simulator experiments are reported that replicated this effect even when a three-week delay was added between navigation and the spatial tasks and the degree of detail in the provided landmark information was varied. Performance in free and cued recall of landmarks, and in driving the route again without assistance, demonstrated increased landmark and route knowledge when navigating with landmark-based compared to standard instructions. The results emphasize that small changes to existing navigation systems can foster spatial knowledge acquisition during everyday navigation.
Detecting and correcting incorrect body movements is an essential part of everyday interaction with one's environment. The human brain provides a monitoring system that constantly controls and adjusts our actions according to our surroundings. However, when our brain's predictions about a planned action do not match the sensory inputs resulting from that action, cognitive conflict occurs. Much is known about cognitive conflict in 1D/2D environments; however, less is known about the role of movement characteristics associated with cognitive conflict in 3D environments. Hence, we devised an object selection task in a virtual reality (VR) environment to test how the velocity of hand movements impacts human brain responses. From a series of analyses of EEG recordings synchronized with motion capture, we found that the velocity of the participants' hand movements modulated the brain's response to proprioceptive feedback during the task and induced a prediction error negativity (PEN). Additionally, the PEN originates in the anterior cingulate cortex and is itself modulated by the ballistic phase of the hand's movement. These findings suggest that velocity is an essential component of integrating hand movements with visual and proprioceptive information during interactions with real and virtual objects.
Behavioral findings suggest that aging alters the involvement of cortical sensorimotor mechanisms in postural control. However, corresponding accounts of the underlying neural mechanisms remain sparse, especially regarding the extent to which these mechanisms are affected during more demanding tasks. Here, we set out to elucidate cortical correlates of altered postural stability in younger and older adults. 3D body motion tracking and high-density electroencephalography (EEG) were measured while 14 young adults (mean age = 24 years, 43% women) and 14 older adults (mean age = 77 years, 50% women) performed a continuous balance task under four different conditions. Manipulations were applied to the base of support (either regular or tandem (heel-to-toe) stance) and to visual input (either a static visual field or dynamic optic flow). Standing in tandem, the more challenging position, resulted in increased sway for both age groups, but only in the older adults was this effect exacerbated by optic flow relative to the static visual display. These changes in stability were accompanied by neuro-oscillatory modulations localized to midfrontal and parietal regions. A cluster of electro-cortical sources localized to the supplementary motor area showed a large increase in theta spectral power (4-7 Hz) during tandem stance, and this modulation was much more pronounced in the younger group. Additionally, the older group displayed widespread mu (8-12 Hz) and beta (13-30 Hz) suppression as balance tasks placed more demands on postural control.
Action is a medium of collecting sensory information about the environment, which in turn is shaped by architectural affordances. Affordances characterize the fit between the physical structure of the body and capacities for movement and interaction with the environment, thus relying on sensorimotor processes associated with exploring the surroundings. Central to sensorimotor brain dynamics, the attentional mechanisms directing the gating function for sensory signals share neuronal resources with the motor-related processes necessary for inferring the external causes of sensory signals. Such a predictive coding approach suggests that sensorimotor dynamics are sensitive to architectural affordances that support or suppress specific kinds of actions for an individual. However, how architectural affordances relate to the attentional mechanisms underlying the gating function for sensory signals remains unknown. Here we demonstrate that event-related desynchronization of alpha-band oscillations in parieto-occipital and medio-temporal regions covaries with architectural affordances. Source-level time-frequency analysis of data recorded in a motor-priming Mobile Brain/Body Imaging experiment revealed strong event-related desynchronization of the alpha band originating from the posterior cingulate complex and bilateral parahippocampal areas. First, our results contribute to the understanding of how the brain resolves architectural affordances relevant to behaviour. Second, they indicate that the alpha band originating from the posterior cingulate complex covaries with architectural affordances before participants interact with the environment. During the interaction, the bilateral parahippocampal areas dynamically reflect the affordable behaviour as perceived through the visual system. We conclude that sensorimotor dynamics are tuned to processing behaviour-relevant features of the designed environment.
Recent developments in EEG hardware and analysis approaches allow for recordings in both stationary and mobile settings. Irrespective of the experimental setting, EEG recordings are contaminated with noise that has to be removed before the data can be functionally interpreted. Independent component analysis (ICA) is a commonly used tool to remove artifacts such as eye movement, muscle activity, and external noise from the data and to analyze activity on the level of EEG effective brain sources. While the effectiveness of filtering the data, one key preprocessing step to improve the decomposition, has been investigated previously, no study thus far has compared the different requirements of mobile and stationary experiments regarding the preprocessing for ICA decomposition. We thus evaluated how movement in EEG experiments, the number of channels, and the high-pass filter cutoff during preprocessing influence the ICA decomposition. We found that for commonly used settings (stationary experiment, 64 channels, 0.5 Hz filter), the ICA results are acceptable. However, high-pass filters of up to 2 Hz cutoff frequency should be used in mobile experiments, and more channels require a higher cutoff to reach an optimal decomposition. Fewer brain ICs were found in mobile experiments, but cleaning the data with ICA proved to be important and functional even with low-density setups of as few as 16 channels. Based on the results, we provide guidelines for different experimental settings that improve the ICA decomposition.
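The high-pass-filter guideline above can be illustrated with a minimal, self-contained sketch. This is not the study's actual pipeline: the two-channel simulated data, the 4th-order Butterworth filter, and scikit-learn's FastICA (a stand-in for the EEG toolchain) are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.decomposition import FastICA

fs = 250.0                      # hypothetical sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)    # 20 s of simulated data
rng = np.random.default_rng(0)

# Two simulated "brain" sources plus a slow drift mimicking sweat/movement artifacts.
sources = np.c_[np.sin(2 * np.pi * 10 * t),          # 10 Hz alpha-like source
                np.sign(np.sin(2 * np.pi * 3 * t))]  # 3 Hz square-wave source
mixing = np.array([[1.0, 0.5], [0.4, 1.0]])
drift = 5 * np.sin(2 * np.pi * 0.1 * t)[:, None]     # 0.1 Hz drift on both channels
eeg = sources @ mixing.T + drift + 0.05 * rng.standard_normal((t.size, 2))

# 2 Hz zero-phase high-pass, the cutoff recommended above for mobile recordings.
sos = butter(4, 2.0, btype="highpass", fs=fs, output="sos")
eeg_filtered = sosfiltfilt(sos, eeg, axis=0)

# ICA on the filtered copy separates the sources without the drift dominating.
ica = FastICA(n_components=2, random_state=0)
unmixed = ica.fit_transform(eeg_filtered)
print(unmixed.shape)  # (5000, 2)
```

In a real pipeline the same idea applies: compute the ICA decomposition on an aggressively high-pass-filtered copy of the continuous data, then apply the resulting unmixing matrix back to the data filtered for the actual analysis.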
Neuroscience of dance is an emerging field with important applications related to health and well‐being, as dance has shown potential to foster adaptive neuroplasticity and is increasingly popular as a therapeutic activity or adjunct therapy for people living with conditions such as Parkinson’s and Alzheimer’s disease. However, the multimodal nature of dance presents challenges to researchers aiming to identify the mechanisms involved when dance is used to combat neurodegeneration or support healthy aging. Requiring simultaneous engagement of motor and cognitive domains, dancing involves the coordination of systems responsible for timing, memory, and spatial learning. Studies on dance have to date relied primarily on pre/post assessments of brain dynamics and structure or on studies of expertise, as traditional brain imaging modalities restrict participant movement to avoid movement‐related artifacts. In this paper, we describe the process of designing and implementing a study that uses Mobile Brain/Body Imaging (MoBI) to investigate real‐time changes in brain dynamics and behaviour during the process of learning and performing a novel dance choreography. We show the potential for new insights to emerge from the coordinated collection of movement and brain‐based data, and the implications of these for an emerging field whose medium is motion.
When walking in our natural environment, we often solve additional cognitive tasks. This increases the demand on resources needed for both the cognitive and motor systems, resulting in Cognitive‐Motor Interference (CMI). A large portion of neurophysiological investigations of CMI took place in static settings, emphasizing experimental rigor at the expense of ecological validity. As a more ecologically valid alternative to treadmill and desktop‐based set‐ups for investigating CMI, we developed a dual‐task walking scenario in virtual reality (VR) combined with Mobile Brain/Body Imaging (MoBI). We aimed to investigate how brain dynamics are modulated by dual‐task overground walking with an additional task in the visual domain. Participants performed a visual discrimination task in VR while standing (single‐task) and walking overground (dual‐task). Even though walking had no impact on performance in the visual discrimination task, a P3 amplitude reduction along with changes in power spectral densities (PSDs) was observed for discriminating visual stimuli during dual‐task walking. These results reflect an impact of walking on the parallel processing of visual stimuli even when the cognitive task is particularly easy. This standardized and easily modifiable VR paradigm helps to systematically study CMI, allowing researchers to control the complexity of additional tasks in different sensory modalities. Future investigations implementing an improved virtual design with more challenging cognitive and motor tasks will have to investigate the roles of both cognition and motion, allowing for a better understanding of the functional architecture of attention reallocation between cognitive and motor systems during active behavior.
Adaptively switching between different tasks while in locomotion is a fundamental prerequisite of modern daily life. The cognitive processes underlying dual tasking have been investigated extensively using EEG. Due to technological restrictions, however, this was not possible for dual-task scenarios that include locomotion. New technological opportunities have made this possible, and cognitive-motor interference can now be studied even in outside-the-lab environments. In the present study, participants carried out a cognitive-motor interference task in which they responded to cued, auditory task-switch stimuli while performing locomotor tasks of increasing complexity (standing, walking, traversing an obstacle course). We observed increased subjective workload ratings as well as decreased behavioral performance with increased movement complexity and cognitive task difficulty. A higher movement load was accompanied by a decrease in parietal P2, N2, and P3 amplitudes and in frontal Theta power. A higher cognitive load, on the other hand, was reflected by decreased frontal CNV amplitudes. Additionally, a connectivity analysis using inter-site phase coherence revealed that higher movement as well as cognitive task difficulty had an impairing effect on fronto-parietal connectivity. In conclusion, subjective ratings, behavioral performance, and electrophysiological results indicate that fewer cognitive resources were available to be deployed towards the execution of the cognitive task during locomotion compared to standing still. Connectivity results also show a scarcity of attentional resources when switching tasks during the highest movement complexity condition. In summary, all findings indicate a central role of attentional control in cognitive-motor dual tasking and an inherent limitation of cognitive resources.
The goal of this study was to determine whether the cortical responses elicited by whole‐body balance perturbations were similar to established cortical markers of action monitoring. Postural changes imposed by balance perturbations elicit a robust negative potential (N1) and a brisk increase of theta activity in the electroencephalogram recorded over midfrontal scalp areas. Because action monitoring is a cognitive function proposed to detect errors and initiate corrective adjustments, we hypothesized that the possible cortical markers of action monitoring during balance control (N1 potential and theta rhythm) scale with perturbation intensity and the eventual execution of reactive stepping responses (as opposed to feet‐in‐place responses). We recorded high‐density electroencephalogram from eleven young individuals, who participated in an experimental balance assessment. The participants were asked to recover balance following anteroposterior translations of the support surface at various intensities, while attempting to maintain both feet in place. We estimated source‐resolved cortical activity using independent component analysis. Combining time‐frequency decomposition and group‐level general linear modeling of single‐trial responses, we found a significant relation of the interaction between perturbation intensity and stepping responses with multiple cortical features from the midfrontal cortex, including the N1 potential, and theta, alpha, and beta rhythms. Our findings suggest that the cortical responses to balance perturbations index the magnitude of a deviation from a stable postural state to predict the need for reactive stepping responses. We propose that the cortical control of balance may involve cognitive control mechanisms (i.e., action monitoring) that facilitate postural adjustments to maintain postural stability.
The central and peripheral effects of caffeine remain debated. We tested whether increases in endurance performance after caffeine ingestion occurred together with changes in primary motor cortex (MC) and prefrontal cortex (PFC) activation, neuromuscular efficiency (NME), and electroencephalography-electromyography coherence (EEG-EMG coherence). Twelve participants performed a time-to-task-failure isometric contraction at 70% of maximal voluntary contraction after ingesting 5 mg/kg of caffeine (CAF) or placebo (PLA), in a crossover and counterbalanced design. MC (Cz) and PFC (Fp1) EEG alpha waves and vastus lateralis (VL) muscle EMG were recorded throughout the exercise. EEG-EMG coherence was calculated through magnitude squared coherence analysis in the MC EEG gamma band (CI > 0.0058). Moreover, NME was obtained as the force-VL EMG ratio. When compared to PLA, CAF improved the time to task failure (p = 0.003, d = 0.75), but reduced activation in MC and PFC throughout the exercise (p = 0.027, d = 1.01 and p = 0.045, d = 0.95, respectively). Neither NME (p = 0.802, d = 0.34) nor EEG-EMG coherence (p = 0.628, d = 0.21) differed between CAF and PLA. The results suggest that CAF improved muscular performance through a modified central nervous system (CNS) response rather than through alterations in peripheral muscle or central-peripheral coupling.
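Magnitude squared coherence of the kind used in the EEG-EMG analysis can be computed with standard tools. The following is a minimal sketch on simulated signals, not the study's pipeline: the 35 Hz shared drive, sampling rate, noise levels, and Welch segment length are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import coherence

fs = 1000.0                     # hypothetical sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)    # 10 s of simulated data
rng = np.random.default_rng(1)

# Simulated cortical (EEG) and muscle (EMG) signals sharing a 35 Hz gamma-band drive.
common = np.sin(2 * np.pi * 35 * t)
eeg = common + 0.5 * rng.standard_normal(t.size)        # stand-in for the Cz channel
emg = 0.8 * common + 0.5 * rng.standard_normal(t.size)  # stand-in for the VL EMG

# Welch-based magnitude squared coherence: |Pxy|^2 / (Pxx * Pyy), bounded in [0, 1].
f, cxy = coherence(eeg, emg, fs=fs, nperseg=1024)
gamma = (f >= 30) & (f <= 45)
print(f"peak gamma-band coherence: {cxy[gamma].max():.2f}")
```

In an analysis like the one described above, only frequency bins whose coherence exceeds a significance threshold (the abstract reports CI > 0.0058) would be retained.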
For over two centuries, the wheelchair has been one of the most common assistive devices for individuals with locomotor impairments, with few modifications over that time. Wheelchair control is a complex motor task that increases both physical and cognitive workload. New wheelchair interfaces, including power-assisted devices, can further augment users by reducing the required physical effort; however, little is known about the implications for mental effort. In this study, we adopted a neuroergonomic approach utilizing mobile and wireless functional near-infrared spectroscopy (fNIRS) based brain monitoring of physically active participants. 48 volunteers (30 novice and 18 experienced) self-propelled a wheelchair with and without a PowerAssist interface in both simple and complex realistic environments. Results indicated that, as expected, the more complex and difficult environment led to lower task performance, complemented by higher prefrontal cortex activity compared to the simple environment. The PowerAssist feature produced significantly lower brain activation than traditional manual control, but only for novices. Expertise led to a lower brain activation pattern within the middle frontal gyrus, complemented by performance metrics indicating lower cognitive workload. These results confirm the potential of the neuroergonomic approach and show that direct measures of neural activity can complement and enhance task performance metrics. We conclude that the cognitive workload benefits of PowerAssist are most relevant for new users and difficult settings. The approach demonstrated here can be utilized in future studies to enable greater personalization and understanding of mobility interfaces within real-world dynamic environments.