
EEG Acquisition During the VR Administration of Resting State, Attention, and Image Recognition Tasks: A Feasibility Study


Abstract

The co-acquisition of EEG in a virtual environment (VE) would give researchers and clinicians the opportunity to acquire EEG data with millisecond-level temporal resolution while participants performed VE activities. This study integrated Advanced Brain Monitoring’s (ABM) X-24t EEG hardware with the HTC Vive VR headset and investigated EEG differences in tasks delivered in two modalities: VE and a desktop computer.
Preliminary Technical Report
United Neuroscience
Fitzwilliam House, Suite 11
3-4 Upper Pembroke Street
Dublin 2, Ireland
Submitted to:
Ajay Verma, MD, PhD
United Neuroscience
Points of Contact:
Greg Rupp, Senior Researcher
Advanced Brain Monitoring, Inc.
2237 Faraday Ave., Suite 100
Carlsbad, CA 92008
760-720-0099 ext. 6009

Chris Berka, CEO
Advanced Brain Monitoring, Inc.
2237 Faraday Ave., Suite 100
Carlsbad, CA 92008
760-720-0099
Glossary of terms:
AMP: Alertness and Memory Profiler, ABM’s collection of neurocognitive tasks
RSEO: Resting State Eyes Open Task
RSEC: Resting State Eyes Closed Task
Bright Environment: An environment with normal levels of light. This was used in both the Desktop and VR AMP acquisitions.
Dark Environment: A darkened environment with extremely low levels of light. This was only feasible in the VR environment.
3CVT: 3-Choice Vigilance Task
SIR: Standard Image Recognition Task
VE: Virtual Environment
OEM: Original Equipment Manufacturer
Target Effect: A difference in LPP amplitude for Target trials compared to NonTarget trials for a given task
LPP: Late Positive Potential
1. Introduction
A growing number of virtual reality (VR) devices are commercially available and deliver an
immersive experience to the end user. This technology was developed primarily for gaming
and has reached a price point that is affordable for much of the population.
The potential application of these VR systems to the research and clinical domains is
considerable. In particular, the co-acquisition of EEG and VR would give researchers and
clinicians the opportunity to record EEG data with millisecond-level temporal resolution
while the participant is immersed in a virtual environment (VE). Today’s VR technology can
deliver a realistic VE, allowing comparison of EEG metrics and evoked responses between
VEs and similar presentations on desktop or other 2D displays.
The amount of research involving EEG and VR is growing steadily. The efficacy of using
VEs to enhance simulated environments has been demonstrated in a wide range of fields,
from psycholinguistics to education to driving simulation. In one study, VR and EEG were
integrated into a Virtual Reality Therapy System that used EEG to measure VR-induced
changes in brain states associated with relaxation. It is clear that VR, used in combination
with EEG, can be a tool for delivering and creating novel and enhanced environments
designed for assessments or therapeutic interventions.
Advanced Brain Monitoring (ABM) has developed a collection of neurocognitive tests
collectively referred to as the Alertness and Memory Profiler (AMP). AMP is a
neurocognitive testbed designed to be administered while simultaneously recording
EEG/ECG with a wireless and portable headset that acquires 20 channels of EEG and
transmits digitized data via Bluetooth to a laptop. AMP consists of sustained attention,
working memory, and recognition memory tests that are automatically delivered and time-
locked to the EEG to generate event-related potentials (ERPs). In prior work, ABM
demonstrated the potential utility of ERP measures acquired with AMP as sensitive early-
stage biomarkers of the cognitive deficits associated with Mild Cognitive Impairment (MCI).
In the present study, the integration potential of ABM’s B-Alert X24t headset with the HTC
Vive VR headset was assessed. This was done in two ways: the physical integration of the
hardware and software, and the EEG measures collected during the AMP tasks. The aim was
to achieve millisecond-level time syncing between the EEG and VR headsets, ensure comfort
for the user, and ascertain whether there were significant EEG differences between tasks
administered on a desktop computer and tasks administered in the VR environment.
2. Materials and Methods
2.1 Participants
Ten healthy individuals ranging from 24 to 75 years old were recruited and screened to serve
as participants. Data were acquired at the study site at either 9:00 am or 1:00 pm in order to
minimize any effects of the diurnal dip. The VR AMP and Desktop AMP acquisitions were
done at least one week apart (in no particular order) to minimize any practice and/or memory
effects. Participants were asked a variety of questions about their VR experience before
being administered the VR AMP testbed, during the 10-minute break in the middle of the
session, and immediately following the completion of VR AMP.
2.2 VR EEG Acquisitions
The participants were administered a version of AMP adapted to the HTC Vive™ and Steam
software (Figure 1). The participants were given a few minutes to acclimate to the virtual
reality room, which simulated the room in which the acquisition was occurring.
The protocol for the VR AMP acquisition was as follows:
- Resting State Eyes Open in a VE with normal indoor lighting (RSEO-Bright, 5 minutes)
- Resting State Eyes Closed in a VE with normal indoor lighting (RSEC-Bright, 5 minutes)
- 3-Choice Vigilance Task (3CVT, test of sustained attention, 20 minutes)
- 10-minute break
- Standard Image Recognition Task (SIR, test of image recognition memory, 7 minutes)
- Resting State Eyes Open in a VE without any lighting (RSEO-Dark, 5 minutes)
- Resting State Eyes Closed in a VE without any lighting (RSEC-Dark, 5 minutes)

Figure 1: The B-Alert X24t / HTC Vive integrated system
Figure 2: The 3CVT task in the VE. A NonTarget image is being presented.
The 3CVT requires participants to discriminate frequent Target stimuli (an upright triangle)
from infrequent NonTarget and Interference stimuli (an inverted triangle and a diamond;
Figure 2). After the SIR task (administered in the VE only), the original EO and EC tasks
were repeated within a darkened virtual room.
The participants were given a 10-minute break from wearing the HTC Vive™ after the
3CVT to mitigate any potential motion sickness or issues with visuospatial awareness. Once
the break was over, the headset was put back on the participant, and an impedance check was
performed to ensure good connection.
2.3 Desktop EEG Acquisition
The procedure for the desktop EEG AMP acquisition was the same, except that there were
no RSEO-Dark or RSEC-Dark tasks, as light could not be effectively controlled in the room
where the acquisitions occurred.
The protocol for the Desktop AMP acquisition was as follows:
- Resting State Eyes Open with normal indoor lighting (RSEO-Bright, 5 minutes)
- Resting State Eyes Closed with normal indoor lighting (RSEC-Bright, 5 minutes)
- 3-Choice Vigilance Task (3CVT, test of sustained attention, 20 minutes)
- 10-minute break
- Standard Image Recognition Task (SIR, test of image recognition memory, 7 minutes)
3. Results
3.1. Evaluation of B-Alert/Vive Headsets for Comfort and Signal Quality
The HTC Vive™ VR head strap was placed directly on top of the participant’s head over the
B-Alert™ sensor strip. The Vive was applied after the EEG headset was fully set up and
operational. Impedance checks were done after the initial placement of the EEG headset, and
a second time after the VR headset was applied. For all 10 participants, the impedance value
of each individual channel did not increase after the application of the VR headset. Therefore,
the original equipment manufacturer (OEM) VR head strap was adequate for AMP data
acquisition.
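The before/after impedance comparison can be sketched as a simple quality check. The channel names and the 40 kΩ acceptance threshold below are illustrative assumptions, not ABM specifications:

```python
# Sketch of the before/after impedance comparison described above.
# Channel names and the 40 kOhm threshold are illustrative assumptions.

def impedance_report(before_kohm, after_kohm, threshold_kohm=40.0):
    """Compare per-channel impedances (kOhm) before and after donning the VR headset."""
    report = {}
    for ch, before in before_kohm.items():
        after = after_kohm[ch]
        report[ch] = {
            "increased": after > before,
            "above_threshold": after > threshold_kohm,
        }
    return report

# Hypothetical example: no channel worsens after the Vive is applied.
before = {"Fz": 12.0, "Cz": 9.5, "POz": 15.0}
after = {"Fz": 12.0, "Cz": 9.0, "POz": 14.5}
report = impedance_report(before, after)
```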
The other factor contributing to comfort of the VR headset was the type of face padding
used. Five types of face pads varying in size, shape, and material were obtained (two OEM,
three third-party). Using age, gender, and pupil distance as variables, we attempted to
determine which face pad was best for each participant. Despite our best efforts, no pattern
emerged across the different face pads. We therefore took a utilitarian approach, using the
face pad that was comfortable for the majority of participants; if a participant did not like
that pad, we switched it out and tried a different one until the participant was sufficiently
comfortable.
The participant was instructed to look forward at all times, as they would in a desktop
acquisition. This raised the concern that the Vive (1.04 pounds) would place too much
weight on a participant’s face for an extended period of time. A pulley system was therefore
installed to hold the device up and relieve some of the pressure on the participant’s face
(Figure 3). If a participant felt the weight of the device, the cables from the Vive were
pulled back to take the weight off the face.
The same furniture and computer equipment were used throughout all acquisitions. The
chair was not a factor in participant comfort during any of the desktop acquisitions.
However, there was a concern that very short or very tall people would perceive the virtual
environment differently (the VE had the same design for all participants regardless of
height). This concern turned out to be unfounded, as people as short as 5’1” and as tall as
6’4” felt comfortable in the VE.
The signal quality during the VR EEG acquisitions was comparable to that of the desktop
acquisitions. In the ERP tasks, the average number of clean trials was 209 for desktop 3CVT
and 40 for desktop SIR, versus 177 for VR 3CVT and 37 for VR SIR. The average number
of clean epochs during desktop resting state (bright only; EO and EC) was 186 and 204,
versus 154 and 194 for VR resting state. Table 1 summarizes these data.
Table 1: Average number of clean epochs (or clean trials for ERP tasks) compared
between VR and Desktop during each task

Task          Desktop   VR
RSEO Bright   186       154
RSEC Bright   204       194
3CVT          209       177
SIR           40        37
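As an illustration of how clean-epoch counts like those above can be derived, here is a minimal sketch using a peak-to-peak amplitude criterion. The 100 µV threshold is an assumption for illustration only; ABM's actual artifact-identification pipeline is not described in this report:

```python
import numpy as np

# Illustrative sketch: count "clean" epochs with a simple peak-to-peak
# amplitude criterion. The 100 uV threshold is an assumption.
def count_clean_epochs(epochs_uv, max_ptp_uv=100.0):
    """epochs_uv: array of shape (n_epochs, n_channels, n_samples), in microvolts."""
    ptp = epochs_uv.max(axis=-1) - epochs_uv.min(axis=-1)  # per epoch, per channel
    clean = (ptp < max_ptp_uv).all(axis=1)                 # clean on every channel
    return int(clean.sum())

rng = np.random.default_rng(0)
data = rng.normal(0.0, 10.0, size=(200, 20, 256))  # simulated low-noise epochs
data[:5, :, 100] += 500.0                          # inject artifact spikes into 5 epochs
```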
3.2. Programming the VR Environment and Achieving Millisecond-Level Temporal Synchronization
The VR environment was programmed using Unreal Engine 4 (UE4) to closely
approximate the look and feel of the room in which the Desktop AMP acquisitions took
place. This was done in order to deliver a realistic VE, which has been shown to evoke
different EEG measures compared to non-realistic VEs.
Using ABM’s OCEAN task player (the stimulus-delivery package of ABM’s proprietary
acquisition software) as a model, VR programmers used UE4 to send event markers that
correspond with the presentation and removal of each stimulus, as well as whether or not the
user answered correctly, if at all. The AMP tasks themselves were also programmed to
replicate ABM’s desktop version. The event files were then manually inspected to make sure
all event codes were marked properly and that the timing matched each task-specific
protocol. After manual inspection of event files from several separate acquisitions, it was
determined that millisecond-level temporal synchronization had been achieved.

Figure 3: Pulley system
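The manual event-file inspection can also be sketched as an automated check. The onset-list format and the 1 ms tolerance below are illustrative assumptions, not details of OCEAN or the UE4 implementation:

```python
# Automated sketch of the event-file timing check: verify that consecutive
# stimulus-onset markers match the protocol's expected interval within a
# millisecond-level tolerance. Field layout and tolerance are assumptions.

def check_marker_timing(onsets_ms, expected_interval_ms, tol_ms=1.0):
    """Return indices of onsets whose inter-onset interval deviates beyond tolerance."""
    bad = []
    for i in range(1, len(onsets_ms)):
        interval = onsets_ms[i] - onsets_ms[i - 1]
        if abs(interval - expected_interval_ms) > tol_ms:
            bad.append(i)
    return bad

onsets = [0.0, 2000.3, 4000.1, 6010.0]  # hypothetical log; last stimulus ~10 ms late
```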
3.3. EEG Differences Between Desktop and VR Resting State in a Bright Environment
In all three resting state conditions (Desktop Bright, VR Bright, and VR Dark), alpha power
was, as expected, significantly greater (p < 0.01) at all channels during RSEC (eyes closed)
compared to RSEO (eyes open) (Figure 4a-c).
Figure 4: Alpha Power during Resting State Eyes Closed compared to Resting State eyes open for three different
modalities: (a) Desktop Bright (b) VR Bright (c) VR Dark
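As an illustration of the alpha-power measure underlying these comparisons, here is a minimal sketch using a Welch PSD. The 8-13 Hz band and 256 Hz sampling rate are conventional assumptions, not parameters reported here:

```python
import numpy as np
from scipy.signal import welch

# Sketch: alpha band power integrated from a Welch PSD.
# Band limits and sampling rate are conventional assumptions.
def alpha_power(signal_uv, fs=256.0, band=(8.0, 13.0)):
    freqs, psd = welch(signal_uv, fs=fs, nperseg=int(fs * 2))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(np.trapz(psd[mask], freqs[mask]))  # integrated band power

fs = 256.0
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(1)
eyes_open = rng.normal(0.0, 1.0, t.size)                       # broadband noise
eyes_closed = eyes_open + 5.0 * np.sin(2 * np.pi * 10.0 * t)   # add a 10 Hz alpha rhythm
```

The eyes-closed trace, carrying an added 10 Hz rhythm, yields a larger integrated alpha power than the eyes-open trace, mirroring the RSEC-versus-RSEO effect reported above.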
When comparing resting state EEG between the VR and Desktop acquisitions, there was no
significant within-subject difference in alpha power (Figure 5).
Figure 5: Alpha power during VR (left) and Desktop (center) RSEO. The rightmost figure displays a difference map.
3.4. Resting State Alpha Power in a Bright vs. Dark VR Environment
In VR during the eyes open task, alpha power was significantly greater (p < 0.01) in the dark
environment than in the bright environment at all channels (Figure 6a). There was no
significant difference between the dark and bright environments with eyes closed (Figure 6b).
Figure 6: Differences in Alpha power between Dark and Light VEs for (a) RSEO and (b) RSEC
3.5. 3CVT and SIR Performance
There were no significant performance differences between Desktop and VR 3CVT (Figure
7a) or between Desktop and VR SIR (Figure 7b).
Figure 7: Performance during the (a) 3CVT and (b) SIR task. The performance metrics shown are reaction time (left),
percent correct (center), and F-measure (right).
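The F-measure shown in Figure 7 is conventionally the harmonic mean of precision and recall over Target responses; the sketch below assumes that conventional definition, as the exact formula used by AMP is not specified in this report:

```python
# Conventional F-measure from hit, false-alarm, and miss counts.
# Assumes the standard precision/recall definition, not AMP's exact formula.
def f_measure(hits, false_alarms, misses):
    precision = hits / (hits + false_alarms) if hits + false_alarms else 0.0
    recall = hits / (hits + misses) if hits + misses else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, 90 hits with 10 false alarms and 10 misses gives precision = recall = 0.9, so F = 0.9.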
3.6. EEG ERP Differences Between Desktop and VR During 3CVT
Figure 8a displays the grand-average waveforms for the two modalities examined in this study.
Upon visual inspection, the 3CVT ERP waveforms from the Desktop and VR acquisitions do not
appreciably differ. Figure 8b displays one channel from one participant as an example in which
the ERP waveforms elicited in each modality exhibit a high degree of similarity.
Figure 8: (a) Grand Average plots for all participants for Desktop and VR 3CVT Target (frequent) trials. (b) A single
channel from participant 1
A within-subject comparison of the amplitude of the late positive potential (LPP) was computed
for both Desktop and VR 3CVT Target (frequent) trials. There was no significant difference at
any channel between VR and Desktop acquisitions (Figure 9).
Figure 9: LPP differences between VR and Desktop 3CVT Target (frequent) trials.
Additionally, the target effect, which is the difference in LPP amplitude between Target and
NonTarget trials for each participant, is shown in Figure 10. There was no significant difference
in target effect between the VR and Desktop 3CVT.
Figure 10: Target effect for VR and Desktop 3CVT
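The target-effect computation described above can be sketched as follows. The 400-800 ms LPP window is a common convention assumed here, not a parameter reported in this study:

```python
import numpy as np

# Sketch of the LPP "target effect": mean amplitude in a late time window
# (400-800 ms is an assumed convention), Target minus NonTarget.
def lpp_amplitude(erp_uv, times_ms, window=(400.0, 800.0)):
    mask = (times_ms >= window[0]) & (times_ms <= window[1])
    return float(erp_uv[mask].mean())

def target_effect(target_erp, nontarget_erp, times_ms):
    return lpp_amplitude(target_erp, times_ms) - lpp_amplitude(nontarget_erp, times_ms)

# Synthetic ERPs on a 250 Hz epoch timeline (-200 to 1000 ms)
times = np.arange(-200.0, 1000.0, 4.0)
target = np.where((times >= 400) & (times <= 800), 6.0, 0.0)      # 6 uV LPP
nontarget = np.where((times >= 400) & (times <= 800), 2.0, 0.0)   # 2 uV LPP
```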
3.7. EEG ERP Differences Between Desktop and VR During SIR
Grand-average waveforms (Figure 11a) demonstrate that, upon visual inspection, VR and
Desktop SIR ERPs do not meaningfully differ. A single channel from a single participant is
shown in Figure 11b in order to demonstrate the high degree of similarity seen in most
channels for most participants.
Figure 11: (a) Grand Average plots for all participants for Desktop and VR SIR Target (infrequent) trials. (b) A single
channel from participant 1
The late positive potential (LPP) was computed for both Desktop and VR SIR Target
(infrequent) trials. There was no significant difference at any channel between VR and Desktop
acquisitions (Figure 12).
Figure 12: LPP differences between VR and Desktop SIR Target (infrequent) trials.
Additionally, the target effect, which is the difference in LPP between Target and NonTarget
trials for each participant, is shown in Figure 13. There was no significant difference in
target effect between the VR and Desktop SIR.
Figure 13: Target effect for VR and Desktop SIR
4. Discussion
This project demonstrated that ABM’s B-Alert X24t hardware and acquisition software could be
successfully integrated with the HTC Vive (programmed using Unreal Engine 4) to allow high
quality EEG data to be acquired with time-locked VR stimulus delivery with temporal resolution
at the millisecond level. As expected, a within-subjects analysis revealed no significant
differences in the EEG measures between Desktop and VR AMP acquisitions.
This preliminary evaluation suggests the integrated systems could prove useful for numerous
clinical and research applications. However, there are several issues that will need to be
addressed prior to widespread adoption of VR-EEG. The HTC Vive weighs 1.04 pounds and
exerts significant pressure on a subject’s face, causing discomfort and in some cases neck strain.
For this reason, Neurons Inc. and ABM determined that 30 minutes was an optimal acquisition
time, maximizing recording time while minimizing participant discomfort before a break is
needed. The device is bulky, with several heavy wires, making it somewhat difficult to be
mobile while wearing the device. An improvement in form factor of the HTC Vive could
ameliorate these problems and increase the comfort of the participant, allowing for longer
acquisitions. There is also potential for further integration with the ABM EEG headsets. For
example, the capability of integrating the EEG electrodes with the Vive headset would result in a
lighter and more comfortable system.
Thus, the HTC Vive appears to be a promising tool for future research as it allows participants to
be fully immersed in a given environment without changing the core neurological signatures
associated with attention, learning, and memory. There are many potential applications of VR
given its realistic 3-D experiences, for example the measurement of optic flow (OF). OF
refers to the perception and integration of the visual field as one moves through a physical
environment. Specifically, “radial optic flow” involves the perception of motion around a central
visual field when the scenery changes due to self-directed movement or due to objects
moving towards or away from the observer.
OF is critical for all aspects of navigation, including judging spaces, distances, and speed of
movement, and is essential for avoiding collisions. Basic OF perception emerges within the
first weeks after birth, with radial OF perception presenting at 12 months of age in healthy
children. Psychophysical tests of OF
have been developed to assess the thresholds for perception of radial, horizontal and random
motions. The thresholds for perception of radial OF (even when presented on a computer to a
non-moving participant) are highly correlated with navigational abilities.
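Psychophysical OF thresholds of this kind are commonly estimated with an adaptive staircase; below is a minimal 1-up/2-down sketch, which converges near the 70.7%-correct point. The step size and starting level are illustrative, and this report does not specify the procedure used in the cited tests:

```python
# Minimal 1-up/2-down adaptive staircase for threshold estimation.
# Step size, starting level, and the coherence-like 0..1 scale are
# illustrative assumptions, not details from the cited OF tests.
def two_down_one_up(responses, start=0.5, step=0.05, floor=0.0, ceil=1.0):
    """responses: iterable of booleans (correct/incorrect), in trial order.
    Returns the sequence of stimulus levels (e.g., motion coherence)."""
    levels = [start]
    consecutive_correct = 0
    for correct in responses:
        level = levels[-1]
        if correct:
            consecutive_correct += 1
            if consecutive_correct == 2:          # two correct in a row -> harder
                level = max(floor, level - step)
                consecutive_correct = 0
        else:                                      # one error -> easier
            level = min(ceil, level + step)
            consecutive_correct = 0
        levels.append(level)
    return levels

levels = two_down_one_up([True, True, True, False])
```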
Using fMRI, MEG, and EEG, OF has been shown to be processed via the dorsal visual stream
pathway, including striate and extrastriate areas V1-3 and V3a as well as V5/MT, MST, V6, and
the posterior parietal region. These systems, together with V5a, interact to control eye
movements that assess changes in OF as one moves through the environment. Several EEG
metrics have been validated for detecting OF, including event-related potential (ERP)
components (primarily the N170, N200, and P200) and event-related changes in power (theta,
beta, and alpha) indicative of responses to changes in OF.
Radial OF has been extensively studied in association with neurodegenerative disease,
particularly Alzheimer’s disease (AD) and amnestic MCI (aMCI). Psychophysical testing
confirmed OF impairment in AD but not in aMCI; however, neurophysiological studies
revealed sensory (early components P100 and N130), perceptual, and cognitive ERP component
differences for OF tasks in AD, and differences in the cognitive ERP components for aMCI,
during a variety of experimental paradigms designed to stimulate OF. The primary finding
across studies of AD patients is longer latency and reduced amplitude of the P200. In one study
of aMCI patients the latency of the P200 was inversely correlated with MMSE suggesting a link
to cognitive decline.
These ERP changes are also highly correlated with impairments in navigational abilities in AD.
Interestingly, not all AD or MCI patients show these ERP changes, and there have been several
reports that older adults without neurodegenerative disease show ERP changes in OF and
impaired navigational capabilities. Additional research is required to further delineate the
differences across AD, MCI and controls, associate the findings with regional differences in
structure or function (MRI, fMRI and PET) and to better understand the relationships with
cognitive decline and navigational abilities.
There is currently no standardized method for testing patients for OF, although OF deficiencies
may be inferred from a comprehensive ophthalmological work-up. Multiple computerized tests
and OF training protocols have been reported in the literature; however, there is no consensus
on the size, color, shape, or other dimensions of the visual stimuli used to evoke OF.
All these approaches are administered to non-mobile participants and thus can only evaluate the
perceived movements within the display. In contrast, the virtual reality (VR) environment offers
a rich array of potential test environments for simulating the 3-dimensional characteristics of
true OF as encountered during locomotion.
5. Planning Next Steps
The team at ABM is continuing to review the literature on OF and neurodegeneration and is
beginning to brainstorm possible VR tasks that can be programmed for simultaneous EEG/ERP
measurement. That said, without a “gold standard” test, it will be difficult to confirm that the
VR version actually evaluates the OF construct. However, the team could create several VR
programs in which participants move through the VE and confront moving objects, with the
goal of simulating realistic OF.
The conceptual protocol for VR-OF would include 4-5 different scenarios:
- Walking through a crowd of people in an urban environment
- Navigating through water filled with moving fish
- Hitting balls coming towards the participant
- Navigating a shopping environment; this VE could also provide an opportunity to assess
“skill in activities of daily living” by embedding various tests of attention and memory, such
as recalling shopping lists, finding items in the store, and adding up dollar amounts prior to
check-out
Additionally, a meeting with HTC to review product design goals would be very helpful in
planning and strategizing next steps.
1. Baka, E., et al., An EEG-based Evaluation for Comparing the Sense of Presence between
Virtual and Physical Environments, in Proceedings of Computer Graphics International
2018. ACM: Bintan Island, Indonesia. p. 107-116.
2. Tromp, J., et al. Combining EEG and virtual reality: The N400 in a virtual environment. in
the 4th edition of the Donders Discussions (DD, 2015). 2015. Nijmegen, Netherlands.
3. Lin, C.-T., et al., EEG-based assessment of driver cognitive responses in a dynamic virtual-
reality driving environment. IEEE Transactions on Biomedical Engineering, 2007. 54(7): p.
4. Makransky, G., T.S. Terkildsen, and R.E. Mayer, Adding immersive virtual reality to a
science lab simulation causes more presence but less learning. Learning and Instruction,
5. Slobounov, S.M., et al., Modulation of cortical activity in 2D versus 3D virtual reality
environments: an EEG study. Int J Psychophysiol, 2015. 95(3): p. 254-60.
6. Slobounov, S.M., E. Teel, and K.M. Newell, Modulation of cortical activity in response to
visually induced postural perturbation: combined VR and EEG study. Neurosci Lett, 2013.
547: p. 6-9.
7. Agyei, S.B., F.R. van der Weel, and A.L. van der Meer, Longitudinal study of preterm and
full-term infants: High-density EEG analyses of cortical activity in response to visual
motion. Neuropsychologia, 2016. 84: p. 89-104.
8. Tata, M.S., et al., Selective attention modulates electrical responses to reversals of optic-
flow direction. Vision Res, 2010. 50(8): p. 750-60.
9. Albers, M.W., et al., At the interface of sensory and motor dysfunctions and Alzheimer's
disease. Alzheimers Dement, 2015. 11(1): p. 70-98.
10. Yamasaki, T., et al., Selective impairment of optic flow perception in amnestic mild
cognitive impairment: evidence from event-related potentials. J Alzheimers Dis, 2012.
28(3): p. 695-708.
11. Kim, N.G., Perceiving Collision Impacts in Alzheimer's Disease: The Effect of Retinal
Eccentricity on Optic Flow Deficits. Front Aging Neurosci, 2015. 7: p. 218.
12. Waninger, S., et al., Event-Related Potentials during Sustained Attention and Memory
Tasks: Utility as Biomarkers for Mild Cognitive Impairment. Alzheimer's & Dementia:
Diagnosis, Assessment & Disease Monitoring, in press.
13. Allison, S. L., Fagan, A. M., Morris, J. C., & Head, D. (2016). Spatial navigation in
preclinical Alzheimer’s disease. J. Alzheimer’s Dis, 52(1), 77–90.
14. Chou, Y. H., Wagenaar, R. C., Saltzman, E., Giphart, J. E., Young, D., Davidsdottir, R., &
Cronin-Golomb, A. (2009). Effects of optic flow speed and lateral flow asymmetry on
locomotion in younger and older adults: A virtual reality study. Journals of Gerontology -
Series B: Psychological Sciences and Social Sciences, 64(2), 222–231.
15. Frenz, H., & Lappe, M. (2005). Absolute travel distance from optic flow. Vision Research,
45(13), 1679–1692.
16. Hort, J., Laczó, J., Vyhnálek, M., Bojar, M., Bureš, J., & Vlček, K. (n.d.). Spatial
navigation deficit in amnestic mild cognitive impairment.
17. Kavcic, V., Vaughn, W., & Duffy, C. J. (2012). Distinct Visual Motion Processing
Impairments in Aging and Alzheimer’s Disease, 51(3), 386–395.
18. Lamontagne, A., Fung, J., McFadyen, B. J., & Faubert, J. (2007). Modulation of walking
speed by changing optic flow in persons with stroke. Journal of NeuroEngineering and
Rehabilitation, 4, 18.
19. Riddell, H., & Lappe, M. (2018). Heading Through a Crowd. Psychological Science.
20. Sparto, P. J., Whitney, S. L., Hodges, L. F., Furman, J. M., & Redfern, M. S. (2004).
Simulator sickness when performing gaze shifts within a wide field of view optic flow
environment: Preliminary evidence for using virtual reality in vestibular rehabilitation.
Journal of NeuroEngineering and Rehabilitation, 1, 110.
21. Tata, M. S., Alam, N., Mason, A. L. O., Christie, G., & Butcher, A. (2010). Selective
attention modulates electrical responses to reversals of optic-flow direction. Vision
Research, 50(8), 750–760.
... Recent advances in the technical specifications of virtual reality (VR) systems have enabled their utility in the study of cognitive load and function (Cipresso et al. 2018;Luong et al. 2019;Kourtesis et al. 2019;Radianti et al. 2020). Researchers have taken advantage of VR systems in a head-mounted display (VR HMD) configuration in their investigation of cognitive tasks as such systems offer the ability to create/control the visual surround and deliver complex stimuli (Harjunen et al. 2017;Rupp et al. 2019;Dey et al. 2019;Tauscher et al. 2019;Tremmel et al. 2019). Electroencephalography (EEG) is a non-invasive method which conveniently acquires brain neuronal activity in human participants by recording voltage differences at the scalp surface in the millisecond range of temporal resolution (Gevins 1998). ...
... Electroencephalography (EEG) is a non-invasive method which conveniently acquires brain neuronal activity in human participants by recording voltage differences at the scalp surface in the millisecond range of temporal resolution (Gevins 1998). Interestingly, studies have shown the feasibility of combining VR HMD with EEG to acquire brain responses to various cognitive tasks: visual oddball (Tauscher et al. 2019), n-back working memory (Dey et al. 2019;Luong et al. 2019;Tremmel et al. 2019), 3-choice vigilance and image recognition (Rupp et al. 2019) and bimodal oddball (Harjunen et al. 2017) tasks. In terms of signal-to-noise levels, Harjunen et al. (2017) demonstrated that, despite their a priori concerns regarding signal interference from the VR HMD system upon EEG signals, there was no difference between the results obtained using a VR HMD system and a desktop computer screen (CS). ...
... EEG data pre-processing EEG data were processed and analysed using EEGLAB (Delorme and Makeig 2004) and ERPLAB (Lopez-Calderon and Luck 2014) toolboxes running on MATLAB (Matlab R2015a, Mathworks, Inc.). Previous investigations of ERP components during VR HMD have used a range of high pass filters from 0.1 to 3 Hz (Harjunen et al. 2017;Rupp et al. 2019;Tauscher et al. 2019;Du et al. 2019;Lier et al. 2020). In our study, we applied a 2nd order infinite impulse response (IIR) Butterworth filter for bandpass filtering to the continuous data with a lower cutoff of 0.5 Hz and higher cutoff of 30 Hz. ...
Full-text available
Virtual reality head mounted display (VR HMD) systems are increasingly utilised in combination with electroencephalography (EEG) in the experimental study of cognitive tasks. The aim of our investigation was to determine the similarities/differences between VR HMD and the computer screen (CS) in response to an n -back working memory task by comparing visual electrophysiological event-related potential (ERP) waveforms (N1/P1/P3 components). The same protocol was undertaken for VR HMD and CS with participants wearing the same EEG headcap. ERP waveforms obtained with the VR HMD environment followed a similar time course to those acquired in CS. The P3 mean and peak amplitudes obtained in VR HMD were not significantly different to those obtained in CS. In contrast, the N1 component was significantly higher in mean and peak amplitudes for the VR HMD environment compared to CS at the frontal electrodes. Significantly higher P1 mean and peak amplitudes were found at the occipital region compared to the temporal for VR HMD. Our results show that successful acquisition of ERP components to a working memory task is achievable by combining VR HMD with EEG. In addition, the higher amplitude N1/P1 components seen in VR HMD indicates the potential utility of this VR modality in the investigation of early ERPs. In conclusion, the combination of VR HMD with EEG/ERP would be a useful approach to advance the study of cognitive function in experimental brain research.
Full-text available
Introduction The objective of the study is to validate attention and memory tasks that elicit event-related potentials (ERPs) for utility as sensitive biomarkers for early dementia. Methods A 3-choice vigilance task designed to evaluate sustained attention and standard image recognition memory task designed to evaluate attention, encoding, and image recognition memory were administered with concurrent electroencephalography acquisition to elicit ERPs in mild cognitive impairment (MCI) and healthy cohorts. ERPs were averaged, and mean or maximum amplitude of components was measured and compared between and within cohorts. Results There was significant suppression of the amplitude of the late positive potential in the MCI cohort compared with the healthy controls during 3-choice vigilance task, predominantly over occipital and right temporal-parietal region, and standard image recognition memory task over all regions. During standard image recognition memory task, diminished performance showed strong correlation with electroencephalography measurements. The old/new effects observed in the healthy controls cohort correlated with performance and were lost in MCI. Discussion ERPs obtained during cognitive tasks may provide a powerful tool for assessing MCI and have strong potential as sensitive and robust biomarkers for tracking disease progression and evaluating response to investigative therapeutics.
Full-text available
Virtual reality (VR) is predicted to create a paradigm shift in education and training, but there is little empirical evidence of its educational value. The main objectives of this study were to determine the consequences of adding immersive VR to virtual learning simulations, and to investigate whether the principles of multimedia learning generalize to immersive VR. Furthermore, electroencephalogram (EEG) was used to obtain a direct measure of cognitive processing during learning. A sample of 52 university students participated in a 2 x 2 experimental cross-panel design wherein students learned from a science simulation via a desktop display (PC) or a head-mounted display (VR); and the simulations contained on-screen text or on-screen text with narration. Across both text versions, students reported being more present in the VR condition (d = 1.30); but they learned less (d = 0.80), and had significantly higher cognitive load based on the EEG measure (d = 0.59). In spite of its motivating properties (as reflected in presence ratings), learning science in VR may overload and distract the learner (as reflected in EEG measures of cognitive load), resulting in less opportunity to build learning outcomes (as reflected in poorer learning outcome test performance).
Although several previous studies have demonstrated navigational deficits in early-stage symptomatic Alzheimer's disease (AD), navigational abilities in preclinical AD have not been examined. The present investigation examined the effects of preclinical AD and early-stage symptomatic AD on spatial navigation performance. Performance on tasks of wayfinding and route learning in a virtual reality environment was examined. Comparisons were made across three groups: clinically normal without preclinical AD (n = 42), clinically normal with preclinical AD (n = 13), and early-stage symptomatic AD (n = 16). Preclinical AD was defined based on cerebrospinal fluid Aβ42 levels below 500 pg/ml. Preclinical AD was associated with deficits in the use of a wayfinding strategy, but not a route learning strategy. Moreover, post-hoc analyses indicated that wayfinding performance had moderate sensitivity and specificity for distinguishing preclinical AD from clinically normal participants. Results also confirmed early-stage symptomatic AD-related deficits in the use of both wayfinding and route learning strategies. The results of this study suggest that aspects of spatial navigation may be particularly sensitive at detecting the earliest cognitive deficits of AD.
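The sensitivity and specificity mentioned above follow the usual confusion-matrix definitions. A minimal sketch with synthetic labels; the participant counts and classification cutoff are illustrative, not the study's.

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    Labels: 1 = preclinical AD, 0 = clinically normal (illustrative)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic classification of 10 participants by a wayfinding-score cutoff
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
print(sensitivity_specificity(y_true, y_pred))  # → (0.75, 0.8333...)
```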
The present study explored whether the optic flow deficit in Alzheimer’s disease (AD) reported in the literature transfers to different types of optic flow, in particular, one that specifies collision impacts with upcoming surfaces, with a special focus on the effect of retinal eccentricity. Displays simulated observer movement over a ground plane toward obstacles lying in the observer’s path. Optical expansion was modulated by varying τ˙. The visual field was masked either centrally (peripheral vision) or peripherally (central vision) using masks ranging from 10° to 30° in diameter in steps of 10°. Participants were asked to indicate whether their approach would result in “collision” or “no collision” with the obstacles. Results showed that AD patients’ sensitivity to τ˙ was severely compromised, not only for central vision but also for peripheral vision, compared to age- and education-matched elderly controls. The results demonstrated that AD patients’ optic flow deficit is not limited to radial optic flow but also includes the optical pattern engendered by τ˙. Further deterioration in the capacity to extract τ˙ to determine potential collisions in conjunction with the inability to extract heading information from radial optic flow would exacerbate AD patients’ difficulties in navigation and visuospatial orientation.
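The abstract above uses τ˙ without defining it. For readers unfamiliar with tau theory, a brief note drawn from the standard optic-flow literature (Lee's tau), not from this study:

```latex
% Tau (Lee, 1976): the ratio of an object's optical angle to its rate
% of optical expansion, which approximates time-to-contact:
\tau(t) = \frac{\theta(t)}{\dot{\theta}(t)}
% Its time derivative, tau-dot, indexes collision severity: under a
% constant-\dot{\tau} approach, \dot{\tau} \ge -0.5 specifies contact
% that can be brought to a controlled stop, while \dot{\tau} < -0.5
% specifies a hard collision.
```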
The ability to navigate through crowds of moving people accurately, efficiently, and without causing collisions is essential for our day-to-day lives. Vision provides key information about one’s own self-motion as well as the motions of other people in the crowd. These two types of information (optic flow and biological motion) have each been investigated extensively; however, surprisingly little research has been dedicated to investigating how they are processed when presented concurrently. Here, we showed that patterns of biological motion have a negative impact on visual-heading estimation when people within the crowd move their limbs but do not move through the scene. Conversely, limb motion facilitates heading estimation when walkers move independently through the scene. Interestingly, this facilitation occurs for crowds containing both regular and perturbed depictions of humans, suggesting that it is likely caused by low-level motion cues inherent in the biological motion of other people.
The current study concerns the identification of possible differences in perception between the virtual and the real world in terms of the effect on brain activity. For this reason, an EEG device was used to capture participants' brain activity in different brain areas during their exposure to different virtual and real environments. The environments considered in this study portray a classroom environment with a scenario suitable for teacher training and professional development. The first aim of the experiment was to investigate whether exposure to a virtual environment can affect motor, cognitive, or other functions of the users, and the second aim was to test whether the graphics content and nature of such an environment can influence the user experience. During the study, the optimum duration of exposure in a virtual environment was also assessed by measuring the time that the brain needs to perceive and adapt to the new state. Our results, consisting of EEG data analyzed in 10 Regions of Interest (ROIs) and responses from an Igroup Presence Questionnaire, indicated a significant difference in each brain area, especially in the frontal and occipital regions, when a participant was exposed to a non-realistic virtual environment compared to a realistic one, highlighting the impact of the selected virtual environment design. The results of the experiment can play an important role in defining the characteristics of optimal virtual environments for virtual reality-based training applications.
Electroencephalogram (EEG) was used to investigate brain electrical activity of full-term and preterm infants at 4 and 12 months of age as a functional response mechanism to structured optic flow and random visual motion. EEG data were recorded with an array of 128-channel sensors. Visual evoked potentials (VEPs) and temporal spectral evolution (TSE, time-dependent amplitude changes) were analysed. VEP results showed a significant improvement in full-term infants’ latencies with age for forwards and reversed optic flow but not random visual motion. Full-term infants at 12 months significantly differentiated between the motion conditions, with the shortest latency observed for forwards optic flow and the longest latency for random visual motion, while preterm infants did not improve their latencies with age, nor were they able to differentiate between the motion conditions at 12 months. Differences in induced activities were also observed, where comparisons between TSEs of the motion conditions and a static non-flow pattern showed desynchronised theta-band activity in both full-term and preterm infants, with synchronised alpha-beta band activity observed only in the full-term infants at 12 months. Full-term infants at 12 months, with a substantial amount of self-produced locomotor experience and neural maturation coupled with faster-oscillating cell assemblies, rely on the perception of structured optic flow to move around efficiently in the environment. The poorer responses in the preterm infants could be related to impairment of the dorsal visual stream specialized in the processing of visual motion.
There is growing empirical evidence that virtual reality (VR) is valuable for education, training, entertainment, and medical rehabilitation due to its capacity to represent real-life events and situations. However, the neural mechanisms underlying behavioral confounds in VR environments are still poorly understood. In two experiments, we examined the effect of fully immersive 3D stereoscopic presentations and less immersive 2D VR environments on brain functions and behavioral outcomes. In Experiment 1, we examined the behavioral and neural underpinnings of spatial navigation tasks using electroencephalography (EEG). In Experiment 2, we examined EEG correlates of postural stability and balance. Our major findings showed that fully immersive 3D VR induced a higher subjective sense of presence along with an enhanced success rate of spatial navigation compared to 2D. In Experiment 1, the power of frontal midline EEG theta (FM-theta) was significantly higher during the encoding phase of route presentation in the 3D VR. In Experiment 2, the 3D VR resulted in greater postural instability and modulation of EEG patterns as a function of 3D versus 2D environments. The findings support the inference that the fully immersive, enriched 3D environment requires the allocation of more brain and sensory resources for cognitive/motor control during both tasks than 2D presentations. This is further evidence that 3D VR tasks using EEG may be a promising approach for performance enhancement and potential applications in clinical/rehabilitation settings.
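The FM-theta measure above is, in essence, a band-power estimate over the frontal midline. A minimal numpy-only sketch that integrates a one-sided FFT periodogram over the conventional 4–8 Hz theta band; the sampling rate, epoch length, and synthetic 6 Hz rhythm are illustrative assumptions.

```python
import numpy as np

def theta_band_power(x, fs, band=(4.0, 8.0)):
    """Estimate theta-band power from a 1-D EEG trace by integrating
    a one-sided FFT periodogram over the band (µV² if x is in µV)."""
    n = len(x)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = (np.abs(np.fft.rfft(x)) ** 2) * 2.0 / (fs * n)  # one-sided PSD
    mask = (freqs >= band[0]) & (freqs <= band[1])
    df = freqs[1] - freqs[0]                               # bin width (Hz)
    return psd[mask].sum() * df

# Synthetic Fz trace: a 10 µV, 6 Hz theta rhythm plus white noise
fs = 256
t = np.arange(0, 10, 1.0 / fs)
rng = np.random.default_rng(1)
eeg = 10.0 * np.sin(2 * np.pi * 6.0 * t) + rng.normal(0.0, 1.0, t.size)

print(theta_band_power(eeg, fs))  # close to A**2 / 2 = 50 µV²
```

In practice such estimates are computed per epoch (e.g., per encoding trial) and compared across conditions, as in the 3D-versus-2D contrast above.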
Recent evidence indicates that sensory and motor changes may precede the cognitive symptoms of Alzheimer's disease (AD) by several years and may signify increased risk of developing AD. Traditionally, sensory and motor dysfunctions in aging and AD have been studied separately. To ascertain the evidence supporting the relationship between age-related changes in sensory and motor systems and the development of AD and to facilitate communication between several disciplines, the National Institute on Aging held an exploratory workshop titled “Sensory and Motor Dysfunctions in Aging and AD.” The scientific sessions of the workshop focused on age-related and neuropathologic changes in the olfactory, visual, auditory, and motor systems, followed by extensive discussion and hypothesis generation related to the possible links among sensory, cognitive, and motor domains in aging and AD. Based on the data presented and discussed at this workshop, it is clear that sensory and motor regions of the central nervous system are affected by AD pathology and that interventions targeting amelioration of sensory-motor deficits in AD may enhance patient function as AD progresses.
There is evidence from EEG studies that unexpected perturbations to standing posture induce a differential modulation of cortical activity compared to self-initiated and/or predictable conditions. However, the neural correlates of whole-body postural responses to visually induced perturbations of standing posture have not been examined. Here we employ a novel experimental paradigm via combined Virtual Reality (VR) and EEG measures to examine the effects of visually induced perturbations on the dynamics of postural responses. Twelve Penn State student-athletes without prior history of neurologic disorders and/or orthopaedic injuries participated in this study. There were no differences in response/reaction time measures between both spatially and temporally unpredictable and fully predictable conditions (p > .05). However, significantly stronger modulation of frontal-central EEG theta activity was present prior to the onset of unpredictable postural perturbations (p < .05). It is postulated that enhanced EEG theta in unpredictable conditions reflects increased effort to recruit additional brain resources to meet the demands of the postural tasks.