Fig 5 - available via license: CC BY
Experiment 2a: Example of DFT of time-domain pupil modulation results for one participant and for each frequency condition in which eye gaze was directed at the right circle. https://doi.org/10.1371/journal.pone.0226991.g005

Source publication
Article
This study develops an information-input interface in which a visual stimulus targeted by a user’s eye gaze is identified based on the pupillary light reflex to periodic luminance modulations of the object. Experiment 1 examines how pupil size changes in response to periodic luminance modulation of visual stimuli, and the results are used to develo...

Context in source publication

Context 1
... goal of experiment 2a was to identify which circle the participants gazed at, based on pupil-size oscillations measured with a visual display. Fig 5 shows typical PSD results obtained at 0.01 Hz frequency intervals. These results were constructed from the DFT of 7 s of time-series pupil-response data for one participant gazing at the right circle. ...
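
To make the DFT/PSD step concrete, the following is a minimal Python sketch, not the authors' code, assuming a 60 Hz pupil sampling rate (not stated in the excerpt): zero-padding a 7 s pupil-diameter trace before the DFT yields the 0.01 Hz frequency bins mentioned above, and the largest bin reveals the dominant pupil-oscillation frequency.

```python
import numpy as np

FS = 60.0           # assumed pupil sampling rate (Hz); not stated in the excerpt
DURATION_S = 7.0    # analysis window length from the source publication

def pupil_psd(pupil_trace: np.ndarray, fs: float = FS):
    """One-sided periodogram of a pupil-diameter trace with 0.01 Hz bins."""
    x = pupil_trace - np.mean(pupil_trace)   # remove the mean pupil size (DC)
    n_fft = int(round(fs / 0.01))            # zero-pad so bin width = 0.01 Hz
    spectrum = np.fft.rfft(x, n=n_fft)
    psd = (np.abs(spectrum) ** 2) / (fs * len(x))
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    return freqs, psd

# Synthetic example: pupil oscillation at 1.06 Hz plus measurement noise.
t = np.arange(0, DURATION_S, 1.0 / FS)
trace = 0.2 * np.sin(2 * np.pi * 1.06 * t) + 0.05 * np.random.randn(t.size)
freqs, psd = pupil_psd(trace)
print(f"peak at {freqs[np.argmax(psd)]:.2f} Hz")
```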

Similar publications

Article
In daily life, our emotions are often elicited by a multimodal environment, mainly visual and auditory stimuli. Therefore, it is crucial to investigate the symmetrical characteristics of emotional responses to pictures and sounds. In this study, we aimed to elucidate the relationship of attentional states to emotional unimodal stimuli (pictures or...
Article
Several papers by Eckhard Hess from the 1960s and 1970s report that the pupils dilate or constrict according to the interest value, arousing content, or mental demands of visual stimuli. However, Hess mostly used small sample sizes and undocumented luminance control. In a first experiment (N = 182) and a second preregistered experiment (N = 147), w...

Citations

... Hybrid BCI can improve the classification accuracy, increase the number of commands, and shorten the detection time of the BCI system by combining two or more patterns (at least one of which is a brain signal) (Hong and Jawad, 2017). Recently, pupillary responses (PR), such as the pupillary light reflex, have been used as the second pattern in addition to EEG due to their low user burden, non-invasiveness, and lack of required training (Muto et al., 2020). Pupil diameter changes steadily with the illuminance of the observed object to regulate the amount of light entering the eye (Crawford, 1936; Woodhouse, 1975; Woodhouse and Campbell, 1975), and the modulation frequency of PR is synchronized with the luminance-modulation frequency of the visual stimulus. The amplitude of PR decreases as the stimulation frequency increases (Muto et al., 2020), and a consistent, measurable PR can be induced at flicker frequencies up to 2.3 Hz (Naber et al., 2013). Compared with the detection of gaze position (Yao et al., 2018), the measurement of PR does not require system calibration. ...

... They also established a binary communication based on PR and achieved an accuracy of 100% at 10 bpm and 96% at 15 bpm. Muto et al. (2020) realized an information-input interface with 12 options (from 0.58 to 1.90 Hz, with an interval of 0.12 Hz) based on PR. The averaged power spectral density (PSD) peak decreased with increasing luminance-modulation frequency, and the averaged classification accuracy reached 85.4% with a data length of 7 s. ...
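
As a rough illustration of how such a PR-based classifier can work, here is a minimal Python sketch (not the code from Muto et al. (2020)): it scores a pupil trace against the 12 candidate modulation frequencies quoted above by taking the PSD value at the DFT bin nearest each candidate and picking the largest. The 60 Hz sampling rate, the periodogram settings, and the nearest-bin scoring are assumptions.

```python
import numpy as np
from scipy.signal import periodogram

# The 12 candidate luminance-modulation frequencies quoted above
# (0.58 to 1.90 Hz in 0.12 Hz steps).
CANDIDATES = np.round(np.arange(0.58, 1.91, 0.12), 2)

def classify_gazed_stimulus(pupil_trace: np.ndarray, fs: float = 60.0) -> float:
    """Return the candidate frequency whose PSD value is largest.

    Sampling rate and scoring rule are illustrative assumptions,
    not the exact pipeline of the cited study.
    """
    # nfft chosen so that the frequency bins are 0.01 Hz wide.
    freqs, psd = periodogram(pupil_trace, fs=fs, nfft=int(fs / 0.01),
                             detrend="constant")
    scores = [psd[np.argmin(np.abs(freqs - f))] for f in CANDIDATES]
    return float(CANDIDATES[int(np.argmax(scores))])
```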
Article
The brain-computer interface (BCI) based on steady-state visual evoked potential (SSVEP) has been widely studied due to its high information transfer rate (ITR), minimal user training, and wide subject applicability. However, there are also disadvantages such as visual discomfort and “BCI illiteracy.” To address these problems, this study proposes to use low-frequency stimulations (12 classes, 0.8–2.12 Hz with an interval of 0.12 Hz), which can simultaneously elicit visual evoked potential (VEP) and pupillary response (PR), to construct a hybrid BCI (h-BCI) system. Classification accuracy was calculated using both supervised and unsupervised methods, and the hybrid accuracy was obtained using a decision fusion method to combine the information of VEP and PR. Online experimental results from 10 subjects showed that the averaged accuracy was 94.90 ± 2.34% (data length 1.5 s) for the supervised method and 91.88 ± 3.68% (data length 4 s) for the unsupervised method, corresponding to ITRs of 64.35 ± 3.07 bits/min (bpm) and 33.19 ± 2.38 bpm, respectively. Notably, the hybrid method achieved higher accuracy and ITR than either VEP or PR alone for most subjects, especially at short data lengths. Together with the subjects’ feedback on user experience, these results indicate that the proposed h-BCI with the low-frequency stimulation paradigm is more comfortable and favorable than the traditional SSVEP-BCI paradigm using the alpha frequency range.
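
The abstract does not spell out the decision-fusion rule, so the sketch below is only an illustrative stand-in: a weighted sum of min-max-normalized per-class scores from the VEP and PR classifiers, with the weight w_vep as a free parameter. Function and parameter names are hypothetical.

```python
import numpy as np

def fuse_vep_pr(vep_scores: np.ndarray, pr_scores: np.ndarray,
                w_vep: float = 0.5) -> int:
    """Pick the stimulus class with the highest fused score.

    vep_scores / pr_scores hold one score per candidate class from the
    VEP and PR classifiers. The min-max normalization and weighted-sum
    rule are illustrative assumptions, not the fusion rule of the study.
    """
    def minmax(s: np.ndarray) -> np.ndarray:
        rng = np.ptp(s)
        return (s - s.min()) / rng if rng > 0 else np.zeros_like(s, dtype=float)

    fused = w_vep * minmax(vep_scores) + (1.0 - w_vep) * minmax(pr_scores)
    return int(np.argmax(fused))

# Example: 12 candidate classes, dummy scores from each modality.
rng = np.random.default_rng(0)
print(fuse_vep_pr(rng.random(12), rng.random(12)))
```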
Article
Objective: Recently, pupil oscillations synchronized with steady visual stimuli were used as input for an interface. The proposed system, inspired by a brain-computer interface based on steady-state visual evoked potentials, does not require contact with the participant. However, the pupil oscillation mechanism limits the stimulus frequency to 2.5 Hz or less, making it hard to enhance the information transfer rate.

Approach: Here, we compared multiple stimulation conditions, called monocular-single, monocular-superposed, and binocular-independent, to increase the information transfer rate (ITR) of the pupil-oscillation-based interface. The binocular-independent condition stimulates each eye at a different frequency and mixes them through the user's stereoscopic visual perception. The monocular-superposed condition stimulates both eyes with a mixed signal of two different frequencies. We selected the shape of the stimulation signal, evaluated the amount of spectral leakage in the monocular-superposed and binocular-independent conditions, and compared the power spectral density at the stimulation frequencies. Moreover, 5, 10, and 15 patterns of stimuli were classified in each condition.

Main results: A square wave, which evokes an efficient pupil response, was used as the stimulus. Spectral leakage at the beat frequency was higher in the monocular-superposed condition than in the binocular-independent one. The power spectral density at the stimulation frequencies was greatest in the monocular-single condition. Finally, we could classify 15 stimulus patterns, with ITRs of 14.4 (binocular-independent, using 5 frequencies), 14.5 (monocular-superposed, using 5 frequencies), and 23.7 bits/min (monocular-single, using 15 frequencies). There were no significant differences between the binocular-independent and monocular-superposed conditions.

Significance: This paper shows a way to increase the number of stimuli that can be displayed simultaneously without decreasing the ITR, even when only a small number of frequencies are available. This could allow an interface based on pupil oscillation to be provided to a wider range of users.
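
ITR figures such as those above are conventionally computed with the Wolpaw formula from the number of selectable stimuli, the classification accuracy, and the time per selection. A minimal Python sketch of that calculation follows; the example numbers at the end are illustrative and not taken from the study.

```python
import math

def itr_bits_per_min(n_classes: int, accuracy: float, trial_time_s: float) -> float:
    """Wolpaw ITR in bits per minute for an n-class selection task."""
    p = accuracy
    if p <= 1.0 / n_classes:
        return 0.0                      # at or below chance, no information
    bits = math.log2(n_classes) + p * math.log2(p)
    if p < 1.0:
        bits += (1.0 - p) * math.log2((1.0 - p) / (n_classes - 1))
    return bits * 60.0 / trial_time_s

# Illustrative only (not the study's parameters): 15 stimuli, 90% accuracy,
# 9 s per selection -> roughly 20 bits/min.
print(round(itr_bits_per_min(15, 0.90, 9.0), 1))
```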