
Simon Geirnaert
- Doctor of Engineering Science (PhD)
- KU Leuven
About
- 37 Publications
- 4,082 Reads
- 361 Citations
Publications (37)
Attention is fundamental for classroom learning, yet measuring it during learning remains challenging. Behavioural measures are often subjective and lack the sensitivity to capture online momentary fluctuations in attention. This experiment examined the potential of EEG (electroencephalography)-based neural envelope tracking (NET) as a measure of au...
We present a wireless EEG sensor network consisting of two miniature, wireless, behind-the-ear sensor nodes with a size of 2 cm x 3 cm, each containing a 4-channel EEG amplifier and a wireless radio. Each sensor operates independently, each having its own sampling clock, wireless radio, and local reference electrode, with full electrical isolation...
Correlation-based auditory attention decoding (AAD) algorithms exploit neural tracking mechanisms to determine listener attention among competing speech sources via, e.g., electroencephalography signals. The correlation coefficients between the decoded neural responses and encoded speech stimuli of the different speakers then serve as AAD decision...
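The correlation-based decision rule described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `pearson` and `aad_decision` are hypothetical helper names, and the sketch assumes the neural response has already been decoded into a candidate envelope.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x)) * sqrt(sum((b - my) ** 2 for b in y))
    return num / den

def aad_decision(decoded_envelope, candidate_envelopes):
    """Correlation-based AAD rule: attribute attention to the speaker whose
    speech envelope correlates most with the envelope decoded from EEG."""
    scores = [pearson(decoded_envelope, env) for env in candidate_envelopes]
    return max(range(len(scores)), key=scores.__getitem__)
```

In practice the correlation coefficients themselves (not only the winning index) serve as the AAD decision statistics, as the abstract notes.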
Many studies have demonstrated that auditory attention to natural speech can be decoded from EEG data. However, most studies focus on selective auditory attention decoding (sAAD) with competing speakers, while the dynamics of absolute auditory attention decoding (aAAD) to a single target remain underexplored. The goal of aAAD is to measure the deg...
Objective:
Selective auditory attention decoding (AAD) algorithms process brain data such as electroencephalography to decode to which of multiple competing sound sources a person attends. Example use cases are neuro-steered hearing aids or communication via brain-computer interfaces (BCI). Recently, it has been shown that it is possible to train...
In a recent paper, we presented the KU Leuven audiovisual, gaze-controlled auditory attention decoding (AV-GC-AAD) dataset, in which we recorded electroencephalography (EEG) signals of participants attending to one out of two competing speakers under various audiovisual conditions. The main goal of this dataset was to disentangle the direction of g...
Auditory attention decoding (AAD) is the process of identifying the attended speech in a multi-talker environment using brain signals, typically recorded through electroencephalography (EEG). Over the past decade, AAD has undergone continuous development, driven by its promising application in neuro-steered hearing devices. Most AAD algorithms are...
Selective attention enables humans to efficiently process visual stimuli by enhancing important locations or objects and filtering out irrelevant information. Locating visual attention is a fundamental problem in neuroscience with potential applications in brain-computer interfaces. Conventional paradigms often use synthetic stimuli or static image...
Various new brain-computer interface technologies or neuroscience applications require decoding stimulus-following neural responses to natural stimuli such as speech and video from, e.g., electroencephalography (EEG) signals. In this context, generalized canonical correlation analysis (GCCA) is often used as a group analysis technique, which allows...
Objective. In this study, we use electroencephalography (EEG) recordings to determine whether a subject is actively listening to a presented speech stimulus. More precisely, we aim to discriminate between an active listening condition, and a distractor condition where subjects focus on an unrelated distractor task while being exposed to a speech st...
Objective. Electroencephalography (EEG) is a widely used technology for recording brain activity in brain-computer interface (BCI) research, where understanding the encoding-decoding relationship between stimuli and neural responses is a fundamental challenge. Recently, there has been growing interest in encoding-decoding natural stimuli in a single-tr...
Objective. Spatial auditory attention decoding (Sp-AAD) refers to the task of identifying the direction of the speaker to which a person is attending in a multi-talker setting, based on the listener’s neural recordings, e.g. electroencephalography (EEG). The goal of this study is to thoroughly investigate potential biases when training such Sp-AAD...
More than 5% of the world’s population suffers from disabling hearing loss. Hearing aids and cochlear implants are crucial for improving their quality of life. However, current hearing technology does not work well in cocktail party scenarios, where several people talk simultaneously. This is mainly because the hearing device does not know which sp...
Objective
In this study, we use electroencephalography (EEG) recordings to determine whether a subject is actively listening to a presented speech stimulus. More precisely, we aim to discriminate between an active listening condition, and a distractor condition where subjects focus on an unrelated distractor task while being exposed to a speech sti...
Abstract
Objective
Electroencephalography (EEG) is a widely used technology for recording brain activity in brain-computer interface (BCI) research, where understanding the encoding-decoding relationship between stimuli and neural responses is a fundamental challenge. Recently, there has been growing interest in encoding-decoding natural stimuli in a...
Auditory attention decoding (AAD) algorithms process brain data such as electroencephalography (EEG) in order to decode to which of multiple competing sound sources a person attends. Example use cases are neuro-steered hearing aids or brain-computer interfaces (BCI) for patients with severe motor or cognitive impairments. Recently, it has been sh...
Objective
Spatial auditory attention decoding (Sp-AAD) refers to the task of identifying the direction of the speaker to which a person is attending in a multi-talker setting, based on the listener’s neural recordings, e.g., electroencephalography (EEG). The goal of this study is to thoroughly investigate potential biases when training such Sp-AAD...
In brain-computer interface or neuroscience applications, generalized canonical correlation analysis (GCCA) is often used to extract correlated signal components in the neural activity of different subjects attending to the same stimulus. This allows quantifying the so-called inter-subject correlation or boosting the signal-to-noise ratio of the st...
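As a simplified illustration of the inter-subject correlation quantity mentioned in this abstract (not the GCCA algorithm itself, which jointly solves a generalized eigenvalue problem over all subjects), one can average the pairwise Pearson correlations of an extracted signal component across subjects. All names below are hypothetical:

```python
from itertools import combinations
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x)) * sqrt(sum((b - my) ** 2 for b in y))
    return num / den

def inter_subject_correlation(components):
    """Average pairwise correlation of one extracted signal component across
    subjects -- a simplified stand-in for the GCCA-based ISC measure."""
    pairs = list(combinations(components, 2))
    return sum(pearson(a, b) for a, b in pairs) / len(pairs)
```

In a GCCA pipeline, `components` would be the per-subject projections onto a shared correlated subspace rather than raw EEG channels.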
Many problems require the selection of a subset of variables from a full set of optimization variables. The computational complexity of an exhaustive search over all possible subsets of variables is, however, prohibitively expensive, necessitating more efficient but potentially suboptimal search strategies. We focus on sparse variable selection for...
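A standard example of such a cheaper-but-potentially-suboptimal strategy is greedy forward selection, which grows the subset one variable at a time instead of enumerating all subsets. This is an illustrative sketch, not the specific method proposed in the paper:

```python
def greedy_forward_selection(variables, score, k):
    """Greedily grow a subset of size k, at each step adding the variable that
    most improves score(subset). Cost is O(k * |variables|) score evaluations,
    versus the exponential cost of an exhaustive search over all subsets."""
    selected = []
    remaining = list(variables)
    for _ in range(k):
        best = max(remaining, key=lambda v: score(selected + [v]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

For a submodular objective this greedy strategy carries approximation guarantees; in general it may miss the optimal subset, which is the trade-off the abstract alludes to.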
One in five people experiences hearing loss. The World Health Organization estimates that this number will increase to one in four by 2050. Luckily, effective hearing devices such as hearing aids and cochlear implants exist with advanced speaker enhancement algorithms that can significantly improve the quality of life of people suffering from hearing loss...
The goal of auditory attention decoding (AAD) is to determine to which speaker out of multiple competing speakers a listener is attending based on the brain signals recorded via, e.g., electroencephalography (EEG). AAD algorithms are a fundamental building block of so-called neuro-steered hearing devices that would allow identifying the speaker tha...
People suffering from hearing impairment often have difficulties participating in conversations in so-called cocktail party scenarios where multiple individuals are simultaneously talking. Although advanced algorithms exist to suppress background noise in these situations, a hearing device also needs information about which speaker a user actually...
When multiple speakers talk simultaneously, a hearing device cannot identify which of these speakers the listener intends to attend to. Auditory attention decoding (AAD) algorithms can provide this information by, for example, reconstructing the attended speech envelope from electroencephalography (EEG) signals. However, these stimulus reconstructi...
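A minimal sketch of such a stimulus-reconstruction (backward) decoder, assuming a zero-lag linear model trained via the normal equations; `solve` and `train_decoder` are hypothetical names, and real AAD decoders additionally use time lags and regularization:

```python
def solve(A, b):
    """Solve the linear system A x = b by Gaussian elimination with
    partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def train_decoder(eeg, envelope):
    """Least-squares backward model: channel weights that best reconstruct
    the attended speech envelope from multichannel EEG (normal equations)."""
    n_ch = len(eeg)
    A = [[sum(a * b for a, b in zip(eeg[i], eeg[j])) for j in range(n_ch)]
         for i in range(n_ch)]
    b = [sum(a * e for a, e in zip(eeg[i], envelope)) for i in range(n_ch)]
    return solve(A, b)
```

The reconstructed envelope (weights applied to new EEG) is then correlated against each speaker's envelope to make the attention decision.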
Objective:
Noise reduction algorithms in current hearing devices lack information about the sound source a user attends to when multiple sources are present. To resolve this issue, they can be complemented with auditory attention decoding (AAD) algorithms, which decode the attention using electroencephalography (EEG) sensors. State-of-the-art AAD a...
Auditory attention decoding (AAD) algorithms decode the auditory attention from electroencephalography (EEG) signals, which capture the neural activity of the listener. Such AAD methods are believed to be an important ingredient of so-called neuro-steered assistive hearing devices. For example, traditional AAD decoders make it possible to detect to which...
People suffering from hearing impairment often have difficulties participating in conversations in so-called `cocktail party' scenarios with multiple people talking simultaneously. Although advanced algorithms exist to suppress background noise in these situations, a hearing device also needs information on which of these speakers the user actually...
Objective
Noise reduction algorithms in current hearing devices lack information about the sound source a user attends to when multiple sources are present. To resolve this issue, they can be complemented with auditory attention decoding (AAD) algorithms, which decode the attention using electroencephalography (EEG) sensors. State-of-the-art AAD al...
In a multi-speaker scenario, a hearing aid lacks information on which speaker the user intends to attend to, and therefore it often mistakenly treats the attended speaker as noise while enhancing an interfering speaker. Recently, it has been shown that it is possible to decode the attended speaker from brain activity, e.g., recorded by electroencephalography se...