About
143 Publications · 22,389 Reads
2,284 Citations
Publications (143)
Future VR environments envision adaptive and personalized interactions. To this aim, attention detection in VR settings would allow for diverse applications and improved usability. However, attention-aware VR systems based on EEG data suffer from long training periods, hindering generalizability and widespread adoption. This work addresses the chal...
We introduce the concept of LabLinking: a technology-based interconnection of experimental laboratories across institutions, disciplines, cultures, languages, and time zones - in other words human studies and experiments without borders. In particular, we introduce a theoretical framework of LabLinking, describing multiple dimensions of conceptual,...
Fig. 1. A virtual reality showroom for technology products. The soon-to-be mainstream shopping paradigm? This virtual reality (VR) study uses an iterative design and development approach to investigate hindrances and boundary factors for future v-commerce platforms. It focuses on the design of human sales agent avatars in VR because in contrast to...
Developed a VR adaptive system utilizing EEG correlates of external and internal attention to optimize task performance and user engagement. • Demonstrated the effectiveness of online adaptation using EEG correlates of attention, resulting in an efficient user model. • Adapted peripheral environmental factors rather than manipulating main task fe...
Electroencephalography (EEG) related research faces a significant challenge of subject independence due to the variation in brain signals and responses among individuals. While deep learning models hold promise in addressing this challenge, their effectiveness depends on large datasets for training and generalization across participants. To overcom...
Online worlds offer a massive display of people’s lives including their creative activities. In the present study, we investigated do-it-yourself (DIY) videos on YouTube to explore the type and prevalence of everyday creative behaviors. DIY is a term associated with the production of original and effective products, and, like YouTube, typically fea...
In this study we examined if training with a virtual tool in augmented reality (AR) affects the emergence of ownership and agency over the tool and whether this relates to changes in body schema (BS). 34 young adults learned to control a virtual gripper to grasp a virtual object. In the visuo-tactile (VT) but not the vision-only (V) condition, vib...
In young adults (YA) who practised controlling a virtual tool in augmented reality (AR), the emergence of a sense of body ownership over the tool was associated with the integration of the virtual tool into the body schema (BS). Agency emerged independent of BS plasticity. Here we aimed to replicate these findings in older adults (OA). Although the...
We examined whether resting-state and task-related oscillations differently predict the practice effect during virtual tool-use training in young (YA) and older (OA) adults. Thirty-seven YA (Mage: 23.64, SD: 7.07) and forty-one OA (Mage: 68.92, SD: 4.49) learned to control a virtual gripper to grasp a virtual object. The training was organized in t...
Metrics for Visual Grounding (VG) in Visual Question Answering (VQA) systems primarily aim to measure a system's reliance on relevant parts of the image when inferring an answer to the given question. Lack of VG has been a common problem among state-of-the-art VQA systems and can manifest in over-reliance on irrelevant image parts or a disregard fo...
In this paper, we investigate the effect of distractions and hesitations as a scaffolding strategy. Recent research points to the potential beneficial effects of a speaker’s hesitations on the listeners’ comprehension of utterances, although results from studies on this issue indicate that humans do not make strategic use of them. The role of hesit...
Visual Grounding (VG) in Visual Question Answering (VQA) systems describes how well a system manages to tie a question and its answer to relevant image regions. Systems with strong VG are considered intuitively interpretable and suggest an improved scene understanding. While VQA accuracy performances have seen impressive gains over the past few yea...
We present three user studies that gradually prepare our prototype system SmartHelm for use in the field, i.e. supporting cargo cyclists on public roads for cargo delivery. SmartHelm is an attention-sensitive smart helmet that integrates non-invasive brain and eye activity detection with hands-free Augmented Reality (AR) components in a speech-ena...
In human-computer interaction (HCI), there has been a push towards open science, but to date, this has not happened consistently for HCI research utilizing brain signals due to unclear guidelines to support reuse and reproduction. To understand existing practices in the field, this paper examines 110 publications, exploring domains, applications, m...
As lightweight, low-cost EEG headsets emerge, the feasibility of consumer-oriented brain-computer interfaces (BCI) increases. The combination of portable smartphones and easy-to-use EEG dry electrode headbands offers intriguing new applications and methods of human-computer interaction. In previous research, augmented reality (AR) scenarios have be...
We investigated whether resting-state EEG predicts the learning slope during and after tool-use training in augmented reality and whether such changes are associated with performance in a tactile localization test (TLT) as an indicator of embodiment and body schema plasticity. 34 young and 40 older adults underwent virtual tool-use training in AR with and...
Abstract submission topic: S_D. Sensory and Motor Systems / D.5 Tactile/somatosensory system / D.5.a Peripheral receptors. Aims: We investigated whether training with a virtual tool in augmented reality (AR) has comparable effects on body schema (BS) and sense of ownership and agency in young and older adults, while leveraging AR to p...
Often, various modalities capture distinct aspects of particular mental states or activities. While machine learning algorithms can reliably predict numerous aspects of human cognition and behavior using a single modality, they can benefit from the combination of multiple modalities. This is why hybrid BCIs are gaining popularity. However, it is no...
Physical, social and cognitive activation is an important cornerstone in non-pharmacological therapy for People with Dementia (PwD). To support long-term motivation and well-being, activation contents first need to be perceived positively. Prompting for explicit feedback, however, is intrusive and interrupts the activation flow. Automated analyses...
Statistical measurements of eye movement-specific properties, such as fixations, saccades, blinks, or pupil dilation, are frequently utilized as input features for machine learning algorithms applied to eye tracking recordings. These characteristics are intended to be interpretable aspects of eye gazing behavior. However, prior research has demonst...
Modeling with multimodal data in the wild poses similar challenges in human-computer and human-robot interaction (HCI, HRI). This workshop series thus blends HCI and HRI to jointly address a broad range of current topics in multimodal modeling aimed at designing intelligent systems in the wild. From addressing data scarcity in multimodal user state...
With the expressed goal of improving system transparency and visual grounding in the reasoning process in VQA, we present a modular system for the task of compositional VQA based on scene graphs. Our system is called "Adventurer's Treasure Hunt" (or ATH), named after an analogy we draw between our model's search procedure for an answer and an adven...
Adding attention-awareness to an Augmented Reality setting by using a Brain-Computer Interface promises many interesting new applications and improved usability. However, the potentially complicated setup and relatively long training period of EEG-based BCIs greatly reduce this benefit. In this study, we aim at finding solutions for person-i...
Several previous studies have shown that conclusions about a person's mental state can be drawn from eye gaze behavior. For this reason, eye tracking recordings are suitable as input data for attentional state classifiers. In current state-of-the-art studies, the extracted eye tracking feature set usually consists of descriptive statistics
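The descriptive-statistics feature extraction mentioned in this abstract can be illustrated with a minimal sketch. This is an assumption-laden toy example, not the authors' actual pipeline: the choice of event types (fixation durations, saccade amplitudes, pupil diameters) and of summary statistics (mean, standard deviation, maximum) is illustrative only.

```python
import numpy as np

def gaze_features(fixation_durations_ms, saccade_amplitudes_deg, pupil_diameters_mm):
    """Build a descriptive-statistics feature vector from eye tracking events.

    Each argument is a 1-D sequence of per-event measurements from one trial.
    Returns mean, standard deviation, and maximum for each event type --
    the kind of interpretable summary statistics typically fed to an
    attentional-state classifier. (Illustrative feature choice, not the
    feature set of any specific study.)
    """
    feats = []
    for values in (fixation_durations_ms, saccade_amplitudes_deg, pupil_diameters_mm):
        v = np.asarray(values, dtype=float)
        feats.extend([v.mean(), v.std(), v.max()])
    return np.array(feats)

# Toy trial: three fixations, two saccades, four pupil samples.
x = gaze_features([220, 310, 250], [4.1, 6.3], [3.2, 3.4, 3.3, 3.5])
print(x.shape)  # (9,)
```

Such a fixed-length vector can then be passed to any standard classifier; the trade-off discussed in this line of work is that these hand-crafted statistics are interpretable but may discard information that end-to-end models could exploit.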
Augmented reality is the fusion of virtual components and our real surroundings. The simultaneous visibility of generated and natural objects often requires users to direct their selective attention to a specific target that is either real or virtual. In this study, we investigated whether this target is real or virtual by using machine learning te...
I-CARE is a hand-held activation system that allows professional and informal caregivers to cognitively and socially activate people with dementia in joint activation sessions without special training or expertise. I-CARE consists of an easy-to-use tablet application that presents activation content and a server-based backend system that securely m...
In the field of HCI, researchers from diverse backgrounds have taken a broad view of application domains that could benefit from brain signals, both by applying HCI methods to improve interfaces using brain signals (e.g., human-centered design and evaluation of brain-based user interfaces), as well as integrating brain signals into HCI methods (e.g...
Eye behavior is increasingly used as an indicator of internal versus external focus of attention both in research and application. However, available findings are partly inconsistent, which might be attributed to the different nature of the employed types of internal and external cognition tasks. The present study, therefore, investigated how consi...
We investigated whether virtual tool-use training combined with vibro-tactile feedback on the thumb and index finger changes localization of tactile stimuli on those fingers as well as associated cortical processing. Thirty young adult participants learned to control a virtual gripper in augmented reality to grasp virtual objects at various loc...
We examined whether resting-state and task-related EEG power over centro-parietal and frontal brain regions were changed by virtual tool-use training and whether such changes were associated with learning, sense of agency and ownership. Thirty-four young adult participants learned to use a virtual tool for grasping an object in augmented reality...
We introduce the concept of LabLinking: a technology-based interconnection of experimental laboratories across institutions, disciplines, cultures, languages, and time zones - in other words experiments without borders. In particular, we introduce LabLinking levels (LLL), which define the degree of tightness of empirical interconnection between lab...
What does it feel like when one controls a tool? Would it feel as if it were an extension of one's own body? This thesis is a cornerstone for the PALMS study. Plasticity of the minimal self can be defined as the ability to shape or extend Body Image and Body Action. The PALMS project studies the plasticity of the minimal self in healthy aging in t...
As long as there have been computers, there has been a desire to integrate one’s thoughts directly with them. As the technology progressively comes into contact with human users, new challenges and opportunities arise that are central to human-computer interaction (HCI). In the field of HCI, researchers from diverse backgrounds have taken a broad v...
Cities urgently need new concepts for intelligent systems that can meet the growing demands for efficient and environmentally friendly transport. Current developments in micromobility offer promising opportunities here, but they require reliable sensor data and standardized data structures. This paper ...
Mobile users rely on typing assistant mechanisms such as prediction and autocorrect. Previous studies on mobile keyboards showed decreased performance for heavy use of word prediction, which identifies a need for more research to better understand the effectiveness of predictive features for different users. Our work aims at such a better understan...
Miller et al. (2014) described altered arm representation and body schema after training to use a mechanical gripper for grasping distant objects. We examined whether similar training with a virtual tool in augmented reality (AR) would have comparable effects. Thirty young adults learned to control a virtual gripper to grasp virtual objects at var...
This book constitutes the thoroughly refereed post-conference proceedings of the 12th International Joint Conference on Biomedical Engineering Systems and Technologies, BIOSTEC 2019, held in Prague, Czech Republic, in February 2019. The 22 revised and extended full papers presented were carefully reviewed and selected from a total of 271 submission...
The Seventh International Brain-Computer Interface (BCI) Meeting was held May 21-25th, 2018 at the Asilomar Conference Grounds, Pacific Grove, California, United States. The interactive nature of this conference was embodied by 25 workshops covering topics in BCI (also called brain-machine interface) research. Workshops covered foundational topics...
The current attentional state can be divided into several categories, for example, by the direction of attention. Often, this state is subconscious, or its continuous self-report is impossible. Thus, automated monitoring of the attentional state could be beneficial. In this paper, we performed a classification of multimodal data (EEG and eye tracking) to mo...
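The multimodal classification described here can be sketched at the level of feature-level (early) fusion: per-trial feature vectors from EEG and eye tracking are normalized per modality and concatenated before classification. This is a minimal illustration under stated assumptions; the `NearestCentroid` stand-in below is not the classifier used in the paper, and the z-scoring scheme is one common choice, not the authors' method.

```python
import numpy as np

def fuse(eeg_feats, gaze_feats):
    """Feature-level (early) fusion: z-score each modality separately, then
    concatenate per-trial feature vectors, so that neither modality dominates
    purely through its numerical scale."""
    def zscore(m):
        m = np.asarray(m, dtype=float)
        return (m - m.mean(axis=0)) / (m.std(axis=0) + 1e-9)
    return np.hstack([zscore(eeg_feats), zscore(gaze_feats)])

class NearestCentroid:
    """Minimal classifier standing in for whatever model a study actually uses."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        # Distance of every sample to every class centroid; pick the nearest.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

# Toy data: two well-separated attentional states, two features per modality.
eeg = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
gaze = [[1.0, 1.0], [1.1, 1.0], [9.0, 9.0], [9.1, 9.0]]
y = np.array([0, 0, 1, 1])
clf = NearestCentroid().fit(fuse(eeg, gaze), y)
```

Late (decision-level) fusion, where each modality gets its own classifier and the outputs are combined, is the usual alternative; which works better is an empirical question of the kind this line of work addresses.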
One problem faced in the design of Augmented Reality (AR) applications is the interference of virtually displayed objects in the user's visual field, with the current attentional focus of the user. Newly generated content can disrupt internal thought processes. If we can detect such internally-directed attention periods, the interruption could eith...
Human Computer Interaction (HCI) performance can be impaired by several HCI obstacles. Cognitive adaptive systems should dynamically detect such obstacles and compensate for them with suitable User Interface (UI) adaptation. In this paper, we discuss the detection of two main HCI obstacles: memory-based and visual obstacles. A sequential model based on...
Virtual Reality (VR) has emerged as a novel paradigm for immersive applications in training, entertainment, rehabilitation, and other domains. In this paper, we investigate the automatic classification of mental workload from brain activity measured through functional near-infrared spectroscopy (fNIRS) in VR. We present results from a study which i...
In this chapter, we will introduce Augmented and Virtual Reality as a novel way of user interaction which holds great promises for immersive BCI art applications. We will first introduce the key terms and give an introduction to the technical challenges and possible solutions to them. Then, we will discuss a number of important examples of the comb...
Dealing with fear of falling is a challenge in sport climbing. Virtual reality (VR) research suggests that using physical and reality-based interaction increases the presence in VR. In this paper, we present a study that investigates the influence of physical props on presence, stress and anxiety in a VR climbing environment involving whole body mo...
Executive cognitive functions like working memory determine the success or failure of a wide variety of different cognitive tasks, such as problem solving, navigation, or planning. Estimation of constructs like working memory load or memory capacity from neurophysiological or psychophysiological signals would enable adaptive systems to respond to c...
Multimodal data is increasingly used in cognitive prediction models to better analyze and predict different user cognitive processes. Classifiers based on such data, however, have different performance characteristics. We discuss in this paper an intervention-free selection task using multimodal data of EEG and eye tracking in three different model...
In this paper, we extract features of head pose, eye gaze, and facial expressions from video to estimate individual learners' attentional states in a classroom setting. We concentrate on the analysis of different definitions for a student's attention and show that available generic video processing components and a single video camera are sufficien...
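How per-frame head pose and gaze features might be turned into a per-student attention estimate can be sketched with a simple heuristic. This is a toy illustration only, not the method of the paper: the threshold values and the rule itself are assumptions, whereas the paper presumably learns such mappings from data.

```python
import numpy as np

def frame_attention(yaw_deg, pitch_deg, gaze_on_target,
                    yaw_limit=30.0, pitch_limit=20.0):
    """Heuristic per-frame attention flag from head pose and gaze.

    A frame counts as 'attentive' when the head roughly faces the front of
    the room (within the given yaw/pitch limits) and the gaze estimator
    reports the target region. Thresholds here are illustrative only.
    """
    facing_front = abs(yaw_deg) <= yaw_limit and abs(pitch_deg) <= pitch_limit
    return facing_front and gaze_on_target

def attention_ratio(frames):
    """Fraction of attentive frames in a sequence of (yaw, pitch, on_target)."""
    flags = [frame_attention(y, p, g) for y, p, g in frames]
    return float(np.mean(flags))

# Four frames: attentive, head turned away, gaze off target, attentive.
print(attention_ratio([(5, 2, True), (45, 0, True), (10, -5, False), (0, 0, True)]))  # 0.5
```

Aggregating such per-frame flags over a time window is one plausible way to obtain the kind of continuous attention estimate that a single classroom camera can support.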
Multimodal signals allow us to gain insights into internal cognitive processes of a person, for example: speech and gesture analysis yields cues about hesitations, knowledgeability, or alertness, eye tracking yields information about a person's focus of attention, task, or cognitive state, EEG yields information about a person's cognitive load or i...