Jean Vanderdonckt
Catholic University of Louvain | UCLouvain · Louvain Research Institute in Management and Organizations
MSc in Mathematics, Agrégation, MSc in Computer Science, PhD in Computer Science
Interested in Intelligent User Interfaces (adaptive user interfaces) and gesture interaction
About
610 Publications
197,719 Reads
10,958 Citations
Introduction
Engineering of Interactive Systems
Education
September 1989 - July 1997
Publications (610)
Affective computing has potential to enrich the development lifecycle of Graphical User Interfaces (GUIs) and of intelligent user interfaces by incorporating emotion-aware responses. Yet, affect is seldom considered to determine whether a GUI design would be perceived as good or bad. We study how physiological signals can be used as an early, effec...
Large Language Models (LLMs) like ChatGPT demonstrate significant potential in the medical field, often evaluated using multiple-choice questions (MCQs) similar to those found on the USMLE. Despite their prevalence in medical education, MCQs have limitations that might be exacerbated when assessing LLMs. To evaluate the effectiveness of MCQs in ass...
Objective: To optimize the training strategy of large language models for medical applications, focusing on creating clinically relevant systems that efficiently integrate into healthcare settings, while ensuring high standards of accuracy and reliability.
Materials and Methods: We curated a comprehensive collection of high-quality, domain-specific...
Consciousness has historically been a heavily debated topic in engineering, science, and philosophy. By contrast, awareness has had less success in raising the interest of scholars in the past. However, things are changing as more and more researchers are getting interested in answering questions concerning what awareness is and how it can be artif...
In today's data-driven era, handling information overload in time-sensitive scenarios poses a significant challenge. Visualization is a valuable tool for comprehending vast amounts of data. However, it's crucial to have self-adapting visualizations that are tailored to the user's cognitive level and grow with their expertise. Existing solutions oft...
Radar sensing can penetrate non-conducting materials, such as glass, wood, and plastic, which makes it appropriate for recognizing gestures in environments with poor visibility, limited accessibility, and privacy sensitivity. While the performance of radar-based gesture recognition in these environments has been extensively researched, the preference...
How to determine highly effective and intuitive gesture sets for interactive systems tailored to end users’ preferences? A substantial body of knowledge is available on this topic, among which gesture elicitation studies stand out distinctively. In these studies, end users are invited to propose gestures for specific referents, which are the functi...
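For illustration, the agreement observed in such elicitation studies is often summarized per referent as the proportion of participant pairs that proposed the same gesture. A minimal sketch, assuming hypothetical proposal data (the function name and example values are not drawn from the publication):

```python
from collections import Counter

def agreement_rate(proposals):
    """Pair-counting agreement for one referent: the fraction of
    participant pairs that proposed the same gesture."""
    n = len(proposals)
    if n < 2:
        return 0.0
    counts = Counter(proposals)
    agreeing_pairs = sum(c * (c - 1) for c in counts.values())
    return agreeing_pairs / (n * (n - 1))

# Hypothetical example: 20 participants proposing a gesture for "next photo".
proposals = ["swipe-left"] * 12 + ["flick"] * 5 + ["tap"] * 3
print(round(agreement_rate(proposals), 3))  # 0.416
```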
Long bone fractures are a concern in long-duration exploration missions (LDEM) where crew autonomy will exceed the current Low Earth Orbit paradigm. Current crew selection assumptions require extensive complete training and competency testing prior to flight for off-nominal situations. Analogue astronauts (n = 6) can be quickly trained to address a...
This post-conference book constitutes selected papers of the Fifth International Conference on Computer-Human Interaction Research and Applications, CHIRA 2021, held virtually due to COVID-19, and of the Sixth International Conference on Computer-Human Interaction Research and Applications, CHIRA 2022, held in Valletta, Malta, in October 2022.
The 8 full...
Individuals with motor disabilities can benefit from an alternative means of interacting with the world: using their tongue. The tongue possesses precise movement capabilities within the mouth, allowing individuals to designate targets on the palate. This form of interaction, known as lingual interaction, enables users to perform basic functions by...
Long bone fractures in hostile environments pose unique challenges due to limited resources, restricted access to healthcare facilities, and absence of surgical expertise. While external fixation has shown promise, the availability of trained surgeons is limited, and the procedure may frighten inexperienced personnel. Therefore, an easy-to-use exte...
User interface aesthetics, a particular sub-characteristic of the ISO 25010 software quality model, is correlated with the perceived or actual usability of a graphical user interface, its user experience, and trust. While many measures, such as balance, symmetry, proportion, alignment, regularity, and simplicity, can be computed, no consensus exists...
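As an illustration of how such measures can be computed, the sketch below derives one simple horizontal balance score from element bounding boxes; this is a hypothetical simplification for illustration only, not one of the measures studied in the publication.

```python
def horizontal_balance(elements, frame_width):
    """elements: (x, y, width, height) boxes in pixels.
    Returns 1.0 when the visual weight (area) is evenly split around the
    layout's vertical axis, and values near 0.0 when one side dominates."""
    center = frame_width / 2.0
    left = right = 0.0
    for x, y, w, h in elements:
        area = w * h
        if x + w / 2.0 < center:
            left += area
        else:
            right += area
    total = left + right
    return 1.0 if total == 0 else 1.0 - abs(left - right) / total

# Hypothetical layout: a narrow toolbar on the left, a large canvas on the right.
print(round(horizontal_balance([(0, 0, 100, 600), (120, 0, 680, 600)], 800), 2))
```

Symmetry, proportion, and similar measures typically follow the same pattern: a geometric property of the element boxes normalized to a score between 0 and 1.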
The field of generative artificial intelligence has seen significant advancements in recent years with the advent of large language models, which have shown impressive results in software engineering tasks but not yet in engineering user interfaces. Thus, we raise a specific research question: would an LLM-based system be able to search for relevan...
Virtual reality applications offer the promise to immerse end users in a synthetic environment where several actions could be observed, simulated, and reproduced, before transferring them to reality, which makes them particularly appropriate for training. Yet, when the training requires complex handling of information, the tasks become cognitively...
Radar sensing technologies offer several advantages over other gesture input modalities, such as the ability to reliably sense human movements, a reasonable deployment cost, insensitivity to ambient conditions such as light and temperature, and the ability to preserve anonymity. These advantages come at the price of high processing complexity mainly d...
Adaptive user interfaces have the advantage of being able to dynamically change their aspect and/or behaviour depending on the characteristics of the context of use, i.e., to improve user experience (UX). UX is an important quality factor that has been primarily evaluated with classical measures but to a lesser extent with physiological measures, suc...
Long bone fractures are a concern in long-duration exploration missions (LDEM) where crew autonomy will exceed the current Low Earth Orbit paradigm. Current crew selection assumptions require extensive complete training and competency testing prior to flight for off-nominal situations. Analog astronauts (n=6) can be quickly trained to address a sin...
We examine radar-based gesture input for interactive computer systems, a technology that has recently grown in terms of commercial availability, affordability, and popularity among researchers and practitioners, where radar sensors are leveraged to detect user input performed in mid-air, on the body, and around physical objects and digital devices....
Microwave radars bring many benefits to mid-air gesture sensing due to their large field of view and independence from environmental conditions, such as ambient light and occlusion. However, radar signals are highly dimensional and usually require complex deep learning approaches. To understand this landscape, we report results from a systematic li...
Nowadays, creating a positive experience is a key source of competitive advantage (Lemon & Verhoef, 2016). A good experience makes a person five times more likely to recommend a company and more likely to purchase in the future (Yohn, 2019). Besides, gesture interaction technology appears to be a promising way to provide individuals with a globally richer ex...
Human long duration exploration missions (LDEMs) raise a number of technological challenges. This paper addresses the question of crew autonomy: as the distances increase, the communication delays and constraints tend to prevent the astronauts from being monitored and supported by real-time ground control. Eventually, future planetary mission...
This paper presents theoretical and empirical results about user-defined gesture preferences for squeezable objects by focusing on a particular object: a deformable cushion. We start with a theoretical analysis of potential gestures for this squeezable object by defining a multi-dimension taxonomy of squeeze gestures composed of 82 gesture classes....
To support radiologists in their decision-making regarding women breast cancer diagnosis based on mammographies, this paper addresses three challenges posed by a clinical decision support system for breast cancer screening, diagnosis, and reporting: multimodality (via textual, graphical, and two-dimensional gestural interaction), usability (via hum...
To consistently compare gesture recognizers under identical conditions, a systematic procedure for comparative testing should investigate how the number of templates, the number of sampling points, the number of fingers, and their configuration with other hand parameters such as hand joints, palm, and fingertips impact performance. This paper defin...
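The kind of factorial sweep such a procedure implies can be sketched as follows; the `evaluate` callable, the recognizer name, and the parameter grids are hypothetical placeholders, not artifacts of the paper.

```python
from itertools import product

def sweep(recognizers, template_counts, point_counts, evaluate):
    """recognizers: {name: recognizer}; evaluate(recognizer, t, p) -> accuracy.
    Runs every (recognizer, #templates, #sampling points) condition once."""
    return {
        (name, t, p): evaluate(rec, t, p)
        for name, rec in recognizers.items()
        for t, p in product(template_counts, point_counts)
    }

# Dummy evaluation just to show the shape of the resulting table.
dummy = lambda rec, t, p: 0.80 + 0.01 * t - 0.001 * abs(p - 32)
for condition, accuracy in sorted(sweep({"recognizer-A": None},
                                        [1, 2, 4], [16, 32, 64], dummy).items()):
    print(condition, round(accuracy, 3))
```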
Retailers develop personalized websites with the aim of improving customer experience. However, we still have limited knowledge about the effect of personalization on customer experience and the underlying processes. With a lab experiment, this research specifically examines the effect of actual personalization and perceived personalization on play...
Browsing multimedia objects, such as photos, videos, documents, and maps represents a frequent activity in a context of use where an end-user interacts on a large vertical display close to bystanders, such as a meeting in a corporate environment or a family display at home. In these contexts, mid-air gesture interaction is suitable for a large vari...
On large displays, using keyboard and mouse input is challenging because small mouse movements do not scale well with the size of the display and individual elements on screen. We present “Large User Interface” (LUI), which coordinates gestural and vocal interaction to increase the range of dynamic surface area of interactions possible on large disp...
This paper presents SnappView, an open-source software development kit that facilitates end-user review of graphical user interfaces for mobile applications and streamlines their input into a continuous design life cycle. SnappView structures this user interface review process into four cumulative stages: (1) a developer creates a mobile applicatio...
Finger-based gesture input has become a major interaction modality for surface computing. Due to the low precision of the finger and the variation in gesture production, multistroke gestures are still challenging to recognize in various setups. In this paper, we present µV, a multistroke gesture recognizer that addresses the properties of articulation...
Eye movement analysis is a popular method to evaluate whether a user interface meets the users' requirements and abilities. However, with current tools, setting up a usability evaluation with an eye-tracker is resource-consuming, since the areas of interest are defined manually and exhaustively, and must be redefined each time the user interface changes. This...
Despite the tremendous progress made for recognizing gestures acquired by various devices, such as the Leap Motion Controller, developing a gestural user interface based on such devices still induces a significant programming and software engineering effort before obtaining a running interactive application. To facilitate this development, we prese...
The "Software as a Service" (SaaS) model of cloud computing popularized online multiuser collaborative software. Two famous examples of this class of software are Office 365 from Microsoft and Google Workspace. Cloud technology removes the need to install and update the software on end users' computers and provides the necessary underlying infrastr...
Nowadays, creating a positive experience is a key source of competitive advantage. A good experience makes a person five times more likely to recommend a company and more likely to purchase in the future. Besides, gesture interaction technology appears to be a promising way to provide individuals with a globally richer experience than with classical user int...
Arm-and-hand tracking by technological means allows gathering data that can be elaborated for determining gesture meaning. To this aim, machine learning (ML) algorithms have been mostly investigated looking for a balance between the highest recognition rate and the lowest recognition time. However, this balance comes mainly from statistical models,...
Adapting the user interface of a software system to the requirements of the context of use continues to be a major challenge, particularly when users become more demanding in terms of adaptation quality. A considerable number of methods have, over the past three decades, provided some form of modelling with which to support user interface adaptatio...
This workshop aims at identifying, examining, structuring and sharing educational resources and approaches to support the process of teaching/learning Human-Computer Interaction (HCI) Engineering. The broadening of the range of available interaction technologies and their applications, many times in safety and mission critical areas, to novel and l...
The expansion of touch-sensitive technologies, ranging from smartwatches to wall screens, triggered a wider use of gesture-based user interfaces and encouraged researchers to invent recognizers that are fast and accurate for end-users while being simple enough for practitioners. Since the pioneering work on two-dimensional (2D) stroke gesture recog...
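To make the template-matching idea behind many of these recognizers concrete, here is a deliberately simplified sketch: resample each stroke along its path, normalize position and scale, and pick the nearest template by mean point-to-point distance. It omits rotation invariance and other refinements and is not any specific recognizer surveyed here.

```python
import numpy as np

def resample(stroke, n=32):
    """Resample a stroke (sequence of (x, y) points) to n points evenly
    spaced along its path length."""
    pts = np.asarray(stroke, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    dist = np.concatenate([[0.0], np.cumsum(seg)])
    if dist[-1] == 0:
        return np.repeat(pts[:1], n, axis=0)
    target = np.linspace(0.0, dist[-1], n)
    return np.stack([np.interp(target, dist, pts[:, 0]),
                     np.interp(target, dist, pts[:, 1])], axis=1)

def normalize(stroke, n=32):
    """Resample, centre on the centroid, and scale to a unit extent."""
    pts = resample(stroke, n)
    pts = pts - pts.mean(axis=0)
    extent = np.abs(pts).max()
    return pts / extent if extent > 0 else pts

def classify(candidate, templates, n=32):
    """templates: {label: [raw strokes]}; returns the label of the
    template with the smallest mean point-to-point distance."""
    c = normalize(candidate, n)
    scores = {
        label: min(np.linalg.norm(normalize(t, n) - c, axis=1).mean()
                   for t in strokes)
        for label, strokes in templates.items()
    }
    return min(scores, key=scores.get)

# Hypothetical templates and candidate stroke.
templates = {"caret": [[(0, 0), (1, 2), (2, 0)]], "dash": [[(0, 0), (2, 0)]]}
print(classify([(0, 0), (0.9, 1.8), (2.1, 0.1)], templates))  # caret
```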
Intra-platform plasticity regularly assumes that the display of a computing platform remains fixed and rigid during interactions with the platform in contrast to reconfigurable displays, which can change form depending on the context of use. In this paper, we present a model-based approach for designing and deploying graphical user interfaces that...
With the continuously increasing number and variety of devices, the study of visual design of their Graphical User Interfaces grows in importance and scope, particularly for new devices, including smartphones, tablets, and large screens. Conducting a visual design experiment typically requires defining and building a GUI dataset with different reso...
We introduce “4E,” a new design approach of reconfigurable displays that can change their form factors by capitalizing on four quality properties inspired by applied material: extensibility, extendability, expandability, and extractability. This approach is applicable to both fixed and portable displays. We define and exemplify each property, highl...
Cloud computing is being adopted by commercial and governmental organizations driven by the need to reduce the operational cost of their information technology resources and to search for a scalable and flexible way to provide and release their software services. In this computing model, the Quality of Service (QoS) is agreed between service provider...
Generation of Graphical User Interfaces (GUIs) from Business Process Model and Notation (BPMN) models is a step manually performed by an analyst in any information system development. By analyzing twelve BPMN projects and comparing them with their associated GUIs, a set of rules for mapping BPMN models expressed in terms of BPMN patterns to GUIs has be...
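Rules of this kind can be captured as a lookup from a BPMN element (and the type of data it manipulates) to a concrete widget; the mapping below is a purely hypothetical illustration, not the rule set derived in the paper.

```python
# Hypothetical illustration of pattern-to-widget mapping rules.
WIDGET_RULES = {
    ("user_task", "enumeration"): "drop-down list",
    ("user_task", "boolean"):     "check box",
    ("user_task", "free_text"):   "text field",
    ("user_task", "date"):        "date picker",
    ("exclusive_gateway", None):  "radio-button group",
}

def widget_for(element_type, data_type=None):
    """Return the widget suggested by the (hypothetical) mapping rules."""
    return WIDGET_RULES.get((element_type, data_type), "label")

print(widget_for("user_task", "enumeration"))  # drop-down list
```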
The notion of experience has gained in popularity both in management and in computer science. To assess the quality of an information system, specialists in human-computer interaction are now referring to the user experience. On the marketing side, the concept of experience has also become key to describe the relationship between an individual and...
Presently, miniaturized sensors can be embedded in any small-size wearable to recognize movements on some parts of the human body. For example, an electrooculography-based sensor in smart glasses recognizes finger movements on the nose. To explore the interaction capabilities, this paper conducts a gesture elicitation study as a between-subjects ex...
Whilst new patents and announcements advertise the technical availability of foldable displays, which can be folded to some extent, there is still a lack of fundamental and applied understanding of how to model, to design, and to prototype graphical user interfaces for these devices before actually implementing them. Without waiting for...
While end users can acquire full 3D gestures with many input devices, they often capture only 3D trajectories, which are 3D uni-path, uni-stroke single-point gestures performed in thin air. Such trajectories with their (x, y, z) coordinates could be interpreted as three 2D stroke gestures projected on three planes, i.e., XY, YZ, and ZX, thus ma...
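A minimal sketch of that projection, assuming the trajectory is given as a list of (x, y, z) samples (the function name and sample values are illustrative):

```python
def project_to_planes(trajectory):
    """Split a 3D trajectory into its three planar 2D strokes so that
    existing 2D stroke recognizers can be reused on each plane."""
    return {
        "XY": [(x, y) for x, y, z in trajectory],
        "YZ": [(y, z) for x, y, z in trajectory],
        "ZX": [(z, x) for x, y, z in trajectory],
    }

# Hypothetical mid-air movement rising while sweeping to the right.
sample = [(0.0, 0.0, 0.0), (0.1, 0.2, 0.05), (0.3, 0.5, 0.1)]
for plane, stroke in project_to_planes(sample).items():
    print(plane, stroke)
```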
Adapting the user interface (UI) to the changing context of use is intended to support the interaction effectiveness and sustain UI usability. However, designing and/or processing UIs adaptation at design time does not encompass real situation requirements. Adaptation should have a cross-cutting and low-cost impact on software patterning and appear...
Our digital and physical worlds are becoming increasingly interconnected. Digital services reduce the need to physically move, and hence to face physical accessibility barriers, but it then becomes more critical to make sure these are not replaced by digital accessibility barriers. In order to assess the interplay of both worlds from the accessib...
Online customer experience comprises all subjective responses consumers may have when interacting with a website (Rose et al. 2012). Companies have long recognized that providing optimal customer experience has become a key challenge (Ho and Bodoff 2014; Lemon and Verhoef 2016). To address this challenge, companies worldwide have developed personal...