Marie-Luce Bourguet
Queen Mary, University of London | QMUL · School of Electronic Engineering and Computer Science

PhD

About

65 Publications
16,322 Reads
325 Citations
Citations since 2016: 24 research items, 132 citations
[Chart: citations per year, 2016–2022]
Introduction
I work on the use of social robots and virtual and augmented reality in education. I also work on human-robot interaction, multimodal interaction, gesture recognition and audio interfaces.
Additional affiliations
September 2000 - November 2020 · University of London
  • Position: Senior Lecturer
  • Description: QMUL/BUPT joint programme senior lecturer
September 2000 - November 2020 · Queen Mary, University of London
  • Position: Lecturer

Publications

Publications (65)
Chapter
Flipping the classroom requires some self-regulated learning skills from students, as they must have engaged in learning activities prior to attending classes. The study we describe in this paper was done in the context of a 15-week flipped course delivered online to a large class of undergraduate students. We collected various time-stamped digital...
Book
Full-text available
The 8th annual International Conference of the Immersive Learning Research Network (iLRN2022) was the first iLRN event to offer a hybrid experience, with two days of presentations and activities on the iLRN Virtual Campus (powered by ©Virbela), followed by three days on location at the FH University of Applied Sciences BFI in Vienna, Austria.
Chapter
In this work-in-progress paper, we describe the architecture of a system that can automatically sense an online learner’s situation and context (affective-cognitive state, fatigue, cognitive load, and physical environment), analyse the needs for intervention, and react through an intelligent agent to shape the learner’s self-regulated learning stra...
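The sense, analyse and react loop this abstract outlines can be pictured as a toy pipeline. The sketch below is illustrative only: the signal names, threshold values and intervention labels are assumptions, not the authors' architecture.

```python
# A minimal sketch of a sense -> analyse -> react loop for supporting
# self-regulated learning. All names and thresholds are invented.

def sense() -> dict:
    # Stand-in for webcam- and log-based sensing of the learner's situation.
    return {"fatigue": 0.7, "cognitive_load": 0.4, "noise_level": 0.2}

def analyse(state: dict) -> str | None:
    # Decide whether an intervention is needed (thresholds are assumptions).
    if state["fatigue"] > 0.6:
        return "suggest_break"
    if state["cognitive_load"] > 0.8:
        return "simplify_material"
    return None

def react(intervention: str) -> None:
    # The intelligent agent delivers the intervention to the learner.
    print(f"agent: {intervention.replace('_', ' ')}")

if (intervention := analyse(sense())) is not None:
    react(intervention)  # prints "agent: suggest break"
```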
Presentation
Full-text available
Two visualisation techniques have gained great momentum in education in the past few years: Virtual Reality (VR) and Augmented Reality (AR). VR and AR are potentially very useful when teaching about a scientific topic that is difficult to experience due to being abstract or invisible, or when class size, non-availability of equipment or space is a...
Conference Paper
Full-text available
It is highly likely that classrooms of the future will feature robots to assist the human teachers. Tutor robots will be valued for their capacity to motivate learners and to provide affective support during learning activities, which will require them to be able to understand the students' affects and behaviours, and to respond to these throu...
Conference Paper
Full-text available
Two visualisation techniques have recently gained great momentum in education: virtual reality (VR) and augmented reality (AR). In materials science education, VR and AR are potentially very useful when teaching about a topic that is difficult to experience due to being abstract or invisible, or when availability of equipment and space is a limitat...
Conference Paper
Full-text available
Social robots acting as stand-ins for speakers or teachers would enable them to reach large audiences from anywhere in the world, increasing the options for distant learning. They would need to be endowed with effective public speaking skills though, in order to deliver their message, entertain, and maintain audience attention. In this paper, we re...
Conference Paper
Full-text available
This paper describes our ongoing work to develop a visual sensing platform that can inform a robot teacher about the behaviour and affective state of its student audience. We have developed a multi-student behaviour recognition system, which can detect behaviours such as "listening" to the lecturer, "raising hand", or "sleeping". We have also devel...
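As a rough illustration of what a multi-student behaviour recogniser might look like at its simplest, the sketch below labels each detected student from a few pose features. The label set follows the abstract; the `Pose` structure, features and thresholds are invented for illustration and are not the authors' model.

```python
# A toy per-student behaviour classifier over normalised 2D pose keypoints
# (e.g. as produced by a pose estimator). Rules and thresholds are assumptions.

from dataclasses import dataclass

@dataclass
class Pose:
    nose_y: float        # vertical positions, normalised to [0, 1];
    wrist_left_y: float  # smaller y = higher in the image
    wrist_right_y: float
    shoulder_y: float

def classify_behaviour(pose: Pose) -> str:
    # A raised hand puts a wrist clearly above the shoulders.
    if min(pose.wrist_left_y, pose.wrist_right_y) < pose.shoulder_y - 0.15:
        return "raising hand"
    # A head dropped near shoulder height suggests sleeping.
    if pose.nose_y > pose.shoulder_y - 0.05:
        return "sleeping"
    # Default: upright and facing forward.
    return "listening"

# One label per detected student in the frame.
frame = [Pose(0.30, 0.10, 0.55, 0.40), Pose(0.52, 0.60, 0.62, 0.45)]
print([classify_behaviour(p) for p in frame])  # ['raising hand', 'sleeping']
```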
Conference Paper
Full-text available
Aesthetics has been shown to play a considerable role in system acceptability and perceived usability, hence the importance of studying the aesthetic aspect of user interface design. Computable aesthetics evaluation measures have been proposed that are based on the extraction (or segmentation) of the visual elements that compose the interface layou...
Conference Paper
Full-text available
According to graphology, people's emotional states can be detected from their handwriting. Unlike writing on paper, which can be analysed through its on-surface properties, spatial interaction-based handwriting is entirely in-air. Consequently, the techniques used in graphology to reveal the emotions of the writer are not directly transferable to s...
Conference Paper
Full-text available
Informative videos (e.g. recorded lectures) are increasingly being made available online, but they are difficult to use, browse and search. Nowadays, popular platforms let users search and navigate videos via a transcript, which, in order to guarantee a satisfactory level of word accuracy, has typically been generated using some manual inputs. The...
Conference Paper
Full-text available
This paper describes a small experimental study into the use of avatars to remediate the lecturer's absence in voice-over-slide material. Four different avatar behaviours are tested. Avatar A performs all the upper-body gestures of the lecturer, which were recorded using a 3D depth sensor. Avatar B is animated using few random gestures in order to...
Conference Paper
Full-text available
Pointing gestures performed by lecturers are important because they seem to indicate pedagogical significance. In this extended abstract, we describe a simple empirical method for detecting pointing gestures in recorded lectures captured using a depth sensor (Kinect). We first analyse component gestures; second, we assign them weights; finally, we...
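The abstract's three-step method (analyse component gestures, assign weights, combine) can be caricatured as a weighted score over detected components. The component names, weights and detection threshold below are placeholders, not the values derived in the paper.

```python
# A toy weighted-components detector for pointing gestures in a lecture frame.
# Component names and weights are invented for illustration.

COMPONENT_WEIGHTS = {
    "arm_extended": 0.5,
    "hand_toward_screen": 0.3,
    "body_oriented_to_slide": 0.2,
}

def pointing_score(components: dict[str, bool]) -> float:
    """Weighted sum of detected component gestures, in [0, 1]."""
    return sum(w for name, w in COMPONENT_WEIGHTS.items() if components.get(name))

frame_components = {"arm_extended": True, "hand_toward_screen": True}
if pointing_score(frame_components) >= 0.7:  # threshold is an assumption
    print("pointing gesture detected")      # score here is 0.8
```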
Conference Paper
Full-text available
This paper addresses the issue of uncertainty in ubiquitous computing applications, from a user's perspective. It exposes the difficulties users face in recovering from system errors (recognition, perception and interpretation errors). It shows that in ubiquitous computing, error handling difficulty is exacerbated by systems characteristics such a...
Conference Paper
Full-text available
This paper describes a small experimental study into the relationship between the hand gestures performed by lecturers and the pedagogical significance of the corresponding parts of the lecture. Body movements have long been known to play an important role in communication, especially when teaching. The characterisation of such a relationship could...
Chapter
Currently, a lack of reliable methodologies for the design and evaluation of usable multimodal interfaces makes developing multimodal interaction systems a big challenge. In this paper, we present a usability framework to support the design and evaluation of multimodal interaction systems. First, elementary multimodal commands are elicited using tr...
Article
Full-text available
Multimodal interaction can improve accessibility to pervasive computing applications. However, recognition-based interaction techniques used in multimodal interfaces (e.g. speech and gesture recognition) are still error prone. Recognition errors and misinterpretations can compromise the security, robustness, and efficiency of pervasive computing ap...
Article
Full-text available
An aspect of the European IST SAVANT (Synchronised and Scalable Audio Visual content Across NeTworks) project is the personalisation of interactive TV systems combining broadcast programmes with additional content transmitted via the Internet. This paper describes scalability by selection of service components within the available services accordin...
Article
Full-text available
Designing multimodal systems that take the best advantage of multiple error prone recognition-based technologies, such as speech and gesture recognition, is difficult. To guarantee a robust and usable interaction, careful consideration must be given to the choice of modalities of interaction made available, their allocation to tasks, and the range...
Conference Paper
Full-text available
The design and evaluation of multimodal interaction is difficult. For designers in industry, developing multimodal interaction systems is a big challenge. Although past research has presented various methodologies, these have addressed only specific cases of multimodality and failed to generalise to a range of applications. In...
Chapter
Full-text available
Desktop multimedia (multimedia personal computers) dates from the early 1970s. At that time, the enabling force behind multimedia was the emergence of the new digital technologies in the form of digital text, sound, animation, photography, and, more recently, video. Nowadays, multimedia systems mostly are concerned with the compression and transmis...
Conference Paper
Full-text available
The current practice of designing the auditory mode in the user interface is poorly understood. In this survey, we aim to reveal the common understanding of the role of audio in human-computer interaction and how designers approach design tasks involving audio. We investigate which guidelines and principles participants use in their designs and...
Conference Paper
Full-text available
Common practice in the design of auditory display is hardly ever based on any structured design methodology. This leaves audio being widely underused or used inappropriately and inefficiently. We analyse the current status of research in this context and develop requirements for a methodological framework for auditory display design. Based on these...
Chapter
Full-text available
Desktop multimedia (multimedia personal computers) dates from the early 1970s. At that time, the enabling force behind multimedia was the emergence of the new digital technologies in the form of digital text, sound, animation, photography, and, more recently, video. Nowadays, multimedia systems mostly are concerned with the compression and transmis...
Article
This paper introduces a methodological framework for contextual design with patterns (paco). Its development was driven by the lack of guidance in designing audio in the user interface and by the need to communicate design knowledge within the community and to designers outside the field. The fundamental concepts presented in this paper, however, a...
Article
Full-text available
In this paper, we survey the different types of error-handling strategies that have been described in the literature on recognition-based human–computer interfaces. A wide range of strategies can be found in spoken human–machine dialogues, handwriting systems, and multi-modal natural interfaces. We then propose a taxonomy for classifying error-hand...
Conference Paper
Full-text available
Bilingual education programs whose aim is to support the development of balanced bilingualism belong to "strong" forms of bilingual education. Other desired outcomes of strong forms of bilingual education are cognitive advantages and better school achievement. The goal of our research is to explore how technology can help introduce strong forms o...
Conference Paper
Full-text available
Designing multimodal systems that take the best advantage of multiple error prone recognition-based technologies (e.g. speech and gesture recognition) is difficult. To guarantee a robust interaction, careful consideration must be given to the choice of modalities made available, their allocation to tasks, and the range of modality combinations allo...
Article
Full-text available
The research question posed in this paper is to what extent and in which ways can adaptive digital learning environments assist non-native speakers to overcome cultural and language barriers to learning. Cultural and language barriers are considered from two aspects: firstly those that impede students' acquisition of knowledge and skills within the...
Article
Full-text available
Few educational systems have been developed to specifically address the needs of young children who are acquiring two languages at the same time. In this paper, we present a prototype of a CALL (Computer Assisted Language Learning) system for English and Japanese bilingual children aged between 6 and 8. The prototype recreates a bilingual learning...
Article
Full-text available
Desktop multimedia (multimedia personal computers) dates from the early 1970s. At that time, the enabling force behind multimedia was the emergence of the new digital technologies in the form of digital text, sound, animation, photography, and, more recently, video. Nowadays, multimedia systems mostly are concerned with the compress...
Conference Paper
Full-text available
The multimodal dimension of a user interface raises numerous problems that are not present in more traditional interfaces. In this paper, we briefly review the current approaches in software design and modality integration techniques for multimodal interaction. We then propose a simple framework for describing multimodal interaction designs and for...
Conference Paper
Full-text available
Children, who are acquiring several languages from birth or at an early age, are typically raised in extremely complex and varied multilingual and multicultural environments. In their everyday life, these children are constantly exposed to several languages, different script systems and mixed cultural customs. Currently, few educational systems hav...
Conference Paper
Full-text available
Recognition-based interaction technologies (e.g. speech and gesture recognition) are still error-prone. It has been shown that, in multimodal architectures, combining complementary input modes can contribute to automatic recovery from recognition errors. However, the degree to which error recovery can be achieved is dependent on the design of the i...
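One common way complementary modes support recovery from recognition errors is mutual disambiguation over n-best lists: a hypothesis in one mode can win when it is the only one semantically consistent with the other mode. The sketch below illustrates the idea with an invented command vocabulary and scores; it is not the integration design studied in the paper.

```python
# Toy mutual disambiguation across two error-prone recognisers. Each returns
# an n-best list of (hypothesis, score); scores and commands are invented.

speech_nbest = [("play", 0.55), ("pray", 0.45)]
gesture_nbest = [("tap_play_button", 0.6), ("swipe", 0.4)]

# Only semantically compatible (speech, gesture) pairs form valid commands.
COMPATIBLE = {("play", "tap_play_button"), ("stop", "tap_stop_button")}

best = max(
    ((s, g, ss * gs) for s, ss in speech_nbest for g, gs in gesture_nbest
     if (s, g) in COMPATIBLE),
    key=lambda pair: pair[2],
    default=None,
)
print(best)  # ('play', 'tap_play_button', ~0.33): 'pray' is ruled out
```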
Article
Full-text available
This paper deals with the use of metadata standards such as MPEG-7 and MPEG-21 to drive the automatic adaptation and personalisation of scalable broadcast and Internet content and services.
Presentation
Full-text available
One aspect of the presentation is concerned with the development of efficient integration frameworks to support the designers and the developers of multimodal applications. The second aspect is concerned with the empirical study of modality integration. I report on an experiment to study the synchronisation of speech with 3D pointing gestures and h...
Conference Paper
Full-text available
Designing and implementing multimodal applications that take advantage of several recognition- based interaction techniques (e.g. speech and gesture recognition) is a difficult task. The goal of our research is to explore how simple modelling techniques and tools can be used to support the designers and developers of multimodal systems. In this pap...
Article
Full-text available
The aim of our research is to explore new interaction paradigms for the design of multilingual educational software. In this poster, we present some early work on the use of multimodal interaction techniques to facilitate the simultaneous acquisition of more than one language during the period of primary language development. We suggest that multim...
Article
Full-text available
This paper describes two scenarios demonstrating the application of the scalable content and services, and provides an overview of the SAVANT system, in particular, the terminal where users access these services. Finally, a number of metadata standards and their suitability in providing synchronised and scalable broadcast and Internet content and serv...
Conference Paper
Full-text available
Designing and implementing applications that can handle multiple recognition-based interaction technologies such as speech and gesture inputs is a difficult task. IMBuilder and MEngine are the two components of a new toolkit for rapidly creating and testing multimodal interface designs. First, an interaction model is specified in the form of a coll...
Article
Full-text available
Designing and implementing applications that can handle multiple recognition-based interaction technologies such as speech and gesture inputs is a difficult task. IMBuilder and MEngine are the two components of a new toolkit for rapidly creating and testing multimodal interface designs. First, an interaction model is specified in the form of a coll...
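A minimal sketch of how an interaction model of this kind might be expressed and tested, assuming it can be written as a state-transition table over speech and gesture events. The states, events and table format below are invented; this is not the IMBuilder/MEngine model format.

```python
# A toy interaction model as a state-transition table fusing speech and
# gesture events. State and event names are assumptions.

TRANSITIONS = {
    ("idle", "speech:delete"): "await_target",
    ("await_target", "gesture:point"): "confirm",
    ("confirm", "speech:yes"): "idle",
}

def run(events: list[str]) -> None:
    state = "idle"
    for event in events:
        state = TRANSITIONS.get((state, event), state)  # ignore unexpected events
        print(f"{event:15s} -> {state}")

run(["speech:delete", "gesture:point", "speech:yes"])
# speech:delete   -> await_target
# gesture:point   -> confirm
# speech:yes      -> idle
```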
Patent
Full-text available
This invention relates to apparatus for managing a multi-modal user interface for, for example, a computer or processor-controlled device.
Article
This paper describes two experiments that study temporal synchronization between speech (Japanese) and hand pointing gestures. Gesture (G) is shown to be synchronized with either the nominal or deictic ("this", "that", "here", etc.) expression of a phrase. It is also shown that G is predictable in the [-200 ms, 400 ms] interval around the beginning...
Conference Paper
Full-text available
In this paper, we describe an experiment that studies temporal synchronization between speech (Japanese) and hand pointing gestures. Gesture (G) is shown to be synchronized with either the nominal or deictic ("this", "that", "here", etc.) expression of a phrase. It is also shown that G is predictable in the [-200 ms, 400 ms] interval around the beg...
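The reported [-200 ms, 400 ms] predictability window suggests a simple alignment rule: pair a gesture with a deictic word when the gesture onset falls inside the window around the word onset. The sketch below applies that rule to invented timestamps; the event representation is an assumption.

```python
# Pair deictic words with pointing gestures using the paper's reported
# [-200 ms, +400 ms] window. Onset times (in ms) are invented examples.

WINDOW = (-200, 400)  # gesture onset relative to deictic-word onset, ms

def align(deictic_onsets: list[int], gesture_onsets: list[int]) -> list[tuple[int, int]]:
    """Pair each deictic word with any gesture whose onset falls in the window."""
    return [
        (word_t, gest_t)
        for word_t in deictic_onsets
        for gest_t in gesture_onsets
        if WINDOW[0] <= gest_t - word_t <= WINDOW[1]
    ]

# "this" at 1200 ms and "here" at 3000 ms; gestures at 1350 ms and 3550 ms.
print(align([1200, 3000], [1350, 3550]))  # [(1200, 1350)]: the second gesture is too late
```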
Conference Paper
Full-text available
This paper proposes a quantitative model of natural modality integration for speech and pointing gestures. An experiment is described that studies temporal synchronization between speech and pointing gestures during multimodal interaction. The end of a pointing gesture (MT) is shown to be synchronized with either the key word of an expression or the...
Article
Full-text available
This paper describes, within the framework of the European IST SAVANT project, the development of a search engine that supports transparent access to annotated broadcast content from different types of user devices. For this purpose, the search engine makes use of emerging standards such as MPEG-7, TV-Anytime and MPEG-21.
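As a toy illustration of metadata-driven search that respects device capabilities, the sketch below filters a simplified catalogue by query and supported format. The XML shape is a stand-in, not the MPEG-7, TV-Anytime or MPEG-21 schemas the SAVANT engine actually uses.

```python
# Toy capability-aware content search over a simplified metadata catalogue.
# The catalogue format and attribute names are assumptions.

import xml.etree.ElementTree as ET

CATALOGUE = """
<catalogue>
  <item title="News summary" format="audio"/>
  <item title="News programme" format="video"/>
</catalogue>
"""

def search(query: str, supported_formats: set[str]) -> list[str]:
    root = ET.fromstring(CATALOGUE)
    return [
        item.get("title")
        for item in root.findall("item")
        if query.lower() in item.get("title").lower()
        and item.get("format") in supported_formats
    ]

# An audio-only device sees only the audio variant of matching content.
print(search("news", {"audio"}))  # ['News summary']
```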
Article
Full-text available
Recognition-based interaction technologies (e.g. speech and gesture recognition) are still error-prone. It has been shown that, in multimodal architectures, combining complementary input modes can contribute to automatic recovery from recognition errors. However, the degree to which error recovery can be achieved is dependent on the design of the i...
Article
Full-text available
In this paper we focus on the content access system and, within it, the search and retrieval engine. We define an architecture that supports the use of different user terminals and builds on established and emerging standards such as MPEG-7 [1], TV-AnyTime [2] and MPEG-21 [3].

Projects

Projects (6)
Archived project: Metadata
Archived project: How technology can help in the multilingual classroom
Project: Speech, gestures and spatial interaction.