Jean-François Jégo's research while affiliated with Université de Vincennes - Paris 8 and other places
What is this page?
This page lists the scientific contributions of an author, who either does not have a ResearchGate profile, or has not yet added these contributions to their profile.
It was automatically created by ResearchGate to create a record of this author's body of work. We create such pages to advance our goal of creating and maintaining the most comprehensive scientific repository possible. In doing so, we process publicly available (personal) data relating to the author as a member of the scientific community.
If you're a ResearchGate member, you can follow this page to keep up with this author's work.
If you are this author, and you don't want us to display this page anymore, please let us know.
Publications (14)
Participatory art allows the spectator to be a participant or a viewer able to engage actively with interactive art. Real-time technologies offer new ways to create participative artworks. We hereby investigate how to engage participation through movement in interactive digital art, and what this engagement can awaken, focusing on the ways to e...
This paper describes an interactive installation that invites participants to improvise with an autonomous virtual actor. Based on real-time analysis of the participant's gestures, the virtual actor is capable of either proposing expressive and linguistic gestures (selected from a set of four Motion Capture Databases) or imitating the participant,...
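The behaviour described in this abstract, where real-time gesture analysis drives a choice between imitating the participant and proposing a recorded gesture, can be pictured as a small decision loop. The sketch below is purely illustrative and not the authors' behaviour engine; the names (GestureClip, select_behaviour) and the simple activity threshold are hypothetical assumptions.

```python
# Minimal sketch of an imitate-or-propose decision for a virtual actor.
# All names and the threshold heuristic are illustrative assumptions,
# not the behaviour engine described in the paper.
import random
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class GestureClip:
    name: str
    frames: List[dict]  # joint poses, e.g. taken from a motion-capture database

def select_behaviour(participant_activity: float,
                     mocap_databases: Dict[str, List[GestureClip]],
                     imitation_threshold: float = 0.4) -> Tuple[str, Optional[GestureClip]]:
    """Choose between imitating the participant and proposing a recorded gesture.

    participant_activity: a normalized measure of how much the participant is
    moving, e.g. mean joint velocity over a short sliding window.
    """
    if participant_activity > imitation_threshold:
        # The participant is actively gesturing: mirror them (imitation mode).
        return "imitate", None
    # The participant is idle or hesitant: propose an expressive or linguistic
    # gesture drawn from one of the motion-capture databases.
    db_name = random.choice(list(mocap_databases))
    clip = random.choice(mocap_databases[db_name])
    return "propose", clip

# Example use with dummy data:
databases = {"expressive": [GestureClip("wave", frames=[])],
             "linguistic": [GestureClip("point", frames=[])]}
mode, clip = select_behaviour(participant_activity=0.1, mocap_databases=databases)
print(mode, clip.name if clip else "-")
```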
Here, hands don't beat the drum. Instead, the drum speaks with its hands, projected onto its skin. They interact and create poems in sign language, especially for deaf and hearing-impaired audiences, because the drum has acquired the expressive and prosodic gestures of deaf poets. This installation is based on a Virtual Reality Agent-based platform to explore...
We are looking for something primitive: a memory from before our birth. Something obvious that we all carry and that evolves within us: the first gestures of the first humans. Between art, science and technology, our research works towards a virtual scene of rock art in action. Assuming that the cave paintings are the traces of oral performance or dance rites...
This paper describes the process of designing a Virtual Reality Agent-based platform to explore an interactive gestural dialogue between real and virtual actors, and proposes an analysis of the results. It is part of the CIGALE interdisciplinary project involving researchers in the fields of linguistics, digital arts and computer science, as well as...
In this thesis, we propose and evaluate new gestural interfaces for 3DUI. This work is motivated by two application cases: the first one is dedicated to people with limited sensory-motor abilities, for whom generic interaction methods may not be suitable. The second one is artistic digital performances, for which gesture freedom is part of the creati...
In this paper we study the memorization of user-created gestures for 3DUI. General-public applications mostly use standardized gestures for interaction with simple content. This work is motivated by two application cases for which a standardized approach is not possible and thus user-specific or dedicated interfaces are needed. The first one is appl...
Citations
... novel movement based on the specific style of each dancer in a company. Other systems like EVE (Jégo and Meneghini, 2020) or the project AI_am (Berman and James, 2015) integrate audience interaction by either adding the audience member as a third partner in improvisation or studying how audience members understand a virtual improviser. Lastly, Jochum and Derks (2019) study human-robot improvisation during three different performances, in break dancing, physical theatre, and modern dance, in all of which the dancer and robot responded to each other but rarely came into physical contact. ...
... The question of imperfection that Judith raises evokes Dominique Boutet's linguistic study (Tramus et al., 2018; Tramus & Boutet, 2021) of recorded videos of how spectators and virtual actors interact in the CIGALE project's InterACTE installation. Due to technical constraints, the virtual actor experiences the same difficulty a toddler does when rotating their hands to face up (supination) or down (pronation). ...
... This project on capturing artistic, linguistic, and expressive gestures allowed us to make the installation InterACTE, where a virtual actor adapts their gestures while facing a spectator. Through real-time interaction, they are also able to make new gestures thanks to our behaviour engine (Batras et al., 2016). I was particularly interested in the way the virtual actor's body was staged in terms of visual appearance. ...
... For the application of dance, this collaboration usually happens during improvisation, not necessarily with the goal of choreographic creation. Systems like LumenAI (Liu et al., 2019), ViewpointsAI (Jacob et al., 2013), and InterACTE (Batras et al., 2016) all allow dancers to improvise with a virtual partner, present on a screen or in a VR environment, who analyzes the dancer's movement and can not only mimic their movement but also offer new movement. The Living Archive (https://experiments.withgoogle.com/living-archive-wayne-mcgregor) takes this interaction a step further by offering ...
... When using spatial or spatiotemporal patterns, the number of actuators varies a lot (5 to 24) [Geldard, 1957; Nicolau et al., 2013; Nicolau et al., 2015] due to the use of different patterns and encodings. ...