Figure - available from: PLOS One
Cosine similarity between individual and common PMs
Similarity values are displayed only when sim > 0.2, for the sake of visibility.
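As a rough illustration of the similarity measure plotted in the figure, the sketch below computes the cosine similarity between two principal movements (PMs), assuming each PM is represented as a flat vector of loadings. The vectors and names are invented for illustration; only the 0.2 display threshold mirrors the caption.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity: dot product divided by the product of norms."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical PM loading vectors (e.g., weights over joint coordinates).
individual_pm = [0.6, 0.8, 0.0]
common_pm = [0.5, 0.7, 0.2]

sim = cosine_similarity(individual_pm, common_pm)
if sim > 0.2:  # only label similarities above the display threshold
    print(f"sim = {sim:.2f}")
```

Orthogonal PM vectors would give sim = 0 and stay unlabelled under this threshold.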


Source publication
Article
Full-text available
Sign Language (SL) is a continuous and complex stream of multiple body movement features. That raises the challenging issue of providing efficient computational models for the description and analysis of these movements. In the present paper, we used Principal Component Analysis (PCA) to decompose SL motion into elementary movements called principa...
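The PCA decomposition described in the abstract can be illustrated with a minimal sketch: power iteration on the covariance matrix recovers the first principal component (the dominant "principal movement") of a toy trajectory. This is an illustrative stand-in, not the paper's actual pipeline; the data and function names are invented.

```python
import math

def first_principal_component(data, iters=200):
    """Power iteration on the covariance matrix: returns the unit
    vector along which a mean-centred data set varies most."""
    n, d = len(data), len(data[0])
    means = [sum(row[i] for row in data) / n for i in range(d)]
    centred = [[row[i] - means[i] for i in range(d)] for row in data]
    # Sample covariance matrix (d x d).
    cov = [[sum(r[i] * r[j] for r in centred) / (n - 1)
            for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Toy "marker trajectory": two coordinates moving together
# (the second is roughly twice the first, plus a small wobble).
frames = [[t, 2 * t + 0.1 * ((-1) ** t)] for t in range(20)]
pc1 = first_principal_component(frames)  # unit vector close to [1, 2]/sqrt(5)
```

Projecting each frame onto `pc1` would give the time course of this elementary movement, which is the sense in which PCA decomposes motion into principal movements.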

Similar publications

Article
Full-text available
Keyframe extraction is a widely applied remedy for issues faced with 3D motion capture-based computer animation. In this paper, we propose a novel keyframe extraction method, where the motion is represented in LRI coordinates and the dimensions covering 95% of the data are automatically selected using PCA. Then, by K-means classification, the summ...
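The 95%-variance selection step mentioned above (and echoed in the citation below, where 15 PMs cover over 95% of kinematic variance) can be sketched as follows, assuming the PCA eigenvalues (variance per component) are already available in descending order. The spectrum here is invented for illustration.

```python
def components_for_variance(eigenvalues, threshold=0.95):
    """Smallest number of leading components whose cumulative
    variance ratio reaches the threshold."""
    total = sum(eigenvalues)
    cumulative = 0.0
    for k, ev in enumerate(eigenvalues, start=1):
        cumulative += ev
        if cumulative / total >= threshold:
            return k
    return len(eigenvalues)

# Hypothetical spectrum: variance concentrates in the first few components.
spectrum = [50.0, 25.0, 12.0, 6.0, 4.0, 2.0, 1.0]
k = components_for_variance(spectrum)
```

The remaining components are discarded, which is what makes the retained set a compact summary of the motion.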

Citations

... The first 15 PMs, accounting for more than 95% of the kinematic variance, were retained for further analyses. 60,61 PCA was used to extract PMs based on previous evidence suggesting that such linear combinations accurately model the neural control of complex movements, 59,62 and even the specific coordination patterns of dance. 16,63 Further, we found that the variance explained by our 15 PMs was similar to that explained by non-linear components extracted using an auto-encoder (95.6% vs. 96.5%, ...
Article
Full-text available
Collective synchronized behavior has powerful social-communicative functions observed across several animal taxa.¹,²,³,⁴,⁵,⁶,⁷ Operationally, synchronized behavior can be explained by individuals responding to shared external cues (e.g., light, sound, or food) as well as by inter-individual adaptation.³,⁸,⁹,¹⁰,¹¹ We contrasted these accounts in the context of a universal human practice—collective dance—by recording full-body kinematics from dyads of laypersons freely dancing to music in a “silent disco” setting. We orthogonally manipulated musical input (whether participants were dancing to the same, synchronous music) and visual contact (whether participants could see their dancing partner). Using a data-driven method, we decomposed full-body kinematics of 70 participants into 15 principal movement patterns, reminiscent of common dance moves, explaining over 95% of kinematic variance. We find that both music and partners drive synchrony, but through distinct dance moves. This leads to distinct kinds of synchrony that occur in parallel by virtue of a geometric organization: anteroposterior movements such as head bobs synchronize through music, while hand gestures and full-body lateral movements synchronize through visual contact. One specific dance move—vertical bounce—emerged as a supramodal pacesetter of coordination, synchronizing through both music and visual contact, and at the pace of the musical beat. These findings reveal that synchrony in human dance is independently supported by shared musical input and inter-individual adaptation. The independence between these drivers of synchrony hinges on a geometric organization, enabling dancers to synchronize to music and partners simultaneously by allocating distinct synchronies to distinct spatial axes and body parts.
... Researchers have explored using avatars in this capacity since the late 1990s [3]. Anonymization can begin with a system that records the signer's motion and plays it back on an avatar that looks very different from the signer; however, this does not address the issue of signing style, which can also identify a signer [4]. ...
Article
Full-text available
Signing avatars continue to be an active field of research in Deaf-hearing communication, and while their ability to communicate extemporaneous signing has grown significantly in the last decade, one area continues to be commented on consistently in user tests: the quality, quantity, or lack thereof, of facial movement. Facial nonmanual signals are extremely important for forming legible, grammatically correct signing, because they are an intrinsic part of the grammar of the language. On computer-generated avatars, they are even more important because they are a key element that keeps the avatar moving in all aspects, and thus are critical for creating synthesized signing that is lifelike and acceptable to Deaf users. This paper explores the technical and perception issues that avatars must grapple with to attain acceptability, and proposes a method for rigging and controlling the avatar that extends prior interfaces based on the Facial Action Coding System, which has been used for many signing avatars in the past. Further, issues of realism are explored that must be considered if the avatar is to avoid the creepiness that has plagued computer-generated characters in many other applications such as film. The techniques proposed are demonstrated using a state-of-the-art signing avatar that has been tested with Deaf users for acceptability.
... This chapter is partly reproduced from Bigand et al. (2021a). ...
Thesis
Full-text available
Many technological barriers must be tackled in order to provide tools in Sign Languages (SLs) in the same way as for spoken languages. For that aim, further insights must be gained into multiple disciplines, in particular motion science. More specifically, the present thesis aims to gain insights into the possibility of anonymizing the movements of a signer, in the same way as a speaker can remain anonymous by modifying specific aspects of the voice.

First, this thesis sheds light on general kinematic properties of spontaneous SL in order to improve the models of natural SL. Using 3D motion recordings of multiple signers, we show that the kinematic bandwidth of spontaneous SL differs markedly from that of signs made in isolation. Furthermore, a Principal Component Analysis reveals that spontaneous SL discourses can be described by a reduced set of simple, one-directional movements (i.e., synergies).

Furthermore, combining human data and computational modelling, we demonstrate that signers can be identified from their movements, beyond morphology- and posture-related cues. Finally, we present machine learning models able to automatically extract identity information from SL movements and to manipulate it in generated motion. The models developed in this thesis could allow producing anonymized SL messages via virtual signers, which would open new horizons for deaf SL users.