Figure 2 - uploaded by Félix Bigand
Performance scores from the four-alternative forced choice identification task. The dashed horizontal line indicates the chance performance level. Error bars indicate standard errors. Significant differences from chance level: * (p<.05), *** (p<.001).

Contexts in source publication

Context 1
... Identification scores as a function of the four signers are shown in Figure 2. ...
Context 2
... et al. (2005) reported 76% correct identification but involving extensive pretraining for the participants [28]. Our results (Figure 2) include all participants' responses, whatever their familiarity with signers. In addition, limitations of the online survey can be discussed. ...

Citations

... The data also revealed individual differences among the signers for both verbs (Fig. 6) and adjectives (Fig. 7), which might be interpreted as personal signing style (cf. Bigand et al., 2020). The present study suggests that in sign languages the physical parameters of motion are recruited for semantic and grammatical markings, but different parameters are recruited for different marking categories. ...
Article
Full-text available
Across a number of sign languages, temporal and spatial characteristics of dominant hand articulation are used to express semantic and grammatical features. In this study of Austrian Sign Language (Österreichische Gebärdensprache, or ÖGS), motion capture data of four Deaf signers is used to quantitatively characterize the kinematic parameters of sign production in verbs and adjectives. We investigate (1) the difference in production between verbs involving a natural endpoint (telic verbs; e.g. arrive) and verbs lacking an endpoint (atelic verbs; e.g. analyze), and (2) adjective signs in intensified vs. non-intensified (plain) forms. Motion capture data analysis using linear-mixed effects models (LME) indicates that both the endpoint marking in verbs, as well as marking of intensification in adjectives, are expressed by movement modulation in ÖGS. While the semantic distinction between verb types (telic/atelic) is marked by higher peak velocity and shorter duration for telic signs compared to atelic ones, the grammatical distinction (intensification) in adjectives is expressed by longer duration for intensified compared to non-intensified adjectives. The observed individual differences of signers might be interpreted as personal signing style.
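The abstract above describes fitting linear mixed-effects models to kinematic measures such as sign duration, with verb type as a fixed effect and signers as a grouping factor. The following is a minimal, hypothetical sketch of that kind of analysis using statsmodels on simulated data; the variable names, effect sizes, and data are illustrative assumptions, not the study's actual data or code.

```python
# Hypothetical sketch: a linear mixed-effects model relating sign duration
# to verb type (telic vs. atelic), with random intercepts per signer.
# All data below are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 160
df = pd.DataFrame({
    "signer": rng.choice(["S1", "S2", "S3", "S4"], size=n),
    "telic": rng.integers(0, 2, size=n),  # 1 = telic verb, 0 = atelic
})
# Simulated durations (ms): telic signs shorter, plus per-signer offsets
# standing in for individual signing style.
offsets = {"S1": 0.0, "S2": 20.0, "S3": -15.0, "S4": 5.0}
df["duration"] = (
    600.0 - 80.0 * df["telic"]
    + df["signer"].map(offsets)
    + rng.normal(0.0, 30.0, size=n)
)

# Fixed effect of verb type, random intercept per signer.
model = smf.mixedlm("duration ~ telic", df, groups=df["signer"])
result = model.fit()
print(result.params["telic"])  # negative estimate: telic signs are shorter
```

The random intercept absorbs stable between-signer differences (the "personal signing style" the abstract mentions), so the fixed-effect estimate for verb type is not confounded by which signer produced which items.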
... 3 Chapter 8 is partly reproduced from Bigand et al. (2020). 4 Chapter 9 is partly reproduced from Bigand et al. (2021c). ...
... Moreover, the minor role of morphology-related cues in the human ability to identify the signers calls for further work, including machine learning studies, investigating the role of other motion features, in particular kinematic ones (Section 8.3). This chapter is partly reproduced from Bigand et al. (2020). ...
Thesis
Full-text available
Many technological barriers must be tackled in order to provide tools in Sign Languages (SLs) in the same way as for spoken languages. For that aim, further insights must be gained into multiple disciplines, in particular motion science. More specifically, the present thesis aims to gain insights into the possibility of anonymizing the movements of a signer, in the same way as a speaker can remain anonymous by modifying specific aspects of the voice. First, this thesis sheds light on general kinematic properties of spontaneous SL in order to improve the models of natural SL. Using 3D motion recordings of multiple signers, we show that the kinematic bandwidth of spontaneous SL differs greatly from that of signs made in isolation. Furthermore, a Principal Component Analysis reveals that spontaneous SL discourses can be described by a reduced set of simple, one-directional movements (i.e., synergies). Furthermore, combining human data and computational modelling, we demonstrate that signers can be identified from their movements, beyond morphology- and posture-related cues. Finally, we present machine learning models able to automatically extract identity information from SL movements and to manipulate it in generated motion. The models developed in this thesis could allow the production of anonymized SL messages via virtual signers, which would open new horizons for deaf SL users.
... With the advent of motion capture (mocap) systems, it has been possible to develop virtual signers (or signing avatars) with high naturality and comprehensibility, by replaying movements of real signers (Huenerfauth, 2010, 2014; Gibet, 2018). However, it has now been shown that deaf observers can identify signers from point-light displays (PLDs) of their movements, beyond cues related to appearance, clothes, or morphology (Bigand et al., 2020). This observation calls into question the possibility of producing anonymized, non-identifiable content with virtual signers. ...
... Using PLDs, studies have demonstrated that human observers were able to extract critical information from motion, such as actions (Johansson, 1973), gender (Mather and Murdoch, 1994), or emotional state (Atkinson et al., 2004). Similarly, behavioral studies have used PLDs to show that the identity of familiar individuals can be inferred from human movements, such as walking (Loula et al., 2005; Troje et al., 2005), dancing (Loula et al., 2005; Bläsing and Sauzet, 2018), clapping (Sevdalis and Keller, 2009), or producing SL (Bigand et al., 2020). Moreover, Baragchizadeh et al. (2020) recently demonstrated that motion cues also allow for the perceptual discrimination of the identity of unfamiliar people. ...
... The data used in the present investigation were taken from a previously reported study (Bigand et al., 2020). In brief, each of six deaf native and fluent signers had freely described the content of 25 pictures (as shown in examples in Supplementary Material 1) using LSF. ...
Article
Full-text available
Sign language (SL) motion contains information about the identity of a signer, as does voice for a speaker or gait for a walker. However, how such information is encoded in the movements of a person remains unclear. In the present study, a machine learning model was trained to extract the motion features allowing for the automatic identification of signers. A motion capture (mocap) system recorded six signers during the spontaneous production of French Sign Language (LSF) discourses. A principal component analysis (PCA) was applied to time-averaged statistics of the mocap data. A linear classifier then managed to identify the signers from a reduced set of principal components (PCs). The performance of the model was not affected when information about the size and shape of the signers was normalized. Posture normalization decreased the performance of the model, which nevertheless remained over five times superior to chance level. These findings demonstrate that the identity of a signer can be characterized by specific statistics of kinematic features, beyond information related to size, shape, and posture. This is a first step toward determining the motion descriptors necessary to account for the human ability to identify signers.
... Nevertheless, as the authors caution, this work is rare and foundational, and still far behind the progress achieved so far by research into writing and speech. Another issue is the confidentiality of the individuals involved in the production of sign language samples collected for shared datasets: as a recent study suggests, signers can be recognized based on motion capture information (Bigand et al. 2020). ...
Technical Report
Full-text available
This publication is based upon work from COST Action ‘Language in the Human-Machine Era’, supported by COST (European Cooperation in Science and Technology). Authors of the report: Sayers, Dave • 0000-0003-1124-7132 Sousa-Silva, Rui • 0000-0002-5249-0617 Höhn, Sviatlana • 0000-0003-0646-3738 Ahmedi, Lule • 0000-0003-0384-6952 Allkivi-Metsoja, Kais • 0000-0003-3975-5104 Anastasiou, Dimitra • 0000-0002-9037-0317 Beňuš, Štefan • 0000-0001-8266-393X Bowker, Lynne • 0000-0002-0848-1035 Bytyçi, Eliot • 0000-0001-7273-9929 Catala, Alejandro • 0000-0002-3677-672X Çepani, Anila • 0000-0002-8400-8987 Chacón-Beltrán, Rubén • 0000-0002-3055-0682 Dadi, Sami • 0000-0001-7221-9747 Dalipi, Fisnik • 0000-0001-7520-695X Despotovic, Vladimir • 0000-0002-8950-4111 Doczekalska, Agnieszka • 0000-0002-3371-3803 Drude, Sebastian • 0000-0002-2970-7996 Fort, Karën • 0000-0002-0723-8850 Fuchs, Robert • 0000-0001-7694-062X Galinski, Christian • (no ORCID number) Gobbo, Federico • 0000-0003-1748-4921 Gungor, Tunga • 0000-0001-9448-9422 Guo, Siwen • 0000-0002-6132-6093 Höckner, Klaus • 0000-0001-6390-4179 Láncos, Petra Lea • 0000-0002-1174-6882 Libal, Tomer • 0000-0003-3261-0180 Jantunen, Tommi • 0000-0001-9736-5425 Jones, Dewi • 0000-0003-1263-6332 Klimova, Blanka • 0000-0001-8000-9766 Korkmaz, Emin Erkan • 0000-0002-7842-7667 Maučec, Mirjam Sepesy • 0000-0003-0215-513X Melo, Miguel • 0000-0003-4050-3473 Meunier, Fanny • 0000-0003-2186-2163 Migge, Bettina • 0000-0002-3305-7113 Mititelu, Verginica Barbu • 0000-0003-1945-2587 Névéol, Aurélie • 0000-0002-1846-9144 Rossi, Arianna • 0000-0002-4199-5898 Pareja-Lora, Antonio • 0000-0001-5804-4119 Sanchez-Stockhammer, C. • 0000-0002-6294-3579 Şahin, Aysel • 0000-0001-6277-6208 Soltan, Angela • 0000-0002-2130-7621 Soria, Claudia • 0000-0002-6548-9711 Shaikh, Sarang • 0000-0003-2099-4797 Turchi, Marco • 0000-0002-5899-4496 Yildirim Yayilgan, Sule • 0000-0002-1982-6609