Figure - uploaded by Chun-Min Chang
The bolded numbers indicate statistically significant correlations between group performance and each of the Big-5 personality composite attributes.

Contexts in source publication

Context 1
... understand the influence of group personality on team performance, we compute the Spearman correlation between each of the 20 dimensions of the composite group personality measures and our target group performance label. Table 2 includes the correlation results. Numbers in bold indicate a significant correlation at the α = 0.05 level. ...
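The per-dimension test described in the context above can be illustrated with a minimal sketch, assuming the composite group-personality measures form a groups-by-20 matrix and the performance label is a per-group score; the variable names and placeholder data below are illustrative, not from the paper.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
group_personality = rng.normal(size=(40, 20))  # placeholder: 40 groups x 20 composite dimensions
performance = rng.normal(size=40)              # placeholder: one performance label per group

alpha = 0.05
for dim in range(group_personality.shape[1]):
    rho, p_value = spearmanr(group_personality[:, dim], performance)
    marker = "*" if p_value < alpha else " "   # '*' marks significance at the 0.05 level
    print(f"dimension {dim:2d}: rho = {rho:+.3f}, p = {p_value:.3f} {marker}")
```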

Similar publications

Preprint
Full-text available
Textual Inversion remains a popular method for personalizing diffusion models, in order to teach models new subjects and styles. We note that textual inversion has been underexplored using alternatives to the UNet, and experiment with textual inversion with a vision transformer. We also seek to optimize textual inversion using a strategy that does...

Citations

... asserted that existing research on TMS overly emphasizes expertise-related factors while neglecting other individual attributes that also play a critical role in affecting TMS development and team collaboration. Among these attributes, individuals' personality traits stand out as crucial factors that significantly influence team performance and collaboration outcomes (Stadler et al., 2019;Zhong et al., 2019). Despite extensive research on personality traits across various contexts (see Huang et al., 2014), little attention has been given to investigating their impact on team performance through the operation of TMS development (Pearsall and Ellis, 2006). ...
Article
Full-text available
This research applies and integrates transactive memory systems (TMS) theory and the Big Five personality traits model to investigate the performance dynamics of dyadic teams engaged in virtual collaborative problem-solving (CPS). Specifically, this study examines how the personal attributes of team members, including their expertness and Big Five personality traits (extraversion, agreeableness, openness, conscientiousness, and neuroticism), as well as the resultant diversity in expertness and Big Five personality traits within teams, influence both team-level and individual-level performance gain from virtual collaboration. Studying 377 dyadic teams composed of 754 individuals working on an online collaborative intellective task, this research found that dyads with high expertness diversity had greater performance gain from virtual collaboration than dyads with low expertness diversity. Further, dyads where both members scored low on agreeableness showed the most significant improvement in team performance. At the individual level, a team member who had a low expertness level but was paired with a high-expertness teammate demonstrated the greatest performance gain from virtual collaboration. The integration of TMS theory and the Big Five personality traits model provides a richer and more nuanced understanding of how individual attributes and team dynamics contribute to successful virtual CPS outcomes.
... corresponding to these cues typically include statistics of signal energy and fundamental frequency (e.g., variation, maximum, mean), spectral features (formants, bandwidths, spectrum intensity), speaking rate (e.g., number of syllables per second), local variability of the speech signal (e.g., jitter and shimmer), and Mel-Frequency Cepstral Coefficients (MFCCs). Due to the large number of speech features aimed at capturing vocal behavior, there have been attempts to identify standard feature sets through the application of publicly available packages (e.g., OpenSMILE [52]) or meta-analysis of the literature (e.g., the Geneva Minimalistic Acoustic Parameter Set (GeMAPS) [212]). ...
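As a brief, hedged illustration of the standard feature sets mentioned above, the sketch below extracts GeMAPS-family functionals with the openSMILE Python wrapper; the audio file name is a placeholder, and the exact feature-set version is an assumption rather than the one used in the cited works.

```python
import opensmile

# Configure openSMILE to produce one summary (functionals) vector per file,
# using a GeMAPS-family parameter set (statistics of energy, F0, formants,
# jitter, shimmer, and MFCC-related descriptors).
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,
    feature_level=opensmile.FeatureLevel.Functionals,
)

features = smile.process_file("meeting_audio.wav")  # placeholder path; returns a pandas DataFrame
print(features.shape, list(features.columns)[:5])
```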
... Deep Boltzmann Machines (DBM) were applied as an unsupervised feature learning method to jointly model body pose features in [21]. For sequential data processing, Long Short-Term Memory Networks (LSTM) [2,24,114,120,174,208,212], a type of Recurrent Neural Network (RNN), were applied for various problems and often in combination with CNNs. In [22], sequential data processing was performed with Conditional Restricted Boltzmann Machines (CRBM) [22] and a combination of RNNs with Restricted Boltzmann Machines (RNN-RBM) [22]. ...
... It has a high correlation with engagement, rapport, and even empathy. Through nonverbal behavior analysis, it is possible to detect whether group performance is high or low [212], to quantify interaction quality [105], or to predict group satisfaction level [106]. Below, we discuss each topic and the corresponding studies in depth, and Table 3 summarizes them. ...
Preprint
Full-text available
This work presents a systematic review of recent efforts (since 2010) aimed at automatic analysis of nonverbal cues displayed in face-to-face co-located human-human social interactions. The main reason for focusing on nonverbal cues is that these are the physical, machine detectable traces of social and psychological phenomena. Therefore, detecting and understanding nonverbal cues means, at least to a certain extent, to detect and understand social and psychological phenomena. The covered topics are categorized into three as: a) modeling social traits, such as leadership, dominance, personality traits, b) social role recognition and social relations detection and c) interaction dynamics analysis in terms of group cohesion, empathy, rapport and so forth. We target the co-located interactions, in which the interactants are always humans. The survey covers a wide spectrum of settings and scenarios, including free-standing interactions, meetings, indoor and outdoor social exchanges, dyadic conversations, and crowd dynamics. For each of them, the survey considers the three main elements of nonverbal cues analysis, namely data, sensing approaches and computational methodologies. The goal is to highlight the main advances of the last decade, to point out existing limitations, and to outline future directions.
... The batch size is fixed as [16, 32], the maximum epoch is 1000, and the optimizer is ADAMAX [29]. Additionally, we follow [65, 66], which are the studies closest to ours, in using unweighted average recall (UAR) as our final evaluation metric. Zhong et al. [65, 66] modeled the group-level personality composition for group performance classification. ...
... Additionally, we follow [65, 66], which are the studies closest to ours, in using unweighted average recall (UAR) as our final evaluation metric. Zhong et al. [65, 66] modeled the group-level personality composition for group performance classification. Finally, the whole framework is implemented using the PyTorch toolkit [49]. ...
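Unweighted average recall, the evaluation metric mentioned in the excerpts above, is simply per-class recall averaged without weighting by class frequency; a small sketch with placeholder labels is shown below.

```python
from sklearn.metrics import recall_score

# Placeholder ground-truth and predicted class labels.
y_true = [0, 0, 1, 1, 1, 2, 2, 2, 2]
y_pred = [0, 1, 1, 1, 0, 2, 2, 1, 2]

# 'macro' averaging computes recall per class and averages the per-class
# recalls with equal weight, which is exactly the UAR.
uar = recall_score(y_true, y_pred, average="macro")
print(f"UAR = {uar:.3f}")
```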
Conference Paper
Full-text available
Physiological synchrony is a particular phenomenon of physiological responses during face-to-face conversation. However, while many previous studies have proposed various physiological synchrony measures between interlocutors in dyadic conversations, there are very few works on computing physiological synchrony in small groups (three or more people). Besides, belongingness and satisfaction are two important factors in people's decisions about which group they want to stay in. Therefore, in this preliminary work, we investigate and reveal the relationship between physiological synchrony and belongingness/satisfaction in group conversation. We feed the physiology of group members into a designed learnable graph structure with the group-level physiological synchrony and heart-related features computed from photoplethysmography (PPG) signals. We then devise a Group-modulated Attentive Bi-directional Long Short-Term Memory (GGA-BLSTM) model to recognize three levels of belongingness and satisfaction (low, middle, and high) in groups. We evaluate the proposed method on our recently collected multimodal group interaction corpus (never published before), NTUBA, and the results show that (1) the models trained jointly with the group-level physiological synchrony and the conventional heart-related features consistently outperform the model trained only with the conventional features, and (2) the proposed model with a Graph-structure Group-modulated Attention mechanism (GGA), GGA-BLSTM, performs better than the strong baseline model, the attentive BLSTM. Finally, the GGA-BLSTM achieves a promising unweighted average recall (UAR) of 73.3% and 82.1% on the group satisfaction and belongingness classification tasks, respectively. In further analyses, we reveal the relationships between physiological synchrony and group satisfaction/belongingness.
... To be used in practice, the z-score normalization of the EPQ scores is computed on the training data and then applied to the test data. Furthermore, inspired by [49], [50], which computed statistics (e.g., mean, maximum, minimum) as measures of personality for each interaction unit, e.g., within a group, we not only include the raw EPQ scores but also compute seven statistics (difference, maximum, minimum, mean, standard deviation, lower quartile (quartile1), and upper quartile (quartile3)) between the interrogator and the deceiver (each participant pair). ...
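A minimal sketch of this feature construction is given below, assuming each participant is represented by a vector of EPQ scale scores; the z-score parameters are estimated on the training data only, and all names (epq_train, interrogator, deceiver) are illustrative rather than taken from the cited work.

```python
import numpy as np

def fit_zscore(train_scores):
    """Estimate z-score parameters on the training data only."""
    mu = train_scores.mean(axis=0)
    sigma = train_scores.std(axis=0)
    return lambda x: (x - mu) / sigma

def pair_statistics(interrogator, deceiver):
    """Seven statistics between the two interlocutors' (normalized) EPQ scores."""
    pair = np.stack([interrogator, deceiver])   # shape: (2, n_scales)
    return np.concatenate([
        interrogator - deceiver,                # difference
        pair.max(axis=0),                       # maximum
        pair.min(axis=0),                       # minimum
        pair.mean(axis=0),                      # mean
        pair.std(axis=0),                       # standard deviation
        np.percentile(pair, 25, axis=0),        # lower quartile (quartile1)
        np.percentile(pair, 75, axis=0),        # upper quartile (quartile3)
    ])

# Placeholder EPQ scores (three scales per participant).
epq_train = np.array([[10., 14., 7.], [12., 9., 11.], [8., 13., 6.]])
normalize = fit_zscore(epq_train)
features = pair_statistics(normalize(np.array([10., 14., 7.])),
                           normalize(np.array([12., 9., 11.])))
print(features.shape)  # 7 statistics x 3 scales -> (21,)
```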
Conference Paper
Full-text available
Deception occurs frequently in our lives. It is well known that people are generally not good at detecting deception; however, the behaviors of interlocutors during an interrogator-deceiver conversation may indicate whether the interrogator thinks the other person is telling deceptions or not. The ability to automatically recognize such perceived deception from behavioral cues has the potential to advance technologies for improved deception prevention or enhanced persuasion skills. To investigate the feasibility of recognizing perceived deception from behaviors, we utilize a joint learning framework that considers acoustic-prosodic features, linguistic characteristics, language uses, and conversational temporal dynamics. We further incorporate personality attributes as an additional input to the recognition network. Our proposed model is evaluated on a recently collected Chinese deceptive corpus of dialog games. We achieve an unweighted average recall (UAR) of 86.70% and 84.89% on 2-class perceived deception-truth recognition tasks given that the deceiver is telling either truths or lies, respectively. Further analyses reveal that 1) the deceiver's behaviors affect the interrogator's perception (e.g., higher intensity from the deceiver makes the interrogator believe their statements even though they are in fact deceptive), 2) the interrogator's behavior features carry information about their own deception perception (e.g., the interrogator's utterance duration is correlated with his/her perception of truth), and 3) personality traits indeed enhance perceived deception-truth recognition. Finally, we also demonstrate additional evidence indicating that humans are bad at detecting deception: there are very few indicators that overlap between perceived and produced truth-deceptive behaviors.
... Recently, computational research has progressed in developing methods that automatically predict group-level task performance from verbal/non-verbal behaviors during small group interactions [6,7], and some research has started to investigate joint modeling approaches that consider the intertwining effect between members' vocal behaviors and intra-group personality compositions [8,9]. While this past research has laid a solid foundation for predicting group performance from vocal behaviors by jointly modeling the effect of intra-group personality composition, these works do not take inter-group personality structures into consideration. ...
... For each session, we first rank and label participants according to their speaking time, from the most to the least; e.g., we assign the interlocutors as either the talkative or the talk-less subject in the NTULP database. We train a Bi-GRU for each subject with a personality re-weighted attention mechanism as in our previous work [9], defined as:
... Bi-GRU+ATT - Vocal Behavior Only: training a typical Bi-GRU for each subject with attention to perform recognition directly. Personality Network (PN) - Vocal Personality Only: using the PN model from our previous work [9], which uses a 5-layer DNN on personality composite features to perform recognition. ...
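The exact re-weighting equation is elided in the excerpts above; purely as a generic illustration (not the authors' formulation), the sketch below shows a bidirectional GRU whose attention scores are modulated by a personality-derived gate, with all dimensions (feat_dim, pers_dim, n_classes) chosen arbitrarily.

```python
import torch
import torch.nn as nn

class BiGRUPersonalityAttention(nn.Module):
    """Generic Bi-GRU with attention pooling modulated by personality features."""
    def __init__(self, feat_dim=88, hidden=64, pers_dim=20, n_classes=2):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.frame_score = nn.Linear(2 * hidden, 1)   # per-frame attention score
        self.pers_gate = nn.Linear(pers_dim, 1)       # scalar gate from personality features
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, frames, personality):
        # frames: (batch, time, feat_dim); personality: (batch, pers_dim)
        h, _ = self.gru(frames)                                        # (batch, time, 2*hidden)
        scores = self.frame_score(h).squeeze(-1)                       # (batch, time)
        scores = scores * torch.sigmoid(self.pers_gate(personality))   # personality re-weighting
        weights = torch.softmax(scores, dim=1).unsqueeze(-1)           # attention weights
        pooled = (weights * h).sum(dim=1)                              # attention-pooled summary
        return self.classifier(pooled)

model = BiGRUPersonalityAttention()
logits = model(torch.randn(4, 50, 88), torch.randn(4, 20))
print(logits.shape)  # torch.Size([4, 2])
```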
Article
Full-text available
Automated co-located human-human interaction analysis has been addressed through the use of nonverbal communication as measurable evidence of social and psychological phenomena. We survey the computing studies (since 2010) detecting phenomena related to social traits (e.g., leadership, dominance, personality traits), social roles/relations, and interaction dynamics (e.g., group cohesion, engagement, rapport). Our target is to identify the nonverbal cues and computational methodologies resulting in effective performance. This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings (free-standing conversations, meetings, dyads, and crowds). We also present a comprehensive summary of the related datasets and outline future research directions regarding the implementation of artificial intelligence, dataset curation, and privacy-preserving interaction analysis. Some major observations are: the most often used nonverbal cue, computational method, interaction environment, and sensing approach are speaking activity, support vector machines, meetings composed of 3-4 persons, and microphones and cameras, respectively; multimodal features prominently perform better; deep learning architectures showed improved performance overall, but there exist many phenomena whose detection has never been implemented through deep models. We also identified several limitations, such as the lack of scalable benchmarks, annotation reliability tests, cross-dataset experiments, and explainability analysis.
Article
A small group is a fundamental interaction unit for achieving a shared goal. Group performance can be automatically predicted using computational methods to analyze members’ verbal behavior in task-oriented interactions, as has been proven in several recent works. Most of the prior works focus on lower-level verbal behaviors, such as acoustics and turn-taking patterns, using either hand-crafted features or even advanced end-to-end methods. However, higher-level group-based communicative functions used between group members during conversations have not yet been considered. In this work, we propose a two-stage training framework that effectively integrates the communication function, as defined using Bales’ interaction process analysis (IPA) coding system, with the embedding learned from the low-level features in order to improve the group performance prediction. Our result shows a significant improvement compared to the state-of-the-art methods (4.241 MSE and 0.341 Pearson’s correlation on NTUBA-task1 and 3.794 MSE and 0.291 Pearson’s correlation on NTUBA-task2) on the NTUBA (National Taiwan University Business Administration) small-group interaction database. Furthermore, based on the design of IPA, our computational framework can provide a time-grained analysis of the group communication process and interpret the beneficial communicative behaviors for achieving better group performance.
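For reference, the two evaluation measures reported above (mean squared error and Pearson's correlation) can be computed as in the short sketch below; the prediction and target values are placeholders, not NTUBA results.

```python
import numpy as np
from scipy.stats import pearsonr

y_true = np.array([3.2, 4.1, 2.8, 5.0, 3.9])   # placeholder group-performance labels
y_pred = np.array([3.0, 4.5, 3.1, 4.6, 3.7])   # placeholder model predictions

mse = np.mean((y_true - y_pred) ** 2)            # mean squared error
r, p_value = pearsonr(y_true, y_pred)            # Pearson's correlation
print(f"MSE = {mse:.3f}, Pearson's r = {r:.3f} (p = {p_value:.3f})")
```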