TABLE 2
Source publication
This study examined the relative impact that different channels of communication had on social perception based on exposure to thin slices of the behavioral stream. Specifically, we tested the hypothesis that dyadic rapport can be perceived quickly through visual channels. Perceivers judged the rapport in 50 target interactions in one of five stimulus d...
Contexts in source publication
Context 1
... correlations between the behavioral cues and the interactants' self-reports of rapport are presented on the left side of Table 2. ...
Context 2
... agreement between the encoding correlations and the cue dependencies represents more ecologically valid perception processes. The mean cue dependencies across each sample of perceivers for each condition are presented on the right side of Table 2. ...
Context 3
... second set of contrasts tested the hypothesis that a cue was utilized more heavily when judging video stimuli than when judging audio stimuli. The data in Table 2 demonstrated that perceivers in the video condition, compared to those in the audio condition, relied more heavily on synchrony, proximity, mean attractiveness, and female adaptors, all of which are contained within the visual display. On the other hand, perceivers in the audio condition, compared to those in the video condition, relied more heavily on mutual silence, which was conveyed through the verbal channel. ...
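As a rough illustration of the analysis described in these contexts, the sketch below (Python, with entirely hypothetical placeholder data and variable names) computes encoding correlations (cue validity: each behavioral cue correlated with the interactants' self-reported rapport) alongside cue dependencies (cue utilization: the same cues correlated with perceivers' mean rapport judgments in the video and audio conditions). It is not the authors' analysis code; it only shows the kind of correlations that populate the two sides of Table 2.

```python
# Illustrative lens-model-style correlations with hypothetical data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_interactions = 50  # the study used 50 target interactions

# Hypothetical cue measurements per interaction.
cues = pd.DataFrame({
    "synchrony": rng.normal(size=n_interactions),
    "proximity": rng.normal(size=n_interactions),
    "mutual_silence": rng.normal(size=n_interactions),
})

self_reported_rapport = pd.Series(rng.normal(size=n_interactions))  # encoding criterion
video_judgments = pd.Series(rng.normal(size=n_interactions))        # mean perceiver ratings, video condition
audio_judgments = pd.Series(rng.normal(size=n_interactions))        # mean perceiver ratings, audio condition

encoding = cues.corrwith(self_reported_rapport)      # cue validity ("encoding" correlations)
video_dependency = cues.corrwith(video_judgments)    # cue utilization, video condition
audio_dependency = cues.corrwith(audio_judgments)    # cue utilization, audio condition

print(pd.DataFrame({"encoding": encoding,
                    "video": video_dependency,
                    "audio": audio_dependency}).round(2))
```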
Citations
... Objective: Unstructured observation by two researchers to evaluate visceral and behavioral levels of emotional attachment. Wilson et al. (2017) Subjective: Rapport subscales taken from various studies (Briggs et al., 2015; Grahe & Bernieri, 1999; Kang & Gratch, 2012; Kang et al., 2008; Von Der Pütten et al., 2010; Wang & Gratch, 2009); original items on emotional support and synchrony ... dynamics of attachment in HRI, particularly in assessing the long-term psychological and social effects of everyday human-robot interactions. Insights from long-term studies in human-human, human-pet, and human-object attachment could be valuable in extending this understanding to HRI (Endenburg et al., 2014; Grossmann et al., 2006; Mugge et al., 2006; Sroufe, 2005; Raina et al., 1999; Waters et al., 2000). ...
... Furthermore, Miles et al. [23] showed that synchrony between two participants is associated with high rapport in both visual and audio cues. Grahe and Bernieri [8] clarified that visual cues are the most useful for the observer to perceive rapport accurately. Overall, previous studies provide evidence that visual cues are more strongly associated with rapport than audio cues. ...
Automatic rapport estimation in social interactions is a central component of affective computing. Recent reports have shown that the estimation performance of rapport in initial interactions can be improved by using the participant's personality traits as the model's input. In this study, we investigate whether this finding applies to interactions between friends by developing rapport estimation models that utilize nonverbal cues (audio and facial expressions) as inputs. Our experimental results show that adding Big Five features (BFFs) to nonverbal features can improve the estimation performance of self-reported rapport in dyadic interactions between friends. Next, we demystify how BFFs improve the estimation performance of rapport through a comparative analysis between models with and without BFFs. We decompose rapport ratings into perceiver effects (people's tendency to rate other people), target effects (people's tendency to be rated by other people), and relationship effects (people's unique ratings for a specific person) using the social relations model. We then analyze the extent to which BFFs contribute to capturing each effect. Our analysis demonstrates that the perceiver's and the target's BFFs lead estimation models to capture the perceiver and the target effects, respectively. Furthermore, our experimental results indicate that the combinations of facial expression features and BFFs achieve the best estimation performance not only in estimating rapport ratings, but also in estimating the three effects. Our study is the first step toward understanding why personality-aware estimation models of interpersonal perception accomplish high estimation performance.
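To make the decomposition described in this abstract concrete, the sketch below illustrates a simplified, row/column-mean version of splitting round-robin rapport ratings into perceiver, target, and relationship effects. The published social relations model uses bias-corrected round-robin estimators; this is only an illustrative approximation, and the rating matrix is hypothetical.

```python
# Simplified illustration of a social-relations-model-style decomposition.
import numpy as np

# ratings[i, j] = rapport that person i (perceiver) reports toward person j (target);
# the diagonal (self-ratings) is excluded.
ratings = np.array([
    [np.nan, 5.0, 4.0, 6.0],
    [4.0, np.nan, 3.0, 5.0],
    [6.0, 5.0, np.nan, 6.0],
    [3.0, 4.0, 2.0, np.nan],
])

grand_mean = np.nanmean(ratings)
perceiver_effect = np.nanmean(ratings, axis=1) - grand_mean  # rows: tendency to rate others high/low
target_effect = np.nanmean(ratings, axis=0) - grand_mean     # columns: tendency to be rated high/low

# Relationship effect: what remains after removing grand mean, perceiver, and target effects.
relationship_effect = (ratings - grand_mean
                       - perceiver_effect[:, None]
                       - target_effect[None, :])

print("perceiver effects:", np.round(perceiver_effect, 2))
print("target effects:", np.round(target_effect, 2))
print("relationship effects:\n", np.round(relationship_effect, 2))
```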
... In this situation, nonverbal cues may have a more decisive role than verbal ones. Grahe and Bernieri (1999) noted that an observer's assessment of rapport can be achieved with limited exposure and argued that rapport should be expressed through nonverbal cues that are quickly detected rather than verbal cues. Tickle-Degnen and Rosenthal (1990) also explained that while rapport is not defined solely by nonverbal behaviour, it is strongly encoded and transmitted through nonverbal visual channels. ...
... Floor time distribution or the number of interruptions [18] and various activities [38] also change according to the particular stage of a relationship. In terms of rapport, Grahe [39] reported that participants are likely to maintain longer eye contact, smile more, and lean more toward each other when building rapport. ...
... For gaze, participants with high rapport are likely to sustain eye contact longer [39]. Therefore, we focused on gaze variations. ...
Conversations based on mutual intimacy are critical for maintaining positive relationships. A detailed understanding of speaker relationships in dialogues enhances various applications, such as information recommendation systems. Such systems, when interacting with multiple users, can provide more tailored information by understanding the users’ relationships. Furthermore, dialogue systems, which are becoming increasingly prevalent in society, can foster long-term user engagement by recognizing and responding to the intimacy levels of users. This study explores a method for estimating the intimacy levels of speakers and dialogue partners in conversational exchanges. Our approach utilizes a multimodal corpus of natural conversations with 71 Japanese participants, complete with metadata indicating each speaker’s perceived intimacy level. We identified key features for estimating intimacy by analyzing the statistical parameters of these features. Our comprehensive analysis encompassed both verbal and non-verbal information, including prosody, gestures, and facial expressions. The proposed intimacy estimation model combines multimodal features using a multi-stream Bi-directional Long Short-Term Memory (BLSTM) network and grasps the contextual information of conversations with a Context BLSTM. Our model’s effectiveness is demonstrated through comparisons with several baseline models. Experimental results show that our proposed model significantly improves the overall performance compared with other models. Although the RoBERTa-based method (the best baseline model) achieved an F1 score of 0.571, our method had an F1 score of 0.594. In particular, an ablation study shows that combining verbal and non-verbal features is useful for intimacy estimation. The performance was further improved by extending the dialogue context, showing that the proposed model can estimate three levels of intimacy with an F1 score of 0.666 by observing eight utterance exchanges.
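As a rough sketch of the architecture this abstract describes, the PyTorch snippet below encodes each modality (prosody, gesture, facial expression) with its own bidirectional LSTM, fuses the streams per utterance exchange, and runs a context BLSTM over the sequence of exchanges. All feature dimensions, layer sizes, and the three-class output are assumptions for illustration, not the paper's actual configuration.

```python
# Illustrative multi-stream BLSTM with a context BLSTM (hypothetical dimensions).
import torch
import torch.nn as nn


class MultiStreamIntimacyEstimator(nn.Module):
    def __init__(self, dims=None, hidden=64, n_classes=3):
        super().__init__()
        dims = dims or {"prosody": 40, "gesture": 20, "face": 30}
        # One BLSTM encoder per modality stream.
        self.streams = nn.ModuleDict({
            name: nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
            for name, dim in dims.items()
        })
        # Context BLSTM over the sequence of fused utterance-level vectors.
        self.context = nn.LSTM(2 * hidden * len(dims), hidden,
                               batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, exchanges):
        # exchanges: list over utterance exchanges; each item maps modality name
        # to a tensor of shape (batch, frames, feature_dim).
        fused = []
        for frames in exchanges:
            parts = []
            for name, lstm in self.streams.items():
                _, (h, _) = lstm(frames[name])                   # final hidden states
                parts.append(torch.cat([h[-2], h[-1]], dim=-1))  # forward + backward
            fused.append(torch.cat(parts, dim=-1))
        context_in = torch.stack(fused, dim=1)       # (batch, exchanges, features)
        context_out, _ = self.context(context_in)
        return self.classifier(context_out[:, -1])   # logits for the last exchange


# Tiny usage example with random tensors standing in for real features.
model = MultiStreamIntimacyEstimator()
batch, frames = 2, 50
example = [{"prosody": torch.randn(batch, frames, 40),
            "gesture": torch.randn(batch, frames, 20),
            "face": torch.randn(batch, frames, 30)} for _ in range(8)]
print(model(example).shape)  # torch.Size([2, 3])
```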
... In nontherapeutic literature, Bernieri's work stands as the benchmark for studies examining the nature of rapport (Bernieri et al., 1994; Bernieri & Gillis, 1995; Grahe & Bernieri, 1999). For instance, Bernieri et al. (1988) emphasized the role of behavioral synchrony in their study of mother-infant genuine and pseudointeractions; they also suggested the importance of identifying other components of rapport and the need for assessing rapport from different sources, such as third parties' and self-ratings. ...
... Much of Bernieri and colleagues' work (Bernieri et al., 1994; Bernieri & Gillis, 1995; Grahe & Bernieri, 1999) was based on prior research and conceptualizations about rapport by Rosenthal (1987a, 1990) that posited three core components of rapport: mutual attentiveness (interest, focus), positivity (positive behaviors, friendliness, warmth), and coordination (balance, harmony). These authors also suggested that positivity may be more important at the beginning of an interaction while coordination was relatively more important later. ...
Rapport is a fundamental building block of human relationships across cultures; yet, there is still a dearth of systematic, cross-cultural research on this important topic. This study contributes to a small but growing literature on the nature of rapport across cultures by examining judgments of rapport by observers from different culture/language groups of interactions involving investigative interviews conducted in different languages. Observers from four culture/language groups (English, Spanish, Arabic, and French) rated rapport in nine video clips consisting of three interview languages (English, Spanish, and French) and three segments within each interview. Findings demonstrated that rapport judgments reduced to a bidimensional model of positivity and negativity across the observer culture/language groups; that considerable cultural similarities in rapport judgments existed across the ebb and flow of the interviews; and that there were some possible cultural differences in rapport judgments and the constructs contributing to those judgments, notably French observers’ judgments of mutual respect and seriousness. These findings suggested both major similarities and potential differences in judgments of rapport across cultures.
... Other researchers have operationalized expressivity exclusively in terms of how nonverbally expressive people are with their bodies and gestures and how vocally animated and variable their voices appear to others (e.g., Bernieri et al., 1996; Grahe & Bernieri, 1999). This aspect of expressivity is less psychological than the trait assessed by the ACT and focuses instead on the quantity, amplitude, and diversity of an individual's facial expressions, body movements, gestures, and vocalizations. ...
This study compared the effects of attractiveness and expressivity on liking at three important stages in a relationship: (a) at zero acquaintance, (b) after a five-minute getting-to-know-you conversation, and finally (c) after becoming well acquainted with one another. We formed unacquainted groups of participants (N = 81) and, over a period of nine weeks (40+ hours of total contact), had them engage in group activities spanning work, play, eating, and conflict. At zero acquaintance, attractive targets were liked more, a direct replication of prior literature. After the first conversation, this effect was still present. Self-reported expressivity also predicted liking after a five-minute conversation. By nine weeks of acquaintanceship, both self-reported expressivity and observer-rated expressiveness predicted liking in addition to attractiveness. We interpret this finding to suggest that these nonverbal behavioral qualities, which are chronically embedded throughout one's behavioral stream, must be notable, given that their effects on liking remained predictive even after interactants learned about their group members' other characteristics over the course of a relationship.
... Once the first impression is formed, its impact can last for a while and can determine subsequent social interactions, such as hiring decisions and career development [5]. In face-to-face interactions, rich social cues including one's head-body orientation, posture, gaze, tone, outfit, or environmental layout, all influence how people form impressions of each other [6,7,8]. However, remote interaction makes it challenging to form correct impressions due to the ambiguity of assessing remote counterparts' intention and behavior [9]. ...
... We used a 7-point Likert scale for participants to answer twelve survey items. For example: "I feel myself/my counterpart is unintelligent (1) to intelligent (7); incompetent (1) to competent (7)." We averaged the scores of all 12 items for each participant to obtain an overall credibility score. ...
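A minimal sketch of the scoring step just described, assuming hypothetical responses: each participant answers twelve 7-point items, and the overall credibility score is simply their mean.

```python
# Hypothetical responses for one participant on twelve 7-point items.
import numpy as np

responses = np.array([5, 6, 4, 7, 5, 6, 6, 5, 4, 6, 7, 5])
credibility_score = responses.mean()  # overall credibility = mean of the 12 items
print(round(credibility_score, 2))    # 5.5
```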
When communicating over videoconferencing, people selectively manage their online presentation to obtain a good impression by adding filters or changing virtual backgrounds. When applying virtual background, the influence of augmented contextual information in the virtual background on impression formation is unclear yet. Therefore, this study examines whether and how contextual cues in virtual backgrounds affect impression formation in video-mediated communication (VMC). With an online survey (N=64) and controlled experiment (N=58), we demonstrate that contextual information on virtual backgrounds significantly changed the perceived credibility toward remote counterparts from viewers’ viewpoints before having interactions. However, such effect was not found after having interactions, possibly due to the reduced ambiguity about the counterpart after the interaction. Further analysis revealed that people misassociate their unpreferred virtual backgrounds with their counterparts’ credibility. We discuss the possible effects of viewing contextual information in virtual backgrounds on impression formation through VMC.
... Vocal behavior has not been explored in automatically predicting the decisions of investors. Given the superior performance of deep learning-based approaches in several audio classification tasks [17] and the significance of vocal behavior in the decision-making process [14], we propose to utilize deep learning methods to model vocal behavior and predict the decisions of investors. This research is conducted on a dataset including video recordings of individuals performing an entrepreneurial pitch about their start-up business idea. ...
Entrepreneurial pitch competitions have become increasingly popular in the start-up culture to attract prospective investors. As the ultimate funding decision often follows from some form of social interaction, it is important to understand how the decision-making process of investors is influenced by behavioral cues. In this work, we examine whether vocal features are associated with the ultimate funding decision of investors by utilizing deep learning methods. We used videos of individuals in an entrepreneurial pitch competition as input to predict whether investors will invest in the startup or not. We proposed models that combine deep audio features and Handcrafted audio Features (HaF) and feed them into two types of Recurrent Neural Networks (RNN), namely Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU). We also trained the RNNs with only deep features to assess whether HaF provide additional information to the models. Our results show that it is promising to use the vocal behavior of pitchers to predict whether investors will invest in their business idea. Different types of RNNs yielded similar performance, yet the addition of HaF improved the performance.
Keywords: vocal behavior, entrepreneurial decision making, deep learning, VGGish, LSTM, GRU
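As an illustrative sketch of this modeling setup (not the authors' implementation), the PyTorch snippet below concatenates per-frame deep audio embeddings (random stand-ins for VGGish-style vectors) with handcrafted audio features (HaF) and feeds them to a GRU that outputs a logit for the invest / not-invest decision; an LSTM could be swapped in the same way. Dimensions and data are placeholder assumptions.

```python
# Illustrative GRU classifier over concatenated deep + handcrafted audio features.
import torch
import torch.nn as nn


class PitchDecisionGRU(nn.Module):
    def __init__(self, deep_dim=128, haf_dim=32, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(deep_dim + haf_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # logit for "investors will invest"

    def forward(self, deep_feats, haf_feats):
        # deep_feats: (batch, time, deep_dim), haf_feats: (batch, time, haf_dim)
        x = torch.cat([deep_feats, haf_feats], dim=-1)
        _, h = self.rnn(x)                # final hidden state
        return self.head(h[-1]).squeeze(-1)


model = PitchDecisionGRU()
deep = torch.randn(4, 100, 128)  # 4 pitches, 100 audio frames, 128-d embeddings
haf = torch.randn(4, 100, 32)    # matching handcrafted features per frame
probs = torch.sigmoid(model(deep, haf))
print(probs.shape)  # torch.Size([4])
```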
... Social life in complex societies, in which members recognize one another and interpret each other's actions and interactions, comprises a wide spectrum of behaviors, ranging from innate survival mechanisms to abilities relying on sophisticated cognitive processes. Primates exhibit elaborate social behaviors that are indicative of a rich cognitive repertoire: They recognize other conspecifics by their faces and voices (Cheney & Seyfarth 1980, Parr et al. 2000, Sliwa et al. 2011), classify individuals based on their social status and group affiliation (Bergman et al. 2003, Shutts et al. 2013, Silk 1999), rapidly detect and analyze social interactions between conspecifics (Grahe & Bernieri 1999, Isik et al. 2020), and understand others' actions and interactions in terms of underlying mental states, such as intentions or knowledge, that drive them (Call & Tomasello 2008). ...
Primates have evolved diverse cognitive capabilities to navigate their complex social world. To understand how the brain implements critical social cognitive abilities, we describe functional specialization in the domains of face processing, social interaction understanding, and mental state attribution. Systems for face processing are specialized from the level of single cells to populations of neurons within brain regions to hierarchically organized networks that extract and represent abstract social information. Such functional specialization is not confined to the sensorimotor periphery but appears to be a pervasive theme of primate brain organization all the way to the apex regions of cortical hierarchies. Circuits processing social information are juxtaposed with parallel systems involved in processing nonsocial information, suggesting common computations applied to different domains. The emerging picture of the neural basis of social cognition is a set of distinct but interacting subnetworks involved in component processes such as face perception and social reasoning, traversing large parts of the primate brain.
... However, we encourage future research to use the range of behaviors recorded in this study to help establish links between NVC and empathy, farmer satisfaction, and veterinarian advice uptake. Humans process NVB as a gestalt that translates into a global perception of what is being communicated, and research has established links between certain NVB and global perceptions, such as immediacy (Andersen and Andersen, 2004), rapport (Harrigan et al., 1985; Grahe and Bernieri, 1999), attitudes (Mehrabian and Ferris, 1967), and the general affect experienced by one person interacting with another (Pally, 2001). ...
Uptake of advice and the ability to facilitate change on-farm are key elements for successful veterinary practice. However, having the necessary clinical skills and knowledge is not enough to achieve this: effective communication skills are essential for veterinarians to realize their advisory role by exploring and understanding the farmer's worldview. Research of verbal aspects of veterinarian communication supports the use of a relationship-centered communication style; we next need to study how veterinarian-farmer nonverbal communication (NVC) can influence interactions and their outcomes, which has been examined in medical and companion animal practice. In this study, we considered which aspects of NVC should be measured, and how, to provide an essential first step toward understanding the significance of NVC for veterinarians working in dairy practice, which should be of interest to researchers, veterinary educators, and practitioners. Eleven video recordings of routine consultations in the UK were analyzed for farmer and veterinarian NVC. The NVC attributes with established links to positive patient and client outcomes from medical and social science studies were chosen, and a methodology developed for their measurement, by adapting measures typically used in NVC research. Each consultation was segmented into intervals defined by the main activity and location on farm: introduction, fertility examination, discussion, and closing. This approach allowed us to analyze the content more consistently, establish which aspects of NVC featured within each interval, and whether the activity and location influenced the observed NVC. We measured 12 NVC attributes, including body orientation, interpersonal distance, head position, and body lean, which have been shown to influence empathy, rapport, and trust: key components of relationship-centered communication. Future research should seek to establish the significance of NVC in effective communication between veterinarian and farmer, building on our findings that show it is possible to measure nonverbal attributes. Veterinarians may benefit from becoming skilled nonverbal communicators and have more effective conversations during routine consultations, motivating farmers to make changes and improve herd health.