Roza Gizem Kamiloglu’s research while affiliated with University of Amsterdam and other places


Publications (36)


More than just Smiling: How 22 Positive Emotions are Spontaneously Expressed on the Face
  • Preprint

February 2025 · 4 Reads

Kunalan Manokara · [...] · Disa Sauter

Thankfully, most people feel good most of the time, and we often spontaneously show our positive feelings to others. Yet our scientific understanding of how we express positive emotions is largely based on posed displays. Here, we empirically test how 22 positive emotions are expressed on the face using the participant-negotiated episodic recall method. European respondents (n = 163) narrated emotional events from their lives (data collected from 2018 to 2020). Frequency analyses of the extracted facial expressions (67,279 datapoints) mostly supported our pre-registered hypotheses for each of the 22 emotions. The results from our confirmatory analyses are illustrated with the aid of animated videos: https://drive.google.com/drive/folders/1pTz_cObpFEYs5hsGUGOPVjUd0nq0Acgt?usp=drive_link. We also applied network models to exploratorily probe whether discrete positive emotions would be associated with specific patterns of facial behaviors. Consistent patterns of facial expressions were found for 19 positive emotions; some were simple (e.g., hope, interest), while others were highly complex and involved multiple facial actions (e.g., amusement, triumph). Together, these results provide a tentative map of what specific positive emotions look like when spontaneously shown on the face. Importantly, none of the positive emotions were expressed by smiling alone, demonstrating that people show that they feel good in a wide range of qualitatively different ways.
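
As a rough illustration of the kind of exploratory network analysis mentioned in this abstract, the sketch below computes per-emotion facial-action frequency profiles and a simple correlation network among facial action units. It is a minimal sketch, not the authors' pipeline; the toy data table, AU names, and emotion labels are hypothetical stand-ins.

```python
# Illustrative sketch only (not the authors' pipeline): profile facial action units (AUs)
# per emotion and build a simple co-occurrence network among AUs within one emotion.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
aus = ["AU6", "AU12", "AU25", "AU26", "AU1"]          # hypothetical AU set
emotions = ["amusement", "hope", "interest", "triumph"]

# Toy frame-level data: one row per extracted frame, with binary AU activations.
frames = pd.DataFrame({
    "emotion": rng.choice(emotions, size=2000),
    **{au: rng.integers(0, 2, size=2000) for au in aus},
})

# Frequency profiles: proportion of frames showing each AU, per emotion.
profiles = frames.groupby("emotion")[aus].mean()
print(profiles.round(2))

# Exploratory association network among AUs within one emotion:
# pairwise phi coefficients (Pearson correlations of binary activations).
amusement = frames.loc[frames["emotion"] == "amusement", aus]
network = amusement.corr()
upper = network.where(np.triu(np.ones_like(network, dtype=bool), k=1))
print(upper.stack().sort_values(ascending=False).head())  # strongest candidate edges
```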


Emotional responses to music are shaped by acoustic features, individual differences, and contextual factors

December 2024 · 10 Reads

Listening to and making music is often a distinctly emotional experience. We hear emotions when we listen to music, and music also gives rise to emotional reactions. In recent decades, the relationship between music and emotion has become the focus of psychological research. This chapter introduces psychological perspectives on this topic. We discuss how and why people might respond to music emotionally, and the implications for cross-cultural universality, as well as individual differences. We end by outlining three promising avenues for future research: (1) investigating individual differences in emotional responses to music, (2) leveraging citizen science to improve estimates of cross-cultural similarities, and (3) broadening our understanding of the diverse contexts in which emotional responses to music occur, through naturalistic data collection methods. The growing literature and new advances offer valuable insights into the individual, cultural, and contextual nuances that shape how music moves us.


Tickling induces a unique type of spontaneous laughter
  • Article
  • Full-text available

November 2024 · 47 Reads

Laughing is ubiquitous in human life, yet what causes it and how it sounds is highly variable. Considering this diversity, we sought to test whether there are fundamentally different kinds of laughter. Here, we sampled spontaneous laughs (n = 887) from a wide range of everyday situations (e.g. comedic performances and playful pranks). Machine learning analyses showed that laughs produced during tickling are acoustically distinct from laughs triggered by other kinds of events (verbal jokes, watching something funny or witnessing someone else’s misfortune). In a listening experiment (n = 201), participants could accurately identify tickling-induced laughter, validating that such laughter is not only acoustically but also perceptually distinct. A second listening study (n = 210) combined with acoustic analyses indicates that tickling-induced laughter involves less vocal control than laughter produced in other contexts. Together, our results reveal a unique acoustic and perceptual profile of laughter induced by tickling, an evolutionarily ancient play behaviour, distinguishing it clearly from laughter caused by other triggers. This study showcases the power of machine learning in uncovering patterns within complex behavioural phenomena, providing a window into the evolutionary significance of tickling-induced laughter.
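
For readers curious what the machine-learning step might look like in practice, here is a minimal sketch of a context classifier trained on precomputed acoustic features. It is not the authors' code; the file name, feature columns, and label values are hypothetical assumptions.

```python
# Minimal sketch (not the authors' code): classify the eliciting context of a laugh
# from precomputed acoustic features (e.g., pitch, duration, spectral measures).
# Assumes a hypothetical CSV with numeric feature columns plus a "context" label
# such as "tickling", "joke", "funny_video", or "misfortune".
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

df = pd.read_csv("laugh_acoustics.csv")  # hypothetical feature table
X = df.drop(columns=["context"])
y = df["context"]

clf = RandomForestClassifier(n_estimators=500, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="balanced_accuracy")
print(f"Cross-validated balanced accuracy: {scores.mean():.2f} ± {scores.std():.2f}")

# Inspect which acoustic features best separate tickling-induced laughter from the rest.
clf.fit(X, y)
importances = pd.Series(clf.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))
```

A classifier that distinguishes the tickling class well above chance, under a scheme like this, would correspond to the acoustic distinctiveness reported in the abstract.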


Emotional responses to music are shaped by acoustic features, individual differences, and contextual factors

October 2024 · 17 Reads

Listening to and making music is often a distinctly emotional experience. We hear emotions when we listen to music, and music also gives rise to emotional reactions. In recent decades, the relationship between music and emotion has become the focus of psychological research. This chapter introduces psychological perspectives on this topic. We discuss how and why people might respond to music emotionally, and the implications for cross-cultural universality, as well as individual differences. We end by outlining three promising avenues for future research: (1) investigating individual differences in emotional responses to music, (2) leveraging citizen science to improve estimates of cross-cultural similarities, and (3) broadening our understanding of the diverse contexts in which emotional responses to music occur, through naturalistic data collection methods. The growing literature and new advances offer valuable insights into the individual, cultural, and contextual nuances that shape how music moves us.


What Makes Us Feel Good? A Data-Driven Investigation of Positive Emotion Experience

September 2024 · 15 Reads

Emotion

What does it mean to feel good? Is our experience of gazing in awe at a majestic mountain fundamentally different from erupting with triumph when our favorite team wins the championship? Here, we use a semantic space approach to test which positive emotional experiences are distinct from each other based on in-depth personal narratives of experiences involving 22 positive emotions (n = 165; 3,592 emotional events). A bottom-up computational analysis was applied to the transcribed text, with unsupervised clustering employed to maximize internal granular consistency (i.e., the clusters being maximally different and maximally internally homogeneous). The analysis yielded four emotions that map onto distinct clusters of subjective experiences: amusement, interest, lust, and tenderness. The application of the semantic space approach to in-depth personal accounts yields a nuanced understanding of positive emotional experiences. Moreover, this analytical method allows for the bottom-up development of emotion taxonomies, showcasing its potential for broader applications in the study of subjective experiences.
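
As a rough illustration of a semantic-space clustering workflow of the kind described (not the authors' exact method), the sketch below embeds narrative texts with TF-IDF and picks the number of clusters with the best internal consistency, here measured by silhouette score; the toy narratives are stand-ins for the transcribed emotional events.

```python
# Rough sketch of a bottom-up semantic clustering workflow (not the authors' exact method):
# embed transcribed narratives, cluster them, and keep the solution with the best
# internal consistency (here, silhouette score). The toy texts below are stand-ins.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import silhouette_score

narratives = [
    "we could not stop laughing at the absurd pun",
    "i was captivated and wanted to learn everything about it",
    "holding the newborn felt incredibly warm and tender",
    "the kiss made my heart race with desire",
] * 50  # toy stand-ins for the 3,592 transcribed emotional events

X = TfidfVectorizer(stop_words="english").fit_transform(narratives)

best_k, best_score = None, -1.0
for k in range(2, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score = k, score

print(f"Selected {best_k} clusters (silhouette = {best_score:.2f})")
```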


Figures:
  • Distribution of perceived appropriateness ratings for the ten nonverbal vocalisations (scale: 1 = very inappropriate to 9 = very appropriate)
  • Distributions of appropriateness ratings in private vs. public settings and for close vs. not-close relationships
  • Differential perceived appropriateness of vocalisations with effect sizes (Cohen’s d, 95% CIs) by location and closeness
  • Radar charts comparing display rule strength by location (private vs. public) and interpersonal closeness (close vs. not close)
  • Cross-cultural evaluations of nonverbal vocalisations across China, the Netherlands, Türkiye, and the U.S.

When to Laugh, When to Cry: Display Rules of Nonverbal Vocalisations Across Four Cultures

September 2024 · 87 Reads · 1 Citation

Journal of Nonverbal Behavior

Nonverbal vocalisations like laughter, sighs, and groans are a fundamental part of everyday communication. Yet surprisingly little is known about the social norms concerning which vocalisations are considered appropriate to express in which context (i.e., display rules). Here, in two pre-registered studies, we investigate how people evaluate the appropriateness of different nonverbal vocalisations across locations and relationships with listeners. Study 1, with a U.S. sample (n = 250), showed that certain vocalisations (e.g., laughter, sighs, cries) are consistently viewed as more socially acceptable than others (e.g., roars, groans, moans). Additionally, location (private vs. public) and interpersonal closeness (close vs. not close) significantly influenced these perceptions, with private locations and close relationships fostering greater expressive freedom. Study 2 extended this investigation across four societies with divergent cultural norms (n = 1120 in total): the U.S. (for direct replication), Türkiye, China, and the Netherlands. Findings largely replicated those from Study 1 and supported the existence of cross-culturally consistent patterns in display rules for nonverbal vocalisations, though with some variation across cultures. This research expands our understanding of how social norms affect auditory communication, extending beyond the visual modality of facial expressions to encompass the rich world of nonverbal vocalisations.
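
As a worked example of the kind of effect size reported for these comparisons (Cohen’s d with 95% confidence intervals, as in the figure summaries above), the sketch below computes a paired-design d with a bootstrap interval. The ratings are simulated and this is not the authors' analysis script.

```python
# Worked example (not the authors' analysis script): effect size for one display-rule
# comparison, e.g., how appropriate laughter is rated in private vs. public settings.
# The 1-9 ratings below are simulated; a real analysis would use the survey data.
import numpy as np

rng = np.random.default_rng(1)
private = rng.integers(5, 10, size=250).astype(float)  # toy ratings on the 1-9 scale
public = rng.integers(3, 9, size=250).astype(float)

diff = private - public
d_z = diff.mean() / diff.std(ddof=1)  # Cohen's d for paired ratings (d_z variant)

# Bootstrap 95% confidence interval for the effect size.
boot = []
for _ in range(5000):
    resample = rng.choice(diff, size=diff.size, replace=True)
    boot.append(resample.mean() / resample.std(ddof=1))
low, high = np.percentile(boot, [2.5, 97.5])
print(f"d_z = {d_z:.2f}, 95% CI [{low:.2f}, {high:.2f}]")
```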


Voices Without Words: The Spectrum of Nonverbal Vocalisations

June 2024 · 8 Reads

Nonverbal vocalisations are a fundamental part of human life. This review uses Tinbergen’s ethological framework to examine the functions, ontogenetic trajectories, evolutionary history, and underlying mechanisms of five types of vocalisations: cries, laughter, moans, screams, and sighs. By integrating insights from evolutionary biology and social psychology, we demonstrate how biological functions and social factors shape these vocalisations. The review discusses the relationship between acoustic properties and functions, highlighting the diverse roles vocalisations can play, including emotion regulation, social bonding, threat signalling, and fostering group cohesion. Tracing the development of vocalisations from infancy to adulthood highlights the role of innate tendencies while delineating processes of social learning. We additionally examine how social context and cultural norms influence vocalisations and their interpretation. For each vocalisation type, we map its distinct nature and communicative potential: cries are crucial for survival and caregiving responses; laughter fosters social bonding and group cohesion; moans convey a wide range of internal states from pleasure to discomfort; screams serve as urgent alarms in critical situations; and sighs regulate emotions and signal shifts in emotional states. This review emphasises that both biological and social factors must be considered to fully understand nonverbal vocalisations.


Emotions across Cultures

May 2024 · 14 Reads · 1 Citation

This book provides a cutting-edge overview of emotion science from an evolutionary perspective. Part 1 outlines different ways of approaching the study of emotion; Part 2 covers specific emotions from an evolutionary perspective; Part 3 discusses the role of emotions in a variety of life domains; and Part 4 explores the relationship between emotions and psychological disorders. Experts from a number of different disciplines—psychology, biology, anthropology, psychiatry, and more—tackle a variety of “how” (proximate) and “why” (ultimate) questions about the function of emotions in humans and nonhuman animals, how emotions work, and their place in human life. This volume documents the explosion of knowledge in emotion science over the last few decades, outlines important areas of future research, and highlights key questions that have yet to be answered.


When to Laugh, When to Cry: Display Rules of Nonverbal Vocalisations

April 2024 · 17 Reads

Nonverbal vocalisations like laughter, sighs, and groans are a fundamental part of everyday communication. Yet surprisingly little is known about the social norms concerning which vocalisations are considered appropriate to express in which context (i.e., display rules). Here, in two pre-registered studies, we investigate how people evaluate the appropriateness of different nonverbal vocalisations across locations and relationships with listeners. Study 1, with a U.S. sample (n = 250), showed that certain vocalisations (e.g., laughter, sighs, cries) are consistently viewed as more socially acceptable than others (e.g., roars, groans, moans). Additionally, location (private vs. public) and interpersonal closeness (close vs. not close) significantly influenced these perceptions, with private locations and close relationships fostering greater expressive freedom. Study 2 extended this investigation into four cultural contexts (n = 1120 in total): the U.S. (for direct replication), Türkiye, China, and the Netherlands. Findings largely replicated those from Study 1 and supported the existence of cross-culturally consistent patterns in display rules for nonverbal vocalisations, though with some variation across cultures. This research expands our understanding of how social norms affect auditory communication, extending beyond the visual modality of facial expressions to encompass the rich world of nonverbal vocalisations.


Tickling Induces a Unique Type of Spontaneous Laughter

January 2024 · 10 Reads

Laughing is ubiquitous in human life, yet what causes it and how it sounds is highly variable. Considering this diversity, we sought to test whether there are fundamentally different kinds of laughter. Here, we sampled spontaneous laughs (n = 887) from a wide range of everyday situations (for example, comedic performances and playful pranks). Machine learning analyses showed that laughs produced during tickling are acoustically distinct from laughs triggered by other kinds of events (verbal jokes, watching something funny, or witnessing someone else’s misfortune). In a listening experiment (n = 201), participants could accurately identify tickling-induced laughter, validating that such laughter is not only acoustically but also perceptually distinct. A second listening study (n = 210) combined with acoustic analyses indicates that tickling-induced laughter involves less vocal control than laughter produced in other contexts. Together, our results reveal a unique acoustic and perceptual profile of laughter induced by tickling, an evolutionarily ancient play behaviour, distinguishing it clearly from laughter caused by other triggers. This study showcases the power of machine learning in uncovering patterns within complex behavioural phenomena, providing a window into the evolutionary significance of tickling-induced laughter.


Citations (19)


... The first two papers of the special issue shed light on the ability of the vocal channel to communicate information about affective states and the moderators of emotional expression. Previous work has demonstrated that vocal bursts communicate 24 distinct emotions (Cowen et al., 2019), but the display rules of nonverbal vocalizations (such as sighs, groans, and laughter) had heretofore been unexplored (Kamiloğlu et al., 2024). Based on two preregistered studies, the second of which was a large-scale comparison of four cultures, Kamiloğlu et al. (2024) showed that certain vocalizations such as laughter, sighs, and cries were consistently rated as more socially appropriate than others, including roars, growls and moans. ...

Reference:

Introduction to the Special Issue on Innovations in Vocal Research: Insights from Emotion, Personality Perception, Eyewitness Accuracy and Persuasion
When to Laugh, When to Cry: Display Rules of Nonverbal Vocalisations Across Four Cultures

Journal of Nonverbal Behavior

... An alternative approach is to work bottom-up, using discovery-oriented methods of generating attentional themes based on the co-occurrences of words and phrases (Schwartz et al., 2013). This approach, broadly referred to as topic modeling (Günther et al., 2019; Kamiloglu et al., 2023; Wilson et al., 2016), captures the distributions that naturally occur in spontaneous language use, offering flexible descriptions of the themes that emerge in context. Themes extracted from the everyday speech of English-speaking university students in the US showed, for instance, that talking about entertainment (e.g., "game," "play," "watching") coincided with more pleasant experiences, while talking about assignments (e.g., "test," "study," "class") coincided with less pleasant experiences (Sun et al., 2019). ...

What Makes Us Feel Good? A Data-driven Investigation of Positive Emotion Experience
  • Citing Preprint
  • February 2023

... Naive research assistants were tasked with searching for videos of laughter, adhering to three strict inclusion criteria: (i) the presence of a clearly audible laugh, (ii) a clear and unambiguous eliciting real-life situation, and (iii) only one person vocalizing. To ensure the spontaneity of the laughter events, we adopted a selection process [16,17]. This approach prioritized videos capturing sudden events, minimizing the chance of posed or managed laughter. ...

Sounds like a fight: listeners can infer behavioural contexts from spontaneous nonverbal vocalisations

... Further research confirmed the key role of the five emotions just mentioned and showed that people's emotional experiences precede changes in their level of well-being. One effective intervention during the pandemic (including periods of prolonged stress) could therefore be cultivating calm and hope in ordinary people (Sun et al., 2024). ...

Emotional Experiences and Psychological Well-Being in 51 Countries During the COVID-19 Pandemic

Emotion

... Psychological impacts included fear and uncertainty (Ghazawy et al., 2020), somatic disturbances and eating disorders (Omar et al., 2021), depression, anxiety, loneliness and stress (Arafa, Mohamed, Saleh & Senosy, 2021). Many went through cycles of hourly news monitoring and anxiety, with no emotional support, and no access to counselling (Sun et al., 2020). Female students experienced a lack of support from family, friends, social institutions, and an increased incidence of family violence (AboKresha, Abdelkreem & Ali, 2021). ...

Psychological Wellbeing During the Global COVID-19 Outbreak

SSRN Electronic Journal

... Attempts to differentiate between different kinds of laughter have focused on perceptual features, that is, what distinctions listeners can make. Using laughter recorded in tightly controlled laboratory settings, researchers have found that listeners can tell whether laughter occurs between friends or strangers [5] and whether the laughing person is from their own cultural group or not [6]; perceivers can categorize laughter in terms of emotion states like joyful and mocking [7,8]; and distinguish spontaneous (i.e. genuine, involuntary) from volitional (i.e. ...

Perception of group membership from spontaneous and volitional laughter

... As such, it is easier for perceivers to correctly identify happiness than it is to distinguish any of the other (negative) emotions. In recent years, there has been increasing interest in differentiating discrete positive emotional states (e.g., Chin et al., 2023; Kamiloğlu et al., 2021; Shiota et al., 2014). The relative recognizability of negative and positive emotions can only be properly compared when using a balanced set of emotions. ...

Superior Communication of Positive Emotions Through Nonverbal Vocalisations Compared to Speech Prosody

Journal of Nonverbal Behavior

... Though we identified a few appraisal dimensions that differed between the two cultures on some emotions, the overall appraisal patterns of the positive emotions we examined were quite culturally consistent. Our findings are consistent with the notion that evolved psychological mechanisms result in cultural differences instantiated as variations on common themes, thus emphasizing both preparedness and learning in emotion processes (Kamiloglu et al., 2021). ...

Emotions Across Cultures

... This research has shown that listeners can match infant vocalisations to production contexts like requesting food and giving an object (Kersken et al., 2017), and parents can infer contexts like interaction with the caregiver (play) and satisfaction after feeding from vocalisations of infants (Lindová et al., 2015). Human listeners can also accurately infer situational information from vocalisations of other species, including domestic piglets (Tallet et al., 2010), dogs (Pongrácz et al., 2005; Silva et al., 2021), cats (Nicastro & Owren, 2003), macaques (Linnankoski et al., 1994), and chimpanzees (Kamiloğlu et al., 2020). For instance, listeners can accurately judge from barks whether dogs were alone in a park, playing with their owner, or preparing to go for a walk (Pongrácz et al., 2005). ...

Human listeners’ perception of behavioural context and core affect dimensions in chimpanzee vocalizations