Justine Cassell’s research while affiliated with Carnegie Mellon University and other places


Publications (209)


Growing Up with Artificial Intelligence: Implications for Child Development
  • Chapter
  • Full-text available

December 2024 · 126 Reads · 2 Citations

Ying Xu · [...] · Justine Cassell

Artificial intelligence (AI) technologies have become increasingly integrated into children’s daily lives, influencing learning, social interactions, and creative activities. This chapter provides an overview of key research fields examining children’s learning from, interactions with, and understanding of AI. Current research indicates that AI has the potential to enhance children’s development across multiple domains; however, ethical considerations need to be prioritized. When children engage in learning activities with AI, they may encounter inappropriate, inaccurate, or biased content. Additionally, children’s social interactions with AI may affect their approach to interpersonal interactions. Finally, children’s developing understanding of the world may make them particularly susceptible to attributing human-like properties to AI, undermining their expectations of these technologies. This chapter highlights the importance of future studies focusing on a child-centered design approach, promoting AI literacy, and addressing ethical concerns to fully harness AI’s potential in child development. Recommendations for parents, technology developers, and policymakers are also provided.


Figure: Recapitulation of the methodology.
Bringing together multimodal and multilevel approaches to study the emergence of social bonds between children and improve social AI

May 2024 · 56 Reads · Frontiers in Neuroergonomics

This protocol paper outlines an innovative multimodal and multilevel approach to studying how children build social bonds with their peers as those bonds emerge and evolve, and the approach's potential application to improving social artificial intelligence (AI). We detail a unique hyperscanning experimental framework utilizing functional near-infrared spectroscopy (fNIRS) to observe inter-brain synchrony in child dyads during collaborative tasks and social interactions. Our proposed longitudinal study spans middle childhood, aiming to capture the dynamic development of social connections and cognitive engagement in naturalistic settings. To do so, we bring together four kinds of data: the multimodal conversational behaviors that dyads of children engage in, evidence of their state of interpersonal rapport, collaborative performance on educational tasks, and inter-brain synchrony. Preliminary pilot data provide foundational support for our approach, indicating promising directions for identifying neural patterns associated with productive social interactions. The planned research will explore the neural correlates of social bond formation, informing the creation of a virtual peer learning partner in the field of Social Neuroergonomics. This protocol promises significant contributions to understanding the neural basis of social connectivity in children, while also offering a blueprint for designing empathetic and effective social AI tools, particularly for educational contexts.
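The abstract does not specify the synchrony measure used, but spectral coherence between paired fNIRS channels is one common way to quantify inter-brain synchrony. The sketch below illustrates that idea on synthetic signals; the sampling rate, frequency band, and data are all assumptions, not the study's parameters.

```python
# Illustrative sketch: inter-brain synchrony as spectral coherence between one
# fNIRS channel per child. Synthetic signals stand in for recorded data, and
# the 0.05-0.15 Hz band is an assumed analysis band, not the paper's choice.
import numpy as np
from scipy.signal import coherence

fs = 10.0                              # assumed fNIRS sampling rate (Hz)
t = np.arange(0, 300, 1 / fs)          # five minutes of signal
rng = np.random.default_rng(0)

shared = np.sin(2 * np.pi * 0.08 * t)  # a slow component shared by both children
child_a = shared + 0.5 * rng.standard_normal(t.size)
child_b = shared + 0.5 * rng.standard_normal(t.size)

# Coherence near 1 in a band indicates strongly synchronized activity there.
freqs, coh = coherence(child_a, child_b, fs=fs, nperseg=512)
band = (freqs > 0.05) & (freqs < 0.15)
print(f"Mean coherence in the 0.05-0.15 Hz band: {coh[band].mean():.2f}")
```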


Figure: Certain features have a significant impact on the likelihood of using hedges in tutoring conversations. Rapport has a negative valence, suggesting that higher rapport between the participants results in a lower likelihood of hedges.
When to generate hedges in peer-tutoring interactions

July 2023 · 24 Reads

This paper explores the application of machine learning techniques to predict where hedging occurs in peer-tutoring interactions. The study uses a naturalistic face-to-face dataset annotated for natural language turns, conversational strategies, tutoring strategies, and nonverbal behaviours. These elements are processed into a vector representation of the previous turns, which serves as input to several machine learning models. Results show that embedding layers, which capture the semantic information of the previous turns, significantly improve the model's performance. Additionally, the study provides insights into the importance of various features, such as interpersonal rapport and nonverbal behaviours, in predicting hedges, using Shapley values for feature explanation. We discover that the eye gaze of both the tutor and the tutee has a significant impact on hedge prediction. We further validate this observation through a follow-up ablation study.
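To make the feature-explanation step concrete, here is a minimal sketch that trains a toy turn-level hedge predictor and ranks input features by mean absolute Shapley value with the shap library. The feature names, synthetic data, and model choice are illustrative assumptions, not the paper's pipeline.

```python
# Toy hedge predictor over hand-named turn features, explained with Shapley
# values. Data are synthetic; only the explanation workflow is the point here.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["rapport_score", "tutor_gaze_at_tutee", "tutee_gaze_at_tutor",
                 "prev_turn_was_instruction", "smile_count"]
X = rng.random((500, len(feature_names)))
y = (X[:, 0] < 0.4).astype(int)  # toy rule: hedges more likely at low rapport

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

# Shapley values attribute each prediction to the individual input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
mean_abs = np.abs(shap_values).mean(axis=0)
for name, importance in sorted(zip(feature_names, mean_abs), key=lambda p: -p[1]):
    print(f"{name}: {importance:.3f}")
```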


Figure 1: Hedging in peer tutoring
Figure 2: Reranking method
Table: Performance of each model for the reranking method.
How About Kind of Generating Hedges using End-to-End Neural Models?

June 2023 · 59 Reads

Hedging is a strategy for softening the impact of a statement in conversation. In reducing the strength of an expression, it may help to avoid embarrassment (more technically, "face threat") to one's listener. For this reason, it is often found in contexts of instruction, such as tutoring. In this work, we develop a model of hedge generation based on i) fine-tuning state-of-the-art language models on human-human tutoring data, followed by ii) reranking with a hedge classifier to select, from a candidate pool, the candidate that best matches the expected hedging strategy. We apply this method to a natural peer-tutoring corpus containing a significant number of disfluencies, repetitions, and repairs. The results show that generation in this noisy environment is feasible with reranking. By conducting an error analysis for both approaches, we reveal the challenges faced by systems attempting to accomplish both social and task-oriented goals in conversation.
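As an illustration of the generate-then-rerank scheme described above, the sketch below samples several candidate replies from a dialogue model and keeps the one a hedge classifier scores highest. A public BlenderBot checkpoint stands in for the fine-tuned model, and `my-org/hedge-classifier`, along with its `HEDGE` label, is a hypothetical placeholder, not a released artifact.

```python
# Generate-then-rerank sketch: sample candidates, score each with a hedge
# classifier, return the best match. The classifier checkpoint is hypothetical.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline

gen_name = "facebook/blenderbot-400M-distill"  # stand-in for the fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(gen_name)
generator = AutoModelForSeq2SeqLM.from_pretrained(gen_name)
hedge_clf = pipeline("text-classification", model="my-org/hedge-classifier")

def generate_hedged_reply(history: str, n_candidates: int = 8) -> str:
    inputs = tokenizer(history, return_tensors="pt", truncation=True)
    outputs = generator.generate(**inputs, do_sample=True, top_p=0.9,
                                 num_return_sequences=n_candidates,
                                 max_new_tokens=40)
    candidates = tokenizer.batch_decode(outputs, skip_special_tokens=True)
    # Rerank: prefer the candidate most confidently classified as a hedge.
    scored = [(cand, res["score"] if res["label"] == "HEDGE" else 1 - res["score"])
              for cand, res in zip(candidates, hedge_clf(candidates))]
    return max(scored, key=lambda pair: pair[1])[0]

print(generate_hedged_reply("Tutee: I think the answer is 12. Tutor:"))
```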


"You might think about slightly revising the title": identifying hedges in peer-tutoring interactions

June 2023 · 68 Reads

Hedges play an important role in the management of conversational interaction. In peer tutoring, they are notably used by tutors in dyads (pairs of interlocutors) experiencing low rapport to tone down the impact of instructions and negative feedback. Pursuing the objective of building a tutoring agent that manages rapport with students in order to improve learning, we used a multimodal peer-tutoring dataset to construct a computational framework for identifying hedges. We compared approaches relying on pre-trained resources with others that integrate insights from the social science literature. Our best performance came from a hybrid approach that outperforms the existing baseline while being easier to interpret. We employ a model explainability tool to explore the features that characterize hedges in peer-tutoring conversations, identifying some novel features and the benefits of such a hybrid approach.
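One way to picture such a hybrid is a rule lexicon backed by an interpretable n-gram classifier, as in the sketch below. The cue list, training turns, and model are illustrative assumptions, not the paper's actual features or data.

```python
# Hybrid hedge identifier sketch: an explicit cue lexicon decides first, and an
# interpretable n-gram classifier handles everything the rules miss.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

HEDGE_CUES = ("might", "kind of", "sort of", "maybe", "i guess", "you could")

train_turns = ["you might think about revising the title",
               "kind of move that term to the left side",
               "the answer is twelve",
               "subtract five from both sides"]
train_labels = [1, 1, 0, 0]  # 1 = hedge, 0 = non-hedge

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_turns, train_labels)

def is_hedge(turn: str) -> bool:
    # Rule pass: an explicit cue settles the decision immediately.
    if any(cue in turn.lower() for cue in HEDGE_CUES):
        return True
    # Learned fallback for turns the lexicon does not cover.
    return bool(clf.predict([turn])[0])

print(is_hedge("maybe try factoring first"))  # True via the rule pass
print(is_hedge("compute the determinant"))    # classifier fallback
```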





The three challenges involved in bringing together different strands of research in conversational AI.
Two types of task-oriented system architectures: modular (top) or end-to-end (bottom).
Machine learning approaches in socio-conversational systems. Orange boxes represent the different types of supervision of machine learning models, white boxes the different usages, green boxes the intervention of external knowledge, and blue and purple arrows represent the modular and end-to-end settings, respectively. Dotted arrows indicate when the information comes from labels derived from human knowledge.
Socio-conversational systems: Three challenges at the crossroads of fields

December 2022 · 73 Reads · 6 Citations

Socio-conversational systems are dialogue systems, including what are sometimes referred to as chatbots, vocal assistants, social robots, and embodied conversational agents, that are capable of interacting with humans in a way that treats both the specifically social nature of the interaction and the content of a task. The aim of this paper is twofold: 1) to uncover some places where the compartmentalized nature of research conducted around socio-conversational systems creates problems for the field as a whole, and 2) to propose a way to overcome this compartmentalization and thus strengthen the capabilities of socio-conversational systems by defining common challenges. Specifically, we examine research carried out by the signal processing, natural language processing and dialogue, machine/deep learning, social/affective computing and social sciences communities. We focus on three major challenges for the development of effective socio-conversational systems, and describe ways to tackle them.
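To make the figure's contrast between modular and end-to-end task-oriented architectures concrete, here is a toy sketch of both; every component is a hand-written stand-in for what would be learned modules or a single neural model in practice.

```python
# Toy contrast between the two task-oriented architectures: a modular pipeline
# (understanding -> dialogue management -> generation) versus one end-to-end
# mapping from user text to system text.
from dataclasses import dataclass, field

@dataclass
class Frame:
    intent: str
    slots: dict = field(default_factory=dict)

def nlu(utterance: str) -> Frame:           # module 1: language understanding
    intent = "greet" if "hello" in utterance.lower() else "ask_task"
    return Frame(intent=intent)

def dialogue_manager(frame: Frame) -> str:  # module 2: dialogue policy
    return "greet_back" if frame.intent == "greet" else "offer_help"

def nlg(action: str) -> str:                # module 3: language generation
    return {"greet_back": "Hello!", "offer_help": "How can I help?"}[action]

def modular_system(utterance: str) -> str:
    return nlg(dialogue_manager(nlu(utterance)))

def end_to_end_system(utterance: str) -> str:
    # In practice a single neural model maps text to text directly;
    # a lookup stands in for that mapping here.
    return "Hello!" if "hello" in utterance.lower() else "How can I help?"

print(modular_system("hello there"), "|", end_to_end_system("hello there"))
```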



Citations (80)


... There is an argument for including youth in RAI processes throughout the AI lifecycle "not in spite of [their] age, but specifically because of it," one teen suggested in an interview with Time [9]. Due to youth 1) being early adopters of AI (i.e., among the first users, with ways of using AI different from adults) [42], 2) having experiences growing up with AI-driven systems that are unique to this time of innovation (i.e., adults have had different experiences with technologies and do not have the same insight as current youth) [2,69], and 3) expressing an interest in contributing to AI fairness (i.e., wanting to engage with design and evaluation of AI) [62,64,66,68], youth are a core underexplored stakeholder in participatory RAI. Furthermore, youth have demonstrated great potential to engage in taking action toward more ethical AI. ...

Reference:

Investigating Youth AI Auditing
Growing Up with Artificial Intelligence: Implications for Child Development

... colleagues has shown that it is possible to generate hedges in tutoring dialogues, but not always positioned where they are most probable or useful (Abulimiti et al., 2023a). In future work, we plan experiments using top-performing models such as BERT and GPT-4o in high- and low-probability situations that systematically vary the certainty associated with prompted-for information (where hedges can be most useful). ...

When to generate hedges in peer-tutoring interactions
  • Citing Conference Paper
  • January 2023

... Another approach consists of fine-tuning transformer models such as BERT or BART on annotated dialogue data for the planning task. Thus, in [8], various models are fine-tuned to predict the need for a hedge in the next turn, by taking as input a representation of the dialogue history that includes features such as conversation strategies, tutoring strategies or dialogue acts. This "next utterance hedging" prediction is binary (a turn can be either a hedge or a non-hedge turn). ...

How About Kind of Generating Hedges using End-to-End Neural Models?
  • Citing Conference Paper
  • January 2023

... Furthermore, these studies examine how underrepresentation of identities based on gender, race, and sexuality creates exclusion and cultural inaccessibility for certain groups (Gray & Leonard, 2018; Shaw, 2014; Taylor, 2008). They adopt a critical and comprehensive approach, interpreting reality through the meanings attributed by individuals and analysing how gender influences the creation, consumption, and perception of video games, as well as how feminism can drive a more inclusive and equitable industry (Jenkins & Cassell, 2008). ...

From Quake Grrls to Desperate Housewives: A Decade of Gender and Computer Games
  • Citing Chapter
  • September 2008

... Storytelling applications leverage LLM agents to create immersive and interactive learning experiences [171]. STARie [115], a peer-like embodied conversational agent, integrates multimodal tools such as speech synthesis and facial animation to scaffold children's storytelling, fostering narrative creativity and oral communication skills [17,25]. StoryAgent [174] combines top-down story drafting with bottom-up asset generation to transform simple prompts into coherent, multi-modal digital narratives. ...

Socially Interactive Agents as Peers
  • Citing Chapter
  • November 2022

... The coding of hedges is complicated by the fact that in spoken dialogue, they often co-occur with speech disfluencies. In some contexts, it may be difficult to distinguish these two kinds of signals (Prokofieva and Hirschberg, 2014), particularly since listeners can use disfluencies in much the same way they can use hedges to draw conclusions about the speaker's mental state (Arnold et al., 2003, 2007). A strong motivation for computational work on hedging comes from work on computer-assisted learning by Cassell and colleagues, specifically tutoring dialogues (Abulimiti et al., 2023a,b; Raphalen et al., 2022). Most similar to our work is Raphalen et al. (2022), where the authors propose a model that combines rule-based classifiers and machine learning models with interpretable features such as unigram and bigram counts, part-of-speech tags, and LIWC categories to identify and classify hedge clauses. ...

"You might think about slightly revising the title”: Identifying Hedges in Peer-tutoring Interactions
  • Citing Conference Paper
  • January 2022

... The confidence threshold used for the prediction of the current utterance's labels is 0.7, under which the prediction is not considered viable. In this work, we used a Llama-based model, Beluga, which gave excellent results. But we did not use the later models that came along after we had started the long and thorough human evaluation process. ...

Socio-conversational systems: Three challenges at the crossroads of fields

... I argue this is more important than it may seem and may end up being the difference between a well-intentioned AIED succeeding versus failing. For example, in prior AIED work on multimodal measurements of curiosity to support its facilitation via embodied learning technologies (Sinha et al., 2022), despite obtaining reliable and valid measurements of behavioral indicators researchers value in conversational data, researchers cannot assume that all findings from a human-human data sample apply whole cloth in human-agent interaction. For example, uptake of an agent's conversational moves may be different in quantity and quality compared to conversational moves generated by a human peer. ...

A Novel Multimodal Approach for Studying the Dynamics of Curiosity in Small Group Learning

... For (a), an example was GPT identifying "utterance length" from [82], while the actual metric in the candidate list is worded as "length of utterance". Regarding (b), "transparency" was identified as a metric from [65], while "transparency" is not part of the candidate list. ...

A Model of Social Explanations for a Conversational Movie Recommendation System

... However, these results would not be satisfactory in the case of a dialogue, since additional processing related to user input and more complex response generation would be required. Therefore, optimization techniques would be required, similar to the case described in [33]. ...

Faster Responses Are Better Responses: Introducing Incrementality into Sociable Virtual Personal Assistants

Lecture Notes in Electrical Engineering