Conference Paper

ClassMeta: Designing Interactive Virtual Classmate to Promote VR Classroom Participation


... Programming [20], [36], [66], [73], [147]; Decision-making, sense-making, data analysis [21], [55], [89], [111], [123], [138], [145], [149], [152]; Education [19], [23], [60], [81], [91], [151]; Design/idea/prototype explorations [25], [31], [76], [90], [137], [139], [153], [154]. When using this strategy, it is important that debates and discussions focus on presenting different perspectives and ideas rather than simply disagreeing with or asking the user to justify their opinion. ...
... The user can review these demonstrations and construct their own opinions by integrating what they have observed with their own reading. Liu et al. create a classmate AI agent in a virtual reality classroom that plays the role of an active student [91]. Students in the same virtual classroom observe its behavior, which can stimulate their active class engagement. ...
... The effect of such social roles on user outcomes needs further examination through empirical studies. For example, students may benefit more from interacting with extraheric AI agents playing the role of their peers rather than teachers, as Liu et al. demonstrated that an AI agent that simulates an active student peer in a virtual classroom can promote students' class participation [91]. Future research on extraheric AI should consider not only the interaction strategies by which AI can promote higher-order thinking skills, but also how the social roles of AI agents can enhance or degrade this process. ...
Preprint
Full-text available
As artificial intelligence (AI) technologies, including generative AI, continue to evolve, concerns have arisen about over-reliance on AI, which may lead to human deskilling and diminished cognitive engagement. Over-reliance on AI can also lead users to accept information given by AI without critical examination, causing negative consequences such as being misled by hallucinated content. This paper introduces extraheric AI, a human-AI interaction conceptual framework that fosters users' higher-order thinking skills, such as creativity, critical thinking, and problem-solving, during task completion. Unlike existing human-AI interaction designs, which replace or augment human cognition, extraheric AI fosters cognitive engagement by posing questions or providing alternative perspectives to users, rather than direct answers. We discuss interaction strategies, evaluation methods aligned with cognitive load theory and Bloom's taxonomy, and future research directions to ensure that human cognitive skills remain a crucial element in AI-integrated environments, promoting a balanced partnership between humans and AI.
... In recent years, the rapid development of large language models (LLMs) has driven researchers to actively explore their potential impact on education and learning, gradually integrating them into various aspects of the field. These explorations encompass areas such as programming tutoring [62,63,81], children's education [34,54,73,104], STEM education [17,18], enhancing writing [36,46,69,92], language learning [56], reading assistance [40,101], classroom interaction [22,61], teaching support [89], communication learning [59,83], and creative design [19,41,53]. Based on these works, the roles of LLMs in education can be broadly categorized into three main types. ...
... However, much of the aforementioned research [36,40,62,104] focuses on the interaction between LLM systems and individual users, with relatively little attention given to multi-learner collaborative learning. A few examples explore LLMs in group conversation [58], group brainstorming [102], teamwork [106], group decision making [20], and classroom simulation [61]. For example, Liu et al. explored the role of LLM agents in children's collaborative learning, showing that these agents can effectively moderate discussions and foster creative thinking [58]. ...
Preprint
Classroom debates are a unique form of collaborative learning characterized by fast-paced, high-intensity interactions that foster critical thinking and teamwork. Despite the recognized importance of debates, the role of AI tools, particularly LLM-based systems, in supporting this dynamic learning environment has been under-explored in HCI. This study addresses this opportunity by investigating the integration of LLM-based AI into real-time classroom debates. Over four weeks, 22 students in a Design History course participated in three rounds of debates with support from ChatGPT. The findings reveal how learners prompted the AI to offer insights, collaboratively processed its outputs, and divided labor in team-AI interactions. The study also surfaces key advantages of AI usage (reducing social anxiety, breaking communication barriers, and providing scaffolding for novices) alongside risks (such as information overload and cognitive dependency) that could limit learners' autonomy. We then discuss a set of nuanced implications for future HCI exploration.
... Furthermore, the development of ClassInSight [40] represents a comprehensive approach to analyzing classroom dynamics by integrating sensor data, giving teachers detailed insights into their instructional practices. Liu et al. [41] introduced ClassMeta, which adopts a data-driven framework combining various educational metrics to generate actionable insights for improving teaching effectiveness. These systems demonstrate various technological methods aimed at enhancing classroom practice, from nonverbal analysis through behavior monitoring to classroom interaction analysis driven by natural language. ...
Article
Full-text available
Classroom dialogue analysis is crucial as it significantly impacts both knowledge transmission and teacher–student interactions. Since the inception of classroom analysis research, traditional methods such as manual transcription and coding have served as foundational tools for understanding these interactions. While precise and insightful, these methods are inherently time-consuming, labor-intensive, and susceptible to human bias. Moreover, they struggle to handle the scale and complexity of modern classroom data effectively. In contrast, many contemporary deep learning approaches focus primarily on dialogue classification, but often lack the capability to provide deeper interpretative insights. To address these challenges, this study introduces an automated dialogue analysis system that combines scalability, efficiency, and objectivity in evaluating teaching quality. We first collected a large dataset of classroom recordings from primary and secondary schools in China and manually annotated the dialogues using multiple encoding frameworks. Based on these data, we developed an automated analysis system featuring a novel dialogue classification algorithm that incorporates speaker role information for more accurate insights. Additionally, we implemented innovative visualization techniques to automatically generate comprehensive classroom analysis reports, effectively bridging the gap between traditional manual methods and modern automated approaches. Experimental results demonstrated the system’s high accuracy in distinguishing various types of classroom dialogue. Large-scale analysis revealed key patterns in classroom dynamics, showcasing the strong potential of our system to enhance teaching evaluation and provide valuable insights for improving education practices.
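The speaker-role idea lends itself to a simple illustration. The sketch below is not the authors' classification algorithm; it is a minimal, assumed baseline showing one way to fold speaker-role information into dialogue-move classification by prefixing each utterance with a role token before vectorizing it. The labels and utterances are invented toy data.

```python
# Minimal sketch (not the authors' system): incorporate speaker role by
# prepending a role token to each utterance, then train a simple classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy, hand-written examples; real systems train on annotated classroom corpora.
utterances = [
    ("teacher", "Can anyone explain why the experiment failed?", "open_question"),
    ("student", "Maybe the temperature was too high.", "explanation"),
    ("teacher", "Good. What evidence supports that?", "follow_up"),
    ("student", "The readings spiked after we turned on the heater.", "evidence"),
]

X = [f"[{role}] {text}" for role, text, _ in utterances]  # role-prefixed input
y = [label for _, _, label in utterances]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(X, y)

print(clf.predict(["[student] I think the sensor was broken."]))
```

In practice the role prefix simply gives the model an explicit feature distinguishing teacher moves from student moves, which is the intuition behind role-aware dialogue classification.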
... However, most of the existing tutor systems for group discussion are limited in scope and lack substantial pedagogical depth. Some tutor systems incorporating VR technology and LLMs were constrained in their functionality, as the agents primarily addressed limited classroom actions such as note-taking, posing questions, and interacting with students (Kim et al., 2024;Liu et al., 2024). To further enhance the capabilities of LLM-based tutor agents, Mao et al. (2024) developed a multi-user discussion assistant designed to increase user engagement in group discussions; however, this system does not emphasize the educational role of the agent. ...
Article
Aiming at improving collaborative learning, the current study builds and tests an LLM-empowered agent to enhance student engagement in group discussion. We introduce a four-module conversational system, providing a user-friendly chat website integrated with an LLM agent, where students can discuss and learn about a specific topic in an online classroom. The LLM agent can continuously monitor the dialogue process and give constructive and reflective responses as a knowledgeable learning peer to engage students in the computer-supported collaborative learning (CSCL) environment. To evaluate the pedagogical performance of the system, three LLMs were tested by prompting. The results showed that LLMs with only prompting were unable to accurately process multi-user dialogue information and lacked pedagogical strategies in their responses.
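As a rough illustration of how such a peer agent could be wired up, the following is a hedged sketch, not the paper's four-module system; the model name, system prompt, and stall-detection heuristic are assumptions introduced purely for the example.

```python
# Illustrative sketch only (not the paper's system): an LLM is prompted to act
# as a knowledgeable learning peer and is invoked when the discussion stalls.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PEER_PROMPT = (
    "You are a knowledgeable learning peer in a small-group discussion. "
    "Do not lecture or give final answers. Ask one reflective question or "
    "offer one constructive observation that re-engages quieter students."
)

def peer_response(dialogue_history: list[dict]) -> str:
    """dialogue_history: [{'speaker': 'Alice', 'text': '...'}, ...]"""
    transcript = "\n".join(f"{t['speaker']}: {t['text']}" for t in dialogue_history)
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model would do
        messages=[
            {"role": "system", "content": PEER_PROMPT},
            {"role": "user", "content": f"Discussion so far:\n{transcript}\n\nRespond as the peer."},
        ],
    )
    return reply.choices[0].message.content

def discussion_stalled(history: list[dict], window: int = 4) -> bool:
    """Assumed trigger: several recent turns with no questions being asked."""
    recent = history[-window:]
    return len(recent) == window and not any("?" in t["text"] for t in recent)
```

The key design point the abstract highlights is that prompting alone was insufficient; a real system would wrap such a call in additional modules for tracking multi-user dialogue state and selecting pedagogical strategies.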
... To this end, Bozkir et al. [38] argued for integrating LLMs into XR, emphasizing their potential for enhancing inclusion and engagement while raising concerns about the privacy of voice-enabled interactions. In another work, Liu et al. [39] presented ClassMeta, LLM-driven interactive virtual classmates, in which the system uses voice commands to encourage student participation in virtual classrooms. The authors demonstrated the capabilities of LLMs in creating dynamic and interactive learning environments that simulate peer interactions. ...
Preprint
Full-text available
Recent developments in computer graphics, machine learning, and sensor technologies enable numerous opportunities for extended reality (XR) setups for everyday life, from skills training to entertainment. With large corporations offering consumer-grade head-mounted displays (HMDs) at affordable prices, it is likely that XR will become pervasive, and HMDs will develop as personal devices like smartphones and tablets. However, having intelligent spaces and naturalistic interactions in XR is as important as technological advances so that users grow their engagement in virtual and augmented spaces. To this end, large language model (LLM)-powered non-player characters (NPCs) with speech-to-text (STT) and text-to-speech (TTS) models bring significant advantages over conventional or pre-scripted NPCs for facilitating more natural conversational user interfaces (CUIs) in XR. In this paper, we provide the community with an open-source, customizable, extensible, and privacy-aware Unity package, CUIfy, that facilitates speech-based NPC-user interaction with various LLMs, STT, and TTS models. Our package also supports multiple LLM-powered NPCs per environment and minimizes the latency between different computational models through streaming to achieve usable interactions between users and NPCs. We publish our source code in the following repository: https://gitlab.lrz.de/hctl/cuify
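For readers unfamiliar with such pipelines, the following is a minimal sketch of the STT-LLM-TTS loop the abstract describes. It is not the CUIfy package or its API; the three stage functions are stand-in stubs so the control flow is runnable on its own.

```python
# Minimal NPC pipeline sketch (not CUIfy): transcribe user speech, generate a
# reply with an LLM, and synthesize the reply for the NPC avatar.
import time

def speech_to_text(audio_chunk: bytes) -> str:
    """Placeholder STT stage (a Whisper-style model in a real system)."""
    return "Hello, can you show me around the museum?"

def llm_reply(npc_persona: str, user_text: str) -> str:
    """Placeholder LLM stage; a real system would stream tokens to cut latency."""
    return f"({npc_persona}) Of course! Follow me to the first exhibit."

def text_to_speech(text: str) -> bytes:
    """Placeholder TTS stage returning synthesized audio for playback."""
    return text.encode("utf-8")

def npc_turn(audio_chunk: bytes, persona: str = "friendly museum guide") -> bytes:
    start = time.perf_counter()
    user_text = speech_to_text(audio_chunk)      # 1. transcribe the user's speech
    reply_text = llm_reply(persona, user_text)   # 2. generate the NPC's reply
    npc_audio = text_to_speech(reply_text)       # 3. synthesize speech for the NPC
    print(f"turn latency: {time.perf_counter() - start:.3f}s")
    return npc_audio

if __name__ == "__main__":
    npc_turn(b"\x00\x01")  # dummy audio bytes
```

Streaming partial LLM output into the TTS stage, as the package abstract mentions, is what keeps the per-turn latency low enough for conversational use.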
... Additionally, gamification is a well-established approach to enhance engagement [14,29,40,64]. Most recently, the use of LLM-agents as peers in VR classrooms has been shown to promote engagement and classroom participation [49]. Design Implication 5: Use Open and Scalable VR Platforms for Sustained Long-Term Development and Studies. ...
Preprint
Full-text available
Many people struggle with learning a new language, with traditional tools falling short in providing contextualized learning tailored to each learner's needs. The recent development of large language models (LLMs) and embodied conversational agents (ECAs) in social virtual reality (VR) provides new opportunities to practice language learning in a contextualized and naturalistic way that takes into account the learner's language level and needs. To explore this opportunity, we developed ELLMA-T, an ECA that leverages an LLM (GPT-4) and a situated learning framework for supporting English language learning in social VR (VRChat). Drawing on qualitative interviews (N=12), we reveal the potential of ELLMA-T to generate realistic, believable, and context-specific role plays for agent-learner interaction in VR, and the LLM's capability to provide initial language assessment and continuous feedback to learners. We provide five design implications for the future development of LLM-based language agents in social VR.
Conference Paper
Full-text available
Modern large language models (LLMs) have demonstrated impressive capabilities at sophisticated tasks, often through step-by-step reasoning similar to humans. This is made possible by their strong few- and zero-shot abilities: they can effectively learn from a handful of handcrafted, completed responses ("in-context examples"), or can be prompted to reason spontaneously through specially designed triggers. Nonetheless, some limitations have been observed. First, performance in the few-shot setting is sensitive to the choice of examples, whose design requires significant human effort. Moreover, given the diverse downstream tasks of LLMs, it may be difficult or laborious to handcraft per-task labels. Second, while the zero-shot setting does not require handcrafting, its performance is limited due to the lack of guidance to the LLMs. To address these limitations, we propose Consistency-based Self-adaptive Prompting (COSP), a novel prompt design method for LLMs. Requiring neither handcrafted responses nor ground-truth labels, COSP selects and builds the set of examples from the LLM zero-shot outputs via carefully designed criteria that combine consistency, diversity and repetition. In the zero-shot setting for three different LLMs, we show that using only LLM predictions, COSP improves performance up to 15% compared to zero-shot baselines and matches or exceeds few-shot baselines for a range of reasoning tasks.
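A heavily simplified sketch of the consistency-based selection idea is shown below. It is not COSP's exact procedure, which also weighs diversity and repetition; the sample_answer callable standing in for a zero-shot LLM call is an assumption made for the example.

```python
# Simplified sketch of consistency-based example selection (inspired by COSP,
# not the authors' exact criteria): sample several zero-shot answers per
# question, keep the questions whose sampled answers agree most, and reuse
# those question-answer pairs as in-context examples for later prompts.
from collections import Counter
from typing import Callable

def select_pseudo_demos(
    questions: list[str],
    sample_answer: Callable[[str], str],  # wraps a zero-shot LLM call (assumed)
    n_samples: int = 5,
    n_demos: int = 3,
) -> list[tuple[str, str]]:
    scored = []
    for q in questions:
        answers = [sample_answer(q) for _ in range(n_samples)]
        answer, count = Counter(answers).most_common(1)[0]
        consistency = count / n_samples          # higher = more self-consistent
        scored.append((consistency, q, answer))
    scored.sort(reverse=True)                    # most consistent first
    return [(q, a) for _, q, a in scored[:n_demos]]

# Usage: demos = select_pseudo_demos(questions, sample_answer=my_llm_call)
# The selected (question, answer) pairs are prepended to subsequent prompts.
```

The appeal of this family of methods is that the "labels" for the in-context examples come from the model's own most self-consistent outputs, so no human annotation is needed.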
Article
Full-text available
In this paper, we present a comprehensive mathematical analysis of the hallucination phenomenon in generative pretrained transformer (GPT) models. We rigorously define and measure hallucination and creativity using concepts from probability theory and information theory. By introducing a parametric family of GPT models, we characterize the trade-off between hallucination and creativity and identify an optimal balance that maximizes model performance across various tasks. Our work offers a novel mathematical framework for understanding the origins and implications of hallucination in GPT models and paves the way for future research and development in the field of large language models (LLMs).
Article
Full-text available
An artificial intelligence-based chatbot, ChatGPT, was launched in November 2022 and is capable of generating cohesive and informative human-like responses to user input. This rapid review of the literature aims to enrich our understanding of ChatGPT's capabilities across subject domains, how it can be used in education, and potential issues raised by researchers during the first three months of its release (i.e., December 2022 to February 2023). A search of the relevant databases and Google Scholar yielded 50 articles for content analysis (i.e., open coding, axial coding, and selective coding). The findings of this review suggest that ChatGPT's performance varied across subject domains, ranging from outstanding (e.g., economics) and satisfactory (e.g., programming) to unsatisfactory (e.g., mathematics). Although ChatGPT has the potential to serve as an assistant for instructors (e.g., to generate course materials and provide suggestions) and a virtual tutor for students (e.g., to answer questions and facilitate collaboration), there were challenges associated with its use (e.g., generating incorrect or fake information and bypassing plagiarism detectors). Immediate action should be taken to update the assessment methods and institutional policies in schools and universities. Instructor training and student education are also essential to respond to the impact of ChatGPT on the educational environment.
Article
Full-text available
At the end of 2022, the appearance of ChatGPT, an artificial intelligence (AI) chatbot with amazing writing ability, caused a great sensation in academia. The chatbot turned out to be very capable, but also capable of deception, and the news broke that several researchers had listed the chatbot (including its earlier version) as co-authors of their academic papers. In response, Nature and Science expressed their position that this chatbot cannot be listed as an author in the papers they publish. Since an AI chatbot is not a human being, in the current legal system, the text automatically generated by an AI chatbot cannot be a copyrighted work; thus, an AI chatbot cannot be an author of a copyrighted work. Current AI chatbots such as ChatGPT are much more advanced than search engines in that they produce original text, but they still remain at the level of a search engine in that they cannot take responsibility for their writing. For this reason, they also cannot be authors from the perspective of research ethics.
Article
Full-text available
In this study, the author collected tweets about ChatGPT, an innovative AI chatbot, in the first month after its launch. A total of 233,914 English tweets were analyzed using the latent Dirichlet allocation (LDA) topic modeling algorithm to answer the question “what can ChatGPT do?”. The results revealed three general topics: news, technology, and reactions. The author also identified five functional domains: creative writing, essay writing, prompt writing, code writing, and answering questions. The analysis also found that ChatGPT has the potential to impact technologies and humans in both positive and negative ways. In conclusion, the author outlines four key issues that need to be addressed as a result of this AI advancement: the evolution of jobs, a new technological landscape, the quest for artificial general intelligence, and the progress-ethics conundrum.
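As a toy illustration of the LDA workflow described above (not the author's actual pipeline or data), a minimal topic model over a handful of invented tweets might look like this:

```python
# Toy LDA sketch: vectorize short texts into term counts and fit a small
# topic model, then print the top terms per topic.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

tweets = [
    "ChatGPT wrote my essay introduction in seconds",
    "Asked ChatGPT to debug my Python code and it worked",
    "ChatGPT answered my history question with sources",
    "Using ChatGPT for creative writing prompts tonight",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(tweets)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")
```

At the scale reported in the study (hundreds of thousands of tweets), the same workflow applies, with the number of topics chosen by inspecting coherence rather than fixed in advance.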
Article
Full-text available
Large language models represent a significant advancement in the field of AI. The underlying technology is key to further innovations and, despite critical views and even bans within communities and regions, large language models are here to stay. This commentary presents the potential benefits and challenges of educational applications of large language models, from student and teacher perspectives. We briefly discuss the current state of large language models and their applications. We then highlight how these models can be used to create educational content, improve student engagement and interaction, and personalize learning experiences. With regard to challenges, we argue that large language models in education require teachers and learners to develop sets of competencies and literacies necessary to both understand the technology as well as their limitations and unexpected brittleness of such systems. In addition, a clear strategy within educational systems and a clear pedagogical approach with a strong focus on critical thinking and strategies for fact checking are required to integrate and take full advantage of large language models in learning settings and teaching curricula. Other challenges such as the potential bias in the output, the need for continuous human oversight, and the potential for misuse are not unique to the application of AI in education. But we believe that, if handled sensibly, these challenges can offer insights and opportunities in education scenarios to acquaint students early on with potential societal biases, criticalities, and risks of AI applications. We conclude with recommendations for how to address these challenges and ensure that such models are used in a responsible and ethical manner in education.
Article
Full-text available
This study provides a comprehensive assessment of the associations of personality and intelligence. It presents a meta-analysis (N = 162,636, k = 272) of domain, facet, and item-level correlations between personality and intelligence (general, fluid, and crystallized) for the major Big Five and HEXACO hierarchical frameworks of personality: NEO Personality Inventory–Revised, Big Five Aspect Scales, Big Five Inventory–2, and HEXACO Personality Inventory–Revised. It provides the first meta-analysis of personality and intelligence to comprehensively examine (a) facet-level correlations for these hierarchical frameworks of personality, (b) item-level correlations, (c) domain and facet-level predictive models. Age and sex differences in personality and intelligence, and study-level moderators, are also examined. The study was complemented by four of our own unpublished data sets (N = 26,813) which were used to assess the ability of item-level models to provide generalizable prediction. Results showed that openness (ρ =.20) and neuroticism (ρ = −.09) were the strongest Big Five correlates of intelligence and that openness correlated more with crystallized than fluid intelligence. At the facet level, traits related to intellectual engagement and unconventionality were more strongly related to intelligence than other openness facets, and sociability and orderliness were negatively correlated with intelligence. Facets of gregariousness and excitement seeking had stronger negative correlations, and openness to aesthetics, feelings, and values had stronger positive correlations with crystallized than fluid intelligence. Facets explained more than twice the variance of domains. Overall, the results provide the most nuanced and robust evidence to date of the relationship between personality and intelligence.
Article
Full-text available
In higher education, low teacher-student ratios can make it difficult for students to receive immediate and interactive help. Chatbots, increasingly used in various scenarios such as customer service, work productivity, and healthcare, might be one way of helping instructors better meet student needs. However, few empirical studies in the field of Information Systems (IS) have investigated pedagogical chatbot efficacy in higher education and fewer still discuss their potential challenges and drawbacks. In this research we address this gap in the IS literature by exploring the opportunities, challenges, efficacy, and ethical concerns of using chatbots as pedagogical tools in business education. In this two-study project, we conducted a chatbot-guided interview with 215 undergraduate students to understand student attitudes regarding the potential benefits and challenges of using chatbots as intelligent student assistants. Our findings revealed the potential for chatbots to help students learn basic content in a responsive, interactive, and confidential way. Findings also provided insights into student learning needs which we then used to design and develop a new, experimental chatbot assistant to teach basic AI concepts to 195 students. Results of this second study suggest chatbots can be engaging and responsive conversational learning tools for teaching basic concepts and for providing educational resources. Herein, we provide the results of both studies and discuss possible promising opportunities and ethical implications of using chatbots to support inclusive learning.
Article
Full-text available
Social virtual reality (VR) platforms are an emergent phenomenon, with growing numbers of users utilizing them to connect with others while experiencing feelings of presence ("being there"). This article examines the associations between feelings of presence and the activities performed by users, and the psychological benefits obtained in terms of relatedness, self-expansion, and enjoyment, in the context of the COVID-19 pandemic. The results of a survey conducted among users (N = 220) indicate that feelings of spatial presence predict these three outcomes, while social presence predicts relatedness and enjoyment, but not self-expansion. Socialization activities like meeting friends in VR are associated with relatedness and enjoyment, while playful and creative activities allow for self-expansion. Moreover, the perceived impact of social distancing measures was associated with an increase in use, suggesting the utility of these platforms in helping users meet psychological needs that were particularly frustrated. These results provide a first quantitative account of the potential positive effects of social VR platforms on users' wellbeing and encourage further research on the topic.
Article
Full-text available
Student life involves many sources of stress due to the requirements of managing schoolwork, family, friends, health and wellbeing, and future career planning. Some students are overwhelmed and lack resilience to overcome stress, especially if they are inexperienced in managing setbacks, fail to achieve expectations, or lack skills to independently manage social skills, recreation, and study time. The long-term accumulation of stress has a negative impact on students' physical and mental health, and may lead to a range of symptoms such as depression, anxiety, headache, insomnia, and eating disorders. Although most universities provide psychological counseling services, there is often a shortage of professional psychologists, which leads to students suffering from stress for longer than necessary without immediate assistance. The build-up of stress can lead to tragic consequences including abnormal reasoning, anti-social behavior, and suicide. There should never be a need for a student to wait more than a month to make an appointment for counseling services, and every request for help should be immediately addressed and assessed. In this research, we designed a unique test platform for an immersive virtual reality group chatbot counseling system so students can receive psychological help and stress management counseling anytime and anywhere. First, the research used questionnaires to measure students' stress levels and identify how stress affects their lives. An immersive virtual reality chatbot, built on professional psychological counseling knowledge, was developed to provide answers during individual or group counseling sessions. Students can log in to the platform as avatars and ask the chatbot questions or interact with other students on the platform. This research provides college students with a new technology-based counseling environment designed to help relieve stress and learn new ways to improve student life quality from others. The platform provides a test base for future clinical trials to evaluate and improve the automated virtual reality chatbot counseling system.
Article
Full-text available
The introduction of Artificial Intelligence technology enables the integration of Chatbot systems into various aspects of education. This technology is increasingly being used for educational purposes. Chatbot technology has the potential to provide quick and personalised services to everyone in the sector, including institutional employees and students. This paper presents a systematic review of previous studies on the use of Chatbots in education. A systematic review approach was used to analyse 53 articles from recognised digital databases. The review results provide a comprehensive understanding of prior research related to the use of Chatbots in education, including information on existing studies, benefits, and challenges, as well as future research areas on the implementation of Chatbot technology in the field of education. The implications of the findings were discussed, and suggestions were made.
Article
Full-text available
Background: The use of chatbots as learning assistants is receiving increasing attention in language learning due to their ability to converse with students using natural language. Previous reviews mainly focused on only one or two narrow aspects of chatbot use in language learning. This review goes beyond merely reporting the specific types of chatbot employed in past empirical studies and examines the usefulness of chatbots in language learning, including first language learning, second language learning, and foreign language learning. Aims: The primary purpose of this review is to discover the possible technological, pedagogical, and social affordances enabled by chatbots in language learning. Materials & Methods: We conducted a systematic search and identified 25 empirical studies that examined the use of chatbots in language learning. We used the inductive grounded approach to identify the technological and pedagogical affordances, and the challenges of using chatbots for students' language learning. We used Garrison's social presence framework to analyze the social affordances of using chatbots in language learning. Results: Our findings revealed three technological affordances: timeliness, ease of use, and personalization; and five pedagogical uses: as interlocutors, as simulations, for transmission, as helplines, and for recommendations. Chatbots appeared to encourage students' social presence by affective, open, and coherent communication. Several challenges in using chatbots were identified: technological limitations, the novelty effect, and cognitive load. Discussion and Conclusion: A set of rudimentary design principles for chatbots is proposed for meaningfully implementing educational chatbots in language learning, and detailed suggestions for future research are presented.
Article
Full-text available
Artificial intelligence (AI) ethics is a field that has emerged as a response to the growing concern regarding the impact of AI. It can be read as a nascent field and as a subset of the wider field of digital ethics, which addresses concerns raised by the development and deployment of new digital technologies, such as AI, big data analytics, and blockchain technologies. The principal aim of this article is to provide a high-level conceptual discussion of the field by way of introducing basic concepts and sketching approaches and central themes in AI ethics. The first part introduces concepts by noting what is being referred to by "AI" and "ethics", etc.; the second part explores some predecessors to AI ethics, namely engineering ethics, philosophy of technology, and science and technology studies; the third part discusses three current approaches to AI ethics, namely principles, processes, and ethical consciousness; and finally, the fourth part discusses central themes in translating ethics into engineering practice. We conclude by summarizing and noting the inherent interdisciplinary future directions and debates in AI ethics.
Conference Paper
Full-text available
Pedagogical agents are theorized to increase humans' effort to understand computerized instructions. Despite the pedagogical promises of VR, the usefulness of pedagogical agents in VR remains uncertain. Based on this gap, and inspired by global efforts to advance remote learning during the COVID-19 pandemic, we conducted an educational VR study in-the-wild (N = 161). With a 2 × 2 + 1 between-subjects design, we manipulated the appearance and behavior of a virtual museum guide in an exhibition about viruses. Factual and conceptual learning outcomes as well as subjective learning experience measures were collected. In general, participants reported high enjoyment and had significant knowledge acquisition. We found that the agent's appearance and behavior impacted factual knowledge gain. We also report an interaction effect between behavioral and visual realism for conceptual knowledge gain. Our findings nuance classical multimedia learning theories and provide directions for employing agents in immersive learning environments.
Article
Full-text available
Exploring communication dynamics in digital social spaces such as massively multiplayer online games and 2D/3D virtual worlds has been a long-standing concern in HCI and CSCW. As online social spaces evolve towards more natural embodied interaction, it is important to explore how non-verbal communication can be supported in more nuanced ways in these spaces and introduce new social interaction consequences. In this paper we especially focus on understanding novel non-verbal communication in social virtual reality (VR). We report findings of two empirical studies. Study 1 collected observational data to explore the types of non-verbal interactions being used naturally in social VR. Study 2 was an interview study (N=30) that investigated people's perceptions of non-verbal communication in social VR as well as the resulting interaction outcomes. This study helps address the limitations in prior literature on non-verbal communication dynamics in online social spaces. Our findings on what makes non-verbal communication in social VR unique and socially desirable extend our current understandings of the role of non-verbal communication in social interaction. We also highlight potential design implications that aim at better supporting non-verbal communication in social VR.
Conference Paper
Full-text available
This paper reports findings from a between-subjects experiment that investigates how different learning content representations in virtual environments (VE) affect the process and outcomes of learning. Seventy-eight participants were subjected to an immersive virtual reality (VR) application, where they received identical instructional information, rendered in three different formats: as text in an overlay interface, as text embedded semantically in a virtual book, or as audio. Learning outcome measures, self-reports, and an electroencephalogram (EEG) were used to compare conditions. Results show that reading was superior to listening for the learning outcomes of retention, self-efficacy, and extraneous attention. Reading text from a virtual book was reported to be less cognitively demanding, compared to reading from an overlay interface. EEG analyses show significantly lower theta and higher alpha activation in the audio condition. The findings provide important considerations for the design of educational VR environments.
Article
Full-text available
The goal of this study was to apply aspects of the heuristic model advanced by Eisenberg, Cumberland, and Spinrad (1998) to the study of socialization that takes place in preschool and elementary school classrooms. Investigating socialization in this context is important given the number of hours students spend in school, the emotional nature of social interactions that take place involving teachers and students, and the emotions students often experience in the context of academic work. Guided by Eisenberg, Cumberland, et al.'s (1998) call to consider complex socialization pathways, we focus our discussion on ways teachers, peers, and the classroom context can shape students' emotion-related outcomes (e.g., self-regulation, adjustment) and academic-related outcomes (e.g., school engagement, achievement) indirectly and differentially (e.g., as a function of student or classroom characteristics). Our illustrative review of the intervention literature demonstrates that the proposed classroom-based socialization processes have clear applied implications, and efforts to improve socialization in the classroom can promote students' emotional and academic competence. We conclude our discussion by outlining areas that require additional study.
Article
Full-text available
In the past five years, private companies, research institutions and public sector organizations have issued principles and guidelines for ethical artificial intelligence (AI). However, despite an apparent agreement that AI should be ‘ethical’, there is debate about both what constitutes ‘ethical AI’ and which ethical requirements, technical standards and best practices are needed for its realization. To investigate whether a global agreement on these questions is emerging, we mapped and analysed the current corpus of principles and guidelines on ethical AI. Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy), with substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented. Our findings highlight the importance of integrating guideline-development efforts with substantive ethical analysis and adequate implementation strategies.
Conference Paper
Full-text available
Chatbots have grown as a space for research and development in recent years due both to the realization of their commercial potential and to advancements in language processing that have facilitated more natural conversations. However, nearly all chatbots to date have been designed for dyadic, one-on-one communication with users. In this paper we present a comprehensive review of research on chatbots supplemented by a review of commercial and independent chatbots. We argue that chatbots' social roles and conversational capabilities beyond dyadic interactions have been underexplored, and that expansion into this design space could support richer social interactions in online communities and help address the longstanding challenges of maintaining, moderating, and growing these communities. In order to identify opportunities beyond dyadic interactions, we used research-through-design methods to generate more than 400 concepts for new social chatbots, and we present seven categories that emerged from analysis of these ideas.
Article
Full-text available
Quiet students are sometimes misunderstood in the college classroom. Students may be quiet for reasons related to personality traits, learned behaviors, or situational factors, but regardless, their silences may be misinterpreted by their instructors as a lack of engagement in their courses. In fact, quiet students are often very engaged in the learning process but may need space to express their interest in ways that are suited to their quiet tendencies. This article describes how quiet students are perceived in the classroom, reviews the reasons why quiet students often serve as a source of uncertainty for college instructors, and explains a number of strategies that instructors can use to meet the learning needs of quiet students.
Conference Paper
Full-text available
We are building an intelligent agent to help teaming efforts. In this paper, we investigate the real-world use of such an agent to understand students deeply and help student team formation in a large university class involving about 200 students and 40 teams. Specifically, the agent interacted with each student in a text-based conversation at the beginning and end of the class. We show how the intelligent agent was able to elicit in-depth information from the students, infer the students' personality traits, and reveal the complex relationships between team personality compositions and team results. We also report on the students' behavior with and impression of the agent. We discuss the benefits and limitations of such an intelligent agent in helping team formation, and the design considerations for creating intelligent agents for aiding in teaming efforts.
Article
Full-text available
A pedagogical agent is an anthropomorphic virtual character used in an online learning environment to serve instructional purposes. The design of pedagogical agents changes over time depending on the desired objectives for them. This article is a systematic review of the research from 2007 to 2017 related to the design factors of pedagogical agents and their impact on learning environments. The objective of this review is to identify and analyze pedagogical agents through the context in which they are constructed, the independent variables used in pedagogical agent research, and the impact of the pedagogical agent implementation. The review found that research on the design of pedagogical agents has different forms, namely text, voice, 2-D character, 3-D character, and human. The independent variables used in the studies are categorized into the appearance of agents and the role of agents. Moreover, the combination of pedagogical agent designs and role designs of pedagogical agents has significant positive impacts on student learning and student behavior. Recommendations are also provided at the end of this review.
Article
Full-text available
In the light of substantial improvements to the quality and availability of virtual reality (VR) hardware seen since 2013, this review seeks to update our knowledge about the use of head-mounted displays (HMDs) in education and training. Following a comprehensive search, 21 documents reporting on experimental studies were identified, quality assessed, and analysed. The quality assessment shows that the study quality was below average according to the Medical Education Research Study Quality Instrument, especially for the studies that were designed as user evaluations of educational VR products. The review identified a number of situations where HMDs are useful for skills acquisition. These include cognitive skills related to remembering and understanding spatial and visual information and knowledge; psychomotor skills related to head-movement, such as visual scanning or observational skills; and affective skills related to controlling your emotional response to stressful or difficult situations. Outside of these situations the HMDs had no advantage when compared to less immersive technologies or traditional instruction and in some cases even proved counterproductive because of widespread cybersickness, technological challenges, or because the immersive experience distracted from the learning task.
Article
This article explores the ethical problems arising from the use of ChatGPT as a kind of generative AI and suggests responses based on the Human-Centered Artificial Intelligence (HCAI) framework. The HCAI framework is appropriate because it understands technology above all as a tool to empower, augment, and enhance human agency while referring to human wellbeing as a "grand challenge," thus perfectly aligning itself with ethics, the science of human flourishing. Further, HCAI provides objectives, principles, procedures, and structures for reliable, safe, and trustworthy AI which we apply to our ChatGPT assessments. The main danger ChatGPT presents is the propensity to be used as a "weapon of mass deception" (WMD) and an enabler of criminal activities involving deceit. We review technical specifications to better comprehend its potentials and limitations. We then suggest both technical (watermarking, styleme, detectors, and fact-checkers) and non-technical measures (terms of use, transparency, educator considerations, HITL) to mitigate ChatGPT misuse or abuse and recommend best uses (creative writing, non-creative writing, teaching and learning). We conclude with considerations regarding the role of humans in ensuring the proper use of ChatGPT for individual and social wellbeing.
Conference Paper
We present online survey results on the activities and usage motives of social virtual reality (social VR) users. Based on content analysis of users' free-text responses, we found that most users, in fact, use these applications for social activities and to satisfy their diverse social needs. The second most frequently mentioned categories of activities and motives relate to experiential aspects such as entertainment activities. Another important category of motives, which has only recently been described in related work, relates to the self, such as personal growth. Although our results indicate that social VR provides a superior social experience compared to traditional digital social spaces, like games or social media, they also reveal a desire for better and affordable tracking technology, increased sensory immersion, and further improvement concerning social features. Our findings complement related work as they come from a comparatively large sample (N = 273) and summarize a general user view on social VR. Besides confirming an intuitive assumption, they help identify use cases and opportunities for further research related to social VR.
Article
We model and test two school-based peer cultures: one that stigmatizes effort and one that rewards ability. The model shows that either may reduce participation in educational activities when peers can observe participation and performance. We design a field experiment that allows us to test for, and differentiate between, these two concerns. We find that peer pressure reduces takeup of an SAT prep package virtually identically across two very different high school settings. However, the effects arise from very distinct mechanisms: a desire to hide effort in one setting and a desire to hide low ability in the other.
Article
Background: The current research investigated the association between teacher-student relationship (both teacher-perceived and student-perceived relationship quality) and students' prosocial behaviours, as well as the mediating roles of students' attitudes towards school and perceived academic competence in this association. Sample: Four hundred and fifty-nine Italian primary students (aged 4-9, M_age = 7.05, SD_age = 1.37) and 47 teachers (aged 26-60, M_age = 48.35, SD_age = 8.13) participated and finished all the questionnaires and scales. Methods: Multiple regression analyses and bootstrapping analyses were employed to test the direct and the mediating effects between the teacher/student-perceived relationship and students' prosocial behaviours. Results: Results indicated that (1) teacher-student relationship was positively associated with students' prosocial behaviour; and (2) students' attitudes towards school could significantly mediate the association between teacher/student-perceived relationship and students' prosocial behaviours. Conclusions: Our understanding of how teacher-student relationship helps to enhance students' prosocial behaviours, as well as the intervention programmes that aim to enhance students' prosocial behaviours, may benefit from these findings.
Article
Chatbots have been around for years and have been used in many areas such as medicine or commerce. Our focus is on the development and current uses of chatbots in the field of education, where they can function as service assistants or as educational agents. In this research paper, we attempt to make a systematic review of the literature on educational chatbots that address various issues. From 485 sources, 80 studies on chatbots and their application in education were selected through a step‐by‐step procedure based on the guidelines of the PRISMA framework, using a set of predefined criteria. The results obtained demonstrate the existence of different types of educational chatbots currently in use that affect student learning or improve services in various areas. This paper also examines the type of technology used to unravel the learning outcome that can be obtained from each type of chatbots. Finally, our results identify instances where a chatbot can assist in learning under conditions similar to those of a human tutor, while exploring other possibilities and techniques for assessing the quality of chatbots. Our analysis details these findings and can provide a solid framework for research and development of chatbots for the educational field.
Conference Paper
Now that high-end consumer phones can support immersive virtual reality, we ask whether social virtual reality is a promising medium for supporting distributed groups of users. We undertook an exploratory in-the-wild study using Samsung Gear VR headsets to see how existing social groups that had become geographically dispersed could use VR for collaborative activities. The study showed a strong propensity for users to feel present and engaged with group members. Users were able to bring group behaviors into the virtual world. To overcome some technical limitations, they had to create novel forms of interaction. Overall, the study found that users experience a range of emotional states in VR that are broadly similar to those that they would experience face-to-face in the same groups. The study highlights the transferability of existing social group dynamics in VR interactions but suggests that more work would need to be done on avatar representations to support some intimate conversations.
Conference Paper
This study aims at investigating which cues teachers detect and process from their students during instruction. This information-capturing process depends on teachers' sensitivity, or awareness, to students' needs, which has been recognized as crucial for classroom management. We recorded the gaze behaviors of two pre-service teachers and two experienced teachers during a whole math lesson in primary classrooms. Thanks to a simple Learning Analytics interface, the data analysis reports, first, which students were tracked most often, in relation to their classroom behavior and performance, and, second, which relationships exist between teachers' attentional frequency distribution and lability and the overall classroom climate they promote, as measured by the Classroom Assessment Scoring System. Results show that participants' gaze patterns are mainly related to their experience. Learning Analytics use cases are eventually presented, enabling researchers or teacher trainers to further explore the eye-tracking data.
Chapter
How strongly does a student's cognitive and motivational development depend on various characteristics of schooling? This article describes the origins and developing profile of the field of research on differential effects of learning environments, the current knowledge about central characteristics of schooling, persistent methodological challenges, and implications for educational practice.