Education and Information Technologies
https://doi.org/10.1007/s10639-024-12787-9
Abstract
The significance of interdisciplinary learning has been well recognized by higher education institutions. However, when teaching interdisciplinary learning to junior undergraduate students, their limited disciplinary knowledge and the underrepresentation of students from some disciplines can hinder their learning performance. ChatGPT's ability to engage in human-like conversations, together with its extensive knowledge grounded in different disciplines, holds promise for enriching undergraduate students with the disciplinary knowledge that they lack. In this exploratory study, we engaged 130 undergraduate students in a three-condition quasi-experiment to examine how ChatGPT influences their demonstrated and perceived interdisciplinary learning quality, as measured by their online posts and surveys, respectively. The content analysis results show that, overall, students' online posts could be coded into four interdisciplinary learning dimensions: diversity, disciplinary grounding, cognitive advancement, and integration. The means of the first three dimensions were close to the middle level (ranging from 0.708 to 0.897, where the middle level is 1), whereas the mean score of integration was relatively small (i.e., 0.229). Students under the ChatGPT condition demonstrated improved disciplinary grounding. Regarding perceived interdisciplinary learning quality, we did not find significant differences across the three conditions in the pre- or post-surveys. The findings underscore ChatGPT's ability to enhance students' disciplinary grounding and the significance of further fostering their integration skills.
Keywords ChatGPT · Persona · Interdisciplinary learning quality · Quasi-
experiment · Content analysis · Undergraduate
Received: 25 September 2023 / Accepted: 12 May 2024
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature
2024
The influences of ChatGPT on undergraduate students' demonstrated and perceived interdisciplinary learning
Tianlong Zhong1 · Gaoxia Zhu2 · Chenyu Hou1 · Yuhan Wang2 · Xiuyi Fan3
Extended author information available on the last page of the article
1 Introduction
The complex issues we face nowadays (e.g., climate change, energy, the ethical use of stem cells) require the integration of knowledge, insights, and methods from different disciplines, as well as collaboration between experts with diverse disciplinary backgrounds (Frodeman et al., 2010; Kidron & Kali, 2023). Working in an interdisciplinary team and applying interdisciplinary knowledge and skills to solve issues, generate novel ideas, or explain phenomena that a single discipline cannot tackle have increasingly become critical employability and sustainable development skills for graduates (Brassler & Dettmers, 2017). Accordingly, the responsibility of higher education institutions to engage students in interdisciplinary learning and prepare them for future careers has become more widely recognized (Roy et al., 2013). Interdisciplinary learning is about integrating knowledge, methods, and insights from various disciplines to explain or tackle complex or broad phenomena, questions, problems, or topics that cannot be effectively addressed by a single discipline alone (Bybee, 2013; Ivanitskaya et al., 2002; Klein & Newell, 1997). However, researchers and educators often suggest that it is challenging to create authentic interdisciplinary learning experiences for students (Stentoft, 2017; Zhu & Burrow, 2022). A primary challenge is that students are not experts in their areas and may not have an adequate level of knowledge to apply to the interdisciplinary learning task at hand, let alone navigate complex disciplinary boundaries and integrate knowledge from multiple disciplines (Kidron & Kali, 2023; Stentoft, 2017). Furthermore, in classrooms, students usually have limited time and resources to prepare for the interdisciplinary learning task, given the required curriculum to be covered (Sharp, 2015).
The release of ChatGPT (OpenAI, 2023) shows new promise for addressing these challenges, given its capacity to engage in reasonable conversations with learners across various disciplines and its ability to take on and maintain a defined persona and identity (Qadir, 2023; Zhu et al., 2023). A persona is a detailed representation of a user's characteristics, behaviors, and goals (Cooper, 1999), and it is widely used in the Human-Computer Interaction field. We conjecture that ChatGPT and ChatGPT Persona (e.g., a virtual agent that represents an undergraduate student with a specific disciplinary background) would enhance students' interdisciplinary learning quality during the process, and that ChatGPT Persona would outperform ChatGPT given its potential to provide more discipline-specific responses to learners. Interdisciplinary learning quality in this study refers to the extent to which students integrate knowledge and methods from multiple disciplines to create a comprehensive understanding of complex subjects (Boix-Mansilla et al., 2009; Boix-Mansilla & Duraising, 2007; Kidron & Kali, 2015). Developing rigorous criteria for evaluating interdisciplinary learning quality during the learning process is another research gap that deserves more attention (Kidron & Kali, 2023).
This study aims to address these research gaps through a quasi-experimental design. We engaged 130 undergraduate students in interdisciplinary learning under ChatGPT, ChatGPT Persona, and Non-ChatGPT conditions and examined how their demonstrated and perceived interdisciplinary learning quality differs across these conditions, as measured by content analysis of their online posts and by self-reported surveys. This study is novel and significant in the following ways. First, we identified a niche and meaningful use of ChatGPT and ChatGPT Persona in the interdisciplinary learning context. Second, we conducted empirical research to examine their effectiveness in this context, which responds to the limited empirical research on using ChatGPT in interdisciplinary learning. Finally, we developed a coding scheme to analyze the online posts generated by students during the interdisciplinary learning process, which extends the literature on the measurement of interdisciplinary learning quality and can be applied in future research.
2 Literature review
2.1 Interdisciplinary learning
Interdisciplinary learning emphasizes collaboration among learners to cross disciplinary boundaries and benefit from the diverse insights of group members with varying disciplinary perspectives (Kidron & Kali, 2023). By doing so, learners are more likely to achieve remarkable cognitive advancement in explaining phenomena, solving problems, or designing novel ideas or products (Boix-Mansilla, 2010; Boix-Mansilla & Duraising, 2007; Broadbent & Gallotti, 2015). Higher education institutions have increasingly recognized the benefits of interdisciplinary learning, such as improving learners' critical thinking and creativity and helping learners identify bias, embrace uncertainty, value moral principles, and apply knowledge to practical situations (Alberta Education, 2015; Madden et al., 2013). Therefore, to prepare students for their future careers and foster the higher-order skills described above, educators (Boix-Mansilla, 2010; Roy et al., 2013) advocate for interdisciplinary education and research at the higher education level or even from primary school onward.
However, interdisciplinary learning is a relatively undertheorized concept in higher education. Generally, there is insufficient understanding of how institutions frame it, how educators enact it, and how students experience it (Lyall et al., 2016; Markauskaite et al., 2020). Because of underdeveloped conceptual foundations and a lack of coherent connection between curricular elements, undergraduate students perceive some existing interdisciplinary courses as choppy and experience struggles and difficulties in the learning process (Eisen et al., 2009). Another challenge of implementing interdisciplinary learning in higher education is that undergraduate students are novices rather than disciplinary experts (Kidron & Kali, 2023; MacLeod, 2018). They tend to lack disciplinary knowledge and skills, which constrains their ability to engage in interdisciplinary learning and build interdisciplinary understandings.
Some studies have been conducted to foster students' interdisciplinary learning. For instance, Kidron and Kali (2015) developed the Boundary Breaking for Interdisciplinary Learning Model, which includes two principles: breaking boundaries between disciplines and breaking boundaries between learners. The model helps students integrate knowledge from various disciplinary lenses through technology support (e.g., highlighting cross-cutting themes), engage in meaningful discourse, and be exposed to ideas and ways of thinking in a learning community. Their empirical study in an undergraduate online interdisciplinary course suggested that this model enhanced students' interdisciplinary understanding of the course's theme and improved their ability to synthesize insights from the multiple disciplines taught in the course. In recent research, Kidron and Kali (2023) further studied how a learning community approach could help address the challenge of integrating disciplinary ideas to achieve interdisciplinary understanding. Their quasi-experiment with undergraduate students indicated that students' ability to synthesize disciplinary ideas was significantly higher under the learning community approach than under the individual learning approach. Furthermore, debate, as a teamwork activity, has been applied in interdisciplinary learning. Merrell et al. (2017) found that debates encourage students to investigate, analyze, and integrate materials from diverse disciplines, offering universities an approach to delivering interdisciplinary courses with both depth and breadth. Similarly, Zhan et al. (2017) discovered that students preferred activities that promote active learning, such as debates, where they could synthesize ideas across disciplines. Informed by these studies, we used a debate activity to create an authentic scenario in which students from different schools could think about and contribute arguments and counterarguments from different disciplines and perspectives, as well as integrate ideas to develop coherent and structured arguments. Miro, an online collaboration platform, was used to support students in learning as a community in which they could access shared posts within and between groups.
Besides curriculum design, constructing more rigorous criteria for evaluating interdisciplinary learning quality has also drawn much attention (Gvili et al., 2016; Huutoniemi, 2010). There is increasing interest in understanding interdisciplinary learning dynamics, processes, and patterns (Kidron & Kali, 2023). Some studies have explored the components of interdisciplinarity. For instance, Boix-Mansilla et al. (2009) introduced a Targeted Assessment Rubric for Interdisciplinary Writing, which comprised four essential dimensions: purposefulness (i.e., the extent to which the essay clearly expresses its purpose), disciplinary grounding (i.e., the extent to which students effectively apply disciplinary knowledge and methods), integration (i.e., the extent to which students incorporate two or more disciplines in an essay and advance their understanding), and critical awareness (i.e., awareness of the strengths and limitations of the selected disciplines). Lattuca et al.'s (2012) review of the literature yielded eight dimensions of interdisciplinarity: awareness of disciplinarity, appreciation of disciplinary perspectives, appreciation of non-disciplinary perspectives, recognition of disciplinary limitations, interdisciplinary evaluation, ability to find common ground, reflexivity, and integrative skill. Based on Boix-Mansilla et al.'s (2009) work, Kidron and Kali (2023) constructed an Interdisciplinary Knowledge Integration rubric, which kept the purposefulness, disciplinary grounding, and integration dimensions. Specifically, they expanded integration into four sub-dimensions: integrative lens, idea connection, disciplinary analysis through an integrative lens, and synthesis.
This study responds to two highlighted research limitations: undergraduate students' limited disciplinary knowledge (i.e., being novices) and the need to look into the interdisciplinary learning process using a more rigorous evaluation approach. Specifically, we leveraged ChatGPT, which can engage in conversations with undergraduates across different disciplines, and used content analysis to examine the online posts generated by students during the interdisciplinary learning process.
2.2 AI and ChatGPT for interdisciplinary learning
Artificial intelligence (AI) has been applied in interdisciplinary learning for various purposes. First, it has been used to predict students' interdisciplinary learning performance. Lee et al. (2023) used deep learning and computer vision methods to classify the different modes of students' STEM learning, categorizing them as passive, active, constructive, or interactive. Yee et al. (2023) applied natural language processing (NLP) methods to evaluate student essays in an interdisciplinary course, focusing on three aspects: disciplinary grounding, disciplinary integration, and disciplinary evenness. Second, AI technologies have been used to enhance students' interdisciplinary learning processes. Iku-Silan et al. (2023) developed an interdisciplinary learning chatbot with NLP technology that can provide students with tailored tips and resource suggestions from an interdisciplinary knowledge website. They found that, compared to a control condition in which the chatbot was not available, using the chatbot significantly enhanced students' learning achievement, extrinsic motivation, collective efficacy, cognitive engagement, emotional engagement, and satisfaction with the learning approach. Third, AI has been used for teacher professional development in interdisciplinary areas. Kajonmanee et al. (2020) developed a personalized learning system that significantly bolstered in-service teachers' Technological Pedagogical Content Knowledge (TPACK) in STEM teaching. Tang et al. (2023) developed a platform to augment K-12 STEM education by seamlessly incorporating machine learning into scientific discovery lesson plans, e.g., using machine learning to help detect risk factors for heart disease.
In addition to these applications in interdisciplinary learning, research has investigated the potential of employing AI in collaborative learning tasks in general. First, AI can facilitate group formation based on learner models. For example, Sadeghi and Kardan (2015) developed a model for group formation based on student knowledge and preferences, resulting in improved learner satisfaction and higher scores in the class. Second, AI can be applied to examine the collaboration process. Ouyang et al. (2023) used the Hidden Markov Model combined with Lag Sequential Analysis and Frequent Sequence Mining to analyze collaborative patterns in collaborative knowledge construction. AI can also help quantify group members' contributions to group work, which is informative for learners and teachers (Upton & Kay, 2009). Third, AI can contribute to problem-solving and inquiry learning processes. Chatbots have been implemented in classrooms to facilitate learning by explaining concepts, answering questions, and assessing performance (Okonkwo & Ade-Ibijola, 2021). Hwang and Won (2021) found that people consistently contribute more creative ideas when conversing with an AI chatbot during a 10-minute brainstorming task. Students could also use chatbots to assess their grammatical correctness and sentence cohesion in simulated conversations (Huang et al., 2022). Fourth, AI chatbots can enhance students' learning engagement. Chatting with a pedagogic chatbot, orally or in writing, builds a supportive environment for students and increases their engagement (Troussas et al., 2017). Interaction with chatbots not only prevents boredom but also facilitates more efficient knowledge acquisition (Okonkwo & Ade-Ibijola, 2021). Despite these benefits, previously developed AI systems were often specialized, tailored for particular tasks and fields, and lacked generality (Xu, 2020).
Compared to AI tools designed for specific purposes, ChatGPT is more versatile across different educational scenarios and has shown its capacity to support various types of learning in diverse domains. Lo's (2023) review of ChatGPT's usage in education concluded that ChatGPT could aid teaching by generating course materials and designing assessment tasks, as well as support learning through functions such as answering queries, summarizing content, fostering collaboration, and assisting with exam preparation. It has the potential to support student-centered learning (Cooper, 2023) and personalized learning (Hong, 2023). College students can use LLMs like GPT to improve their writing, critical thinking, and problem-solving skills (Kasneci et al., 2023). ChatGPT can provide logical explanations for obscure concepts in chemistry and medicine (Clark, 2023). It has also achieved passing scores on law school exams (Choi et al., 2023), university-level computer science exams (Bordt & von Luxburg, 2023), and medical exams (Gilson et al., 2023). These studies imply that ChatGPT can provide sufficient support and appropriate knowledge for students who lack disciplinary knowledge during interdisciplinary learning.
Furthermore, given that ChatGPT was developed using an extensive dataset covering a wide range of disciplines, it holds significant potential for fostering interdisciplinary learning. McBee et al. (2023) employed ChatGPT to simulate an interdisciplinary dialogue by having it assume the roles of a moderator and various experts (e.g., a physiotherapist, psychologist, nutritionist, AI specialist, and an athlete) in a panel discussing "chatbots in sports rehabilitation." The study was a simulated interdisciplinary discussion without involving any humans, and the authors did not conduct systematic content analysis on the dialogue. Prentzas and Sidiropoulou (2023) collected undergraduate students' feedback on utilizing ChatGPT for creative writing. Students recognized that ChatGPT has the potential to provide valuable insights and assistance in a variety of subjects and fields, demonstrating its usefulness not just in specific areas but across a broad spectrum of disciplines.
ChatGPT can also play different roles in assisting teaching and learning. It can be (1) an interlocutor, such as a role-playing partner for students to converse with; (2) a content provider that modifies materials, recommends materials, and makes materials culturally relevant; (3) a teaching assistant that helps solve learning challenges; and (4) an evaluator that provides initial grading of writing (Jeon & Lee, 2023). These affordances and roles of ChatGPT, for instance, serving as a customized role-play partner that provides perspectives and knowledge from a wide range of disciplines, especially those that students lack, made us curious about ChatGPT's potential to help students overcome their lack of disciplinary knowledge. Therefore, we integrated ChatGPT into our instructional design to promote interdisciplinary learning.
2.3 The current study
The literature on interdisciplinary learning in higher education suggests two research gaps to be addressed to support practical implementation and theoretical development: (1) how to deal with undergraduate students' limited knowledge in their disciplines so that they can adequately contribute to the interdisciplinary learning task at hand, and (2) how to develop more rigorous criteria for evaluating interdisciplinary learning quality during the learning process. ChatGPT's breadth of knowledge across various disciplines and its ability to engage in human-like conversations and play different roles make it a promising tool to complement undergraduate students with the disciplinary knowledge they lack. To address the first research gap, we experimented with using ChatGPT in teaching a digital literacy course. Students could use ChatGPT or ChatGPT Persona (i.e., a persona with a specific disciplinary background) as an assistant or collaborative partner during their interdisciplinary learning task. To address the second research gap, we developed an interdisciplinary learning quality coding scheme to analyze the learning process data. Specifically, this study aimed to answer the following three research questions:
1. What interdisciplinary learning quality do undergraduate students demonstrate in the learning process?
2. How do ChatGPT and ChatGPT Persona affect undergraduates' demonstrated interdisciplinary learning quality?
3. How do ChatGPT and ChatGPT Persona affect undergraduates' perceived interdisciplinary learning quality?
3 Methods
3.1 Participants
The research took place in a two-hour lesson in a digital literacy course offered to first- and second-year undergraduate students at a public university in Southeast Asia in March 2023. Four classes (i.e., T1, T2, T3, and T4) participated in this study. There were 48, 48, 47, and 40 students in the four classes (183 students in total), and 130 students consented to participate in this research. The participants represented various academic disciplines, including Business (32 students), Humanities and Social Sciences (26 students), Engineering (69 students), and Science (3 students). In each class, the students were divided into groups of five to six. Each group was made up of students from different disciplines to facilitate interdisciplinary learning and communication, and we tried to place non-participating students in the same groups for data collection purposes. The same instructor taught all classes. The Institutional Review Board of the university approved the research.
3.2 Instructional design
A two-hour lesson on Artificial Intelligence (AI) was developed by our research team, which comprised an instructor with expertise in AI as well as researchers and graduate students with learning sciences and psychology backgrounds. When designing the lesson, the research team applied the following characteristics of interdisciplinary learning presented in the literature (Lam et al., 2014; MacLeod & van der Veen, 2020; Spelt et al., 2009):
(1) Creating real-world scenarios: Interdisciplinary learning often revolves around equipping students with the skills and knowledge necessary to address complex real-world challenges (Remington-Doucette et al., 2013). Therefore, we asked the groups to prepare for a debate organized by the University Debating Society on "Artificial Intelligence or the Internet: which has more profound implications for our society?" This activity closely aligned with students' everyday lives, ensuring its authenticity.
(2) Encouraging collaborative contributions from various disciplines: Debate can integrate knowledge from diverse disciplines and provide a platform for students to draw upon knowledge and concepts from multiple disciplines (Merrell et al., 2017). Furthermore, the debate format entails a collaborative, team-based approach, necessitating joint efforts from students to prepare arguments and responses together. Under this debate topic, the ramifications of AI and the Internet are not confined to a single domain or academic discipline. Instead, they permeate numerous sectors, such as computer science, economics, psychology, law, sociology, health, and ethics.
(3) Making cognitive advancements: By integrating ideas from different fields, students can benefit from each other's knowledge and expertise and integrate that knowledge to better explain phenomena, solve problems, and design solutions or products (Boix-Mansilla et al., 2009). Consequently, the objective of the debate activity was to enable students to articulate the diverse applications of AI technology and the Internet across various fields, identify relevant metrics for assessing the merits and drawbacks of employing AI and the Internet in distinct domains, and construct arguments to support their stance and convince others.
Meanwhile, we integrated ChatGPT into the activity design to complement students' lack of disciplinary knowledge and augment the diversity of disciplines represented within the groups. The design considerations were as follows. First, pursuing interdisciplinary learning necessitates diverse knowledge from multiple disciplines; nevertheless, students' background knowledge within their disciplines may be homogeneous. Despite our efforts to assign students from different disciplines to each group, the limited number of students available and the unequal representation of students from different schools constrained the coverage of disciplines within each group. Second, including ChatGPT may help students supplement unfamiliar subject expertise. Finally, the participants in the study were primarily first- or second-year undergraduate students who might not yet have developed an in-depth understanding of their subjects.
As shown in Fig. 1, we assigned the four classes to three conditions: T1 used ChatGPT Persona (experimental condition 1); T3 and T4 used ChatGPT (experimental condition 2); and T2 did not use ChatGPT (control condition). The primary consideration for the condition assignment was to allow as many students as possible to use ChatGPT in class, as requested by the students and the instructor, while at the same time keeping a comparison class that did not use ChatGPT. The procedures for the three conditions were identical, except that the ChatGPT and ChatGPT Persona conditions used ChatGPT during the learning process while the comparison class did not. For the debate activity, the students drew lots at the beginning of each class to determine their sides. The positive teams argued that "AI has more profound effects," whereas the opposing teams argued that "the Internet has more profound effects."
Students used Miro, a visual collaboration platform (https://miro.com), to share and integrate ideas during class. As shown in Fig. 2, we designed stickers of different colors and shapes to record ideas from different disciplines as well as students' prompts and ChatGPT's responses. Furthermore, we designed scaffolding to guide the students through the following steps to complete the activity.
(1) Form a framework. In this step, students were asked to think about and discuss which dimensions and metrics to use to measure "implications for our society." Students in the ChatGPT Persona condition could use ChatGPT to create a virtual persona (e.g., Peter, a 20-year-old Singaporean undergraduate student from the School of Education) and invite the persona to think about the dimensions and metrics with them. For instance, one piece of feedback a group received from their persona was:
Fig. 1 The flow of the activity procedure
"… I'd be happy to engage in the debate on the impact of AI vs. the Internet on society… Let's explore these dimensions: 1. Education: The Internet has democratized education, enabling e-learning and online courses. AI has the potential to enhance personalized learning experiences and provide intelligent tutoring systems. 2. Privacy and Data Security…".
Students in the ChatGPT condition could use ChatGPT without any constraints.
Here is a conversation between students and ChatGPT:
Student: What are the metrics used to evaluate the impact of AI?
ChatGPT: "1. Productivity … 2. Employment … 3. Economic growth … 4. Healthcare outcomes … 5. Environmental impact …".
In contrast, students in the control condition did not use ChatGPT but brainstormed the dimensions and metrics with their group members.
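The persona setup described above can be illustrated in code. The study does not publish the exact prompt students typed, so the `make_persona_messages` helper and its wording below are a hypothetical sketch of how a persona such as the "Peter" example could be specified via a system message in the widely used chat-message format (a list of role/content pairs):

```python
def make_persona_messages(name, age, nationality, school, task):
    """Build a chat-message list asking the model to adopt a student persona.

    The persona attributes and prompt wording are illustrative; the study
    does not report the exact prompts that students used.
    """
    system_prompt = (
        f"You are {name}, a {age}-year-old {nationality} undergraduate "
        f"student from the {school}. Stay in this role and answer from "
        f"your disciplinary background."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": task},
    ]

# Example: the 'Peter' persona from the text, asked about debate dimensions.
messages = make_persona_messages(
    name="Peter", age=20, nationality="Singaporean",
    school="School of Education",
    task=("Which dimensions and metrics could we use to compare the "
          "societal implications of AI and the Internet?"),
)
```

The resulting `messages` list would then be sent to a chat-completion endpoint; only the prompt-construction step is shown here, since the conversation itself depends on the live model.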
(2) Case analysis. Based on the dimensions from step 1, students were required to evaluate the impacts of both AI and the Internet and highlight cases from various disciplines. In the ChatGPT Persona condition, when a group asked their persona about the applications of the Internet and AI in healthcare, the persona answered, "As a medicine student, some potential applications of the Internet in healthcare include: Telemedicine … Electronic Health Records … Patient Education … Some applications of AI in healthcare include: Medical Imaging … Personalized Medicine …". Similarly, students in the ChatGPT condition used ChatGPT freely, while those in the comparison condition analyzed cases without the help of ChatGPT.
(3) Build arguments and debate. Students then built arguments by drawing on and integrating the ideas (e.g., dimensions, metrics, cases) that they, ChatGPT Persona, or ChatGPT (experimental conditions) had contributed from various disciplines. The arguments were to include an introduction, statement of fact, refutation, and conclusion. The students were encouraged to consider the arguments their opponent teams might raise. Students in the experimental conditions could use ChatGPT to prepare their debate scripts, while students in the control group generated arguments themselves. During the debate, the positive and negative sides took turns to speak, with each group represented by a speaker, as there were multiple groups on each side.
(4) Reflection. At the end of the course, students in the experimental groups wrote reflections about their experience using ChatGPT, while students in the control group wrote reflections about the debate.
Fig. 2 Content Board on Miro
3.3 Data collection procedures
The data collection procedures of this research comprised three main steps. (1) After the participants completed the interdisciplinary activity of the previous week, we collected 115 surveys on students' perceived interdisciplinary learning quality. These surveys were collected before the quasi-experiment and treated as pre-surveys. (2) During the two-hour in-class activity, we collected 253 posts generated by participants on the Miro boards: 139 posts by students in the ChatGPT condition, 65 posts by students in the ChatGPT Persona condition, and 49 posts by students in the non-ChatGPT condition. (3) After participants completed the quasi-experiment, we collected 103 surveys on students' perceived interdisciplinary learning quality, which were treated as post-surveys. After pairing (1) and (3), 87 pairs of pre- and post-surveys on students' perceived interdisciplinary learning quality were obtained.
The pre- and post-surveys were identical and consisted of three items: "I collaborated with my team to investigate and find an acceptable solution," "I integrated my ideas with my team to collectively generate a solution," and "The solution my team and I developed effectively meets our requirements." The items were answered on a 5-point Likert scale from "strongly disagree" to "strongly agree." These items were derived from the Integration subscale of the Team Collaboration Questionnaire (Cole et al., 2018). The three items demonstrated high internal consistency, with a Cronbach's alpha coefficient of 0.91.
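The reliability coefficient reported above follows the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). As a minimal sketch of that computation (the respondent data below are illustrative placeholders, not the study's actual survey responses):

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for a respondents-by-items score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)),
    using sample variances throughout.
    """
    k = len(item_scores[0])  # number of items (3 in this study's subscale)

    def var(xs):  # sample variance (denominator n - 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[i] for row in item_scores]) for i in range(k)]
    total_var = var([sum(row) for row in item_scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Illustrative 5-point Likert responses from four hypothetical respondents:
demo = [[4, 4, 4], [5, 5, 5], [3, 3, 3], [4, 5, 4]]
alpha = cronbach_alpha(demo)
```

With real data, a library routine (e.g., `pingouin.cronbach_alpha`) would normally be used; the point here is only to make the reported statistic concrete.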
3.4 Data analysis
We conducted the following analyses to answer the three research questions. First, we applied content analysis (Krippendorff, 2004) to examine the online posts using a self-developed interdisciplinary learning quality coding scheme. Subsequently, because the data lacked normal distributions and homogeneity of variances, we used the Multivariate Kruskal-Wallis test (a non-parametric substitute for MANOVA) and the Kruskal-Wallis test (a non-parametric substitute for ANOVA) to investigate the variations in students' demonstrated and perceived interdisciplinary learning quality across the ChatGPT Persona, ChatGPT, and Non-ChatGPT conditions.
Education and Information Technologies
3.4.1 Content analysis of interdisciplinary learning quality (RQ1)
To analyze participants' online posts, we developed an interdisciplinary learning
quality coding scheme based on the literature (Boix-Mansilla et al., 2009; Kidron
& Kali, 2023) and driven by our collected data. Through an extensive review of the
literature, rich discussions within the research team, and application to the collected
data, several key interdisciplinary learning dimensions emerged: (1) disciplinary
grounding (i.e., acquiring a solid grasp of fundamental concepts, theories, and
methodologies in multiple academic fields), (2) diversity (i.e., diverse disciplinary
perspectives and experiences), (3) cognitive advancement (i.e., addressing a phenomenon,
solving a problem, or developing innovative solutions or ideas), (4) integration (i.e.,
integrating knowledge from various disciplines, establishing connections across different
areas, and forming a comprehensive understanding), and (5) reflection (i.e., reflecting
on the limitations of their interdisciplinary work and strengthening it through critical
analysis). Table 1 presents the details of our interdisciplinary learning quality coding
scheme. Disciplinary grounding, diversity, integration, cognitive advancement, and
reflection were coded as ordinal variables with three levels. Taking disciplinary
grounding as an example, 0 represents no disciplinary grounding, whereas 2 denotes
deep grounding.
Two researchers coded all the Miro notes on the four dimensions other than reflection.
The reason is that we explicitly required students to reflect on their experience of
using ChatGPT rather than on the limitations and strengths of their interdisciplinary
work, in order to deepen our understanding of how ChatGPT could influence
interdisciplinary learning. The two researchers coded all posts independently and then
discussed them to reach a consensus on each post. To answer RQ1, after coding
agreements were reached on all Miro notes, we calculated the mean, median, and
standard deviation of the coding frequency for each interdisciplinary learning quality
dimension demonstrated in the Miro notes.
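As a sketch of this descriptive step, the mean, median, and standard deviation of the three-level ordinal codes can be computed per dimension. The code values below are invented for illustration and are not the actual coding results:

```python
import numpy as np

# Hypothetical ordinal codes (0-2) assigned to a handful of posts
codes = {
    "diversity":              [0, 1, 1, 2, 1, 0, 2],
    "disciplinary_grounding": [1, 1, 0, 1, 2, 1, 0],
    "integration":            [0, 0, 0, 1, 0, 0, 2],
    "cognitive_advancement":  [1, 2, 0, 1, 2, 1, 0],
}

for dim, values in codes.items():
    arr = np.array(values)
    print(f"{dim}: mean={arr.mean():.3f}, "
          f"median={np.median(arr):.0f}, sd={arr.std(ddof=1):.3f}")
```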
3.4.2 Comparing demonstrated interdisciplinary learning quality among ChatGPT,
ChatGPT Persona, and control conditions (RQ2)
As described above, we coded the diversity, disciplinary grounding, integration, and
cognitive advancement reflected in each note to represent students' demonstrated
interdisciplinary learning quality. Thereafter, because the demonstrated interdisciplinary
learning quality lacked a normal distribution and homogeneity of variances, we used
the ULT package (He et al., 2017; Maugoust, 2023) in R to perform a multivariate
Kruskal-Wallis test (a non-parametric substitute for MANOVA) to analyze the variation
in the four dimensions of demonstrated interdisciplinary learning quality across the
three conditions: ChatGPT, ChatGPT Persona, and Non-ChatGPT. If an overall
difference was detected for a dimension of demonstrated interdisciplinary learning
quality, we further conducted Dunn-Bonferroni tests using the FSA package (Ogle et
al., 2023) in R to examine pairwise differences across the three conditions, because
the test is appropriate for non-parametric data and cases with limited sample sizes.
The Dunn test (Dunn, 1961), with Bonferroni corrections,
Table 1 The demonstrated interdisciplinary learning quality coding scheme

Disciplinary grounding
- No grounding. Definition: no disciplinary knowledge (terms, examples) nor methods. Example: "I want to make more money and profits"
- Partial grounding. Definition: only disciplinary knowledge (terms, examples) or methods. Example: "Can use standard deviation to determine extra logistics or manpower required if needed"
- Deep grounding. Definition: both disciplinary knowledge and methods. Example: "(1) Create a user profile to gain demographic details and interests (quiz-format, pick and choose from a list) (2) Recommend suitable courses and materials based on the user's information. (3) Include a reward system to encourage users to continue staying on the app and use it (e.g., daily streak, tiering, monetary and non-monetary rewards for redemption)"

Diversity
- No disciplinary perspective. Definition: number of disciplines = 0. Example: "AI has a more profound impact. shape various areas of our lives. information is given but a.i. is the one that process"
- Single disciplinary perspective. Definition: number of disciplines = 1. Example: "AI Environmental: Can reduce waste after learning the most efficient impact. Negative: If takes too long to find efficiency, may waste more energy instead. Automation replaces human, require more energy"
- Multiple disciplinary perspectives. Definition: number of disciplines > 1. Example: "impacts of AI and Internet: - AI can have both negative and positive impacts in several aspects such as economic, political, environmental etc. such as loss of jobs from AI taking over human tasks, reducing carbon footprints since AI can help to answer our questions more easily, increasing reliance on technology etc."

Cognitive advancement
- Simple claim. Definition: definition of terms or concepts, or factual information; an opinion without any elaboration or justification, indicating a shared or different opinion or understanding. Example: "ICC modules that are relevant or could be useful to the course they study."
- Elaboration. Definition: partial explanations, reasons, relationships, or mechanisms mentioned without explanation in detail; or elaborations of terms, phenomena. Example: "Post survey can be done by students for feedbacks regarding how much they find the ICC materials useful so the algorithm can be improved be developers"
- Explanation. Definition: reasons, relationships, or mechanisms clarified in detail. Example: "For decomposition, we can break down the desired app into different functionalities such as profiles of users, user interface, tracking of user behaviour, algorithms to recommend relevant courses based on their preferences, also about what technologies to use: How will we store our data, is it through a cloud computing platform? What programming language will we use? Java? R? Python? How will we design the app, will it be a mobile app or a web app?"

Integration
- No integration. Definition: no connecting nor comparing ideas from different disciplines. Example: "Opposing argument. AI is stronger than the internet. - AI cannot extend its reach without the internet, like chair without screws. -Hence Internet being the foundation has a more profound impact."
- Simple integration. Definition: connecting or comparing ideas from different disciplines with no or brief elaboration. Example: "Dimensions of 'Impact on our society' (1) Wealth (2) Applications across industry (3) Environmental Impact"
- Deep integration. Definition: connecting or comparing ideas with judgments/examples/details, and it is clear how this synthesis can help create coherence of multiple disciplines. Example: "However, AI still has a more profound impact as it has the potential to affect the livelihoods of people. Although the internet has transformed the way we work, AI has the potential to eliminate jobs due to automation and the rise of robots due to this. e.g., logistics, service sector, hence, despite the economic opportunities provided by the internet and transformations in making work more convenient, a.i. is more detrimental as it affects the livelihoods of the people and is more impactful in enacting change in people to learn new skills. People feel more compelled to learn new skills due to a.i. rather than the internet, as a bid to not get replaced."

Reflection
- No reflection. Definition: students do not reflect on the limitations and strengths of their interdisciplinary work.
- Simple reflection. Definition: students reflect on only the limitations or strengths of their interdisciplinary work. Example: "LIMITATION: Some students might not take cyber security seriously, as they have never been involved in a cyber security incident. Thus, they might ignore the design strategies"
- Deep reflection. Definition: students reflect on both the limitations and strengths of their interdisciplinary work. Example: "Social Media Advantage: 1. lots of students are on social media, able to get the message across 2. Not as costly as other strategies Disadvantage: 1. Students can choose to scroll past, ignore the post"
is an effective method for identifying distinct differences between specific groups
after detecting general differences at the group level.
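The study ran these post hoc comparisons with the FSA package in R. As an illustration only, the Dunn statistic with a Bonferroni adjustment can be sketched in Python; this is a simplified version that ranks ties with midranks but omits the tie correction a full implementation would apply, and the group data are invented:

```python
from itertools import combinations
from math import sqrt

from scipy.stats import norm, rankdata

def dunn_bonferroni(groups: dict[str, list[float]]) -> dict:
    """Pairwise Dunn tests with Bonferroni-adjusted two-sided p-values.

    Simplified sketch: no tie correction is applied to the variance term.
    """
    labels = list(groups)
    pooled = [x for lab in labels for x in groups[lab]]
    ranks = rankdata(pooled)  # midranks for ties
    n_total = len(pooled)

    # mean rank and size per group (groups dict preserves insertion order)
    mean_rank, sizes, i = {}, {}, 0
    for lab in labels:
        n = len(groups[lab])
        mean_rank[lab] = sum(ranks[i:i + n]) / n
        sizes[lab] = n
        i += n

    m = len(labels) * (len(labels) - 1) // 2  # number of comparisons
    results = {}
    for a, b in combinations(labels, 2):
        se = sqrt(n_total * (n_total + 1) / 12 * (1 / sizes[a] + 1 / sizes[b]))
        z = (mean_rank[a] - mean_rank[b]) / se
        p = 2 * (1 - norm.cdf(abs(z)))          # two-sided p-value
        results[(a, b)] = (z, min(1.0, p * m))  # Bonferroni adjustment
    return results
```

Each pair of conditions maps to a (Z, adjusted p) tuple, mirroring the columns reported in Table 3.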
3.4.3 Comparing perceived interdisciplinary learning quality among ChatGPT,
ChatGPT Persona, and control conditions (RQ3)
For RQ3, the pre- and post-surveys were used to analyze perceived interdisciplinary
learning quality. Similarly, given the absence of a normal distribution and homogeneity
of variances in the perceived interdisciplinary learning quality, we used R to conduct
the Kruskal-Wallis test to examine the variation in perceived interdisciplinary learning
quality across the three conditions. The Kruskal-Wallis test is a well-adopted
non-parametric substitute for ANOVA that is suitable when data do not meet the
assumptions of normality and homogeneity of variances (Kruskal & Wallis, 1952).
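While the study used R, an equivalent Kruskal-Wallis test can be sketched in Python with SciPy, together with the η² effect-size estimate for the H statistic, η² = (H − k + 1)/(N − k) for k groups and N observations. The survey scores below are invented and are not the study's data:

```python
from scipy.stats import kruskal

# Hypothetical 5-point survey scores per condition (not the study's data)
chatgpt         = [4, 5, 3, 4, 4, 5, 3]
chatgpt_persona = [3, 4, 4, 5, 3, 4, 4]
non_chatgpt     = [4, 3, 5, 4, 3, 4, 5]

h_stat, p_value = kruskal(chatgpt, chatgpt_persona, non_chatgpt)

# Eta-squared effect size for the Kruskal-Wallis H statistic:
# eta2 = (H - k + 1) / (N - k), with k groups and N total observations
k = 3
n = len(chatgpt) + len(chatgpt_persona) + len(non_chatgpt)
eta_squared = (h_stat - k + 1) / (n - k)
print(f"H = {h_stat:.2f}, p = {p_value:.2f}, eta2 = {eta_squared:.3f}")
```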
4 Results
4.1 RQ1: Undergraduate students’ demonstrated interdisciplinary learning
quality in the learning process
Table 2 presents the descriptive statistics of the content analysis results of the 253
posts in terms of interdisciplinary learning quality, namely diversity, disciplinary
grounding, integration, and cognitive advancement. The mean of diversity was 0.897
(SD = 0.705), and most notes (69.6%) demonstrated viewpoints from at least one
discipline. Figure 3 depicts a bar chart of the various disciplines discussed when
participants shared their perspectives on how AI and the Internet impact society on
the Miro board. The exploration covered twenty disciplines, with business, sociology,
and computer science standing out as the three most frequently discussed subjects.
Regarding disciplinary grounding (M = 0.708, SD = 0.481), most posts were categorized
as partial grounding, suggesting that participants used only disciplinary knowledge
(terms, examples) or methods in their notes rather than both. Although students
discussed various disciplines, their performance on integration was relatively low
(M = 0.229, SD = 0.449), and 80.6% of the posts were coded as "no integration". In
contrast, on the cognitive advancement dimension (M = 0.854, SD = 0.820), students
showed more elaboration and explanation (58.1%), indicating that they could elaborate
on and explain their arguments.
Table 2 Descriptive statistics of the diversity, disciplinary grounding, integration, and cognitive advancement

                     Diversity   Disciplinary grounding   Integration   Cognitive advancement
N                    253         253                      253           253
Mean                 0.897       0.708                    0.229         0.854
Median               1           1                        0             1
Standard deviation   1.222       0.481                    0.499         0.820
4.2 RQ2: The effect of using ChatGPT and ChatGPT Persona on undergraduates'
demonstrated interdisciplinary learning quality

To examine the effects of using ChatGPT on students' demonstrated interdisciplinary
learning, we conducted multivariate Kruskal-Wallis (MKW) tests on the diversity,
disciplinary grounding, integration, and cognitive advancement of Miro notes across
the ChatGPT, ChatGPT Persona, and Non-ChatGPT conditions. The result of the MKW
test (χ² = 26.24, p < 0.05, η² = 0.02) suggests a significant difference in demonstrated
interdisciplinary learning among the three conditions. To further examine the
differences across the three conditions, we performed a post hoc test (Dunn's test)
with Bonferroni corrections. As Table 3 shows, there were no significant differences
in diversity, integration, or cognitive advancement across the three conditions.
However, a significant difference was observed in disciplinary grounding: posts
written in the ChatGPT condition exhibited a higher level of disciplinary grounding
than those in the ChatGPT Persona and Non-ChatGPT conditions. Figure 4 presents
an interval plot that visualizes this difference.
4.3 RQ3: The effect of using ChatGPT and ChatGPT Persona on undergraduates'
perceived interdisciplinary learning quality

Table 4 presents the Kruskal-Wallis test results on students' pre- and post-surveys
of perceived interdisciplinary learning quality among the ChatGPT, ChatGPT Persona,
and Non-ChatGPT conditions. There was no significant difference across the three
conditions in the pre-test (χ² = 2.27, p = 0.32, η² = 0.03), indicating that the three
conditions (i.e., ChatGPT, ChatGPT Persona, and Non-ChatGPT) were comparable
before the quasi-experiment. The post-test result (χ² = 0.65, p = 0.72, η² = 0.01)
indicates that undergraduate students in the three conditions did not perceive
significant differences in interdisciplinary learning quality after the quasi-experiment.
Fig. 3 Bar chart of the frequency of different disciplines demonstrated in Miro posts
Fig. 4 Interval plot of disciplinary grounding in the ChatGPT, ChatGPT Persona, and Non-ChatGPT conditions
Table 3 The results of Dunn's test

Demonstrated interdisciplinary learning   Comparison                      Z       p (Bonferroni adjusted)
Diversity                                 ChatGPT - ChatGPT Persona       1.20    0.69
                                          ChatGPT - Non-ChatGPT           1.52    0.39
                                          ChatGPT Persona - Non-ChatGPT   0.38    1.00
Disciplinary grounding                    ChatGPT - ChatGPT Persona       2.81    0.02
                                          ChatGPT - Non-ChatGPT           2.90    0.01
                                          ChatGPT Persona - Non-ChatGPT   0.32    1.00
Integration                               ChatGPT - ChatGPT Persona       -1.50   0.40
                                          ChatGPT - Non-ChatGPT           -0.86   1.00
                                          ChatGPT Persona - Non-ChatGPT   0.44    1.00
Cognitive advancement                     ChatGPT - ChatGPT Persona       0.24    1.00
                                          ChatGPT - Non-ChatGPT           -1.94   0.16
                                          ChatGPT Persona - Non-ChatGPT   -1.90   0.17
5 Discussion
This study investigated the effect of using ChatGPT on undergraduate students'
demonstrated and perceived interdisciplinary learning quality. We conducted a
quasi-experiment in which we assigned 130 undergraduate students to ChatGPT,
ChatGPT Persona, and Non-ChatGPT conditions. Online Miro notes and pre- and
post-surveys were collected to examine demonstrated and perceived interdisciplinary
learning quality. Regarding demonstrated interdisciplinary learning quality, we coded
the notes along four dimensions, namely diversity, disciplinary grounding, integration,
and cognitive advancement, using a coding scheme. The descriptive analysis of the
coding results suggests relatively weak integration in students' notes across the three
conditions. The multivariate Kruskal-Wallis test indicates that students in the ChatGPT
condition exhibited higher disciplinary grounding than those in the other two
conditions. We did not find a significant difference in students' perceived
interdisciplinary learning quality across the three conditions. These findings are worth
further discussion.
5.1 Students' demonstrated interdisciplinary learning quality
We found that students' performance on integration was relatively low, possibly for
two reasons. First, integration is a complex and challenging task that requires students
to synthesize knowledge from multiple disciplines and apply it to a specific problem
(Biggs & Collis, 1982). This process is complicated, as it requires a deep understanding
of the disciplines involved and the ability to make connections among them. Theories
of interdisciplinary learning suggest that achieving integration requires a high level
of cognitive and metacognitive skills (Boix-Mansilla et al., 2009; Kidron & Kali,
2023), which undergraduate students may not have fully developed. Second, this study
analyzed student learning using relatively separate Miro notes, usually with short text
lengths. Students might need to first contribute diverse disciplinary ideas before
integrating them. In this case, it is natural for students to integrate ideas in only a
small proportion of notes, resulting in a lower mean score on this dimension. It is
also possible that the brevity of the notes limited students' ability to demonstrate their
integration skills, or that they prioritized contributing more notes themselves (we used
stickers with different colors and shapes to mark the notes contributed by students
from different disciplines) instead of making strong efforts to integrate various notes.
Future interdisciplinary learning design should emphasize the importance of integration
and the quality of notes rather than their quantity, scaffold students to integrate
disciplinary ideas, and provide time for them to do so.
Table 4 Kruskal-Wallis test on the pre- and post-perceived interdisciplinary learning quality

                                                    χ²     df   p      η²
Pre-perceived interdisciplinary learning quality    2.27   2    0.32   0.03
Post-perceived interdisciplinary learning quality   0.65   2    0.72   0.01
5.2 ChatGPT on demonstrated and perceived interdisciplinary learning quality
There was no significant difference in perceived interdisciplinary learning quality
across the three conditions, suggesting that ChatGPT did not notably impact students'
perceived interdisciplinary learning experiences. One possible reason for this result
is that the interdisciplinary learning quality survey measured an individual's perception
of the extent to which their team collaboratively researched, integrated ideas, and
developed effective solutions that met their requirements, rather than their perceived
effectiveness of ChatGPT for their interdisciplinary learning quality. Furthermore,
group collaboration dynamics might play a more prominent role in influencing their
perceived interdisciplinary learning quality. Moreover, it may not be sufficient to
merely make ChatGPT available without providing support to help students effectively
integrate ChatGPT into collaborative interdisciplinary learning contexts. Further
research may examine how students use ChatGPT during collaborative learning and
how to support their collaboration with peers and ChatGPT, extending research to
the human-human-AI context.
Regarding demonstrated interdisciplinary learning quality, a significant difference
was observed in disciplinary grounding across the three conditions. Posts written in
the ChatGPT condition exhibited a higher level of disciplinary grounding than those
in the ChatGPT Persona and Non-ChatGPT conditions. As Redshaw and Frampton
(2014) suggested, students require various disciplinary resources in the
interdisciplinary learning process. We think that the natural language processing
capabilities of ChatGPT might have enabled it to efficiently provide relevant
disciplinary information to students during the learning process, helping them to better
identify and apply disciplinary knowledge. Similarly, some research (e.g., Kasneci et
al., 2023; McBee et al., 2023; Prentzas & Sidiropoulou, 2023) suggests GPT's
proficiency in playing different roles in aiding students with writing tasks across
diverse disciplines. Furthermore, Iku-Silan et al. (2023) indicated that AI chatbots
enhance students' ability to organize knowledge, thereby effectively facilitating
interdisciplinary learning outcomes.
Several reasons may explain why students in the ChatGPT condition outperformed
those in the ChatGPT Persona condition in disciplinary grounding. First, the ChatGPT
condition did not limit information from various disciplines, whereas the ChatGPT
Persona condition might constrain the domain knowledge and perspectives to a single
discipline once the persona was defined. Second, the persona elements might distract
students from working on the task and developing their arguments. Instead, students
could focus on the entertainment value of the tool. For example, several groups chose
to name the persona "Dr. X," the title of their instructor, and one group tried various
personas such as bank robbers and prisoners, which do not seem very relevant to the
interdisciplinary learning task. The second explanation is further supported by the
fact that posts in the ChatGPT Persona condition were shorter (M = 39.39 words)
than those in the other two conditions (ChatGPT: M = 48.57 words; control:
M = 46.29 words). This suggests that students might have spent too much time
"playing" with the persona rather than delving into the content of the task. In a similar
vein, some studies (e.g., Boeve-de Pauw et al., 2019; Falk, 1983) found that the
impact of the novelty of the learning environment on learning outcomes does not
follow a linear pattern. A lack of novelty in a learning setting may lead to boredom,
whereas excessive novelty can be distracting or induce anxiety. Considering that the
study was implemented in March 2023, when ChatGPT was new to the field of
education, and that the majority of the participants were first-year students who might
not have been familiar with the concept of a persona, asking them to define personas
first and then chat with the ChatGPT personas for their interdisciplinary task might
have been too novel and thus distracted them from the task itself. Further research
may scaffold students to define and communicate with ChatGPT personas for their
interdisciplinary learning and study the impacts of ChatGPT personas once students'
perceived novelty fades.
Though the effect size of the disciplinary grounding difference among conditions is
small to medium (η² = 0.02), it conveys a meaningful and important consequence of
ChatGPT's impact in complementing students' disciplinary knowledge. The small
effect size could be attributed to the relatively short intervention duration, spanning
just one two-hour class session, and the quasi-experimental design. Learning outcomes
are a product of sustained effort (Ruiz-Primo et al., 2002), and the development of
students' cognitive abilities is complicated and accumulates over time (Götz et al.,
2022). Researchers have begun to recognize the importance of small effect sizes and
advocate caution about demanding large effect sizes, understanding that small effect
sizes are to be expected in field experiments (Götz et al., 2022; Kraft, 2020). Future
research should replicate this study with a longer duration.
The difference between classes using ChatGPT and those not using ChatGPT was
insignificant in higher-level cognitive skills such as integration and cognitive
advancement. Similarly, some studies show that ChatGPT is less capable of directly
helping students with higher-level cognitive skills, such as synthesis and evaluation
(e.g., Elsayed, 2023; Ghazali et al., 2024; Stutz et al., 2023). Accordingly, Kostka
and Toncelli (2023) advocated that AI should be integrated into education in ways
that assist students in achieving higher-level skills. Furthermore, although ChatGPT
is good at providing extensive knowledge from various disciplines, it is not specifically
designed for education. Educators need to consider how to support students in
analyzing, verifying, and evaluating ChatGPT's responses and integrating valid
responses with other reliable sources to solve problems and address issues.
The course content, instructional approach, and activity design may have played an
important role in shaping students' learning. Using a debate format has both advantages
and disadvantages for promoting interdisciplinary learning. On the one hand, debates
can encourage students to engage in critical thinking, research, and collaboration as
they prepare arguments and consider multiple perspectives on a complex issue. On
the other hand, the competitive nature of debates and time limitations may limit
students' ability to fully explore and integrate knowledge from multiple disciplines,
as they may focus more on winning the argument with catchy facts than on developing
a deep understanding of the topic. Moreover, students' prior knowledge, motivation,
and engagement with course material (e.g., Computer Science students and Psychology
students tend to have different prior knowledge of and interests in AI) may also have
influenced their learning, independent of the use of ChatGPT. These factors need to
be further considered and included in future studies.
5.3 Implications and limitations
This study provides practical and empirical implications for adopting ChatGPT and
designing interdisciplinary learning experiences for students in higher education.
Practically, the finding that using ChatGPT helped undergraduate students deepen
their disciplinary grounding suggests the possibility of using ChatGPT to complement
undergraduate students' lack of disciplinary knowledge and approaches. However, it
should also be noted that we did not find significant differences in integration or
cognitive advancement between the conditions using ChatGPT and those not using
it. These results indicate that, as researchers (Cooper, 2023; Zhu et al., 2023) have
suggested, ChatGPT will not provide a "one-size-fits-all" solution for educational
issues such as interdisciplinary learning; instead, when cultivating relatively
higher-level skills such as integrating ideas, critical thinking, and making cognitive
advancements, students' efforts and teachers' deliberate activity design remain critical.
While taking advantage of ChatGPT's benefits, students and teachers must be cautious
of its potential threats to students' critical thinking and deep understanding and
learning (Dwivedi et al., 2023). In education, future research needs to examine how
to use ChatGPT, in what contexts, and for what purposes to optimize its benefits and
avoid its risks, or at least make the potential risks transparent and manageable. For
instance, one researcher, aware of ChatGPT's limitations in providing accurate
information, asked secondary school students to critically review ChatGPT's
explanations of physics concepts or use ChatGPT to generate concept quizzes for
self-assessment (Bitzenbauer, 2023). Such approaches could promote students'
awareness that ChatGPT is not the "ultimate epistemic authority" in providing
disciplinary knowledge, and that critical thinking is still needed for their learning
(Cooper, 2023).
Empirically, we developed an interdisciplinary learning quality coding scheme that
can be used in future research to analyze students' interdisciplinary learning quality,
patterns, and dynamics. For instance, researchers may use the coding scheme to
examine how students' ideas and arguments build upon each other and evolve to
develop a more comprehensive interdisciplinary understanding. Furthermore, how we
integrated ChatGPT into our instructional design and developed the coding scheme
can inspire technical and pedagogical designs for using ChatGPT and other generative
AI tools to support interdisciplinary learning. For instance, future research may design
an intelligent evaluation tool or learning analytics that monitors students'
interdisciplinary learning quality in real time to promote their reflections on the
limitations of their design solutions and support them in making action plans for
improving interdisciplinary learning.
Despite its implications and significance, this exploratory study has several limitations.
First, we measured students' demonstrated interdisciplinary learning quality by
analyzing their Miro notes, which may not fully capture the complexity or depth of
students' interdisciplinary contributions. Although students prepared for the debate
in groups, our analysis using each Miro note as the unit of analysis might have limited
our ability to best represent how students worked together to integrate knowledge
from multiple disciplines and develop their interdisciplinary understanding over time.
Further research may explore ways to better collect and segment students' learning
process data (e.g., considering the notes generated in one task as a whole) to fully
understand their interdisciplinary learning process. Second, the duration of this
quasi-experimental study was relatively short, to avoid the unfairness of not allowing
some classes to use ChatGPT. Future research may consider replicating this study
over a longer period to examine whether the effect found in this study lasts and
whether there are long-term impacts of using ChatGPT on students' interdisciplinary
learning. Doing so would also help address the potential novelty issue described
above. Third, although we expected students to integrate ideas from multiple
disciplines to prepare solid arguments and counterarguments for the debate activity,
they might have used a collection of facts from singular disciplines as arguments,
especially when time was limited. Future research may consider developing more
authentic and complex interdisciplinary activities that cannot be resolved without
integrating knowledge and skills from different disciplines.
6 Conclusion
This study explored the application of ChatGPT in interdisciplinary learning,
addressing the challenge of limited disciplinary knowledge among junior undergraduate
students. Through a quasi-experiment involving 130 participants in ChatGPT Persona,
ChatGPT, and Non-ChatGPT conditions, we assessed ChatGPT's effects on students'
demonstrated and perceived interdisciplinary learning quality. The results revealed
that students across the three conditions demonstrated weak performance in integrating
knowledge from different disciplines, and that students in the ChatGPT condition
outperformed those in the other two conditions in disciplinary grounding. Our findings
emphasize the need for additional support to enhance students' knowledge integration
abilities. Moreover, the study suggests that ChatGPT's effectiveness may vary across
different aspects of interdisciplinary learning. It is essential to recognize that
higher-level skills such as integration, critical thinking, and deep learning still require
students' effort and teachers' intentional design, indicating the supplementary nature
of ChatGPT in specific learning areas.
While we believe this study provides valuable insights into introducing ChatGPT in
undergraduate interdisciplinary learning, it has limitations in its duration and unit of
analysis. Future studies should consider extending the length of the study, including
other potential factors that may play a role in the learning process (e.g., students'
prior knowledge, perceived novelty of the design, motivation, and engagement with
course material), and collecting more diverse data (e.g., group reports, academic
performance, and student interviews) to further understand how ChatGPT influences
students' interdisciplinary learning.
Acknowledgements The authors are indebted to the students who participated in this study.
Funding This study was supported by the NTU Edex Teaching and Learning Grants (Grant No. NTU
EdeX 1/22 ZG).
Data availability Because of confidentiality agreements and ethical concerns, the data used in this study
will not be made public. These data will be made available to other researchers on a case-by-case basis.
1 3
Education and Information Technologies
Declarations
Competing interests The authors declare no potential conflict of interest in the work.
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps
and institutional affiliations.
Authors and Affiliations

Tianlong Zhong¹ · Gaoxia Zhu² · Chenyu Hou¹ · Yuhan Wang² · Xiuyi Fan³

Gaoxia Zhu
gaoxia.zhu@nie.edu.sg

Tianlong Zhong
tianlong001@e.ntu.edu.sg

Chenyu Hou
CHENYU004@e.ntu.edu.sg

Yuhan Wang
NIE22.WY2447@e.ntu.edu.sg

Xiuyi Fan
xyfan@ntu.edu.sg

1 Graduate College, Nanyang Technological University, Singapore, Singapore
2 National Institute of Education (NIE), Nanyang Technological University, Singapore, Singapore
3 Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore