Computers and Education: Articial Intelligence 6 (2024) 100203
Available online 12 January 2024
2666-920X/© 2024 The Author. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-
nc-nd/4.0/).
Investigating undergraduate students' perceptions and awareness of using ChatGPT as a regular assistance tool: A user acceptance perspective study
Hayder Albayati
Global Convergence Management Department, Endicott College, Woosong University, 171 Dongdaejeon-ro, Dong-gu, Daejeon, Republic of Korea
ARTICLE INFO
Keywords:
ChatGPT
TAM
Undergraduate student behavior
Privacy
Security
Social inuence
And trust
ABSTRACT
This study examines the factors inuencing user acceptance of ChatGPT as a daily reference tool and assesses
varying levels of user awareness. It aims to offer valuable insights into the potential benets and challenges of
implementing ChatGPT in an educational context. To achieve this objective, we employ an integrated model
comprising the Technology Acceptance Model (TAM) and four novel external constructs: Privacy, Security, Social
Inuence, and Trust. This proposed model delivers an in-depth understanding of user acceptance by simulta-
neously measuring diverse user perspectives. Adopting a quantitative research approach, the study surveys
undergraduate students regarding their use of ChatGPT. The results contribute to bridging the gap between
technology and users, shedding light on usersactual experiences and considerations regarding AI-based tools.
Specically, the study is expected to reveal the signicant inuence of external factors on user acceptance of
ChatGPT and to provide a set of recommendations for educational institutions, policymakers, and developers.
This study aids the developers of ChatGPT and similar technologies by offering insights into how to design and
enhance more user-friendly and secure systems that better meet users needs and expectations.
1. Introduction
Intelligent software and hardware, commonly known as intelligent
agents, are increasingly integrated into our daily lives due to the rising
use of Articial Intelligence (AI). These agents are capable of performing
a wide range of tasks, from simple manual labor to complex operations.
One of the most prevalent examples of AI systems is the chatbot, which is
also an elementary yet widespread example of intelligent Human-
Computer Interaction (HCI) (Bansal & Khan, 2018).
Although chatbots can simulate human conversation and provide
entertainment, their purpose extends beyond that (Iku-Silan et al.,
2023). They are utilized in various elds such as education, information
retrieval, business, and e-commerce, offering useful services (Shawar &
Atwell, 2007). OpenAI, an AI company, unveiled ChatGPT, a new
version of chatbots. ChatGPT is a large language model (LLM) that uses
machine learning to learn from vast datasets of text and can produce
highly sophisticated and intelligent writing. This groundbreaking technology has significant implications for both science and society. Researchers and other professionals have already leveraged ChatGPT and other LLMs to compose essays and speeches, condense literature, refine papers, detect research gaps, and even write computer code, including
statistical analyses (van Dis et al., 2023). ChatGPT was launched on
November 30, 2022, and has since garnered signicant attention,
attracting over one million subscribers within its rst week of release
(Baidoo-Anu & Owusu Ansah, 2023).
With the rapid increase in ChatGPT usage and interest in its appli-
cations, students are eager to take advantage of this tool in many areas.
Undergraduate students primarily seek help with academic assignments
for clarication of concepts or assistance with their work. It is useful for
understanding academic material (van Dis et al., 2023). Some students
use ChatGPT to obtain general information on various topics, serving as
a quick reference tool for their studies or broader knowledge elds (Cao
et al., 2023). Students interact with ChatGPT to gain insights into
complex topics, solve academic challenges, or receive additional explanations (Lund & Wang, 2023). They use ChatGPT for easy and quick access to information, benefiting from its availability 24/7 for immediate assistance (Baidoo-Anu & Owusu Ansah, 2023).
ChatGPT is built on a series of GPT models with growing numbers of parameters: GPT-1 has 117 million parameters, GPT-2 has 1.5 billion, and GPT-3 has 175 billion; GPT-4 is reported to be far larger still, although OpenAI has not disclosed its parameter count. This increase in scale has enabled GPT-4 to achieve unprecedented levels of performance and generate text closely resembling human writing (OpenAI, 2023). ChatGPT is a powerful tool that offers several benefits for researchers. It assists in
E-mail address: Hayder1111@wsu.ac.kr.
https://doi.org/10.1016/j.caeai.2024.100203
Received 30 July 2023; Received in revised form 8 January 2024; Accepted 8 January 2024
Computers and Education: Articial Intelligence 6 (2024) 100203
2
identifying relevant literature by providing summaries or lists of rele-
vant publications based on topics or keywords. ChatGPT also generates
text in specic styles or tones, facilitating the drafting of research papers
and other documents (Lund & Wang, 2023). Additionally, it can analyze
large amounts of text data, such as social media posts or news articles,
and support research comprehension in multiple languages through
machine translation (Rudolph et al., 2023). Researchers can stay
updated on the latest developments in their eld using ChatGPTs
automated summarization capabilities. Furthermore, by providing
domain-specic answers, ChatGPT serves as a valuable tool for scholars
seeking prompt and efcient assistance (Lund & Wang, 2023).
In summary, ChatGPT's functionality showcases the diverse ways it can be used to support and enhance the research activities of researchers across different disciplines. The tool's adaptability and range of applications make it a valuable asset for researchers seeking assistance at different stages of their work (Frieder et al., 2023; George & George, 2023; Haleem et al., 2022; Kocoń et al., 2023). Here are the most commonly used functions:
Provide summaries of texts and literature.
Provide explanations, clarifications, and supplementary information on complex research topics.
Improve literature searches by refining search queries, suggesting alternative keywords, and providing additional context.
Create interactive learning modules through educational content, quizzes, and games.
Provide suggestions for improving language and coherence.
Provide machine translation capabilities in multiple languages.
Create text with a certain tone that matches the human writing style.
Collaboratively generate ideas, outlines, or rough drafts.
Analyze large amounts of data, including social media posts or news articles.
2. Organization of this paper
To provide a comprehensive understanding, the paper is organized as
follows: Section 3 encompasses literature reviews; Section 4 elucidates
the Technology Acceptance Model (TAM); Section 5 introduces the hy-
potheses and model development; Section 6 outlines the research
methodology; Section 7 describes the results and discussion; and Section
8 covers the conclusion, limitations, and future work.
3. Literature review
AI-generated text profoundly impacts every sector, elevating expectations regarding AI capabilities and shaping prospects for technology development. It is crucial to consider the advantages and disadvantages of AI in human life and the future. This research aims to spotlight some of these aspects, focusing on factors influencing the current usage of generative AI technology and exploring potential developments in the near future. Some research suggests that employees' conviction regarding the personal usefulness of chatbots is crucial: intrinsic motivation among employees positively impacts their intention to use them (Brachten et al., 2021).
ChatGPT can be useful in a wide range of elds and applications as an
AI language model. Here are some examples:
Customer service: ChatGPT can provide automated customer service and support through chatbots, helping businesses save time and resources by handling common inquiries and requests without human intervention.
Education: ChatGPT can offer personalized learning experiences by answering student questions and providing assignment feedback. It can also generate educational content and lesson plans.
Healthcare: ChatGPT can assist with medical diagnoses and treatment recommendations, provide patients with information about their health conditions, and answer questions about medical procedures.
Marketing: ChatGPT can generate personalized marketing content, such as product descriptions and promotional messages. It can also provide customer insights and analytics to help businesses optimize their marketing strategies.
Finance: ChatGPT can offer financial advice and investment recommendations to clients. It can also generate financial reports and forecasts based on market trends and data analysis.
ChatGPT can be useful in any eld that involves language processing
and communication. Its versatility and adaptability make it a valu-
able tool for businesses and organizations across a wide range of
industries. However, there are limitations to ChatGPTs perfor-
mance, mostly related to privacy, security, ethics, transparency, and
unintended consequences. To address these limitations, it is impor-
tant to implement appropriate security measures, ensure trans-
parency in how ChatGPT operates, and carefully consider the
potential impacts of its use (Baidoo-Anu & Owusu Ansah, 2023;
Dwivedi et al., 2023; Eloundou et al., 2023; Lund & Wang, 2023;
Rudolph et al., 2023). Here are a few factors that may expose or
interrupt ChatGPTs effectiveness:
Privacy: ChatGPT has been trained on vast amounts of text data, some of which may be private or sensitive. There is a risk that personal or sensitive information could be inadvertently exposed through its use or in the event of data breaches.
Security: ChatGPT may be susceptible to security risks like hacking, malware attacks, or other cyber threats. Without proper management of these risks, sensitive data could be lost or ChatGPT's functionality compromised.
Ethics: ChatGPT may inadvertently reflect biases in the generated text or be misused for malicious purposes, raising ethical concerns. Conversely, it can also prompt ethical discussions by producing thought-provoking content on moral issues and highlighting ethical dilemmas.
Transparency: The inner workings of ChatGPT may be complex and opaque, leading to a lack of transparency in its decision-making processes. This complexity can make it challenging to discern why certain responses are generated or to identify and rectify errors or biases.
Unintended consequences: The use of ChatGPT might lead to unintended outcomes, such as perpetuating biases or reinforcing harmful stereotypes. It is crucial to monitor and assess ChatGPT's outputs diligently to prevent any detrimental effects.
Additionally, it is crucial to consider the acceptability of this technology, particularly from the user's perspective. There is a potential risk that automation may lead to a sense of personal detachment, resulting in a reluctance to embrace this technology (Patel & Lam, 2023). To address the potential drawbacks of ChatGPT, we propose conducting a study to measure undergraduate students' acceptance and awareness levels of using ChatGPT. This research aims to gain insights into the effectiveness of ChatGPT in supporting their learning process, providing additional resources, or facilitating academic tasks.
From a social science perspective, many users of ChatGPT have
developed an interest in uncovering the background and secrets behind
its impressive performance (Cao et al., 2023). The responses generated
by ChatGPT exhibit a high level of concordance, making it easy for human learners to comprehend the internal language, logic, and relational flow presented in the explanation text (Kung et al., 2023).
ChatGPT has attracted both investors and organizations, yet it is not
seen as a replacement for humans. Instead, it is viewed as a tool that can
enhance productivity and assist in fullling higher-order needs (Dwivedi
et al., 2023). It has quickly become popular among both casual users and
professionals, streamlining tasks and responsibilities, and making them
easier and faster to complete (Dwivedi et al., 2023).
However, some social scientists argue that the use of generative AI,
like ChatGPT, may supplant human researchers and the unique skills
they bring to the research process, including critical thinking, creativity,
and empathy (Raman et al., 2023; Whitford, 2022). This could lead to a
reduction in the quality and depth of social science research, as well as a
lack of human oversight and accountability. Some view it as an
untrusted assistant and an unexpected technology (Albayati et al., 2021,
2023; Dwivedi et al., 2023; Mollick & Mollick, 2022). ChatGPT is criticized for lacking adequate privacy and security measures at certain
levels and for conducting interactions with humans openly without
appropriate control. Numerous studies have raised concerns about the
potential for ChatGPT to expose users' personal information and search
content (Dwivedi et al., 2023; Kung et al., 2023). In addition to privacy
and security issues, ChatGPT has also been criticized for its lack of
transparency in operations. Users are often uncertain about how their
information is being collected, stored, and used, which can erode trust
(Cao et al., 2023; Chandra et al., 2022; Lund & Wang, 2023). Moreover,
the ethical implications of ChatGPT's interactions, such as the potential
to deceive or manipulate users, particularly those who are vulnerable or
emotionally distressed, have also been called into question (Cao et al.,
2023; Dwivedi et al., 2023; Rudolph et al., 2023).
Technology acceptance has been a subject of study for decades, with
numerous research papers utilizing the Technology Acceptance Model
(TAM) to explore various aspects of technology adoption among users
(Folkinshteyn & Lennon, 2016; Gefen et al., 2003b; Venkatesh & Davis,
2000; Yu et al., 2005). TAM is a widely recognized theoretical framework that elucidates how users come to accept and use new information technologies. Initially developed by Fred Davis in the 1980s and subsequently refined by Venkatesh and Davis in the 1990s (Davis, 1985; Davis et al., 1989), TAM has proven applicable across different technologies and contexts. It has been tested in various technological environments such as websites, mobile apps, social media, and e-commerce platforms, consistently demonstrating its validity and reliability (Granić & Marangunić, 2019; Saadé & Bahli, 2005; Salloum et al., 2019). Researchers have continued to extend and modify TAM to better account for additional factors like trust, satisfaction, and intention to use, and have developed various model variations (Esposito et al., 2020; Kesharwani & Singh Bisht, 2012).
The purpose of this study is to explore the factors inuencing un-
dergraduate studentsacceptance of ChatGPT as a regular assistance tool
and to assess their awareness of its usage from various angles. The study
aims to gain insights into the effects of ChatGPT on usersdaily lives and
provide valuable insights into the potential benets and challenges
associated with implementing ChatGPT in an educational context.
To accomplish this objective, we will employ a quantitative research
approach using questionnaires (Almalki, 2016; Morse, 1991, 2016; Rao
& Woolcock, 2003; Schoonenboom & Johnson, 2017; Teddlie &
Tashakkori, 2011), and integrate a model incorporating the Technology
Acceptance Model (TAM) along with four new constructs: Privacy, Security, Social Influence, and Trust. This proposed model aims to
comprehensively understand user acceptance by examining different
perspectives simultaneously. Our goal is to bridge the gap between
technology and users by shedding light on their actual experiences and
considerations.
Through this study, we will assess the level of impact and conve-
nience and provide a range of recommendations and insights for users,
academics, and the industry.
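A standard first check in questionnaire-based studies of this kind is the internal-consistency reliability of each multi-item construct, commonly reported as Cronbach's alpha. As an illustrative sketch only (the Likert responses below are hypothetical, not the study's survey data), alpha for a single construct can be computed as:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the construct
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses of five students to three 5-point Likert items
responses = np.array([
    [5, 4, 5],
    [4, 4, 4],
    [2, 3, 2],
    [5, 5, 4],
    [3, 3, 3],
])
print(round(cronbach_alpha(responses), 3))  # → 0.928
```

Values above roughly 0.7 are conventionally taken as acceptable reliability for a construct before it enters the structural model.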
To help investigate the research goals and objectives, we asked three
questions:
Q1. What are undergraduate students' key considerations and experiences when using ChatGPT, and how do these perceptions shape their acceptance and usage patterns?
Q2. What are the factors that influence undergraduate students' acceptance of ChatGPT as a regular assistance tool, and how does it impact their academic performance and overall well-being?
Q3. Were the undergraduate students aware of the usage of ChatGPT, and what are their expectations of its benefits and drawbacks?
This research will provide answers to these questions and clarify misleading and confusing points regarding ChatGPT for undergraduate students.
4. Technology acceptance model (TAM)
The Technology Acceptance Model (TAM), as illustrated in Fig. 1, is
a theoretical framework designed to explain how users accept and adopt
new information technologies. The TAM posits that the acceptance of
new technology is primarily inuenced by two main factors: perceived
usefulness and perceived ease of use (Davis, 1989Sep). Perceived use-
fulness is dened as the degree to which a user believes that technology
will enhance their job performance or productivity. In contrast,
perceived ease of use refers to the degree to which a user believes that
the technology will be easy to understand and use (Davis et al.,
1989Aug).
According to the TAM, perceived usefulness and perceived ease of
use are inuenced by several other external factors such as the users
demographic characteristics, social inuences, and organizational cul-
ture (Connor & Siegrist, 2010; Saad´
e & Bahli, 2005). The TAM has been
widely applied in research and practice to understand and predict the
adoption and use of various technologies, including information sys-
tems, mobile apps, and social media platforms (Min et al., 2021). By
identifying factors that inuence user acceptance, the model can help to
inform the design and implementation of new technologies, as well as
strategies for promoting their adoption and use (Albayati et al., 2020).
The TAM proposes that attitude toward using and behavioral
intention to use are the primary determinants of technology acceptance
and use. By understanding the factors that inuence attitude toward
using and behavioral intention to use, the TAM can help inform the
design and implementation of new technologies, as well as strategies for
promoting their adoption and use.
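The causal chain described above (ease of use feeding usefulness, both feeding attitude, and attitude driving intention) can be sketched as a simple path-style estimation. This is a minimal illustration on synthetic standardized scores, not the study's data, and the generating coefficients (0.5, 0.4, 0.3, 0.6) are arbitrary choices for the simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Synthetic standardized construct scores (illustration only)
peou = rng.normal(size=n)                        # perceived ease of use
pu = 0.5 * peou + rng.normal(scale=0.8, size=n)  # PEOU also drives PU
att = 0.4 * pu + 0.3 * peou + rng.normal(scale=0.7, size=n)  # attitude
bi = 0.6 * att + rng.normal(scale=0.6, size=n)   # behavioral intention

def path(y, *xs):
    """OLS path coefficients of y on predictors xs (centered data, no intercept)."""
    X = np.column_stack(xs)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

print("PEOU -> PU:      ", path(pu, peou).round(2))
print("PU, PEOU -> ATT: ", path(att, pu, peou).round(2))
print("ATT -> BI:       ", path(bi, att).round(2))
```

With enough respondents, the recovered coefficients approximate the generating path weights, which is the same logic structural equation modeling applies to real survey constructs.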
5. Hypotheses and model development
This research aims to understand how the acceptance of ChatGPT as
an assistance tool by undergraduate students correlates with their aca-
demic performance. For instance, students may accept and use ChatGPT,
experiencing either a positive or negative impact on their grades, un-
derstanding of course materials, or overall academic success.
Utilizing the TAM model helps in comprehending how TAM constructs relate to students' acceptance of ChatGPT as an assistance tool. If students find ChatGPT easy to use and perceive it as useful, it may positively influence their acceptance.
Additionally, this research incorporates four external constructs (privacy, security, social influence, and trust) alongside the TAM constructs, aiming to explore the interactive impact of these constructs on each other. Moreover, it investigates their impact on students' decisions to accept or reject ChatGPT as a tool.
5.1. TAM-core constructs
The Technology Acceptance Model (TAM) (Davis, 1989, 1993; Davis et al., 1989; Venkatesh & Davis, 2000) comprises two key constructs: perceived usefulness and perceived ease of use. These constructs are further influenced by external factors, known as external variables. Below is a brief description of each construct:
Perceived Usefulness: This term refers to the extent to which a user believes that technology will enhance their job performance or productivity. Users are more likely to accept and adopt new technology if they perceive it as beneficial in achieving their goals or tasks (Bhattacherjee, 2000). Perceived usefulness is intricately
linked to an individual's motivation to use technology. If an individual perceives technology as useful, they are more likely to be motivated to use it (Yu et al., 2005). Conversely, if technology is perceived as not useful, their motivation to use it diminishes. In the TAM model, perceived usefulness is a crucial factor influencing an individual's attitude and intention toward technology usage (Karahanna & Straub, 1999). The model suggests that individuals are more inclined to adopt and utilize technology if they believe it aids in accomplishing their goals and tasks (Autry et al., 2010). Therefore, it is hypothesized that:
H4. Perceived usefulness positively and signicantly impacts users
attitudes toward using ChatGPT as a daily reference.
Perceived Ease of Use: Refers to the degree to which a user believes that technology is easy to use and understand. Users are more likely to accept and adopt new technology if they perceive it as easy to use and learn (Autry et al., 2010; Martins et al., 2014). Perceived ease of use is closely related to an individual's perception of the effort required to use technology (Venkatesh, 2000). If an individual perceives a technology to be easy to use, they are more likely to be motivated to use it (Karahanna & Straub, 1999). On the other hand, if an individual perceives a technology to be difficult to use, they are less likely to be motivated to use it (Adams et al., 1992). In the TAM model, perceived ease of use is another central factor that influences an individual's attitude toward using technology (Venkatesh & Bala, 2008). The model proposes that individuals are more likely to adopt and use technology if they perceive it to be easy to use and learn (Al-Sharafi et al., 2016). When users perceive a system or technology as easy to use, they are more likely to perceive it as useful as well, because ease of use reduces the perceived effort and cognitive load required to use the technology (Al-Sharafi et al., 2016). Therefore, it is hypothesized that:
H2. Perceived Ease of Use positively and significantly impacts users' attitudes toward using ChatGPT as a daily reference.
H3. Perceived Ease of Use positively and significantly impacts users' perceived usefulness of ChatGPT as a daily reference.
Also, the TAM model includes additional constructs covering the user's attitude and intention:
Attitude: Refers to the user's overall positive or negative evaluation of the technology (Ajzen & Fishbein, 1972). Attitude toward using is influenced by the perceived usefulness and perceived ease of use of the technology (Davis et al., 1989; Venkatesh & Bala, 2008; Venkatesh & Davis, 2000). In the TAM model, Attitude is seen as a reflection of an individual's subjective evaluation of a technology based on its perceived usefulness and ease of use (Yang & Yoo, 2004). If an individual perceives a technology to be useful and easy to use, they are likely to have a positive attitude toward the technology, which increases their intention to use it (Aghdaie et al., 2011). Conversely, if an individual perceives a technology to be not useful or difficult to use, they are likely to have a negative attitude toward the technology, which decreases their intention to use it (Bhattacherjee & Premkumar, 2004). Therefore, it is hypothesized that:
H1. Attitude positively and signicantly impacts on the users
Behavioral Intention toward the use of ChatGPT as a daily reference.
Behavioral Intention: Refers to the user's intention or willingness to use technology. Behavioral intention to use is influenced by the user's attitude toward using the technology and their subjective norm, which refers to the perceived social pressure to use the technology (Ajzen & Fishbein, 1972). In the TAM model, Behavioral Intention is seen as a direct result of an individual's Attitude toward a technology (Fishbein, 1975). If an individual has a positive attitude toward technology, they are more likely to have a high intention to use it; the use of technology by most users is strongly influenced by their ICT competence and attitude (Ferede et al., 2022). Conversely, if an individual has a negative attitude toward technology, they are less likely to have a high intention to use it (Kim & Ko, 2010).
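The four core-TAM hypotheses above form a small directed path structure. As a minimal sketch (the encoding is hypothetical and purely illustrative, not part of the model specification), they can be written as predictor-to-outcome pairs and queried:

```python
# Core TAM hypotheses of Section 5.1 as (predictor, outcome) paths.
CORE_HYPOTHESES = {
    "H1": ("Attitude", "Behavioral Intention"),
    "H2": ("Perceived Ease of Use", "Attitude"),
    "H3": ("Perceived Ease of Use", "Perceived Usefulness"),
    "H4": ("Perceived Usefulness", "Attitude"),
}

def outcomes_of(construct: str) -> list[str]:
    """All constructs hypothesized to be influenced by `construct`."""
    return [dst for src, dst in CORE_HYPOTHESES.values() if src == construct]

print(outcomes_of("Perceived Ease of Use"))  # → ['Attitude', 'Perceived Usefulness']
```

The external constructs introduced in Section 5.2 extend this map with further edges into Perceived Ease of Use, Perceived Usefulness, Social Influence, and Trust.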
5.2. External constructs
In addition to the TAM model, we will expand it by incorporating four new constructs: privacy, security, social influence, and trust. These will assist in investigating the acceptance of ChatGPT and the users' level of awareness. The selection of these four constructs aligns with established theoretical frameworks in technology acceptance and reflects practical considerations in the educational context. The interplay between these constructs is examined individually and collectively from various perspectives within the literature, with numerous studies investigating the relationship between students and new technologies. The specific context and objectives of this research have guided the selection of these external constructs, which are derived from both the literature and research background:
Privacy: The privacy construct refers to an individual's expectation and perception of control over the collection, use, and disclosure of their personal information (Basak et al., 2016). It is an important factor that influences individuals' behavior in the context of technology adoption and use (Qin et al., 2009). The privacy construct is closely related to the TAM model, as an individual's perception of privacy and control over their personal information can significantly impact their perceived usefulness and ease of use of technology (Al-Sharafi et al., 2016; Basak et al., 2016; Wu et al., 2023). For example, if an individual perceives that their personal information is
Fig. 1. Technology acceptance model (TAM) (Davis, 1989).
being collected and used without their consent, they may be less likely to find the technology useful or easy to use, even if it has a clear benefit for them (Hajian et al., 2016). To retain users, service providers must balance both privacy concerns and social influence perspectives (Zhou & Li, 2014). When users feel that their conversations with a system are private and secure, they are more likely to trust and adopt the technology, leading to an increase in their perceived usefulness of the technology (Basak et al., 2016). When a system promotes privacy protections and pushes for data protection, greater comfort and ease with the technology is likely to follow, resulting in greater perceived ease of use. The resulting sense of comfort and security in privacy can encourage users to explore the system's features more freely and interact with it more frequently, which may also contribute to greater perceived ease of use (Featherman et al., 2010). Clear privacy practices such as communication and transparency contribute to data protection and build trust (Dragan & Manulis, 2018). Addressing privacy concerns directly and implementing measures to mitigate potential risks can foster trust (Joinson et al., 2010). Data privacy and control reflect positively on user trust and strengthen confidence in technology use (Sicari et al., 2015). Therefore, it is hypothesized that:
H5. Privacy positively and signicantly impacts Perceived Ease of Use
toward the use of ChatGPT as a daily reference.
H6. Privacy positively and signicantly impacts the Perceived Use-
fulness of the use of ChatGPT as a daily reference.
H7. Privacy positively and signicantly impacts Social Inuence to-
ward the use of ChatGPT as a daily reference.
H8. Privacy positively and signicantly impacts Trust toward the use
of ChatGPT as a daily reference.
Security: The security construct refers to an individual's perception of the protection of their information and data from unauthorized access, use, and disclosure (Al-Sharafi et al., 2016). It is an important factor that influences individuals' behavior in the context of technology adoption and use, particularly for technologies that involve the transfer and storage of sensitive information. The security construct is closely related to the TAM model, as an individual's perception of security can significantly impact their perceived usefulness and ease of use of technology (Al-Sharafi et al., 2016; Basak et al., 2016; Wu et al., 2023). Information security behaviors can be improved through security education, training, and awareness (SETA) programs, according to some scholars (Al-Sharafi et al., 2016; Casaló, Flavián, & Guinalíu, 2007; Chen et al., 2021). When users perceive a system as secure, they may be more trusting and willing to share personal information or confidential data with the system, which can increase its usefulness in providing tailored responses (Jahangir & Begum, 2008). In any industry that uses technology, security is a critical aspect that impacts social influence and adoption (Patel & Patel, 2018). For example, if an individual perceives that a technology is not secure, they may be less likely to find the technology useful or easy to use, even if it has a clear benefit for them. Conversely, users are more likely to trust software if they believe that their interactions and data are secure, reducing concerns about unauthorized access or data breaches (Suh & Han, 2003). Controlling and restricting access supports user trust, and security measures help assure the confidentiality of user information (Pearson & Benameur, 2010). Users gain trust in a web application when they see that security is actively maintained and that efforts are made to address potential risks immediately (Shin, 2010). Therefore, it is hypothesized that:
H9. Security positively and signicantly impacts Perceived Ease of Use
toward the use of ChatGPT as a daily reference.
H10. Security positively and signicantly impacts the Perceived Use-
fulness of the use of ChatGPT as a daily reference.
H11. Security positively and signicantly impacts Social Inuence
toward the use of ChatGPT as a daily reference.
H12. Security positively and signicantly impacts Trust toward the
use of ChatGPT as a daily reference.
Social Inuence: The social construct refers to the social inuence
and norms that can affect an individuals behavior in the context of
technology adoption and use (Malhotra & Galletta, 1999; Tramow
& Finlay, 1996). It is an important factor that inuences individuals
behavior, particularly in situations where the use of technology is
socially embedded and/or requires coordination with others. The
social construct is closely related to the TAM model, as an in-
dividuals perception of social inuence can signicantly impact
their perceived usefulness and ease of use of technology (Hsu & Lu,
2004; Malhotra & Galletta, 1999). Individuals perceive that others in
their social network use a system in their daily activities, and they
may develop a more positive attitude toward the system. During the
Covid-19 period, many researchers investigated the experience with
online distance teaching and evaluated the social inuence (Sidi
et al., 2023). Additionally, if a user perceives that their use of the
system aligns with the norms and values of their social group, they
may be more likely to adopt a positive attitude toward the system
and use it more frequently (Prislin & Wood, 2005a, 2005b). For
example, if an individual perceives that their social network or peers
are using technology, they may be more likely to adopt and use the
technology themselves, even if they do not perceive it to be highly
useful or easy to use. Similarly, if an individual perceives that there is
a social norm or expectation to use a particular technology, they may
be more likely to adopt and use the technology. Therefore, it is hy-
pothesized that:
H13. Social Inuence positively and signicantly impacts Attitude
toward the use of ChatGPT as a daily reference.
H14. Social Inuence positively and signicantly impacts the users
Behavioral Intention toward the use of ChatGPT as a daily reference.
Trust: The trust construct refers to an individual's belief that a technology or system can be relied upon to perform as intended and protect their interests (Falcone & Castelfranchi, 2001; Roy et al., 2001). Trust is an important factor that influences individuals' behavior in the context of technology adoption and use (Kesharwani & Singh Bisht, 2012). The trust construct is closely related to the TAM model, as an individual's perception of trust can significantly impact their perceived usefulness and ease of use of a technology (Al-Sharafi et al., 2016; Basak et al., 2016). If users trust a system to provide accurate and helpful information, they are more likely to develop a positive attitude toward the system. This positive attitude may lead to increased usage levels, as well as the intention to continue using it in the future. Additionally, trust may encourage users to share personal information with the system, which can further enhance the accuracy and helpfulness of its responses (Kim & Gambino, 2016). Limbu, Wolf, and Lunsford found that the perceived ethics of an Internet retailer's website significantly affect consumers' trust in and attitudes toward the retailer's website, which in turn have positive impacts on purchase and revisit intentions; website trust was positively related to attitudes toward the site (Limbu et al., 2012). For example, if individuals do not trust a technology or system, they may be less likely to find the technology useful or easy to use, even if it has a clear benefit for them (Wu & Chen, 2005). Therefore, it is hypothesized that:

H15. Trust positively and significantly impacts Attitude toward the use of ChatGPT as a daily reference.

H16. Trust positively and significantly impacts the user's Behavioral Intention toward the use of ChatGPT as a daily reference.
5.3. Research model design
The model design (Fig. 2) illustrates the integration between the TAM model and the four external constructs using Structural Equation Modeling (SEM) (Hair Jr et al., 2016).
6. Research methodology
This research aims to explore the factors influencing undergraduate students' acceptance of ChatGPT as a regular assistance tool and to assess their awareness of its usage from various perspectives. The study seeks to gain insights into the effects of ChatGPT on users' daily lives and provide valuable information on the potential benefits and challenges associated with its implementation in an educational context.

To achieve this objective, we will employ a quantitative research approach by distributing survey questionnaires to currently enrolled undergraduate students in the United States of America. The eligibility criteria for participant selection include being an undergraduate student currently enrolled in a national academic institution.

The quantitative approach will commence with the collection of demographic data, including age, gender, and education level, from the survey. This information will help us better understand the participants' characteristics and how these may influence their attitudes and intentions toward using ChatGPT. Additionally, we will ask undergraduate students questions regarding their usage of ChatGPT to gain more insight into their usage behavior.
The recruitment process involves reaching out to undergraduate students globally, leveraging online platforms, and ensuring the ethical treatment of participants throughout the research. The selection criteria are as follows:

- Currently enrolled undergraduate students
- Students from diverse educational backgrounds, to capture a wide range of perspectives and experiences
- Participants who are well exposed to ChatGPT and understand its use and capabilities
- Participants from diverse demographic backgrounds, to enhance the study's generalizability and applicability.
In this study, the recruitment of undergraduate students was carried out using the Prolific platform, renowned for its extensive participant pool and academic credibility. The selection of Prolific was based on several key criteria: its wide accessibility to a diverse undergraduate demographic, user-friendly interface, capability for precise targeting, and a strong reputation for generating high-quality data. Furthermore, Prolific's commitment to ethical standards and transparent operations provided an added layer of assurance in the conduct of our research.

Fig. 2. The proposed model.

Ethical integrity was paramount throughout the recruitment process. Before participation, all individuals were presented with a comprehensive informed consent form. This document detailed the research's objectives, procedures, and the participants' rights, including detailed information on data usage and storage, ensuring informed and voluntary participation. The consent process was designed to be interactive, requiring an active confirmation of understanding and agreement. Prolific's adherence to GDPR and other privacy regulations, along with our stringent data anonymization protocols, further fortified the protection of participant privacy and data security.
The survey will then involve measuring the TAM and the external constructs using a validated questionnaire. The questionnaire will use a 5-point Likert scale, with 1 indicating "strongly disagree" and 5 indicating "strongly agree."

The data collected will be analyzed using descriptive statistics, including the mean, standard deviation, and frequency distribution. Additionally, the study will utilize Structural Equation Modeling (SEM) to test the hypothesized relationships between the constructs.

The SEM analysis will offer insights into the extent to which the TAM model can explain users' attitudes and behavioral intentions toward using ChatGPT. The study will also explore the role of the external constructs in influencing users' attitudes and intentions. The findings of this study will contribute to a better understanding of the factors influencing technology adoption and usage and provide valuable insights for technology developers and marketers to enhance the design and marketing of their products.
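As a small illustration of the planned descriptive analysis, the sketch below computes the mean, standard deviation, and frequency distribution for a single 5-point Likert item; the response vector is invented for illustration only and is not study data:

```python
from collections import Counter
from statistics import mean, stdev

# Hypothetical responses to one 5-point Likert item
# (1 = "strongly disagree" ... 5 = "strongly agree").
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

item_mean = mean(responses)     # central tendency
item_sd = stdev(responses)      # sample standard deviation
frequency = Counter(responses)  # frequency distribution per scale point

print(f"mean = {item_mean:.2f}, sd = {item_sd:.2f}")  # mean = 3.90, sd = 0.99
print(dict(sorted(frequency.items())))                # {2: 1, 3: 2, 4: 4, 5: 3}
```

In the actual study, such statistics would be computed per item and per construct across all retained responses.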
7. Results and discussion
Descriptive statistics
The sample responses were collected from a total of 637 undergraduate students through an online survey over two months. After data trimming to remove missing values and outliers, the final number of responses was 603. Table 1 presents the demographic information of the respondents. The data reveal that males constituted 54.7% of the respondents, while females accounted for 45.3%.

The age distribution of the respondents was as follows: 49.8% were between 15 and 24 years old, 36.4% between 25 and 34 years old, and 13.8% between 35 and 44 years old. Regarding the respondents' education level, all were undergraduate students, distributed across academic years as follows: 14% were first-year students, 28.9% were second-year students, 20.6% were third-year students, and 36.5% were fourth-year students.
Data analysis
For statistical analysis, the study utilized Partial Least Squares-Structural Equation Modeling (PLS-SEM) in SmartPLS 4 (Ringle et al., 2015; Ronaghi & Mosakhani). Given the nature of this study, PLS-SEM is considered the most suitable approach for obtaining robust results (Hair Jr et al., 2016; Ringle et al., 2015). For the reflective measurement model (Hair Jr et al., 2016), it is proposed that scholars should establish convergent validity by considering the outer loadings of all items and the average variance extracted (AVE). The coefficient of determination and the path coefficients were measured based on the structural model (Hair Jr et al., 2016; Selya et al., 2012). To support the measurement and structural models, this paper applied all the criteria mentioned above.
Measurement model assessment
It is necessary to measure the reliability, validity, and factor loading of every item (Hair Jr et al., 2016). Reliability refers to the consistency of a measure: a measure is reliable when it produces consistent outcomes under consistent conditions, and the loading value of each item should be equal to or greater than 0.7 to be considered reliable.

Cronbach's Alpha and composite reliability values should also be equal to or greater than 0.7. As shown in Table 2, all items are reliable and meet the set criteria. Validity refers to the degree to which the indicators of a construct collectively measure that construct. Convergent validity is typically established using the average variance extracted (AVE), which is the average of the squared loadings of the items associated with the construct.

This refers to the extent to which a latent construct accounts for the variation in its indicators. An AVE value of 0.5 or higher indicates that the construct explains more than half of the variance in its items (Hair Jr et al., 2016). In Table 2, both the Cronbach's Alpha and composite reliability values exceed 0.7, and the AVE values surpass 0.5. Consequently, the convergent validity of the constructs is confirmed.
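To make these thresholds concrete, the sketch below applies the standard PLS-SEM formulas for composite reliability and AVE to the Attitude loadings reported in Table 2. This is an illustrative hand-check of the reported values, not the SmartPLS computation itself:

```python
# Composite reliability: (sum of loadings)^2 / ((sum of loadings)^2 + sum of
# error variances), where each indicator's error variance is 1 - loading^2.
def composite_reliability(loadings):
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

# AVE: the mean of the squared standardized loadings.
def ave(loadings):
    return sum(l ** 2 for l in loadings) / len(loadings)

attitude = [0.879, 0.866, 0.898]  # outer loadings of A1-A3 (Table 2)

print(round(composite_reliability(attitude), 3))  # 0.912, as reported
print(round(ave(attitude), 3))                    # 0.776, as reported
```

Both values land above the 0.7 and 0.5 cut-offs, which is what Table 2 reports for the Attitude construct.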
To assess discriminant validity, this study evaluates the Fornell-Larcker criterion and cross-loadings. The Fornell-Larcker criterion involves comparing the square root of the AVE (average variance extracted) of each construct with the correlations between the latent variables, as outlined in Table 3. Cross-loadings are examined by ensuring that each indicator's loading is higher on its intended latent variable than on other variables. According to the data presented in Table 4, the cross-loading criterion is met satisfactorily: all items have loadings greater than 0.7 on their own constructs, which is the highest value when compared to their loadings on other variables.
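The Fornell-Larcker comparison amounts to a simple check: the square root of each construct's AVE (the diagonal of Table 3) must exceed that construct's correlations with every other construct. A sketch over a subset of the Table 3 values (the helper function is ours, for illustration, not part of the SmartPLS workflow):

```python
# Diagonal of Table 3: square roots of the constructs' AVEs.
sqrt_ave = {"A": 0.881, "BI": 0.930, "T": 0.814}

# Off-diagonal inter-construct correlations from Table 3.
correlations = {("A", "BI"): 0.868, ("A", "T"): 0.693, ("BI", "T"): 0.679}

def fornell_larcker_ok(sqrt_ave, correlations):
    # Discriminant validity holds when every correlation is smaller than
    # both constructs' sqrt(AVE) values.
    return all(r < sqrt_ave[a] and r < sqrt_ave[b]
               for (a, b), r in correlations.items())

print(fornell_larcker_ok(sqrt_ave, correlations))  # True
```

Extending the dictionaries to all eight constructs in Table 3 checks the full criterion in the same way.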
Structural model assessment
The evaluation of the model involved assessing the explained variance in the dependent variables. Path coefficients and R-squared (R²) values were used as the main indicators for estimating the structural model (Ringle et al., 2015). R² is a statistical measure that quantifies the proportion of variance in a dependent variable that can be explained by one or more independent variables in a regression model. Correlation measures the strength of the relationship between an independent and a dependent variable, while R² indicates the extent to which the variance of one variable can account for the variance of the second variable, as shown in Fig. 3.

Behavioral Intention (BI), R² = 76.6%: This high R² value suggests that the independent variables in the model collectively explain 76.6% of the variance in behavioral intention. In other words, the factors considered in the study have a substantial influence on students' behavioral intention to use ChatGPT as a daily reference.

Attitude (A), R² = 69.6%: The R² value of 69.6% indicates a strong influence of the independent variables on users' attitudes toward utilizing ChatGPT for their daily tasks. This suggests that the factors considered in the study play a significant role in shaping users' attitudes.
Perceived Usefulness (U), R² = 59.6%: The R² value of 59.6% for perceived usefulness implies that the independent variables explain almost 60% of the variance in users' perceptions of the usefulness of ChatGPT. The factors in the study contribute substantially to shaping users' opinions about the utility of the system.

Table 1
Demographic information.

Item                 Values    Frequency  Percentage
Gender               Male      330        54.7%
                     Female    272        45.3%
Age                  15-24     300        49.8%
                     25-34     219        36.4%
                     35-44     83         13.8%
Undergraduate level  1st year  84         14.0%
                     2nd year  174        28.9%
                     3rd year  124        20.6%
                     4th year  220        36.5%
Perceived Ease of Use (E), R² = 23.5%: The comparatively lower R² value of 23.5% for perceived ease of use suggests that the model may not capture all the factors influencing users' perceptions of the ease of using ChatGPT. Additional variables or considerations may be needed to better explain ease of use.

Social Influence (SI), R² = 46.3%: The R² value of 46.3% indicates a moderate level of influence from the independent variables on social influence. This suggests that factors in the model contribute significantly, but not as strongly as in the case of behavioral intention or attitude.

Trust (T), R² = 59.7%: The R² value of 59.7% for trust indicates that the independent variables explain a substantial portion of the variance in users' trust in ChatGPT. Trust is a crucial factor, and the model appears to capture the relevant influences effectively.
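For readers less familiar with R², the "proportion of variance explained" reading used above can be reproduced with a toy computation (the data below are invented and unrelated to the study's results):

```python
# R^2 = 1 - SS_residual / SS_total: the share of variance in the observed
# scores that the model's predictions account for.
def r_squared(y, y_hat):
    y_mean = sum(y) / len(y)
    ss_tot = sum((yi - y_mean) ** 2 for yi in y)              # total variance
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))  # unexplained part
    return 1 - ss_res / ss_tot

y = [3.0, 4.0, 5.0, 4.0, 2.0]      # observed scores (toy data)
y_hat = [3.2, 3.8, 4.9, 4.1, 2.4]  # model predictions (toy data)

print(round(r_squared(y, y_hat), 3))  # 0.95: 95% of the variance explained
```

An R² of 76.6% for Behavioral Intention therefore means the structural model's predictors jointly account for just over three-quarters of the variation in that construct.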
The path coefcients are essential measures for assessing the struc-
tural model. According to the path analysis, as depicted in (Fig. 3) and
outlined in (Table 5), each hypothesis was evaluated by estimating the
p-values and path coefcients. It was found that all hypotheses are
supported except for H5 and H14. The supported hypotheses indicate
signicant paths between the independent and dependent variables.
However, H5 and H14 are not supported, indicating that Privacy (P) has
no impact on Perceived Ease of Use (E), and Social Inuence (SI) has no
Table 2
Measurement model results.
Latent Variable Indicators Convergent Validity Internal Consistency Reliability Discriminant Validity?
Loading Indicators Reliability AVE Composite Reliability Cronbachs Alpha
>0.70 >0.50 >0.50 0.600.90 0.600.90
Attitude A1 0.879 0.773 0.776 0.912 0.856 yes
A2 0.866 0.750
A3 0.898 0.806
Behavioral Intention BI1 0.940 0.884 0.865 0.951 0.922 yes
BI2 0.942 0.887
BI3 0.909 0.826
Perceived Ease of Use E1 0.858 0.736 0.718 0.911 0.869 yes
E2 0.869 0.755
E3 0.848 0.719
E4 0.815 0.664
Privacy P1 0.816 0.666 0.737 0.933 0.910 yes
P2 0.846 0.716
P3 0.825 0.681
P4 0.860 0.740
P5 0.828 0.686
P6 0.803 0.645
Security SE1 0.818 0.669 0.678 0.927 0.905 yes
SE2 0.779 0.607
SE3 0.775 0.601
SE4 0.813 0.661
SE5 0.826 0.682
SE6 0.807 0.651
SE7 0.820 0.672
SE8 0.922 0.850
Social Inuence SI1 0.914 0.835 0.649 0.937 0.923 yes
SI2 0.913 0.834
SI3 0.826 0.682
Trust T1 0.829 0.687 0.840 0.940 0.905 yes
T2 0.784 0.615
T3 0.818 0.669
T4 0.860 0.740
Perceived Usefulness U1 0.876 0.767 0.663 0.887 0.830 yes
U2 0.907 0.823
U3 0.756 0.572
U4 0.887 0.787
U5 0.762 0.581
Table 3
FornellLarcker criterion results.
Attitude
(A)
Behavioral Intention
(BI)
Perceived Ease of Use
(E)
Perceived Usefulness
(U)
Privacy
(P)
Security
(SE)
Social Inuence
(SI)
Trust
(T)
Attitude (A) 0.881
Behavioral Intention
(BI)
0.868 0.930
Perceived Ease of Use
(E)
0.614 0.570 0.848
Perceived Usefulness
(U)
0.803 0.755 0.680 0.859
Privacy (P) 0.593 0.572 0.414 0.568 0.823
Security (SE) 0.622 0.586 0.484 0.599 0.754 0.805
Social Inuence (SI) 0.620 0.584 0.369 0.589 0.642 0.666 0.917
Trust (T) 0.693 0.679 0.615 0.710 0.723 0.759 0.633 0.814
H. Albayati
Computers and Education: Articial Intelligence 6 (2024) 100203
9
impact on Behavioral Intention (BI).
The results of this paper found that;
H1. (B = 0.755, p < 0.05): The path between Attitude (A) and Behavioral Intention (BI) is particularly important and exhibits a significant and robust association. This relationship is notably stronger than any other within the model. Specifically, attitude exerts a powerful positive influence on the user's behavioral intention to use ChatGPT as a daily reference. The p-value (<0.05) indicates that this relationship is statistically significant.

H2. (B = 0.103, p < 0.05) describes the path between Perceived Ease of Use (E) and Attitude (A). This result indicates a positive relationship between the perceived ease of using ChatGPT and the attitude of undergraduate students toward using it. In other words, if students find ChatGPT easy to use, it positively influences their attitude toward using it. The p-value (<0.05) confirms the statistical significance of this relationship.

H3. (B = 0.509, p < 0.05) describes the path between Perceived Ease of Use (E) and Perceived Usefulness (U). This finding demonstrates a positive relationship between the perceived ease of using ChatGPT and its perceived usefulness. If students perceive ChatGPT as easy to use, they are more likely to consider it useful. The p-value (<0.05) confirms the statistical significance of this relationship.

H4. (B = 0.526, p < 0.05) describes the path between Perceived Usefulness (U) and Attitude (A). This result suggests a positive relationship between the perceived usefulness of ChatGPT and the attitude of undergraduate students toward using it. If students perceive ChatGPT as useful, it positively influences their attitude toward using it. The p-value (<0.05) confirms the statistical significance of this relationship.

H5. (B = 0.000, p > 0.05) describes the path between Privacy (P) and Perceived Ease of Use (E). This implies no discernible relationship between privacy concerns and the perceived ease of using ChatGPT. In practical terms, it means that students' privacy concerns do not significantly impact how they perceive the ease of using the system. The elevated p-value suggests that any observed association between privacy concerns and ease of use may be coincidental, underscoring the absence of a robust link between these variables. This finding contributes valuable insight into the nuanced factors influencing user perceptions, suggesting that privacy concerns may not be a primary driver in shaping how students perceive the ease of interacting with ChatGPT.

H6. (B = 0.208, p < 0.05) describes the path between Privacy (P) and Perceived Usefulness (U). This finding indicates a positive relationship between privacy concerns and the perceived usefulness of ChatGPT. If students have privacy concerns, it may affect how they perceive its usefulness. The p-value (<0.05) confirms the statistical significance of this relationship.

H7. (B = 0.271, p < 0.05) describes the path between Privacy (P) and Social Influence (SI). This result indicates a positive relationship between privacy concerns and social influence. If students have privacy concerns, it may affect their perception of social influence related to using ChatGPT. The p-value (<0.05) confirms the statistical significance of this relationship.

H8. (B = 0.274, p < 0.05) describes the path between Privacy (P) and Trust (T). This finding suggests a positive relationship between privacy concerns and trust. If students have privacy concerns, it may impact their level of trust in using ChatGPT. The p-value (<0.05) confirms the statistical significance of this relationship.
Table 4
Cross-loadings results.

     (A)    (BI)   (E)    (U)    (P)    (SE)   (SI)   (T)
A1   0.879  0.819  0.591  0.720  0.505  0.546  0.504  0.628
A2   0.866  0.708  0.465  0.637  0.527  0.542  0.592  0.570
A3   0.898  0.762  0.559  0.761  0.537  0.556  0.548  0.629
BI1  0.830  0.940  0.526  0.722  0.539  0.567  0.574  0.648
BI2  0.821  0.942  0.544  0.713  0.538  0.546  0.532  0.629
BI3  0.770  0.909  0.523  0.671  0.519  0.522  0.522  0.618
E1   0.482  0.473  0.858  0.555  0.331  0.425  0.276  0.508
E2   0.582  0.540  0.869  0.638  0.398  0.453  0.365  0.559
E3   0.520  0.468  0.848  0.559  0.352  0.399  0.338  0.500
E4   0.489  0.446  0.815  0.544  0.314  0.358  0.262  0.514
P1   0.434  0.409  0.272  0.431  0.762  0.645  0.513  0.542
P2   0.472  0.454  0.281  0.466  0.816  0.685  0.538  0.595
P3   0.539  0.531  0.411  0.509  0.846  0.732  0.501  0.639
P4   0.479  0.463  0.395  0.445  0.825  0.695  0.520  0.591
P5   0.518  0.496  0.356  0.518  0.860  0.735  0.532  0.625
P6   0.480  0.465  0.319  0.432  0.828  0.724  0.574  0.574
SE1  0.515  0.472  0.426  0.495  0.712  0.803  0.488  0.637
SE2  0.495  0.464  0.415  0.493  0.754  0.818  0.511  0.655
SE3  0.520  0.487  0.362  0.492  0.702  0.779  0.562  0.584
SE4  0.511  0.512  0.437  0.505  0.642  0.775  0.540  0.599
SE5  0.479  0.450  0.359  0.452  0.675  0.813  0.555  0.592
SE6  0.511  0.475  0.384  0.493  0.668  0.826  0.547  0.609
SE7  0.466  0.418  0.356  0.453  0.670  0.807  0.537  0.572
SE8  0.510  0.492  0.375  0.470  0.678  0.820  0.553  0.639
SI1  0.568  0.546  0.325  0.536  0.580  0.594  0.922  0.564
SI2  0.550  0.510  0.344  0.535  0.571  0.605  0.914  0.577
SI3  0.585  0.548  0.344  0.550  0.614  0.632  0.913  0.597
T1   0.537  0.549  0.496  0.567  0.600  0.640  0.504  0.826
T2   0.646  0.638  0.611  0.683  0.515  0.563  0.484  0.829
T3   0.532  0.503  0.425  0.510  0.606  0.613  0.528  0.784
T4   0.536  0.515  0.461  0.546  0.640  0.662  0.548  0.818
U1   0.692  0.658  0.532  0.860  0.521  0.522  0.574  0.634
U2   0.699  0.655  0.621  0.876  0.473  0.529  0.492  0.613
U3   0.736  0.690  0.599  0.907  0.530  0.549  0.509  0.606
U4   0.547  0.502  0.584  0.756  0.354  0.380  0.382  0.511
U5   0.756  0.716  0.587  0.887  0.542  0.572  0.559  0.676

Fig. 3. Path analysis results.

Table 5
Hypotheses test results.

#    Path                                                 Path Coefficient  t Value  p Value  2.5% CI  97.5% CI  Significant (p < 0.05)?
H1   Attitude (A) -> Behavioral Intention (BI)            0.755   23.410  0.000   0.689   0.818  yes
H2   Perceived Ease of Use (E) -> Attitude (A)            0.103    2.326  0.020   0.016   0.192  yes
H3   Perceived Ease of Use (E) -> Perceived Usefulness (U)  0.509  13.540  0.000   0.432   0.578  yes
H4   Perceived Usefulness (U) -> Attitude (A)             0.526   10.761  0.000   0.429   0.619  yes
H5   Privacy (P) -> Perceived Ease of Use (E)             0.000    0.002  0.998  -0.149   0.155  no
H6   Privacy (P) -> Perceived Usefulness (U)              0.208    3.275  0.001   0.086   0.331  yes
H7   Privacy (P) -> Social Influence (SI)                 0.271    4.031  0.000   0.138   0.401  yes
H8   Privacy (P) -> Trust (T)                             0.274    4.714  0.000   0.159   0.386  yes
H9   Security (SE) -> Perceived Ease of Use (E)           0.484    6.111  0.000   0.329   0.634  yes
H10  Security (SE) -> Perceived Usefulness (U)            0.175    2.740  0.006   0.053   0.302  yes
H11  Security (SE) -> Social Influence (SI)               0.434    6.569  0.000   0.305   0.562  yes
H12  Security (SE) -> Trust (T)                           0.525    9.066  0.000   0.411   0.638  yes
H13  Social Influence (SI) -> Attitude (A)                0.183    4.439  0.000   0.104   0.267  yes
H14  Social Influence (SI) -> Behavioral Intention (BI)   0.029    0.871  0.384  -0.038   0.094  no
H15  Trust (T) -> Attitude (A)                            0.140    2.738  0.006   0.040   0.243  yes
H16  Trust (T) -> Behavioral Intention (BI)               0.138    3.626  0.000   0.063   0.211  yes

H9. (B = 0.484, p < 0.05) describes the path between Security (SE) and Perceived Ease of Use (E). This result indicates a positive relationship between security concerns and the perceived ease of using ChatGPT. If students have security concerns, it may influence how they perceive the ease of using it. The p-value (<0.05) confirms the statistical significance of this relationship.
H10. (B = 0.175, p < 0.05) describes the path between Security (SE) and Perceived Usefulness (U). This finding suggests a positive relationship between security concerns and the perceived usefulness of ChatGPT. If students have security concerns, it may affect how they perceive its usefulness. The p-value (<0.05) confirms the statistical significance of this relationship.

H11. (B = 0.434, p < 0.05) describes the path between Security (SE) and Social Influence (SI). This result suggests a positive relationship between security concerns and social influence. If students have security concerns, it may influence their perception of social influence related to using ChatGPT. The p-value (<0.05) confirms the statistical significance of this relationship.

H12. (B = 0.525, p < 0.05) describes the path between Security (SE) and Trust (T). This finding indicates a positive relationship between security concerns and trust. If students have security concerns, it may impact their level of trust in using ChatGPT. The p-value (<0.05) confirms the statistical significance of this relationship.

H13. (B = 0.183, p < 0.05) describes the path between Social Influence (SI) and Attitude (A). This result suggests a positive relationship between social influence and attitude. If students perceive social influence related to using ChatGPT, it positively influences their attitude toward using it. The p-value (<0.05) confirms the statistical significance of this relationship.

H14. (B = 0.029, p > 0.05) describes the path between Social Influence (SI) and Behavioral Intention (BI). This implies no statistically significant relationship between social influence and students' behavioral intention to use ChatGPT. Essentially, the influence of peers, colleagues, or social circles does not appear to significantly shape students' intentions to adopt ChatGPT for their daily tasks. The elevated p-value indicates that any apparent connection between social influence and behavioral intention may be coincidental, highlighting that social influence may not be a decisive factor in determining the intention to use ChatGPT in the examined population.

H15. (B = 0.140, p < 0.05) describes the path between Trust (T) and Attitude (A). This result suggests a positive relationship between trust and attitude. If students have a higher level of trust in using ChatGPT, it positively influences their attitude toward using it. The p-value (<0.05) confirms the statistical significance of this relationship.

H16. (B = 0.138, p < 0.05) describes the path between Trust (T) and Behavioral Intention (BI). This finding indicates a positive relationship between trust and behavioral intention. If students have a higher level of trust in using ChatGPT, it positively influences their behavioral intention to use it. The p-value (<0.05) confirms the statistical significance of this relationship.
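The decision rule used throughout Table 5, whereby a path is "supported" when its p-value falls below 0.05, can be sketched as follows over a subset of the reported paths (the dictionary layout is ours, for illustration):

```python
# (path coefficient, p-value) pairs for a subset of the paths in Table 5.
paths = {
    "H1: A -> BI":   (0.755, 0.000),
    "H5: P -> E":    (0.000, 0.998),
    "H14: SI -> BI": (0.029, 0.384),
    "H16: T -> BI":  (0.138, 0.000),
}

ALPHA = 0.05  # conventional significance threshold

supported = {h: p < ALPHA for h, (_, p) in paths.items()}
rejected = [h for h, ok in supported.items() if not ok]

print(rejected)  # ['H5: P -> E', 'H14: SI -> BI']
```

Applying the same rule to all sixteen paths reproduces the support pattern reported above: every hypothesis holds except H5 and H14.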
Based on the results of the study and the analysis of the hypotheses, several key findings can be summarized:

1. Perceived Ease of Use (E) positively impacts both Attitude (A) and Perceived Usefulness (U) of ChatGPT. When undergraduate students find ChatGPT easy to use, it positively influences their attitude toward it and their perception of its usefulness.

2. Perceived Usefulness (U) positively impacts Attitude (A). When students perceive ChatGPT as useful, it enhances their overall attitude toward using it.

3. Privacy concerns (P) do not significantly affect the perceived ease of using ChatGPT but positively impact its perceived usefulness. This implies that while students' privacy concerns may not influence their perception of the ease of using ChatGPT, they can shape their perception of its usefulness.

4. Security concerns (SE) positively influence the Perceived Ease of Use (E) and Perceived Usefulness (U) of ChatGPT. Students' security concerns can impact how they perceive the ease of using ChatGPT and its usefulness.

5. Social Influence (SI) positively influences Attitude (A). If students perceive social influence related to using ChatGPT, it positively influences their attitude toward using it.

6. Trust (T) positively influences both Attitude (A) and Behavioral Intention (BI). When students have a higher level of trust in using ChatGPT, it positively influences their attitude toward the tool and their behavioral intention to use it.
Research questions

Based on the findings and results of the research, we provide answers to the research questions as follows:

1. What are undergraduate students' key considerations and experiences when using ChatGPT, and how do these perceptions shape their acceptance and usage patterns?

The research findings reveal that key factors such as perceived ease of use, perceived usefulness, privacy concerns, security considerations, trust, and social influence are considered by undergraduate students when using ChatGPT. These factors collectively influence their acceptance and usage patterns. Students are more likely to accept and engage with ChatGPT when they find it easy to use, perceive it as useful, and experience positive social influence.

2. What are the factors that influence undergraduate students' acceptance of ChatGPT as a regular assistance tool, and how does it impact their academic performance and overall well-being?

The research findings suggest that factors such as perceived ease of use, perceived usefulness, privacy concerns, security considerations, social influence, and trust influence undergraduate students' acceptance of ChatGPT as a regular assistance tool. Positive perceptions of these factors contribute to a favorable attitude toward the use of ChatGPT. However, the direct impact of ChatGPT on academic performance and overall well-being was not specifically addressed in this research.

3. Were the undergraduate students aware of the usage of ChatGPT, and what are their expectations of its benefits and drawbacks?

The research aimed to assess students' awareness of ChatGPT's usage, with findings indicating that awareness was a significant aspect of the investigation. The study also explored students' perceptions of the benefits and drawbacks associated with ChatGPT. Findings suggest that students perceived benefits in terms of ease of use, usefulness, and social influence, while potential drawbacks were associated with privacy concerns, trust, and security considerations.
8. Conclusion, limitation, and future work
Conclusion
The conclusion of this research rests on the insights gleaned from exploring the factors that influence undergraduate students' acceptance of ChatGPT as a regular assistance tool. The study's objectives were to evaluate students' awareness of ChatGPT's functionalities and to understand how the tool impacts their daily activities. Additionally, it aimed to unearth the potential benefits and challenges of implementing ChatGPT in educational settings.

The research found that the perceived ease of use and perceived usefulness of ChatGPT significantly shape students' attitudes toward the tool. A positive correlation exists between these perceptions and students' acceptance of ChatGPT: when students find ChatGPT easy to use and beneficial, their overall attitude toward the tool is likely to be positive. This finding underscores the importance of a user-friendly interface and clear communication of ChatGPT's practical benefits to enhance student engagement.
Privacy concerns and security measures were also highlighted as vital in influencing students' perceptions. Although privacy issues did not markedly impact the tool's perceived ease of use, they positively influenced its perceived usefulness. This indicates that by addressing privacy concerns and ensuring robust security measures, trust in and reliance on ChatGPT can be fostered among users.

Social influence emerged as another critical factor. The study showed that when students perceive a positive social influence regarding the use of ChatGPT, they are more likely to adopt a favorable attitude toward the tool. This emphasizes the role of social contexts and peer influence in accepting and adopting new technologies.

In summary, this research offers valuable insights into the dynamics of undergraduate students' acceptance of ChatGPT. It highlights the significance of perceived ease of use, usefulness, privacy, security, and social influence. By considering these aspects, educators and administrators can effectively promote the integration of ChatGPT into educational frameworks, thereby enhancing student engagement and learning experiences.
The research further posits that emphasizing a user-friendly interface
and the practical benefits of ChatGPT can lead to more positive attitudes
and greater acceptance among students, presenting a compelling case for
the tool's integration into educational practices. However, it is essential
to recognize the limitations of this study, such as its focus on a specific
demographic and its reliance on self-reported data. Future research
should diversify the sample and incorporate more objective methodolo-
gies to deepen the understanding of user acceptance of and engagement
with ChatGPT.
Additionally, as the integration of technology in education continues
to expand, it is crucial to explore long-term impacts on learning out-
comes and ethical considerations, ensuring informed consent and data
protection for all participants. This study lays the groundwork for
further exploration of technologies like ChatGPT in education,
considering their potential effects on learning outcomes, student
well-being, and the broader educational landscape.
Limitations
It is essential to recognize certain limitations of this study. First, the
research was limited to undergraduate students, constraining the
generalizability of the findings to broader populations. Future studies
should incorporate a more diverse sample to ensure wider applicability.
Second, this study relied on self-reported measures, which are prone to
response bias and might not accurately reflect the actual behavior and
experiences of users. Including objective measures and observational
data in future research could provide a more comprehensive under-
standing of user acceptance and usage patterns. Lastly, the study was
conducted within a specific context, an educational setting, and might
not reflect variations in acceptance and usage that could occur in
different domains.
Future Work
Building on this study's findings, several avenues for future research
emerge. First, investigating the long-term effects of using ChatGPT on
users' learning outcomes and academic performance is essential. Un-
derstanding how ChatGPT integrates into educational processes and its
impact on student success could yield further insights. Second,
exploring the role of user training and support in enhancing acceptance
and addressing concerns would be beneficial. Developing effective
training strategies and support systems can contribute to ChatGPT's
successful implementation in educational settings. Additionally, inves-
tigating the impact of personalized feedback and adaptive features
within ChatGPT could make it a more effective assistance tool. Finally,
examining the ethical and societal implications of using ChatGPT,
including issues related to bias, privacy, and algorithmic accountability,
is crucial for its responsible deployment and use.
CRediT authorship contribution statement
Hayder Albayati: Writing – review & editing, Writing – original
draft, Visualization, Validation, Supervision, Software, Resources,
Project administration, Methodology, Investigation, Funding acquisi-
tion, Formal analysis, Data curation, Conceptualization.
Declaration of competing interest
The author declares no conflicts of interest.
Appendix A. Construct items

Behavioral Intention (Aug; Gefen et al., 2003a; Davis et al., 1989; Venkatesh & Davis, 2000)
  BI1. If I have access to ChatGPT, I intend to use it.
  BI2. If I have access to ChatGPT, I would use it.
  BI3. I plan to use ChatGPT within the next months.

Attitude (Aug; Gefen et al., 2003a; Davis et al., 1989; Venkatesh & Davis, 2000)
  A1. I am interested in using ChatGPT.
  A2. I am likely to use ChatGPT because of its attractiveness.
  A3. I feel my work overall will be better with ChatGPT.

Perceived Ease of Use (Aug; Gefen et al., 2003a; Davis et al., 1989; Venkatesh & Davis, 2000)
  E1. Learning to operate ChatGPT would be easy for me.
  E2. I believe it would be easy to use ChatGPT to accomplish what I want to do.
  E3. It is easy for me to become skillful at using ChatGPT.
  E4. I believe ChatGPT is easy to use.

Perceived Usefulness (Aug; Gefen et al., 2003a; Davis et al., 1989; Venkatesh & Davis, 2000)
  U1. Using ChatGPT would improve my work quality.
  U2. Using ChatGPT would increase my productivity.
  U3. Using ChatGPT would enhance my work effectiveness.
  U4. Using ChatGPT would decrease time consumption.

Privacy (Al-Emran et al., 2020; Cheung & Lee, 2001)
  P1. I think ChatGPT pays attention to the privacy of its users.
  P2. I feel safe when I send personal information to ChatGPT.
  P3. I think ChatGPT follows personal data protection laws.
  P4. I think ChatGPT only collects user personal data that are necessary for its activity.
  P5. I think ChatGPT respects users' rights when obtaining personal information.
  P6. I think that ChatGPT will not provide my personal information to other companies.

Security (Al-Emran et al., 2020; Cheung & Lee, 2001)
  SE1. I think ChatGPT has mechanisms to ensure the safe transmission of its users' information.
  SE2. I think ChatGPT shows good security care during use.
  SE3. I think ChatGPT has sufficient technical capacity to ensure that no other organization will supplant its identity on the Internet.
  SE4. I am sure of the identity of ChatGPT when I establish contact via the Internet.
  SE5. When I send data to ChatGPT, I am sure that they will not be intercepted by unauthorized third parties.
  SE6. I think ChatGPT has sufficient technical capacity to ensure that the data I send will not be intercepted by hackers.
  SE7. When I send data to ChatGPT, I am sure they cannot be modified by a third party.
  SE8. I think ChatGPT has sufficient technical capacity to ensure that the data I send cannot be modified by a third party.

Social Influence (Venkatesh et al., 2012)
  SI1. People who are important to me think that I should use ChatGPT.
  SI2. People who influence my behavior think that I should use ChatGPT.
  SI3. People whose opinions I value prefer that I use ChatGPT.

Trust (Alalwan et al., 2018; Fortino et al., 2019)
  T1. I believe that ChatGPT is effective and secure in what it is designed to do. (trust in technology)
  T2. I believe that ChatGPT enables me to do what I need to do. (trust in technology)
  T3. I believe that ChatGPT users are trustworthy. (trust in people)
  T4. I believe that ChatGPT is made in a trusted organization. (trust in organization)
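Multi-item constructs like those above are normally screened for internal consistency before model estimation. As a minimal sketch (the respondent matrix below is invented for illustration and does not reproduce the study's reliability figures), Cronbach's alpha for a block of Likert items can be computed directly from the item variances and the variance of the summed scale:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) Likert matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Example: hypothetical 1-5 Likert responses to the four Perceived Ease of
# Use items E1-E4 from five respondents.
responses = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])
alpha = cronbach_alpha(responses)
print(round(alpha, 3))  # → 0.914
```

Values above the conventional 0.7 threshold are usually read as acceptable internal consistency for a construct's items.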
References
Adams, D. A., Nelson, R. R., & Todd, P. A. (1992). Perceived usefulness, ease of use, and
usage of information technology: A replication. MIS Quarterly, 227–247.
Aghdaie, S. F. A., Piraman, A., & Fathi, S. (2011). An analysis of factors affecting the
consumers' attitude of trust and their impact on internet purchasing behavior.
International Journal of Business and Social Science, 2(23).
Ajzen, I., & Fishbein, M. (1972). Attitudes and normative beliefs as factors influencing
behavioral intentions. Journal of Personality and Social Psychology, 21(1), 1.
Al-Emran, M., Mezhuyev, V., & Kamaludin, A. (2020). Towards a conceptual model for
examining the impact of knowledge management factors on mobile learning acceptance.
Technology in Society, Article 101247.
Al-Sharafi, M. A., Arshah, R. A., Abo-Shanab, E., & Elayah, N. (2016). The effect of
security and privacy perceptions on customers' trust to accept internet banking
services: An extension of TAM. Journal of Engineering and Applied Sciences, 11(3),
545–552.
Alalwan, A. A., Baabdullah, A. M., Rana, N. P., Tamilmani, K., & Dwivedi, Y. K. (2018).
Examining adoption of mobile internet in Saudi Arabia: Extending TAM with
perceived enjoyment, innovativeness and trust. Technology in Society, 55, 100–110.
Albayati, H., Alistarbadi, N., & Rho, J. J. (2023). Assessing engagement decisions in NFT
metaverse based on the theory of planned behavior (TPB). Telematics and Informatics
Reports, 10, Article 100045.
Albayati, H., Kim, S. K., & Rho, J. J. (2020). Accepting financial transactions using
blockchain technology and cryptocurrency: A customer perspective approach.
Technology in Society, 62, Article 101320.
Albayati, H., Kim, S. K., & Rho, J. J. (2021). A study on the use of cryptocurrency wallets
from a user experience perspective. Human Behavior and Emerging Technologies.
https://doi.org/10.1002/hbe2.313
Almalki, S. (2016). Integrating quantitative and qualitative data in mixed methods
research: Challenges and benefits. Journal of Education and Learning, 5(3), 288–296.
Autry, C. W., Grawe, S. J., Daugherty, P. J., & Richey, R. G. (2010). The effects of
technological turbulence and breadth on supply chain technology acceptance and
adoption. Journal of Operations Management, 28(6), 522–536.
Baidoo-Anu, D., & Owusu Ansah, L. (2023). Education in the era of generative artificial
intelligence (AI): Understanding the potential benefits of ChatGPT in promoting
teaching and learning. Available at SSRN 4337484.
Bansal, H., & Khan, R. (2018). A review paper on human computer interaction.
International Journal of Advanced Research in Computer Science and Software
Engineering, 8(4), 53.
Basak, S. K., Govender, D. W., & Govender, I. (2016). Examining the impact of privacy,
security, and trust on the TAM and TTF models for e-commerce consumers: A pilot
study. In 2016 14th annual conference on privacy, security and trust (PST).
Bass, D. (2022). OpenAI chatbot so good it can fool humans, even when it's wrong.
Bloomberg. https://www.bloomberg.com/news/articles/2022-12-07/openai-chatbot-so-good-it-can-fool-humans-even-when-it-s-wrong#xj4y7vzkg
Bhattacherjee, A. (2000). Acceptance of e-commerce services: The case of electronic
brokerages. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and
Humans, 30(4), 411–420.
Bhattacherjee, A., & Premkumar, G. (2004). Understanding changes in belief and attitude
toward information technology usage: A theoretical model and longitudinal test. MIS
Quarterly, 229–254.
Brachten, F., Kissmer, T., & Stieglitz, S. (2021). The acceptance of chatbots in an
enterprise context: A survey study. International Journal of Information Management,
60, Article 102375.
Cao, Y., Li, S., Liu, Y., Yan, Z., Dai, Y., Yu, P. S., & Sun, L. (2023). A comprehensive survey
of AI-generated content (AIGC): A history of generative AI from GAN to ChatGPT. arXiv
preprint arXiv:2303.04226.
Casaló, L. V., Flavián, C., & Guinalíu, M. (2007). The role of security, privacy, usability
and reputation in the development of online banking. Online Information Review,
31(5), 583–603.
Chandra, S., Shirish, A., & Srivastava, S. C. (2022). To Be or not to Be human? Theorizing
the role of human-like competencies in conversational artificial intelligence agents.
Journal of Management Information Systems, 39(4), 969–1005.
Chen, Y.-T., Shih, W.-L., Lee, C.-H., Wu, P.-L., & Tsai, C.-Y. (2021). Relationships among
undergraduates' problematic information security behavior, compulsive internet
use, and mindful awareness in Taiwan. Computers & Education, 164, Article 104131.
Cheung, C. M., & Lee, M. K. (2001). Trust in internet shopping: Instrument development
and validation through classical and modern approaches. Journal of Global
Information Management, 9(3), 23–35.
Connor, M., & Siegrist, M. (2010). Factors influencing people's acceptance of gene
technology: The role of knowledge, health expectations, naturalness, and social trust.
Science Communication, 32(4), 514–538.
Davis, F. D. (1985). A technology acceptance model for empirically testing new end-user
information systems: Theory and results. Massachusetts Institute of Technology.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of
information technology. MIS Quarterly, 319–340.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer
technology: A comparison of two theoretical models. Management Science, 35(8),
982–1003.
Dragan, C. C., & Manulis, M. (2018). Bootstrapping online trust: Timeline activity proofs.
In Data privacy management, cryptocurrencies and blockchain technology.
Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K.,
Baabdullah, A. M., Koohang, A., Raghavan, V., & Ahuja, M. (2023). So what if
ChatGPT wrote it? Multidisciplinary perspectives on opportunities, challenges and
implications of generative conversational AI for research, practice and policy.
International Journal of Information Management, 71, Article 102642.
Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). GPTs are GPTs: An early look at
the labor market impact potential of large language models. arXiv preprint arXiv:
2303.10130.
Esposito, C., Tamburis, O., Su, X., & Choi, C. (2020). Robust decentralised trust
management for the internet of things by using game theory. Information Processing
& Management, 57(6). https://doi.org/10.1016/j.ipm.2020.102308
Falcone, R., & Castelfranchi, C. (2001). Social trust: A cognitive approach. In Trust and
deception in virtual societies (pp. 55–90). Springer.
Featherman, M. S., Miyazaki, A. D., & Sprott, D. E. (2010). Reducing online privacy risk
to facilitate e-service adoption: The influence of perceived ease of use and corporate
credibility. Journal of Services Marketing, 24(3), 219–229.
Ferede, B., Elen, J., Van Petegem, W., Hunde, A. B., & Goeman, K. (2022). A structural
equation model for determinants of instructors' educational ICT use in higher
education in developing countries: Evidence from Ethiopia. Computers & Education,
188, Article 104566.
Fishbein, M. (1975). Belief, attitude, intention and behavior: An introduction to theory
and research. Menlo Park: Addison-Wesley.
Folkinshteyn, D., & Lennon, M. (2016). Braving bitcoin: A technology acceptance model
(TAM) analysis. Journal of Information Technology Case and Application Research,
18(4), 220–249.
Fortino, G., Messina, F., Rosaci, D., & Sarne, G. M. (2019). Using blockchain in a
reputation-based model for grouping agents in the internet of things. IEEE
Transactions on Engineering Management.
Frieder, S., Pinchetti, L., Griffiths, R.-R., Salvatori, T., Lukasiewicz, T., Petersen, P. C.,
Chevalier, A., & Berner, J. (2023). Mathematical capabilities of ChatGPT. arXiv preprint
arXiv:2301.13867.
Gefen, D., Karahanna, E., & Straub, D. W. (2003a). Inexperience and experience with
online stores: The importance of TAM and trust. IEEE Transactions on Engineering
Management, 50(3), 307–321.
Gefen, D., Karahanna, E., & Straub, D. W. (2003b). Trust and TAM in online shopping: An
integrated model. MIS Quarterly, 27(1), 51–90.
George, A. S., & George, A. H. (2023). A review of ChatGPT AI's impact on several
business sectors. Partners Universal International Innovation Journal, 1(1), 9–23.
Granić, A., & Marangunić, N. (2019). Technology acceptance model in educational
context: A systematic literature review. British Journal of Educational Technology.
Hair, J. F., Jr., Hult, G. T. M., Ringle, C., & Sarstedt, M. (2016). A primer on partial least
squares structural equation modeling (PLS-SEM). Sage publications.
Hajian, S., Tassa, T., & Bonchi, F. (2016). Individual privacy in social influence networks.
Social Network Analysis and Mining, 6, 1–14.
Haleem, A., Javaid, M., & Singh, R. P. (2022). An era of ChatGPT as a significant
futuristic support tool: A study on features, abilities, and challenges. BenchCouncil
Transactions on Benchmarks, Standards and Evaluations, 2(4), Article 100089.
Hsu, C.-L., & Lu, H.-P. (2004). Why do people play on-line games? An extended TAM
with social influences and flow experience. Information & Management, 41(7), 853–868.
Iku-Silan, A., Hwang, G.-J., & Chen, C.-H. (2023). Decision-guided chatbots and cognitive
styles in interdisciplinary learning. Computers & Education, Article 104812.
Jahangir, N., & Begum, N. (2008). The role of perceived usefulness, perceived ease of
use, security and privacy, and customer attitude to engender customer adaptation in
the context of electronic banking. African Journal of Business Management, 2(2), 32.
Joinson, A. N., Reips, U.-D., Buchanan, T., & Schofield, C. B. P. (2010). Privacy, trust, and
self-disclosure online. Human-Computer Interaction, 25(1), 1–24.
Karahanna, E., & Straub, D. W. (1999). The psychological origins of perceived usefulness
and ease-of-use. Information & Management, 35(4), 237–250.
Kesharwani, A., & Singh Bisht, S. (2012). The impact of trust and perceived risk on
internet banking adoption in India: An extension of technology acceptance model.
International Journal of Bank Marketing, 30(4), 303–322.
Kim, J., & Gambino, A. (2016). Do we trust the crowd or information system? Effects of
personalization and bandwagon cues on users' attitudes and behavioral intentions
toward a restaurant recommendation website. Computers in Human Behavior, 65,
369–379.
Kim, A. J.-Y., & Ko, E.-J. (2010). The impact of design characteristics on brand attitude
and purchase intention: Focus on luxury fashion brands. Journal of the Korean Society
of Clothing and Textiles, 34(2), 252–265.
Kocoń, J., Cichecki, I., Kaszyca, O., Kochanek, M., Szydło, D., Baran, J., Bielaniewicz, J.,
Gruza, M., Janz, A., & Kanclerz, K. (2023). ChatGPT: Jack of all trades, master of none.
Information Fusion, Article 101861.
Kung, T. H., Cheatham, M., Medenilla, A., Sillos, C., De Leon, L., Elepaño, C.,
Madriaga, M., Aggabao, R., Diaz-Candido, G., & Maningo, J. (2023). Performance of
ChatGPT on USMLE: Potential for AI-assisted medical education using large
language models. PLoS Digital Health, 2(2), Article e0000198.
Limbu, Y. B., Wolf, M., & Lunsford, D. (2012). Perceived ethics of online retailers and
consumer behavioral intentions: The mediating roles of trust and attitude. Journal of
Research in Interactive Marketing.
Lund, B. D., & Wang, T. (2023). Chatting about ChatGPT: How may AI and GPT impact
academia and libraries. Library Hi Tech News.
Malhotra, Y., & Galletta, D. F. (1999). Extending the technology acceptance model to
account for social inuence: Theoretical bases and empirical validation. In
Proceedings of the 32nd annual Hawaii international conference on systems sciences.
1999. HICSS-32. Abstracts and CD-ROM of full papers.
Martins, C., Oliveira, T., & Popovič, A. (2014). Understanding the internet banking
adoption: A unified theory of acceptance and use of technology and perceived risk
application. International Journal of Information Management, 34(1), 1–13.
Min, S., So, K. K. F., & Jeong, M. (2021). Consumer adoption of the Uber mobile
application: Insights from diffusion of innovation theory and technology acceptance
model. In Future of tourism marketing (pp. 215). Routledge.
Mollick, E. R., & Mollick, L. (2022). New modes of learning enabled by AI chatbots:
Three methods and assignments. Available at SSRN.
Morse, J. M. (1991). Approaches to qualitative-quantitative methodological
triangulation. Nursing Research, 40(2), 120–123.
Morse, J. M. (2016). Mixed method design: Principles and procedures (Vol. 4). Routledge.
OpenAI. (2023). GPT-4 technical report (p. 99). Cornell University. https://doi.org/
10.48550/arXiv.2303.08774
Patel, S. B., & Lam, K. (2023). ChatGPT: The future of discharge summaries? The Lancet
Digital Health, 5(3), e107–e108.
Patel, K. J., & Patel, H. J. (2018). Adoption of internet banking services in Gujarat: An
extension of TAM with perceived security and social influence. International Journal
of Bank Marketing, 36(1), 147–169.
Prislin, R., & Wood, W. (2005). Social influence in attitudes and attitude change.
Qin, L., Kim, Y., Tan, X., & Hsu, J. (2009). The effects of privacy concern and social
influence on user acceptance of online social networks.
Raman, R., Mandal, S., Das, P., Kaur, T., Sanjanasri, J., & Nedungadi, P. (2023).
University students as early adopters of ChatGPT: Innovation diffusion study.
Rao, V., & Woolcock, M. (2003). Integrating qualitative and quantitative approaches in
program evaluation. In The impact of economic policies on poverty and income distribution:
Evaluation techniques and tools (pp. 165–190).
Ringle, C. M., Wende, S., & Becker, J.-M. (2015). SmartPLS 3. Bönningstedt: SmartPLS.
Retrieved July 15, 2016.
Ronaghi, M. H., & Mosakhani, M. (2022). The effects of blockchain technology adoption
on business ethics and social sustainability: Evidence from the Middle East.
Environment, Development and Sustainability. https://doi.org/10.1007/s10668-021-01729-x
Roy, M. C., Dewit, O., & Aubert, B. A. (2001). The impact of interface usability on trust in
web retailers. Internet Research.
Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional
assessments in higher education? Journal of Applied Learning and Teaching, 6(1).
Saadé, R., & Bahli, B. (2005). The impact of cognitive absorption on perceived usefulness
and perceived ease of use in on-line learning: An extension of the technology
acceptance model. Information & Management, 42(2), 317–327.
Salloum, S. A., Alhamad, A. Q. M., Al-Emran, M., Monem, A. A., & Shaalan, K. (2019).
Exploring students' acceptance of E-learning through the development of a
comprehensive technology acceptance model. IEEE Access, 7, 128445–128462.
Schoonenboom, J., & Johnson, R. B. (2017). How to construct a mixed methods research
design. Kolner Zeitschrift fur Soziologie und Sozialpsychologie, 69(Suppl 2), 107.
Selya, A. S., Rose, J. S., Dierker, L. C., Hedeker, D., & Mermelstein, R. J. (2012).
A practical guide to calculating Cohen's f², a measure of local effect size, from PROC
MIXED. Frontiers in Psychology, 3, 111.
Shawar, B. A., & Atwell, E. (2007). Chatbots: Are they really useful? Journal for Language
Technology and Computational Linguistics, 22(1), 29–49.
Shin, D.-H. (2010). The effects of trust, security and privacy in social networking: A
security-based approach to understand the pattern of adoption. Interacting with
Computers, 22(5), 428–438.
Sicari, S., Rizzardi, A., Grieco, L. A., & Coen-Porisini, A. (2015). Security, privacy and
trust in Internet of Things: The road ahead. Computer Networks, 76, 146–164.
Sidi, Y., Shamir-Inbal, T., & Eshet-Alkalai, Y. (2023). From face-to-face to online:
Teachers' perceived experiences in online distance teaching during the Covid-19
pandemic. Computers & Education, Article 104831.
Suh, B., & Han, I. (2003). The impact of customer trust and perception of security control
on the acceptance of electronic commerce. International Journal of Electronic
Commerce, 7(3), 135–161.
Teddlie, C., & Tashakkori, A. (2011). Mixed methods research. In The Sage handbook of
qualitative research (4th ed., pp. 285–300).
Trafimow, D., & Finlay, K. A. (1996). The importance of subjective norms for a minority
of people: Between subjects and within-subjects analyses. Personality and Social
Psychology Bulletin, 22(8), 820–828.
van Dis, E. A., Bollen, J., Zuidema, W., van Rooij, R., & Bockting, C. L. (2023). ChatGPT:
Five priorities for research. Nature, 614(7947), 224–226.
Venkatesh, V. (2000). Determinants of perceived ease of use: Integrating control,
intrinsic motivation, and emotion into the technology acceptance model. Information
Systems Research, 11(4), 342–365.
Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a research agenda
on interventions. Decision Sciences, 39(2), 273–315.
Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology
acceptance model: Four longitudinal field studies. Management Science, 46(2),
186–204.
Venkatesh, V., Thong, J. Y., & Xu, X. (2012). Consumer acceptance and use of
information technology: Extending the unified theory of acceptance and use of
technology. MIS Quarterly, 157–178.
Whitford, E. (2022). A computer can now write your college essay, maybe better than.
Wu, L., & Chen, J.-L. (2005). An extension of trust and TAM model with TPB in the initial
adoption of on-line tax: An empirical study. International Journal of Human-Computer
Studies, 62(6), 784–808.
Wu, S., Irsoy, O., Lu, S., Dabravolski, V., Dredze, M., Gehrmann, S., Kambadur, P.,
Rosenberg, D., & Mann, G. (2023). BloombergGPT: A large language model for finance.
arXiv preprint arXiv:2303.17564.
Yang, H.-d., & Yoo, Y. (2004). It's all about attitude: Revisiting the technology
acceptance model. Decision Support Systems, 38(1), 19–31.
Yu, J., Ha, I., Choi, M., & Rho, J. (2005). Extending the TAM for a t-commerce.
Information & Management, 42(7), 965–976.
Zhou, T., & Li, H. (2014). Understanding mobile SNS continuance usage in China from
the perspectives of social influence and privacy concern. Computers in Human
Behavior, 37, 283–289.
Dr. Hayder Albayati is currently an assistant professor in the
Global Convergence Management department at the College of
Endicott, Woosong University. He obtained his Ph.D. in the
Global Information Telecommunication Technology Program
(GITTP) at the Korea Advanced Institute of Science and Tech-
nology (KAIST) in 2021. He received his MS in Information &
Communication Technology (ICT convergence) at Soongsil
University in 2017. He holds a bachelor's degree in Computer
Science from Al Mansoor University College. His research in-
terests include blockchain technology, e-government, IoT, AI,
AR, VR, data analytics, big data, business development, smart
cities, and technology transfer and convergence. E-mail:
hayder1111@wsu.ac.kr; hayder1111@gmail.com.
H. Albayati
... Trust, a psychological construct with varying definitions across disciplines and contexts, has become an increasingly important factor to understand in student interactions with artificial intelligence (AI) in education, where students may rely on AI systems for guidance, feedback, and information under conditions of uncertainty. In educational settings, trust is known to influence students' engagement, perceptions, and outcomes across student-student (Poort et al., 2022), student-instructor (Hiatt et al., 2023;Wooten & McCroskey, 1996), student-institution (Latif et al., 2021;Payne et al., 2023), and student-technology relationships (Albayati, 2024;Khosravi et al., 2022;Nazaretsky et al., 2025;Ranalli, 2021). As educational technologies have become central to students' learning experiences, understanding the development of trust between students and these systems has become important for their effective use and adoption (Albayati, 2024;Khosravi et al., 2022;Nazaretsky et al., 2025;Ranalli, 2021). ...
... In educational settings, trust is known to influence students' engagement, perceptions, and outcomes across student-student (Poort et al., 2022), student-instructor (Hiatt et al., 2023;Wooten & McCroskey, 1996), student-institution (Latif et al., 2021;Payne et al., 2023), and student-technology relationships (Albayati, 2024;Khosravi et al., 2022;Nazaretsky et al., 2025;Ranalli, 2021). As educational technologies have become central to students' learning experiences, understanding the development of trust between students and these systems has become important for their effective use and adoption (Albayati, 2024;Khosravi et al., 2022;Nazaretsky et al., 2025;Ranalli, 2021). Without proper understanding of trust formation, students may develop inappropriate levels of trust, either overtrusting AI systems and becoming overly reliant on potentially inaccurate information, or undertrusting these systems and rejecting helpful educational tools entirely (Lyu et al., 2025). ...
... Distrust has been defined as "a confident negative expectation regarding another's conduct" and reflects a trustor's assessment of potential risks involving the trustee's intentions or abilities (Lewicki et al., 1998). Trust and distrust can each individually influence how AI in education is perceived and engaged with, leading to varying degrees of acceptance toward these tools from students and instructors (Albayati, 2024;Bergdahl & Sjöberg, 2025;Bernabei et al., 2023;Herzallah & Makaldy, 2025;Kajiwara & Kawabata, 2024;Lee & Song, 2024;Mustofa et al., 2025;Nazaretsky et al., 2025;Saihi et al., 2024;Zhang et al., 2024). A balanced approach involving appropriate levels of trust and distrust is optimal, while extremes in either direction can be problematic (Lyu et al., 2025). ...
Preprint
Full-text available
As AI chatbots become increasingly integrated in education, students are turning to these systems for guidance, feedback, and information. However, the anthropomorphic characteristics of these chatbots create ambiguity regarding whether students develop trust toward them as they would a human peer or instructor, based in interpersonal trust, or as they would any other piece of technology, based in technology trust. This ambiguity presents theoretical challenges, as interpersonal trust models may inappropriately ascribe human intentionality and morality to AI, while technology trust models were developed for non-social technologies, leaving their applicability to anthropomorphic systems unclear. To address this gap, we investigate how human-like and system-like trusting beliefs comparatively influence students' perceived enjoyment, trusting intention, behavioral intention to use, and perceived usefulness of an AI chatbot - factors associated with students' engagement and learning outcomes. Through partial least squares structural equation modeling, we found that human-like and system-like trust significantly influenced student perceptions, with varied effects. Human-like trust more strongly predicted trusting intention, while system-like trust better predicted behavioral intention and perceived usefulness. Both had similar effects on perceived enjoyment. Given the partial explanatory power of each type of trust, we propose that students develop a distinct form of trust with AI chatbots (human-AI trust) that differs from human-human and human-technology models of trust. Our findings highlight the need for new theoretical frameworks specific to human-AI trust and offer practical insights for fostering appropriately calibrated trust, which is critical for the effective adoption and pedagogical impact of AI in education.
... Accordingly, this study is motivated by the sparking ChatGPT to investigate why and how individuals accept such technology through a thorough empirical examination of real users [10]. The significance of this research lies in its standing as one of the pioneering endeavors in exploring this particular context, which investigates many motivational adoption factors [11]. Furthermore, the focus of the study is on the higher education's students since higher education institutes represent a monarchy deeply influenced by the evolution of such technology [9]. ...
... Albayati [11] • Privacy, security, social influence, and trust may be the most influential factors in adopting ChatGPT. ...
... The authors in Albayati [11] delved into the factors influencing users' acceptance of ChatGPT as a daily reference tool. The study employed an integrated model that combines the TAM model with four novel external components: privacy, security, social influence, and trust [11]. ...
Article
Full-text available
Several artificial intelligence generative language models have been developed recently. Many people in different contexts have started using them, such as education, industry, and even content creation. This study focuses on investigating how higher education students have been using ChatGPT as an example of a generative artificial intelligence tool. The context of the study was chosen due to the many voices against or in favor of using ChatGPT in higher education institutions. We deployed a mixed method to collect and analyze the data, including open-ended and closed-ended question surveys, to investigate the motivations and future expectations of students who have been using the ChatGPT tool. The research methodology comprises two main components: the initial phase involves conducting statistical analyses on questions about each of the 15 adoption factors. Responses concerning each factor were meticulously compiled and transferred into an Excel spreadsheet. Subsequently, the frequency of responses for each choice was tallied to discern prevailing trends and preferences among respondents. The second phase focuses on analyzing the open-ended questions' responses that forecast the future of ChatGPT as perceived by the participants. The data gathered from open-ended questions was analyzed thematically using Miles and Huberman's approach. Our findings reveal that students generally find using ChatGPT to be a helpful tool in their higher education institutions. The findings show that ease of use, perceived value, trialability, observability, relative advantage, social impact, and network effect are the most important factors behind their adoption decision. The findings also revealed that a significant number of students are still confused about ethical concerns and report the need for regulations when using ChatGPT. These findings are important for educators, policymakers, and technology designers who want to make the most of generative AI in education. 
Future research, however, is yet to be conducted to understand its ethical implications, long-term effects, and how to best incorporate it into teaching practices.
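The tallying step the abstract describes (counting how often each response choice was selected per adoption factor) can be sketched in a few lines. This is a minimal illustration with hypothetical Likert-style responses, not the study's actual data; the factor name and response labels are assumptions:

```python
from collections import Counter

# Hypothetical responses for one adoption factor ("ease of use"),
# standing in for one column of the Excel tally described in the study.
ease_of_use = ["agree", "strongly agree", "agree", "neutral",
               "agree", "strongly agree", "disagree", "agree"]

tally = Counter(ease_of_use)                      # frequency of each choice
most_common_choice, count = tally.most_common(1)[0]
share = count / len(ease_of_use)                  # prevailing preference share
```

Repeating this per factor yields the frequency tables from which the prevailing trends were read off.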
... Authors in [7] emphasized that ChatGPT could assist students in developing various skills, including reading, writing, information analysis, critical thinking, problem-solving, generating practice problems, and research. The findings of a study [24] concurred with this view, highlighting that students eagerly take advantage of this tool in multiple areas, such as reviewing concepts from academic articles, obtaining general information, seeking further elaborations, and using it as a quick reference tool for their studies or broader knowledge fields [25][26][27][28]. ...
... The study began with an online survey to collect data on students' user experiences with ChatGPT and their feedback. A customized questionnaire, based on previous research by [24,42,43], was created for this study. The questionnaire consisted of three main parts: the first collected information about the participants' study program, the second asked questions related to our research goals, and the third allowed for open comments. ...
... Acceptance Model [66]. Perceived usefulness is the essence of building up one's attitude and intention toward the use of a brand-new technology [24]. When using ChatGPT to write assignments, nursing students generated ideas with ChatGPT, and follow-up prompts were used to shape the ChatGPT output. ...
Article
Full-text available
This study investigates students’ perceptions and satisfaction with using GenAI-ChatGPT tools in their learning processes, aiming to provide a comprehensive understanding of how students perceive and integrate ChatGPT into their educational activities. A mixed-methods approach was adopted, combining an online survey of 1,963 respondents who reported using generative AI tools like ChatGPT for learning, with in-depth interviews involving 14 students to capture qualitative insights into their experiences and perceptions. The study commenced with the online survey to collect data on students’ experiences and feedback regarding ChatGPT. The internal consistency of the survey instruments was evaluated using Cronbach’s Alpha, with values exceeding 0.7, indicating high reliability. A right-tailed t-test across all three scales yielded a p-value of less than 0.001, suggesting that students had a positive user experience, perceived the tools as useful, and expressed overall satisfaction with GenAI-ChatGPT tools. Following the quantitative survey, a qualitative study involving focus groups was conducted to explore students’ perceptions and concerns about using GenAI-ChatGPT tools. The qualitative analysis identified five key themes: purposes, user experience, usefulness of GenAI-ChatGPT tools, overall satisfaction, and students’ attitudes and reactions. These focus groups provided nuanced insights into how students interact with and perceive the value of GenAI-ChatGPT tools in their academic endeavors. The findings indicate that students exhibit a positive attitude towards GenAI-ChatGPT tools, recognizing its value in enhancing quick thinking and responsiveness. The statistically significant differences between mean scores and the test value support these positive perceptions. This study offers valuable insights for the development and refinement of GenAI-ChatGPT tools to better meet student needs and expectations.
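The reliability and significance checks this abstract reports (Cronbach's Alpha above 0.7, a right-tailed one-sample t-test) can be reproduced in a short sketch. The data below are hypothetical 5-point Likert scores, not the study's 1,963-respondent dataset, and the neutral midpoint of 3 as the test value is an assumption:

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 6 respondents x 4 items on one scale
responses = np.array([
    [4, 5, 4, 4],
    [3, 4, 3, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 4, 5],
    [3, 3, 3, 3],
])

alpha = cronbach_alpha(responses)                # > 0.7 counts as reliable

# Right-tailed one-sample t-test of mean scale scores against a neutral midpoint of 3
scale_means = responses.mean(axis=1)
t_stat, p_two = stats.ttest_1samp(scale_means, popmean=3.0)
p_right = p_two / 2 if t_stat > 0 else 1 - p_two / 2
```

A right-tailed p-value below the chosen threshold, as in the study, would indicate scores significantly above the neutral midpoint.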
... One point to note here is that the number of students using ChatGPT is increasing. There is research evidence that the ease-of-use attribute of this chatbot is a major factor driving the use of ChatGPT among university students (Abdaljaleel et al., 2024; Albayati, 2024; Habes et al., 2024). Of course, it is positive that ChatGPT has a user-friendly interface, but the fact that it communicates false and/or inaccurate information is unquestionably problematic. ...
Article
Full-text available
Since its launch in November 2022, Chat Generative Pretrained Transformer (ChatGPT) has rapidly become an attractive artificial intelligence (AI) tool for students around the globe. This chatbot is becoming more and more popular because of its ability to generate science-related texts practically indistinguishable from human-produced texts. While a major shortcoming of texts generated by ChatGPT is that these can contain false and/or inaccurate scientific information, very little is known about how to promote critical reading of science-related texts produced by this AI tool. We aimed at showing that the construction of scientific argument maps—visual representations of scientific argument structure—can be used to foster critical reading of these texts. The data were drawn from the argument diagrams constructed by 44 undergraduates (27 females and 17 males, 17–23 years old) during an introductory science course and audio recordings of student discussions as part of the process of co-creation of argument maps. The findings suggest that argument mapping effectively supported participants’ critical reading of ChatGPT‐generated texts while engaging them in the construction of arguments, the anticipation of counterarguments, and the production of rebuttals. This study contributes to the literature on scientific AI literacy—the combination of scientific literacy with AI literacy—by providing insights into how to engage students in the critical reading of science-related texts produced by generative AI technology.
... On the positive side, students have found AI tools like ChatGPT, Grammarly, Zotero, and Speechify immensely helpful for various academic tasks. ChatGPT and Grammarly are particularly valued for improving sentence structure and clarity of expression, making written communication more scholarly or concise [69]. These tools are especially beneficial for international students and those for whom English is a second language, as they help rewrite and condense content [67]. ...
Article
Full-text available
The emergence of generative artificial intelligence (AI) technologies, such as large language models (LLMs) like ChatGPT, has precipitated a paradigm shift in the realms of academic writing, plagiarism, and intellectual property. This article explores the evolving landscape of English composition courses, traditionally designed to develop critical thinking through writing. As AI becomes increasingly integrated into the academic sphere, it necessitates a reevaluation of originality in writing, the purpose of learning research and writing, and the frameworks governing intellectual property (IP) and plagiarism. The paper commences with a statistical analysis contrasting the actual use of LLMs in academic dishonesty with educator perceptions. It then examines the repercussions of AI-enabled content proliferation, referencing the limitation of three books self-published per day in September 2023 by Amazon due to a suspected influx of AI-generated material. The discourse extends to the potential of AI in accelerating research akin to the contributions of digital humanities and computational linguistics, highlighting its accessibility to the general public. The article further delves into the implications of AI on pedagogical approaches to research and writing, contemplating its impact on communication and critical thinking skills, while also considering its role in bridging the digital divide and socio-economic disparities. Finally, it proposes revisions to writing curricula, adapting to the transformative influence of AI in academic contexts.
... In the milieu of technology, several scholars have reported the relationship between trust and perceived risk (such as 48,49 ). The degree of correlation between an individual's trust and a technology's capabilities may impact the actual consequences of technology usage. 50 In the extant literature, several studies have argued that trust negatively influences perceived performance risk. ...
Article
The use of augmented reality (AR) in e-commerce has increased manifold in the recent past, aiming to enhance customers' online shopping experience. However, research to comprehend the nuances of consumers' attitudes and experiences toward the adoption of AR applications is still in its infancy. Against this backdrop, the present study aims to investigate the use of AR by e-commerce platforms to enhance consumers' willingness to make online purchases using AR applications. The present study develops and validates a conceptual model drawing on stimulus-organism-response (SOR), Push-Pull-Mooring (P-P-M), and Spatial Presence theories. Data were collected from 466 consumers in Oman and hypotheses tested utilizing PLS-SEM. The results reveal that hedonic value (engendered from spatial presence, augmentation, and interactivity) enhances consumers' online purchase intention with AR applications, while privacy concerns adversely affect the intentions. However, the perceived performance risk did not show any variations. In addition, this research confirmed the moderating role of innovativeness on the relationships between perceived herd, trust, and performance risk with online shopping intentions. The present study offers several theoretical implications by unraveling the underlying mechanism of AR technology adoption for online purchases. The study outlines valuable insights for AR app developers, marketing managers, and e-commerce operators.
... In total, 87.8% of the students reported that they used ChatGPT weekly to compensate for missed class lectures and to fill knowledge gaps. Albayati (2024) found that students deem ChatGPT to be beneficial in terms of ease of use, but students also acknowledged drawbacks such as privacy and security concerns. Students expressed that ChatGPT's inability to replicate human interaction and mentorship poses challenges, particularly in fostering academic growth and emotional support (Rahman et al., 2023;Sila et al., 2023). ...
Article
Full-text available
While research articles on students’ perceptions of large language models such as ChatGPT in language learning have proliferated since ChatGPT’s release, few studies have focused on these perceptions among English as a foreign language (EFL) university students in South America or their application to academic writing in a second language (L2) for STEM classes. ChatGPT can generate human-like text that worries teachers and researchers. Academic cheating, especially in the language classroom, is not new; however, the concept of AI-giarism is novel. This study evaluated how 56 undergraduate university students in Ecuador viewed GenAI use in academic writing in English as a foreign language. The research findings indicate that students worried more about hindering the development of their own writing skills than about the risk of being caught and facing academic penalties. Students believed that ChatGPT-written works are easily detectable, and institutions should incorporate plagiarism detectors. Submitting chatbot-generated text in the classroom was perceived as academic dishonesty, and fewer participants believed that submitting an assignment machine-translated from Spanish to English was dishonest. The results of this study will inform academic staff and educational institutions about how Ecuadorian university students perceive the overall influence of GenAI on academic integrity within the scope of academic writing, including reasons why students might rely on AI tools for dishonest purposes and how they view the detection of AI-based works. Ideally, policies, procedures, and instruction should prioritize using AI as an emerging educational tool and not as a shortcut to bypass intellectual effort. Pedagogical practices should minimize factors that have been shown to lead to the unethical use of AI, which, in our survey, were academic pressure and lack of confidence.
By and large, these factors can be mitigated with approaches that prioritize the process of learning rather than the production of a product.
... Effort expectancy, which measures how easy a technology is to use, plays a significant role in its adoption, as users are more likely to embrace tools they perceive as simple and userfriendly, according to studies like Herrero et al. (2017), Venkatesh et al. (2003), and Albayati (2024). Research on smart apps has explored this concept, with Hanif et al. (2022) finding a strong link between effort expectancy and mobile app adoption, and Ho et al. (2021) showing that user-friendly smart itinerary-planning apps increase travelers' willingness to use them. ...
Article
Purpose This study explores tourist adoption of ChatGPT-powered digital itineraries. It investigates the factors influencing their intention to use these AI-driven travel planning applications by building upon the Unified Theory of Acceptance and Use of Technology (UTAUT) and incorporating experiential consumption theory, specifically focusing on utilitarian and hedonic values. Design/methodology/approach This research surveyed 384 travelers who use mobile applications like ChatGPT for tourism, employing an online survey and purposive sampling. PLS-SEM was used for data analysis. Findings The results confirmed a significant relationship between the intention to adopt ChatGPT’s digitalized itinerary and all UTAUT dimensions, with the exception of the facilitating condition. Both the hedonic and utilitarian values of personal consumption significantly motivate travelers in their behavioral intention to adopt ChatGPT’s digitalized itinerary. Practical implications This study advises AI travel tool developers and marketers to prioritize both utilitarian and hedonic values, such as AR integration, and user-friendly interfaces. Social influence should be leveraged through in-app sharing and communities. Ethical considerations, including data privacy and algorithmic fairness, are crucial, along with adherence to data protection laws. Originality/value This study investigates how travelers adopt AI-generated digital itineraries (like those from ChatGPT), filling a gap in research that often focuses on general smart travel app adoption. It develops a new model to explain user intentions, providing novel insights into this growing trend.
Article
Full-text available
Given the risks and ethical concerns of integrating generative artificial intelligence (GenAI) into education, scholars have argued for a critical-reflective education approach that addresses the long-term implications of GenAI. However, empirical research on GenAI education approaches is scarce. This study investigates the prevalence of protective-preventive, critical-reflective, and creative-productive GenAI education approaches in highly digitized Swiss upper secondary schools and how students’ experiences of these approaches relate to two aspects of their digital agency: GenAI confidence and GenAI responsibility. Using data from 2357 students, the results showed that the critical-reflective approach was the most commonly experienced and significantly predicted students’ GenAI confidence and responsibility. The creative-productive approach positively and significantly predicted GenAI confidence but not responsibility, while the protective-preventive approach, although the second most common approach, was not significantly related to either outcome. However, these approaches explained little variance in the dependent variables, suggesting that they may not yet be effectively implemented or that digital agency is primarily developed outside schools. Analysis of the control variables showed that identifying as a female had a negative significant effect on GenAI confidence and a positive significant effect on GenAI responsibility. The findings highlight the importance of adopting a critical-reflective GenAI education approach with attention to gender issues in fostering responsible and confident digital citizens.
Article
Full-text available
Transformative artificially intelligent tools, such as ChatGPT, designed to generate sophisticated text indistinguishable from that produced by a human, are applicable across a wide range of contexts. The technology presents opportunities as well as, often ethical and legal, challenges, and has the potential for both positive and negative impacts for organisations, society, and individuals. Offering multi-disciplinary insight into some of these, this article brings together 43 contributions from experts in fields such as computer science, marketing, information systems, education, policy, hospitality and tourism, management, publishing, and nursing. The contributors acknowledge ChatGPT’s capabilities to enhance productivity and suggest that it is likely to offer significant gains in the banking, hospitality and tourism, and information technology industries, and enhance business activities, such as management and marketing. Nevertheless, they also consider its limitations, disruptions to practices, threats to privacy and security, and consequences of biases, misuse, and misinformation. However, opinion is split on whether ChatGPT’s use should be restricted or legislated. Drawing on these contributions, the article identifies questions requiring further research across three thematic areas: knowledge, transparency, and ethics; digital transformation of organisations and societies; and teaching, learning, and scholarly research. The avenues for further research include: identifying skills, resources, and capabilities needed to handle generative AI; examining biases of generative AI attributable to training datasets and processes; exploring business and societal contexts best suited for generative AI implementation; determining optimal combinations of human and generative AI for various tasks; identifying ways to assess accuracy of text produced by generative AI; and uncovering the ethical and legal issues in using generative AI across different contexts.
Article
Full-text available
Open Artificial Intelligence (AI) published an AI chatbot tool called ChatGPT at the end of November 2022. The Generative Pre-trained Transformer (GPT) architecture is the foundation of ChatGPT. On the internet, ChatGPT has been growing rapidly. This chatbot enables users to converse with the AI by inputting prompts, and it is based on OpenAI’s language model. Although ChatGPT is fantastic and produces exciting results for writing tales, poetry, songs, essays, and other things, it has certain restrictions. Users may ask the bot questions, and it will reply with pertinent, convincing subjects and replies. ChatGPT has now risen to the top of several academic agendas. Administrators create task teams and hold institution-wide meetings to react to the tools, with most of the advice being to adopt this technology. This paper gives a brief overview of ChatGPT and the need for it. Further, various progressive workflow processes of the ChatGPT tool are presented diagrammatically. Specific features and capabilities of the ChatGPT support system are studied in this paper. Finally, we identified and discussed the significant roles of ChatGPT in the current scenario. The neural language models that form the foundation of character AI have been developed from the bottom up with conversations in mind. This means that the programme uses deep learning methods to analyse and produce text. The model “understands” the subtleties of human-produced natural language using vast amounts of data from the internet.
Article
Full-text available
Almost every few decades, there is an innovation that completely changes the world. We mean innovations that play a vital role in raising the standard of living, such as the internet or airplanes. What will be the next definitive moment in history? It's here, and it's called ChatGPT. It was created by the artificial intelligence research firm OpenAI. ChatGPT is a natural language processing (NLP) model that combines GPT-2, a transformer-based language model developed by OpenAI, with supervised and reinforcement learning techniques to fine-tune it (an approach to transfer learning) on the GPT-3 group of large language models developed by OpenAI. The model enables users to interact naturally with an AI system through text-based conversations. It could be used for customer service applications and to create virtual assistants for voice and text conversations. ChatGPT also provides features such as topic detection, emotion detection, and sentiment analysis capabilities to help users understand their conversation partner better. Additionally, it has the capability to generate multiple conversation threads in order to create more realistic interactions between user and bot. We will also explore some of the challenges facing AI development and how we can overcome them. This article is about the recent developments in the field of artificial intelligence (AI). AI has advanced significantly over recent years, with a wide range of applications and new technologies being developed. We discuss a few of these advancements and how they can be used to improve human lives. In this paper, we discuss how ChatGPT—a Natural Language Generation (NLG) model powered by OpenAI's GPT-3 technology—can enhance e-commerce via chat, as well as other sectors such as education, entertainment, finance, health, news, and productivity. We will analyze the current use cases of ChatGPT in these sectors and explore possible future applications.
We will also discuss how this technology can be used to create more personalized content for users. Finally, we will look at how ChatGPT can help to make customer service more efficient and effective for businesses.
Article
Full-text available
Non-Fungible Tokens (NFTs) have reached enormous levels of interest all over the world; the attraction has been huge beyond buying or creating an NFT. However, actual use requires consideration of many aspects and sources to make decisions and engage in the NFT market. Moving forward and selecting what to create or where to buy, it is necessary to assess the NFT and whether it is worth investing in or not. On the other hand, the Metaverse has become a trending topic and market interest has increased dramatically. However, NFTs are crucial to enabling the Metaverse; NFTs have proven essential to the success of Metaverse adoption. In conclusion, this choice is complicated, especially when it involves extensive knowledge and information from different sources and perspectives (Engineering and Social Science). Some aspects are related to customer experience and trust, and others are more likely to be asset-related. As we look at the blockchain market, we distinguish a variety of platforms and applications that can be used conveniently, presenting numerous options and possibilities. That raises some questions to assist engagement in the NFT metaverse: When is the best moment to engage in the NFT metaverse? Is trading or engaging in the NFT metaverse trustworthy? Is it legal to use and trade NFTs, and is there any need for governments to get involved? What NFT piece should one create, or at what price should one buy? What is the best and most secure platform to use? Assessing NFTs requires deep research into different sources of controversial information and data. However, no such reference or study covers these considerations in detail; the lack of trusted resources and knowledge impacts user experience and engagement. This paper contributes to social science studies by performing a multi-factor (comprehensive) analysis that includes the NFT Metaverse engagement decision and user behavior.
We use an extended model of the theory of planned behavior (TPB), proposing a model that includes external factors to help identify the variables that influence engagement with NFTs in the metaverse. This comprehensive study has a multi-perspective approach: customer perspective, social perspective, technology perspective, legal perspective, and market perspective. We present extensive knowledge and information to serve as a helpful piece of awareness for everyone who intends to buy, invest in, or create an NFT. This study uses a quantitative analysis method and provides meaningful results explaining this dilemma and the reasons behind the massive adoption of NFTs and their fluctuation worldwide. This work helps decision-makers, creators, and investors consider new development perspectives in the NFT metaverse. The methodology used primary survey data, analyzed with SmartPLS 4 via variance-based structural equation modeling (SEM) using the partial least squares path modeling (PLS) method.
Article
Full-text available
We evaluated the performance of a large language model called ChatGPT on the United States Medical Licensing Exam (USMLE), which consists of three exams: Step 1, Step 2CK, and Step 3. ChatGPT performed at or near the passing threshold for all three exams without any specialized training or reinforcement. Additionally, ChatGPT demonstrated a high level of concordance and insight in its explanations. These results suggest that large language models may have the potential to assist with medical education, and potentially, clinical decision-making.
Article
The urgent shift to online distance teaching and learning during the Covid-19 pandemic presented teachers with unique pedagogical, technological, and psychological challenges. The aim of this study was to map the main positive and negative experiences of teachers during this transition, as well as to examine intra- and interpersonal factors that affected teachers' ability to cope effectively with the challenges of online distance teaching. We used a mixed-method approach that combined qualitative (interviews) and quantitative (questionnaires) analyses. The interviews were analyzed using a grounded theory approach, specifically a bottom-up analysis, which led to the identification of five primary categories reflecting teachers' main concerns in online distance teaching (i.e., social, emotional, cognitive, pedagogical, and system support). The two most prominent categories were pedagogy and emotions, illustrating their centrality in teachers' experiences. A regression analysis of the questionnaires' data revealed that the two main variables which predicted both positive and negative experiences in online distance teaching were self-efficacy and teachers' attitudes towards technology integration in teaching. Findings of this study allow formulation of guidelines to promote factors related to positive experiences in online distance teaching.
Article
Research on chatbots has distinct interdisciplinary features and can be generalized to learning in various fields. However, related research is still fragmented across disciplines and applications. This study aimed to develop a decision-guided chatbot for interdisciplinary learning. To investigate the effects of this learning model on learning achievements, learning motivation, collective efficacy, classroom engagement, satisfaction with the learning approach, and cognitive load of learners with different cognitive styles, this study was conducted in an environment education course in a junior high school in northern Taiwan. A total of 71 learners from two classes were recruited in this study; the experimental group, a class of 35 learners, adopted a decision-guided chatbot for learning, while the control group, a class of 36 learners, adopted conventional technology-assisted learning. The results showed that the experimental group significantly outperformed the control group on learning achievements, extrinsic motivation, collective efficacy, cognitive engagement, emotional engagement, and satisfaction with the learning approach. Moreover, the experimental group perceived lower mental efforts. In terms of cognitive styles, analytical learners had significantly higher learning achievements than intuitive learners. In the control group, the analytical learners had higher cognitive engagement than intuitive learners. In the experimental group, analytical learners had significantly lower mental load than intuitive learners. In addition, the analytical learners and intuitive learners in the experimental group respectively perceived higher mental load than those in the control group.
Article
Conversational AI is a game-changer for science. Here’s how to respond.