Computers and Education: Artificial Intelligence 6 (2024) 100203
Available online 12 January 2024
2666-920X/© 2024 The Author. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Investigating undergraduate students’ perceptions and awareness of using
ChatGPT as a regular assistance tool: A user acceptance perspective study
Hayder Albayati
Global Convergence Management Department, Endicott College, Woosong University, 171 Dongdaejeon-ro, Dong-gu, Daejeon, Republic of Korea
ARTICLE INFO
Keywords:
ChatGPT
TAM
Undergraduate student behavior
Privacy
Security
Social influence
Trust
ABSTRACT
This study examines the factors influencing user acceptance of ChatGPT as a daily reference tool and assesses varying levels of user awareness. It aims to offer valuable insights into the potential benefits and challenges of implementing ChatGPT in an educational context. To achieve this objective, we employ an integrated model comprising the Technology Acceptance Model (TAM) and four novel external constructs: Privacy, Security, Social Influence, and Trust. This proposed model delivers an in-depth understanding of user acceptance by simultaneously measuring diverse user perspectives. Adopting a quantitative research approach, the study surveys undergraduate students regarding their use of ChatGPT. The results contribute to bridging the gap between technology and users, shedding light on users’ actual experiences and considerations regarding AI-based tools. Specifically, the study is expected to reveal the significant influence of external factors on user acceptance of ChatGPT and to provide a set of recommendations for educational institutions, policymakers, and developers. This study aids the developers of ChatGPT and similar technologies by offering insights into how to design and enhance more user-friendly and secure systems that better meet users’ needs and expectations.
1. Introduction
Intelligent software and hardware, commonly known as intelligent agents, are increasingly integrated into our daily lives due to the rising use of Artificial Intelligence (AI). These agents are capable of performing a wide range of tasks, from simple manual labor to complex operations. One of the most prevalent examples of AI systems is the chatbot, which is also an elementary yet widespread example of intelligent Human-Computer Interaction (HCI) (Bansal & Khan, 2018).

Although chatbots can simulate human conversation and provide entertainment, their purpose extends beyond that (Iku-Silan et al., 2023). They are utilized in various fields such as education, information retrieval, business, and e-commerce, offering useful services (Shawar & Atwell, 2007). OpenAI, an AI company, unveiled ChatGPT, a new generation of chatbot. ChatGPT is a large language model (LLM) that uses machine learning to learn from vast datasets of text and can produce highly sophisticated and intelligent writing. This groundbreaking technology has significant implications for both science and society. Researchers and other professionals have already leveraged ChatGPT and other LLMs to compose essays and speeches, condense literature, refine papers, detect research gaps, and even write computer code, including statistical analyses (van Dis et al., 2023). ChatGPT was launched on November 30, 2022, and has since garnered significant attention, attracting over one million subscribers within its first week of release (Baidoo-Anu & Owusu Ansah, 2023).
With the rapid increase in ChatGPT usage and interest in its applications, students are eager to take advantage of this tool in many areas. Undergraduate students primarily seek help with academic assignments, for clarification of concepts or assistance with their work, and find it useful for understanding academic material (van Dis et al., 2023). Some students use ChatGPT to obtain general information on various topics, serving as a quick reference tool for their studies or broader knowledge fields (Cao et al., 2023). Students interact with ChatGPT to gain insights into complex topics, solve academic challenges, or receive additional explanations (Lund & Wang, 2023). They use ChatGPT for easy and quick access to information, benefiting from its 24/7 availability for immediate assistance (Baidoo-Anu & Owusu Ansah, 2023).
ChatGPT has evolved through four generations of underlying GPT models with increasing parameter counts: GPT-1 has 117 million parameters, GPT-2 has 1.5 billion, and GPT-3 has 175 billion; GPT-4’s parameter count has not been officially disclosed, though figures as high as 100 trillion have circulated. The growth in model scale has enabled GPT-4 to achieve unprecedented levels of performance and generate text resembling human speech (OpenAI, 2023). ChatGPT is a powerful tool that offers several benefits for researchers. It assists in
E-mail address: Hayder1111@wsu.ac.kr.
Contents lists available at ScienceDirect
Computers and Education: Artificial Intelligence
journal homepage: www.sciencedirect.com/journal/computers-and-education-artificial-intelligence
https://doi.org/10.1016/j.caeai.2024.100203
Received 30 July 2023; Received in revised form 8 January 2024; Accepted 8 January 2024
identifying relevant literature by providing summaries or lists of relevant publications based on topics or keywords. ChatGPT also generates text in specific styles or tones, facilitating the drafting of research papers and other documents (Lund & Wang, 2023). Additionally, it can analyze large amounts of text data, such as social media posts or news articles, and support research comprehension in multiple languages through machine translation (Rudolph et al., 2023). Researchers can stay updated on the latest developments in their field using ChatGPT’s automated summarization capabilities. Furthermore, by providing domain-specific answers, ChatGPT serves as a valuable tool for scholars seeking prompt and efficient assistance (Lund & Wang, 2023).
In summary, ChatGPT’s functionality showcases the diverse ways it can be used to support and enhance the research activities of researchers across different disciplines. The tool’s adaptability and range of applications make it a valuable asset for researchers seeking assistance at different stages of their work (Frieder et al., 2023; George & George, 2023; Haleem et al., 2022; Kocoń et al., 2023). Here are the most commonly used functions:
• Summarize texts and literature.
• Provide explanations, clarifications, and supplementary information on complex research topics.
• Improve literature searches by refining search queries, suggesting alternative keywords, and providing additional context.
• Create interactive learning modules through educational content, quizzes, and games.
• Suggest improvements to language and coherence.
• Provide machine translation across multiple languages.
• Generate text in a given tone that matches a human writing style.
• Collaboratively generate ideas, outlines, or rough drafts.
• Analyze large amounts of data, including social media posts or news articles.
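For readers who want to see how functions like these map onto a programmatic interface, the sketch below builds chat-style requests for two of them (summarization and translation). This is an illustrative sketch only: the message format follows the common chat-completion convention, and the commented-out client call and model name are assumptions, not details reported in this study.

```python
def build_request(task_instruction, user_text):
    """Build a chat-style message list: a system instruction plus the user's text."""
    return [
        {"role": "system", "content": task_instruction},
        {"role": "user", "content": user_text},
    ]

# Two of the commonly used functions listed above, as request payloads.
summary_request = build_request(
    "Summarize the following text for an undergraduate reader.",
    "Large language models learn statistical patterns from vast text corpora...",
)
translation_request = build_request(
    "Translate the following text into Korean.",
    "ChatGPT is available around the clock for immediate assistance.",
)

# With an API client, a payload would then be sent along these lines (hypothetical usage):
# response = client.chat.completions.create(model="gpt-4", messages=summary_request)
```

Separating the task instruction (the “system” message) from the user’s text keeps each function reusable across topics.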
2. Organization of this paper
To provide a comprehensive understanding, the paper is organized as follows: Section 3 encompasses literature reviews; Section 4 elucidates the Technology Acceptance Model (TAM); Section 5 introduces the hypotheses and model development; Section 6 outlines the research methodology; Section 7 describes the results and discussion; and Section 8 covers the conclusion, limitations, and future work.
3. Literature review
AI-generated text profoundly impacts every sector, elevating expectations regarding AI capabilities and shaping prospects for technology development. It is crucial to consider the advantages and disadvantages of AI in human life and the future. This research aims to spotlight some of these aspects, focusing on factors influencing the current usage of generative AI technology and exploring potential developments in the near future. Some research suggests that employees’ conviction regarding the personal usefulness of chatbots is crucial, and that intrinsic motivation among employees positively impacts their intention to use them (Brachten et al., 2021).
As an AI language model, ChatGPT can be useful in a wide range of fields and applications. Here are some examples:

• Customer service: ChatGPT can provide automated customer service and support through chatbots, helping businesses save time and resources by handling common inquiries and requests without human intervention.
• Education: ChatGPT can offer personalized learning experiences by answering student questions and providing assignment feedback. It can also generate educational content and lesson plans.
• Healthcare: ChatGPT can assist with medical diagnoses and treatment recommendations, provide patients with information about their health conditions, and answer questions about medical procedures.
• Marketing: ChatGPT can generate personalized marketing content, such as product descriptions and promotional messages. It can also provide customer insights and analytics to help businesses optimize their marketing strategies.
• Finance: ChatGPT can offer financial advice and investment recommendations to clients. It can also generate financial reports and forecasts based on market trends and data analysis.
In short, ChatGPT can be useful in any field that involves language processing and communication. Its versatility and adaptability make it a valuable tool for businesses and organizations across a wide range of industries. However, there are limitations to ChatGPT’s performance, mostly related to privacy, security, ethics, transparency, and unintended consequences. To address these limitations, it is important to implement appropriate security measures, ensure transparency in how ChatGPT operates, and carefully consider the potential impacts of its use (Baidoo-Anu & Owusu Ansah, 2023; Dwivedi et al., 2023; Eloundou et al., 2023; Lund & Wang, 2023; Rudolph et al., 2023). Here are a few factors that may limit or undermine ChatGPT’s effectiveness:
• Privacy: ChatGPT has been trained on vast amounts of text data, some of which may be private or sensitive. There is a risk that personal or sensitive information could be inadvertently exposed through its use or in the event of data breaches.
• Security: ChatGPT may be susceptible to security risks like hacking, malware attacks, or other cyber threats. Without proper management of these risks, sensitive data could be lost or ChatGPT’s functionality compromised.
• Ethics: ChatGPT may inadvertently reflect biases in the generated text or be misused for malicious purposes, raising ethical concerns. Conversely, it can also prompt ethical discussions by producing thought-provoking content on moral issues and highlighting ethical dilemmas.
• Transparency: The inner workings of ChatGPT may be complex and opaque, leading to a lack of transparency in its decision-making processes. This complexity can make it challenging to discern why certain responses are generated or to identify and rectify errors or biases.
• Unintended consequences: The use of ChatGPT might lead to unintended outcomes, such as perpetuating biases or reinforcing harmful stereotypes. It is crucial to monitor and assess ChatGPT’s outputs diligently to prevent any detrimental effects.
Additionally, it is crucial to consider the acceptability of this technology, particularly from the user’s perspective. There is a potential risk that automation may lead to a sense of personal detachment, resulting in a reluctance to embrace this technology (Patel & Lam, 2023). Given these potential drawbacks of ChatGPT, we propose conducting a study to measure undergraduate students’ acceptance and awareness levels of using ChatGPT. This research aims to gain insights into the effectiveness of ChatGPT in supporting their learning process, providing additional resources, or facilitating academic tasks.
From a social science perspective, many users of ChatGPT have developed an interest in uncovering the background and secrets behind its impressive performance (Cao et al., 2023). The responses generated by ChatGPT exhibit a high level of concordance, making it easy for human learners to comprehend the internal language, logic, and relational flow presented in the explanation text (Kung et al., 2023). ChatGPT has attracted both investors and organizations, yet it is not seen as a replacement for humans. Instead, it is viewed as a tool that can enhance productivity and assist in fulfilling higher-order needs (Dwivedi et al., 2023). It has quickly become popular among both casual users and professionals, streamlining tasks and responsibilities, and making them easier and faster to complete (Dwivedi et al., 2023).
However, some social scientists argue that the use of generative AI,
like ChatGPT, may supplant human researchers and the unique skills they bring to the research process, including critical thinking, creativity, and empathy (Raman et al., 2023; Whitford, 2022). This could lead to a reduction in the quality and depth of social science research, as well as a lack of human oversight and accountability. Some view it as an untrusted assistant and an unexpected technology (Albayati et al., 2021, 2023; Dwivedi et al., 2023; Mollick & Mollick, 2022). ChatGPT is criticized for lacking adequate privacy and security measures at certain levels and for conducting interactions with humans openly without appropriate control. Numerous studies have raised concerns about the potential for ChatGPT to expose users’ personal information and search content (Dwivedi et al., 2023; Kung et al., 2023). In addition to privacy and security issues, ChatGPT has also been criticized for its lack of transparency in operations. Users are often uncertain about how their information is being collected, stored, and used, which can erode trust (Cao et al., 2023; Chandra et al., 2022; Lund & Wang, 2023). Moreover, the ethical implications of ChatGPT’s interactions, such as the potential to deceive or manipulate users, particularly those who are vulnerable or emotionally distressed, have also been called into question (Cao et al., 2023; Dwivedi et al., 2023; Rudolph et al., 2023).
Technology acceptance has been a subject of study for decades, with numerous research papers utilizing the Technology Acceptance Model (TAM) to explore various aspects of technology adoption among users (Folkinshteyn & Lennon, 2016; Gefen et al., 2003b; Venkatesh & Davis, 2000; Yu et al., 2005). TAM is a widely recognized theoretical framework that elucidates how users come to accept and use new information technologies. Initially developed by Fred Davis in the 1980s and subsequently refined by Venkatesh and Davis in the 1990s (Davis, 1985; Davis et al., 1989), TAM has proven applicable across different technologies and contexts. It has been tested in various technological environments such as websites, mobile apps, social media, and e-commerce platforms, consistently demonstrating its validity and reliability (Granić & Marangunić, 2019; Saadé & Bahli, 2005; Salloum et al., 2019). Researchers have continued to extend and modify TAM to better account for additional factors like trust, satisfaction, and intention to use, and have developed various model variations (Esposito et al., 2020; Kesharwani & Singh Bisht, 2012).
The purpose of this study is to explore the factors influencing undergraduate students’ acceptance of ChatGPT as a regular assistance tool and to assess their awareness of its usage from various angles. The study aims to gain insights into the effects of ChatGPT on users’ daily lives and provide valuable insights into the potential benefits and challenges associated with implementing ChatGPT in an educational context.

To accomplish this objective, we will employ a quantitative research approach using questionnaires (Almalki, 2016; Morse, 1991, 2016; Rao & Woolcock, 2003; Schoonenboom & Johnson, 2017; Teddlie & Tashakkori, 2011), and integrate a model incorporating the Technology Acceptance Model (TAM) along with four new constructs: Privacy, Security, Social Influence, and Trust. This proposed model aims to comprehensively understand user acceptance by examining different perspectives simultaneously. Our goal is to bridge the gap between technology and users by shedding light on their actual experiences and considerations.

Through this study, we will assess the levels of impact and convenience and provide a range of recommendations and insights for users, academics, and the industry.
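Because the study takes a questionnaire-based quantitative approach, the reliability of each multi-item construct will matter when the survey data are analyzed. As a minimal illustration (not the paper’s actual analysis pipeline, and using made-up Likert responses), the sketch below computes Cronbach’s alpha for a hypothetical three-item construct:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a construct measured by several Likert items.

    `items` is a list of columns, one per questionnaire item; each column
    holds one score per respondent, in the same respondent order.
    """
    k = len(items)       # number of items
    n = len(items[0])    # number of respondents

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

    item_vars = sum(variance(col) for col in items)
    totals = [sum(items[i][r] for i in range(k)) for r in range(n)]
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Hypothetical 5-point Likert responses for a three-item "Trust" construct.
trust_items = [
    [4, 5, 3, 4, 5, 4],
    [4, 4, 3, 5, 5, 4],
    [5, 5, 2, 4, 4, 3],
]
alpha = cronbach_alpha(trust_items)
```

By convention, alpha values above roughly 0.7 are taken as acceptable internal consistency for a construct.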
To help investigate the research goals and objectives, we asked three questions:

Q1. What are undergraduate students’ key considerations and experiences when using ChatGPT, and how do these perceptions shape their acceptance and usage patterns?

Q2. What factors influence undergraduate students’ acceptance of ChatGPT as a regular assistance tool, and how do they impact students’ academic performance and overall well-being?

Q3. Were undergraduate students aware of the usage of ChatGPT, and what are their expectations of its benefits and drawbacks?

This research aims to answer these questions and to clarify misleading and confusing perceptions of ChatGPT among undergraduate students.
4. Technology acceptance model (TAM)
The Technology Acceptance Model (TAM), as illustrated in Fig. 1, is a theoretical framework designed to explain how users accept and adopt new information technologies. The TAM posits that the acceptance of new technology is primarily influenced by two main factors: perceived usefulness and perceived ease of use (Davis, 1989). Perceived usefulness is defined as the degree to which a user believes that technology will enhance their job performance or productivity. In contrast, perceived ease of use refers to the degree to which a user believes that the technology will be easy to understand and use (Davis et al., 1989).

According to the TAM, perceived usefulness and perceived ease of use are influenced by several other external factors such as the user’s demographic characteristics, social influences, and organizational culture (Connor & Siegrist, 2010; Saadé & Bahli, 2005). The TAM has been widely applied in research and practice to understand and predict the adoption and use of various technologies, including information systems, mobile apps, and social media platforms (Min et al., 2021). By identifying factors that influence user acceptance, the model can help to inform the design and implementation of new technologies, as well as strategies for promoting their adoption and use (Albayati et al., 2020).

The TAM proposes that attitude toward using and behavioral intention to use are the primary determinants of technology acceptance and use. By understanding the factors that influence these two constructs, the TAM can inform the design and implementation of new technologies, as well as strategies for promoting their adoption and use.
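To make the model’s structure concrete, the sketch below renders TAM’s core paths (perceived ease of use feeding perceived usefulness, both feeding attitude, and attitude driving behavioral intention) as a toy linear calculation. All path weights here are invented for illustration; in practice such coefficients would be estimated from survey data, for example via structural equation modeling, rather than fixed by hand.

```python
# Toy linear rendering of TAM's core paths; the weights below are made-up
# illustrations, not estimates from this study's survey data.
W_PEOU_PU = 0.5   # perceived ease of use -> perceived usefulness
W_PU_ATT = 0.6    # perceived usefulness -> attitude
W_PEOU_ATT = 0.3  # perceived ease of use -> attitude
W_ATT_BI = 0.7    # attitude -> behavioral intention

def tam_scores(peou, pu_external):
    """Propagate scores along TAM's core paths.

    `peou` is a perceived-ease-of-use score; `pu_external` is the portion of
    perceived usefulness attributed to external variables.
    """
    pu = pu_external + W_PEOU_PU * peou            # ease of use also raises usefulness
    attitude = W_PU_ATT * pu + W_PEOU_ATT * peou   # both beliefs shape attitude
    intention = W_ATT_BI * attitude                # attitude drives intention
    return pu, attitude, intention

pu, att, bi = tam_scores(peou=4.0, pu_external=2.0)
```

The calculation mirrors the model’s causal ordering: a change in perceived ease of use propagates forward into usefulness, attitude, and finally behavioral intention.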
5. Hypotheses and model development
This research aims to understand how the acceptance of ChatGPT as an assistance tool by undergraduate students correlates with their academic performance. For instance, students may accept and use ChatGPT, experiencing either a positive or negative impact on their grades, understanding of course materials, or overall academic success.

Utilizing the TAM model helps in comprehending how TAM constructs are related to students’ acceptance of ChatGPT as an assistance tool. If students find ChatGPT easy to use and perceive it as useful, it may positively influence their acceptance.

Additionally, this research incorporates four external constructs – privacy, security, social influence, and trust – alongside the TAM constructs, aiming to explore the interactive impact of these constructs on each other. Moreover, it investigates their impact on students’ decisions to accept or reject ChatGPT as a tool.
5.1. TAM-core constructs
The Technology Acceptance Model (TAM) (Davis, 1989, 1993; Davis et al., 1989; Venkatesh & Davis, 2000) comprises two key constructs: perceived usefulness and perceived ease of use. These constructs are further influenced by external factors, known as external variables. Below is a brief description of each construct:

• Perceived Usefulness: This term refers to the extent to which a user believes that technology will enhance their job performance or productivity. Users are more likely to accept and adopt new technology if they perceive it as beneficial in achieving their goals or tasks (Bhattacherjee, 2000). Perceived usefulness is intricately
linked to an individual’s motivation to use technology. If an individual perceives technology as useful, they are more likely to be motivated to use it (Yu et al., 2005). Conversely, if technology is perceived as not useful, their motivation to use it diminishes. In the TAM model, perceived usefulness is a crucial factor influencing an individual’s attitude and intention toward technology usage (Karahanna & Straub, 1999). The model suggests that individuals are more inclined to adopt and utilize technology if they believe it aids in accomplishing their goals and tasks (Autry et al., 2010). Therefore, it is hypothesized that:

H4. Perceived usefulness positively and significantly impacts users’ attitudes toward using ChatGPT as a daily reference.
• Perceived Ease of Use: Refers to the degree to which a user believes that technology is easy to use and understand. Users are more likely to accept and adopt new technology if they perceive it as easy to use and learn (Autry et al., 2010; Martins et al., 2014). Perceived ease of use is closely related to an individual’s perception of the effort required to use technology (Venkatesh, 2000). If an individual perceives a technology to be easy to use, they are more likely to be motivated to use it (Karahanna & Straub, 1999). On the other hand, if an individual perceives a technology to be difficult to use, they are less likely to be motivated to use it (Adams et al., 1992). In the TAM model, perceived ease of use is another central factor that influences an individual’s attitude toward using technology (Venkatesh & Bala, 2008). The model proposes that individuals are more likely to adopt and use technology if they perceive it to be easy to use and learn (Al-Sharafi et al., 2016). When users perceive a system or technology as easy to use, they are more likely to perceive it as useful as well, because ease of use reduces the perceived effort and cognitive load required to use the technology (Al-Sharafi et al., 2016). Therefore, it is hypothesized that:

H2. Perceived Ease of Use positively and significantly impacts users’ attitudes toward using ChatGPT as a daily reference.

H3. Perceived Ease of Use positively and significantly impacts users’ perceived usefulness of ChatGPT as a daily reference.
Also, the TAM model includes additional constructs concerning the user’s attitude and intention to use:

• Attitude: Refers to the user’s overall positive or negative evaluation of the technology (Ajzen & Fishbein, 1972). Attitude toward using is influenced by the perceived usefulness and perceived ease of use of the technology (Venkatesh & Bala, 2008; Davis et al., 1989; Venkatesh & Davis, 2000). In the TAM model, attitude is seen as a reflection of an individual’s subjective evaluation of a technology based on its perceived usefulness and ease of use (Yang & Yoo, 2004). If an individual perceives a technology to be useful and easy to use, they are likely to have a positive attitude toward the technology, which increases their intention to use it (Aghdaie et al., 2011). Conversely, if an individual perceives a technology to be not useful or difficult to use, they are likely to have a negative attitude toward the technology, which decreases their intention to use it (Bhattacherjee & Premkumar, 2004). Therefore, it is hypothesized that:

H1. Attitude positively and significantly impacts the user’s Behavioral Intention toward the use of ChatGPT as a daily reference.

• Behavioral Intention: Refers to the user’s intention or willingness to use technology. Behavioral intention to use is influenced by the user’s attitude toward using the technology and their subjective norm, which refers to the perceived social pressure to use the technology (Ajzen & Fishbein, 1972). In the TAM model, behavioral intention is seen as a direct result of an individual’s attitude toward a technology (Fishbein, 1975). If an individual has a positive attitude toward technology, they are more likely to have a high intention to use it. The use of technology by most users is strongly influenced by their ICT competence and attitude (Ferede et al., 2022). Conversely, if an individual has a negative attitude toward technology, they are less likely to have a high intention to use it (Kim & Ko, 2010).
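The core-model hypotheses above (H1–H4) can be viewed as directed paths among constructs. The sketch below is bookkeeping for illustration, not statistical analysis: it encodes the hypothesized paths and checks that each source construct reaches Behavioral Intention, directly or through intermediate constructs, matching TAM’s causal chain.

```python
# Hypothesized core-model paths (H1-H4), encoded as (source, target) edges.
HYPOTHESES = {
    "H1": ("Attitude", "Behavioral Intention"),
    "H2": ("Perceived Ease of Use", "Attitude"),
    "H3": ("Perceived Ease of Use", "Perceived Usefulness"),
    "H4": ("Perceived Usefulness", "Attitude"),
}

def reaches_intention(construct, edges):
    """True if `construct` can reach Behavioral Intention via hypothesized paths."""
    frontier, seen = [construct], set()
    while frontier:
        node = frontier.pop()
        if node == "Behavioral Intention":
            return True
        if node in seen:
            continue
        seen.add(node)
        # Follow every hypothesized path leaving this construct.
        frontier.extend(dst for src, dst in edges.values() if src == node)
    return False

all_reach = all(reaches_intention(src, HYPOTHESES) for src, _ in HYPOTHESES.values())
```

Every hypothesized source construct here funnels into behavioral intention, which is exactly the structure the model section describes.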
5.2. External constructs
In addition to the TAM model, we will expand it by incorporating four new constructs: privacy, security, social influence, and trust. These will assist in investigating the acceptance of ChatGPT and the users’ level of awareness. The selection of these four constructs aligns with established theoretical frameworks in technology acceptance and reflects practical considerations in the educational context. The interplay between these constructs is examined individually and collectively from various perspectives within the literature, with numerous studies investigating the relationship between students and new technologies. The specific context and objectives of this research have guided the selection of these external constructs, which are derived from both the literature and research background:
• Privacy: The privacy construct refers to an individual’s expectation and perception of control over the collection, use, and disclosure of their personal information (Basak et al., 2016). It is an important factor that influences individuals’ behavior in the context of technology adoption and use (Qin et al., 2009). The privacy construct is closely related to the TAM model, as an individual’s perception of privacy and control over their personal information can significantly impact their perceived usefulness and ease of use of technology (Al-Sharafi et al., 2016; Basak et al., 2016; Wu et al., 2023). For example, if an individual perceives that their personal information is
Fig. 1. Technology acceptance model (TAM) (Davis, 1989).
being collected and used without their consent, they may be less likely to find the technology useful or easy to use, even if it has a clear benefit for them (Hajian et al., 2016). To retain users, service providers must balance both privacy concerns and social influence perspectives (Zhou & Li, 2014). When users feel that their conversations with a system are private and secure, they are more likely to trust and adopt the technology, leading to an increase in their perceived usefulness of the technology (Basak et al., 2016). When a system addresses privacy concerns and pushes for data protection, users’ comfort and ease with the technology are likely to increase, resulting in greater perceived ease of use. The resulting sense of comfort and security in privacy can encourage users to explore the system’s features more freely and interact with it more frequently, which may also contribute to greater perceived ease of use (Featherman et al., 2010). Clear privacy practices such as communication and transparency contribute to data protection and trust building (Dragan & Manulis, 2018). Addressing privacy concerns directly and implementing measures to mitigate potential risks can foster trust (Joinson et al., 2010). Data privacy and control reflect positively on user trust and strengthen confidence in technology use (Sicari et al., 2015). Therefore, it is hypothesized that:
H5. Privacy positively and significantly impacts Perceived Ease of Use toward the use of ChatGPT as a daily reference.

H6. Privacy positively and significantly impacts the Perceived Usefulness of the use of ChatGPT as a daily reference.

H7. Privacy positively and significantly impacts Social Influence toward the use of ChatGPT as a daily reference.

H8. Privacy positively and significantly impacts Trust toward the use of ChatGPT as a daily reference.
• Security: The security construct refers to an individual’s perception of the protection of their information and data from unauthorized access, use, and disclosure (Al-Sharafi et al., 2016). It is an important factor that influences individuals’ behavior in the context of technology adoption and use, particularly for technologies that involve the transfer and storage of sensitive information. The security construct is closely related to the TAM model, as an individual’s perception of security can significantly impact their perceived usefulness and ease of use of technology (Al-Sharafi et al., 2016; Basak et al., 2016; Wu et al., 2023). Information security behaviors can be improved through security education, training, and awareness (SETA) programs, according to some scholars (Al-Sharafi et al., 2016; Casaló, Flavián, & Guinalíu, 2007; Chen et al., 2021). When users perceive the system as secure, they may have more trust and be more willing to share personal information or confidential data with the system, which can increase its usefulness in providing tailored responses (Jahangir & Begum, 2008). In any industry that uses technology, security is a critical aspect that impacts social influence and adoption (Patel & Patel, 2018). For example, if an individual perceives that technology is not secure, they may be less likely to find the technology useful or easy to use, even if it has a clear benefit for them. However, users are more likely to trust software if they believe that their interactions and data are secure, reducing concerns about unauthorized access or data breaches (Suh & Han, 2003). Controlling and restricting access supports user trust, and security measures help assure the confidentiality of user information (Pearson & Benameur, 2010). Users gain trust in a web application when they see that security is actively maintained and that efforts are made to address potential risks immediately (Shin, 2010). Therefore, it is hypothesized that:
H9. Security positively and significantly impacts Perceived Ease of Use toward the use of ChatGPT as a daily reference.

H10. Security positively and significantly impacts the Perceived Usefulness of the use of ChatGPT as a daily reference.

H11. Security positively and significantly impacts Social Influence toward the use of ChatGPT as a daily reference.

H12. Security positively and significantly impacts Trust toward the use of ChatGPT as a daily reference.
•Social Influence: The social construct refers to the social influence
and norms that can affect an individual’s behavior in the context of
technology adoption and use (Malhotra & Galletta, 1999; Trafimow
& Finlay, 1996). It is an important factor influencing individuals’
behavior, particularly in situations where the use of technology is
socially embedded and/or requires coordination with others. The
social construct is closely related to the TAM model, as an in-
dividual’s perception of social influence can significantly impact
their perceived usefulness and ease of use of technology (Hsu & Lu,
2004; Malhotra & Galletta, 1999). When individuals perceive that
others in their social network use a system in their daily activities,
they may develop a more positive attitude toward the system. During
the Covid-19 period, many researchers investigated the experience
with online distance teaching and evaluated social influence (Sidi
et al., 2023). Additionally, if users perceive that their use of the
system aligns with the norms and values of their social group, they
may be more likely to adopt a positive attitude toward the system
and use it more frequently (Prislin & Wood, 2005a, 2005b). For
example, if individuals perceive that their social network or peers
are using a technology, they may be more likely to adopt and use it
themselves, even if they do not perceive it to be highly useful or easy
to use. Similarly, if individuals perceive a social norm or expectation
to use a particular technology, they may be more likely to adopt and
use it. Therefore, it is hypothesized that:
H13. Social Influence positively and significantly impacts Attitude
toward the use of ChatGPT as a daily reference.
H14. Social Influence positively and significantly impacts the user’s
Behavioral Intention toward the use of ChatGPT as a daily reference.
•Trust: The trust construct refers to an individual’s belief that a
technology or system can be relied upon to perform as intended and
to protect their interests (Falcone & Castelfranchi, 2001; Roy et al.,
2001). Trust is an important factor that influences individuals’
behavior in the context of technology adoption and use (Kesharwani
& Singh Bisht, 2012). The trust construct is closely related to the
TAM model, as an individual’s perception of trust can significantly
impact their perceived usefulness and ease of use of technology
(Al-Sharafi et al., 2016; Basak et al., 2016). If users trust a system to
provide accurate and helpful information, they are more likely to
develop a positive attitude toward it. This positive attitude may lead
to increased usage as well as the intention to continue using it in the
future. Additionally, trust may encourage users to share personal
information with the system, which can further enhance the accuracy
and helpfulness of its responses (Kim & Gambino, 2016). Limbu,
Wolf, and Lunsford found that the perceived ethics of an Internet
retailer’s website significantly affect consumers’ trust in and attitudes
toward the site, which in turn positively impact purchase and revisit
intentions; website trust was positively related to attitudes toward
the site (Limbu et al., 2012). For example, if an individual does not trust a
technology or system, they may be less likely to find the technology
useful or easy to use, even if it has a clear benefit for them (Wu &
Chen, 2005). Therefore, it is hypothesized that:
H15. Trust positively and significantly impacts Attitude toward the
use of ChatGPT as a daily reference.
H16. Trust positively and significantly impacts the user’s Behavioral
Intention toward the use of ChatGPT as a daily reference.
5.3. Research model design
The model design (Fig. 2) illustrates the integration of the TAM
model and the four external constructs using Structural Equation
Modeling (SEM) (Hair Jr et al., 2016).
6. Research methodology
This research aims to explore the factors influencing undergraduate
students’ acceptance of ChatGPT as a regular assistance tool and to
assess their awareness of its usage from various perspectives. The study
seeks to gain insights into the effects of ChatGPT on users’ daily lives and
provide valuable information on the potential benefits and challenges
associated with its implementation in an educational context.
To achieve this objective, we will employ a quantitative research
approach by distributing survey questionnaires to currently enrolled
undergraduate students in the United States of America. The eligibility
criteria for participant selection include being an undergraduate student
currently enrolled in a national academic institution.
The quantitative approach will commence with the collection of
demographic data, including age, gender, and education level, from the
survey. This information will help better understand the participants’
characteristics and how these may influence their attitudes and in-
tentions toward using ChatGPT. Additionally, we will ask undergraduate
students questions regarding their usage of ChatGPT to gain more in-
sights into their usage behavior.
The recruitment process involves reaching out to undergraduate
students globally, leveraging online platforms, and ensuring the ethical
treatment of participants throughout the research. The selection criteria
are as follows:
•Currently enrolled undergraduate students
•Students from diverse educational backgrounds to capture a wide
range of perspectives and experiences
•Participants who are well-exposed to ChatGPT and understand its use
and capabilities
•Participants from diverse demographic backgrounds to enhance the
study’s generalizability and applicability.
In this study, the recruitment of undergraduate students was carried
out using the Prolific platform, renowned for its extensive participant
pool and academic credibility. The selection of Prolific was based on
several key criteria: its wide accessibility to a diverse undergraduate
demographic, user-friendly interface, capability for precise targeting,
and a strong reputation for generating high-quality data. Furthermore,
Prolific’s commitment to ethical standards and transparent operations
provided an added layer of assurance in the conduct of our research.
Ethical integrity was paramount throughout the recruitment process.
Before participation, all individuals were presented with a comprehen-
sive informed consent form. This document detailed the research’s ob-
jectives, procedures, and the participant’s rights, including detailed
information on data usage and storage, ensuring informed and voluntary
participation. The consent process was designed to be interactive,
requiring an active confirmation of understanding and agreement.
Prolific’s adherence to GDPR and other privacy regulations, along with
our stringent data anonymization protocols, further fortified the pro-
tection of participant privacy and data security.

Fig. 2. The proposed model.
The survey will then measure the TAM and the external constructs
using a validated questionnaire. The questionnaire will use a 5-point
Likert scale, with 1 indicating “strongly disagree” and 5 indicating
“strongly agree.”
The data collected will be analyzed using descriptive statistics,
including mean, standard deviation, and frequency distribution. Addi-
tionally, the study will utilize Structural Equation Modeling (SEM) to
test the hypothesized relationships between the constructs.
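As a minimal illustration of the descriptive statistics mentioned above, the sketch below computes the mean, sample standard deviation, and frequency distribution for a single 5-point Likert item; the responses are invented for illustration and do not come from the study’s data.

```python
from statistics import mean, stdev
from collections import Counter

# Hypothetical responses to one 5-point Likert item
# (1 = "strongly disagree" ... 5 = "strongly agree").
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

item_mean = mean(responses)                      # central tendency
item_sd = stdev(responses)                       # sample standard deviation
freq = dict(sorted(Counter(responses).items()))  # frequency distribution

print(f"mean={item_mean:.2f}, sd={item_sd:.2f}, freq={freq}")
```

In practice these summaries would be computed per item and per construct before the SEM stage.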
The SEM analysis will offer insights into the extent to which the TAM
model can explain users’ attitudes and behavioral intentions toward
using ChatGPT. The study will also explore the role of external con-
structs in influencing users’ attitudes and intentions. The findings of this
study will contribute to a better understanding of the factors influencing
technology adoption and usage and provide valuable insights for tech-
nology developers and marketers to enhance the design and marketing
of their products.
7. Results and discussion
•Descriptive statistics
The sample responses were collected from a total of 637 under-
graduate students through an online survey over two months. After data
trimming to remove missing values and outliers, the final number of
responses was 603. Table 1 presents the demographic information of the
respondents. The data reveals that males constituted 54.7% of the re-
spondents, while females accounted for 45.3%.
The age distribution of the respondents was as follows: 49.8% were
between 15 and 24 years old, 36.4% between 25 and 34 years old, and
13.8% between 35 and 44 years old. Regarding the respondents’ edu-
cation level, all were undergraduate students, distributed across aca-
demic years as follows: 14% were first-year students, 28.9% were
second-year students, 20.6% were third-year students, and 36.5%
were fourth-year students.
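The data-trimming step described above (637 responses reduced to 603) can be sketched as a simple row filter. The rows, column names, and validity rule below are hypothetical stand-ins for the study’s actual screening procedure, in which out-of-scale entries represent outliers.

```python
# Hypothetical raw survey rows: None marks a missing answer.
raw = [
    {"id": 1, "A1": 4, "A2": 5},
    {"id": 2, "A1": None, "A2": 3},  # missing value -> dropped
    {"id": 3, "A1": 2, "A2": 2},
    {"id": 4, "A1": 9, "A2": 4},     # out-of-range response -> dropped
]

def is_valid(row):
    # Keep only rows where every answer is present and on the 1-5 scale.
    answers = [v for k, v in row.items() if k != "id"]
    return all(v is not None and 1 <= v <= 5 for v in answers)

clean = [row for row in raw if is_valid(row)]
print(len(clean))  # 2 rows survive trimming
```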
•Data analysis
For statistical analysis, the study utilized Partial Least Squares-
Structural Equation Modeling (PLS-SEM) using SmartPLS 4 (Ringle
et al., 2015; Ronaghi & Mosakhani). Given the nature of this study,
PLS-SEM is considered the most appropriate analytical approach
(Hair Jr et al., 2016; Ringle et al., 2015). For the reflective
measurement model (Hair Jr et al., 2016), it is proposed that scholars
should establish convergent validity by considering the outer loadings of
all items and the average variance extracted (AVE). The coefficient of
determination and the path coefficients were measured
based on the structural model (Hair Jr et al., 2016; Selya et al., 2012). To
support the measurement and structural model, this paper applied all
the criteria mentioned above.
•Measurement model assessment
It is necessary to assess the reliability, validity, and factor loading
of every item (Hair Jr et al., 2016). Reliability refers to the
consistency of a measure: a measure is reliable when it produces
consistent outcomes under consistent conditions, and the loading value
of each item should be equal to or greater than 0.7 to be considered
reliable.
Cronbach’s Alpha and composite reliability values should be equal to
or greater than 0.7. As shown in Table 2, all items are reliable and
meet the set criteria. Validity refers to the degree to which the in-
dicators of a construct collectively measure that construct. Convergent
validity is typically established using the average variance extracted
(AVE), which calculates the average of the squared loadings of the items
associated with the construct.
This refers to the extent to which a latent construct accounts for the
variation in its indicators. An AVE value of 0.5 or higher indicates that
the construct explains more than half of the variance in its items (Hair Jr
et al., 2016). In Table 2, both Cronbach’s Alpha and composite reli-
ability values exceed 0.7, and the AVE values surpass 0.5. Consequently,
the convergent validity of the constructs is confirmed.
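As a sketch of how these quantities are obtained (not the SmartPLS implementation itself), the standard formulas for AVE and composite reliability, applied to the Attitude loadings reported in Table 2, reproduce the tabled values.

```python
# Outer loadings of the Attitude items (A1-A3), taken from Table 2.
loadings = [0.879, 0.866, 0.898]

# Average variance extracted: mean of the squared loadings.
ave = sum(l**2 for l in loadings) / len(loadings)

# Composite reliability: (sum of loadings)^2 divided by itself
# plus the summed error variances (1 - loading^2).
sum_l = sum(loadings)
cr = sum_l**2 / (sum_l**2 + sum(1 - l**2 for l in loadings))

print(f"AVE={ave:.3f}, CR={cr:.3f}")  # AVE=0.776, CR=0.912, matching Table 2
```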
To assess discriminant validity, this study evaluates the Fornell-
Larcker criterion and cross-loadings. The Fornell-Larcker criterion in-
volves comparing the square root of the AVE (average variance
extracted) with the correlations between latent variables, as
outlined in Table 3. Cross-loadings are examined by ensuring that each
indicator loads higher on its intended latent variable than on other
variables. According to the data presented in Table 4, the cross-loading
criterion is met satisfactorily: all items load above 0.7 on their own
constructs, the highest value when compared to their loadings on other
variables.
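The Fornell-Larcker comparison can be made explicit with the study’s own numbers. The snippet below uses the AVE values for Attitude and Behavioral Intention from Table 2 and their correlation from Table 3; only this one pair is checked, as a sketch of the full pairwise procedure.

```python
import math

# AVE values from Table 2 and the A-BI correlation from Table 3.
ave = {"A": 0.776, "BI": 0.865}
corr_A_BI = 0.868

sqrt_ave = {k: math.sqrt(v) for k, v in ave.items()}

# Criterion: the square root of each construct's AVE must exceed
# its correlation with every other construct.
ok = all(corr_A_BI < sqrt_ave[k] for k in sqrt_ave)
print(sqrt_ave, ok)  # diagonal values 0.881 and 0.930, as in Table 3
```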
•Structural model assessment
The evaluation of the model involved assessing the explained
variance in the dependent variables. Path coefficients and R-squared
(R²) were used as the main indicators for estimating the structural model
(Ringle et al., 2015). R² is a statistical measure that quantifies the pro-
portion of variance in a dependent variable that can be explained by one
or more independent variables in a regression model. Correlation mea-
sures the strength of the relationship between an independent and a
dependent variable, while R² indicates the extent to which the variance
of one variable can account for the variance of the second variable, as
shown in Fig. 3.
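The interpretation of R² can be made concrete with a small ordinary least-squares regression on invented data: fit a line and compute the share of variance it explains. This is a sketch of the general statistic, not of the PLS-SEM estimation itself.

```python
# Hypothetical (x, y) pairs for a one-predictor regression.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n

# Least-squares slope and intercept.
beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx)**2 for x in xs)
alpha = my - beta * mx

# R^2 = 1 - (residual sum of squares / total sum of squares).
ss_res = sum((y - (alpha + beta * x))**2 for x, y in zip(xs, ys))
ss_tot = sum((y - my)**2 for y in ys)
r2 = 1 - ss_res / ss_tot
print(f"R^2 = {r2:.3f}")
```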
•Behavioral Intention (BI), R² = 76.6%: This high R² value suggests
that the independent variables in the model collectively explain
76.6% of the variance in behavioral intention. In other words, the
factors considered in the study have a substantial influence on stu-
dents’ behavioral intention to use ChatGPT as a daily reference.
•Attitude (A), R² = 69.6%: The R² value of 69.6% indicates a strong
influence of the independent variables on users’ attitudes toward uti-
lizing ChatGPT for their daily tasks. This suggests that the factors
considered in the study play a significant role in shaping users’
attitudes.
•Perceived Usefulness (U), R² = 59.6%: The R² value of 59.6% for
perceived usefulness implies that the independent variables explain
almost 60% of the variance in users’ perceptions of the usefulness of
ChatGPT. Factors in the study contribute substantially to shaping
users’ opinions about the utility of the system.

Table 1
Demographic information.
Item Values Frequency Percentage
Gender Male 330 54.7 %
Female 272 45.3 %
Age 15~24 300 49.8 %
25~34 219 36.4 %
35~44 83 13.8 %
Undergraduate level 1st grade 84 14.0 %
2nd grade 174 28.9 %
3rd grade 124 20.6 %
4th grade 220 36.5 %
•Perceived Ease of Use (E), R² = 23.5%: The comparatively lower R²
value of 23.5% for perceived ease of use suggests that the model may
not capture all the factors influencing users’ perceptions of the ease
of using ChatGPT. Additional variables or considerations may be
needed to explain ease of use better.
•Social Influence (SI), R² = 46.3%: The R² value of 46.3% indicates a
moderate level of influence from the independent variables on social
influence. This suggests that factors in the model contribute signifi-
cantly, but not as strongly as in the case of behavioral intention or
attitude.
•Trust (T), R² = 59.7%: The R² value of 59.7% for trust indicates that
the independent variables explain a substantial portion of the vari-
ance in users’ trust in ChatGPT. Trust is a crucial factor, and the
model appears to capture the relevant influences effectively.
The path coefficients are essential measures for assessing the struc-
tural model. According to the path analysis, as depicted in Fig. 3 and
outlined in Table 5, each hypothesis was evaluated by estimating the
p-values and path coefficients. It was found that all hypotheses are
supported except H5 and H14. The supported hypotheses indicate
significant paths between the independent and dependent variables.
However, H5 and H14 are not supported, indicating that Privacy (P) has
no impact on Perceived Ease of Use (E), and Social Influence (SI) has no
impact on Behavioral Intention (BI).

Table 2
Measurement model results.
Latent Variable / Indicator / Loading (>0.70) / Indicator Reliability (>0.50) / AVE (>0.50) / Composite Reliability (0.60–0.90) / Cronbach’s Alpha (0.60–0.90) / Discriminant Validity?
Attitude A1 0.879 0.773 0.776 0.912 0.856 yes
A2 0.866 0.750
A3 0.898 0.806
Behavioral Intention BI1 0.940 0.884 0.865 0.951 0.922 yes
BI2 0.942 0.887
BI3 0.909 0.826
Perceived Ease of Use E1 0.858 0.736 0.718 0.911 0.869 yes
E2 0.869 0.755
E3 0.848 0.719
E4 0.815 0.664
Privacy P1 0.816 0.666 0.737 0.933 0.910 yes
P2 0.846 0.716
P3 0.825 0.681
P4 0.860 0.740
P5 0.828 0.686
P6 0.803 0.645
Security SE1 0.818 0.669 0.678 0.927 0.905 yes
SE2 0.779 0.607
SE3 0.775 0.601
SE4 0.813 0.661
SE5 0.826 0.682
SE6 0.807 0.651
SE7 0.820 0.672
SE8 0.922 0.850
Social Influence SI1 0.914 0.835 0.649 0.937 0.923 yes
SI2 0.913 0.834
SI3 0.826 0.682
Trust T1 0.829 0.687 0.840 0.940 0.905 yes
T2 0.784 0.615
T3 0.818 0.669
T4 0.860 0.740
Perceived Usefulness U1 0.876 0.767 0.663 0.887 0.830 yes
U2 0.907 0.823
U3 0.756 0.572
U4 0.887 0.787
U5 0.762 0.581

Table 3
Fornell–Larcker criterion results (columns follow the row order: A, BI, E, U, P, SE, SI, T).
Attitude (A) 0.881
Behavioral Intention (BI) 0.868 0.930
Perceived Ease of Use (E) 0.614 0.570 0.848
Perceived Usefulness (U) 0.803 0.755 0.680 0.859
Privacy (P) 0.593 0.572 0.414 0.568 0.823
Security (SE) 0.622 0.586 0.484 0.599 0.754 0.805
Social Influence (SI) 0.620 0.584 0.369 0.589 0.642 0.666 0.917
Trust (T) 0.693 0.679 0.615 0.710 0.723 0.759 0.633 0.814
The results of this paper are as follows:
H1. (B = 0.755, p < 0.05): The path between Attitude (A) and
Behavioral Intention (BI) is particularly important and exhibits a sig-
nificant and robust association. This relationship is notably stronger
than any other within the model. Specifically, attitude exerts a powerful
positive influence on the user’s behavioral intention to use ChatGPT as a
daily reference. The p-value (<0.05) indicates that this relationship is
statistically significant.
H2. (B = 0.103, p < 0.05) describes the path between Perceived Ease
of Use (E) and Attitude (A). This result indicates a positive relationship
between the perceived ease of using ChatGPT and the attitude of un-
dergraduate students toward using it. In other words, if students find
ChatGPT easy to use, it positively influences their attitude toward using
it. The p-value (<0.05) confirms the statistical significance of this
relationship.
H3. (B = 0.509, p < 0.05) describes the path between Perceived Ease
of Use (E) and Perceived Usefulness (U). This finding demonstrates a
positive relationship between the perceived ease of using ChatGPT and
its perceived usefulness. If students perceive ChatGPT as easy to use,
they are more likely to consider it useful. The p-value (<0.05) confirms
the statistical significance of this relationship.
H4. (B = 0.526, p < 0.05) describes the path between Perceived
Usefulness (U) and Attitude (A). This result suggests a positive rela-
tionship between the perceived usefulness of ChatGPT and the attitude
of undergraduate students toward using it. If students perceive ChatGPT
as useful, it positively influences their attitude toward using it. The p-
value (<0.05) confirms the statistical significance of this relationship.
H5. (B = 0.000, p > 0.05) describes the path between Privacy (P) and
Perceived Ease of Use (E). This implies no discernible relationship be-
tween privacy concerns and the perceived ease of using ChatGPT. In
practical terms, students’ privacy concerns do not significantly impact
how they perceive the ease of using the system. The elevated p-value
suggests that any observed association between privacy concerns and
ease of use may be coincidental, underscoring the absence of a robust
link between these variables. This finding contributes valuable insights
into the nuanced factors influencing user perceptions, suggesting that
privacy concerns may not be a primary driver in shaping how students
perceive the ease of interacting with ChatGPT.
H6. (B = 0.208, p < 0.05) describes the path between Privacy (P) and
Perceived Usefulness (U). This finding indicates a positive relationship
between privacy concerns and the perceived usefulness of ChatGPT. If
students have privacy concerns, it may affect how they perceive its
usefulness. The p-value (<0.05) confirms the statistical significance of
this relationship.
H7. (B = 0.271, p < 0.05) describes the path between Privacy (P) and
Social Influence (SI). This result indicates a positive relationship be-
tween privacy concerns and social influence. If students have privacy
concerns, it may affect their perception of social influence related to
using ChatGPT. The p-value (<0.05) confirms the statistical significance
of this relationship.
H8. (B = 0.274, p < 0.05) describes the path between Privacy (P) and
Trust (T). This finding suggests a positive relationship between privacy
concerns and trust. If students have privacy concerns, it may impact
their level of trust in using ChatGPT. The p-value (<0.05) confirms the
statistical significance of this relationship.
H9. (B = 0.484, p < 0.05) describes the path between Security (SE)
and Perceived Ease of Use (E). This result indicates a positive relation-
ship between security concerns and the perceived ease of using
ChatGPT. If students have security concerns, it may influence how they
perceive the ease of using it. The p-value (<0.05) confirms the statistical
significance of this relationship.

Table 4
Cross-loadings results.
Item / Attitude (A) / Behavioral Intention (BI) / Perceived Ease of Use (E) / Perceived Usefulness (U) / Privacy (P) / Security (SE) / Social Influence (SI) / Trust (T)
A1 0.879 0.819 0.591 0.720 0.505 0.546 0.504 0.628
A2 0.866 0.708 0.465 0.637 0.527 0.542 0.592 0.570
A3 0.898 0.762 0.559 0.761 0.537 0.556 0.548 0.629
BI1 0.830 0.940 0.526 0.722 0.539 0.567 0.574 0.648
BI2 0.821 0.942 0.544 0.713 0.538 0.546 0.532 0.629
BI3 0.770 0.909 0.523 0.671 0.519 0.522 0.522 0.618
E1 0.482 0.473 0.858 0.555 0.331 0.425 0.276 0.508
E2 0.582 0.540 0.869 0.638 0.398 0.453 0.365 0.559
E3 0.520 0.468 0.848 0.559 0.352 0.399 0.338 0.500
E4 0.489 0.446 0.815 0.544 0.314 0.358 0.262 0.514
P1 0.434 0.409 0.272 0.431 0.762 0.645 0.513 0.542
P2 0.472 0.454 0.281 0.466 0.816 0.685 0.538 0.595
P3 0.539 0.531 0.411 0.509 0.846 0.732 0.501 0.639
P4 0.479 0.463 0.395 0.445 0.825 0.695 0.520 0.591
P5 0.518 0.496 0.356 0.518 0.860 0.735 0.532 0.625
P6 0.480 0.465 0.319 0.432 0.828 0.724 0.574 0.574
SE1 0.515 0.472 0.426 0.495 0.712 0.803 0.488 0.637
SE2 0.495 0.464 0.415 0.493 0.754 0.818 0.511 0.655
SE3 0.520 0.487 0.362 0.492 0.702 0.779 0.562 0.584
SE4 0.511 0.512 0.437 0.505 0.642 0.775 0.540 0.599
SE5 0.479 0.450 0.359 0.452 0.675 0.813 0.555 0.592
SE6 0.511 0.475 0.384 0.493 0.668 0.826 0.547 0.609
SE7 0.466 0.418 0.356 0.453 0.670 0.807 0.537 0.572
SE8 0.510 0.492 0.375 0.470 0.678 0.820 0.553 0.639
SI1 0.568 0.546 0.325 0.536 0.580 0.594 0.922 0.564
SI2 0.550 0.510 0.344 0.535 0.571 0.605 0.914 0.577
SI3 0.585 0.548 0.344 0.550 0.614 0.632 0.913 0.597
T1 0.537 0.549 0.496 0.567 0.600 0.640 0.504 0.826
T2 0.646 0.638 0.611 0.683 0.515 0.563 0.484 0.829
T3 0.532 0.503 0.425 0.510 0.606 0.613 0.528 0.784
T4 0.536 0.515 0.461 0.546 0.640 0.662 0.548 0.818
U1 0.692 0.658 0.532 0.860 0.521 0.522 0.574 0.634
U2 0.699 0.655 0.621 0.876 0.473 0.529 0.492 0.613
U3 0.736 0.690 0.599 0.907 0.530 0.549 0.509 0.606
U4 0.547 0.502 0.584 0.756 0.354 0.380 0.382 0.511
U5 0.756 0.716 0.587 0.887 0.542 0.572 0.559 0.676

Fig. 3. Path analysis results.

Table 5
Hypotheses test results.
# / Path / Path Coefficient / t Value / p Value / 2.5% Confidence Interval / 97.5% Confidence Interval / Significance (p < 0.05)?
H1 Attitude (A) -> Behavioral Intention (BI) 0.755 23.410 0.000 0.689 0.818 yes
H2 Perceived Ease of Use (E) -> Attitude (A) 0.103 2.326 0.020 0.016 0.192 yes
H3 Perceived Ease of Use (E) -> Perceived Usefulness (U) 0.509 13.540 0.000 0.432 0.578 yes
H4 Perceived Usefulness (U) -> Attitude (A) 0.526 10.761 0.000 0.429 0.619 yes
H5 Privacy (P) -> Perceived Ease of Use (E) 0.000 0.002 0.998 −0.149 0.155 No
H6 Privacy (P) -> Perceived Usefulness (U) 0.208 3.275 0.001 0.086 0.331 yes
H7 Privacy (P) -> Social Influence (SI) 0.271 4.031 0.000 0.138 0.401 yes
H8 Privacy (P) -> Trust (T) 0.274 4.714 0.000 0.159 0.386 yes
H9 Security (SE) -> Perceived Ease of Use (E) 0.484 6.111 0.000 0.329 0.634 yes
H10 Security (SE) -> Perceived Usefulness (U) 0.175 2.740 0.006 0.053 0.302 yes
H11 Security (SE) -> Social Influence (SI) 0.434 6.569 0.000 0.305 0.562 yes
H12 Security (SE) -> Trust (T) 0.525 9.066 0.000 0.411 0.638 yes
H13 Social Influence (SI) -> Attitude (A) 0.183 4.439 0.000 0.104 0.267 yes
H14 Social Influence (SI) -> Behavioral Intention (BI) 0.029 0.871 0.384 −0.038 0.094 No
H15 Trust (T) -> Attitude (A) 0.140 2.738 0.006 0.040 0.243 yes
H16 Trust (T) -> Behavioral Intention (BI) 0.138 3.626 0.000 0.063 0.211 yes
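The cross-loading criterion of Table 4 can be expressed programmatically: each indicator’s largest loading must fall on its own construct. The snippet below encodes just the A1 row as a sketch; in practice every row of the table would be checked the same way.

```python
# Cross-loadings for one indicator (row A1 of Table 4).
cross = {
    "A1": {"A": 0.879, "BI": 0.819, "E": 0.591, "U": 0.720,
           "P": 0.505, "SE": 0.546, "SI": 0.504, "T": 0.628},
}
own = {"A1": "A"}  # the construct each indicator is intended to measure

def passes(indicator):
    # The criterion holds when the maximum loading is on the own construct.
    loads = cross[indicator]
    return max(loads, key=loads.get) == own[indicator]

print(all(passes(i) for i in cross))  # True: A1 loads highest on Attitude
```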
H10. (B = 0.175, p < 0.05) describes the path between Security (SE)
and Perceived Usefulness (U). This finding suggests a positive relation-
ship between security concerns and the perceived usefulness of
ChatGPT. If students have security concerns, it may affect how they
perceive its usefulness. The p-value (<0.05) confirms the statistical
significance of this relationship.
H11. (B = 0.434, p < 0.05) describes the path between Security (SE)
and Social Influence (SI). This result suggests a positive relationship
between security concerns and social influence. If students have security
concerns, it may influence their perception of social influence related to
using ChatGPT. The p-value (<0.05) confirms the statistical significance
of this relationship.
H12. (B = 0.525, p < 0.05) describes the path between Security (SE)
and Trust (T). This finding indicates a positive relationship between
security concerns and trust. If students have security concerns, it may
impact their level of trust in using ChatGPT. The p-value (<0.05) con-
firms the statistical significance of this relationship.
H13. (B = 0.183, p < 0.05) describes the path between Social Influ-
ence (SI) and Attitude (A). This result suggests a positive relationship
between social influence and attitude. If students perceive social influ-
ence related to using ChatGPT, it positively influences their attitude
toward using it. The p-value (<0.05) confirms the statistical significance
of this relationship.
H14. (B = 0.029, p > 0.05) describes the path between Social Influ-
ence (SI) and Behavioral Intention (BI). This implies no statistically
significant relationship between social influence and students’ behav-
ioral intention to use ChatGPT. Essentially, the influence of peers, col-
leagues, or social circles does not appear to significantly shape students’
intentions to adopt ChatGPT for their daily tasks. The elevated p-value
indicates that any apparent connection between social influence and
behavioral intention may be coincidental, highlighting that social in-
fluence may not be a decisive factor in determining the intention to use
ChatGPT in the examined population.
H15. (B = 0.140, p < 0.05) describes the path between Trust (T) and
Attitude (A). This result suggests a positive relationship between trust
and attitude. If students have a higher level of trust in using ChatGPT, it
positively influences their attitude toward using it. The p-value (<0.05)
confirms the statistical significance of this relationship.
H16. (B = 0.138, p < 0.05) describes the path between Trust (T) and
Behavioral Intention (BI). This finding indicates a positive relationship
between trust and behavioral intention. If students have a higher level of
trust in using ChatGPT, it positively influences their behavioral inten-
tion to use it. The p-value (<0.05) confirms the statistical significance of
this relationship.
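The t-values and percentile confidence intervals reported in Table 5 come from resampling (bootstrapping), which PLS-SEM software performs internally. The sketch below illustrates the idea on synthetic data, with a plain regression slope standing in for a PLS path coefficient; all data and the true slope of 0.5 are invented for illustration.

```python
import random
random.seed(42)

# Synthetic (x, y) data with a true slope of 0.5 plus noise.
data = [(x, 0.5 * x + random.gauss(0, 1)) for x in range(50)]

def slope(sample):
    # Ordinary least-squares slope for a list of (x, y) pairs.
    n = len(sample)
    mx = sum(x for x, _ in sample) / n
    my = sum(y for _, y in sample) / n
    num = sum((x - mx) * (y - my) for x, y in sample)
    den = sum((x - mx) ** 2 for x, _ in sample)
    return num / den

# Resample cases with replacement and re-estimate the coefficient.
estimates = sorted(
    slope([random.choice(data) for _ in range(len(data))])
    for _ in range(1000)
)
lo, hi = estimates[25], estimates[974]  # approx. 2.5% and 97.5% percentiles
significant = not (lo <= 0 <= hi)       # significant if zero lies outside the CI
print(f"95% CI = [{lo:.2f}, {hi:.2f}], significant={significant}")
```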
Based on the results of the study and the analysis of the hypotheses,
several key findings can be summarized:
1. Perceived Ease of Use (E) positively impacts both Attitude (A) and
Perceived Usefulness (U) of ChatGPT. When undergraduate students
find ChatGPT easy to use, it positively influences their attitude to-
ward it and their perception of its usefulness.
2. Perceived Usefulness (U) positively impacts Attitude (A). When
students perceive ChatGPT as useful, it enhances their overall atti-
tude toward using it.
3. Privacy concerns (P) do not significantly affect the perceived ease of
using ChatGPT but positively impact its perceived usefulness. This
implies that while students’ privacy concerns may not influence their
perception of the ease of using ChatGPT, they can shape their
perception of its usefulness.
4. Security concerns (SE) positively influence the Perceived Ease of Use
(E) and Perceived Usefulness (U) of ChatGPT. Students’ security
concerns can impact how they perceive the ease of using ChatGPT
and its usefulness.
5. Social Influence (SI) positively influences Attitude (A). If students
perceive social influence related to using ChatGPT, it positively in-
fluences their attitude toward using it.
6. Trust (T) positively influences both Attitude (A) and Behavioral
Intention (BI). When students have a higher level of trust in using
ChatGPT, it positively influences their attitude toward the tool and
their behavioral intention to use it.
•Research questions
Based on the findings and results of the research, we provide answers
to the research questions as follows:
1. What are undergraduate students’ key considerations and experi-
ences when using ChatGPT, and how do these perceptions shape
their acceptance and usage patterns?
The research findings reveal that key factors such as perceived ease
of use, perceived usefulness, privacy concerns, security considerations,
trust, and social influence are considered by undergraduate students
when using ChatGPT. These factors collectively influence their accep-
tance and usage patterns. Students are more likely to accept and engage
with ChatGPT when they find it easy to use, perceive it as useful, and
experience positive social influence.
2. What are the factors that influence undergraduate students’ accep-
tance of ChatGPT as a regular assistance tool, and how does it impact
their academic performance and overall well-being?
The research findings suggest that factors such as perceived ease of
use, perceived usefulness, privacy concerns, security considerations,
social influence, and trust influence undergraduate students’ acceptance
of ChatGPT as a regular assistance tool. Positive perceptions of these
factors contribute to a favorable attitude toward the use of ChatGPT.
However, the direct impact of ChatGPT on academic performance and
overall well-being was not specifically addressed in this research.
3. Were the undergraduate students aware of the usage of ChatGPT,
and what are their expectations of its benefits and drawbacks?
The research aimed to assess students’ awareness of ChatGPT’s
usage, with findings indicating that awareness was a significant aspect
of the investigation. The study also explored students’ perceptions of the
benefits and drawbacks associated with ChatGPT. Findings suggest that
students perceived benefits in terms of ease of use, usefulness, and social
influence, while potential drawbacks were associated with privacy
concerns, trust, and security considerations.
8. Conclusion, limitation, and future work
•Conclusion
The conclusion of this research rests on the insights gleaned from
exploring the factors that influence undergraduate students’ acceptance
of ChatGPT as a regular assistance tool. The study’s objectives were to
evaluate students’ awareness of ChatGPT’s functionalities and to un-
derstand how the tool impacts their daily activities. Additionally, it
aimed to unearth the potential benefits and challenges of implementing
ChatGPT in educational settings.
The research found that the perceived ease of use and perceived
usefulness of ChatGPT significantly shape students’ attitudes toward the
tool. A positive correlation exists between these perceptions and stu-
dents’ acceptance of ChatGPT: when students nd ChatGPT easy to use
and benecial, their overall attitude toward the tool is likely to be
positive. This nding underscores the importance of a user-friendly
interface and clear communication of ChatGPT’s practical benets to
enhance student engagement.
Privacy concerns and security measures were also highlighted as
vital in inuencing students’ perceptions. Although privacy issues did
not markedly impact the tool’s perceived ease of use, they positively
inuenced its perceived usefulness. This indicates that by addressing
privacy concerns and ensuring robust security measures, trust and reli-
ance on ChatGPT can be fostered among users.
Social inuence emerged as another critical factor. The study showed
that when students perceive a positive social inuence regarding the use
of ChatGPT, they are more likely to adopt a favorable attitude toward
the tool. This emphasizes the role of social contexts and peer inuence in
accepting and adopting new technologies.
In summary, this research offers valuable insights into the dynamics
of undergraduate students’ acceptance of ChatGPT. It highlights the
signicance of perceived ease of use, usefulness, privacy, security, and
social inuence. By considering these aspects, educators and adminis-
trators can effectively promote the integration of ChatGPT into educa-
tional frameworks, thereby enhancing student engagement and learning
experiences.
The research further posits that emphasizing a user-friendly interface and the practical benefits of ChatGPT can lead to more positive attitudes and acceptance among students, presenting a compelling case for the tool's integration into educational practices. However, it is essential to recognize the limitations of this study, such as its focus on a specific demographic and reliance on self-reported data. Future research should aim to diversify its sample and incorporate more objective methodologies to deepen the understanding of user acceptance and engagement with ChatGPT.
Additionally, as the integration of technology in education continues to expand, it is crucial to explore long-term impacts on learning outcomes and ethical considerations, ensuring informed consent and data protection for all participants. This study lays the groundwork for further exploration of the integration of technologies like ChatGPT in education, contemplating their potential effects on learning outcomes, student well-being, and the broader educational landscape.
• Limitations
It is essential to recognize certain limitations of this study. First, the research was limited to undergraduate students, constraining the generalizability of the findings to broader populations. Future studies should incorporate a more diverse sample to ensure wider applicability. Second, this study relied on self-reported measures, which are prone to response bias and might not accurately reflect the actual behavior and experiences of users. Including objective measures and observational data in future research could provide a more comprehensive understanding of user acceptance and usage patterns. Lastly, the study was conducted within a specific context, an educational setting, and might not reflect variations in acceptance and usage that could occur in different domains.
• Future Work
Building on this study's findings, several avenues for future research emerge. First, investigating the long-term effects of using ChatGPT on users' learning outcomes and academic performance is essential. Understanding how ChatGPT integrates into educational processes and its impact on student success could yield further insights. Second, exploring the role of user training and support in enhancing acceptance and addressing concerns would be beneficial. Developing effective training strategies and support systems can contribute to ChatGPT's successful implementation in educational settings. Additionally, investigating the impact of personalized feedback and adaptive features within ChatGPT could make it a more effective assistance tool. Finally, examining the ethical and societal implications of using ChatGPT, including issues related to bias, privacy, and algorithmic accountability, is crucial for its responsible deployment and utilization.
CRediT authorship contribution statement
Hayder Albayati: Writing – review & editing, Writing – original
draft, Visualization, Validation, Supervision, Software, Resources,
Project administration, Methodology, Investigation, Funding acquisi-
tion, Formal analysis, Data curation, Conceptualization.
Declaration of competing interest
The author declares that he has no conflicts of interest.
Appendix A. Constructs' items

Behavioral Intention (sources: Aug; Gefen et al., 2003a; Davis et al., 1989; Venkatesh & Davis, 2000)
BI1: If I have access to ChatGPT, I intend to use it
BI2: If I have access to ChatGPT, I would use it
BI3: I plan to use ChatGPT within the next months

Attitude (sources: Aug; Gefen et al., 2003a; Davis et al., 1989; Venkatesh & Davis, 2000)
A1: I am interested in using ChatGPT
A2: I am likely to use ChatGPT because of its attractiveness
A3: I feel my work overall will be better with ChatGPT

Perceived Ease of Use (sources: Aug; Gefen et al., 2003a; Davis et al., 1989; Venkatesh & Davis, 2000)
E1: Learning to operate ChatGPT would be easy for me
E2: I believe it would be easy to use ChatGPT to accomplish what I want to do
E3: It is easy for me to become skillful at using ChatGPT
E4: I believe ChatGPT is easy to use

Perceived Usefulness (sources: Aug; Gefen et al., 2003a; Davis et al., 1989; Venkatesh & Davis, 2000)
U1: Using ChatGPT would improve my work quality
U2: Using ChatGPT would increase my productivity
U3: Using ChatGPT would enhance my work effectiveness
U4: Using ChatGPT would decrease time consumption

Privacy (sources: Al-Emran et al., 2020; Cheung & Lee, 2001)
P1: I think ChatGPT pays attention to the privacy of its users
P2: I feel safe when I send personal information to ChatGPT
P3: I think ChatGPT follows the personal data protection laws
P4: I think ChatGPT only collects user personal data that are necessary for its activity
P5: I think ChatGPT respects the user's rights when obtaining personal information
P6: I think that ChatGPT will not provide my personal information to other companies

Security (sources: Al-Emran et al., 2020; Cheung & Lee, 2001)
SE1: I think ChatGPT has mechanisms to ensure the safe transmission of its users' information
SE2: I think ChatGPT shows good security care while using
SE3: I think ChatGPT has sufficient technical capacity to ensure that no other organization will supplant its identity on the Internet
SE4: I am sure of the identity of ChatGPT when I establish contact via the Internet
SE5: When I send data to ChatGPT, I am sure that they will not be intercepted by unauthorized third parties
SE6: I think ChatGPT has sufficient technical capacity to ensure that the data I send will not be intercepted by hackers
SE7: When I send data to ChatGPT, I am sure they cannot be modified by a third party
SE8: I think ChatGPT has sufficient technical capacity to ensure that the data I send cannot be modified by a third party

Social Influence (source: Venkatesh et al., 2012)
SI1: People who are important to me think that I should use ChatGPT
SI2: People who influence my behavior think that I should use ChatGPT
SI3: People whose opinions I value prefer that I use ChatGPT

Trust (sources: Alalwan et al., 2018; Fortino et al., 2019)
T1: I believe that ChatGPT is effective and secure in what it is designed to do. (trust technology)
T2: I believe that ChatGPT enables me to do what I need to do. (trust technology)
T3: I believe that ChatGPT users are trustworthy. (trust people)
T4: I believe that ChatGPT is made in a trusted organization. (trust organization)
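Before multi-item constructs such as E1-E4 feed into construct scores, survey studies typically screen them for internal consistency. The following is a minimal Cronbach's alpha sketch on invented responses; the helper name and data are assumptions for illustration, not the study's reported reliability analysis.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of sum scores)
from statistics import variance

def cronbach_alpha(responses):
    """responses: one inner list of item ratings per respondent."""
    k = len(responses[0])                   # number of items in the construct
    items = list(zip(*responses))           # transpose to per-item columns
    item_vars = sum(variance(col) for col in items)
    total_var = variance([sum(r) for r in responses])
    return k / (k - 1) * (1 - item_vars / total_var)

# Five invented respondents rating a four-item construct on a 5-point scale.
responses = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
]
alpha = cronbach_alpha(responses)
print(f"alpha = {alpha:.2f}")  # → 0.92; values above ~0.7 are conventionally acceptable
```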
References
Adams, D. A., Nelson, R. R., & Todd, P. A. (1992). Perceived usefulness, ease of use, and
usage of information technology: A replication. MIS Quarterly, 227–247.
Aghdaie, S. F. A., Piraman, A., & Fathi, S. (2011). An analysis of factors affecting the
consumer’s attitude of trust and their impact on internet purchasing behavior.
International Journal of Business and Social Science, 2(23).
Ajzen, I., & Fishbein, M. (1972). Attitudes and normative beliefs as factors influencing
behavioral intentions. Journal of Personality and Social Psychology, 21(1), 1.
Al-Emran, M., Mezhuyev, V., & Kamaludin, A. (2020). Towards a conceptual model for
examining the impact of knowledge management factors on mobile learning acceptance.
Technology in Society, Article 101247.
Al-Sharafi, M. A., Arshah, R. A., Abo-Shanab, E., & Elayah, N. (2016). The effect of
security and privacy perceptions on customers’ trust to accept internet banking
services: An extension of TAM. Journal of Engineering Applied Sciences, 11(3),
545–552.
Alalwan, A. A., Baabdullah, A. M., Rana, N. P., Tamilmani, K., & Dwivedi, Y. K. (2018).
Examining adoption of mobile internet in Saudi Arabia: Extending TAM with
perceived enjoyment, innovativeness and trust. Technology in Society, 55, 100–110.
Albayati, H., Alistarbadi, N., & Rho, J. J. (2023). Assessing engagement decisions in NFT
metaverse based on the theory of planned behavior (TPB). Telematics and Informatics
Reports, 10, Article 100045.
Albayati, H., Kim, S. K., & Rho, J. J. (2020). Accepting nancial transactions using
blockchain technology and cryptocurrency: A customer perspective approach.
Technology in Society, 62, Article 101320.
Albayati, H., Kim, S. K., & Rho, J. J. (2021). A study on the use of cryptocurrency wallets
from a user experience perspective. In Human behavior and emerging technologies.
https://doi.org/10.1002/hbe2.313
Almalki, S. (2016). Integrating quantitative and qualitative data in mixed methods research: Challenges and benefits. Journal of Education and Learning, 5(3), 288–296.
Autry, C. W., Grawe, S. J., Daugherty, P. J., & Richey, R. G. (2010). The effects of
technological turbulence and breadth on supply chain technology acceptance and
adoption. Journal of Operations Management, 28(6), 522–536.
Baidoo-Anu, D., & Owusu Ansah, L. (2023). Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. Available at SSRN 4337484.
Bansal, H., & Khan, R. (2018). A review paper on human computer interaction.
International Journal of Advanced Research in Computer Science and Software
Engineering, 8(4), 53.
Basak, S. K., Govender, D. W., & Govender, I. (2016). Examining the impact of privacy, security, and trust on the TAM and TTF models for e-commerce consumers: A pilot study. 2016 14th annual conference on privacy, security and trust (PST).
Bass, D. (2022). OpenAI chatbot so good it can fool humans, even when it's wrong. Bloomberg. https://www.bloomberg.com/news/articles/2022-12-07/openai-chatbot-so-good-it-can-fool-humans-even-when-it-s-wrong#xj4y7vzkg
Bhattacherjee, A. (2000). Acceptance of e-commerce services: The case of electronic
brokerages. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and
Humans, 30(4), 411–420.
Bhattacherjee, A., & Premkumar, G. (2004). Understanding changes in belief and attitude
toward information technology usage: A theoretical model and longitudinal test. MIS
Quarterly, 229–254.
Brachten, F., Kissmer, T., & Stieglitz, S. (2021). The acceptance of chatbots in an
enterprise context–A survey study. International Journal of Information Management,
60, Article 102375.
Cao, Y., Li, S., Liu, Y., Yan, Z., Dai, Y., Yu, P. S., & Sun, L. (2023). A comprehensive survey
of AI-generated content (AIGC): A history of generative AI from GAN to ChatGPT. arXiv
preprint arXiv:2303.04226.
Casaló, L. V., Flavián, C., & Guinalíu, M. (2007). The role of security, privacy, usability and reputation in the development of online banking. Online Information Review, 31(5), 583–603.
Chandra, S., Shirish, A., & Srivastava, S. C. (2022). To be or not to be human? Theorizing the role of human-like competencies in conversational artificial intelligence agents. Journal of Management Information Systems, 39(4), 969–1005.
Chen, Y.-T., Shih, W.-L., Lee, C.-H., Wu, P.-L., & Tsai, C.-Y. (2021). Relationships among
undergraduates’ problematic information security behavior, compulsive internet
use, and mindful awareness in Taiwan. Computers & Education, 164, Article 104131.
Cheung, C. M., & Lee, M. K. (2001). Trust in internet shopping: Instrument development
and validation through classical and modern approaches. Journal of Global
Information Management, 9(3), 23–35.
Connor, M., & Siegrist, M. (2010). Factors influencing people's acceptance of gene technology: The role of knowledge, health expectations, naturalness, and social trust. Science Communication, 32(4), 514–538.
Davis, F. D. (1985). A technology acceptance model for empirically testing new end-user information systems: Theory and results. Massachusetts Institute of Technology.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of
information technology. MIS Quarterly, 319–340.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer
technology: A comparison of two theoretical models. Management Science, 35(8),
982–1003.
Dragan, C. C., & Manulis, M. (2018). Bootstrapping online trust: Timeline activity proofs. In Data privacy management, cryptocurrencies and blockchain technology.
Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K.,
Baabdullah, A. M., Koohang, A., Raghavan, V., & Ahuja, M. (2023). “So what if
ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and
implications of generative conversational AI for research, practice and policy.
International Journal of Information Management, 71, Article 102642.
Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). GPTs are GPTs: An early look at
the labor market impact potential of large language models. arXiv preprint arXiv:
2303.10130.
Esposito, C., Tamburis, O., Su, X., & Choi, C. (2020). Robust decentralised trust
management for the internet of things by using game theory. Information Processing
& Management, 57(6). https://doi.org/10.1016/j.ipm.2020.102308
Falcone, R., & Castelfranchi, C. (2001). Social trust: A cognitive approach. In Trust and
deception in virtual societies (pp. 55–90). Springer.
Featherman, M. S., Miyazaki, A. D., & Sprott, D. E. (2010). Reducing online privacy risk
to facilitate e-service adoption: The influence of perceived ease of use and corporate
credibility. Journal of Services Marketing, 24(3), 219–229.
Ferede, B., Elen, J., Van Petegem, W., Hunde, A. B., & Goeman, K. (2022). A structural
equation model for determinants of instructors’ educational ICT use in higher
education in developing countries: Evidence from Ethiopia. Computers & Education,
188, Article 104566.
Fishbein, M. (1975). Belief, attitude, intention and behavior: An introduction to theory and research. Addison-Wesley.
Folkinshteyn, D., & Lennon, M. (2016). Braving bitcoin: A technology acceptance model
(TAM) analysis. Journal of Information Technology Case and Application Research, 18
(4), 220–249.
Fortino, G., Messina, F., Rosaci, D., & Sarne, G. M. (2019). Using blockchain in a
reputation-based model for grouping agents in the internet of things. IEEE
Transactions on Engineering Management.
Frieder, S., Pinchetti, L., Griffiths, R.-R., Salvatori, T., Lukasiewicz, T., Petersen, P. C., Chevalier, A., & Berner, J. (2023). Mathematical capabilities of ChatGPT. arXiv preprint arXiv:2301.13867.
Gefen, D., Karahanna, E., & Straub, D. W. (2003a). Inexperience and experience with
online stores: The importance of TAM and trust. IEEE Transactions on Engineering
Management, 50(3), 307–321.
Gefen, D., Karahanna, E., & Straub, D. W. (2003b). Trust and TAM in online shopping: An
integrated model. MIS Quarterly, 27(1), 51–90.
George, A. S., & George, A. H. (2023). A review of ChatGPT AI’s impact on several
business sectors. Partners Universal International Innovation Journal, 1(1), 9–23.
Granić, A., & Marangunić, N. (2019). Technology acceptance model in educational context: A systematic literature review. British Journal of Educational Technology.
Hair, J. F., Jr., Hult, G. T. M., Ringle, C., & Sarstedt, M. (2016). A primer on partial least
squares structural equation modeling (PLS-SEM). Sage publications.
Hajian, S., Tassa, T., & Bonchi, F. (2016). Individual privacy in social inuence networks.
Social Network Analysis and Mining, 6, 1–14.
Haleem, A., Javaid, M., & Singh, R. P. (2022). An era of ChatGPT as a significant futuristic support tool: A study on features, abilities, and challenges. BenchCouncil Transactions on Benchmarks, Standards and Evaluations, 2(4), Article 100089.
Hsu, C.-L., & Lu, H.-P. (2004). Why do people play on-line games? An extended TAM with social influences and flow experience. Information & Management, 41(7), 853–868.
Iku-Silan, A., Hwang, G.-J., & Chen, C.-H. (2023). Decision-guided chatbots and cognitive
styles in interdisciplinary learning. Computers & Education, Article 104812.
Jahangir, N., & Begum, N. (2008). The role of perceived usefulness, perceived ease of
use, security and privacy, and customer attitude to engender customer adaptation in
the context of electronic banking. African Journal of Business Management, 2(2), 32.
Joinson, A. N., Reips, U.-D., Buchanan, T., & Schofield, C. B. P. (2010). Privacy, trust, and self-disclosure online. Human-Computer Interaction, 25(1), 1–24.
Karahanna, E., & Straub, D. W. (1999). The psychological origins of perceived usefulness and ease-of-use. Information & Management, 35(4), 237–250.
Kesharwani, A., & Singh Bisht, S. (2012). The impact of trust and perceived risk on
internet banking adoption in India: An extension of technology acceptance model.
International Journal of Bank Marketing, 30(4), 303–322.
Kim, J., & Gambino, A. (2016). Do we trust the crowd or information system? Effects of
personalization and bandwagon cues on users’ attitudes and behavioral intentions
toward a restaurant recommendation website. Computers in Human Behavior, 65,
369–379.
Kim, A. J.-Y., & Ko, E.-J. (2010). The impact of design characteristics on brand attitude
and purchase intention-focus on luxury fashion brands. Journal of the Korean Society
of Clothing and Textiles, 34(2), 252–265.
Kocoń, J., Cichecki, I., Kaszyca, O., Kochanek, M., Szydło, D., Baran, J., Bielaniewicz, J., Gruza, M., Janz, A., & Kanclerz, K. (2023). ChatGPT: Jack of all trades, master of none. Information Fusion, Article 101861.
Kung, T. H., Cheatham, M., Medenilla, A., Sillos, C., De Leon, L., Elepaño, C., Madriaga, M., Aggabao, R., Diaz-Candido, G., & Maningo, J. (2023). Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLoS Digital Health, 2(2), Article e0000198.
Limbu, Y. B., Wolf, M., & Lunsford, D. (2012). Perceived ethics of online retailers and consumer behavioral intentions: The mediating roles of trust and attitude. Journal of Research in Interactive Marketing.
Lund, B. D., & Wang, T. (2023). Chatting about ChatGPT: How may AI and GPT impact
academia and libraries. Library Hi Tech News.
Malhotra, Y., & Galletta, D. F. (1999). Extending the technology acceptance model to account for social influence: Theoretical bases and empirical validation. In Proceedings of the 32nd annual Hawaii international conference on systems sciences (HICSS-32).
Martins, C., Oliveira, T., & Popovič, A. (2014). Understanding the internet banking adoption: A unified theory of acceptance and use of technology and perceived risk application. International Journal of Information Management, 34(1), 1–13.
Min, S., So, K. K. F., & Jeong, M. (2021). Consumer adoption of the Uber mobile
application: Insights from diffusion of innovation theory and technology acceptance
model. In Future of tourism marketing (pp. 2–15). Routledge.
Mollick, E. R., & Mollick, L. (2022). New modes of learning enabled by AI chatbots: Three methods and assignments. Available at SSRN.
Morse, J. M. (1991). Approaches to qualitative-quantitative methodological
triangulation. Nursing Research, 40(2), 120–123.
Morse, J. M. (2016). Mixed method design: Principles and procedures (Vol. 4). Routledge.
OpenAI. (2023). GPT-4 technical report. https://doi.org/10.48550/arXiv.2303.08774
Patel, S. B., & Lam, K. (2023). ChatGPT: The future of discharge summaries? The Lancet
Digital Health, 5(3), e107–e108.
Patel, K. J., & Patel, H. J. (2018). Adoption of internet banking services in Gujarat: An
extension of TAM with perceived security and social inuence. International Journal
of Bank Marketing, 36(1), 147–169.
Prislin, R., & Wood, W. (2005). Social influence in attitudes and attitude change.
Qin, L., Kim, Y., Tan, X., & Hsu, J. (2009). The effects of privacy concern and social influence on user acceptance of online social networks.
Raman, R., Mandal, S., Das, P., Kaur, T., Sanjanasri, J., & Nedungadi, P. (2023).
University students as early adopters of ChatGPT: Innovation diffusion study.
Rao, V., & Woolcock, M. (2003). Integrating qualitative and quantitative approaches in
program evaluation. The impact of economic policies on poverty and income distribution:
Evaluation techniques and tools (pp. 165–190).
Ringle, C. M., Wende, S., & Becker, J.-M. (2015). SmartPLS 3. Bönningstedt: SmartPLS. Retrieved July 15, 2016.
Ronaghi, M. H., & Mosakhani, M. (2022). The effects of blockchain technology adoption on business ethics and social sustainability: Evidence from the Middle East. Environment, Development and Sustainability. https://doi.org/10.1007/s10668-021-01729-x
Roy, M. C., Dewit, O., & Aubert, B. A. (2001). The impact of interface usability on trust in web retailers. Internet Research.
Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional
assessments in higher education? Journal of Applied Learning and Teaching, 6(1).
Saadé, R., & Bahli, B. (2005). The impact of cognitive absorption on perceived usefulness and perceived ease of use in on-line learning: An extension of the technology acceptance model. Information & Management, 42(2), 317–327.
Salloum, S. A., Alhamad, A. Q. M., Al-Emran, M., Monem, A. A., & Shaalan, K. (2019).
Exploring students’ acceptance of E-learning through the development of a
comprehensive technology acceptance model. IEEE Access, 7, 128445–128462.
Schoonenboom, J., & Johnson, R. B. (2017). How to construct a mixed methods research design. Kölner Zeitschrift für Soziologie und Sozialpsychologie, 69(Suppl 2), 107.
Selya, A. S., Rose, J. S., Dierker, L. C., Hedeker, D., & Mermelstein, R. J. (2012).
A practical guide to calculating Cohen’s f2, a measure of local effect size, from PROC
MIXED. Frontiers in Psychology, 3, 111.
Shawar, B. A., & Atwell, E. (2007). Chatbots: Are they really useful? Journal for Language
Technology and Computational Linguistics, 22(1), 29–49.
Shin, D.-H. (2010). The effects of trust, security and privacy in social networking: A
security-based approach to understand the pattern of adoption. Interacting with
Computers, 22(5), 428–438.
Sicari, S., Rizzardi, A., Grieco, L. A., & Coen-Porisini, A. (2015). Security, privacy and
trust in Internet of Things: The road ahead. Computer Networks, 76, 146–164.
Sidi, Y., Shamir-Inbal, T., & Eshet-Alkalai, Y. (2023). From face-to-face to online: Teachers’
perceived experiences in online distance teaching during the Covid-19 pandemic.
Computers & Education, Article 104831.
Suh, B., & Han, I. (2003). The impact of customer trust and perception of security control
on the acceptance of electronic commerce. International Journal of Electronic
Commerce, 7(3), 135–161.
Teddlie, C., & Tashakkori, A. (2011). Mixed methods research. In The Sage handbook of
qualitative research, 4 pp. 285–300).
Trafimow, D., & Finlay, K. A. (1996). The importance of subjective norms for a minority of people: Between-subjects and within-subjects analyses. Personality and Social Psychology Bulletin, 22(8), 820–828.
van Dis, E. A., Bollen, J., Zuidema, W., van Rooij, R., & Bockting, C. L. (2023). ChatGPT:
Five priorities for research. Nature, 614(7947), 224–226.
Venkatesh, V. (2000). Determinants of perceived ease of use: Integrating control,
intrinsic motivation, and emotion into the technology acceptance model. Information
Systems Research, 11(4), 342–365.
Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a research agenda
on interventions. Decision Sciences, 39(2), 273–315.
Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology
acceptance model: Four longitudinal eld studies. Management Science, 46(2),
186–204.
Venkatesh, V., Thong, J. Y., & Xu, X. (2012). Consumer acceptance and use of
information technology: Extending the unied theory of acceptance and use of
technology. MIS Quarterly, 157–178.
Whitford, E. (2022). A computer can now write your college essay, maybe better than.
Wu, L., & Chen, J.-L. (2005). An extension of trust and TAM model with TPB in the initial
adoption of on-line tax: An empirical study. International Journal of Human-Computer
Studies, 62(6), 784–808.
Wu, S., Irsoy, O., Lu, S., Dabravolski, V., Dredze, M., Gehrmann, S., Kambadur, P., Rosenberg, D., & Mann, G. (2023). BloombergGPT: A large language model for finance. arXiv preprint arXiv:2303.17564.
Yang, H.-d., & Yoo, Y. (2004). It’s all about attitude: Revisiting the technology
acceptance model. Decision Support Systems, 38(1), 19–31.
Yu, J., Ha, I., Choi, M., & Rho, J. (2005). Extending the TAM for a t-commerce.
Information & Management, 42(7), 965–976.
Zhou, T., & Li, H. (2014). Understanding mobile SNS continuance usage in China from the perspectives of social influence and privacy concern. Computers in Human Behavior, 37, 283–289.
Dr. Hayder Albayati is currently an assistant professor in the Global Convergence Management Department at the College of Endicott, Woosong University. He obtained his Ph.D. in the Global Information Telecommunication Technology Program (GITTP) at the Korea Advanced Institute of Science and Technology (KAIST) in 2021. He received his MS in Information & Communication Technology (ICT convergence) at Soongsil University in 2017. He has a bachelor's degree in Computer Science from Al Mansoor University College. His research interests include Blockchain Technology, e-Government, IoT, AI, AR, VR, Data Analytics, Big Data, Business Development, Smart City, and Technology Transfer and Convergence. E-mail: hayder1111@wsu.ac.kr, hayder1111@gmail.com.