Research Article
Assessing ChatGPT’s Information Quality Through the Lens of
User Information Satisfaction and Information Quality Theory in
Higher Education: A Theoretical Framework
Chung-Jen Fu,¹ Andri Dayarana K. Silalahi,² I-Tung Shih,¹ Do Thi Thanh Phuong,³ Ixora Javanisa Eunike,¹ and Shinetsetseg Jargalsaikhan¹

¹Department of Business Administration, College of Management, Chaoyang University of Technology, Taichung, Taiwan
²Department of Marketing and Logistics Management, College of Management, Chaoyang University of Technology, Taichung, Taiwan
³Department of Distribution Management, College of Management, National Chin-Yi University of Technology, Taiping, Taiwan
Correspondence should be addressed to Andri Dayarana K. Silalahi; andridksilalahi@gmail.com
Received 2 March 2024; Revised 26 August 2024; Accepted 5 September 2024
Academic Editor: Puspa Setia Pratiwi
Copyright © 2024 Chung-Jen Fu et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Incorporating AI tools like ChatGPT into higher education has been beneficial, yet the extent of user satisfaction with the quality
of information these tools provide, examined through the lenses of user information satisfaction (UIS) and information quality (IQ) theory, remains
underexplored. This study introduces a UIS model specifically designed for ChatGPT's application in the educational sector,
grounded in the multiple dimensions of IQ theory. Drawing from established UIS and IQ research, we crafted a model centered on seven
essential factors that influence the effective use of ChatGPT, aiming to guide educators and learners in overcoming common
challenges such as plagiarism and ensuring the ethical use of AI. Data were collected from Indonesian university participants
(N = 508) and analyzed using structural equation modeling with SmartPLS 4.0. The results reveal that completeness, precision,
timeliness, convenience, and information format are the most influential factors driving user satisfaction with ChatGPT.
Interestingly, our research indicated that the accuracy and reliability of the information, typically deemed paramount, were not
the primary concerns in the academic use of ChatGPT. Our findings recommend a cautious approach to integrating ChatGPT
in higher education. We advocate for strategic use that recognizes its innovative potential while acknowledging its limitations,
ensuring responsible and effective application in educational contexts. This balanced perspective is crucial for integrating AI
tools into the academic fabric without compromising educational integrity or quality.
Keywords: artificial intelligence; ChatGPT; higher education; user information satisfaction
1. Introduction
The release of AI-powered ChatGPT has rapidly become a
focal point of discourse within educational circles, especially
in higher education sectors worldwide. Since its launch in late
2022, ChatGPT has been the subject of intense examination,
fueling debates that weigh its potential as a transformative
educational aid against its capacity to enable academic miscon-
duct, particularly plagiarism, and the promulgation of false
information [1]. The ethical implications of its use, including
the potential for AI to impact student learning and the authen-
ticity of their work, pose substantial concerns [2, 3]. These
ethical challenges have significantly influenced the hesitancy
in ChatGPT’s full-scale adoption within academic frameworks.
Despite these concerns, the utility of ChatGPT in educa-
tional contexts cannot be overstated. Roose [4] highlights its
advantages, particularly regarding resource generation and
engagement facilitation, which could redefine pedagogical
mechanisms. Such tools can potentially revolutionize con-
tent creation, offering resources and interactive experiences
that can support personalized learning paths [5]. Further-
more, educators have found that when leveraged judiciously,
ChatGPT can facilitate idea generation and curriculum
development [6], potentially enhancing the educator's role
by automating administrative tasks and allowing for more
focused pedagogical strategies.

Human Behavior and Emerging Technologies (Wiley), Volume 2024, Article ID 8114315, 18 pages. https://doi.org/10.1155/2024/8114315
The dualistic nature of ChatGPT’s impact—its prospec-
tive benefits shadowed by ethical and integrity-related quan-
daries—underscores the need for an in-depth examination
of its role and regulation in educational settings. This dia-
logue is not merely theoretical but is actively shaped by the
experiences and testimonies of educators and students navi-
gating this new technological frontier [7].
The integration of ChatGPT into educational frame-
works has also sparked a robust debate among academics
and practitioners. Advocates, including Kocoń et al. [8], view
ChatGPT as a revolutionary aid that enables educators to
rapidly generate diverse teaching materials, thus potentially
transforming pedagogical approaches. This support hinges
on the belief that ChatGPT’s responsive and interactive
capabilities can significantly reduce the time and effort tradi-
tionally required for educational content creation.
However, this optimistic view is counterbalanced by
critical voices within the academic community. As voiced
by De Angelis et al. [9] and Deiana et al. [10], critics caution
against adopting ChatGPT without stringent oversight.
Their primary concerns revolve around the ease with which
students might exploit the tool for cheating and the propa-
gation of misinformation, which threatens to compromise
the integrity and reliability of educational standards. These
apprehensions highlight the need for frameworks that can
effectively mitigate such risks while preserving the beneficial
aspects of AI in education.
Although there has been considerable discussion about
ChatGPT’s potential and its challenges, current research
has not fully explored ChatGPT's information quality (IQ)
through the lens of user information satisfaction (UIS) and
information quality theory (IQT) in higher education. Key
aspects like information completeness, precision, timeliness,
convenience, format, accuracy, and reliability have not been
adequately addressed in this context. This oversight leads to
educators and policymakers relying on incomplete data
when considering ChatGPT’s potential and its effects on
education. A comprehensive analysis of ChatGPT’s role in
satisfying informational needs is vital for its effective appli-
cation in educational settings. Therefore, it is imperative to
conduct targeted studies that assess both the benefits and
drawbacks of ChatGPT usage. Such investigations will clar-
ify AI’s implications for UIS and help ensure its integration
into educational practices, which optimizes learning while
upholding academic standards.
This study ventures to bridge this gap by systematically
assessing how ChatGPT meets the informational needs of
its users in higher education. It scrutinizes seven key dimen-
sions of UIS based on IQT multidimensions—completeness,
precision, timeliness, convenience, format, accuracy, and
reliability—which have been underexplored in existing liter-
ature. Completeness pertains to the extent to which
ChatGPT provides information that meets the full scope of
users' inquiries. Precision refers to the degree of specificity
and relevance that ChatGPT’s responses hold to the ques-
tions posed. Timeliness involves the speed at which ChatGPT
delivers information, while convenience considers the ease of
interaction with the AI tool. Format examines the organiza-
tion and presentation of information, and accuracy and reli-
ability address the correctness and dependability of the
content provided. This comprehensive evaluation, account-
ing for the positive potentials and risks identified by For-
oughi et al. [11]; Strzelecki [12]; Chang, Silalahi, and Lee
[13]; and Menon and Shilpa [14], as well as the integrity
concerns highlighted by D. Cotton, P. Cotton, and Shipway
[15]; Bin-Nashwan, Sadallah, and Bouteraa [16]; Liu et al.
[17], and Ansari, Ahmad, and Bhutta [18], seeks to offer
an expansive perspective on ChatGPT's effectiveness and
to elucidate its role in the academic sector. Through this
investigation, the study is aimed at significantly advancing
the conversation surrounding the deployment of AI in edu-
cation and underpinning its practical applications with
robust empirical insights.
Revisiting the UIS framework by Ives, Olson, and Bar-
oudi [19] and IQT by Wang and Strong [20], this study
focuses on integrating AI in education through ChatGPT.
It investigates how ChatGPT's accuracy, relevance, and time-
liness can improve educational quality, focusing on the
underexplored area of UIS in AI-supported learning envi-
ronments. Amid growing concerns over the misuse of AI
for unethical academic practices, the study provides action-
able strategies for educational institutions to maximize
ChatGPT’s benefits while upholding academic integrity [7,
21, 22]. It also offers a critical analysis of the risks associated
with ChatGPT, particularly in facilitating academic dishon-
esty [23], which is indispensable for embedding AI into edu-
cational frameworks. The study equips decision-makers with
the insights needed for informed AI integration by providing
a robust evaluation of UIS in AI educational tools. It
endorses a mindful adoption of AI to bolster learning out-
comes, ensuring user satisfaction remains paramount.
2. Literature Review
2.1. Previous Study and Gap Identification. UIS is a well-
established concept in information system (IS) development.
While numerous studies have applied UIS in various con-
texts, there is a noticeable gap in research explicitly focusing
on UIS concerning AI-powered tools in education, mainly
the quality of information generated by ChatGPT. The need
to explore this area has become increasingly clear, as Chang,
Silalahi, and Lee [13] have pointed out the potential for such
information to create uncertainty. Additionally, Fu et al. [24]
have raised concerns that information from ChatGPT could
lead to misinformation and potentially harm users' knowl-
edge and decision-making abilities, especially in educational
settings. This highlights the importance of further research
on UIS, mainly using the IQT approach, as there has been
no investigation into how UIS and IQT assess the quality
of information generated by generative AI. For example,
Wang et al. [25] studied how IQ controls are used to
standardize content on online platforms, applying IQT and
leveraging four factors as IQ controls.
Similarly, Kim et al. [26, 27] studied the impact of
incorrect information provided by ChatGPT on user
decision-making in tourism, emphasizing how accuracy
and trustworthiness influence the acceptance of AI recom-
mendations. This research underscores the critical role of
IQ in AI-driven contexts, particularly focusing on the nega-
tive effects of inaccurate information and offering practical
insights for enhancing AI-based recommendation systems.
Melchert, Müller, and Fischer [28] explored the application
of IQT in AI-driven educational platforms, emphasizing
the need for high-quality information to ensure effective
learning outcomes and user satisfaction. Their study con-
tributes by adapting IQT to the educational context, provid-
ing a framework for evaluating and improving the IQ of AI-
driven educational tools, with a focus on ensuring learner
satisfaction and educational integrity. Wang, Li, and Chen
[29] examined how information normalization improves
communication effectiveness in online healthcare consulta-
tions, although they found that its positive effects are dimin-
ished in high-risk scenarios. Their study expands the IQ
framework by integrating information normalization, pro-
viding insights into balancing informational and emotional
support in online healthcare settings.
Haryaka, Agus, and Kridalaksana [30] developed a
model focusing on service quality, IQ, user participation,
and benefits in e-learning, finding strong correlations
and reliability among these variables. Similarly, Ang and
Koh [31] explored the relationship between job satisfaction
and UIS within a company context, integrating demographic
variables such as age, educational level, job tenure, and orga-
nizational position. While their framework demonstrated
that job satisfaction and UIS are influenced by similar fac-
tors, it did not consider the implications of AI-powered edu-
cational tools, leaving a gap in understanding how such tools
affect UIS.
Au, Ngai, and Cheng [32] conducted a critical review
of over 50 papers on end-user information system satisfac-
tion (EUISS), revealing a dominance of the expectation
disconfirmation approach in past research. They proposed
an integrated conceptual model based on equity and needs
theories to better understand the psychological processing
of IS performance and its impact on EUISS. Despite pro-
viding valuable insights for future research, this study did
not explore the impact of AI-powered tools like ChatGPT
in educational settings. Siritongthaworn and Krairit [33]
identified four dimensions of student satisfaction in e-learn-
ing: delivery method, content, communication facilitation,
and system operation. Their study underscored the impor-
tance of contextual factors in influencing overall satisfaction,
suggesting that e-learning environments require distinct
instructional designs compared to traditional or solely online
methods. Yet, their research did not address the unique chal-
lenges and opportunities presented by AI-powered tools in
education.
Building on these foundations and previous studies in
Table 1, our study addresses a significant gap in the litera-
ture by focusing on UIS in the context of ChatGPT's applica-
tion in higher education. We developed a UIS model focusing
on seven key factors: completeness, accuracy, precision, reli-
ability, timeliness, convenience, and format. Our findings will
reveal how the seven key factors can influence UIS and which
one of these factors will raise concerns for academic use. This
strategic framework for integrating ChatGPT in higher educa-
tion emphasizes responsible and effective application, ensur-
ing educational integrity and quality while acknowledging
the tool’s limitations. This contribution provides a robust
addition to the body of knowledge, guiding educators and
researchers in harnessing the potential of AI tools in educa-
tional settings.
3. This Study’s Theoretical Framework
Previous studies have established a variety of UIS measures
tailored to the context and specific IS under scrutiny. For
instance, Galletta and Lederer [35] devised as many as 23
metrics to assess user satisfaction concerning the implemen-
tation and efficacy of IS. Conversely, Laumer, Maier, and
Weitzel [36] pinpointed four factors that determine user
contentment with corporate content management systems,
among other diverse inquiries (refer to [19, 36, 37]). These
scholarly endeavors elucidate that a theoretical framework
for measuring UIS is well-established. However, there is a
distinct lack of focus on measuring UIS specifically for AI-
supported systems like ChatGPT. Therefore, this research
aims to develop specialized UIS measures for ChatGPT, pro-
viding valuable insights for both users and developers,
including entities like OpenAI and others engaged in chatbot
technologies. This initiative not only broadens the scope of
UIS applications but also contributes to the refinement of
AI-supported systems to better meet user requirements.
The formulation of these measures has the potential to sig-
nificantly influence the customization and enhancement of
technology, thus elevating user utility and satisfaction across
various contexts.
Drawing upon the concept of UIS and IQT from prior
research [11, 20, 19, 36], this study formulates a seven-step
framework to evaluate the quality of information generated
from the ChatGPT system, which will be empirically tested
among its users. These seven constructs of UIS are derived
from IQT and have been operationalized into a user satisfac-
tion model methodically tested through this investigation.
The IQ satisfaction dimensions tailored for ChatGPT
include completeness, precision, timeliness, convenience,
format, accuracy, and reliability. These constructs will be
assessed in relation to user satisfaction and structured within
a conceptual research framework, as depicted in Figure 1.
This comprehensive approach aims to offer a holistic under-
standing of user interaction with and perception of the qual-
ity of information from the ChatGPT system. By analyzing
these varied dimensions, the study endeavors to pinpoint
the primary areas where ChatGPT excels and where there
is potential for enhancement, thereby augmenting its efficacy
as a user-oriented tool. The outcomes of this research are
anticipated to provide valuable insights into the optimiza-
tion of ChatGPT and other AI-supported systems.
4. Theoretical Background
4.1. IQT. IQT has become an integral part of data manage-
ment and IS studies, significantly advanced by Wang and
Strong [20]. They expanded the concept of IQ beyond mere
Table 1: Previous studies and gap identification.

Kim et al. [26, 27]
  AI context in education: Yes (ChatGPT in tourism)
  Variables: incorrect information, accuracy, trustworthiness, and user decision-making
  Findings: Investigates the impact of incorrect information provided by ChatGPT on user decision-making, highlighting how accuracy and trustworthiness influence the acceptance of AI recommendations in tourism.
  Contribution: Highlights the critical role of information quality in AI-driven contexts like tourism, particularly the negative effects of inaccurate information, and offers practical insights for enhancing AI-based recommendation systems.

Melchert, Müller, and Fischer [28]
  AI context in education: Yes (AI in education)
  Variables: information quality dimensions (relevance, accuracy, completeness, timeliness, and accessibility)
  Findings: Explores the application of IQT in AI-driven educational platforms, emphasizing the need for high-quality information to ensure effective learning outcomes and user satisfaction.
  Contribution: Adapts IQT to the educational context, providing a framework for evaluating and improving the information quality of AI-driven educational tools, with a focus on ensuring learner satisfaction and educational integrity.

Wang, Li, and Chen [29]
  AI context in education: No (online healthcare consultations)
  Variables: information normalization, emotional support, and informational support
  Findings: Information normalization improves communication effectiveness in online healthcare consultations, but the positive effects are diminished in high-risk health scenarios.
  Contribution: Expands the information quality framework by integrating information normalization, providing insights into the balance between informational and emotional support in online healthcare settings.

Wang et al. [25]
  AI context in education: No (online information normalization)
  Variables: information normalization and information quality controls (timeliness, completeness, depth, and relevance)
  Findings: Information normalization improves communication performance in online healthcare consultations, but its positive impact is lessened when dealing with high-risk diseases.
  Contribution: Introduces information normalization into the information quality model, enhancing online healthcare communication and enriching social support theory by highlighting the balance between informational and emotional support.

Haryaka, Agus, and Kridalaksana [30]
  AI context in education: No (e-learning UIS)
  Variables: service quality (responsiveness, relevancy, understanding, and productivity), information quality (competence, accuracy, and participation), user (tangible and currency), and benefit
  Findings: Developed a user satisfaction model for e-learning; the model showed strong correlations and reliability among variables like service quality, information quality, user participation, and benefits.
  Contribution: Provides a robust model for evaluating e-learning user satisfaction via smartphones, validated through statistical methods, and offers insights for future development of e-learning applications.

Ang and Koh [31]
  AI context in education: No (UIS and job satisfaction in a company)
  Variables: demographics (e.g., age, educational level, job tenure, and organizational position)
  Findings: Job satisfaction and user information satisfaction are correlated and influenced by similar factors; the study's framework, incorporating variables like age, education, and computer literacy, showed promising results in initial tests.
  Contribution: Introduces a framework to rigorously examine the relationship between job satisfaction and user information satisfaction, offering a basis for further validation.

Au, Ngai, and Cheng [32]
  AI context in education: No (information system satisfaction framework development)
  Variables: IS performance, IS performance expectation, equitable work performance fulfillment, equitable relatedness fulfillment, equitable self-development fulfillment, and IS performance expectation disconfirmation
  Findings: Reviewed over 50 papers on end-user information system satisfaction (EUISS) and found that most past research used the expectation disconfirmation approach; an integrated conceptual model based on equity and needs theories is proposed to better understand the psychological processing of information system performance and its impact on EUISS.
  Contribution: Introduces a new conceptual model for EUISS, incorporating equity and needs theories, and provides insights for future testing and application of this model.

Wang [34]
  AI context in education: No (e-learning satisfaction context in education)
  Variables: learner interface, learning community, content, and personalization
  Findings: Developed and validated a comprehensive model and instrument to measure learner satisfaction with asynchronous e-learning systems, addressing a gap in existing literature; the model was rigorously tested for reliability and various forms of validity using data from 116 adult respondents.
  Contribution: Provides a validated instrument for measuring learner satisfaction with asynchronous e-learning, offering a useful tool for researchers to develop and test e-learning theories and insights for practitioners in improving e-learning systems.

Siritongthaworn and Krairit [33]
  AI context in education: No (e-learning satisfaction)
  Variables: delivery method, content, communication facilitation, and system operation
  Findings: Identifies four dimensions of student satisfaction in e-learning: delivery method, communication facilitation, system operation, and content; these dimensions influence overall satisfaction, and their impact varies depending on the context of e-learning implementation.
  Contribution: Offers a tailored instrument for measuring student satisfaction in e-learning, emphasizing the need for distinct instructional designs for blended e-learning compared to traditional or solely online methods, providing valuable insights for educators.

This study
  AI context in education: Yes (e.g., ChatGPT in education)
  Variables: completeness, accuracy, precision, reliability, timeliness, convenience, and format
  Findings: Developed a UIS model based on IQ theory for assessing ChatGPT's information quality in education, highlighting that completeness, precision, timeliness, convenience, and information format are key factors driving user satisfaction; interestingly, accuracy and reliability were not primary concerns for academic use.
  Contribution: Provides a strategic framework for integrating ChatGPT in higher education, emphasizing the need for responsible and effective application by considering information quality and acknowledging the tool's limitations, thereby ensuring educational integrity and quality.
accuracy, introducing a multidimensional perspective that
includes completeness, consistency, and timeliness [20].
Their research redefined IQ, emphasizing that a multidi-
mensional approach is essential for assessing the quality of
information. Furthermore, Wang and Strong’s [20] work
was instrumental in shifting the paradigm of IQ from a
purely technical focus to one that considers how information
is used and perceived in various contexts. This shift in IQT
can be part of a broader epistemological change, reflecting
how organizations perceive information [38]. This trans-
formation was driven by the recognition that data, as a
representation of reality [39], is not an end but a means
to an end—facilitating informed decision-making [40].
The development of IQT reflects a constructivist perspec-
tive, where the value of information is dependent on its
utility within specific contexts rather than being an inher-
ent property of the data itself. This approach aligns with
the pragmatic tradition in philosophy, which emphasizes
ideas' practical consequences and usefulness in achieving
desired outcomes [41].
Over the years, IQT has been adopted across various
interdisciplinary fields, including ISs [25], management
science [42], and data governance [43]. In the last two
decades, the theory has been expanded and enriched, build-
ing on Wang and Strong’s foundational work and incorpo-
rating insights from other disciplines to address emerging
challenges brought about by the digital revolution. For
instance, Batini et al. [44] highlighted the need for adapt-
able frameworks of information in the complex environ-
ment of modern society, while Alhassan, Sammon, and
Daly [43] integrated data governance principles to ensure
data quality across distributed systems. Martín et al. [45]
further developed IQT by exploring the impact of data
quality on the performance of AI models, emphasizing
the importance of high-quality data in predictive analytics.
Additionally, Fosso Wamba et al. [46] addressed the sig-
nificance of new metrics in real-time data processing, rein-
forcing the ongoing relevance of IQT in contemporary
management.
Recent studies have expanded the application of IQT
into the field of AI. For example, Kim et al. [26, 27] exam-
ined how incorrect information from ChatGPT influences
user decisions in the tourism sector. Their findings highlight
the importance of dimensions like accuracy and trustworthi-
ness in AI, especially in gaining user acceptance and trust in
AI-generated information. Similarly, Melchert, Müller, and
Fischer [28] applied IQT to AI-driven educational plat-
forms, stressing the need for high-quality information to
achieve effective learning and user satisfaction. These studies
suggest that traditional IQ dimensions, such as accuracy, rel-
evance, and completeness, must be reassessed in the context
of AI advancements.
In this study, which aims to assess user satisfaction with
information generated by AI systems like ChatGPT based on
IQ dimensions, applying IQT is both timely and crucial. The
release of generative AI tools like ChatGPT raises concerns
about ensuring the quality of dynamically generated content
[13]. As a result, traditional IQ dimensions like accuracy,
relevance, and completeness must be reassessed to confirm
that these AI systems are effectively meeting users' informa-
tion needs. Additionally, scholars have raised concerns
about misinformation and ethical issues associated with
ChatGPT despite its perceived benefits in the learning pro-
cess. This research is well-positioned within the current
landscape of generative AI, where assessing the quality of
its generated information is essential. Furthermore, Fu
et al. [24] found that interactions with ChatGPT, particu-
larly concerning its information, can impact individuals’
knowledge and skills. When the information and interac-
tion with generative AI are mutually beneficial, transparent,
and reliable, it reflects high-quality information generation.
Therefore, by applying IQT to evaluate ChatGPT’s gener-
ated information, this study enhances the understanding
of AI-generated information's effectiveness and contributes
Figure 1: The study's theoretical framework measuring UIS with ChatGPT in higher education. Seven IQ dimensions (completeness, accuracy, precision, reliability, timeliness, convenience, and format) are hypothesized (H1-H7) to influence user information satisfaction.
to the broader discussion on the implications of generative
AI’s IQ, influencing user satisfaction and trust in these
systems.
4.2. UIS. In the early 1980s, Ives, Olson, and Baroudi [19]
introduced a model for evaluating user satisfaction within
ISs, highlighting that user perceptions and understanding
are essential. They argued that satisfaction is a critical indi-
cator of an IS’s overall success. This framework included
multidimensional factors such as system reliability, IQ,
and user support, all of which collectively influence the
overall user experience [47, 48]. Building on this, Ives,
Olson, and Baroudi [19] further explored the relationship
between UIS and system usage, highlighting that user sat-
isfaction reflects and drives continued system use. These
early studies established UIS as a pivotal concept in evalu-
ating IS effectiveness, setting the stage for its continued
evolution in IS research [19].
As the IS field progressed, the concept of UIS evolved,
mirroring the growing complexity of digital environments.
Wixom and Todd [49] made a significant contribution by
combining UIS with the technology acceptance model
(TAM), underscoring the importance of perceived useful-
ness and ease of use as central to user satisfaction. Their
work highlighted how closely satisfaction is tied to users’
perceptions of a system’s utility and willingness to adopt
and continue using the technology. Recent studies have fur-
ther extended the application of UIS to modern technologies
like mobile apps and cloud computing, where aspects such
as IQ and user interface design play a pivotal role in deter-
mining user satisfaction [50, 51]. For example, Chi [52]
explored mobile commerce and found that user satisfaction
is significantly shaped by the quality of information pro-
vided and the ease of navigating the interface.
UIS has been increasingly applied to new areas where
digital platforms and advanced technologies shape the user
experience. Xu, Benbasat, and Cenfetelli [53] highlighted
the need to combine service quality with system and IQ to
boost user satisfaction, especially in mobile services. Al-
Qeisi et al. [54] examined this further in e-government
services, emphasizing that IQ and user interface design are
crucial for ensuring user satisfaction. More recently, research
has looked into UIS in cloud computing environments,
revealing that satisfaction with service quality, system reli-
ability, and information accuracy is essential [55]. These
studies demonstrate how UIS continues to evolve to meet
the demands of modern digital environments, reinforcing
its importance as a key measure of IS effectiveness.
This research argues that user satisfaction with genera-
tive AI systems like ChatGPT is not simply a byproduct of
system performance but rather a complex interplay between
the user and the information provided. Given the dynamic
quality of AI-generated content, measuring UIS requires a
more nuanced approach that considers both the objective
quality of the information and the user’s subjective experi-
ence. Studies by Yoon and Kim [56] have highlighted the
need to understand these interactions significantly as AI sys-
tems increasingly influence everyday decision-making. By
examining UIS in the context of ChatGPT, this study pro-
vides insights into how AI can be designed to boost user sat-
isfaction and trust, adding to the ongoing conversation
about AI’s role in modern ISs [57].
5. Hypothesis Development
5.1. UIS Factors. As previously mentioned, this study
employs seven measures of UIS—completeness, accuracy,
precision, reliability, timeliness, convenience, and for-
mat—to assess user satisfaction. These measures are intri-
cately linked to the user context of ChatGPT, situating the
research at the intersection of AI utility and user experience.
The objective is to conduct a comprehensive examination of
how each of these UIS dimensions contributes to overall user
satisfaction when interacting with ChatGPT. This methodo-
logical approach facilitates a detailed exploration of the system’s effectiveness and the user experience it fosters, offering
insights that can guide future refinements and modifications
to optimize ChatGPT for its users.
In the realm of ChatGPT’s responses, “completeness”
refers to the depth and breadth with which user inquiries
are addressed. Studies, such as those by Gupta, Motlagh,
and Rhyner [58], indicate that users value detailed and com-
prehensive answers, as they contribute to a fuller under-
standing of the subject matter. A ChatGPT system that
responds to complex user questions with complete, ade-
quate, and specific information demonstrates a more thor-
ough comprehension [59]. Particularly noteworthy is
ChatGPT’s ability to identify and comprehend the implicit
questions posed by users, generating responses that are both
comprehensive and relevant. Additionally, ChatGPT’s
capacity to present updated and current knowledge stands
out as an added value. This correlation suggests that the
more complete the responses provided by ChatGPT, the
higher the likelihood of user satisfaction, underscoring the
significance of depth and breadth in the delivery of informa-
tion. Therefore, it is posited that:
H1: The more complete the information provided by
ChatGPT, the higher the user satisfaction.
In this study, accuracy is conceptualized as the truthful-
ness and correctness of ChatGPT’s responses. Personalized
information, when provided in ample and relevant quanti-
ties, is typically perceived as delivering accurate information
recommendations to users [26, 27]. This perspective is
informed by the accuracy of textual outputs such as language
modeling, text categorization, or the question-and-answer
formats generated by ChatGPT [60]. ChatGPT’s capability
to comprehend and interpret complex queries to produce
answers reflecting the veracity of information contributes
to user satisfaction. Prior research emphasizes the criticality
of accurate information for users, especially in decision-
making contexts [11]. This study posits a direct positive rela-
tionship between the accuracy of provided information and
user satisfaction levels, highlighting the importance of deliv-
ering truthful and reliable answers to enhance the user expe-
rience. Therefore, the proposed hypothesis is as follows:
H2: The more accurate the information provided by
ChatGPT, the higher the user satisfaction.
Human Behavior and Emerging Technologies
Precision in ChatGPT’s responses, which focuses on the
relevance and specificity of user queries, is crucial. Research
indicates that users favor targeted answers that directly
address their specific issues [61]. These responses provide
information that aligns closely with the user’s needs, offering
specific solutions to their presented problems. Consequently,
users perceive that the ChatGPT system understands their
requirements through precise and thorough responses that
meet or even exceed their expectations [62]. This suggests
that the more precise the information provided, the greater
the user satisfaction, underscoring the importance of contex-
tually specific and tailored responses. Therefore, the hypoth-
esis is as follows:
H3: The more precise the information from ChatGPT,
the higher the user’s satisfaction.
Reliability refers to the consistency and dependability of
ChatGPT’s responses over time. The quantity of information
considered reliable pertains to responses that not only
address the immediate informational needs of users but also
provide them with opportunities for deeper knowledge
acquisition [63]. Users who perceive ChatGPT as a reliable
source, offering consistent and dependable responses, are
likely to experience higher satisfaction levels. User interac-
tion studies have shown that consistent performance fosters
user trust and satisfaction [64]. This highlights the impor-
tance of ChatGPT maintaining stable and reliable outputs
to sustain user satisfaction. Therefore, the hypothesis is as
follows:
H4: The more reliable the information from ChatGPT,
the higher the user’s satisfaction.
Timeliness in the context of ChatGPT’s responses
emphasizes the speed at which it delivers information.
ChatGPT’s ability to provide instant and prompt responses
is instrumental in enhancing interaction efficiency and fulfill-
ing user expectations [65]. When users pose questions and
receive accurate responses from ChatGPT promptly, it aids
them in resolving their issues more effectively. Additionally,
the responsive nature of the ChatGPT system facilitates
interactions that lead users to rely on it [66], casting a posi-
tive light on the quality and responsiveness of ChatGPT’s
performance. Rapid and accurate replies are posited to
increase user satisfaction [47]. In the realm of digital commu-
nication, timeliness is highly valued. Users often equate quick
responses with efficiency and effective service, thereby
enhancing their overall satisfaction with the tool. Therefore,
the hypothesis is as follows:
H5: The more timely the information from ChatGPT, the
higher the user’s satisfaction.
Convenience in the utilization of ChatGPT incorporates
factors such as ease of use and accessibility. The system’s
ability to comprehend commands or queries translates into
user-friendly experiences [67]. A simple design that facilitates user–system communication fosters a more comfortable interaction experience. Moreover, the provision of
clear and timely responses helps users avoid confusion and
feel more in command of the interaction. High levels of con-
venience generate positive user experiences and lead to
greater satisfaction. This study posits that convenience plays
a crucial role in user satisfaction. Human–computer interac-
tion research indicates that tools that are easy to use and
accessible significantly enhance user experience and satisfac-
tion. This implies that the more user-friendly and accessible
ChatGPT is, the higher the user satisfaction. Therefore, the
hypothesis is as follows:
H6: The greater the convenience of generating information
with ChatGPT, the higher the user satisfaction.
The formatting of information delivered by ChatGPT,
including its clarity, organization, and presentation, is pos-
ited to influence user satisfaction. Responses that are well-
structured, employing clear paragraphing, bullet points,
and other visual aids, facilitate users’ ability to quickly locate
and comprehend information [68]. Information that is
neatly organized acts as a guide for users to grasp the context
and inspect specific sections of the response in detail.
Research suggests that well-structured information, pre-
sented in a clear and coherent manner, enhances user under-
standing and satisfaction [69]. The correlation here is that
user-friendly and well-organized response formats are likely
to positively impact user satisfaction levels, as they enable
easier comprehension and interaction. Therefore, this study
proposes the following hypothesis:
H7: The easier the information format from ChatGPT is
to understand, the higher the user satisfaction.
6. Methods
6.1. Measures. This study adopts the UIS concept developed
by Ives, Olson, and Baroudi [19] as the foundational frame-
work for developing its conceptual research structure. How-
ever, for the seven UIS measures, this research draws on
previous studies, making modifications and adjustments
specific to this study’s context. This approach is necessitated
by the absence of measures directly evaluating the UIS of
ChatGPT from previous studies. Consequently, the study
employs 21 questions in total, with four satisfaction items
modified from Bhattacherjee [70]. For completeness, timeliness, and format, six items are adapted and refined from
Laumer, Maier, and Weitzel [36]. Accuracy is adapted from
Foroughi et al. [11] to include three items. Precision, reliability, and convenience are adapted from Ives, Olson, and
Baroudi [19], resulting in eight items. This methodical
adaptation ensures the measures are suitably aligned with the
unique characteristics and user interactions of ChatGPT,
providing a robust and relevant evaluation of user satisfac-
tion. Through this comprehensive approach, the study is
aimed at offering a detailed and broader understanding of
user satisfaction with ChatGPT, contributing significantly
to the field of user experience research in AI-powered sys-
tems. Table 2 presents the measurement items that were
used in this study.
6.2. Sampling Technique and Data Collection Procedures.
This research study employs a survey method designed to
gather insights from a group of research participants. Data
is obtained through a structured questionnaire, which is
divided into three parts. The initial part requests the
participants’ consent and explains the sampling technique utilized
in the study. Participants need to meet two criteria: they
should have used ChatGPT for more than 6 months and
be associated with higher education, either as faculty mem-
bers, graduate or undergraduate students, or postdoctoral
researchers. This ensures that the study focuses on a
knowledgeable population. The second part gathers demo-
graphic details such as gender, age, university type, and
occupation. The final part includes questions specifically
tailored for this research project. This systematic approach
enables the collection of pertinent data, ensuring that the
study’s findings are both insightful and representative of
how ChatGPT users in educational settings perceive their
experiences with the platform. Crafting this methodology
is crucial for capturing how diverse demographic groups
utilize and view ChatGPT differently. By emphasizing
structured data collection, this study is aimed at offering
a nuanced understanding of ChatGPT’s influence in educational environments, enriching the depth and scope of
the research results.
Over a 3-month data collection period from August to
November 2023, this study successfully gathered 508
responses. Within the collected data, males comprised 73%
of the users. In terms of age demographics, the 21–35-
year-old bracket was predominant in ChatGPT usage within
the higher education landscape, accounting for 75% of
respondents. Regarding the type of university, the distribu-
tion appears to be 57% from public universities, with the
remaining 43% coming from private institutions. In examin-
ing the occupational demographics, undergraduate students
constituted 39% of participants, graduate students (both
master’s and doctoral candidates) made up 32%, faculty
members represented 15%, and the remainder at 14% con-
sisted of postdoctoral researchers.
6.3. Data Analysis Technique. This research involves several
stages of data analysis. Initially, to ensure the validity and
reliability of the instruments, the study conducts a common
method variance (CMV) test. This assesses the consistency
and validity of the instruments used in the research, employ-
ing Harman’s single-factor method tested through SPSS
Version 26 [71]. Secondly, the study applies validity, reliabil-
ity, and hypothesis analyses using the structural equation
modeling approach with Smart-PLS 4.0 software. This phase
includes testing convergent validity to assess outer loadings
(OLs), composite reliability (CR), average variance extracted
(AVE), and variance inflation factor (VIF) [72]. Further-
more, the research model evaluation involves testing dis-
criminant validity through the Fornell–Larcker criterion
[73], heterotrait–monotrait ratio (HTMT) [74], and cross-
loading matrix [72], along with R-square [75] and goodness
of fit (GOF) evaluation [72]. Through these comprehensive
methods, the study seeks to provide an in-depth understand-
ing of how each UIS factor contributes to user satisfaction,
thereby offering valuable insights into user experience with
ChatGPT.
Table 2: Measurement items.

Satisfaction (Bhattacherjee [70]):
How do you feel about your overall experience of retrieving information from ChatGPT:
- Very dissatisfied–very satisfied
- Very displeased–very pleased
- Very frustrated–very contented
- Absolutely terrible–absolutely delighted

Completeness (Laumer, Maier, and Weitzel [36]):
- ChatGPT provides me with complete information
- ChatGPT produces comprehensive information

Accuracy (Foroughi et al. [11]):
- Information from ChatGPT is correct
- Information from ChatGPT is reliable
- Information from ChatGPT is accurate

Precision (Ives, Olson, and Baroudi [19]):
- The responses from ChatGPT are generally specific and directly address my questions
- I rarely receive vague or ambiguous information from ChatGPT
- I find ChatGPT’s responses to be consistently to the point

Reliability (Ives, Olson, and Baroudi [19]):
- ChatGPT rarely fails to deliver the information I can rely on
- I trust ChatGPT as a dependable source of information

Timeliness (Laumer, Maier, and Weitzel [36]):
- The information provided by ChatGPT is up-to-date
- The information provided by ChatGPT is received in a timely manner

Convenience (Ives, Olson, and Baroudi [19]):
- Accessing ChatGPT is convenient and user-friendly
- I find it easy to access ChatGPT on my preferred devices
- I experience no significant challenges in accessing ChatGPT

Format (Laumer, Maier, and Weitzel [36]):
- The format in which ChatGPT presents information is clear and easy to understand
- I find ChatGPT’s information presentation format user-friendly
7. Results
7.1. CMV. Before proceeding with further data analysis, a
CMV test is conducted. The method employed is Harman’s
single factor, where all measures are factored into a single
dependent variable. If the total variance explained is below
50%, CMV is not considered a concern [71]. Utilizing SPSS
for the CMV test, a result of 13.6% was obtained, which is
well below the 50% threshold. Consequently, it is concluded
that CMV does not pose a concern in this study. This find-
ing establishes a strong foundation for the credibility of the
subsequent analysis, ensuring that the data interpretation
and results are robust and reliable. The low CMV percentage
enhances the validity of the study, reinforcing the integrity
of the research findings and their implications.
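For readers without SPSS, Harman’s single-factor test can be approximated by factoring all measurement items together and inspecting the share of variance the first unrotated component explains. The following is a minimal sketch with simulated Likert responses standing in for the survey data; only the 50% cutoff follows [71], and everything else is illustrative:

```python
import numpy as np

def harman_single_factor(items: np.ndarray) -> float:
    """Share of total variance explained by the first unrotated
    principal component of the item correlation matrix."""
    corr = np.corrcoef(items, rowvar=False)  # respondents in rows, items in columns
    eigvals = np.linalg.eigvalsh(corr)       # eigenvalues in ascending order
    return eigvals[-1] / eigvals.sum()       # largest eigenvalue / total variance

# Simulated stand-in for the survey data: 508 respondents, 21 Likert items
rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(508, 21)).astype(float)
share = harman_single_factor(responses)
print(f"Variance explained by a single factor: {share:.1%}")
```

When the resulting share falls below 50%, common method variance is conventionally judged unproblematic, as in the 13.6% obtained in this study.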
7.2. Validity and Reliability Assessment. Table 3 displays the
results for convergent validity and reliability. The findings
indicate that all statistical criteria suggested by Hair et al.
[72], including OL, Cronbach’s alpha (CA), CR, VIF, and AVE, have been
met. Consequently, it can be concluded that convergent
validity and reliability are not concerns in this study.
Subsequent testing focused on assessing discriminant
validity to evaluate the efficacy of the model developed in
this study. Three methods were employed for this purpose,
as illustrated in Tables 4 and 5. Table 4 indicates that the dis-
criminant validity testing, using the Fornell–Larcker Crite-
rion and the HTMT method, shows that all diagonal
and bolded values are greater than the intervariable correla-
tion values. This suggests no concerns regarding discrimi-
nant validity [72]. Additionally, the HTMT values obtained
are all below the threshold of 0.90, leading to the conclusion
that discriminant validity is not a concern. These results
confirm the distinctiveness of the constructs within the
model, ensuring that each measure captures a unique aspect
of the phenomenon under study, which is crucial for the
overall validity of the research findings.
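The Fornell–Larcker logic applied above can be expressed compactly: a construct passes when the square root of its AVE exceeds its absolute correlation with every other construct. The sketch below uses the accuracy construct’s values from Tables 3 and 4; the helper function is our own illustration, not part of Smart-PLS:

```python
import math

def fornell_larcker_ok(ave: float, correlations: list[float]) -> bool:
    """True if sqrt(AVE) exceeds every inter-construct correlation."""
    return all(math.sqrt(ave) > abs(r) for r in correlations)

# Accuracy: AVE = 0.636 (Table 3); correlations with the other seven
# constructs from the lower triangle of Table 4
accuracy_corrs = [0.251, 0.652, 0.564, 0.685, 0.557, 0.551, 0.601]
print(fornell_larcker_ok(0.636, accuracy_corrs))  # sqrt(0.636) ≈ 0.798 exceeds all
```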
Next, the discriminant validity was tested using the
cross-loading matrix method, as presented in Table 5. The
results indicate that the correlation of each construct with
its respective measurement items is greater than with cross
items. This finding suggests that discriminant validity is
not a concern in this study. The clear differentiation of the
constructs further strengthens the reliability and accuracy
of the measurement model, ensuring that each construct is
distinctively and accurately captured within the research
framework [72].
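The cross-loading criterion reduces to a per-item check: the loading on the item’s own construct must be the largest in its row of the matrix. A sketch using item ACR.1’s row from Table 5 (the function name is ours):

```python
def loads_highest_on_own(row: list[float], own_idx: int) -> bool:
    """True if the item's loading on its own construct is the
    largest absolute loading in its row of the cross-loading matrix."""
    return abs(row[own_idx]) == max(abs(v) for v in row)

# Item ACR.1 (Table 5): loadings on the eight constructs,
# with its own construct (accuracy) at index 0
acr1 = [0.795, 0.253, 0.519, 0.468, 0.511, 0.499, 0.427, 0.514]
print(loads_highest_on_own(acr1, 0))  # 0.795 is the largest loading
```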
The model testing then proceeded with an evaluation of
model fit and R-square. The results indicated an SRMR value
of 0.070, a chi-square of 1631.471, and an NFI of 0.783, all of
which fall within the threshold limits recommended by Hair
et al. [72]. Subsequently, the R-square value was assessed to
determine the power of independent constructs in predicting
the dependent constructs. An R-square value of 0.581 was
obtained, indicating that the seven ChatGPT UIS measures
can account for 58.1% of the variance in satisfaction. This
meets the criteria set by Falk and Miller [75], as the R-square
value is above the 0.10 benchmark. These findings not only
demonstrate the model’s good fit but also underline the considerable explanatory power of the identified factors in understanding user satisfaction with ChatGPT.
In this study, the GOF metric was calculated to evaluate
the reliability of the constructed model. The GOF is determined by taking the square root of the product of the average
AVE and the average R-square (R²), as indicated in Equation
(1). Tenenhaus et al. [76] and Wetzels, Odekerken-Schröder,
and Van Oppen [77] suggest that a GOF value above 0.36
indicates a high fit, between 0.25 and 0.36 signals a moderate
fit, and between 0.10 and 0.25 points to a low fit. The com-
puted GOF for this study is 0.635, which exceeds the thresh-
old for a high fit, demonstrating the robustness and reliability
of the model. This method was also applied in Huang, Silalahi, and Eunike’s [78] research, which likewise confirmed
model robustness using the GOF approach.
GOF = √(average AVE × average R²) = √(0.693 × 0.581) = 0.635    (1)
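As a quick sanity check on Equation (1), the arithmetic can be reproduced directly from the averages reported in the text:

```python
import math

ave_mean = 0.693  # average AVE across the eight constructs (Table 3)
r2_mean = 0.581   # average R-square of the endogenous construct
gof = math.sqrt(ave_mean * r2_mean)
print(round(gof, 3))  # 0.635, above the 0.36 threshold for a high fit
```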
7.3. Hypothesis Testing. Table 6 and Figure 2 provide a summary of the hypothesis testing. The analysis revealed that two
UIS measures, accuracy (β = 0.070; T = 1.247) and reliability
(β = 0.016; T = 0.400), were not significant predictors of satisfaction, leading to the rejection of Hypotheses H2 and H4.
On the other hand, it was found that completeness
(β = 0.096; T = 2.863), convenience (β = 0.126; T = 2.192),
format (β = 0.323; T = 5.115), precision (β = 0.245; T = 3.681),
and timeliness (β = 0.138; T = 2.396) had a significant impact on satisfaction, supporting Hypotheses H1, H3,
H5, H6, and H7. However, a closer examination indicates
that format and precision have a more substantial impact
Table 3: Convergent validity and reliability.
Constructs OL CA CR VIF AVE
Accuracy 0.736–0.858 0.713 0.724 1.288–1.597 0.636
Completeness 0.735–0.964 0.790 0.936 1.385–1.385 0.734
Convenience 0.820–0.841 0.771 0.781 1.444–1.770 0.684
Format 0.870–0.871 0.781 0.681 1.314–1.363 0.758
Precision 0.792–0.850 0.736 0.655 1.044–1.317 0.526
Reliability 0.869–0.908 0.735 0.749 1.510–1.510 0.790
Satisfaction 0.797–0.832 0.822 0.822 1.678–1.913 0.652
Timeliness 0.860–0.889 0.709 0.699 1.392–1.392 0.765
on satisfaction compared to the others. These results high-
light the varying degrees of influence that different UIS mea-
sures have on user satisfaction, with some factors playing a
more critical role than others in shaping user experiences
with ChatGPT.
8. Discussion
This research represents a pioneering effort to apply the UIS
framework, grounded in IQT, to the context of generative
AI, specifically ChatGPT. This study distinguishes itself by
being among the first to explore how traditional UIS con-
structs, such as those proposed by Ives, Olson, and Baroudi
[19], are evolving in the context of AI technology. Our find-
ings contribute to a shift in the determinants of user satisfac-
tion, revealing that factors like completeness, convenience,
format, precision, and timeliness are now more indicative
of user satisfaction in the context of ChatGPT rather than
the traditional emphasis on accuracy and reliability.
Completeness emerged as a critical factor in determining
user satisfaction, suggesting that users value information
that fully addresses their queries. In higher education, this
is particularly relevant, as students and educators often seek
comprehensive information that provides direct answers
and includes additional context and related details. For
instance, when conducting research or preparing for lectures,
Table 4: Fornell–Larcker criterion and HTMT.
Constructs (1) (2) (3) (4) (5) (6) (7) (8)
Accuracy (1) 0.798 0.352 0.876 0.808 0.873 0.776 0.718 0.858
Completeness (2) 0.251 0.857 0.277 0.449 0.655 0.374 0.176 0.282
Convenience (3) 0.652 0.210 0.827 0.714 0.644 0.623 0.683 0.788
Format (4) 0.564 0.299 0.521 0.871 0.875 0.598 0.784 0.691
Precision (5) 0.685 0.315 0.664 0.525 0.725 0.858 0.827 0.895
Reliability (6) 0.557 0.284 0.476 0.423 0.558 0.889 0.537 0.249
Satisfaction (7) 0.551 0.146 0.555 0.586 0.586 0.420 0.808 0.682
Timeliness (8) 0.601 0.215 0.584 0.475 0.577 0.661 0.517 0.875
Note: The diagonally bolded values were the square root of AVE, which was used for the Fornell–Larcker criterion. The italic values indicate the HTMT with
the threshold <0.90.
Table 5: Cross-loading matrix.
Items/constructs ACC CMP CVC FMT PRR RLB STS TML
ACR.1 0.795 0.253 0.519 0.468 0.511 0.499 0.427 0.514
ACR.2 0.858 0.179 0.523 0.482 0.559 0.411 0.488 0.479
ACR.3 0.736 0.171 0.524 0.396 0.574 0.431 0.400 0.450
CMP.1 0.239 0.964 0.207 0.268 0.313 0.288 0.159 0.224
CMP.2 0.191 0.735 0.141 0.270 0.210 0.171 0.063 0.114
CNV.1 0.496 0.201 0.820 0.407 0.480 0.351 0.364 0.412
CNV.2 0.533 0.173 0.841 0.421 0.600 0.419 0.488 0.480
CNV.3 0.578 0.154 0.820 0.458 0.551 0.402 0.500 0.537
FMR.1 0.489 0.307 0.421 0.870 0.419 0.371 0.510 0.387
FMR.2 0.492 0.213 0.485 0.871 0.494 0.365 0.511 0.440
PRC.1 0.576 0.203 0.568 0.418 0.850 0.457 0.515 0.486
PRC.2 0.577 0.203 0.569 0.439 0.838 0.480 0.491 0.503
PRC.3 0.277 0.473 0.218 0.285 0.792 0.235 0.182 0.191
RLB.1 0.498 0.287 0.398 0.371 0.445 0.869 0.340 0.557
RLB.2 0.494 0.224 0.446 0.381 0.540 0.908 0.402 0.615
STS.1 0.438 0.093 0.441 0.489 0.456 0.295 0.798 0.364
STS.2 0.444 0.082 0.481 0.445 0.489 0.354 0.832 0.444
STS.3 0.459 0.151 0.421 0.493 0.446 0.361 0.797 0.410
STS.4 0.439 0.147 0.448 0.467 0.500 0.345 0.802 0.450
TML.1 0.525 0.202 0.523 0.413 0.502 0.591 0.427 0.860
TML.2 0.528 0.176 0.499 0.418 0.509 0.568 0.475 0.889
Note: The bolded values represent the construct’s outer loadings.
educators need in-depth information that helps them explore
topics thoroughly and present well-rounded arguments or
explanations to students. This comprehensive approach,
which reduces the need for further searches, fosters a more
holistic understanding of academic topics and aligns with
findings by Gupta, Motlagh, and Rhyner [58], who noted
the importance of depth in information retrieval for educa-
tional purposes. In this sense, ChatGPT’s ability to provide
comprehensive responses can significantly enhance the qual-
ity of academic research and teaching [79].
Convenience is a key factor in user satisfaction, reflecting
a preference for easily accessible information. The ability to
use ChatGPT anytime and anywhere significantly boosts its
appeal in academic settings. This is particularly valuable for
students who need quick answers or additional resources
while studying without being limited by library hours or the
availability of tutors. Educators also benefit from this conve-
nience, as they can quickly access supporting information or
explore new teaching methods. The user-friendly design of
ChatGPT and its accessibility align with Pan, Cui, and Mou
[80], who emphasized the importance of easy access in user
interactions with AI tools. Integrating AI into academic rou-
tines indicates that users increasingly value the convenience
and accessibility these technologies offer, reflecting a shift
towards more flexible, on-demand learning resources.
The format and precision of information provided by
ChatGPT were also significant factors influencing user satis-
faction. Higher education users, in particular, appreciate
well-structured, clear, and organized information, which
aids in better comprehension and usability. Precision
ensures that the information is directly relevant to the user’s
query, enhancing the perceived reliability of ChatGPT.
These aspects are crucial in academic settings, where stu-
dents and educators require precise, accurate, and well-
structured information for their work. For instance, in aca-
demic writing or preparing detailed lecture notes, the clarity
and organization of information can significantly impact the
effectiveness of the material. This finding is consistent with
research by Reinecke and Bernstein [61], who emphasized
the importance of clear and precise information presenta-
tion in user satisfaction. By providing well-organized and
targeted information, ChatGPT can support academic activ-
ities by making it easier for users to extract the necessary
information quickly and efficiently.
Timeliness emerged as a crucial factor in user satisfac-
tion, with users appreciating quick responses that enable
Table 6: Summary of hypothesis testing.

Hypothesis                         β         T      Bootstrapping CI 97.5% (lower, upper)   Decision
H1. Completeness → satisfaction    0.096**   2.863  0.168, 0.040   Accept
H2. Accuracy → satisfaction        0.070     1.247  0.037, 0.182   Reject
H3. Precision → satisfaction       0.245***  3.681  0.117, 0.378   Accept
H4. Reliability → satisfaction     0.016     0.400  0.118, 0.086   Reject
H5. Timeliness → satisfaction      0.138**   2.396  0.025, 0.249   Accept
H6. Convenience → satisfaction     0.126**   2.192  0.014, 0.239   Accept
H7. Format → satisfaction          0.323***  5.115  0.202, 0.444   Accept

Note: ** and *** denote significance at p < 0.01 and p < 0.001, respectively.
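The accept/reject decisions in Table 6 follow the usual bootstrapping rule: a path is supported when its T statistic exceeds the two-tailed critical value (1.96 at the 5% level). A sketch applying that cutoff to the reported T values (the dictionary and helper function are illustrative, not Smart-PLS output):

```python
def path_significant(t_value: float, critical: float = 1.96) -> bool:
    """Two-tailed significance check on a bootstrapped T statistic."""
    return abs(t_value) >= critical

# T values as reported in Table 6
t_values = {
    "completeness": 2.863, "accuracy": 1.247, "precision": 3.681,
    "reliability": 0.400, "timeliness": 2.396, "convenience": 2.192,
    "format": 5.115,
}
supported = {path: path_significant(t) for path, t in t_values.items()}
print(supported)  # only accuracy and reliability fail the 1.96 cutoff
```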
Figure 2: Summary of hypothesis testing for factors affecting ChatGPT’s user information satisfaction. Path coefficients to user information satisfaction: H1 completeness β = 0.096**; H2 accuracy β = 0.070 ns; H3 precision β = 0.245***; H4 reliability β = 0.016 ns; H5 timeliness β = 0.138**; H6 convenience β = 0.126**; H7 format β = 0.323***; R² = 0.581.
them to continue their academic work without delays.
ChatGPT’s rapid response times make it especially valuable
in educational contexts, where meeting tight deadlines and
accessing timely information are essential. This is particu-
larly beneficial for students facing looming deadlines on
assignments or projects and for educators needing to quickly
prepare course materials or address student questions. This
observation aligns with recent research highlighting the
importance of timely information in sustaining user engage-
ment with technology [81]. In this light, ChatGPT’s ability
to provide prompt responses enhances the efficiency of both
teaching and learning, fostering a more robust and respon-
sive educational environment.
The findings suggest that in higher education, usability
and accessibility of AI tools like ChatGPT are increasingly
prioritized over accuracy and reliability. This shift may be
driven by users’ growing familiarity with AI technologies
and their rising expectations, as observed by Ameen et al.
[82], Ma and Huo [83], and Bubaš, Čižmešija, and Kovačić
[84]. However, while ease of use seems to be taking prece-
dence, the importance of accuracy and reliability cannot be
understated, especially in academic contexts where these fac-
tors are crucial for maintaining the integrity of research and
teaching. This trend may reflect a broader change in how
academic users interact with technology, focusing on quick
and convenient access to information. Despite this, it is
essential to ensure that the pursuit of convenience does not
come at the cost of accuracy, as upholding high IQ standards
is vital for academic integrity.
The study’s results also imply that ethical issues should
be a priority when using AI tools and applications like
ChatGPT in education. It is important that ChatGPT does
not provide incorrect information that leads to flawed academic submissions. For instance, when students use ChatGPT
for research purposes, the tool should limit the sources it
offers to relevant and accurate ones, so that students do not
end up producing papers built on fabricated material.
Addressing these ethical problems is crucial to preventing the
misuse of AI and, for example, the encouragement of
unethical actions. Both educators and software developers need
to ensure that applications such as ChatGPT are used in
learning institutions in a manner that does not compromise
institutional integrity. This approach will safeguard the
quality and integrity of academic work and the learning
process as a whole.
Additionally, our study emphasizes the importance of tai-
loring the integrated UIS and IQT framework to account for
factors that affect user satisfaction in the context of generative
AI. Enhancing elements like convenience, presentation, and
timeliness of information can help system designers better
align with user needs and preferences. This research makes a
valuable contribution to the existing literature by identifying
key factors that impact user satisfaction with ChatGPT in edu-
cational settings. These insights can assist educators, software
developers, and researchers in effectively integrating AI tech-
nologies into educational contexts and improving user inter-
actions. Future research should explore these areas further to
refine the UIS framework and adapt it to the rapidly changing
landscape of AI in education.
9. Implication
9.1. Implication for Theory. This research makes a notable
theoretical contribution by applying UIS and IQT to the
context of generative AI, specifically ChatGPT, within
higher education. While traditional UIS models, like those
proposed by Ives, Olson, and Baroudi [19], have historically
focused on accuracy and reliability as key factors in user sat-
isfaction, our findings indicate a shift in importance towards
dimensions such as completeness, convenience, format, pre-
cision, and timeliness in AI-driven environments. This shift
challenges conventional frameworks and highlights the need
to update these theories to better reflect the evolving nature
of modern ISs and AI technologies [85].
Expanding these dimensions broadens the UIS and IQT
framework to better align with the evolving expectations of
users in the digital age. This study builds on and extends
the work of Laumer et al. [85], who highlighted the need
to update traditional UIS measures to account for the com-
plexities of digital interactions. By incorporating these newer
factors, our research not only deepens the theoretical under-
standing of UIS but also offers a more comprehensive view
of user satisfaction, especially relevant for AI integration in
educational contexts. This enhancement of the UIS frame-
work, which now includes elements like convenience and
format, marks a significant step forward in the field, reflect-
ing current trends in user-centered design and AI adoption
in education [25, 30].
This study fills a significant gap in current research by
being among the first to apply UIS and IQT to the use of
generative AI tools like ChatGPT in higher education.
Although previous research has looked at UIS in relation
to various educational technologies (e.g., [30, 34]), the spe-
cific challenges and opportunities posed by AI have not been
thoroughly explored. By examining how higher education
users interact with and perceive the quality of information
from ChatGPT, our study provides fresh insights that con-
tribute to the broader field of educational technology and
user satisfaction. The findings indicate that within the con-
text of AI, users might prioritize factors like usability and
accessibility over traditional concerns such as accuracy and
reliability, which could have important implications for both
future research and practical applications [82, 84].
Finally, this research contributes to the ongoing develop-
ment of UIS and IQT by proposing a more comprehensive
framework that more accurately reflects user interactions
with AI technologies. By incorporating dimensions such as
completeness, convenience, format, precision, and timeli-
ness, the study enhances existing models and provides a
clearer understanding of the factors driving user satisfaction
in modern ISs. This theoretical advancement not only
deepens academic insights but also offers practical guidelines
for the design and implementation of AI systems in educa-
tion, ensuring they meet the evolving needs of users in a dig-
ital world [61, 81].
9.2. Implications for Practice. This research offers practical
insights for optimizing the use of ChatGPT and similar gen-
erative AI tools, particularly by challenging the traditional
emphasis on accuracy and reliability as the primary metrics
of success in ISs. Our findings suggest that organizations
should instead prioritize factors like completeness, conve-
nience, format, precision, and timeliness [36]. For example,
organizations could improve the user experience by imple-
menting customizable interface settings that allow users to
choose how information is presented—whether through
concise summaries, detailed explanations, or even graphical
representations like charts and infographics. Studies have
demonstrated that enabling users to customize the presenta-
tion of information can significantly enhance satisfaction
and engagement, as evidenced by adaptive e-learning systems that tailor content based on user preferences [25, 86].
At the organizational level, prioritizing aspects of IQT in
academic settings can have a significant impact on research,
teaching, and other educational activities. For instance, in
research, ensuring that AI tools like ChatGPT provide com-
plete and precise information can greatly enhance the qual-
ity of literature reviews, data analysis, and the overall
research process. When researchers can depend on AI to
deliver comprehensive, well-organized, and contextually rel-
evant information, they can save time and concentrate more
on critical thinking and analysis, thereby improving the
overall quality of their work [50, 87]. In teaching, educators
can utilize AI tools to present information in formats that
accommodate different learning styles, such as diagrams
and charts for visual learners or summarized bullet points
for auditory learners. This flexibility can result in more effec-
tive teaching strategies, better learning outcomes, and higher
engagement [54, 88].
For individuals, especially those who frequently interact
with AI technologies like ChatGPT, this study underscores
the importance of mastering system features to maximize
their utility. Educational programs could incorporate spe-
cific training on how to effectively use AI tools, with a focus
on navigating and customizing information formats to meet
individual needs. For example, such training could teach
users how to adjust the level of detail in ChatGPT’s
responses or how to apply filters to prioritize the most rele-
vant information. Similar strategies have been successfully
implemented in professional development programs, where
workers are trained to customize their digital tools to
enhance productivity [89]. As users become more skilled
at leveraging these functionalities, they can significantly
improve their efficiency and effectiveness in various tasks,
which is increasingly crucial in the digital environment [90].
Finally, equipping users with the knowledge to fully
utilize AI tools underscores the critical importance of dig-
ital literacy in the modern age. Users who understand and
effectively leverage the advanced features of systems like
ChatGPT can significantly enhance their technological literacy
and efficacy, which is essential for navigating today’s complex
technological landscape [91]. Additionally, addressing user
satisfaction with information presentation formats—such as
ensuring that information is structured in a user-friendly
manner—can further boost the utility and effectiveness of
these tools. For example, incorporating features that allow
users to organize information into visual formats like mind
maps or flowcharts could be particularly beneficial for stu-
dents and professionals in fields that heavily rely on complex
data visualization [61]. This user-centric approach not only
benefits individuals but also supports the broader goal of inte-
grating AI tools into society in ways that maximize their util-
ity, foster growth and innovation, and create more seamless
user experiences.
Moreover, applying IQT in academic contexts is essen-
tial for maintaining the quality and integrity of educational
outputs. For instance, in teaching, the precision and timeli-
ness of information provided by AI tools like ChatGPT can
significantly impact the effectiveness of lesson plans and
the delivery of educational content. Educators who depend
on accurate and up-to-date information are better posi-
tioned to create relevant and current materials for their stu-
dents, thereby enhancing the overall learning experience [24,
83]. In research, the completeness and reliability of informa-
tion obtained from AI can affect the validity of academic
papers and projects. Ensuring that AI tools meet high IQ
standards helps researchers produce work that is both inno-
vative and reliable, thereby contributing to the advancement
of knowledge in their respective fields [84].
10. Conclusion and Limitations
As this study draws to a close, it becomes clear that the
UIS framework requires reexamination in the era of gener-
ative AI, notably in the application of ChatGPT. Our
research challenges the traditionally held belief that accu-
racy and reliability are the paramount indicators of user
satisfaction, proposing instead that factors such as complete-
ness, convenience, format, precision, and timeliness are now
critical. This shift reflects how users’ interactions with AI
interfaces are evolving, necessitating a reformulation of the
UIS framework to better capture these emerging preferences.
By aligning AI system design with these new demands, we
can improve user satisfaction and enhance the overall utility
of AI in educational contexts.
This research provides valuable insights for various
stakeholders. Teachers and schools should prioritize incor-
porating AI technologies such as ChatGPT in a manner that
highlights these satisfaction elements. For instance, offering
guidance on the utilization of ChatGPT for academic pur-
poses can maximize its advantages and address any potential
issues. Decision-makers need to take these aspects into
account when crafting policies and criteria for AI integration
in education, ensuring that the use of AI enriches educa-
tional experiences while upholding academic values and
principles.
Despite its contributions, this study has limitations that
open avenues for future research. Our investigation was con-
strained by its exclusive focus on ChatGPT, which, although
representative of generative AI, may not capture the full
spectrum of user interactions with other AI platforms. Addi-
tionally, the reliance on self-reported measures of satisfac-
tion could introduce response biases, potentially distorting
our understanding of user experiences. Future research
should expand to include a broader range of AI tools and
incorporate objective usage data, providing a more compre-
hensive view of user satisfaction and behavior. Longitudinal
studies could also be beneficial, tracking how user satisfac-
tion evolves over time as individuals become more accus-
tomed to these technologies.
Data Availability Statement
The data presented in this study are available on request from
the corresponding author (andridksilalahi@gmail.com).
Conflicts of Interest
The authors declare no conflicts of interest.
Author Contributions
Conceptualization: C.J.F., I.T.S., and A.D.K.S. Methodology:
A.D.K.S. and I.J.E. Software: A.D.K.S. Validation: S.J. and
D.T.T.P. Formal analysis: A.D.K.S. Investigation: A.D.K.S.
and D.T.T.P. Resources: C.J.F. and I.T.S. Data curation:
A.D.K.S. Writing–original draft preparation: A.D.K.S., I.T.S.,
D.T.T.P., and I.J.E. Writing–review and editing: C.J.F., S.J.,
and I.J.E. Visualization: I.J.E. Supervision: C.J.F. Project
administration: A.D.K.S. Funding acquisition: C.J.F. All
authors have read and agreed to the published version of the
manuscript.
Funding
This work was supported by the National Science and Tech-
nology Council under grant number 113-2637-H-324-003-.
References
[1] B. D. Lund, T. Wang, N. R. Mannuru, B. Nie, S. Shimray, and Z. Wang, “ChatGPT and a new academic reality: artificial intelligence-written research papers and the ethics of the large language models in scholarly publishing,” Journal of the Association for Information Science and Technology, vol. 74, no. 5, pp. 570–581, 2023.
[2] R. Bringula, “What do academics have to say about ChatGPT? A text mining analytics on the discussions regarding ChatGPT on research writing,” in AI and Ethics, pp. 1–13, Springer, 2023.
[3] H. B. Essel, D. Vlachopoulos, A. Tachie-Menson, E. E. Johnson, and P. K. Baah, “The impact of a virtual teaching assistant (chatbot) on students' learning in Ghanaian higher education,” International Journal of Educational Technology in Higher Education, vol. 19, no. 1, pp. 1–19, 2022.
[4] K. Roose, Don't Ban ChatGPT in Schools. Teach with It, International New York Times, 2023.
[5] F. Fui-Hoon Nah, R. Zheng, J. Cai, K. Siau, and L. Chen, “Generative AI and ChatGPT: applications, challenges, and AI-human collaboration,” Journal of Information Technology Case and Application Research, vol. 25, no. 3, pp. 277–304, 2023.
[6] Y. Meron and Y. Tekmen Araci, “Artificial intelligence in design education: evaluating ChatGPT as a virtual colleague for post-graduate course development,” Design Science, vol. 9, article e30, 2023.
[7] Y. K. Dwivedi, N. Kshetri, L. Hughes et al., “Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy,” International Journal of Information Management, vol. 71, article 102642, 2023.
[8] J. Kocoń, I. Cichecki, O. Kaszyca et al., “ChatGPT: jack of all trades, master of none,” Information Fusion, vol. 99, article 101861, 2023.
[9] L. De Angelis, F. Baglivo, G. Arzilli et al., “ChatGPT and the rise of large language models: the new AI-driven infodemic threat in public health,” Frontiers in Public Health, vol. 11, article 1166120, 2023.
[10] G. Deiana, M. Dettori, A. Arghittu, A. Azara, G. Gabutti, and P. Castiglia, “Artificial intelligence and public health: evaluating ChatGPT responses to vaccination myths and misconceptions,” Vaccine, vol. 11, no. 7, p. 1217, 2023.
[11] B. Foroughi, M. G. Senali, M. Iranmanesh et al., “Determinants of intention to use ChatGPT for educational purposes: findings from PLS-SEM and fsQCA,” International Journal of Human–Computer Interaction, vol. 40, no. 17, pp. 4501–4520, 2023.
[12] A. Strzelecki, “To use or not to use ChatGPT in higher education? A study of students' acceptance and use of technology,” Interactive Learning Environments, pp. 1–14, 2023.
[13] Y. H. Chang, A. D. K. Silalahi, and K. Y. Lee, “From uncertainty to tenacity: investigating user strategies and continuance intentions in AI-powered ChatGPT with uncertainty reduction theory,” International Journal of Human–Computer Interaction, pp. 1–19, 2024.
[14] D. Menon and K. Shilpa, “Chatting with ChatGPT: analyzing the factors influencing users' intention to use the open AI's ChatGPT using the UTAUT model,” Heliyon, vol. 9, no. 11, article e20962, 2023.
[15] D. R. E. Cotton, P. A. Cotton, and J. R. Shipway, “Chatting and cheating: ensuring academic integrity in the era of ChatGPT,” Innovations in Education and Teaching International, vol. 61, no. 2, pp. 228–239, 2023.
[16] S. A. Bin-Nashwan, M. Sadallah, and M. Bouteraa, “Use of ChatGPT in academia: academic integrity hangs in the balance,” Technology in Society, vol. 75, article 102370, 2023.
[17] M. Liu, Y. Ren, L. M. Nyagoga, F. Stonier, Z. Wu, and L. Yu, “Future of education in the era of generative artificial intelligence: consensus among Chinese scholars on applications of ChatGPT in schools,” Future in Educational Research, vol. 1, no. 1, pp. 72–101, 2023.
[18] A. N. Ansari, S. Ahmad, and S. M. Bhutta, “Mapping the global evidence around the use of ChatGPT in higher education: a systematic scoping review,” Education and Information Technologies, vol. 29, no. 9, pp. 11281–11321, 2024.
[19] B. Ives, M. H. Olson, and J. J. Baroudi, “The measurement of user information satisfaction,” Communications of the ACM, vol. 26, no. 10, pp. 785–793, 1983.
[20] R. Y. Wang and D. M. Strong, “Beyond accuracy: what data quality means to data consumers,” Journal of Management Information Systems, vol. 12, no. 4, pp. 5–33, 1996.
[21] T. H. Baek and M. Kim, “Is ChatGPT scary good? How user motivations affect creepiness and trust in generative artificial intelligence,” Telematics and Informatics, vol. 83, article 102030, 2023.
[22] C. K. Lo, “What is the impact of ChatGPT on education? A rapid review of the literature,” Education Sciences, vol. 13, no. 4, p. 410, 2023.
[23] P. Rivas and L. Zhao, “Marketing with ChatGPT: navigating the ethical terrain of GPT-based chatbot technology,” AI, vol. 4, no. 2, pp. 375–384, 2023.
[24] C. J. Fu, A. D. K. Silalahi, S. C. Huang, D. T. T. Phuong, I. J. Eunike, and Z. H. Yu, “The (un)knowledgeable, the (un)skilled? Undertaking Chat-GPT users' benefit-risk-coping paradox in higher education focusing on an integrated UTAUT and PMT,” International Journal of Human–Computer Interaction, pp. 1–31, 2024.
[25] X. Wang, T. Huang, W. Zhang, Q. Zeng, and X. Sun, “Is information normalization helpful in online communication? Evidence from online healthcare consultation,” Internet Research, 2024.
[26] J. Kim, J. H. Kim, C. Kim, and J. Park, “Decisions with ChatGPT: reexamining choice overload in ChatGPT recommendations,” Journal of Retailing and Consumer Services, vol. 75, article 103494, 2023.
[27] J. H. Kim, J. Kim, J. Park, C. Kim, J. Jhang, and B. King, “When ChatGPT gives incorrect answers: the impact of inaccurate information by generative AI on tourism decision-making,” Journal of Travel Research, 2023.
[28] F. Melchert, R. Müller, and M. Fischer, “Adapting information quality theory to AI-driven educational platforms: ensuring relevance and satisfaction in learning outcomes,” Education and Information Technologies, vol. 29, no. 1, pp. 45–68, 2024.
[29] X. Wang, Y. Li, and L. Chen, “Information normalization in online healthcare consultations: balancing informational and emotional support,” Journal of Business Research, vol. 136, no. 1, pp. 42–56, 2024.
[30] U. Haryaka, A. Agus, and A. H. Kridalaksana, “User satisfaction model for e-learning using smartphone,” Procedia Computer Science, vol. 116, pp. 373–380, 2017.
[31] J. Ang and S. Koh, “Exploring the relationships between user information satisfaction and job satisfaction,” International Journal of Information Management, vol. 17, no. 3, pp. 169–177, 1997.
[32] N. Au, E. W. Ngai, and T. E. Cheng, “A critical review of end-user information system satisfaction research and a new research framework,” Omega, vol. 30, no. 6, pp. 451–478, 2002.
[33] S. Siritongthaworn and D. Krairit, “Satisfaction in e-learning: the context of supplementary instruction,” Campus-Wide Information Systems, vol. 23, no. 2, pp. 76–91, 2006.
[34] Y. S. Wang, “Assessment of learner satisfaction with asynchronous electronic learning systems,” Information & Management, vol. 41, no. 1, pp. 75–86, 2003.
[35] D. F. Galletta and A. L. Lederer, “Some cautions on the measurement of user information satisfaction,” Decision Sciences, vol. 20, no. 3, pp. 419–434, 1989.
[36] S. Laumer, C. Maier, and T. Weitzel, “Information quality, user satisfaction, and the manifestation of workarounds: a qualitative and quantitative study of enterprise content management system users,” European Journal of Information Systems, vol. 26, no. 4, pp. 333–360, 2017.
[37] B. Bai, R. Law, and I. Wen, “The impact of website quality on customer satisfaction and purchase intentions: evidence from Chinese online visitors,” International Journal of Hospitality Management, vol. 27, no. 3, pp. 391–402, 2008.
[38] P. Shamala, R. Ahmad, A. Zolait, and M. Sedek, “Integrating information quality dimensions into information security risk management (ISRM),” Journal of Information Security and Applications, vol. 36, pp. 1–10, 2017.
[39] M. Golfarelli, D. Maio, and S. Rizzi, “The dimensional fact model: a conceptual model for data warehouses,” International Journal of Cooperative Information Systems, vol. 7, no. 2-3, pp. 215–247, 1998.
[40] C. W. Fisher, I. Chengalur-Smith, and D. P. Ballou, “The impact of experience and time on the use of data quality information in decision making,” Information Systems Research, vol. 14, no. 2, pp. 170–188, 2003.
[41] M. K. Williams, “John Dewey in the 21st century,” Journal of Inquiry and Action in Education, vol. 9, no. 1, 2017.
[42] S. Liu, N. Wang, B. Gao, and M. Gallivan, “To be similar or to be different? The effect of hotel managers' rote response on subsequent reviews,” Tourism Management, vol. 86, article 104346, 2021.
[43] I. Alhassan, D. Sammon, and M. Daly, “Data governance activities: an analysis of the literature,” Journal of Decision Systems, vol. 25, supplement 1, pp. 64–75, 2016.
[44] C. Batini, C. Cappiello, C. Francalanci, and A. Maurino, “Methodologies for data quality assessment and improvement,” ACM Computing Surveys (CSUR), vol. 41, no. 3, pp. 1–52, 2009.
[45] L. Martín, L. Sánchez, J. Lanza, and P. Sotres, “Development and evaluation of artificial intelligence techniques for IoT data quality assessment and curation,” Internet of Things, vol. 22, article 100779, 2023.
[46] S. Fosso Wamba, S. Akter, L. Trinchera, and M. De Bourmont, “Turning information quality into firm performance in the big data economy,” Management Decision, vol. 57, no. 8, pp. 1756–1783, 2019.
[47] S. Petter and A. Fruhling, “Evaluating the success of an emergency response medical information system,” International Journal of Medical Informatics, vol. 80, no. 7, pp. 480–489, 2011.
[48] W. H. Tsai, P. L. Lee, Y. S. Shen, and H. L. Lin, “A comprehensive study of the relationship between enterprise resource planning selection criteria and enterprise resource planning system success,” Information & Management, vol. 49, no. 1, pp. 36–46, 2012.
[49] B. H. Wixom and P. A. Todd, “A theoretical integration of user satisfaction and technology acceptance,” Information Systems Research, vol. 16, no. 1, pp. 85–102, 2005.
[50] H. H. Chang, P. H. Hsieh, and C. S. Fu, “The mediating role of sense of virtual community,” Online Information Review, vol. 40, no. 7, pp. 882–899, 2016.
[51] W. J. Kettinger and C. C. Lee, “Perceived service quality and user satisfaction with the information services function,” Decision Sciences, vol. 25, no. 5-6, pp. 737–766, 1994.
[52] T. Chi, “Mobile commerce website success: antecedents of consumer satisfaction and purchase intention,” Journal of Internet Commerce, vol. 17, no. 3, pp. 189–215, 2018.
[53] J. Xu, I. Benbasat, and R. T. Cenfetelli, “Integrating service quality with system and information quality: an empirical test in the e-service context,” MIS Quarterly, vol. 37, no. 3, pp. 777–794, 2013.
[54] K. Al-Qeisi, C. Dennis, E. Alamanos, and C. Jayawardhena, “Website design quality and usage behavior: unified theory of acceptance and use of technology,” Journal of Business Research, vol. 67, no. 11, pp. 2282–2290, 2014.
[55] N. Urbach, P. Drews, and J. Ross, “Digital business transformation and the changing role of the IT function,” MIS Quarterly Executive, vol. 16, no. 2, pp. 1–4, 2017.
[56] S. Yoon and M. Kim, “A study on the improvement direction of artificial intelligence speakers applying DeLone and McLean's information system success model,” Human Behavior and Emerging Technologies, vol. 2023, no. 1, Article ID 2683458, 2023.
[57] Z. Huang and M. Benyoucef, “The effects of social commerce design on consumer purchase decision-making: an empirical study,” Electronic Commerce Research and Applications, vol. 25, pp. 40–58, 2017.
[58] S. Gupta, M. Motlagh, and J. Rhyner, “The digitalization sustainability matrix: a participatory research tool for investigating digitainability,” Sustainability, vol. 12, no. 21, p. 9283, 2020.
[59] C. M. K. Cheung and M. K. O. Lee, “User satisfaction with an internet-based portal: an asymmetric and nonlinear approach,” Journal of the American Society for Information Science and Technology, vol. 60, no. 1, pp. 111–122, 2009.
[60] A. Saka, R. Taiwo, N. Saka et al., “GPT models in construction industry: opportunities, limitations, and a use case validation,” Developments in the Built Environment, vol. 17, article 100300, 2024.
[61] K. Reinecke and A. Bernstein, “Knowing what a user likes: a design science approach to interfaces that automatically adapt to culture,” MIS Quarterly, vol. 37, no. 2, pp. 427–453, 2013.
[62] K. I. Roumeliotis, N. D. Tselikas, and D. K. Nasiopoulos, “LLMs in e-commerce: a comparative analysis of GPT and LLaMA models in product review evaluation,” Natural Language Processing Journal, vol. 6, article 100056, 2024.
[63] I. A. Wong, Q. L. Lian, and D. Sun, “Autonomous travel decision-making: an early glimpse into ChatGPT and generative AI,” Journal of Hospitality and Tourism Management, vol. 56, pp. 253–263, 2023.
[64] Y. Chen, F. M. Zahedi, A. Abbasi, and D. Dobolyi, “Trust calibration of automated security IT artifacts: a multi-domain study of phishing-website detection tools,” Information & Management, vol. 58, no. 1, article 103394, 2021.
[65] B. Niu and G. F. Nkoulou Mvondo, “I am ChatGPT, the ultimate AI chatbot! Investigating the determinants of users' loyalty and ethical usage concerns of ChatGPT,” Journal of Retailing and Consumer Services, vol. 76, article 103562, 2024.
[66] D. Akiba and M. C. Fraboni, “AI-supported academic advising: exploring ChatGPT's current state and future potential toward student empowerment,” Education Sciences, vol. 13, no. 9, p. 885, 2023.
[67] N. Saif, S. U. Khan, I. Shaheen, A. ALotaibi, M. M. Alnfiai, and M. Arif, “Chat-GPT; validating technology acceptance model (TAM) in education sector via ubiquitous learning mechanism,” Computers in Human Behavior, vol. 154, article 108097, 2024.
[68] J. Jin and M. Kim, “GPT-empowered personalized eLearning system for programming languages,” Applied Sciences, vol. 13, no. 23, p. 12773, 2023.
[69] S. Park, H. Zo, A. P. Ciganek, and G. G. Lim, “Examining success factors in the adoption of digital object identifier systems,” Electronic Commerce Research and Applications, vol. 10, no. 6, pp. 626–636, 2011.
[70] A. Bhattacherjee, “Understanding information systems continuance: an expectation-confirmation model,” MIS Quarterly, vol. 25, no. 3, pp. 351–370, 2001.
[71] H. Baumgartner, B. Weijters, and R. Pieters, “The biasing effect of common method variance: some clarifications,” Journal of the Academy of Marketing Science, vol. 49, no. 2, pp. 221–235, 2021.
[72] J. Hair, C. L. Hollingsworth, A. B. Randolph, and A. Y. L. Chong, “An updated and expanded assessment of PLS-SEM in information systems research,” Industrial Management & Data Systems, vol. 117, no. 3, pp. 442–458, 2017.
[73] C. Fornell and D. F. Larcker, “Evaluating structural equation models with unobservable variables and measurement error,” Journal of Marketing Research, vol. 18, no. 1, pp. 39–50, 1981.
[74] J. Henseler, C. M. Ringle, and M. Sarstedt, “A new criterion for assessing discriminant validity in variance-based structural equation modeling,” Journal of the Academy of Marketing Science, vol. 43, pp. 115–135, 2015.
[75] R. F. Falk and N. B. Miller, A Primer for Soft Modeling, University of Akron Press, 1992.
[76] M. Tenenhaus, V. Esposito Vinzi, Y.-M. Chatelin, and C. Lauro, “PLS path modeling,” Computational Statistics & Data Analysis, vol. 48, no. 1, pp. 159–205, 2005.
[77] M. Wetzels, G. Odekerken-Schröder, and C. Van Oppen, “Using PLS path modeling for assessing hierarchical construct models: guidelines and empirical illustration,” MIS Quarterly, vol. 33, no. 1, pp. 177–195, 2009.
[78] S. C. Huang, A. D. K. Silalahi, and I. J. Eunike, “Exploration of moderated, mediated, and configurational outcomes of tourism-related content (TRC) on TikTok in predicting enjoyment and behavioral intentions,” Human Behavior and Emerging Technologies, vol. 2024, no. 1, Article ID 2764759, 2024.
[79] C. J. Fu, A. D. K. Silalahi, I. T. Shih, D. T. T. Phuong, I. J. Eunike, and S. Jargalsaikhan, “To satisfy or clarify: enhancing user information satisfaction with AI-powered ChatGPT,” Engineering Proceedings, vol. 74, no. 1, 2024.
[80] S. Pan, J. Cui, and Y. Mou, “Desirable or distasteful? Exploring uncertainty in human-chatbot relationships,” International Journal of Human–Computer Interaction, pp. 1–11, 2023.
[81] T. M. Brill, L. Munoz, and R. J. Miller, “Siri, Alexa, and other digital assistants: a study of customer satisfaction with artificial intelligence applications,” in The Role of Smart Technologies in Decision Making, pp. 35–70, Routledge, 2022.
[82] N. Ameen, A. Tarhini, A. Reppel, and A. Anand, “Customer experiences in the age of artificial intelligence,” Computers in Human Behavior, vol. 114, article 106548, 2021.
[83] X. Ma and Y. Huo, “Are users willing to embrace ChatGPT? Exploring the factors on the acceptance of chatbots from the perspective of AIDUA framework,” Technology in Society, vol. 75, article 102362, 2023.
[84] G. Bubaš, A. Čižmešija, and A. Kovačić, “Development of an assessment scale for measurement of usability and user experience characteristics of Bing Chat conversational AI,” Future Internet, vol. 16, no. 1, p. 4, 2024.
[85] S. Laumer, A. Eckhardt, and N. Trunk, “Do as your parents say?—analyzing IT adoption influencing factors for full and under age applicants,” Information Systems Frontiers, vol. 12, no. 2, pp. 169–183, 2010.
[86] F. Johnson, J. Rowley, and L. Sbaffi, “Exploring information interactions in the context of Google,” Journal of the Association for Information Science and Technology, vol. 67, no. 4, pp. 824–840, 2016.
[87] K. Williams, G. Berman, and S. Michalska, “Investigating hybridity in artificial intelligence research,” Big Data & Society, vol. 10, no. 2, article 20539517231180577, 2023.
[88] J. Anderson, L. Rainie, and A. Luchsinger, “Artificial intelligence and the future of humans,” Pew Research Center, vol. 10, no. 12, 2018.
[89] Y. Lu, “Artificial intelligence: a survey on evolution, models, applications and future trends,” Journal of Management Analytics, vol. 6, no. 1, pp. 1–29, 2019.
[90] R. C. Davis, “Internet connection: AI and libraries: supporting machine learning work,” Behavioral & Social Sciences Librarian, vol. 36, no. 3, pp. 109–112, 2017.
[91] F. Martin and J. Ertzberger, “Effects of reflection type in the here and now mobile learning environment,” British Journal of Educational Technology, vol. 47, no. 5, pp. 932–944, 2016.