Wiley, Human Behavior and Emerging Technologies, Volume 2024, Article ID 8114315, https://doi.org/10.1155/2024/8114315
Research Article
Assessing ChatGPT's Information Quality Through the Lens of
User Information Satisfaction and Information Quality Theory in
Higher Education: A Theoretical Framework
Chung-Jen Fu,¹ Andri Dayarana K. Silalahi,² I-Tung Shih,¹ Do Thi Thanh Phuong,³ Ixora Javanisa Eunike,¹ and Shinetsetseg Jargalsaikhan¹

¹Department of Business Administration, College of Management, Chaoyang University of Technology, Taichung, Taiwan
²Department of Marketing and Logistics Management, College of Management, Chaoyang University of Technology, Taichung, Taiwan
³Department of Distribution Management, College of Management, National Chin-Yi University of Technology, Taiping, Taiwan
Correspondence should be addressed to Andri Dayarana K. Silalahi; andridksilalahi@gmail.com
Received 2 March 2024; Revised 26 August 2024; Accepted 5 September 2024
Academic Editor: Puspa Setia Pratiwi
Copyright © 2024 Chung-Jen Fu et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Incorporating AI tools like ChatGPT into higher education has been beneficial, yet the extent of user satisfaction with the quality of information provided by these tools, known as user information satisfaction (UIS) and information quality (IQ) theory, remains underexplored. This study introduces a UIS model specifically designed for ChatGPT's application in the educational sector based on multidimensions of IQ theory. Drawing from established UIS and IQ theory, we crafted a model centered around seven essential factors that influence the effective use of ChatGPT, aiming to guide educators and learners in overcoming common challenges such as plagiarism and ensuring the ethical use of AI. Data was collected from Indonesian university participants (N = 508) and analyzed using structural equation modeling with Smart-PLS 4.0. The results reveal that completeness, precision, timeliness, convenience, and information format are the most influential factors driving user satisfaction with ChatGPT. Interestingly, our research indicated that the accuracy and reliability of the information, typically deemed paramount, were not the primary concerns in the academic use of ChatGPT. Our findings recommend a cautious approach to integrating ChatGPT in higher education. We advocate for strategic use that recognizes its innovative potential while acknowledging its limitations, ensuring responsible and effective application in educational contexts. This balanced perspective is crucial for integrating AI tools into the academic fabric without compromising educational integrity or quality.
Keywords: artificial intelligence; ChatGPT; higher education; user information satisfaction
1. Introduction
The release of AI-powered ChatGPT has rapidly made it a focal point of discourse within educational circles, especially in higher education sectors worldwide. Since its launch in late 2022, ChatGPT has been the subject of intense examination, fueling debates that weigh its potential as a transformative educational aid against its capacity to enable academic misconduct, particularly plagiarism, and the promulgation of false information [1]. The ethical implications of its use, including the potential for AI to impact student learning and the authenticity of students' work, pose substantial concerns [2, 3]. These ethical challenges have contributed significantly to hesitancy over ChatGPT's full-scale adoption within academic frameworks.
Despite these concerns, the utility of ChatGPT in educational contexts cannot be overstated. Roose [4] highlights its advantages, particularly regarding resource generation and engagement facilitation, which could redefine pedagogical mechanisms. Such tools can potentially revolutionize content creation, offering resources and interactive experiences that support personalized learning paths [5]. Furthermore, educators have found that, when leveraged judiciously, ChatGPT can facilitate idea generation and curriculum development [6], potentially enhancing the educator's role by automating administrative tasks and allowing for more focused pedagogical strategies.
The dualistic nature of ChatGPT's impact, with prospective benefits shadowed by ethical and integrity-related quandaries, underscores the need for an in-depth examination of its role and regulation in educational settings. This dialogue is not merely theoretical but is actively shaped by the experiences and testimonies of educators and students navigating this new technological frontier [7].
The integration of ChatGPT into educational frameworks has also sparked a robust debate among academics and practitioners. Advocates, including Kocoń et al. [8], view ChatGPT as a revolutionary aid that enables educators to rapidly generate diverse teaching materials, thus potentially transforming pedagogical approaches. This support hinges on the belief that ChatGPT's responsive and interactive capabilities can significantly reduce the time and effort traditionally required for educational content creation.
However, this optimistic view is counterbalanced by
critical voices within the academic community. As voiced
by De Angelis et al. [9] and Deiana et al. [10], critics caution
against adopting ChatGPT without stringent oversight.
Their primary concerns revolve around the ease with which
students might exploit the tool for cheating and the propa-
gation of misinformation, which threatens to compromise
the integrity and reliability of educational standards. These
apprehensions highlight the need for frameworks that can effectively mitigate such risks while preserving the beneficial aspects of AI in education.
Although there has been considerable discussion about ChatGPT's potential and its challenges, current research has not fully explored ChatGPT's information quality (IQ) through the lens of user information satisfaction (UIS) and information quality theory (IQT) in higher education. Key aspects like information completeness, precision, timeliness, convenience, format, accuracy, and reliability have not been adequately addressed in this context. This oversight leaves educators and policymakers relying on incomplete data when considering ChatGPT's potential and its effects on education. A comprehensive analysis of ChatGPT's role in satisfying informational needs is vital for its effective application in educational settings. Therefore, it is imperative to conduct targeted studies that assess both the benefits and drawbacks of ChatGPT usage. Such investigations will clarify AI's implications for UIS and help ensure its integration into educational practices in a way that optimizes learning while upholding academic standards.
This study ventures to bridge this gap by systematically assessing how ChatGPT meets the informational needs of its users in higher education. It scrutinizes seven key dimensions of UIS based on IQT's multidimensions (completeness, precision, timeliness, convenience, format, accuracy, and reliability) which have been underexplored in the existing literature. Completeness pertains to the extent to which ChatGPT provides information that meets the full scope of users' inquiries. Precision refers to the degree of specificity and relevance that ChatGPT's responses hold to the questions posed. Timeliness involves the speed at which ChatGPT delivers information, while convenience considers the ease of interaction with the AI tool. Format examines the organization and presentation of information, and accuracy and reliability address the correctness and dependability of the content provided. This comprehensive evaluation, accounting for the positive potentials and risks identified by Foroughi et al. [11]; Strzelecki [12]; Chang, Silalahi, and Lee [13]; and Menon and Shilpa [14], as well as the integrity concerns highlighted by D. Cotton, P. Cotton, and Shipway [15]; Bin-Nashwan, Sadallah, and Bouteraa [16]; Liu et al. [17]; and Ansari, Ahmad, and Bhutta [18], seeks to offer an expansive perspective on ChatGPT's effectiveness and to elucidate its role in the academic sector. Through this investigation, the study aims to significantly advance the conversation surrounding the deployment of AI in education and to underpin its practical applications with robust empirical insights.
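To make the seven dimensions concrete, the sketch below shows one way a respondent's Likert-scale ratings could be aggregated into per-dimension composite scores before modeling. The item groupings and values are hypothetical and do not reproduce the study's actual survey instrument.

```python
# Illustrative only: aggregating Likert-scale items (1-5) into one
# composite score per IQ dimension. Item counts and responses are invented.
from statistics import mean

DIMENSIONS = [
    "completeness", "precision", "timeliness",
    "convenience", "format", "accuracy", "reliability",
]

def composite_scores(responses: dict) -> dict:
    """Average each dimension's items into a single composite score."""
    return {dim: round(mean(items), 2) for dim, items in responses.items()}

# One hypothetical respondent, three items per dimension.
respondent = {
    "completeness": [4, 5, 4],
    "precision":    [4, 4, 3],
    "timeliness":   [5, 5, 5],
    "convenience":  [5, 4, 5],
    "format":       [4, 4, 4],
    "accuracy":     [3, 3, 2],
    "reliability":  [3, 2, 3],
}

scores = composite_scores(respondent)
```

For this hypothetical respondent, the composites for timeliness and convenience come out highest while accuracy and reliability come out lowest, mirroring the pattern of concerns the study reports.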
Revisiting the UIS framework by Ives, Olson, and Baroudi [19] and IQT by Wang and Strong [20], this study focuses on integrating AI in education through ChatGPT. It investigates how ChatGPT's accuracy, relevance, and timeliness can improve educational quality, focusing on the underexplored area of UIS in AI-supported learning environments. Amid growing concerns over the misuse of AI for unethical academic practices, the study provides actionable strategies for educational institutions to maximize ChatGPT's benefits while upholding academic integrity [7, 21, 22]. It also offers a critical analysis of the risks associated with ChatGPT, particularly in facilitating academic dishonesty [23], which is indispensable for embedding AI into educational frameworks. The study equips decision-makers with the insights needed for informed AI integration by providing a robust evaluation of UIS in AI educational tools. It endorses a mindful adoption of AI to bolster learning outcomes, ensuring user satisfaction remains paramount.
2. Literature Review
2.1. Previous Study and Gap Identification. UIS is a well-established concept in information system (IS) development. While numerous studies have applied UIS in various contexts, there is a noticeable gap in research explicitly focusing on UIS concerning AI-powered tools in education, particularly the quality of information generated by ChatGPT. The need to explore this area has become increasingly clear, as Chang, Silalahi, and Lee [13] have pointed out the potential for such information to create uncertainty. Additionally, Fu et al. [24] have raised concerns that information from ChatGPT could lead to misinformation and potentially harm users' knowledge and decision-making abilities, especially in educational settings. This highlights the importance of further research on UIS, particularly using the IQT approach, as there has been no investigation into how UIS and IQT assess the quality of information generated by generative AI. For example, Wang et al. [25] studied how IQ controls are used to standardize content on online platforms, applying IQT and leveraging four factors as IQ controls.
Similarly, Kim et al. [26, 27] studied the impact of incorrect information provided by ChatGPT on user decision-making in tourism, emphasizing how accuracy and trustworthiness influence the acceptance of AI recommendations. This research underscores the critical role of IQ in AI-driven contexts, particularly focusing on the negative effects of inaccurate information, and offers practical insights for enhancing AI-based recommendation systems. Melchert, Müller, and Fischer [28] explored the application of IQT in AI-driven educational platforms, emphasizing the need for high-quality information to ensure effective learning outcomes and user satisfaction. Their study contributes by adapting IQT to the educational context, providing a framework for evaluating and improving the IQ of AI-driven educational tools, with a focus on ensuring learner satisfaction and educational integrity. Wang, Li, and Chen [29] examined how information normalization improves communication effectiveness in online healthcare consultations, although they found that its positive effects are diminished in high-risk scenarios. Their study expands the IQ framework by integrating information normalization, providing insights into balancing informational and emotional support in online healthcare settings.
Haryaka, Agus, and Kridalaksana [30] developed a model focusing on service quality, IQ, user participation, and benefits in e-learning, finding strong correlations and reliability among these variables. Similarly, Ang and Koh [31] explored the relationship between job satisfaction and UIS within a company context, integrating demographic variables such as age, educational level, job tenure, and organizational position. While their framework demonstrated that job satisfaction and UIS are influenced by similar factors, it did not consider the implications of AI-powered educational tools, leaving a gap in understanding how such tools affect UIS.
Au, Ngai, and Cheng [32] conducted a critical review of over 50 papers on end-user information system satisfaction (EUISS), revealing a dominance of the expectation-disconfirmation approach in past research. They proposed an integrated conceptual model based on equity and needs theories to better understand the psychological processing of IS performance and its impact on EUISS. Despite providing valuable insights for future research, this study did not explore the impact of AI-powered tools like ChatGPT in educational settings. Siritongthaworn and Krairit [33] identified four dimensions of student satisfaction in e-learning: delivery method, content, communication facilitation, and system operation. Their study underscored the importance of contextual factors in influencing overall satisfaction, suggesting that e-learning environments require distinct instructional designs compared to traditional or solely online methods. Yet, their research did not address the unique challenges and opportunities presented by AI-powered tools in education.
Building on these foundations and the previous studies in Table 1, our study addresses a significant gap in the literature by focusing on UIS in the context of ChatGPT's application in higher education. We developed a UIS model focusing on seven key factors: completeness, accuracy, precision, reliability, timeliness, convenience, and format. Our findings will reveal how the seven key factors can influence UIS and which of these factors raise concerns for academic use. This strategic framework for integrating ChatGPT in higher education emphasizes responsible and effective application, ensuring educational integrity and quality while acknowledging the tool's limitations. This contribution provides a robust addition to the body of knowledge, guiding educators and researchers in harnessing the potential of AI tools in educational settings.
3. This Study's Theoretical Framework

Previous studies have established a variety of UIS measures tailored to the context and specific IS under scrutiny. For instance, Galletta and Lederer [35] devised as many as 23 metrics to assess user satisfaction concerning the implementation and efficacy of IS. Conversely, Laumer, Maier, and Weitzel [36] pinpointed four factors that determine user contentment with corporate content management systems, among other diverse inquiries (refer to [19, 36, 37]). These scholarly endeavors elucidate that a theoretical framework for measuring UIS is well established. However, there is a distinct lack of focus on measuring UIS specifically for AI-supported systems like ChatGPT. Therefore, this research aims to develop specialized UIS measures for ChatGPT, providing valuable insights for both users and developers, including entities like OpenAI and others engaged in chatbot technologies. This initiative not only broadens the scope of UIS applications but also contributes to the refinement of AI-supported systems to better meet user requirements. The formulation of these measures has the potential to significantly influence the customization and enhancement of technology, thus elevating user utility and satisfaction across various contexts.
Drawing upon the concepts of UIS and IQT from prior research [11, 20, 19, 36], this study formulates a seven-construct framework to evaluate the quality of information generated by the ChatGPT system, which will be empirically tested among its users. These seven constructs of UIS are derived from IQT and have been operationalized into a user satisfaction model methodically tested through this investigation. The IQ satisfaction dimensions tailored for ChatGPT include completeness, precision, timeliness, convenience, format, accuracy, and reliability. These constructs will be assessed in relation to user satisfaction and structured within a conceptual research framework, as depicted in Figure 1. This comprehensive approach aims to offer a holistic understanding of user interaction with and perception of the quality of information from the ChatGPT system. By analyzing these varied dimensions, the study endeavors to pinpoint the primary areas where ChatGPT excels and where there is potential for enhancement, thereby augmenting its efficacy as a user-oriented tool. The outcomes of this research are anticipated to provide valuable insights into the optimization of ChatGPT and other AI-supported systems.
4. Theoretical Background
Table 1: Previous study and gap identification.

Kim et al. [26, 27]. AI context in education: Yes (ChatGPT in tourism). Variables: incorrect information, accuracy, trustworthiness, and user decision-making. Findings: the study investigates the impact of incorrect information provided by ChatGPT on user decision-making, highlighting how accuracy and trustworthiness influence the acceptance of AI recommendations in tourism. Contribution: highlights the critical role of information quality in AI-driven contexts like tourism, particularly the negative effects of inaccurate information, and offers practical insights for enhancing AI-based recommendation systems.

Melchert, Müller, and Fischer [28]. AI context in education: Yes (AI in education). Variables: information quality dimensions (relevance, accuracy, completeness, timeliness, and accessibility). Findings: the study explores the application of IQT in AI-driven educational platforms, emphasizing the need for high-quality information to ensure effective learning outcomes and user satisfaction. Contribution: adapts IQT to the educational context, providing a framework for evaluating and improving the information quality of AI-driven educational tools, with a focus on ensuring learner satisfaction and educational integrity.

Wang, Li, and Chen [29]. AI context in education: No (online healthcare consultations). Variables: information normalization, emotional support, and informational support. Findings: information normalization improves communication effectiveness in online healthcare consultations, but the positive effects are diminished in high-risk health scenarios. Contribution: expands the information quality framework by integrating information normalization, providing insights into the balance between informational and emotional support in online healthcare settings.

Wang et al. [25]. AI context in education: No (online information normalization). Variables: information normalization and information quality controls (timeliness, completeness, depth, and relevance). Findings: information normalization improves communication performance in online healthcare consultations, but its positive impact is lessened when dealing with high-risk diseases. Contribution: introduces information normalization into the information quality model, enhancing online healthcare communication and enriching social support theory by highlighting the balance between informational and emotional support.

Haryaka, Agus, and Kridalaksana [30]. AI context in education: No (e-learning UIS). Variables: service quality (responsiveness, relevancy, understanding, and productivity), information quality (competence, accuracy, and participation), user (tangible and currency), and benefit. Findings: the study developed a user satisfaction model for e-learning; the model showed strong correlations and reliability among variables like service quality, information quality, user participation, and benefits. Contribution: provides a robust model for evaluating e-learning user satisfaction via smartphones, validated through statistical methods, and offers insights for future development of e-learning applications.

Ang and Koh [31]. AI context in education: No (UIS and job satisfaction in a company). Variables: demographics (e.g., age, educational level, job tenure, and organizational position). Findings: job satisfaction and user information satisfaction are correlated and influenced by similar factors; the study's framework, incorporating variables like age, education, and computer literacy, showed promising results in initial tests. Contribution: introduces a framework to rigorously examine the relationship between job satisfaction and user information satisfaction, offering a basis for further validation.

Au, Ngai, and Cheng [32]. AI context in education: No (information system satisfaction framework development). Variables: IS performance, IS performance expectation, equitable work performance fulfillment, equitable relatedness fulfillment, equitable self-development fulfillment, and IS performance expectation disconfirmation. Findings: the study reviewed over 50 papers on end-user information system satisfaction (EUISS) and found that most past research used the expectation-disconfirmation approach; an integrated conceptual model based on equity and needs theories is proposed to better understand the psychological processing of information system performance and its impact on EUISS. Contribution: introduces a new conceptual model for EUISS, incorporating equity and needs theories, and provides insights for future testing and application of this model.

Wang [34]. AI context in education: No (e-learning satisfaction context in education). Variables: learner interface, learning community, content, and personalization. Findings: the study developed and validated a comprehensive model and instrument to measure learner satisfaction with asynchronous e-learning systems, addressing a gap in the existing literature; the model was rigorously tested for reliability and various forms of validity using data from 116 adult respondents. Contribution: provides a validated instrument for measuring learner satisfaction with asynchronous e-learning, offering a useful tool for researchers to develop and test e-learning theories and insights for practitioners in improving e-learning systems.

Siritongthaworn and Krairit [33]. AI context in education: No (e-learning satisfaction). Variables: delivery method, content, communication facilitation, and system operation. Findings: the study identifies four dimensions of student satisfaction in e-learning (delivery method, communication facilitation, system operation, and content); these dimensions influence overall satisfaction, and their impact varies depending on the context of e-learning implementation. Contribution: offers a tailored instrument for measuring student satisfaction in e-learning, emphasizing the need for distinct instructional designs for blended e-learning compared to traditional or solely online methods, providing valuable insights for educators.

This study. AI context in education: Yes (ChatGPT in education). Variables: completeness, accuracy, precision, reliability, timeliness, convenience, and format. Findings: the study developed a UIS model based on IQ theory for assessing ChatGPT's information quality in education, highlighting that completeness, precision, timeliness, convenience, and information format are key factors driving user satisfaction; interestingly, accuracy and reliability were not primary concerns for academic use. Contribution: provides a strategic framework for integrating ChatGPT in higher education, emphasizing the need for responsible and effective application by considering information quality and acknowledging the tool's limitations, thereby ensuring educational integrity and quality.

4.1. IQT. IQT has become an integral part of data management and IS studies, significantly advanced by Wang and Strong [20]. They expanded the concept of IQ beyond mere accuracy, introducing a multidimensional perspective that includes completeness, consistency, and timeliness [20].
Their research redefined IQ, emphasizing that a multidimensional approach is essential for assessing the quality of information. Furthermore, Wang and Strong's [20] work was instrumental in shifting the paradigm of IQ from a purely technical focus to one that considers how information is used and perceived in various contexts. This shift in IQT can be seen as part of a broader epistemological change, reflecting how organizations perceive information [38]. This transformation was driven by the recognition that data, as a representation of reality [39], is not an end in itself but a means to an end: facilitating informed decision-making [40]. The development of IQT reflects a constructivist perspective, in which the value of information depends on its utility within specific contexts rather than being an inherent property of the data itself. This approach aligns with the pragmatic tradition in philosophy, which emphasizes ideas' practical consequences and usefulness in achieving desired outcomes [41].
Over the years, IQT has been adopted across various interdisciplinary fields, including ISs [25], management science [42], and data governance [43]. In the last two decades, the theory has been expanded and enriched, building on Wang and Strong's foundational work and incorporating insights from other disciplines to address emerging challenges brought about by the digital revolution. For instance, Batini et al. [44] highlighted the need for adaptable information frameworks in the complex environment of modern society, while Alhassan, Sammon, and Daly [43] integrated data governance principles to ensure data quality across distributed systems. Martín et al. [45] further developed IQT by exploring the impact of data quality on the performance of AI models, emphasizing the importance of high-quality data in predictive analytics. Additionally, Fosso Wamba et al. [46] addressed the significance of new metrics in real-time data processing, reinforcing the ongoing relevance of IQT in contemporary management.
Recent studies have expanded the application of IQT into the field of AI. For example, Kim et al. [26, 27] examined how incorrect information from ChatGPT influences user decisions in the tourism sector. Their findings highlight the importance of dimensions like accuracy and trustworthiness in AI, especially in gaining user acceptance and trust in AI-generated information. Similarly, Melchert, Müller, and Fischer [28] applied IQT to AI-driven educational platforms, stressing the need for high-quality information to achieve effective learning and user satisfaction. These studies suggest that traditional IQ dimensions, such as accuracy, relevance, and completeness, must be reassessed in the context of AI advancements.
In this study, which aims to assess user satisfaction with information generated by AI systems like ChatGPT based on IQ dimensions, applying IQT is both timely and crucial. The release of generative AI tools like ChatGPT raises concerns about ensuring the quality of dynamically generated content [13]. As a result, traditional IQ dimensions like accuracy, relevance, and completeness must be reassessed to confirm that these AI systems are effectively meeting users' information needs. Additionally, scholars have raised concerns about misinformation and ethical issues associated with ChatGPT despite its perceived benefits in the learning process. This research is well positioned within the current landscape of generative AI, where assessing the quality of its generated information is essential. Furthermore, Fu et al. [24] found that interactions with ChatGPT, particularly concerning its information, can impact individuals' knowledge and skills. When the information and interaction with generative AI are mutually beneficial, transparent, and reliable, it reflects high-quality information generation.
Therefore, by applying IQT to evaluate ChatGPT's generated information, this study enhances the understanding of AI-generated information's effectiveness and contributes to the broader discussion on the implications of generative AI's IQ, influencing user satisfaction and trust in these systems.

Figure 1: The study's theoretical framework measuring UIS with ChatGPT in higher education. Seven hypothesized paths (H1 to H7) link completeness, accuracy, precision, reliability, timeliness, convenience, and format to user information satisfaction.
4.2. UIS. In the early 1980s, Ives, Olson, and Baroudi [19] introduced a model for evaluating user satisfaction within ISs, highlighting that user perceptions and understanding are essential. They argued that satisfaction is a critical indicator of an IS's overall success. This framework included multidimensional factors such as system reliability, IQ, and user support, all of which collectively influence the overall user experience [47, 48]. Building on this, Ives, Olson, and Baroudi [19] further explored the relationship between UIS and system usage, highlighting that user satisfaction reflects and drives continued system use. These early studies established UIS as a pivotal concept in evaluating IS effectiveness, setting the stage for its continued evolution in IS research [19].

As the IS field progressed, the concept of UIS evolved, mirroring the growing complexity of digital environments. Wixom and Todd [49] made a significant contribution by combining UIS with the technology acceptance model (TAM), underscoring the importance of perceived usefulness and ease of use as central to user satisfaction. Their work highlighted how closely satisfaction is tied to users' perceptions of a system's utility and their willingness to adopt and continue using the technology. Recent studies have further extended the application of UIS to modern technologies like mobile apps and cloud computing, where aspects such as IQ and user interface design play a pivotal role in determining user satisfaction [50, 51]. For example, Chi [52] explored mobile commerce and found that user satisfaction is significantly shaped by the quality of information provided and the ease of navigating the interface.
UIS has been increasingly applied to new areas where digital platforms and advanced technologies shape the user experience. Xu, Benbasat, and Cenfetelli [53] highlighted the need to combine service quality with system quality and IQ to boost user satisfaction, especially in mobile services. Al-Qeisi et al. [54] examined this further in e-government services, emphasizing that IQ and user interface design are crucial for ensuring user satisfaction. More recently, research has looked into UIS in cloud computing environments, revealing that satisfaction with service quality, system reliability, and information accuracy is essential [55]. These studies demonstrate how UIS continues to evolve to meet the demands of modern digital environments, reinforcing its importance as a key measure of IS effectiveness.
This research argues that user satisfaction with generative AI systems like ChatGPT is not simply a byproduct of system performance but rather a complex interplay between the user and the information provided. Given the variable quality of AI-generated content, measuring UIS requires a more nuanced approach that considers both the objective quality of the information and the user's subjective experience. Studies by Yoon and Kim [56] have highlighted the need to understand these interactions, especially as AI systems increasingly influence everyday decision-making. By examining UIS in the context of ChatGPT, this study provides insights into how AI can be designed to boost user satisfaction and trust, adding to the ongoing conversation about AI's role in modern ISs [57].
5. Hypothesis Development

5.1. UIS Factors. As previously mentioned, this study employs seven measures of UIS (completeness, accuracy, precision, reliability, timeliness, convenience, and format) to assess user satisfaction. These measures are intricately linked to the user context of ChatGPT, situating the research at the intersection of AI utility and user experience. The objective is to conduct a comprehensive examination of how each of these UIS dimensions contributes to overall user satisfaction when interacting with ChatGPT. This methodological approach facilitates a detailed exploration of the system's effectiveness and the user experience it fosters, offering insights that can guide future refinements and modifications to optimize ChatGPT for its users.
In the realm of ChatGPT's responses, completeness refers to the depth and breadth with which user inquiries are addressed. Studies, such as those by Gupta, Motlagh, and Rhyner [58], indicate that users value detailed and comprehensive answers, as they contribute to a fuller understanding of the subject matter. A ChatGPT system that responds to complex user questions with complete, adequate, and specific information demonstrates a more thorough comprehension [59]. Particularly noteworthy is ChatGPT's ability to identify and comprehend the implicit questions posed by users, generating responses that are both comprehensive and relevant. Additionally, ChatGPT's capacity to present updated and current knowledge stands out as an added value. This suggests that the more complete the responses provided by ChatGPT, the higher the likelihood of user satisfaction, underscoring the significance of depth and breadth in the delivery of information. Therefore, it is posited that:

H1: The more complete the information provided by ChatGPT, the higher the user satisfaction.
In this study, accuracy is conceptualized as the truthfulness and correctness of ChatGPT's responses. Personalized information, when provided in ample and relevant quantities, is typically perceived as delivering accurate information recommendations to users [26, 27]. This perspective is informed by the accuracy of textual outputs such as language modeling, text categorization, or the question-and-answer formats generated by ChatGPT [60]. ChatGPT's capability to comprehend and interpret complex queries and produce answers reflecting the veracity of information contributes to user satisfaction. Prior research emphasizes the criticality of accurate information for users, especially in decision-making contexts [11]. This study posits a direct positive relationship between the accuracy of provided information and user satisfaction levels, highlighting the importance of delivering truthful and reliable answers to enhance the user experience. Therefore, the proposed hypothesis is as follows:

H2: Higher information accuracy from ChatGPT is associated with higher user satisfaction.
Precision in ChatGPT's responses, understood as the relevance and specificity of answers to user queries, is crucial. Research indicates that users favor targeted answers that directly address their specific issues [61]. These responses provide information that aligns closely with users' needs, offering specific solutions to their presented problems. Consequently, users perceive that the ChatGPT system understands their requirements through precise and thorough responses that meet or even exceed their expectations [62]. This suggests that the more precise the information provided, the greater the user satisfaction, underscoring the importance of contextually specific and tailored responses. Therefore, the hypothesis is as follows:

H3: The more precise the information from ChatGPT, the higher the user's satisfaction.
Reliability refers to the consistency and dependability of ChatGPT's responses over time. Information considered reliable comprises responses that not only address the immediate informational needs of users but also provide them with opportunities for deeper knowledge acquisition [63]. Users who perceive ChatGPT as a reliable source, offering consistent and dependable responses, are likely to experience higher satisfaction levels. User interaction studies have shown that consistent performance fosters user trust and satisfaction [64]. This highlights the importance of ChatGPT maintaining stable and reliable outputs to sustain user satisfaction. Therefore, the hypothesis is as follows:

H4: The more reliable the information from ChatGPT, the higher the user's satisfaction.
Timeliness in the context of ChatGPT's responses emphasizes the speed at which it delivers information. ChatGPT's ability to provide instant and prompt responses is instrumental in enhancing interaction efficiency and fulfilling user expectations [65]. When users pose questions and promptly receive accurate responses from ChatGPT, it aids them in resolving their issues more effectively. Additionally, the responsive nature of the ChatGPT system facilitates interactions that lead users to rely on it [66], casting a positive light on the quality and responsiveness of ChatGPT's performance. Rapid and accurate replies are posited to increase user satisfaction [47]. In the realm of digital communication, timeliness is highly valued; users often equate quick responses with efficiency and effective service, thereby enhancing their overall satisfaction with the tool. Therefore, the hypothesis is as follows:

H5: Greater timeliness from ChatGPT will increase users' satisfaction.
Convenience in the utilization of ChatGPT incorporates factors such as ease of use and accessibility. The system's ability to comprehend commands or queries translates into user-friendly experiences [67]. A simple design that facilitates user-system communication fosters a more comfortable interaction experience. Moreover, the provision of clear and timely responses helps users avoid confusion and feel more in command of the interaction. High levels of convenience generate positive user experiences and lead to greater satisfaction. This study posits that convenience plays a crucial role in user satisfaction. Human-computer interaction research indicates that tools that are easy to use and accessible significantly enhance user experience and satisfaction. This implies that the more user-friendly and accessible ChatGPT is, the higher the user satisfaction. Therefore, the hypothesis is as follows:

H6: Greater convenience in generating information with ChatGPT will significantly boost user satisfaction.
The formatting of information delivered by ChatGPT, including its clarity, organization, and presentation, is posited to influence user satisfaction. Responses that are well-structured, employing clear paragraphing, bullet points, and other visual aids, facilitate users' ability to quickly locate and comprehend information [68]. Information that is neatly organized acts as a guide for users to grasp the context and inspect specific sections of the response in detail. Research suggests that well-structured information, presented in a clear and coherent manner, enhances user understanding and satisfaction [69]. User-friendly and well-organized response formats are therefore likely to positively impact user satisfaction, as they enable easier comprehension and interaction. Therefore, this study proposes the following hypothesis:

H7: The easier the information format from ChatGPT is to understand, the more positively it will impact user satisfaction.
6. Methods

6.1. Measures. This study adopts the UIS concept developed by Ives, Olson, and Baroudi [19] as the foundational framework for its conceptual research structure. However, for the seven UIS measures, this research draws on previous studies, making modifications and adjustments specific to this study's context. This approach is necessitated by the absence of measures in previous studies that directly evaluate the UIS of ChatGPT. Consequently, the study employs 21 questions: four satisfaction items modified from Bhattacherjee [70]; six items for completeness, timeliness, and format adapted and refined from Laumer, Maier, and Weitzel [36]; three accuracy items adapted from Foroughi et al. [11]; and eight items for precision, reliability, and convenience modified from Ives, Olson, and Baroudi [19]. This methodical adaptation ensures the measures are suitably aligned with the unique characteristics and user interactions of ChatGPT, providing a robust and relevant evaluation of user satisfaction. Through this comprehensive approach, the study aims to offer a detailed and broader understanding of user satisfaction with ChatGPT, contributing significantly to the field of user experience research in AI-powered systems. Table 2 presents the measurement items used in this study.
6.2. Sampling Technique and Data Collection Procedures. This study employs a survey method designed to gather insights from a group of research participants. Data were obtained through a structured questionnaire divided into three parts. The first part requests the participants' consent and explains the sampling technique utilized in the study. Participants had to meet two criteria: they should have used ChatGPT for more than 6 months and be associated with higher education, either as faculty members, graduate or undergraduate students, or postdoctoral researchers. This ensures that the study focuses on a knowledgeable population. The second part gathers demographic details such as gender, age, university type, and occupation. The final part includes questions specifically tailored for this research project. This systematic approach enables the collection of pertinent data, ensuring that the study's findings are both insightful and representative of how ChatGPT users in educational settings perceive their experiences with the platform. This methodology is crucial for capturing how diverse demographic groups utilize and view ChatGPT differently. By emphasizing structured data collection, the study aims to offer a nuanced understanding of ChatGPT's influence in educational environments, enriching the depth and scope of the research results.
Over a 3-month data collection period from August to November 2023, this study gathered 508 responses. Males comprised 73% of the users. In terms of age, the 21–35-year-old bracket was predominant in ChatGPT usage within the higher education landscape, accounting for 75% of respondents. Regarding university type, 57% of respondents came from public universities and the remaining 43% from private institutions. In terms of occupation, undergraduate students constituted 39% of participants, graduate students (both master's and doctoral candidates) made up 32%, faculty members represented 15%, and the remaining 14% were postdoctoral researchers.
6.3. Data Analysis Technique. This research involves several stages of data analysis. Initially, to ensure the validity and reliability of the instruments, the study conducts a common method variance (CMV) test. This assesses the consistency and validity of the instruments used in the research, employing Harman's single-factor method tested through SPSS Version 26 [71]. Secondly, the study applies validity, reliability, and hypothesis analyses using the structural equation modeling approach with Smart-PLS 4.0 software. This phase includes testing convergent validity to assess outer loadings (OLs), composite reliability (CR), average variance extracted (AVE), and variance inflation factor (VIF) [72]. Furthermore, the research model evaluation involves testing discriminant validity through the Fornell–Larcker criterion [73], heterotrait–monotrait ratio (HTMT) [74], and cross-loading matrix [72], along with R-square [75] and goodness of fit (GOF) evaluation [72]. Through these comprehensive methods, the study seeks to provide an in-depth understanding of how each UIS factor contributes to user satisfaction, thereby offering valuable insights into user experience with ChatGPT.
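The convergent-validity thresholds described above can be sketched as a simple check. The helper function and values below are illustrative only (Smart-PLS computes these statistics directly); the cutoffs reflect the commonly cited Hair et al. guidelines.

```python
# Hypothetical helper mirroring the convergent-validity criteria mentioned
# in the text: outer loadings and composite reliability >= 0.70, AVE >= 0.50,
# and VIF below 5. Input values here are illustrative.

def check_convergent_validity(outer_loading, composite_reliability, ave, vif):
    """Return pass/fail flags for the usual PLS-SEM convergent-validity cutoffs."""
    return {
        "outer_loading_ok": outer_loading >= 0.70,
        "composite_reliability_ok": composite_reliability >= 0.70,
        "ave_ok": ave >= 0.50,
        "vif_ok": vif < 5.0,
    }

result = check_convergent_validity(0.736, 0.724, 0.636, 1.597)
print(result)  # all four flags True for these illustrative values
```

A construct failing any flag would warrant item deletion or re-specification before proceeding to the structural model.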
Table 2: Measurement items.

Satisfaction (Bhattacherjee [70]): How do you feel about your overall experience of retrieving information from ChatGPT: very dissatisfied/very satisfied; very displeased/very pleased; very frustrated/very contented; absolutely terrible/absolutely delighted.

Completeness (Laumer, Maier, and Weitzel [36]): ChatGPT provides me with complete information. ChatGPT produces comprehensive information.

Accuracy (Foroughi et al. [11]): Information from ChatGPT is correct. Information from ChatGPT is reliable. Information from ChatGPT is accurate.

Precision (Ives, Olson, and Baroudi [19]): The responses from ChatGPT are generally specific and directly address my questions. I rarely receive vague or ambiguous information from ChatGPT. I find ChatGPT's responses to be consistently to the point.

Reliability (Ives, Olson, and Baroudi [19]): ChatGPT rarely fails to deliver the information I can rely on. I trust ChatGPT as a dependable source of information.

Timeliness (Laumer, Maier, and Weitzel [36]): The information provided by ChatGPT is up-to-date. The information provided by ChatGPT is received in a timely manner.

Convenience (Ives, Olson, and Baroudi [19]): Accessing ChatGPT is convenient and user-friendly. I find it easy to access ChatGPT on my preferred devices. I experience no significant challenges in accessing ChatGPT.

Format (Laumer, Maier, and Weitzel [36]): The format in which ChatGPT presents information is clear and easy to understand. I find ChatGPT's information presentation format user-friendly.
7. Results

7.1. CMV. Before proceeding with further data analysis, a CMV test was conducted using Harman's single-factor method, in which all measures are loaded onto a single factor. If the total variance explained is below 50%, CMV is not considered a concern [71]. Utilizing SPSS for the CMV test, a result of 13.6% was obtained, well below the 50% threshold. Consequently, it is concluded that CMV does not pose a concern in this study. This finding establishes a strong foundation for the credibility of the subsequent analysis, ensuring that the data interpretation and results are robust and reliable. The low CMV percentage enhances the validity of the study, reinforcing the integrity of the research findings and their implications.
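Harman's single-factor procedure can be sketched as follows. The data below are synthetic stand-ins for the 508 x 21 item matrix (the study ran this in SPSS), so the printed share will not reproduce the reported 13.6%; the point is only to show where that percentage comes from.

```python
# A minimal sketch of Harman's single-factor test: extract one unrotated
# factor from all measurement items and inspect the share of total variance
# it explains. CMV is usually judged unproblematic when the share is < 50%.
import numpy as np

rng = np.random.default_rng(42)
items = rng.normal(size=(508, 21))  # synthetic: 508 respondents x 21 items

# Eigen-decompose the item correlation matrix; the largest eigenvalue's
# share of the total corresponds to the first unrotated factor's variance.
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # sorted descending
first_factor_share = eigenvalues[0] / eigenvalues.sum()

print(f"First factor explains {first_factor_share:.1%} of total variance")
```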
7.2. Validity and Reliability Assessment. Table 3 displays the results for convergent validity and reliability. The findings indicate that all statistical criteria suggested by Hair et al. [72], including OL, Cronbach's alpha (CA), CR, VIF, and AVE, have been met. Consequently, it can be concluded that convergent validity and reliability are not concerns in this study.
Subsequent testing focused on assessing discriminant validity to evaluate the efficacy of the model developed in this study. Three methods were employed for this purpose, as illustrated in Tables 4 and 5. Table 4 indicates that, under the Fornell–Larcker criterion, all diagonal and bolded values are greater than the intervariable correlation values, suggesting no concerns regarding discriminant validity [72]. Additionally, the HTMT values obtained are all below the threshold of 0.90, leading to the conclusion that discriminant validity is not a concern. These results confirm the distinctiveness of the constructs within the model, ensuring that each measure captures a unique aspect of the phenomenon under study, which is crucial for the overall validity of the research findings.
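The Fornell–Larcker rule just described can be sketched as a small check. The AVEs (0.636, 0.734) and the intervariable correlation (0.251) below are taken from Tables 3 and 4 for the accuracy and completeness constructs; the helper itself is illustrative.

```python
# Fornell-Larcker check: the square root of each construct's AVE must
# exceed that construct's correlation with every other construct.
import math

ave = {"Accuracy": 0.636, "Completeness": 0.734}       # from Table 3
corr = {("Accuracy", "Completeness"): 0.251}           # from Table 4

for (a, b), r in corr.items():
    # sqrt(AVE) reproduces the bolded diagonal values (0.798 and 0.857)
    ok = math.sqrt(ave[a]) > abs(r) and math.sqrt(ave[b]) > abs(r)
    print(a, b, "discriminant validity:", "ok" if ok else "violated")
```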
Next, discriminant validity was tested using the cross-loading matrix method, as presented in Table 5. The results indicate that the correlation of each construct with its respective measurement items is greater than with cross items. This finding suggests that discriminant validity is not a concern in this study. The clear differentiation of the constructs further strengthens the reliability and accuracy of the measurement model, ensuring that each construct is distinctively and accurately captured within the research framework [72].
Model testing then proceeded with an evaluation of model fit and R-square. The results indicated an SRMR value of 0.070, a chi-square of 1631.471, and an NFI of 0.783, all of which fall within the threshold limits recommended by Hair et al. [72]. Subsequently, the R-square value was assessed to determine the power of the independent constructs in predicting the dependent construct. An R-square value of 0.581 was obtained, indicating that the seven ChatGPT UIS measures can account for 58.1% of the variance in satisfaction. This meets the criteria set by Falk and Miller [75], as the R-square value is above the 0.10 benchmark. These findings not only demonstrate the model's good fit but also underline the considerable explanatory power of the identified factors in understanding user satisfaction with ChatGPT.
In this study, the GOF metric was calculated to evaluate the reliability of the constructed model. The GOF is determined by taking the square root of the product of the average AVE and the average R-square (R²), as indicated in Equation (1). Tenenhaus et al. [76] and Wetzels, Odekerken-Schröder, and Van Oppen [77] suggest that a GOF value above 0.36 indicates a high fit, between 0.25 and 0.36 signals a moderate fit, and between 0.10 and 0.25 points to a low fit. The computed GOF for this study is 0.635, which exceeds the threshold for a high fit, demonstrating the robustness and reliability of the model. This method was also applied in Huang, Silalahi, and Eunike's [78] research, which confirmed model robustness with the GOF method.
GOF = √(AVE × R²) = √(0.693 × 0.581) = 0.635 (1)
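As a quick arithmetic check of Equation (1), the reported averages reproduce the stated GOF:

```python
# GOF = sqrt(average AVE x average R^2), using the averages from the text.
import math

avg_ave, avg_r2 = 0.693, 0.581
gof = math.sqrt(avg_ave * avg_r2)
print(round(gof, 3))  # 0.635, above the 0.36 cutoff for a "high" fit
```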
7.3. Hypothesis Testing. Table 6 and Figure 2 provide a summary of the hypothesis testing. The analysis revealed that two UIS measures, accuracy (β = 0.070; T = 1.247) and reliability (β = −0.016; T = 0.400), were not significant predictors of satisfaction, leading to the rejection of Hypotheses H2 and H4. On the other hand, it was found that completeness (β = −0.096; T = 2.863), convenience (β = 0.126; T = 2.192), format (β = 0.323; T = 5.115), precision (β = 0.245; T = 3.681), and timeliness (β = 0.138; T = 2.396) had a significant impact on satisfaction, supporting Hypotheses H1, H3, H5, H6, and H7. However, a closer examination indicates that format and precision have a more substantial impact
Table 3: Convergent validity and reliability.

Constructs OL CA CR VIF AVE
Accuracy 0.736–0.858 0.713 0.724 1.288–1.597 0.636
Completeness 0.735–0.964 0.790 0.936 1.385–1.385 0.734
Convenience 0.820–0.841 0.771 0.781 1.444–1.770 0.684
Format 0.870–0.871 0.781 0.681 1.314–1.363 0.758
Precision 0.792–0.850 0.736 0.655 1.044–1.317 0.526
Reliability 0.869–0.908 0.735 0.749 1.510–1.510 0.790
Satisfaction 0.797–0.832 0.822 0.822 1.678–1.913 0.652
Timeliness 0.860–0.889 0.709 0.699 1.392–1.392 0.765
on satisfaction compared to the others. These results highlight the varying degrees of influence that different UIS measures have on user satisfaction, with some factors playing a more critical role than others in shaping user experiences with ChatGPT.
8. Discussion

This research represents a pioneering effort to apply the UIS framework, grounded in IQT, to the context of generative AI, specifically ChatGPT. This study distinguishes itself by being among the first to explore how traditional UIS constructs, such as those proposed by Ives, Olson, and Baroudi [19], are evolving in the context of AI technology. Our findings point to a shift in the determinants of user satisfaction, revealing that factors like completeness, convenience, format, precision, and timeliness are now more indicative of user satisfaction in the context of ChatGPT than the traditional emphasis on accuracy and reliability.
Completeness emerged as a critical factor in determining
user satisfaction, suggesting that users value information
that fully addresses their queries. In higher education, this
is particularly relevant, as students and educators often seek
comprehensive information that provides direct answers
and includes additional context and related details. For
instance, when conducting research or preparing for lectures,
Table 4: Fornell–Larcker criterion and HTMT.
Constructs (1) (2) (3) (4) (5) (6) (7) (8)
Accuracy (1) 0.798 0.352 0.876 0.808 0.873 0.776 0.718 0.858
Completeness (2) 0.251 0.857 0.277 0.449 0.655 0.374 0.176 0.282
Convenience (3) 0.652 0.210 0.827 0.714 0.644 0.623 0.683 0.788
Format (4) 0.564 0.299 0.521 0.871 0.875 0.598 0.784 0.691
Precision (5) 0.685 0.315 0.664 0.525 0.725 0.858 0.827 0.895
Reliability (6) 0.557 0.284 0.476 0.423 0.558 0.889 0.537 0.249
Satisfaction (7) 0.551 0.146 0.555 0.586 0.586 0.420 0.808 0.682
Timeliness (8) 0.601 0.215 0.584 0.475 0.577 0.661 0.517 0.875
Note: The diagonally bolded values are the square root of the AVE, used for the Fornell–Larcker criterion. The italic values indicate the HTMT, with threshold <0.90.
Table 5: Cross-loading matrix.
Items/constructs ACC CMP CVC FMT PRR RLB STS TML
ACR.1 0.795 0.253 0.519 0.468 0.511 0.499 0.427 0.514
ACR.2 0.858 0.179 0.523 0.482 0.559 0.411 0.488 0.479
ACR.3 0.736 0.171 0.524 0.396 0.574 0.431 0.400 0.450
CMP.1 0.239 0.964 0.207 0.268 0.313 0.288 0.159 0.224
CMP.2 0.191 0.735 0.141 0.270 0.210 0.171 0.063 0.114
CNV.1 0.496 0.201 0.820 0.407 0.480 0.351 0.364 0.412
CNV.2 0.533 0.173 0.841 0.421 0.600 0.419 0.488 0.480
CNV.3 0.578 0.154 0.820 0.458 0.551 0.402 0.500 0.537
FMR.1 0.489 0.307 0.421 0.870 0.419 0.371 0.510 0.387
FMR.2 0.492 0.213 0.485 0.871 0.494 0.365 0.511 0.440
PRC.1 0.576 0.203 0.568 0.418 0.850 0.457 0.515 0.486
PRC.2 0.577 0.203 0.569 0.439 0.838 0.480 0.491 0.503
PRC.3 0.277 0.473 0.218 0.285 0.792 0.235 0.182 0.191
RLB.1 0.498 0.287 0.398 0.371 0.445 0.869 0.340 0.557
RLB.2 0.494 0.224 0.446 0.381 0.540 0.908 0.402 0.615
STS.1 0.438 0.093 0.441 0.489 0.456 0.295 0.798 0.364
STS.2 0.444 0.082 0.481 0.445 0.489 0.354 0.832 0.444
STS.3 0.459 0.151 0.421 0.493 0.446 0.361 0.797 0.410
STS.4 0.439 0.147 0.448 0.467 0.500 0.345 0.802 0.450
TML.1 0.525 0.202 0.523 0.413 0.502 0.591 0.427 0.860
TML.2 0.528 0.176 0.499 0.418 0.509 0.568 0.475 0.889
Note: The bolded values represent the constructs' outer loadings.
educators need in-depth information that helps them explore topics thoroughly and present well-rounded arguments or explanations to students. This comprehensive approach, which reduces the need for further searches, fosters a more holistic understanding of academic topics and aligns with findings by Gupta, Motlagh, and Rhyner [58], who noted the importance of depth in information retrieval for educational purposes. In this sense, ChatGPT's ability to provide comprehensive responses can significantly enhance the quality of academic research and teaching [79].
Convenience is a key factor in user satisfaction, reflecting a preference for easily accessible information. The ability to use ChatGPT anytime and anywhere significantly boosts its appeal in academic settings. This is particularly valuable for students who need quick answers or additional resources while studying without being limited by library hours or the availability of tutors. Educators also benefit from this convenience, as they can quickly access supporting information or explore new teaching methods. The user-friendly design of ChatGPT and its accessibility align with Pan, Cui, and Mou [80], who emphasized the importance of easy access in user interactions with AI tools. The integration of AI into academic routines indicates that users increasingly value the convenience and accessibility these technologies offer, reflecting a shift towards more flexible, on-demand learning resources.
The format and precision of information provided by ChatGPT were also significant factors influencing user satisfaction. Higher education users, in particular, appreciate well-structured, clear, and organized information, which aids in better comprehension and usability. Precision ensures that the information is directly relevant to the user's query, enhancing the perceived reliability of ChatGPT. These aspects are crucial in academic settings, where students and educators require precise, accurate, and well-structured information for their work. For instance, in academic writing or preparing detailed lecture notes, the clarity and organization of information can significantly impact the effectiveness of the material. This finding is consistent with research by Reinecke and Bernstein [61], who emphasized the importance of clear and precise information presentation in user satisfaction. By providing well-organized and targeted information, ChatGPT can support academic activities by making it easier for users to extract the necessary information quickly and efficiently.
Timeliness emerged as a crucial factor in user satisfaction, with users appreciating quick responses that enable
Table 6: Summary of hypothesis testing.

H1. Completeness → satisfaction: β = −0.096**, T = 2.863, 97.5% bootstrapping CI [−0.168, −0.040], Accept
H2. Accuracy → satisfaction: β = 0.070, T = 1.247, CI [−0.037, 0.182], Reject
H3. Precision → satisfaction: β = 0.245***, T = 3.681, CI [0.117, 0.378], Accept
H4. Reliability → satisfaction: β = −0.016, T = 0.400, CI [−0.118, 0.086], Reject
H5. Timeliness → satisfaction: β = 0.138**, T = 2.396, CI [0.025, 0.249], Accept
H6. Convenience → satisfaction: β = 0.126**, T = 2.192, CI [0.014, 0.239], Accept
H7. Format → satisfaction: β = 0.323***, T = 5.115, CI [0.202, 0.444], Accept

Note: ** and *** show significance levels at p < 0.010 and p < 0.001, respectively.
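The accept/reject decisions above follow mechanically from whether a bootstrapped confidence interval excludes zero, which can be sketched as follows. The first interval is the H7 (format) row; the second is an invented interval spanning zero, shown only to illustrate a non-significant path.

```python
# Significance check for a bootstrapped path coefficient: the path is
# significant when its confidence interval does not contain zero.
def significant(ci_lower, ci_upper):
    """True when the interval [ci_lower, ci_upper] excludes zero."""
    return ci_lower > 0 or ci_upper < 0

print(significant(0.202, 0.444))  # H7 (format): True -> accept
print(significant(-0.05, 0.10))   # interval spanning zero: False -> reject
```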
Figure 2: Summary of hypothesis testing from factors affecting ChatGPT's user information satisfaction (H1: β = −0.096**; H2: β = 0.070 ns; H3: β = 0.245***; H4: β = −0.016 ns; H5: β = 0.138**; H6: β = 0.126**; H7: β = 0.323***; R² = 0.581).
them to continue their academic work without delays. ChatGPT's rapid response times make it especially valuable in educational contexts, where meeting tight deadlines and accessing timely information are essential. This is particularly beneficial for students facing looming deadlines on assignments or projects and for educators needing to quickly prepare course materials or address student questions. This observation aligns with recent research highlighting the importance of timely information in sustaining user engagement with technology [81]. In this light, ChatGPT's ability to provide prompt responses enhances the efficiency of both teaching and learning, fostering a more robust and responsive educational environment.
The findings suggest that in higher education, the usability and accessibility of AI tools like ChatGPT are increasingly prioritized over accuracy and reliability. This shift may be driven by users' growing familiarity with AI technologies and their rising expectations, as observed by Ameen et al. [82]; Ma and Huo [83]; and Bubaš, Čižmešija, and Kovačić [84]. However, while ease of use seems to be taking precedence, the importance of accuracy and reliability cannot be overstated, especially in academic contexts where these factors are crucial for maintaining the integrity of research and teaching. This trend may reflect a broader change in how academic users interact with technology, focusing on quick and convenient access to information. Despite this, it is essential to ensure that the pursuit of convenience does not come at the cost of accuracy, as upholding high IQ standards is vital for academic integrity.
The study's results also imply that ethical issues should be a priority when using AI tools and applications like ChatGPT in education. It is important that ChatGPT does not provide incorrect information that leads to flawed academic submissions. For instance, when students use ChatGPT for research purposes, the tool should limit the sources it suggests to relevant and accurate ones, so that students do not build their papers on fabricated references. Addressing these ethical problems is crucial to preventing the misuse of AI and the unethical behavior it can encourage. Both educators and software developers need to ensure that applications such as ChatGPT are used in learning institutions in a manner that does not compromise institutional integrity. This approach will help preserve the quality and integrity of academic work and the learning process as a whole.
Additionally, our study emphasizes the importance of tailoring the integrated UIS and IQT framework to account for factors that affect user satisfaction in the context of generative AI. Enhancing elements like the convenience, presentation, and timeliness of information can help system designers better align with user needs and preferences. This research makes a valuable contribution to the existing literature by identifying key factors that impact user satisfaction with ChatGPT in educational settings. These insights can assist educators, software developers, and researchers in effectively integrating AI technologies into educational contexts and improving user interactions. Future research should explore these areas further to refine the UIS framework and adapt it to the rapidly changing landscape of AI in education.
9. Implication

9.1. Implication for Theory. This research makes a notable theoretical contribution by applying UIS and IQT to the context of generative AI, specifically ChatGPT, within higher education. While traditional UIS models, like those proposed by Ives, Olson, and Baroudi [19], have historically focused on accuracy and reliability as key factors in user satisfaction, our findings indicate a shift in importance towards dimensions such as completeness, convenience, format, precision, and timeliness in AI-driven environments. This shift challenges conventional frameworks and highlights the need to update these theories to better reflect the evolving nature of modern ISs and AI technologies [85].
Expanding these dimensions broadens the UIS and IQT framework to better align with the evolving expectations of users in the digital age. This study builds on and extends the work of Laumer et al. [85], who highlighted the need to update traditional UIS measures to account for the complexities of digital interactions. By incorporating these newer factors, our research not only deepens the theoretical understanding of UIS but also offers a more comprehensive view of user satisfaction, especially relevant for AI integration in educational contexts. This enhancement of the UIS framework, which now includes elements like convenience and format, marks a significant step forward in the field, reflecting current trends in user-centered design and AI adoption in education [25, 30].
This study fills a significant gap in current research by being among the first to apply UIS and IQT to the use of generative AI tools like ChatGPT in higher education. Although previous research has looked at UIS in relation to various educational technologies (e.g., [30, 34]), the specific challenges and opportunities posed by AI have not been thoroughly explored. By examining how higher education users interact with and perceive the quality of information from ChatGPT, our study provides fresh insights that contribute to the broader field of educational technology and user satisfaction. The findings indicate that within the context of AI, users might prioritize factors like usability and accessibility over traditional concerns such as accuracy and reliability, which could have important implications for both future research and practical applications [82, 84].
Finally, this research contributes to the ongoing development of UIS and IQT by proposing a more comprehensive framework that more accurately reflects user interactions with AI technologies. By incorporating dimensions such as completeness, convenience, format, precision, and timeliness, the study enhances existing models and provides a clearer understanding of the factors driving user satisfaction in modern ISs. This theoretical advancement not only deepens academic insights but also offers practical guidelines for the design and implementation of AI systems in education, ensuring they meet the evolving needs of users in a digital world [61, 81].
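The operationalization just described can be made concrete: in UIS/IQT studies, each dimension is typically measured by several Likert-scale survey items that are averaged into a composite score. The sketch below illustrates that step in Python; the dimension subset, item counts, and values are illustrative assumptions, not the study's actual instrument.

```python
from statistics import mean

# Hypothetical 5-point Likert responses from one participant for three
# of the IQ dimensions discussed above (values are made up for the sketch).
responses = {
    "completeness": [4, 5, 4],
    "convenience": [5, 5, 4],
    "timeliness": [3, 4, 4],
}

def composite_scores(items):
    """Average each dimension's items into a single composite score."""
    return {dim: round(mean(vals), 2) for dim, vals in items.items()}

print(composite_scores(responses))
# {'completeness': 4.33, 'convenience': 4.67, 'timeliness': 3.67}
```

In a full analysis, these composites (or the raw items) would then feed a structural model such as the PLS-SEM used in this study.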
9.2. Implications for Practice. This research offers practical insights for optimizing the use of ChatGPT and similar generative AI tools, particularly by challenging the traditional
13Human Behavior and Emerging Technologies
emphasis on accuracy and reliability as the primary metrics of success in ISs. Our findings suggest that organizations should instead prioritize factors like completeness, convenience, format, precision, and timeliness [36]. For example, organizations could improve the user experience by implementing customizable interface settings that allow users to choose how information is presented, whether through concise summaries, detailed explanations, or even graphical representations like charts and infographics. Studies have demonstrated that enabling users to customize the presentation of information can significantly enhance satisfaction and engagement, as evidenced by adaptive e-learning systems that tailor content based on user preferences [86, 25].
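One way such a customizable setting could work is to map each presentation preference onto an instruction appended to the user's query. The sketch below is a minimal illustration of that idea; the preference names and instruction wording are assumptions for the example, not features of ChatGPT's actual interface.

```python
# Hypothetical mapping from a presentation preference to a prompt
# instruction; names and wording are illustrative assumptions.
FORMAT_INSTRUCTIONS = {
    "summary": "Answer in three bullet points or fewer.",
    "detailed": "Answer with a step-by-step explanation.",
    "visual": "Where possible, answer as a Markdown table.",
}

def build_prompt(question, preference="summary"):
    """Append the instruction matching the user's chosen format."""
    instruction = FORMAT_INSTRUCTIONS.get(preference, FORMAT_INSTRUCTIONS["summary"])
    return f"{question}\n\n{instruction}"

print(build_prompt("What is information quality theory?", "detailed"))
```

Unknown preferences fall back to the summary format, so a misconfigured profile still yields a usable prompt.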
At the organizational level, prioritizing aspects of IQT in academic settings can have a significant impact on research, teaching, and other educational activities. For instance, in research, ensuring that AI tools like ChatGPT provide complete and precise information can greatly enhance the quality of literature reviews, data analysis, and the overall research process. When researchers can depend on AI to deliver comprehensive, well-organized, and contextually relevant information, they can save time and concentrate more on critical thinking and analysis, thereby improving the overall quality of their work [50, 87]. In teaching, educators can utilize AI tools to present information in formats that accommodate different learning styles, such as diagrams and charts for visual learners or summarized bullet points for auditory learners. This flexibility can result in more effective teaching strategies, better learning outcomes, and higher engagement [54, 88].
For individuals, especially those who frequently interact with AI technologies like ChatGPT, this study underscores the importance of mastering system features to maximize their utility. Educational programs could incorporate specific training on how to effectively use AI tools, with a focus on navigating and customizing information formats to meet individual needs. For example, such training could teach users how to adjust the level of detail in ChatGPT's responses or how to apply filters to prioritize the most relevant information. Similar strategies have been successfully implemented in professional development programs, where workers are trained to customize their digital tools to enhance productivity [89]. As users become more skilled at leveraging these functionalities, they can significantly improve their efficiency and effectiveness in various tasks, which is increasingly crucial in the digital environment [90].
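As a minimal illustration of the kind of relevance filter such training might cover, the sketch below keeps only the sentences of a response that mention a user-supplied keyword. The naive split on periods is an assumption made to keep the example dependency-free; a real implementation would use proper sentence segmentation.

```python
def filter_relevant(response, keywords):
    """Keep only sentences mentioning at least one keyword (naive '.' split)."""
    sentences = [s.strip() for s in response.split(".") if s.strip()]
    kept = [s for s in sentences if any(k.lower() in s.lower() for k in keywords)]
    return ". ".join(kept) + ("." if kept else "")

text = "ChatGPT answers quickly. The weather is mild. ChatGPT can summarize papers."
print(filter_relevant(text, ["chatgpt"]))
# ChatGPT answers quickly. ChatGPT can summarize papers.
```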
Finally, equipping users with the knowledge to fully utilize AI tools underscores the critical importance of digital literacy in the modern age. Users who understand and effectively leverage the advanced features of systems like ChatGPT can significantly enhance their technological literacy and efficacy, which is essential for navigating today's complex technological landscape [91]. Additionally, addressing user satisfaction with information presentation formats, such as ensuring that information is structured in a user-friendly manner, can further boost the utility and effectiveness of these tools. For example, incorporating features that allow users to organize information into visual formats like mind maps or flowcharts could be particularly beneficial for students and professionals in fields that heavily rely on complex data visualization [61]. This user-centric approach not only benefits individuals but also supports the broader goal of integrating AI tools into society in ways that maximize their utility, foster growth and innovation, and create more seamless user experiences.
Moreover, applying IQT in academic contexts is essential for maintaining the quality and integrity of educational outputs. For instance, in teaching, the precision and timeliness of information provided by AI tools like ChatGPT can significantly impact the effectiveness of lesson plans and the delivery of educational content. Educators who depend on accurate and up-to-date information are better positioned to create relevant and current materials for their students, thereby enhancing the overall learning experience [24, 83]. In research, the completeness and reliability of information obtained from AI can affect the validity of academic papers and projects. Ensuring that AI tools meet high IQ standards helps researchers produce work that is both innovative and reliable, thereby contributing to the advancement of knowledge in their respective fields [84].
10. Conclusion and Limitations
As this study draws to a close, it becomes clear that the UIS framework requires reexamination in the era of generative AI, notably in the application of ChatGPT. Our research challenges the traditionally held belief that accuracy and reliability are the paramount indicators of user satisfaction, proposing instead that factors such as completeness, convenience, format, precision, and timeliness are now critical. This shift reflects how users' interactions with AI interfaces are evolving, necessitating a reformulation of the UIS framework to better capture these emerging preferences. By aligning AI system design with these new demands, we can improve user satisfaction and enhance the overall utility of AI in educational contexts.
This research provides valuable insights for various stakeholders. Teachers and schools should prioritize incorporating AI technologies such as ChatGPT in a manner that highlights these satisfaction elements. For instance, offering guidance on the utilization of ChatGPT for academic purposes can maximize its advantages and address any potential issues. Decision-makers need to take these aspects into account when crafting policies and criteria for AI integration in education, ensuring that the use of AI enriches educational experiences while upholding academic values and principles.
Despite its contributions, this study has limitations that open avenues for future research. Our investigation was constrained by its exclusive focus on ChatGPT, which, although representative of generative AI, may not capture the full spectrum of user interactions with other AI platforms. Additionally, the reliance on self-reported measures of satisfaction could introduce response biases, potentially distorting our understanding of user experiences. Future research should expand to include a broader range of AI tools and incorporate objective usage data, providing a more comprehensive view of user satisfaction and behavior. Longitudinal studies could also be beneficial, tracking how user satisfaction evolves over time as individuals become more accustomed to these technologies.
Data Availability Statement
The data presented in this study are available on request from
the corresponding author (andridksilalahi@gmail.com).
Conflicts of Interest
The authors declare no conflicts of interest.
Author Contributions
Conceptualization: C.J.F., I.T.S., and A.D.K.S. Methodology:
A.D.K.S. and I.J.E. Software: A.D.K.S. Validation: S.J. and
D.T.T.P. Formal analysis: A.D.K.S. Investigation: A.D.K.S.
and D.T.T.P. Resources: C.J.F. and I.T.S. Data curation:
A.D.K.S. Writing – original draft preparation: A.D.K.S., I.T.S., D.T.T.P., and I.J.E. Writing – review and editing: C.J.F., S.J.,
and I.J.E. Visualization: I.J.E. Supervision: C.J.F. Project
administration: A.D.K.S. Funding acquisition: C.J.F. All
authors have read and agreed to the published version of the
manuscript.
Funding
This work was supported by the National Science and Technology Council under grant number 113-2637-H-324-003-.
References
[1] B. D. Lund, T. Wang, N. R. Mannuru, B. Nie, S. Shimray, and Z. Wang, "ChatGPT and a new academic reality: Artificial Intelligence-written research papers and the ethics of the large language models in scholarly publishing," Journal of the Association for Information Science and Technology, vol. 74, no. 5, pp. 570–581, 2023.
[2] R. Bringula, "What do academics have to say about ChatGPT? A text mining analytics on the discussions regarding ChatGPT on research writing," in AI and Ethics, pp. 1–13, Springer, 2023.
[3] H. B. Essel, D. Vlachopoulos, A. Tachie-Menson, E. E. Johnson, and P. K. Baah, "The impact of a virtual teaching assistant (chatbot) on students' learning in Ghanaian higher education," International Journal of Educational Technology in Higher Education, vol. 19, no. 1, pp. 1–19, 2022.
[4] K. Roose, "Don't ban ChatGPT in schools. Teach with it," International New York Times, 2023.
[5] F. Fui-Hoon Nah, R. Zheng, J. Cai, K. Siau, and L. Chen, "Generative AI and ChatGPT: applications, challenges, and AI-human collaboration," Journal of Information Technology Case and Application Research, vol. 25, no. 3, pp. 277–304, 2023.
[6] Y. Meron and Y. Tekmen Araci, "Artificial intelligence in design education: evaluating ChatGPT as a virtual colleague for post-graduate course development," Design Science, vol. 9, article e30, 2023.
[7] Y. K. Dwivedi, N. Kshetri, L. Hughes et al., "Opinion Paper: 'So what if ChatGPT wrote it?' Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy," International Journal of Information Management, vol. 71, article 102642, 2023.
[8] J. Kocoń, I. Cichecki, O. Kaszyca et al., "ChatGPT: jack of all trades, master of none," Information Fusion, vol. 99, article 101861, 2023.
[9] L. De Angelis, F. Baglivo, G. Arzilli et al., "ChatGPT and the rise of large language models: the new AI-driven infodemic threat in public health," Frontiers in Public Health, vol. 11, article 1166120, 2023.
[10] G. Deiana, M. Dettori, A. Arghittu, A. Azara, G. Gabutti, and P. Castiglia, "Artificial intelligence and public health: evaluating ChatGPT responses to vaccination myths and misconceptions," Vaccines, vol. 11, no. 7, p. 1217, 2023.
[11] B. Foroughi, M. G. Senali, M. Iranmanesh et al., "Determinants of intention to use ChatGPT for educational purposes: findings from PLS-SEM and fsQCA," International Journal of Human–Computer Interaction, vol. 40, no. 17, pp. 4501–4520, 2023.
[12] A. Strzelecki, "To use or not to use ChatGPT in higher education? A study of students' acceptance and use of technology," Interactive Learning Environments, pp. 1–14, 2023.
[13] Y. H. Chang, A. D. K. Silalahi, and K. Y. Lee, "From uncertainty to tenacity: investigating user strategies and continuance intentions in AI-powered ChatGPT with uncertainty reduction theory," International Journal of Human–Computer Interaction, pp. 1–19, 2024.
[14] D. Menon and K. Shilpa, "Chatting with ChatGPT: analyzing the factors influencing users' intention to use the open AI's ChatGPT using the UTAUT model," Heliyon, vol. 9, no. 11, article e20962, 2023.
[15] D. R. E. Cotton, P. A. Cotton, and J. R. Shipway, "Chatting and cheating: ensuring academic integrity in the era of ChatGPT," Innovations in Education and Teaching International, vol. 61, no. 2, pp. 228–239, 2023.
[16] S. A. Bin-Nashwan, M. Sadallah, and M. Bouteraa, "Use of ChatGPT in academia: academic integrity hangs in the balance," Technology in Society, vol. 75, article 102370, 2023.
[17] M. Liu, Y. Ren, L. M. Nyagoga, F. Stonier, Z. Wu, and L. Yu, "Future of education in the era of generative artificial intelligence: consensus among Chinese scholars on applications of ChatGPT in schools," Future in Educational Research, vol. 1, no. 1, pp. 72–101, 2023.
[18] A. N. Ansari, S. Ahmad, and S. M. Bhutta, "Mapping the global evidence around the use of ChatGPT in higher education: a systematic scoping review," Education and Information Technologies, vol. 29, no. 9, pp. 11281–11321, 2024.
[19] B. Ives, M. H. Olson, and J. J. Baroudi, "The measurement of user information satisfaction," Communications of the ACM, vol. 26, no. 10, pp. 785–793, 1983.
[20] R. Y. Wang and D. M. Strong, "Beyond Accuracy: What Data Quality Means to Data Consumers," Journal of Management Information Systems, vol. 12, no. 4, pp. 5–33, 1996.
[21] T. H. Baek and M. Kim, "Is ChatGPT scary good? How user motivations affect creepiness and trust in generative artificial intelligence," Telematics and Informatics, vol. 83, article 102030, 2023.
[22] C. K. Lo, "What is the impact of ChatGPT on education? A rapid review of the literature," Education Sciences, vol. 13, no. 4, p. 410, 2023.
[23] P. Rivas and L. Zhao, "Marketing with ChatGPT: navigating the ethical terrain of GPT-based chatbot technology," AI, vol. 4, no. 2, pp. 375–384, 2023.
[24] C. J. Fu, A. D. K. Silalahi, S. C. Huang, D. T. T. Phuong, I. J. Eunike, and Z. H. Yu, "The (un)knowledgeable, the (un)skilled? Undertaking ChatGPT users' benefit-risk-coping paradox in higher education focusing on an integrated UTAUT and PMT," International Journal of Human–Computer Interaction, pp. 1–31, 2024.
[25] X. Wang, T. Huang, W. Zhang, Q. Zeng, and X. Sun, "Is information normalization helpful in online communication? Evidence from online healthcare consultation," Internet Research, 2024.
[26] J. Kim, J. H. Kim, C. Kim, and J. Park, "Decisions with ChatGPT: reexamining choice overload in ChatGPT recommendations," Journal of Retailing and Consumer Services, vol. 75, article 103494, 2023.
[27] J. H. Kim, J. Kim, J. Park, C. Kim, J. Jhang, and B. King, "When ChatGPT gives incorrect answers: the impact of inaccurate information by generative AI on tourism decision-making," Journal of Travel Research, 2023.
[28] F. Melchert, R. Müller, and M. Fischer, "Adapting information quality theory to AI-driven educational platforms: ensuring relevance and satisfaction in learning outcomes," Education and Information Technologies, vol. 29, no. 1, pp. 45–68, 2024.
[29] X. Wang, Y. Li, and L. Chen, "Information normalization in online healthcare consultations: balancing informational and emotional support," Journal of Business Research, vol. 136, no. 1, pp. 42–56, 2024.
[30] U. Haryaka, A. Agus, and A. H. Kridalaksana, "User satisfaction model for E-learning using smartphone," Procedia Computer Science, vol. 116, pp. 373–380, 2017.
[31] J. Ang and S. Koh, "Exploring the relationships between user information satisfaction and job satisfaction," International Journal of Information Management, vol. 17, no. 3, pp. 169–177, 1997.
[32] N. Au, E. W. Ngai, and T. E. Cheng, "A critical review of end-user information system satisfaction research and a new research framework," Omega, vol. 30, no. 6, pp. 451–478, 2002.
[33] S. Siritongthaworn and D. Krairit, "Satisfaction in E-learning: the context of supplementary instruction," Campus-Wide Information Systems, vol. 23, no. 2, pp. 76–91, 2006.
[34] Y. S. Wang, "Assessment of learner satisfaction with asynchronous electronic learning systems," Information & Management, vol. 41, no. 1, pp. 75–86, 2003.
[35] D. F. Galletta and A. L. Lederer, "Some cautions on the measurement of user information satisfaction," Decision Sciences, vol. 20, no. 3, pp. 419–434, 1989.
[36] S. Laumer, C. Maier, and T. Weitzel, "Information quality, user satisfaction, and the manifestation of workarounds: a qualitative and quantitative study of enterprise content management system users," European Journal of Information Systems, vol. 26, no. 4, pp. 333–360, 2017.
[37] B. Bai, R. Law, and I. Wen, "The impact of website quality on customer satisfaction and purchase intentions: evidence from Chinese online visitors," International Journal of Hospitality Management, vol. 27, no. 3, pp. 391–402, 2008.
[38] P. Shamala, R. Ahmad, A. Zolait, and M. Sedek, "Integrating information quality dimensions into information security risk management (ISRM)," Journal of Information Security and Applications, vol. 36, pp. 1–10, 2017.
[39] M. Golfarelli, D. Maio, and S. Rizzi, "The dimensional fact model: a conceptual model for data warehouses," International Journal of Cooperative Information Systems, vol. 7, no. 2–3, pp. 215–247, 1998.
[40] C. W. Fisher, I. Chengalur-Smith, and D. P. Ballou, "The impact of experience and time on the use of data quality information in decision making," Information Systems Research, vol. 14, no. 2, pp. 170–188, 2003.
[41] M. K. Williams, "John Dewey in the 21st century," Journal of Inquiry and Action in Education, vol. 9, no. 1, 2017.
[42] S. Liu, N. Wang, B. Gao, and M. Gallivan, "To be similar or to be different? The effect of hotel managers' rote response on subsequent reviews," Tourism Management, vol. 86, article 104346, 2021.
[43] I. Alhassan, D. Sammon, and M. Daly, "Data governance activities: an analysis of the literature," Journal of Decision Systems, vol. 25, supplement 1, pp. 64–75, 2016.
[44] C. Batini, C. Cappiello, C. Francalanci, and A. Maurino, "Methodologies for data quality assessment and improvement," ACM Computing Surveys (CSUR), vol. 41, no. 3, pp. 1–52, 2009.
[45] L. Martín, L. Sánchez, J. Lanza, and P. Sotres, "Development and evaluation of artificial intelligence techniques for IoT data quality assessment and curation," Internet of Things, vol. 22, article 100779, 2023.
[46] S. Fosso Wamba, S. Akter, L. Trinchera, and M. De Bourmont, "Turning information quality into firm performance in the big data economy," Management Decision, vol. 57, no. 8, pp. 1756–1783, 2019.
[47] S. Petter and A. Fruhling, "Evaluating the success of an emergency response medical information system," International Journal of Medical Informatics, vol. 80, no. 7, pp. 480–489, 2011.
[48] W. H. Tsai, P. L. Lee, Y. S. Shen, and H. L. Lin, "A comprehensive study of the relationship between enterprise resource planning selection criteria and enterprise resource planning system success," Information & Management, vol. 49, no. 1, pp. 36–46, 2012.
[49] B. H. Wixom and P. A. Todd, "A theoretical integration of user satisfaction and technology acceptance," Information Systems Research, vol. 16, no. 1, pp. 85–102, 2005.
[50] H. H. Chang, P. H. Hsieh, and C. S. Fu, "The mediating role of sense of virtual community," Online Information Review, vol. 40, no. 7, pp. 882–899, 2016.
[51] W. J. Kettinger and C. C. Lee, "Perceived service quality and user satisfaction with the information services function," Decision Sciences, vol. 25, no. 5–6, pp. 737–766, 1994.
[52] T. Chi, "Mobile commerce website success: antecedents of consumer satisfaction and purchase intention," Journal of Internet Commerce, vol. 17, no. 3, pp. 189–215, 2018.
[53] J. Xu, I. Benbasat, and R. T. Cenfetelli, "Integrating service quality with system and information quality: an empirical test in the E-service context," MIS Quarterly, vol. 37, no. 3, pp. 777–794, 2013.
[54] K. Al-Qeisi, C. Dennis, E. Alamanos, and C. Jayawardhena, "Website design quality and usage behavior: unified theory of acceptance and use of technology," Journal of Business Research, vol. 67, no. 11, pp. 2282–2290, 2014.
[55] N. Urbach, P. Drews, and J. Ross, "Digital business transformation and the changing role of the IT function," MIS Quarterly Executive, vol. 16, no. 2, pp. 1–4, 2017.
[56] S. Yoon and M. Kim, "A study on the improvement direction of artificial intelligence speakers applying DeLone and McLean's information system success model," Human Behavior and Emerging Technologies, vol. 2023, no. 1, Article ID 2683458, 2023.
[57] Z. Huang and M. Benyoucef, "The effects of social commerce design on consumer purchase decision-making: an empirical study," Electronic Commerce Research and Applications, vol. 25, pp. 40–58, 2017.
[58] S. Gupta, M. Motlagh, and J. Rhyner, "The digitalization sustainability matrix: a participatory research tool for investigating digitainability," Sustainability, vol. 12, no. 21, p. 9283, 2020.
[59] C. M. K. Cheung and M. K. O. Lee, "User satisfaction with an internet-based portal: an asymmetric and nonlinear approach," Journal of the American Society for Information Science and Technology, vol. 60, no. 1, pp. 111–122, 2009.
[60] A. Saka, R. Taiwo, N. Saka et al., "GPT models in construction industry: opportunities, limitations, and a use case validation," Developments in the Built Environment, vol. 17, article 100300, 2024.
[61] K. Reinecke and A. Bernstein, "Knowing what a user likes: a design science approach to interfaces that automatically adapt to culture," MIS Quarterly, vol. 37, no. 2, pp. 427–453, 2013.
[62] K. I. Roumeliotis, N. D. Tselikas, and D. K. Nasiopoulos, "LLMs in E-commerce: a comparative analysis of GPT and LLaMA models in product review evaluation," Natural Language Processing Journal, vol. 6, article 100056, 2024.
[63] I. A. Wong, Q. L. Lian, and D. Sun, "Autonomous travel decision-making: an early glimpse into ChatGPT and generative AI," Journal of Hospitality and Tourism Management, vol. 56, pp. 253–263, 2023.
[64] Y. Chen, F. M. Zahedi, A. Abbasi, and D. Dobolyi, "Trust calibration of automated security IT artifacts: a multi-domain study of phishing-website detection tools," Information & Management, vol. 58, no. 1, article 103394, 2021.
[65] B. Niu and G. F. Nkoulou Mvondo, "I am ChatGPT, the ultimate AI chatbot! Investigating the determinants of users' loyalty and ethical usage concerns of ChatGPT," Journal of Retailing and Consumer Services, vol. 76, article 103562, 2024.
[66] D. Akiba and M. C. Fraboni, "AI-supported academic advising: exploring ChatGPT's current state and future potential toward student empowerment," Education Sciences, vol. 13, no. 9, p. 885, 2023.
[67] N. Saif, S. U. Khan, I. Shaheen, A. ALotaibi, M. M. Alnfiai, and M. Arif, "Chat-GPT; validating technology acceptance model (TAM) in education sector via ubiquitous learning mechanism," Computers in Human Behavior, vol. 154, article 108097, 2024.
[68] J. Jin and M. Kim, "GPT-empowered personalized eLearning system for programming languages," Applied Sciences, vol. 13, no. 23, p. 12773, 2023.
[69] S. Park, H. Zo, A. P. Ciganek, and G. G. Lim, "Examining success factors in the adoption of digital object identifier systems," Electronic Commerce Research and Applications, vol. 10, no. 6, pp. 626–636, 2011.
[70] A. Bhattacherjee, "Understanding information systems continuance: an expectation-confirmation model," MIS Quarterly, vol. 25, no. 3, pp. 351–370, 2001.
[71] H. Baumgartner, B. Weijters, and R. Pieters, "The biasing effect of common method variance: some clarifications," Journal of the Academy of Marketing Science, vol. 49, no. 2, pp. 221–235, 2021.
[72] J. Hair, C. L. Hollingsworth, A. B. Randolph, and A. Y. L. Chong, "An updated and expanded assessment of PLS-SEM in information systems research," Industrial Management & Data Systems, vol. 117, no. 3, pp. 442–458, 2017.
[73] C. Fornell and D. F. Larcker, "Evaluating structural equation models with unobservable variables and measurement error," Journal of Marketing Research, vol. 18, no. 1, pp. 39–50, 1981.
[74] J. Henseler, C. M. Ringle, and M. Sarstedt, "A new criterion for assessing discriminant validity in variance-based structural equation modeling," Journal of the Academy of Marketing Science, vol. 43, pp. 115–135, 2015.
[75] R. F. Falk and N. B. Miller, A Primer for Soft Modeling, University of Akron Press, 1992.
[76] M. Tenenhaus, V. Esposito Vinzi, Y.-M. Chatelin, and C. Lauro, "PLS path modeling," Computational Statistics & Data Analysis, vol. 48, no. 1, pp. 159–205, 2005.
[77] M. Wetzels, G. Odekerken-Schröder, and C. Van Oppen, "Using PLS path modeling for assessing hierarchical construct models: guidelines and empirical illustration," MIS Quarterly, vol. 33, no. 1, pp. 177–195, 2009.
[78] S. C. Huang, A. D. K. Silalahi, and I. J. Eunike, "Exploration of moderated, mediated, and configurational outcomes of tourism-related content (TRC) on TikTok in predicting enjoyment and behavioral intentions," Human Behavior and Emerging Technologies, vol. 2024, no. 1, Article ID 2764759, 2024.
[79] C. J. Fu, A. D. K. Silalahi, I. T. Shih, D. T. T. Phuong, I. J. Eunike, and S. Jargalsaikhan, "To satisfy or clarify: enhancing user information satisfaction with AI-powered ChatGPT," Engineering Proceedings, vol. 74, no. 1, 2024.
[80] S. Pan, J. Cui, and Y. Mou, "Desirable or distasteful? Exploring uncertainty in human-chatbot relationships," International Journal of Human–Computer Interaction, pp. 1–11, 2023.
[81] T. M. Brill, L. Munoz, and R. J. Miller, "Siri, Alexa, and other digital assistants: a study of customer satisfaction with artificial intelligence applications," in The Role of Smart Technologies in Decision Making, pp. 35–70, Routledge, 2022.
[82] N. Ameen, A. Tarhini, A. Reppel, and A. Anand, "Customer experiences in the age of artificial intelligence," Computers in Human Behavior, vol. 114, article 106548, 2021.
[83] X. Ma and Y. Huo, "Are users willing to embrace ChatGPT? Exploring the factors on the acceptance of chatbots from the perspective of AIDUA framework," Technology in Society, vol. 75, article 102362, 2023.
[84] G. Bubaš, A. Čižmešija, and A. Kovačić, "Development of an assessment scale for measurement of usability and user experience characteristics of Bing Chat conversational AI," Future Internet, vol. 16, no. 1, p. 4, 2024.
[85] S. Laumer, A. Eckhardt, and N. Trunk, "Do as your parents say? Analyzing IT adoption influencing factors for full and under age applicants," Information Systems Frontiers, vol. 12, no. 2, pp. 169–183, 2010.
[86] F. Johnson, J. Rowley, and L. Sbaffi, "Exploring Information Interactions in the Context of Google," Journal of the Association for Information Science and Technology, vol. 67, no. 4, pp. 824–840, 2016.
[87] K. Williams, G. Berman, and S. Michalska, "Investigating hybridity in artificial intelligence research," Big Data & Society, vol. 10, no. 2, article 20539517231180577, 2023.
[88] J. Anderson, L. Rainie, and A. Luchsinger, "Artificial intelligence and the future of humans," Pew Research Center, vol. 10, no. 12, 2018.
[89] Y. Lu, "Artificial intelligence: a survey on evolution, models, applications and future trends," Journal of Management Analytics, vol. 6, no. 1, pp. 1–29, 2019.
[90] R. C. Davis, "Internet connection: AI and libraries: supporting machine learning work," Behavioral & Social Sciences Librarian, vol. 36, no. 3, pp. 109–112, 2017.
[91] F. Martin and J. Ertzberger, "Effects of reflection type in the here and now mobile learning environment," British Journal of Educational Technology, vol. 47, no. 5, pp. 932–944, 2016.
18 Human Behavior and Emerging Technologies
... This technology suits various applications since it can adjust to different conditions and situations. It is highly accurate and fluent in responding to commands, but it requires a deep comprehension of the environment and the capacity for human thought [82,83]. Few studies have found the potential of ChatGPT in the tertiary education levels. ...
Article
Full-text available
The significance of social media content in consumers’ decision-making journeys has acquired substantial attention among scholars and business practitioners in recent times. However, the exploration of how marketing strategies should design social media content to influence behavioral intentions remains fairly inadequate, particularly within the tourism industry. This study is aimed at developing a model that includes the moderating, mediating, and configuration effects of tourism-related content (TRC) dimensions on TikTok to predict enjoyment and behavioral intention. This study employs a hybrid approach of structural equation modeling (SEM) and fuzzy set qualitative comparative analysis (fsQCA) to test hypotheses and propositions using a sample of 319 participants who have experience watching TRC on TikTok and have the intention to visit the destinations presented in the content. The results from SEM confirm that content reliability and understandability significantly influence perceived enjoyment. Furthermore, visit intention is predicted to increase through the contributions of content understandability and perceived enjoyment. Insights from the mediating effect reveal that perceived enjoyment serves as a fully mediating factor between content understandability and visit intention. Moreover, the moderating effects of gender and frequency of use exhibit significant differences in their impacts on perceived enjoyment and visit intention. The outcomes of fsQCA confirm that various configurations of TRC dimensions and enjoyment provide valuable insights for designing content-marketing strategies. The consideration of different combinations of these constructs can impact behavioral intentions. This research makes significant contributions to both theory and marketing practice, as the comprehensive discussion of the combinations of configurations provides amplified insights into this study’s findings.
Article
Full-text available
After the introduction of the ChatGPT conversational artificial intelligence (CAI) tool in November 2022, there has been rapidly growing interest in the use of such tools in higher education. While the educational uses of some other information technology (IT) tools (including collaboration and communication tools, learning management systems, chatbots, and videoconferencing tools) have been frequently evaluated regarding technology acceptance and usability attributes, similar evaluations of CAI tools and services like ChatGPT, Bing Chat, and Bard have only recently started to appear in the scholarly literature. In our study, we present a newly developed set of assessment scales related to the usability and user experience of CAI tools when used by university students, as well as the results of an evaluation of these scales specifically for the CAI Bing Chat tool (i.e., Microsoft Copilot). The following scales were developed and evaluated using a convenience sample (N = 126) of higher education students: Perceived Usefulness, General Usability, Learnability, System Reliability, Visual Design and Navigation, Information Quality, Information Display, Cognitive Involvement, Design Appeal, Trust, Personification, Risk Perception, and Intention to Use. For most of the aforementioned scales, internal consistency (Cronbach's alpha) ranged from satisfactory to good, implying their potential usefulness for further studies of related attributes of CAI tools. A stepwise linear regression revealed that the most influential predictors of Intention to Use Bing Chat (or ChatGPT) in the future were the usability variable Perceived Usefulness and two user experience variables—Trust and Design Appeal. Also, our study revealed that students’ perceptions of various specific usability and user experience characteristics of Bing Chat were predominantly positive. The evaluated assessment scales could be beneficial in further research that would include other CAI tools like ChatGPT/GPT-4 and Bard.
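The scale evaluation above hinges on Cronbach's alpha as the internal-consistency criterion. As a reminder of what that statistic computes, here is a minimal NumPy sketch; the sample score matrix is invented for illustration, not drawn from the study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical scores: two perfectly correlated items give alpha = 1.0
scores = np.array([[1, 1], [2, 2], [3, 3], [4, 4]])
print(round(cronbach_alpha(scores), 2))  # 1.0
```

Conventionally, values around 0.7 are read as "satisfactory" and around 0.8 as "good", which matches the range the abstract reports for most scales.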
Article
Full-text available
Large Language Models (LLMs) trained on large data sets came into prominence in 2018 after Google introduced BERT. Subsequently, different LLMs such as GPT models from OpenAI have been released. These models perform well on diverse tasks and have been gaining widespread application in fields such as business and education. However, little is known about the opportunities and challenges of using LLMs in the construction industry. Thus, this study aims to assess GPT models in the construction industry. A critical review, expert discussion and case study validation are employed to achieve the study's objectives. The findings revealed opportunities for GPT models throughout the project lifecycle. The challenges of leveraging GPT models are highlighted and a use case prototype is developed for materials selection and optimization. The findings of the study would be of benefit to researchers, practitioners and stakeholders, as the study presents research directions for LLMs in the construction industry.
Article
Purpose This study aims to investigate the role of information normalization in online healthcare consultation, a typical complex human-to-human communication requiring both effectiveness and efficiency. The globalization and digitization trend calls for high-quality information, and normalization is considered an effective method for improving information quality. Meanwhile, some researchers have argued that excessive normalization (standardized answers) may be perceived as impersonal, repetitive, and cold, and is thus unwelcome in human-to-human communication, for instance, when patients are anxious about their health condition (e.g. with high-risk disease) in online healthcare consultation. Therefore, the role of information normalization in human communication is worth exploring. Design/methodology/approach Data were collected from one of the largest online healthcare consultation platforms (Dxy.com). This study expanded the existing information quality model by introducing information normalization as a new dimension. Information normalization was assessed using medical templates, extracted through natural language processing methods such as Bidirectional Encoder Representations from Transformers (BERT) and Latent Dirichlet Allocation (LDA). Patient decision-making behaviors, namely, consultant selection and satisfaction, were chosen to evaluate communication performance. Findings The results confirmed the positive impact of information normalization on communication performance. Additionally, a negative moderating effect of disease risk on the relationship between information normalization and patient decision-making was identified. Furthermore, the study demonstrated that information normalization can be enhanced through experiential learning. Originality/value These findings highlight the significance of information normalization in online healthcare communication and extend the existing information quality model. They also facilitate patient decision-making on online healthcare platforms by providing a comprehensive information quality measurement. In addition, the moderating effects indicate a tension between informational support and emotional support, enriching social support theory.
Article
This study investigates how inaccurate information provided by ChatGPT impacts travelers’ acceptance of recommendations. Six experiments were conducted based on the accessibility-diagnosticity framework. These examined the moderating role of the prominence and type of incorrect information and their effects on decision-making. The results show that participants perceived more accuracy and trustworthiness, leading to stronger intentions to visit when incorrect information was absent. However, there was a decline in their intentions to visit when incorrect information was present and more prominent or in the same domain. This effect diminished when multiple domains were involved or when participants were focused on the initial task. The research highlights that both the prominence and type of incorrect information are boundary conditions and provides insights into AI applications in tourism. Furthermore, it offers practical implications for online travel agencies in terms of user interface and user experience design planning.
Article
The introduction of ChatGPT in academia has raised significant concerns, particularly regarding issues like plagiarism and the ethical use of generative text for academic purposes. Existing literature has predominantly focused on ChatGPT adoption, leaving a notable gap in addressing the strategies users employ to mitigate these emerging challenges. To bridge this gap, this research utilizes the Uncertainty Reduction Theory (URT) to formulate user strategies for reducing uncertainty. These strategies include both interactive and passive approaches. Concurrently, the study identifies key sources of uncertainty, which include concerns related to transparency, information accuracy, and privacy. Additionally, it introduces the concepts of seeking clarification and consulting peer feedback as mediating roles to facilitate reduced uncertainty. We tested these hypotheses with a sample of Indonesian users (N = 566) using structural equation modeling via Smart-PLS 4.0 software. The results confirm that interactive Uncertainty Reduction Strategies (URS) are more effective in reducing uncertainty when using ChatGPT compared to passive URS. Transparency concerns, information accuracy, and privacy concerns are identified as factors that increase the level of uncertainty. In contrast, consulting peer feedback is the most effective strategy for reducing uncertainty compared to seeking clarification at the individual-system level. Insights from the mediating effects confirm that consulting peer feedback significantly mediates uncertainty sources to reduce uncertainty. The study discusses various strategies for the ethical use of ChatGPT by users in the educational context and contributes significantly to theoretical development.
Article
The advent of Chat-GPT, an AI-driven technology, is reshaping various sectors, particularly higher education. This study, merging the Unified Theory of Acceptance and Use of Technology (UTAUT) with the Protection Motivation Theory (PMT), explores the relationship between technology adoption and protection motivation in higher education, addressing the concept of “the (un)knowledgeable, the (un)skilled.” Through Structural Equation Modeling, findings from Indonesian participants (N = 445), including students and lecturers, reveal that perceived benefits such as task efficiency, hedonic motivation, performance expectancy, and effort expectancy predominantly motivate Chat-GPT use. Risk perception leans more on perceived vulnerability than severity, with insignificant impact from response cost. Response efficacy and self-efficacy significantly determine intention. Insights from the moderating effect of educational level suggest that doctoral-level users demonstrate increased Chat-GPT usage, and gender plays a role in response efficacy and task efficiency. Both educational level and gender actively influence users’ intentions and actual usage behavior. These insights guide educators, policymakers, and institutions in ethically integrating AI, managing evolving risk perceptions, and empowering users in the dynamic AI landscape of higher education. Policymakers (e.g., university boards) are urged to craft an ethical framework and invest in user efficacy to ensure equitable access and benefits from Chat-GPT in education.