Journal of Open Innovation: Technology, Market, and Complexity 10 (2024) 100278
https://doi.org/10.1016/j.joitmc.2024.100278
Received 17 February 2024; Received in revised form 8 April 2024; Accepted 14 April 2024; Available online 21 April 2024
2199-8531/© 2024 The Author(s). Published by Elsevier Ltd on behalf of Prof JinHyo Joseph Yun. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
The sudden disruptive rise of generative artificial intelligence? An evaluation of their impact on higher education and the global workplace

Wilson Kia Onn Wong a,b

a Pan Sutong Shanghai-Hong Kong Economic Policy Research Institute (PSEI), Lingnan University, Hong Kong
b Academy for Applied Policy Studies and Education Futures (AAPSEF), The Education University of Hong Kong, Hong Kong

E-mail addresses: wilsonwong2@ln.edu.hk, willy_kia@yahoo.com
ARTICLE INFO
Keywords:
GAI
Disruptive
GPT
LLMs
AI-optimists
AI-sceptics
ABSTRACT
This paper evaluates the rise of Generative Artificial Intelligence (GAI) in its myriad forms, the highest profile being the Large Language Models (LLMs). More importantly, it analyses the potentially disruptive impact of this ascendant technology on higher education and the global workplace. The findings of this paper indicate that students pursuing higher education tend to perceive GAI favourably, as it frees them from the toil of rote learning. However, the view is rather mixed in the case of educators, who are still coming to grips with this seemingly disruptive technology. In the case of the global labour market, GAI has the potential to decimate legions of white-collar jobs once it eliminates inherent issues of bias, security and misinformation. Despite the media's constant labelling of GAI as a disruptive technology that has suddenly burst onto the technological scene, it is evidenced in this paper that the technology has taken nearly eight decades to reach today's level of technological advancement. Further, it is far from reaching its full potential, as it is still incorporating advances in pattern recognition, planning and problem solving, and quantum computing technologies. This study also warns of concentrating the power of this game-changing technology in the hands of a few major corporate titans.
1. Introduction
Since November 2022, Generative Artificial Intelligence (GAI) has taken the world by storm, in the form of Large Language Models (LLMs) and associated technologies such as OpenAI's ChatGPT/DALL-E, Microsoft's Bing GPT-4, and Google's Gemini (formerly known as Bard). Their seemingly uncanny ability to create supposedly original content (comprising words, images and code) in mere seconds sent reverberations across academia and the world at large. But what exactly are these seemingly game-changing technologies? As indicated by its name, the purpose of GAI is to generate content in the form of images, text, code, audio content and suggestions (Toner, 2023) (see Table 1). The most high-profile of these purportedly revolutionary technologies would be the LLMs (e.g. ChatGPT), a form of Artificial Intelligence (AI) trained on an immense compendium of books, articles, written content from the internet (e.g. Wikipedia) and even social media and online forums, with the express purpose of generating human-like responses to natural language queries (i.e. questions in everyday language) from users (Mearian, 2023). The technology powering LLMs such as ChatGPT is known as the Generative Pre-trained Transformer (GPT). It is a neural network (i.e. technology that mimics the workings of the human brain) that attempts to predict the likelihood of certain words being strung together in a sentence; based on this premise, it could be argued that a larger dataset would translate into greater predictive accuracy (Lin et al., 2022).
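To make this next-word-prediction premise concrete, consider the minimal sketch below (illustrative Python only; a real GPT is a vastly larger neural network, and the tiny corpus and names here are invented for demonstration). It estimates the probability of the next word from co-occurrence counts and, echoing the point above, its estimates would sharpen as the corpus grows:

from collections import Counter, defaultdict

# Toy stand-in for next-word prediction: count which word follows which
# in a tiny corpus, then turn the counts into probabilities.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_distribution(word):
    """Return P(next word | current word) from the observed counts."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}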
LLM technology has advanced to such a degree that a Google software engineer, Blake Lemoine, possibly mistook the responses from the company's LLM, LaMDA (Language Model for Dialogue Applications), for sentience (i.e. the ability to experience feelings and sensations) (De Cosmo, 2022). This incident also demonstrates the ability of LLMs to pass the Turing test, once the gold standard for assessing whether machines could simulate intelligence and behaviour comparable to human beings (Turing, 1950); the ostensible success of LLMs has led to the test being described as 'broken', with industry experts urging the deployment of more relevant ones (Biever, 2023; Stone, 2023). Moreover, this outwardly significant progress has obfuscated the distinct possibility that current LLMs could merely be mimicking 'self-awareness' from their training data. By extension, they could be characterised as highly sophisticated chatbots, incapable of generating truly original ideas, as they are merely synthesising the information they have been trained on; physicist Michio Kaku has likened them to mere augmented recording devices, which have warranted too much
attention; Section 6 of this paper posits the deployment of quantum
computing to address this inherent weakness (Hetzner, 2023).
However, there are others, such as AI researcher Eliezer Yudkowsky, who assert that we do not fully understand the inner workings of these LLMs and that these systems could be on the verge of achieving superhuman intelligence, far surpassing that of humanity. In this dystopian scenario, Yudkowsky argues that humanity could face an existential crisis and posits a potentially devastating one-sided conflict between humans and our cognitively superior AI progeny (Yudkowsky, 2023). An equally dystopian view was also echoed by scientist Geoffrey Hinton, who described the potentially unabated advancement of these AI technologies as such: 'If it gets to be much smarter than us, it will be very good at manipulation because it will have learned that from us, and there are very few examples of a more intelligent thing being controlled by a less intelligent thing' (Hinton, 2023, p.1). Nonetheless, there is a possibility that such fears are unwarranted and stem from the controversial belief that the advent of a superior civilisation would invariably result in the demise of its supposedly inferior counterpart, such as the disastrous encounter between the Spanish explorers and the Incas in the 16th century, which saw the near decimation of the latter (Diamond, 2005).
Regardless of whether they are mere augmented recording devices or machines on the cusp of attaining superhuman intelligence, their advent has the potential to upend the business model of higher educational institutions across the globe. In a 2023 Best Colleges survey, approximately a fifth of surveyed students admitted to using ChatGPT or similar LLM technologies to complete their assignments or exams (Welding, 2023). This has inevitably triggered the following questions: Should the use of LLMs be banned from higher educational institutions? Or should higher educational institutions attempt to incorporate LLMs as an enabler in their curriculums?
While uncertainties continue to unfold, the only certainties are that LLMs are here to stay, and that they will continue to advance in sophistication and computational power (due to expanding data pools and increasingly powerful microchips). In the global workplace, the advent of GAI has also invariably unleashed considerable waves of anxiety and fear across the human workforce, although still not matching the experience of the Luddites when confronted with mechanized looms. This paper attempts to provide an exhaustive study of the impact of this seemingly state-of-the-art technology on today's higher education institutions and the subsequent recipient of their products (i.e. graduates), the global workplace, through the lens of open innovation dynamics.
2. Literature Review
Despite its seemingly sudden and disruptive rise in recent years, GAI technology is fundamentally based on computing technologies that have been evolving for approximately eight decades; major tech companies such as Google, Dell, Microsoft, Lenovo and IBM had been working on GAI and LLMs for years before OpenAI's ChatGPT burst onto the scene in November 2022 (Smith-Goodson, 2023). Its origins could be traced to a report written by Alan Turing in 1948, which laid the framework for artificial neural networks, the brains powering these systems. In this seminal report, Turing explored the possibility of machines exhibiting humanlike intelligent behaviour, in an era where there was 'an unwillingness to admit the possibility that mankind can have any rivals in intellectual power' (Turing, 1948, p.1). The term AI was subsequently coined in 1956 at a summer workshop at Dartmouth College, Hanover, New Hampshire. Moreover, the founders of AI delineated their vision as such: 'Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it' (McCarthy et al., 1955, p.2). Fortuitously, the invention of the microchip in 1958 by Jack Kilby, an engineer at Texas Instruments, was a major leap forward in mankind's development of intelligence akin to his own (Miller, 2022). This positive feedback loop was further reinforced by the invention of the first chatbot, named ELIZA, by MIT scientist Joseph Weizenbaum in 1966, who was attempting to facilitate natural language conversation with computers (Weizenbaum, 1966).
Further, the development of Graphics Processing Unit (GPU) technology by computer scientist Ivan Sutherland in the 1960s, and its subsequent mass-market debut in the late 1990s, enabled the rise of today's GAI technologies (or all AI technologies for that matter) (Peddie, 2023). GPUs are well-suited for AI applications due to their remarkable ability to process several computations concurrently (known as parallel processing). They were initially used mainly in the video gaming market, with their initial developers never envisaging their pivotal role in the development of AI technologies, highlighting the unintended consequences of technological development (Shadow, 2023).
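As a loose illustration of why parallel processing matters (a sketch only: NumPy's vectorised arithmetic stands in here for the thousands of GPU cores working concurrently, and the timings are indicative rather than benchmarks):

import time
import numpy as np

# Contrast element-by-element (sequential) arithmetic with a single
# vectorised operation executed across the whole array at once -- the
# same principle, writ large, that suits GPUs to AI workloads.
x = np.random.rand(10_000_000)

start = time.perf_counter()
slow = [v * 2.0 for v in x]   # one multiplication at a time
t_seq = time.perf_counter() - start

start = time.perf_counter()
fast = x * 2.0                # whole array in one vectorised step
t_vec = time.perf_counter() - start

print(f"sequential: {t_seq:.2f}s, vectorised: {t_vec:.4f}s")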
Table 1
Examples of GAI models in the market. Source: Bradshaw et al. (2024), Hsiao (2024), Baidu Research (2023), OpenAI (2023a), OpenAI (2023b), Rawat (2023), Mistral AI (2024), David (2024), X.ai (2024).

Name | Developer | Capabilities
ChatGPT | OpenAI (Microsoft has invested US$13 billion in OpenAI, with rights to profit sharing despite not owning any equity) | Generates text and suggestions based on pattern recognition. Also has expanding coding capabilities.
Bing Chat (runs on GPT-4 technology) | Microsoft | Generates text and suggestions based on pattern recognition. Has limited but expanding coding capabilities.
Gemini (formerly Bard; renamed on February 8, 2024) | Google | Generates text and suggestions based on pattern recognition. Also has expanding coding capabilities.
ERNIE (Enhanced Representation through Knowledge Integration) | Baidu | Generates text and suggestions based on pattern recognition. Has some (and expanding) coding capabilities. Most effective if instructions are in Chinese, as it was trained largely on a Chinese-based dataset.
DALL-E | OpenAI | Generates images from text.
Bing Image Creator (runs on DALL-E technology) | Microsoft | Generates images from text.
Codex | OpenAI | Generates code from natural language instructions.
SourceAI | OpenAI | Generates code and, by extension, software.
Hugging Face | Hugging Face, Inc. | Code generation capabilities such as auto-completion and text summarizing.
GitHub Copilot | OpenAI and GitHub | Converts natural language suggestions into coding instructions across a variety of languages.
Mistral | Mistral AI (Microsoft has invested US$16 million to date, with no equity stake) | Generates text and suggestions in multiple languages based on pattern recognition. Also has expanding coding capabilities. Generates numerical code from text, making natural language processing easier (Mistral Embed).
Grok | X.ai | Generates text and suggestions based on pattern recognition. Also has expanding coding capabilities. Claims to have superior long-context understanding and reasoning capabilities vis-à-vis its rivals.
Nonetheless, the aforementioned technologies would not have resulted in today's GAI if not for the development of the Internet, which provides a significant portion of their training data; research on the Internet commenced in the late 1960s in the form of the Advanced Research Projects Agency Network (ARPANET), a project funded by the U.S. Department of Defence with the aim of enabling computers to communicate with one another on a single network (McLean, 2023).
As evidenced, GAI is an ensemble of ideas (across tech companies, academia and government research institutes) that has taken approximately 80 years to evolve to its current form (and is still rapidly evolving), with unprecedented and unpredictable impact on the global higher education landscape and workplace. This evolution is very much akin to an industry exposed to open innovation dynamics, where companies and industries have to utilise both internal and external knowledge and expertise to achieve sustainable growth (Chesbrough, 2006).
Nonetheless, the arrival of this AI interloper is not without its dangers. Seo et al. (2021) argued that for AI to be effectively integrated into our online learning systems, it should ensure easy interpretability and constant human involvement and feedback. However, in the case of GAI, humans very often do not understand the mechanics of their increasingly complex operations and behavioural patterns; Michael C. Frank, a researcher at Stanford University, likens attempts to understand the fundamental mechanics of LLMs to efforts at probing an alien intelligence (Frank, 2023). This paper attempts to alleviate this lack of understanding through a thorough analysis of the latest advances in GAI and their ramifications. Further, in Section 6 of this paper, the author also posits measures to augment the robustness of GAI, which is still plagued by issues of bias and accuracy.
3. Methodology
This paper deploys a case study approach complemented by an 'open innovation' framework. The relevance of the case study approach is due to its efficacy at analysing phenomena emanating and evolving in a fluid real-life environment. In this context, the analysis is of the potentially disruptive impact of a seemingly revolutionary technology on today's higher educational institutes and the global workplace, with their entrenched norms and practices. This case study approach also facilitates the exploration of alternative narratives with greater effectiveness than traditional data-dependent econometric models where access to reliable data is not readily available, particularly in highly fluid situations (e.g. the continuous advancement of GAI) which are still rapidly evolving (Morck and Yeung, 2011). Further, this methodology allows researchers to identify the critical features of actual events, involving the utilisation of several evidence sources (Yin, 1981a, 1981b, 1984; Wong, 2023).
However, the methodology of this paper also acknowledges the importance of managing open innovation complexity as a means of delivering sustainable growth at both firm and industry levels (Yun, 2015). This is of particular importance to the information technology and AI industry, which the earlier literature review identified as having been on an eighty-year growth trajectory, resulting in today's GAI. But for the key players (e.g. Microsoft, Google) to sustain their expanding business models, they would have to draw expertise and technologies from other stakeholders (i.e. policymakers, startups, academia) in this GAI innovation ecosystem (Yun, 2016; Regona et al., 2022).
4. Disruptor or enabler of the higher education landscape?
The viral proliferation of LLMs amongst students across the world's higher education institutions is a foregone conclusion. LLMs such as ChatGPT have demonstrated the ability to pass exams at the University of Minnesota, albeit with only satisfactory grades. After using ChatGPT to generate answers for four real exams at the University of Minnesota Law School, the academics investigating the technology's effectiveness proceeded to grade the tests (comprising 95 multiple choice questions and 12 essay questions) through blind marking. The investigation revealed that ChatGPT attained an overall performance on par with that of a C+ student (Choi et al., 2022). At the University of Pennsylvania's Wharton School of Business, it (i.e. ChatGPT-3) delivered a marginally superior outcome, attaining a B to B- performance in the institution's final MBA Operations Management core course (Terwiesch, 2023). Despite this initial lacklustre performance, the capabilities of LLMs have been advancing at a remarkable pace. For instance, later versions of ChatGPT (i.e. GPT-4) were able to deliver performances approximating the top 10% of candidates taking America's Uniform Bar Exam, a significant improvement over earlier versions (i.e. GPT-3.5), which were only capable of passing with grades in the bottom 10% (Kimmel, 2023; OpenAI, 2023).
But is the advent of this technology the opening of Pandora's box or a more benign 'perennial gale of creative destruction' (Schumpeter, 2010)? It all depends on whether you are an 'AI-sceptic' or an 'AI-optimist'. The former argue that the use of LLMs could erode students' capacity to learn and acquire knowledge, and subsequently hurt their ability to compete in the workforce post-graduation (de Fine Licht, 2023); in view of the nature of scientific inquiry, a healthy dose of scepticism is arguably a good thing (Sagan and Druyan, 1995). Further, they could urge an outright ban of LLMs on the campuses of higher educational institutes, in an effort to discourage cheating in exams or plagiarism. This measure is relatively draconian, as some 'AI-optimists' would argue. For instance, while Australia's leading research-intensive 'Group of Eight' universities have reverted to pen-and-paper exams to prevent cheating, they continue to recognise LLMs' immense value as a learning tool for students and urge assessment redesign as a means of dealing with the advent of LLMs; assessment redesign strategies would involve the introduction of more fieldwork, oral presentations, laboratory activities and internships (i.e. experiential learning) as assessment components, instead of relying on traditional essay assignments where the temptation to cheat via increasingly powerful LLMs is significantly higher (Cassidy, 2023).
Further, this LLM-augmented experiential learning trend could significantly enhance students' learning experiences by compelling them to work more closely with organisations that are increasingly buffeted by advances in AI. Moreover, through this approach, students would acquire more industry-relevant skills and knowledge, which would invariably enhance their employability post-graduation. Similarly, educators could also take advantage of the increasingly powerful compositional powers of LLMs to set examination and related assessment questions, in the process releasing more time for research activities; this is a boon for educators working in 'publish or perish' research-driven institutions. With the expanding automation of administrative tasks through the deployment of LLMs and other GAI tools, educators could also focus more on mentoring roles and Oxbridge tutorial-style meetings (a luxury in this day and age), in the process enriching the educational experience of students, which has been steadily eroded by the increasing corporatization of universities worldwide (Alibašić et al., 2024).
Given the double-edged nature of LLMs' impact on the educational landscape, educators need to manage their expanding presence deliberatively, as there is an ongoing and arguably increasing risk that 'AI-sceptics' could inadvertently obstruct critical GAI development. Moreover, there is a danger that AI-scepticism could devolve into AI-pessimism. Ironically, 'AI-pessimists' are not always Luddites and could paradoxically include titans of the tech industry such as Elon Musk (co-founder of Tesla) and Steve Wozniak (co-founder of Apple), who have signed an open letter to pause the development of LLMs more powerful than GPT-4 for at least six months, citing growing fears over the existential risks to humankind (Future of Life Institute, 2023). Some
individuals in the 'AI-pessimism' camp are also urging increased regulation of AI systems, without regard for the consequences. The logical questions would be: 'Is it possible to regulate scientific progress?' and, if it is possible, 'what would be the ramifications, intended or otherwise?'. Imagine if governments, fearful of the power and potential of physics, had sought to regulate the field at the beginning of the 20th century. Under the stranglehold of this regulation, humanity would not have mastered nuclear fission through the Manhattan Project, allowing us to deliver sustainable clean energy to meet our ever-increasing energy needs. Pessimists could argue that this discovery killed hundreds of thousands of people in Japan, but on the flip side, optimists could make the argument that it hastened the end of the devastating Second World War in a manner which conventional weapons could not achieve, in the process avoiding as many as a million casualties (Bernstein, 1998).
5. Job obliteration or inevitable creative destruction in the
global workplace
The advent of GAI has invariably generated widespread fears of a global job market apocalypse, particularly amongst college-educated white-collar employees, traditionally thought to be more immune to the job-decimating effects of automation than their blue-collar counterparts in manufacturing and agriculture. Moreover, this potentially large-scale supplantation of lucrative white-collar employment could upend traditional beliefs that professional jobs involving creativity and emotional intelligence have ironclad security (Miller and Cox, 2023). Further stoking these dystopian fears is research from OpenAI (i.e. one of the alleged transgressors) scientist Tyna Eloundou and her colleagues, which indicates that around 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while approximately 19% of workers may see at least 50% of their tasks impacted (Eloundou et al., 2023). In truth, it has been proven time and again throughout history that technology, however revolutionary, requires a relatively long period to diffuse through the economy. This is evidenced in the automotive industry. Although the internal combustion engine powering automobiles was invented in 1879, it took decades for the revolutionary technology to evolve into the automobile, and it attained critical mass only circa 1913, triggered by the price reductions enabled by Henry Ford's moving assembly line (Gordon, 2016). This relatively gradual pace of technological progress is captured succinctly in Robert Solow's sardonic quip, 'You can see the computer age everywhere but in the productivity statistics', made nearly three decades after the invention of the microchip (Solow, 1987, p. 36).
Some analysts have argued that the mass adoption of GAI would be significantly faster than earlier revolutionary technologies owing to its ease of use and low cost (The Economist, 2023); for instance, OpenAI's GPT-3.5 is available for free, while the more advanced GPT-4 involves an accessible US$20 monthly subscription fee (OpenAI, 2024; The Business Times, 2024). However, these seeming advantages may not necessarily translate into mass market adoption. A survey in April 2023 by machine learning observability platform Arize AI indicated that since the launch of ChatGPT in November 2022, about one in ten machine learning teams surveyed had adopted the use of LLMs, with approximately another two-fifths planning to deploy them within a year. However, there remained another two-fifths of respondents who indicated they had no plans to field this technology in their operations over the coming year. Their decision to 'err on the side of caution' is driven by privacy and security concerns (Arize AI, 2023). A string of leaking incidents involving LLMs seems to validate their circumspection. In February 2023, major financial institutions such as Bank of America, Citigroup, and JP Morgan Chase all imposed temporary bans on the use of LLMs amongst their employees. Subsequently, in April 2023, some staff at Korea's Samsung Electronics inadvertently compromised their company's intellectual property security by uploading internal source code onto ChatGPT, which resulted in a blanket ban on LLMs in the firm (encompassing company-owned computers, mobile devices and its internal networks) in May (Gurman, 2023). Another factor (albeit one of a technical nature) obstructing the widespread adoption of LLMs across industries would be the possibility of the technology sometimes 'hallucinating'; 'hallucinating' is industry speak (amongst AI professionals) for LLMs making things up or producing content that is factually inaccurate (Neugebauer, 2023). For companies making commercial decisions based on faulty intelligence generated by LLMs, the consequences could certainly involve staggering losses. In the case of fast-moving financial services, which involve split-second decisions, investment decisions based on fictitious or inaccurate information generated by LLMs would lead to losses which are not only disastrously significant but also almost painfully immediate, with little or no room for remedy. However, in the marketing function, the management of companies tends to be relatively more receptive to GAI deployment, due to the ability of these systems to deliver a superior personalised experience; by the first quarter of 2023, nearly three quarters of US companies had deployed GAI tools (encompassing chatbots) in their marketing activities (Kshetri et al., 2024; Dencheva, 2023).
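By way of a hedged illustration, a company worried about acting on 'hallucinated' output might insert a simple corroboration gate between the model and any downstream decision (a minimal sketch; the reference store, figures and function names below are all hypothetical):

# Minimal "hallucination guardrail" sketch: a generated claim is used
# only if it matches a trusted reference store; otherwise it is routed
# to a human. All data and names here are hypothetical.
TRUSTED_FACTS = {
    "ACME FY2023 revenue": "US$4.2bn",
}

def corroborated(claim_key: str, model_value: str) -> bool:
    """Accept a model-generated value only if the reference source agrees."""
    reference = TRUSTED_FACTS.get(claim_key)
    return reference is not None and reference == model_value

model_output = ("ACME FY2023 revenue", "US$5.0bn")  # a fabricated figure
if corroborated(*model_output):
    print("Corroborated: safe to feed into the decision pipeline.")
else:
    print("Unverified claim: route to a human analyst.")  # this branch fires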
Further obstructing the rapid deployment of LLMs and GAI technologies across industries would be the innate biases embedded within their algorithms. Biases include discrimination against people on the basis of gender, race, and skin tone. For instance, Srinivasan and Uchino (2021) revealed that an AI generative art program, AIportraits, had taken the liberty of lightening the skin tone of a biracial actress, Tessa Thompson, in its portrait rendition. This discrimination could even extend to political affiliations and religion, with potentially deleterious impacts on elections and public trust; Motoki et al. (2023) indicate that ChatGPT exhibits significant political bias towards members of the Democratic Party in the United States, which may not come as a surprise, as many of the developers of these technologies tend to be liberal, wealthy Democrats based in Silicon Valley. The biases in these LLMs are an unfortunate reflection of the real-world biases existing in the data on which they are trained; this inherent weakness of LLMs is evidenced in the following statement by researchers Skylar Kolisko and Carolyn Jane Anderson: 'Although these models are powerful, they have also been shown to learn toxic behaviour and harmful social biases from the massive amounts of uncurated text data on which they are trained' (Kolisko and Anderson, 2023, p.15825). To alleviate the seemingly intractable biases present in datasets, GAI developers would need to devote more time and resources to cleansing them, but this could be a Sisyphean task as our datasets are expanding at an exponential rate; over a relatively brief seven years (2018–2025), the amount of global real-time data is expected to increase by a factor of ten, from five zettabytes to 51 zettabytes (Taylor, 2023). Moreover, these developers would have to vigilantly monitor the interactions between users and LLMs for evidence of bias, which is again a monumental task, as the number of questions posed to LLMs is only expected to increase. LLM users have also been urged to provide feedback on biases, as part of reinforcement learning from human feedback (Heikkilä, 2023). Further, we could design bias-aware algorithms which are capable of evaluating the various kinds of bias and subsequently proceed to alleviate their impact on GAI's output. But the first principle (see Fig. 1) in designing a relatively bias-free GAI would be to always collect datasets that are as diverse as possible, with the greatest representation from various demographics (i.e. dataset augmentation) (Ferrara, 2024).

Fig. 1. Alleviating bias in datasets.
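As a toy sketch of that dataset augmentation principle (illustrative only; the records, field name and balance-to-largest-group target are assumptions, and production pipelines would use far more sophisticated sampling):

import random
from collections import Counter

# Audit demographic representation in a training set, then oversample
# under-represented groups until each matches the largest group.
dataset = [
    {"text": "...", "demographic": "group_a"},
    {"text": "...", "demographic": "group_a"},
    {"text": "...", "demographic": "group_a"},
    {"text": "...", "demographic": "group_b"},
]

def augment_to_balance(records, key="demographic"):
    """Duplicate-sample minority groups up to the size of the largest group."""
    groups = Counter(r[key] for r in records)
    target = max(groups.values())
    balanced = list(records)
    for group, count in groups.items():
        members = [r for r in records if r[key] == group]
        balanced += random.choices(members, k=target - count)
    return balanced

print(Counter(r["demographic"] for r in augment_to_balance(dataset)))
# Counter({'group_a': 3, 'group_b': 3})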
Overcoming the condentiality and technical issues and innate bia-
ses hampering the widespread deployment of LLMs and related GAI
technologies would involve a considerable amount of time and re-
sources; for instance, Samsung Electronics, in efforts to deal with the
leaking of internal source code through the use of ChatGPT, has
announced plans to develop its own internal LLM (Kim, 2023). With this
challenging backdrop, it is likely that they would have to endure the
same, gradual adoption pathways of earlier revolutionary technologies.
Based on the aforementioned discussions, it would seem that blue-collar
jobs, involving manual labour, are the ones that could be spared the
potential ravages of GAI. While that could be true to some extent,
blue-collar employees are still not spared from the prospect of displacement by other forms of automation brought about by advances in AI. For instance, why would farmers require existing or additional workers to harvest or plough their lands if they have access to autonomous tractors capable of performing these tasks (or more) 24/7, with no need for toilet or meal breaks, and which, more importantly, would not ask for pay increments or demand the formation of unions to safeguard their rights (John Deere, 2023)? It is increasingly evident that AI's revolutionary impact does not differentiate between white-collar and blue-collar occupations. As long as an occupation has clearly defined procedures (i.e. a formulaic system), AI could potentially replicate that role with relative ease; regardless of whether it is an operating theatre or today's modern farming operations, the personnel involved tend to adhere to certain established protocols (Ford, 2016). In view of this premise, the occupations that would be spared the job-decimating effects of mass AI adoption would be the unstructured ones, which involve considerable nonlinear human interaction (e.g. counsellor, psychologist, teacher, insurance advisor); human behavioural patterns are considered nonlinear due to the presence of significant non-verbal cues and emotions which are still incomprehensible to the pattern-recognition capabilities of AI systems (Bishop, 2021). Given the potentially seismic shift in the global workplace (including both blue- and white-collar professions) wrought by this AI revolution, there should be added institutional emphasis on preparing both today's and tomorrow's workforce for careers in the so-called unstructured professions; even in the information technology industry, greater emphasis should be placed on understanding the inner workings of increasingly sophisticated GAI and other AI systems instead of merely attaining programming efficiency (i.e. a relatively dispensable structured skillset). Further, the steady adoption of AI systems across companies and industries would invariably result in the need for considerable economic reskilling and upskilling, as existing workers may lack the critical skills needed to manage this sea change in operations and management; this could in turn affect productivity in the short term, as reskilling and upskilling require time and investment (Lăzăroiu and Rogalska, 2023).
6. GAI: Game changer or mere hype?
From the earlier sections, it is evident that GAI technologies have plenty of room for improvement, as they are still plagued by accuracy and technical issues, and seemingly intransigent biases. Until these issues are resolved, LLMs could remain a platform for the propagation of misinformation and falsehoods, thus hurting their prospects of having a truly transformative impact on academia and the global workplace (Margolin, 2023). In terms of sheer accuracy, the Google search engine, launched over two decades earlier in August 1998, still surpasses LLMs like ChatGPT, in stark contrast to proclamations from 'AI-optimists' that LLMs are poised to supplant Google and other search engines. Google's superior accuracy and objectivity is attributable to its ability to focus on multiple parameters such as relevance, credibility and popularity (Google, 2023). Moreover, Google's algorithm targets a significantly wider range of data types, encompassing news, images, videos, and maps. However, ChatGPT does not possess this critical technical advantage at this point, in the process limiting its accuracy and objectivity. In addition, the free and invariably most widely accessed version of ChatGPT currently lacks access to the latest data (its training data runs up to a relatively archaic January 2022), further compromising its factual rigour and relevance. Advanced versions of ChatGPT (i.e. GPT-4 and ChatGPT Team) with more up-to-date training data are available to users willing to pay the respective US$20 and US$25 monthly subscription fees. Its rival, Google's Gemini, also has a similar package (OpenAI, 2024; Hsiao, 2024). Companies with their IT budgets tend to be receptive to OpenAI's paid subscription offerings, but it is unclear whether the mass market user would be willing to make the financial commitment (The Business Times, 2024). Such pay-to-use measures could also widen the digital divide between developed and developing countries, as denizens of the latter could find the monthly subscription fees prohibitive.
Further, defenders of LLMs argue that ChatGPT and its ilk are mainly designed to generate seemingly cogent human-like responses, on the back of years of training data, and are not meant to rival specialised search engines such as Google (Murphy, 2023). This tenuous position stands in contradiction to the earlier fears of tech luminaries urging a pause to seemingly robust LLM development poised to sweep aside earlier technological developments (including search engines) (Future of Life Institute, 2023); earlier in 2023, Google executives were also fearful that the onset of LLMs would disrupt their immensely profitable US$150 billion annual search business (for all intents and purposes, a monopoly) and relegate their prized search engine to the ash heap of history (Liberatore, 2023). In the spirit of 'creative destruction', where emerging and increasingly robust business models are expected to displace obsolete predecessors, AI developers such as Google DeepMind (a Google division) are actively working on strengthening the accuracy, objectivity and overall performance of their GAI technology. Google DeepMind's Gemini LLM is predicated on the company's unique technology, which involves instructing computer programmes in mastering complex games like Go, with the express purpose of outperforming existing LLMs in the marketplace; through its preternatural ability to master sophisticated strategies via intuitive pattern recognition, planning and problem solving, the company's AlphaGo programme defeated Go world champion Lee Sedol 4-1 in a series of matches in 2016 (Paleja, 2023; Google DeepMind, 2023).
Moreover, advances in quantum computing could bolster the performance of GAI technologies. As opposed to the traditional classical computers which run almost all of today's AI systems, quantum computers are supposedly capable of solving an exponentially greater number of problems simultaneously than their classical counterparts. This is due to their remarkable nonbinary properties (known as superposition), which allow the subatomic particles in our electronic data to share their properties and strengths, a capability that far surpasses the performance of today's classical computers, which are basically binary in nature (either 1 or 0); theoretically, the binary nature of classical systems greatly limits their problem-solving capabilities vis-à-vis their non-binary challengers. Mathematical proofs posit that quantum computers could theoretically process an almost unlimited number of problems concurrently, positioning them as ideal partners to LLMs, which have to deal with incredible levels of uncertainty (Brooks, 2023; Lin et al., 2023); existing LLMs that run on today's classical computers are incapable of robust uncertainty estimates for their responses (Sankararaman et al., 2022). Studies in quantum mechanics dictate that the superposition undergirding the game-changing problem-solving capabilities of today's quantum computers is not technically sustainable, and they quickly lose the remarkable multi-processing capability (in sudden and irreversible collapses) which differentiates them from their classical peers (Reich, 2013); quantum computers thus remain proof-of-concept technologies. Without the advent of genuine quantum computers with sustainable superpositions, it is possible that LLMs will have to keep running on binary-based classical systems, which inherently limits their problem-solving and predictive capabilities.
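The fragility described above can be made concrete with a toy classical simulation of a single qubit (a sketch only; it needs no quantum hardware or quantum library, and the equal-amplitude state is chosen purely for illustration):

import random

# A qubit in superposition carries two amplitudes at once, but any
# measurement collapses it irreversibly to a single classical bit --
# the loss of multi-processing capability noted above.
alpha = 2 ** -0.5   # amplitude of |0>; the |1> amplitude is sqrt(1 - alpha**2)

def measure(a: float) -> int:
    """Collapse the state: 0 with probability |a|^2, else 1."""
    return 0 if random.random() < abs(a) ** 2 else 1

counts = {0: 0, 1: 0}
for _ in range(1000):
    counts[measure(alpha)] += 1
print(counts)  # roughly {0: 500, 1: 500}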
7. Regulating GAI development
The seemingly increasing capabilities of GAI and other AI platforms have already compelled policymakers to examine how the public sector could harness their growing powers. Many governments today are still struggling to establish frameworks and guidelines to effectively introduce GAI into their respective public sectors. This lacklustre performance could be driven by governments' uneasiness and lack of digital maturity, as they fear widespread AI adoption in their public sectors could erode their democratic institutions; GAI's tendency to 'hallucinate' could result in misinformation that negatively impacts government policies and political outcomes (i.e. elections) (Pautz, 2023; Feldstein, 2023). Further, it could be argued that the rate of GAI's progress far outpaces policymakers' attempts to understand, integrate and possibly regulate it (Alhosani and Alhashmi, 2024); this is unsurprising, given that the immense financial rewards in the technology sector not only draw society's best and brightest AI developers but also procure the best legal advice in circumventing regulatory supervision. Moreover, antitrust regulators are closely monitoring the recent investments by tech behemoths such as Microsoft in GAI companies such as OpenAI and Mistral. However, this intensifying antitrust scrutiny has been vigorously dismissed by Microsoft as a potentially extraneous stranglehold on GAI development, as its investments supposedly do not involve any shareholdings in these firms (The Economist, 2024). Notwithstanding the veracity of Microsoft's assertion, it could be argued that under the principles of open innovation dynamics (which emphasise drawing on internal and external expertise), tech firms have little choice but to make these investments in GAI startups to secure much-needed expertise and technologies. The difficulty certainly lies in treading the fine line between securing critical expertise and engaging in monopolistic behaviour. In a further nod to open innovation dynamics, the major tech firms and leading startups (i.e. OpenAI), at a closed-door forum in Washington DC in September 2023, also openly expressed interest in working with policymakers to ensure the effectiveness and accuracy of GAI (Chesbrough, 2006; Clayton, 2023). The rationale for tech companies wanting to work closely with policymakers and regulators is intuitive: if they are cognisant of the intricacies of regulatory frameworks and upcoming government policies, it is relatively easier to spur innovation and grow their respective companies, and by extension, the industry.
8. Conclusion
While GAI technologies still have considerable technical challenges to surmount, they have nonetheless made their presence felt in higher education and the global workplace. Their rise is further complicated by the intense tug of war between conflicting interests. In academia, students view them as an enabler, freeing them from the humdrum of rote learning, while instructors could perceive them as disruptors that facilitate cheating and plagiarism and undermine the once sacrosanct grading process; some educators could view LLMs more favourably, as they could free them from the humdrum task of writing assessment papers, giving them more time for research and curriculum development. As for the global workplace, employees are also increasingly hostile to these technologies, which they perceive as corrosive to their job security. Conversely, employers could view them with increasing favour, due to their potential to significantly reduce costs by slashing headcount. This unrelenting trend is also evident in the creative industries, once the preserve of humans with our seemingly incomparable creativity, which was hitherto perceived to be beyond the binary capabilities of cold, logical computer systems; in May 2023, the Writers Guild of America (i.e. the union of film and television writers) called for strikes across Hollywood, demanding that studios (i.e. the employers) impose limits on the use of GAI for writing the scripts of television shows and movies (Straub, 2023). There is a possibility that once the technical kinks in GAI are ironed out, they could pose a viable challenge to once irreplaceable human intellect, logic and creativity. But this could take somewhat longer than AI-optimists hope; for instance, it took nearly seven decades for the Wright brothers' 'proof-of-concept' Kitty Hawk flight (a 12-second flight that covered approximately 36 m) to evolve into the globe-trotting Boeing 747, which delivered a remarkable growth in air travel, tourism and freight delivery (NASA, 2018). However, there is only one certainty in this 'tug of war' between AI and humans: GAI development is already set on an irrevocable path, and there is no going back to business as usual. Moreover, there is another critical issue which bedazzled industry watchers have often failed to address, which is the fact that these potentially era-defining technologies are now in the hands of a few major companies (essentially a cartel), giving them greater capacity to dominate our increasingly AI-infused future. In this situation, open innovation dynamics compels the remaining stakeholders (e.g. policymakers, regulators, academics) to work closely with this highly influential cartel to refine and regulate this increasingly powerful technology, so as to develop a more inclusive and sustainable society.
Author contributions
I am the sole author of this research paper.
Funding
This research paper is not funded by any institution or grant.
Ethical statement
Ethical Statement is not applicable to this research paper as it does
not involve any animal or human test subjects. This research paper is
strictly my own work.
CRediT authorship contribution statement
Wilson Kia Onn Wong: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing.
Declaration of Competing Interest
The author declares that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
References
Alhosani, K., Alhashmi, S.M., 2024. Opportunities, challenges, and benefits of AI innovation in government services: a review. Discov. Artif. Intell. 4, 18. https://doi.org/10.1007/s44163-024-00111-w.
Alibašić, H., Atkinson, L.C., Pelcher, J., 2024. The liminal state of academic freedom: navigating corporatization in higher education. Discov. Educ. 3, 7. https://doi.org/10.1007/s44217-024-00086-x.
Arize AI, 2023. Survey: Massive Retooling Around Large Language Models Underway. https://arize.com/blog/survey-massive-retooling-around-large-language-models-underway/ (accessed 22 August 2023).
Baidu Research, 2023. Introducing ERNIE 3.5: Baidu's Knowledge-Enhanced Foundation Model Takes a Giant Leap Forward. http://research.baidu.com/Blog/index-view?id=185 (accessed 13 February 2024).
Bernstein, B.J., 1998. Truman and the A-Bomb: Targeting Noncombatants, Using the Bomb, and His Defending the 'Decision'. J. Mil. Hist. 62 (3), 547–570.
Biever, C., 2023. ChatGPT broke the Turing test – the race is on for new ways to assess AI. Nature 619, 686–689. https://doi.org/10.1038/d41586-023-02361-7.
Bishop, J.M., 2021. Artificial intelligence is stupid and causal reasoning will not fix it. Front. Psychol. 11, 513474. https://doi.org/10.3389/fpsyg.2020.513474.
Bradshaw, T., Murgia, M., Hammond, G., Hodgson, C., 2024. How Microsoft's multibillion-dollar alliance with OpenAI really works. https://www.ft.com/content/458b162d-c97a-4464-8afc-72d65afb28ed (accessed 9 February 2024).
Brooks, M., 2023. Quantum computers: what are they good for? Nature 617, S1–S3. https://doi.org/10.1038/d41586-023-01692-9.
Cassidy, C., 2023. Australian universities to return to 'pen and paper' exams after students caught using AI to write essays. https://www.theguardian.com/australia-news/2023/jan/10/universities-to-return-to-pen-and-paper-exams-after-students-caught-using-ai-to-write-essays (accessed 17 August 2023).
Chesbrough, H.W., 2006. Open Innovation: The New Imperative for Creating and Profiting from Technology. Harvard Business Press, Boston, Massachusetts.
Choi, J.H., Hickman, K.E., Monahan, A., Schwarcz, D.B., 2022. ChatGPT Goes to Law School. J. Leg. Educ. 71, 387. https://dx.doi.org/10.2139/ssrn.4335905.
Clayton, J., 2023. 'Overwhelming consensus' on AI regulation – Musk. https://www.bbc.com/news/technology-66804996 (accessed 6 April 2024).
David, E., 2024. Microsoft's Mistral deal beefs up Azure without spurning OpenAI. https://www.theverge.com/24087008/microsoft-mistral-openai-azure-europe (accessed 7 April 2024).
De Cosmo, L., 2022. Google Engineer Claims AI Chatbot Is Sentient: Why That Matters. https://www.scientificamerican.com/article/google-engineer-claims-ai-chatbot-is-sentient-why-that-matters/ (accessed 14 August 2023).
de Fine Licht, K., 2023. Integrating Large Language Models into Higher Education: Guidelines for Effective Implementation. Comput. Sci. Math. Forum 8 (1), 65. https://doi.org/10.3390/cmsf2023008065.
Dencheva, A., 2023. Share of marketers using generative artificial intelligence (AI) in their companies in the United States as of March 2023. https://www.statista.com/statistics/1388390/generative-ai-usage-marketing/ (accessed 5 April 2024).
Diamond, J.M., 2005. Guns, Germs and Steel: A Short History of Everybody for the Last 13,000 Years. Vintage, London.
Eloundou, T., Manning, S., Mishkin, P., Rock, D., 2023. GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models. Working paper, arXiv preprint, 23 March 2023. https://doi.org/10.48550/arXiv.2303.10130.
Feldstein, S., 2023. The Consequences of Generative AI for Democracy, Governance and War. Survival 65 (5), 117–142. https://doi.org/10.1080/00396338.2023.2261260.
Ferrara, E., 2024. Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies. Sci 6 (1), 3. https://doi.org/10.3390/sci6010003.
Ford, M., 2016. Rise of the Robots: Technology and the Threat of a Jobless Future. Basic Books, New York.
Frank, M.C., 2023. Baby steps in evaluating the capacities of large language models. Nat. Rev. Psychol. 2, 451–452. https://doi.org/10.1038/s44159-023-00211-x.
Future of Life Institute, 2023. Pause Giant AI Experiments: An Open Letter. https://futureoflife.org/open-letter/pause-giant-ai-experiments/ (accessed 16 August 2023).
Google, 2023. From the garage to the Googleplex. https://about.google/our-story/ (accessed 24 August 2023).
Google DeepMind, 2023. The Challenge Match. https://www.deepmind.com/research/highlighted-research/alphago/the-challenge-match (accessed 26 August 2023).
Gordon, R.J., 2016. The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War. Princeton University Press, Princeton, New Jersey.
Gurman, M., 2023. Samsung Bans Staff's AI Use After Spotting ChatGPT Data Leak. https://www.bloomberg.com/news/articles/2023-05-02/samsung-bans-chatgpt-and-other-generative-ai-use-by-staff-after-leak?srnd=technology-vp&leadSource=uverify%20wall (accessed 22 August 2023).
Heikkilä, M., 2023. How OpenAI is trying to make ChatGPT safer and less biased. https://www.technologyreview.com/2023/02/21/1068893/how-openai-is-trying-to-make-chatgpt-safer-and-less-biased/ (accessed 23 August 2023).
Hetzner, C., 2023. Top physicist says chatbots are just 'glorified tape recorders', and predicts a different computing revolution is ahead. https://fortune.com/2023/08/14/michio-kaku-chatbots-glorified-tape-recorders-predicts-quantum-computing-revolution-ahead/ (accessed 20 August 2023).
Hinton, G., 2023. Interview with CNN's Jake Tapper, 2 May. https://edition.cnn.com/2023/05/02/tech/hinton-tapper-wozniak-ai-fears/index.html (accessed 2 September 2023).
Hsiao, S., 2024. Bard becomes Gemini: Try Ultra 1.0 and a new mobile app today. https://blog.google/products/gemini/bard-gemini-advanced-app/ (accessed 9 February 2024).
John Deere, 2023. The Next Giant Leap in Ag Technology. https://www.deere.com/en/autonomous/ (accessed 22 August 2023).
Kim, H.-B., 2023. Samsung Electronics to adopt own AI amid ChatGPT security concerns. https://www.koreatimes.co.kr/www/tech/2023/08/129_352712.html (accessed 22 August 2023).
Kimmel, L., 2023. ChatGPT Passed the Uniform Bar Examination: Is Artificial Intelligence Smart Enough to be a Lawyer? https://international-and-comparative-law-review.law.miami.edu/chatgpt-passed-the-uniform-bar-examination-is-artificial-intelligence-smart-enough-to-be-a-lawyer/ (accessed 16 August 2023).
Kolisko, S., Anderson, C.J., 2023. Exploring Social Biases of Large Language Models in a College Artificial Intelligence Course. Proc. AAAI Conf. Artif. Intell. 37 (13), 15825–15833. https://doi.org/10.1609/aaai.v37i13.26879.
Kshetri, N., Dwivedi, Y.K., Davenport, T.H., Panteli, N., 2024. Generative artificial intelligence in marketing: Applications, opportunities, challenges, and research agenda. Int. J. Inf. Manag. 75, 102716. https://doi.org/10.1016/j.ijinfomgt.2023.102716.
Lăzăroiu, G., Rogalska, E., 2023. How generative artificial intelligence technologies shape partial job displacement and labor productivity growth. Oeconomia Copernic. 14 (3), 703–706. https://doi.org/10.24136/oc.2023.020.
Liberatore, S., 2023. Could ChatGPT replace Google as the world's go-to search engine? Google declares 'code red' over AI's threat to its $150 billion-dollar-a-year business. https://www.dailymail.co.uk/sciencetech/article-11781625/What-ChatGPT-replace-Google-need-know.html (accessed 24 August 2023).
Lin, Z., Trivedi, S., Sun, J.-M., 2023. Generating with Confidence: Uncertainty Quantification for Black-box Large Language Models. arXiv preprint arXiv:2305.19187. https://doi.org/10.48550/arXiv.2305.19187.
Lin, T.-Y., Wang, Y.-X., Liu, X.-Y., Qiu, X.-P., 2022. A survey of transformers. AI Open 3, 111–132. https://doi.org/10.1016/j.aiopen.2022.10.001.
Margolin, S., 2023. How to Prepare for AI-Generated Misinformation. https://insight.kellogg.northwestern.edu/article/how-to-prepare-for-ai-generated-misinformation (accessed 26 August 2023).
McCarthy, J., Minsky, M.L., Rochester, N., Shannon, C.E., 1955. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf (accessed 15 August 2023).
McLean, C., 2023. Who invented the Internet? Everything you need to know about the history of the Internet. https://www.usatoday.com/story/tech/2022/08/28/when-was-internet-created-who-invented-it/10268999002/ (accessed 15 August 2023).
Mearian, L., 2023. What are LLMs, and how are they used in generative AI? https://www.computerworld.com/article/3697649/what-are-large-language-models-and-how-are-they-used-in-generative-ai.html (accessed 14 August 2023).
Miller, C., 2022. Chip War: The Fight for the World's Most Critical Technology. Simon & Schuster, Inc., New York.
Miller, C.C., Cox, C., 2023. In Reversal Because of A.I., Office Jobs Are Now More at Risk. https://www.nytimes.com/2023/08/24/upshot/artificial-intelligence-jobs.html (accessed 13 February 2024).
Mistral AI, 2024. Mistral technology. https://mistral.ai/technology/#models (accessed 7 April 2024).
Morck, R., Yeung, B., 2011. Economics, History and Causation. Bus. Hist. Rev. 85 (1), 39–63. https://doi.org/10.2139/ssrn.1734504.
Motoki, F., Neto, V.P., Rodrigues, V., 2023. More human than human: measuring ChatGPT political bias. Public Choice. https://doi.org/10.1007/s11127-023-01097-2.
Murphy, C., 2023. Google Search Versus ChatGPT – ChatGPT was never meant to be a search engine. https://www.bostondigital.com/insights/google-search-versus-chatgpt-chatgpt-was-never-meant-be-search-engine (accessed 24 August 2023).
NASA, 2018. 115 Years Ago: Wright Brothers Make History at Kitty Hawk. https://www.nasa.gov/feature/115-years-ago-wright-brothers-make-history-at-kitty-hawk (accessed 28 August 2023).
Neugebauer, F., 2023. Understanding LLM Hallucinations. https://towardsdatascience.com/llm-hallucinations-ec831dcd7786 (accessed 22 August 2023).
OpenAI, 2023a. DALL-E. https://labs.openai.com/ (accessed 20 August 2023).
OpenAI, 2023b. OpenAI Codex. https://openai.com/blog/openai-codex (accessed 20 August 2023).
OpenAI, 2024. Models. https://platform.openai.com/docs/models/overview (accessed 7 February 2024).
Paleja, A., 2023. Google DeepMind to power its AI with AlphaGo-like features to fight ChatGPT. https://interestingengineering.com/culture/google-deepmind-ai-alphago-chatgpt (accessed 26 August 2023).
Pautz, H., 2023. Policy making and artificial intelligence in Scotland. Contemp. Soc. Sci. 18 (5), 618–636. https://doi.org/10.1080/21582041.2023.2293822.
Peddie, J., 2023. The History of the GPU – Steps to Invention. Springer, Cham.
Rawat, M., 2023. Top Generative AI Tools in Code Generation/Coding. https://www.marktechpost.com/2023/07/17/top-generative-ai-tools-in-code-generation-coding-2023/ (accessed 20 August 2023).
Regona, M., Yigitcanlar, T., Xia, B., Li, R.Y.M., 2022. Opportunities and adoption challenges of AI in the construction industry: A PRISMA review. J. Open Innov. Technol. Mark. Complex. 8 (1), 45. https://doi.org/10.3390/joitmc8010045.
Reich, E.S., 2013. Physicists snatch a peep into quantum paradox. Nature. https://doi.org/10.1038/nature.2013.13899.
Sagan, C., Druyan, A., 1995. The Demon-Haunted World: Science as a Candle in the Dark. Random House, New York.
Sankararaman, K.A., Wang, S.-N., Fang, H., 2022. BayesFormer: Transformer with Uncertainty Estimation. arXiv:2206.00826. https://doi.org/10.48550/arXiv.2206.00826.
Schumpeter, J.A., 2010. Capitalism, Socialism and Democracy. Routledge, London.
Seo, K.-W., Tang, J., Roll, I., Fels, S., Yoon, D.-W., 2021. The impact of artificial intelligence on learner-instructor interaction in online learning. Int. J. Educ. Technol. High. Educ. 18, 54. https://doi.org/10.1186/s41239-021-00292-9.
Shadow, 2023. The History of Gaming: The evolution of GPUs. https://shadow.tech/en-GB/blog/history-of-gaming-gpus (accessed 21 August 2023).
Smith-Goodson, P., 2023. The Extraordinary Ubiquity Of Generative AI And How Major Companies Are Using It. https://www.forbes.com/sites/moorinsights/2023/07/21/the-extraordinary-ubiquity-of-generative-ai-and-how-major-companies-are-using-it/?sh=5ec153852124 (accessed 14 August 2023).
Solow, R., 1987. We'd Better Watch Out. N. Y. Book Rev. 12, 36.
Srinivasan, R., Uchino, K., 2021. Biases in generative art: A causal look from the lens of art history. Proc. 2021 ACM Conf. Fair. Account. Transpar., 41–51. https://doi.org/10.1145/3442188.3445869.
Stone, B., 2023. AI Leader Proposes a New Kind of Turing Test for Chatbots. https://www.bloomberg.com/news/newsletters/2023-06-20/ai-turing-test-for-chatgpt-or-bard-proposed-by-mustafa-suleyman (accessed 31 August 2023).
Straub, J., 2023. Can AI write Ted Lasso? Writers strike may open door to ChatGPT-written scripts. https://www.usatoday.com/story/opinion/2023/05/10/wga-strike-pave-way-ai-generated-tv-movie-scripts/70198801007/ (accessed 28 August 2023).
Taylor, P., 2023. Global datasphere real time data total size worldwide from 2010 to 2025. https://www.statista.com/statistics/871513/worldwide-data-created/ (accessed 23 August 2023).
Terwiesch, C., 2023. Would Chat GPT3 Get a Wharton MBA? A Prediction Based on Its Performance in the Operations Management Course. https://mackinstitute.wharton.upenn.edu/wp-content/uploads/2023/01/Would-ChatGPT-get-a-Wharton-MBA.pdf (accessed 4 February 2024).
The Business Times, 2024. OpenAI signs up 260 businesses for corporate version of ChatGPT. https://www.businesstimes.com.sg/startups-tech/startups/openai-signs-260-businesses-corporate-version-chatgpt (accessed 9 February 2024).
The Economist, 2023. Your job is (probably) safe from artificial intelligence. https://www.economist.com/finance-and-economics/2023/05/07/your-job-is-probably-safe-from-artificial-intelligence (accessed 22 August 2023).
The Economist, 2024. Regulators are forcing big tech to rethink its AI strategy. https://www.economist.com/business/2024/03/27/regulators-are-forcing-big-tech-to-rethink-its-ai-strategy (accessed 5 April 2024).
Toner, H., 2023. What Are Generative AI, Large Language Models, and Foundation Models? https://cset.georgetown.edu/article/what-are-generative-ai-large-language-models-and-foundation-models/ (accessed 14 August 2023).
Turing, A.M., 1948. Intelligent Machinery. https://www.alanturing.net/turing_archive/archive/l/l32/L32-002.html (accessed 15 August 2023).
Turing, A.M., 1950. Computing Machinery and Intelligence. Mind 59 (236), 433–460. https://doi.org/10.1093/mind/LIX.236.433.
Weizenbaum, J., 1966. ELIZA - A Computer Program for the Study of Natural Language Communication Between Man and Machine. Commun. ACM 9 (1), 36–45. https://doi.org/10.1145/365153.365168.
Welding, L., 2023. Half of College Students Say Using AI on Schoolwork Is Cheating or Plagiarism. https://www.bestcolleges.com/research/college-students-ai-tools-survey/ (accessed 14 August 2023).
Wong, W.K.O., 2023. Creating Artificial Suns: the Sino-Western race to master limitless clean energy through nuclear fusion. Asian Educ. Dev. Stud. 12 (1), 28–39. https://doi.org/10.1108/AEDS-03-2022-0035.
X.ai, 2024. Announcing Grok-1.5. https://x.ai/blog/grok-1.5 (accessed 7 April 2024).
Yin, R.K., 1981a. The case study as a serious research strategy. Knowl. 3 (1), 97–114. https://doi.org/10.1177/107554708100300106.
Yin, R.K., 1981b. The case study crisis: Some answers. Adm. Sci. Q. 26 (1), 58–65. https://doi.org/10.2307/2392599.
Yin, R.K., 1984. Case Study Research: Design and Methods. Sage Publications, Inc., Beverly Hills, CA.
Yudkowsky, E., 2023. Pausing AI Developments Isn't Enough. We Need to Shut it All Down. https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/ (accessed 14 August 2023).
Yun, J.J., 2015. How do we conquer the growth limits of capitalism? Schumpeterian Dynamics of Open Innovation. J. Open Innov. Technol. Mark. Complex. 1 (2), 17. https://doi.org/10.1186/s40852-015-0019-3.
Yun, J.J., 2016. Open Innovation: Technology, Market and Complexity in South Korea. Sci. Technol. Soc. 21 (3), 319–323. https://doi.org/10.1177/0971721816661783.