Journal of Open Innovation: Technology, Market, and Complexity 10 (2024) 100278
https://doi.org/10.1016/j.joitmc.2024.100278
Received 17 February 2024; Received in revised form 8 April 2024; Accepted 14 April 2024; Available online 21 April 2024
2199-8531/© 2024 The Author(s). Published by Elsevier Ltd on behalf of Prof JinHyo Joseph Yun. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
The sudden disruptive rise of generative artificial intelligence? An evaluation of their impact on higher education and the global workplace

Wilson Kia Onn Wong a,b
a Pan Sutong Shanghai-Hong Kong Economic Policy Research Institute (PSEI), Lingnan University, Hong Kong
b Academy for Applied Policy Studies and Education Futures (AAPSEF), The Education University of Hong Kong, Hong Kong
E-mail addresses: wilsonwong2@ln.edu.hk, willy_kia@yahoo.com
ARTICLE INFO
Keywords:
GAI
Disruptive
GPT
LLMs
“AI-optimists”
“AI-sceptics”
ABSTRACT
This paper evaluates the rise of “Generative Artificial Intelligence” (GAI) in its myriad forms, the highest profile being the “Large Language Models” (LLMs). More importantly, it analyses the potentially disruptive impact of this ascendant technology on higher education and the global workplace. The findings of this paper indicate that students pursuing higher education tend to perceive GAI favourably, as it frees them from the toil of rote-learning. However, the view is rather mixed in the case of educators, who are still coming to grips with this seemingly disruptive technology. In the case of the global labour market, GAI has the potential to decimate legions of white-collar jobs once it eliminates its inherent issues of bias, security and misinformation. Despite the media’s constant labelling of GAI as a disruptive technology that has suddenly burst onto the technological scene, this paper shows that the technology has taken nearly eight decades to reach today’s level of advancement. Further, it is far from reaching its full potential, as it is still incorporating advances in pattern recognition, planning and problem solving, and quantum computing technologies. This study also warns of concentrating the power of this game-changing technology in the hands of a few major corporate titans.
1. Introduction
Since November 2022, “Generative Artificial Intelligence” (GAI) has taken the world by storm, in the form of “Large Language Models” (LLMs) and associated technologies such as OpenAI’s ChatGPT/DALL-E, Microsoft’s Bing GPT-4, and Google’s Gemini (formerly known as Bard). Their seemingly uncanny ability to create supposedly original content (comprising words, images and code) in mere seconds sent reverberations across academia and the world at large. But what exactly are these seemingly game-changing technologies? As indicated by its name, the purpose of GAI is to generate content in the form of images, text, code, audio content and suggestions (Toner, 2023) (see Table 1). The most high-profile of these purportedly revolutionary technologies are the LLMs (e.g. ChatGPT), a form of “Artificial Intelligence” (AI) trained on an immense compendium of books, articles, written content from the internet (e.g. Wikipedia) and even social media and online forums, with the express purpose of generating human-like responses to natural language queries (i.e. questions in everyday language) from users (Mearian, 2023). The technology powering LLMs such as ChatGPT is known as the “Generative Pre-trained Transformer” (GPT). It is a neural network (i.e. technology that mimics the workings of the human brain) that attempts to predict the likelihood of certain words being strung together in a sentence; based on this premise, it could be argued that a larger dataset would translate into greater predictive accuracy (Lin et al., 2022).
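To make this prediction mechanism concrete, the toy Python sketch below estimates next-word probabilities from bigram counts over a tiny invented corpus. It is purely illustrative and not the GPT architecture (real LLMs use transformer networks over subword tokens), but it captures the same predictive principle, including why more data sharpens the estimates.

```python
# Toy illustration of next-token prediction: GPT-style models estimate the
# probability of each candidate continuation and pick (or sample) from that
# distribution. This bigram model is a drastic simplification of a transformer.
from collections import Counter, defaultdict

corpus = ("generative ai models generate text . "
          "generative ai models generate images . "
          "generative ai models generate code .").split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_distribution(word):
    """Return P(next | word) estimated from corpus frequencies."""
    counts = bigrams[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("generate"))
# {'text': 0.33..., 'images': 0.33..., 'code': 0.33...} -- a larger and more
# varied corpus would sharpen these estimates, which is the intuition behind
# "larger dataset, greater predictive accuracy".
```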
LLM technology has advanced to such a degree that a Google software engineer, Blake Lemoine, possibly mistook the responses from the company’s LLM, LaMDA (Language Model for Dialogue Applications), for sentience (i.e. the ability to experience feelings and sensations) (De Cosmo, 2022). This incident also demonstrates the ability of LLMs to pass the “Turing test”, once the gold standard for assessing whether machines could simulate intelligence and behaviour comparable to human beings (Turing, 1950); the ostensible success of LLMs has led to the test being described as broken, with industry experts urging the deployment of more relevant ones (Biever, 2023; Stone, 2023). Moreover, this outwardly significant progress has obfuscated the distinct possibility that current LLMs could be merely mimicking “self-awareness” from their training data. By extension, they could be characterised as highly sophisticated chatbots, incapable of generating truly original ideas, as they merely synthesise the information they have been trained on; physicist Michio Kaku has likened them to mere augmented recording devices that have attracted too much attention. Section 6 of this paper posits the deployment of quantum computing to address this inherent weakness (Hetzner, 2023).
However, there are others, such as AI researcher Eliezer Yudkowsky, who assert that we do not fully understand the inner workings of these LLMs and that these systems could be on the verge of achieving superhuman intelligence, far surpassing that of humanity. In this dystopian scenario, Yudkowsky argues that humanity could face an existential crisis and posits a potentially devastating one-sided conflict between humans and our cognitively superior AI progeny (Yudkowsky, 2023). An equally dystopian view was echoed by scientist Geoffrey Hinton, who described the potentially unabated advancement of these AI technologies as such: “If it gets to be much smarter than us, it will be very good at manipulation because it will have learned that from us, and there are very few examples of a more intelligent thing being controlled by a less intelligent thing” (Hinton, 2023, p.1). Nonetheless, there is a possibility that such fears are unwarranted and stem from the controversial belief that the advent of a superior civilisation would invariably result in the demise of its supposedly inferior counterpart, such as the disastrous encounter between the Spanish explorers and the Incas in the 16th century, which saw the near decimation of the latter (Diamond, 2005).
Regardless of whether they are mere augmented recording devices or machines on the cusp of attaining superhuman intelligence, their advent has the potential to upend the business model of higher educational institutions across the globe. In a 2023 BestColleges survey, approximately a fifth of surveyed students admitted to using ChatGPT or similar LLM technologies to complete their assignments or exams (Welding, 2023). This has inevitably triggered the following questions: Should the use of LLMs be banned from higher educational institutions? Or should higher educational institutions attempt to incorporate LLMs as an enabler in their curriculums?
While uncertainties continue to unfold, the only certainties are that LLMs are here to stay, and that they will continue to advance in sophistication and computational power (due to expanding data pools and increasingly powerful microchips). In the global workplace, the advent of GAI has also invariably unleashed considerable waves of anxiety and fear across the human workforce, although still not matching the experience of the Luddites when confronted with mechanised looms. This paper attempts to provide an exhaustive study of the impact of this seemingly “state-of-the-art” technology on today’s higher education institutions and on the subsequent recipient of their “products” (i.e. graduates), the global workplace, through the lens of open innovation dynamics.
2. Literature Review
Despite its seemingly sudden and disruptive rise in recent years, GAI technology is fundamentally based on computing technologies that have been evolving for approximately eight decades; major tech companies such as Google, Dell, Microsoft, Lenovo and IBM had been working on GAI and LLMs for years before OpenAI’s ChatGPT burst onto the scene in November 2022 (Smith-Goodson, 2023). Its origins can be traced to a report written by Alan Turing in 1948, which laid the framework for artificial neural networks, the “brains” powering these systems. In this seminal report, Turing explored the possibility of machines exhibiting humanlike intelligent behaviour, in an era where there was “an unwillingness to admit the possibility that mankind can have any rivals in intellectual power” (Turing, 1948, p.1). The term AI was subsequently coined in 1956 at a summer workshop at Dartmouth College, Hanover, New Hampshire. Moreover, the founders of AI delineated their vision as such: “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it” (McCarthy et al., 1955, p.2). Fortuitously, the invention of the microchip in 1958 by Jack Kilby, an engineer at Texas Instruments, was a major leap forward in mankind’s development of intelligence akin to our own (Miller, 2022). This positive loop was further reinforced by the invention of the first chatbot, “ELIZA”, by MIT scientist Joseph Weizenbaum in 1966, in an attempt to facilitate natural language conversation with computers (Weizenbaum, 1966).
Further, the development of “Graphics Processing Unit” (GPU) technology by computer scientist Ivan Sutherland in the 1960s, and its subsequent mass-market debut in the late 1990s, enabled the rise of today’s GAI technologies (and indeed all AI technologies) (Peddie, 2023). GPUs are well-suited to AI applications due to their remarkable ability to process several computations concurrently (known as “parallel processing”). They were initially used mainly in the video gaming market, with their early developers never envisaging their pivotal role in the development of AI technologies, highlighting the “unintended consequences of technological development” (Shadow, 2023).
Table 1
Examples of GAI models in the market. Source: Bradshaw et al. (2024), Hsiao (2024), Baidu Research (2023), OpenAI (2023a), OpenAI (2023b), Rawat (2023), Mistral AI (2024), David (2024), X.ai (2024). Each entry lists the model, its developer and its capabilities.

ChatGPT (OpenAI; Microsoft has invested US$13 billion in OpenAI, with rights to profit sharing despite not owning any equity): generates text and suggestions based on pattern recognition; also has expanding coding capabilities.
Bing Chat (Microsoft; runs on ChatGPT GPT-4 technology): generates text and suggestions based on pattern recognition; has limited but expanding coding capabilities.
Gemini (Google; renamed from Bard on February 8, 2024): generates text and suggestions based on pattern recognition; also has expanding coding capabilities.
ERNIE, Enhanced Representation through Knowledge Integration (Baidu): generates text and suggestions based on pattern recognition; has some (and expanding) coding capabilities; most effective if instructions are in Chinese, as it was trained largely on a Chinese-based dataset.
DALL-E (OpenAI): generates images from text.
Bing Image Creator (Microsoft; runs on DALL-E technology): generates images from text.
Codex (OpenAI): generates code from natural language instructions.
SourceAI (OpenAI): generates code and, by extension, software.
Hugging Face (Hugging Face, Inc.): code generation capabilities such as auto-completion and text summarising.
GitHub Copilot (OpenAI and GitHub): converts natural language suggestions into coding instructions across a variety of languages.
Mistral (Mistral AI; Microsoft has invested US$16 million to date, with no equity stake): generates text and suggestions in multiple languages based on pattern recognition; also has expanding coding capabilities; generates numerical code from text, making natural language processing easier (Mistral Embed).
Grok (X.ai): generates text and suggestions based on pattern recognition; also has expanding coding capabilities; claims superior long-context understanding and reasoning capabilities vis-à-vis its rivals.
Nonetheless,
the aforementioned technologies would not have resulted in today’s
GAI, if not for the development of the Internet, which provides a significant portion of their training data; research on the Internet
commenced in the late 1960s in the form of the “Advanced Research
Projects Agency Network” (ARPANET), a project funded by the U.S.
Department of Defence with the aim of enabling computers to commu-
nicate with one another on a single network (McLean, 2023).
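To make the “parallel processing” advantage of GPUs described above concrete, the short Python sketch below contrasts a serial, element-by-element computation with a single bulk, parallelisable kernel. NumPy runs on the CPU, so this is only an analogy for GPU execution, but the serial-versus-parallel contrast is the same one that makes GPUs well-suited to AI workloads.

```python
# Data parallelism in miniature: the same multiply-accumulate is applied
# independently to millions of values, so the work can be spread across many
# cores -- on a GPU, across thousands of them.
import time
import numpy as np

x = np.random.rand(2_000_000)
w = np.random.rand(2_000_000)

start = time.perf_counter()
serial = sum(a * b for a, b in zip(x, w))   # one element at a time
t_serial = time.perf_counter() - start

start = time.perf_counter()
bulk = float(np.dot(x, w))                  # one vectorised, parallelisable call
t_bulk = time.perf_counter() - start

print(f"serial: {t_serial:.3f}s, vectorised: {t_bulk:.4f}s, "
      f"results agree: {np.isclose(serial, bulk)}")
# Frameworks such as PyTorch or CUDA dispatch the vectorised form to GPU cores,
# which is what enables the training of today's large GAI models.
```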
As evidenced, GAI is an ensemble of ideas (spanning tech companies, academia and government research institutes) that has taken approximately 80 years to evolve into its current form (and is still rapidly evolving), with unprecedented and unpredictable impact on the global higher education landscape and workplace. This evolution is very much akin to that of an industry exposed to open innovation dynamics, where companies and industries have to utilise both internal and external knowledge and expertise to achieve sustainable growth (Chesbrough, 2006).
Nonetheless, the arrival of this AI interloper is not without its dangers. Seo et al. (2021) argued that for AI to be effectively integrated into our online learning systems, such systems should ensure easy interpretability and constant human involvement and feedback. However, in the case of GAI, humans very often do not understand the mechanics of its increasingly complex operations and behavioural patterns; Michael C. Frank, a researcher at Stanford University, likens attempts to understand the fundamental mechanics of LLMs to efforts in probing “alien intelligence” (Frank, 2023). This paper attempts to alleviate this lack of understanding through a thorough analysis of the latest advances in GAI and their ramifications. Further, in section six of this paper, the author also posits measures to augment the robustness of GAI, which is still plagued by issues of bias and accuracy.
3. Methodology
This paper deploys a case study approach complemented by an “open innovation” framework. The relevance of the case study approach is due to its efficacy at analysing phenomena emanating and evolving in a fluid real-life environment. In this context, the analysis concerns the potentially disruptive impact of a seemingly revolutionary technology on today’s higher educational institutes and the global workplace, with their entrenched norms and practices. The case study approach also facilitates the exploration of “alternative narratives” with greater effectiveness than traditional data-dependent econometric models where access to reliable data is not readily available, particularly in highly fluid situations (e.g. the continuous advancement of GAI) which are still rapidly evolving (Morck and Yeung, 2011). Further, this methodology allows researchers to identify the critical features of actual events, involving the utilisation of several evidence sources (Yin, 1981a, 1981b, 1984; Wong, 2023).
However, the methodology of this paper also acknowledges the importance of managing open innovation complexity as a means of delivering sustainable growth at both firm and industry levels (Yun, 2015). This is of particular importance to the information technology and AI industry, which the earlier literature review identified as having been on an eighty-year growth trajectory, resulting in today’s GAI. But for the key players (e.g. Microsoft, Google) to sustain their expanding business models, they would have to draw expertise and technologies from other stakeholders (i.e. policymakers, startups, academia) in this GAI innovation ecosystem (Yun, 2016; Regona et al., 2022).
4. Disruptor or enabler of the higher education landscape?
The viral proliferation of LLMs amongst students across the world’s higher education institutions is a foregone conclusion. LLMs such as ChatGPT have demonstrated the ability to pass exams at the University of Minnesota, albeit with only satisfactory grades. After using ChatGPT to generate answers for four real exams at the University of Minnesota Law School, the academics investigating the technology’s effectiveness proceeded to grade the tests (comprising 95 multiple choice questions and 12 essay questions) through blind marking. The investigation revealed that ChatGPT attained an overall performance on par with that of a C+ student (Choi et al., 2022). At the University of Pennsylvania’s Wharton School of Business, ChatGPT-3 delivered a marginally superior outcome, attaining a B to B- performance in the institution’s final MBA Operations Management core course (Terwiesch, 2023). Despite this initial lacklustre performance, the capabilities of LLMs have been advancing at a remarkable pace. For instance, later versions of ChatGPT (i.e. GPT-4) were able to deliver performances approximating the top 10% of candidates taking America’s Uniform Bar Exam, a significant improvement over earlier versions (i.e. GPT-3.5), which were only capable of passing, with grades in the bottom 10% (Kimmel, 2023; OpenAI, 2023).
But is the advent of this technology the opening of Pandora’s box or a more benign “perennial gale of creative destruction” (Schumpeter, 2010)? It all depends on whether you are an “AI-sceptic” or an “AI-optimist”. The former argue that the use of LLMs could erode students’ capacity to learn and acquire knowledge and subsequently hurt their ability to compete in the workforce post-graduation (de Fine Licht, 2023); in view of the nature of scientific inquiry, a healthy dose of scepticism is arguably a good thing (Sagan and Druyan, 1995). Further, they could urge an outright ban of LLMs on the campuses of higher educational institutes, in efforts to discourage cheating in exams or plagiarism. This measure is relatively draconian, as some “AI-optimists” would argue. For instance, while Australia’s leading research-intensive “Group of Eight” universities have reverted to “pen and paper” exams to prevent cheating, they continue to recognise LLMs’ immense value as a learning tool for students and urge assessment redesign as a means of dealing with the advent of LLMs; assessment redesign strategies would involve the introduction of more fieldwork, oral presentations, laboratory activities and internships (i.e. experiential learning) as assessment components, instead of relying on traditional essay assignments where the temptation to cheat via increasingly powerful LLMs is significantly higher (Cassidy, 2023).
Further, this LLM-augmented experiential learning trend could significantly enhance students’ learning experiences by compelling them to work more closely with organisations that are increasingly buffeted by advances in AI. Moreover, through this approach, students would acquire more industry-relevant skills and knowledge, which would invariably enhance their employability post-graduation. Similarly, educators could take advantage of the increasingly powerful compositional powers of LLMs to set examination and related assessment questions, in the process freeing up more time for research activities; this is a boon for educators working in “publish or perish” research-driven institutions. With the expanding automation of administrative tasks through the deployment of LLMs and other GAI tools, educators could also focus more on mentoring roles and Oxbridge tutorial-style meetings (a luxury in this day and age), in the process enriching the educational experience of students, which has been steadily eroded by the increasing corporatisation of universities worldwide (Alibašić et al., 2024).
Given the double-edged nature of LLMs’ impact on the educational landscape, educators need to manage their expanding presence deliberately, as there is an ongoing and arguably increasing risk that “AI-sceptics” could inadvertently obstruct critical GAI development. Moreover, there is a danger that “AI-scepticism” could devolve into “AI-pessimism”. Ironically, “AI-pessimists” are not always Luddites and could paradoxically include titans of the tech industry such as Elon Musk (co-founder of Tesla) and Steve Wozniak (co-founder of Apple), who have signed an open letter to pause development of LLMs more powerful than GPT-4 for at least six months, citing growing fears over the existential risks to humankind (Future of Life Institute, 2023). Some individuals in the “AI-pessimism” camp are also urging increased regulation of AI systems, without regard for the consequences. The logical questions would be: “Is it possible to regulate scientific progress?” and, “if it is possible, what would be the ramifications, intended or otherwise?”. Imagine if governments, fearful of the power and potential of physics, had sought to regulate the field at the beginning of the 20th century. Under the stranglehold of this regulation, humanity would not have mastered nuclear fission through the Manhattan Project, allowing us to deliver sustainable clean energy to meet our ever-increasing energy needs. Pessimists could argue that this discovery killed hundreds of thousands of people in Japan, but on the flip side, optimists could make the argument that it hastened the end of the devastating Second World War in a manner which conventional weapons could not achieve, in the process avoiding as many as a million casualties (Bernstein, 1998).
5. Job obliteration or inevitable “creative destruction” in the
global workplace
The advent of GAI has invariably generated widespread fears of a global job market apocalypse, particularly amongst college-educated white-collar employees, traditionally thought to be more immune to the job-decimating effects of automation than their blue-collar counterparts in manufacturing and agriculture. Moreover, this potentially large-scale supplantation of lucrative white-collar employment could upend traditional beliefs that professional jobs involving creativity and emotional intelligence have ironclad security (Miller and Cox, 2023). Further stoking these dystopian fears is research from OpenAI (i.e. one of the alleged “transgressors”) scientist Tyna Eloundou and her colleagues, which indicates that “around 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs while approximately 19% of workers may see at least 50% of their tasks impacted” (Eloundou et al., 2023). In truth, it has been proven time and again throughout history that technology, however revolutionary, requires a relatively long period to diffuse through the economy. This is evidenced in the automotive industry. Although the internal combustion engine powering automobiles was invented in 1879, it took decades for the revolutionary technology to evolve into the automobile, and automobiles attained critical mass only circa 1913, triggered by the price reductions enabled by Henry Ford’s moving assembly line (Gordon, 2016). This relatively gradual pace of technological progress is captured succinctly in Robert Solow’s sardonic quip, made nearly three decades after the invention of the microchip: “You can see the computer age everywhere but in the productivity statistics” (Solow, 1987, p. 36).
Some analysts have argued that the mass adoption of GAI would be significantly faster than that of earlier revolutionary technologies owing to its ease of use and low cost (The Economist, 2023); for instance, OpenAI’s GPT-3.5 is available for free, while the more advanced GPT-4 involves an accessible US$20 monthly subscription fee (OpenAI, 2024; The Business Times, 2024). However, these seeming advantages may not necessarily translate into mass market adoption. A survey in April 2023 by machine learning observability platform Arize AI indicated that since the launch of ChatGPT in November 2022, about one in ten machine learning teams surveyed had adopted the use of LLMs, with approximately another two-fifths planning to deploy them within a year. However, there remain another two-fifths of respondents who indicated they have no plans to field this technology in their operations over the coming year. Their decision to “err on the side of caution” is driven by privacy and security concerns (Arize AI, 2023). A string of data-leak incidents involving LLMs seems to validate their circumspection. In February 2023, major financial institutions such as Bank of America, Citigroup, and JP Morgan Chase all imposed temporary bans on the use of LLMs amongst their employees. Subsequently, in April 2023, some staff at Korea’s Samsung Electronics inadvertently compromised their company’s intellectual property security by uploading internal source code onto ChatGPT, which resulted in a blanket ban on LLMs in the firm (encompassing company-owned computers, mobile devices and its internal networks) in May (Gurman, 2023). Another factor (albeit one of a technical nature) obstructing the widespread adoption of LLMs across industries is the possibility of the technology sometimes “hallucinating”; “hallucinating” is industry speak (amongst AI professionals) for LLMs making things up or producing content that is factually inaccurate (Neugebauer, 2023). For companies making commercial decisions based on faulty intelligence generated by LLMs, the consequences could involve staggering losses. In the case of fast-moving financial services, which involve split-second decisions, investment decisions based on fictitious or inaccurate information generated by LLMs would lead to losses that are not only disastrously significant but also almost painfully immediate, with little or no room for remedy. However, in the marketing function, the management of companies tends to be relatively more receptive to GAI deployment, due to the ability of these systems to deliver a superior personalised experience; by the first quarter of 2023, nearly three-quarters of US companies had deployed GAI tools (encompassing chatbots) in their marketing activities (Kshetri et al., 2024; Dencheva, 2023).
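Returning to the hallucination risk raised above, one common mitigation is to ground model outputs in a curated, trusted data store before acting on them. The Python sketch below is a minimal illustration of that idea; the fact store, entities and values are hypothetical placeholders, and production systems would instead retrieve from vetted documents or databases.

```python
# Minimal "grounding" check: before a generated statement feeds a commercial
# decision, verify it against trusted records. Anything absent from, or
# contradicting, the store is flagged rather than trusted.
TRUSTED_FACTS = {
    ("ACME Corp", "fy2023_revenue"): "US$4.2bn",  # hypothetical example values
    ("ACME Corp", "ceo"): "J. Smith",
}

def verify_claim(entity: str, attribute: str, generated_value: str) -> str:
    """Classify a model-generated claim against the trusted store."""
    known = TRUSTED_FACTS.get((entity, attribute))
    if known is None:
        return "UNSUPPORTED: no trusted source; treat as possible hallucination"
    if known != generated_value:
        return f"CONTRADICTED: trusted source says {known}"
    return "SUPPORTED"

print(verify_claim("ACME Corp", "fy2023_revenue", "US$9.9bn"))  # CONTRADICTED
print(verify_claim("ACME Corp", "founded", "1999"))             # UNSUPPORTED
```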
Further obstructing the rapid deployment of LLMs and GAI technologies across industries are the innate biases embedded within their algorithms. These biases include discrimination against people on the basis of gender, race, and skin tone. For instance, Srinivasan and Uchino (2021) revealed that an AI generative art program, AIportraits, had taken the liberty of lightening the skin tone of a biracial actress, Tessa Thompson, in its portrait rendition. This discrimination could even extend to political affiliations and religion, with potentially deleterious impacts on elections and public trust; Motoki et al. (2023) indicate that ChatGPT exhibits significant political bias towards members of the Democratic Party in the United States, which may not come as a surprise, as many of the developers of these technologies tend to be wealthy liberal Democrats based in Silicon Valley. The biases in these LLMs are an unfortunate reflection of the real-world biases existing in the data on which they are trained; this inherent weakness of LLMs is evidenced in the following statement by researchers Skylar Kolisko and Carolyn Jane Anderson: “Although these models are powerful, they have also been shown to learn toxic behaviour and harmful social biases from the massive amounts of uncurated text data on which they are trained” (Kolisko and Anderson, 2023, p.15825). To alleviate the seemingly intractable biases present in datasets, GAI developers would need to devote more time and resources to cleansing them, but this could be a Sisyphean task as our datasets are expanding at an exponential rate; over a relatively brief seven years (2018–2025), the amount of global real-time data is expected to increase by a factor of ten, from five zettabytes to 51 zettabytes (Taylor, 2023). Moreover, these developers would have to vigilantly monitor the interactions between users and LLMs for evidence of bias, which is again a monumental task, as the number of questions posed to LLMs is only expected to increase. LLM users have also been urged to provide feedback on biases, as part of “reinforcement learning from human feedback” (Heikkilä, 2023). Further, we could design bias-aware algorithms capable of evaluating the various kinds of bias and subsequently alleviating their impact on GAI’s output. But the first principle (see Fig. 1) in designing a relatively bias-free GAI would be to always collect datasets that are as diverse as possible, with the greatest representation across various demographics (i.e. dataset augmentation) (Ferrara, 2024).

Fig. 1. Alleviating bias in datasets.
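As a minimal sketch of that first principle, the Python snippet below audits how groups are represented in a training set and resamples the under-represented one toward parity. The group labels are illustrative placeholders; real pipelines audit many attributes (gender, race, skin tone, language) and often collect genuinely new data rather than resampling.

```python
# Dataset augmentation in miniature: measure group representation, then
# oversample under-represented groups toward parity before training.
import random
from collections import Counter

random.seed(0)
dataset = [{"group": "A"}] * 900 + [{"group": "B"}] * 100  # skewed 9:1

counts = Counter(example["group"] for example in dataset)
target = max(counts.values())                              # parity target

balanced = list(dataset)
for group, n in counts.items():
    pool = [ex for ex in dataset if ex["group"] == group]
    # Resample with replacement until each group reaches the target size;
    # ideally this step is replaced by collecting new, more diverse data.
    balanced += random.choices(pool, k=target - n)

print(Counter(ex["group"] for ex in balanced))  # Counter({'A': 900, 'B': 900})
```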
Overcoming the confidentiality and technical issues and innate biases hampering the widespread deployment of LLMs and related GAI technologies would involve a considerable amount of time and resources; for instance, Samsung Electronics, in efforts to deal with the leaking of internal source code through the use of ChatGPT, has announced plans to develop its own internal LLM (Kim, 2023). Against this challenging backdrop, it is likely that they will have to endure the same gradual adoption pathways as earlier revolutionary technologies. Based on the aforementioned discussions, it would seem that blue-collar jobs, involving manual labour, are the ones that could be spared the potential ravages of GAI. While that could be true to some extent,
blue-collar employees are still not spared the prospect of displacement by other forms of automation brought about by advances in AI. For instance, why would farmers require existing or additional workers to harvest or plough their lands if they have access to autonomous tractors capable of performing these tasks (or more) 24/7, with no need for toilet or meal breaks, and which, more importantly, would not ask for pay increments or demand the formation of unions to safeguard their rights (John Deere, 2023)? It is increasingly evident that AI’s revolutionary impact does not differentiate between white-collar and blue-collar occupations. As long as an occupation has clearly defined procedures (i.e. a formulaic system), AI could potentially replicate that role with relative ease; regardless of whether it is an operating theatre or today’s modern farming operations, the personnel involved tend to adhere to certain established protocols (Ford, 2016). In view of this premise, the occupations that would be spared the job-decimating effects of mass AI adoption would be the unstructured ones, which involve considerable nonlinear human interaction (e.g. counsellor, psychologist, teacher, insurance advisor); human behavioural patterns are considered nonlinear due to the presence of significant non-verbal cues and emotions which are still incomprehensible to the pattern-recognition capabilities of AI systems (Bishop, 2021). Given the potentially seismic shift in the global workplace (encompassing both blue- and white-collar professions) wrought by this AI revolution, there should be added institutional emphasis on preparing both today’s and tomorrow’s workforce for careers in the so-called unstructured professions; even in the information technology industry, greater emphasis should be placed on understanding the inner workings of increasingly sophisticated GAI and other AI systems instead of merely attaining programming efficiency (i.e. a relatively dispensable structured skillset). Further, the steady adoption of AI systems across companies and industries would invariably result in the need for considerable economic reskilling and upskilling, as existing workers may lack the critical skills needed to manage this sea change in operations and management; this could in turn affect productivity in the short term, as reskilling and upskilling require time and investment (Lăzăroiu and Rogalska, 2023).
6. GAI: Game changer or mere hype?
From the earlier sections, it is evident that GAI technologies have plenty of room for improvement, as they are still plagued with accuracy and technical issues and seemingly intractable biases. Until these issues are resolved, LLMs could remain a platform for the propagation of misinformation and falsehoods, thus hurting their prospects of having a truly transformative impact on academia and the global workplace (Margolin, 2023). In terms of sheer accuracy, the Google search engine, launched over two decades earlier in August 1998, still surpasses LLMs like ChatGPT, in stark contrast to proclamations from “AI-optimists” that LLMs are poised to supplant Google and other search engines. Google’s superior accuracy and objectivity are attributable to its ability to focus on multiple parameters such as relevance, credibility and popularity (Google, 2023). Moreover, Google’s algorithm targets a significantly wider range of data types, encompassing news, images, videos, and maps. ChatGPT does not possess this critical technical advantage at this point, which limits its accuracy and objectivity. In addition, the free and invariably most widely accessed version of ChatGPT currently lacks access to the latest data (its training data extends only to a relatively archaic January 2022), further compromising its factual rigour and relevance. Advanced versions of ChatGPT (i.e. GPT-4 and ChatGPT Team) with more up-to-date training data are available to users willing to pay the respective US$20 and US$25 monthly subscription fees. Its rival, Google’s Gemini, also has a similar package (OpenAI, 2024; Hsiao, 2024). Companies, with their IT budgets, tend to be receptive to OpenAI’s paid subscription offerings, but it is unclear whether the mass market user would be willing to make the financial commitment (The Business Times, 2024). Such pay-to-use measures could also widen the digital divide between developed and developing countries, as denizens of the latter could find the monthly subscription fees prohibitive.
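To illustrate the multi-parameter ranking contrasted with LLMs above, the toy Python sketch below scores documents on several signals and ranks them by a weighted combination. The signals, weights and URLs are invented for illustration; production search engines combine hundreds of proprietary signals.

```python
# Toy multi-signal ranking: score each document on relevance, credibility and
# popularity, then sort by a weighted sum of those signals.
WEIGHTS = {"relevance": 0.5, "credibility": 0.3, "popularity": 0.2}

documents = [
    {"url": "https://example.org/a", "relevance": 0.9, "credibility": 0.4, "popularity": 0.7},
    {"url": "https://example.org/b", "relevance": 0.7, "credibility": 0.9, "popularity": 0.5},
]

def score(doc: dict) -> float:
    """Weighted combination of the document's ranking signals."""
    return sum(WEIGHTS[signal] * doc[signal] for signal in WEIGHTS)

for doc in sorted(documents, key=score, reverse=True):
    print(f"{doc['url']}: {score(doc):.2f}")
# The credible-but-less-relevant document edges ahead (0.72 vs 0.71), showing
# how blending signals tempers raw relevance with credibility and popularity.
```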
Further, defenders of LLMs argue that ChatGPT and its ilk are mainly designed to generate seemingly cogent human-like responses, on the back of years of training data, and are not meant to rival specialised search engines such as Google (Murphy, 2023). This tenuous position stands in contradiction to the earlier fears of tech luminaries urging a pause to seemingly robust LLM development poised to sweep aside earlier technological developments (including search engines) (Future of Life Institute, 2023); earlier in 2023, Google executives were also fearful that the onset of LLMs would disrupt their immensely profitable US$150 billion annual search business (for all intents and purposes, a monopoly) and relegate their prized search engine to the ash heap of history (Liberatore, 2023). In the spirit of “creative destruction”, where emerging and increasingly robust business models are expected to displace obsolete predecessors, AI developers such as Google DeepMind (a Google division) are actively working on strengthening the accuracy, objectivity and overall performance of their GAI technology. Google DeepMind’s Gemini LLM is predicated on the company’s unique technology, which involves instructing computer programmes in mastering complex games like Go, with the express purpose of outperforming existing LLMs in the marketplace; through its preternatural ability to master sophisticated strategies via intuitive pattern recognition, planning and problem solving, the company’s AlphaGo programme defeated Go world champion Lee Sedol 4–1 in a series of matches in 2016 (Paleja, 2023; Google DeepMind, 2023).
Moreover, advances in quantum computing could bolster the performance of GAI technologies. As opposed to the traditional classical computers which run almost all of today’s AI systems, quantum computers are supposedly capable of solving an exponentially greater number of problems simultaneously than their classical counterparts. This is due to their remarkable nonbinary properties (known as “superposition”), which allow the subatomic particles encoding our electronic data to share their properties and strengths, a capability that far surpasses the performance of today’s classical computers, which are basically binary in nature (either 1 or 0); theoretically, the binary nature of classical systems greatly limits their problem-solving capabilities vis-à-vis their non-binary challengers. Mathematical proofs posit that quantum computers could theoretically process an almost unlimited number of problems concurrently, positioning them as ideal partners to LLMs, which have to deal with incredible levels of uncertainty (Brooks, 2023; Lin et al., 2023); existing LLMs that run on today’s classical computers are incapable of robust uncertainty estimates for their responses (Sankararaman et al., 2022). However, studies in quantum mechanics dictate that the “superposition” undergirding the game-changing problem-solving capabilities of today’s quantum computers is not technically sustainable, and these machines quickly lose the remarkable multi-processing capability (in sudden and irreversible collapses) which differentiates them from their classical peers (Reich, 2013); quantum computers still remain proof-of-concept technologies. Without the advent of genuine quantum computers with “sustainable superpositions”, it is possible that LLMs will have to run on binary-based classical systems, which inherently limits their problem-solving and predictive capabilities.
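As a worked illustration of superposition and its collapse, the short NumPy sketch below simulates a single qubit: its state is a pair of complex amplitudes, a Hadamard gate places it in an equal superposition, and “measurement” collapses it to 0 or 1 with probabilities given by the squared amplitude magnitudes. This is only a classical simulation of the mathematics; an actual quantum device holds the superposition physically, and only fleetingly, as noted above.

```python
# One qubit, simulated classically: amplitudes, superposition, and collapse.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)                        # definite |0> state
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

state = H @ ket0                 # equal superposition of |0> and |1>
probs = np.abs(state) ** 2
print(probs)                     # [0.5 0.5]

rng = np.random.default_rng(seed=1)
outcome = rng.choice([0, 1], p=probs)   # measurement collapses the state
print(f"measured {outcome}; the superposition is irreversibly lost")
# n qubits carry 2**n amplitudes simultaneously -- the source of the
# exponential parallelism claimed for quantum computers.
```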
7. Regulating GAI development
The seemingly increasing capabilities of GAI and other AI platforms have already compelled policymakers to examine how the public sector could harness their growing powers. Many governments today are still struggling to establish frameworks and guidelines to effectively introduce GAI into their respective public sectors. This lacklustre performance could be driven by governments’ uneasiness and lack of “digital maturity”, as they fear widespread AI adoption in their public sectors could erode their democratic institutions; GAI’s tendency to “hallucinate” could result in misinformation that negatively impacts government policies and political outcomes (i.e. elections) (Pautz, 2023; Feldstein, 2023). Further, it could be argued that the rate of GAI’s progress far outpaces policymakers’ attempts to understand, integrate and possibly regulate it (Alhosani and Alhashmi, 2024); this is unsurprising, given that the immense financial rewards in the technology sector not only draw society’s best and brightest AI developers but also procure the best legal advice in circumventing regulatory supervision. Moreover, antitrust regulators are closely monitoring the recent investments by tech behemoths such as Microsoft in GAI companies such as OpenAI and Mistral. However, this intensifying antitrust scrutiny has been vigorously dismissed by Microsoft as a potentially extraneous stranglehold on GAI development, as its investments supposedly do not involve any shareholdings in these firms (The Economist, 2024). Notwithstanding the veracity of Microsoft’s assertion, it could be argued that under the principles of open innovation dynamics (which emphasise drawing on internal and external expertise), tech firms have little choice but to make these investments in GAI startups to secure much-needed expertise and technologies. The difficulty certainly lies in treading the fine line between securing critical expertise and engaging in monopolistic behaviour. In a further nod to open innovation dynamics, the major tech firms and leading startups (i.e. OpenAI), at a recent September 2023 closed-door forum in Washington DC, also openly expressed interest in working with policymakers to ensure the effectiveness and accuracy of GAI (Chesbrough, 2006; Clayton, 2023). The rationale for tech companies wanting to work closely with policymakers and regulators is intuitive; if they are cognisant of the intricacies of regulatory frameworks and upcoming government policies, it would be relatively easier to spur innovation and grow their respective companies, and by extension, the industry.
8. Conclusion
While GAI technologies still have considerable technical challenges to surmount, they have nonetheless made their presence felt in higher education and the global workplace. Their rise is further complicated by the intense “tug of war” between conflicting interests. In academia, students view them as an enabler, freeing them from the humdrum of rote-learning, while instructors could perceive them as disruptors that facilitate cheating and plagiarism and undermine the once sacrosanct grading process; some educators could view LLMs more favourably, as they could free them from the “humdrum” tasks of writing assessment papers, giving them more time for research and curriculum development. As for the global workplace, employees are also increasingly hostile to these technologies, which they perceive as corrosive to their job security. Conversely, employers could view them with increasing favour, due to their potential to significantly reduce costs by slashing headcount. This unrelenting trend is also evident in the creative industries, once the preserve of humans with our seemingly incomparable creativity, which was hitherto perceived to be beyond the binary capabilities of cold, logical computer systems; in May 2023, the Writers Guild of America (i.e. the union of film and television writers) called for strikes across Hollywood, demanding that studios (i.e. the employers) impose limits on the use of GAI for writing the scripts of television shows and movies (Straub, 2023). There is a possibility that once the technical kinks in GAI are ironed out, it could pose a viable challenge to once irreplaceable human intellect, logic and creativity. But this could take somewhat longer than “AI-optimists” hope; for instance, it took nearly seven decades for the Wright brothers’ “proof-of-concept” flight at Kitty Hawk (a 12-second flight that covered approximately 36 m) to evolve into the globe-trotting Boeing 747, which delivered a remarkable growth in air travel, tourism and freight delivery (NASA, 2018). However, there is only one certainty in this “tug of war” between AI and humans: GAI development is already set on an irrevocable path, and there is no going back to business as usual. Moreover, there is another critical issue which bedazzled industry watchers have often failed to explain, namely that these potentially era-defining technologies are now in the hands of a few major companies (essentially a cartel), giving them greater capacity to dominate our increasingly AI-infused future. In this situation, open innovation dynamics compels the remaining stakeholders (e.g. policymakers, regulators, academics) to work closely with this highly influential cartel to refine and regulate this increasingly powerful technology, so as to develop a more inclusive and sustainable society.
Author contributions
I am the sole author of this research paper.
Funding
This research paper is not funded by any institution or grant.
Ethical statement
Ethical Statement is not applicable to this research paper as it does
not involve any animal or human test subjects. This research paper is
strictly my own work.
CRediT authorship contribution statement
Wilson Kia Onn Wong: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing.
Declaration of Competing Interest
The author declares that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
References
Alhosani, K., Alhashmi, S.M., 2024. Opportunities, challenges, and benefits of AI
innovation in government services: a review. Discov. Artif. Intell. 4, 18. https://doi.
org/10.1007/s44163-024-00111-w.
Alibašić, H., Atkinson, L.C., Pelcher, J., 2024. The liminal state of academic freedom:
navigating corporatization in higher education. Discov. Educ. 3, 7. https://doi.org/
10.1007/s44217-024-00086-x.
Arize AI., 2023. Survey: Massive Retooling Around Large Language Models Underway,
〈https://arize.com/blog/survey-massive-retooling-around-large-language-mode
ls-underway/〉(accessed 22 August 2023).
Baidu Research., 2023. Introducing ERNIE 3.5: Baidu’s Knowledge-Enhanced Foundation
Model Takes a Giant Leap Forward, 〈http://research.baidu.com/Blog/index-view?
id=185〉(accessed 13 February 2024).
Bernstein, B.J., 1998. Truman and the A-Bomb: Targeting Noncombatants, Using the
Bomb, and His Defending the ‘Decision’. J. Mil. Hist. 62 (3), 547–570.
Biever, C., 2023. ChatGPT broke the Turing test — the race is on for new ways to assess
AI. Nat 619, 686–689. https://doi.org/10.1038/d41586-023-02361-7.
Bishop, J.M., 2021. Artificial intelligence is stupid and causal reasoning will not fix it.
Front. Psych. 11, 513474 https://doi.org/10.3389/fpsyg.2020.513474.
Bradshaw, T., Murgia, M., Hammond, G., and Hodgson, C. 2024, How Microsoft’s
multibillion-dollar alliance with OpenAI really works. 〈https://www.ft.com/content
/458b162d-c97a-4464-8afc-72d65afb28ed〉(accessed 9 February 2024).
Brooks, M., 2023. Quantum computers: what are they good for? Nat 617, S1–S3. https://
doi.org/10.1038/d41586-023-01692-9.
Cassidy, C., 2023. Australian universities to return to ‘pen and paper’ exams after
students caught using AI to write essays. 〈https://www.theguardian.com/australia
-news/2023/jan/10/universities-to-return-to-pen-and-paper-exams-after-students
-caught-using-ai-to-write-essays〉(accessed 17 August 2023).
Chesbrough, H.W., 2006. Open innovation: The new imperative for creating and
profiting from technology. Harvard Business Press, Boston, Massachusetts.
Choi, J.H., Hickman, K.E., Monahan, A. and Schwarcz, D.B., 2022. ChatGPT Goes to Law
School. 71 J. of Leg. Edu. 387. 〈https://dx.doi.org/10.2139/ssrn.4335905〉.
Clayton, J., 2023. ’Overwhelming consensus’ on AI regulation – Musk. 〈https://www.
bbc.com/news/technology-66804996〉(accessed 6 April 2024).
David, E., 2024. Microsoft’s Mistral deal beefs up Azure without spurning OpenAI. 〈http
s://www.theverge.com/24087008/microsoft-mistral-openai-azure-europe〉
(accessed 7 April 2024).
De Cosmo, L., 2022. Google Engineer Claims AI Chatbot Is Sentient: Why That Matters.
〈https://www.scientificamerican.com/article/google-engineer-claims-ai-cha
tbot-is-sentient-why-that-matters/〉(accessed 14 August 2023).
de Fine Licht, K., 2023. Integrating Large Language Models into Higher Education:
Guidelines for Effective Implementation. Comp. Sc. Math. Forum 8 (1), 65. https://
doi.org/10.3390/cmsf2023008065.
Dencheva, A., 2023. Share of marketers using generative artificial intelligence (AI) in
their companies in the United States as of March 2023. 〈https://www.statista.com/st
atistics/1388390/generative-ai-usage-marketing/〉(accessed 5 April 2024).
Diamond, J.M., 2005. Guns, germs and steel: a short history of everybody for the last
13,000 years. Vintage, London.
Eloundou, T., Manning, S., Mishkin, P., and Rock, D., 2023. GPTs are GPTs: An Early
Look at the Labor Market Impact Potential of Large Language Models, working
paper, arXiv preprint, 23 March 2023. 〈https://doi.org/10.48550/arXiv.2303.10130
〉.
Feldstein, S., 2023. The Consequences of Generative AI for Democracy, Governance and
War. Surviv 65 (5), 117–142. https://doi.org/10.1080/00396338.2023.2261260.
Ferrara, E., 2024. Fairness and bias in artificial intelligence: A brief survey of sources,
impacts, and mitigation strategies. Sci 6 (1), 3. https://doi.org/10.3390/
sci6010003.
Ford, M., 2016. Rise of the Robots: Technology and the Threat of a Jobless Future, Basic
Books, New York.
Frank, M.C., 2023. Baby steps in evaluating the capacities of large language models. Nat.
Rev. Psychol. 2, 451–452. https://doi.org/10.1038/s44159-023-00211-x.
Future of Life Institute, 2023. Pause Giant AI Experiments: An Open Letter. 〈https://futureoflife.org/open-letter/pause-giant-ai-experiments/〉 (accessed 16 August 2023).
Google, 2023. From the garage to the Googleplex. 〈https://about.google/our-story/〉
(accessed 24 August 2023).
Google DeepMind, 2023. The Challenge Match. 〈https://www.deepmind.com/rese
arch/highlighted-research/alphago/the-challenge-match〉(accessed 26 August
2023).
Gordon, R.J., 2016. The Rise and Fall of American Growth: The U.S. Standard of Living
Since the Civil War, Princeton University Press, Princeton, New Jersey 08540.
Gurman, M., 2023. Samsung Bans Staff’s AI Use After Spotting ChatGPT Data Leak.
https://www.bloomberg.com/news/articles/2023-05-02/samsung-bans-chatgpt-
and-other-generative-ai-use-by-staff-after-leak?srnd=technology-
vp&leadSource=uverify%20wall (accessed 22 August 2023).
Heikkilä, M., 2023. How OpenAI is trying to make ChatGPT safer and less biased. 〈http
s://www.technologyreview.com/2023/02/21/1068893/how-openai-is-trying-to-
make-chatgpt-safer-and-less-biased/〉. (accessed 23 August 2023).
Hetzner, C., 2023. Top physicist says chatbots are just ‘glorified tape recorders,’ and
predicts a different computing revolution is ahead. 〈https://fortune.com/2023
/08/14/michio-kaku-chatbots-glorified-tape-recorders-predicts-quantum-computin
g-revolution-ahead/〉 (accessed 20 August 2023).
Hinton, G., 2023. Interview with CNN’s Jake Tapper, 2 May. 〈https://edition.cnn.com/2
023/05/02/tech/hinton-tapper-wozniak-ai-fears/index.html〉. (accessed 2
September 2023).
Hsiao, S., 2024. Bard becomes Gemini: Try Ultra 1.0 and a new mobile app today. 〈http
s://blog.google/products/gemini/bard-gemini-advanced-app/〉(accessed 9
February 2024).
John Deere., 2023. The Next Giant Leap in Ag Technology. 〈https://www.deere.com/en
/autonomous/〉(accessed 22 August 2023).
Kim, H.-B., 2023. Samsung Electronics to adopt own AI amid ChatGPT security concerns.
〈https://www.koreatimes.co.kr/www/tech/2023/08/129_352712.html〉(accessed
22 August 2023).
Kimmel, L., 2023. ChatGPT Passed the Uniform Bar Examination: Is Artificial Intelligence
Smart Enough to be a Lawyer? 〈https://international-and-comparative-law-review.
law.miami.edu/chatgpt-passed-the-uniform-bar-examination-is-artificial-intelligen
ce-smart-enough-to-be-a-lawyer/〉 (accessed 16 August 2023).
Kolisko, S., Anderson, C.J., 2023. Exploring Social Biases of Large Language Models in a
College Artificial Intelligence Course. Proc. of AAAI Conf. Artif. Intell. 37 (13),
15825–15833. https://doi.org/10.1609/aaai.v37i13.26879.
Kshetri, N., Dwivedi, Y.K., Davenport, T.H., Panteli, N., 2024. Generative artificial
intelligence in marketing: Applications, opportunities, challenges, and research
agenda. Int. J. Inf. Mgt. 75, 102716. https://doi.org/10.1016/j.
ijinfomgt.2023.102716.
Lăzăroiu, G., Rogalska, E., 2023. How generative artificial intelligence technologies
shape partial job displacement and labor productivity growth. Oeconomia Copernic.
14 (3), 703–706. https://doi.org/10.24136/oc.2023.020.
Liberatore, S., 2023. Could ChatGPT replace Google as the world’s go-to search engine?
Google declares ’code red’ over AI’s threat to its $150billion-dollar-a-year business.
〈https://www.dailymail.co.uk/sciencetech/article-11781625/What-ChatGPT-re
place-Google-need-know.html〉(accessed 24 August 2023).
Lin, Z., Trivedi, S. and Sun, J.-M., 2023. Generating with Confidence: Uncertainty
Quantication for Black-box Large Language Models, arXiv preprint arXiv:
2305.19187. 〈https://doi.org/10.48550/arXiv.2305.19187〉.
Lin, T.-Y., Wang, Y.-X., Liu, X.-Y., Qiu, X.-P., 2022. A survey of transformers. AI Open 3,
111–132. https://doi.org/10.1016/j.aiopen.2022.10.001.
Margolin, S., 2023. How to Prepare for AI-Generated Misinformation. 〈https://insight.ke
llogg.northwestern.edu/article/how-to-prepare-for-ai-generated-misinformation〉
(accessed 26 August 2023).
McCarthy, J., Minsky, M.L., Rochester, N., and Shannon, C.E., 1955. A Proposal for the
Dartmouth Summer Research Project on Artificial Intelligence. 〈http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf〉 (accessed 15 August 2023).
McLean, C., 2023. Who invented the Internet? Everything you need to know about the
history of the Internet. 〈https://www.usatoday.com/story/tech/2022/08/28/when
-was-internet-created-who-invented-it/10268999002/〉(accessed 15 August 2023).
Mearian, L., 2023. What are LLMs, and how are they used in generative AI?. 〈htt
ps://www.computerworld.com/article/3697649/what-are-large-language-mod
els-and-how-are-they-used-in-generative-ai.html〉(accessed 14 August 2023).
Miller, C., 2022. Chip War: The Fight for the World’s Most Critical Technology, Simon &
Schuster, Inc., New York.
Miller, C.C., Cox, C., 2023. In Reversal Because of A.I., Office Jobs Are Now More at Risk.
〈https://www.nytimes.com/2023/08/24/upshot/artificial-intelligence-jobs.html〉
(accessed 13 February 2024).
Mistral AI., 2024. Mistral technology. 〈https://mistral.ai/technology/#models〉
(accessed 7 April 2024).
Morck, R., Yeung, B., 2011. Economics, History and Causation. Bus. Hist. Rev. 85 (1),
39–63. https://doi.org/10.2139/ssrn.1734504.
Motoki, F., Neto, V.P., Rodrigues, V., 2023. More human than human: measuring
ChatGPT political bias. Public Choice. https://doi.org/10.1007/s11127-023-01097-
2.
Murphy, C., 2023. Google Search Versus ChatGPT - ChatGPT was never meant to be a
search engine. 〈https://www.bostondigital.com/insights/google-search
-versus-chatgpt-chatgpt-was-never-meant-be-search-engine〉(accessed 24 August
2023).
NASA., 2018. 115 Years Ago: Wright Brothers Make History at Kitty Hawk. 〈https
://www.nasa.gov/feature/115-years-ago-wright-brothers-make-history-at-kitty-h
awk〉(accessed 28 August 2023).
Neugebauer, F., 2023. Understanding LLM Hallucinations. 〈https://towardsdatascience.
com/llm-hallucinations-ec831dcd7786〉(accessed 22 August 2023).
OpenAI., 2023a. Dall-E. 〈https://labs.openai.com/〉(accessed 20 August 2023).
OpenAI., 2023b. OpenAI Codex 〈https://openai.com/blog/openai-codex〉(accessed 20
August 2023).
OpenAI., 2024. Models. 〈https://platform.openai.com/docs/models/overview〉
(accessed 7 February 2024).
Paleja, A., 2023. Google DeepMind to power its AI with AlphaGo-like features to ght
ChatGPT. 〈https://interestingengineering.com/culture/google-deepmind-ai-alphag
o-chatgpt〉(accessed 26 August 2023).
Pautz, H., 2023. Policy making and artificial intelligence in Scotland. Contemp. Soc. Sci.
18 (5), 618–636. https://doi.org/10.1080/21582041.2023.2293822.
Peddie, J., 2023. The History of the GPU - Steps to Invention, Springer, Cham.
Rawat, M., 2023. Top Generative AI Tools in Code Generation/Coding. 〈https://www.
marktechpost.com/2023/07/17/top-generative-ai-tools-in-code-generation-codin
g-2023/#:~:text=These%20technologies%20use%20machine%20learning,by%
20automating%20repetitive%20coding%20components〉. (accessed 20 August
2023).
Regona, M., Yigitcanlar, T., Xia, B., Li, R.Y.M., 2022. Opportunities and adoption
challenges of AI in the construction industry: A PRISMA review. J. Open. Innov.
Technol. Mark. Complex. 8 (1), 45. https://doi.org/10.3390/joitmc8010045.
Reich, E.S., 2013. Physicists snatch a peep into quantum paradox. Nat 2013. https://doi.
org/10.1038/nature.2013.13899.
Sagan, C., Druyan, A., 1995. The demon-haunted world: science as a candle in the dark.
Random House, New York.
Sankararaman, K.A., Wang, S.-N., Fang, H., 2022. BayesFormer: Transformer with
Uncertainty Estimation, arXiv:2206.00826. 〈https://doi.org/10.48550/arXiv.220
6.00826〉.
Schumpeter, J.A., 2010. Capitalism, Socialism and Democracy. Routledge, London.
Seo, K.-W., Tang, J., Roll, I., Fels, S., Yoon, D.-W., 2021. The impact of artificial
intelligence on learner–instructor interaction in online learning. Int. J. Educ.
Technol. High. Educ. 18, 54. https://doi.org/10.1186/s41239-021-00292-9.
Shadow., 2023. The History of Gaming: The evolution of GPUs. 〈https://shadow.tech/en
-GB/blog/history-of-gaming-gpus〉(accessed 21 August 2023).
Smith-Goodson, P., 2023. The Extraordinary Ubiquity Of Generative AI And How Major
Companies Are Using It. 〈https://www.forbes.com/sites/moorinsights/2023/07/
21/the-extraordinary-ubiquity-of-generative-ai-and-how-major-companies-are-us
ing-it/?sh=5ec153852124〉(accessed 14 August 2023).
Solow, R., 1987. We’d Better Watch Out. N. Y. Times Book Rev., 12 July, 36.
Srinivasan, R., Uchino, K., 2021. Biases in generative art: A causal look from the lens of
art history. Proc. 2021 ACM Conf. Fair. Account. Transpar. 41–51. https://doi.org/
10.1145/3442188.3445869.
Stone, B., 2023. AI Leader Proposes a New Kind of Turing Test for Chatbots. 〈https
://www.bloomberg.com/news/newsletters/2023-06-20/ai-turing-test-for-chatgpt
-or-bard-proposed-by-mustafa-suleyman〉(accessed 31 August 2023).
Straub, J., 2023. Can AI write ’Ted Lasso’? Writers strike may open door to ChatGPT-
written scripts 〈https://www.usatoday.com/story/opinion/2023/05/10/wga-strike
-pave-way-ai-generated-tv-movie-scripts/70198801007/〉(accessed 28 August
2023).
Taylor, P., 2023. Global datasphere real time data total size worldwide from 2010 to
2025. 〈https://www.statista.com/statistics/871513/worldwide-data-created/〉
(accessed 23 August 2023).
Terwiesch, C. (2023), Would Chat GPT3 Get a Wharton MBA? A Prediction Based on Its
Performance in the Operations Management Course. 〈https://mackinstitute.wharton
.upenn.edu/wp-content/uploads/2023/01/Would-ChatGPT-get-a-Wharton-MBA.
pdf〉(accessed 4 February 2024).
The Business Times., 2024. OpenAI signs up 260 businesses for corporate version of
ChatGPT. 〈https://www.businesstimes.com.sg/startups-tech/startups/openai-signs-
260-businesses-corporate-version-chatgpt〉(accessed 9 February 2024).
The Economist., 2023. Your job is (probably) safe from artificial intelligence. 〈https://
www.economist.com/finance-and-economics/2023/05/07/your-job-is-probably-safe-from-artificial-intelligence〉 (accessed 22 August 2023).
The Economist., 2024. Regulators are forcing big tech to rethink its AI strategy. 〈https
://www.economist.com/business/2024/03/27/regulators-are-forcing-big-tech-to-r
ethink-its-ai-strategy〉(accessed 5 April 2024).
Toner, H., 2023. What Are Generative AI, Large Language Models, and Foundation
Models? 〈https://cset.georgetown.edu/article/what-are-generative-ai-large-languag
e-models-and-foundation-models/〉(accessed 14 August 2023).
Turing, A.M., 1948. Intelligent Machinery. 〈https://www.alanturing.net/turing_archive
/archive/l/l32/L32-002.html〉(accessed 15 August 2023).
Turing, A.M., 1950. Computing Machinery and Intelligence. Mind 59 (236), 433–460.
https://doi.org/10.1093/mind/LIX.236.433.
Weizenbaum, J., 1966. ELIZA - A Computer Program for the Study of Natural Language
Communication between Man and Machine. Commun. ACM 9 (1), 36–45. https://
doi.org/10.1145/365153.365168.
Welding, L., 2023. Half of College Students Say Using AI on Schoolwork Is Cheating or
Plagiarism. 〈https://www.bestcolleges.com/research/college-students-ai-tools-surve
y/〉(accessed 14 August 2023).
Wong, W.K.O., 2023. Creating “Artificial Suns”: the Sino-Western race to master limitless
clean energy through nuclear fusion. Asian Educ. Dev. Stud. 12 (1), 28–39. https://
doi.org/10.1108/AEDS-03-2022-0035.
X.ai., 2024. Announcing Grok-1.5. https://x.ai/blog/grok-1.5 (accessed 7 April 2024).
Yin, R.K., 1981a. The case study as a serious research strategy. Knowl 3 (1), 97–114.
https://doi.org/10.1177/107554708100300106.
Yin, R.K., 1981b. The case study crisis: Some answers. Adm. Sci. Q. 26 (1), 58–65.
https://doi.org/10.2307/2392599.
Yin, R.K., 1984. Case Study Research: Design and Methods. Sage Publications, Inc,
Beverly Hills, CA.
Yudkowsky, E., 2023. Pausing AI Developments Isn’t Enough. We Need to Shut it All
Down 〈https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/〉
(accessed 14 August 2023).
Yun, J.J., 2015. How do we conquer the growth limits of capitalism? Schumpeterian
Dynamics of Open Innovation. J. Open. Innov. Technol. Mark. Complex. 1 (2), 17.
https://doi.org/10.1186/s40852-015-0019-3.
Yun, J.J., 2016. Open Innovation: Technology, Market and Complexity in South Korea.
Sci. Technol. Soc. 21 (3), 319–323. https://doi.org/10.1177/0971721816661783.