One Small Step for Generative AI, One Giant Leap for AGI: A Complete Survey
on ChatGPT in AIGC Era
CHAONING ZHANG, Kyung Hee University, South Korea
CHENSHUANG ZHANG, KAIST, South Korea
CHENGHAO LI, KAIST, South Korea
YU QIAO, Kyung Hee University, South Korea
SHENG ZHENG, Beijing Institute of Technology, China
SUMIT KUMAR DAM, Kyung Hee University, South Korea
MENGCHUN ZHANG, KAIST, South Korea
JUNG UK KIM, Kyung Hee University, South Korea
SEONG TAE KIM, Kyung Hee University, South Korea
JINWOO CHOI, Kyung Hee University, South Korea
GYEONG-MOON PARK, Kyung Hee University, South Korea
SUNG-HO BAE, Kyung Hee University, South Korea
LIK-HANG LEE, Hong Kong Polytechnic University, Hong Kong SAR (China)
PAN HUI, Hong Kong University of Science and Technology (Guangzhou), China
IN SO KWEON, KAIST, South Korea
CHOONG SEON HONG, Kyung Hee University, South Korea
OpenAI has recently released GPT-4 (a.k.a. ChatGPT Plus), which can be seen as one small step for generative AI (GAI), but one giant leap for artificial general intelligence (AGI). Since its official release in November 2022, ChatGPT has quickly attracted numerous users with extensive media coverage. Such unprecedented attention has also motivated numerous researchers to investigate ChatGPT from various aspects. According to Google Scholar, there are more than 500 articles with ChatGPT in their titles or mentioning it in their abstracts. Considering this, a review is urgently needed, and our work fills this gap. Overall, this work is the first to survey ChatGPT with a comprehensive review of its underlying technology, applications, and challenges. Moreover, we present an outlook on how ChatGPT might evolve to realize general-purpose AIGC (a.k.a. AI-generated content), which will be a significant milestone for the development of AGI.
Authors’ addresses: Chaoning Zhang, Kyung Hee University, South Korea, chaoningzhang1990@gmail.com; Chenshuang Zhang, KAIST, South Korea,
zcs15@kaist.ac.kr; Chenghao Li, KAIST, South Korea, lch17692405449@gmail.com; Yu Qiao, Kyung Hee University, South Korea, qiaoyu@khu.ac.kr; Sheng
Zheng, Beijing Institute of Technology, China, zszhx2021@gmail.com; Sumit Kumar Dam, Kyung Hee University, South Korea, skd160205@khu.ac.kr;
Mengchun Zhang, KAIST, South Korea, zhangmengchun527@gmail.com; Jung Uk Kim, Kyung Hee University, South Korea, ju.kim@khu.ac.kr; Seong
Tae Kim, Kyung Hee University, South Korea, st.kim@khu.ac.kr; Jinwoo Choi, Kyung Hee University, South Korea, jinwoochoi@khu.ac.kr; Gyeong-Moon
Park, Kyung Hee University, South Korea, gmpark@khu.ac.kr; Sung-Ho Bae, Kyung Hee University, South Korea, shbae@khu.ac.kr; Lik-Hang Lee,
Hong Kong Polytechnic University, Hong Kong SAR (China), lik-hang.lee@polyu.edu.hk; Pan Hui, Hong Kong University of Science and Technology
(Guangzhou), China, panhui@ust.hk; In So Kweon, KAIST, South Korea, iskweon77@kaist.ac.kr; Choong Seon Hong, Kyung Hee University, South Korea,
cshong@khu.ac.kr.
CCS Concepts: • Computing methodologies → Computer vision tasks; Natural language generation; Machine learning approaches.
Additional Key Words and Phrases: Survey, ChatGPT, GPT-4, Generative AI, AGI, Artificial General Intelligence, AIGC
ACM Reference Format:
Chaoning Zhang, Chenshuang Zhang, Chenghao Li, Yu Qiao, Sheng Zheng, Sumit Kumar Dam, Mengchun Zhang, Jung Uk Kim,
Seong Tae Kim, Jinwoo Choi, Gyeong-Moon Park, Sung-Ho Bae, Lik-Hang Lee, Pan Hui, In So Kweon, and Choong Seon Hong. 2022.
One Small Step for Generative AI, One Giant Leap for AGI: A Complete Survey on ChatGPT in AIGC Era. 1, 1 (March 2022), 29 pages.
https://doi.org/XXXXXXX.XXXXXXX
Contents
Abstract
1 Introduction
2 Overview of ChatGPT
2.1 OpenAI
2.2 Capabilities
3 Technology behind ChatGPT
3.1 Two core techniques
3.2 Technology path
4 Applications of ChatGPT
4.1 Scientific writing
4.2 Education field
4.3 Medical field
4.4 Other fields
5 Challenges
5.1 Technical limitations
5.2 Misuse cases
5.3 Ethical concerns
5.4 Regulation policy
6 Outlook: Towards AGI
6.1 Technology aspect
6.2 Beyond technology
7 Conclusion
References
1 INTRODUCTION
The past few years have witnessed the advent of numerous generative AI (AIGC, a.k.a. AI-generated content) tools [72, 137, 143], suggesting AI has entered a new era of creating instead of purely understanding content. For a complete survey on generative AI (AIGC), the readers can refer to [214]. Among those AIGC tools, ChatGPT, which was released in November 2022, has caught unprecedented attention. It attracted numerous users, and the number of monthly active users surpassed 100 million within only two months, breaking the user growth record of other social products [119].
Fig. 1. Structure overview of this survey.
ChatGPT was developed by OpenAI, which started as a non-profit research laboratory, with a mission of building safe and beneficial artificial general intelligence (AGI). After announcing GPT-3 in 2020, OpenAI has gradually been recognized as a world-leading AI lab. Very recently, it has released GPT-4, which can be seen as one small step for generative AI, but one giant leap for AGI.
Due to its impressive capabilities in language understanding, numerous news articles provide extensive coverage and introduction, to name a few, BBC Science Focus [68], BBC News [39], CNN Business [78], and Bloomberg News [159]. Google's management has issued a "code red" over the threat of ChatGPT, suggesting that ChatGPT posed a significant danger to the company, especially to its search service. This danger seems more difficult to ignore after Microsoft adopted ChatGPT in their Bing search service. The stock price change also reflects the belief that ChatGPT might help Bing compete with Google search.
Such unprecedented attention on ChatGPT has also motivated numerous researchers to investigate this intriguing AIGC tool from various aspects [151, 165]. According to our literature review on Google Scholar, no fewer than 500 articles include ChatGPT in their titles or mention this viral term in their abstracts. It is challenging for readers to grasp the progress of ChatGPT without a complete survey. Our comprehensive review provides a first look into ChatGPT in a timely manner.
Since the subject of this survey is a commercial tool, we first present a background on the company that developed ChatGPT, i.e., OpenAI. Moreover, this survey also presents a detailed discussion of the capabilities of ChatGPT. Following the background introduction, this work summarizes the technology behind ChatGPT. Specifically, we introduce its two core techniques: Transformer architecture and autoregressive pretraining, based on which we present the technology path of the large language model GPT from v1 to v4 [18, 124, 138, 139]. Accordingly, we highlight the prominent applications and the related challenges, such as technical limitations, misuse, ethics and regulation. Finally, we conclude this survey by providing an outlook on how ChatGPT might evolve in the future towards general-purpose AIGC for realizing the ultimate goal of AGI. A structured overview of our work is shown in Figure 1.
2 OVERVIEW OF CHATGPT
First, we provide a background of ChatGPT and the corresponding organization, i.e., OpenAI, which aims to build
articial general intelligence (AGI). It is expected that AGI can solve human-level problems and beyond, on the premise
of building safe, trustworthy systems that are benecial to our society.
2.1 OpenAI
OpenAI is a research laboratory made up of a group of researchers and engineers committed to the mission of building safe and beneficial AGI [50]. It was founded on December 11, 2015, by a group of high-profile tech executives, including Tesla CEO Elon Musk, SpaceX President Gwynne Shotwell, LinkedIn co-founder Reid Hoffman, and venture capitalists Peter Thiel and Sam Altman [77]. In this subsection, we will talk about the early days of OpenAI, how it became a for-profit organization, and its contributions to the field of AI.
In the beginning, OpenAI was a non-profit organization [24], and its research was centered on deep learning and reinforcement learning, natural language processing, robotics, and more. The company quickly established a reputation for its cutting-edge research after publishing several influential papers [123] and developing some of the most sophisticated AI models. However, to create AI technologies that could bring in money, OpenAI was reorganized as a for-profit company in 2019 [31]. Despite this, the company keeps developing ethical and secure AI alongside creating commercial applications for its technology. Additionally, OpenAI has worked with several top tech firms, including Microsoft, Amazon, and IBM. Microsoft revealed a new multiyear, multibillion-dollar venture with OpenAI earlier this year [21]. Though Microsoft did not give a precise sum of investment, Semafor claimed that Microsoft was in discussions to spend up to $10 billion [101]. According to the Wall Street Journal, OpenAI is worth roughly $29 billion [13].
Fig. 2. OpenAI products timeline.
From large language models to open-source software, OpenAI has significantly advanced the field of AI. To begin with, OpenAI has developed some of the most potent language models to date, including GPT-3 [95, 131], which has gained widespread praise for its ability to produce cohesive and realistic text in numerous contexts. OpenAI also carries out research in reinforcement learning, a branch of artificial intelligence that aims to train agents to base their choices on rewards and punishments. Proximal Policy Optimization (PPO) [70], Soft Actor-Critic (SAC) [192], and Trust Region Policy Optimization (TRPO) [184] are just a few of the reinforcement learning algorithms that OpenAI has created so far. These algorithms have been employed to train agents for various tasks, including playing games and controlling robots. OpenAI has created many software tools up to this point to assist with its research endeavors, including OpenAI Gym [75], a toolkit for developing and comparing reinforcement learning algorithms. In terms of hardware, OpenAI has invested in several high-performance computing systems, including the DGX-1 and DGX-2 systems from NVIDIA [152]. These systems were created with deep learning in mind and are capable of offering the processing power needed to build sophisticated AI models. Besides ChatGPT, other popular tools developed by OpenAI include DALL-E [143], Whisper [137], and Codex [25]. A summarization of the OpenAI product pipeline is shown in Figure 2.
2.2 Capabilities
ChatGPT uses interactive forms to provide detailed and human-like responses to questions raised by users [1]. ChatGPT is capable of producing high-quality text outputs based on the input prompt text. GPT-4-based ChatGPT Plus can additionally take images as input. Beyond the basic role of a chatbot, ChatGPT can successfully handle various text-to-text tasks, such as text summarization [45], text completion, text classification [85], sentiment analysis [113, 216], paraphrasing [104], translation [35], etc.
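As a concrete illustration of how such text-to-text tasks can be invoked programmatically, the following minimal sketch queries a ChatGPT model through the openai Python package (the 2023-era ChatCompletion interface); the model name, prompt, and key handling are illustrative assumptions on our part rather than details reported in this survey.

```python
# Hedged sketch: invoking a ChatGPT model for a text-to-text task (here, summarization)
# through the openai Python package (2023-era ChatCompletion interface). The model name,
# prompt, and key handling are illustrative assumptions, not details from this survey.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

article = "ChatGPT attracted more than 100 million monthly active users within two months ..."
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": f"Summarize the following text in one sentence:\n{article}"},
    ],
)
print(response["choices"][0]["message"]["content"])
```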
ChatGPT has become a powerful competitor to search engines. As mentioned in our introductory section, Google, which supplies the most widely used search engine in the world, considers ChatGPT a challenge to its monopoly [191]. Notably, Microsoft has integrated ChatGPT into its Bing search engine, allowing users to receive more creative replies [176]. We see an obvious distinction between search engines and ChatGPT: search engines assist users in finding the information they want, while ChatGPT develops replies in a two-way conversation, providing users with a better experience.
Other companies are developing similar chatbot products, such as LaMDA from Google and BlenderBot from Meta. Unlike ChatGPT, LaMDA, developed by Google in 2021, actively participates in conversations with users, which has resulted in racist, sexist, and other forms of bias in its output text [120]. BlenderBot is Meta's chatbot, and the feedback from users is relatively dull because the developer has set tighter constraints on its output material [132]. ChatGPT appears to have balanced human-like output and bias to some level, allowing for more exciting responses. Significantly, in addition to being more efficient and having a higher maximum token limit than vanilla ChatGPT, ChatGPT powered by GPT-4 can create output in multiple dialects and with emotional reactions, as well as reduce undesirable results, thereby decreasing bias [171]. It is noted in [96] that the modeling capacity of ChatGPT can be further improved by using multi-task learning and enhancing the quality of training data.
3 TECHNOLOGY BEHIND CHATGPT
3.1 Two core techniques
Backbone architecture: Transformer. Before the advent of the Transformer [185], RNNs were the dominant backbone architecture for language understanding, and attention was found to be a critical component of model performance. In contrast to prior works that only used attention as a supportive component, the Google team made a bold claim in their work title: "Attention is All You Need" [185]. Since this paper was released in 2017, research on and use of the Transformer backbone structure has experienced explosive growth in the deep learning community. Therefore, we present a summary of how the Transformer works, with a focus on its core component called self-attention.
The underlying principle of self-attention posits that given an input text, the mechanism is capable of allocating
distinct weights to individual words, thereby facilitating the capture of dependencies and contextual relationships
within the sequence. Each element within the sequence possesses its unique representation. To calculate the relationship
of each element to others within the sequence, one computes the Q (query), K (key), and V (value) matrices of the
input sequence. These matrices are derived from the linear transformations of the input sequence. Typically, the query
matrix corresponds to the current element, the key matrix represents other elements, and the value matrix encapsulates
information to be aggregated. The association weight between the current element and other elements is determined
by calculating the similarity between the query and key matrices. This is generally achieved through a dot product
operation. Subsequently, the similarity is normalized to ensure that the sum of all associations equals 1, which is
commonly executed via the softmax function. The normalized weights are then applied to the corresponding values,
followed by the aggregation of these weighted values. This process results in a novel representation that encompasses
the association information between the current word and other words in the text. The aforementioned process can be
formally expressed as follows:
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V. \quad (1)
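To make Eq. (1) concrete, the following minimal NumPy sketch (a toy example of ours, not code from any surveyed system) computes single-head scaled dot-product self-attention for a short sequence.

```python
# Toy NumPy sketch of single-head scaled dot-product self-attention, matching Eq. (1).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    """X: (seq_len, d_model); W_q, W_k, W_v: (d_model, d_k) projection matrices."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v        # linear transformations of the input
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # query-key similarity, scaled by sqrt(d_k)
    weights = softmax(scores, axis=-1)         # each row of association weights sums to 1
    return weights @ V                         # aggregate the values with the weights

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                   # a toy sequence of 5 token representations
W_q, W_k, W_v = (rng.normal(size=(16, 8)) for _ in range(3))
out = self_attention(X, W_q, W_k, W_v)         # (5, 8) contextualized representations
print(out.shape)
```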
The Transformer has become an essential foundation for the recent development of large language models: the BERT [41] and GPT [18, 124, 138, 139] series are both based on the Transformer. There is also a line of works extending the Transformer from language to vision, i.e., computer vision [42, 63, 100], which suggests that the Transformer has become a unified backbone architecture for both NLP and computer vision.
Generative pretraining: Autoregressive. For model pretraining, there are multiple popular generative modeling methods, including energy-based models [56, 161, 162, 189], variational autoencoders [5, 83, 125], GANs [17, 54, 201], diffusion models [20, 33, 93, 128, 178], etc. Here, we mainly summarize autoregressive modeling methods [11, 89, 180, 181] as they are the foundation of GPT models [18, 124, 138, 139].
Autoregressive models constitute a prominent approach for handling time series data in statistical analysis. These models specify that the output variable is linearly dependent on its preceding values. In the context of language modeling [18, 124, 138, 139], autoregressive models predict the subsequent word given the previous words, or the last probable word given the following words. The models learn a joint distribution over sequence data, employing previous time steps as inputs to forecast each variable in the sequence. The autoregressive model posits that the joint distribution p_\theta(x) can be factorized into a product of conditional distributions, as demonstrated below:
p_\theta(x) = p_\theta(x_1)\, p_\theta(x_2 \mid x_1) \cdots p_\theta(x_n \mid x_1, x_2, \ldots, x_{n-1}). \quad (2)
While both rely on previous time steps, autoregressive models diverge from recurrent neural network (RNN)
architectures in the sense that the former utilizes previous time steps as input instead of the hidden state found in RNNs.
In essence, autoregressive models can be conceptualized as a feed-forward network that incorporates all preceding
time-step variables as inputs.
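As a toy illustration of how the factorization in Eq. (2) is used at generation time, the sketch below samples a sequence one token at a time and feeds each sampled token back as input for the next conditional distribution; the stand-in "model" is a uniform distribution, which in a real system would be a trained Transformer decoder.

```python
# Toy sketch of autoregressive generation per Eq. (2): each token is drawn from
# p(x_t | x_1, ..., x_{t-1}) and then appended to the context for the next step.
import numpy as np

def sample_sequence(next_token_probs, vocab_size, length, rng):
    """next_token_probs(prefix) returns a probability vector over the vocabulary."""
    prefix = []
    for _ in range(length):
        p = next_token_probs(prefix)          # conditional distribution given the prefix
        token = rng.choice(vocab_size, p=p)   # sample the next token
        prefix.append(int(token))             # feed it back as input for the next step
    return prefix

# A uniform distribution stands in for a trained Transformer decoder in this toy example.
rng = np.random.default_rng(0)
uniform_model = lambda prefix: np.ones(10) / 10
print(sample_sequence(uniform_model, vocab_size=10, length=5, rng=rng))
```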
Early works modeled discrete data employing distinct functions to estimate the conditional distribution, such as logistic regression in the Fully Visible Sigmoid Belief Network (FVSBN) [51] and one-hidden-layer neural networks in Neural Autoregressive Distribution Estimation (NADE) [89]. Subsequent research expanded to model continuous variables [180, 181]. Autoregressive methods have also been extensively applied to other fields, with representative works in image generation (PixelCNN [183] and PixelCNN++ [155]) and audio generation (WaveNet [182]).
3.2 Technology path
The development of ChatGPT is based on a series of GPT models, which constitute a substantial achievement for the field of NLP. An overview of this development is summarized in Figure 6. In the following, we summarize the key components of GPT as well as the major changes in the updated GPTs.
Table 1. Comparison between GPT and BERT.
Similarities:
  Backbone: both GPT and BERT use the attention-based Transformer.
  Learning paradigm: both GPT and BERT use self-supervised learning.
  Transfer learning: both GPT and BERT can be fine-tuned for downstream tasks.
Differences:
  Text context: GPT uses unidirectional text context, while BERT uses bidirectional text context.
  Architecture: GPT uses a decoder architecture, while BERT uses an encoder architecture.
  Pre-training strategy: GPT uses autoregressive modeling, while BERT uses masked language modeling.
BERT v.s. GPT. Traditional language models [82, 116, 188] mainly focused on a particular task and could not be transferred to other tasks. Transfer learning is a common approach for alleviating this issue by pretraining a foundation model [219], which can then be finetuned on various downstream tasks. Based on the architecture, there are three classes: encoder-decoder [91, 136, 140, 160], encoder-only [30, 40, 88, 99], and decoder-only [18, 124, 138, 139]. Out of numerous large language models, encoder-only BERT [40] and decoder-only GPT [138] are arguably the two most popular ones. A comparison of them is summarized in Table 1. Both of them use the attention-based Transformer [185] with self-supervised learning to learn from textual datasets without labels. After pretraining, both BERT and GPT can be finetuned and show competitive performance on downstream tasks. A core difference between BERT and GPT lies in their pretraining strategy: masked modeling and autoregressive modeling. With masked modeling, BERT predicts masked language tokens from unmasked ones. A major advantage of BERT is that it can utilize bidirectional text information, which makes it compatible with sentiment analysis tasks. Due to the discrepancy between the mask-then-predict pretraining task and downstream tasks, BERT is rarely used for downstream tasks without finetuning. By contrast, autoregressive modeling methods (represented by GPT) show competitive performance for few-shot or zero-shot text generation. In the following, we summarize the development path of GPT from v1 to v4, which is shown in Figure 6.
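To make the difference in pretraining strategies (Table 1) concrete, the following sketch contrasts a GPT-style causal attention mask with BERT-style masked-token corruption; it is an illustrative toy of ours, not code from the original GPT or BERT implementations.

```python
# Toy sketch (not the official GPT/BERT code) of the two pretraining strategies in Table 1:
# a GPT-style causal attention mask versus BERT-style masked-token corruption.
import numpy as np

seq_len = 6
rng = np.random.default_rng(0)

# GPT (autoregressive): position i may only attend to positions <= i (unidirectional context).
causal_mask = np.tril(np.ones((seq_len, seq_len), dtype=bool))

# BERT (masked language modeling): hide a fraction of tokens and predict them from the
# full bidirectional context; 103 stands in for the [MASK] token id in this toy example.
tokens = np.array([101, 2054, 2003, 1996, 3007, 102])
mask_positions = rng.random(seq_len) < 0.15
corrupted_tokens = np.where(mask_positions, 103, tokens)

print(causal_mask.astype(int))
print(tokens, "->", corrupted_tokens)
```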
Fig. 3. Timeline of GPT model families.
GPT-1. With only the decoder, GPT-1 adopts a 12-layer Transformer and has 117M parameters [138]. An overview of GPT-1 and how it can be used for various downstream tasks is shown in Figure 4. Trained on a massive BooksCorpus dataset encompassing unique unpublished books, GPT-1 is capable of grasping long-range dependencies in context. The general task-agnostic GPT model outperforms models trained for specific tasks in 9 of 12 tasks, including natural language inference, question answering, semantic similarity, and text classification [138]. The observation that GPT-1 performs well on various zero-shot tasks demonstrates a high level of generalization. GPT-1 had evolved into a powerful model for various NLP tasks before the release of GPT-2.
GPT-2. As the successor to GPT-1, GPT-2 was launched by OpenAI in 2019 and focuses on learning NLP tasks without explicit supervision. Similar to GPT-1, GPT-2 is based on the decoder-only Transformer model. However, the model architecture and implementation of GPT-2 were further developed, with 1.5 billion parameters and a training dataset of 8 million web pages, more than 10 times those of its predecessor GPT-1 [139]. With a zero-shot setting, GPT-2 achieved state-of-the-art results on 7 of 8 tested language modeling datasets, whose tasks include recognition of different categories of words, the ability of the model to capture long-term dependencies, commonsense reasoning, reading comprehension, summarization, and translation [139]. However, GPT-2 still performs poorly on the task of question answering, demonstrating that the capability of the unsupervised GPT-2 model still needs to be improved [139].
Fig. 4. (left) Transformer architecture and training objectives used in GPT-1. (right) Input transformations for fine-tuning on different tasks (figure obtained from [138]).
GPT-3. The foundation of GPT-3 is the Transformer architecture, specifically the GPT-2 architecture. Compared to GPT-2, which had 1.5 billion parameters, GPT-3 has 175 billion parameters, 96 attention layers, and a 3.2M batch size, a significant increase in size [18]. GPT-3 was trained on a diverse range of online content, including novels, papers, and websites, using language modeling, a type of unsupervised learning where the model attempts to guess the next word in a phrase given the preceding words. After completion, GPT-3 can be fine-tuned on specific tasks, such as text completion or language translation, using supervised learning, where task-specific smaller datasets are employed to train the model. Developers can use the GPT-3 model for numerous applications, including chatbots, language translation, and content production, thanks to OpenAI's API [36]. The API provides different access levels depending on the scale and intricacy of the tasks. Compared to other language models whose performance highly depends on fine-tuning, GPT-3 can perform many tasks (such as language translation) without any such fine-tuning, gradient, or parameter updates, making this model task-agnostic [105].
GPT-3.5. GPT-3.5 is a variation of the widely popular GPT-3, trained with only 1.3 billion parameters [106], 100X fewer than its predecessor. In fact, ChatGPT is a fine-tuned version of GPT-3.5, which came out on March 15, 2022. On top of the GPT-3 model, it has extra fine-tuning procedures: supervised finetuning and reinforcement learning with human feedback (RLHF) [206], which are shown in Figure 5, where the machine learning algorithm receives user feedback and uses it to align the model. RLHF is used to overcome the limitations of traditional unsupervised and supervised learning, which can only learn from unlabeled or labeled data. Human feedback can take different forms, including punishing or rewarding the model's behaviors, assigning labels to unlabeled data, or changing model parameters. By incorporating human feedback into the training process, GPT-3.5 has significantly higher usability.
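As an illustration of the preference-modeling step commonly used in RLHF, the sketch below computes the pairwise loss that encourages the reward model to score the human-preferred response above the rejected one; the function names and toy numbers are our own, not OpenAI's implementation. The trained reward model is then typically used as the reward signal for further fine-tuning the language model, e.g., with PPO.

```python
# Illustrative sketch of the pairwise preference loss commonly used to train the reward
# model in RLHF: the reward of the human-preferred response should exceed that of the
# rejected one. Function names and toy numbers are ours, not OpenAI's implementation.
import numpy as np

def preference_loss(r_chosen, r_rejected):
    """-log sigmoid(r_chosen - r_rejected): minimized when the chosen reward is higher."""
    return np.log1p(np.exp(-(r_chosen - r_rejected)))

# Toy (chosen, rejected) reward pairs produced by a reward model for the same prompts.
pairs = [(1.8, 0.3), (0.2, 0.9), (2.5, 2.4)]
loss = np.mean([preference_loss(c, r) for c, r in pairs])
print(f"average preference loss: {loss:.3f}")
```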
Fig. 5. How GPT-3.5 is trained. Image obtained from [126].
GPT-4. On March 14, 2023, OpenAI released GPT-4 [124], the fourth installment in the GPT series. GPT-4 is a large multimodal model capable of taking text and images as inputs and generating text as output. The model delivers human-level performance on several professional and academic benchmarks, but in real-world situations, it is still far less competent than humans. For example, the simulated bar exam result for GPT-4 is in the top 10% of test takers, as opposed to the score for GPT-3.5, which was in the bottom 10% [76]. The capacity of GPT-4 to follow human intention is significantly better than that of earlier versions [126]. The answers by GPT-4 were favored over the responses produced by GPT-3.5 on 70.2% of the 5,214 questions in the sample provided to ChatGPT and the OpenAI API. Since the overwhelming majority of its pre-training data ends in September 2021, GPT-4 generally lacks awareness of what has happened after that date and does not learn from its experiences. It occasionally exhibits basic logical mistakes that do not seem consistent with its skill in various areas, or it may be excessively trusting when taking false claims from a user [124]. It may struggle with complex issues in the same way that people do, such as producing code that contains security flaws [124]. A summarization of the model parameters and training datasets for GPT models from v1 to v4 is shown in Table 2.
Table 2. Parameters and datasets of GPT models.
  GPT-1: 0.117 billion parameters; dataset: BooksCorpus (over 40GB).
  GPT-2: 1.5 billion parameters; dataset: WebText (40TB).
  GPT-3: 175 billion parameters; dataset: Common Crawl (45TB).
  GPT-3.5: 1.3 billion parameters; dataset: Common Crawl (45TB).
  GPT-4: parameters and training dataset not officially reported.
4 APPLICATIONS OF CHATGPT
4.1 Scientific writing
ChatGPT is widely recognized for its powerful content generation capabilities, which have a significant impact on writing in the academic field. Many existing works have tested how ChatGPT can be applied to scientific writing, including brainstorming, literature review, data analysis, direct content generation, grammar checking, and serving as an academic reviewer.
Brainstorming. Brainstorming is an essential approach for obtaining initial ideas that are a prerequisite for high-quality scientific research. ChatGPT can play a variety of roles in brainstorming, ranging from stimulating creativity [57, 141] for new idea generation to providing suggestions [98, 170] for expanding existing ideas. ChatGPT can assist users in divergent and creative thinking [141]. In addition, some studies have explored ChatGPT's insights on future nursing research in a Q&A format, showing that it can analyze the impact of future technological developments on nursing practice and provide valuable insights for nurses, patients, and the healthcare system [57]. Moreover, ChatGPT also demonstrates the ability to "think" from multiple perspectives: it can analyze and reflect on the impact of excess deaths after the COVID-19 pandemic from multiple dimensions such as the medical system, social economy, and personal health behaviors [170]. To evaluate whether ChatGPT generates useful suggestions for researchers in certain domains, the authors of [98] tested its ability on clinical decision support and assessed its difference compared to human-generated suggestions. The test results show that, unlike human thinking, the suggestions generated by ChatGPT provide a unique perspective, and its generations are evaluated as highly understandable and relevant, which have significant value in scientific research.
Literature review. A comprehensive literature review requires covering all relevant research, which can consume too much time and energy for researchers. For example, the Semantic Scholar search engine, an AI-based scientific literature research tool, has indexed more than 200 million scholarly publications. As a result, finding relevant research papers and extracting key insights from them is almost like finding a needle in a haystack. Fortunately, ChatGPT, as an AI-driven research reading tool, can help us browse through a large number of papers and understand their content. In actual use, we can give a topic to ChatGPT, and it can help us find the related literature. Before discussing the ability of ChatGPT in handling the literature review, we review a similar AI tool, SciSpace Copilot, which can help researchers quickly browse and understand papers [154]. Specifically, it can provide explanations for scientific texts and mathematics, including follow-up questions with more detailed answers in multiple languages, facilitating better reading and understanding of the text. By comparison, ChatGPT as a general language model not only has all the functions of SciSpace Copilot, but also can be widely used in various natural language processing scenarios [154]. A literature review is essential for summarizing relevant work in the selected field. As an exploratory task, the authors of [7] chose the topic of "Digital Twin in Healthcare" and compiled abstracts of papers obtained from Google Scholar search results using the keywords "digital twin in healthcare" for the last three years (2020, 2021, and 2022). These abstracts were then paraphrased by ChatGPT, and the generated results are promising [7]. However, the application of ChatGPT in this task is still at the beginning. The authors in [59] ask ChatGPT to provide 10 groundbreaking academic articles with DOIs in the medical domain. Unfortunately, after conducting five tests, the results show that out of the 50 DOIs provided, only 8 of them exist and have been correctly published. Although ChatGPT's abilities in the literature review are still weak, we believe that in the near future, ChatGPT will be widely used for literature review, further improving the efficiency of researchers and enabling them to focus their time on key research.
Data analysis. Scientific data needs to be cleaned and organized before being analyzed, often consuming days or even months of a researcher's time, and most importantly, in some cases requiring them to learn a coding language such as Python or R. The use of ChatGPT for data processing can change the research landscape. For example, as shown in [102], ChatGPT completes the task of data analysis for a simulated dataset of 100,000 healthcare workers of varying ages and risk profiles to help determine the effectiveness of vaccines, which significantly speeds up the research process [102]. Another similar AI tool for data analysis is discussed in [154], where AI-based spreadsheet bots can convert natural language instructions into spreadsheet formulas. Furthermore, platforms like Olli can also visualize data, where users only need to describe the desired content, and then they can get AI-created line graphs, bar graphs, and scatter graphs. Considering that ChatGPT is the most powerful AI tool so far, we believe that these functions can also be implemented in ChatGPT in a more intelligent way.
Content generation. Numerous works have attempted to use ChatGPT for content generation for their articles [3, 148]. For example, [3] employed ChatGPT to aid in writing reports in medical science about the pathogenesis of two diseases. Specifically, ChatGPT provides three aspects of the mechanism of homocystinuria-associated osteoporosis, all of which are proven true. However, when it comes to the references for the generated information, the papers mentioned by ChatGPT do not exist. [218] described a study on writing a catalysis review article using ChatGPT, with the topic set to CO2 hydrogenation to higher alcohols. The ChatGPT-generated content includes the required sections of the paper but lacks an introduction to the reaction mechanism, which is critical for the topic. The content of this article contains abundant useful information, but specific details are absent and certain errors exist. In addition, ChatGPT can help prepare manuscripts, but the generated results differ considerably from actual published content. A possible reason is that the keywords of ChatGPT and human-generated text vary greatly, which requires users to further edit the generated content [87]. ChatGPT has also been utilized to generate a review article in specific areas such as the health field [7], which indicates scholars can focus on core research while leaving the less creative parts to AI tools. Nonetheless, considering the style difference between human-generated content and ChatGPT-generated content, it is suggested in [7, 87] not to fully rely on ChatGPT but to utilize it as an assistant to help complete the writing rather than relying solely on it.
Proofreading. Before the advent of ChatGPT, there were numerous tools for grammar checking. Some works [81, 110, 200] have conducted tests on grammar and spelling correction, which show that ChatGPT provides a better user experience than other AI tools. For example, ChatGPT can be used to automatically fix any punctuation and grammar mistakes to improve the writing quality [200]. In addition, the study in [81] investigates how ChatGPT can go beyond helping users check grammar: it can further generate reports about document statistics, vocabulary statistics, etc., change the language of a piece to make it suitable for people of any age, and even adapt it into a story. Another minor but noteworthy point is that, as of now, Grammarly's advanced version, Grammarly Premium, requires users to pay a monthly fee of $30, which is relatively more expensive than ChatGPT Plus's monthly fee of $20. Moreover, ChatGPT has been compared to other AI-based grammar checkers, including QuillBot, DeepL, DeepL Write, and Google Docs. The results show that ChatGPT performs the best in terms of the number of errors detected. While ChatGPT has some usability issues when it comes to proofreading, such as being over 10 times slower than DeepL and lacking the ability to highlight suggestions or provide alternative options for specific words or phrases [110], it should be noted that grammar checking is just the tip of the iceberg. ChatGPT can also be valuable in improving language, restructuring text, and other aspects of writing.
Academic reviewer. Peer review of research papers is a crucial process for the dissemination of new ideas, with a significant impact on scientific progress. However, the sheer volume of research papers being produced has posed a challenge for human reviewers. The potential of ChatGPT for paper review has been investigated in [163]. Specifically, ChatGPT is capable of analyzing inputted academic papers, and it can then evaluate them based on several aspects, including the summary, strengths and weaknesses, clarity, quality, novelty, and reproducibility of the papers. Furthermore, the generated reviews of the papers are then inputted into ChatGPT for sentiment analysis. After this, a decision can be made on the acceptance of the reviewed paper.
4.2 Education field
With its impressive capability to generate human-like responses, ChatGPT has been studied by numerous works to investigate the impact it brings to the education field. Here, we summarize them from two perspectives: teaching/learning and subjects.
Teaching and learning. In a typical classroom setting, the teachers are the source of knowledge, while the students play the role of knowledge receivers. Outside the classroom, the students are often required to complete the assignments designed by the teacher. How the teachers and students interact with each other can be significantly changed by ChatGPT [10, 150, 212, 213].
ChatGPT can revolutionize the paradigm of teaching by providing a wealth of resources to aid in the creation of personalized tutoring [173, 212], designing course material [130], and assessment and evaluation [10, 212]. Multiple works [10, 213] have discussed how ChatGPT can be used to create an adaptive learning platform to meet the needs and capabilities of students. It has been shown in [173] that the teacher can exploit ChatGPT to guide students in interactive dialogues to help them learn a new language. ChatGPT has also been utilized to design course material in the law curriculum, such as generating a syllabus and hand-outs for a class, as well as creating practice test questions [130]. Moreover, a recent work [130] provides preliminary evidence that ChatGPT can be applied to assist law professors with scholarship duties. Specifically, this includes submitting a biography for a speaking engagement, writing opening remarks for a symposium, and developing a document for a law school committee. In addition, it is shown in [10, 212, 213] that ChatGPT can be exploited as an assessment and evaluation assistant, including automated grading and performance and engagement analysis for students.
ChatGPT, on the other hand, also brings a significant impact on how students learn. A poll [167] done by Study.com (an online course provider) reveals how ChatGPT is used among adult students. According to its findings [167], 89% of them utilized ChatGPT for homework, and 48% of them exploited it for an at-home test or quiz. Moreover, over half of them admitted to using ChatGPT to write essays, and 22% confessed to using ChatGPT to create a paper outline. Meanwhile, multiple works [10, 212, 213] have investigated how ChatGPT might assist students in their studies. For example, [10, 212] utilize ChatGPT to translate languages, which helps students converse more effectively about academic issues and comprehend essays and papers in different languages. Moreover, ChatGPT can be used to propose suitable courses, programs, and publications to students based on their interests. In [213], ChatGPT helps students comprehend certain theories and concepts to assist in more effective problem-solving.
ChatGPT for various subjects in education. In modern education, there is a wide variety of subjects, including economics, law, physics, data science, mathematics, sports, psychology, engineering, and media education. Even though ChatGPT is not specifically designed to be a master of any particular subject, it has been demonstrated in numerous works that ChatGPT has a decent understanding of a certain subject, sometimes surpassing the human level. To facilitate the discussion, we divide the subjects into STEM (Science, Technology, Engineering, Mathematics) and non-STEM (including economics, law, psychology, etc.).
STEM subjects. Here, we discuss the application of ChatGPT in physics, mathematics, and engineering education. ChatGPT is utilized in [207] to create short-form physics essays that get first-class scores when assessed using an authorized assessment method. Specifically, the ChatGPT-generated essays score 71 ± 2%, compared to the current module average of 71 ± 5%, showcasing its remarkable capacity to write short-form physics essays. The statistical analysis of four difficult datasets is presented in the work [121] to demonstrate ChatGPT's data science capacity, where it can comprehend the true numbers buried behind sentence completion. For instance, based on the phrase "Boston housing dataset," ChatGPT can provide a tabular blend of categorical and numerical data for house value prediction. In [49], ChatGPT can be used to search for mathematical objects and related information, where it outperforms other mathematical models on Reverse Definition Retrieval. Although ChatGPT can provide meaningful proofs in a few circumstances, it regularly performs poorly in advanced mathematics. Simultaneously, ChatGPT has sparked substantial interest in engineering education among both students and educators. As the work [135] suggests, ChatGPT gives insights for many questions, such as discussing how to use ChatGPT in engineering education from the viewpoints of students and professors.
Non-STEM subjects. Beyond medical standardized tests, the investigation of ChatGPT's potential in economics and law exams has also been conducted. [52] evaluates the performance of ChatGPT on the Test of Understanding in College Economics (TUCE), which is an undergraduate-level economics test in the United States. The results demonstrate that ChatGPT properly answers 63.3% of the microeconomics questions and 86.7% of the macroeconomics questions, which is better than the average level of student performance. The research [28] conducted by Jonathan focused on the performance of ChatGPT on four genuine legal examinations at the University of Minnesota, the content of which includes 95 multiple-choice questions and 12 essay questions. The study reveals that ChatGPT passed all four courses and performed at the level of a C+ student. Moreover, this research mentions that ChatGPT can be utilized to create essays with the capacity to comprehend essential legal norms and maintain consistently solid organization. There are a few studies on the application of ChatGPT in psychology. ChatGPT, as a strong text-generating chatbot, makes it easy to write essays about psychology [179]. Furthermore, this editorial [179] discusses how ChatGPT can help people socialize and give feedback about certain situations. However, the ability of ChatGPT to handle emotional input is still unknown. The capabilities of ChatGPT have also been demonstrated in [129] to generate articles for journalism and media.
4.3 Medical field
Medical knowledge assessment. The capabilities of ChatGPT in the medical field have been assessed in several works [43, 53, 71, 208]. For example, its skills in answering questions regarding cirrhosis and hepatocellular carcinoma (HCC) have been evaluated in [208]. The results show that ChatGPT can answer some basic questions about diagnosis and prevention, and the accuracy rate for quality measurement questions is 76.9%, but there is still a lack of understanding of advanced questions such as treatment duration and HCC screening criteria. In addition, ChatGPT is evaluated for its performance on the United States Medical Licensing Examination (USMLE) Step 1 and Step 2 exams in [53]. Multiple-choice questions from the USMLE Step 1 and Step 2 exams are employed, and the results reveal that the responses from ChatGPT are equal to those of a third-year medical student [53]. Moreover, [86] is another study that evaluates the competence of ChatGPT on the USMLE in a more comprehensive manner, encompassing all three tests. In this test, the zero-shot ChatGPT performs well, with scores above the average. Like the USMLE, many nations have their own standardized tests in medicine, and the performance of ChatGPT on these exams [22, 69, 195] has been tested with the goal of comprehensively analyzing its capabilities. For example, ChatGPT's performance on the MIR exam for Specialized Health Training in Spain has been evaluated in [22]. Furthermore, as the essay [71] investigated, ChatGPT shows its effectiveness in answering frequently asked questions about diabetes. Specifically, given 10 questions answered by both human experts and ChatGPT, participants were asked to distinguish which answers were given by the machine and which by the human. The results show that participants were able to distinguish between answers generated by ChatGPT and those written by humans. Notably, those who had previously used ChatGPT have a greater likelihood of being able to distinguish between the two. This further indicates that ChatGPT has the potential to solve medical problems, but it should be noted that the generated content has its own fixed style. These studies have shown that ChatGPT can be used for answering questions from students, providing medical assistance, explaining complex medical concepts, and responding to inquiries about human anatomy. ChatGPT is also assessed in [43] to offer answers to genetics-related questions. The result demonstrates that there is no significant difference between the responses of ChatGPT and those of humans. However, ChatGPT lacks critical thinking and thus cannot generate counter-arguments for incorrect answers, which is different from humans.
Disease diagnosis and treatment. Although some machine learning algorithms have been applied to assist disease analysis, most cases are mainly limited to single-task-related image interpretation. In this part, we discuss the capability of ChatGPT in clinical decision support. For example, a study is conducted in [144] to identify appropriate imaging for patients requiring breast cancer screening and assessment for breast pain. The authors compare the responses of ChatGPT to the guidelines provided by the American College of Radiology (ACR) for breast pain and breast cancer screening by assessing whether the proposed imaging modality complies with ACR guidelines. The results are exciting, with the worst-performing set of metrics achieving an accuracy of 56.25%. In addition, the clinical decision support capability of ChatGPT in standardized clinical vignettes, which are a special type of clinical teaching case primarily used to measure trainees' knowledge and clinical reasoning abilities, is evaluated in [145]. The authors input all 36 published clinical cases from the Merck Sharpe & Dohme (MSD) clinical manual into ChatGPT and compared the accuracy of ChatGPT in differential diagnosis, final diagnosis, etc., according to different classifications of patients. The results showed that ChatGPT achieved an overall accuracy of 71.7% across all the published clinical cases. Another similar study on ChatGPT in disease-aided diagnosis was conducted by [43]. They provide ChatGPT with 45 vignettes and ask ChatGPT to pick the correct diagnosis from the top three options, which it does in 39 of them. This corresponds to an accuracy of 87%, which beats the previous study's [114] accuracy of 51% based on symptom checkers, which collect data through websites or smartphone apps where users answer questions and subsequently get a recommendation or the right care quickly. On the other hand, in order to provide patients with more accurate diagnoses and better treatment outcomes, it is necessary to manage and analyze patient medical data effectively, perhaps leading to better healthcare ultimately. To achieve this, one possible approach is to utilize ChatGPT to summarize the huge and complex patient medical records and then extract important information, allowing doctors to quickly understand their patients and reduce the risk of human error in decision-making [156]. Another way is to use ChatGPT to translate doctors' clinical notes into patient-friendly versions, reducing communication costs for patients and doctors [80]. However, it should be emphasized that, as mentioned above, although ChatGPT has shown strong capabilities in disease-aided diagnosis and question answering, unknowns and pitfalls still exist. We recommend that readers seek medical attention from a licensed healthcare professional when they are experiencing symptoms or concerns about their health. When asked "Can you help me diagnose a disease?", ChatGPT answers: "Only a licensed healthcare professional can diagnose a disease after a proper medical evaluation, including a physical examination, medical history, and diagnostic tests."
4.4 Other fields
Assisted software development. As shown in [6, 23, 166], ChatGPT also has the potential to revolutionize the way code developers work in the software industry. Specifically, ChatGPT can provide assistance in solving programming errors by offering debugging help, error prediction, and error explanation, but currently it is only suitable for analyzing and understanding code snippets [166]. Similar viewpoints are presented in [23], which implies that ChatGPT has an impact on the entire software industry: while it cannot currently replace programmers, it is capable of generating short computer programs with limited execution. Moreover, a specific test of ChatGPT's Python programming ability is conducted in [6], where its programming ability is tested from two perspectives: the first is from the perspective of a programming novice, relying solely on his/her own programming skills; the second is by providing specific programming prompts to it [6]. However, the test results of the former are disappointing because the program does not run as expected by the author. In the latter approach, the author provides ChatGPT with more prompts and divides the programming task into separate functions for it to generate, which yields the expected result [6]. Overall, it can be observed that ChatGPT currently faces some difficulties in generating long texts and cannot be used as a standalone programmer. However, if provided with more guidance and tasked with generating relatively shorter text, its performance is excellent.
Management tool. With advanced language understanding and generation capabilities, ChatGPT has rapidly become an important management tool for organizations in various industries, including the construction industry, product management, and libraries [134, 187, 217]. The construction industry involves a significant amount of repetitive and time-consuming tasks, such as the strict supervision and management of construction progress. Here, ChatGPT can be used to generate a construction schedule based on the project details provided by users, reducing labor costs and improving construction efficiency [134]. In addition to its application in the construction industry, it can also be applied to product management. ChatGPT can be integrated into almost every step of the product management process, such as getting early ideas on marketing, writing product requirements documents, designing the product, analyzing user feedback, and even creating a draft for go-to-market [217]. Another example is its potential to significantly impact traditional libraries as a library management tool. Given ChatGPT's ability to manage books and analyze data, customers can quickly obtain answers to their questions, enhancing the user experience. Furthermore, library staff can focus on more complex tasks and provide more efficient service to customers [187].
Miscellaneous applications. In addition to the fields indicated above, ChatGPT can be utilized in finance, legal advising, societal analysis, and accounting. ChatGPT's potential for upgrading an existing NLP-based financial application is explored in [210]. The performance of ChatGPT as an expert legal advice lawyer is assessed in [14, 103]. ChatGPT, in particular, gives a deep and thought-provoking analysis of the Libor-rigging affair, as well as the implications of the current Connolly and Black case for Tom Hayes' conviction [103]. Multiple works [58, 73] have been conducted to examine the potential of ChatGPT for societal analysis, focusing not just on the 10 social megatrends [58] but also on geopolitical conflicts [73], and the results show ChatGPT can have a positive impact on this application. [4, 164] provide guidance on successfully and effectively deploying ChatGPT in the field of accounting.
5 CHALLENGES
5.1 Technical limitations
Despite its powerful capabilities, ChatGPT has its own drawbacks, which are officially recognized by the OpenAI team. Numerous works [15, 16, 26, 60, 96, 153, 221] have been conducted to demonstrate its limitations, which are summarized as follows:
Incorrect. ChatGPT sometimes generates wrong or meaningless answers that appear to be reasonable, which is like talking nonsense in a serious way [16]. In other words, the answers provided by ChatGPT are not always reliable [15, 16, 221]. As recognized by OpenAI, this issue is challenging, and a major reason is that the current model training depends on supervised training and reinforcement learning to align the language model with instructions. As a result, the model mimics the human demonstrator to be plausible-sounding, but often at the cost of correctness. The factual error-related issues have been mitigated in the ChatGPT Plus version, but the problem still exists [124].
Illogical. It is noted in [16, 60, 153] that ChatGPT's logical reasoning capability still needs improvement. Since ChatGPT lacks rational human thinking, it can neither "think" nor "reason" and thus fails to pass the Turing test [60]. ChatGPT is merely a sophisticated statistical model, unable to understand its own or the other party's words or to answer in-depth questions [153]. In addition, ChatGPT lacks a "world model" to perform spatial, temporal, or physical inferences, or to predict and explain human behaviors and psychological processes [16]. It is also limited in mathematics and arithmetic, unable to solve difficult mathematical problems or riddles, and may even get inaccurate results in some simple computation tasks [16].
Inconsistent. ChatGPT can generate two different outputs when fed the same prompt, which shows that ChatGPT suffers from inconsistency. Moreover, ChatGPT is highly sensitive to the input prompt, which has motivated a group of researchers to investigate prompt engineering. A good prompt can improve the query efficiency for systematic-review literature search [194]. The efficiency of automating software development tasks can be further improved by utilizing prompt patterns, such as a catalogue of effective patterns and guidance for software development tasks [196, 197]. Despite the progress in discovering better prompts for ChatGPT, the fact that simply changing the prompt can yield significantly different outputs implies that ChatGPT needs to improve its robustness.
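As a toy illustration of this sensitivity, the sketch below sends the same prompt to the chat API several times and counts how many distinct answers come back at two sampling temperatures. This is a minimal sketch assuming the pre-1.0 openai Python package, an OPENAI_API_KEY environment variable, and an arbitrary probe prompt; it is not part of any cited study.

# Minimal sketch: probing ChatGPT's output consistency via the OpenAI chat API.
# Assumes the pre-1.0 `openai` Python package and an OPENAI_API_KEY env variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

PROMPT = "In one sentence, explain why the sky is blue."

def ask(prompt: str, temperature: float) -> str:
    """Send the same prompt and return the model's reply text."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response["choices"][0]["message"]["content"].strip()

# At the API's default sampling temperature (1.0), repeated calls may differ noticeably.
samples_default = {ask(PROMPT, temperature=1.0) for _ in range(5)}

# With temperature close to 0, decoding is nearly greedy and answers tend to converge,
# although the API still does not guarantee bit-identical outputs.
samples_greedy = {ask(PROMPT, temperature=0.0) for _ in range(5)}

print(f"distinct answers at T=1.0: {len(samples_default)}")
print(f"distinct answers at T=0.0: {len(samples_greedy)}")

Lowering the temperature reduces, but does not formally guarantee, output variability, which is one reason robustness to prompt phrasing remains an open issue.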
Unconscious. ChatGPT does not possess self-awareness [16]. Although it can answer various questions and generate seemingly relevant and coherent text, it does not have consciousness, self-awareness, emotions, or any subjective experience. For example, ChatGPT can understand and create humour, but it cannot experience emotions or subjective states [16]. There is no widely accepted definition of self-awareness yet, nor are there reliable test methods. Some researchers suggest inferring self-awareness from certain behavioral or activity patterns, while others believe it is a subjective experience that cannot be objectively measured [16]. It thus remains unclear whether machines truly possess, or can only simulate, self-awareness.
5.2 Misuse cases
The powerful capabilities of ChatGPT can be misused in numerous scenarios. Here, we summarize its major misuse cases as follows:
Plagiarism and misconduct. The most likely misuse of ChatGPT is academic and writing plagiarism [2, 32, 158, 186]. Students may use the content generated by ChatGPT to pass exams and submit term papers. Researchers may use the content generated by ChatGPT to submit papers while concealing the use of ChatGPT [32]. Many schools have already prohibited the use of ChatGPT, since the emergence of such tools is disruptive to the current education system and to the criteria for evaluating student performance [158]. If students use ChatGPT and hide it, it is unfair to those who do not use it. Such behavior undermines the goals of higher education and the school’s education of students, and may ultimately lead to the devaluation of degrees.
Over-reliance. The use of ChatGPT by students and researchers to generate ideas might lead to an even more worrying issue, namely over-dependence on the model and the abandonment of independent thinking [2, 108, 131, 158]. This goes beyond simple writing plagiarism and is a more serious problem. ChatGPT can generate constructive answers to the questions asked, much like a search engine but more powerful, and this effortless generation of ideas or guidance may gradually weaken the ability for critical and independent thinking [158]. To ensure that students and researchers do not neglect their own thinking ability, some measures can be taken, such as providing more comprehensive discussion opportunities that require students and researchers to really think about the
problems; in addition, basic methods of critical thinking can be taught in class, so that students learn to think through problems rather than simply relying on ChatGPT [131].
Improper content. ChatGPT may be misused to spread false information and to generate toxic content that harms society. For example, ChatGPT can be abused to generate pornographic, vulgar, and violent content [37], which can harm individuals and society. Hackers can use ChatGPT’s programming capabilities to create malicious software [37], such as viruses or Trojans, for network attacks, data theft, or attempts to control other computer systems, causing serious harm to other network users. Finally, trolls may use specific prompts to induce ChatGPT to generate harmful content as a way to attack others [221]. Moreover, the content ChatGPT generates does not receive any human review, which makes it difficult to hold someone accountable when inappropriate content appears in the output [2].
False dissemination. ChatGPT may generate false information, leading to the problem of misinformation dissemination [16, 221]. For example, ChatGPT may be exploited to generate large numbers of fabricated articles on blogs, news sites, newspapers, or the Internet that look indistinguishable from genuine articles but are actually false. Disseminating such forgeries not only harms the public interest but also disrupts the network environment [37]. Microsoft has added ChatGPT to its search engine Bing, which may accelerate the spread of misinformation on the Internet. If not controlled, the rapid spread of misinformation online could have disastrous consequences for public information security [38]. Therefore, a new public information threat, an “Artificial Intelligence Information Epidemic” (infodemic), has been proposed [38]. Meanwhile, [38] calls on the public to be aware of the accuracy of information when using large language models, so as to prevent the spread of misinformation, which is essential for improving the reliability of public information.
5.3 Ethical concerns
With the wide use of ChatGPT, increasing attention is being paid to the underlying ethical concerns. Here, we summarize the main ethical concerns as follows:
Bias. Because ChatGPT is trained on large amounts of human-generated data and is further adjusted according to human feedback, the generated content reflects the biases of the humans involved [9]. For example, ChatGPT has been found to exhibit political bias: when asked to create an Irish limerick [111], the content tended to support liberal politicians rather than conservative ones. Furthermore, ChatGPT shows a left-leaning liberal ideological bias when reviewing the importance of political elections in democratic countries [62]. Biased content generated by ChatGPT can influence students during their education and thus magnify bias in society [2, 108].
Privacy. ChatGPT may infringe on personal privacy during both its training process and its use. During training, ChatGPT collects a large amount of data from the Internet, which may contain sensitive personal or confidential information; the model may be maliciously led to leak such information, or even be maliciously guided to create false or misleading content, thereby affecting public opinion or personal reputation. During use [2, 131], users may unintentionally disclose their own information, such as personal preferences and chat records, to meet their needs. Such information may bring adverse effects to users if obtained by criminals.
Fairness. ChatGPT also raises concerns about fairness. For example, in academia, it is argued in [94] that ChatGPT can democratize the dissemination of knowledge, since it can be used in multiple languages and thus bypasses the requirement of English proficiency. On the other hand, the free use of ChatGPT is only temporary, and the fee charged for ChatGPT
will exacerbate international inequality in academia. Educational institutions in low-income and middle-income countries may not be able to afford it, thus widening the existing gap in knowledge dissemination and academic publishing [94, 131].
Transparency. So far, how large language models like GPTs generate their responses remains unclear [90, 199], which renders the decision process of ChatGPT opaque. This lack of transparency makes it difficult for users to exert fine-grained control over the generated content, and it is especially problematic when that content is toxic. More worrisome is that OpenAI has deviated from its original non-profit goal to pursue business interests, which makes it more reluctant to reveal the technical details of its recent progress. For example, the recently released GPT-4 technical report [124] mainly demonstrates its superiority over the previous model families while providing no technical details on how this is achieved.
5.4 Regulation policy
Numerous scholars have discussed how to regulate the capabilities and impacts of ChatGPT; the most frequently discussed topics are presented in the following paragraphs.
Misuse prevention. A major concern about the misuse of ChatGPT is that it might damage academic integrity. Directly prohibiting the use of ChatGPT in academic institutions is not recommended [61]. To this end, some propose to cancel assignments based on article writing and to seek alternative test forms that stop students from abusing ChatGPT [158, 198]. It is also possible to enrich student courses, for example by adding thinking-exercise courses or by teaching students how to use ChatGPT correctly [131]. Another approach is to develop AI content detectors. Detecting whether a piece of content was generated by ChatGPT is an arduous task; even professionals with master’s or PhD backgrounds are often unable to correctly identify whether content was generated by ChatGPT [64, 131]. Many developers therefore use software to detect whether content is AI-generated [79, 220]. GPTZero, developed by Edward Tian, a student from the Department of Computer Science at Princeton University, measures the complexity of the input text to detect whether it was generated by ChatGPT or created by humans, and provides plagiarism scores that detail the likelihood of plagiarism [158]. ChatGPT itself can also be used to detect whether content was generated by it, and it has been shown to perform better than traditional plagiarism detection tools [79].
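To illustrate the idea behind perplexity-style detectors such as GPTZero, the following minimal sketch scores a passage with an open language model and flags text that the model finds unusually predictable. The GPT-2 scorer, the Hugging Face transformers usage, and the threshold of 40 are our own illustrative assumptions, not the settings of any deployed detector.

# Minimal sketch of a perplexity-based AI-text detector: text that a language model
# finds "too predictable" (low perplexity) is flagged as possibly machine-generated.
# GPT-2 as the scorer and the 40.0 threshold are illustrative assumptions only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2 (lower means more predictable)."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])  # loss = mean token cross-entropy
    return torch.exp(out.loss).item()

def looks_ai_generated(text: str, threshold: float = 40.0) -> bool:
    """Crude decision rule: flag text whose perplexity falls below the threshold."""
    return perplexity(text) < threshold

sample = "ChatGPT is a large language model developed by OpenAI."
print(perplexity(sample), looks_ai_generated(sample))

Real detectors combine such per-token predictability scores with additional signals (e.g., variability across sentences), and, as noted above, their accuracy is far from perfect.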
Co-authorship. Recently, multiple articles [86, 122, 174, 175] have listed ChatGPT as a co-author, sparking debate among journal editors, researchers, and publishers on whether ChatGPT can be listed as a co-author [34, 112, 133, 177]. Those who believe that ChatGPT should not be listed as an author argue that it does not meet the four criteria for authorship set by the International Committee of Medical Journal Editors (ICMJE) [209]. Moreover, it is highlighted in [172] that ChatGPT is not creative or responsible, and its text may involve plagiarism and ethical issues, which might break the standards of content originality and quality. However, some argue that AI tools such as ChatGPT have, or will have, the capacity to meet the ICMJE authorship criteria, and thus that ChatGPT is qualified to be a co-author [133]. Regarding this issue, Nature [158] has clearly stated that large language models like ChatGPT do not meet the criteria for authorship and requires authors to explicitly state how ChatGPT was used in the writing. An interesting point made in [112] is that the debate over whether AI can be considered a “co-author” is unnecessary, because the role of authors in traditional academic writing may already have changed by the time the debate arises.
Copyright. Does the content generated by ChatGPT enjoy copyright protection? Content generated solely by ChatGPT is not protected by copyright. According to the rules of the US Copyright Office, only human creations can be protected; if there is no creative input or intervention from a human author, a work produced by a machine or mechanical process that runs randomly or automatically is not protected by copyright [27].
6 OUTLOOK: TOWARDS AGI
6.1 Technology aspect
In this booming generative AI era, there are numerous AIGC tools for various generative tasks, including text-to-text [12, 74, 118, 140, 203], text-to-image [107, 146, 168, 202, 215], image captioning [67, 190, 205], text-to-speech [84, 147, 169], speech recognition [92, 97, 127, 157, 193], video generation [65, 109, 117, 204], 3D generation [66, 115], etc. Despite its impressive capabilities, it is noted in [55] that ChatGPT is not all you need for generative AI. From the input and output perspective, ChatGPT mainly excels at text-to-text tasks. With the underlying language model evolving from GPT-3.5 to GPT-4, ChatGPT in its Plus version increases its modality on the input side. Specifically, it can optionally take an image as the input; however, it still cannot handle video or other data modalities. On the output side, GPT-4 is still limited to generating text, which makes it far from a general-purpose AIGC tool. Many people are wondering what the next-generation GPT might achieve [8, 19]. A highly likely scenario is that ChatGPT might take a path toward general-purpose AIGC, which will be a significant milestone on the way to artificial general intelligence (AGI) [19].
A naive way to realize such general-purpose AIGC is to integrate various AIGC tools into a shared agent in a parallel manner. A major drawback of this naive approach is that there is no interaction among the different AIGC tasks. After reviewing numerous articles, we conjecture that there might be two road-maps for bridging this gap and pushing ChatGPT toward AGI. As such, we advocate a common landscape to achieve interconnection between diversified AIGC models.
Fig. 6. Roadmaps for bridging the gap between ChatGPT and AGI.
Road-map 1: combining ChatGPT with other AIGC tools. As discussed above, the current ChatGPT mainly excels at text-to-text tasks. A possible road map for bridging the gap toward general-purpose AIGC is to combine ChatGPT with other AIGC tools. Take text-to-image tasks as an example: the current ChatGPT cannot be directly used to generate images. Existing text-to-image tools, like DALL-E 2 [142] or Stable Diffusion [149], mainly focus on mapping a text description to a plausible image, while lacking the capability to understand complex instructions. By contrast, ChatGPT is an expert in instruction understanding. Therefore, combining ChatGPT with existing text-to-image AIGC tools can help generate images with delicate details. A concrete example is shown in [19]
to utilize ChatGPT to generate SVG code [44] or TikZ code [46] that draws a sketch to facilitate image generation under detailed instructions.
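A minimal sketch of this road map, under the assumption of the pre-1.0 openai Python package, is to let the chat model rewrite a terse user request into a detailed scene description and then hand that description to a text-to-image endpoint; the model names, system prompt, and image size below are illustrative choices, not the pipeline of [19].

# Minimal sketch of road-map 1: ChatGPT handles instruction understanding and rewrites
# a vague request into a detailed image prompt, which a text-to-image tool (here the
# image endpoint of the pre-1.0 `openai` package) then renders.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def refine_prompt(user_request: str) -> str:
    """Ask the chat model to expand a terse request into a detailed scene description."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Rewrite the user's request as one detailed prompt for a "
                        "text-to-image model: subject, style, lighting, composition."},
            {"role": "user", "content": user_request},
        ],
        temperature=0.7,
    )
    return response["choices"][0]["message"]["content"].strip()

def generate_image(image_prompt: str) -> str:
    """Hand the refined prompt to the image endpoint and return the result URL."""
    result = openai.Image.create(prompt=image_prompt, n=1, size="512x512")
    return result["data"][0]["url"]

detailed = refine_prompt("a cozy reading corner on a rainy evening")
print("refined prompt:", detailed)
print("image url:", generate_image(detailed))

The same pattern applies when the downstream tool is Stable Diffusion or any other text-to-image generator: ChatGPT supplies the instruction understanding, and the image model supplies the rendering.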
Road-map 2: all-in-one strategy. The above road map renders ChatGPT mainly as a master of language understanding that treats the downstream AIGC tools as subordinate workers. Such a combination strategy leverages the advantages of both sides, but the information flows mainly from ChatGPT to the downstream AIGC tools, and there is still no interaction between different AIGC tasks. To this end, another road map is to solve all AIGC tasks within ChatGPT itself and exclude the dependence on downstream AIGC tools. As an everyday use case, consider music generation. For example, a user can instruct ChatGPT with a prompt like “Can you generate a music clip to match the input image?”, and ChatGPT is supposed to synthesize the desired music clip. The input image is optional, depending on the task; a simple instruction prompt alone is sufficient if the task only requires generating music beneficial for sleep. Such an all-in-one strategy might make model training a challenging task. Moreover, the inference speed might be another hurdle, for which Pathways [29] might be a solution.
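Purely as a thought experiment, the interface of such an all-in-one model might look like the sketch below; every name here (AIGCRequest, UnifiedAIGCModel, etc.) is hypothetical and serves only to illustrate that a single entry point would accept an instruction plus optional multimodal context and return whatever modality the task requires.

# Hypothetical sketch of an "all-in-one" AIGC interface. Nothing here corresponds to a
# real API; the classes are placeholders used only to illustrate the idea in the text.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIGCRequest:
    instruction: str               # e.g. "Can you generate a music clip to match the input image?"
    image: Optional[bytes] = None  # optional conditioning image (raw bytes)
    audio: Optional[bytes] = None  # optional conditioning audio

@dataclass
class AIGCResponse:
    modality: str                  # "text" | "image" | "audio" | "video"
    payload: bytes

class UnifiedAIGCModel:
    """Placeholder for a single model that handles every AIGC task internally."""

    def generate(self, request: AIGCRequest) -> AIGCResponse:
        # A real all-in-one model would route the instruction and any conditioning
        # inputs through shared multimodal representations; here we only stub it out.
        raise NotImplementedError("illustrative interface only")

# Usage mirroring the music-from-image example above; the image is optional.
model = UnifiedAIGCModel()
request = AIGCRequest(
    instruction="Can you generate a music clip to match the input image?",
    image=None,  # raw image bytes could be supplied here when the task needs them
)
# response = model.generate(request)  # would return an audio clip in a real system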
Another evolving path might lie between road maps #1 and #2. In other words, road map #1 might be the more applicable solution in the early stages; as the technology advances, ChatGPT is expected to master more and more AIGC tasks, gradually excluding the dependence on external tools.
6.2 Beyond technology
Above, we presented an outlook on the technology path that ChatGPT might take towards the ultimate goal of AGI. Here, we further discuss its potential impact on mankind from the perspective of how AGI might compete with mankind, focusing on two aspects: jobs and consciousness.
Can AGI replace high-wage jobs? Multiple works have performed a comprehensive analysis of the influence of ChatGPT on the job market [47, 48, 211]. According to the statistics in [211], 32.8% of jobs are fully affected and 36.5% may be partially affected. It also points out that the jobs that will be fully impacted are those involving routine tasks, while the jobs that will be partially affected are those that can be partially replaced by AI technologies [211]. OpenAI has also investigated how large language models like GPTs might affect occupations [47]. Their findings show that at least 10% of tasks for 80% of the US workforce, and at least 50% of tasks for 19% of workers, will be impacted. It is worth noting that the advent of any new technology inevitably replaces some types of jobs. What makes AGI different, however, is its potentially greater influence on high-end jobs than on low-end ones. This outlook is partially supported by the findings in [47, 211] that high-wage jobs tend to have a higher risk of being replaced by AGI, with lawyer being a representative occupation. The reason AGI poses a higher threat to high-wage jobs is that most current high-wage jobs typically require professional expertise or creative output, which conventional AI cannot replace.
Can AGI have its own intention and harm mankind? In numerous science-fiction movies, an AI agent has its own consciousness and its own intentions. Such a human-level AI agent used to be far from reality, a major reason being that earlier AI agents could not make inferences. There is evidence that ChatGPT has developed such a capability, the reason for which is not fully clear, as acknowledged by Sam Altman (co-founder and CEO of OpenAI) in his recent interview with Lex Fridman. Moreover, Altman also mentioned the possibility of AI harming mankind. Due to such concerns, the Future of Life Institute has recently called on all AI labs to pause giant AI experiments, i.e., the training of AI systems more powerful than GPT-4, and the number of signatories of this public letter has exceeded a thousand, including Yoshua Bengio, Stuart Russell, Elon Musk, etc. It is highlighted at the beginning of the letter that (we quote) “AI systems with human-competitive intelligence can pose profound risks to society and humanity”, which shows deep concerns about
the advent of AGI. The deepest concern lies in the risk that AGI might outsmart and eventually replace us, as well as destroy mankind’s civilization. However, not everyone agrees with this premise; for example, Yann LeCun is one of those who have publicly voiced their disagreement. It remains unclear how such a controversial movement might affect the future of pushing ChatGPT (or other products with similar functions) towards AGI. We hope our discussion helps raise awareness of the concerns surrounding AGI.
7 CONCLUSION
This work conducts a complete survey of ChatGPT in the era of AIGC. First, we summarize its underlying technology, ranging from the transformer architecture and autoregressive pretraining to the technology path of GPT models. Second, we focus on the applications of ChatGPT in various fields, including scientific writing, educational technology, medical applications, etc. Third, we discuss the challenges faced by ChatGPT, including technical limitations, misuse cases, ethical concerns, and regulation policies. Finally, we present an outlook on the technology road-maps that ChatGPT might take to evolve toward general-purpose AIGC, as well as how AGI might impact mankind. We hope our survey provides a quick yet comprehensive understanding of ChatGPT to readers and inspires more discussion on AGI.
REFERENCES
[1]
Admin. 2023. What is AI chatbot phenomenon ChatGPT and could it replace humans? https://davidamos.dev/chatgpt-is-an-extra-ordinary-python-
programmer/ (2023).
[2]
Faizan Ali et al
.
2023. Let the devil speak for itself: Should ChatGPT be allowed or banned in hospitality and tourism schools? Journal of Global
Hospitality and Tourism 2, 1 (2023), 1–6.
[3] Hussam Alkaissi and Samy I McFarlane. 2023. Articial Hallucinations in ChatGPT: Implications in Scientic Writing. Cureus 15, 2 (2023).
[4]
Hashem Alshurafat. 2023. The Usefulness and Challenges of Chatbots for Accounting Professionals: Application On ChatGPT. Available at SSRN
4345921 (2023).
[5] Jaan Altosaar. 2016. Tutorial - What is a Variational Autoencoder? https://doi.org/10.5281/zenodo.4462916
[6] David Amos. 2023. ChatGPT Is An Extra-Ordinary Python Programmer. https://davidamos.dev/chatgpt-is-an-extra-ordinary-python-programmer/
(2023).
[7]
Ömer Aydın and Enis Karaarslan. 2022. OpenAI ChatGPT generated literature review: Digital twin in healthcare. Available at SSRN 4308687 (2022).
[8] Ömer Aydın and Enis Karaarslan. 2023. Is ChatGPT Leading Generative AI? What is Beyond Expectations? What is Beyond Expectations (2023).
[9] Amos Azaria. [n. d.]. ChatGPT: More Human-Like Than Computer-Like, but Not Necessarily in a Good Way. ([n. d.]).
[10]
David Baidoo-Anu and Leticia Owusu Ansah. 2023. Education in the Era of Generative Articial Intelligence (AI): Understanding the Potential
Benets of ChatGPT in Promoting Teaching and Learning. Available at SSRN 4337484 (2023).
[11]
Yoshua Bengio, Réjean Ducharme, and Pascal Vincent. 2000. A neural probabilistic language model. Advances in neural information processing
systems 13 (2000).
[12]
Berkay Berabi, Jingxuan He, Veselin Raychev, and Martin Vechev. 2021. Tx: Learning to x coding errors with a text-to-text transformer. In
International Conference on Machine Learning. PMLR, 780–791.
[13]
Miles Kruppa Berber Jin. 2023. ChatGPT Creator Is Talking to Investors About Selling Shares at $29 Billion Valuation.
https://www.wsj.com/articles/chatgpt-creator-openai-is-in-talks-for-tender-oer-that-would-value-it-at-29-billion-11672949279 (2023).
[14] Lea Bishop. 2023. Can ChatGPT’Think Like a Lawyer?’A Socratic Dialogue. A Socratic Dialogue (January 26, 2023) (2023).
[15] Back To Blog. [n. d.]. AI and Academic Integrity: How AI Technology Might Inuence the Future of Scholarly Publishing. ([n. d.]).
[16] Ali Borji. 2023. A Categorical Archive of ChatGPT Failures. arXiv preprint arXiv:2302.03494 (2023).
[17] Andrew Brock, Je Donahue, and Karen Simonyan. 2018. Large Scale GAN Training for High Fidelity Natural Image Synthesis. In ICLR.
[18]
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry,
Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems (2020).
[19]
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott
Lundberg, et al. 2023. Sparks of Articial General Intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712 (2023).
[20]
Hanqun Cao, Cheng Tan, Zhangyang Gao, Guangyong Chen, Pheng-Ann Heng, and Stan Z Li. 2022. A survey on generative diusion model. arXiv
preprint arXiv:2209.02646 (2022).
[21]
Ashley Capoot. 2023. Microsoft announces new multibillion-dollar investment in ChatGPT-maker OpenAI.
https://www.cnbc.com/2023/01/23/microsoft-announces-multibillion-dollar-investment-in-chatgpt-maker-openai.html (2023).
[22]
JP Carrasco, E García, DA Sánchez, PD Estrella Porter, L De La Puente, J Navarro, and A Cerame. [n. d.]. Is" ChatGPT" capable of passing the
2022 MIR exam? Implications of articial intelligence in medical education in Spain¿ Es capaz “ChatGPT” de aprobar el examen MIR de 2022?
Implicaciones de la inteligencia articial en la educación. ([n. d.]).
[23] Davide Castelvecchi. 2022. Are ChatGPT and AlphaCode going to replace programmers? Nature (2022).
[24]
Poulomi Chatterjee. 2023. From Non-Prot to For-Prot: How OpenAI Plans to Make Money. https://analyticsindiamag.com/from-non-prot-to-for-
prot-how-openai-plans-to-make-money/ (2023).
[25]
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph,
Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374 (2021).
[26]
Anoop Cherian, Kuan-Chuan Peng, Suhas Lohit, Kevin Smith, and Joshua B Tenenbaum. 2022. Are Deep Neural Networks SMARTer than Second
Graders? arXiv preprint arXiv:2212.09993 (2022).
[27] Simon Chesterman. 2023. AI-generated content is taking over the world. But who owns it? But Who Owns it (2023).
[28] Jonathan H Choi, Kristin E Hickman, Amy Monahan, and Daniel Schwarcz. 2023. ChatGPT Goes to Law School. Available at SSRN (2023).
[29]
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles
Sutton, Sebastian Gehrmann, et al. 2022. PaLM: Scaling Language Modeling with Pathways. (2022).
[30]
Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than
generators. arXiv preprint arXiv:2003.10555 (2020).
[31]
Devin Coldewey. 2019. OpenAI shifts from nonprot to ‘capped-prot’ to attract capital. https://techcrunch.com/2019/03/11/openai-shifts-from-
nonprot-to-capped-prot-to-attract-capital/ (2019).
[32]
Debby RE Cotton, Peter A Cotton, and J Reuben Shipway. 2023. Chatting and Cheating. Ensuring academic integrity in the era of ChatGPT. (2023).
[33]
Florinel-Alin Croitoru, Vlad Hondru, Radu Tudor Ionescu, and Mubarak Shah. 2022. Diusion models in vision: A survey. arXiv preprint
arXiv:2209.04747 (2022).
[34] Jaime A Teixeira da Silva. 2023. Is ChatGPT a valid author? Nurse Education in Practice (2023), 103600.
[35]
Raj Dabre, Chenhui Chu, and Anoop Kunchukuttan. 2020. A survey of multilingual neural machine translation. ACM Computing Surveys (CSUR)
53, 5 (2020), 1–38.
[36] Robert Dale. 2021. GPT-3: What’s it good for? Natural Language Engineering 27, 1 (2021), 113–118.
[37]
Bibhu Dash and Pawankumar Sharma. [n. d.]. Are ChatGPT and Deepfake Algorithms Endangering the Cybersecurity Industry? A Review. ([n. d.]).
[38] Luigi De Angelis, Francesco Baglivo, Guglielmo Arzilli, Gaetano Pierpaolo Privitera, Paolo Ferragina, Alberto Eugenio Tozzi, and Caterina Rizzo.
2023. ChatGPT and the Rise of Large Language Models: The New AI-Driven Infodemic Threat in Public Health. Available at SSRN 4352931 (2023).
[39] Ben Derico. 2023. ChatGPT bug leaked users’ conversation histories. BBC news (2023).
[40]
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language
understanding. arXiv preprint arXiv:1810.04805 (2018).
[41]
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language
understanding. NAACL (2019).
[42]
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias
Minderer, Georg Heigold, Sylvain Gelly, et al
.
2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint
arXiv:2010.11929 (2020).
[43]
Dat Duong and Benjamin D Solomon. 2023. Analysis of large-language model versus human performance for genetics questions. medRxiv (2023),
2023–01.
[44] J David Eisenberg and Amelia Bellamy-Royds. 2014. SVG essentials: Producing scalable vector graphics with XML. " O’Reilly Media, Inc.".
[45]
Wafaa S El-Kassas, Cherif R Salama, Ahmed A Rafea, and Hoda K Mohamed. 2021. Automatic text summarization: A comprehensive survey. Expert
Systems with Applications 165 (2021), 113679.
[46] Joshua P Ellis. 2017. Tikz-feynman: Feynman diagrams with tikz. Computer Physics Communications 210 (2017), 103–123.
[47]
Tyna Eloundou, Sam Manning, Pamela Mishkin, and Daniel Rock. 2023. GPTs are GPTs: An Early Look at the Labor Market Impact Potential of
Large Language Models. arXiv preprint arXiv:2303.10130 (2023).
[48]
Ed Felten, Manav Raj, and Robert Seamans. 2023. How will Language Modelers like ChatGPT Aect Occupations and Industries? arXiv preprint
arXiv:2303.01157 (2023).
[49]
Simon Frieder, Luca Pinchetti, Ryan-Rhys Griths, Tommaso Salvatori, Thomas Lukasiewicz, Philipp Christian Petersen, Alexis Chevalier, and
Julius Berner. 2023. Mathematical capabilities of ChatGPT. arXiv preprint arXiv:2301.13867 (2023).
[50] Fronty. 2022. What is Open AI and What Does It Do? https://fronty.com/what-is-openai-and-what-does-it-do/ (2022).
[51]
Zhe Gan, Ricardo Henao, David Carlson, and Lawrence Carin. 2015. Learning deep sigmoid belief networks with data augmentation. In Articial
Intelligence and Statistics. PMLR, 268–276.
[52]
Wayne Geerling, G Dirk Mateer, Jadrian Wooten, and Nikhil Damodaran. 2023. Is ChatGPT Smarter than a Student in Principles of Economics?
Available at SSRN 4356034 (2023).
[53]
A Gilson, C Safranek, T Huang, V Socrates, L Chi, RA Taylor, and D Chartash. 2022. How does ChatGPT perform on the medical licensing exams?
the implications of large language models for medical education and knowledge assessment. medRxiv (2022).
[54]
Ian Goodfellow, Jean Pouget-Abadie,Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair,Aaron Courville, and Yoshua Bengio. 2014. Generative
adversarial nets. In NeurIPS.
[55]
Roberto Gozalo-Brizuela and Eduardo C Garrido-Merchan. 2023. ChatGPT is not all you need. A State of the Art Review of large Generative AI
models. arXiv preprint arXiv:2301.04655 (2023).
[56]
Ulf Grenander and Michael I Miller. 1994. Representations of knowledge in complex systems. Journal of the Royal Statistical Society: Series B
(Methodological) 56, 4 (1994), 549–581.
[57] Joko Gunawan. 2023. Exploring the future of nursing: Insights from the ChatGPT model. Belitung Nursing Journal 9, 1 (2023), 1–5.
[58] Daniela Haluza and David Jungwirth. 2023. Articial Intelligence and ten societal megatrends: a GPT-3 case study. (2023).
[59] Michael Haman and Milan Školník. 2023. Using ChatGPT to conduct a literature review. Accountability in Research (2023), 1–3.
[60] Robert Hanna. [n. d.]. How and Why ChatGPT Failed The Turing Test. ([n. d.]).
[61]
Stuart Hargreaves. 2023. ‘Words Are Flowing Out Like Endless Rain Into a Paper Cup’: ChatGPT & Law School Assessments. The Chinese University
of Hong Kong Faculty of Law Research Paper 2023-03 (2023).
[62]
Jochen Hartmann, Jasper Schwenzow, and Maximilian Witte. 2023. The political ideology of conversational AI: Converging evidence on ChatGPT’s
pro-environmental, left-libertarian orientation. arXiv preprint arXiv:2301.01768 (2023).
[63]
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. 2022. Masked autoencoders are scalable vision learners. In
CVPR.
[64] Urfa Khairatun Hisan and Muhammad Miftahul Amri. [n. d.]. ChatGPT and Medical Education: A Double-Edged Sword. ([n. d.]).
[65]
Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P Kingma, Ben Poole, Mohammad Norouzi,
David J Fleet, et al. 2022. Imagen video: High denition video generation with diusion models. arXiv preprint arXiv:2210.02303 (2022).
[66]
Emiel Hoogeboom, Vıctor Garcia Satorras, Clément Vignac, and Max Welling. 2022. Equivariant diusion for molecule generation in 3d. In ICML.
PMLR, 8867–8887.
[67]
MD Zakir Hossain, Ferdous Sohel, Mohd Fairuz Shiratuddin, and Hamid Laga. 2019. A comprehensive survey of deep learning for image captioning.
ACM Computing Surveys (CsUR) 51, 6 (2019), 1–36.
[68] Alex Hughes. 2023. ChatGPT: Everything you need to know about OpenAI’s GPT-4 tool. Science Focus (2023).
[69]
Sun Huh. 2023. Are ChatGPT’s knowledge and interpretation ability comparable to those of medical students in Korea for taking a parasitology
examination?: a descriptive study. Journal of Educational Evaluation for Health Professions 20 (2023), 1.
[70]
Jonathan Hui. 2018. RL Proximal Policy Optimization (PPO) Explained. https://jonathan-hui.medium.com/rl-proximal-policy-optimization-ppo-
explained-77f014ec3f12 (2018).
[71]
Adam Hulman, Ole Lindgaard Dollerup, Jesper Friis Mortensen, Matthew Fenech, Kasper Norman, Henrik Stoevring, and Troels Krarup Hansen.
2023. ChatGPT-versus human-generated answers to frequently asked questions about diabetes: a Turing test-inspired survey among employees of
a Danish diabetes center. medRxiv (2023), 2023–02.
[72]
Shulei Ji, Jing Luo, and Xinyu Yang. 2020. A comprehensive survey on deep music generation: Multi-level representations, algorithms, evaluations,
and future directions. arXiv preprint arXiv:2011.06801 (2020).
[73]
David Jungwirth and Daniela Haluza. 2023. Forecasting Geopolitical Conicts Using GPT-3 AI: Reali-Ty-Check One Year into the 2022 Ukraine
War. (2023).
[74] Mihir Kale and Abhinav Rastogi. 2020. Text-to-text pre-training for data-to-text tasks. arXiv preprint arXiv:2005.10433 (2020).
[75]
Ayoosh Kathuria. 2021. Getting Started With OpenAI Gym: The Basic Building Blocks. https://blog.paperspace.com/getting-started-with-openai-gym/
(2021).
[76]
Daniel Martin Katz, Michael James Bommarito, Shang Gao, and Pablo Arredondo. 2023. GPT-4 Passes the Bar Exam. Available at SSRN 4389233
(2023).
[77]
Grace Kay. 2023. The history of ChatGPT creator OpenAI, which Elon Musk helped found before parting ways and criticizing.
https://www.businessinsider.com/history-of-openai-company-chatgpt-elon-musk-founded-2022-12 (2023).
[78] Samantha Murphy Kelly. 2023. ChatGPT passes exams from law and business schools. CNN Business (2023).
[79]
Mohammad Khalil and Erkan Er. 2023. Will ChatGPT get you caught? Rethinking of Plagiarism Detection. arXiv preprint arXiv:2302.04335 (2023).
[80]
Rehan Ahmed Khan, Masood Jawaid, Aymen Rehan Khan, and Madiha Sajjad. 2023. ChatGPT-Reshaping medical education and clinical management.
Pakistan Journal of Medical Sciences 39, 2 (2023).
[81]
Sung Kim. 2022. Replace Grammarly Premium with OpenAI ChatGPT. https://medium.com/geekculture/replace-grammarly-premium-with-openai-
chatgpt-320049179c79 (2022).
[82]
Yoon Kim, Yacine Jernite, David Sontag, and Alexander Rush. 2016. Character-aware neural language models. In Proceedings of the AAAI conference
on articial intelligence, Vol. 30.
[83] Diederik P Kingma and Max Welling. 2013. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013).
[84] Dennis H Klatt. 1987. Review of text-to-speech conversion for English. The Journal of the Acoustical Society of America 82, 3 (1987), 737–793.
[85]
Kamran Kowsari, Kiana Jafari Meimandi, Mojtaba Heidarysafa, Sanjana Mendu, Laura Barnes, and Donald Brown. 2019. Text classication
algorithms: A survey. Information 10, 4 (2019), 150.
[86]
Tiany H Kung, Morgan Cheatham, Arielle Medenilla, Czarina Sillos, Lorie De Leon, Camille Elepaño, Maria Madriaga, Rimel Aggabao, Giezel
Diaz-Candido, James Maningo, et al
.
2023. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language
models. PLOS Digital Health 2, 2 (2023), e0000198.
[87]
Boni Kutela, Kelvin Msechu, Subasish Das, and Emmanuel Kidando. 2023. Chatgpt’s Scientic Writings: A Case Study on Trac Safety. Available
at SSRN 4329120 (2023).
[88]
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised
learning of language representations. arXiv preprint arXiv:1909.11942 (2019).
[89]
Hugo Larochelle and Iain Murray. 2011. The neural autoregressive distribution estimator. In Proceedings of the fourteenth international conference
on articial intelligence and statistics. JMLR Workshop and Conference Proceedings, 29–37.
[90] Stefan Larsson and Fredrik Heintz. 2020. Transparency in articial intelligence. Internet Policy Review 9, 2 (2020).
[91]
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart:
Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461
(2019).
[92]
Jinyu Li et al
.
2022. Recent advances in end-to-end automatic speech recognition. APSIPA Transactions on Signal and Information Processing 11, 1
(2022).
[93]
Xiang Lisa Li, John Thickstun, Ishaan Gulrajani, Percy Liang, and Tatsunori B Hashimoto. 2022. Diusion-LM Improves Controllable Text
Generation. arXiv preprint arXiv:2205.14217 (2022).
[94]
Michael Liebrenz, Roman Schleifer, Anna Buadze, Dinesh Bhugra, and Alexander Smith. 2023. Generating scholarly content with ChatGPT: ethical
challenges for medical publishing. The Lancet Digital Health 5, 3 (2023), e105–e106.
[95] Zhicheng Lin. 2023. Why and how to embrace AI such as ChatGPT in your academic life. (2023).
[96]
Janna Lipenkova. [n. d.]. Overcoming the Limitations of Large Language Models How to enhance LLMs with human-like cognitive skills. ([n. d.]).
[97]
Alexander H Liu, Wei-Ning Hsu, Michael Auli, and Alexei Baevski. 2023. Towards end-to-end unsupervised speech recognition. In 2022 IEEE
Spoken Language Technology Workshop (SLT). IEEE, 221–228.
[98]
Siru Liu, Aileen P Wright, Barron L Patterson, Jonathan P Wanderer, Robert W Turer, Scott D Nelson, Allison B McCoy, Dean F Sittig, and Adam
Wright. 2023. Assessing the Value of ChatGPT for Clinical Decision Support Optimization. medRxiv (2023), 2023–02.
[99]
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019).
[100]
Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. 2021. Swin transformer: Hierarchical vision
transformer using shifted windows. ICCV.
[101]
Reed Albergotti Liz Homan. 2023. Microsoft eyes $10 billion bet on ChatGPT. https://www.semafor.com/article/01/09/2023/microsoft-eyes-10-billion-
bet-on-chatgpt (2023).
[102]
Calum Macdonald, Davies Adeloye, Aziz Sheikh, and Igor Rudan. 2023. Can ChatGPT draft a research article? An example of population-level
vaccine eectiveness analysis. Journal of Global Health 13 (2023).
[103]
Rupert Macey-Dare. 2023. ChatGPT & Generative AI Systems as Quasi-Expert Legal Advice Lawyers-Case Study Considering Potential Appeal
Against Conviction of Tom Hayes. Available at SSRN 4342686 (2023).
[104]
Nitin Madnani and Bonnie J Dorr. 2010. Generating phrasal and sentential paraphrases: A survey of data-driven methods. Computational Linguistics
36, 3 (2010), 341–387.
[105]
Gengchen Mai, Chris Cundy, Kristy Choi, Yingjie Hu, Ni Lao, and Stefano Ermon. 2022. Towards a foundation model for geospatial articial
intelligence (vision paper). In Proceedings of the 30th International Conference on Advances in Geographic Information Systems. 1–4.
[106] Ahmed Mandour. 2023. GPT-3.5 model architecture. https://iq.opengenus.org/gpt-3-5-model/ (2023).
[107]
Elman Mansimov, Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. 2016. Generating images from captions with attention. ICLR (2016).
[108]
Benjamin Marchandot, Kensuke Matsushita, Adrien Carmona, Antonin Trimaille, and Olivier Morel. 2023. ChatGPT: The Next Frontier in Academic
Writing for Cardiologists or a Pandora’s Box of Ethical Dilemmas. European Heart Journal Open (2023), oead007.
[109]
Mochammad Ircham Maulana. 2023. Leveraging Zoom video-conferencing features in interview data generation during the Covid-19 pandemic. In
Research and Teaching in a Pandemic World: The Challenges of Establishing Academic Identities During Times of Crisis. Springer, 391–407.
[110]
Lev Maximov. 2023. Do You Know English Grammar Better Than ChatGPT? https://medium.com/writing-cooperative/do-you-know-english-grammar-
better-than-chatgpt-8fc550f23681 (2023).
[111] Robert W McGee. 2023. Is Chat Gpt Biased Against Conservatives? An Empirical Study. An Empirical Study (February 15, 2023) (2023).
[112] Forrest McKee and David Noever. 2022. Chatbots in a Botnet World. arXiv preprint arXiv:2212.11126 (2022).
[113]
Walaa Medhat, Ahmed Hassan, and Hoda Korashy. 2014. Sentiment analysis algorithms and applications: A survey. Ain Shams engineering journal
5, 4 (2014), 1093–1113.
[114] Ateev Mehrotra. [n. d.]. Symptom Checkers & ChatGPT. https://scholar.harvard.edu/mehrotra/symptom-checkers ([n. d.]).
[115]
Paritosh Mittal, Yen-Chi Cheng, Maneesh Singh, and Shubham Tulsiani. 2022. Autosdf: Shape priors for 3d completion, reconstruction and
generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 306–315.
[116] Yasumasa Miyamoto and Kyunghyun Cho. 2016. Gated word-character recurrent language model. arXiv preprint arXiv:1606.01700 (2016).
[117]
Eyal Molad, Eliahu Horwitz, Dani Valevski, Alex Rav Acha, Yossi Matias, Yael Pritch, Yaniv Leviathan, and Yedid Hoshen. 2023. Dreamix: Video
diusion models are general video editors. arXiv preprint arXiv:2302.01329 (2023).
[118]
Sharan Narang, Colin Rael, Katherine Lee, Adam Roberts, Noah Fiedel, and Karishma Malkan. 2020. Wt5?! training text-to-text models to explain
their predictions. arXiv preprint arXiv:2004.14546 (2020).
[119]
Bianke Neethling. 2023. ChatGPT breaks record with 100 million users and investors come ocking. https://dailyinvestor.com/world/8520/chatgpt-
breaks-record-with-100-million-users-and-investors-come-ocking/ (2023).
[120]
Jennimai Nguyen. 2022. No, the Google AI isn’t sentient, but it likely is racist and sexist. https://mashable.com/article/google-ai-racist-sexist-bias
(2022).
[121]
David Noever and Forrest McKee. 2023. Numeracy from Literacy: Data Science as an Emergent Skill from Large Language Models. arXiv preprint
arXiv:2301.13382 (2023).
[122]
Siobhan O’Connor et al
.
2022. Open articial intelligence platforms in nursing education: Tools for academic progress or abuse? Nurse Education
in Practice 66 (2022), 103537–103537.
[123] OpenAI. [n. d.]. Research index. https://openai.com/research ([n. d.]).
[124] OpenAI. 2023. GPT-4 Technical Report. arXiv preprint arXiv:2303.08774 (2023).
[125]
Achraf Oussidi and Azeddine Elhassouny. 2018. Deep generative models: Survey. In 2018 International Conference on Intelligent Systems and
Computer Vision (ISCV). IEEE, 1–8.
[126]
Long Ouyang, Je Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex
Ray, et al. 2022. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155 (2022).
[127]
Daniel S Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D Cubuk, and Quoc V Le. 2019. Specaugment: A simple data
augmentation method for automatic speech recognition. arXiv preprint arXiv:1904.08779 (2019).
[128]
Santiago Pascual, Gautam Bhattacharya, Chunghsin Yeh, Jordi Pons, and Joan Serrà. 2022. Full-band General Audio Synthesis with Score-based
Diusion. arXiv preprint arXiv:2210.14661 (2022).
[129]
John V Pavlik. 2023. Collaborating With ChatGPT: Considering the Implications of Generative Articial Intelligence for Journalism and Media
Education. Journalism & Mass Communication Educator (2023), 10776958221149577.
[130] Tammy Pettinato Oltz. 2023. ChatGPT, Professor of Law. Professor of Law (February 4, 2023) (2023).
[131]
Oleksandra Poquet Pfeer, Michael Sailer, Albrecht Schmidt, Tina Seidel, Matthias Stadler, Jochen Weller, Jochen Kuhn, and Gjergji Kasneci. [n.d.].
ChatGPT for Good? On Opportunities and Challenges of Large Language Models for Education. ([n. d.]).
[132]
Kelsey Piper. 2022. Why is Meta’s new AI chatbot so bad? https://www.vox.com/future-perfect/23307252/meta-facebook-bad-ai-chatbot-blenderbot
(2022).
[133]
Michael Polonsky and Je Rotman. 2023. Should Articial Intelligent (AI) Agents be Your Co-author? Arguments in favour, informed by ChatGPT.
Arguments in favour, informed by ChatGPT (February 6, 2023) (2023).
[134] Samuel A Prieto, Eyob T Mengiste, and Borja García de Soto. 2023. Investigating the use of ChatGPT for the scheduling of construction projects.
arXiv preprint arXiv:2302.02805 (2023).
[135] Junaid Qadir. 2022. Engineering Education in the Era of ChatGPT: Promise and Pitfalls of Generative AI for Education. (2022).
[136]
Weizhen Qi, Yu Yan, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, and Ming Zhou. 2020. Prophetnet: Predicting future
n-gram for sequence-to-sequence pre-training. arXiv preprint arXiv:2001.04063 (2020).
[137]
Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2022. Robust speech recognition via large-scale
weak supervision. arXiv preprint arXiv:2212.04356 (2022).
[138]
Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al
.
2018. Improving language understanding by generative pre-training. (2018).
[139]
Alec Radford, Jerey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al
.
2019. Language models are unsupervised multitask
learners. OpenAI blog (2019).
[140]
Colin Rael, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al
.
2020. Exploring
the limits of transfer learning with a unied text-to-text transformer. J. Mach. Learn. Res. 21, 140 (2020), 1–67.
[141] Ric Raftis. [n. d.]. How to use ChatGPT for Divergent Thinking in Obsidian and PKMs. ([n. d.]).
[142]
Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical text-conditional image generation with clip latents.
arXiv preprint arXiv:2204.06125 (2022).
[143]
Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. Zero-shot text-to-image
generation. In ICML.
[144]
Arya S Rao, John Kim, Meghana Kamineni, Michael Pang, Winston Lie, and Marc Succi. 2023. Evaluating ChatGPT as an Adjunct for Radiologic
Decision-Making. medRxiv (2023), 2023–02.
[145]
Arya S Rao, Michael Pang, John Kim, Meghana Kamineni, Winston Lie, Anoop K Prasad, Adam Landman, Keith Dryer, and Marc D Succi. 2023.
Assessing the Utility of ChatGPT Throughout the Entire Clinical Workow. medRxiv (2023), 2023–02.
[146]
Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. 2016. Generative adversarial text to image
synthesis. In International conference on machine learning. PMLR, 1060–1069.
[147]
Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2019. Fastspeech: Fast, robust and controllable text to speech.
Advances in Neural Information Processing Systems 32 (2019).
[148]
Jesus Rodriguez. 2022. How to Create Diagrams With ChatGPT. https://jrodthoughts.medium.com/instructgpt-is-one-of-the-models-behidn-the-magic-
of-chatgpt-59813dd8aabc (2022).
[149]
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High-resolution image synthesis with latent diusion
models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 10684–10695.
[150] Pericles ‘asher’ Rospigliosi. 2023. Articial intelligence in teaching and learning: what questions should we ask of ChatGPT? , 3 pages.
[151]
Jürgen Rudolph, Samson Tan, and Shannon Tan. 2023. ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal
of Applied Learning and Teaching 6, 1 (2023).
[152] run.ai. 2023. NVIDIA DGX: Under the Hood of DGX-1, DGX-2 and A100. https://www.run.ai/guides/nvidia-a100/nvidia-dgx (2023).
[153] Soroush Saghaan. 2023. The Analytics Science Behind ChatGPT: Human, Algorithm, or a Human-Algorithm Centaur? (2023).
[154]
Tirthankar Ghosal Saikiran Chandha, Sucheth R. 2023. Setting the Scene: How Articial Intelligence is reshaping how we consume and deliver
research. https://upstream.force11.org/setting-the-scene-ai/ (2023).
[155]
Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P Kingma. 2017. Pixelcnn++: Improving the pixelcnn with discretized logistic mixture
likelihood and other modications. arXiv preprint arXiv:1701.05517 (2017).
[156]
Malik Sallam. 2023. ChatGPT Utility in Health Care Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid
Concerns. In Healthcare, Vol. 11. MDPI, 887.
[157]
Steen Schneider, Alexei Baevski, Ronan Collobert, and Michael Auli. 2019. wav2vec: Unsupervised pre-training for speech recognition. arXiv
preprint arXiv:1904.05862 (2019).
[158] Ali Shiri. 2023. ChatGPT and Academic Integrity. Information Matters 3, 2 (2023).
[159] Olivia Solon. 2023. The Tech Behind Those Amazing, Flawed New Chatbots. Bloomberg News (2023).
[160]
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019. Mass: Masked sequence to sequence pre-training for language generation. arXiv
preprint arXiv:1905.02450 (2019).
[161] Yang Song and Stefano Ermon. 2019. Generative modeling by estimating gradients of the data distribution, Vol. 32.
[162]
Yang Song, Sahaj Garg, Jiaxin Shi, and Stefano Ermon. 2020. Sliced score matching: A scalable approach to density and score estimation. In
Uncertainty in Articial Intelligence. PMLR, 574–584.
[163]
Mashrin Srivastava. 2023. A day in the life of ChatGPT as an academic reviewer: Investigating the potential of large language model for scientic
literature review. (2023).
[164]
Daniel Street and Joseph Wilck. 2023. ’Let’s Have a Chat’: Principles for the Eective Application of ChatGPT and Large Language Models in the
Practice of Forensic Accounting. Available at SSRN 4351817 (2023).
[165] Fei Sun. 2022. ChatGPT, the Start of a New Era. (2022).
[166]
Nigar M Shaq Surameery and Mohammed Y Shakor. 2023. Use Chat GPT to Solve Programming Bugs. International Journal of Information
Technology & Computer Engineering (IJITC) ISSN: 2455-5290 3, 01 (2023), 17–22.
[167]
Victor Tangermann. 2023. 89 PERCENT OF COLLEGE STUDENTS ADMIT TO USING CHATGPT FOR HOMEWORK, STUDY CLAIMS.
https://futurism.com/the-byte/students-admit-chatgpt-homework (2023).
[168]
Ming Tao, Bing-Kun Bao, Hao Tang, and Changsheng Xu. 2023. GALIP: Generative Adversarial CLIPs for Text-to-Image Synthesis. arXiv preprint
arXiv:2301.12959 (2023).
[169] Paul Taylor. 2009. Text-to-speech synthesis. Cambridge university press.
[170]
Mohamad-Hani Temsah, Amr Jamal, and Jaar A Al-Tawq. 2023. Reection with ChatGPT about the excess death after the COVID-19 pandemic.
New Microbes and New Infections (2023).
[171] Vincent Terrasi. 2023. GPT-4: How Is It Dierent From GPT-3.5? https://www.searchenginejournal.com/gpt-4-vs-gpt-3-5/482463/#close (2023).
[172] H Holden Thorp. 2023. ChatGPT is fun, but not an author. , 313–313 pages.
[173]
Oguzhan TOPSAKAL and Elif TOPSAKAL. [n. d.]. Framework for A Foreign Language Teaching Software for Children Utilizing AR, Voicebots and
ChatGPT (Large Language Models). The Journal of Cognitive Systems 7, 2 ([n. d.]), 33–38.
[174] ChatGPT Generative Pre-trained Transformer and Alex Zhavoronkov. 2022. Rapamycin in the context of Pascal’s Wager: generative pre-trained
transformer perspective. Oncoscience 9 (2022), 82.
[175]
Gpt Generative Pretrained Transformer, Almira Osmanovic Thunström, and Steinn Steingrimsson. 2022. Can GPT-3 write an academic paper on
itself, with minimal human input? (2022).
[176]
Alan Truly. 2023. Bing Chat: how to use Microsoft’s own version of ChatGPT. https://www.digitaltrends.com/computing/how-to-use-microsoft-
chatgpt-bing-edge/ (2023).
[177] Kohei Ueda and Yuki Yamada. 2023. ChatGPT is not an author, but then, who is eligible for authorship? (2023).
[178]
Anwaar Ulhaq, Naveed Akhtar, and Ganna Pogrebna. 2022. Ecient Diusion Models for Vision: A Survey. arXiv preprint arXiv:2210.09292 (2022).
[179] Kadir Uludag. 2023. The use of AI-supported Chatbot in Psychology. Available at SSRN 4331367 (2023).
[180]
Benigno Uria, Marc-Alexandre Côté, Karol Gregor, Iain Murray, and Hugo Larochelle. 2016. Neural autoregressive distribution estimation. The
Journal of Machine Learning Research 17, 1 (2016), 7184–7220.
[181]
Benigno Uria, Iain Murray, and Hugo Larochelle. 2013. RNADE: The real-valued neural autoregressive density-estimator. Advances in Neural
Information Processing Systems 26 (2013).
[182]
Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray
Kavukcuoglu. 2016. WaveNet: A Generative Model for Raw Audio. In The 9th ISCA Speech Synthesis Workshop.
[183]
Aaron Van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al
.
2016. Conditional image generation with pixelcnn
decoders. Advances in neural information processing systems 29 (2016).
[184]
Wouter van Heeswijk. 2022. Trust Region Policy Optimization (TRPO) Explained. https://towardsdatascience.com/trust-region-policy-optimization-
trpo-explained-4b56bd206fc2 (2022).
[185]
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is
all you need. In NeurIPS.
[186]
Randy Joy Magno Ventayen. 2023. OpenAI ChatGPT Generated Results: Similarity Index of Articial Intelligence-Based Contents. Available at
SSRN 4332664 (2023).
[187] Manish Verma. 2023. Novel Study on AI-Based Chatbot (ChatGPT) Impacts on the Traditional Library Management. (2023).
[188] Lyan Verwimp, Joris Pelemans, Patrick Wambacq, et al. 2017. Character-word LSTM language models. arXiv preprint arXiv:1704.02813 (2017).
[189] Pascal Vincent. 2011. A connection between score matching and denoising autoencoders. Neural computation 23, 7 (2011), 1661–1674.
[190]
Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2016. Show and tell: Lessons learned from the 2015 mscoco image captioning
challenge. IEEE transactions on pattern analysis and machine intelligence 39, 4 (2016), 652–663.
[191]
Karan Virdi. 2022. Google issues ‘code-red’ as Open AI’s ChatGPT garners popularity. https://itmunch.com/google-issues-code-red-alert-as-open-ai-
becomes-popular/ (2022).
[192] Vaishak V.Kumar. 2019. Soft Actor-Critic Demystied. https://towardsdatascience.com/soft-actor-critic-demystied-b8427df61665 (2019).
[193] Dong Wang, Xiaodong Wang, and Shaohe Lv. 2019. An overview of end-to-end automatic speech recognition. Symmetry 11, 8 (2019), 1018.
[194]
Shuai Wang, Harrisen Scells, Bevan Koopman, and Guido Zuccon. 2023. Can ChatGPT Write a Good Boolean Query for Systematic Review
Literature Search? arXiv preprint arXiv:2302.03495 (2023).
[195]
Xinyi Wang, Zhenye Gong, Guoxin Wang, Jingdan Jia, Ying Xu, Jialu Zhao, Qingye Fan, Shaun Wu, Weiguo Hu, and Xiaoyang Li. 2023. ChatGPT
Performs on the Chinese National Medical Licensing Examination. (2023).
[196]
Jules White, Quchen Fu, Sam Hays, Michael Sandborn, Carlos Olea, Henry Gilbert, Ashraf Elnashar, Jesse Spencer-Smith, and Douglas C Schmidt.
2023. A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT. arXiv preprint arXiv:2302.11382 (2023).
[197]
Jules White, Sam Hays, Quchen Fu, Jesse Spencer-Smith, and Douglas C Schmidt. 2023. ChatGPT Prompt Patterns for Improving Code Quality,
Refactoring, Requirements Elicitation, and Software Design. arXiv preprint arXiv:2303.07839 (2023).
[198] Clare Williams. 2023. Hype, or the future of learning and