Digital Object Identifier 10.1109/ACCESS.2023.3308515
Deepfake Detection on Social Media:
Leveraging Deep Learning and FastText
Embeddings for Identifying
Machine-Generated Tweets
Saima Sadiq1, Turki Aljrees 2, Saleem Ullah1
1Department of Computer Science, Khwaja Fareed University of Engineering and Information Technology, Rahim Yar Khan, Pakistan; s.kamrran@gmail.com
(S.S.); salimbzu@gmail.com (S.U.)
2College of Computer Science and Engineering, University of Hafr Al-Batin, Hafar Al-Batin 39524, Saudi Arabia; tajrees@uhb.edu.sa
This work is supported by the Department of Computer Science, Khwaja Fareed University of Engineering and Information Technology,
Rahim Yar Khan, Pakistan.
ABSTRACT Recent advancements in natural language production provide an additional tool to manipulate
public opinion on social media. Furthermore, advancements in language modelling have significantly
strengthened the generative capabilities of deep neural models, empowering them with enhanced skills
for content generation. Consequently, text-generative models have become increasingly powerful, allowing adversaries to boost social bots that generate realistic deepfake posts and influence public discourse. To address this problem, developing reliable and accurate methods for detecting deepfake social media messages is important. Under
this consideration, current research addresses the identification of machine-generated text on social networks
like Twitter. In this study, a straightforward deep learning model in combination with word embeddings is
employed for the classification of tweets as human-generated or bot-generated using a publicly available
Tweepfake dataset. A conventional Convolutional Neural Network (CNN) architecture is devised, leveraging
FastText word embeddings, to undertake the task of identifying deepfake tweets. To showcase the superior
performance of the proposed method, this study employed several machine learning models as baseline
methods for comparison. These baseline methods utilized various features, including Term Frequency,
Term Frequency-Inverse Document Frequency, FastText, and FastText subword embeddings. Moreover, the
performance of the proposed method is also compared against other deep learning models such as Long Short-Term Memory (LSTM) and CNN-LSTM, demonstrating its effectiveness and advantages in accurately addressing the task at hand. Experimental results indicate that the streamlined design of the
CNN architecture, coupled with the utilization of FastText embeddings, allowed for efficient and effective
classification of the tweet data with a superior 93% accuracy.
INDEX TERMS Text Classification, Machine Learning, Deep Learning, Deepfake, Machine-generated text
I. INTRODUCTION
SOCIAL media platforms were created for people to
connect and share their opinions and ideas through texts,
images, audio, and videos [1]. A bot is computer software
that manages a fake account on social media by liking,
sharing, and uploading posts that may be real or forged
using techniques like gap-filling text, search-and-replace,
and video editing or deepfake [2]. Deep learning is a part
of machine learning that learns feature representation from
input data. Deepfake is a combination of "deep learning" and
"fake" and refers to artificial intelligence-generated multime-
dia (text, image, audio and video) that may be misleading [3].
Deepfake multimedia’s creation and sharing on social media
have already created problems in a number of fields such as
politics [4] by deceiving viewers into thinking that they were
created by humans.
Using social media, it is easier and faster to propagate false
information with the aim of manipulating people’s percep-
tions and opinions especially to build mistrust in a democratic
country [5]. Accounts with varying degrees of humanness
ranging from cyborg accounts to sockpuppets, are used to achieve this
goal [6]. On the other hand, fully automated social media
accounts also known as social bots mimic human behaviour
[7]. Particularly, the widespread use of bots and recent devel-
opments in natural language-based generative models, such
as the GPT [8] and Grover [9], give the adversary a means
to propagate false information more convincingly. The Net Neutrality case in 2017 serves as an illustrative example: millions of duplicated comments played a significant role in the Federal Communications Commission's decision to repeal net neutrality rules [10]. If simple text manipulation techniques can already build false beliefs, the potential impact of more powerful transformer-based models must be addressed. Recently, there have been instances of GPT-2 [11] and GPT-3 [12] being used to generate tweets to test their generative skills and to automatically produce blog articles. A bot based on GPT-3 interacted with
people on Reddit using the account "/u/thegentlemetre" to
post comments to inquiries on /r/AskReddit [13]. Though most of the remarks made by the bot were harmless and no harm has been done thus far, the incident shows why OpenAI should be concerned about the misuse of GPT-3. However, in order to protect genuine information
and democracy on social media, it is important to create a
robust detection system for machine-generated texts, also
known as deepfake text.
In 2019, a generative model namely GPT-2 displayed
enhanced text-generating capabilities [12] that humans often failed to recognize as machine-generated [14], [15]. Deepfake text on
social media is mainly written by the GPT model; this may
be due to the fact that the GPT model is better than Grover
[16] and CTRL [17] at writing short text [18]. Consequently, it is more challenging to detect machine-generated text produced by GPT-2 than text produced by RNNs or other earlier generation techniques [19]. To address this significant challenge, the
present study endeavours to examine deepfakes generated by
RNN, as well as GPT-2 and various other bots. Specifically,
the study focuses on employing cutting-edge deepfake text
detection techniques tailored to the dynamic social media
environment. State-of-the-art research works regarding deep-
fake text detection include [15], [19], [20]. Authors in [21]
improved the detection of deepfake text generated by GPT-2.
Deepfake detecting techniques are constantly being im-
proved, including deepfake audio identification techniques
[22], [23], deepfake video screening methods [24], and deep-
fake text detection techniques. Neural network models tend
to learn characteristics of machine-generated text instead of
discriminating human-written text from machine text [25].
Some techniques like replacing letters with homoglyphs and
adding commonly misspelled words have made the machine-
generated text detection task more challenging [25]. In addi-
tion, previous studies mostly performed deepfake text detection on long texts such as stories and news articles. Research claims that it is easier to identify deepfakes in longer text [26]. The use of cutting-edge detection methods on machine-
generated text posted on social media is a less explored
research area [26]. Text posted on social media is often short,
especially on Twitter [27]. There is also a lack of properly
labelled datasets containing human and machine-generated
short text in the research community [19]. Researchers in
[28], [29] used a tweet dataset containing tweets generated
by a wide range of bots like cyborg, social bot, spam bot,
and sockpuppet [30]. However, their dataset was human-labelled, and research has shown that humans are unable to reliably identify machine-generated text. The authors in [19] provided
a labelled dataset namely Tweepfake containing human text
and machine-generated text on Twitter using techniques such
as RNN, LSTM, Markov and GPT-2. With the aim of investi-
gating challenges faced in the detection of deepfake text, this
study makes use of the same dataset.
The dataset containing both bot-generated and human-
written tweets is used to evaluate the performance of the pro-
posed method. This study employs various machine learning
and deep learning models, including Decision Tree (DT), Lo-
gistic Regression (LR), AdaBoost Classifier (AC), Stochastic
Gradient Descent Classifier (SGC), Random Forest (RF),
Gradient Boosting Machine (GBM), Extra tree Classifier
(ETC), Naive Bayes (NB), Convolutional Neural Network
(CNN), Long Short Term Memory (LSTM), and CNN-
LSTM, for tweet classification. Different feature extraction
techniques, such as Term Frequency (TF), Term frequency-
inverse document frequency (TF-IDF), FastText, and Fast-
Text subwords are also explored to compare their effective-
ness in identifying machine-generated text. This research
provides the following contributions:
• A deep learning framework combined with word embeddings that effectively identifies machine-generated text on social media platforms.
• A comprehensive evaluation of various machine learning and deep learning models for tweet classification.
• An investigation of different feature extraction techniques for detecting deepfake text, with a focus on the short text prevalent on social media.
• A demonstration of the superiority of the proposed method, incorporating CNN with FastText embeddings, over alternative models in accurately distinguishing machine-generated text in the dynamic social media environment.
The rest of the article is structured as follows: Section
II discusses past work on deepfake text identification, and
Section III presents deepfake generation methods. Section
IV outlines the material and methods used in experiments to
enhance deepfake tweet detection, and Section V presents the results and discussion. Section VI concludes the paper and outlines future work.
II. RELATED WORK
Deepfake technologies initially emerged in the realm of com-
puter vision [31]–[33], advancing towards effective attempts
at audio manipulation [34], [35] and text synthesis [36]. In
computer vision, deepfakes often involve face manipulation,
including whole-facial synthesis, identity swapping, attribute
manipulation, and emotion switching [37]—as well as body
reenactment [38]. Audio deepfakes, which have recently been
used, generate spoken audio from a text corpus in the voices of several speakers after five seconds of listening [34].
The upgrading of the language models was made possible
in 2017 because of the development of the self-attention
mechanism and the transformer. Language modelling esti-
mates the likelihood that a given sequence of words will
appear in a sentence using various statistical and probabilistic
methodologies. The succeeding transformer-based language
models (GPT [39], BERT [40], GPT2 [36], etc.) improved
not only language-generating tasks but also natural language
interpretation tasks. In 2019, Radford et al. [36] created GPT-
2, a pre-trained language model that can create paragraphs
of text that are coherent and human-like on their own with
just one short sentence as input. The same year, authors
[9] developed GROVER, a novel method for quickly and
effectively learning and creating multi-field documents like
journal articles. The conditional language model CTRL,
which employs control codes to produce text with a particular
style, content, and task-specific behaviour, was published
shortly after [17]. Later, researchers [41] introduced OPTIMUS, which incorporated a variational autoencoder into the text generation process.
The GPT-2 research team conducted an internal detection
study [42] using text samples generated by the GPT-2. First,
they assessed a conventional machine-learning method that
trains a logistic regression discriminator on TF-IDF unigram
and bigram characteristics. Following that, they tested a sim-
ple zero-shot baseline using an overall probability threshold:
a text excerpt is classified as machine-generated if, according
to GPT-2, its likelihood is closer to the mean likelihood
over all machine-generated texts than to the mean of human-
written texts.
The Giant Language Model Test Room (GLTR) is a visual tool that aids people in spotting deepfake texts [43]. The generated text is sampled word by word from a next-token distribution; this distribution typically differs from the one
that people unconsciously use when they write or speak
(many sampling approaches may be employed, but the sim-
plest option is to take the most probable token). In order
to help individuals distinguish between human-written text
samples and machine-generated ones, GLTR aims to display
these statistical linguistic distinctions.
By employing BERT, GPT2, and GROVER as the pre-
trained language model, the developers of GROVER [9]
adhered to the fine-tuning-based detection methodology.
GROVER performed best, supporting the argument that a detector with a similar architectural design may be the best line of defence against transformer-based text generators. On
GPT-2 generated texts, however, OpenAI [42] disproved it
by demonstrating that fine-tuning a RoBERTa-based detec-
tor consistently produced greater accuracy than fine-tuning
a GPT-2-based detector with comparable capacity. Unlike
auto-regressive language models (such as GPT-2 and XLNET
[44]), which are defined in terms of a series of conditional
distributions, authors [45] created an energy-based deepfake
text detector. An energy-based model is defined in terms
of a single scalar energy function, which represents the
joint compatibility between all input variables. The deepfake
discriminator is an energy function that evaluates the joint
compatibility of a series of input tokens given some context
(such as a text sample, a few keywords, a collection of
phrases, or a title) and a set of network parameters. The
experimental context, where the text corpora and generator
designs alter between training and testing, was also attempted
to generalise by these authors.
Authors in [15] carried out the sole study on identifying deepfake social media messages, on GPT-2-authored Amazon reviews. The Grover-based detector, GLTR, the
RoBERTa-based detector from OpenAI, and a straightfor-
ward ensemble that combined these detectors using logistic
regression at the score level were among the human-machine
discriminators that were assessed. The aforementioned deepfake text detection techniques have two drawbacks. First, aside from the study [15], they focused on machine-generated news stories, which are lengthier than social media messages.
Additionally, only one known adversarial generating model
is often used to produce deepfake text samples (GPT-2 or
GROVER). We are unsure about the number and type of
generative architectures employed in a real-world scenario.
Existing research in deepfake text detection includes meth-
ods like graph-based approach [46], feature-based approach
[47], and deep learning models like BiLSTM [48] and
RoBERTa [19]. These studies have focused on creating and
detecting news stories, which are typically longer than social
media communications. This raises concerns about the gen-
eralizability of such methods to the specific challenges posed
by short text on social media. Some studies [48], [49] used
the PAN dataset which focuses on determining profiles of
fake accounts. Others [47], [50], [51] used the Cresci dataset,
which used profile features like tweet content, activity pat-
terns, and network characteristics to find bot accounts. To aid
the research community in identifying shorter deepfake texts created by different generating approaches, the TweepFake dataset used here offers a collection of tweets generated by several generative models.
III. DEEPFAKE TEXT GENERATION METHODS
There are several ways to create deepfake text. Here is a brief
explanation of some common generative techniques used to
create computer-generated text.
Markov Chains is a stochastic model that depicts a succes-
sion of states by transitioning from one state to another with
a probability that solely depends on the current state. State
tokens are used in the text creation, and the next token/state
is chosen at random from a list of tokens after the current one.
The frequency with which a token follows the current token,
t, determines the likelihood that token t will be picked.
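To make the mechanism concrete, below is a minimal Python sketch of frequency-weighted Markov-chain text generation; the toy corpus and function names are illustrative, not taken from the paper.

```python
import random
from collections import defaultdict

def build_markov_chain(tokens):
    """Record which tokens follow each token; duplicates encode frequency."""
    chain = defaultdict(list)
    for current, nxt in zip(tokens, tokens[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10):
    """Pick each next token at random, weighted by observed frequency."""
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

tokens = "the bot wrote the tweet and the bot posted the tweet".split()
chain = build_markov_chain(tokens)
print(generate(chain, "the"))
```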
The RNN computes the multinomial distribution from
which the next token will be picked, and with the aid of
its loop structure, saves the knowledge about the previously
encountered tokens in its accumulated memory. The chosen
TABLE 1: Comparative analysis of the existing approaches.
Ref. | Year | Methods | Dataset | Findings
[50] | 2018 | LSTM | Cresci and collaborators dataset | Tweet-level bot detection using textual features and metadata; the proposed architecture achieved a 96% AUC score.
[52] | 2019 | Deep Forest algorithm | Dataset collected from the Twitter timeline (API) | Applied SMOTE in combination with the proposed model and outperformed machine learning models.
[53] | 2019 | Fine-tuned BERT | Own dataset created using the GPT-2 model | Investigated machine-generated text using discriminators on a balanced dataset.
[49] | 2020 | BERT-based | English tweets from the PAN competition dataset | Designed a bot detection model that achieved an 83.36% weighted F1-score.
[54] | 2020 | Bot-AHGCN | CTU-13 dataset and their own collected botnet dataset | Applied a multi-attributed heterogeneous graph convolution approach for bot detection.
[55] | 2020 | Gaussian kernel density peak clustering algorithm (GKDPCA) | Dataset consisting of 1971 normal human accounts and 462 social bot accounts | Applied oversampling techniques to improve the classification.
[19] | 2021 | RoBERTa-based detector | TweepFake dataset | Discriminated human-written text from bot text, presented a deepfake tweet dataset, and applied 13 models for deepfake detection.
[48] | 2022 | BiLSTM | A dataset consisting of 30000 tweets from PAN-20 | Investigated the role of social bots in spreading fake news.
[51] | 2022 | Google BERT | Cresci 2017 dataset | Classified the dataset into bot- or human-written with 94% accuracy.
[56] | 2022 | GANBOT framework | Twitter social bot dataset | The proposed model outperformed previous contextual LSTM methods for bot detection.
[46] | 2022 | Graph-based approach | 9 datasets including TwiBot-22 | Addressed graph-based bot detection at large scale.
[47] | 2022 | Feature-based approach | 5 datasets (Cresci2015, Cresci2017, Lee2011, Moghaddam2019, and Varol2017) | Employed friendship preference features extracted from profiles for bot detection.
[57] | 2023 | Logistic Regression, RoBERTa | Human ChatGPT Comparison Corpus (HC3) | Performed human analysis, linguistic evaluation, and bot-generated text detection, providing deep insight for future research.
[58] | 2023 | XGBoost | Human-written essays and ChatGPT-generated essays | Investigated TF-IDF and hand-crafted features to detect ChatGPT-generated text.
[59] | 2023 | Transformer-based ML model (DistilBERT) | ChatGPT query dataset, ChatGPT rephrase dataset | Found that identifying rephrased ChatGPT text is more challenging; also provided deep insight into the writing style of ChatGPT.
[60] | 2023 | Fine-tuning-based detection model (RoBERTa) | Human-written abstracts and AI-generated abstracts | Investigated the gap between machine-generated text and human-written text.
token is returned as input so that the RNN can generate the
next one.
As a sampling strategy, the RNN+Markov method may use
the Markov Chain’s next token selection. In practice, the next
token is drawn at random from the RNN-generated multi-
nomial distribution, with the tokens with the greatest prob-
ability value being the most likely to be picked. However,
no references were discovered to support our RNN+Markov
mechanism theory.
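As an illustration of the sampling step shared by these generators, the sketch below draws a next token from a hypothetical multinomial distribution with NumPy; the vocabulary and probabilities are invented for the example.

```python
import numpy as np

vocab = ["cat", "sat", "mat", "<eos>"]
# Hypothetical next-token distribution produced by a language model.
probs = np.array([0.1, 0.5, 0.3, 0.1])

# Greedy decoding: take the most probable token (the "simplest option").
greedy_token = vocab[int(np.argmax(probs))]

# Multinomial sampling: draw the next token according to its probability.
sampled_token = np.random.choice(vocab, p=probs)
print(greedy_token, sampled_token)
```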
LSTM creates text in the same way as RNN does. How-
ever, because of its more sophisticated structure, it improves on the RNN: it can learn to selectively keep track of just
the necessary information of previously viewed text while
simultaneously reducing the vanishing gradient problem that
concerns RNNs. The memory of an LSTM is "longer" than
that of an RNN.
GPT-2 is a generative pre-trained transformer language
model based on the Attention mechanism: by using At-
tention, a language model pre-trained on millions of sen-
tences/texts learns how each token/word connects to every
other in every conceivable situation. This is the method for
creating more cohesive and non-trivial paragraphs of text.
As a language model, GPT-2’s text generation processes are
the same as RNN and LSTM: production of a multinomial
distribution at each step, followed by selection of the next
token from it using a specific sampling strategy.
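For concreteness, here is a short sketch using the Hugging Face transformers library (an assumption for illustration; the paper does not state which implementation was used to generate the tweets) that samples a GPT-2 continuation with the multinomial procedure described above.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("Social media bots are", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=30,
    do_sample=True,   # sample from the multinomial distribution
    top_k=50,         # restrict sampling to the 50 most probable tokens
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```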
The third-generation Generative Pre-trained Transformer
(GPT-3) is a neural network machine learning model that can
produce any type of text. It was trained using internet data. It
was created by OpenAI and uses a tiny quantity of input text
to produce vast quantities of accurate and complex machine-
generated text.
IV. MATERIAL AND METHODS
This section discusses the dataset, feature engineering tech-
niques, machine learning models and deep learning models
used in the experiments. The experimental strategy is pre-
sented in Figure 1.
A. DATASET
This study utilizes the TweepFake [19] dataset, containing 25572 tweets in total. The dataset comprises tweets from 17 human accounts and 23 bot accounts. Each tweet is labelled by account type (human or bot) and by the text creation method that was used, which might be human (17 accounts, 12786 tweets), GPT-2 (11 accounts, 3861 tweets), RNN (7 accounts, 4181 tweets), or Others (5 accounts, 4876 tweets).

FIGURE 1: Architecture of methodologies adopted for deepfake tweet classification
Figure 2 presents the count-plot of the account-type data distribution, and Figure 3 shows the count-plot of the class-wise data distribution.
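As a hypothetical loading sketch, the distribution can be inspected with pandas; the file name and column names here are assumptions based on the public TweepFake release, not confirmed by the paper.

```python
import pandas as pd

df = pd.read_csv("tweepfake.csv")                  # assumed file name
print(len(df))                                     # expected: 25572 tweets
print(df["account.type"].value_counts())           # human vs. bot
print(df["class_type"].value_counts())             # human, gpt2, rnn, others
```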
FIGURE 2: Countplot showing account-type data distribu-
tion
1) Data pre-processing
Datasets include useless data in unstructured or semi-
structured form. Such unnecessary data lengthens the
model’s training time and may degrade its performance. Pre-
processing is critical for increasing the efficiency of ML
models and conserving computing resources. Preparing the
text improves the model’s ability to anticipate outcomes
accurately. Pre-processing includes the following steps: to-
kenization, case conversion, stopword removal, and removal
of numbers.
FIGURE 3: Countplot showing class-wise data distribution
Due to the case sensitivity of machine learning models,
the model will treat the occurrence of the terms "MACHINE"
and "machine" as two different words. As a result, the dataset
must first be converted into lowercase as part of the prepro-
cessing.
The second preprocessing step removes hashtags and usernames from tweets. The data is also cleaned of punctuation such as % # & ( ) . , ' ". Punctuation has a direct impact on performance since it makes it harder for algorithms to tell these symbols apart from textual terms.
The next step is stopword removal. Stopwords make sentences easier for people to read, but they carry little discriminative value for classification algorithms.
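The steps above can be sketched in Python as follows; this is a minimal illustration using NLTK, not the paper's exact pipeline.

```python
import re
from nltk.corpus import stopwords          # requires nltk.download("stopwords")
from nltk.tokenize import word_tokenize    # requires nltk.download("punkt")

STOPWORDS = set(stopwords.words("english"))

def preprocess(tweet):
    tweet = tweet.lower()                      # case conversion
    tweet = re.sub(r"[@#]\w+", "", tweet)      # drop usernames and hashtags
    tweet = re.sub(r"[^a-z\s]", " ", tweet)    # drop punctuation and numbers
    tokens = word_tokenize(tweet)              # tokenization
    return [t for t in tokens if t not in STOPWORDS]  # stopword removal

print(preprocess("MACHINE-generated text!! Check #deepfake @bot123 2023"))
```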
B. FEATURE EXTRACTION
To train the machine learning models, feature extraction
techniques are necessary. The models are trained in this study
using the following feature extraction methods.
1) TF
The term frequency (TF) measures the frequency with which
a term (word or token) appears in a text or corpus of
documents. It is a simple yet fundamental concept in both machine learning and natural language processing. Text-based machine learning algorithms frequently employ TF as a feature to assess the significance or relevance of a term within a document. It is determined by counting how many times a term appears in a document and then normalising that count by the total number of terms in the document.
The idea behind TF is that a term’s likelihood to be relevant
or suggestive of the content of a document increases with the
frequency with which it appears in the text.
2) TF-IDF
Another frequently used method for feature extraction from
unprocessed text data is TF-IDF. The majority of its applica-
tions are in text categorization and information retrieval [61].
In contrast to TF’s basic term count, TF-IDF additionally
gives each word a weight based on how important it is.
Inverse document frequency and word frequency are used in
this process [62].
W_{i,j} = [1 + log(tf_{i,j})] × log(N / df_i)    (1)

where N refers to the total number of documents, tf_{i,j} is the frequency of term i in document j, and df_i is the number of documents containing the term i.
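Both feature representations can be obtained with scikit-learn, as in the sketch below; note that scikit-learn's TF-IDF applies smoothing and normalization, so its weights differ slightly from the textbook formulation in Eq. (1).

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

corpus = [
    "bots generate deepfake tweets",
    "humans write tweets",
]

tf = CountVectorizer()       # raw term frequencies (TF)
tfidf = TfidfVectorizer()    # term frequency weighted by inverse document frequency

print(tf.fit_transform(corpus).toarray())
print(tfidf.fit_transform(corpus).toarray().round(2))
```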
C. WORD EMBEDDING TECHNIQUES
1) FastText
FastText is a free and open-source toolkit developed by
Facebook AI Research (FAIR) for learning word embeddings
and classification. It provides 2 million word vectors with 300 dimensions, trained on Common Crawl (600 billion tokens). Along with single words, it employs n-grams as features. Because of its simple architecture, text categoriza-
tion is conducted extremely effectively and efficiently [63].
This approach enables the development of unsupervised or
supervised learning algorithms for producing word vectors.
It can train enormous datasets in minutes and is useful for
detecting semantic similarities and text categorization.
Various text categorization problems have made use of
various word embedding approaches. Pre-trained word em-
beddings anticipate the context of the words in an unsuper-
vised way. Words that are close together are seen as having
a similar context. FastText embedding is a good option for
representing vectors since it employs morphological cues to
identify challenging words. Its generalizability is enhanced
by this capability. FastText word embedding uses n-grams
to create vectors, which aids in handling words that are
unknown.
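As a minimal sketch, the pretrained 300-dimensional English vectors can be loaded with the official fasttext package; whether the authors used this exact package is an assumption.

```python
import fasttext
import fasttext.util

# Download and load the pretrained 300-dimensional English vectors
# (cc.en.300.bin, trained on Common Crawl).
fasttext.util.download_model("en", if_exists="ignore")
ft = fasttext.load_model("cc.en.300.bin")

vec = ft.get_word_vector("deepfake")
print(vec.shape)  # (300,)

# Character n-grams let FastText build vectors for unseen words too.
oov = ft.get_word_vector("tweepfakeish")
print(oov.shape)  # (300,)
```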
2) FastText Subword
FastText Subword provides 2 million word vectors that were learned with subword information from Common Crawl (600B tokens). By breaking down each word into its component
words, subword embedding gives us more information [64].
FastText can create word representations that are capable
of encapsulating the meaning of individual morphemes or
smaller parts of words by taking into account subword in-
formation. This is especially helpful for languages with com-
plex morphology, where words might take on several forms
depending on the present or past tense, plural forms, gender,
etc. The way FastText handles subwords is as follows (see the sketch after this list):
• FastText divides words into character n-grams and then creates a vocabulary from the input corpus. It takes into account both the whole word and its component parts.
• The sum of a word's character n-gram embeddings serves as its representation. The word embeddings and the character n-gram embeddings are both learned during the training procedure.
• Because FastText takes subword units into account, it can create representations for out-of-vocabulary words by merging the character n-gram embeddings of those words. FastText can thus offer meaningful representations for words that were not present during training.
• FastText may also be used for text categorization tasks: it trains a classifier on top of the word representations to predict the labels of incoming text.
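The n-gram decomposition can be illustrated as follows; FastText's default range of n = 3 to 6 and its angle-bracket boundary markers are used.

```python
def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams with FastText-style boundary markers."""
    marked = f"<{word}>"
    grams = []
    for n in range(n_min, n_max + 1):
        grams += [marked[i:i + n] for i in range(len(marked) - n + 1)]
    return grams

# A word's vector is the sum of its n-gram vectors (plus the whole word),
# which is how out-of-vocabulary words still get a representation.
print(char_ngrams("where", 3, 4))
```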
D. DEEP LEARNING MODELS
This section presents the deep learning models employed in
the experiments including CNN, LSTM and CNN-LSTM.
The layered architecture of the deep learning models is
presented in Table 4.
1) CNN
A CNN is a deep neural network whose pooling and convo-
lution layers both learn sophisticated features. The majority
of the time, CNN is employed for jobs involving picture
classification and segmentation. The layered CNN model is
more reliable since it has undergone end-to-end training.
Since this is a feed-forward network model, features are
mapped by using filters on the output of the layers. The
CNN model also includes layers that are fully connected,
drop out, and pooling layers. The fully connected layers get
input from the output of the preceding levels to determine
the outcome. Pooling layers—which may be either maximum
or average—play a part in feature selection. In Eq. (2), the ReLU function is used as the activation function:

y = max(0, i)    (2)

where y represents the activation's output and i represents its input. Convolutional layers use weights to extract high-level features for training. Cross-entropy is used as the loss function and is calculated according to Eq. (3):

CrossEntropy = -(i log(p) + (1 - i) log(1 - p))    (3)

where p is the predicted probability and i denotes the class label. A sigmoid function is applied at the output to produce CNN predictions, and the network is trained with backpropagation. Two target classes are output by the CNN model.
2) LSTM
LSTM is one of the state-of-the-art deep learning methods
that are frequently employed to address text classification
issues. Three gates make up an LSTM: the input gate (i_k), the output gate (o_k), and the forget gate (f_k). When data passes through these gates, the gates retain critical information while forgetting irrelevant information. Important information is stored in the memory block c_k. There are several LSTM variations; the one utilised in this study is given in Eqs. (4)-(7):

i_k = σ(W_i s_k + V_i h_{k-1} + b_i)    (4)

f_k = σ(W_f s_k + V_f h_{k-1} + b_f)    (5)

o_k = σ(W_o s_k + V_o h_{k-1} + b_o)    (6)

c_k = tanh(W_c s_k + V_c h_{k-1} + b_c)    (7)
TABLE 2: Machine Learning Models
Reference Model Description
[65] RF The RF classifier is a decision tree-based system that employs numerous weak learners to make very accurate
predictions. RF uses a technique known as ’bootstrap bagging’ to train multiple decision trees using various
bootstrap samples. A bootstrap sample is generated by randomly sampling the training dataset with replacement and has the same size as the original training dataset. RF, like other ensemble classifiers, makes predictions using decision trees.
Choosing the root node at each level of decision tree construction might be difficult.
[66] DT DT is a well-known machine learning technique with several applications in regression and classification
issues. At each level of the DT, picking the root node, also known as 'attribute selection', is a vital element of the process. The 'Gini index' and 'information gain' are both well-known attribute selection approaches. The
Gini index is often used to determine the extent of impurity in a dataset.
[67] LR Because of its use of the logistic equation (also known as the sigmoid function), LR is a popular approach for
dealing with binary classification issues. This function converts every given numerical value into a number
between 0 and 1 using an S-shaped curve, which is why LR is so popular.
[68] SGC It is a popular optimization algorithm that updates the model parameters by minimizing the loss function
using small batches of training data. This algorithm works in an iterative manner for computational efficacy.
Consequently, the frequent updating of parameters leads to faster convergence, making this algorithm well suited to larger datasets.
[69] AC It combines multiple weak learners to develop a strong classifier. It is an iterative algorithm that assigns higher
weights to incorrectly classified samples in each iteration, enabling the subsequent weak learners to focus on
challenging examples. The final classification is made by aggregating the predictions from all weak learners.
AC is well-known for its capability to deal with complex patterns in the dataset and enhance the classification
accuracy with each iteration.
[70] GBM GBM is an effective ensemble learning technique that merges numerous weak prediction models, such as
decision trees, to construct a robust classifier. It sequentially trains the models, with each subsequent model
learning from the errors made by its predecessors. The GBM algorithm enhances a loss function by iteratively
incorporating models that minimize the loss gradient. This iterative approach results in the development of a
powerful model capable of accurately predicting the target variable.
[71] NB It is a widely used probabilistic machine learning algorithm that is based on Bayes’ theorem. NB assumes
that features are independent of each other given the class label, which simplifies the calculation process. By
calculating the probabilities of each class for a given input, it chooses the class with the highest probability
as the predicted class. Naive Bayes is known for its simplicity, fast training and prediction speed, and its
effectiveness in handling high-dimensional data.
[72] ETC ETC is a well-known ensemble learning approach for making accurate predictions by combining many
decision trees. In contrast to RF, ETC chooses a subset of the best characteristics at random for each split in
the decision tree. This approach produces de-correlated trees, which are less sensitive to individual attributes
and more resilient to noise. ETC selects the appropriate feature to partition the data based on the Gini index.
It also assesses the significance of traits according to their Gini score.
TABLE 3: Hyperparameter tuning values of machine learning models
Model Hyperparameters
DT Trees=200, max_depth=30, random state=52
LR penalty=’l2’, solver=’lbfgs’
AC n_estimators=300, max_depth=300, lr=0.2
SGC alpha=1.0, binarize=0.0
RF Trees=200, max_depth=30, random state=52
GBM Trees=200, max_depth=30, random state=52, lr=0.1
ETC Trees=200, max_depth=30, random state=52
NB alpha=1.0, binarize=0.0
The corresponding weight matrices are represented by W and V, while s_k denotes the input at time step k, b denotes the bias, and h_{k-1} denotes the hidden state up to time step k-1. The memory cell c is refreshed at every time step. Every neuron in the dense layer is coupled to every neuron in the output layer of the LSTM.
3) CNN-LSTM
Compared to individual models, hybrid models frequently exhibit superior performance. The advantages of combining CNN's powerful automated feature extraction with LSTM's capacity to capture long-term temporal dependencies have led to the widespread adoption of the combined CNN-LSTM. Providing correct feature representations enables the
LSTM layers to learn temporal connections. CNN-LSTM
is an efficient solution for dealing with time series and
classification issues [73].
TABLE 4: Layered structure of deep learning models
CNN: Conv (7 × 7, @64) → Max pooling (2 × 2) → Conv (7 × 7, @64) → GlobalMax pooling (2 × 2) → Dropout (0.5) → Dense (32 neurons) → Softmax (2)
LSTM: LSTM (100 neurons) → Dropout (0.5) → Dense (32 neurons) → Softmax (2)
CNN-LSTM: Conv (7 × 7, @64) → Max pooling (4 × 4) → LSTM (100 neurons) → Dense (32 neurons) → Softmax (2)
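A minimal Keras sketch approximating the layered structures in Table 4 is given below; reading the 7 × 7 convolutions as 1-D convolutions with kernel size 7 over token sequences, as well as the vocabulary size, embedding dimension, and sequence length, are all assumptions for illustration.

```python
from tensorflow.keras import layers, models

VOCAB, DIM, SEQ = 20000, 300, 50  # hypothetical vocabulary/embedding/sequence sizes

def cnn_model():
    # CNN column of Table 4, read as 1-D convolutions over token sequences.
    return models.Sequential([
        layers.Input(shape=(SEQ,)),
        layers.Embedding(VOCAB, DIM),
        layers.Conv1D(64, 7, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, 7, activation="relu"),
        layers.GlobalMaxPooling1D(),
        layers.Dropout(0.5),
        layers.Dense(32, activation="relu"),
        layers.Dense(2, activation="softmax"),
    ])

def lstm_model():
    return models.Sequential([
        layers.Input(shape=(SEQ,)),
        layers.Embedding(VOCAB, DIM),
        layers.LSTM(100),
        layers.Dropout(0.5),
        layers.Dense(32, activation="relu"),
        layers.Dense(2, activation="softmax"),
    ])

def cnn_lstm_model():
    return models.Sequential([
        layers.Input(shape=(SEQ,)),
        layers.Embedding(VOCAB, DIM),
        layers.Conv1D(64, 7, activation="relu"),
        layers.MaxPooling1D(4),
        layers.LSTM(100),
        layers.Dense(32, activation="relu"),
        layers.Dense(2, activation="softmax"),
    ])

model = cnn_model()
# Categorical cross-entropy and the Adam optimizer, as stated in Section V.
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```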
E. PROPOSED METHODOLOGY
This section presents the proposed methodology adopted for
tweet classification. The architecture of the proposed frame-
work is presented in Figure 4. Deep learning models like
CNN can automatically learn significant features from text
input. They are capable of capturing hierarchical patterns,
local relationships, and long-term connections, allowing the
model to extract usable representations from the incoming
text. By stacking multiple layers of CNN, dependencies of
text can be captured. This work introduces a simple deep
learning-based CNN model for tweet classification.
In the proposed framework, a labelled dataset is collected
from a public repository. The collected dataset contains
tweets from human and bot accounts. In order to simplify
the text and enhance its quality, a series of preprocessing
steps are employed to clean the tweets. The dataset is divided into an 80:20 ratio for training and testing. The next step in-
volves transforming the text into vectors using FastText word
embedding. Subsequently, these vector representations are
fed into the CNN model. The proposed methodology, which
leverages FastText word embedding in conjunction with a
3-layered CNN, is employed for the training process. The
efficacy of this approach is assessed through the utilization
of four evaluation metrics: Accuracy, Precision, Recall, and
F1-score.
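The pipeline can be sketched as follows; the placeholder data, tokenizer settings, and the way the FastText vectors are built into an embedding matrix are illustrative assumptions, not the authors' exact code.

```python
import numpy as np
import fasttext
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

VOCAB, DIM, SEQ = 20000, 300, 50                       # hypothetical sizes
texts = ["example human tweet", "example bot tweet"]   # placeholder tweets
labels = np.array([0, 1])                              # 0 = human, 1 = bot

# 80:20 train/test split, as in the proposed framework.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42)

tok = Tokenizer(num_words=VOCAB)
tok.fit_on_texts(X_train)
seqs = pad_sequences(tok.texts_to_sequences(X_train), maxlen=SEQ)

# Build an embedding matrix from pretrained FastText vectors; the matrix
# can then be passed to the Embedding layer via its "weights" argument.
ft = fasttext.load_model("cc.en.300.bin")              # see the FastText sketch above
embedding_matrix = np.zeros((VOCAB, DIM))
for word, idx in tok.word_index.items():
    if idx < VOCAB:
        embedding_matrix[idx] = ft.get_word_vector(word)
```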
FIGURE 4: Architecture of proposed framework for deepfake tweet classification
F. EVALUATION METRICS
The evaluation of models involves assessing their perfor-
mance using several widely utilized measures to classify deep
fake text. These measures include accuracy, recall, precision,
and F1 score. Accuracy assesses a model's overall correctness by measuring the ratio of properly classified examples to the total number of instances. It provides a general
overview of the model’s performance.
Accuracy = (Number of correctly classified predictions) / (Total predictions)    (8)

while in the case of binary classification, accuracy is measured as:

Accuracy = (TP + TN) / (TP + FP + TN + FN)    (9)

where TP is true positive, FP is false positive, TN is true negative, and FN is false negative, defined as:
TP: positive instances correctly predicted as positive.
FP: negative instances incorrectly predicted as positive.
TN: negative instances correctly predicted as negative.
FN: positive instances incorrectly predicted as negative.
Precision represents the model's accuracy in classifying positive cases: the ratio of correctly classified positive instances to all instances predicted as positive. It highlights the model's ability to avoid false positives.

Precision = TP / (TP + FP)    (10)
Recall, also known as sensitivity or true positive rate, as-
sesses the model’s ability to properly identify positive events
out of all real positive instances. It is especially essential in
situations when the identification of positive cases is critical,
such as illness diagnosis.
Recall = TP / (TP + FN)    (11)
The F1 score gives a balanced assessment of a model's performance by taking precision and recall into consideration. It is scored on a scale of 0 to 1, with a higher score signifying better performance. A flawless F1 score of 1 indicates that the model achieved perfect precision as well as recall. To summarise, the F1 score combines precision and recall into a single statistic that provides an overall evaluation of a model's performance in classification tasks.

F1-score = 2 × (precision × recall) / (precision + recall)    (12)
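All four measures are available in scikit-learn; a small sketch with hypothetical labels:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1]  # hypothetical labels (1 = bot, 0 = human)
y_pred = [1, 0, 1, 0, 0, 1]

print(accuracy_score(y_true, y_pred))   # Eq. (9)
print(precision_score(y_true, y_pred))  # Eq. (10)
print(recall_score(y_true, y_pred))     # Eq. (11)
print(f1_score(y_true, y_pred))         # Eq. (12)
```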
V. RESULTS AND DISCUSSION
This section describes the experiment that was carried out
for this study and discusses the outcomes. In this study,
machine learning and deep learning models are employed to
detect deepfake tweets. To validate the proposed approach,
this study employs eight machine learning models: DT, LR,
AC, SGC, RF, GBM, ETC and NB. The description of these
models is presented in Table 2. These models are employed
using the hyperparameters that are best suited to the dataset.
Value ranges are fine-tuned to provide the greatest perfor-
mance when selecting the optimum hyperparameters. Table
3 shows the hyperparameter settings and tuning range.
A. EXPERIMENTAL SETUP
The results of several classifiers for identifying deepfake text
are presented in this section. The models were developed
using Python 3.8 and a Jupyter Notebook, and the testing
was carried out on a device running Windows 10 and a 7th-
generation Core i7 CPU. Accuracy, precision, recall, and F1
score were used to evaluate how well the learning models
performed. Table 5 provides detailed information on the
hardware and software requirements used in the experiment.
TABLE 5: Experimental setup for the proposed system.
Element Details
Language Python 3.8
OS 64-bit Windows 10
RAM 8GB
GPU Nvidia, 1060, 8GB
CPU Core i7, 7th Gen with 2.8 GHz processor
B. PERFORMANCE OF MACHINE LEARNING MODELS
This section details the experiments conducted in this re-
search and a discussion of obtained results. The experimental
results utilising various feature engineering strategies are
examined for deepfake text detection. The frequency-based
techniques (TF and TF-IDF) and word embedding techniques (FastText and FastText Subword) are used to compare
the output of various supervised machine learning models.
Machine learning models including DT, LR, AC, SGC, RF,
GBM, ETC and NB are compared in terms of accuracy, preci-
sion, recall and F1 score. The effectiveness of every model
varies depending on the feature extraction method.
1) Comparison of classifiers using TF
The performance comparison of machine learning models
is presented in Table 6. Results show that ETC using TF
has demonstrated superior performance with a 0.83 value
of accuracy, precision, recall and F1 Score. Similarly, RF
and NB have attained 0.83 values of accuracy and recall in
classifying deepfake text. DT, LR, AC, SGC and GBM have
shown similar results with 0.75 accuracy, 0.80 precision, 0.75
recall and 0.77 F1 score.
TABLE 6: Classification result using TF.
Model Accuracy Precision Recall F1-Score
DT 0.75 0.80 0.75 0.77
LR 0.75 0.80 0.75 0.77
AC 0.75 0.80 0.75 0.77
SGC 0.75 0.80 0.75 0.77
RF 0.83 0.69 0.83 0.76
GBM 0.75 0.80 0.75 0.77
ETC 0.83 0.83 0.83 0.83
NB 0.83 0.69 0.83 0.76
2) Comparison of classifiers using TF-IDF
Table 7 presents the result of machine learning models using
TF-IDF for classifying deepfake text. It can be observed that
the use of TF-IDF has significantly improved the results of
RF and ETC. Both models achieved 0.91 scores in accuracy,
precision, recall and F1 score which is the highest score
among classifiers using TF-IDF. DT, AC, SGC, GBM and
NB did not show any improvement as compared to the results
obtained with TF.
TABLE 7: Classification result using TF-IDF.
Model Accuracy Precision Recall F1-Score
DT 0.75 0.80 0.75 0.77
LR 0.83 0.69 0.83 0.76
AC 0.75 0.80 0.75 0.77
SGC 0.75 0.80 0.75 0.77
RF 0.91 0.91 0.91 0.91
GBM 0.75 0.80 0.75 0.77
ETC 0.91 0.91 0.91 0.91
NB 0.83 0.69 0.83 0.76
3) Comparison of classifiers using FastText
Using FastText embedding for deepfake text, the efficiency
of supervised machine learning models is also compared.
FastText embedding has shown to be an excellent text classi-
fication tool. The experimental results in Table 8 reveal that
when applied with FastText, the supervised machine learning
models do not produce robust results. ETC and RF yield the
greatest result with an accuracy of 0.90 which is lower than
the 0.91 obtained by utilising TF-IDF features. These results
highlight that the FastText embedding technique does not
enhance the effectiveness of any classifier, as evidenced by
the experimental outcomes.
TABLE 8: Classification result using FastText.
Model Accuracy Precision Recall F1-Score
DT 0.75 0.80 0.75 0.77
LR 0.83 0.69 0.83 0.76
AC 0.75 0.80 0.75 0.77
SGC 0.75 0.80 0.75 0.77
RF 0.90 0.90 0.90 0.90
GBM 0.75 0.80 0.75 0.77
ETC 0.90 0.90 0.90 0.90
NB 0.83 0.69 0.83 0.76
4) Comparison of classifiers using FastText Subword
The efficacy of FastText Subword is also investigated in combination with machine learning models for deepfake text detection. While classifying tweets using FastText Subword,
it can be observed that the performance of machine learning
models has degraded significantly, as shown in Table 9. AC and GBM have shown slightly better performance with a 0.62 accuracy, which is still not adequate. Results revealed that FastText Subword features do not perform well in combination with machine learning models for classifying deepfake text.
TABLE 9: Classification result using FastText Subword.
Model Accuracy Precision Recall F1-Score
DT 0.56 0.56 0.56 0.56
LR 0.53 0.53 0.53 0.52
AC 0.62 0.62 0.62 0.62
SGC 0.55 0.55 0.55 0.54
RF 0.61 0.61 0.61 0.61
GBM 0.62 0.63 0.62 0.61
ETC 0.59 0.59 0.59 0.59
NB 0.56 0.60 0.56 0.52
C. RESULTS OF DEEP LEARNING MODELS
This study employs some state-of-the-art deep learning mod-
els including CNN, LSTM and CNN-LSTM. These models
have been extensively used for text classification in the
literature. The architecture of these deep learning models is
presented in Table 4.
Embedding layers, Dropout layers, and dense layers are
used by all deep learning models. The embedding layer accepts input and turns each word in a tweet into vector form for model training. The dropout layer is used to lower the
likelihood of model over-fitting and reduce the complexity of
model learning by randomly removing neurons. To produce
the required output, the dense layer is combined with the
neurons and a Softmax activation algorithm. Models are
constructed with a categorical cross-entropy function, and the
’Adam’ optimizer is utilised to optimise parameters.
Experimental results of deep learning models using Fast-
Text and FastText Subword embeddings are presented in
Table 10 and Table 11 respectively. Results revealed that
deep learning models have shown better performance with
FastText as compared to the FastText subword. CNN-LSTM
has shown the lowest results using the FastText subword with
a 0.67 value of accuracy. LSTM using FastText has shown a
0.71 accuracy score, 0.82 precision, 0.78 recall and 0.80 F1
score for deepfake text detection. CNN-LSTM has attained a
0.69 value of accuracy, 0.78 precision, 0.75 recall and 0.76
F1 Score. On the other hand, CNN in combination with
FastText has shown superior performance among all other
combinations of deep learning and machine learning models
with a 0.93 value of accuracy, 0.92 value of precision, 0.95
recall and 0.93 F1 Score.
TABLE 10: Classification result using Deep learning models
using FastText.
Model Accuracy Precision Recall F1-Score
CNN 0.93 0.92 0.95 0.93
LSTM 0.71 0.82 0.78 0.80
CNN-LSTM 0.69 0.78 0.75 0.76
TABLE 11: Classification result using Deep learning models
using FastText Subword.
Model Accuracy Precision Recall F1-Score
CNN 0.87 0.84 0.81 0.83
LSTM 0.68 0.73 0.73 0.73
CNN-LSTM 0.67 0.71 0.74 0.75
D. DISCUSSION
Performance comparisons of classifiers utilising various fea-
ture representation approaches were performed on the dataset
including deepfake tweets. The effect of TF, TF-IDF, Fast-
Text, and FastText Subword on tweets has been examined
in order to discover bot-generated tweets. Separate compar-
isons of accuracy, precision, recall, and F1 score have been
provided. Figure 5 presents the training and testing accuracy,
precision, and recall of the proposed model.
Figure 6 presents the performance comparison of deep
learning models using FastText and FastText Subwords. It
can be observed that LSTM and CNN-LSTM have shown
better performance with FastText. However, CNN using Fast-
Text has shown the highest accuracy score with a 0.93 value.
In the precision comparison, all three models show improved performance with FastText, and CNN using FastText achieved the highest precision score among all classifiers.
Figure 6 also illustrates the comparison of deep learning
models in terms of recall. The lowest recall values have been
achieved by CNN-LSTM using the FastText subword. CNN
using FastText surpassed other models with a 0.93 value of
F1 Score.
According to the explanation above, the classifiers get
the best results for deepfake text classification when trained
using FastText word embedding. This study considers the
use of Word embedding techniques due to the efficiency of
FastText embeddings in various text classification tasks [74],
[75]. By employing word embeddings, textual input is ef-
fectively transformed into numerical vectors that encapsulate
meaningful semantic relationships between words. Conse-
quently, the machine learning model can learn and identify
intricate patterns and associations between words specific to
the given task. Machine learning models have shown good
results using TF-IDF. TF-IDF captures the word’s importance
while FastText represents words as continuous vectors in a
high-dimensional space to capture semantic and grammatical
information. It takes subword information into consideration,
allowing it to handle out-of-vocabulary terms and catch
morphological similarities. Overall, CNN utilising FastText
achieved the best performance across all assessment mea-
sures. By minimising bias and variance, the randomization
and optimisation features make CNN more efficient in text
classification.
1) Performance Comparison with previous state-of-the-art
The performance of the proposed model is evaluated by com-
paring it with state-of-the-art studies in the field. In order to
ensure the most up-to-date results, a recently published work
on the same dataset has been selected for comparison that has
achieved the best results with two powerful transfer learning
models namely RoBERTa (Robustly Optimized BERT ap-
proach) and BERT (Bidirectional Encoder Representations
from Transformers). Notably, RoBERTa, when applied to the
same dataset, achieved an accuracy of 0.89 and an F1 score
of 0.89. Similarly, BERT, when applied to the same dataset,
also reported an accuracy of 0.89 and an F1 score of 0.89.
In contrast, the current study leveraged the combination of
CNN features and FastText for deepfake text detection. The
results of this study outperformed existing state-of-the-art
approaches, yielding a classification accuracy of 0.93. This
indicates that the proposed model offers a significant im-
provement over previous methods in accurately identifying
deepfake text. The approach employed in this study offers
distinct advantages compared to complex transfer learning
FIGURE 5: Training and Testing results of CNN model. A) Accuracy Curve, B) Precision Curve, C) Recall Curve
FIGURE 6: Performance Comparison of deep learning models
models, such as RoBERTa and BERT. The utilization of
a simple CNN model structure provides several benefits.
Firstly, it avoids the need for extensive training time and com-
putational resources that are typically required for fine-tuning
transfer learning models. This makes the proposed approach
more accessible and efficient, especially for researchers and
practitioners with limited resources.
Additionally, the fixed vocabulary size of transfer learn-
ing models can pose challenges when encountering out-of-
vocabulary terms. In contrast, the CNN model used in this
study does not suffer from such limitations. It can effectively
handle out-of-vocabulary terms without compromising per-
formance, as it does not rely on pre-defined vocabularies.
This flexibility allows the model to better adapt to diverse
and evolving textual data. By incorporating CNN features
and harnessing the power of FastText, the proposed model
achieves higher accuracy and demonstrates better perfor-
mance compared to the selected state-of-the-art approaches.
Overall, these findings highlight the effectiveness of the proposed model and establish its competitive advantage in the field of deepfake text detection.
TABLE 12: Results comparison with state-of-the-art models from the literature.

Models               Accuracy   F1-Score
RoBERTa              0.896      0.89
BERT                 0.891      0.89
Proposed Approach    0.93       0.93
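For reference, the accuracy and F1 scores compared in Table 12 follow their standard definitions; the snippet below shows how they would be computed with scikit-learn on placeholder labels (a generic illustration, not the study's evaluation code).

```python
# Generic illustration with placeholder labels (1 = bot-generated).
from sklearn.metrics import accuracy_score, f1_score

y_true = [1, 0, 1, 1, 0, 1]  # gold labels
y_pred = [1, 0, 1, 0, 0, 1]  # model predictions

print(f"Accuracy: {accuracy_score(y_true, y_pred):.3f}")  # 0.833
print(f"F1 score: {f1_score(y_true, y_pred):.3f}")        # 0.857
```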
VI. CONCLUSION AND FUTURE WORK
Deepfake text detection is a critical and challenging task
in the era of misinformation and manipulated content. This
study aimed to address this challenge by proposing an ap-
proach for deepfake text detection and evaluating its effec-
tiveness. A dataset containing tweets from bots and humans is analyzed by applying several machine learning and deep learning models along with feature engineering techniques. Well-known feature extraction techniques (TF and TF-IDF) and word embedding techniques (FastText and FastText subwords) are employed. By leveraging the combination of a CNN and FastText embeddings, the proposed approach demonstrated promising results, accurately identifying deepfake text with a 0.93 accuracy score. Furthermore, the results of the proposed approach are compared with other state-of-the-art transfer learning models from previous literature.
Overall, the adoption of a CNN model structure in this
study shows its superiority in terms of simplicity, compu-
tational efficiency, and handling out-of-vocabulary terms.
These advantages make the proposed approach a compelling
option for deepfake text detection tasks, demonstrating that strong performance can be achieved without complex and time-consuming transfer learning models. The
findings of this study contribute to advancing the field of
deepfake detection and provide valuable insights for future
research and practical applications.
As social media continues to play a significant role in
shaping public opinion, the development of robust deepfake
text detection techniques is imperative to safeguard gen-
uine information and preserve the integrity of democratic
processes. In future research, quantum NLP and other cutting-edge methodologies will be explored to build more sophisticated and efficient detection systems that can fight the spread of misinformation and deceptive content on social media platforms.
REFERENCES
[1] Jai Prakash Verma, Smita Agrawal, Bankim Patel, and Atul Patel. Big
data analytics: Challenges and applications for text, audio, video, and
social media data. International Journal on Soft Computing, Artificial
Intelligence and Applications (IJSCAI), 5(1):41–51, 2016.
[2] Husna Siddiqui, Elizabeth Healy, and Aspen Olmsted. Bot or not. In
2017 12th international conference for internet technology and secured
transactions (ICITST), pages 462–463. IEEE, 2017.
[3] Mika Westerlund. The emergence of deepfake technology: A review.
Technology innovation management review, 9(11), 2019.
[4] John Ternovski, Joshua Kalla, and Peter M Aronow. Deepfake warnings
for political videos increase disbelief but do not improve discernment:
Evidence from two experiments. 2021.
[5] Soroush Vosoughi, Deb Roy, and Sinan Aral. The spread of true and false
news online. science, 359(6380):1146–1151, 2018.
[6] Samantha Bradshaw, Hannah Bailey, and Philip N Howard. Industrialized
disinformation: 2020 global inventory of organized social media manipu-
lation. Computational Propaganda Project at the Oxford Internet Institute,
2021.
[7] Christian Grimme, Mike Preuss, Lena Adam, and Heike Trautmann.
Social bots: Human-like by means of human control? Big data, 5(4):279–
293, 2017.
[8] Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian,
Zhilin Yang, and Jie Tang. Gpt understands, too. arXiv preprint
arXiv:2103.10385, 2021.
[9] Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali
Farhadi, Franziska Roesner, and Yejin Choi. Defending against neural
fake news. Advances in neural information processing systems, 32, 2019.
[10] Logan Beckman. The inconsistent application of internet regulations and
suggestions for the future. Nova L. Rev., 46:277, 2021.
[11] Jieh-Sheng Lee and Jieh Hsiang. Patent claim generation by fine-tuning
openai gpt-2. World Patent Information, 62:101983, 2020.
[12] Robert Dale. Gpt-3: What’s it good for? Natural Language Engineering,
27(1):113–118, 2021.
[13] Will Douglas Heaven. A gpt-3 bot posted comments on reddit for a
week and no one noticed. MIT Technology Review. Retrieved November,
24:2020, 2020.
[14] Sebastian Gehrmann, Hendrik Strobelt, and Alexander M Rush. Gltr:
Statistical detection and visualization of generated text. arXiv preprint
arXiv:1906.04043, 2019.
[15] David Ifeoluwa Adelani, Haotian Mai, Fuming Fang, Huy H Nguyen,
Junichi Yamagishi, and Isao Echizen. Generating sentiment-preserving
fake online reviews using neural language models and their human-and
machine-based detection. In Advanced Information Networking and Ap-
plications: Proceedings of the 34th International Conference on Advanced
Information Networking and Applications (AINA-2020), pages 1341–
1354. Springer, 2020.
[16] Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali
Farhadi, Franziska Roesner, and Yejin Choi. Grover-a state-of-the-art
defense against neural fake news, 2019.
[17] Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong,
and Richard Socher. Ctrl: A conditional transformer language model for
controllable generation. arXiv preprint arXiv:1909.05858, 2019.
[18] Adaku Uchendu, Zeyu Ma, Thai Le, Rui Zhang, and Dongwon Lee.
Turingbench: A benchmark environment for turing test in the age of neural
text generation. arXiv preprint arXiv:2109.13296, 2021.
[19] Tiziano Fagni, Fabrizio Falchi, Margherita Gambini, Antonio Martella,
and Maurizio Tesconi. Tweepfake: About detecting deepfake tweets. Plos
one, 16(5):e0251415, 2021.
[20] Harald Stiff and Fredrik Johansson. Detecting computer-generated
disinformation. International Journal of Data Science and Analytics,
13(4):363–383, 2022.
[21] Margherita Gambini, Tiziano Fagni, Fabrizio Falchi, and Maurizio
Tesconi. On pushing deepfake tweet detection capabilities to the limits.
In 14th ACM Web Science Conference 2022, pages 154–163, 2022.
[22] Ruben Tolosana, Ruben Vera-Rodriguez, Julian Fierrez, Aythami Morales,
and Javier Ortega-Garcia. Deepfakes and beyond: A survey of face
manipulation and fake detection. Information Fusion, 64:131–148, 2020.
[23] Thanh Nguyen, Cuong M. Nguyen, Tien Nguyen, Thanh Duc, and Saeid
Nahavandi. Deep learning for deepfakes creation and detection: A survey.
09 2019.
[24] Tianxiang Chen, Avrosh Kumar, Parav Nagarsheth, Ganesh Sivaraman,
and Elie Khoury. Generalization of audio deepfake detection. In Odyssey,
pages 132–137, 2020.
[25] Max Wolff and Stuart Wolff. Attacking neural text detectors. arXiv
preprint arXiv:2002.11768, 2020.
[26] Jiameng Pu, Zain Sarwar, Sifat Muhammad Abdullah, Abdullah
Rehman, Yoonjin Kim, Parantapa Bhattacharya, Mobin Javed, and Bimal
Viswanath. Deepfake text detection: Limitations and opportunities. arXiv
preprint arXiv:2210.09421, 2022.
[27] Faris Kateb and Jugal Kalita. Classifying short text in social media: Twitter
as case study. International Journal of Computer Applications, 111(9):1–
12, 2015.
[28] Andres Garcia Silva, Cristian Berrio, and José Manuel Gómez-Pérez. An
empirical study on pre-trained embeddings and language models for bot
detection. In Proceedings of the 4th Workshop on Representation Learning
for NLP (RepL4NLP-2019), pages 148–155, 2019.
[29] Jonas Lundberg, Jonas Nordqvist, and Antonio Matosevic. On-the-fly
detection of autogenerated tweets. arXiv preprint arXiv:1802.01197, 2018.
[30] Robert Gorwa and Douglas Guilbeault. Unpacking the social media bot: A
typology to guide research and policy. Policy & Internet, 12(2):225–248,
2020.
[31] Supasorn Suwajanakorn, Steven M Seitz, and Ira Kemelmacher-
Shlizerman. Synthesizing obama: learning lip sync from audio. ACM
Transactions on Graphics (ToG), 36(4):1–13, 2017.
[32] Justus Thies, Michael Zollhofer, Marc Stamminger, Christian Theobalt,
and Matthias Nießner. Face2face: Real-time face capture and reenactment
of rgb videos. In Proceedings of the IEEE conference on computer vision
and pattern recognition, pages 2387–2395, 2016.
[33] Caroline Chan, Shiry Ginosar, Tinghui Zhou, and Alexei A Efros. Ev-
erybody dance now. In Proceedings of the IEEE/CVF international
conference on computer vision, pages 5933–5942, 2019.
[34] Ye Jia, Yu Zhang, Ron Weiss, Quan Wang, Jonathan Shen, Fei Ren,
Patrick Nguyen, Ruoming Pang, Ignacio Lopez Moreno, Yonghui Wu,
et al. Transfer learning from speaker verification to multispeaker text-
to-speech synthesis. Advances in neural information processing systems,
31, 2018.
[35] Yefei Wang, Kaili Wang, Yi Wang, Di Guo, Huaping Liu, and Fuchun Sun.
Audio-visual grounding referring expression for robotic manipulation.
In 2022 International Conference on Robotics and Automation (ICRA),
pages 9258–9264. IEEE, 2022.
[36] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya
Sutskever, et al. Language models are unsupervised multitask learners.
OpenAI blog, 1(8):9, 2019.
[37] Ruben Tolosana, Ruben Vera-Rodriguez, Julian Fierrez, Aythami Morales,
and Javier Ortega-Garcia. Deepfakes and beyond: A survey of face
manipulation and fake detection. Information Fusion, 64:131–148, 2020.
[38] Yang Zhou, Jimei Yang, Dingzeyu Li, Jun Saito, Deepali Aneja, and
Evangelos Kalogerakis. Audio-driven neural gesture reenactment with
video motion graphs. In Proceedings of the IEEE/CVF Conference on
Computer Vision and Pattern Recognition, pages 3418–3428, 2022.
[39] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al.
Improving language understanding by generative pre-training. 2018.
[40] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.
Bert: Pre-training of deep bidirectional transformers for language under-
standing. arXiv preprint arXiv:1810.04805, 2018.
[41] Chunyuan Li, Xiang Gao, Yuan Li, Baolin Peng, Xiujun Li, Yizhe Zhang,
and Jianfeng Gao. Optimus: Organizing sentences via pre-trained model-
ing of a latent space. arXiv preprint arXiv:2004.04092, 2020.
[42] Irene Solaiman, Miles Brundage, Jack Clark, Amanda Askell, Ariel
Herbert-Voss, Jeff Wu, Alec Radford, Gretchen Krueger, Jong Wook Kim,
Sarah Kreps, et al. Release strategies and the social impacts of language
models. arXiv preprint arXiv:1908.09203, 2019.
[43] Patrick von Platen. How to generate text: using different decoding methods
for language generation with transformers. Hugging Face, 2020.
[44] Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhut-
dinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining
for language understanding. Advances in neural information processing
systems, 32, 2019.
[45] Anton Bakhtin, Sam Gross, Myle Ott, Yuntian Deng, Marc’Aurelio Ran-
zato, and Arthur Szlam. Real or fake? learning to discriminate machine
from human generated text. arXiv preprint arXiv:1906.03351, 2019.
[46] Shangbin Feng, Zhaoxuan Tan, Herun Wan, Ningnan Wang, Zilong Chen,
Binchi Zhang, Qinghua Zheng, Wenqian Zhang, Zhenyu Lei, Shujie Yang,
et al. Twibot-22: Towards graph-based twitter bot detection. arXiv preprint
arXiv:2206.04564, 2022.
[47] Samaneh Hosseini Moghaddam and Maghsoud Abbaspour. Friendship
preference: Scalable and robust category of features for social bot detec-
tion. IEEE Transactions on Dependable and Secure Computing, 2022.
[48] Nick Hajli, Usman Saeed, Mina Tajvidi, and Farid Shirazi. Social bots and
the spread of disinformation in social media: the challenges of artificial
intelligence. British Journal of Management, 33(3):1238–1253, 2022.
[49] David Dukić, Dominik Keča, and Dominik Stipić. Are you human? detecting bots on twitter using bert. In 2020 IEEE 7th International Conference on Data Science and Advanced Analytics (DSAA), pages 631–636. IEEE, 2020.
[50] Sneha Kudugunta and Emilio Ferrara. Deep neural networks for bot
detection. Information Sciences, 467:312–322, 2018.
[51] Maryam Heidari and James H Jones Jr. Bert model for social media bot
detection. 2022.
[52] Kheir Eddine Daouadi, Rim Zghal Rebaï, and Ikram Amous. Bot detection
on online social networks using deep forest. In Artificial Intelligence
Methods in Intelligent Algorithms: Proceedings of 8th Computer Science
On-line Conference 2019, Vol. 2 8, pages 307–315. Springer, 2019.
[53] Daphne Ippolito, Daniel Duckworth, Chris Callison-Burch, and Douglas
Eck. Automatic detection of generated text is easiest when humans are
fooled. arXiv preprint arXiv:1911.00650, 2019.
[54] Jun Zhao, Xudong Liu, Qiben Yan, Bo Li, Minglai Shao, and Hao
Peng. Multi-attributed heterogeneous graph convolutional network for bot
detection. Information Sciences, 537:380–393, 2020.
[55] Bin Wu, Le Liu, Yanqing Yang, Kangfeng Zheng, and Xiujuan Wang.
Using improved conditional generative adversarial networks to detect
social bots on twitter. IEEE Access, 8:36664–36680, 2020.
[56] Shaghayegh Najari, Mostafa Salehi, and Reza Farahbakhsh. Ganbot: a
gan-based framework for social bot detection. Social Network Analysis
and Mining, 12:1–11, 2022.
[57] Biyang Guo, Xin Zhang, Ziyuan Wang, Minqi Jiang, Jinran Nie, Yuxuan
Ding, Jianwei Yue, and Yupeng Wu. How close is chatgpt to human
experts? comparison corpus, evaluation, and detection. arXiv preprint
arXiv:2301.07597, 2023.
[58] Rexhep Shijaku and Ercan Canhasi. Chatgpt generated text detection.
[59] Sandra Mitrović, Davide Andreoletti, and Omran Ayoub. Chatgpt or human? detect and explain. explaining decisions of machine learning model for detecting short chatgpt-generated text. arXiv preprint arXiv:2301.13852, 2023.
[60] Yongqiang Ma, Jiawei Liu, and Fan Yi. Is this abstract generated by ai?
a research for the gap between ai-generated scientific text and human-
written scientific text. arXiv preprint arXiv:2301.10416, 2023.
[61] Bei Yu. An evaluation of text classification methods for literary study.
Literary and Linguistic Computing, 23(3):327–343, 2008.
[62] Stephen Robertson. Understanding inverse document frequency: on theo-
retical arguments for idf. Journal of documentation, 2004.
[63] Chao Qiao, Bo Huang, Guocheng Niu, Daren Li, Daxiang Dong, Wei He,
Dianhai Yu, and Hua Wu. A new method of region embedding for text
classification. In ICLR, 2018.
[64] Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov.
Enriching word vectors with subword information. Transactions of the
association for computational linguistics, 5:135–146, 2017.
[65] Leo Breiman. Random forests. Machine learning, 45:5–32, 2001.
[66] Sotiris B Kotsiantis. Decision trees: a recent overview. Artificial Intelli-
gence Review, 39:261–283, 2013.
[67] David G Kleinbaum, K Dietz, M Gail, Mitchel Klein, and Mitchell Klein.
Logistic regression. Springer, 2002.
[68] Nikhil Ketkar and Nikhil Ketkar. Stochastic gradient descent. Deep
learning with Python: A hands-on introduction, pages 113–132, 2017.
[69] Abdul Rehman Javed, Zunera Jalil, Syed Atif Moqurrab, Sidra Abbas, and
Xuan Liu. Ensemble adaboost classifier for accurate and fast detection of
botnet attacks in connected vehicles. Transactions on Emerging Telecom-
munications Technologies, 33(10):e4088, 2022.
[70] Alexey Natekin and Alois Knoll. Gradient boosting machines, a tutorial.
Frontiers in neurorobotics, 7:21, 2013.
[71] Geoffrey I Webb, Eamonn Keogh, and Risto Miikkulainen. Naïve bayes.
Encyclopedia of machine learning, 15:713–714, 2010.
[72] Pierre Geurts, Damien Ernst, and Louis Wehenkel. Extremely randomized
trees. Machine learning, 63:3–42, 2006.
[73] Hailun Xie, Li Zhang, and Chee Peng Lim. Evolving cnn-lstm models for
time series prediction using enhanced grey wolf optimizer. IEEE access,
8:161519–161541, 2020.
[74] S Selva Birunda and R Kanniga Devi. A review on word embedding
techniques for text classification. Innovative Data Communication Tech-
nologies and Application: Proceedings of ICIDCA 2020, pages 267–281,
2021.
[75] Erdenebileg Batbaatar, Meijing Li, and Keun Ho Ryu. Semantic-emotion
neural network for emotion recognition from text. IEEE access, 7:111866–
111878, 2019.
SAIMA SADIQ is working as an Assistant Professor in the Department of Computer Science at Government Degree College for Women. In September 2020, she enrolled in the Ph.D. program in Computer Science at Khwaja Fareed University of Engineering & IT (KFUEIT). Her recent research interests include Data Mining, Machine Learning, and Deep Learning based Text Mining.
SALEEM ULLAH was born in AhmedPur East, Pakistan, in 1983. He received his B.Sc. and MIT degrees in Computer Science from Islamia University Bahawalpur and Bahauddin Zakariya University (Multan) in 2003 and 2005, respectively. From 2006 to 2009, he worked as a Network/IT Administrator in different companies. He completed his Doctorate degree at Chongqing University, China, in 2012. From August 2012 to February 2016, he worked as an Assistant Professor at Islamia University Bahawalpur, Pakistan. Since February 2016, he has been working as an Associate Dean at Khwaja Fareed University of Engineering & Information Technology, Rahim Yar Khan. He has almost 14 years of industry experience in the field of IT. He is an active researcher in the fields of Ad hoc Networks, IoTs, Congestion Control, Data Science, and Network Security.