Argot: Generating Adversarial Readable Chinese Texts
Zihan Zhang1,Mingxuan Liu1,Chao Zhang1,2,Yiming Zhang1,
Zhou Li3,Qi Li1,2,Haixin Duan1,2,4,Donghong Sun1,2
1Tsinghua University
2Beijing National Research Center for Information Science and Technology
3University of California Irvine
4Qi An Xin Group
{zhangzih19,liumx18}@mails.tsinghua.edu.cn, chaoz@tsinghua.edu.cn,
zhangyim17@mails.tsinghua.edu.cn, zhou.li@uci.edu, {qli01,duanhx,sundh105}@tsinghua.edu.cn
These two authors contributed equally.
Corresponding author.
Abstract
Natural language processing (NLP) models are known to be vulnerable to adversarial examples, similar to image processing models. Studying adversarial texts is an essential step to improve the robustness of NLP models. However, existing studies mainly focus on analyzing English texts and generating adversarial examples for English texts. There is no work studying the possibility and effect of transferring these methods to another language, e.g., Chinese. In this paper, we analyze the differences between Chinese and English, and explore the methodology to transform the existing English adversarial generation methods to Chinese. We propose a novel black-box adversarial Chinese text generation solution Argot, by utilizing the methods for adversarial English samples together with several novel methods developed on Chinese characteristics. Argot can effectively and efficiently generate adversarial Chinese texts with good readability. Furthermore, Argot can also automatically generate targeted Chinese adversarial texts, achieving a high success rate while ensuring the readability of the Chinese text.
1 Introduction
Deep Neural Networks (DNNs) that process images are
known to be vulnerable to adversarial examples. Recent
works showed that Natural Language Processing (NLP) with
DNNs is also vulnerable [Li et al., 2018a]. Given the fact that
NLP plays an important role in information processing, e.g.,
sentiment analysis of reviews used in recommendation sys-
tems [Ravi and Ravi, 2015] and toxic content identification
in online governance [Nobata et al., 2016], studying adver-
sarial texts of NLP models is crucial and an essential step to
improve NLP models’ robustness.
However, generating adversarial texts is more challenging
than generating adversarial images. Image data is continuous,
but text data is discrete. Even minor perturbations applied
to a word vector could yield a non-existent word vector (i.e., one not associated with any valid word in the language). In addition, such non-existent word vectors result in meaningless texts, and the perturbations may degrade the readability of the texts. As a result, adversarial example generation solutions for image processing DNNs cannot be directly ported to NLP DNNs.
Some works have been proposed to generate adversarial texts [Li et al., 2018a], but they all target English texts. Researchers [Li et al., 2018a; Ebrahimi et al., 2017] proposed five commonly used methods to generate English adversarial texts in the black-box scenario, i.e., insert, delete, swap, substitute-C and substitute-W. However, none of them discussed the possibility of transferring the proposed methods to Chinese. Moreover, Chinese itself has some unique characteristics. Therefore, two research questions are raised: (1) Can existing adversarial English text generation methods be transformed to Chinese? How effective are they? (2) Are there new methods for generating adversarial texts based on the characteristics of Chinese?
In this paper, we analyzed the differences between English and Chinese, and found that Chinese texts have three unique linguistic characteristics: pronunciation (pinyin), visual perception (glyph) and composition of characters (radicals). We therefore proposed a novel adversarial Chinese text generation solution Argot, by adding perturbations based on these
characteristics. This solution could also work on other languages with similar characteristics, e.g., Korean and Japanese. In addition, Argot is able to generate both targeted and non-targeted Chinese adversarial texts.
We have evaluated Argot's success rate, perturbation, time consumption and readability on several NLP models, for both targeted and non-targeted attacks. The results showed that, on average, Argot could generate adversarial texts with a success rate over 97.7% for the non-targeted attack, while introducing less than 11.6% perturbations to the original texts, and with a 98.73% success rate and 12.25% perturbations for the targeted attack. The low perturbation rate implies that the adversarial texts generated by Argot have good readability, which is also confirmed by our user study. Furthermore, we proposed several candidate defenses against the attacks, and evaluated Argot's performance against these defenses.
Figure 1: Illustration of splitting characters into radicals.
Figure 2: Illustration of characters with similar glyph.
2 Background
2.1 Chinese Text
Similar to other languages like English, the sentence is the basic unit to express a complete idea in Chinese. In a sentence, the smallest sememe is the character in Chinese, while the word is the sememe in English. In Chinese, a few characters make up a word (phrase). English words in a sentence are separated by spaces, but no separators exist between Chinese characters and words; thus, an extra segmentation step is often required for Chinese NLP tasks.
Chinese word. A Chinese word is typically made up of two to four characters, while an English word usually consists of a string of alphabet letters. While changing, adding or deleting a letter usually does not impact the readability of an English word, doing so on characters can completely change the meaning and readability of a Chinese word. For example, if we delete the character 不 (not) from the word 不好 (bad), the remaining word 好 (good) expresses an opposite meaning.
Chinese character. A Chinese character is typically composed of two radicals, as shown in Figure 1, while the English sememe (word) is composed of alphabet letters. Inserting/deleting/changing a letter in an English word will yield a new word which can still be printed out and easily understood, e.g., f oolish/folish/fo0lish for “foolish” [Li et al., 2018a]. However, changing/adding/deleting a radical inside a character will yield either a character with a very different appearance, or a character which does not exist in the Chinese character dictionary and cannot be printed or input to DNNs. As a result, creating a new sememe in Chinese (i.e., a character) that is acceptable to readers is much harder than in English (i.e., a word).
Creating similar Chinese words. We identified two unique Chinese linguistic features that can be utilized to generate similar Chinese words without affecting the readability much.
Similar glyph. Chinese is a hieroglyphic language,
which utilizes the image perception of pictograph (glyph) to
help people understand a character or sentence [Wu et al.,
2019]. Some characters are similar in glyph, as shown in Fig-
ure 2. It is quite likely that a native Chinese speaker can com-
prehend a sentence in the same way even when a character
is transformed to another one with similar glyph. Moreover,
as most radicals are also legitimate characters, separating a
character into multiple radicals usually will not degrade the
readability much, as shown in Figure 1.
Similar sound. Chinese is also a phonetic language with
pinyin (a romanized spelling) and tone to distinguish the pro-
nunciation between words. As shown in Figure 3, the English
letters on top of each Chinese character are its pinyin repre-
sentations. Each character also has one out of four tones.
Figure 3: Illustration of similar pronunciations in Chinese words.
Many characters or words share similar pinyin and tones. Re-
placing a word in a sentence with another one of similar pro-
nunciation will not change its readability to a Chinese native
speaker. The major reason is that Chinese users usually use a pinyin input method to type Chinese characters, but oftentimes select a wrong character with the same or a different pinyin (e.g., due to regional dialect), making Chinese readers familiar with such typos.
2.2 Adversarial Perturbations to Text
The adversary's goal is to generate adversarial texts by perturbing a piece of Chinese text to mislead a target NLP application (e.g., toxic content detection), while keeping the texts comprehensible to readers and the meaning intact. While
English is still the most popular language served by online
NLP services, e.g., Google Cloud NLP and Microsoft Azure
Text Analytics, services supporting Chinese are catching up.
Whether they can defend against adversarial perturbations is
questionable, as their backend technique is based on DNN,
which is known to be vulnerable to such attacks.
Here we formalize the problem. Assume an NLP classifier $f$ classifies a text piece $x \in \mathcal{X}$ into a label $y \in \mathcal{Y}$, i.e., $y = f(x)$. The adversarial perturbation function $h$ changes $x$ to $x_a \in \mathcal{X}$, i.e., $x_a = h(x)$. The adversary aims to change the prediction label such that $f(x_a) = z$, $z \neq y$. The change can be targeted ($z = s$, where $s \in \mathcal{Y}$ is pre-defined by the adversary) or non-targeted ($z \in \mathcal{Y} \setminus \{y\}$). Also, we require that $x_a$ preserves the meaning of $x$ from the perspective of readers.
Threat Model. In this work, we assume that the adversary launches a black-box attack, i.e., it does not know the architecture, training dataset, parameters or hyper-parameters of the classifier $f$. All it can do is query the public API of $f$ with $x, x_a \in \mathcal{X}$ to obtain the output label and its confidence score. While previous works focus on white-box attacks [Gong et al., 2018], we believe the black-box attack is more practical, as model owners usually customize a public model (e.g., retraining or tuning hyper-parameters). While the adversary's knowledge is much more limited in this setting, we found our attack can still achieve high accuracy.
3 Generating Adversarial Chinese Text
3.1 Adversarial Changes in Chinese Word
Previous works about generating adversarial text focused on English [Li et al., 2018a], and five types of word perturbation have been proposed: 1) Insert a space into the word; 2) Delete a random letter; 3) Swap two random adjacent letters; 4) Substitute-C (Sub-C), i.e., replace characters with visually similar characters; 5) Substitute-W (Sub-W), i.e., replace the word with its top-k nearest neighbors in a context-aware word vector space. We examined these methods but found that not all of them can be transformed to Chinese. For example, deleting a character or radical in Chinese has a much greater impact on comprehension than in English. In the end, we found five types of perturbations applicable to Chinese: two of them are transformed from attacks against English with some adjustment, and three are novel and unique to Chinese1. Due to the fact that languages in East Asia like Korean and Japanese share similar properties, e.g., hieroglyphics, our methods are expected to be applicable to those languages as well.
Synonyms. This method is transformed from the English Sub-W method, but we focus on synonyms defined in a vocabulary without using word embeddings.
Shuffle. This method is transformed from the English Swap method. We apply it to characters within a Chinese word.
Splitting-character*. As aforementioned, some radicals are also characters. Therefore, a special method for generating adversarial Chinese texts is splitting characters into radicals. In particular, we only split characters with left-right or semi-enclosed radical structures (the left and right part of Figure 1, respectively), as this causes less confusion to readers. This method is similar to inserting spaces into English words.
Glyph*. As aforementioned, characters with similar glyph can be understood smoothly in a sentence. Therefore we can replace characters with ones of similar glyph. Note that this is different from Sub-C, as there are far more candidate glyph pairs in Chinese than in English (e.g., only 3 options, 1-l, O-o and a-@, are explored in [Li et al., 2018a]).
Pinyin*. As aforementioned, replacing a character with another one of similar pinyin also yields readable texts. This feature is unique to hieroglyphic languages, and could be utilized to generate adversarial texts as well.
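To make the simpler word-level operators concrete, the sketch below illustrates shuffle and splitting-character as plain string operations; the radical decomposition table is a tiny hypothetical stand-in for the open-source decomposition data used later, not the paper's actual resource:

```python
import random

# Hypothetical decomposition table: characters with a left-right structure whose
# radicals are themselves printable characters (real data comes from a "chaizi" corpus).
RADICAL_SPLITS = {"好": "女子", "朋": "月月"}

def shuffle_word(word: str) -> str:
    """Swap two random adjacent characters inside a multi-character Chinese word."""
    if len(word) < 2:
        return word
    i = random.randrange(len(word) - 1)
    chars = list(word)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def split_characters(word: str) -> str:
    """Replace each character that has a known decomposition with its radicals."""
    return "".join(RADICAL_SPLITS.get(c, c) for c in word)
```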
3.2 Argot
We propose a solution named Argot to generate adversarial
Chinese texts, by applying aforementioned perturbations to
target Chinese words in a sentence. The workflow of Argot is
shown in Figure 4.
Given a Chinese sentence (termed $x$) and a target NLP model $M$, Argot first segments $x$ into a list of Chinese words $W = \langle w_1, w_2, \dots, w_n \rangle$ using a standard NLP tool2. To retain readability, we only mutate a limited set of important words $W_{imp} \subseteq W$. Specifically, we sort the words by their contribution to the classification result of text $x$ under the NLP classifier $f$ of the target model $M$, and mark important words accordingly. Then, we iterate over the important words $W_{imp}$ in order and apply the aforementioned perturbations to each word. The target model $M$ is queried with each new sentence. If $M$ yields an expected label, the query sentence is reported as an adversarial example. Details are presented as follows.
Querying model M. When evaluating the importance of a word or the label of a mutated sentence, we query the target NLP model $M$. Given an input $x$, we assume $M$ returns the confidence score for every label, and outputs a label $y$ if and only if the confidence score $\mathrm{Score}_f(x, y)$ is the highest among all candidate labels. This is the normal setting for MLaaS APIs and a common assumption of previous black-box attacks [Li et al., 2018a].
1We use “*” to mark those novel methods.
2https://github.com/fxsjy/jieba
Figure 4: Overview of Argot's workflow.
When evaluating the importance of a word, we remove the word from the sentence and query the target model $M$ with the new sentence to compute the decrease of confidence. When evaluating the label of a mutated sentence, we design two score functions, $L_u$ and $L_t$, to guide non-targeted and targeted attacks respectively.
Non-targeted attack. After adding perturbations to the sentence $x$, a new sentence $x_a$ is yielded, where $x_a = h(x)$. We monitor the drop of the confidence score, $L_u = \mathrm{Score}_f(x, y) - \mathrm{Score}_f(x_a, y)$, to evaluate the perturbations' effect. Argot iteratively mutates the target sentence to generate a sentence $x_a^n$ after $n$ iterations, whose confidence score $\mathrm{Score}_f(x_a^n, y)$ drops to a very low level and is even lower than some other label's (e.g., $z$) confidence score $\mathrm{Score}_f(x_a^n, z)$.
Targeted attack. In this setting, we use the increase rather than the decrease of the target label $s$'s confidence score, $L_t = \mathrm{Score}_f(x_a, s) - \mathrm{Score}_f(x, s)$, to measure the effectiveness of perturbations. The ultimate goal is that, after multiple iterations, the label $s$'s confidence score $\mathrm{Score}_f(x_a^n, s)$ reaches a very high level and is higher than all other labels'.
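As a minimal sketch of how these score functions and stopping conditions can be computed in the black-box setting, the snippet below assumes a hypothetical `predict_proba(text)` interface that returns a dictionary mapping each label to its confidence score (this interface is an assumption, not a specific MLaaS API):

```python
def non_targeted_score(model, x, x_a, y):
    """L_u: drop of the original label y's confidence after perturbation."""
    return model.predict_proba(x)[y] - model.predict_proba(x_a)[y]

def targeted_score(model, x, x_a, s):
    """L_t: gain of the target label s's confidence after perturbation."""
    return model.predict_proba(x_a)[s] - model.predict_proba(x)[s]

def attack_succeeded(model, x_a, y, target=None):
    """Non-targeted: predicted label differs from y; targeted: it equals target."""
    scores = model.predict_proba(x_a)
    pred = max(scores, key=scores.get)
    return (pred == target) if target is not None else (pred != y)
```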
Finding important words. As Argot is a black-box attack, gradients are inaccessible, so we use the output label and confidence score as alternative indicators. In particular, for each word $w_i$ in $W$, we delete it from the original sentence, query $M$ with the new sentence, and compare the result to the original one's. If deleting $w_i$ makes the original label's confidence score drop more than deleting another word, then $w_i$ is more important than the other. After enumerating $W$, we get a list of words sorted by importance.
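A minimal sketch of this leave-one-out ranking, reusing the hypothetical `predict_proba` interface from above (the word list is assumed to come from a segmentation tool such as jieba):

```python
def rank_important_words(model, words, y):
    """Rank words by the confidence drop of label y caused by deleting each word."""
    base = model.predict_proba("".join(words))[y]
    drops = []
    for i, w in enumerate(words):
        reduced = "".join(words[:i] + words[i + 1:])   # sentence without word i
        drops.append((base - model.predict_proba(reduced)[y], i, w))
    # A larger confidence drop means a more important word.
    return [(w, i) for _, i, w in sorted(drops, reverse=True)]
```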
Adding perturbations. Given a seed sentence, one round of mutations is performed on it as follows. The important words are iterated one by one in decreasing order of importance. In each iteration, the selected word is mutated with the five candidate perturbations, yielding five new sentences which are queried with $M$. If any new sentence satisfies the ultimate goal of the non-targeted or targeted attack, Argot stops. Otherwise, this round stops after all important words have been iterated; the yielded sentence with the highest $L_u$ or $L_t$ is then chosen as the new seed sentence, and a new round of mutations starts. Among the five perturbations, splitting-character, synonyms and shuffle are straightforward, so we only elaborate on the remaining two as follows.
Pinyin. Homophones and typos in typing pinyin can be utilized to generate adversarial words with little influence on readability. We propose to replace the target word with the following three types of words:
Algorithm 1: The detail of function glyph(c).
Input: Original character c.
Output: New character c2 with similar glyph.
1: Read in all Chinese radicals into a list all_radicals.
2: similarity = 0, c2 = c, candidates = {}
3: radicals ← decompose_radicals(c)
4: for radical ∈ radicals do                ▷ Replace radicals
5:     for other ∈ all_radicals do
6:         if other ≠ radical then
7:             radicals′ ← replaceWith(radical, other)
8:             candidates ← candidates ∪ Val(radicals′)
9:         end if
10:    end for
11: end for
12: for radical ∈ radicals do               ▷ Delete radicals
13:    radicals′ ← deleteFromRadicals(radical)
14:    candidates ← candidates ∪ Val(radicals′)
15: end for
16: for other ∈ all_radicals do             ▷ Add radicals
17:    radicals′ ← addIntoRadicals(other)
18:    candidates ← candidates ∪ Val(radicals′)
19: end for
20: for c1 ∈ candidates do                  ▷ Assess the similarity of candidates
21:    score ← siamese_similarity(c, c1)
22:    if score > similarity then
23:        similarity = score, c2 = c1
24:    end if
25: end for
26: return c2
• A homophone word [Hiruncharoenvate et al., 2015], which has the same pinyin representation.
• A word that has a similar pinyin representation but different head and tail nasals, obtained by interchanging an and ang, in and ing, as well as en and eng in the pinyin representation.
• A word that has a similar pinyin but different rolling/flat tongues, obtained by interchanging c with ch, z with zh, and s with sh [Chen and Lee, 2000].
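The pinyin-based candidates can be sketched with the pypinyin package cited in the implementation details; the nasal and retroflex swaps follow the three rules just listed, while `pinyin_to_words` stands in for a pinyin-to-character tool such as Pinyin2Hanzi and is only a hypothetical placeholder here:

```python
from pypinyin import lazy_pinyin   # character -> toneless pinyin syllables

NASAL_SWAPS = [("ang", "an"), ("ing", "in"), ("eng", "en")]
TONGUE_SWAPS = [("zh", "z"), ("ch", "c"), ("sh", "s")]

def similar_pinyins(word):
    """Yield pinyin sequences for homophones and near-homophones of a word."""
    base = lazy_pinyin(word)        # e.g. "中文" -> ["zhong", "wen"]
    yield base                      # exact homophones
    for i, syl in enumerate(base):
        for a, b in NASAL_SWAPS + TONGUE_SWAPS:
            if a in syl:
                yield base[:i] + [syl.replace(a, b)] + base[i + 1:]
            elif b in syl:
                yield base[:i] + [syl.replace(b, a)] + base[i + 1:]

# candidates = {w for p in similar_pinyins(word) for w in pinyin_to_words(p)}
```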
Glyph. For a given word consisting of multiple characters, we replace some characters with ones of similar glyph. The core challenge is finding similar characters. For each character, we first decompose it into a set of radicals with an open source tool3. Then, we update the radical set with three strategies: replacing a radical with another one from the Chinese vocabulary, deleting one radical, or adding another radical. A special function Val() is utilized to check whether the new radical set can make up a legitimate Chinese character. If yes, it returns the yielded character; otherwise it returns NULL. In this way, we could generate a set of characters with similar glyph. The character that is most similar to the original character is used to mutate the sentence. Algorithm 1 shows the details.
3https://github.com/kfcd/chaizi/blob/master/chaizi-jt.txt
Figure 5: Architecture of Siamese network.
To measure the similarity between two characters, we develop a DNN model based on a Siamese network, which has been found effective in comparing input data [Chopra et al., 2005]. The structure is shown in Figure 5. It takes two pictures $I_1, I_2$ of characters as input and sends them to two identical CNN models $g$ with parameters $\theta$. In our prototype, the model $g$ consists of three convolutional layers, each followed by a max-pooling layer. The number of convolutional filters is 64 for the first layer and 128 for the last two. Then, the distance between the two pictures' features $g(I_1; \theta)$ and $g(I_2; \theta)$ is computed, and a confidence score is produced via a sigmoid. We selected 16,008 pairs of similar characters from an open source corpus4 as the training data. After training, it can calculate the similarity between any two characters.
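A minimal PyTorch sketch of such a Siamese similarity model is shown below; the kernel sizes, the 64x64 input resolution, and the absolute-difference-plus-linear scoring head are assumptions, since the paper only specifies the number of layers and filters:

```python
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    """Shared feature extractor g(.; theta): three conv layers, each followed by
    max-pooling; 64 filters in the first layer and 128 in the last two."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )

    def forward(self, x):
        return torch.flatten(self.features(x), start_dim=1)

class SiameseGlyph(nn.Module):
    """Both glyph images pass through the same CNN; their feature distance is
    mapped to a similarity score in (0, 1) via a sigmoid."""
    def __init__(self, feat_dim=128 * 8 * 8):   # assumes 1x64x64 input images
        super().__init__()
        self.encoder = CharCNN()
        self.head = nn.Linear(feat_dim, 1)

    def forward(self, img1, img2):
        f1, f2 = self.encoder(img1), self.encoder(img2)
        dist = torch.abs(f1 - f2)                # element-wise feature distance
        return torch.sigmoid(self.head(dist)).squeeze(-1)

# Usage: similarity = SiameseGlyph()(glyphs_a, glyphs_b)  # tensors of shape (B, 1, 64, 64)
```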
4 Evaluation
4.1 Evaluation Setup
NLP task. We select one of the most common NLP tasks,
i.e., text classification, as the NLP application to be attacked.
We use sentiment classification (2-class) to evaluate the non-
targeted attack, and use news classification (5-class) to eval-
uate the targeted attack.
Dataset. We create a dataset for sentiment classification us-
ing samples from an existing Chinese NLP dataset5. We
choose reviews of hotels, books, and electronic products crawled from e-commerce websites. Our dataset is composed
of 15,471 negative reviews and 16,188 positive reviews. We
manually check all these reviews and delete the ones that are
too short or have ambiguous labels. We divide this dataset
into 80% and 20% for training and validation respectively.
For news classification, we use the THUNews dataset [Sun et
al., 2016]. Out of the fourteen classes, we select five classes
from the dataset, i.e., affair, education, finance, society and
sports. For each class, we sample 25,000 texts as training set
and 5,000 as validation set.
Target Models. We choose two existing models built on top of CNN [Kim, 2014] and LSTM [Zhang et al., 2015] as the target models. They are widely used for many NLP tasks. As aforementioned, a big difference between English and Chinese is that there is a word separator (the space) in English, but not in Chinese. As a result, Chinese NLP models usually have two variants, i.e., character-based and word-based, depending on whether the model performs word segmentation and whether it embeds characters or words. We consider both variants in our evaluation.
Metrics. Three metrics are used to comprehensively eval-
uate the generated adversarial texts: success rate, perturba-
tions and efficiency.
4https://github.com/zzboy/chinese
5https://github.com/SophonPlus/ChineseNlpCorpus
Origin          | Target: Affair    | Education         | Finance           | Society           | Sports
                | S      P    T     | S      P    T     | S      P    T     | S      P    T     | S      P    T
Affair    char  | -      -    -     | 100%   0.16 0.37  | 99%    0.22 0.51  | 100%   0.10 0.29  | 99%    0.23 0.55
          word  | -      -    -     | 100%   0.08 0.20  | 100%   0.02 0.14  | 100%   0.01 0.11  | 96.8%  0.14 0.27
Education char  | 98.4%  0.25 0.52  | -      -    -     | 90%    0.32 0.69  | 99.2%  0.18 0.39  | 84%    0.37 0.79
          word  | 99%    0.04 0.13  | -      -    -     | 100%   0.02 0.12  | 100%   0.02 0.11  | 96.6%  0.10 0.23
Finance   char  | 100%   0.12 0.35  | 99.4%  0.15 0.34  | -      -    -     | 99.6%  0.12 0.30  | 98.4%  0.24 0.55
          word  | 99.8%  0.04 0.14  | 100%   0.05 0.17  | -      -    -     | 99.8%  0.02 0.11  | 97%    0.10 0.24
Society   char  | 100%   0.15 0.42  | 99.8%  0.15 0.38  | 99.8%  0.22 0.53  | -      -    -     | 99.4%  0.21 0.50
          word  | 100%   0.04 0.17  | 100%   0.06 0.21  | 100%   0.03 0.17  | -      -    -     | 100%   0.08 0.24
Sports    char  | 99%    0.17 0.41  | 99.2%  0.16 0.39  | 96.6%  0.28 0.59  | 99.6%  0.15 0.36  | -      -    -
          word  | 100%   0.02 0.14  | 100%   0.04 0.17  | 100%   0.03 0.16  | 99.8%  0.01 0.10  | -      -    -
Table 1: Targeted attack results of Argot (S: success rate, P: perturbation, T: Time/Char.; "-" marks origin = target).
Model Accuracy S P T
char-based CNN 89% 93.73% 0.14 0.28
word-based CNN 90% 98.31% 0.09 0.21
char-based LSTM 90% 99.73% 0.13 0.39
word-based LSTM 90% 99.05% 0.09 0.58
Table 2: Non-targeted attack results of Argot (S: success rate, P:
perturbation, T: Time/Char.).
Success rate reflects how many sentences in the validation set have been successfully mutated to fool the target NLP model. Perturbation reflects the average percentage of characters in a successful adversarial sentence that have been mutated. It also reflects the impact on readability, because more perturbations tend to make readability worse. Efficiency represents the average time consumed by Argot when generating adversarial texts. Longer input texts usually cost more time, since Argot is likely to mutate more words. As a result, we use the metric Time/Char., i.e., the time cost divided by the text length (seconds per character).
Implementation Details. The word segmentation tool for Chinese text pre-processing used in our experiment is jieba6. The word embedding scheme is trained from the Chinese wiki corpus [Li et al., 2018b] with an embedding dimension of 300, using word2vec [Mikolov et al., 2013]. To generate perturbations, we use a tool [Tiantian, 2015] to transform pinyin to characters and another tool [Hai Liang Wang, 2018] to transform characters to pinyin. In addition, we use a tool [Hai Liang Wang, 2017] to choose the appropriate synonym for a word.
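As a sketch of this preprocessing pipeline, assuming jieba for segmentation and a 300-dimensional word2vec model loaded with gensim (the embedding file name below is a placeholder, not the actual artifact used in the paper):

```python
import jieba
from gensim.models import KeyedVectors

# Placeholder path: a 300-dimensional word2vec model trained on the Chinese wiki corpus.
embeddings = KeyedVectors.load_word2vec_format("zh_wiki_word2vec_300d.txt", binary=False)

def embed_sentence(text):
    """Segment a Chinese sentence with jieba and look up word vectors."""
    words = jieba.lcut(text)   # no separators in Chinese, so segmentation is required
    return [embeddings[w] for w in words if w in embeddings]
```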
4.2 Evaluation Results
Non-targeted Attack. Table 2 shows the results of applying all 5 perturbations for the non-targeted attack. The second column shows the accuracy of the original black-box models (all around 90%). The best success rate is 99.73% on the char-based LSTM, while the worst case can still reach 93.73% on the char-based CNN. The fourth column shows the percentage of perturbations added by Argot compared to the text length, showing that minor mutations are sufficient to yield the desired adversarial texts.
Targeted Attack. Table 1 shows the results of the targeted attack. In most cases, Argot achieves more than a 95% success rate by introducing only a few perturbations. The small perturbation rate also implies that Argot is able to retain the semantics of the original sentences.
6https://github.com/fxsjy/jieba
Figure 6: The distribution of the 5 perturbations (pinyin, shuffle, synonyms, splitting-characters, glyph) for each evaluated binary-class and multi-class model.
Efficiency Analysis. From Table 2 and Table 1, we can see that Argot is fast at generating adversarial texts. In the best case, it can process one Chinese character within 0.1 second on average in the targeted attack setting, and 0.21 second for the non-targeted attack. Note that the input texts in the targeted attack dataset are longer than those in the non-targeted attack dataset on average (about 1,000 characters for targeted and 80 for non-targeted), which increases the time of generating adversarial texts. Also, the efficiency of Argot is strongly related to the specific attack target. For example, adversarial perturbation from Affair to Society is much more efficient compared to that from Education to Sports. We speculate the root cause is that the categories Affair and Society are very similar and overlap to some extent, which makes the targeted attack easier, while the categories Education and Sports are very different.
Contributions of Perturbations. For all adversarial texts, we evaluated which perturbations are added to the original texts. The distributions of the 5 perturbation methods are shown in Figure 6. As we can see, the methods pinyin, glyph and shuffle account for the largest proportion. In particular, pinyin accounts for the most perturbations overall, which can be explained by the large perturbation space under this method. Pinyin and tone determine the pronunciation of a character directly, and they also contain semantic meanings that cannot be provided by the writing system. Existing Chinese NLP models focus on achieving robust classification results on words and characters, and thus they are vulnerable to pinyin perturbations. Other than pinyin, glyph contributes more among the char-based models and shuffle contributes more for the word-based models, mainly because those models concentrate on different sets of features.
Readability Evaluation. The ultimate goal of adversarial
texts is to mislead NLP models but not humans. So, we fur-
ther evaluated the human readability of the generated adver-
sarial texts, by conducting a user study. We randomly se-
lected 50 adversarial texts generated by Argot, and queried
25 volunteer native Chinese speakers around the age of 24, with a gender ratio of 1:1.

Figure 7: Relation of perturbations and success rate. The x-axis refers to the perturbation threshold and models; the y-axis refers to the success rate of non-targeted and targeted attacks.
sarial text, volunteers are given the pair of original text and
adversarial text, and three questions to answer: (1) whether
the adversarial text is comprehensible, (2) what is the label of
the adversarial text, and (3) the semantic quality score (0-5)
of the adversarial text compared to the original text. From the results of this survey, 84% of volunteers think that the generated adversarial texts are readable. 91% of them consider the label of the generated adversarial texts the same as that of the original texts, implying that the adversarial texts do not change the semantics significantly. The average score of semantic quality is 4.6, very close to the highest score of 5. In summary, we believe that the generated adversarial texts are of very high quality in terms of readability and semantic understanding.
Factors Affecting Success Rates. The first factor we look
into is the perturbation rate. Figure 7 shows the changes of
success rate given different perturbation rate thresholds. We can see that the success rate decreases when fewer perturbations are allowed, in all attack settings. Furthermore, the success rate stays at a relatively high level even when we reduce the perturbation threshold to a relatively low level. Intuitively, the more perturbations added to the original text, the greater the impact on readability will be. As a result, we can trade perturbation rate against success rate to get adversarial texts of higher quality. In addition, from Table 1 and Table 2, we can see that the success rates against word-based models are higher than those against char-based models. This suggests that char-based Chinese NLP models are in general more robust than word-based NLP models against adversarial perturbations. Another factor that may affect the success rate is the length of the original text. Given a longer input text, Argot could generate adversarial texts with a higher success rate, probably because longer input texts leave more opportunities for Argot to add perturbations.
Defenses. Based on the perturbation methods Argot used,
we proposed two defense solutions.
Embedding Pinyin Features. The robustness of Chinese
NLP models could benefit from adding information from
pinyin [Zhu et al., 2018]. We therefore propose to integrate
the pinyin embedding with word embedding to improve the
robustness of the NLP models. Results showed that this defense lowers the success rate of attacks from 70.1% to 65.4% on average.
Embedding Glyph Features. Adding the glyph information
to Chinese NLP models could also improve their robust-
ness [Wu et al., 2019]. Results showed that this defense lowers the success rate from 80.1% to 77.7% on average, if Argot only uses glyph perturbations.
The results show that integrating pinyin and glyph features of
Chinese texts indeed improves Chinese NLP models’ robust-
ness. However, the success rates of our attacks are still high.
A better defense is needed and we leave it as our future work.
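Both defenses amount to fusing extra pinyin or glyph features with the word embedding before the classifier. A hedged sketch of this fusion is shown below; the embedding dimensions and the id-based pinyin/glyph lookups are assumptions made for illustration (Glyce-style defenses instead encode glyph images with a CNN):

```python
import torch
import torch.nn as nn

class FusedEmbedding(nn.Module):
    """Concatenate word, pinyin and glyph embeddings as a more robust input layer."""
    def __init__(self, n_words, n_pinyins, n_glyphs, d_word=300, d_extra=64):
        super().__init__()
        self.word = nn.Embedding(n_words, d_word)
        self.pinyin = nn.Embedding(n_pinyins, d_extra)   # pinyin id per token
        self.glyph = nn.Embedding(n_glyphs, d_extra)     # glyph-cluster id per token

    def forward(self, word_ids, pinyin_ids, glyph_ids):
        # The downstream CNN/LSTM classifier consumes the fused vectors.
        return torch.cat(
            [self.word(word_ids), self.pinyin(pinyin_ids), self.glyph(glyph_ids)],
            dim=-1,
        )
```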
5 Related Work
Adversarial texts have been studied a lot in the literature.
Most of them target English texts though. These solutions
can be categorized into white-box and black-box settings.
White-box setting. Attackers mainly use gradients computed by the model to assess each word's impact and add perturbations to important words to generate adversarial English texts [Papernot et al., 2016; Li et al., 2018a; Liang et al., 2017; Samanta and Mehta, 2017; Gong et al., 2018; Miyato et al., 2017]. In addition, the interpretability of adversarial texts has been studied recently [Sato et al., 2018].
Black-box setting. This setting assumes the adversary can only access the model's input and output. Gao et al. proposed TS (temporal score) and TTS (temporal tail score) [Gao et al., 2018] to evaluate the importance of each word and add letter-level perturbations to the text. Kuleshov et al. [Kuleshov et al., 2018] and Alzantot et al. [Alzantot et al., 2018] use similar words in the vector space to find adversarial texts. Li et al. [Li et al., 2018a] and Liang et al. [Liang et al., 2017] proposed iterative approaches that mutate sentences at the word level and evaluated them on real-world NLP applications. As Chinese text is composed in a different way (characters and radicals), we proposed a novel solution in this paper.
6 Conclusion
In this study, we analyze the unique characteristics of Chinese compared to English and the limitations of existing adversarial text generation solutions for English. Based on these characteristics, we propose a black-box adversarial Chinese text generation solution Argot for two attack scenarios, i.e., targeted attack and non-targeted attack. In Argot, we use five methods, pinyin, glyph, splitting-characters, shuffle, and synonyms, to add perturbations. We evaluate the texts generated by Argot on three metrics (success rate, perturbation and efficiency). The results show that Argot could generate adversarial texts with a high success rate and relatively small perturbations. Meanwhile, the generated texts maintain good readability. According to our user study, most volunteers can understand the meaning and assign the correct label to the adversarial texts.
Acknowledgements
This work was supported in part by National Key Re-
search and Development Program of China under Grant
2018YFB2101501 and 2016QY12Z2103, National Natu-
ral Science Foundation of China under Grant 61772308,
61972224, U1736209, U1836213 and U1636204 and BN-
Rist Network and Software Security Research Program under
Grant BNR2019TD01004 and BNR2019RC01009.
References
[Alzantot et al., 2018] Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. Generating natural language adversarial examples. arXiv preprint arXiv:1804.07998, 2018.
[Chen and Lee, 2000] Zheng Chen and Kai-Fu Lee. A new statistical approach to chinese pinyin input. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 241–247, 2000.
[Chopra et al., 2005] Sumit Chopra, Raia Hadsell, Yann LeCun, et al. Learning a similarity metric discriminatively, with application to face verification. In CVPR (1), pages 539–546, 2005.
[Ebrahimi et al., 2017] Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. Hotflip: White-box adversarial examples for text classification. arXiv preprint arXiv:1712.06751, 2017.
[Gao et al., 2018] Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. Black-box generation of adversarial text sequences to evade deep learning classifiers. In 2018 IEEE Security and Privacy Workshops (SPW), pages 50–56. IEEE, 2018.
[Gong et al., 2018] Zhitao Gong, Wenlu Wang, Bo Li, Dawn Song, and Wei-Shinn Ku. Adversarial texts with gradient methods. arXiv preprint arXiv:1801.07175, 2018.
[Hai Liang Wang, 2017] Hu Ying Xi Hai Liang Wang. Synonyms, 2017.
[Hai Liang Wang, 2018] Hu Ying Xi Hai Liang Wang. python-pinyin, 2018.
[Hiruncharoenvate et al., 2015] Chaya Hiruncharoenvate, Zhiyuan Lin, and Eric Gilbert. Algorithmically bypassing censorship on sina weibo with nondeterministic homophone substitutions. In Ninth International AAAI Conference on Web and Social Media, 2015.
[Kim, 2014] Yoon Kim. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882, 2014.
[Kuleshov et al., 2018] Volodymyr Kuleshov, Shantanu Thakoor, Tingfung Lau, and Stefano Ermon. Adversarial examples for natural language classification problems. 2018.
[Li et al., 2018a] Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. Textbugger: Generating adversarial text against real-world applications. arXiv preprint arXiv:1812.05271, 2018.
[Li et al., 2018b] Shen Li, Zhe Zhao, Renfen Hu, Wensi Li, Tao Liu, and Xiaoyong Du. Analogical reasoning on chinese morphological and semantic relations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 138–143. Association for Computational Linguistics, 2018.
[Liang et al., 2017] Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. Deep text classification can be fooled. arXiv preprint arXiv:1704.08006, 2017.
[Mikolov et al., 2013] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
[Miyato et al., 2017] Takeru Miyato, Andrew M. Dai, and Ian J. Goodfellow. Adversarial training methods for semi-supervised text classification. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, 2017.
[Nobata et al., 2016] Chikashi Nobata, Joel Tetreault, Achint Thomas, Yashar Mehdad, and Yi Chang. Abusive language detection in online user content. In Proceedings of the 25th International Conference on World Wide Web, pages 145–153. International World Wide Web Conferences Steering Committee, 2016.
[Papernot et al., 2016] Nicolas Papernot, Patrick McDaniel, Ananthram Swami, and Richard Harang. Crafting adversarial input sequences for recurrent neural networks. In MILCOM 2016 - 2016 IEEE Military Communications Conference, pages 49–54. IEEE, 2016.
[Ravi and Ravi, 2015] Kumar Ravi and Vadlamani Ravi. A survey on opinion mining and sentiment analysis: tasks, approaches and applications. Knowledge-Based Systems, 89:14–46, 2015.
[Samanta and Mehta, 2017] Suranjana Samanta and Sameep Mehta. Towards crafting text adversarial samples. arXiv preprint arXiv:1707.02812, 2017.
[Sato et al., 2018] Motoki Sato, Jun Suzuki, Hiroyuki Shindo, and Yuji Matsumoto. Interpretable adversarial perturbation in input embedding space for text. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 4323–4330, 2018.
[Sun et al., 2016] Maosong Sun, Xinxiong Chen, Kaixu Zhang, Zhipeng Guo, and Zhiyuan Liu. Thulac: An efficient lexical analyzer for chinese. Technical report, 2016.
[Tiantian, 2015] Le Tiantian. Pinyin2hanzi, 2015.
[Wu et al., 2019] Wei Wu, Yuxian Meng, Qinghong Han, Muyu Li, Xiaoya Li, Jie Mei, Ping Nie, Xiaofei Sun, and Jiwei Li. Glyce: Glyph-vectors for chinese character representations. arXiv preprint arXiv:1901.10125, 2019.
[Zhang et al., 2015] Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems, pages 649–657, 2015.
[Zhu et al., 2018] Wenhao Zhu, Xin Jin, Jianyue Ni, Baogang Wei, and Zhiguo Lu. Improve word embedding using both writing and pronunciation. PloS one, 13(12):e0208785, 2018.