A Statistical Model for Near-Synonym Choice
DIANA INKPEN
University of Ottawa
We present an unsupervised statistical method for automatic choice of near-synonyms when the
context is given. The method uses the Web as a corpus to compute scores based on mutual in-
formation. Our evaluation experiments show that this method performs better than two previous
methods on the same task. We also describe experiments in using supervised learning for this
task. We present an application to an intelligent thesaurus. This work is also useful in machine
translation and natural language generation.
Categories and Subject Descriptors: I.2.7 [Artificial Intelligence]: Natural Language Process-
ing—Text analysis; I.2.6 [Artificial Intelligence]: Learning—Induction, Knowledge acquisition
General Terms: Algorithms, Languages, Performance
Additional Key Words and Phrases: Lexical choice, near-synonyms, semantic similarity, Web as a
corpus, intelligent thesaurus
ACM Reference Format:
Inkpen, D. 2007. A statistical model for near-synonym choice. ACM Trans. Speech
Lang. Process. 4, 1, Article 2 (January 2007), 17 pages. DOI =10.1145/1187415.1187417
http://doi.acm.org/10.1145/1187415.1187417.
1. INTRODUCTION
When writing a text, a poorly chosen word can convey unwanted connotations,
implications, or attitudes. Similarly, in machine translation and natural lan-
guage generation systems, the choice among near-synonyms needs to be made
carefully. By near-synonyms we mean words that have the same meaning but
differ in lexical nuances. For example, error, mistake, and blunder all denote a
generic type of error, but blunder carries an implication of accident or ignorance.
In addition to paying attention to lexical nuances, when choosing a word, we
need to make sure it fits well with the other words in a sentence. In this arti-
cle, we investigate how the collocational properties of near-synonyms can help
This work is funded by the Natural Sciences and Engineering Research Council of Canada and the
University of Ottawa.
Author’s address: School of Information Technology and Engineering, University of Ottawa, 800
King Edward, Ottawa, ON, Canada, K1N 6N5; email: diana@site.uottawa.ca.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is
granted without fee provided that copies are not made or distributed for profit or direct commercial
advantage and that copies show this notice on the first page or initial screen of a display along
with the full citation. Copyrights for components of this work owned by others than ACM must be
honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers,
to redistribute to lists, or to use any component of this work in other works requires prior specific
permission and/or a fee. Permissions may be requested from Publications Dept., ACM, Inc., 2 Penn
Plaza, Suite 701, New York, NY 10121-0701 USA, fax +1 (212) 869-0481, or permissions@acm.org.
© 2007 ACM 1550-4875/2007/01-ART2 $5.00 DOI 10.1145/1187415.1187417 http://doi.acm.org/10.1145/1187415.1187417
with choosing the best word in each context. This problem is difficult because
near-synonyms have senses that are very close to each other, and, therefore,
they occur in similar contexts.
The work we present here is needed in two of our applications. The first
one is an intelligent thesaurus. A writer can access a thesaurus to retrieve
words that are similar to a given word when there is a need to avoid repeat-
ing the same word or when the word does not seem to be the best choice in
the context. A standard thesaurus does not offer any explanation about the
differences in nuances of meaning between the possible word choices. More-
over, a standard thesaurus tool does not attempt to order the choices to suit a
particular writing context. Our intelligent thesaurus offers explanations and
orders the choices using their collocational properties relative to the writing
context.
The second application is a natural language generation (NLG)
system [Inkpen and Hirst 2003] that uses symbolic knowledge of
near-synonym differences. This knowledge was acquired by applying
information extraction techniques to entries in various dictionaries. We
included a preliminary collocation module that reduces the risk of choos-
ing a near-synonym that does not fit with the other words in a gener-
ated sentence (i.e., violates collocational constraints). The work presented
in this article allows for a more comprehensive near-synonym collocation
module.
More specifically, the task we address in this article is the selection of the best
near-synonym that should be used in a particular context. The natural way to
validate an algorithm for this task would be to ask human readers to evaluate
the quality of the algorithm’s output, but this kind of evaluation would be very
laborious. Instead, we validate the algorithms by deleting selected words from
sample sentences to see whether the algorithms can restore the missing words,
that is, we create a lexical gap and evaluate the ability of the algorithms to fill
the lexical gap. Two examples are presented in Figure 1. All the near-synonyms
of the original word, including the word itself, become the choices in the solution
set (see the figure for two examples of solution sets). The task is to automatically
fill the gap with the best choice in the particular context. We present a method of
scoring the choices. The highest scoring near-synonym will be chosen. In order to
evaluate how well our method works, we consider that the only correct solution
is the original word. This will cause our evaluation scores to underestimate
the performance of our method as more than one choice will sometimes be a
perfect solution. Moreover, what we consider to be the best choice is the typical
usage in the corpus, but it may vary from writer to writer. Nonetheless, it is
a convenient way of producing test data in an automatic way. To verify how
difficult the task is for humans, we perform experiments with human judges
on a sample of the test data. The statistical scoring method that we propose
here is based on mutual information scores of each candidate with the words in
the context. We explore how far such a method can go when using the Web as a
corpus. We estimate the counts by using the Waterloo MultiText System [Clarke
and Terra 2003b] with a corpus of about one terabyte of text collected by a Web
crawler.
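To make the test-data construction concrete, the following sketch shows how lexical-gap instances of the kind in Figure 1 could be generated. The near-synonym group is the one for error from Figure 1; the whitespace tokenization, the gap marker, and the single-group inventory are illustrative simplifications, not the exact procedure used in our experiments.

# Sketch: automatic creation of lexical-gap test instances from plain sentences.
# The near-synonym group below is taken from Figure 1; everything else is an
# illustrative simplification (single-token matching misses phrases like "faux pas").

NEAR_SYNONYM_GROUPS = [
    ["mistake", "blooper", "blunder", "boner", "contretemps",
     "error", "faux pas", "goof", "slip", "solecism"],
]

def make_test_instances(sentence):
    """Yield (sentence_with_gap, solution_set, original_word) triples."""
    tokens = sentence.split()          # naive whitespace tokenization
    for i, token in enumerate(tokens):
        word = token.lower().strip(".,;:")
        for group in NEAR_SYNONYM_GROUPS:
            if word in group:
                gapped = " ".join(tokens[:i] + ["........."] + tokens[i + 1:])
                yield gapped, group, word

sentence = ("This could be improved by more detailed consideration of the "
            "processes of error propagation inherent in digitizing procedures.")
for gapped, solutions, original in make_test_instances(sentence):
    print(gapped)
    print("choices:", solutions, "| expected:", original)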
Sentence: This could be improved by more detailed consideration of the processes of ......... propagation inherent
in digitizing procedures.
Original near-synonym: error
Solution set: mistake, blooper, blunder, boner, contretemps, error, faux pas, goof, slip, solecism
Sentence: The day after this raid was the official start of operation strangle, an attempt to completely destroy
the ......... lines of communication.
Original near-synonym: enemy
Solution set: opponent, adversary, antagonist, competitor, enemy, foe, rival
Fig. 1. Examples of sentences with a lexical gap and candidate near-synonyms to fill the gap.
2. RELATED WORK
The idea of using the Web as a corpus of texts has been exploited by many
researchers. Radev and McKeown [1997] acquired different ways of referring
to the same named entity from the Web. Grefenstette [1999] used the Web for
example-based machine translation; Kilgarriff [2001] investigated the type of
noise in Web data; Mihalcea and Moldovan [1999] and Agirre and Martinez
[2000] used it as an additional resource for word sense disambiguation; Resnik
[1999] mined the Web for bilingual texts; Turney [2001] used Web frequency
counts to compute information retrieval-based mutual-information scores. In a
Computational Linguistics special issue which focused on the Web as a corpus
[Kilgarriff and Grefenstette 2003], Keller and Lapata [2003] show that Web
counts correlate well with counts collected from a balanced corpus: the size of
the Web compensates for the noise in the data. In this article, we are using a very
large corpus of Web pages to address a problem that has not been successfully
solved before.
In fact, the only work that addresses exactly the same task is that of Edmonds
[1997] as far as we are aware. Edmonds gives a solution based on a lexical co-
occurrence network that included second-order co-occurrences. We use a much
larger corpus and a simpler method, and we obtain much better results.
Our task has similarities to the word sense disambiguation task. Our near-
synonyms have senses that are very close to each other. In Senseval, some
of the fine-grained senses are also close to each other, so they might occur
in similar contexts, while the coarse-grained senses are expected to occur in
distinct contexts. In our case, the near-synonyms are different words to choose
from, not the same word with different senses.
Turney et al. [2003] addressed the multiple-choice synonym problem: given a
word, choose a synonym for that word among a set of possible solutions. In this
case, the solutions contain one synonym and some other (unrelated) words. They
achieve high performance by combining classifiers. Clarke and Terra [2003a]
addressed the same problem as Turney et al., using statistical association
measures computed with counts from the Waterloo terabyte corpus. In our case,
all the possible solutions are synonyms of each other, and the task is to choose
one that best matches the context: the sentence in which the original synonym
is replaced with a gap. It is much harder to choose between words that are
near-synonyms because the context features that differentiate a word from
other words might be shared among the near-synonyms. Therefore, the choice
must be made on the basis of the few features that are discriminative.
3. A NEW STATISTICAL METHOD FOR NEAR-SYNONYM CHOICE
Our method computes a score for each candidate near-synonym that could fill
in the gap. The near-synonym with the highest score is the proposed solution.
The score for each candidate reflects how well a near-synonym fits in with the
context. It is based on the mutual information scores between a near-synonym
and the content words in the context (we filter out the stopwords).
The pointwise mutual information (PMI) between two words x and y compares the probability of observing the two words together (their joint probability) to the probabilities of observing x and y independently (the probability of their occurring together by chance) [Church and Hanks 1991]:

$$\mathrm{PMI}(x, y) = \log_2 \frac{P(x, y)}{P(x)\,P(y)}.$$

The probabilities can be approximated by $P(x) = C(x)/N$, $P(y) = C(y)/N$, and $P(x, y) = C(x, y)/N$, where $C$ denotes frequency counts and $N$ is the total number of words in the corpus. Therefore,

$$\mathrm{PMI}(x, y) = \log_2 \frac{C(x, y) \cdot N}{C(x) \cdot C(y)},$$

where $N$ can be ignored in comparisons since it is the same in all cases.
We model the context as a window of size 2k around the gap (the missing word): k words to the left and k words to the right of the gap. If the sentence is $s = \ldots w_1 \ldots w_k \;\mathrm{Gap}\; w_{k+1} \ldots w_{2k} \ldots$, then for each near-synonym $NS_i$ from the group of candidates, the score is computed by the following formula:

$$\mathrm{Score}(NS_i, s) = \sum_{j=1}^{k} \mathrm{PMI}(NS_i, w_j) + \sum_{j=k+1}^{2k} \mathrm{PMI}(NS_i, w_j).$$
We also experimented with replacing the sum in this formula with the maximum, to see whether a single word in the context has more influence than the sum of all contributions.
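A minimal sketch of this scoring procedure is given below. The unigram_count and cooccurrence_count arguments are hypothetical placeholders for the corpus queries described in the next paragraph; the stopword list is abridged, and treating unseen pairs as contributing zero is an assumption, not a detail taken from this article.

import math

STOPWORDS = {"the", "a", "an", "of", "in", "to", "and", "is", "this", "by"}  # abridged list

def pmi(x, y, unigram_count, cooccurrence_count):
    # Pointwise mutual information; the corpus size N is dropped because it is
    # constant across candidates and does not change the ranking.
    c_xy = cooccurrence_count(x, y)
    c_x, c_y = unigram_count(x), unigram_count(y)
    if c_xy == 0 or c_x == 0 or c_y == 0:
        return 0.0  # assumption: unseen pairs contribute nothing to the score
    return math.log2(c_xy / (c_x * c_y))

def score_candidate(candidate, left_context, right_context,
                    unigram_count, cooccurrence_count, use_max=False):
    # Score one near-synonym against the content words around the gap.
    words = [w for w in left_context + right_context if w.lower() not in STOPWORDS]
    contributions = [pmi(candidate, w, unigram_count, cooccurrence_count) for w in words]
    if not contributions:
        return 0.0
    return max(contributions) if use_max else sum(contributions)

def choose_near_synonym(candidates, left_context, right_context,
                        unigram_count, cooccurrence_count):
    # The highest-scoring near-synonym is the proposed gap filler.
    return max(candidates,
               key=lambda ns: score_candidate(ns, left_context, right_context,
                                              unigram_count, cooccurrence_count))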
Because we are using the Waterloo terabyte corpus and we issue queries to its search engine, we have several possibilities for computing the frequency counts. C(x, y) can be the number of co-occurrences of x and y when y immediately follows x, or the distance between x and y can be up to q. We call q the query frame size. The tool for accessing the corpus allows us to use various values of q in queries. We used queries of the type [q]>(x .. y), which ask how many times x is followed by y in a frame of size q.

The search engine also allows us to approximate word counts with document counts. If the counts C(x), C(y), and C(x, y) are approximated as the number of documents in which they appear, we obtain the PMI-IR formula [Turney 2001]. The queries we need to send to the search engine are the same, but they are restricted to document counts: C(x) is the number of documents in which x occurs; C(x, y) is the number of documents in which x is followed by y in a frame of size q; the query is formulated as (doc../doc)>[q]>(x .. y).
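The two ways of estimating the counts can be viewed as interchangeable backends for the scoring sketch above. The query strings below follow the [q]>(x .. y) and (doc../doc)>[q]>(x .. y) forms quoted in this section, but run_multitext_query is a hypothetical wrapper; the actual MultiText client interface is not described in this article.

# Sketch of the two count backends: word counts (PMI) vs. document counts (PMI-IR).

Q = 5  # query frame size, as tuned on the development set (Section 6.1)

def run_multitext_query(query):
    raise NotImplementedError("placeholder for the corpus search-engine client")

def word_cooccurrence_count(x, y, q=Q):
    # how many times x is followed by y within a frame of q words
    return run_multitext_query(f"[{q}]>({x} .. {y})")

def doc_cooccurrence_count(x, y, q=Q):
    # number of documents in which x is followed by y within a frame of q words
    return run_multitext_query(f"(doc../doc)>[{q}]>({x} .. {y})")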
Other statistical association measures, such as log-likelihood, could be used.
We tried only PMI because it is easy to compute on a Web corpus and because
Clarke and Terra [2003a] showed that PMI performed better than other mea-
sures in their experiments.
Table I. Examples of Collocations and Anticollocations
(the ∗ Indicates the Anticollocations)
ghastly mistake spelling mistake
ghastly error spelling error
ghastly blunder spelling blunder
ghastly faux pas spelling faux pas
ghastly blooper spelling blooper
ghastly solecism spelling solecism
ghastly goof spelling goof
ghastly contretemps spelling contretemps
ghastly boner spelling boner
ghastly slip spelling slip
We present the results in Section 6.1, where we compare our method to a
baseline algorithm that always chooses the most frequent near-synonym and
to Edmonds’s method for the same task on the same data set. First, however,
we present two other methods to which we compare our results.
4. THE ANTICOLLOCATIONS METHOD
For the task of near-synonym choice, another method that we implemented
is the anticollocations method. By anticollocation we mean a combination of
words that a native speaker would not use and therefore should not be used
when automatically generating text. This method uses a knowledge-base of
collocational behavior of near-synonyms that we acquired in previous work
[Inkpen and Hirst 2002]. To build this knowledge base, we acquired collocations
of the near-synonyms from a corpus. For each word that collocated with a near-
synonym, we used a t-test (computed with Web counts collected through the
AltaVista search engine) to learn whether the word forms a collocation or an
anticollocation with other near-synonyms in the same group. A fragment of the
knowledge base is presented in Table I for the near-synonyms of the word error
and two collocate words, ghastly and spelling. The lines marked by ∗ represent
anticollocations, and the rest represent strong collocations.
The anticollocations method simply ranks the strong collocations higher than
the anticollocations. In case of ties, it chooses the most frequent near-synonym.
In Section 6.2, we present the results from comparing this method to the method
from the previous section.
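A minimal sketch of this ranking rule follows, assuming the knowledge base is available as sets of (collocate, near-synonym) pairs and that corpus frequencies are available for breaking ties; the entries and frequency figures shown are illustrative stand-ins, not values taken from the actual knowledge base.

# Sketch of the anticollocations method: prefer candidates that form known
# strong collocations with a context word; break ties by corpus frequency.

STRONG_COLLOCATIONS = {("spelling", "mistake"), ("spelling", "error")}   # hypothetical entries
ANTICOLLOCATIONS = {("spelling", "blooper"), ("spelling", "solecism")}   # hypothetical entries
CORPUS_FREQUENCY = {"mistake": 12000, "error": 15000, "blooper": 300, "solecism": 90}

def rank_candidates(candidates, context_words):
    def key(near_synonym):
        collocation_score = 0
        for word in context_words:
            if (word, near_synonym) in STRONG_COLLOCATIONS:
                collocation_score += 1
            elif (word, near_synonym) in ANTICOLLOCATIONS:
                collocation_score -= 1
        # strong collocations first, then the most frequent near-synonym on ties
        return (collocation_score, CORPUS_FREQUENCY.get(near_synonym, 0))
    return sorted(candidates, key=key, reverse=True)

print(rank_candidates(["mistake", "error", "blooper", "solecism"], ["spelling"]))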
5. A SUPERVISED LEARNING METHOD
We can also apply supervised learning techniques to our task. It is easy to obtain
labelled training data in the same way that we collected test data for the two
unsupervised methods presented previously. We train classifiers for each group
of near-synonyms. The classes are the near-synonyms in the solution set. Each
sentence is converted into a vector of features to be used for training the su-
pervised classifiers. We used two types of features. The first type consists of
the PMI scores of the left and right context with each class (i.e., with each near-
synonym from the group). The number of features of this type is twice
the number of classes: one feature for the score between the near-synonym and
the part of the sentence at the left of the gap, and one feature for the score
between the near-synonym and the part of the sentence at the right of the gap.
The second type of feature consists of the words in the context windows. For each
group of near-synonyms, we used as features the 500 most frequent words sit-
uated close to the gaps in a development set. The value of a word feature for
each training example is 1 if the word is present in the sentence (at the left or
at the right of the gap), and 0 otherwise. We trained classifiers using several
machine learning algorithms to see which one is best at discriminating among
the near-synonyms. In Section 6.3, we present the results of several classifiers.
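The feature construction can be sketched as follows. The article uses classifiers from the Weka package; the fragment below uses scikit-learn only as an analogous toolkit, reuses the score_candidate function from the earlier sketch, and the top_500_words and training-data names are placeholders rather than the actual implementation.

from sklearn.tree import DecisionTreeClassifier

def build_features(left_context, right_context, near_synonyms, frequent_words,
                   unigram_count, cooccurrence_count):
    # 2 * len(near_synonyms) PMI-score features, followed by binary indicators
    # for the most frequent words near the gaps (500 per group in the article).
    features = []
    for ns in near_synonyms:
        features.append(score_candidate(ns, left_context, [],
                                        unigram_count, cooccurrence_count))
        features.append(score_candidate(ns, [], right_context,
                                        unigram_count, cooccurrence_count))
    present = {w.lower() for w in left_context + right_context}
    features.extend(1 if w in present else 0 for w in frequent_words)
    return features

# Training (one classifier per near-synonym group; the class label is the
# near-synonym that originally filled the gap):
#   X = [build_features(left, right, group, top_500_words,
#                       unigram_count, cooccurrence_count)
#        for (left, right) in training_contexts]
#   y = original_near_synonyms
#   classifier = DecisionTreeClassifier().fit(X, y)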
A disadvantage of the supervised method is that it requires training for each
group of near-synonyms. Additional training is required whenever we add more
near-synonyms to our knowledge base. An advantage of this method is that we
could improve the accuracy by using a combination of classifiers and by trying
other possible features. We think that part-of-speech features of the content
words in the context may not be very useful since all the possible solutions
have the same part-of-speech and might have similar syntactic behavior. Maybe
some function words immediately before the gaps could discriminate among the
near-synonyms in some groups.
6. EVALUATION
6.1 Comparison to Edmonds’s Method
In this section, we present results of the statistical method explained in Section
3. We compare our results with those of Edmonds [1997] whose solution used
the texts from the Wall Street Journal (WSJ) for the year 1989 to build a lexi-
cal co-occurrence network for each of the seven groups of near-synonyms from
Table II. The network included second-order co-occurrences. Edmonds used the
WSJ 1987 texts for testing and reported accuracies only a little higher than the
baseline. The near-synonyms in the seven groups were chosen to have low pol-
ysemy. This means that some sentences with wrong senses of near-synonyms
might be in the automatically produced test set, but hopefully not many.
For comparison purposes, in this section, we use the same test data (WSJ
1987) and the same groups of near-synonyms. Our method is based on mutual
information not on co-occurrence counts. Our counts are collected from a much
larger corpus. The seven groups of near-synonyms used by Edmonds are listed
in the first column of Table II. If we had used groups of synonyms from
WordNet, we would probably have obtained similar results, because the words in the
seven groups differ only a little. Here are the words from the WordNet synsets:
mistake, error, fault
job, task, chore
duty, responsibility, obligation
difficult, hard
material, stuff
put up, provide, offer
decide, settle, resolve, adjudicate.
Table II. Comparison Between the New Statistical Method from Section 3, Baseline
Algorithm, and Edmonds's Method (See details about Experiment 1 in Section 6.1.)

Test Set (Exp1 Data)                           | Number of Cases | Baseline | Edmonds's Method | New Method (Docs) | New Method (Words)
difficult, hard, tough                         | 6,630  | 41.7% | 47.9% | 61.0% | 59.1%
error, mistake, oversight                      | 1,052  | 30.9% | 48.9% | 66.4% | 61.5%
job, task, duty                                | 5,506  | 70.2% | 68.9% | 69.7% | 73.3%
responsibility, burden, obligation, commitment | 3,115  | 38.0% | 45.3% | 64.1% | 66.0%
material, stuff, substance                     | 1,715  | 59.5% | 64.6% | 68.6% | 72.2%
give, provide, offer                           | 11,504 | 36.7% | 48.6% | 52.0% | 52.7%
settle, resolve                                | 1,594  | 37.0% | 65.9% | 74.5% | 76.9%
ALL (average)                                  | 31,116 | 44.8% | 55.7% | 65.1% | 66.0%
Before we look at the results, we mention that the accuracy values we
compute are the percentage of correct choices when filling in the gap with
the winning near-synonym. The expected solution is the near-synonym that
was originally in the sentence, and it was taken out to create the gap. This
measure is conservative; it does not consider cases when more than one solu-
tion is correct.
Table II presents the comparative results for seven groups of near-synonyms.
The last row averages the accuracies for all of the test sentences. The second
column shows how many test sentences we collected for each near-synonym
group. The third column is for the baseline algorithm that always chooses the
most frequent near-synonym. The fourth column presents the results reported
in Edmonds [1997]. The fifth column presents the result of our method when
using document counts in PMI-IR, and the last column is for the same method
when using word counts in PMI. We show in bold the best accuracy figure for
each dataset. We notice that the automatic choice is more difficult for some
near-synonym groups than for others.
To fine-tune our statistical method, we used the dataset for the near-
synonyms of the word difficult collected from the WSJ 1989 corpus as a de-
velopment set. We varied the context window size k and the query frame size q,
and determined good values for these parameters. The best results were
obtained for small window sizes, k = 1 and k = 2 (meaning k words to the left
and k words to the right of the gap). For each k, we varied the query frame size
q. The results are best for a relatively small query frame, q = 3, 4, 5, when the
query frame is the same as or slightly larger than the context window. The results
are worse for a very small query frame, q = 1, 2. The results presented in the
rest of the article are for k = 2 and q = 5. For all the other datasets used in this
article (from WSJ 1987 and BNC), we use the parameter values as determined
on the development set.
Table II shows that the performance is generally better for word counts than
for document counts. Therefore, we prefer the method that uses word counts
(which is also faster in our particular setting). The difference between them
is not statistically significant. Our statistical method performs significantly
(1) benefit, advantage, favor, gain, profit
(2) flow, gush, pour, run, spout, spurt, squirt, stream
(3) deficient, inadequate, poor, unsatisfactory
(4) afraid, aghast, alarmed, anxious, apprehensive, fearful, frightened, scared, terror-stricken
(5) disapproval, animadversion, aspersion, blame, criticism, reprehension
(6) mistake, blooper, blunder, boner, contretemps, error, faux pas, goof, slip, solecism
(7) alcoholic, boozer, drunk, drunkard, lush, sot
(8) leave, abandon, desert, forsake
(9) opponent, adversary, antagonist, competitor, enemy, foe, rival
(10) thin, lean, scrawny, skinny, slender, slim, spare, svelte, willowy, wiry
(11) lie, falsehood, fib, prevarication, rationalization, untruth
Fig. 2. Near-synonyms used in the evaluation experiments in Section 6.2.
better than both Edmonds's method and the baseline algorithm. For all the
results presented in this article, statistical significance tests were done using
the paired t-test as described in Manning and Schütze [1999, p. 209].
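As a rough illustration only, the per-group accuracies of two methods (figures taken from Table II) can be compared with a paired t-test as follows; the article's actual significance tests may have been computed over finer-grained data, and scipy is used here merely as a convenient stand-in for the procedure described by Manning and Schütze.

# Paired t-test over per-group accuracies of two methods (values from Table II).
from scipy.stats import ttest_rel

new_method_words = [59.1, 61.5, 73.3, 66.0, 72.2, 52.7, 76.9]  # new method (word counts)
edmonds_method = [47.9, 48.9, 68.9, 45.3, 64.6, 48.6, 65.9]    # Edmonds's method

t_statistic, p_value = ttest_rel(new_method_words, edmonds_method)
print(f"t = {t_statistic:.2f}, p = {p_value:.4f}")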
In summary, the results are better for smaller context windows and for summing the PMI scores of all the words in the context window rather than taking the maximum contribution. The performance decreases with larger query frames, q = 5, 6, ..., 20, and degrades sharply when q is unlimited (the words may occur anywhere in the same document, at any distance). Error analysis reveals that incorrect choices happen more often when the context is weak, that is, in very short sentences or sentences with very few content words.
On average, our method performs 22 percentage points better than the base-
line algorithm and 10 percentage points better than Edmonds’s method. Its
performance is similar to that of the supervised method (see Section 6.3). An
important advantage of our method is that it works on any group of near-
synonyms without training, whereas Edmonds’s method required a lexical co-
occurrence network to be built in advance for each group of near-synonyms and
the supervised method required training for each near-synonym group.
6.2 Comparison to the Anticollocations Method
In a second experiment, we compare the results of our methods with the
anticollocations method described in Section 4. We use the dataset from
Inkpen and Hirst [2002], which contains sentences from the first half of the
British National Corpus with near-synonyms from the eleven groups listed in
Figure 2. The number of near-synonyms in each group is higher than in
WordNet synsets because the groups are taken from Hayakawa [1994], a dictio-
nary that explains differences between near-synonyms. Moreover, we retain
only the sentences in which at least one of the context words is in our pre-
viously acquired knowledge base of near-synonym collocations. That is, the
anticollocations method works only if we know how a word in the context col-
locates with the near-synonyms from a group. For the sentences that do not
contain collocations or anticollocations, it will perform no better than the base-
line because the information needed by the method is not available in the
knowledge base. Even if we increase the coverage of the knowledge base, the
Table III. Comparison Between the New Statistical Method from Section 3 and the
Anticollocations Method from Section 4 (See details about Experiment 2 in Section 6.2.)

Test Set (Exp2 Data) | Number of Cases | Baseline | Anticollocations Method | New Method (Docs) | New Method (Words)
TestSample           | 171             | 57.0%    | 63.3%                   | 75.6%             | 73.3%
TestAll              | 332             | 48.5%    | 58.6%                   | 70.0%             | 75.6%
Table IV. Comparative Results for the Supervised Learning Method Using Various
ML Learning Algorithms (Weka), Averaged Over the Seven Groups of
Near-Synonyms From the Experiment 1 Dataset

ML Method (Weka)           | Features                | Accuracy (averaged)
Decision Trees             | PMI scores              | 65.4%
Decision Rules             | PMI scores              | 65.5%
Naive Bayes                | PMI scores              | 52.5%
K-Nearest Neighbor         | PMI scores              | 64.5%
Kernel Density             | PMI scores              | 60.5%
Boosting (Decision Stumps) | PMI scores              | 67.7%
Naive Bayes                | 500 word features       | 68.0%
Decision Trees             | 500 word features       | 67.0%
Naive Bayes                | PMI + 500 word features | 66.5%
Boosting (Decision Stumps) | PMI + 500 word features | 69.2%
anticollocations method might still fail too often due to words that were not
included.
Table III presents the results of the comparison. We used two datasets: Test-
Sample, which includes at most two sentences per collocation (the first two
sentences from the corpus) and TestAll, which includes all the sentences with
collocations as they occurred in the corpus. The reason for using these two test
sets was to avoid biasing the results toward frequent collocations.
The last two columns are the accuracies achieved by our new method. The
second to the last column shows the results of the new method when the word
counts are approximated with document counts. The improvement over the
baseline is 16 to 27 percentage points. The improvement over the anticolloca-
tions method is 10 to 17 percentage points.
6.3 Comparison to Supervised Learning
We present the results of the supervised method from Section 5 on the datasets
used in Section 6.1. As explained before, the datasets contain sentences with a
lexical gap. For each of the seven groups of near-synonyms, the classes to choose
from in order to fill in the gap are the near-synonyms in that group.
We implemented classifiers that use as features either the PMI scores of the
left and right context with each class, or the words in the context windows, or
both types of features combined. We used as features the 500 most frequent
words for each group of near-synonyms. We report accuracies for tenfold
cross-validation. We used the Weka collection of machine learning algorithms
[Witten and Frank 2000].
Table IV presents the results, averaged for the seven groups of near-
synonyms, of several classifiers from the Weka package. The classifiers that
Table V. Comparison Between the Unsupervised Statistical Method from Section 3 and the
Supervised Method Described in Section 5, on the Experiment 1 Datasets (The results of two of
the best supervised classifiers are presented.)

Test Set                                       | Number of Cases | Baseline | Supervised Boosting (PMI) | Supervised Boosting (PMI + Words) | Unsupervised Method
difficult, hard, tough                         | 6,630  | 41.7% | 55.8% | 57.3% | 59.1%
error, mistake, oversight                      | 1,052  | 30.9% | 68.1% | 70.8% | 61.5%
job, task, duty                                | 5,506  | 70.2% | 86.5% | 86.7% | 73.3%
responsibility, burden, obligation, commitment | 3,115  | 38.0% | 66.5% | 66.7% | 66.0%
material, stuff, substance                     | 1,715  | 59.5% | 70.4% | 71.0% | 72.2%
give, provide, offer                           | 11,504 | 36.7% | 53.0% | 56.1% | 52.7%
settle, resolve                                | 1,594  | 37.0% | 74.0% | 75.8% | 76.9%
ALL (average)                                  | 31,116 | 44.8% | 67.7% | 69.2% | 66.0%
use PMI features are Decision Trees, Decision Rules, Naive Bayes, K-Nearest
Neighbor, Kernel Density, and Boosting applied to a weak classifier (Decision Stumps,
which are one-level decision trees). Next, a Naive Bayes classifier that uses only
the word features is presented, followed by the same types of classifiers with both types
of features. The other classifiers from the Weka package were also tried but
the results did not improve, and these algorithms had difficulties in scaling
up. In particular, when using the 500-word features for each training example,
only the Naive Bayes algorithm was able to run in reasonable time. We noticed
that the Naive Bayes classifier performs very poorly on PMI features only (55%
average accuracy) but performs very well on word features (68% average accu-
racy). In contrast, the Decision Tree classifier performs well on PMI features,
especially when using boosting with Decision Stumps. When using both the
PMI scores and the word features, the results are slightly higher. It seems that
either type of feature alone is sufficient for training a good classifier, but combining
them adds some value.
Table V presents the detailed results of two of the supervised classifiers,
and, for easier comparison, repeats the results of the unsupervised statistical
method from Section 6.1. The supervised classifier that uses only PMI scores
performs similarly to the unsupervised method. The best supervised classifier
that uses both types of features performs slightly better than the unsupervised
statistical method, but the difference is not statistically significant. We conclude
that the results of the supervised methods and the unsupervised statistical
method are similar. An important advantage of our unsupervised method is
that it works on any group of near-synonyms without training.
6.4 Comparison to Language Models
Since one of the main applications of our methods is lexical choice in machine
translation and statistical NLG, we need to compare our methods with the
current mainstream method for lexical choice, language modeling. The sum of
mutual information scores bears some similarity to a bigram language model.
For example, PMI(w_1, w_2) + PMI(w_2, w_3) can be rewritten as:

$$\mathrm{PMI}(w_1, w_2) + \mathrm{PMI}(w_2, w_3) = \log_2 \frac{P(w_1, w_2)}{P(w_1)\,P(w_2)} + \log_2 \frac{P(w_2, w_3)}{P(w_2)\,P(w_3)}$$
$$= \log_2 \frac{P(w_1, w_2)\,P(w_2, w_3)}{P(w_1)\,P(w_2)^2\,P(w_3)} = \log_2 \frac{P(w_2 \mid w_1)\,P(w_3 \mid w_2)}{P(w_2)\,P(w_3)}.$$
Lexical choice in most statistical Machine Translation (MT) systems today
is heavily determined by the language model, and it has proven very difficult
in practice to apply state-of-the-art Word Sense Disambiguation/lexical choice
models to obtain improvements over a baseline statistical MT with a language
model [Carpuat and Wu 2005].
We do not go into details here, but in previous work [Inkpen and Hirst 2006],
we compared the results of the anticollocations method to the results of using
a 3-gram language model. This language model is part of the HALogen NLG
system [Langkilde and Knight 1998], and it was built on 250 million words of
news article text: AP Newswire, years 1988 and 1990; SJM, year 1991; and WSJ,
years 1987–1988 and 1990–1994. The comparison of the two methods was done
on different groups of near-synonyms, including some of the groups from the
Experiment 1 dataset. The anticollocations method performed better than the
language model on the task of choosing the best near-synonym in a context; therefore,
so would the statistical method based on PMI (since we showed in Section 6.2 that our
PMI-based method achieves higher accuracy than the anticollocations method).
6.5 Experiments with Human Judges
We asked two human judges, native speakers of English, to guess the missing
word in a random sample of the Experiment 1 dataset (50 sentences for each of
the 7 groups of near-synonyms, 350 sentences in total). The results in Table VI
show that the agreement between the two judges is high (78.5%), but not perfect.
This means that the task is difficult, even though some wrong senses in the automatically
produced test data might have made it easier in a few cases.
The human judges were allowed to choose more than one correct answer
when they were convinced that more than one near-synonym fit well in the
context. They used this option sparingly, only in 5% of the 350 sentences. In
future work, we plan to allow the system to make more than one choice when ap-
propriate (e.g., when the second choice has a very close score to the first choice).
Taking the accuracy achieved by the human judges as an upper limit, we con-
clude that the automatic method has room for improvement of approximately
10–15 percentage points.
7. APPLICATIONS
7.1 Intelligent Thesaurus
The intelligent thesaurus that we are developing is an interactive application
that presents the user with a list of alternative near-synonyms, and, unlike
standard thesauri, it orders the choices to match the writing context and ex-
plains the differences between the possible choices.
Table VI. Experiments with Two Human Judges on a Random Subset of the
Experiment 1 Dataset

Test Set                                       | J1-J2 Agreement | J1 Accuracy | J2 Accuracy | System Accuracy
difficult, hard, tough                         | 72%   | 70%   | 76%   | 53%
error, mistake, oversight                      | 82%   | 84%   | 84%   | 68%
job, task, duty                                | 86%   | 92%   | 92%   | 78%
responsibility, burden, obligation, commitment | 76%   | 82%   | 76%   | 66%
material, stuff, substance                     | 76%   | 82%   | 74%   | 64%
give, provide, offer                           | 78%   | 68%   | 70%   | 52%
settle, resolve                                | 80%   | 80%   | 90%   | 77%
ALL (average)                                  | 78.5% | 79.7% | 80.2% | 65.4%
Table VII. Accuracies for the First Two Choices as Ordered by an Interactive
Intelligent Thesaurus

Test Set                      | Accuracy for the First Choice | Accuracy for the First Two Choices
Experiment 1 Data, ALL        | 66.0% | 88.5%
Experiment 2 Data, TestSample | 73.3% | 94.1%
Experiment 2 Data, TestAll    | 75.6% | 87.5%
The new statistical method presented in this article (Section 3) allows us to
order the near-synonyms according to how well they fit into the writing context.
When composing a text, if the writer is unhappy with a word, he/she can select
that word and ask for a better substitute. The intelligent thesaurus provides
alternative near-synonyms in a context-dependent manner. Our experiments
show that the accuracy of the first choice as the best choice is 66 to 75%; there-
fore, there will be cases when the writer will not choose the first alternative. But
the accuracy for the first two choices is quite high, around 90%, as presented
in Table VII.
If the writer is in the process of writing and selects the last word to be replaced
with a near-synonym proposed by the thesaurus, then only the context on the
left of the word can be used for ordering the alternatives. Our method can be
easily adapted to consider only the context on the left of the gap. The results
of this case are presented in Table VIII, for the datasets used in the previous
sections. The accuracy values are lower than in the case when both the left and
the right context are considered (Table VII). This is due in part to the fact that
some sentences in the test sets have very little left context or no left context at
all. On the other hand, many times the writer composes a sentence or paragraph
and then she/he goes back to change a word that does not sound right. In this
case, both the left and right context will be available.
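Adapting the method to the left-only setting amounts to dropping the right-hand sum of the score. The sketch below reuses the hypothetical score_candidate function from the Section 3 sketch and is not the exact thesaurus implementation.

# Sketch: ordering alternatives using only the left context of the selected word.

def order_alternatives_left_only(candidates, left_context,
                                 unigram_count, cooccurrence_count, k=2):
    # Rank near-synonyms by PMI with the last k words before the gap;
    # score_candidate is the function sketched in Section 3.
    window = left_context[-k:]
    return sorted(candidates,
                  key=lambda ns: score_candidate(ns, window, [],
                                                 unigram_count, cooccurrence_count),
                  reverse=True)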
In the intelligent thesaurus, we could combine the supervised and unsu-
pervised method by using a supervised classifier when the confidence in the
classification is high and by using the unsupervised method otherwise. The
unsupervised statistical method would also be used for the groups of near-synonyms
for which no supervised classifier was previously trained.
Figure 3 presents a screen shot of the current implementation of the in-
telligent thesaurus application. The system allows the user to specify which
Table VIII. Results of the Statistical Method When Only the Left Context is
Considered, for the Datasets from Section 6.1

Test Set                      | Accuracy for the First Choice | Accuracy for the First Two Choices
Experiment 1 Data, ALL        | 58.05% | 84.82%
Experiment 2 Data, TestSample | 57.4%  | 75.1%
Experiment 2 Data, TestAll    | 56.1%  | 77.4%
Fig. 3. Screen shot of the intelligent thesaurus application.
synonym inventory to use. We integrated Roget’s thesaurus [Roget 1852] in
order to increase coverage (the number of words for which alternatives are
available). We used the interface provided by Jarmasz and Szpakowicz [2001].
Roget’s thesaurus contains, among the related words, several phrases (as seen
in Figure 3). In future work, we need to find new ways to score these phrases,
because the PMI formula that we used gives them scores that are too high.
The system also allows us to specify from which corpus to collect the
counts (including from the Web, through the Google and Yahoo API). The
system performs part-of-speech tagging (using QTAG, available at http://www.english.bham.ac.uk/staff/omason/software/qtag.html) on the text written in
the text editor in order to be able to suggest only synonyms with the right
part-of-speech.
Our intelligent thesaurus can present in a separate window explanations of
how the near-synonyms differ in the lexical nuances they carry. This kind of
knowledge is available in a limited form from our previous work [Inkpen and
Hirst 2001] about extracting information from a special dictionary of synonym
discrimination [Hayakawa 1994]. The coverage is limited in the sense that this
knowledge base contains only 5452 words in 909 groups of near-synonyms. The
explanations include denotational nuances (what the near-synonyms imply
or suggest), how formal or informal they are, and how positive or negative they
might sound; examples of usage are also provided. We do not expand on this
aspect of our intelligent thesaurus here because the focus of this article is on
the collocational properties of the near-synonyms.
7.2 Natural Language Generation
In previous work [Inkpen and Hirst 2006], we presented and evaluated
an NLG system that pays particular attention to choosing the right near-
synonyms. This system extended HALogen [Langkilde and Knight 1998] with
two extra modules: a symbolic and a statistical module for near-synonym
choice.
Without going into details, we mention that the symbolic module deals with
nuances of meaning in an explicit way. Connotations or implications of certain
concepts are represented using an interlingual representation language and
concepts from an ontology. The user can specify what nuances of meaning are
preferred in the generated sentence, in addition to the main meaning. Alter-
natively, the preferences could come from the sentence in the source language
if our NLG system is used as part of an interlingual machine translation sys-
tem. The system knows which near-synonyms carry which nuances of meaning
because it contains the knowledge automatically acquired from the special dic-
tionary of synonym discrimination [Hayakawa 1994].
The statistical module uses the anticollocations method to choose near-
synonyms that collocate well with the other words in the context. We plan to re-
place it with the PMI-based method in order to increase coverage and accuracy.
8. CONCLUSION
We presented a statistical method of choosing the best near-synonym in a con-
text. We compared this method with two previous methods (Edmonds’s method
and the anticollocations method) and showed that the performance improves
considerably. We also showed that our unsupervised statistical method performs
comparably to a supervised learning method.
Our method based on PMI scores performs well despite the well-known lim-
itations of PMI when used with corpora. PMI tends to have problems mostly on
very small counts, but it works reasonably with larger counts. Our web corpus
is quite large; therefore, the problem of small counts does not arise.
We combine symbolic and statistical knowledge in two applications, an intel-
ligent thesaurus and a natural language generation system that has knowledge
of nuances of meaning of near-synonyms.
The accuracy of 66 to 75% is not enough for automatic choice but helps com-
plement the choices made by using symbolic knowledge. In the intelligent the-
saurus, we do not make the near-synonym choice automatically, but we let the
user choose. The first choice offered by the thesaurus is quite often the best one
and if we consider the first two choices, they are correct 90% of the time.
In this article we focused on idiomatic usage of near-synonyms, while, in
previous work, we looked at nuances of meaning and differences between near-
synonyms in terms of connotations and implications [Inkpen and Hirst 2006].
There are, though, some implications that are captured by idiomatic usage.
For example, Church et al. [1991] presented associations (collocations, but not
necessarily between adjacent words) for the near-synonyms ship and boat; they
suggest that a lexicographer looking at these associations can infer that boats
are generally smaller than ships because they are found in rivers and lakes and
are used for small jobs (e.g., fishing, police, pleasure), whereas ships are found
in seas and are used for serious business (e.g., cargo, war).
In future work, we plan to investigate the possibility of automatically inferring
this kind of knowledge, or of validating already acquired knowledge.
Words that do not associate with a near-synonym but associate with all the
other near-synonyms in a cluster could tell us something about its nuances
of meaning. For example, terrible slip is an anti-association, whereas terrible
associates with mistake, blunder, and error. This is an indication that slip is a
minor error. By further generalization, the associations could become conceptual
associations. This may allow the automatic learning of denotational distinc-
tions between near-synonyms from free text. The concepts that are common to
all the near-synonyms in a cluster could be part of their main meaning, while
those that associate only with one near-synonym could be part of their implied
nuances of meaning.
Future work includes a near-synonym sense disambiguation module to en-
sure that the intelligent thesaurus does not offer alternatives for wrong senses
of words. In addition to the groups of synonyms from WordNet, Roget, and dictio-
naries of synonyms, we could acquire synonyms from corpora so that the intelli-
gent thesaurus can offer alternatives for a very large number of words. There is
research done on acquiring distributionally similar words [Lin 1998], but they
include, in addition to near-synonyms, words that have other relations. Lin
et al. [2003] looked at filtering out the antonyms, using specific co-occurrence
patterns. Words that have other relations could also be filtered out. One way to
do this could be to collect signatures for each potential near-synonym—words
that associate with it in many contexts. For two candidate words, if one signa-
ture is contained in the other, the words are probably in an IS-A relation; if the
signatures overlap totally, it is a true near-synonymy relation; if the signatures
overlap partially, it is a different kind of relation.
ACKNOWLEDGMENTS
We are most grateful to Egidio Terra, Charlie Clarke, and the School of Com-
puter Science of the University of Waterloo, for allowing us to use their terabyte
webpage corpus and the MultiText system. We thank Peter Turney and his col-
leagues at IIT/NRC Ottawa for giving us access to their local copy of the corpus.
We also thank Peter Turney for sharing his Perl code for remote access to the
corpus and for his comments on the draft of this article. We thank Graeme Hirst
for comments on earlier versions of this article. We thank Jayakumar Balas-
ingham and Apoorve Chokshi for their contribution to the implementation of
the Intelligent Thesaurus application and Mario Jarmasz for allowing us to use
his software interface to Roget’s thesaurus.
REFERENCES
AGIRRE, E. AND MARTINEZ, D. 2000. Exploring automatic word sense disambiguation with decision lists and the Web. In Proceedings of the Workshop on Semantic Annotation and Intelligent Content (COLING 2000). Saarbrücken/Luxembourg/Nancy.
CARPUAT, M. AND WU, D. 2005. Evaluating the word sense disambiguation performance of statistical machine translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics. Ann Arbor, MI, 120–125.
CHURCH, K., GALE, W., HANKS, P., AND HINDLE, D. 1991. Using statistics in lexical analysis. In U. Zernik, Ed., Lexical Acquisition: Using Online Resources to Build a Lexicon. Lawrence Erlbaum, 115–164.
CHURCH, K. AND HANKS, P. 1991. Word association norms, mutual information and lexicography. Comput. Linguist. 16, 1, 22–29.
CLARKE, C. L. A. AND TERRA, E. 2003a. Frequency estimates for statistical word similarity measures. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2003). Edmonton, Canada, 165–172.
CLARKE, C. L. A. AND TERRA, E. 2003b. Passage retrieval vs. document retrieval for factoid question answering. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. Toronto, Canada, 427–428.
EDMONDS, P. 1997. Choosing the word most typical in context using a lexical co-occurrence network. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics. Madrid, Spain, 507–509.
GREFENSTETTE, G. 1999. The World Wide Web as a resource for example-based machine translation tasks. In Proceedings of the ASLIB Conference on Translating and Computers. London, UK.
HAYAKAWA, S. I., Ed. 1994. Choose the Right Word, 2nd Ed. (Revised by Eugene Ehrlich). HarperCollins Publishers.
INKPEN, D. Z. AND HIRST, G. 2001. Building a lexical knowledge-base of near-synonym differences. In Proceedings of the Workshop on WordNet and Other Lexical Resources, Second Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL 2001). Pittsburgh, PA, 47–52.
INKPEN, D. Z. AND HIRST, G. 2002. Acquiring collocations for lexical choice between near-synonyms. In Proceedings of the Workshop on Unsupervised Lexical Acquisition, 40th Annual Meeting of the Association for Computational Linguistics (ACL 2002). Philadelphia, PA, 67–76.
INKPEN, D. Z. AND HIRST, G. 2003. Near-synonym choice in natural language generation. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2003). Borovets, Bulgaria, 204–211.
INKPEN, D. Z. AND HIRST, G. 2006. Building and using a lexical knowledge-base of near-synonym differences. Comput. Linguist. 32, 2.
JARMASZ, M. AND SZPAKOWICZ, S. 2001. The design and implementation of an electronic lexical knowledge base. In Proceedings of the 14th Biennial Conference of the Canadian Society for Computational Studies of Intelligence (AI 2001). Ottawa, Canada, 325–334.
KELLER, F. AND LAPATA, M. 2003. Using the Web to obtain frequencies for unseen bigrams. Comput. Linguist. 29, 3, 459–484.
KILGARRIFF, A. 2001. Web as corpus. In Proceedings of the Corpus Linguistics Conference. Lancaster, UK, 342–345.
KILGARRIFF, A. AND GREFENSTETTE, G. 2003. Introduction to the special issue on the Web as a corpus. Comput. Linguist. 29, 3, 333–347.
LANGKILDE, I. AND KNIGHT, K. 1998. The practical value of N-grams in generation. In Proceedings of the 9th International Natural Language Generation Workshop. Niagara-on-the-Lake, Canada, 248–255.
LIN, D. 1998. Automatic retrieval and clustering of similar words. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics Joint with the 17th International Conference on Computational Linguistics (ACL-COLING 1998). Montreal, Quebec, Canada, 768–774.
LIN, D., ZHAO, S., QIN, L., AND ZHOU, M. 2003. Identifying synonyms among distributionally similar words. In Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI 2003). Acapulco, Mexico.
MANNING, C. AND SCHÜTZE, H. 1999. Foundations of Statistical Natural Language Processing. The MIT Press, Cambridge, MA.
MIHALCEA, R. AND MOLDOVAN, D. 1999. A method for word sense disambiguation from unrestricted text. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics. 152–158.
RADEV, D. AND MCKEOWN, K. R. 1997. Building a generation knowledge source using Internet-accessible newswire. In Proceedings of the 5th ACL Conference on Applied Natural Language Processing (ANLP 1997). Washington, DC, 221–228.
RESNIK, P. 1999. Mining the Web for bilingual text. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics. 527–534.
ROGET, P. M., Ed. 1852. Roget's Thesaurus of English Words and Phrases. Longman Group Ltd., Harlow, Essex, UK.
TURNEY, P. 2001. Mining the Web for synonyms: PMI-IR versus LSA on TOEFL. In Proceedings of the Twelfth European Conference on Machine Learning (ECML 2001). Freiburg, Germany, 491–502.
TURNEY, P., LITTMAN, M., BIGHAM, J., AND SHNAYDER, V. 2003. Combining independent modules to solve multiple-choice synonym and analogy problems. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2003). Borovets, Bulgaria, 482–489.
WITTEN, I. H. AND FRANK, E. 2000. Data Mining: Practical Machine Learning Tools with Java Implementations. Morgan Kaufmann, San Francisco, CA.
Received September 2004; revised March 2006, October 2006; accepted November 2006 by Dekai Wu
ACM Transactions on Speech and Language Processing, Vol. 4, No. 1, Article 2, Publication date: January 2007.
... In addition, they classified synonym pairs into two categories: those that tend to require proper synonym selection for a certain sentence and those that do not require proper synonym selection. In Inkpen (2007), author used the web as a corpus to compute scores based on mutual information. In Ababneh et al. (2021), authors have presented a new efficient model in synonyms extraction called noun-based distinctive verbs (NBDV) where tf-idf weighting scheme is replaced by a novel weighting scheme called the orbit weighing scheme (OWS). ...
... To note that many resources have been used by the community of researchers for detecting, acquiring, and extracting of synonyms such as dictionaries (Wang and Hirst, 2009;Muller et al., 2006;Blondel and Senellart, 2002;AlMaayah et al., 2016), WordNet (Lombardi and Marani, 2015), web of data (Inkpen, 2007;Lombardi and Marani, 2015), Wikipedia (Bøhn and Nørvåg, 2010;Simanosky and Ulanov, 2011), corpus (Takeuchi, 2008) and encyclopaedia (Lu et al., 2009). ...
... In addition, they classified synonym pairs into two categories: those that tend to require proper synonym selection for a certain sentence and those that do not require proper synonym selection. In Inkpen (2007), author used the web as a corpus to compute scores based on mutual information. In Ababneh et al. (2021), authors have presented a new efficient model in synonyms extraction called noun-based distinctive verbs (NBDV) where tf-idf weighting scheme is replaced by a novel weighting scheme called the orbit weighing scheme (OWS). ...
... To note that many resources have been used by the community of researchers for detecting, acquiring, and extracting of synonyms such as dictionaries (Wang and Hirst, 2009;Muller et al., 2006;Blondel and Senellart, 2002;AlMaayah et al., 2016), WordNet (Lombardi and Marani, 2015), web of data (Inkpen, 2007;Lombardi and Marani, 2015), Wikipedia (Bøhn and Nørvåg, 2010;Simanosky and Ulanov, 2011), corpus (Takeuchi, 2008) and encyclopaedia (Lu et al., 2009). ...
... In the literature of synonyms extraction, the Pointwise Mutual Information (PMI) mathematical model is used to measure the association between two terms [12,13,14,15]. The PMI of x and y considers the number of times x and y occurred together, f(x, y), and the frequency of x, f(x), and the frequency of y, f(y). ...
... ( , ) = log 2 ( , ) ( ) * ( ) Where f(x, y), f(x), and f(y) are normalized by the number of terms (N) in the whole corpus. The PMI does not demand the adjacency of x and y, and the researchers used the PMI with different window sizes, for example, Inkpen in [15] used a content window of size 2, whereas, Yu et al. in [14,16] used a content window of size 4. ...
Article
Full-text available
The traditional statistical approach in synonyms extraction is time-consuming. It is necessary to develop a new method to improve the efficiency and accuracy. This research presents a new method in synonyms extraction called Noun Based Distinctive Verbs (NBDV) that replaces the traditional tf-idf weighting scheme with a new weighting scheme called the Orbit Weighing Scheme (OWS). The OWS links the nouns to their semantic space by examining the singular verbs in each context. The new method was compared with important models in the field such as the Skip-Gram, the Continuous Bag of Words, and the GloVe model. The NBDV model was manipulated over the Arabic and English languages and the results showed 47% Recall and 51% Precision in the dictionary-based evaluation and 57.5% Precision in the human experts’ evaluation. Comparing with the synonyms extraction based on tf.idf, the NBDV obtained 11% higher recall and 10% higher precision. Regarding the efficiency, we found that on average, the synonyms extraction of a single noun requires the process of 186 verbs and in 63% of the runs; the number of singular verbs was less than 200. It is concluded that the developed method is efficient and processes the single run in linear time.
... The LCS is the most specific concept, which is a shared ancestor of the two concepts. The Pointwise Mutual Information (PMI) [23,26] is a simple method for computing corpus-based similarity of words. ...
... One practical goal of discriminating near-synonyms is to help language learners to improve their lexical/conceptual knowledge. Near-synonyms are usually not an easy task even for native speakers of a language (Gao 2012), while, computational models can use different algorithms to extract features from a pair or a number of semantically related words and then make an auto selection of the closest one to fill in a gap within a sentence (Edmonds 1997;Inkpen 2007;). e-learning system is an example of this type of model. ...
Chapter
This is an exploration of Mandarin speakers’ conceptualization of the semantic features that form the notions of Force and Motion in the Pull Verbs of Hand Action such as la 拉, tuo 拖, and zhuai 拽 in Mandarin Chinese. A corpus-based analysis was first conducted to reveal the salient semantic features embedded in the verbs. Then, adult native Mandarin speakers (N = 40) were administered a preference scale test as a measure of semantic salience based on action-provoked reasoning. The results show that the corpus-based data revealed the major semantic features of the verbs from their contextual meanings, while the native speakers’ verbal descriptions of the verb semantics based on the action-provoked reasoning focused largely on the force in relation to the objective patient involved and motion direction in relation to the subjective agent’s action. This indicates that the notions of Force, Motion, and Motion Direction are the base for L1 speakers to conceptualize the lexical semantics of the Pull Verbs and that Motion Direction is the condition for the distinction of the members of the class of Pull Verbs that are near-synonyms.
... One practical goal of discriminating near-synonyms is to help language learners improve their lexical/conceptual knowledge. Discriminating near-synonyms is usually not an easy task even for native speakers of a language (Gao 2012), whereas computational models can use different algorithms to extract features from a pair or a set of semantically related words and then automatically select the closest one to fill a gap within a sentence (Edmonds 1997; Inkpen 2007). An e-learning system is an example of this type of model. ...
Chapter
Full-text available
In this study I primarily examine the three commonly used Chinese locative phrases zai + NP + bian/mian/tou (zai-construction) through corpus analysis. Previous studies (Lin in Studies in Language and Linguistics 30:67–70, 2010; Liu in A synchronic and diachronic exploration of the monosyllabic localizer li and the disyllabic localizers limian, litou, libian and the disyllabification effect of the localizers, 2011; Tian in Cognitive analysis about the meaning of libian, waibian, limian, waimian, litou, waitou, 2011) dealt with issues regarding different meanings and structures of these phrases, but they failed to systematically investigate these three localizers from a cognitive perspective. My proposal, in short, is that in the zai-construction, when NP is a specific noun, the distribution of these three localizers (bian ‘side’, mian ‘surface’, tou ‘head’) is semantically restricted in some situations. In contrast, when NP is a combination of a noun and a localizer, there is no restriction on the distribution of these three localizers, etc. In addition, the use of these localizers can to some extent reveal the conceptual metaphorical mappings (Lakoff and Johnson, 1980) and subjectivity (Traugott in Language 65:31–55, 1989) embodied in the individual mind.
... Furthermore, several other statistical approaches target generating near-synonyms for lexical substitution. Among them, the Pointwise Mutual Information (PMI) based approach presented by Inkpen [43] and the Latent Semantic Analysis (LSA) based method proposed by Wang and Hirst [84] bring new dimensions to lexicalization. Unlike the former, the LSA approach is combined with lexical co-occurrence, which raises its accuracy (74.5%) above the baseline level. ...
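As a rough illustration of the LSA idea mentioned in the snippet above, the sketch below factors a small word-by-document count matrix with SVD and compares words by cosine similarity in the reduced space. The matrix values and row labels are invented for demonstration, and this is not Wang and Hirst's actual model (which additionally combines LSA with lexical co-occurrence).

```python
# LSA-style similarity sketch: SVD on a tiny word-by-document count matrix,
# then cosine similarity between word vectors in the latent space.
import numpy as np

def lsa_word_vectors(count_matrix, k=2):
    """count_matrix: words x documents co-occurrence counts."""
    u, s, _vt = np.linalg.svd(count_matrix, full_matrices=False)
    return u[:, :k] * s[:k]           # word vectors in a k-dimensional latent space

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

counts = np.array([[3, 0, 1],          # invented rows: "error", "mistake", "banana"
                   [2, 1, 0],
                   [0, 4, 0]], dtype=float)
vecs = lsa_word_vectors(counts, k=2)
print(cosine(vecs[0], vecs[1]), cosine(vecs[0], vecs[2]))
```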
Article
Full-text available
Natural Language Generation (NLG) is defined as the systematic approach for producing human understandable natural language text based on non-textual data or from meaning representations. This is a significant area which empowers human-computer interaction. It has also given rise to a variety of theoretical as well as empirical approaches. This paper intends to provide a detailed overview and a classification of the state-of-the-art approaches in Natural Language Generation. The paper explores NLG architectures and tasks classed under document planning, micro-planning and surface realization modules. Additionally, this paper also identifies the gaps existing in the NLG research which require further work in order to make NLG a widely usable technology.
... Synonyms are defined as expressions with the same meaning [1]. Identifying synonym relations automatically helps to address various natural language processing (NLP) applications, such as information retrieval and question answering [2,3], automatic thesaurus construction [4,5], automatic text summarization [6], language generation [7], and lexical entailment acquisition [8]. ...
Article
In this study, a model is proposed to determine synonymy by incorporating several resources. The model extracts features from monolingual online dictionaries, a bilingual online dictionary, WordNet, and a monolingual Turkish corpus. Once it has built a candidate list, it determines synonymy for a given word by means of those features. All these resources and approaches are evaluated. Taking all features into account and applying machine learning algorithms, the model achieves a good F-measure of 81.4%. The study contributes to the literature by integrating several resources and by attempting the first corpus-driven synonym detection system for Turkish.
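A toy sketch of the kind of feature-based supervised set-up this abstract describes is given below. The feature values (definition overlap, shared WordNet synset, scaled corpus association) and the tiny training set are invented placeholders rather than real resource look-ups, and scikit-learn's LogisticRegression stands in for whichever learner the cited work actually used.

```python
# Toy supervised synonymy detection: each word pair becomes a feature vector
# and a classifier decides "synonym or not". All numbers are invented.
from sklearn.linear_model import LogisticRegression

X_train = [
    [0.8, 1, 0.6],   # [definition overlap, shared WordNet synset?, scaled corpus association]
    [0.1, 0, 0.2],
    [0.7, 1, 0.5],
    [0.2, 0, 0.1],
]
y_train = [1, 0, 1, 0]  # 1 = synonym pair, 0 = not

clf = LogisticRegression().fit(X_train, y_train)
print(clf.predict([[0.75, 1, 0.55]]))  # likely classified as a synonym pair
```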
Thesis
The purpose of this bachelor's thesis was to analyze how the very common English adjectives big and large, which can be considered near-synonyms, differ in their contexts and meanings. Factors influencing the choice of the topic included an interest in near-synonymy and corpus linguistics. In addition, it turned out that no comprehensive corpus-based research had been conducted on the topic in the past. The study was conducted by comparing the collocations of these adjectives, i.e., their tendency to occur repeatedly in conjunction with certain other words, in this case nouns. The initial assumption was that big is more common in colloquial language, while large occurs more often in written language. The theoretical part of this thesis introduces the concepts of synonymy and near-synonymy and presents some previous studies on corpus linguistics and near-synonymy on which this bachelor's thesis is based. The research method utilized in the thesis was corpus linguistics. Research material was collected from the Corpus of Contemporary American English, an extensive corpus of more than a billion words containing modern English texts. The searches were made using the comparison feature of the corpus, as it allows searching for the collocates of two words simultaneously. The study was limited to contexts where big and large are in the basic form functioning as attributive adjectives. In addition to the corpus data, two dictionaries of contemporary English were utilized in the study. The definitions of the words big and large in these dictionaries were compared with the results obtained from the corpus material. Based on the results of the study using this particular material, it can be concluded that the original hypothesis regarding the occurrence of big and large was correct. In addition, the study revealed that big occurs very often in a figurative sense, while large is most often used in a literal sense. The study also showed that the dictionaries were not able to describe the differences in meaning between these two words as accurately as the corpus material. Based on the results, it is also possible to identify opportunities for further research on the same topic. In the future it could be interesting to analyze how people who speak English as a foreign or second language use these adjectives. The topic could also be studied from a more qualitative perspective by examining whether the adjectives big and large have positive or negative connotations in different contexts.
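The collocate comparison this thesis describes can be mimicked in a few lines. The sketch below counts the words that immediately follow each adjective in a toy token list and contrasts the two profiles; the toy data, the assumption that the following token is the collocating noun, and the one-word window are simplifications for illustration, not the thesis's actual COCA-based method.

```python
# Toy collocate comparison: count the words immediately following each
# adjective and contrast the two profiles (illustration only, not COCA).
from collections import Counter

def following_words(tokens, adjective):
    return Counter(tokens[i + 1] for i, t in enumerate(tokens[:-1]) if t == adjective)

tokens = ("a big deal and a big mistake in a large company "
          "with a large amount of big data and a big surprise").split()
big_profile = following_words(tokens, "big")
large_profile = following_words(tokens, "large")
print("big:", big_profile.most_common(3))
print("large:", large_profile.most_common(3))
print("only with big:", set(big_profile) - set(large_profile))
```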
Article
Full-text available
The web, teeming as it is with language data, of all manner of varieties and languages, in vast quantity and freely available, is a fabulous linguists' playground. The Special Issue explores ways in which this dream is being explored.
Conference Paper
Full-text available
The WWW is two orders of magnitude larger than the largest corpora. Although noisy, web text presents language as it is used, and statistics derived from the Web can have practical uses in many NLP applications. For this reason, the WWW should be seen and studied as any other computationally available linguistic resource. In this article, we illustrate this by showing that an Example-Based approach to lexical choice for machine translation can use the Web as an adequate and free resource.
Article
We examine the practical synergy between symbolic and statistical language processing in a generator called Nitrogen. The analysis provides insight into the kinds of linguistic decisions that bigram frequency statistics can make, and how they improve scalability. We also discuss the limits of bigram statistical knowledge. We focus on specific examples of Nitrogen's output.
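A rough sketch of the generate-and-rank idea described in this abstract is shown below: alternative realizations are scored with an add-one smoothed bigram model estimated from a toy corpus. The corpus, the candidate sentences, and the smoothing choice are assumptions for illustration; this is not Nitrogen's actual ranking component.

```python
# Rank alternative realizations by add-one smoothed bigram log-probability.
import math
from collections import Counter

def bigram_logprob(sentence, bigram_counts, unigram_counts, vocab_size):
    """Add-one smoothed bigram log-probability of a tokenized sentence."""
    score = 0.0
    for w1, w2 in zip(sentence, sentence[1:]):
        score += math.log((bigram_counts[(w1, w2)] + 1) /
                          (unigram_counts[w1] + vocab_size))
    return score

corpus = "she made a serious mistake . he made a big error .".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
candidates = ["he made a big mistake .".split(), "he did a big mistake .".split()]
best = max(candidates, key=lambda s: bigram_logprob(s, bigrams, unigrams, len(unigrams)))
print(" ".join(best))
```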
Article
We present the first known empirical test of an increasingly common speculative claim, by evaluating a representative Chinese-to-English SMT model directly on word sense disambiguation performance, using standard WSD evaluation methodology and datasets from the Senseval-3 Chinese lexical sample task. Much effort has been put into designing and evaluating dedicated word sense disambiguation (WSD) models, in particular with the Senseval series of workshops. At the same time, the recent improvements in the BLEU scores of statistical machine translation (SMT) suggest that SMT models are good at predicting the right translation of the words in source language sentences. Surprisingly, however, the WSD accuracy of SMT models has never been evaluated and compared with that of the dedicated WSD models. We present controlled experiments showing the WSD accuracy of current typical SMT models to be significantly lower than that of all the dedicated WSD models considered. This tends to support the view that, despite recent speculative claims to the contrary, current SMT models do have limitations in comparison with dedicated WSD models, and that SMT should benefit from the better predictions made by the WSD models.
Conference Paper
Thesauri have always been a useful resource for natural language processing. WordNet, a kind of thesaurus, has proven invaluable in computational linguistics. We present the various applications of Roget’s Thesaurus in this field and discuss the advantages of its structure. We evaluate the merits of the 1987 edition of Penguin’s Roget’s Thesaurus of English Words and Phrases as an NLP resource: we design and implement an electronic lexical knowledge base with its material. An extensive qualitative and quantitative comparison of Roget’s and WordNet has been performed, and the ontologies as well as the semantic relations of both thesauri contrasted. We discuss the design in Java of the lexical knowledge base, and its potential applications. We also propose a framework for measuring similarity between concepts and annotating Roget’s semantic links with WordNet labels.
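The "framework for measuring similarity between concepts" mentioned in this abstract can be illustrated with a toy path-based measure over a hand-made hierarchy. The tiny concept network and the 1/(path length + 1) formula below are assumptions for demonstration only, not the actual Roget's/WordNet framework proposed by the authors.

```python
# Toy path-based concept similarity in a thesaurus-like hierarchy.
NETWORK = {                      # child -> parent, a tiny invented hierarchy
    "blunder": "error", "mistake": "error", "error": "act",
    "act": "entity", "banana": "fruit", "fruit": "entity",
}

def path_to_root(concept):
    path = [concept]
    while path[-1] in NETWORK:
        path.append(NETWORK[path[-1]])
    return path

def path_similarity(a, b):
    """1 / (edges between a and b via their lowest shared ancestor + 1)."""
    pa, pb = path_to_root(a), path_to_root(b)
    for i, node in enumerate(pa):
        if node in pb:
            return 1.0 / (i + pb.index(node) + 1)
    return 0.0

print(path_similarity("blunder", "mistake"))   # close: share the parent "error"
print(path_similarity("blunder", "banana"))    # distant: only share "entity"
```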
Conference Paper
The Web graph, meaning the graph induced by Web pages as nodes and their hyperlinks as directed edges, has become a fascinating object of study for many people: physicists, sociologists, mathematicians, computer scientists, and information retrieval ...