Software Framework for Topic Modelling with Large Corpora
Radim Řehůřek and Petr Sojka
Natural Language Processing Laboratory
Masaryk University, Faculty of Informatics
Botanická 68a, Brno, Czech Republic
{xrehurek,sojka}@fi.muni.cz
Abstract
Large corpora are ubiquitous in today’s world and memory quickly becomes the limiting factor in practical applications of the Vector
Space Model (VSM). In this paper, we identify a gap in existing implementations of many of the popular algorithms, which is their
scalability and ease of use. We describe a Natural Language Processing software framework which is based on the idea of document
streaming, i.e. processing corpora document after document, in a memory independent fashion. Within this framework, we implement
several popular algorithms for topical inference, including Latent Semantic Analysis and Latent Dirichlet Allocation, in a way that makes
them completely independent of the training corpus size. Particular emphasis is placed on straightforward and intuitive framework design,
so that modifications and extensions of the methods and/or their application by interested practitioners are effortless. We demonstrate the
usefulness of our approach on a real-world scenario of computing document similarities within an existing digital library DML-CZ.
1. Introduction
“Controlling complexity is the essence of computer programming.”
Brian Kernighan (Kernighan and Plauger, 1976)
The Vector Space Model (VSM) is a proven and powerful
paradigm in NLP, in which documents are represented as
vectors in a high-dimensional space. The idea of represent-
ing text documents as vectors dates back to early 1970’s
to the SMART system (Salton et al., 1975). The original
concept has since then been criticised, revised and improved
on by a multitude of authors (Wong and Raghavan, 1984;
Deerwester et al., 1990; Papadimitriou et al., 2000) and
became a research field of its own. These efforts seek to ex-
ploit both explicit and implicit document structure to answer
queries about document similarity and textual relatedness.
Connected to this goal is the field of topical modelling (see
e.g. (Steyvers and Griffiths, 2007) for a recent review of
this field). The idea behind topical modelling is that texts
in natural languages can be expressed in terms of a limited
number of underlying concepts (or topics), a process which
both improves efficiency (new representation takes up less
space) and eliminates noise (transformation into topics can
be viewed as noise reduction). A topical search for related
documents is orthogonal to the more well-known “fulltext”
search, which would match particular words, possibly com-
bined through boolean operators.
Research on topical models has recently picked up pace,
especially in the field of generative topic models such as La-
tent Dirichlet Allocation (Blei et al., 2003), their hierarchical
extensions (Teh et al., 2006), topic quality assessment and
visualisation (Chang et al., 2009; Blei and Lafferty, 2009).
In fact, it is our observation that the research has rather got-
ten ahead of applications—the interested public is only just
catching up with Latent Semantic Analysis, a method which
is now more than 20 years old (Deerwester et al., 1990). We
attribute reasons for this gap between research and practice
partly to inherent mathematical complexity of the inference
algorithms, partly to high computational demands of most
methods and partly to the lack of a “sandbox” environment,
which would enable practitioners to apply the methods to
their particular problem on real data, in an easy and hassle-
free manner. The research community has recognised these
challenges and a lot of work has been done in the area of
accessible NLP toolkits in the past couple of years; our con-
tribution here is one such step in the direction of closing the
gap between academia and ready-to-use software packages.¹
Existing Systems
The goal of this paper is somewhat orthogonal to much of
the previous work in this area. As an example of another
possible direction of applied research, we cite (Elsayed et
al., 2008). While their work focuses on how to compute
pair-wise document similarities from individual document
representations in a scalable way, using Apache Hadoop
and clusters of computers, our work here is concerned with
how to scalably compute these document representations
in the first place. Although both steps are necessary for a
complete document similarity pipeline, the scope of this
paper is limited to constructing topical representations, not
answering similarity queries.
There exist several mature toolkits which deal with Vec-
tor Space Modelling. These include NLTK (Bird and
Loper, 2004), Apache’s UIMA and ClearTK (Ogren et al.,
2008), Weka (Frank et al., 2005), OpenNLP (Baldridge
et al., 2002), Mallet (McCallum, 2002), MDP (Zito et al.,
2008), Nieme (Maes, 2009), Gate (Cunningham, 2002),
Orange (Demšar et al., 2004) and many others.
These packages generally do a very good job at their in-
tended purpose; however, from our point of view, they also
suffer from one or more of the following shortcomings:
¹ Interest in the field of document similarity can also be seen
from the significant number of requests for a VSM software
package which periodically crop up in various NLP mailing lists.
Another indicator of interest is the tutorials aimed at business
applications; see web search results for “SEO myths and LSI” for
an interesting treatment of Latent Semantic Indexing marketing.
No topical modelling. Packages commonly offer supervised
learning functionality (i.e. classification); topic inference
is an unsupervised task.
Models do not scale. The package requires that the whole
corpus be present in memory before the inference of topics
takes place, usually in the form of a sparse term-document
matrix.
Target domain not NLP/IR. The package was created with
physics, neuroscience, image processing, etc. in mind. This
is reflected in the choice of terminology as well as the
emphasis on different parts of the processing pipeline.
The Grand Unified Framework. The package covers a broad
range of algorithms, approaches and use case scenarios,
resulting in complex interfaces and dependencies. From the
user's perspective, this is very desirable and convenient.
From the developer's perspective, this is often a nightmare:
tracking code logic requires major effort and interface
modifications quickly cascade into a large set of changes.
In fact, we suspect that the last point is also the reason why
there are so many packages in the first place. For a developer
(as opposed to a user), the entry level learning curve is so
steep that it is often simpler to “roll your own” package
rather than delve into intricacies of an existing, proven one.
2. System Design
“Write programs that do one thing and do it well. Write programs
to work together. Write programs to handle text streams, because
that is a universal interface.”
Doug McIlroy (McIlroy et al., 1978)
Our choices in designing the proposed framework are a
reflection of these perceived shortcomings. They can be
explicitly summarised into:
Corpus size independence. We want the package to be able
to detect topics based on corpora which are larger than the
available RAM, in accordance with the current trends in NLP
(see e.g. (Kilgarriff and Grefenstette, 2003)).
Intuitive API. We wish to minimise the number of method
names and interfaces that need to be memorised in order to
use the package. The terminology is NLP-centric.
Easy deployment. The package should work out-of-the-box on
all major platforms, even without root privileges and without
any system-wide installations.
Cover popular algorithms. We seek to provide novel, scalable
implementations of algorithms such as TF-IDF, Latent Semantic
Analysis, Random Projections or Latent Dirichlet Allocation.
We chose Python as the programming language, mainly be-
cause of its straightforward, compact syntax, multiplatform
nature and ease of deployment. Python is also suitable for
handling strings and boasts a fast, high quality library for
numerical computing, numpy, which we use extensively.
Core interfaces
As mentioned earlier, the core concept of our framework is
document streaming. A corpus is represented as a sequence
of documents and at no point is there a need for the whole
corpus to be stored in memory. This feature is not an after-
thought on lazy evaluation, but rather a core requirement
for our application and as such reflected in the package
philosophy. To ensure transparent ease of use, we define
corpus to be any iterable returning documents:
>>> for document in corpus:
...     pass
In turn, a document is a sparse vector representation of its
constituent fields (such as terms or topics), again realised as
a simple iterable:²
>>> for fieldId, fieldValue in document:
...     pass
This is a deceptively simple interface; while a corpus is
allowed to be something as simple as
>>> corpus = [[(1, 0.8), (8, 0.6)]]
this streaming interface also subsumes loading/storing matri-
ces from/to disk (e.g. in the Matrix Market (Boisvert et al.,
1996) or SVMlight (Joachims, 1999) format), and allows for
constructing more complex real-world IR scenarios, as we
will show later. Note the lack of package-specific keywords,
required method names, base class inheritance etc. This is
in accordance with our main selling points: ease of use and
data scalability.
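For illustration, a corpus satisfying this contract can be as small
as the following sketch (not part of the package itself), which
streams one document per line from a plain-text file and never
materialises the collection in memory; the file path, tokenisation
and word-to-id dictionary are placeholders:

class MyCorpus:
    """Stream documents from a plain-text file, one document per line."""
    def __init__(self, path, dictionary):
        self.path = path              # path to the text file (placeholder)
        self.dictionary = dictionary  # plain dict mapping token string -> integer id

    def __iter__(self):
        with open(self.path) as infile:
            for line in infile:
                # convert one document at a time; the full corpus never resides in RAM
                counts = {}
                for token in line.lower().split():
                    if token in self.dictionary:
                        fieldId = self.dictionary[token]
                        counts[fieldId] = counts.get(fieldId, 0) + 1
                yield sorted(counts.items())  # sparse vector: list of (fieldId, value)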
Needless to say, both corpora and documents are not re-
stricted to these interfaces; in addition to supporting itera-
tion, they may (and usually do) contain additional methods
and attributes, such as internal document ids, means of visu-
alisation, document class tags and whatever else is needed
for a particular application.
The second core interface are transformations. Where a
corpus represents data, transformation represents the pro-
cess of translating documents from one vector space into
another (such as from a TF-IDF space into an LSA space).
Realization in Python is through the dictionary [] mapping
notation and is again quite intuitive:
>>> from gensim.models import LsiModel
>>> lsi = LsiModel(corpus, numTopics = 2)
>>> lsi[new_document]
[(0, 0.197), (1, -0.056)]
>>> from gensim.models import LdaModel
>>> lda = LdaModel(corpus, numTopics = 2)
>>> lda[new_document]
[(0, 1.0)]
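Transformations can also be chained, so that, for instance, LSA is
trained on a TF-IDF-weighted stream rather than on raw counts. A
hedged sketch follows; class and parameter names mirror the examples
above (LsiModel, numTopics), while TfidfModel is assumed here to be
the analogous weighting model exposed by the package:
>>> from gensim.models import TfidfModel, LsiModel
>>> tfidf = TfidfModel(corpus)                    # learn global term weights from the stream
>>> lsi = LsiModel(tfidf[corpus], numTopics = 2)  # train LSA on the weighted stream
>>> lsi[tfidf[new_document]]                      # list of (topicId, weight) pairs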
² In terms of the underlying VSM, which is essentially a sparse
field-document matrix, this interface effectively abstracts away
from both the number of documents and the number of fields.
We note, however, that the abstraction focus is on the number
of documents, not fields. The number of terms and/or topics is
usually carefully chosen, with unwanted token types removed via
document frequency thresholds and stoplists. The hypothetical
use case of introducing new fields in a streaming fashion does not
come up as often in NLP.
2.1. Novel Implementations
While an intuitive interface is important for software adop-
tion, it is of course rather trivial and useless in itself. We
have therefore implemented some of the popular VSM meth-
ods, two of which we will describe here in greater detail.
Latent Semantic Analysis, LSA. Developed in the late 1980s
in Bell Laboratories (Deerwester et al., 1990), this method
gained popularity due to its solid theoretical background
and efficient inference of topics. The method exploits co-
occurrence between terms to project documents into a low-
dimensional space. Inference is done using linear algebra
routines for truncated Singular Value Decomposition (SVD)
on the sparse term-document matrix, which is usually first
weighted by some TF-IDF scheme. Once the SVD has been
completed, it can be used to project new documents into the
latent space, in a process called folding-in.
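In the usual notation, with $A$ the (weighted) sparse term-document
matrix and $k$ the number of retained factors, the truncated
decomposition and the folding-in projection of a new document
vector $d$ can be written (up to the choice of scaling convention) as

$$A \approx U_k \Sigma_k V_k^{\top}, \qquad \hat{d} = \Sigma_k^{-1} U_k^{\top} d,$$

so that only $U_k$ and $\Sigma_k$ are needed to place unseen
documents into the $k$-dimensional latent space.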
Since linear algebra routines have always been the front
runner of numerical computing (see e.g. (Press et al., 1992)),
some highly optimised packages for sparse SVD exist. For
example, PROPACK and SVDPACK are both based on the
Lanczos algorithm with smart reorthogonalizations, and
both are written in FORTRAN (the latter also has a C-
language port called SVDLIBC). Lightning fast as they are,
adapting the FORTRAN code is rather tricky once we hit the
memory limit for representing sparse matrices directly in
memory. For this and other reasons, research has gradually
turned to incremental algorithms for computing SVD, in
which the matrix is presented sequentially—an approach
equivalent to our document streaming. This problem refor-
mulation is not trivial and only recently have there appeared
practical algorithms for incremental SVD.
Within our framework, we have implemented Gorrell’s
Generalised Hebbian Algorithm (Gorrell, 2006), a stochas-
tic method for incremental SVD. However, this algorithm
proved much too slow in practice and we also found its inter-
nal parameters hard to tune, resulting in convergence issues.
We have therefore also implemented Brand’s algorithm for
fast incremental SVD updates (Brand, 2006). This algorithm
is much faster and contains no internal parameters to tune.³
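For orientation, the core of the column-update step has the
following form; this is a simplified sketch with the right singular
vectors omitted (cf. footnote 4 below), not a full account of the
implementation. When a new batch of document vectors $C$ arrives,

$$L = U^{\top} C, \qquad H = C - U L, \qquad H = J K \;\;(\text{thin QR}),$$

$$\begin{pmatrix} \Sigma & L \\ 0 & K \end{pmatrix} = U' \Sigma' V'^{\top} \;\;(\text{small dense SVD}),$$

$$U \leftarrow \begin{pmatrix} U & J \end{pmatrix} U', \qquad \Sigma \leftarrow \Sigma',$$

after which the factors are truncated back to the requested number
of topics. The small dense SVD involves a matrix whose size depends
only on the number of topics and the batch size, never on the
vocabulary or on how many documents have been processed so far.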
To the best of our knowledge, our pure Python (numpy) im-
plementation is the only publicly available implementation
of LSA that does not require the term-document matrix to
be stored in memory and is therefore independent of the
corpus size.⁴ Together with our straightforward document
streaming interface, this in itself is a powerful addition to
the set of publicly available NLP tools.
Latent Dirichlet Allocation, LDA.
LDA is another topic
modelling technique based on the bag-of-words paradigm
and word-document counts (Blei et al., 2003). Unlike La-
tent Semantic Analysis, LDA is a fully generative model,
³ This algorithm actually comes from the field of image processing
rather than NLP. Singular Value Decomposition, which is at the
heart of LSA, is a universal data compression/noise reduction
technique and has been successfully applied to many application
domains.
⁴ This includes completely ignoring the right singular vectors
during SVD computations, as the left vectors together with singular
values are enough to determine the latent space projection for new
documents.
where documents are assumed to have been generated ac-
cording to a per-document topic distribution (with a Dirich-
let prior) and per-topic word distribution. In practice, the
goal is of course not generating random documents through
these distributions, but rather inferring the distributions from
observed documents. This can be accomplished by varia-
tional Bayes approximations (Blei et al., 2003) or by Gibbs
sampling (Griffiths and Steyvers, 2004). Both of these ap-
proaches are incremental in their spirit, so that our imple-
mentation (again, in pure Python with numpy, and again
the only one of its kind that we know of) “only” had to abstract
away from the original notations and implicit corpus-size
allocations to be made truly memory independent. Once the
distributions have been obtained, it is possible to assign top-
ics to new, unseen documents, through our transformation
interface.
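For concreteness, the (smoothed) generative process underlying LDA,
with $K$ topics, Dirichlet hyperparameters $\alpha$ and $\beta$,
per-document topic distributions $\theta_d$ and per-topic word
distributions $\phi_k$, can be summarised as

$$\phi_k \sim \mathrm{Dir}(\beta), \; k = 1,\dots,K; \qquad \theta_d \sim \mathrm{Dir}(\alpha) \;\text{for each document } d;$$

$$z_{d,n} \sim \mathrm{Mult}(\theta_d), \qquad w_{d,n} \sim \mathrm{Mult}(\phi_{z_{d,n}}) \;\text{for each word position } n.$$

Inference reverses this process: given the observed words, it
estimates $\theta$ and $\phi$, and the per-document $\theta_d$ is
what the lda[new_document] transformation shown earlier exposes as
sparse (topicId, weight) pairs.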
2.2. Deployment
The framework is heavily documented and is available
from http://nlp.fi.muni.cz/projekty/gensim/.
This website contains sections which describe the framework
and provide usage tutorials, as well as installation instructions.
The framework is open sourced and distributed under an
OSI-approved LGPL license.
3. Application of the Framework
“An idea that is developed and put into action is more important
than an idea that exists only as an idea.”
Hindu Prince Gautama Siddharta, the founder of Buddhism,
563–483 B.C.
3.1. Motivation
Many digital libraries today start to offer browsing features
based on pairwise document content similarity. For collec-
tions having hundreds of thousands documents, computation
of similarity scores is a challenge (Elsayed et al., 2008). We
have faced this task during the project of The Digital Mathe-
matics Library DML-CZ (Sojka, 2009). The emphasis was
not on developing new IR methods for this task, although
some modifications were obviously necessary—such as an-
swering the question of what constitutes a “token”, which
differs between mathematics and the more common English
ASCII texts.
With the collection’s growth and a steady feed of new papers,
lack of scalability appeared to be the main issue. This drove
us to develop our new document similarity framework.
3.2. Data
As of today, the corpus contains over 61,293 fulltext docu-
ments for a total of about 270 million tokens. There are
mathematical papers from the Czech Digital Mathematics
Library DML-CZ, http://dml.cz (22,991 papers), from
the NUMDAM repository, http://numdam.org (17,636 papers),
and from the math part of arXiv,
http://arxiv.org/archive/math (20,666 papers). After
filtering out word types that either appear less than five times
in the corpus (mostly OCR errors) or in more than one half
of the documents (stop words), we are left with 315,167
distinct word types. Although this is by no means an excep-
tionally big corpus, it already prohibits storing the sparse
term-document matrices in main memory, ruling out most
available VSM software systems.
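Such a pruning pass fits naturally into the streaming setting; the
following is a small illustrative sketch in plain Python (not the
package's own dictionary-building code), assuming the documents are
available as iterables of token strings:

from collections import defaultdict

def select_vocabulary(tokenized_docs, min_count=5, max_doc_fraction=0.5):
    """One streaming pass over tokenised documents; returns surviving token types."""
    total = defaultdict(int)    # collection frequency of each token type
    docfreq = defaultdict(int)  # number of documents containing the token type
    num_docs = 0
    for tokens in tokenized_docs:
        num_docs += 1
        for tok in tokens:
            total[tok] += 1
        for tok in set(tokens):
            docfreq[tok] += 1
    # keep types that occur at least min_count times in total and in at most
    # max_doc_fraction of all documents
    return {tok for tok in total
            if total[tok] >= min_count
            and docfreq[tok] <= max_doc_fraction * num_docs}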
3.3. Results
We have tried several VSM approaches to representing doc-
uments as vectors: term weighting by TF-IDF, Latent Se-
mantic Analysis, Random Projections and Latent Dirichlet
Allocation. In all cases, we used the cosine measure to
assess document similarity.
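Since every representation above is a sparse list of (id, weight)
pairs, the cosine measure amounts to a sparse dot product normalised
by the two vector lengths; a minimal illustration in plain Python:

import math

def cosine(vec1, vec2):
    """Cosine similarity between two sparse vectors given as lists of (id, weight)."""
    d1, d2 = dict(vec1), dict(vec2)
    dot = sum(w * d2.get(i, 0.0) for i, w in d1.items())
    norm1 = math.sqrt(sum(w * w for w in d1.values()))
    norm2 = math.sqrt(sum(w * w for w in d2.values()))
    if norm1 == 0.0 or norm2 == 0.0:
        return 0.0
    return dot / (norm1 * norm2)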
When evaluating data scalability, one of our two main design
goals (together with ease of use), we note memory usage is
now dominated by the transformation models themselves.
These in turn depend on the vocabulary size and the number
of topics (but not on the training corpus size). With 315,167
word types and 200 latent topics, both LSA and LDA models
take up about 480 MB of RAM.
Although evaluation of the quality of the obtained similari-
ties is not the subject of this paper, it is of course of utmost
practical importance. Here we note that it is notoriously
hard to evaluate the quality, as even the preferences of differ-
ent types of similarity are subjective (match of main topic,
or subdomain, or specific wording/plagiarism) and depend
on the motivation of the reader. For this reason, we have
decided to present all the computed similarities to our library
users at once, see e.g. http://dml.cz/handle/10338.dmlcz/100785/SimilarArticles. At the
present time, we are gathering feedback from mathemati-
cians on these results and it is worth noting that the frame-
work proposed in this paper makes such side-by-side com-
parison of methods straightforward and feasible.
4. Conclusion
We believe that our framework makes an important step in
the direction of current trends in Natural Language Process-
ing and fills a practical gap in existing software systems. We
have argued that the common practice, where each novel
topical algorithm gets implemented from scratch (often in-
venting, unfortunately, yet another I/O format for its data in
the process) is undesirable. We have analysed the reasons
for this practice and hypothesised that this is partly due to the
steep API learning curve of existing IR frameworks.
Our framework makes a conscious effort to make parsing,
processing and transforming corpora into vector spaces as
intuitive as possible. It is platform independent and requires
no compilation or installations past Python+numpy. As an
added bonus, the package provides ready implementations of
some of the popular IR algorithms, such as Latent Semantic
Analysis and Latent Dirichlet Allocation. These are novel,
pure-Python implementations that make use of modern state-
of-the-art iterative algorithms. This enables them to work
over practically unlimited corpora, which no longer need to
fit in RAM.
We believe this package is useful to topic modelling experts
in implementing new algorithms as well as to the general
NLP community, which is eager to try out these algorithms
but often finds the task of adapting the original imple-
mentations (not to mention the original articles!) to its needs
quite daunting.
Future work will include comparison of the usefulness of
different topical models to the users of our Digital Math-
ematical Library, as well as further improving the range,
efficiency and scalability of popular topic modelling meth-
ods.
Acknowledgments
We acknowledge the support of grant MUNI/E/0084/2009 of
the Rector of Masaryk University program for PhD students’
research. Partial support of grants by EU #250503 CIP-ICT-
PSP EuDML and by the Ministry of Education of CR within
the Centre of basic research LC536 is acknowledged, too.
We would also like to thank the anonymous reviewer for pro-
viding us with additional pointers and valuable comments.
5. References
J. Baldridge, T. Morton, and G. Bierner. 2002. The
OpenNLP maximum entropy package. Technical report.
http://maxent.sourceforge.net/.
Steven Bird and Edward Loper. 2004. NLTK: The Natural
Language Toolkit. Proceedings of the ACL demonstration
session, pages 214–217.
David M. Blei and John D. Lafferty. 2009. Visualizing
Topics with Multi-Word Expressions. Arxiv preprint
http://arxiv.org/abs/0907.1013.
David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003.
Latent Dirichlet Allocation. The Journal of Machine
Learning Research, 3:993–1022.
R. F. Boisvert, R. Pozo, and K.A. Remington. 1996. The
matrix market formats: Initial design. Technical report,
Applied and Computational Mathematics Division, NIST.
Matthew Brand. 2006. Fast low-rank modifications of the
thin singular value decomposition. Linear Algebra and its
Applications, 415(1):20–30, May.
http://dx.doi.org/10.1016/j.laa.2005.07.021.
Jonathan Chang, Jordan Boyd-Graber, Chong Wang, Sean
Gerrish, and David M. Blei. 2009. Reading Tea Leaves:
How Humans Interpret Topic Models. volume 31, Van-
couver, British Columbia, CA.
Hamish Cunningham. 2002. GATE, a General Architecture
for Text Engineering. Computers and the Humanities,
36(2):223–254. http://gate.ac.uk/.
S. Deerwester, S. T. Dumais, G. W. Furnas, T. K. Landauer,
and R. Harshman. 1990. Indexing by Latent Semantic
Analysis. Journal of the American society for Information
science, 41(6):391–407.
J. Demšar, B. Zupan, G. Leban, and T. Curk. 2004. Orange:
From experimental machine learning to interactive data
mining. White Paper, Faculty of Computer and Informa-
tion Science, University of Ljubljana.
Tamer Elsayed, Jimmy Lin, and Douglas W. Oard. 2008.
Pairwise Document Similarity in Large Collections with
MapReduce. In HLT ’08: Proceedings of the 46th Annual
Meeting of the Association for Computational Linguis-
tics on Human Language Technologies, pages 265–268,
Morristown, NJ, USA. Association for Computational
Linguistics.
E. Frank, M. A. Hall, G. Holmes, R. Kirkby, B. Pfahringer,
and I. H. Witten. 2005. Weka: A machine learning work-
bench for data mining. Data Mining and Knowledge Dis-
covery Handbook: A Complete Guide for Practitioners
and Researchers, pages 1305–1314.
G. Gorrell. 2006. Generalized Hebbian algorithm for in-
cremental Singular Value Decomposition in Natural Lan-
guage Processing. In Proceedings of 11th Conference of
the European Chapter of the Association for Computa-
tional Linguistics (EACL), Trento, Italy, pages 97–104.
T. L. Griffiths and M. Steyvers. 2004. Finding scientific
topics. Proceedings of the National Academy of Sciences,
101(Suppl 1):5228.
Thorsten Joachims. 1999. SVMlight: Support Vector Machine.
http://svmlight.joachims.org/, University of Dortmund.
Brian W. Kernighan and P. J. Plauger. 1976. Software Tools.
Addison-Wesley Professional.
Adam Kilgarriff and Gregory Grefenstette. 2003. Introduc-
tion to the Special Issue on the Web as Corpus. Computa-
tional Linguistics, 29(3):333–347.
Francis Maes. 2009. Nieme: Large-Scale Energy-Based
Models. The Journal of Machine Learning Research,
10:743–746.
http://jmlr.csail.mit.edu/papers/volume10/maes09a/maes09a.pdf.
A. K. McCallum. 2002. MALLET: A Machine Learning
for Language Toolkit. http://mallet.cs.umass.edu.
M. D. McIlroy, E. N. Pinson, and B. A. Tague. 1978. UNIX
Time-Sharing System: Foreword. The Bell System Techni-
cal Journal, 57(6 (part 2)), July/August.
P. V. Ogren, P. G. Wetzler, and S. J. Bethard. 2008. ClearTK:
A UIMA toolkit for statistical natural language process-
ing. Towards Enhanced Interoperability for Large HLT
Systems: UIMA for NLP, page 32.
C. H. Papadimitriou, P. Raghavan, H. Tamaki, and S. Vem-
pala. 2000. Latent semantic indexing: A probabilistic
analysis. Journal of Computer and System Sciences,
61(2):217–235.
W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P.
Flannery. 1992. Numerical recipes in C. Cambridge
Univ. Press, Cambridge MA, USA.
Gerard Salton, A. Wong, and C. S. Yang. 1975. A vector
space model for automatic indexing. Communications of
the ACM, 18(11):620.
Petr Sojka. 2009. An Experience with Building Digital
Open Access Repository DML-CZ. In Proceedings of
CASLIN 2009, Institutional Online Repositories and Open
Access, 16th International Seminar, pages 74–78, Teplá
Monastery, Czech Republic. University of West Bohemia,
Pilsen, CZ.
Mark Steyvers and Tom Griffiths, 2007. Probabilistic Topic
Models, pages 427–446. Psychology Press, February.
Yee Whye Teh, Michael I. Jordan, Matthew J. Beal,
and David M. Blei. 2006. Hierarchical Dirichlet Pro-
cesses. Journal of the American Statistical Association,
101(476):1566–1581.
S. K. M. Wong and V. V. Raghavan. 1984. Vector space
model of information retrieval: a reevaluation. In Pro-
ceedings of the 7th annual international ACM SIGIR
conference on Research and development in information
retrieval, pages 167–185. British Computer Society, Swin-
ton, UK.
T. Zito, N. Wilbert, L. Wiskott, and P. Berkes. 2008. Mod-
ular toolkit for Data Processing (MDP): a Python data
processing framework. Frontiers in Neuroinformatics, 2.
http://mdp-toolkit.sourceforge.net/.
