Wilson Fearn’s scientific contributions

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (7)


Figure 1: Comparing vocabulary size (in millions) vs the total number of words (in 10s of millions) for the AP News and Amazon corpora. Note that the vocabulary size of AP News w.r.t. the number of documents plateaus much faster than the noisier Amazon corpus.
Figure 2: Pearson correlation between the relative performance variables (train time, test time, accuracy, and vocabulary size) from the results of the different preprocessing methods.
Table: Rare word filtering on the Amazon dataset, across various levels. Scores are the relative performance of each method over the no preprocessing baseline. Results are the average (and std) relative performance of the four models, across the five dataset seeds. Bold indicates statistical similarity to the best score, from a two-sample t-test with α = 0.05.
Exploring the Relationship Between Algorithm Performance, Vocabulary, and Run-Time in Text Classification
  • Preprint
  • File available

April 2021 · 51 Reads

Wilson Fearn · Orion Weller · Kevin Seppi

Text classification is a significant branch of natural language processing, and has many applications including document classification and sentiment analysis. Unsurprisingly, those who do text classification are concerned with the run-time of their algorithms, many of which depend on the size of the corpus' vocabulary due to their bag-of-words representation. Although many studies have examined the effect of preprocessing techniques on vocabulary size and accuracy, none have examined how these methods affect a model's run-time. To fill this gap, we provide a comprehensive study that examines how preprocessing techniques affect the vocabulary size, model performance, and model run-time, evaluating ten techniques over four models and two datasets. We show that some individual methods can reduce run-time with no loss of accuracy, while some combinations of methods can trade 2-5% of the accuracy for up to a 65% reduction of run-time. Furthermore, some combinations of preprocessing techniques can even provide a 15% reduction in run-time while simultaneously improving model accuracy.
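
The run-time savings described in the abstract come from shrinking the bag-of-words vocabulary that these models operate on: fewer vocabulary entries mean fewer features for the classifier to process. As a minimal sketch of one such preprocessing step, rare word filtering, the snippet below uses scikit-learn's CountVectorizer; the toy corpus and the min_df threshold are illustrative assumptions rather than the paper's experimental setup.

```python
# Minimal sketch: rare-word filtering shrinks the bag-of-words vocabulary,
# which in turn shrinks the feature matrix a classifier must process.
# The corpus and the min_df threshold below are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "the product arrived quickly and works great",
    "great product, works as described",
    "arrived broken, very disappointed",
    "the seller shipped quickly",
]

# Baseline with no preprocessing: keep every token that appears at all.
baseline = CountVectorizer()
X_base = baseline.fit_transform(corpus)

# Rare-word filtering: drop tokens appearing in fewer than 2 documents.
filtered = CountVectorizer(min_df=2)
X_filt = filtered.fit_transform(corpus)

print("baseline vocabulary size:", len(baseline.vocabulary_))
print("filtered vocabulary size:", len(filtered.vocabulary_))
print("feature matrix shapes:", X_base.shape, "->", X_filt.shape)
```

The smaller feature matrix is where the run-time reduction comes from; the trade-off studied in the paper is how much accuracy such pruning costs, if any.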



Figure 4: Plot showing human agreement with each model type. CopulaLDA performs slightly worse than LDA. Humans preferred topic assignments from Anchor Words by a wide margin.
Automatic Evaluation of Local Topic Quality

May 2019 · 28 Reads

[...] · Wilson Fearn · [...] · Kevin Seppi

Topic models are typically evaluated with respect to the global topic distributions that they generate, using metrics such as coherence, but without regard to local (token-level) topic assignments. Token-level assignments are important for downstream tasks such as classification. Even recent models, which aim to improve the quality of these token-level topic assignments, have been evaluated only with respect to global metrics. We propose a task designed to elicit human judgments of token-level topic assignments. We use a variety of topic model types and parameters and discover that global metrics agree poorly with human assignments. Since human evaluation is expensive we propose a variety of automated metrics to evaluate topic models at a local level. Finally, we correlate our proposed metrics with human judgments from the task on several datasets. We show that an evaluation based on the percent of topic switches correlates most strongly with human judgment of local topic quality. We suggest that this new metric, which we call consistency, be adopted alongside global metrics such as topic coherence when evaluating new topic models.
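
The consistency metric proposed here scores how rarely a document's token-level topic assignments switch from one topic to another. The sketch below illustrates that idea under a simple adjacent-token definition; the function name and the exact formulation are assumptions for illustration, not necessarily the paper's.

```python
# Minimal sketch of a switch-based "consistency" score: the fraction of
# adjacent token pairs in a document whose topic assignment does not change.
# Higher is more consistent (fewer topic switches). The exact definition
# used in the paper may differ; this only illustrates the idea.
from typing import List

def consistency(token_topics: List[int]) -> float:
    """Return 1 - (topic switches / adjacent token pairs) for one document."""
    if len(token_topics) < 2:
        return 1.0  # a single-token document cannot switch topics
    pairs = zip(token_topics, token_topics[1:])
    switches = sum(1 for prev, curr in pairs if prev != curr)
    return 1.0 - switches / (len(token_topics) - 1)

# Example: topic assignments produced by some topic model for one document.
doc_assignments = [3, 3, 3, 7, 7, 3, 3, 3, 3, 3]
print(consistency(doc_assignments))  # ~0.778: 2 switches over 9 pairs
```

Averaging such a score over a corpus gives a local, token-level quality signal that can sit alongside global metrics like coherence.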


Cross-referencing using Fine-grained Topic Modeling

May 2019 · 16 Reads

Cross-referencing, which links passages of text to other related passages, can be a valuable study aid for facilitating comprehension of a text. However, cross-referencing requires first, a comprehensive thematic knowledge of the entire corpus, and second, a focused search through the corpus specifically to find such useful connections. Due to this, cross-reference resources are prohibitively expensive and exist only for the most well-studied texts (e.g. religious texts). We develop a topic-based system for automatically producing candidate cross-references which can be easily verified by human annotators. Our system utilizes fine-grained topic modeling with thousands of highly nuanced and specific topics to identify verse pairs which are topically related. We demonstrate that our system can be cost effective compared to having annotators acquire the expertise necessary to produce cross-reference resources unaided.
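
One way to picture the pipeline: represent each verse as a distribution over fine-grained topics, score verse pairs by the similarity of those distributions, and surface the top-scoring pairs as candidate cross-references for annotators to verify. The sketch below uses cosine similarity over toy topic vectors; the scoring choice and the tiny six-topic distributions are illustrative assumptions, whereas the paper's system relies on topic models with thousands of highly specific topics.

```python
# Minimal sketch: propose candidate cross-references by ranking verse pairs
# on the similarity of their topic distributions. The toy distributions and
# cosine scoring are illustrative assumptions, not the paper's exact method.
import numpy as np
from itertools import combinations

# verse_topics[v] is verse v's distribution over topics (only 6 topics here;
# the paper uses thousands of much finer-grained topics).
verse_topics = {
    "Gen 1:1":  np.array([0.70, 0.10, 0.10, 0.05, 0.05, 0.00]),
    "John 1:1": np.array([0.60, 0.20, 0.10, 0.05, 0.05, 0.00]),
    "Ps 23:1":  np.array([0.05, 0.05, 0.10, 0.60, 0.10, 0.10]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Score every verse pair and surface the most topically related ones
# as candidate cross-references for human verification.
candidates = sorted(
    ((cosine(verse_topics[u], verse_topics[v]), u, v)
     for u, v in combinations(verse_topics, 2)),
    reverse=True,
)
for score, u, v in candidates:
    print(f"{u} <-> {v}: {score:.3f}")
```

Only the top-ranked pairs would be shown to annotators, which is what keeps the human verification step cheap relative to building a cross-reference resource unaided.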




Citations (3)


... Apart from the coherence-based metric from [12] that has a high correlation with human judgement and implementations of the basic automatic metrics for quality evaluation (such as NPMI [16], switchP [17]), there is also a variant of an LLM-based metric inspired by [18]. ...

Reference:

AutoTM 2.0: Automatic Topic Modeling Framework for Documents Analysis
Automatic Evaluation of Local Topic Quality
  • Citing Conference Paper
  • January 2019

... The labor-intensive nature of these manual cross-referencing projects highlights the reasons why scholarly cross-reference sets are so rare. Lund et al. (2019) investigated reducing the cost to create a cross-reference set by using topic modeling to suggest cross-references and crowdsourcing to evaluate them. However, creating a set of cross-references will be labor-intensive no matter how much technology improves. ...

Cross-referencing Using Fine-grained Topic Modeling
  • Citing Conference Paper
  • January 2019

... Mechanisms for interaction include constraints that words should or should not appear in the same topic, [Hu et al., 2014], and defining a set of "anchor words" to characterize a topic; the latter is considered easier to guide. Lund et al. [2018] builds on it to learn predictive topics for downstream prediction tasks. Finally, Parikh and Grauman [2011] learns mid-level features for image classification by jointly finding predictive hyper-planes, and learning a model to predict the nameability of those hyper-planes, but this depends heavily on users being able to inspect instances to see how a latent feature varies among them. ...

Labeled Anchors and a Scalable, Transparent, and Interactive Classifier
  • Citing Conference Paper
  • January 2018