Richard Maclin
  • PhD Computer Science, UW-Madison
  • University of Minnesota, Duluth

About

Publications: 50
Reads: 13,790
Citations: 4,546

Current institution: University of Minnesota, Duluth

Publications (50)
Article
Though sexual violence is prevalent, formal reporting to police remains uncommon. Social media may provide a unique outlet for survivors of different forms of sexual violence, who might otherwise remain silent. This study examined the viral Twitter #WhyIDidntReport hashtag to synthesize survivors’ reasons for not reporting sexual violence to formal...
Conference Paper
Full-text available
Several prominent public health incidents that occurred at the beginning of this century due to adverse drug events (ADEs) have raised international awareness of governments and industries about pharmacovigilance (PhV), the science and activities to monitor and prevent adverse events caused by pharmaceutical products after they are introduced to th...
Article
Full-text available
We propose a novel approach for incorporating prior knowledge into the online binary support vector classification problem. An existing advice-taking approach, when prior knowledge is in the form of polyhedral knowledge sets in input space of data, is via knowledge-based support vector machines (KBSVMs). We adopt the formalism of passive-aggres...
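The abstract is cut off above, but it names the passive-aggressive (PA) formalism for online learning. As background only, here is a minimal sketch of the standard PA-I update for online binary classification; it omits the polyhedral knowledge-set constraints that are the paper's actual contribution, and all names in it are illustrative.

```python
import numpy as np

def pa1_update(w, x, y, C=1.0):
    """One PA-I step: leave w unchanged if the margin is satisfied, otherwise
    make the smallest (C-capped) change that removes the hinge loss on (x, y)."""
    loss = max(0.0, 1.0 - y * np.dot(w, x))
    if loss > 0.0:
        tau = min(C, loss / np.dot(x, x))   # capped step size (PA-I variant)
        w = w + tau * y * x
    return w

# Toy online run: the true label is the sign of the first feature.
rng = np.random.default_rng(0)
w = np.zeros(5)
for _ in range(200):
    x = rng.normal(size=5)
    y = 1.0 if x[0] > 0 else -1.0
    _prediction = np.sign(np.dot(w, x))     # predict the label first,
    w = pa1_update(w, x, y)                 # then learn from the revealed label
```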
Article
Full-text available
Knowledge-based support vector machines (KBSVMs) incorporate advice from domain experts, which can improve generalization significantly. A major limitation that has not been fully addressed occurs when the expert advice is imperfect, which can lead to poorer models. We propose a model that extends KBSVMs and is able to not only learn from data and...
Article
Full-text available
Knowledge-based support vector machines (KBSVMs) incorporate advice from experts, which can improve accuracy and generalization significantly. A major limitation occurs when the expert advice is noisy or incorrect, which can lead to poorer models and decreased generalization. We propose a model that extends KBSVMs and learns not only from data and...
Conference Paper
Full-text available
Inductive Logic Programming (ILP) provides an effective method of learning logical theories given a set of positive examples, a set of negative examples, a corpus of background knowledge, and specification of a search space (e.g., via mode definitions) from which to compose the theories. While specifying positive and negative examples is relatively...
Conference Paper
Full-text available
Prior knowledge, in the form of simple advice rules, can greatly speed up convergence in learning algorithms. Online learning methods predict the label of the current point and then receive the correct label (and learn from that information). The goal of this work is to update the hypothesis taking into account not just the label feedback, but als...
Conference Paper
Full-text available
Inductive Logic Programming (ILP) provides an effective method of learning logical theories given a set of positive examples, a set of negative examples, a corpus of background knowledge, and specification of a search space (e.g., via mode definitions) from which to compose the theories. While specifying positive and negative examples is relatively...
Article
Full-text available
Bootstrap Learning (BL) is a new machine learning paradigm that seeks to build an electronic student that can learn using natural instruction provided by a human teacher and by bootstrapping on previously learned concepts. In our setting, the teacher provides (very few) examples and some advice about the task at hand using a natural instruction int...
Chapter
Full-text available
The goal of transfer learning is to speed up learning in a new task by transferring knowledge from one or more related source tasks. We describe a transfer method in which a reinforcement learner analyzes its experience in the source task and learns rules to use as advice in the target task. The rules, which are learned via inductive logic programm...
Conference Paper
Full-text available
We describe an application of inductive logic programming to transfer learning. Transfer learning is the use of knowledge learned in a source task to improve learning in a related target task. The tasks we work with are in reinforcement learning domains. Our approach transfers relational macros, which are finite-state machines in which the transition...
Conference Paper
Full-text available
Many reinforcement learning domains are highly relational. While traditional temporal-difference methods can be applied to these domains, they are limited in their capacity to exploit the relational nature of the domain. Our algorithm, AMBIL, constructs relational world models in the form of relational Markov decision processes (MDP). AMBIL works b...
Conference Paper
Full-text available
Knowledge-based classification and regression methods are especially powerful forms of learning. They allow a system to take advantage of prior domain knowledge supplied either by a human user or another algorithm, combining that knowledge with data to produce accurate models. A limitation of the use of prior knowledge occurs when the provided kn...
Conference Paper
Full-text available
We describe a reinforcement learning system that transfers skills from a previously learned source task to a related target task. The system uses inductive logic programming to analyze experience in the source task, and transfers rules for when to take actions. The target task learner accepts these rules through an advice-taking algorithm, which al...
Conference Paper
Full-text available
We propose a simple mechanism for incorporating advice (prior knowledge), in the form of simple rules, into support-vector methods for both classification and regression. Our approach is based on introducing inequality constraints associated with datapoints that match the advice. These constrained datapoints can be standard examples in the tr...
Conference Paper
Full-text available
The scarcity of manually labeled data for supervised machine learning methods presents a significant limitation on their ability to acquire knowledge. The use of kernels in Support Vector Machines (SVMs) provides an excellent mechanism to introduce prior knowledge into the SVM learners, such as by using unlabeled text or existing ontologies as ad...
Conference Paper
Full-text available
We present an extensible supervised Target-Word Sense Disambiguation system that leverages upon GATE (General Architecture for Text Engineering), NSP (Ngram Statistics Package) and WEKA (Waikato Environment for Knowledge Analysis) to present an end-to-end solution that integrates feature identification, feature extraction, preprocessing and cl...
Conference Paper
Full-text available
We present a method for transferring knowledge learned in one task to a related task. Our problem solvers employ reinforcement learning to acquire a model for one task. We then transform that learned model into advice for a new task. A human teacher provides a mapping from the old task to the new task to guide this knowledge transfer. Advice is i...
Conference Paper
Full-text available
We present a framework for knowledge transfer from one reinforcement learning task to a related task through advice-taking mechanisms. We discuss the importance of transfer in complex domains such as RoboCup soccer, and show how to use automatically generated advice to perform transfer.
Conference Paper
Full-text available
We have applied five supervised learning approaches to word sense disambiguation in the medical domain. Our objective is to evaluate Support Vector Machines (SVMs) in comparison with other well known supervised learning algorithms including the naïve Bayes classifier, C4.5 decision trees, decision lists and boosting approaches. Based on these resu...
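The comparison sketched in this abstract (SVMs against naïve Bayes, C4.5-style decision trees, and boosting) can be reproduced in outline with off-the-shelf tooling. The snippet below is only a stand-in: it uses scikit-learn and synthetic data rather than the medical word sense disambiguation features used in the study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the sense-disambiguation feature vectors.
X, y = make_classification(n_samples=500, n_features=40, n_informative=10,
                           random_state=0)

models = {
    "SVM": SVC(kernel="linear"),
    "naive Bayes": GaussianNB(),
    "decision tree": DecisionTreeClassifier(max_depth=5),
    "boosting": AdaBoostClassifier(n_estimators=50),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold accuracy estimate
    print(f"{name:15s} {scores.mean():.3f}")
```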
Conference Paper
Full-text available
We present a novel formulation for providing advice to a reinforcement learner that employs support-vector regression as its function approximator. Our new method extends a recent advice-giving technique, called Knowledge-Based Kernel Regression (KBKR), that accepts advice concerning a single action of a reinforcement learner. In KBKR, users...
Article
Full-text available
An adaptive semi-supervised ensemble method, ASSEMBLE, is proposed that constructs classification ensembles based on both labeled and unlabeled data. ASSEMBLE alternates between assigning "pseudo-classes" to the unlabeled data using the existing ensemble and constructing the next base classifier using both the labeled and pseudolabeled data. Mathem...
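The alternation described here, pseudo-label the unlabeled pool with the current ensemble and then fit the next base classifier on the labeled plus pseudo-labeled data, can be illustrated with the simplified loop below. It assumes binary 0/1 labels and unweighted majority voting, which is a simplification of the paper's formulation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def assemble_style_fit(X_lab, y_lab, X_unlab, rounds=10):
    """Simplified ASSEMBLE-style loop: each round pseudo-labels the unlabeled
    data with the current ensemble, then trains a new base classifier on the
    union of labeled and pseudo-labeled examples (binary 0/1 labels assumed)."""
    ensemble = [DecisionTreeClassifier(max_depth=3).fit(X_lab, y_lab)]
    for _ in range(rounds):
        votes = np.mean([clf.predict(X_unlab) for clf in ensemble], axis=0)
        pseudo = (votes >= 0.5).astype(int)          # majority-vote pseudo-classes
        X_all = np.vstack([X_lab, X_unlab])
        y_all = np.concatenate([y_lab, pseudo])
        ensemble.append(DecisionTreeClassifier(max_depth=3).fit(X_all, y_all))
    return ensemble

def ensemble_predict(ensemble, X):
    votes = np.mean([clf.predict(X) for clf in ensemble], axis=0)
    return (votes >= 0.5).astype(int)
```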
Article
Full-text available
An ensemble consists of a set of individually trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman, 1996c) and Boosting (Freund & Schapir...
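For context on the bagging side of this comparison, a minimal bagging sketch (bootstrap resampling plus majority vote over the individually trained classifiers) follows; the base learner and ensemble size are illustrative, and non-negative integer class labels are assumed.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_fit(X, y, n_estimators=25, random_state=0):
    """Bagging: train each classifier on a bootstrap sample of the training set."""
    rng = np.random.default_rng(random_state)
    models = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(X), size=len(X))   # sample with replacement
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return models

def bagging_predict(models, X):
    """Combine the individually trained classifiers by unweighted majority vote."""
    preds = np.stack([m.predict(X) for m in models])
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, preds)
```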
Article
An ensemble consists of a set of independently trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble as a whole is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman 1996a) and Boosting (Freun...
Article
Full-text available
Learning from reinforcements is a promising approach for creating intelligent agents. However, reinforcement learning usually requires a large number of training episodes. We present a system called ratle that addresses this shortcoming by allowing a connectionist Q-learner to accept advice given, at any time and in a natural manner, by an external...
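RATLE itself compiles advice rules into a connectionist Q-learner. The sketch below is a deliberately simplified stand-in, tabular Q-learning in which advised actions get a selection bonus, meant only to convey the flavor of advice-taking; the advice function and the Gymnasium-style environment interface are assumptions, not the paper's design.

```python
import random
from collections import defaultdict

def q_learn_with_advice(env, advice, episodes=500, alpha=0.1, gamma=0.95,
                        epsilon=0.1, bonus=1.0):
    """Tabular Q-learning in which actions recommended by advice(state) get a
    bonus during greedy action selection, steering early exploration.
    Assumes discrete, hashable states and a Gymnasium-style reset()/step()."""
    Q = defaultdict(float)
    n_actions = env.action_space.n
    for _ in range(episodes):
        state, _ = env.reset()
        done = False
        while not done:
            if random.random() < epsilon:
                action = random.randrange(n_actions)
            else:
                # Advice acts as a soft preference, not a hard constraint.
                action = max(range(n_actions), key=lambda a: Q[(state, a)]
                             + (bonus if a in advice(state) else 0.0))
            next_state, reward, terminated, truncated, _ = env.step(action)
            done = terminated or truncated
            best_next = max(Q[(next_state, a)] for a in range(n_actions))
            Q[(state, action)] += alpha * (reward + gamma * best_next
                                           - Q[(state, action)])
            state = next_state
    return Q
```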
Article
Full-text available
This paper presents a new algorithm for Boosting the performance of an ensemble of classifiers. In Boosting, a series of classifiers is used to predict the class of data where later members of the series concentrate on training data that is incorrectly predicted by earlier members. To make a prediction about a new pattern, each classifier predicts...
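As background for this entry, the weight-update idea behind boosting, in which later classifiers concentrate on examples earlier ones misclassify, looks roughly like the textbook discrete AdaBoost sketch below; it is not the new Boosting variant the paper proposes, and labels are assumed to be NumPy arrays with values in {-1, +1}.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, rounds=50):
    """Discrete AdaBoost with decision stumps; y must take values in {-1, +1}."""
    n = len(X)
    w = np.full(n, 1.0 / n)                       # example weights
    models, alphas = [], []
    for _ in range(rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = float(np.sum(w[pred != y]))
        if err == 0.0:                            # perfect stump: stop early
            models.append(stump)
            alphas.append(1.0)
            break
        if err >= 0.5:                            # no better than chance: stop
            break
        alpha = 0.5 * np.log((1.0 - err) / err)   # classifier vote weight
        w *= np.exp(-alpha * y * pred)            # up-weight misclassified examples
        w /= w.sum()
        models.append(stump)
        alphas.append(alpha)
    return models, alphas

def adaboost_predict(models, alphas, X):
    score = sum(a * m.predict(X) for a, m in zip(alphas, models))
    return np.sign(score)
```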
Article
Full-text available
The primary goal of inductive learning is to generalize well -- that is, induce a function that accurately produces the correct output for future inputs. Hansen and Salamon showed that, under certain assumptions, combining the predictions of several separately trained neural networks will improve generalization. One of their key assumptions is that...
Article
Full-text available
An ensemble is a classifier created by combining the predictions of multiple component classifiers. We present a new method for combining classifiers into an ensemble based on a simple estimation of each classifier's competence. The classifiers are grouped into an ordered list where each classifier has a corresponding threshold. To classify an exam...
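The combination rule described here, an ordered list of classifiers each with its own threshold, can be sketched as follows. Because the abstract is truncated, the fall-through behavior shown (use the first classifier whose confidence clears its threshold, otherwise defer to the last one) is an assumption about the full method, and the scikit-learn-style interface is illustrative.

```python
import numpy as np

def ordered_list_predict(classifiers, thresholds, x):
    """Walk the ordered list; the first classifier whose top-class probability
    clears its threshold makes the call. Assumes scikit-learn style
    predict_proba and treats the last classifier as the default."""
    x = np.asarray(x).reshape(1, -1)
    for clf, threshold in zip(classifiers, thresholds):
        proba = clf.predict_proba(x)[0]
        if proba.max() >= threshold:
            return clf.classes_[proba.argmax()]
    last = classifiers[-1]
    return last.classes_[last.predict_proba(x)[0].argmax()]
```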
Article
Full-text available
As machine learning has graduated from toy problems to "real world" applications, users are finding that "real world" problems require them to perform aspects of problem solving that are not currently addressed by much of the machine learning literature. Specifically, users are finding that the tasks of selecting a set of features to define a probl...
Article
Full-text available
An ensemble consists of a set of independently trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble as a whole is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman 1996a) and Boosting (Freun...
Conference Paper
Full-text available
An ensemble is a classifier created by combining the predictions of multiple component classifiers. We present a new method for combining classifiers into an ensemble based on a simple estimation of each classifier's competence. The classifiers are grouped into an ordered list where each classifier has a corresponding threshold. To classify an exam...
Article
Full-text available
Learning from reinforcements is a promising approach for creating intelligent agents. However, reinforcement learning usually requires a large number of training episodes. We present and evaluate a design that addresses this shortcoming by allowing a connectionist Q-learner to accept advice given, at any time and in a natural manner, by an external...
Article
Full-text available
This thesis defines and evaluates two systems that allow a teacher to provide instructions to a machine learner. My systems, FSkbann and ratle, expand the language that a teacher may use to provide advice to the learner. In particular, my techniques allow a teacher to give partially correct instructions about procedural tasks -- tasks that are solv...
Article
Full-text available
A number of problems confront standard automatic programming methods. One problem is that the combinatorics of search make automatic programming intractable for most real-world applications. Another problem is that most automatic programming systems require the user to express information in a form that is too complex. Also, most automatic programm...
Article
Full-text available
We describe a method for using machine learning to refine algorithms represented as generalized finite-state automata. The knowledge in an automaton is translated into an artificial neural network, and then refined with backpropagation on a set of examples. Our technique for translating an automaton into a network extends kbann, a system that trans...
Conference Paper
The KBANN system uses neural networks to refine domain theories. Currently, domain knowledge in KBANN is expressed as non- recursive, propositional rules. We extend KBANN to domain theories expressed as finite-state automata. We apply finite-state KBANN to the task of predicting how proteins fold, producing a small but statistically significant gai...
Article
Full-text available
We describe a method for using machine learning to refine algorithms represented as generalized finite-state automata. The knowledge in an automaton is translated into a corresponding artificial neural network, and then refined by applying backpropagation to a set of examples. Our technique for translating an automaton into a network extends the KB...
Chapter
A problem with learning systems is that often the language with which the user must express information to the system is too complex. This paper discusses techniques based on Explanation-Based Learning by Observation that allow the user to enter the information required by the system in a simplified form comfortable to him or her. A user enters a d...
Chapter
Full-text available
This chapter discusses transfer learning, which is one practical application of rule extraction. In transfer learning, information from one learning experience is applied to speed up learning in a related task. The chapter describes several techniques for transfer learning in SVM-based reinforcement learning, and shows results from a case study.
Article
Full-text available
We propose a novel method for reinforcement learning in domains that are best described using relational ("first-order") features. Our approach is to rapidly sample a large space of such features, selecting a good subset to use as the basis for our Q-function. Our Q-function is created via a regression model that combines the collection of first-or...
Article
Full-text available
Reinforcement learning (RL) is a machine learning technique with strong links to natural learning. However, it shares several "unnatural" limitations with many other successful machine learning algorithms. RL agents are not typically able to take advice or to adjust to new situations beyond the specific problem they are asked to learn. Due to l...
Article
Reinforcement learning (RL) methods have difficulty scaling to large, complex problems. One approach that has proven effective for scaling RL is to make use of advice provided by a human. We extend a recent advice-giving technique, called Knowledge-Based Kernel Regression (KBKR), to RL and evaluate our approach on the KeepAway subtask of the Ro...
Article
Full-text available
This report is an overview of our work on transfer in reinforcement learning using advice-taking mechanisms. The goal in transfer learning is to speed up learning in a target task by transferring knowledge from a related, previously learned source task. Our methods are designed to do so robustly, so that positive transfer will speed up learning b...
Article
This paper introduces a new type of application for ILP called Bootstrapped Learning (BL). BL brings several challenges to ILP, including the need to (a) automate the "ILP setup" problem, (b) exploit the fact that a well-meaning teacher is providing pedagogically chosen examples and may be offering hints, (c) deal with small numbers of training exa...
