Minimal Cost Complexity Pruning of Meta-Classifiers

Source: CiteSeer


Minimum description length is described in (Quinlan & Rivest 1989). Minimal cost-complexity pruning associates a complexity parameter with the number of terminal nodes of a decision tree. It prunes decision trees by minimizing a linear combination of the complexity (size) of the tree and its misclassification cost estimate (error rate). The degree of pruning is controlled by the weight of the complexity parameter: increasing this weight results in heavier pruning. Pruning an arbitrary meta-classifier proceeds in three stages. First, we construct a decision tree model (e.g., CART) of the original meta-classifier by learning its input/output behavior. This new model (a decision tree with base classifiers as nodes) reveals and prunes the base classifiers that do not participate in the splitting criteria and are hence redundant. The next stage a…

Copyright © 1999, American Association for Artificial Intelligence. All rights reserved.
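The minimization described above can be sketched as weakest-link pruning: each internal node t has an effective complexity parameter g(t) = (R(t) − R(T_t)) / (|T_t| − 1), where R(t) is the node's misclassification cost, R(T_t) the summed cost of its subtree's leaves, and |T_t| the subtree's leaf count; nodes whose g(t) falls below the chosen weight α are collapsed. The following is a minimal sketch under those definitions — the `Node` class and the toy error counts are illustrative assumptions, not the paper's actual meta-classifier trees:

```python
# Sketch of minimal cost-complexity (weakest-link) pruning as used in CART.

class Node:
    def __init__(self, error, left=None, right=None):
        self.error = error            # resubstitution misclassification cost R(t)
        self.left, self.right = left, right

    def is_leaf(self):
        return self.left is None and self.right is None


def leaves_and_error(node):
    """Return (number of leaves |T_t|, summed leaf cost R(T_t)) of a subtree."""
    if node.is_leaf():
        return 1, node.error
    nl, el = leaves_and_error(node.left)
    nr, er = leaves_and_error(node.right)
    return nl + nr, el + er


def weakest_link(root):
    """Find the internal node minimizing g(t) = (R(t) - R(T_t)) / (|T_t| - 1)."""
    best_alpha, best_node = None, None
    stack = [root]
    while stack:
        t = stack.pop()
        if t.is_leaf():
            continue
        n_leaves, subtree_error = leaves_and_error(t)
        g = (t.error - subtree_error) / (n_leaves - 1)
        if best_alpha is None or g < best_alpha:
            best_alpha, best_node = g, t
        stack.extend([t.left, t.right])
    return best_alpha, best_node


def prune(root, alpha):
    """Collapse weakest links whose effective alpha is <= the chosen alpha."""
    while not root.is_leaf():
        g, t = weakest_link(root)
        if g > alpha:
            break
        t.left = t.right = None       # the pruned subtree becomes a single leaf
    return root


def toy_tree():
    # root (R=10) -> [inner (R=4) -> leaf (R=1), leaf (R=2)], leaf (R=3)
    return Node(10, Node(4, Node(1), Node(2)), Node(3))
```

For this toy tree, g(inner) = (4 − 3)/1 = 1 and g(root) = (10 − 6)/2 = 2, so α = 0.5 keeps all three leaves, α = 1 collapses only the inner node, and α = 3 collapses the whole tree to its root — illustrating that a heavier complexity weight yields heavier pruning.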



Available from: Salvatore J. Stolfo, Nov 25, 2012
    ABSTRACT: This thesis research concentrates on the problem of managing a distributed collection of intelligent learning agents across large and distributed databases. The main challenge is to identify and address the issues related to the efficiency, scalability, adaptivity and compatibility of these agents, and the design and implementation of a complete and coherent distributed meta-learning system for large-scale data mining applications. The resulting system should be able to scale with many large databases and make effective use of the available system resources. Furthermore, it should be capable of adapting to changes in its computational environment and flexible enough to accommodate variations in database schema definitions. In this thesis proposal we present the architecture of JAM (Java Agents for Meta-learning), a distributed data mining system, and we describe in detail several methods to cope with the issues of scalability, efficiency, adaptivity and compatibility. Through experiments, pe…
    Full-text · Article · Jan 1998
    ABSTRACT: Different researchers hold different views of what the term meta-learning exactly means. The first part of this paper provides our own perspective, in which the goal is to build self-adaptive learners, i.e., learning algorithms that improve their bias dynamically through experience by accumulating meta-knowledge. The second part provides a survey of meta-learning as reported in the machine-learning literature. We find that, despite different views and research lines, one question remains constant: how can we exploit knowledge about learning (i.e., meta-knowledge) to improve the performance of learning algorithms? Clearly the answer to this question is key to the advancement of the field and continues to be the subject of intensive research.
    Full-text · Article · Sep 2001 · Artificial Intelligence Review