Fig 1
Source publication
Landmarking is a recent and promising meta-learning strategy, which defines meta-features that are themselves efficient learning algorithms. However, the choice of landmarkers is often made in an ad hoc manner. In this paper, we propose a new perspective and set of criteria for landmarkers. Based on the new criteria, we propose a landmarker generat...
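As an illustration of the general idea (not of the paper's specific landmarker generator), the sketch below computes a classical landmarking signature for a dataset: the cross-validated accuracies of a few cheap learners serve directly as meta-features. The particular landmarkers (decision stump, naive Bayes, 1-NN), the dataset, and the use of scikit-learn are assumptions made for illustration.

```python
# Sketch of classical landmarking: a dataset is characterised by the
# cross-validated accuracies of a few cheap "landmarker" learners.
# Assumes scikit-learn; the landmarker set here is illustrative only.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

LANDMARKERS = {
    "decision_stump": DecisionTreeClassifier(max_depth=1),
    "naive_bayes": GaussianNB(),
    "one_nn": KNeighborsClassifier(n_neighbors=1),
}

def landmark_features(X, y, cv=5):
    """Return a vector of landmarker accuracies characterising (X, y)."""
    return np.array([
        cross_val_score(est, X, y, cv=cv).mean()
        for est in LANDMARKERS.values()
    ])

X, y = load_iris(return_X_y=True)
print(dict(zip(LANDMARKERS, landmark_features(X, y))))
```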
Contexts in source publication
Context 1
... have thus far been employed in much the same manner as the prototypical meta-features; they simply serve as meta-features whose purpose is to help define an algorithm-generic expertise space, in which the domain of expertise of any candidate algorithm may be defined. This former definition of landmarkers is exemplified in Figure 1 (annotated from [2]). ...
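A minimal sketch of this former use of landmarkers: each dataset is placed in an expertise space by its landmarker accuracies, and a meta-level nearest-neighbour step recommends the candidate algorithm that performed best on the closest previously seen dataset. All names and numbers below are illustrative assumptions, not values taken from Figure 1.

```python
# Sketch of the "former" use of landmarkers: landmarker accuracies map each
# dataset to a point in an expertise space; a meta-level 1-NN recommends the
# candidate algorithm that was best on the nearest previously seen dataset.
import numpy as np

# Meta-knowledge gathered offline: landmarker vectors for past datasets and
# the candidate algorithm that performed best on each of them (illustrative).
past_landmarks = np.array([
    [0.62, 0.71, 0.68],   # dataset A: (stump, naive Bayes, 1-NN) accuracies
    [0.90, 0.85, 0.93],   # dataset B
    [0.55, 0.80, 0.60],   # dataset C
])
best_algorithm = ["boosted_c45", "svm", "naive_bayes"]

def recommend(new_landmarks):
    """Return the candidate algorithm of the nearest dataset in expertise space."""
    distances = np.linalg.norm(past_landmarks - new_landmarks, axis=1)
    return best_algorithm[int(np.argmin(distances))]

print(recommend(np.array([0.58, 0.78, 0.63])))  # -> "naive_bayes"
```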
Context 2
... example, a landmarker for a boosted C4.5 decision tree algorithm could correspond to the accuracies of several learning algorithms such as a decision stump, a naïve Bayes learner, etc. A counterpart of Figure 1, using our new perception of a landmarker is depicted in Figure 4 of Section 2.2 (after further explanation of the criteria used to generate such landmarkers). ...
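A hedged sketch of this new view: the landmarker for an expensive target algorithm is a regression estimator fitted over past datasets, whose inputs are the accuracies of a small subset of the candidate algorithms (linear regression is the form reported in the citing work quoted below). The accuracy values and the choice of inputs are illustrative assumptions, not the paper's data.

```python
# Sketch of a regression-based landmarker: the landmarker for a target
# algorithm (e.g. a boosted C4.5-style tree) is a regression estimator whose
# inputs are the accuracies of a small subset of the candidate algorithms
# (here a decision stump and naive Bayes). All accuracy values are made up.
import numpy as np
from sklearn.linear_model import LinearRegression

# Rows = past datasets; columns = (stump accuracy, naive Bayes accuracy).
subset_accuracies = np.array([
    [0.60, 0.72],
    [0.85, 0.88],
    [0.55, 0.79],
    [0.91, 0.90],
])
# Observed accuracy of the (expensive) boosted tree on the same datasets.
boosted_tree_accuracy = np.array([0.78, 0.94, 0.81, 0.96])

# The fitted regression *is* the landmarker: on a new dataset we only run the
# cheap subset and estimate the boosted tree's accuracy from their accuracies.
landmarker = LinearRegression().fit(subset_accuracies, boosted_tree_accuracy)
print(landmarker.predict([[0.70, 0.82]]))
```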
Similar publications
We present a global radio frequency noise survey observed from the Fast On-orbit Recording of Transient Events (FORTE) satellite at 800 km altitude. This is a survey of squared amplitudes (R²) in 44 frequency subbands spaced by 0.5 MHz centered at 38 MHz ("low band") and 44 subbands spaced by 0.5 MHz centered at 130 MHz ("high band"). We define 13...
Citations
... Recently, the concept of landmarking (Fürnkranz & Petrak, 2001;Ler et al., 2004a;Pfahringer et al., 2000) has emerged as a technique that characterises a dataset by directly measuring the performance of simple and fast learning algorithms, called landmarkers. ...
... To date, three types of meta-attributes have been suggested: (i) dataset characteristics, including basic, statistical and information-theoretic measurements (Brazdil et al., 2003; Gama & Brazdil, 1995; Kalousis & Hilario, 2001; Lindner & Studer, 1999; Michie et al., 1994); (ii) properties of induced classifiers over the dataset in question (Bensusan, 1998; Peng et al., 2002); and (iii) measurements that represent the performance or other output of classifiers representative of the candidate algorithms, i.e. landmarkers (Fürnkranz & Petrak, 2001; Ler et al., 2004a; Pfahringer et al., 2000). Correspondingly, the types of algorithm selection problems that have been suggested include: (i) classifying an algorithm as appropriate or inappropriate on the learning task in question (given that an algorithm is appropriate if it is not considered worse than the best-performing candidate algorithm) (e.g. ...
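For type (i), the sketch below computes a few common basic, statistical and information-theoretic dataset characteristics; the specific measures and the example dataset are illustrative assumptions rather than the sets used in the cited works.

```python
# Sketch of type (i) meta-attributes: basic, statistical and
# information-theoretic characteristics of a dataset.
import numpy as np
from sklearn.datasets import load_iris

def dataset_characteristics(X, y):
    classes, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return {
        "n_instances": X.shape[0],                        # basic
        "n_features": X.shape[1],                         # basic
        "n_classes": len(classes),                        # basic
        "mean_feature_skew": float(np.mean(               # statistical
            ((X - X.mean(0)) ** 3).mean(0) / X.std(0) ** 3)),
        "class_entropy": float(-(p * np.log2(p)).sum()),  # information-theoretic
    }

X, y = load_iris(return_X_y=True)
print(dataset_characteristics(X, y))
```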
... Consequently, in (Ler et al., 2004a), we proposed alternate landmarker selection criteria (i.e. efficiency and correlativity) and correspondingly proposed landmarkers based on regression estimators whose independent variables correspond to a subset of the set of candidate algorithms. ...
Recently, we proposed a new meta-learning approach based on landmarking. This approach, which utilises a new set of criteria for selecting landmarkers, generates a set of landmarkers that are each functions of the performance of subsets of the candidate algorithms being landmarked. In this paper, we experiment with three heuristics based on correlativity and efficiency. With each heuristic, the landmarkers generated using linear regression are able to estimate accuracy well, even when utilising only a small fraction of the given algorithms. The results also show that the heuristic in which efficiencies are estimated via 1-nearest neighbour outperformed the other heuristics.
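The sketch below is a loose illustration of how correlativity and efficiency might be combined when choosing which candidate algorithms feed a regression-based landmarker, with efficiency estimated via 1-nearest neighbour over simple dataset characteristics. It does not reproduce the three heuristics of the cited paper; every name and number is an assumption made for illustration.

```python
# Hedged sketch: score candidate algorithms by how well their accuracies
# correlate with the target algorithm's accuracy (correlativity), penalised
# by a runtime estimate obtained via 1-nearest neighbour over simple dataset
# characteristics (efficiency), and keep the top-scoring subset.
import numpy as np

# Meta-data from five past datasets (all values made up).
candidate_acc = np.array([           # accuracies of 4 candidate algorithms
    [0.60, 0.72, 0.65, 0.70],
    [0.85, 0.88, 0.90, 0.84],
    [0.55, 0.79, 0.58, 0.76],
    [0.91, 0.90, 0.93, 0.88],
    [0.70, 0.75, 0.72, 0.74],
])
target_acc = np.array([0.78, 0.94, 0.81, 0.96, 0.82])   # algorithm being landmarked
runtimes = np.array([                # seconds each candidate took on each past dataset
    [0.1, 0.2, 4.0, 2.5],
    [0.3, 0.5, 9.0, 6.0],
    [0.1, 0.2, 3.5, 2.0],
    [0.4, 0.6, 11.0, 7.5],
    [0.2, 0.3, 6.0, 4.0],
])
dataset_chars = np.array([[150, 4], [1000, 20], [120, 6], [1500, 30], [400, 10]])

def estimate_runtimes(new_chars):
    """1-NN efficiency estimate: reuse the runtimes observed on the nearest past dataset."""
    nearest = int(np.argmin(np.linalg.norm(dataset_chars - new_chars, axis=1)))
    return runtimes[nearest]

def select_subset(new_chars, k=2, runtime_weight=0.05):
    """Score candidates by correlativity minus an efficiency penalty; keep the top k."""
    est_rt = estimate_runtimes(new_chars)
    scores = [
        abs(np.corrcoef(candidate_acc[:, j], target_acc)[0, 1]) - runtime_weight * est_rt[j]
        for j in range(candidate_acc.shape[1])
    ]
    return sorted(np.argsort(scores)[::-1][:k].tolist())

print(select_subset(np.array([300, 8])))   # indices of cheap, well-correlated candidates
```

The selected subset would then supply the inputs of a regression-based landmarker, as in the earlier sketch.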