Conference Paper

Multi-class Boosting for Early Classification of Sequences

DOI: 10.5244/C.24.24 Conference: British Machine Vision Conference, BMVC 2010, Aberystwyth, UK, August 31 - September 3, 2010. Proceedings
Source: DBLP

ABSTRACT We propose a new boosting algorithm for sequence classification, in particular one that enables early classification among multiple classes. In many practical problems, we would like to classify a sequence into one of K classes as quickly as possible, without waiting for the end of the sequence. Recently, an early-classification boosting algorithm employing a weight propagation technique was proposed for binary classification. In this paper, we extend this model to multi-class early classification. The derivation is based on a loss-function approach, and the resulting model is simple and effective. We validated its performance through experiments on real-world data and confirmed the superiority of our approach over the previous method.
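As a rough illustration of the early-classification setting (this is not the paper's boosting derivation; `early_classify`, `weak_learners`, `alphas`, and `margin` are all hypothetical names), a boosted multi-class classifier can accumulate per-class scores over a growing prefix and commit to a class as soon as the top score leads the runner-up by a confidence margin:

```python
import numpy as np

def early_classify(sequence, weak_learners, alphas, n_classes, margin=1.0):
    """Illustrative early classification sketch (not the paper's method).

    weak_learners[t] maps the frame at step t to a length-n_classes vote
    vector; alphas[t] is its boosting weight. We commit to a class once
    the leading score exceeds the runner-up by `margin`.
    """
    scores = np.zeros(n_classes)
    for t, frame in enumerate(sequence):
        if t >= len(weak_learners):
            break
        scores += alphas[t] * np.asarray(weak_learners[t](frame))
        ordered = np.sort(scores)
        if ordered[-1] - ordered[-2] >= margin:
            # Confident enough: decide before the sequence ends.
            return int(np.argmax(scores)), t + 1
    # Fell through: decide only after seeing the whole sequence.
    return int(np.argmax(scores)), len(sequence)
```

The trade-off between earliness and accuracy is controlled here by `margin`: a larger margin delays the decision but makes it more reliable.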

Available from: Hiroshi Sawada, Sep 03, 2015
  • "Although few researchers have paid attention to early facial expression recognition, effective methods have been proposed in other early recognition fields such as online handwriting classification [6], [8]. In [6], Ishiguro et al. proposed a multi-class early classification model based on AdaBoost, and they showed the effectiveness of their method through classification experiments on driver behaviors and online handwritings."
    ABSTRACT: This work investigates a new, challenging problem: how to recognize facial expressions as early as possible, whereas most work focuses on improving the recognition rate of facial expression recognition. The features of facial expressions in their early stage are unfortunately very sensitive to noise because of their low intensity. We therefore propose a novel wavelet spectral subtraction method to spatio-temporally refine subtle facial expression features. Moreover, to achieve early facial expression recognition, we introduce an early AdaBoost algorithm for the facial expression recognition problem. Experiments using our database, built with high-frame-rate 3D sensing, showed that the proposed method has promising performance on early facial expression recognition.
    Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Anchorage, Alaska, USA, October 9-12, 2011; 01/2011
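For intuition on the spectral-subtraction idea mentioned in the abstract above, here is the classic magnitude-domain form (illustrative only; the cited paper uses a wavelet-domain variant, and `spectral_subtraction`, `alpha`, and `floor` are hypothetical names): subtract a scaled noise-magnitude estimate from the observed spectrum, clamp negatives, and reattach the original phase.

```python
import numpy as np

def spectral_subtraction(signal_spec, noise_spec, alpha=1.0, floor=0.0):
    """Classic magnitude spectral subtraction (illustrative sketch).

    signal_spec, noise_spec: complex spectra of the noisy signal and the
    noise estimate. Subtract alpha * |noise| from |signal|, clamp to a
    fraction `floor` of the original magnitude, keep the original phase.
    """
    mag = np.abs(signal_spec)
    phase = np.angle(signal_spec)
    cleaned = np.maximum(mag - alpha * np.abs(noise_spec), floor * mag)
    return cleaned * np.exp(1j * phase)  # reattach original phase
```

Clamping at a small positive `floor` rather than zero is a common way to avoid the "musical noise" artifacts of hard zeroing.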
  • ABSTRACT: We propose a new boosting method for the classification of time sequences. In on-line classification, it is essential in many practical cases to classify a time sequence as quickly as possible; this type of classification is called "early classification." Recently, an AdaBoost-based "Earlyboost" was proposed, which is known for its efficiency. In this paper, we propose a LogitBoost-based early classification that further improves on Earlyboost. We describe the structure of the proposed method and experimentally verify its performance.
    Computer Analysis of Images and Patterns - 14th International Conference, CAIP 2011, Seville, Spain, August 29-31, 2011, Proceedings, Part I; 01/2011
  • ABSTRACT: In this paper, we propose a new gesture recognition method that is helpful for man-machine interfaces. Most traditional methods need the whole gesture sequence, so a system has to wait for the end of the gesture before starting recognition; this causes a time delay between the user's action and the machine's response. Early recognition determines the recognition result before the end of the gesture. The proposed method requires detecting neither the beginning nor the end of the gesture, because it detects a posture unique to each gesture class. In our experiment, we confirmed that the result is determined quite early with high recognition accuracy.
    The Brain & Neural Networks 01/2012; 19(4):167-174. DOI:10.3902/jnns.19.167
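The unique-posture idea in the last abstract can be sketched as a simple scan (an illustrative sketch, not the paper's method; `early_gesture_recognition`, `key_postures`, and `match` are hypothetical names): the result is fixed the moment any class's distinguishing posture appears, with no gesture segmentation needed.

```python
def early_gesture_recognition(frames, key_postures, match):
    """Illustrative early recognition by unique-posture detection.

    key_postures: maps gesture class -> a posture unique to that class.
    match(frame, posture): hypothetical similarity predicate.
    Returns (class, frames_consumed); class is None if nothing matched.
    """
    for t, frame in enumerate(frames):
        for cls, posture in key_postures.items():
            if match(frame, posture):
                # Decision fixed before the gesture ends; no need to
                # detect the gesture's beginning or end.
                return cls, t + 1
    return None, len(frames)  # no unique posture observed
```

Because the decision depends only on spotting one key posture, the method sidesteps segmentation entirely, which is the design choice the abstract highlights.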