Conference Paper

Multi-task Multiple Kernel Learning

DOI: 10.1137/1.9781611972818.71
Conference: Proceedings of the Eleventh SIAM International Conference on Data Mining (SDM 2011), April 28-30, 2011, Mesa, Arizona, USA
Source: DBLP


Available from: Pratik Jawanpuria, Mar 14, 2014
    • "Existing approaches consider several different types of information sharing strategies. For example, [1], [14] and [24] applied a mixed-norm regularizer on the weights of each linear model (task), which forces tasks to be related and, at the same time, achieves different levels of inner-task and inter-task sparsity on the weights. Another example is the model proposed in [34], which considers T tasks and restricts the T Support Vector Machine (SVM) weights to be close to a common weight, such that the weights from all tasks are related."
    ABSTRACT: A traditional and intuitively appealing Multi-Task Multiple Kernel Learning (MT-MKL) method is to optimize the sum (and thus the average) of objective functions with a (partially) shared kernel function, which allows information sharing among tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a Multi-Objective Optimization (MOO) problem, which considers the concurrent optimization of all task objectives involved in the Multi-Task Learning (MTL) problem. Motivated by this observation, and arguing that the former approach is heuristic, we propose a novel Support Vector Machine (SVM) MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using the algorithms we derived, we demonstrate through a series of experimental results that the framework achieves better classification performance than other, similar MTL approaches.
    IEEE Transactions on Neural Networks and Learning Systems 01/2015; 26(1):51-61. DOI:10.1109/TNNLS.2014.2309939 · 4.29 Impact Factor
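    The "average of objective functions with a shared kernel" baseline that the abstract above contrasts against can be sketched in a few lines: each task trains its own SVM, but every task uses one common combination of base kernels. Everything below (the synthetic two-task data, the two-kernel dictionary, and the fixed weights `mu`) is an illustrative assumption, not the paper's actual setup:

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Two related binary tasks on the same input space (synthetic stand-ins).
    def make_task(shift):
        X = rng.normal(size=(80, 2))
        y = np.where(X[:, 0] + shift * X[:, 1] > 0, 1, -1)
        return X, y

    tasks = [make_task(0.5), make_task(0.7)]

    # A tiny dictionary of base kernels (hypothetical choice).
    def linear_kernel(A, B):
        return A @ B.T

    def rbf_kernel(A, B, gamma=0.5):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)

    base_kernels = [linear_kernel, rbf_kernel]
    mu = np.array([0.5, 0.5])  # kernel weights, shared by ALL tasks

    def combined_kernel(A, B):
        # Convex combination of base kernels: the (partially) shared kernel.
        return sum(m * k(A, B) for m, k in zip(mu, base_kernels))

    # Averaged-objective baseline: one SVM per task, single shared kernel.
    models = []
    for X, y in tasks:
        clf = SVC(kernel="precomputed", C=1.0).fit(combined_kernel(X, X), y)
        models.append(clf)

    # Training accuracy of each task under the shared kernel.
    accs = [(clf.predict(combined_kernel(X, X)) == y).mean()
            for (X, y), clf in zip(tasks, models)]
    ```

    The framework proposed in the abstract replaces the implicit uniform weighting of task objectives with conic combinations, tracing a path along the Pareto front; this sketch only shows the single-point baseline it subsumes.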
  • ABSTRACT: Facial action unit (AU) detection is a challenging topic in computer vision and pattern recognition. Most existing approaches design classifiers to detect AUs individually or in fixed combinations, without considering the intrinsic relations among AUs. This paper presents a novel method, lp-norm multi-task multiple kernel learning (MTMKL), which jointly learns the classifiers for detecting the absence and presence of multiple AUs. lp-norm MTMKL is an extension of regularized multi-task learning that learns kernels, shared among all tasks, from a given set of base kernels within Support Vector Machines (SVMs). Our approach has several advantages over existing methods: (1) AU detection is transformed into an MTL problem, so that, given a specific frame, multiple AUs are detected simultaneously by exploiting their inter-relations; (2) lp-norm multiple kernel learning is applied to increase the discriminative power of the classifiers. Our experimental results on the CK+ and DISFA databases show that the proposed method outperforms state-of-the-art methods for AU detection.
    2014 IEEE Winter Conference on Applications of Computer Vision (WACV); 03/2014
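    The lp-norm MTMKL idea above can be sketched as an alternating scheme: fix the shared kernel weights and train one SVM per task (per AU), then update the weights with the standard closed-form lp-norm MKL step and re-normalize onto the lp-ball. The synthetic data, the three-kernel dictionary, and the use of this generic update (rather than the paper's exact algorithm) are all assumptions for illustration:

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    p = 2.0  # the "lp" in lp-norm MTMKL: constraint ||mu||_p <= 1

    # Shared inputs with two related binary labels (stand-ins for two AUs).
    X = rng.normal(size=(100, 3))
    Y = [np.where(X[:, 0] + 0.3 * X[:, 1] > 0, 1, -1),
         np.where(X[:, 0] - 0.2 * X[:, 2] > 0, 1, -1)]

    def rbf(A, B, gamma):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)

    # Base kernel dictionary (hypothetical choice).
    Ks = [X @ X.T, rbf(X, X, 0.1), rbf(X, X, 1.0)]
    mu = np.full(len(Ks), 1.0 / len(Ks))  # kernel weights shared across tasks

    for _ in range(5):  # alternate SVM training and kernel-weight updates
        K = sum(m * Km for m, Km in zip(mu, Ks))
        norms = np.zeros(len(Ks))
        for y in Y:
            clf = SVC(kernel="precomputed", C=1.0).fit(K, y)
            a = clf.dual_coef_.ravel()   # alpha_i * y_i on support vectors
            idx = clf.support_
            for m, Km in enumerate(Ks):
                # ||w_m||^2 = mu_m^2 * a^T K_m a, accumulated over tasks
                # because the kernel weights are shared among tasks.
                norms[m] += mu[m] ** 2 * a @ Km[np.ix_(idx, idx)] @ a
        w = np.sqrt(np.maximum(norms, 1e-12))
        # Closed-form lp-norm MKL update, then projection onto the lp-ball.
        mu = w ** (2.0 / (p + 1.0))
        mu /= (mu ** p).sum() ** (1.0 / p)
    ```

    Training all tasks against the same `mu` is what couples the AU detectors; with independent weights per task the loop would reduce to ordinary single-task MKL.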