Conference Paper

Multi-task Multiple Kernel Learning

DOI: 10.1137/1.9781611972818.71
Conference: Proceedings of the Eleventh SIAM International Conference on Data Mining (SDM 2011), April 28-30, 2011, Mesa, Arizona, USA
Source: DBLP


This paper presents two novel formulations for learning shared feature representations across multiple tasks. The idea is to pose the problem as that of learning a shared kernel, constructed from a given set of base kernels, leading to improved generalization in all the tasks. The first formulation employs an (l1, lp), p ≥ 2, mixed-norm regularizer that promotes sparse combinations of the base kernels and unequal weightings across tasks, enabling the formulation to work with unequally reliable tasks. While this convex formulation can be solved using a suitable mirror-descent algorithm, it may not learn shared feature representations that are sparse. The second formulation extends these ideas to learning sparse feature representations that are constructed from multiple base kernels and shared across multiple tasks. The sparse feature representation learnt by this formulation is essentially a direct product of low-dimensional subspaces lying in the induced feature spaces of a few base kernels. The formulation is posed as an (l1, lq), q ≥ 1, mixed Schatten-norm regularized problem. One main contribution of this paper is a novel mirror-descent based algorithm for solving this problem, which is not a standard set-up studied in the optimization literature. The proposed formulations can also be understood as generalizations of the multiple kernel learning framework to the case of multiple tasks and are hence suitable for various learning applications. Simulation results on real-world datasets show that the proposed formulations generalize better than state-of-the-art methods, and also illustrate the efficacy of the proposed mirror-descent based algorithms.
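
To make the shared-kernel idea concrete, the sketch below is a minimal, self-contained illustration, not the paper's algorithm: it learns a convex combination of base kernels by entropic mirror descent, i.e. exponentiated-gradient updates on the probability simplex. The Gaussian base kernels, the kernel-target-alignment objective, and all variable names are illustrative assumptions; the paper's formulations instead optimize SVM-based multi-task objectives under (l1, lp) mixed-norm regularization.

import numpy as np

# Toy sketch (not the paper's algorithm): learn a convex combination of
# base kernels via entropic mirror descent (exponentiated gradient) on
# the probability simplex. The kernel-target-alignment objective and the
# Gaussian base kernels are illustrative assumptions.

def gaussian_kernel(X, gamma):
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=60))

base_kernels = [gaussian_kernel(X, g) for g in (0.01, 0.1, 1.0, 10.0)]
yyT = np.outer(y, y)

eta = np.full(len(base_kernels), 1.0 / len(base_kernels))  # kernel weights
step = 0.5
for _ in range(50):
    # gradient of the negated alignment <sum_k eta_k K_k, y y^T>
    grad = -np.array([np.sum(K * yyT) for K in base_kernels])
    grad /= np.abs(grad).max()        # crude rescaling for a stable step size
    eta = eta * np.exp(-step * grad)  # entropic mirror-descent update
    eta /= eta.sum()                  # renormalize onto the simplex

print("learned kernel weights:", np.round(eta, 3))

Because this illustrative objective is linear in the weights, the multiplicative updates concentrate mass on the best-aligned kernel, mimicking the sparse combinations that the (l1, lp) regularizer promotes.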



Available from: Pratik Jawanpuria, Mar 14, 2014
  • Source
    • "Existing approaches consider several different types of information sharing strategies. For example, [1], [14] and [24] applied a mixed-norm regularizer on the weights of each linear model (task), which forces tasks to be related, and, at the same time, achieves different levels of innertask and inter-task sparsity on the weights. Another example is the model proposed in [34], which considers T tasks and restricts the T Support Vector Machine (SVM) weights to be close to a common weight, such that the weights from all tasks are related. "
    ABSTRACT: A traditional and intuitively appealing Multi-Task Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with a (partially) shared kernel function, which allows information sharing amongst tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a Multi-Objective Optimization (MOO) problem, which considers the concurrent optimization of all task objectives involved in the Multi-Task Learning (MTL) problem. Motivated by this observation, and arguing that the former approach is heuristic, we propose a novel Support Vector Machine (SVM) MT-MKL framework that considers an implicitly-defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using algorithms we derived, we demonstrate through a series of experimental results that the framework is capable of achieving better classification performance when compared to other similar MTL approaches.
    IEEE Transactions on Neural Networks and Learning Systems, 01/2015; 26(1):51-61. DOI: 10.1109/TNNLS.2014.2309939. (A toy sketch illustrating such conic combinations of task objectives appears after this list.)
  • ABSTRACT: Facial action unit (AU) detection is a challenging topic in computer vision and pattern recognition. Most existing approaches design classifiers to detect individual AUs or AU combinations without considering the intrinsic relations among AUs. This paper presents a novel method, lp-norm multi-task multiple kernel learning (MTMKL), which jointly learns the classifiers for detecting the absence and presence of multiple AUs. lp-norm MTMKL is an extension of regularized multi-task learning that learns shared kernels from a given set of base kernels across all the tasks within Support Vector Machines (SVMs). Our approach has several advantages over existing methods: (1) AU detection is transformed into an MTL problem, where, given a specific frame, multiple AUs are detected simultaneously by exploiting their inter-relations; (2) lp-norm multiple kernel learning is applied to increase the discriminative power of the classifiers. Our experimental results on the CK+ and DISFA databases show that the proposed method outperforms the state-of-the-art methods for AU detection.
    2014 IEEE Winter Conference on Applications of Computer Vision (WACV); 03/2014
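
The conic-combination idea in the TNNLS abstract above can be made concrete with a toy example. The sketch below uses hypothetical quadratic task objectives, not that paper's SVM duals: minimizing lam * f1 + (1 - lam) * f2 for different lam traces distinct Pareto-optimal trade-offs, and the plain average of objectives corresponds to the single point lam = 0.5.

import numpy as np

# Toy sketch with hypothetical quadratic task objectives (not the SVM
# duals from the cited paper): each value of lam yields a different
# Pareto-optimal trade-off between the two tasks.

a = np.array([1.0, 0.0])  # minimizer of the task-1 objective
b = np.array([0.0, 1.0])  # minimizer of the task-2 objective

def f1(w):
    return np.sum((w - a) ** 2)

def f2(w):
    return np.sum((w - b) ** 2)

for lam in (0.1, 0.3, 0.5, 0.7, 0.9):
    # the conic combination of the two quadratics has a closed-form minimizer
    w_star = lam * a + (1 - lam) * b
    print(f"lam={lam:.1f}  f1={f1(w_star):.3f}  f2={f2(w_star):.3f}")

Each printed pair (f1, f2) is Pareto-optimal: improving one task's objective necessarily worsens the other's, which is why averaging the objectives recovers only one of many defensible solutions.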