Conference Paper

Learning Non-Linear Combinations of Kernels.

Conference: Advances in Neural Information Processing Systems 22: 23rd Annual Conference on Neural Information Processing Systems 2009. Proceedings of a meeting held 7-10 December 2009, Vancouver, British Columbia, Canada.
Source: DBLP

ABSTRACT: This paper studies the general problem of learning kernels based on a polynomial combination of base kernels. We analyze this problem in the case of regression and the kernel ridge regression algorithm. We examine the corresponding learning kernel optimization problem, show how that minimax problem can be reduced to a simpler minimization problem, and prove that the global solution of this problem always lies on the boundary. We give a projection-based gradient descent algorithm for solving the optimization problem, shown empirically to converge in few iterations. Finally, we report the results of extensive experiments with this algorithm using several publicly available datasets demonstrating the effectiveness of our technique.
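
As a rough sketch of the kind of procedure described above, the Python snippet below learns the weights of a quadratic (entrywise-square) combination of base kernel matrices for kernel ridge regression by projected gradient descent on the reduced objective y^T (K_mu + lambda I)^{-1} y. It is only an illustration under simplifying assumptions (fixed step size, projection onto the nonnegative part of an L2 ball) and not the authors' exact algorithm; the function and variable names are illustrative.

    import numpy as np

    def learn_quadratic_kernel_weights(base_kernels, y, lam=1.0, radius=1.0,
                                       step=0.1, n_iters=100):
        # Sketch only: projected gradient descent on the reduced objective
        #   G(mu) = y^T (K_mu + lam I)^(-1) y,  K_mu = (sum_k mu_k K_k)^2 entrywise,
        # minimized over mu >= 0 with ||mu||_2 <= radius. The constraint set and
        # the fixed step size are illustrative choices, not the paper's formulation.
        p, n = len(base_kernels), y.shape[0]
        mu = np.full(p, radius / np.sqrt(p))                    # feasible start

        def project(v):
            v = np.maximum(v, 0.0)                              # nonnegative orthant
            norm = np.linalg.norm(v)
            return v if norm <= radius else v * (radius / norm) # L2 ball of given radius

        for _ in range(n_iters):
            S = sum(m * K for m, K in zip(mu, base_kernels))    # linear combination
            K_mu = S * S                                        # entrywise square
            alpha = np.linalg.solve(K_mu + lam * np.eye(n), y)
            # dG/dmu_k = -2 * alpha^T (K_k o S) alpha, with o the entrywise product
            grad = np.array([-2.0 * alpha @ ((K * S) @ alpha) for K in base_kernels])
            mu = project(mu - step * grad)
        return mu

In practice a line search or decaying step size would replace the fixed step; it is kept constant here only to keep the sketch short.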

    ABSTRACT: Affective classification and retrieval of multimedia such as audio, image, and video have become emerging research areas in recent years. Previous research has focused on designing features and developing feature extraction methods. Generally, multimedia content can be represented with different feature representations (i.e., views). However, the feature representation most closely related to people's emotions is usually not known a priori. We propose here a novel Bayesian multiple kernel learning algorithm for affective classification and retrieval tasks. The proposed method can use different representations simultaneously (i.e., multiview learning) to obtain better prediction performance than a single feature representation (i.e., single-view learning) or a subset of features, with the advantage of automatic feature selection. In particular, our algorithm is implemented within a multilabel setup to capture the correlation between emotions, and the Bayesian formulation enables our method to produce probabilistic outputs for measuring the set of emotions triggered by a single image. As a case study, we perform classification and retrieval experiments with our algorithm for predicting people's emotional states evoked by images, using generic low-level image features. On the widely used International Affective Picture System (IAPS) data set, our approach outperforms several existing methods in terms of classification performance and interpretability of the results.
    Neurocomputing, accepted, to appear, 12/2014.
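
    To make the multi-view idea concrete, the toy sketch below simply combines one RBF kernel per feature representation into a single weighted kernel. It is purely illustrative, with fixed kernels and hand-set weights, whereas the method summarized above infers the combination (and probabilistic outputs) within a Bayesian multiple kernel learning model; the function names are hypothetical.

        import numpy as np

        def combined_multiview_kernel(views, weights, gamma=1.0):
            # Toy illustration: one RBF (Gaussian) kernel per view, i.e. per
            # feature representation, mixed with fixed nonnegative weights.
            # The cited approach learns the combination rather than fixing it.
            def rbf(X):
                sq = np.sum(X ** 2, axis=1)
                d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
                return np.exp(-gamma * np.maximum(d2, 0.0))
            return sum(w * rbf(X) for w, X in zip(weights, views))
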
    ABSTRACT: In this paper, we give a new generalization error bound for Multiple Kernel Learning (MKL) for a general class of regularizations. Our main target is dense-type regularizations, including ℓp-MKL, which imposes ℓp-mixed-norm regularization instead of ℓ1-mixed-norm regularization. According to recent numerical experiments, sparse regularization does not necessarily show good performance compared with dense-type regularizations. Motivated by this fact, this paper gives a general theoretical tool to derive fast learning rates that is applicable to arbitrary mixed-norm-type regularizations in a unifying manner. As a by-product of our general result, we show a fast learning rate for ℓp-MKL that is the tightest among existing bounds. We also show that our general learning rate achieves the minimax lower bound. Finally, we show that, when the complexities of the candidate reproducing kernel Hilbert spaces are inhomogeneous, dense-type regularization shows a better learning rate than sparse ℓ1 regularization.
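
    For reference, the ℓp-mixed-norm regularization discussed here is commonly written as follows (one standard formulation; normalizations and constants vary across papers):

        \min_{f_1 \in \mathcal{H}_1, \dots, f_M \in \mathcal{H}_M}
          \ \frac{1}{n} \sum_{i=1}^{n} \ell\Big( \sum_{m=1}^{M} f_m(x_i),\, y_i \Big)
          \ + \ \lambda \Big( \sum_{m=1}^{M} \| f_m \|_{\mathcal{H}_m}^{p} \Big)^{1/p},
        \qquad p \ge 1.

    Here each f_m lives in a reproducing kernel Hilbert space H_m; p = 1 recovers the sparse ℓ1-mixed-norm penalty, while p > 1 gives the dense-type regularizations compared in the abstract.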
