
arXiv:1112.3697v1 [cs.CV] 16 Dec 2011

Insights from Classifying Visual Concepts with Multiple Kernel

Learning

Alexander Binder∗†, Shinichi Nakajima‡, Marius Kloft§, Christina Müller§, Wojciech Samek†, Ulf Brefeld¶, Klaus-Robert Müller‖, and Motoaki Kawanabe∗∗

December 19, 2011

Abstract

Combining information from various image features has

become a standard technique in concept recognition tasks.

However, the optimal way of fusing the resulting kernel

functions is usually unknown in practical applications.

Multiple kernel learning (MKL) techniques allow one to determine an optimal linear combination of such similarity

matrices. Classical approaches to MKL promote sparse

mixtures. Unfortunately, so-called 1-norm MKL variants

are often observed to be outperformed by an unweighted

sum kernel. The contribution of this paper is twofold:

We apply a recently developed non-sparse MKL variant

to state-of-the-art concept recognition tasks within com-

puter vision. We provide insights on beneﬁts and limits

of non-sparse MKL and compare it against its direct com-

petitors, the sum kernel SVM and the sparse MKL. We

report empirical results for the PASCAL VOC 2009 Clas-

siﬁcation and ImageCLEF2010 Photo Annotation chal-

lenge data sets. About to be submitted to PLoS ONE.

∗corresponding author, alexander.binder@tu-berlin.de
†A. Binder and W. Samek are with Technische Universität Berlin and Fraunhofer FIRST, Berlin, Germany
‡S. Nakajima is with Optical Research Laboratory, Nikon Corporation, Tokyo
§M. Kloft and C. Müller are with Technische Universität Berlin, Germany
¶U. Brefeld is with the Universität Bonn, Germany
‖K.-R. Müller is with Technische Universität Berlin, Germany, and the Institute of Pure and Applied Mathematics at UCLA, Los Angeles, USA
∗∗M. Kawanabe is with ATR Research, Kyoto, Japan

1 Introduction

A common strategy in visual object recognition tasks is

to combine different image representations to capture rel-

evant traits of an image. Prominent representations are for

instance built from color, texture, and shape information

and used to accurately locate and classify the objects of

interest. The importance of such image features changes

across the tasks. For example, color information increases

the detection rates of stop signs in images substantially

but it is almost useless for ﬁnding cars. This is because

stop signs are usually red in most countries, whereas cars in

principle can have any color. As additional but nonessen-

tial features not only slow down the computation time but

may even harm predictive performance, it is necessary to

combine only relevant features for state-of-the-art object

recognition systems.

We will approach visual object classiﬁcation from a

machine learning perspective. In the last decades, support

vector machines (SVM) [1, 2, 3] have been successfully

applied to many practical problems in various ﬁelds in-

cluding computer vision [4]. Support vector machines ex-

ploit similarities of the data, arising from some (possibly

nonlinear) measure. The matrix of pairwise similarities,

also known as the kernel matrix, allows one to abstract the data

from the learning algorithm [5, 6].

That is, given a task at hand, the practitioner needs to

ﬁnd an appropriate similarity measure and to plug the re-

sulting kernel into an appropriate learning algorithm. But

what if this similarity measure is difﬁcult to ﬁnd? We note

that [7] and [8] were the ﬁrst to exploit prior and domain

knowledge for the kernel construction.

In object recognition, translating information from var-


ious image descriptors into several kernels has now be-

come a standard technique. Consequently, the task of finding the right kernel changes to finding an appropriate

way of fusing the kernel information; however, ﬁnding

the right combination for a particular application is so far

often a matter of a judicious choice (or trial and error).

In the absence of principled approaches, practitioners

frequently resort to heuristics such as uniform mixtures

of normalized kernels [9, 10] that have proven to work

well. Nevertheless, this may lead to sub-optimal kernel

mixtures.

An alternative approach is multiple kernel learning

(MKL) that has been applied to object classiﬁcation tasks

involving various image descriptors [11, 12]. Multiple

kernel learning [13, 14, 15, 16] generalizes the support

vector machine framework and aims at learning the opti-

mal kernel mixture and the model parameters of the SVM

simultaneously. To obtain a well-deﬁned optimization

problem, many MKL approaches promote sparse mix-

tures by incorporating a 1-norm constraint on the mixing

coefficients. Compared to heuristic approaches, MKL has the appealing property of learning an optimal kernel combination (w.r.t. the ℓ1-norm constraint) and converges quickly, as it can be wrapped around a regular support vector machine

[15]. However, some evidence shows that sparse kernel

mixtures are often outperformed by an unweighted-sum

kernel [17]. As a remedy, [18, 19] propose ℓ2-norm reg-

ularized MKL variants, which promote non-sparse ker-

nel mixtures and subsequently have been extended to ℓp-

norms [20, 21].

Multiple Kernel approaches have been applied to var-

ious computer vision problems outside our scope, such as multi-class problems [22], which require mutually exclu-

sive labels and object detection [23, 24] in the sense of

ﬁnding object regions in an image. The latter reaches its

limits when image concepts cannot be represented by an

object region anymore, such as the Outdoor, Overall Quality, or Boring concepts in the ImageCLEF2010 dataset

which we will use.

In this contribution, we study the beneﬁts of sparse

and non-sparse MKL in object recognition tasks. We

report on empirical results on image data sets from the

PASCAL visual object classes (VOC) 2009 [25] and Im-

ageCLEF2010 PhotoAnnotation [26] challenges, showing

that non-sparse MKL signiﬁcantly outperforms the uni-

form mixture and ℓ1-norm MKL. Furthermore we discuss

the reasons for performance gains and performance limi-

tations obtained by MKL based on additional experiments

using real world and synthetic data.

The family of MKL algorithms is not restricted to

SVM-based ones. Another competitor, for example, is

Multiple Kernel Learning based on Kernel Discriminant

Analysis (KDA) [27, 28]. The difference between MKL-

SVM and MKL-KDA lies in the underlying single kernel

optimization criterion while the regularization over kernel

weights is the same.

Outside the MKL family, but within our problem scope of image classification and ranking, lies, for example, the approach of [29], which uses logistic regression as the base criterion and results in a number of optimization parameters equal to the number of samples times the number of input features. Since the approach in [29] a priori uses many more optimization variables, it poses a more challenging and potentially more time-consuming optimization problem, which limits the number of applicable features; a detailed evaluation on our medium-scale datasets is left for future work.

Alternatives use more general combinations of kernels

such as products with kernel widths as weighting param-

eters [30, 31]. As [31] point out, the corresponding optimization problems are no longer convex. Consequently, such methods may find suboptimal solutions, and it is more difficult to assess how much gain can be achieved by learning the kernel weights.

This paper is organized as follows. In Section 2, we

briefly review the machine learning techniques used here; in Section 3, we present our experimental re-

sults on the VOC2009 and ImageCLEF2010 datasets; in

Section 4 we discuss promoting and limiting factors of

MKL and the sum-kernel SVM in three learning scenar-

ios.

2 Methods

This section brieﬂy introduces multiple kernel learning

(MKL), and kernel target alignment. For more details we

refer to the supplement and the cited works in it.


2.1 Multiple Kernel Learning

Given a finite number of different kernels, each of which implies the existence of a feature mapping ψ_j : X → H_j into a Hilbert space with
\[
k_j(x, \bar{x}) = \langle \psi_j(x), \psi_j(\bar{x}) \rangle_{\mathcal{H}_j},
\]
the goal of multiple kernel learning is to learn the SVM parameters (α, b) and the weights of the linear kernel combination K = Σ_l β_l k_l simultaneously.

This can be cast as the following optimization problem, which extends the support vector machine [2, 6]:
\[
\min_{\beta, v, b, \xi} \;\; \frac{1}{2} \sum_{j=1}^{m} \frac{v_j^\top v_j}{\beta_j} + C \, \|\xi\|_1 \tag{1}
\]
\[
\text{s.t.} \quad \forall i: \; y_i \Big( \sum_{j=1}^{m} v_j^\top \psi_j(x_i) + b \Big) \ge 1 - \xi_i; \qquad \xi \ge 0; \quad \beta \ge 0; \quad \|\beta\|_p \le 1.
\]

The usage of kernels is permitted through its partially dualized form:
\[
\min_{\beta} \max_{\alpha} \;\; \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i,l=1}^{n} \alpha_i \alpha_l y_i y_l \sum_{j=1}^{m} \beta_j k_j(x_i, x_l)
\]
\[
\text{s.t.} \quad \forall i = 1, \dots, n: \; 0 \le \alpha_i \le C; \qquad \sum_{i=1}^{n} y_i \alpha_i = 0; \qquad \forall j = 1, \dots, m: \; \beta_j \ge 0; \quad \|\beta\|_p \le 1.
\]

For details on the solution of this optimization problem

and its kernelization we refer to the supplement and [21].

While prior work on MKL imposes a 1-norm constraint

on the mixing coefﬁcients to enforce sparse solutions ly-

ing on a standard simplex [14, 15, 32, 33], we employ a

generalized ℓp-norm constraint kβkp≤1for p≥1as

used in [20, 21]. The implications of this modiﬁcation

in the context of image concept classiﬁcation will be dis-

cussed throughout this paper.
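To make the optimization strategy more concrete, the following is a minimal sketch of the alternating wrapper scheme described in [21]: an SVM is trained on the current kernel mixture, and the kernel weights are then updated analytically. The use of scikit-learn and all function and variable names are our own illustrative choices, not the Shogun-based implementation used in our experiments.

```python
# Minimal sketch of an l_p-norm MKL wrapper (assumption: precomputed kernels).
import numpy as np
from sklearn.svm import SVC

def lp_mkl(kernels, y, p=1.333, C=1.0, n_iter=50, tol=1e-5):
    """kernels: array of shape (m, n, n); y: labels in {-1, +1}."""
    m, n, _ = kernels.shape
    beta = np.full(m, m ** (-1.0 / p))           # feasible start, ||beta||_p = 1
    for _ in range(n_iter):
        K = np.tensordot(beta, kernels, axes=1)  # mixed kernel sum_j beta_j K_j
        svm = SVC(C=C, kernel="precomputed").fit(K, y)
        sv, coef = svm.support_, svm.dual_coef_[0]   # coef_i = alpha_i * y_i
        # ||w_j||^2 = beta_j^2 * (alpha*y)' K_j (alpha*y)
        w_norm2 = np.array([b ** 2 * coef @ Kj[np.ix_(sv, sv)] @ coef
                            for b, Kj in zip(beta, kernels)])
        new_beta = w_norm2 ** (1.0 / (p + 1))        # beta_j prop. ||w_j||^(2/(p+1))
        new_beta /= np.linalg.norm(new_beta, ord=p)  # project onto ||beta||_p = 1
        if np.max(np.abs(new_beta - beta)) < tol:
            beta = new_beta
            break
        beta = new_beta
    return beta, svm
```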

2.2 Kernel Target Alignment

The kernel alignment introduced by [34] measures the similarity of two kernel matrices as the cosine of the angle between them under the Frobenius product,
\[
A(K_1, K_2) := \frac{\langle K_1, K_2 \rangle_F}{\|K_1\|_F \, \|K_2\|_F}. \tag{2}
\]

It was argued in [35] that centering is required in order

to correctly reﬂect the test errors from SVMs via kernel

alignment. Centering in the corresponding feature spaces

[36] can be achieved by taking the product HKH, with
\[
H := I - \tfrac{1}{n} \mathbf{1}\mathbf{1}^\top, \tag{3}
\]
where I is the identity matrix of size n and 1 is the column vector of all ones. The centered kernel which achieves a perfect separation of two classes is proportional to \tilde{y}\tilde{y}^\top, where
\[
\tilde{y} = (\tilde{y}_i), \qquad \tilde{y}_i := \begin{cases} \tfrac{1}{n_+} & \text{if } y_i = +1 \\ -\tfrac{1}{n_-} & \text{if } y_i = -1 \end{cases} \tag{4}
\]
and n_+ and n_- are the sizes of the positive and negative

classes, respectively.
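As a concrete illustration, the snippet below is a small numpy sketch of the centered kernel (target) alignment of Eqs. (2)–(4); function and variable names are illustrative only.

```python
# Sketch of centered kernel alignment and kernel target alignment.
import numpy as np

def center(K):
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n                      # Eq. (3)
    return H @ K @ H

def alignment(K1, K2):
    K1c, K2c = center(K1), center(K2)
    return (K1c * K2c).sum() / (np.linalg.norm(K1c) * np.linalg.norm(K2c))  # Eq. (2)

def target_alignment(K, y):
    y = np.asarray(y, dtype=float)
    n_pos, n_neg = (y > 0).sum(), (y < 0).sum()
    y_tilde = np.where(y > 0, 1.0 / n_pos, -1.0 / n_neg)     # Eq. (4)
    return alignment(K, np.outer(y_tilde, y_tilde))          # alignment with ideal kernel
```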

3 Empirical Evaluation

In this section, we evaluate ℓp-norm MKL in real-

world image categorization tasks, experimenting on the

VOC2009 and ImageCLEF2010 data sets. We also pro-

vide insights on when and why ℓp-norm MKL can help

performance in image classiﬁcation applications. The

evaluation measure for both datasets is the average pre-

cision (AP) over all recall values based on the precision-

recall (PR) curves.

3.1 Data Sets

We experiment on the following data sets:

1. PASCAL2 VOC Challenge 2009 We use the ofﬁcial

data set of the PASCAL2 Visual Object Classes Challenge

2009 (VOC2009) [25], which consists of 13979 images.

We use the official split into 3473 training, 3581 valida-

tion, and 6925 test examples as provided by the challenge

organizers. The organizers also provided annotation of

the 20 object categories; note that an image can have

multiple object annotations. The task is to solve 20 bi-

nary classiﬁcation problems, i.e. predicting whether at


least one object from a class k is visible in the test im-

age. Although the test labels are undisclosed, the more

recent VOC datasets permit evaluating AP scores on the

test set via the challenge website (the number of allowed

submissions per week being limited).

2. ImageCLEF 2010 PhotoAnnotation The Image-

CLEF2010 PhotoAnnotation data set [26] consists of

8000 labeled training images taken from Flickr and a test

set with undisclosed labels. The images are annotated

with 93 concept classes of highly variable nature: they contain both well-defined objects, such as lake, river, plants, trees, and flowers, as well as many rather ambiguously defined concepts, such as winter, boring, architecture, macro, artificial, and motion blur. However, those con-

cepts might not always be connected to objects present

in an image or captured by a bounding box. This makes

it highly challenging for any recognition system. Un-

fortunately, there is currently no ofﬁcial way to obtain

test set performance scores from the challenge organiz-

ers. Therefore, for this data set, we report on training

set cross-validation performances only. As for VOC2009

we decompose the problem into 93 binary classiﬁcation

problems. Again, many concept classes are challenging

to rank or classify by an object detection approach due

to their inherent non-object nature. As for the previous

dataset each image can be labeled with multiple concepts.

3.2 Image Features and Base Kernels

In all of our experiments we deploy 32 kernels capturing

various aspects of the images. The kernels are inspired by

the VOC 2007 winner [37] and our own experiences from

our submissions to the VOC2009 and ImageCLEF2009

challenges. We can summarize the employed kernels by the following basic feature types:

•Histogram over a bag of visual words over SIFT fea-

tures (BoW-S), 15 kernels

•Histogram over a bag of visual words over color in-

tensity histograms (BoW-C), 8 kernels

•Histogram of oriented gradients (HoG), 4 kernels

•Histogram of pixel color intensities (HoC), 5 kernels.

We used a higher fraction of bag-of-word-based fea-

tures as we knew from our challenge submissions that

they have a better performance than global histogram fea-

tures. The intention was, however, to use a variety of dif-

ferent feature types that have been proven to be effective

on the above datasets in the past—but at the same time

obeying memory limitations of maximally 25GB per job

as required by computer facilities used in our experiments

(we used a cluster of 23 nodes having in total 256 AMD64

CPUs and with memory limitations ranging in 32–96 GB

RAM per node).

The above features are derived from histograms that

contain no spatial information. We therefore enrich the re-

spective representations by using the spatial tilings 1×1, 3×1, 2×2, 4×4, and 8×8, which correspond to single levels

of the pyramidal approach [9] (this is for capturing the

spatial context of an image). Furthermore, we apply a χ2

kernel on top of the enriched histogram features, which

is an established kernel for capturing histogram features

[10]. The bandwidth of the χ² kernel is thereby heuristically chosen as the mean χ² distance over all pairs of

training examples [38].
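For illustration, a small numpy sketch of such a χ² kernel with the mean-distance bandwidth heuristic follows; the function names and the exact χ² distance convention used here are our own choices.

```python
# Sketch of a chi^2 kernel with bandwidth = mean chi^2 distance on the training set.
import numpy as np

def chi2_distances(X, Z, eps=1e-10):
    """Pairwise chi^2 distances between histogram rows of X and Z."""
    diff = X[:, None, :] - Z[None, :, :]
    summ = X[:, None, :] + Z[None, :, :] + eps
    return (diff ** 2 / summ).sum(axis=2)

def chi2_kernel(X_train, X_test=None):
    D_train = chi2_distances(X_train, X_train)
    sigma = D_train.mean()                       # bandwidth heuristic over all pairs
    if X_test is None:
        return np.exp(-D_train / sigma)
    return np.exp(-chi2_distances(X_test, X_train) / sigma)
```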

The BoW features were constructed in a standard way

[39]: at ﬁrst, the SIFT descriptors [40] were calculated on

a regular grid with 6 pixel pitches for each image, learning

a code book of size 4000 for the SIFT features and of size

900 for the color histograms by k-means clustering (with

a random initialization). Finally, all SIFT descriptors

were assigned to visual words (so-called prototypes) and

then summarized into histograms within entire images or

sub-regions. We computed the SIFT features over the

following color combinations, which are inspired by the winners of the PASCAL VOC 2008 challenge from the University of Amsterdam [41]: red-green-blue (RGB),

normalized RGB, gray-opponentColor1-opponentColor2,

and gray-normalized OpponentColor1-OpponentColor2;

in addition, we also use a simple gray channel.

We computed the 15-dimensional local color his-

tograms over the color combinations red-green-blue,

gray-opponentColor1-opponentColor2,gray, and hue (the

latter being weighted by the pixel value of the value com-

ponent in the HSV color representation).

This means, for BoW-S, we considered five color channels with three spatial tilings each (1×1, 3×1, and 2×2), resulting in 15 kernels; for BoW-C, we considered four color channels with two spatial tilings each (1×1 and 3×1), resulting in 8 kernels.
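The encoding step of this pipeline can be sketched as follows, assuming local descriptors, their pixel positions, and a precomputed codebook are given; all names are illustrative, and the SIFT extraction and k-means clustering themselves are omitted.

```python
# Sketch of bag-of-words encoding with a spatial tiling (e.g. 3x1).
import numpy as np

def bow_histogram(descriptors, positions, codebook, img_size, tiling=(3, 1)):
    """descriptors: (n, d) local features; positions: (n, 2) pixel (x, y);
    codebook: (n_words, d); img_size: (width, height); tiling: (cols, rows)."""
    # hard assignment of each descriptor to its nearest visual word
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)
    cols, rows = tiling
    w, h = img_size
    tile_x = np.minimum((positions[:, 0] * cols // w).astype(int), cols - 1)
    tile_y = np.minimum((positions[:, 1] * rows // h).astype(int), rows - 1)
    n_words = codebook.shape[0]
    hist = np.zeros((rows * cols, n_words))
    for t in range(rows * cols):
        in_tile = (tile_y * cols + tile_x) == t
        hist[t] = np.bincount(words[in_tile], minlength=n_words)
    hist = hist.ravel()
    return hist / max(hist.sum(), 1.0)           # normalize to a histogram
```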


The HoG features were computed by discretizing the

orientation of the gradient vector at each pixel into 24

bins and then summarizing the discretized orientations

into histograms within image regions [42]. Canny de-

tectors [43] are used to discard contributions from pix-

els, around which the image is almost uniform. We com-

puted them over the color combinations red-green-blue,

gray-opponentColor1-opponentColor2, and gray, thereby

using the two spatial tilings 4×4 and 8×8. For the ex-

periments we used four kernels: a product kernel created

from the two kernels with the red-green-blue color com-

bination but using different spatial tilings, another prod-

uct kernel created in the same way but using the gray-

opponentColor1-opponentColor2 color combination, and

the two kernels using the gray channel alone (but differing

in their spatial tiling).

The HoC features were constructed by discretiz-

ing pixel-wise color values and computing their 15

bin histograms within image regions. To this end,

we used the color combinations red-green-blue, gray-

opponentColor1-opponentColor2, and gray. For each

color combination, the spatial tilings 2×2, 3×1, and 4×4

were tried. In the experiments we deploy ﬁve kernels: a

product kernel created from the three kernels with differ-

ent spatial tilings with colors red-green-blue, a product

kernel created from the three kernels with color combina-

tion gray-opponentColor1-opponentColor2, and the three

kernels using the gray channel alone (differing in their spa-

tial tiling).

Note that building a product kernel out of χ² kernels

boils down to concatenating feature blocks (but using a

separate kernel width for each feature block). The in-

tention here was to use single kernels at separate spatial

tilings for the weaker features (for problems depending

on a certain tiling resolution) and combined kernels with

all spatial tilings merged into one kernel to keep the mem-

ory requirements low and let the algorithms select the best

choice.

In practice, the normalization of kernels is as important

for MKL as the normalization of features is for training

regularized linear or single-kernel models. This is owed

to the bias introduced by the regularization: optimal fea-

ture / kernel weights are requested to be small, implying

a bias towards excessively up-scaled kernels. In gen-

eral, there are several ways of normalizing kernel func-

tions. We apply the following normalization method, pro-

posed in [44, 45] and entitled multiplicative normalization

in [21]; on the feature-space level this normalization cor-

responds to rescaling training examples to unit variance,

\[
K \;\leftarrow\; \frac{K}{\tfrac{1}{n}\operatorname{tr}(K) - \tfrac{1}{n^2}\mathbf{1}^\top K \mathbf{1}}. \tag{5}
\]
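In code, this normalization amounts to a one-line rescaling (a numpy sketch; names are illustrative):

```python
# Sketch of the multiplicative kernel normalization of Eq. (5).
import numpy as np

def normalize_kernel(K):
    n = K.shape[0]
    scale = np.trace(K) / n - K.sum() / n ** 2   # variance of the data in feature space
    return K / scale
```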

3.3 Experimental Setup

We treat the multi-label data set as binary classiﬁcation

problems, that is, for each object category we trained

a one-vs.-rest classiﬁer. Multiple labels per image ren-

der multi-class methods inapplicable as these require mu-

tually exclusive labels for the images. The respective

SVMs are trained using the Shogun toolbox [46]. In or-

der to shed light on the nature of the presented techniques

from a statistical viewpoint, we ﬁrst pooled all labeled

data and then created 20 random cross-validation splits

for VOC2009 and 12 splits for the larger dataset Image-

CLEF2010.

For each of the 12 or 20 splits, the training images were

used for learning the classiﬁers, while the SVM/MKL reg-

ularization parameter C and the norm parameter p were chosen based on the maximal AP score on the validation images. Thereby, the regularization constant C is optimized by class-wise grid search over C ∈ {10^i | i = −1, −0.5, 0, 0.5, 1}. Preliminary runs indicated that this

way the optimal solutions are attained inside the grid.

Note that for p = ∞ the ℓp-norm MKL boils down to a

simple SVM using a uniform kernel combination (subse-

quently called sum-kernel SVM). In our experiments, we

used the average kernel SVM instead of the sum-kernel

one. This is no limitation, as both lead to identical results for an appropriate choice of the SVM regularization

parameter.

For a rigorous evaluation, we would have to construct

a separate codebook for each cross validation split. How-

ever, creating codebooks and assigning descriptors to vi-

sual words is a time-consuming process. Therefore, in

our experiments we resort to the common practice of us-

ing a single codebook created from all training images

contained in the ofﬁcial split. Although this could result

in a slight overestimation of the AP scores, this affects

all methods equally and does not favor any classiﬁcation

method more than another—our focus lies on a relative

comparison of the different classiﬁcation methods; there-


fore there is no loss in exploiting this computational short-

cut.

3.4 Results

In this section we report on the empirical results achieved

by ℓp-norm MKL in our visual object recognition experi-

ments.

VOC 2009 Table 2 shows the AP scores attained on

the ofﬁcial test split of the VOC2009 data set (scores

obtained by evaluation via the challenge website). The

class-wise optimal regularization constant has been se-

lected by cross-validation-based model selection on the

training data set. We can observe that non-sparse MKL

outperforms the baselines ℓ1-MKL and the sum-kernel

SVM in this sound evaluation setup. We also report on

the cross-validation performance achieved on the training

data set (Table 1). Comparing the two results, one can ob-

serve a small overestimation for the cross-validation ap-

proach (for the reasons argued in Section 3.3)—however,

the amount by which this happens is equal for all meth-

ods; in particular, the ranking of the compared methods

(SVM versus ℓp-norm MKL for various values of p) is

preserved for the average over all classes and most of

the classes (exceptions are the bottle and bird class); this

shows the reliability of the cross-validation-based eval-

uation method in practice. Note that the observed vari-

ance in the AP measure across concepts can be explained

in part by the variations in the label distributions across

concepts and cross-validation splits. Unlike for the AUC

measure, the average score of the AP measure under ran-

domly ranked images depends on the ratio of positive and

negative labeled samples.

A reason why the bottle class shows such a strong de-

viation towards sparse methods could be the varying but

often small fraction of image area covered by bottles lead-

ing to overﬁtting when using spatial tilings.

We can also remark that ℓ1.333-norm achieves the best

result of all compared methods on the VOC dataset,

slightly followed by ℓ1.125-norm MKL. To evaluate the

statistical signiﬁcance of our ﬁndings, we perform a

Wilcoxon signed-rank test for the cross-validation-based

results (see Table 1; signiﬁcant results are marked in bold-

face). We ﬁnd that in 15 out of the 20 classes the opti-

mal result is achieved by truly non-sparse ℓp-norm MKL

(which means p ∈ ]1, ∞[), thus outperforming the base-

line signiﬁcantly.

ImageCLEF Table 4 shows the AP scores averaged

over all classes achieved on the ImageCLEF2010 data set.

We observe that the best result is achieved by the non-

sparse ℓp-norm MKL algorithms with norm parameters

p= 1.125 and p= 1.333. The detailed results for all

93 classes are shown in the supplemental material (see Tables B.1 and B.2). We can see from the detailed results that in

37 out of the 93 classes the optimal result attained by non-

sparse ℓp-norm MKL was signiﬁcantly better than the sum

kernel according to a Wilcoxon signed-rank test.

We also show the results for optimizing the norm pa-

rameter pclass-wise (see Table 5). We can see from the

table that optimizing the ℓp-norm class-wise is beneﬁcial:

selecting the best p∈]1,∞[class-wise, the result is in-

creased to an AP of 39.70. Also including ℓ1-norm MKL

in the candidate set, the performance can even be lever-

aged to 39.82—this is 0.7 AP better than the result for the

vanilla sum-kernel SVM. Including the latter in the set of models as well, the AP score increases by merely 0.03

AP points. We conclude that optimizing the norm param-

eter pclass-wise can improve performance; however, one

can rely on ℓp-norm MKL alone without the need to addi-

tionally include the sum-kernel-SVM to the set of models.

Tables 1 and 2 show that the gain in performance for MKL

varies considerably on the actual concept class. Notice

that these observations are conﬁrmed by the results pre-

sented in Tables B.1 and B.2, see supplemental material

for details.

3.5 Analysis and Interpretation

We now analyze the kernel set in an explorative manner;

to this end, our methodological tools are the following

1. Pairwise kernel alignment scores (KA)

2. Centered kernel-target alignment scores (KTA).

3.5.1 Analysis of the Chosen Kernel Set

To start with, we computed the pairwise kernel alignment

scores of the 32 base kernels: they are shown in Fig. 1.

We recall that the kernels can be classiﬁed into the follow-

ing groups: Kernels 1–15 and 16–23 employ BoW-S and


Table 1: Average AP scores on the VOC2009 data set with AP scores computed by cross-validation on the training set. Bold faces

show the best method and all other ones that are not statistically significantly worse.

Norm Average Aeroplane Bicycle Bird Boat Bottle Bus

ℓ1 54.94 ±12.3 84.84 ±5.86 55.35 ±10.5 59.38 ±10.1 66.83 ±12.4 25.91 ±10.2 71.15 ±23.2

ℓ1.125 57.07 ±12.7 84.82 ±5.91 57.25 ±10.6 62.4 ±9.13 67.89 ±12.8 27.88 ±9.91 71.7 ±22.8

ℓ1.333 57.2 ±12.8 84.51 ±6.27 57.41 ±10.8 62.75 ±9.07 67.99 ±13 27.44 ±9.77 71.33 ±23.1

ℓ2 56.53 ±12.8 84.12 ±5.92 56.89 ±10.9 62.53 ±8.9 67.69 ±13 26.68 ±9.94 70.33 ±22.3

ℓ∞ 56.08 ±12.7 83.67 ±5.99 56.09 ±10.9 61.91 ±8.81 67.52 ±12.9 26.5 ±9.5 70.13 ±22.2

Norm Car Cat Chair Cow Diningtable Dog Horse

ℓ1 54.54 ±7.33 59.5 ±8.22 53.3 ±11.7 23.13 ±13.2 48.51 ±19.9 41.72 ±9.44 57.67 ±12.2

ℓ1.125 56.59 ±8.93 61.59 ±8.26 54.3 ±12.1 29.59 ±16.2 49.32 ±19.5 45.57 ±10.6 59.4 ±12.2

ℓ1.333 56.75 ±9.28 61.74 ±8.41 54.25 ±12.3 29.89 ±15.8 48.4 ±19.3 45.85 ±10.9 59.4 ±11.9

ℓ2 55.92 ±9.49 61.39 ±8.37 53.85 ±12.4 28.39 ±16.2 47 ±18.7 45.14 ±10.8 58.61 ±11.9

ℓ∞ 55.58 ±9.47 61.25 ±8.28 53.13 ±12.4 27.56 ±16.2 46.29 ±18.8 44.63 ±10.6 58.32 ±11.7

Norm Motorbike Person Pottedplant Sheep Sofa Train Tvmonitor

ℓ1 55 ±13.2 81.32 ±9.49 35.14 ±13.4 38.13 ±19.2 48.15 ±11.8 75.33 ±14.1 63.97 ±10.2

ℓ1.125 57.66 ±13.1 82.18 ±9.3 39.05 ±14.9 43.65 ±20.5 48.72 ±13 75.79 ±14.4 65.99 ±9.83

ℓ1.333 57.57 ±13 82.27 ±9.29 39.7 ±14.6 46.28 ±23.9 48.76 ±11.9 75.75 ±14.3 66.07 ±9.59

ℓ2 56.9 ±13.2 82.19 ±9.3 38.97 ±14.8 45.88 ±24 47.29 ±11.7 75.29 ±14.5 65.55 ±10.1

ℓ∞ 56.45 ±13.1 82 ±9.37 38.46 ±14.1 45.93 ±24 46.08 ±11.8 74.89 ±14.5 65.19 ±10.2

Table 2: AP scores attained on the VOC2009 test data, obtained on request from the challenge organizers. Best methods are

marked boldface.

average aeroplane bicycle bird boat bottle bus car

ℓ1 54.58 81.13 54.52 56.14 62.44 28.10 68.92 52.33

ℓ1.125 56.43 81.01 56.36 58.49 62.84 25.75 68.22 55.71

ℓ1.333 56.70 80.77 56.79 58.88 63.11 25.26 67.80 55.98

ℓ2 56.34 80.41 56.34 58.72 63.13 24.55 67.70 55.54

ℓ∞ 55.85 79.80 55.68 58.32 62.76 24.23 67.79 55.38

cat chair cow diningtable dog horse motorbike

ℓ1 55.50 52.22 36.17 45.84 41.90 61.90 57.58

ℓ1.125 57.79 53.66 40.77 48.40 46.36 63.10 60.89

ℓ1.333 58.00 53.87 43.14 48.17 46.54 63.08 61.28

ℓ2 57.98 53.47 40.95 48.07 46.59 63.02 60.91

ℓ∞ 57.30 53.07 39.74 47.27 45.87 62.49 60.55

person pottedplant sheep sofa train tvmonitor

ℓ1 81.73 31.57 36.68 45.72 80.52 61.41

ℓ1.125 82.65 34.61 41.91 46.59 80.13 63.51

ℓ1.333 82.72 34.60 44.14 46.42 79.93 63.60

ℓ2 82.52 33.40 44.81 45.98 79.53 63.26

ℓ∞ 82.20 32.76 44.15 45.69 79.03 63.00


Table 3: Average AP scores on the VOC2009 data set with norm parameter p class-wise optimized over AP scores on the training

set. We report on test set scores obtained on request from the challenge organizers.

∞ {1,∞} {1.125,1.333,2} {1.125,1.333,2,∞} {1,1.125,1.333,2} all norms from the left

55.85 55.94 56.75 56.76 56.75 56.76

Table 4: Average AP scores obtained on the ImageCLEF2010 data set with p fixed over the classes and AP scores computed by

cross-validation on the training set.

ℓp-Norm 1 1.125 1.333 2 ∞

37.32 ±5.87 39.51 ±6.67 39.48 ±6.66 39.13 ±6.62 39.11 ±6.68

BoW-C features, respectively; Kernels 24 to 27 are prod-

uct kernels associated with the HoG and HoC features;

Kernels 28–30 deploy HoC, and, ﬁnally, Kernels 31–32

are based on HoG features over the gray channel. We see

from the block-diagonal structure that features that are of

the same type (but are generated for different parameter

values, color channels, or spatial tilings) are strongly cor-

related. Furthermore the BoW-S kernels (Kernels 1–15)

are weakly correlated with the BoW-C kernels (Kernels

16–23). Both the BoW-S and HoG kernels (Kernels 24–25, 31–32) use gradients and are therefore moderately cor-

related; the same holds for the BoW-C and HoC kernel

groups (Kernels 26–30). This corresponds to our original

intention to have a broad range of feature types which are,

however, useful for the task at hand. The principal usefulness of our feature set can be seen a posteriori from the

fact that ℓ1-MKL achieves the worst performance of all

methods included in the comparison while the sum-kernel

SVM performs moderately well. Clearly, a higher fraction

of noise kernels would further harm the sum-kernel SVM

and favor the sparse MKL instead (we investigate the im-

pact of noise kernels on the performance of ℓp-norm MKL

in an experiment on controlled, artiﬁcial data; this is pre-

sented in the supplemental material).

Based on the observation that the BoW-S kernel sub-

set shows high KTA scores, we also evaluated the perfor-

mance restricted to the 15 BoW-S kernels only. Unsur-

prisingly, this setup favors the sum-kernel SVM, which

achieves higher results on VOC2009 for most classes;

compared to ℓp-norm MKL using all 32 kernels, the sum-kernel SVM restricted to 15 kernels achieves slightly bet-

ter AP scores for 11 classes, but also slightly worse for

9 classes. Furthermore, the sum kernel SVM, ℓ2-MKL,

and ℓ1.333-MKL were on par with differences fairly be-

low 0.01 AP. This is again not surprising as the kernels

from the BoW-S kernel set are strongly correlated with

each other for the VOC data which can be seen in the

top left image in Fig. 1. For the ImageCLEF data we ob-

served a quite different picture: the sum-kernel SVM re-

stricted to the 15 BoW-S kernels performed signiﬁcantly

worse, when, again, being compared to non-sparse ℓp-

norm MKL using all 32 kernels. To achieve top state-

of-the-art performance, one could optimize the scores for

both datasets by considering the class-wise maxima over

learning methods and kernel sets. However, since the in-

tention here is not to win a challenge but a relative com-

parison of models, giving insights in the nature of the

methods—we therefore discard the time-consuming op-

timization over the kernel subsets.

From the above analysis, the question arises why re-

stricting the kernel set to the 15 BoW-S kernels affects

the performance of the compared methods differently,

for the VOC2009 and ImageCLEF2010 data sets. This

can be explained by comparing the KA/KTA scores of

the kernels attained on VOC and on ImageCLEF (see

Fig. 1 (RIGHT)): for the ImageCLEF data set the KTA

scores are substantially more spread along all kernels;

there is neither a dominance of the BoW-S subset in the

KTA scores nor a particularly strong correlation within

the BoW-S subset in the KA scores. We attribute this

to the less object-based and more ambiguous nature of

many of the concepts contained in the ImageCLEF data

set. Furthermore, the KA scores for the ImageCLEF data

(see Fig. 1 (LEFT)) show that this dataset exhibits a higher

variance among kernels—this is because the correlations

between all kinds of kernels are weaker for the Image-


Table 5: Average AP scores obtained on the ImageCLEF2010 data set with norm parameter p class-wise optimized and AP scores

computed by cross-validation on the training set.

∞ {1,∞} {1.125,1.333,2} {1.125,1.333,2,∞} {1,1.125,1.333,2} all norms from the left

39.11 ±6.68 39.33 ±6.71 39.70 ±6.80 39.74 ±6.85 39.82 ±6.82 39.85 ±6.88

CLEF data.


Figure 1: Similarity of the kernels for the VOC2009 (TOP) and

ImageCLEF2010 (BOTTOM) data sets in terms of pairwise ker-

nel alignments (LEFT) and kernel target alignments (RIGHT),

respectively. In both data sets, ﬁve groups can be identiﬁed:

'BoW-S' (Kernels 1–15), 'BoW-C' (Kernels 16–23), 'products of HoG and HoC kernels' (Kernels 24–27), 'HoC single' (Kernels 28–30), and 'HoG single' (Kernels 31–32).

Therefore, because of this non-uniformity in the spread

of the information content among the kernels, we can

conclude that indeed our experimental setting falls into

the situation where non-sparse MKL can outperform the

baseline procedures (again, see the supplemental material).

For example, the BoW features are more informative than

HoG and HoC, and thus the uniform-sum-kernel-SVM

is suboptimal. On the other hand, because of the fact

that typical image features are only moderately informa-

tive, HoG and HoC still convey a certain amount of com-

plementary information—this is what allows the perfor-

mance gains reported in Tables 1 and 4.

Note that we class-wise normalized the KTA scores to

sum to one. This is because we are rather interested in a

comparison of the relative contributions of the particular

kernels than in their absolute information content, which

anyway can be more precisely derived from the AP scores

already reported in Tables 1 and 4. Furthermore, note that

we consider centered KA and KTA scores, since it was ar-

gued in [35] that only those correctly reﬂect the test errors

attained by established learners such as SVMs.

3.5.2 The Role of the Choice of ℓp-norm

Next, we turn to the interpretation of the norm parameter

pin our algorithm. We observe a big gap in performance

between ℓ1.125-norm MKL and the sparse ℓ1-norm MKL.

The reason is that for p > 1 MKL is reluctant to set ker-

nel weights to zero, as can be seen from Figure 2. In con-

trast, ℓ1-norm MKL eliminates 62.5% of the kernels from

the working set. The difference between the ℓp-norms for

p > 1 lies solely in the ratio by which the less informative kernels are down-weighted—they are never assigned true zeros.

However, as proved in [21], in the computational opti-

mum, the kernel weights are accessed by the MKL algo-

rithm via the information content of the particular kernels

given by an ℓp-norm-dependent formula (see Eq. (8); this

will be discussed in detail in Section 4.1). We mention at

this point that the kernel weights all converge to the same,

uniform value for p→ ∞. We can conﬁrm these theo-

retical ﬁndings empirically: the histograms of the kernel

weights shown in Fig. 2 clearly indicate an increasing uni-

formity in the distribution of kernel weights when letting

p → ∞. Higher values of p thus cause the weight distri-

bution to shift away from zero and become slanted to the

right while smaller ones tend to increase its skewness to

the left.

Selection of the ℓp-norm permits tuning the strength of

the regularization of the learning of kernel weights. In this



Figure 2: Histogram of kernel weights as output by ℓp-norm MKL for the various classes on the VOC2009 data set (32 kernels × 20 classes, resulting in 640 values): ℓ1-norm (TOP LEFT), ℓ1.125-norm (TOP RIGHT), ℓ1.333-norm (BOTTOM LEFT), and ℓ2-norm (BOTTOM RIGHT).

sense the sum-kernel SVM clearly is an extreme, namely

ﬁxing the kernel weights, obtained when letting p→ ∞.

The sparse MKL marks another extreme case: ℓp-norms with p below 1 lose the convexity property, so that p = 1 is the maximally sparse choice that still preserves convexity. Sparsity can be interpreted here as meaning that only a few kernels are selected which are considered most informa-

tive according to the optimization objective. Thus, the ℓp-

norm acts as a prior parameter for how much we trust in

the informativeness of a kernel. In conclusion, this inter-

pretation justifies the usage of ℓp-norms outside the existing choices ℓ1 and ℓ2.

is a reasonable choice in the context of image annotation

will be discussed further in Section 4.1.

Our empirical ﬁndings on ImageCLEF and VOC seem

to contradict previous ones about the usefulness of MKL

reported in the literature, where ℓ1-norm MKL is frequently observed to be outperformed by a simple sum-kernel SVM (for exam-

ple, see [30])—however, in these studies the sum-kernel

SVM is compared to ℓ1-norm or ℓ2-norm MKL only. In

fact, our results conﬁrm these ﬁndings: ℓ1-norm MKL is

outperformed by the sum-kernel SVM in all of our ex-

periments. Nevertheless, in this paper, we show that by

using the more general ℓp-norm regularization, the pre-

diction accuracy of MKL can be considerably leveraged,

even clearly outperforming the sum-kernel SVM, which

has been shown to be a tough competitor in the past [12].

But of course the simpler sum-kernel SVM also has its advantage, although on the computational side only: in

our experiments it was about a factor of ten faster than its

MKL competitors. Further information about runtimes of

MKL algorithms compared to sum kernel SVMs can be

taken from [47].

3.5.3 Remarks for Particular Concepts

Finally, we show images from classes where MKL helps

performance and discuss relationships to kernel weights.

We have seen above that the sparsity-inducing ℓ1-norm

MKL clearly outperforms all other methods on the bottle

class (see Table 2). Fig. 3 shows two typical highly ranked

images and the corresponding kernel weights as output

by ℓ1-norm (LEFT) and ℓ1.333-norm MKL (RIGHT), re-

spectively, on the bottle class. We observe that ℓ1-norm

MKL tends to rank party and people group scenes highly.

We conjecture that this has two reasons: ﬁrst, many peo-

ple group and party scenes come along with co-occurring

bottles. Second, people group scenes have similar gra-

dient distributions to images of large upright standing

bottles sharing many dominant vertical lines and a thin-

ner head section—see the left- and right-hand images in

Fig. 3. Sparse ℓ1-norm MKL strongly focuses on the


dominant HoG product kernel, which is able to capture

the aforementioned special gradient distributions, giving

small weights to two HoC product kernels and almost

completely discarding all other kernels.

Next, we turn to the cow class, for which we have seen

above that ℓ1.333-norm MKL outperforms all other meth-

ods clearly. Fig. 4 shows a typical high-ranked image

of that class and also the corresponding kernel weights

as output by ℓ1-norm (LEFT) and ℓ1.333-norm (RIGHT)

MKL, respectively. We observe that ℓ1-MKL focuses on

the two HoC product kernels; this is justiﬁed by typical

cow images having green grass in the background. This

allows the HoC kernels to easily distinguish the cow

images from the indoor and vehicle classes such as car

or sofa. However, horse and sheep images have such a

green background, too. They differ in that sheep are usually black and white, and horses have a brown-black color bias (in the VOC data); cows have rather variable colors. Here,

we observe that the rather complex yet somewhat color-

based BoW-C and BoW-S features help performance—it

is also those kernels that are selected by the non-sparse

ℓ1.333-MKL, which is the best performing model on those

classes. In contrast, the sum-kernel SVM suffers from in-

cluding the ﬁve gray-channel-based features, which are

hardly useful for the horse and sheep classes and mostly

introduce additional noise. MKL (all variants) succeeds in identifying those kernels and assigns them low weights.

4 Discussion

In the previous section we presented empirical evidence

that ℓp-norm MKL can considerably help performance

in visual image categorization tasks. We also observed

that the gain is class-speciﬁc and limited for some classes

when compared to the sum-kernel SVM, see again Tables

1 and 2 as well as Tables B.1, B.2 in the supplemental

material. In this section, we aim to shed light on the rea-

sons of this behavior, in particular discussing strengths

of the average kernel in Section 4.1, trade-off effects in

Section 4.2 and strengths of MKL in Section 4.3. Since

these scenarios are based on statistical properties of ker-

nels which can be observed in concept recognition tasks

within computer vision we expect the results to be trans-

ferable to other algorithms which learn linear models over


Figure 3: Typical highly ranked bottle images and

kernel weights from ℓ1-MKL (left) and ℓ1.333-MKL (right).


Figure 4: A typical highly ranked cow image and

kernel weights from ℓ1-MKL (left) and ℓ1.333-MKL (right).


kernels such as [28, 29].

4.1 One Argument For the Sum Kernel:

Randomness in Feature Extraction

We would like to draw attention to one aspect present in

BoW features, namely the amount of randomness induced

by the visual word generation stage acting as noise with

respect to kernel selection procedures.

Experimental setup We consider the following experi-

ment, similar to the one undertaken in [30]: we compute

a BoW kernel ten times each time using the same local

features, identical spatial pyramid tilings, and identical

kernel functions; the only difference between subsequent

repetitions of the experiment lies in the randomness in-

volved in the generation of the codebook of visual words.

Note that we use SIFT features over the gray channel that

are densely sampled over a grid of step size six, 512 vi-

sual words (for computational feasibility of the cluster-

ing), and a χ2kernel. This procedure results in ten ker-

nels that only differ in the randomness stemming from the

codebook generation. We then compare the performance

of the sum-kernel SVM built from the ten kernels to the

one of the best single-kernel SVM determined by cross-

validation-based model selection.

In contrast to [30] we try two codebook generation pro-

cedures, which differ by their intrinsic amount of random-

ness: ﬁrst, we deploy k-means clustering, with random

initialization of the centers and a bootstrap-like selection

of the best initialization (similar to the option ’cluster’

in MATLAB’s k-means routine). Second, we deploy ex-

tremely randomized clustering forests (ERCF) [48, 49],

that are, ensembles of randomized trees—the latter pro-

cedure involves a considerably higher amount of random-

ization compared to k-means.

Results The results are shown in Table 6. For both

clustering procedures, we observe that the sum-kernel

SVM outperforms the best single-kernel SVM. In par-

ticular, this conﬁrms earlier ﬁndings of [30] carried out

for k-means-based clustering. We also observe that the

difference between the sum-kernel SVM and the best

single-kernel SVM is much more pronounced for ERCF-

based kernels—we conclude that this stems from a higher

amount of randomness being involved in the ERCF clustering

method when compared to conventional k-means. The

standard deviations of the kernels in Table 6 conﬁrm this

conclusion. For each class we computed the conditional

standard deviation
\[
\mathrm{std}(K \mid y_i = y_j) + \mathrm{std}(K \mid y_i \neq y_j) \tag{6}
\]

averaged over all classes. The usage of a conditional vari-

ance estimator is justiﬁed because the ideal similarity in

kernel target alignment (cf. equation (4)) does have a vari-

ance over the kernel as a whole; however, the conditional

deviations in equation (6) would be zero for the ideal ker-

nel. Similarly, the fundamental MKL optimization for-

mula (8) relies on a statistic based on the two conditional

kernels used in formula (6). Finally, ERCF clustering

uses label information. Therefore averaging the class-

wise conditional standard deviations over all classes is not

expected to be identical to the standard deviation of the

whole kernel.
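For reference, the conditional standard deviation of Eq. (6) can be computed from a kernel matrix and its labels as in the following numpy sketch (names are illustrative):

```python
# Sketch of the class-conditional kernel standard deviation of Eq. (6).
import numpy as np

def conditional_std(K, y):
    y = np.asarray(y)
    same = np.equal.outer(y, y)                  # pairs with consistent labels
    return K[same].std() + K[~same].std()        # std(K | y_i = y_j) + std(K | y_i != y_j)
```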

We observe in Table 6 that the standard deviations are

lower for the sum kernels. Comparing ERCF and k-means

shows that the former not only exhibits larger absolute

standard deviations but also greater differences between

single-best and sum-kernel as well as larger differences in

AP scores.

We can thus postulate that the reason for the superior

performance of the sum-kernel SVM stems from aver-

aging out the randomness contained in the BoW kernels

(stemming from the visual-word generation). This can be

explained by the fact that averaging is a way of reduc-

ing the variance in the predictors/models [50]. We can

also remark that such variance reduction effects can also

be observed when averaging BoW kernels with varying

color combinations or other parameters; this stems from

the randomness induced by the visual word generation.

Note that in the above experimental setup each kernel

uses the same information provided via the local features.

Consequently, the best we can do is averaging—learning

kernel weights in such a scenario is likely to suffer from

overﬁtting to the noise contained in the kernels and can

only decrease performance.

To further analyze this, we recall that, in the compu-

tational optimum, the information content of a kernel is

measured by ℓp-norm MKL via the following quantity, as


Method Best Single Kernel Sum Kernel

VOC09, k-Means AP: 44.42 ±12.82 45.84 ±12.94

VOC09, k-Means Std: 30.81 30.74

VOC09, ERCF AP: 42.60 ±12.50 47.49 ±12.89

VOC09, ERCF Std: 38.12 37.89

ImageCLEF, k-Means AP: 31.09 ±5.56 31.73 ±5.57

ImageCLEF, k-Means Std: 30.51 30.50

ImageCLEF, ERCF AP: 29.91 ±5.39 32.77 ±5.93

ImageCLEF, ERCF Std: 38.58 38.10

Table 6: AP Scores and standard deviations showing amount of

randomness in feature extraction: results from repeated compu-

tations of BoW Kernels with randomly initialized codebooks

proved in [21]:
\[
\beta \;\propto\; \|w\|_2^{\frac{2}{p+1}} = \Big( \sum_{i,j} \alpha_i y_i K_{ij} \alpha_j y_j \Big)^{\frac{1}{p+1}}. \tag{7}
\]

In this paper we deliver a novel interpretation of the above

quantity; to this end, we decompose the right-hand term

into two terms as follows:

\[
\sum_{i,j} \alpha_i y_i K_{ij} \alpha_j y_j \;=\; \sum_{i,j:\, y_i = y_j} \alpha_i K_{ij} \alpha_j \;-\; \sum_{i,j:\, y_i \neq y_j} \alpha_i K_{ij} \alpha_j.
\]

The above term can be interpreted as a difference of the

support-vector-weighted sub-kernel restricted to consis-

tent labels and the support-vector-weighted sub-kernel

over the opposing labels. Equation 7 thus can be rewritten

as

\[
\beta \;\propto\; \Big( \sum_{i,j:\, y_i = y_j} \alpha_i K_{ij} \alpha_j \;-\; \sum_{i,j:\, y_i \neq y_j} \alpha_i K_{ij} \alpha_j \Big)^{\frac{1}{p+1}}. \tag{8}
\]
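A small numpy sketch of this per-kernel quantity follows; it assumes the dual variables α of a trained SVM are available, and the names as well as the clipping at zero are our own illustrative choices.

```python
# Sketch of the information content of a kernel as in Eq. (8).
import numpy as np

def kernel_information(K, alpha, y, p):
    a = np.asarray(alpha, dtype=float)
    same = np.equal.outer(y, y)                  # label-consistent pairs
    M = np.outer(a, a) * K                       # alpha_i K_ij alpha_j
    score = M[same].sum() - M[~same].sum()       # inner difference of Eq. (8)
    return max(score, 0.0) ** (1.0 / (p + 1))
```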

Thus, we observe that random inﬂuences in the features

combined with overﬁtting support vectors can suggest a

falsely high information content in this measure for some

kernels. SVMs do overﬁt on BoW features. Using the

scores attained on the training data subset we can observe that many classes are deceptively well predicted, with AP scores fairly above 0.9. At this point, non-sparse ℓp>1-norm MKL offers a parameter p for regularizing the kernel weights—thus making the algorithm more robust against random noise, yet permitting the use of some degree of the information given by Equation (8).

[30] reported, in accordance with our idea about overfit-

ting of SVMs that ℓ2-MKL and ℓ1-MKL show no gain

in such a scenario while ℓ1-MKL even reduces perfor-

mance for some datasets. This result is not surprising

as the overly sparse ℓ1-MKL has a stronger tendency to

overﬁt to the randomness contained in the kernels / fea-

ture generation. The observed amount of randomness in

the state-of-the-art BoW features could be an explanation

why the sum-kernel SVM has shown to be a quite hard-

to-beat competitor for semantic concept classiﬁcation and

ranking problems.

4.2 MKL and Prior Knowledge

For solving a learning problem, there is nothing more

valuable than prior knowledge. Our empirical ﬁndings

on the VOC2009 and ImageCLEF09 data sets suggested

that our experimental setup was actually biased towards

the sum-kernel SVM via usage of prior knowledge when

choosing the set of kernels / image features. We deployed

kernels based on four features types: BoW-S, BoW-C,

HoC and HoG. However, the number of kernels taken

from each feature type is not equal. Based on our experi-

ence with the VOC and ImageCLEF challenges we used

a higher fraction of BoW kernels and less kernels of other

types such as histograms of colors or gradients because

we already knew that BoW kernels have superior perfor-

mance.

To investigate to what extent our choice of kernels in-

troduces a bias towards the sum-kernel SVM, we also per-

formed another experiment, where we deployed a higher

fraction of weaker kernels for VOC2009. The difference

to our previous experiments lies in summarizing the 15 BoW-S kernels into 5 product kernels, reducing the

number of kernels from 32 to 22. The results are given

in Table 7; when compared to the results of the origi-

nal 32-kernel experiment (shown in Table 1), we observe

that the AP scores are on average about 4 points lower. This can be attributed to the fraction of weak kernels being higher than in the original experiment; consequently, the

gain from using (ℓ1.333 -norm) MKL compared to the sum-

kernel SVM is now more pronounced: over 2 AP points—

again, this can be explained by the higher fraction of weak

(i.e., noisy) kernels in the working set (this effect is also

conﬁrmed in the toy experiment carried out in supplemen-

tal material: there, we see that MKL becomes more bene-


Class / ℓp-norm 1.333 ∞

Aeroplane 77.82 ±7.701 76.28 ±8.168

Bicycle 50.75 ±11.06 46.39 ±12.37

Bird 57.7 ±8.451 55.09 ±8.224

Boat 62.8 ±13.29 60.9 ±14.01

Bottle 26.14 ±9.274 25.05 ±9.213

Bus 68.15 ±22.55 67.24 ±22.8

Car 51.72 ±8.822 49.51 ±9.447

Cat 56.69 ±9.103 55.55 ±9.317

Chair 51.67 ±12.24 49.85 ±12

Cow 25.33 ±13.8 22.22 ±12.41

Diningtable 45.91 ±19.63 42.96 ±20.17

Dog 41.22 ±10.14 39.04 ±9.565

Horse 52.45 ±13.41 50.01 ±13.88

Motorbike 54.37 ±12.91 52.63 ±12.66

Person 80.12 ±10.13 79.17 ±10.51

Pottedplant 35.69 ±13.37 34.6 ±14.09

Sheep 37.05 ±18.04 34.65 ±18.68

Sofa 41.15 ±11.21 37.88 ±11.11

Train 70.03 ±15.67 67.87 ±16.37

Tvmonitor 59.88 ±10.66 57.77 ±10.91

Average 52.33 ±12.57 50.23 ±12.79

Table 7: MKL versus Prior Knowledge: AP Scores with a

smaller fraction of well-scoring kernels.

ﬁcial when the number of noisy kernels is increased).

In summary, this experiment should remind us that se-

mantic classiﬁcation setups use a substantial amount of

prior knowledge. Prior knowledge implies a pre-selection

of highly effective kernels—a carefully chosen set of

strong kernels constitutes a bias towards the sum kernel.

Clearly, pre-selection of strong kernels reduces the need

for learning kernel weights; however, in settings where

prior knowledge is sparse, statistical (or even adaptive,

adversarial) noise is inherently contained in the feature

extraction—thus, beneﬁcial effects of MKL are expected

to be more pronounced in such a scenario.

4.3 One Argument for Learning the Multiple Kernel Weights: Varying Informative Subsets of Data

In the previous sections, we presented evidence for why

the sum-kernel SVM is considered to be a strong learner

in visual image categorization. Nevertheless, in our ex-

periments we observed gains in accuracy by using MKL

for many concepts. In this section, we investigate causes

for this performance gain.

Intuitively speaking, one can claim that the kernels non-

uniformly contain varying amounts of information con-

tent. We investigate more speciﬁcally what information

content this is and why it differs over the kernels. Our

main hypothesis is that common kernels in visual concept

classiﬁcation are informative with respect to varying sub-

sets of the data. This stems from features being frequently

computed from many combinations of color channels. We

can imagine that blue color present in the upper third of an

image can be crucial for prediction of photos having clear

sky, while other photos showing a sundown or a smoggy

sky tend to contain white or yellow colors; this means

that a particular kernel / feature group can be crucial for

some images, while it may be almost useless—or even

counterproductive—for others.

However, the information content is accessed by MKL

via the quantity given by Eq. (8); the latter is a global

information measure, which is computed over the support

vectors (which in turn are chosen over the whole dataset).

In other words, the kernel weights are global weights that

uniformly hold in all regions of the input space. Explicitly

ﬁnding informative subsets of the input space on real data

may not only imply too high a computational burden (note that the number of partitions of an n-element training set grows exponentially in n) but is also very likely to lead to

overﬁtting.

To understand the implications of the above to com-

puter vision, we performed the following toy experi-

ment. We generated a fraction of p+ = 0.25 of positively labeled and p− = 0.75 of negatively labeled 6m-dimensional training examples (motivated by the unbalancedness of training sets usually encountered in computer vision) in the following way: the features were divided into m feature groups, each consisting of six features.

For each feature group, we split the training set into an

informative and an uninformative set (the size is varying

over the feature groups); thereby, the informative sets of

the particular feature groups are disjoint. Subsequently,

each feature group is processed by a Gaussian kernel,

where the width is determined heuristically in the same

way as in the real experiments shown earlier in this paper.

Thereby, we consider two experimental setups for sam-

pling the data, which differ in the number of employed

kernels m and the sizes of the informative sets. In both


cases, the informative features are drawn from two suf-

ﬁciently distant normal distributions (one for each class)

while the uninformative features are just Gaussian noise

(mixture of Gaussians). The experimental setup of the

ﬁrst experiment can be summarized as follows:

Experimental Settings for Experiment 1 (3 kernels):
\[
n_{k=1,2,3} = (300, 300, 500), \qquad p_+ := P(y = +1) = 0.25 \tag{9}
\]
The features for the informative subset are drawn according to
\[
f^{(k)}_i \sim \begin{cases} \mathcal{N}(0.0, \sigma_k) & \text{if } y_i = -1 \\ \mathcal{N}(0.4, \sigma_k) & \text{if } y_i = +1 \end{cases} \tag{10}
\]
\[
\sigma_k = \begin{cases} 0.3 & \text{if } k = 1, 2 \\ 0.4 & \text{if } k = 3 \end{cases} \tag{11}
\]
The features for the uninformative subset are drawn according to
\[
f^{(k)} \sim (1 - p_+)\,\mathcal{N}(0.0, 0.5) + p_+\,\mathcal{N}(0.4, 0.5). \tag{12}
\]

For Experiment 1 the three kernels had disjoint informa-

tive subsets of sizes n_k = (300, 300, 500) for k = 1, 2, 3. We used

1100 data points for training and the same amount for test-

ing. We repeated this experiment 500 times with different

random draws of the data.

Note that the features used for the uninformative sub-

sets are drawn as a mixture of the Gaussians used for the

informative subset, but with a higher variance, though.

The increased variance encodes the assumption that the

feature extraction produces unreliable results on the un-

informative data subset. None of these kernels are pure

noise or irrelevant. Each kernel is the best one for its own

informative subset of data points.
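The data generation of Experiment 1 can be sketched as follows (a numpy sketch of Eqs. (9)–(12), treating σ_k as a standard deviation; all names are illustrative):

```python
# Sketch of the synthetic data of Experiment 1: three feature groups with
# disjoint informative subsets of sizes 300, 300, 500; outside its informative
# subset a group only contains the noisy mixture of Eq. (12).
import numpy as np

def sample_experiment1(n=1100, p_pos=0.25, sizes=(300, 300, 500),
                       sigmas=(0.3, 0.3, 0.4), dim=6, seed=0):
    rng = np.random.default_rng(seed)
    y = np.where(rng.random(n) < p_pos, 1, -1)
    starts = np.cumsum((0,) + sizes[:-1])
    groups = []
    for k, (nk, sk) in enumerate(zip(sizes, sigmas)):
        informative = np.zeros(n, dtype=bool)
        informative[starts[k]:starts[k] + nk] = True       # disjoint subsets
        mean = np.where(y == 1, 0.4, 0.0)[:, None]          # Eq. (10)
        f_inf = rng.normal(mean, sk, size=(n, dim))
        mix_mean = np.where(rng.random((n, dim)) < p_pos, 0.4, 0.0)
        f_noise = rng.normal(mix_mean, 0.5)                 # Eq. (12)
        groups.append(np.where(informative[:, None], f_inf, f_noise))
    return groups, y
```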

We now turn to the experimental setup of the second

experiment:

Experimental Settings for Experiment 2 (5 kernels):
\[
n_{k=1,\dots,5} = (300, 300, 500, 200, 500), \qquad p_+ := P(y = +1) = 0.25
\]
The features for the informative subset are drawn according to
\[
f^{(k)}_i \sim \begin{cases} \mathcal{N}(0.0, \sigma_k) & \text{if } y_i = -1 \\ \mathcal{N}(m_k, \sigma_k) & \text{if } y_i = +1 \end{cases} \tag{13}
\]
\[
m_k = \begin{cases} 0.4 & \text{if } k = 1, 2, 3 \\ 0.2 & \text{if } k = 4, 5 \end{cases} \tag{14}
\]
\[
\sigma_k = \begin{cases} 0.3 & \text{if } k = 1, 2 \\ 0.4 & \text{if } k = 3, 4, 5 \end{cases} \tag{15}
\]
The features for the uninformative subset are drawn according to
\[
f^{(k)} \sim (1 - p_+)\,\mathcal{N}(0.0, 0.5) + p_+\,\mathcal{N}(m_k, 0.5) \tag{16}
\]

Experiment ℓ∞-SVM ℓ1.0625-MKL t-test p-value
1 68.72 ±3.27 69.49 ±3.17 0.000266
2 55.07 ±2.86 56.39 ±2.84 4.7·10^−6

Table 8: Varying informative subsets of data: AP scores in the toy experiment using kernels with disjoint informative subsets of data.

As in the real experiments, we normalized the kernels to have standard deviation 1 in Hilbert space and optimized the regularization constant by grid search over $C \in \{10^i \mid i = -2, -1.5, \dots, 2\}$.
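A minimal sketch of this normalization step is given below: dividing a kernel matrix by the empirical feature-space variance, $\mathrm{var}(K) = \mathrm{tr}(K)/n - \frac{1}{n^2}\sum_{ij} K_{ij}$, rescales the data to standard deviation 1 in Hilbert space. The helper name and the commented grid-search snippet (using scikit-learn) are our own assumptions, not the original code.

```python
import numpy as np

def normalize_unit_variance(K):
    """Rescale a kernel matrix so the data have standard deviation 1 in Hilbert space.

    The empirical feature-space variance is var = tr(K)/n - sum(K)/n^2;
    dividing K by this value makes the rescaled variance (and hence the std) equal to 1.
    """
    n = K.shape[0]
    var = np.trace(K) / n - K.sum() / n ** 2
    return K / var

# Example (hedged sketch): grid search over C on a normalized sum kernel,
# assuming precomputed kernel matrices K_list and labels y are available.
# from sklearn.svm import SVC
# from sklearn.model_selection import GridSearchCV
# K_sum = sum(normalize_unit_variance(K) for K in K_list)
# grid = GridSearchCV(SVC(kernel="precomputed"),
#                     {"C": [10 ** i for i in np.arange(-2, 2.5, 0.5)]}, cv=5)
# grid.fit(K_sum, y)
```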

Table 8 shows the results. The null hypothesis of equal means is rejected by a t-test with p-values of 0.000266 and 0.0000047 for Experiments 1 and 2, respectively, which is highly significant.
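A sketch of how such a comparison could be computed is given below: average precision is evaluated per repetition for both methods, and the two score vectors are then compared with a t-test. The use of scikit-learn's average_precision_score and of a paired test via scipy.stats.ttest_rel are our own assumptions for illustration; the paper does not specify these implementation details.

```python
import numpy as np
from scipy.stats import ttest_rel
from sklearn.metrics import average_precision_score

def compare_methods(scores_svm, scores_mkl, labels_per_rep):
    """Compare two methods' test-set decision values over repeated random draws.

    scores_svm / scores_mkl: lists (one entry per repetition) of decision values;
    labels_per_rep: the corresponding test labels in {-1, +1}.
    """
    ap_svm = np.array([average_precision_score((y == 1).astype(int), s)
                       for y, s in zip(labels_per_rep, scores_svm)])
    ap_mkl = np.array([average_precision_score((y == 1).astype(int), s)
                       for y, s in zip(labels_per_rep, scores_mkl)])
    t_stat, p_value = ttest_rel(ap_mkl, ap_svm)   # paired t-test over repetitions
    return ap_svm.mean(), ap_svm.std(), ap_mkl.mean(), ap_mkl.std(), p_value
```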

The design of Experiment 1 is not an exceptional, lucky case: we observed similar results when using more kernels, and the performance gaps then increased even further. Experiment 2 is a more complex version of Experiment 1 using five kernels instead of three. Again, the informative subsets are disjoint, but this time of sizes 300, 300, 500, 200, and 500; the Gaussians for the positive class are centered at 0.4, 0.4, 0.4, 0.2, and 0.2, respectively; and $\sigma_k$ is taken as $(0.3, 0.3, 0.4, 0.4, 0.4)$. Compared to Experiment 1, this results in even bigger performance gaps between the sum-kernel SVM and the non-sparse ℓ1.0625-MKL. One could construct learning scenarios with more and more kernels in the above way, further increasing the performance gaps; since we aim at a relative comparison, however, this would not further contribute to validating or rejecting our hypothesis.

Furthermore, we also investigated the single-kernel performance of each kernel: in Experiment 1 the individual kernels attained AP scores of 43.60, 43.40,


and 58.90, so even the best single-kernel SVM was inferior to both MKL (regardless of the employed norm parameter p) and the sum-kernel SVM. The differences were significant with fairly small p-values (for example, for ℓ1.25-MKL the p-value was about 0.02).

We emphasize that we did not design the example to achieve a maximal performance gap between non-sparse MKL and its competitors. For such an example, see the toy experiment of [21], which is replicated in the supplemental material together with additional analysis. Our focus here was to confirm our hypothesis that kernels in semantic concept classification are based on varying subsets of the data: although MKL computes global weights, it emphasizes kernels that are relevant on the largest informative set and thus approximates the infeasible combinatorial problem of computing an optimal partition/grid of the space into regions that share identical optimal weights. In practice, however, we expect the situation to be more complicated, as informative subsets may overlap between kernels.

Nevertheless, our hypothesis also opens the way to new directions for learning kernel weights, namely weights restricted to subsets of the data chosen according to a meaningful principle. Finding such principles is one of the future goals of MKL; we sketched one possibility: locality in feature space. A first starting point may be the work of [51, 52] on localized MKL.

5 Conclusions

When measuring data with different measuring devices, it is always a challenge to combine the respective devices' uncertainties in order to fuse all available sensor information optimally. In this paper, we revisited this important topic and discussed machine learning approaches for adaptively combining different image descriptors in a systematic and theoretically well-founded manner. While MKL approaches in principle solve this problem, it has been observed that the standard ℓ1-norm MKL often cannot outperform SVMs that use an unweighted average of a large number of kernels. One hypothesis for why this seemingly unintuitive result occurs is that the sparsity prior may not be appropriate in many real-world problems, especially when prior knowledge is already at hand. We tested whether this hypothesis holds true for computer vision and applied the recently developed non-sparse ℓp-MKL algorithms to object classification tasks. The ℓp-norm constitutes a slightly less severe method of sparsification. By choosing p, a hyperparameter that controls the degree of non-sparsity and regularization, from a set of candidate values with the help of validation data, we showed that ℓp-MKL significantly improves over SVMs with averaged kernels and over the standard sparse ℓ1-MKL.
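As an illustration of the algorithmic idea behind non-sparse ℓp-MKL, the sketch below implements a simple wrapper that alternates between training an SVM on the weighted kernel sum and updating the kernel weights via the closed-form ℓp-constrained solution $\beta_k \propto \|w_k\|^{2/(p+1)}$ (cf. [20, 21]). This is a minimal reimplementation for illustration only, not the authors' implementation; all names and defaults are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def lp_mkl(K_list, y, p=1.0625, C=1.0, n_iter=20, tol=1e-4):
    """Wrapper-style lp-norm MKL sketch (illustrative, not optimized).

    Alternates between (i) training an SVM on the fixed weighted kernel sum and
    (ii) the closed-form update of the kernel weights beta under ||beta||_p <= 1:
        beta_k proportional to ||w_k||^(2/(p+1)), normalized so that ||beta||_p = 1,
    where ||w_k||^2 = beta_k^2 * a^T K_k a and a holds the signed dual coefficients.
    """
    M = len(K_list)
    beta = np.full(M, M ** (-1.0 / p))               # uniform start with ||beta||_p = 1
    for _ in range(n_iter):
        K = sum(b * Kk for b, Kk in zip(beta, K_list))
        svm = SVC(C=C, kernel="precomputed").fit(K, y)
        a = svm.dual_coef_.ravel()                   # y_i * alpha_i on support vectors
        sv = svm.support_
        w2 = np.array([b ** 2 * a @ Kk[np.ix_(sv, sv)] @ a
                       for b, Kk in zip(beta, K_list)])
        new_beta = w2 ** (1.0 / (p + 1))
        new_beta /= np.sum(new_beta ** p) ** (1.0 / p)   # project back to ||beta||_p = 1
        done = np.linalg.norm(new_beta - beta) < tol
        beta = new_beta
        if done:
            break
    # retrain the final SVM with the converged kernel weights
    K = sum(b * Kk for b, Kk in zip(beta, K_list))
    svm = SVC(C=C, kernel="precomputed").fit(K, y)
    return beta, svm
```

Note that such a naive wrapper retrains a full SVM in every iteration; the interleaved optimization described in [21] is substantially more efficient, but both target the same underlying optimization problem.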

Future work will study localized MKL and methods to include hierarchically structured information in MKL, e.g. knowledge from taxonomies, semantic information, or spatial priors. Another interesting direction is MKL-KDA [27, 28]. The difference to the method studied in the present paper lies in the base optimization criterion: KDA [53] leads to non-sparse solutions in α, while ours leads to sparse ones (i.e., a low number of support vectors). While the latter is expected to be computationally advantageous, the former might lead to more accurate solutions. We expect the regularization over kernel weights (i.e., the choice of the norm parameter p) to have similar effects for MKL-KDA as for MKL-SVM. Future studies will expand on this topic.¹

¹ First experiments on ImageCLEF2010 show, for the sum-kernel SRKDA [54], a result of 39.29 AP points, which is slightly better than the sum-kernel result for the SVM (39.11 AP) but worse than MKL-SVM.

Acknowledgments

This work was supported in part by the Federal Ministry

of Economics and Technology of Germany (BMWi) un-

der the project THESEUS (FKZ 01MQ07018), by the Federal Ministry of Education and Research (BMBF) under the

project REMIND (FKZ 01-IS07007A), by the Deutsche

Forschungsgemeinschaft (DFG), and by the FP7-ICT pro-

gram of the European Community, under the PASCAL2

Network of Excellence (ICT-216886). Marius Kloft ac-

knowledges a scholarship by the German Academic Ex-

change Service (DAAD).

References

[1] Vapnik V (1995) The Nature of Statistical Learning The-

ory. New York: Springer.



[2] Cortes C, Vapnik V (1995) Support-vector networks. In:

Machine Learning. pp. 273–297.

[3] Vapnik VN (1998) Statistical Learning Theory. Wiley-

Interscience.

[4] Chapelle O, Haffner P, Vapnik V (1999) SVMs for

histogram-based image classiﬁcation. IEEE Trans on Neu-

ral Networks 10: 1055–1064.

[5] Müller KR, Mika S, Rätsch G, Tsuda K, Schölkopf B

(2001) An introduction to kernel-based learning algo-

rithms. IEEE Transactions on Neural Networks 12: 181–

201.

[6] Schölkopf B, Smola AJ (2002) Learning with Kernels.

Cambridge, MA: MIT Press.

[7] Jaakkola T, Haussler D (1998) Exploiting generative mod-

els in discriminative classiﬁers. In: Advances in Neural

Information Processing Systems. volume 11, pp. 487-493.

[8] Zien A, Rätsch G, Mika S, Schölkopf B, Lengauer T, et al.

(2000) Engineering support vector machine kernels that

recognize translation initiation sites. Bioinformatics 16:

799-807.

[9] Lazebnik S, Schmid C, Ponce J (2006) Beyond bags of

features: Spatial pyramid matching for recognizing natural

scene categories. In: IEEE Computer Society Conference

on Computer Vision and Pattern Recognition. New York,

USA, volume 2, pp. 2169–2178.

[10] Zhang J, Marszalek M, Lazebnik S, Schmid C (2007) Local

features and kernels for classiﬁcation of texture and object

categories: A comprehensive study. International Journal

of Computer Vision 73: 213–238.

[11] Kumar A, Sminchisescu C (2007) Support kernel ma-

chines for object recognition. In: IEEE International Con-

ference on Computer Vision.

[12] Gehler PV, Nowozin S (2009) On feature combination for

multiclass object classiﬁcation. In: ICCV. IEEE, pp. 221-

228.

[13] Lanckriet GR, Cristianini N, Bartlett P, Ghaoui LE, Jordan

MI (2004) Learning the kernel matrix with semideﬁnite

programming. Journal of Machine Learning Research :

27–72.

[14] Bach F, Lanckriet G, Jordan M (2004) Multiple kernel

learning, conic duality, and the smo algorithm. Interna-

tional Conference on Machine Learning .

[15] Sonnenburg S, Rätsch G, Schäfer C, Schölkopf B (2006)

Large Scale Multiple Kernel Learning. Journal of Machine

Learning Research 7: 1531–1565.

[16] Rakotomamonjy A, Bach F, Canu S, Grandvalet Y (2008)

SimpleMKL. Journal of Machine Learning Research 9:

2491–2521.

[17] Cortes C, Gretton A, Lanckriet G, Mohri M, Rostamizadeh A (2008) Proceedings of the NIPS Workshop on Kernel Learning: Automatic Selection of Optimal Kernels. URL http://www.cs.nyu.edu/learning_kernels.

[18] Kloft M, Brefeld U, Laskov P, Sonnenburg S (2008) Non-

sparse multiple kernel learning. In: Proc. of the NIPS

Workshop on Kernel Learning: Automatic Selection of

Kernels.

[19] Cortes C, Mohri M, Rostamizadeh A (2009) L2 regular-

ization for learning kernels. In: Proceedings of the In-

ternational Conference on Uncertainty in Artiﬁcial Intelli-

gence.

[20] Kloft M, Brefeld U, Sonnenburg S, Laskov P, Müller KR,

et al. (2009) Efﬁcient and accurate lp-norm multiple ker-

nel learning. In: Bengio Y, Schuurmans D, Lafferty J,

Williams CKI, Culotta A, editors, Advances in Neural In-

formation Processing Systems 22, MIT Press. pp. 997–

1005.

[21] Kloft M, Brefeld U, Sonnenburg S, Zien A (2011) Lp-

norm multiple kernel learning. Journal of Machine Learn-

ing Research 12: 953-997.

[22] Orabona F, Luo J, Caputo B (2010) Online-batch strongly

convex multi kernel learning. In: CVPR. pp. 787-794.

[23] Vedaldi A, Gulshan V, Varma M, Zisserman A (2009) Mul-

tiple kernels for object detection. In: Computer Vision,

2009 IEEE 12th International Conference on. pp. 606–613.

doi:10.1109/ICCV.2009.5459183.

[24] Galleguillos C, McFee B, Belongie SJ, Lanckriet GRG

(2010) Multi-class object localization by combining local

contextual interactions. In: CVPR. pp. 113-120.

[25] Everingham M, Van Gool L, Williams CKI, Winn J, Zisserman A (2009) The PASCAL Visual Object Classes Challenge 2009 (VOC2009). URL http://www.pascal-network.org/challenges/VOC/voc2009/workshop/index.html.

[26] Nowak S, Huiskes MJ (2010) New strategies for im-

age annotation: Overview of the photo annotation

task at imageclef 2010. In: CLEF (Notebook Pa-

pers/LABs/Workshops).

[27] Yan F, Kittler J, Mikolajczyk K, Tahir A (2009) Non-

sparse multiple kernel learning for Fisher discriminant

analysis. In: Proceedings of the 2009 Ninth IEEE Interna-

tional Conference on Data Mining. Washington, DC, USA:


IEEE Computer Society, ICDM ’09, pp. 1064–1069. doi:

10.1109/ICDM.2009.84.

[28] Yan F, Mikolajczyk K, Barnard M, Cai H, Kittler J (2010)

Lp norm multiple kernel Fisher discriminant analysis for object and image categorisation. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition. pp. 3626–3632.

[29] Cao L, Luo J, Liang F, Huang TS (2009) Heterogeneous

feature machines for visual recognition. In: ICCV. pp.

1095-1102.

[30] Gehler PV, Nowozin S (2009) Let the kernel ﬁgure it out;

principled learning of pre-processing for kernel classiﬁers.

In: CVPR. pp. 2836-2843.

[31] Varma M, Babu BR (2009) More generality in efﬁcient

multiple kernel learning. In: ICML. p. 134.

[32] Zien A, Ong C (2007) Multiclass multiple kernel learning.

In: ICML. pp. 1191-1198.

[33] Rakotomamonjy A, Bach F, Canu S, Grandvalet Y (2007)

More efﬁciency in multiple kernel learning. In: ICML. pp.

775-782.

[34] Cristianini N, Shawe-Taylor J, Elisseeff A, Kandola J

(2002) On kernel-target alignment. In: Advances in Neural

Information Processing Systems. volume 14, pp. 367–373.

[35] Cortes C, Mohri M, Rostamizadeh A (2010) Two-stage

learning kernel algorithms. In: Fürnkranz J, Joachims T,

editors, ICML. Omnipress, pp. 239-246.

[36] Mika S, Rätsch G, Weston J, Schölkopf B, Smola A, et al.

(2003) Constructing descriptive and discriminative nonlin-

ear features: Rayleigh coefﬁcients in kernel feature spaces.

IEEE Transactions on Pattern Analysis and Machine Intelligence 25: 623–628.

[37] Marszalek M, Schmid C. Learning representations for visual object class recognition. URL http://pascallin.ecs.soton.ac.uk/challenges/VOC/voc2007/workshop/marszalek.pdf.

[38] Lampert C, Blaschko M (2008) A multiple kernel learning

approach to joint multi-class object detection. In: DAGM.

pp. 31–40.

[39] Csurka G, Bray C, Dance C, Fan L (2004) Visual catego-

rization with bags of keypoints. In: Workshop on Statis-

tical Learning in Computer Vision, ECCV. Prague, Czech

Republic, pp. 1–22.

[40] Lowe D (2004) Distinctive image features from scale in-

variant keypoints. International Journal of Computer Vi-

sion 60: 91–110.

[41] van de Sande KEA, Gevers T, Snoek CGM (2010) Eval-

uating color descriptors for object and scene recognition.

IEEE Transactions on Pattern Analysis and Machine Intelligence.

[42] Dalal N, Triggs B (2005) Histograms of oriented gradi-

ents for human detection. In: IEEE Computer Society Con-

ference on Computer Vision and Pattern Recognition. San

Diego, USA, volume 1, pp. 886–893.

[43] Canny J (1986) A computational approach to edge detec-

tion. IEEE Trans on Pattern Analysis and Machine Intelli-

gence 8: 679–714.

[44] Zien A, Ong CS (2007) Multiclass multiple kernel learn-

ing. In: Proceedings of the 24th international conference

on Machine learning (ICML). ACM, pp. 1191–1198.

[45] Chapelle O, Rakotomamonjy A (2008) Second order opti-

mization of kernel parameters. In: Proc. of the NIPS Work-

shop on Kernel Learning: Automatic Selection of Optimal

Kernels.

[46] Sonnenburg S, Rätsch G, Henschel S, Widmer C, Behr J,

et al. (2010) The shogun machine learning toolbox. Jour-

nal of Machine Learning Research .

[47] Kloft M, Brefeld U, Sonnenburg S, Zien A (2010) Non-

sparse regularization for multiple kernel learning. Journal

of Machine Learning Research .

[48] Moosmann F, Nowak E, Jurie F (2008) Randomized clus-

tering forests for image classiﬁcation. IEEE Transactions

on Pattern Analysis & Machine Intelligence 30: 1632–

1646.

[49] Moosmann F, Triggs B, Jurie F (2006) Fast discriminative

visual codebooks using randomized clustering forests. In:

Advances in Neural Information Processing Systems.

[50] Breiman L (1996) Bagging predictors. Mach Learn 24:

123–140.

[51] Gönen M, Alpaydın E (2010) Localized multiple kernel

regression. In: Proceedings of the 20th IAPR International

Conference on Pattern Recognition.

[52] Yang J, Li Y, Tian Y, Duan L, Gao W (2009) Group-

sensitive multiple kernel learning for object categorization.

In: ICCV. pp. 436-443.

[53] Mika S, Rätsch G, Weston J, Schölkopf B, Müller KR

(1999) Fisher discriminant analysis with kernels. In: Hu

YH, Larsen J, Wilson E, Douglas S, editors, Neural Net-

works for Signal Processing IX. IEEE, pp. 41–48.

[54] Cai D, He X, Han J (2007) Efﬁcient kernel discriminant

analysis via spectral regression. In: Proc. Int. Conf. on

Data Mining (ICDM’07).
