Conference Paper

Large-scale Outdoor Scene Classification by Boosting a Set of Highly Discriminative and Low Redundant Graphlets.

DOI: 10.1109/ICDMW.2011.108 Conference: 2011 IEEE 11th International Conference on Data Mining Workshops (ICDMW 2011), Vancouver, BC, Canada, December 11, 2011
Source: DBLP

ABSTRACT Large-scale outdoor scene classification is an important problem in multimedia information retrieval. In this paper, we propose an efficient scene classification model that integrates an outdoor scene image's local features into a set of highly discriminative and low-redundancy graphlets (i.e., small connected subgraphs). First, each outdoor scene image is segmented into a number of regions according to its color intensity distribution, and a region adjacency graph (RAG) is defined to encode the geometric properties and color intensity distribution of the image. Then, frequent substructures are mined statistically from the RAGs of the training images, and a selection step retains those substructures that are highly discriminative and low in redundancy. The selected substructures are used as templates to extract the corresponding graphlets. Finally, the extracted graphlets are integrated by a multi-class boosting strategy for outdoor scene classification. Experimental results on the challenging SUN [sun] and LHI [lotus] data sets validate the effectiveness of our approach.
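The pipeline starts by turning a segmented image into a region adjacency graph. Below is a minimal sketch of that step, assuming a precomputed segmentation label map; the node/edge attributes and 4-connectivity rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import networkx as nx

def build_rag(labels, image):
    """Build a region adjacency graph (RAG) from a segmentation label map.

    labels : (H, W) integer array of region ids from any segmentation method.
    image  : (H, W, 3) float array of color intensities.
    Nodes carry each region's mean color and size; edges link regions that
    share a pixel boundary (4-connectivity).
    """
    rag = nx.Graph()

    # Node attributes: mean color intensity and size per region.
    for region in np.unique(labels):
        mask = labels == region
        rag.add_node(int(region),
                     mean_color=image[mask].mean(axis=0),
                     size=int(mask.sum()))

    # Edges: horizontally and vertically adjacent pixels with different labels.
    horiz = np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], axis=1)
    vert = np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], axis=1)
    for a, b in np.vstack([horiz, vert]):
        if a != b:
            rag.add_edge(int(a), int(b))
    return rag
```

Frequent substructures mined from such training RAGs would then serve as templates matched back against each image's RAG to extract the graphlets that are fed to the multi-class booster.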

  • ABSTRACT: Region-based features are becoming popular due to their higher descriptive power relative to other features. However, real-world images exhibit changes in the image segments that capture the same scene part when taken at different times, under different lighting conditions, from different viewpoints, etc. Segmentation algorithms reflect these changes, and thus segmentations exhibit poor repeatability. In this paper we address the problem of matching regions of similar objects under unstable segmentations. Merging and splitting of regions makes it difficult to find such correspondences using one-to-one matching algorithms. We present partial region matching as a solution to this problem. We assume that the high-contrast, dominant contours of an object are fairly repeatable, and use them to compute a partial matching cost (PMC) between regions. Region correspondences are obtained under region adjacency constraints encoded by a region adjacency graph (RAG). We integrate the PMC in a many-to-one label assignment framework for matching RAGs, and solve it using belief propagation. We show that our algorithm can match images of similar objects across unstable image segmentations. We also compare the performance of our algorithm with that of a standard one-to-one matching algorithm on three motion sequences. We conclude that our partial region matching approach is robust under segmentation irrepeatability.
    2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2008), 24-26 June 2008, Anchorage, Alaska, USA. (A toy partial-matching-cost sketch follows this list.)
  • ABSTRACT: In this paper, we propose a new image representation to capture both the appearance and spatial information for image classification applications. First, we model the feature vectors from the whole corpus, from each image, and at each individual patch in a Bayesian hierarchical framework using mixtures of Gaussians. After this hierarchical Gaussianization, each image is represented by a Gaussian mixture model (GMM) for its appearance and several Gaussian maps for its spatial layout. Then we extract the appearance information from the GMM parameters, and the spatial information from global and local statistics over the Gaussian maps. Finally, we employ a supervised dimension reduction technique called DAP (discriminant attribute projection) to remove noise directions and to further enhance the discriminating power of our representation. We show that the traditional histogram representation and spatial pyramid matching are special cases of our hierarchical Gaussianization. We compare our new representation with other approaches in scene classification, object recognition and face recognition, and our performance ranks among the top in all three tasks.
    IEEE 12th International Conference on Computer Vision (ICCV 2009); 11/2009. (A minimal appearance-model sketch follows this list.)
  • ABSTRACT: We propose a new criterion for discriminative dimension reduction, max-min distance analysis (MMDA). Given a data set with C classes, represented by homoscedastic Gaussians, MMDA maximizes the minimum pairwise distance of these C classes in the selected low-dimensional subspace. Thus, unlike Fisher's linear discriminant analysis (FLDA) and other popular discriminative dimension reduction criteria, MMDA duly considers the separation of all class pairs. To deal with the general case of data distribution, we also extend MMDA to kernel MMDA (KMMDA). Dimension reduction via MMDA/KMMDA leads to a nonsmooth max-min optimization problem with orthonormality constraints. We develop a sequential convex relaxation algorithm to solve it approximately. To evaluate the effectiveness of the proposed criterion and the corresponding algorithm, we conduct classification and data visualization experiments on both synthetic and real data sets. Experimental results demonstrate the effectiveness of MMDA/KMMDA together with the proposed optimization algorithm.
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 06/2011. DOI: 10.1109/TPAMI.2010.189. (A toy evaluation of the max-min criterion follows this list.)
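
The partial region matching abstract above (first item) hinges on a partial matching cost computed over dominant contours. The sketch below is a toy, one-directional chamfer-style cost over contour point sets; the trimming fraction and the cost definition are illustrative assumptions, not the paper's exact PMC, and the RAG labeling and belief propagation stages are omitted.

```python
import numpy as np
from scipy.spatial import cKDTree

def partial_matching_cost(contour_a, contour_b, trim=0.7):
    """Toy partial matching cost between two regions' dominant contours.

    contour_a, contour_b : (N, 2) and (M, 2) arrays of contour points.
    Only the best-matching fraction `trim` of the points in contour_a is
    scored, so a region covering only part of another can still match cheaply.
    """
    tree = cKDTree(contour_b)
    dists, _ = tree.query(contour_a)        # nearest point in contour_b for each point of contour_a
    dists = np.sort(dists)
    keep = max(1, int(trim * len(dists)))   # drop the worst (1 - trim) fraction of matches
    return float(dists[:keep].mean())
```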
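For the hierarchical Gaussianization abstract (second item), the sketch below covers only the appearance half of the idea: fit a corpus-level Gaussian mixture over pooled patch descriptors, then describe one image by posterior-weighted mean statistics of its own patches. The Bayesian hierarchical adaptation, the Gaussian maps for spatial layout, and the DAP projection are all omitted; the function names and the centering on corpus means are assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_corpus_gmm(all_patch_features, n_components=64, seed=0):
    """Fit a corpus-level GMM over local descriptors pooled from every image."""
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type='diag',
                          random_state=seed)
    gmm.fit(all_patch_features)              # (total_patches, dim)
    return gmm

def image_appearance_vector(gmm, patch_features):
    """Posterior-weighted mean statistics for one image's patches."""
    post = gmm.predict_proba(patch_features)  # (n_patches, n_components)
    counts = post.sum(axis=0) + 1e-8
    weighted_means = post.T @ patch_features / counts[:, None]
    # Center on the corpus means so the vector encodes per-image deviation.
    return (weighted_means - gmm.means_).ravel()
```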
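For the MMDA abstract (third item), the sketch below only evaluates the max-min criterion itself: the minimum pairwise Euclidean distance between class means after an orthonormal projection, with a naive random search standing in for the paper's sequential convex relaxation (and ignoring the homoscedastic-Gaussian modeling and the kernel variant).

```python
import numpy as np

def min_pairwise_class_distance(X, y, W):
    """Minimum pairwise distance between class means in the subspace W (d x k)."""
    Z = X @ W
    means = [Z[y == c].mean(axis=0) for c in np.unique(y)]
    dists = [np.linalg.norm(m1 - m2)
             for i, m1 in enumerate(means)
             for m2 in means[i + 1:]]
    return min(dists)

def random_search_mmda(X, y, k, n_trials=500, seed=0):
    """Naive stand-in for MMDA: keep the random orthonormal projection
    that attains the largest minimum pairwise class distance."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    best_W, best_val = None, -np.inf
    for _ in range(n_trials):
        W, _ = np.linalg.qr(rng.standard_normal((d, k)))  # orthonormal columns
        val = min_pairwise_class_distance(X, y, W)
        if val > best_val:
            best_W, best_val = W, val
    return best_W, best_val
```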