Conference Paper

Inference Scene Labeling by Incorporating Object Detection with Explicit Shape Model

DOI: 10.1007/978-3-642-19318-7_30 Conference: Computer Vision - ACCV 2010 - 10th Asian Conference on Computer Vision, Queenstown, New Zealand, November 8-12, 2010, Revised Selected Papers, Part III
Source: DBLP

ABSTRACT In this paper, we incorporate shape detection into contextual scene labeling, making use of shape, texture, and context
information in a single graphical representation. We propose a candidacy graph whose vertices are two types of recognition candidates:
labels for superpixels and window proposals. In the bottom-up steps, the superpixel candidates are generated by a discriminative
classifier over textural features, while the window proposals come from a learned deformable template model. The contextual and
competitive interactions between graph vertices, in the form of probabilistic connecting edges, are defined by two types of contextual
metrics and by the overlap of their image domains, respectively. With this representation, a composite cluster-sampling
algorithm based on Markov chain Monte Carlo (MCMC) is proposed to search efficiently for the globally optimal labeling. Our approach
is evaluated on both the Lotus Hill Institute (LHI) and MSRC public datasets and achieves state-of-the-art results.
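The candidacy-graph idea in the abstract can be illustrated with a toy sketch: candidates (superpixel labels or window proposals) are graph vertices with unary scores, contextual edges carry positive support, competitive (overlapping) edges carry penalties, and a Metropolis-Hastings sampler searches for a high-scoring subset. This is a hypothetical simplification, not the paper's composite cluster-sampling algorithm; all names, scores, and edge weights below are made up for illustration.

```python
import math
import random

def energy(state, unary, edges):
    """Negative score of a candidate selection: lower is better."""
    e = -sum(unary[v] for v, on in state.items() if on)
    for (u, v), w in edges.items():
        if state[u] and state[v]:
            e -= w  # w > 0: contextual support; w < 0: competition penalty
    return e

def mh_sample(unary, edges, iters=5000, temp=0.5, seed=0):
    """Metropolis-Hastings over on/off states of the candidates."""
    rng = random.Random(seed)
    state = {v: False for v in unary}
    e = energy(state, unary, edges)
    best, best_e = dict(state), e
    for _ in range(iters):
        v = rng.choice(list(unary))
        state[v] = not state[v]              # propose flipping one candidate
        e_new = energy(state, unary, edges)
        if e_new <= e or rng.random() < math.exp((e - e_new) / temp):
            e = e_new                        # accept the proposal
            if e < best_e:
                best, best_e = dict(state), e
        else:
            state[v] = not state[v]          # reject: undo the flip
    return best, best_e

# Toy example: two superpixel candidates that support each other contextually,
# and a window candidate whose image domain overlaps (competes with) the second.
unary = {"sp1_grass": 1.0, "sp2_cow": 0.8, "win_cow": 0.7}
edges = {("sp1_grass", "sp2_cow"): 0.5,    # contextual co-occurrence
         ("sp2_cow", "win_cow"): -2.0}     # overlap: mutually exclusive
best, best_e = mh_sample(unary, edges)
```

The sampler keeps the two mutually supporting superpixel candidates and drops the overlapping window, since activating both cow candidates pays the competition penalty.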

  • Source
    ABSTRACT: We present an object class detection approach which fully integrates the complementary strengths offered by shape matchers. Like an object detector, it can learn class models directly from images, and localize novel instances in the presence of intra-class variations, clutter, and scale changes. Like a shape matcher, it finds the accurate boundaries of the objects, rather than just their bounding-boxes. This is made possible by 1) a novel technique for learning a shape model of an object class given images of example instances; 2) the combination of Hough-style voting with a non-rigid point matching algorithm to localize the model in cluttered images. As demonstrated by an extensive evaluation, our method can localize object boundaries accurately, while needing no segmented examples for training (only bounding-boxes).
    IEEE Conference on Computer Vision and Pattern Recognition (CVPR '07); 07/2007
  • Source
    ABSTRACT: We present a method for object categorization in real-world scenes. Following a common consensus in the field, we do not assume that a figure-ground segmentation is available prior to recognition. However, in contrast to most standard approaches for object class recognition, our approach automatically segments the object as a result of the categorization. This combination of recognition and segmentation into one process is made possible by our use of an Implicit Shape Model, which integrates both capabilities into a common probabilistic framework. In addition to the recognition and segmentation result, it also generates a per-pixel confidence measure specifying the area that supports a hypothesis and how much it can be trusted. We use this confidence to derive a natural extension of the approach to handle multiple objects in a scene and resolve ambiguities between overlapping hypotheses with a novel MDL-based criterion. In addition, we present an extensive evaluation of our method on a standard dataset for car detection and compare its performance to existing methods from the literature. Our results show that the proposed method significantly outperforms previously published methods while needing one order of magnitude fewer training examples. Finally, we present results for articulated objects, which show that the proposed method can categorize and segment unfamiliar objects in different articulations and with widely varying texture patterns, even under significant partial occlusion.
    05/2004;
  • Source
    ABSTRACT: This paper proposes a new approach to learning a discriminative model of object classes, incorporating appearance, shape and context information efficiently. The learned model is used for automatic visual recognition and semantic segmentation of photographs. Our discriminative model exploits novel features, based on textons, which jointly model shape and texture. Unary classification and feature selection are achieved using shared boosting to give an efficient classifier which can be applied to a large number of classes. Accurate image segmentation is achieved by incorporating these classifiers in a conditional random field. Efficient training of the model on very large datasets is achieved by exploiting both random feature selection and piecewise training methods. High classification and segmentation accuracy are demonstrated on three different databases: i) our own 21-object class database of photographs of real objects viewed under general lighting conditions, poses and viewpoints, ii) the 7-class Corel subset and iii) the 7-class Sowerby database used in [1]. The proposed algorithm gives competitive results for highly textured (e.g. grass, trees), highly structured (e.g. cars, faces, bikes, aeroplanes), and articulated objects (e.g. body, cow).
    07/2006: pages 1-15;
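
The Implicit Shape Model abstract above rests on Hough-style voting: local features matched to codebook entries cast weighted votes for the object centre, and vote maxima become detection hypotheses, with the contributing features back-projected to support segmentation. The sketch below is a hypothetical toy version of that voting step; the codebook entries, offsets, feature positions, and bin size are all invented for illustration, not learned from data.

```python
from collections import defaultdict

def hough_votes(features, codebook, bin_size=10):
    """features: list of (x, y, codeword). codebook: codeword -> list of
    (dx, dy, weight) offsets from feature to object centre (in a real ISM,
    learned from training instances). Returns a vote accumulator over
    quantised centre bins, plus the contributing features per bin."""
    acc = defaultdict(float)
    contributors = defaultdict(list)
    for (x, y, word) in features:
        for (dx, dy, w) in codebook.get(word, []):
            cx, cy = x + dx, y + dy
            key = (cx // bin_size, cy // bin_size)
            acc[key] += w
            contributors[key].append((x, y))  # back-projection for segmentation
    return acc, contributors

# Toy codebook: a 'wheel' feature may sit below-left or below-right of a car
# centre, so each match casts two centre votes.
codebook = {"wheel": [(30, -20, 1.0), (-30, -20, 1.0)]}
features = [(70, 80, "wheel"), (130, 80, "wheel"), (300, 50, "wheel")]
acc, contrib = hough_votes(features, codebook)
best_bin = max(acc, key=acc.get)
```

The two left-hand wheel features agree on the same centre bin, which therefore accumulates the strongest vote, while the isolated third feature produces only weak, unsupported hypotheses.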