Conference Paper

SIFT-Based Image Retrieval Combining the Distance Measure of Global Image and Sub-Image.

DOI: 10.1109/IIH-MSP.2009.180
Conference: Fifth International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP 2009), Kyoto, Japan, 12-14 September 2009, Proceedings
Source: DBLP

ABSTRACT

This paper presents a similarity-matching method based on the global image and its local sub-images, using the SIFT features of digital images, and applies the algorithm to content-based image retrieval. To improve SIFT-based retrieval results, two fundamental improvements are made. The first is the introduction of the distance between matched keypoints into the similarity measure: the shorter the distance between matched keypoints, the lower the similarity measure. The second is that the image is partitioned into sub-images, which reduces mismatched keypoints. Experiments demonstrate the effectiveness of the proposed approach compared with traditional SIFT-based image retrieval and show it to be a good option for image retrieval.
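
Since only the abstract is given here, the following is a minimal Python sketch (using OpenCV's SIFT implementation) of the two ideas it names: folding matched-keypoint distances into the similarity measure, and partitioning images into sub-images before matching. The grid size, weighting, score formula, and all function names are illustrative assumptions, not the paper's actual formulation.

    import cv2

    # SIFT requires opencv-python >= 4.4 (or opencv-contrib-python earlier).
    sift = cv2.SIFT_create()

    def match_distance_score(des_q, des_c, ratio=0.75):
        """Distance-based score between two SIFT descriptor sets.

        Matched-keypoint descriptor distances are folded into the score,
        so many close matches yield a low score -- echoing the abstract's
        claim that shorter distances between matched keypoints lower the
        similarity measure."""
        if des_q is None or des_c is None:
            return float("inf")
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        pairs = matcher.knnMatch(des_q, des_c, k=2)
        # Lowe's ratio test to discard ambiguous matches.
        good = [p[0] for p in pairs
                if len(p) == 2 and p[0].distance < ratio * p[1].distance]
        if not good:
            return float("inf")
        # Mean match distance, further divided by the match count so that
        # having more matches also lowers the score (an illustrative choice).
        return sum(m.distance for m in good) / len(good) ** 2

    def grid_subimages(img, rows=2, cols=2):
        """Partition an image into a rows x cols grid of sub-images
        (the grid layout is an assumption; the paper's may differ)."""
        h, w = img.shape[:2]
        return [img[r * h // rows:(r + 1) * h // rows,
                    c * w // cols:(c + 1) * w // cols]
                for r in range(rows) for c in range(cols)]

    def combined_score(query, candidate, alpha=0.5):
        """Blend the global-image score with the best sub-image score.

        Matching cell-to-cell restricts correspondences to the same image
        region, one way partitioning can cut mismatched keypoints."""
        _, dq = sift.detectAndCompute(query, None)
        _, dc = sift.detectAndCompute(candidate, None)
        global_score = match_distance_score(dq, dc)
        sub_scores = []
        for sq, sc in zip(grid_subimages(query), grid_subimages(candidate)):
            _, dsq = sift.detectAndCompute(sq, None)
            _, dsc = sift.detectAndCompute(sc, None)
            sub_scores.append(match_distance_score(dsq, dsc))
        return alpha * global_score + (1 - alpha) * min(sub_scores)

Ranking a database then amounts to sorting candidates by combined_score against the query, lowest first; how the paper actually weights the global and sub-image terms is not stated in the abstract.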

CITATIONS

  • ABSTRACT: The availability of various photo archives and photo-sharing systems has made similarity searching much more important, because photos are not usually conveniently tagged, so the photos (images) need to be searched by their content. Moreover, it is important not only to compare images with a query holistically but also to locate images that contain the query as a part. The query can be a picture of a person, a building, or an abstract object, and the task is to retrieve images of the query object from a different perspective, or images capturing a global scene that contains the query object. This retrieval is called sub-image searching. In this paper, we propose an algorithm for retrieving database images by their similarity to, and containment of, a query. Its novelty lies in the application of a sequence alignment algorithm, which is commonly used in text retrieval; this forms an orthogonal solution to currently used approaches based on inverted files. The proposed algorithm is evaluated on a real-life data set of photographs in which images of logos are searched. Compared to a state-of-the-art method, an improvement of 20% in mean average precision was obtained.
    Conference Paper · Jan 2011
  • ABSTRACT: The availability of various photo archives and photo-sharing systems has made similarity searching much more important, because photos are not usually conveniently tagged, so the photos (images) need to be searched by their content. Moreover, it is important not only to compare images with a query holistically but also to locate images that contain the query as a part. The query can be a picture of a person, a building, or an abstract object, and the task is to retrieve images of the query object from a different perspective, or images capturing a global scene that contains the query object. This retrieval is called sub-image searching. In this paper, the authors propose an algorithm, called SASISA, for retrieving database images by their similarity to, and containment of, a query. Its novelty lies in the application of a sequence alignment algorithm, which is commonly used in text retrieval; this forms an orthogonal solution to currently used approaches based on inverted files. The efficiency of SASISA is improved by applying vector quantization of local image feature descriptors. The proposed algorithm and its optimization are evaluated on a real-life data set of photographs in which images of logos are searched. Compared to a state-of-the-art method (Joly & Buisson, 2009), an improvement of 16% in mean average precision (mAP) is obtained. (A toy sketch of such a sequence alignment over quantized descriptors appears after this list.)
    Article · Jul 2012
  • ABSTRACT: The ubiquity of smartphones with high-quality cameras and fast network connections will spawn many new applications. One of these is visual object recognition, an emerging smartphone feature that could play roles in high-street shopping, price comparison, and similar uses. There are also potential roles for such technology in assistive applications, for example for people with visual impairment. We introduce the Small Hand-held Object Recognition Test (SHORT), a new dataset that aims to benchmark the performance of algorithms for recognising hand-held objects from either snapshots or videos acquired using hand-held or wearable cameras. We show that SHORT provides a set of images and ground truth that help assess the many factors that affect recognition performance. SHORT is designed to focus on the assistive-systems context, though it can provide useful information on more general aspects of recognition performance for hand-held objects. We describe the present state of the dataset, comprising a small set of high-quality training images and a large set of nearly 135,000 smartphone-captured test images of 30 grocery products. In this version, SHORT addresses another context not covered by traditional datasets, in which high-quality catalogue images are compared with user-captured images of variable quality; this makes the matching more challenging in SHORT than in other datasets. Images of similar quality are often not present in “database” and “query” datasets, a situation increasingly encountered in commercial applications. Finally, we compare the results of popular object recognition algorithms of different levels of complexity when tested against SHORT, and discuss the research challenges arising from the particularities of visual object recognition of objects held by users.
    Conference Paper · Mar 2014
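
The two SASISA abstracts above describe serialising vector-quantised local descriptors (visual words) and scoring sub-image containment with a text-style sequence alignment. As a rough illustration only, assuming nothing about the papers' actual serialisation order, scoring scheme, or quantiser, a Smith-Waterman-style local alignment over visual-word IDs could look like this:

    def local_alignment_score(query_words, image_words,
                              match=2, mismatch=-1, gap=-1):
        """Best local-alignment score between two sequences of visual-word
        IDs (e.g. vector-quantised SIFT descriptors). Scoring constants are
        illustrative, not taken from the SASISA papers."""
        m, n = len(query_words), len(image_words)
        # H[i][j] = best score of an alignment ending at positions i, j.
        H = [[0] * (n + 1) for _ in range(m + 1)]
        best = 0
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                s = match if query_words[i - 1] == image_words[j - 1] else mismatch
                H[i][j] = max(0,
                              H[i - 1][j - 1] + s,  # match / mismatch
                              H[i - 1][j] + gap,    # gap in image sequence
                              H[i][j - 1] + gap)    # gap in query sequence
                best = max(best, H[i][j])
        return best

    # Usage: keypoints are quantised to word IDs and serialised into
    # sequences (here assumed to follow some fixed spatial order); a high
    # score suggests the query appears as a part of the database image.
    query = [7, 3, 3, 9, 1]
    image = [5, 7, 3, 3, 9, 1, 2, 8]
    print(local_alignment_score(query, image))  # 10: all five query words align

Because the alignment tolerates gaps and mismatches, partial occlusion of the query object or spurious visual words in the database image lower the score gradually rather than breaking the match outright, which is one motivation for borrowing the technique from text retrieval.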