Thesis

A New Approach to Automatic Saliency Identification in Images Based on Irregularity of Regions


Abstract

This research introduces an image retrieval system that is, in several respects, inspired by the human vision system. The main problems with existing machine vision and image understanding systems are studied and identified in order to design a system that mirrors human image understanding. The principal contribution of the developed system is its use of human attention principles in identifying image contents. Human attention is represented by saliency extraction algorithms, which extract the salient regions, in other words the regions of interest. This work presents a new approach to saliency identification that relies on the irregularity of a region. Irregularity is clearly defined and measuring tools are developed; these measures are derived from the formality and variation of the region with respect to its surrounding regions. Both local and global saliency have been studied, and appropriate algorithms were developed based on the local and global irregularity defined in this work. The need for suitable automatic clustering techniques motivated a study of the available clustering techniques and the development of a technique suited to salient-point clustering. Based on the fact that humans usually look at the region surrounding the gaze point, an agglomerative clustering technique is developed utilising the principles of blob extraction and intersection. Automatic thresholding was needed at different stages of the system development; therefore, a fuzzy thresholding technique was developed. Evaluation methods for salient region extraction have been studied and analysed; subsequently, evaluation techniques were developed based on the extracted regions (or points) and compared with the ground-truth data. The proposed algorithms were tested against standard datasets and compared with existing state-of-the-art algorithms.
Both quantitative and qualitative benchmarking are presented in this thesis, together with a detailed discussion of the results. The benchmarking showed promising results for the different algorithms. The developed algorithms have been used to design an integrated saliency-based image retrieval system that uses the salient regions to describe the scene. The system auto-labels the objects in the image by identifying the salient objects and assigning labels based on the contents of a knowledge database. In addition, the system identifies the unimportant part of the image (the background) in order to give a full description of the scene.
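The irregularity principle at the core of the thesis can be illustrated with a minimal sketch. The measure below is an assumption for illustration only: it scores each pixel by how far its neighbourhood mean and variance deviate from the image-wide statistics, a simplification of the formality and variation measures described above; the function names and the window radius are illustrative, not from the thesis.

```python
import numpy as np

def box_mean(img, r):
    """Mean of each (2r+1)x(2r+1) window via an integral image (edge-padded)."""
    p = np.pad(img, r, mode="edge").astype(float)
    ii = p.cumsum(0).cumsum(1)
    ii = np.pad(ii, ((1, 0), (1, 0)))  # zero row/col so each window sum is a 4-term difference
    k = 2 * r + 1
    h, w = img.shape
    s = ii[k:k + h, k:k + w] - ii[:h, k:k + w] - ii[k:k + h, :w] + ii[:h, :w]
    return s / (k * k)

def irregularity_saliency(img, r=3):
    """Saliency as deviation of local statistics from the global ones (a simplification)."""
    local_mu = box_mean(img, r)
    local_var = box_mean(img * img, r) - local_mu ** 2
    # a region is "irregular" when its mean or variance differs from the image norm
    sal = np.abs(local_mu - img.mean()) + np.abs(local_var - local_var.mean())
    return (sal - sal.min()) / (np.ptp(sal) + 1e-12)
```

A bright patch on a flat background scores high under both terms, while the uniform background scores near zero.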

Article
Content-based image retrieval (CBIR) is a relatively new but widely adopted method for finding images in vast, unannotated image databases. As networks and multimedia technologies become more popular, users are no longer satisfied with traditional information retrieval techniques, so CBIR is becoming a source of exact and fast retrieval. In recent years, a variety of techniques have been developed to improve CBIR performance. Data clustering is an unsupervised method for extracting hidden patterns from huge data sets. With large data sets there is a possibility of high dimensionality, and achieving both accuracy and efficiency for high-dimensional data sets with enormous numbers of samples is a challenging arena. In this paper, clustering techniques are discussed and analysed, and we propose a method, HDK, that uses more than one clustering technique to improve CBIR performance. The method combines hierarchical and divide-and-conquer K-Means clustering with equivalence and compatibility relation concepts to improve the performance of K-Means on high-dimensional datasets. It also introduces features such as color, texture and shape for an accurate and effective retrieval system.
Article
Data of high dimensionality and complexity are now common, and for their computational visualization and analysis traditional representation approaches become insufficient and inefficient. Through the visual system, human perception plays an important role in the visualization domain, as it supports the associated cognitive processes. Thus, in developing computational tools for visualizing complex, high-dimensional data, it is fundamental to consider the behaviour of human visual perception. This paper presents important concepts of the human visual perception system that can be applied in implementing more efficient computational systems for data visualization and analysis.
Article
We address the issue of visual saliency from three perspectives. First, we consider saliency detection as a frequency domain analysis problem. Second, we achieve this by employing the concept of nonsaliency. Third, we simultaneously consider the detection of salient regions of different size. The paper proposes a new bottom-up paradigm for detecting visual saliency, characterized by a scale-space analysis of the amplitude spectrum of natural images. We show that the convolution of the image amplitude spectrum with a low-pass Gaussian kernel of an appropriate scale is equivalent to an image saliency detector. The saliency map is obtained by reconstructing the 2D signal using the original phase and the amplitude spectrum, filtered at a scale selected by minimizing saliency map entropy. A Hypercomplex Fourier Transform performs the analysis in the frequency domain. Using available databases, we demonstrate experimentally that the proposed model can predict human fixation data. We also introduce a new image database and use it to show that the saliency detector can highlight both small and large salient regions, as well as inhibit repeated distractors in cluttered images. In addition, we show that it is able to predict salient regions on which people focus their attention.
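A single-scale, grayscale simplification of this idea can be sketched as follows: smooth the FFT amplitude spectrum with a Gaussian, keep the original phase, and reconstruct. The kernel scale is fixed here rather than selected by entropy minimization, and the hypercomplex transform for color images is omitted, so this is an illustrative sketch rather than the published model.

```python
import numpy as np

def spectrum_saliency(img, sigma=3.0):
    """Saliency from a Gaussian-smoothed amplitude spectrum plus the original phase."""
    F = np.fft.fft2(img)
    amp, phase = np.abs(F), np.angle(F)
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    dy, dx = np.minimum(y, h - y), np.minimum(x, w - x)   # wrap-around distances
    g = np.exp(-(dy ** 2 + dx ** 2) / (2 * sigma ** 2))
    g /= g.sum()
    # circular convolution of the amplitude spectrum with the Gaussian kernel
    amp_s = np.fft.ifft2(np.fft.fft2(amp) * np.fft.fft2(g)).real
    sal = np.abs(np.fft.ifft2(amp_s * np.exp(1j * phase))) ** 2
    return (sal - sal.min()) / (np.ptp(sal) + 1e-12)
```

Sweeping `sigma` over several scales and keeping the map with minimum entropy would recover the scale-space selection step described in the abstract.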
Article
The bag-of-visual-words (BoW) model is effective for representing images and videos in many computer vision problems and achieves promising performance in image retrieval. Nevertheless, the level of retrieval efficiency in a large-scale database is not acceptable for practical usage. Considering that the relevant images for a given query are more likely to be distinctive than ambiguous in the database, this paper defines “database saliency” as a distinctiveness score calculated for every image to measure its overall “saliency” in the database. By taking advantage of database saliency, we propose a saliency-inspired fast image retrieval scheme, S-sim, which significantly improves efficiency while retaining state-of-the-art accuracy in image retrieval. There are two stages in S-sim. The bottom-up saliency mechanism computes the database saliency value of each image by hierarchically decomposing a posterior probability into local patches and visual words; the concurrent information of visual words is then propagated bottom-up to estimate the distinctiveness. The top-down saliency mechanism discriminatively expands the query via a very low-dimensional linear SVM trained on the top-ranked images after the initial search; images are then ranked by their distances to the decision boundary as well as by their database saliency values. We comprehensively evaluate S-sim on common retrieval benchmarks, e.g., the Oxford and Paris datasets. Thorough experiments suggest that, because of the offline database saliency computation and the online low-dimensional SVM, our approach significantly speeds up online retrieval and outperforms state-of-the-art BoW-based image retrieval schemes.
Conference Paper
Machine vision remains a challenging topic that continues to attract researchers to the field. Efforts have been made to design machine vision systems (MVS) inspired by the human vision system (HVS). Attention is one of the important properties of the HVS: it allows a human to focus on only part of a scene at a time, and regions with more abrupt features attract human attention more than others. This property improves the speed with which the HVS recognizes and identifies the contents of a scene. In this paper, we discuss human attention and its application in MVS. In addition, a new method for extracting regions of interest, and hence interesting objects, from images is presented. The new method uses neural networks as classifiers to distinguish important from unimportant regions.
Article
This paper is a survey of different clustering techniques for achieving image segmentation. To increase the efficiency of the searching process, only part of the database needs to be searched, and clustering techniques can be recommended for this. Clustering here means grouping similar images in the database, based on attributes of an image such as size, color and texture. The purpose of clustering is to obtain meaningful results, effective storage and fast retrieval in various areas.
Article
We review the neural mechanisms that support top-down control of behaviour and suggest that goal-directed behaviour uses two systems that work in concert. A basal ganglia-centred system quickly learns simple, fixed goal-directed behaviours while a prefrontal cortex-centred system gradually learns more complex (abstract or long-term) goal-directed behaviours. Interactions between these two systems allow top-down control mechanisms to learn how to direct behaviour towards a goal but also how to guide behaviour when faced with a novel situation.
Conference Paper
A literature survey is important for understanding and gaining broader knowledge of a specific area of a subject. This paper presents a survey of content-based image retrieval. Content-Based Image Retrieval (CBIR) is a technique that uses visual features of an image, such as color, shape and texture, to search a large image database for the images a user requires, in response to a query image. We consider CBIR with both labelled and unlabelled images in order to analyse efficient image retrieval processes such as D-EM, SVM and RF. To determine efficient imaging for CBIR, we performed a literature review based on the principles of CBIR with unlabelled images, and we also give some recommendations for improving CBIR systems using unlabelled images.
Article
Texture is one of the important features used in CBIR systems. The methods of characterizing texture fall into two major categories: Statistical and Structural. An experimental comparison of a number of different texture features for content-based image retrieval is presented in this paper. The primary goal is to determine which texture feature or combination of texture features is most efficient in representing the spatial distribution of images. In this paper, we analyze and evaluate both Statistical and Structural texture features. For the experiments, publicly available image databases are used. Analysis and comparison of individual texture features and combined texture features are presented.
Conference Paper
The semantic gap, the difference between visual features and semantic annotations, is an important problem for Content-Based Image Retrieval (CBIR) systems. In this study, a new CBIR system is proposed that uses visual attention, a part of the human visual system. In the proposed work, the regions of interest are extracted using the Itti-Koch visual attention model, and the attention values obtained from the saliency maps are used to define a new similarity matching method. Successful results are obtained compared with traditional region-based retrieval systems.
Conference Paper
In this paper, we propose an efficient saliency model using regional color and spatial information. The original image is first segmented into a set of regions using a superpixel segmentation algorithm. For each region, its color saliency is evaluated based on the color similarity measures with other regions, and its spatial saliency is evaluated based on its color distribution and spatial position. The final saliency map is generated by combining color saliency measures and spatial saliency measures of regions. Experimental results on a public dataset containing 1000 images demonstrate that our computationally efficient saliency model outperforms the other six state-of-the-art models on saliency detection performance.
Conference Paper
In computer vision applications it is necessary to extract the regions of interest in order to reduce the search space and improve image content identification. Human-oriented regions of interest can be extracted by collecting feedback from the user, who usually provides it by assigning different ranks to the identified regions in the image; this ranking is then used to adapt the identification process. Eye-tracking technology is now widely used in different applications, and one suggested application is to use the data collected from the eye-tracking device, representing the user's gaze points, to extract the regions of interest. In this paper we introduce a new agglomerative clustering algorithm that uses a blob extraction technique and statistical measures to cluster the gaze points obtained from the eye tracker. The algorithm is fully automatic, meaning it needs no human intervention to specify the stopping criterion. In the suggested algorithm the points are replaced with small regions (blobs), and these blobs are then grouped together to form a cloud from which the interesting regions are constructed.
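The blob-intersection idea can be sketched as follows: each gaze point is replaced by a disk, and points whose disks intersect are merged with a union-find structure, so no cluster count or distance threshold has to be tuned beyond the blob radius. The radius value and function names are illustrative assumptions; the paper's statistical measures are not reproduced here.

```python
import numpy as np

def cluster_gaze_points(points, radius=20.0):
    """Group gaze points whose surrounding blobs (disks of `radius`) intersect.
    Returns one integer label per point."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            # two disks of equal radius intersect when centres are < 2*radius apart
            if np.hypot(*(pts[i] - pts[j])) < 2 * radius:
                parent[find(i)] = find(j)

    roots = [find(i) for i in range(n)]
    _, labels = np.unique(roots, return_inverse=True)
    return labels
```

Because merging is transitive, a chain of overlapping blobs forms a single cloud, matching the agglomerative behaviour described in the abstract.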
Chapter
Colleges Worth Your Money: A Guide to What America's Top Schools Can Do for You is an invaluable guide for students making the crucial decision of where to attend college when our thinking about higher education is radically changing. At a time when costs are soaring and competition for admission is higher than ever, the college-bound need to know how prospective schools will benefit them both as students and after graduation. Colleges Worth Your Moneyprovides the most up-to-date, accurate, and comprehensive information for gauging the ROI of America’s top schools, including: In-depth profiles of 200 of the top colleges and universities across the U.S.; Over 75 key statistics about each school that cover unique admissions-related data points such as gender-specific acceptance rates, early decision acceptance rates, and five-year admissions trends at each college. The solid facts on career outcomes, including the school’s connections with recruiters, the rate of employment post-graduation, where students land internships, the companies most likely to hire students from a particular school, and much more. Data and commentary on each college’s merit and need-based aid awards, average student debt, and starting salary outcomes. Top Colleges for America’s Top Majors lists highlighting schools that have the best programs in 40+ disciplines. Lists of the “Top Feeder” undergraduate colleges into medical school, law school, tech, journalism, Wall Street, engineering, and more.
Article
In this survey we review the image processing literature on the various approaches and models investigators have used for texture. These include statistical approaches of autocorrelation functions, optical transforms, digital transforms, textural edgeness, structural elements, gray tone co-occurrence, run lengths, and autoregressive models. We discuss and generalize some structural approaches to texture based on more complex primitives than gray tone. We conclude with some structural-statistical generalizations which apply the statistical techniques to the structural primitives.
Article
Detection of visually salient image regions is useful for applications like object segmentation, adaptive compression, and object recognition. Recently, full-resolution salient maps that retain well-defined boundaries have attracted attention. In these maps, boundaries are preserved by retaining substantially more frequency content from the original image than older techniques. However, if the salient regions comprise more than half the pixels of the image, or if the background is complex, the background gets highlighted instead of the salient object. In this paper, we introduce a method for salient region detection that retains the advantages of such saliency maps while overcoming their shortcomings. Our method exploits features of color and luminance, is simple to implement and is computationally efficient. We compare our algorithm to six state-of-the-art salient region detection methods using publicly available ground truth. Our method outperforms the six algorithms by achieving both higher precision and better recall. We also show application of our saliency maps in an automatic salient object segmentation scheme using graph-cuts.
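A grayscale simplification of this kind of full-resolution saliency map can be sketched as the per-pixel distance between the global image mean and a slightly blurred copy of the image; the published method works on color and luminance in Lab space, so this single-channel version is an illustrative reduction.

```python
import numpy as np

def frequency_tuned_saliency(img, r=2):
    """|global mean - blurred image| per pixel (grayscale simplification)."""
    # small separable box blur to suppress noise and fine texture
    k = 2 * r + 1
    kern = np.ones(k) / k
    pad = np.pad(img.astype(float), r, mode="edge")
    blurred = np.apply_along_axis(lambda v: np.convolve(v, kern, mode="valid"), 0, pad)
    blurred = np.apply_along_axis(lambda v: np.convolve(v, kern, mode="valid"), 1, blurred)
    sal = np.abs(img.mean() - blurred)
    return (sal - sal.min()) / (np.ptp(sal) + 1e-12)
```

Because the map is computed at full resolution, object boundaries stay well defined, which is the property the abstract emphasises.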
Article
Content-based image retrieval is an important research area in image processing, with a vast domain of applications such as recognition systems (face, fingerprint, biometrics, etc.). This paper provides an extensive review of recent research work and methodologies applied in the field of CBIR. CBIR systems are reviewed in terms of their fundamental components: feature extraction methods in the frequency and spatial domains; similarity measures such as Euclidean distance, sum of absolute differences and MSE; classifiers such as neural network classifiers; and performance measures such as precision, recall, LIRS (Length of Initial Relevant String of images), LSRR (Length of String to Recover all Relevant images), FAR, FRR and FTC. The use of these parameters in various CBIR applications is discussed and compared; they play a crucial role in deciding the overall performance of any CBIR system. There is still much ongoing research in the field of content-based image retrieval aimed at faster and more accurate behaviour.
Book
This state-of-the-art, full-color handbook gives a comprehensive introduction to the principles and practice of the calculation, layout and understanding of optical systems and lens design. Written by reputed industrial experts in the field, the text introduces the reader to the basic properties of optical systems, aberration theory, classification and characterization of systems, advanced simulation models, measurement of system quality and manufacturing issues. Volume 1 gives a general introduction to the field of technical optics; although part of the series, it acts as a fully self-standing book. With more than 700 full-color graphs, it is an intuitive introduction for the beginner and a comprehensive reference for the professional.
Conference Paper
A new image retrieval system is proposed that combines the bag-of-words (BoW) model and Probabilistic Latent Semantic Analysis (PLSA). First, interest points on images are detected using the Hessian-Affine keypoint detector and Scale Invariant Feature Transform (SIFT) descriptors are computed. Graph-based visual saliency maps are then employed to detect and discard outliers among the image descriptors, so that SIFT features lying in non-salient regions can be deleted. All the remaining reliable feature descriptors are divided into a number of subsets, and partial vocabularies are extracted for each of them; the final vocabulary used in the BoW model is obtained by concatenating the partial vocabularies. The resulting BoW representations are weighted using the TF-IDF scheme. Finally, PLSA is employed to perform a probabilistic mixture decomposition of the weighted BoW representations. Query expansion is demonstrated to improve the retrieval quality. Overall, a mean average precision of 0.79 is reported when the saliency filtering was applied to the SIFT features and the BoW plus PLSA method was used.
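The TF-IDF weighting step mentioned above can be sketched for bag-of-visual-words histograms as follows. This is a generic textbook formulation; the paper's exact variant may differ, and the array layout is an assumption for illustration.

```python
import numpy as np

def tfidf(counts):
    """TF-IDF weighting of bag-of-visual-words histograms.
    counts: (n_images, n_words) array of visual-word occurrence counts."""
    counts = np.asarray(counts, dtype=float)
    tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1.0)
    df = (counts > 0).sum(axis=0)                 # number of images containing each word
    idf = np.log(len(counts) / np.maximum(df, 1.0))
    w = tf * idf
    norms = np.linalg.norm(w, axis=1, keepdims=True)
    return w / np.maximum(norms, 1e-12)           # L2-normalize for cosine similarity
```

Words that occur in every image receive an IDF of zero and thus carry no weight, which is what makes the representation discriminative.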
Article
Economists agree that the long-term growth of living standards depends on the capacity of an economy to sustain technological progress, whether by adopting technologies from abroad, through its own technological innovations, or, most likely, through a combination of adoption and innovation. The purpose of this chapter is to describe and analyze China's science and technology (S&T) capabilities and the economic, institutional, and policy context that together are shaping the range and growth of these capabilities. We conduct this analysis against the background of a large and fast-growing literature on the subject. China's national innovation system is making two transitions – from plan to market as it moves away from a centrally directed innovation system and also from low-income developing country toward Organisation for Economic Co-Operation and Development (OECD) industrialized country status as it intensifies its innovation effort and more effectively deploys the ensuing technological gains. Many of the impulses and policies of China's current S&T system are legacies of the nation's traditional economy going back to the nineteenth century and before. These include the recognition that access to Western S&T is critical to China's economic modernization and the consequent openness to foreign technology, advisors, and investment, particularly in special zones in the coastal areas.
Article
Content Based Image Retrieval (CBIR) is a very important research area in the field of image processing and comprises low-level feature extraction, such as color, texture and shape, and similarity measures for the comparison of images. Recently, the research focus in CBIR has been on reducing the semantic gap between low-level visual features and high-level image semantics. This paper provides a comprehensive survey of all these aspects. The survey covers approaches used for extracting low-level features, various distance measures for measuring the similarity of images, mechanisms for reducing the semantic gap, and invariant image retrieval. In addition, the various data sets used in CBIR and the performance measures are also addressed. Finally, future research directions are suggested.
Article
Many computational models of visual attention have been created from a wide variety of different approaches to predict where people look in images. Each model is usually introduced by demonstrating performances on new images, and it is hard to make immediate comparisons between models. To alleviate this problem, we propose a benchmark data set containing 300 natural images with eye tracking data from 39 observers to compare model performances. We calculate the performance of 10 models at predicting ground truth fixations using three different metrics. We provide a way for people to submit new models for evaluation online. We find that the Judd et al. and Graph-based visual saliency models perform best. In general, models with blurrier maps and models that include a center bias perform well. We add and optimize a blur and center bias for each model and show improvements. We compare performances to baseline models of chance, center and human performance. We show that human performance increases with the number of humans to a limit. We analyze the similarity of different models using multidimensional scaling and explore the relationship between model performance and fixation consistency. Finally, we offer observations about how to improve saliency models in the future.
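Fixation-prediction scores of the kind such benchmarks compute can be illustrated with the Normalized Scanpath Saliency, one widely used metric (the three metrics used in the paper are not named in this abstract, so NSS here is an illustrative stand-in): the mean z-scored saliency value at the fixated pixels.

```python
import numpy as np

def nss(saliency, fixations):
    """Normalized Scanpath Saliency: mean z-scored saliency at fixated pixels.
    fixations: iterable of (row, col) pixel coordinates."""
    s = (saliency - saliency.mean()) / (saliency.std() + 1e-12)
    rows, cols = zip(*fixations)
    return float(s[list(rows), list(cols)].mean())
```

A positive NSS means the model assigns above-average saliency to the locations people actually fixated; chance performance is near zero.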
Article
TEXT. Regression. Inference in Regression. Attributes as Explanatory Variables. Nonlinear Relationships. Regression and Time Series. Lagged Variables. Regression Miscellanea. More on Inference in Regression. Autoregressive Models. The Classification Problem. More on Classification. Models of Systems. CASES. Appendices. Selected References. Index.
Article
Thresholding is a necessary task in many image processing applications. In this paper we derive fuzzy rules for the π-function, which we use to fuzzify the original image; this is constructed to locate the intensities of the misclassified regions. Based on information theory, it maximizes the information between image foreground and background. The merit of using fuzzy sets is their ability to handle uncertainty, and their robustness. The technique optimizes the image threshold by effective selection of the Region Of Interest (ROI). In general, valley-seeking approaches are used to select a threshold when the histogram is bimodal; however, histograms are not always bimodal. The fuzzy region range of the π-function is chosen as one standard deviation about the arithmetic mean, because the fuzzy region spreads on both sides of the image mean and the non-fuzzy data lie outside this region. The limitation of the parent version is that it is semi-supervised: for low-contrast images human perception is required, and no appropriate unsupervised procedure exists in the literature to address this problem. The proposed method successfully segments images with bimodal and multi-modal histograms. The experimental results confirm the superiority of the proposed method over existing methods; it produces more accurate and reliable results than the parent algorithm. This claim has been verified in experimental trials using all categories of real-world images.
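The general shape of such a fuzzy thresholding scheme can be sketched as follows. This is an illustrative simplification, not the paper's exact rules: candidate thresholds are restricted to the fuzzy region of one standard deviation about the mean, and the threshold chosen is the one whose sigmoid fuzzification leaves the image least fuzzy (Kaufmann's linear index of fuzziness), which tends to land in a histogram valley.

```python
import numpy as np

def fuzzy_threshold(img, steps=64):
    """Pick the threshold in [mean-std, mean+std] that minimizes image fuzziness."""
    mu, sd = img.mean(), img.std()
    candidates = np.linspace(mu - sd, mu + sd, steps)   # the fuzzy region
    best_t, best_f = mu, np.inf
    for t in candidates:
        # sigmoid membership in the foreground, crossover 0.5 at t
        m = 1.0 / (1.0 + np.exp(-(img - t) / (0.25 * sd + 1e-12)))
        fuzziness = (2.0 * np.minimum(m, 1.0 - m)).mean()  # Kaufmann linear index
        if fuzziness < best_f:
            best_t, best_f = t, fuzziness
    return float(best_t)
```

On a bimodal image the fuzziness is smallest when few pixels sit near the crossover, so the search settles between the two modes without any human intervention.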
Article
Saliency, or salient region extraction from images, is still a challenging field, as it needs some understanding of the image and its nature. A technique that is suitable for some applications is not necessarily useful in others; thus, saliency identification is application-dependent. Based on a survey of existing methods of saliency detection, a new technique to extract the salient regions from an image is proposed that utilizes local features of the region surrounding each pixel. The level of saliency is decided based on the irregularity of the region compared with other regions. To make the process fully automatic, a new fuzzy-based thresholding technique has also been developed. In addition, a survey of existing saliency evaluation techniques has been carried out and new evaluation methods are proposed. The proposed saliency extraction technique has been compared with other algorithms reported in the literature, and the results are discussed in detail.
Conference Paper
Most general content-based image retrieval (CBIR) algorithms cannot meet the demand for fine-grained retrieval of flower images. Combining features of flower images, this paper proposes a flower image retrieval algorithm based on a saliency map. First, an improved Itti visual attention model is used to obtain the saliency map; the color and LBP texture features are then extracted from the saliency map, so that image segmentation is avoided. Finally, retrieval experiments were performed on the flower image data sets of the VGG group. Comparative results show that the proposed algorithm is more effective than two other algorithms: a color histogram combined with an LBP texture histogram based on the original image (CT), and color and LBP texture histograms based on the saliency map extracted by the Itti model (ICT).
Article
The volume and heterogeneity of digital image data are expanding rapidly. Traditional information retrieval techniques do not meet users' demands, so there is a need to develop an efficient system for content-based image retrieval, which is becoming a source of exact and fast retrieval. In this paper, techniques of content-based image retrieval are discussed, analysed and compared. Features such as the color correlogram, texture, shape, edge density and the JPEG compression domain, and clustering algorithms such as K-Means and C-Means, are compared for effective image retrieval.
Conference Paper
Several salient object detection approaches have been published which have been assessed using different evaluation scores and datasets resulting in discrepancy in model comparison. This calls for a methodological framework to compare existing models and evaluate their pros and cons. We analyze benchmark datasets and scoring techniques and, for the first time, provide a quantitative comparison of 35 state-of-the-art saliency detection models. We find that some models perform consistently better than the others. Saliency models that intend to predict eye fixations perform lower on segmentation datasets compared to salient object detection algorithms. Further, we propose combined models which show that integration of the few best models outperforms all models over other datasets. By analyzing the consistency among the best models and among humans for each scene, we identify the scenes where models or humans fail to detect the most salient object. We highlight the current issues and propose future research directions.
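Scores of the kind these benchmarks rely on are straightforward to compute. As an illustration, the sketch below computes precision, recall and the F-beta measure between a binary predicted mask and the ground truth, with beta² = 0.3 as is common in salient object detection evaluation; the function name is illustrative.

```python
import numpy as np

def f_measure(pred_mask, gt_mask, beta2=0.3):
    """Precision, recall and F-beta (beta^2 = 0.3) between binary masks."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    f = (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-12)
    return precision, recall, f
```

Setting beta² below 1 weights precision more heavily than recall, reflecting the convention that a compact, accurate salient mask matters more than exhaustive coverage.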
Conference Paper
In this paper we propose a new descriptor for content-based image retrieval that explores the locality of features. We propose to extend the bag-of-visual-words method by weighting the visual words according to their spatial locality, in terms of foreground and background, using fuzzy saliency models. We evaluated our method using databases containing images with different conditions of illumination and color, rigid and scale transformations, and changes of background. The analysis of the results demonstrates that our proposal presents significant improvements over competitive approaches.
Conference Paper
Automatically assigning one or more relevant keywords to an image has important significance, as it makes it easier for people to retrieve and understand large collections of image data. Much recent research has focused on this field. In this paper, we introduce a salient region detection and segmentation algorithm used for image retrieval and keyword auto-annotation. We investigate the properties of a cross-bin metric between two feature vectors, the Earth Mover's Distance (EMD), to enhance precision and recall performance. The EMD is based on a solution to the transportation problem from linear optimization and is more robust than histogram matching techniques. In this paper we focus only on applications involving color features, and we compare image auto-annotation and retrieval performance between the EMD and other histogram matching distances. The results indicate that our methods are more flexible and reliable.
Conference Paper
Image quality assessment is one application out of many that can be aided by the use of computational saliency models. Existing visual saliency models have not been extensively tested under a quality assessment context. Also, these models are typically geared towards predicting saliency in non-distorted images. Recent work has also focussed on mimicking the human visual system in order to predict fixation points from saliency maps. One such technique (GAFFE) that uses foveation has been found to perform well for non-distorted images. This work extends the foveation framework by integrating it with saliency maps from well known saliency models. The performance of the foveated saliency models is evaluated based on a comparison with human ground-truth eye-tracking data. For comparison, the performance of the original non-foveated saliency predictions is also presented. It is shown that the integration of saliency models with a foveation based fixation finding framework significantly improves the prediction performance of existing saliency models over different distortion types. It is also found that the information maximization based saliency maps perform the best consistently over different distortion types and levels under this foveation based framework.
Conference Paper
What makes an object salient? Most previous work asserts that distinctness is the dominating factor; the difference between the various algorithms lies in the way they compute distinctness. Some focus on the patterns, others on the colors, and several add high-level cues and priors. We propose a simple, yet powerful, algorithm that integrates these three factors. Our key contribution is a novel and fast approach to computing pattern distinctness: we rely on the inner statistics of the patches in the image to identify unique patterns. We provide an extensive evaluation and show that our approach outperforms all state-of-the-art methods on the five most commonly used datasets.
Conference Paper
A pulmonary nodule is the most common sign of lung cancer. The proposed system efficiently predicts lung tumor from Computed Tomography (CT) images through image processing techniques coupled with neural network classification as either benign or malignant. The lung CT image is denoised using non-linear total variation algorithm to remove random noise prevalent in CT images. Optimal thresholding is applied to the denoised image to segregate lung regions from surrounding anatomy. Lung nodules, approximately spherical regions of relatively high density found within the lung regions are segmented using region growing method. Textural and geometric features extracted from the lung nodules using gray level co-occurrence matrix (GLCM) is fed as input to a back propagation neural network that classifies lung tumor as cancerous or non-cancerous. The proposed system implemented on MATLAB takes less than 3 minutes of processing time and has yielded promising results that would supplement in the diagnosis of lung cancer.
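The GLCM texture features used in this pipeline can be illustrated with a minimal sketch: build a normalized gray level co-occurrence matrix for one pixel offset and derive a statistic such as contrast from it. The 8-level quantization and function names are illustrative choices, not the paper's settings.

```python
import numpy as np

def glcm(img, levels=8, dy=0, dx=1):
    """Normalized gray level co-occurrence matrix for one pixel offset (dy, dx)."""
    q = (img.astype(float) / (img.max() + 1e-12) * (levels - 1)).astype(int)
    h, w = q.shape
    a = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)   # count co-occurring level pairs
    return m / m.sum()

def glcm_contrast(m):
    """Contrast feature: expected squared level difference of co-occurring pairs."""
    i, j = np.indices(m.shape)
    return float((m * (i - j) ** 2).sum())
```

A flat region yields a purely diagonal matrix (zero contrast), while a checkerboard concentrates mass off the diagonal; feeding several such statistics to a classifier is the step the abstract describes.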
Article
A content-based image retrieval system is a fast-growing research area in which the visual content of a query image is used to search for images in large-scale image databases. The proposed system is an effective one in which both semantically and visually relevant features are used to retrieve related images. The challenge for a CBIR system is how to efficiently capture the features of the query image for retrieval. In a traditional content-based retrieval system, the visual content features of the whole query image are used for retrieval, but in the proposed system the object-wise features of the query image are utilized for effective retrieval. Moreover, an active Recently Retrieved Image (RRI) Library is used, which increases the accuracy of each retrieval. The RRI Library uses an index system that maintains the recently retrieved images; during the retrieval process the proposed system searches for pertinent images in both the database and the RRI Library, so retrieval precision gradually increases with each retrieval. The proposed CBIR method is evaluated by querying diverse images, and the retrieval efficacy is analysed by calculating precision-recall values for the retrieval results.
Conference Paper
Saliency algorithms in content-based image retrieval are employed to retrieve the most important regions of an image with the idea that these regions hold the essence of representative information. Such regions are then typically analysed and described for future retrieval/classification tasks rather than the entire image itself — thus minimising computational resources required. We show that we can select a small number of features for indexing using a visual saliency measure without reducing the performance of classifiers trained to find objects.