Fig. 10 Visual comparison of salient object detection results. Top three rows, middle three rows, and bottom three rows are from ASD [3, 24], ECSSD [40, 48], and ImgSal [22, 23], respectively. a Input images, and b ground truth masks. Saliency maps produced by using c the proposed CNS model, d RPC [25], e BMS [52], f FES [42], g GC [8], h HFT [23], i PCA [28], j RC [7], and k TLLT [13]
Source publication
In this paper, we investigate the contribution of color names to the task of salient object detection. An input image is first converted to the color name space, which consists of 11 probabilistic channels. By exploiting a surroundedness cue, we obtain a saliency map through a linear combination of a set of sequential attention maps. To overc...
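The abstract's first step is the conversion of an RGB image into an 11-channel color name representation. Below is a minimal sketch of that conversion, assuming a precomputed lookup table `w2c` of shape (32*32*32, 11) that maps quantized RGB triplets to color-name probabilities (such as the table distributed with Van de Weijer et al.'s color naming work); the table, its indexing convention, and the channel order are assumptions, not the authors' released code, and the subsequent attention-map combination is not shown.

```python
# Sketch: map an RGB image to 11 probabilistic color-name channels.
import numpy as np

COLOR_NAMES = ["black", "blue", "brown", "grey", "green", "orange",
               "pink", "purple", "red", "white", "yellow"]

def rgb_to_color_name_channels(image_rgb, w2c):
    """image_rgb: HxWx3 uint8 array; w2c: (32768, 11) probability table.
    Returns an HxWx11 array of per-pixel color-name probabilities."""
    img = image_rgb.astype(np.int64)
    # Quantize each 8-bit channel to 32 bins and build a single table index,
    # following the common (R + 32*G + 32*32*B) indexing convention.
    idx = (img[..., 0] // 8) + 32 * (img[..., 1] // 8) + 32 * 32 * (img[..., 2] // 8)
    return w2c[idx]  # HxWx11 probabilities

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dummy_w2c = rng.random((32 * 32 * 32, 11))
    dummy_w2c /= dummy_w2c.sum(axis=1, keepdims=True)  # rows sum to 1
    image = rng.integers(0, 256, size=(48, 64, 3), dtype=np.uint8)
    print(rgb_to_color_name_channels(image, dummy_w2c).shape)  # (48, 64, 11)
```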
Context in source publication
Context 1
... means that the segmentation results would be more stable (that is, virtually unchanged) over a wide range of thresholds. Figure 10 shows a visual comparison of the saliency maps generated by different models. For these example images, our model generates more accurate saliency maps, which are very close to the corresponding ground truth masks. ...
Similar publications
This paper assesses the performance of three convolutional neural networks for object detection at sea using Long Wavelength Infrared (LWIR) images in the 8-14 μm range. Images are captured from ferries and 20k images are annotated; three state-of-the-art deep neural networks, RetinaNet, YOLO and Faster R-CNN, are then fine-tuned. Targeting on vessels...
Object detection is one of the core tasks in computer vision. Object detection algorithms often have difficulty detecting objects with diverse scales, especially those with smaller scales. To cope with this issue, Lin et al. proposed feature pyramid networks (FPNs), which aim for a feature pyramid with higher semantic content at every scale level....
Generative networks are fundamentally different in their aim and methods compared to CNNs for classification, segmentation, or object detection. They were not initially meant to be an image analysis tool, but to produce natural-looking images. The adversarial training paradigm has been proposed to stabilize generative methods, and has proven...
Citations
... Although the method has improved in time efficiency and accuracy, the saliency detection results are poor for underwater images with weak edges and uneven gray levels. Lou et al. [22] fused features from different color spaces to extract the salient features of the target. This method can effectively suppress background noise, but it is easily disturbed by background features with similar colors. ...
... In this paper, 12 saliency methods are compared: SR [11], AC [2], SIM [27], MSS [1], Sun [35], HC [9], RPC [21], LC [34], FT [3], RC [9], FDC [19], and CNS [22]. The results of different saliency algorithms after maximum entropy segmentation are shown in Fig. 5. ...
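The excerpt above binarizes saliency maps with maximum entropy segmentation. The sketch below is a generic re-implementation of maximum-entropy (Kapur) thresholding for a saliency map in [0, 1], not the cited authors' code; the 256-bin histogram and the input range are assumptions.

```python
# Sketch: maximum-entropy (Kapur) thresholding of a saliency map.
import numpy as np

def max_entropy_threshold(saliency):
    """saliency: 2-D float array in [0, 1]. Returns a binary mask."""
    gray = np.clip(saliency * 255, 0, 255).astype(np.uint8)
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    cdf = np.cumsum(p)
    best_t, best_score = 0, -np.inf
    for t in range(1, 255):
        w0, w1 = cdf[t], 1.0 - cdf[t]
        if w0 <= 0 or w1 <= 0:
            continue
        p0 = p[: t + 1] / w0            # background distribution
        p1 = p[t + 1:] / w1             # foreground distribution
        # Entropy of each class (0 log 0 is treated as 0).
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h0 + h1 > best_score:
            best_score, best_t = h0 + h1, t
    return gray > best_t
```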
Underwater image detection remains a challenge due to problems such as noise, illumination inhomogeneity and low contrast. To solve these problems, this paper proposes a new level set segmentation model integrating saliency region detection (SDLSE). First, an underwater low-illumination saliency detection model is constructed, and the target region is roughly segmented with the help of this saliency model to obtain pixel-level a priori shape information. Second, the a priori information is used as a shape constraint in the level set energy function to refine the segmentation. The algorithm is statistically analyzed on experimental data and a fish dataset, and it is verified that the SDLSE model outperforms other level set methods in terms of segmentation accuracy and time efficiency.
... Lou et al. [28] proposed a color-name-based method that relies on the center prior, assuming the salient object is located in the middle of the image. To handle input images of different sizes, each image is first rescaled to a fixed pixel width so that the structural-element parameters are optimal at a single scale. ...
... To verify the robustness and effectiveness of the proposed framework, we compared the proposed method with seven other state-of-the-art methods, including Robust Background Detection (RBD) [10], Manifold Ranking (MR) [16], Saliency Filters (SF) [15], Deformed Graph Label (DGL) [33], Geodesic Saliency (GS) [14], CNS [28], and High-Dimensional Color Transform (HDCT) [24]. ...
The detection of salient regions has attracted increasing attention in machine vision. In this study, a novel and effective framework for salient region detection is proposed to address the low detection accuracy of traditional methods. First, the image is divided into three levels. Second, at each level, three different feature methods are used to generate feature saliency maps. Next, a novel integration mechanism, termed the competition mechanism, is applied to the coarse saliency maps at the same level: the two coarse maps with the highest similarity are selected and fused to ensure an effective salient region map. Then, after rescaling the fused maps from the different levels, the two maps with the most significant difference are selected and fused to obtain the final refined saliency map. Finally, experiments on three benchmark datasets demonstrate that the proposed algorithm is superior to other state-of-the-art methods.
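The "competition mechanism" above keeps the two coarse maps that agree the most and fuses them. The sketch below illustrates that idea only; the similarity measure (Pearson correlation) and the fusion rule (pixel-wise mean followed by normalization) are assumptions for illustration, not the authors' exact choices.

```python
# Sketch: fuse the two most similar saliency maps from a set of coarse maps.
import numpy as np
from itertools import combinations

def fuse_most_similar(maps):
    """maps: list of 2-D arrays of equal shape. Returns a fused saliency map."""
    best_pair, best_sim = None, -np.inf
    for i, j in combinations(range(len(maps)), 2):
        sim = np.corrcoef(maps[i].ravel(), maps[j].ravel())[0, 1]
        if sim > best_sim:
            best_sim, best_pair = sim, (i, j)
    fused = 0.5 * (maps[best_pair[0]] + maps[best_pair[1]])
    # Normalize to [0, 1] so downstream thresholding behaves consistently.
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-12)
```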
... Bottom-up salient object detection methods are inspired by the human visual system and can be categorized into the eye fixation prediction (EFP) approach [61, 64, 65] and the salient object detection (SOD) approach [4, 6, 66-70]. The two approaches are based on the definition of saliency as "where people look" or "which objects stand out" in an image [71]. ...
... Global color cues based on statistics and color contrast were recently utilized to overcome the inherent limitation of exploiting the surroundedness cue alone [70]. A major drawback of this method is its failure to detect salient objects connected to the image borders. ...
... The histogram of color names was utilized to measure color differences when computing the weighted attention saliency maps [70]. The saliency detection method described in [66] applied a graph-based segmentation algorithm to construct uniform regions that preserve object boundaries more efficiently. ...
Salient object detection is an important preprocessing stage of many practical image applications in computer vision. Saliency detection is generally a complex process that imitates the human visual system in the processing of color images. It is complex because color images have many inherent properties that can hamper performance. Due to these diverse properties, a method that is appropriate for one category of images may not be suitable for others. The choice of image abstraction is a decisive preprocessing step in saliency computation, and region-based image abstraction has become popular because of its computational efficiency and robustness. However, the performance of existing region-based salient object detection methods depends heavily on the selection of an optimal region granularity. An incorrect choice of region granularity can cause under- or over-segmentation of color images, which leads to non-uniform highlighting of salient objects. In this study, color histogram clustering was utilized to automatically determine suitable homogeneous regions in an image. A region saliency score was computed as a function of color contrast, contrast ratio, spatial feature, and center prior. Morphological operations were finally performed to eliminate undesirable artifacts that may remain after the saliency detection stage. Thus, we introduce a novel, simple, robust, and computationally efficient color histogram clustering method that combines color contrast, contrast ratio, spatial feature, and center prior for detecting salient objects in color images. Experimental validation with different categories of images selected from eight benchmark corpora indicates that the proposed method outperforms 30 bottom-up non-deep-learning and seven top-down deep-learning salient object detection methods on standard performance metrics.
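One of the cues named above is the center prior. The sketch below shows a common way to realize it, a Gaussian falloff from the image center; the bandwidth `sigma_ratio` and the way this prior is combined with the other cues (color contrast, contrast ratio, spatial feature) are assumptions, not details taken from the paper.

```python
# Sketch: a Gaussian center-prior weight map.
import numpy as np

def center_prior(height, width, sigma_ratio=0.35):
    """Returns an HxW weight map that peaks at the image center."""
    ys, xs = np.mgrid[0:height, 0:width]
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    sigma2 = (sigma_ratio * min(height, width)) ** 2
    d2 = (ys - cy) ** 2 + (xs - cx) ** 2
    return np.exp(-d2 / (2.0 * sigma2))
```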
... regions that have higher contrast than neighbors. In computer vision, the main tasks of saliency detection include eye fixation prediction [9, 14, 15, 19, 25, 40] and salient region detection [3, 5, 6, 21, 36-38, 42]. Many computer vision applications, such as object recognition [29], image and video compression [12], image segmentation [27], target detection [22], visual tracking [4], and video summarization [17], can benefit from saliency detection. ...
... where W and H are the width and height of S in pixels, respectively. Besides plotting precision-recall and F-measure curves, we follow [21] and report three Fβ statistics for quantitative evaluation. For segmentation with fixed thresholding, we report the average Fβ and the maximum Fβ values. ...
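The fixed-threshold evaluation described above sweeps a threshold over the saliency map and compares against the ground truth mask. A minimal sketch follows; the choice of 256 threshold levels and beta² = 0.3 reflects the common convention in the salient object detection literature rather than details quoted from this excerpt.

```python
# Sketch: average and maximum F_beta over all fixed thresholds.
import numpy as np

def f_beta_statistics(saliency, gt, beta2=0.3, eps=1e-12):
    """saliency: 2-D float array in [0, 1]; gt: 2-D boolean ground truth."""
    sal = np.clip(saliency * 255, 0, 255).astype(np.uint8)
    gt = gt.astype(bool)
    f_values = []
    for t in range(256):
        pred = sal >= t
        tp = np.logical_and(pred, gt).sum()
        precision = tp / (pred.sum() + eps)
        recall = tp / (gt.sum() + eps)
        f = (1 + beta2) * precision * recall / (beta2 * precision + recall + eps)
        f_values.append(f)
    f_values = np.array(f_values)
    return f_values.mean(), f_values.max()
```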
In this paper, we present a new salient region detection method that exploits surroundedness and superpixel cues. Its main highlights are: 1) an input image is quantized to 256 colors using minimum variance quantization; 2) saliency maps are computed based on the figure-ground segregation of the quantized image; 3) the mean saliency value of each superpixel is employed to further refine the saliency maps. This highlights salient objects robustly and suppresses backgrounds evenly. Experimental results show that the proposed method produces more accurate saliency maps and performs favorably against twenty-one saliency models on three evaluation metrics over two public datasets.
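Step 3 above smooths the saliency map by superpixels. The sketch below replaces each pixel's saliency with the mean saliency of its superpixel, which evens out object interiors and suppresses isolated background responses; SLIC is used here only as a stand-in superpixel method and is not necessarily the segmentation used in the paper.

```python
# Sketch: superpixel-mean refinement of a saliency map.
import numpy as np
from skimage.segmentation import slic

def refine_with_superpixels(image_rgb, saliency, n_segments=200):
    """image_rgb: HxWx3 uint8; saliency: HxW float in [0, 1]."""
    labels = slic(image_rgb, n_segments=n_segments, compactness=10)
    refined = np.zeros_like(saliency)
    for lab in np.unique(labels):
        mask = labels == lab
        refined[mask] = saliency[mask].mean()  # per-superpixel mean saliency
    return refined
```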
... Different from local center-surround contrast, global contrast aims to capture uniqueness over the entire scene. In [17], the authors compute saliency maps by exploiting color names [18] and color histograms [19]. Inspired by this model, we also integrate color names into our framework to detect single-image saliency. ...
... The essence of BMS is a Gestalt-principle-based figure-ground segregation [20]. To overcome its limitation of only exploiting the surroundedness cue, Lou et al. [17] extend the BMS model to a Color Name Space (CNS) and invoke two global color cues to complement the topological structure information of an input image. In CNS, the color name space is composed of eleven probabilistic channels, which are obtained by using the PLSA-bg color naming model [18]. ...
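The surroundedness cue referred to above activates regions of a Boolean map that are fully enclosed, i.e., not connected to the image border. The sketch below illustrates that activation for a single threshold of one channel; BMS/CNS repeat this over many thresholds and channels and combine the resulting attention maps, which is not shown, and this is an illustrative re-implementation rather than the authors' code.

```python
# Sketch: activate surrounded (border-disconnected) regions of a Boolean map.
import numpy as np
from scipy import ndimage

def activate_surrounded_regions(channel, threshold):
    """channel: 2-D float array; returns a binary activation map."""
    boolean_map = channel > threshold
    labels, _ = ndimage.label(boolean_map)
    # Labels of connected components touching any image border.
    border_labels = np.unique(np.concatenate([
        labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]]))
    # Keep only foreground components that do NOT touch the border.
    activation = np.isin(labels, border_labels, invert=True) & boolean_map
    return activation.astype(np.float64)
```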
... First, three image layers are constructed for each input image. At each layer, we combine two individual saliency maps obtained by CNS [17] and RBD [26] separately. Then the three combination maps are fused into one single-image saliency map. ...
In this paper, a bottom-up and data-driven model is introduced to detect co-salient objects from an image pair. Inspired by the biologically plausible across-scale architecture, we propose a multi-layer fusion algorithm to extract conspicuous parts from an input image. At each layer, two existing saliency models are first combined to obtain an initial saliency map, which simultaneously encodes the color-name-based surroundedness cue and the background-measure-based boundary connectivity. Then a global color cue with respect to color names is invoked to refine and fuse the single-layer saliency results. Finally, we exploit a color-name-based distance metric to measure the color consistency between a pair of saliency maps and remove non-co-salient regions. The proposed model can generate both saliency and co-saliency maps. Experimental results show that our model performs favorably against 14 saliency models and 6 co-saliency models on the Image Pair dataset.
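The final step above compares the two images' salient regions in terms of color names. A minimal sketch of one way to do this is shown below: build a saliency-weighted histogram over the 11 color-name channels for each image and compare the histograms with a chi-square distance. The exact metric and weighting used by the authors may differ.

```python
# Sketch: color-name consistency between two saliency maps.
import numpy as np

def color_name_histogram(cn_channels, saliency):
    """cn_channels: HxWx11 color-name probabilities; saliency: HxW in [0, 1]."""
    hist = (cn_channels * saliency[..., None]).sum(axis=(0, 1))  # length-11
    return hist / (hist.sum() + 1e-12)

def chi_square_distance(h1, h2, eps=1e-12):
    """Smaller values indicate more consistent color-name content."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```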
Biomedical image segmentation is a critical component of computer-aided diagnosis systems. However, many non-automatic segmentation methods are designed to segment target objects in a single-task-driven manner, ignoring the potential contribution of related tasks such as the salient object detection (SOD) task and the image segmentation task. In this paper, we propose a novel dual-task framework for white blood cell (WBC) and skin lesion (SL) saliency detection and segmentation in biomedical images, called Saliency-CCE. Saliency-CCE consists of a hair-removal preprocessing step for skin lesion images, a novel colour contextual extractor (CCE) module for the SOD task, and an improved adaptive threshold (AT) paradigm for the image segmentation task. In the SOD task, the CCE module extracts hand-crafted features through a novel colour channel volume (CCV) block and a novel colour activation mapping (CAM) block. We first exploit the CCV block to generate a target object's region of interest (ROI). After that, we employ the CAM block to yield a refined salient map, used as the final salient map, from the extracted ROI. In the segmentation task, we propose a novel adaptive threshold (AT) strategy to automatically segment the WBC and SL from the final salient map. We evaluate Saliency-CCE on the ISIC-2016, ISIC-2017, and SCISC datasets, on which it outperforms representative state-of-the-art SOD and biomedical image segmentation approaches. Our code is available at https://github.com/zxg3017/Saliency-CCE.
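The paper above proposes its own improved adaptive-threshold strategy, whose details are not reproduced here. As a point of reference, the sketch below shows the common baseline rule used throughout the saliency literature, binarizing at twice the mean saliency; the factor 2 is that baseline convention, not a parameter taken from this paper.

```python
# Sketch: baseline adaptive thresholding of a saliency map.
import numpy as np

def adaptive_segmentation(saliency):
    """saliency: 2-D float array in [0, 1]; returns a binary mask."""
    threshold = 2.0 * saliency.mean()  # common adaptive-threshold heuristic
    return saliency >= threshold
```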