ABSTRACT: This paper aims to solve the automated feature selection problem in brain-computer interfaces (BCI). To automate the feature selection process, we propose a novel EEG feature selection method based on a decision tree (DT). During electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) is used, and the decision-tree-based selection process searches the feature space and automatically selects optimal features. Considering that EEG signals are non-linear, a support vector machine (SVM) is chosen as the classifier. To test the validity of the proposed method, we applied it to BCI Competition II dataset Ia, and the experiment showed encouraging results.
Article · Sep 2015 · Bio-medical Materials and Engineering
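The PCA feature-extraction step in the abstract above can be sketched minimally in NumPy: project multi-channel samples onto the principal components that explain a chosen fraction of the variance. This is an illustrative sketch only; the decision-tree search and SVM classifier from the paper are omitted, and the data and names are invented for the example.

```python
import numpy as np

def pca_features(X, var_ratio=0.95):
    """Project samples onto the principal components that explain at
    least `var_ratio` of the total variance.
    X: (n_samples, n_channels) matrix of EEG feature vectors."""
    Xc = X - X.mean(axis=0)                      # center the data
    cov = np.cov(Xc, rowvar=False)               # channel covariance
    vals, vecs = np.linalg.eigh(cov)             # ascending eigenvalues
    order = np.argsort(vals)[::-1]               # sort descending
    vals, vecs = vals[order], vecs[:, order]
    # smallest k whose components reach the target variance ratio
    k = np.searchsorted(np.cumsum(vals) / vals.sum(), var_ratio) + 1
    return Xc @ vecs[:, :k]                      # reduced features

rng = np.random.default_rng(0)
# synthetic "EEG" with redundancy: 10 channels driven by 3 latent sources
latent = rng.normal(size=(200, 3))
X = latent @ rng.normal(size=(3, 10)) + 0.01 * rng.normal(size=(200, 10))
Z = pca_features(X)
print(Z.shape)   # far fewer columns than the original 10
```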
ABSTRACT: This paper presents an approach to interactively cutting out the same target object from a pair of stereo images. With this approach, a user labels parts of the object and background in either of the images with strokes, and the approach generates a segmentation result immediately. If the result is not satisfactory, it can be improved by interactively drawing more strokes, or by using an alternative interaction mode, called adding corresponding points, which is first presented in this paper. The proposed segmentation approach provides fast feedback after each interaction. The fast computation is performed in the framework of graph cut. First, the labeled parts are used to learn foreground and background color models. Next, an energy function is built by formulating the similarities between unlabeled pixels and the foreground/background color models, the color differences between neighboring pixels, and stereo correspondences obtained by SIFT feature matching. Finally, graph cut is used to find the optimum of the energy function and obtain a segmentation result. Unlike state-of-the-art methods, our segmentation approach formulates sparse correspondences rather than dense matches as stereo constraints in the energy function. Experimental results demonstrate that our method is faster in computation while generating results comparable to those of state-of-the-art methods.
Article · Aug 2015 · Multimedia Tools and Applications
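The color-model and neighbor terms of a graph-cut energy like the one described above can be sketched as follows. This is a simplified, hedged illustration: a per-pixel unary cost under a diagonal-Gaussian color model learned from strokes, and a contrast-sensitive pairwise weight between neighboring pixels. The paper's exact models and its SIFT-based stereo term are not reproduced; all numbers are toy values.

```python
import numpy as np

def unary_cost(pixel, mean, var):
    """Negative log-likelihood of a pixel under a diagonal Gaussian
    color model learned from the user strokes (lower = better fit)."""
    return 0.5 * np.sum((pixel - mean) ** 2 / var + np.log(2 * np.pi * var))

def pairwise_weight(p, q, sigma=10.0):
    """Smoothness term: neighboring pixels with similar colors get a
    high cut cost, discouraging a boundary between them."""
    return np.exp(-np.sum((p - q) ** 2) / (2 * sigma ** 2))

fg_mean, fg_var = np.array([200., 50., 50.]), np.array([100., 100., 100.])
bg_mean, bg_var = np.array([30., 30., 30.]), np.array([100., 100., 100.])
px = np.array([190., 60., 55.])                 # reddish pixel
# the reddish pixel fits the foreground model better
assert unary_cost(px, fg_mean, fg_var) < unary_cost(px, bg_mean, bg_var)
print(pairwise_weight(px, np.array([188., 61., 54.])))  # close to 1
```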
ABSTRACT: Accurately modeling and predicting the visual attention behavior of human viewers can help a video analysis algorithm find interesting regions by reducing the search effort of tasks such as object detection and recognition. In recent years, a great number and variety of visual attention models for predicting the direction of gaze on images and videos have been proposed. When a human views video, the motion of objects in the video and of the camera greatly affects the distribution of visual fixations. Here we develop models that lead to motion features extracted from videos and used in a new video saliency detection method called spatial-temporal weighted dissimilarity (STWD). For efficiency, frames are partitioned into blocks on which the saliency calculations are made. Two spatial features, termed spatial dissimilarity and preference difference, are defined on each block and used to characterize its spatial conspicuity. The motion features extracted from each block are simple differences of motion vectors between adjacent frames. Finally, the spatial and motion features are used to generate a saliency map for each frame. Experiments on three public video datasets containing 185 video clips and corresponding eye traces show that the proposed saliency detection method is highly competitive with, and often delivers better performance than, state-of-the-art methods.
Article · Aug 2015 · Signal Processing Image Communication
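The block-wise motion feature described above (a simple difference of motion vectors between adjacent frames) can be sketched in a few lines of NumPy. The array shapes and values here are illustrative assumptions, not the paper's data.

```python
import numpy as np

def block_motion_feature(mv_prev, mv_curr):
    """Motion feature per block: magnitude of the difference between
    the block's motion vectors in adjacent frames.
    mv_*: (rows, cols, 2) arrays of per-block motion vectors."""
    return np.linalg.norm(mv_curr - mv_prev, axis=2)

rows, cols = 4, 6
mv_prev = np.zeros((rows, cols, 2))
mv_curr = np.zeros((rows, cols, 2))
mv_curr[1, 2] = (3.0, 4.0)          # one block suddenly starts moving
feat = block_motion_feature(mv_prev, mv_curr)
print(feat[1, 2])                   # 5.0: the moving block stands out
```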
ABSTRACT: This paper develops a novel method combining transition detection with sample purification to filter noise in raw EEG signal data, which helps to improve the precision of EEG-based motor imagery classification. Note that EEG samples belonging to the same class are time sequences across multiple electrodes, and these signals are contaminated to varying degrees by noise and artifacts, as well as by attention lapses of the subjects during data acquisition. To overcome this problem, the transitions of the EEG signals, i.e., points where the Euclidean distance between adjacent samples exceeds a given threshold, are first extracted. Next, sample purification is performed to filter between-class noise based on the statistics of the EEG signal. Finally, the purified EEG signals are fed to the classifiers for BCI classification. Experimental results show that the proposed method is effective on the BCI Competition III data (Data Set V), beating the winner.
Article · Aug 2015 · Journal of Medical Imaging and Health Informatics
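The transition-extraction step described above (flag positions where the Euclidean distance between adjacent samples exceeds a threshold) can be sketched directly in NumPy. The signal and threshold below are toy values chosen for illustration.

```python
import numpy as np

def find_transitions(samples, threshold):
    """Indices i where the Euclidean distance between sample i and
    sample i+1 exceeds `threshold` (candidate transition points).
    samples: (n_samples, n_electrodes)."""
    dists = np.linalg.norm(np.diff(samples, axis=0), axis=1)
    return np.flatnonzero(dists > threshold)

# toy 3-electrode signal with one abrupt jump between samples 4 and 5
sig = np.vstack([np.zeros((5, 3)), np.ones((5, 3)) * 4.0])
print(find_transitions(sig, threshold=1.0))   # [4]: the jump location
```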
ABSTRACT: This paper presents an approach to classifying electroencephalogram (EEG) signals for brain-computer interfaces (BCI). To eliminate redundancy in high-dimensional EEG signals and reduce the coupling among different classes of EEG signals, we use principal component analysis and linear discriminant analysis to extract features that represent the raw signals. Next, we introduce the voting-based extreme learning machine to classify the features. Experiments performed on real-world data from the 2003 BCI competition indicate that our classification method outperforms state-of-the-art methods in speed and accuracy.
Article · Sep 2014 · Cognitive Computation
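A voting-based extreme learning machine in the spirit described above can be sketched as follows: each ELM has a random hidden layer with output weights solved by least squares, and several independently initialized ELMs vote on the label. This is an illustrative toy on synthetic two-class data, not the authors' implementation or their EEG pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_train(X, y, n_hidden=30):
    """Single extreme learning machine: random hidden layer, output
    weights solved in closed form by the pseudoinverse."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    T = np.eye(2)[y]                       # one-hot targets, 2 classes
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

def voting_elm_predict(X, models):
    """Majority vote over independently initialized ELMs."""
    votes = np.stack([elm_predict(X, m) for m in models])
    return (votes.mean(axis=0) > 0.5).astype(int)

# toy two-class data: two well-separated Gaussian blobs
X = np.vstack([rng.normal(-2, 1, (50, 4)), rng.normal(2, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
models = [elm_train(X, y) for _ in range(7)]
acc = np.mean(voting_elm_predict(X, models) == y)
print(acc)   # near 1.0 on this separable toy set
```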
ABSTRACT: Target searching, i.e., quickly locating target objects in images or videos, has attracted much attention in computer vision. A comprehensive understanding of the factors influencing human visual search is essential to designing target searching algorithms for computer vision systems. In this paper, we propose a combined model that generates scan paths for computer vision systems to follow when searching for targets in images. The model explores and integrates three factors influencing human visual search: top-down target information, spatial context, and bottom-up visual saliency. The effectiveness of the combined model is evaluated by comparing the generated scan paths with human fixation sequences for locating targets in the same images. The evaluation strategy is also used to learn the optimal weighting coefficients of the factors through linear search. Meanwhile, the performance of each individual factor and of their combinations is examined. Extensive experiments show that top-down target information is the most important factor influencing the accuracy of target searching, while the effect of bottom-up visual saliency is limited. Any combination of the three factors performs better than each single factor. The scan paths obtained by the proposed model are the most similar to human fixation sequences, and in this sense optimal.
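The combination of the three factors described above can be sketched as a weighted sum of maps followed by a greedy fixation sequence with inhibition of return. The map values, weights, and the inhibition radius below are illustrative assumptions, not the paper's learned coefficients.

```python
import numpy as np

def scan_path(target_map, context_map, saliency_map, w, n_fix=3, r=1):
    """Greedy scan path over a weighted combination of three maps,
    with inhibition of return (visited neighborhood suppressed).
    w: weights (w_target, w_context, w_saliency)."""
    combined = (w[0] * target_map + w[1] * context_map
                + w[2] * saliency_map).astype(float)
    path = []
    for _ in range(n_fix):
        i, j = np.unravel_index(np.argmax(combined), combined.shape)
        path.append((int(i), int(j)))
        # suppress a (2r+1)x(2r+1) window around the fixation
        combined[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1] = -np.inf
    return path

t = np.zeros((5, 5)); t[1, 1] = 1.0      # top-down target evidence
c = np.zeros((5, 5)); c[3, 3] = 0.5      # spatial-context prior
s = np.zeros((5, 5)); s[4, 0] = 0.3      # bottom-up saliency
print(scan_path(t, c, s, w=(0.6, 0.3, 0.1)))   # [(1, 1), (3, 3), (4, 0)]
```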
ABSTRACT: Graph cut based interactive image segmentation has attracted much attention in recent years. Given an image, traditional methods generally require users to indicate parts of the foreground and background, e.g., by drawing strokes. These methods then formulate energy functions, which are generally composed of color and gradient constraints. Considering that many objects to be cut out are compact, this paper presents a method that incorporates a simple but effective direct connectivity constraint. The constraint is defined geometrically based on the user's input strokes: the centers of the foreground strokes are treated as foreground representing points, and pixels that are not directly connected to the representing points, i.e., blocked from them by background strokes, are considered to belong to the background. Results show that with the same amount of user interaction, the proposed segmentation method obtains better results than state-of-the-art ones.
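The connectivity constraint described above can be sketched as a reachability test: pixels not 4-connected to a foreground representing point without crossing a background stroke are forced to the background. The grid, point, and stroke below are toy values for illustration; the paper's geometric definition may differ in detail.

```python
from collections import deque

def blocked_pixels(shape, fg_point, bg_strokes):
    """Return pixels that cannot reach the foreground representing
    point via 4-connectivity without crossing a background stroke;
    such pixels are forced to background."""
    h, w = shape
    blocked = set(bg_strokes)
    seen = {fg_point}
    queue = deque([fg_point])
    while queue:                          # BFS flood fill from fg point
        i, j = queue.popleft()
        for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
            if 0 <= ni < h and 0 <= nj < w and (ni, nj) not in seen \
                    and (ni, nj) not in blocked:
                seen.add((ni, nj))
                queue.append((ni, nj))
    return [(i, j) for i in range(h) for j in range(w)
            if (i, j) not in seen and (i, j) not in blocked]

# a vertical background stroke splits a 3x5 grid; the right side is
# unreachable from the foreground representing point at (1, 0)
bg = [(0, 2), (1, 2), (2, 2)]
print(blocked_pixels((3, 5), (1, 0), bg))   # all pixels in columns 3-4
```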
ABSTRACT: Graph cut based interactive segmentation is useful for extracting objects from images. Color and gradient constraints are two terms appearing in most energy functions of related methods. To balance the two constraints, state-of-the-art methods adopt a fixed, pre-specified weight. However, different images, and even different parts of a single image, have different demands on the proportion of the two constraints. This paper proposes a graph cut based segmentation method that is capable of intelligently balancing the two constraints on the fly. In particular, it analyzes the demand of each pixel for the color and gradient constraints and automatically assigns a weight at that pixel to balance the two. Results show that the proposed method obtains better results than traditional ones.
ABSTRACT: In this paper, the problem of stability analysis for generalized recurrent neural networks (RNNs) with time-varying delays is considered. Neither differentiability nor monotonicity of the activation functions is assumed, nor is differentiability of the time-varying delays. By employing a new Lyapunov-Krasovskii functional, a linear matrix inequality (LMI) approach is developed to establish sufficient conditions for the RNNs to be globally asymptotically stable. The proposed stability results are less conservative than some recently reported ones in the literature. Finally, an example is given to verify the effectiveness of the present criterion.
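For readers unfamiliar with the technique, a Lyapunov-Krasovskii functional of the standard form used in such LMI-based delay-dependent analyses looks as follows. This is an illustrative sketch of the generic construction, not necessarily the exact functional built in the paper.

```latex
V(x_t) = x^{T}(t) P x(t)
       + \int_{t-\tau(t)}^{t} x^{T}(s)\, Q\, x(s)\, ds
       + \int_{-\bar{\tau}}^{0} \int_{t+\theta}^{t}
         \dot{x}^{T}(s)\, R\, \dot{x}(s)\, ds\, d\theta,
\qquad P, Q, R \succ 0,
```

where $\tau(t) \le \bar{\tau}$ is the time-varying delay. Requiring $\dot{V}(x_t) < 0$ along all trajectories, and bounding the cross terms, yields sufficient stability conditions expressible as LMIs in $P$, $Q$, $R$.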
ABSTRACT: This paper qualitatively investigates an autonomous system of locally coupled neural oscillators. By applying an approximation method, we give a set of parameter values for which an asymptotically stable limit cycle exists, and we establish sufficient conditions on the coupling parameters that guarantee asymptotic global synchronization under the same external input. A gradational classifier is introduced to detect synchronization, and the network model based on the analytical results is applied to image segmentation. The performance is comparable to that of other segmentation methods.
Article · Oct 2012 · Neural Computing and Applications
ABSTRACT: Sparse coding theory is an effective method for finding a compact representation of multidimensional data. In this paper, its application to texture image analysis by means of Independent Component Analysis (ICA) is discussed. First, a bank of basis vectors is trained from a set of training images. Optimal texture features are then selected from the original features, which are extracted by convolving the test image with those basis vectors. The probabilities of the selected features are modeled by a Gaussian Mixture Model (GMM), and the final segmentation result is obtained by applying the Expectation Maximization (EM) algorithm for clustering. Finally, a short discussion of the effects of different parameters (window size, feature dimensions, etc.) is given. Furthermore, by combining the optimal texture features selected by ICA with the color features of natural images, the proposed method is applied to color image segmentation. The experimental results demonstrate that the proposed segmentation method based on sparse coding theory achieves promising performance.
Article · Mar 2012 · Journal of Computational and Theoretical Nanoscience
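The GMM/EM clustering step described above can be sketched on one-dimensional features: fit a two-component Gaussian mixture by EM and assign each point to the component with the higher posterior responsibility. This toy uses synthetic scalar features rather than the paper's ICA texture features, and fixed initialization for simplicity.

```python
import numpy as np

def gmm_em_1d(x, n_iter=50):
    """Two-component 1-D Gaussian mixture fitted by EM; returns a hard
    cluster assignment (True = component 1) for each point."""
    mu = np.array([x.min(), x.max()])       # spread the initial means
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibilities of each component
        dens = pi / np.sqrt(2 * np.pi * var) * \
            np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update the mixture parameters
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return r[:, 1] > 0.5

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 0.5, 100), rng.normal(5, 0.5, 100)])
labels = gmm_em_1d(x)
print(labels[:100].sum(), labels[100:].sum())   # near 0 and near 100
```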
ABSTRACT: In this paper, a natural image compression method based on independent component analysis (ICA) and visual saliency detection is proposed. The method first learns basis functions from data using ICA to transform the image, and then sets the percentage of zero coefficients among all transform coefficients. The resulting coefficients are sparser, which further improves the compression ratio. The method's performance is compared with the discrete cosine transform (DCT): evaluation using both the usual PSNR and the Structural Similarity Index (SSIM) shows that the proposed method is more robust than DCT. Finally, we propose a visual saliency detection method to automatically detect the important region of the image, which is left uncompressed or only lightly compressed while the other regions are highly compressed. Experiments show that this method effectively preserves the quality of the important region.
Article · Mar 2012 · Journal of Computational and Theoretical Nanoscience
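The sparsification step described above (zero a chosen percentage of the transform coefficients, keeping only the largest magnitudes) can be sketched as follows. The coefficients here are random toy values standing in for the paper's ICA transform outputs.

```python
import numpy as np

def sparsify(coeffs, zero_ratio=0.9):
    """Keep only the largest-magnitude transform coefficients, zeroing
    the rest so that at least `zero_ratio` of them are exactly zero."""
    flat = np.abs(coeffs).ravel()
    k = int(np.ceil(flat.size * zero_ratio))   # number to zero out
    thresh = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return np.where(np.abs(coeffs) > thresh, coeffs, 0.0)

rng = np.random.default_rng(3)
c = rng.laplace(size=(8, 8))        # toy stand-in for ICA coefficients
sc = sparsify(c, zero_ratio=0.9)
print(np.mean(sc == 0))             # at least 0.9 of entries are zero
```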
ABSTRACT: This paper presents a simple and effective method to compute pixel saliency at full resolution in an image. First, the proposed method creates an image representation of four color channels through a modified version of the computation of Itti et al. Then the most informative channel is automatically identified from the four derived color channels. Finally, the pixel saliency is computed through a simple combination of a contrast feature and a spatial attention function on the individual channel. The proposed method is computationally very simple, yet it achieves very good performance in comparison experiments with six other saliency detection methods. On a challenging database of 1,000 images, it outperforms the six other methods in both identifying salient pixels and segmenting salient regions.
ABSTRACT: We propose an image conspicuity index that combines three factors: spatial dissimilarity, spatial distance and central bias. The dissimilarity between image patches is evaluated in a reduced-dimensional principal component space and is inversely weighted by the spatial separation between patches. An additional weighting mechanism reflects the bias of human fixations towards the image center. The method is tested on three public image datasets and a video clip to evaluate its performance. The experimental results indicate highly competitive performance despite the simple definition of the proposed index. The conspicuity maps generated are more consistent with human fixations than prior state-of-the-art models when tested on color image datasets, as demonstrated using both receiver operating characteristic (ROC) analysis and the Kullback-Leibler distance metric. The method should prove useful for diverse image processing tasks such as quality assessment, segmentation, search, and compression. The high performance and relative simplicity of the conspicuity index compared with other, much more complex models suggest that it may find wide usage.
Article · Dec 2011 · IEEE Signal Processing Letters
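The three-factor conspicuity index described above can be sketched as follows: patch dissimilarity inversely weighted by spatial distance, then multiplied by a Gaussian central-bias weight. The patch features, positions, and the Gaussian width are illustrative assumptions; the PCA reduction step is assumed to have already produced the feature vectors.

```python
import numpy as np

def conspicuity(patches, positions, img_center, sigma=2.0):
    """Conspicuity of each patch: dissimilarity to all other patches,
    inversely weighted by spatial distance, times a Gaussian
    central-bias weight. `patches` are (already reduced) feature
    vectors; `positions` are patch centers."""
    n = len(patches)
    sal = np.zeros(n)
    for i in range(n):
        diff = np.linalg.norm(patches - patches[i], axis=1)
        dist = np.linalg.norm(positions - positions[i], axis=1)
        sal[i] = np.sum(diff / (1.0 + dist))   # distance-weighted sum
    center_d = np.linalg.norm(positions - img_center, axis=1)
    return sal * np.exp(-center_d ** 2 / (2 * sigma ** 2))

# 3 patches: two similar dark ones, one distinct patch near the center
patches = np.array([[0.1, 0.1], [0.12, 0.1], [0.9, 0.95]])
positions = np.array([[0.0, 0.0], [0.0, 4.0], [2.0, 2.0]])
sal = conspicuity(patches, positions, img_center=np.array([2.0, 2.0]))
print(np.argmax(sal))   # 2: the distinct, centrally located patch
```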
ABSTRACT: Social networks can help to judge the trustworthiness of outsiders and are widely used in information filtering. Since the overall topological properties are hard to obtain, the use of social-network-based methods has been seriously limited. In this work, emails' social connections are reconstructed based on a reference social network and a virtual social network, and two new approaches to spam filtering, reference social network based decision (RSND) and virtual social network based decision (VSND), are discussed. These approaches can be used as complementary tools for existing anti-spam systems to more efficiently block organized spammers. Experimental results on real email corpora indicate that our approaches offer a better chance of reliable discrimination between spammers and legitimate senders.
Article · Nov 2011 · International Journal of Digital Content Technology and its Applications
ABSTRACT: In this paper, a new visual saliency detection method based on spatially weighted dissimilarity is proposed. We measure saliency by integrating three elements: the dissimilarities between image patches, evaluated in a reduced-dimensional space; the spatial distance between image patches; and the central bias. The dissimilarities are inversely weighted by the corresponding spatial distance, and a weighting mechanism reflecting the bias of human fixations towards the image center is employed. Principal component analysis (PCA) is the dimension reduction method used in our system; we extract the principal components (PCs) by sampling patches from the current image. Our method was compared with four saliency detection approaches on three image datasets. Experimental results show that our method outperforms current state-of-the-art methods at predicting human fixations.