Lijuan Duan

Beijing University of Technology, Beijing, China


Publications (48) · 15.55 Total Impact

  • ABSTRACT: This paper presents an approach to classifying electroencephalogram (EEG) signals for brain-computer interfaces (BCI). To eliminate redundancy in high-dimensional EEG signals and reduce the coupling among different classes of EEG signals, we use principal component analysis and linear discriminant analysis to extract features that represent the raw signals. Next, we introduce the voting-based extreme learning machine to classify the features. Experiments performed on real-world data from the 2003 BCI competition indicate that our classification method outperforms state-of-the-art methods in speed and accuracy.
    Cognitive Computation 09/2014; 6(3):477-483. DOI:10.1007/s12559-014-9264-1 · 1.10 Impact Factor
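The classification stage described above can be sketched as follows. This is an illustrative reconstruction of a voting-based ELM ensemble on toy two-class features (standing in for the PCA/LDA-reduced EEG signals), not the authors' code; the ensemble size and hidden-layer width are assumptions:

```python
# Voting-based extreme learning machine (V-ELM) sketch: train K ELMs with
# independent random hidden layers, then take a majority vote.
import numpy as np

rng = np.random.default_rng(0)

def train_elm(X, y, n_hidden, rng):
    """One ELM: random hidden layer + closed-form least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    T = np.eye(2)[y]                              # one-hot targets (2 classes)
    beta = np.linalg.pinv(H) @ T                  # output weights via pseudoinverse
    return W, b, beta

def predict_elm(model, X):
    W, b, beta = model
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy two-class data standing in for PCA/LDA-reduced EEG features.
X = np.vstack([rng.normal(-1, 0.3, (50, 4)), rng.normal(1, 0.3, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

K = 7  # ensemble size (a hypothetical choice; odd, so votes cannot tie)
models = [train_elm(X, y, n_hidden=20, rng=rng) for _ in range(K)]
votes = np.stack([predict_elm(m, X) for m in models])   # (K, n_samples)
pred = (votes.mean(axis=0) > 0.5).astype(int)           # majority vote
accuracy = (pred == y).mean()
```

The vote over independently initialized ELMs damps the variance each machine's random hidden layer would otherwise introduce.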
  • ABSTRACT: Graph cut based interactive image segmentation has attracted much attention in recent years. Given an image, traditional methods generally require users to indicate parts of the foreground and background, e.g. by drawing strokes. These methods then formulate energy functions, generally composed of color and gradient constraints. Considering that many objects to be cut out are compact, this paper presents a method that incorporates a simple but effective direct connectivity constraint. The constraint is defined geometrically based on the user's input strokes: the centers of the foreground strokes are treated as foreground representing points, and pixels that are not directly connected to these representing points, i.e. blocked from them by the background strokes, are considered to belong to the background. Results show that with the same amount of user interaction, the proposed segmentation method obtains better results than state-of-the-art ones.
    Proceedings of the 12th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry; 11/2013
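The direct connectivity constraint lends itself to a simple reachability check. The toy sketch below is not the paper's implementation; the grid size, strokes and 4-connectivity are illustrative assumptions. Pixels that background strokes block from the foreground representing point are forced to the background label:

```python
# Direct-connectivity constraint sketch: BFS from the foreground stroke center,
# treating background-stroke pixels as walls. Unreachable pixels are forced to
# the background label before any graph cut is run.
from collections import deque

def reachable_from(seed, blocked, h, w):
    """4-connected BFS from `seed`, never entering `blocked` pixels."""
    seen = {seed}
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w \
                    and (nr, nc) not in blocked and (nr, nc) not in seen:
                seen.add((nr, nc))
                q.append((nr, nc))
    return seen

h, w = 5, 5
background_stroke = {(r, 2) for r in range(5)}   # vertical stroke splits the grid
fg_center = (2, 0)                               # foreground representing point
connected = reachable_from(fg_center, background_stroke, h, w)

# Everything blocked from the representing point is labeled background.
forced_background = [(r, c) for r in range(h) for c in range(w)
                     if (r, c) not in connected]
```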
  • Wei Ma, Jing Liu, Lijuan Duan, Xinyong Zhang
    ABSTRACT: Graph cut based interactive segmentation is useful for extracting objects from images. Color and gradient constraints are two terms appearing in most energy functions of related methods. To balance the two constraints, state-of-the-art methods adopt a pre-given fixed weight. However, different images, and even different parts of a single image, place different demands on the proportion of the two constraints. This paper proposes a graph cut based segmentation method capable of intelligently balancing the two constraints on the fly. In particular, it analyzes each pixel's demand for color and gradient constraints and automatically assigns a per-pixel weight to balance the two. Results show that the proposed method obtains better results than traditional ones.
    2013 2nd IAPR Asian Conference on Pattern Recognition (ACPR); 11/2013
  • ABSTRACT: In this paper, the problem of stability analysis of generalized recurrent neural networks with time-varying delays is considered. Neither differentiability nor monotonicity of the activation functions, nor differentiability of the time-varying delays, is assumed. By employing a new Lyapunov-Krasovskii functional, a linear matrix inequality (LMI) approach is developed to establish sufficient conditions for the RNNs to be globally asymptotically stable. The proposed stability results are less conservative than some recently reported ones in the literature. Finally, an example is given to verify the effectiveness of the present criterion.
    07/2013; DOI:10.2991/iccnce.2013.143
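For context, stability results of this kind are typically stated for a delayed recurrent network of the generic form below; the paper's exact notation may differ, so this model is an assumption:

```latex
\dot{x}(t) = -C\,x(t) + A\,g\bigl(x(t)\bigr) + B\,g\bigl(x(t-\tau(t))\bigr) + u,
\qquad 0 \le \tau(t) \le \bar{\tau},
```

where $x(t) \in \mathbb{R}^n$ is the neuron state, $C$ is a positive diagonal matrix of self-feedback rates, $A$ and $B$ are the connection and delayed-connection weight matrices, $g(\cdot)$ is the activation function, and $\tau(t)$ is the bounded time-varying delay. The LMI conditions then follow from requiring the derivative of the Lyapunov-Krasovskii functional along trajectories to be negative definite.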
  • ABSTRACT: This paper qualitatively investigates a locally coupled neural oscillator autonomous system. By applying an approximation method, we give a set of parameter values for which an asymptotically stable limit cycle exists, and we establish sufficient conditions on the coupling parameters that guarantee asymptotic global synchronization under the same external input. A gradational classifier is introduced to detect synchronization, and the network model based on the analytical results is applied to image segmentation. The performance is comparable to the results from other segmentation methods.
    Neural Computing and Applications 10/2012; 21(7). DOI:10.1007/s00521-012-0829-1 · 1.76 Impact Factor
  • ABSTRACT: Sparse coding theory is an effective method for finding a compact representation of multidimensional data. In this paper, its application to texture image analysis by means of Independent Component Analysis (ICA) is discussed. First, a bank of basis vectors is trained from a set of training images, and optimal texture features are selected from those extracted by convolving the test image with the basis vectors. The probabilities of the selected features are then modeled by a Gaussian Mixture Model (GMM), and the final segmentation is obtained by applying the Expectation Maximization (EM) algorithm for clustering. A short discussion of the effects of different parameters (window size, feature dimensions, etc.) is given. Furthermore, by combining the optimal texture features collected by ICA with the color features of natural images, the proposed method is applied to color image segmentation. The experimental results demonstrate that the proposed segmentation method based on sparse coding theory achieves promising performance.
    Journal of Computational and Theoretical Nanoscience 03/2012; 6(1):441-444. DOI:10.1166/asl.2012.2325 · 1.25 Impact Factor
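The clustering stage (a GMM fitted by EM) can be sketched compactly. The example below runs EM for a two-component one-dimensional mixture on synthetic features; it is an illustrative stand-in for the paper's pipeline, with the ICA feature extraction assumed already done:

```python
# EM for a two-component 1-D Gaussian mixture, as used to cluster
# texture features into segments.
import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 0.5, 200), rng.normal(4.0, 0.5, 200)])

# Initialize means at the data extremes; unit variances, equal weights.
mu = np.array([x.min(), x.max()])
var = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: responsibility of each component for each sample
    dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
           / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, variances
    n = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / n
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    pi = n / len(x)

labels = r.argmax(axis=1)  # cluster (segment) label per feature
```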
  • ABSTRACT: In this paper, a natural image compression method based on independent component analysis (ICA) and visual saliency detection is proposed. The method first learns basis functions from data using ICA and transforms the image with them; it then sets the percentage of zero coefficients among the transform coefficients, making the coefficients sparser and further improving the compression ratio. The method is compared with the discrete cosine transform (DCT); evaluation with both PSNR and the Structural Similarity Index (SSIM) shows that the proposed method is more robust than DCT. Finally, a visual saliency detection method is proposed to automatically detect the important region of the image, which is left uncompressed or lightly compressed while the other regions are highly compressed. Experiments show that the method effectively preserves the quality of the important region.
    Journal of Computational and Theoretical Nanoscience 03/2012; 6(1). DOI:10.1166/asl.2012.2279 · 1.25 Impact Factor
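The coefficient-zeroing step can be illustrated as below. This sketch is not the paper's code: a random orthonormal basis stands in for the learned ICA basis functions, and the kept-coefficient count is an arbitrary choice:

```python
# Zero a fixed percentage of the smallest transform coefficients, reconstruct,
# and measure the reconstruction error (toy PSNR with peak = max |signal|).
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=64)                          # one image block, flattened
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))   # stand-in orthonormal basis

c = Q.T @ x                                      # analysis transform
keep = 16                                        # keep 25% of coefficients
idx = np.argsort(np.abs(c))[:-keep]              # indices of the 48 smallest
c_sparse = c.copy()
c_sparse[idx] = 0.0                              # zero them -> sparser code

x_hat = Q @ c_sparse                             # reconstruction
mse = np.mean((x - x_hat) ** 2)
peak = np.abs(x).max()
psnr = 10 * np.log10(peak ** 2 / mse)
```

Because the basis is orthonormal, the reconstruction error equals exactly the energy of the discarded coefficients, which is why keeping the largest-magnitude ones is optimal for this transform.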
  • ABSTRACT: This paper presents a simple and effective method to compute full-resolution pixel saliency in an image. First, the proposed method creates an image representation of four color channels through a modified computation based on Itti et al. [5]. Then the most informative channel is automatically identified from the four derived color channels. Finally, pixel saliency is computed through a simple combination of a contrast feature and a spatial attention function on the individual channel. The proposed method is computationally very simple, yet it achieves very good performance in comparison experiments with six other saliency detection methods. On a challenging database of 1,000 images, it outperforms the six other methods in both identifying salient pixels and segmenting salient regions.
    Pattern Recognition (ICPR), 2012 21st International Conference on; 01/2012
  • ABSTRACT: We propose an image conspicuity index that combines three factors: spatial dissimilarity, spatial distance and central bias. The dissimilarity between image patches is evaluated in a reduced dimensional principal component space and is inversely weighted by the spatial separations between patches. An additional weighting mechanism is deployed that reflects the bias of human fixations towards the image center. The method is tested on three public image datasets and a video clip to evaluate its performance. The experimental results indicate highly competitive performance despite the simple definition of the proposed index. The conspicuity maps generated are more consistent with human fixations than prior state-of-the-art models when tested on color image datasets. This is demonstrated using both receiver operator characteristics (ROC) analysis and the Kullback-Leibler distance metric. The method should prove useful for such diverse image processing tasks as quality assessment, segmentation, search, or compression. The high performance and relative simplicity of the conspicuity index relative to other much more complex models suggests that it may find wide usage.
    IEEE Signal Processing Letters 12/2011; DOI:10.1109/LSP.2011.2167752 · 1.64 Impact Factor
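A toy version of the three-factor index (patch dissimilarity in a reduced PCA space, inversely weighted by spatial distance and modulated by a central bias) might look like the following; the patch layout, dimensions and bias width are illustrative assumptions, not the authors' settings:

```python
# Conspicuity-index sketch on an 8x8 grid of synthetic patches; one patch is
# made clearly dissimilar and should receive the highest conspicuity.
import numpy as np

rng = np.random.default_rng(3)
n_patches, dim, k = 64, 25, 5
patches = rng.normal(size=(n_patches, dim))
patches[10] += 4.0                               # one clearly dissimilar patch

# PCA by SVD: project patches onto the top-k principal components
X = patches - patches.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
Z = X @ Vt[:k].T                                 # reduced-dimension patch codes

# Patch centers on an 8x8 grid, plus a Gaussian central-bias weight
pos = np.array([(r, c) for r in range(8) for c in range(8)], float)
center = pos.mean(axis=0)
bias = np.exp(-np.sum((pos - center) ** 2, axis=1) / (2 * 8.0))

diff = np.linalg.norm(Z[:, None] - Z[None, :], axis=2)      # dissimilarity
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)  # spatial distance
sal = (diff / (1.0 + dist)).sum(axis=1) * bias              # conspicuity map
```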
  • International Journal of Digital Content Technology and its Applications 11/2011; 5(11):343-350. DOI:10.4156/jdcta.vol5.issue11.43
  • ABSTRACT: In this paper, a new visual saliency detection method is proposed based on spatially weighted dissimilarity. We measure saliency by integrating three elements: the dissimilarities between image patches, evaluated in a reduced-dimensional space; the spatial distance between image patches; and a central bias. The dissimilarities are inversely weighted by the corresponding spatial distances, and a weighting mechanism indicating the bias of human fixations toward the image center is employed. Principal component analysis (PCA) is the dimension-reduction method used in our system; we extract the principal components (PCs) by sampling patches from the current image. Our method is compared with four saliency detection approaches on three image datasets. Experimental results show that our method outperforms current state-of-the-art methods in predicting human fixations.
    Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on; 07/2011
  • ABSTRACT: In this work, we propose an emotional-face-evoked EEG signal recognition framework, within which optimal statistical features are extracted from the original signals according to time and space, i.e., time span and electrodes. First, the EEG signals are collected using noise-suppression methods, and principal component analysis (PCA) is used to reduce the dimensionality and information redundancy of the data. Then the optimal statistical features are selected and combined from different electrodes based on classification performance. We also discuss the contribution of each time span of the EEG signals within the same electrodes. Finally, experiments using Fisher, Bayes and SVM classifiers show that our methods offer a better chance of reliable classification of the EEG signal. Moreover, the conclusion is supported by physiological evidence: a) the selected electrodes concentrate mainly in the temporal cortex of the right hemisphere, which is related to vision according to previous psychological research; b) the selected time spans show that awareness of the face picture proceeds from posterior to anterior brain regions.
    Neural Information Processing - 18th International Conference, ICONIP 2011, Shanghai, China, November 13-17, 2011, Proceedings, Part I; 01/2011
  • ABSTRACT: This paper presents an improved neural architecture for gaze movement control in target searching. Compared with the four-layer neural structure proposed in [14], a new movement coding neuron layer is inserted between the third and fourth layers of the previous structure for finer gaze motion estimation and control. The disadvantage of the previous structure is that all strongly responding neurons in the third layer were involved in gaze motion synthesis by transmitting weighted responses to the movement control neurons in the fourth layer; however, these neurons may produce different groups of movement estimates. It is therefore necessary to discriminate and group their estimates in terms of the grouped connection weights from them to the movement control neurons in the fourth layer, and adding a new neuron layer between the third and fourth layers is how we solve this problem. Comparative experiments on target locating show that the new architecture yields a significant improvement.
    01/2011; DOI:10.1109/IJCNN.2011.6033521
  • ABSTRACT: In this paper, we present a saliency guided image retargeting method. Our bio-inspired saliency measure integrates three factors: dissimilarity, spatial distance and central bias, all supported by research on the human visual system (HVS). To produce perceptually satisfactory retargeted images, we use the saliency map as the importance map in the retargeting method, assuming that saliency maps indicate informative regions and filter out the background. Experimental results demonstrate that our method distorts dominant objects less than a previous retargeting method guided by the gray image, and further comparison among various saliency detection methods shows that retargeting with our saliency measure preserves more of the foreground.
    Neural Information Processing - 18th International Conference, ICONIP 2011, Shanghai, China, November 13-17, 2011, Proceedings, Part I; 01/2011
  • ABSTRACT: Visual context plays an important role in humans' top-down gaze movement control for target searching, and exploring the mental development mechanism in terms of incremental visual context encoding by population cells is an interesting issue. This paper presents a biologically inspired computational model in which visual contextual cues are used for top-down eye-motion control when searching for targets in images. We propose a population cell coding mechanism for visual context encoding and decoding, implemented in a neural network system. A developmental learning mechanism is simulated in this system by dynamically generating new coding neurons to incrementally encode visual context during training. The encoded context is decoded with population neurons in a top-down mode, which allows the model to direct gaze motion to the centers of targets. The model was developed to pursue low encoding quantity and high target-locating accuracy, and its performance has been evaluated by a set of experiments searching for different facial objects in a human face image set. Theoretical analysis and experimental results show that the proposed visual context encoding algorithm without weight updating is fast, efficient and stable, and that population-cell coding generally performs better than single-cell coding and k-nearest-neighbor (k-NN)-based coding.
    IEEE Transactions on Autonomous Mental Development 10/2010; DOI:10.1109/TAMD.2010.2053365 · 1.35 Impact Factor
  • ABSTRACT: In this paper, we propose a vehicle detection method based on AdaBoost. We focus on the detection of front-view cars and buses with occlusions on highways. Samples with different occlusion situations are selected for the training set. Using basic and rotated Haar-like features extracted from these samples, we train an AdaBoost-based cascade vehicle detector. Performance tests on static images and short videos show that (1) our approach detects cars more effectively than buses, and (2) detection on video runs in real time at 30 frames per second.
    Information Engineering and Computer Science, 2009. ICIECS 2009. International Conference on; 01/2010
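The Haar-like features underlying such a cascade are computed in constant time from an integral image. The sketch below is illustrative, not the paper's detector; it evaluates a basic two-rectangle feature that responds to horizontal edges:

```python
# Integral image + O(1) rectangle sums, the basis of Haar-like features.
import numpy as np

def integral_image(img):
    """ii[r, c] = sum of img[0:r+1, 0:c+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r, c, h, w):
    """Sum of img[r:r+h, c:c+w] from the integral image in O(1)."""
    total = ii[r + h - 1, c + w - 1]
    if r > 0:
        total -= ii[r - 1, c + w - 1]
    if c > 0:
        total -= ii[r + h - 1, c - 1]
    if r > 0 and c > 0:
        total += ii[r - 1, c - 1]
    return total

def haar_two_rect_vertical(ii, r, c, h, w):
    """Upper-half sum minus lower-half sum: a horizontal-edge response."""
    return rect_sum(ii, r, c, h // 2, w) \
         - rect_sum(ii, r + h // 2, c, h - h // 2, w)

img = np.zeros((8, 8))
img[4:, :] = 1.0                   # bright lower half: a strong horizontal edge
ii = integral_image(img)
response = haar_two_rect_vertical(ii, 0, 0, 8, 8)
```

AdaBoost then selects, from many such rectangle features at all positions and scales, the few whose thresholded responses best separate vehicle from non-vehicle windows.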
  • IEEE Transactions on Autonomous Mental Development 01/2010; 2:196-215. · 1.35 Impact Factor
  • Lijuan Duan, Jicai Ma, Zhen Yang, Jun Miao
    ABSTRACT: Sparse coding theory is a method for finding a reduced representation of multidimensional data. When applied to images, it can derive efficient codes that capture the statistically significant structure intrinsic to the images. In this paper, we mainly discuss its application to texture image analysis by means of Independent Component Analysis. Texture model construction, feature extraction and segmentation approaches are proposed respectively. The experimental results demonstrate that segmentation based on sparse coding theory achieves promising performance.
    Advances in Neural Networks - ISNN 2010, 7th International Symposium on Neural Networks, ISNN 2010, Shanghai, China, June 6-9, 2010, Proceedings, Part II; 01/2010
  • ABSTRACT: In this paper, we analyze the FitzHugh-Nagumo model and improve it to build a neural network used to implement visual selection and attention shifting. Each group of neurons representing one object of a visual input is synchronized; different groups of neurons representing different objects are desynchronized. A cooperation and competition mechanism is introduced to accelerate the oscillating frequency of the salient object and slow down the others, so that the most salient object jumps to high-frequency oscillation while all other objects remain silent. The object corresponding to the high-frequency oscillation is selected; it is then inhibited, and the other neurons continue to oscillate to select the next salient object.
    Advances in Neural Networks - ISNN 2010, 7th International Symposium on Neural Networks, ISNN 2010, Shanghai, China, June 6-9, 2010, Proceedings, Part II; 01/2010
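The building block of such a network is a single FitzHugh-Nagumo oscillator. The sketch below integrates one unit with forward Euler; the parameters are textbook values chosen so that a constant input drives sustained oscillation, not values from the paper:

```python
# One FitzHugh-Nagumo unit integrated with forward Euler. With a suprathreshold
# constant input the fixed point is unstable and the unit settles onto a limit
# cycle, which is what the network synchronizes and desynchronizes per object.
import numpy as np

def fhn_trajectory(I, steps=20000, dt=0.01, a=0.7, b=0.8, eps=0.08):
    v, w = -1.0, -0.5
    vs = np.empty(steps)
    for t in range(steps):
        dv = v - v ** 3 / 3 - w + I   # fast, voltage-like variable
        dw = eps * (v + a - b * w)    # slow recovery variable
        v += dt * dv
        w += dt * dw
        vs[t] = v
    return vs

vs = fhn_trajectory(I=0.5)            # constant input drives oscillation
amplitude = vs[5000:].max() - vs[5000:].min()   # measured after the transient
```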
  • ABSTRACT: Visual context plays a significant role in humans' gaze movements when searching for targets. How to transform visual context into the internal representation of a brain-like neural network is an interesting issue. Population cell coding is a neural representation mechanism widely observed in primates' visual systems. This paper presents a biologically inspired neural network model that uses a population cell coding mechanism for visual context representation and target searching. Experimental results show that population-cell coding generally performs better than a single-cell-coding system.
    Artificial Neural Networks - ICANN 2010 - 20th International Conference, Thessaloniki, Greece, September 15-18, 2010, Proceedings, Part I; 01/2010

Publication Stats

158 Citations
15.55 Total Impact Points

Institutions

  • 2003–2014
    • Beijing University of Technology
Beijing, China
  • 2009
    • Beijing Union University
Beijing, China
  • 2002
    • Chinese Academy of Sciences
      • Institute of Computing Technology
Beijing, China