Lijuan Duan

Beijing University of Technology, Beijing, China

Publications (41) · 13.37 Total Impact

  • ABSTRACT: In this paper, the problem of stability analysis of generalized recurrent neural networks (RNNs) with time-varying delays is considered. Neither differentiability nor monotonicity of the activation functions, nor differentiability of the time-varying delays, is assumed. By employing a new Lyapunov-Krasovskii functional, a linear matrix inequality (LMI) approach is developed to establish sufficient conditions for the RNNs to be globally asymptotically stable. The proposed stability results are less conservative than some recently reported ones in the literature. Finally, an example is given to verify the effectiveness of the proposed criterion.
    07/2013;
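The paper's LMI conditions for delayed RNNs are considerably more elaborate; as a minimal sketch of the underlying Lyapunov idea in the delay-free linear case, x' = Ax is globally asymptotically stable exactly when AᵀP + PA = -Q admits a positive definite solution P. The matrices below are illustrative, not taken from the paper.

```python
import numpy as np

def lyapunov_certificate(A, Q=None):
    """Solve A^T P + P A = -Q for P via vectorization.

    A positive definite P certifies global asymptotic stability of
    x' = A x (the delay-free analogue of an LMI stability condition).
    """
    n = A.shape[0]
    Q = np.eye(n) if Q is None else Q
    I = np.eye(n)
    # vec(A^T P + P A) = (I kron A^T + A^T kron I) vec(P)
    M = np.kron(I, A.T) + np.kron(A.T, I)
    P = np.linalg.solve(M, (-Q).flatten()).reshape(n, n)
    return 0.5 * (P + P.T)  # symmetrize against round-off

# Example: a Hurwitz (stable) matrix.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
P = lyapunov_certificate(A)
```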
  • ABSTRACT: This paper presents a simple and effective method to compute full-resolution pixel saliency in an image. First, the proposed method creates an image representation of four color channels through a modified version of the computation of Itti et al. [5]. Then the most informative channel is automatically identified among the four derived color channels. Finally, pixel saliency is computed by simply combining a contrast feature with a spatial attention function on that channel. The proposed method is computationally very simple, yet it achieves very good performance in comparison experiments against six other saliency detection methods. On a challenging database of 1,000 images, it outperforms the six other methods in both identifying salient pixels and segmenting salient regions.
    Pattern Recognition (ICPR), 2012 21st International Conference on; 01/2012
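The four broadly tuned color channels of Itti et al. that the abstract starts from have standard definitions; a sketch follows (the paper's "modified computation" is not specified in the abstract, so only the baseline is shown):

```python
import numpy as np

def broadly_tuned_channels(img):
    """Four broadly tuned color channels R, G, B, Y as in Itti et al.
    `img` is H x W x 3, float RGB in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    R = np.clip(r - (g + b) / 2, 0, None)
    G = np.clip(g - (r + b) / 2, 0, None)
    B = np.clip(b - (r + g) / 2, 0, None)
    Y = np.clip((r + g) / 2 - np.abs(r - g) / 2 - b, 0, None)
    return R, G, B, Y

# A pure-red pixel activates only the R channel.
img = np.zeros((1, 1, 3)); img[0, 0, 0] = 1.0
R, G, B, Y = broadly_tuned_channels(img)
```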
  • ABSTRACT: We propose an image conspicuity index that combines three factors: spatial dissimilarity, spatial distance and central bias. The dissimilarity between image patches is evaluated in a reduced-dimensional principal component space and is inversely weighted by the spatial separation between patches. An additional weighting mechanism reflects the bias of human fixations towards the image center. The method is tested on three public image datasets and a video clip. The experimental results indicate highly competitive performance despite the simple definition of the proposed index. The conspicuity maps generated are more consistent with human fixations than those of prior state-of-the-art models when tested on color image datasets, as demonstrated by both receiver operating characteristic (ROC) analysis and the Kullback-Leibler distance. The method should prove useful for diverse image processing tasks such as quality assessment, segmentation, search and compression. Its high performance and relative simplicity compared with much more complex models suggest that it may find wide usage.
    IEEE Signal Processing Letters 12/2011; · 1.67 Impact Factor
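The three-factor combination described above can be sketched as follows; the reduced dimension, the inverse-distance weighting and the Gaussian central bias are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

def conspicuity(patches, positions, center, sigma=0.3):
    """Saliency of each patch: dissimilarity to all other patches in a
    PCA-reduced space, inversely weighted by spatial distance, times a
    Gaussian central-bias weight (parameters here are illustrative)."""
    X = patches - patches.mean(axis=0)
    # PCA via SVD: project onto the leading components
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    k = min(11, Vt.shape[0])                # reduced dimension (assumed)
    Z = X @ Vt[:k].T
    n = len(Z)
    sal = np.zeros(n)
    for i in range(n):
        dissim = np.linalg.norm(Z - Z[i], axis=1)               # feature dissimilarity
        dist = 1.0 + np.linalg.norm(positions - positions[i], axis=1)
        sal[i] = np.sum(dissim / dist)                          # inverse-distance weighting
    bias = np.exp(-np.sum((positions - center) ** 2, axis=1) / (2 * sigma ** 2))
    return sal * bias

# A distinctive patch at the image center gets the highest score.
positions = np.array([[x, y] for x in (0.1, 0.5, 0.9)
                              for y in (0.1, 0.5, 0.9)], dtype=float)
patches = np.zeros((9, 4)); patches[4] = 5.0      # index 4 sits at (0.5, 0.5)
scores = conspicuity(patches, positions, center=np.array([0.5, 0.5]))
```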
  • ABSTRACT: In this paper, a new visual saliency detection method is proposed based on spatially weighted dissimilarity. We measure saliency by integrating three elements: the dissimilarity between image patches, evaluated in a reduced-dimensional space; the spatial distance between image patches; and a central bias. The dissimilarities are inversely weighted by the corresponding spatial distances, and a weighting mechanism reflecting the bias of human fixations towards the center of the image is employed. Principal component analysis (PCA) is the dimension reduction method used in our system; we extract the principal components (PCs) by sampling patches from the current image. Our method is compared with four saliency detection approaches on three image datasets. Experimental results show that it outperforms current state-of-the-art methods in predicting human fixations.
    Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on; 07/2011
  • ABSTRACT: In this work, we propose a framework for recognizing EEG signals evoked by emotional faces, within which the optimal statistical features are extracted from the original signals according to time and space, i.e., time span and electrodes. First, the EEG signals are collected using noise suppression methods, and principal component analysis (PCA) is used to reduce the dimensionality and informational redundancy of the data. Then the optimal statistical features from different electrodes are selected and combined based on classification performance. We also discuss the contribution of each time span of the EEG signals within the same electrodes. Finally, experiments using Fisher, Bayes and SVM classifiers show that our methods offer a better chance for reliable classification of the EEG signal. Moreover, this conclusion is supported by physiological evidence: (a) the selected electrodes concentrate mainly in the temporal cortex of the right hemisphere, which is related to vision according to previous psychological research; (b) the selected time spans show that conscious processing of the face picture tends to move from posterior to anterior brain regions.
    Neural Information Processing - 18th International Conference, ICONIP 2011, Shanghai, China, November 13-17, 2011, Proceedings, Part I; 01/2011
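The PCA step of the pipeline can be sketched with a plain SVD; the data below is a synthetic stand-in for per-electrode EEG features, not real recordings.

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]

# Synthetic stand-in for EEG feature vectors (200 trials x 32 features)
# with variance concentrated in the leading dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32)) * np.linspace(3, 0.1, 32)
Z, components = pca_reduce(X, k=5)
```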
  • ABSTRACT: This paper presents an improved neural architecture for gaze movement control in target searching. Compared with the four-layer neural structure proposed in [14], a new movement-coding neuron layer is inserted between the third and fourth layers of the previous structure for finer gaze motion estimation and control. The disadvantage of the previous structure is that all strongly responding neurons in the third layer were involved in gaze motion synthesis by transmitting weighted responses to the movement control neurons in the fourth layer, even though these neurons may produce different groups of movement estimates. It is therefore necessary to discriminate and group these neurons' movement estimates in terms of the grouped connection weights from them to the movement control neurons in the fourth layer; adding a new neuron layer between the third and fourth layers is how we solve this problem. Comparison experiments on target locating show that the new architecture yields a significant improvement.
    01/2011;
  • ABSTRACT: In this paper, we present a saliency-guided image retargeting method. Our bio-inspired saliency measure integrates three factors: dissimilarity, spatial distance and central bias, each supported by research on the human visual system (HVS). To produce perceptually satisfactory retargeted images, we use the saliency map as the importance map in the retargeting method, on the assumption that saliency maps indicate informative regions and filter out the background of images. Experimental results demonstrate that our method distorts dominant objects less than a previous retargeting method guided by the gray image, and a further comparison among various saliency detection methods shows that retargeting with our saliency measure preserves more of the foreground.
    Neural Information Processing - 18th International Conference, ICONIP 2011, Shanghai, China, November 13-17, 2011, Proceedings, Part I; 01/2011
  • ABSTRACT: Visual context plays an important role in humans' top-down gaze movement control for target searching, and exploring the underlying mental development mechanism in terms of incremental visual context encoding by population cells is an interesting issue. This paper presents a biologically inspired computational model in which visual contextual cues are used for top-down eye-motion control when searching for targets in images. We propose a population cell coding mechanism for visual context encoding and decoding, implemented in a neural network system. A developmental learning mechanism is simulated in this system by dynamically generating new coding neurons to incrementally encode visual context during training. The encoded context is decoded with population neurons in a top-down mode, which allows the model to direct gaze motion to the centers of targets. The model is designed to pursue low encoding quantity and high target locating accuracy, and its performance has been evaluated by a set of experiments searching for different facial objects in a human face image set. Theoretical analysis and experimental results show that the proposed visual context encoding algorithm without weight updating is fast, efficient and stable, and that population-cell coding generally performs better than single-cell coding and k-nearest-neighbor (k-NN)-based coding.
    IEEE Transactions on Autonomous Mental Development 10/2010; · 2.17 Impact Factor
  • ABSTRACT: In this paper, we propose a vehicle detection method based on AdaBoost, focusing on the detection of front-view cars and buses with occlusions on highways. Samples with different occlusion situations are selected into the training set, and an AdaBoost-based cascade vehicle detector is trained on basic and rotated Haar-like features extracted from these samples. Performance tests on static images and short videos show that (1) our approach detects cars more effectively than buses, and (2) real-time detection on video proceeds at 30 frames per second.
    Information Engineering and Computer Science, 2009. ICIECS 2009. International Conference on; 01/2010
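The boosting core of such a detector can be sketched with discrete AdaBoost over threshold stumps; this toy stands in for the paper's Haar-feature cascade (features, cascade stages and training data are all assumptions here).

```python
import numpy as np

def train_adaboost(X, y, rounds=10):
    """Discrete AdaBoost with threshold stumps on each feature.
    X: (n, d) feature matrix, y: labels in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)               # sample weights
    learners = []
    for _ in range(rounds):
        best = None
        for j in range(d):                 # exhaustive stump search
            for t in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = np.where(X[:, j] < t, -sign, sign)
                    err = np.sum(w[pred != y])
                    if best is None or err < best[0]:
                        best = (err, j, t, sign)
        err, j, t, sign = best
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)   # learner weight
        pred = np.where(X[:, j] < t, -sign, sign)
        w *= np.exp(-alpha * y * pred)          # reweight samples
        w /= w.sum()
        learners.append((alpha, j, t, sign))
    return learners

def predict(learners, X):
    score = sum(a * np.where(X[:, j] < t, -s, s) for a, j, t, s in learners)
    return np.sign(score)

# Toy 1-D "vehicle feature": separable at threshold 2.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
model = train_adaboost(X, y, rounds=3)
```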
  • Lijuan Duan, Jicai Ma, Zhen Yang, Jun Miao
    ABSTRACT: Sparse coding theory is a method for finding a reduced representation of multidimensional data. When applied to images, it can derive efficient codes that capture the statistically significant structure intrinsic to the images. In this paper, we discuss its application to texture image analysis by means of independent component analysis (ICA). Texture model construction, feature extraction and segmentation approaches are proposed in turn. The experimental results demonstrate that segmentation based on sparse coding theory achieves promising performance.
    Advances in Neural Networks - ISNN 2010, 7th International Symposium on Neural Networks, ISNN 2010, Shanghai, China, June 6-9, 2010, Proceedings, Part II; 01/2010
  • ABSTRACT: In this paper, we analyze the FitzHugh-Nagumo model and improve it to build a neural network, which is used to implement visual selection and attention shifting. Each group of neurons representing one object of a visual input is synchronized; different groups of neurons representing different objects are desynchronized. A cooperation and competition mechanism is also introduced to accelerate the oscillation frequency of the salient object and slow down the others, so that the most salient object jumps to a high-frequency oscillation while all other objects remain silent. The object corresponding to the high-frequency oscillation is selected; the selected object is then inhibited and the other neurons continue to oscillate so as to select the next most salient object.
    Advances in Neural Networks - ISNN 2010, 7th International Symposium on Neural Networks, ISNN 2010, Shanghai, China, June 6-9, 2010, Proceedings, Part II; 01/2010
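The base FitzHugh-Nagumo dynamics underlying the network can be sketched with simple Euler integration; the paper's network coupling and improvements are not specified in the abstract, so only the single-neuron model with standard textbook parameters is shown.

```python
import numpy as np

def fhn_trajectory(I=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, steps=20000):
    """Euler integration of the FitzHugh-Nagumo neuron:
        v' = v - v^3/3 - w + I
        w' = eps * (v + a - b * w)
    For moderate input current I the neuron settles on a limit cycle
    (sustained oscillation), the behavior the network exploits."""
    v, w = -1.0, 1.0
    vs = np.empty(steps)
    for t in range(steps):
        dv = v - v**3 / 3 - w + I
        dw = eps * (v + a - b * w)
        v += dt * dv
        w += dt * dw
        vs[t] = v
    return vs

vs = fhn_trajectory()
```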
  • ABSTRACT: Visual context plays a significant role in humans' gaze movement for target searching. How to transform visual context into the internal representation of a brain-like neural network is an interesting issue. Population cell coding is a neural representation mechanism widely observed in primates' visual systems. This paper presents a biologically inspired neural network model that uses a population cell coding mechanism for visual context representation and target searching. Experimental results show that population-cell coding generally performs better than a single-cell coding system.
    Artificial Neural Networks - ICANN 2010 - 20th International Conference, Thessaloniki, Greece, September 15-18, 2010, Proceedings, Part I; 01/2010
  • IEEE Transactions on Autonomous Mental Development 01/2010; 2:196-215. · 2.17 Impact Factor
  • ABSTRACT: Sparse coding provides high-performance encoding and a strong ability to represent images, and its basis vectors play a crucial role. Most existing methods for computing sparse coding basis vectors have relatively high computational complexity. To reduce this complexity and save basis-vector training time, a new method based on Hebbian rules is proposed in this paper, and a two-layer neural network is constructed to implement the task. The main idea of our work is to learn basis vectors by removing the redundancy among the initial vectors using Hebbian rules. Experiments on natural images show that the proposed method is effective for sparse coding basis learning and has lower computational complexity than previous work.
    Natural Computation, 2009. ICNC '09. Fifth International Conference on; 09/2009
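A classic Hebbian rule for learning a basis direction is Oja's rule, which adds implicit weight normalization to plain Hebbian updates; the sketch below is illustrative of this family, not the paper's two-layer network, and the data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic inputs whose dominant variance lies along the first axis.
d = 5
X = rng.normal(size=(4000, d)) * np.array([3.0, 1.0, 1.0, 1.0, 1.0])

# Oja's rule: Hebbian update with implicit normalization,
#   w <- w + lr * y * (x - y * w),   y = w . x
# The weight vector converges to the first principal direction.
w = rng.normal(size=d)
w /= np.linalg.norm(w)
lr = 0.002
for x in X:
    y = w @ x
    w += lr * y * (x - y * w)

alignment = abs(w[0]) / np.linalg.norm(w)  # overlap with the true direction
```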
  • ABSTRACT: Gaze movement plays an important role in the human visual search system. In the literature, the winner-take-all method is widely used to simulate gaze movement control. Winner-take-all is a type of single-cell coding, which uses one cell (a grandmother cell), i.e., one response, to represent an object. However, eye movement is affected by the visual context, which includes more than one object in an image, especially in target search. We therefore propose to use population coding, with more than one response, rather than single-cell coding for gaze movement control. The proposed method is supported by theoretical analysis and by experiments on a real image database, which show that population-cell coding improves target locating accuracy by 44.4% at the cost of coding only 22.4% more information than single-cell coding.
    International Joint Conference on Neural Networks, IJCNN 2009, Atlanta, Georgia, USA, 14-19 June 2009; 01/2009
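The single-cell (winner-take-all) versus population readout contrast above can be illustrated with the textbook population-vector decoder; the cosine tuning curves and cell count are assumptions for the sketch, not the paper's model.

```python
import numpy as np

def decode(theta, n_cells=8):
    """Encode direction `theta` with cosine-tuned cells, then decode it
    two ways: winner-take-all (single cell) vs. population vector."""
    prefs = np.linspace(0, 2 * np.pi, n_cells, endpoint=False)
    rates = np.clip(np.cos(theta - prefs), 0, None)   # rectified tuning curves
    wta = prefs[np.argmax(rates)]                     # grandmother-cell readout
    vec = rates @ np.column_stack([np.cos(prefs), np.sin(prefs)])
    pop = np.arctan2(vec[1], vec[0]) % (2 * np.pi)    # population-vector readout
    return wta, pop

# The population readout recovers the stimulus more precisely than the
# single best cell, whose estimate is quantized to its preferred direction.
theta = 1.0
wta, pop = decode(theta)
```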
  • ABSTRACT: A method of human skin region detection based on PCNN is proposed in this paper. First, the input image is converted from RGB color space to YIQ color space, and the I-channel image is obtained. Second, we use the synchronous pulse firing mechanism of a pulse-coupled neural network (PCNN) to simulate the skin region detection mechanism of human eyes: skin and non-skin regions fire at different times, and skin regions are thereby detected. Comparison with other methods shows that the proposed method produces more accurate segmentation results.
    Advances in Neural Networks - ISNN 2009, 6th International Symposium on Neural Networks, ISNN 2009, Wuhan, China, May 26-29, 2009, Proceedings, Part III; 01/2009
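The RGB-to-YIQ conversion used as the first step is the standard NTSC transform; a minimal sketch:

```python
import numpy as np

# NTSC RGB -> YIQ conversion matrix (rows: Y, I, Q).
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def rgb_to_i_channel(img):
    """Convert an H x W x 3 RGB image (floats in [0, 1]) to YIQ and
    return the I channel, on which skin tones separate well."""
    return img @ RGB2YIQ[1]   # dot each pixel with the I-row coefficients

# Reddish (skin-like) pixels have high I; achromatic pixels have I ~ 0.
skin = np.array([[[0.9, 0.6, 0.5]]])
gray = np.array([[[0.5, 0.5, 0.5]]])
```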
  • Lijuan Duan, Jicai Ma, Jun Miao, Yuanhua Qiao
    ABSTRACT: In this paper we present a feature extraction approach using an ICA filter bank, which consists of the ICA basis images learned from training images. Based on its ability to capture the inherent properties of textured images, we use the ICA filter bank as a template model to extract texture features for segmentation. Experiments based on clustering and classification demonstrate the feasibility of this method.
    Fifth International Conference on Natural Computation, ICNC 2009, Tianjian, China, 14-16 August 2009, 6 Volumes; 01/2009
  • ABSTRACT: Visual context between objects is an important cue for object position perception, and how to represent it effectively is a key research issue. Some past work introduced task-driven methods for object perception, which lead to a large coding quantity. This paper proposes an approach that incorporates a feature-driven mechanism into object-driven context representation for object locating. As an example, the paper discusses how a neural network encodes the visual context between feature-salient regions and human eye centers with as little coding quantity as possible. A group of experiments on the efficiency of visual context coding and object searching is analyzed and discussed, showing that the proposed method decreases the coding quantity and effectively improves object searching accuracy.
    Neural Networks, 2008. IJCNN 2008. (IEEE World Congress on Computational Intelligence). IEEE International Joint Conference on; 07/2008
  • ABSTRACT: Pulse-coupled neural networks (PCNNs) can be efficiently applied to image segmentation. However, segmentation performance depends on suitable PCNN parameters, which are usually obtained by manual experiment, and segmentation of noisy images needs improvement. In this paper, a dynamic-mechanism-based PCNN (DMPCNN) is put forward to simulate the integrate-and-fire mechanism, and it is applied to segment noisy images effectively; parameter selection is based on the dynamic mechanism. Experimental results on image segmentation show its validity and robustness.
    Neural Networks, 2008. IJCNN 2008. (IEEE World Congress on Computational Intelligence). IEEE International Joint Conference on; 07/2008
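The integrate-and-fire segmentation idea behind PCNNs can be sketched with a simplified model: each neuron's activity is its stimulus boosted by neighbors' pulses, compared against an exponentially decaying threshold, so pixels of similar brightness fire in the same iteration. The linking kernel and all constants below are illustrative, not the DMPCNN's parameters.

```python
import numpy as np

def pcnn_first_fire(S, beta=0.2, V_theta=20.0, a_theta=0.2, iters=30):
    """Simplified PCNN: returns the iteration at which each pixel first
    fires; pixels firing together form a segment."""
    F = S.astype(float)                  # feeding input = stimulus
    theta = np.full(S.shape, V_theta)    # high initial threshold
    Y = np.zeros(S.shape)                # pulse output
    first = np.full(S.shape, -1)         # iteration of first firing
    K = np.array([[0.5, 1, 0.5], [1, 0, 1], [0.5, 1, 0.5]])
    for n in range(iters):
        # linking input: weighted sum of the neighbors' pulses
        L = np.zeros(S.shape)
        P = np.pad(Y, 1)
        for di in range(3):
            for dj in range(3):
                L += K[di, dj] * P[di:di + S.shape[0], dj:dj + S.shape[1]]
        U = F * (1 + beta * L)           # internal activity
        theta *= np.exp(-a_theta)        # threshold decays each step
        Y = (U > theta).astype(float)    # fire where activity beats threshold
        first[(first < 0) & (Y > 0)] = n
        theta += V_theta * Y             # firing resets the threshold
    return first

# The bright half of the image fires before the dark half.
S = np.zeros((6, 6)); S[:, :3] = 0.9; S[:, 3:] = 0.3
first = pcnn_first_fire(S)
```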
  • ABSTRACT: Image representation has been a key issue in vision research for many years. In order to represent various local image patterns or objects effectively, it is important to study the spatial relationships among these objects, especially for the purpose of searching for a specific object among them. Psychological experiments support the hypothesis that humans cognize the world using visual context, i.e., object spatial relationships, so how to efficiently learn and memorize such knowledge is a key issue. This paper proposes a new type of neural network for learning and memorizing object spatial relationships by means of sparse coding. A group of comparison experiments on visual object searching with several sparse features is carried out to examine the proposed approach, and the efficiency of sparse coding of spatial relationships is analyzed and discussed. Theoretical and experimental results indicate that the newly developed neural network can learn and memorize object spatial relationships well, while visual context learning and memorization remain a grand challenge in simulating the human visual system.
    Neurocomputing 06/2008; · 2.01 Impact Factor

Publication Stats

99 Citations
13.37 Total Impact Points

Institutions

  • 2003–2013
    • Beijing University of Technology
Beijing, China
  • 2008–2010
    • Northeast Institute of Geography and Agroecology
      • Institute of Computing Technology
Beijing, China
  • 2009
    • Beijing Union University
Beijing, China
  • 2005
    • Harbin Institute of Technology
      • Department of Computer Science and Engineering
Harbin, Heilongjiang, China