De Xu

Beijing Jiaotong University, Beijing, China

Publications (101) · 25.8 Total Impact

  • International Journal of Software Engineering and Knowledge Engineering 05/2014; 24(04):635-652. DOI:10.1142/S0218194014500247 · 0.26 Impact Factor
  • ABSTRACT: Combinatorial maps are widely used in image representation and processing; however, map matching problems have not been extensively researched. This paper addresses the problem of inexact matching between labeled combinatorial maps. First, the concept of edit distance is extended to combinatorial maps and then used to define a mapping between combinatorial maps as a sequence of edit operations that transforms one map into another. Subsequently, an optimal approach based on the A* algorithm and an approximate approach based on a greedy algorithm are proposed to compute the distance between combinatorial maps. Experimental results show that the proposed inexact map matching approach produces richer search results than exact map matching by tolerating small differences between maps. The proposed approach also performs better in practice than the previous approach based on the maximum common submap, which cannot be directly used for comparing labels on the maps.
    Computer Vision and Image Understanding 12/2012; 116(12):1168–1177. DOI:10.1016/j.cviu.2012.08.002 · 1.36 Impact Factor
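    The exact algorithms operate on combinatorial maps and are not reproduced here; below is a minimal Python sketch of the greedy, edit-distance flavor of matching on two labeled sequences, with invented unit costs rather than the authors' cost model.

    ```python
    # Minimal sketch of a greedy, edit-distance-style matching between two
    # sequences of labeled elements (toy stand-ins for map darts). Costs are
    # invented: 0 for a label match, 1 per substitution, insertion, deletion.
    def greedy_edit_distance(a, b):
        unmatched_b = list(b)
        dist = 0
        for label in a:
            if label in unmatched_b:      # free match on identical labels
                unmatched_b.remove(label)
            elif unmatched_b:             # substitute against some leftover
                unmatched_b.pop()
                dist += 1
            else:                         # nothing left in b: delete label
                dist += 1
        return dist + len(unmatched_b)    # leftovers in b count as insertions

    print(greedy_edit_distance(list("abcd"), list("abce")))  # -> 1
    ```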
  • ABSTRACT: This paper introduces 2PROM, a new algorithm that can efficiently retrieve information from a set of multimedia and heterogeneous data sources. We previously published IHM, a model to predict whether an X-ray image carries traces of cancer, and further generalized our approach to NoCancerSpace, a dataspace for medical diagnosis of lung cancer. This paper presents the optimization algorithm used by NoCancerSpace. 2PROM is designed to optimize the dataspace retrieval process in two main phases. The first phase consists of building a pipeline to find the best retrieval strategies: the pipeline explores the set of alternative execution strategies to determine the cheapest one. The resulting retrieval strategies become the initial nodes of the next phase. In the second phase, retrieval strategies are combined with a predictive model to determine the most efficient way to execute a query. In other words, the optimizer considers the possible retrieval strategies for a given input query and attempts to determine which of those strategies will be the most efficient. The retrieval strategies are represented as an XML tree of "strategy nodes". The output of the second phase is the best results found. Experiments show that 2PROM retrieves more relevant results in less time than existing systems.
    2012 11th International Conference on Signal Processing (ICSP 2012); 10/2012
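    The abstract only outlines the optimizer, so the sketch below merely illustrates its first phase (enumerate alternative retrieval strategies, keep the cheapest) with invented strategy names and cost estimates.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Strategy:
        name: str
        est_cost: float   # hypothetical cost estimate for the plan

    def cheapest(strategies):
        # Phase 1 as described in the abstract: explore the alternative
        # execution strategies and keep the cheapest one for phase 2.
        return min(strategies, key=lambda s: s.est_cost)

    plans = [Strategy("scan-all-sources", 12.0), Strategy("index-first", 3.5)]
    print(cheapest(plans).name)  # -> index-first
    ```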
  • ABSTRACT: Iconic communication is paramount today in order to assist people with disabilities (e.g., illiteracy) in enjoying, as much as everyone else, the advances in information and communication technologies (e.g., the Internet). Previous works tend to generalize iconic communication by translating iconic sentences into XML documents. These approaches are limited owing to the fact that an icon can hide several metaphors. First, the semantics of an icon is not the linguistic equivalent associated with the image, but a set of attributes which can be used to describe the given icon. Second, an XML schema is not a knowledge representation, but just a message format. Therefore, to manage the knowledge hidden behind iconic sentences, a semantic model for icons needs to be formally defined. This paper extends previous icon models by, first, introducing a description-logics-based definition of icon semantics and, second, building on those formal definitions and the Web Ontology Language (OWL) to create an ontology for icons named IcOnto (read "eye can too"). We further use IcOnto to model some properties of African Traditional Medicine (ATM) for illustration.
    2012 11th International Conference on Signal Processing (ICSP 2012); 10/2012
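    As a hedged illustration of the abstract's core claim (icon semantics as a set of attributes expressed in OWL), here is a small rdflib sketch; the namespace, class, and property names are invented, not taken from IcOnto.

    ```python
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    ICO = Namespace("http://example.org/iconto#")  # hypothetical namespace
    g = Graph()
    g.bind("iconto", ICO)

    # An Icon class whose semantics is a set of attribute assertions
    # rather than a single linguistic equivalent, per the abstract.
    g.add((ICO.Icon, RDF.type, OWL.Class))
    g.add((ICO.hasAttribute, RDF.type, OWL.DatatypeProperty))
    g.add((ICO.hasAttribute, RDFS.domain, ICO.Icon))

    leaf = ICO.MedicinalLeafIcon                   # invented example icon
    g.add((leaf, RDF.type, ICO.Icon))
    g.add((leaf, ICO.hasAttribute, Literal("plant")))
    g.add((leaf, ICO.hasAttribute, Literal("remedy")))

    print(g.serialize(format="turtle"))
    ```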
  • Journal of Computational and Theoretical Nanoscience 05/2012; 11(1):67-71. DOI:10.1166/asl.2012.2162 · 1.34 Impact Factor
  • ABSTRACT: We define the WordNet-based hierarchy concept tree (HCT) and hierarchy concept graph (HCG): the HCT contains the hyponym/hypernym relations in WordNet, while the HCG adds meronym/holonym edges beyond those in the HCT. We then present an advanced concept vector model for generalizing standard representations of concept similarity in terms of the WordNet-based HCT. In this model, each concept node in the hierarchical tree has ancestor and descendant concept nodes composing its relevancy nodes; a concept node is thus represented as a concept vector according to its relevancy nodes' local density, and the similarity of two concepts is obtained by computing the cosine similarity of their vectors. In addition, the model is adjustable in terms of multiple descendant concept nodes. This paper also provides a method by which this concept vector may be applied to the HCG as well as the HCT. With this model, semantic similarity and relatedness are computed based on the HCT and HCG. The model captures structural information inherent to and hidden in the HCT and HCG. Our experiments showed that this model compares favorably to others and is flexible in that it can compare any two concepts in a WordNet-like structure without relying on any additional dictionary or corpus information.
    Journal of Systems and Software 02/2012; 85(2):370-381. DOI:10.1016/j.jss.2011.08.029 · 1.25 Impact Factor
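    A minimal sketch of the vector-space idea above: two concepts represented over a shared set of relevancy nodes and compared by cosine similarity. The weights are toy values, not the paper's local-density formula.

    ```python
    import numpy as np

    # Toy concept vectors over a shared index of relevancy nodes
    # (ancestors/descendants). The weights are invented; the paper derives
    # them from each relevancy node's local density in the HCT.
    nodes = ["entity", "animal", "dog", "cat", "pet"]
    dog = np.array([0.2, 0.5, 1.0, 0.0, 0.4])
    cat = np.array([0.2, 0.5, 0.0, 1.0, 0.4])

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    print(f"sim(dog, cat) = {cosine(dog, cat):.3f}")  # shared ancestors drive similarity
    ```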
  • ABSTRACT: Visual saliency detection provides an alternative methodology to image description in many applications such as adaptive content delivery and image retrieval. One of the main aims of visual attention in computer vision is to detect and segment the salient regions in an image. In this paper, we employ matrix decomposition to detect salient objects in natural images. To efficiently eliminate high-contrast noise regions in the background, we integrate global context information into saliency detection, so the most salient region can be easily selected as the one which is globally most isolated. The proposed approach intrinsically provides an alternative way to model attention, with low implementation complexity. Experiments show that our approach achieves much better performance than existing state-of-the-art methods.
    IEICE Transactions on Information and Systems 01/2012; E95.D(5):1556-1559. DOI:10.1587/transinf.E95.D.1556 · 0.19 Impact Factor
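    The abstract does not name the decomposition, so the sketch below uses a plain SVD as one plausible reading: a low-rank part models the redundant background, and each patch's residual energy serves as its saliency.

    ```python
    import numpy as np

    def saliency_by_lowrank(F, rank=1):
        """F: (n_patches, n_features) patch descriptors. Reconstruct a
        rank-k 'background' and score each patch by its residual energy.
        SVD is an illustrative choice, not necessarily the paper's."""
        U, s, Vt = np.linalg.svd(F, full_matrices=False)
        background = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        return np.linalg.norm(F - background, axis=1)   # one score per patch

    rng = np.random.default_rng(0)
    m = np.ones(8) / np.sqrt(8)                      # shared background direction
    F = np.outer(np.ones(100), m) + rng.normal(0, 0.05, (100, 8))
    F[42] = np.tile([1.0, -1.0], 4)                  # globally isolated patch
    print(int(np.argmax(saliency_by_lowrank(F))))    # -> 42
    ```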
  • ABSTRACT: Removing shadows from single color images is an important problem in computer vision. In this paper, we propose a novel shadow removal approach which effectively removes shadows from textured surfaces, yielding high-quality shadow-free images. Our approach calculates scale factors to cancel the effect of shadows. Based on the regional gray edge hypothesis, which assumes the average of the reflectance differences in a region is achromatic, the scale factors can be computed without the restrictions that former algorithms require. The experimental results show that the proposed algorithm is effective.
    Optical Engineering 12/2011; 50(12):7001-. DOI:10.1117/1.3656749 · 0.96 Impact Factor
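    A rough numerical sketch of the scale-factor idea with synthetic data: if the regional gray edge hypothesis holds, the channelwise ratio of average reflectance differences between a lit and a shadowed region recovers the factors that cancel the shadow.

    ```python
    import numpy as np

    def shadow_scale_factors(lit_diffs, shadow_diffs):
        """lit_diffs / shadow_diffs: (n, 3) RGB reflectance differences
        sampled from a lit and a shadowed region of the same surface.
        Under the regional gray edge hypothesis their averages agree up
        to the illuminant change, so the channelwise ratio gives the
        scale factors that cancel the shadow. Illustrative only."""
        return lit_diffs.mean(axis=0) / shadow_diffs.mean(axis=0)

    rng = np.random.default_rng(1)
    diffs = np.abs(rng.normal(1.0, 0.2, (500, 3)))   # achromatic on average
    true_shadow = np.array([0.4, 0.5, 0.7])          # hypothetical attenuation
    scale = shadow_scale_factors(diffs, diffs * true_shadow)
    print(np.round(scale, 3))                        # ~ [2.5, 2.0, 1.429]
    ```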
  • ABSTRACT: Tag ranking has emerged as an important research topic recently due to its potential application in web image search. Existing tag relevance ranking approaches mainly rank the tags according to their relevance levels with respect to a given image. Nonetheless, such algorithms rely heavily on a large-scale image dataset and a proper similarity measurement to retrieve semantically relevant images with multiple labels. In contrast to existing tag relevance ranking algorithms, in this paper we propose a novel tag saliency ranking scheme, which aims to automatically rank the tags associated with a given image according to their saliency to the image content. To this end, the paper presents an integrated framework for tag saliency ranking which combines a visual attention model with multi-instance learning to investigate the saliency ranking order of tags with respect to the given image. Specifically, tags annotated at the image level are first propagated to the region level via an efficient multi-instance learning algorithm; then a visual attention model is employed to measure the importance of regions in the given image; finally, tags are ranked according to the saliency values of the corresponding regions. Experiments conducted on the COREL and MSRC image datasets demonstrate the effectiveness and efficiency of the proposed framework.
    Neurocomputing 10/2011; 74(17):3619-3627. DOI:10.1016/j.neucom.2011.06.014 · 2.01 Impact Factor
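    Once tags have been propagated to regions and the regions scored by an attention model, the final ranking step is simple; a toy sketch with invented scores:

    ```python
    # Toy final step of tag saliency ranking: each tag inherits the
    # saliency of the region(s) it was propagated to (scores invented).
    region_saliency = {"r1": 0.9, "r2": 0.3, "r3": 0.6}
    tag_regions = {"tiger": ["r1"], "grass": ["r2"], "water": ["r3", "r2"]}

    def rank_tags(tag_regions, region_saliency):
        score = {t: max(region_saliency[r] for r in rs)
                 for t, rs in tag_regions.items()}
        return sorted(score, key=score.get, reverse=True)

    print(rank_tags(tag_regions, region_saliency))  # ['tiger', 'water', 'grass']
    ```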
  • ABSTRACT: This paper proposes a method for scene categorization that integrates region contextual information into the popular Bag-of-Visual-Words approach. The Bag-of-Visual-Words approach describes an image as a bag of discrete visual words, and the frequency distributions of these words are used for image categorization. However, traditional visual words suffer when faced with patches that have similar appearances but distinct semantic concepts; the drawback stems from constructing each visual word independently. This paper introduces a Region-Conditional Random Fields model to learn each visual word depending on the rest of the visual words in the same region. Compared with the traditional Conditional Random Fields model, there are two areas of novelty. First, the initial label of each patch is automatically defined based on its visual features rather than manually assigned semantic labels. Second, a novel potential function is built under the region contextual constraint. The experimental results on three well-known datasets show that region contextual visual words indeed improve categorization performance compared to traditional visual words.
    Expert Systems with Applications 09/2011; 38(9):11591-11597. DOI:10.1016/j.eswa.2011.03.037 · 1.97 Impact Factor
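    For reference, the Bag-of-Visual-Words baseline that the paper extends can be sketched in a few lines (a codebook of local descriptors plus a word-frequency histogram per image); the region-level CRF itself is beyond a short sketch. Descriptors and the codebook below are synthetic stand-ins.

    ```python
    import numpy as np

    def bovw_histogram(descriptors, codebook):
        # Assign each local descriptor to its nearest visual word, then
        # return the normalized word-frequency histogram for the image.
        d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        words = d2.argmin(axis=1)
        hist = np.bincount(words, minlength=len(codebook)).astype(float)
        return hist / hist.sum()

    rng = np.random.default_rng(0)
    train = rng.normal(0, 1, (1000, 16))   # stand-in local descriptors
    codebook = train[rng.choice(1000, 50, replace=False)]  # crude vocabulary
    image_desc = rng.normal(0, 1, (200, 16))
    print(bovw_histogram(image_desc, codebook).shape)      # (50,)
    ```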
  • ABSTRACT: Combinatorial maps explicitly encode the orientations of edges around vertices and have been used in many fields. In this paper, we address the problem of searching for patterns in model maps by putting forward the concept of the symbol graph. A symbol graph is constructed and stored for each model map in a preprocessing step. An algorithm for submap isomorphism is then presented based on symbol sequence searching in the symbol graphs. The computational complexity of this algorithm is quadratic in the worst case if the preprocessing step is neglected.
    Pattern Recognition Letters 06/2011; 32(8):1100-1107. DOI:10.1016/j.patrec.2011.02.021 · 1.55 Impact Factor
  • ABSTRACT: Localized content-based image retrieval (LCBIR) has emerged as a hot topic recently because, in the CBIR scenario, the user is often interested in only a portion of the image, with the rest being irrelevant. In this paper, we propose a novel region-level relevance feedback method to solve the LCBIR problem. First, a visual attention model is employed to measure the regional saliency of each image in the feedback image set provided by the user. Second, the regions in the image set are assembled into an affinity matrix, and a novel propagation energy function is defined that takes both low-level visual features and regional significance into consideration. After each iteration, regions in the positive images with high confidence scores are selected as the candidate query set to conduct the next round of retrieval, until the retrieval results are satisfactory. Experimental results on the SIVAL dataset demonstrate the effectiveness of the proposed approach.
    IEICE Transactions on Information and Systems 06/2011; 94-D(6):1353-1356. DOI:10.1587/transinf.E94.D.1353 · 0.19 Impact Factor
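    The propagation energy function is not given in the abstract; the sketch below shows a standard manifold-ranking-style iteration over a region affinity matrix as one plausible general form, not the authors' exact function.

    ```python
    import numpy as np

    def propagate(W, y, alpha=0.85, iters=100):
        """W: (n, n) region affinity matrix; y: initial relevance
        (1 for regions from positive feedback images, 0 otherwise).
        Iterates f <- alpha * S f + (1 - alpha) * y, a standard
        graph-propagation scheme (assumed form, not the paper's)."""
        d = W.sum(axis=1)
        S = W / np.sqrt(np.outer(d, d))          # symmetric normalization
        f = y.astype(float).copy()
        for _ in range(iters):
            f = alpha * S @ f + (1 - alpha) * y
        return f

    W = np.array([[0, .9, .1], [.9, 0, .1], [.1, .1, 0]])
    y = np.array([1.0, 0.0, 0.0])
    # The region most similar to the positive seed gains a high score.
    print(np.round(propagate(W, y), 3))
    ```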
  • ABSTRACT: This paper presents an efficient yet powerful codebook model, named the classified codebook model, for natural scene categorization. Current codebook models typically resort to large codebooks to obtain higher categorization performance, which severely limits their practical applicability. Our model formulates the codebook model within the theory of vector quantization and thus uses the well-known technique of classified vector quantization for scene-category modeling. The significant feature of our model is that it benefits scene categorization, especially at small codebook sizes, while greatly reducing the computational complexity of quantization. We evaluate the proposed model on a well-known, challenging scene dataset: 15 Natural Scenes. The experiments demonstrate that our model decreases the computation time for codebook generation. Moreover, our model achieves better performance for scene categorization, and the gain becomes more pronounced at small codebook sizes.
    IEICE Transactions on Information and Systems 06/2011; 94-D(6):1349-1352. DOI:10.1587/transinf.E94.D.1349 · 0.19 Impact Factor
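    A minimal numpy sketch of classified vector quantization: each descriptor is first routed to a class and then quantized with that class's small sub-codebook, which keeps the search cheap at small codebook sizes. The classifier and codebooks here are invented.

    ```python
    import numpy as np

    def classify(x):
        # Hypothetical two-way classifier on a simple statistic;
        # real classified VQ would use, e.g., edge/texture classes.
        return int(x.mean() > 0)

    def quantize(x, codebooks):
        cb = codebooks[classify(x)]            # search only one sub-codebook
        return ((cb - x) ** 2).sum(axis=1).argmin()

    rng = np.random.default_rng(0)
    codebooks = {0: rng.normal(-1, 1, (25, 8)),   # sub-codebook per class
                 1: rng.normal(+1, 1, (25, 8))}
    x = rng.normal(1.0, 0.5, 8)
    print(classify(x), quantize(x, codebooks))
    ```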
  • Multi-Scale Multi-Level Generative Model in Scene Classification
    IEICE Transactions on Information and Systems 01/2011; DOI:10.1587/transinf.E94.D.167 · 0.19 Impact Factor
  • ABSTRACT: One possible solution to estimating the illumination for color constancy and white balance in video sequences would be to apply one of the many existing illumination-estimation algorithms independently to each video frame. However, the frames in a video are generally highly correlated, so we propose a video-based illumination-estimation algorithm that takes advantage of the related information between adjacent frames. The main idea of the method is to cut the video clip into different ‘scenes.’ Assuming all the frames in one scene are under the same (or similar) illuminant, we combine the information from them to calculate the chromaticity of the scene illumination. The experimental results showed that the proposed method is effective and outperforms the original single-frame methods on which it is based.
    Computational Color Imaging - Third International Workshop, CCIW 2011, Milan, Italy, April 20-21, 2011. Proceedings; 01/2011
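    A simplified numpy sketch of the scene-wise idea, using gray world as the per-frame estimator (only one of the single-frame methods the paper builds on): frames assumed to share an illuminant are pooled into a single estimate.

    ```python
    import numpy as np

    def scene_illuminant(frames):
        """frames: list of (H, W, 3) RGB arrays assumed to share one
        illuminant. Pools gray-world statistics across the whole scene
        instead of estimating per frame. Illustrative choice only."""
        mean_rgb = np.mean([f.reshape(-1, 3).mean(axis=0) for f in frames],
                           axis=0)
        return mean_rgb / mean_rgb.sum()        # illuminant chromaticity

    rng = np.random.default_rng(0)
    illum = np.array([1.0, 0.8, 0.6])           # hypothetical scene illuminant
    frames = [rng.uniform(0, 1, (32, 32, 3)) * illum for _ in range(10)]
    print(np.round(scene_illuminant(frames), 3))  # ~ illum / illum.sum()
    ```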
  • ABSTRACT: Removing shadows in color images is an important research problem in computer vision. In this paper, we propose a novel shadow removal approach, which effectively removes shadows from textured surfaces, yielding high quality shadow-free images. Our approach aims at calculating scale factors to cancel the effect of shadows. Based on the regional gray edge hypothesis, which assumes the average of the reflectance differences in a region is achromatic, the scale factors can be computed without the restrictions that former algorithms need. The experimental results show that the proposed algorithm is effective and improves the performance of former scale-based shadow removal methods.
  • ABSTRACT: Emotion categorization of natural scene images is a very useful task for automatic image analysis systems. Psychological experiments have shown that visual information at the emotion level is aggregated according to a set of rules. Hence, we attempt to discover emotion descriptors based on the composition of visual word representations. First, the composition of visual word representations models each image as a matrix whose elements record the correlations of pairwise visual words; an image collection is thereby modeled as a third-order tensor. We then discover the emotion descriptors using a novel affective probabilistic latent semantic analysis (affective-pLSA) model, an extension of the pLSA model, on this tensor representation. Considering that a natural scene image may evoke multiple emotional feelings, emotion categorization is carried out using the multilabel k-nearest-neighbor approach based on the emotion descriptors. The proposed approach has been tested on the International Affective Picture System and a collection of social images from the Flickr website. The experimental results demonstrate the effectiveness of the proposed method for eliciting image emotions.
    Optical Engineering 12/2010; 49(12). DOI:10.1117/1.3518051 · 0.96 Impact Factor
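    The categorization stage uses multilabel k-nearest-neighbor; below is a minimal numpy sketch of that step with toy descriptors and a simple vote threshold, not the full ML-kNN posterior estimate.

    ```python
    import numpy as np

    def ml_knn_predict(X_train, Y_train, x, k=3, threshold=0.5):
        """Predict a label set: a label is assigned if more than
        `threshold` of the k nearest neighbors carry it. A simplified
        vote, not the full ML-kNN posterior estimation."""
        d = np.linalg.norm(X_train - x, axis=1)
        nn = np.argsort(d)[:k]
        votes = Y_train[nn].mean(axis=0)        # per-label neighbor frequency
        return votes > threshold

    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    Y = np.array([[1, 0], [1, 1], [0, 1], [0, 1]])  # e.g. [calm, exciting]
    print(ml_knn_predict(X, Y, np.array([0.1, 0.9])))  # -> [ True  True ]
    ```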
  • ABSTRACT: Traditional Chinese painting is a unique form of art, highly regarded throughout the world for its theory, expression, and techniques. A traditional Chinese painting is composed of three parts: the main-body part, the seals part, and the scripts part. These three parts carry rich semantics, so extracting them is an important task. However, popular image processing techniques have rarely been applied in this specific domain. In this paper, a novel algorithm for extracting the scripts part of traditional Chinese painting images is proposed, covering the motivation for the algorithm, its description, and experimental results with analysis. The algorithm is mainly based on the color and structural features of the Chinese characters in the scripts part of traditional Chinese painting images. The algorithm is simple yet satisfactorily efficient.
    Software Technology and Engineering (ICSTE), 2010 2nd International Conference on; 11/2010
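    The abstract gives only the high-level idea; as a crude stand-in for the color cue (script characters are typically dense, nearly achromatic ink), here is a toy darkness-and-chroma threshold with invented values.

    ```python
    import numpy as np

    def script_mask(img, dark_thresh=60, chroma_thresh=25):
        """img: (H, W, 3) uint8 RGB. Flags pixels that are dark and
        nearly achromatic, a crude stand-in for the ink-color cue;
        both thresholds are invented for illustration."""
        v = img.astype(int)
        dark = v.max(axis=2) < dark_thresh
        achromatic = (v.max(axis=2) - v.min(axis=2)) < chroma_thresh
        return dark & achromatic

    img = np.full((4, 4, 3), 200, dtype=np.uint8)   # light background
    img[1, 1] = img[2, 2] = (30, 30, 35)            # ink-like pixels
    print(script_mask(img).sum())                   # -> 2
    ```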
  • ABSTRACT: Color constancy is an important perceptual ability of humans to recover the color of objects irrespective of the illumination, and it is also necessary for a robust machine vision system. A number of color constancy algorithms have been proposed in the literature. In particular, edge-based color constancy uses the edges of an image to estimate the light color; it has been shown to be a rich framework that can represent many existing illumination-estimation solutions with various parameter settings. However, color constancy is an ill-posed problem: every algorithm is derived under certain assumptions and can only produce the best performance when those assumptions are satisfied. In this article, we investigate a combination strategy relying on the Extreme Learning Machine (ELM) technique that integrates the outputs of edge-based color constancy under multiple parameter settings. Experiments on real image datasets show that the proposed method works better than most single color constancy methods and even some current state-of-the-art color constancy combination strategies.
    ACM Transactions on Applied Perception 10/2010; 8:5. DOI:10.1145/1857893.1857898 · 1.05 Impact Factor
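    An Extreme Learning Machine is simple enough to sketch directly: a random, fixed hidden layer plus a least-squares output layer. The toy below fits a 1-D function; the paper instead feeds the ELM the outputs of edge-based color constancy under multiple parameter settings.

    ```python
    import numpy as np

    class ELM:
        """Single-hidden-layer ELM: random fixed input weights,
        output weights solved in closed form by least squares."""
        def __init__(self, n_hidden=50, seed=0):
            self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

        def fit(self, X, y):
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            H = np.tanh(X @ self.W + self.b)          # random hidden features
            self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
            return self

        def predict(self, X):
            return np.tanh(X @ self.W + self.b) @ self.beta

    X = np.linspace(-3, 3, 200)[:, None]
    y = np.sin(X[:, 0])
    model = ELM().fit(X, y)
    print(np.abs(model.predict(X) - y).mean())  # small residual
    ```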

Publication Stats

260 Citations
25.80 Total Impact Points

Institutions

  • 1970–2012
    • Beijing Jiaotong University
      • School of Computer and Information Technology
Beijing, China
  • 2007
    • Chonbuk National University
      • Department of Electronic Engineering
Jeonju, Jeollabuk-do, South Korea
  • 2005
    • Beijing Union University
Beijing, China