Dawei Song

Tianjin University, Tianjin, China

Publications (150) · 31.95 Total Impact

  • Peng Zhang · Qian Yu · Yuexian Hou · Dawei Song · Jingfei Li · Bin Hu ·
    ABSTRACT: Recently, a Distribution Separation Method (DSM) was proposed for relevance feedback in information retrieval, which aims to approximate the true relevance distribution by separating a seed irrelevance distribution from the mixture distribution. While DSM has achieved promising empirical performance, its theoretical properties, and its relation to other retrieval models, require further study. In this article, we first generalize DSM's theoretical properties by proving that its minimum correlation assumption is equivalent to a maximum (original and symmetrized) KL-divergence assumption. Second, we analytically show that the EM algorithm in a well-known mixture model is essentially a distribution separation process and can be simplified using the linear separation algorithm in DSM. Empirical results are also presented to support our theoretical analysis.
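The core linear separation step in DSM can be illustrated in a few lines. This is a minimal sketch under the simplifying assumption that the observed mixture is a known convex combination of the relevance and seed irrelevance distributions with a known mixing weight alpha (the method itself estimates these quantities rather than assuming them):

```python
import numpy as np

def separate_distribution(mixture, irrelevance, alpha):
    """Linear distribution separation (simplified sketch of DSM's core step).

    Assumes: mixture = alpha * relevance + (1 - alpha) * irrelevance,
    so the relevance distribution can be recovered linearly.
    """
    rel = (mixture - (1.0 - alpha) * irrelevance) / alpha
    rel = np.clip(rel, 0.0, None)   # drop small negative estimates
    return rel / rel.sum()          # renormalise to a distribution

# toy example over a 4-term vocabulary
relevance_true = np.array([0.5, 0.3, 0.1, 0.1])
irrelevance    = np.array([0.1, 0.1, 0.4, 0.4])
alpha = 0.6
mixture = alpha * relevance_true + (1 - alpha) * irrelevance

recovered = separate_distribution(mixture, irrelevance, alpha)
```

When the mixing weight is known exactly and no clipping is triggered, the relevance distribution is recovered exactly, as in this toy example.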
  • Leszek Kaliciak · Hans Myrhaug · Ayse Goker · Dawei Song ·
    ABSTRACT: In this paper, we prove that specific widely used information fusion models in Content-based Image Retrieval are interchangeable. In addition, we show that even advanced, non-standard fusion strategies can be represented in dual forms. These models are often classified as representing early or late fusion strategies. We also prove that the standard Rocchio algorithm with specific similarity measurements can be represented in a late fusion form.
  • Source
    Yongqiang Chen · Peng Zhang · Dawei Song · Benyou Wang ·
    ABSTRACT: Formulating and reformulating reliable textual queries has been recognized as a challenging task in Information Retrieval (IR), even for experienced users. Most existing query expansion methods, especially those based on implicit relevance feedback, utilize the user's historical interaction data, such as clicks, scrolling and viewing time on documents, to derive a refined query model. It is further expected that the user's search experience would be largely improved if we could uncover the user's latent query intention, in real time, by capturing the user's current interaction directly at the term level. In this paper, we propose a real-time eye tracking based query expansion method, which is able to: (1) automatically capture the terms that the user is viewing by utilizing eye tracking techniques; (2) derive the user's latent intent from the eye tracking terms using the Latent Dirichlet Allocation (LDA) approach. A systematic user study has been carried out and the experimental results demonstrate the effectiveness of our proposed methods.
    CIKM 2015; 07/2015
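The expansion step can be sketched as follows. The method above derives latent intent from the gaze terms with LDA; as a simplified, hypothetical stand-in, this sketch interpolates the original query model directly with the distribution of terms the user's gaze dwelt on:

```python
from collections import Counter

def expand_query(query_terms, viewed_terms, lam=0.5):
    """Interpolate the original query model with a distribution over the
    terms the user's gaze dwelt on. (A stand-in sketch: the paper infers
    latent intent from the gaze terms with LDA instead of using the raw
    gaze-term distribution directly.)"""
    q = Counter(query_terms)
    v = Counter(viewed_terms)
    q_total, v_total = sum(q.values()), sum(v.values())
    vocab = set(q) | set(v)
    return {t: lam * q[t] / q_total + (1 - lam) * v[t] / v_total
            for t in vocab}

model = expand_query(["neural", "search"],
                     ["neural", "ranking", "ranking", "search"])
```

Terms viewed but absent from the original query (here, "ranking") enter the expanded model with weight proportional to their gaze frequency.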
  • Leszek Kaliciak · Hans Myrhaug · Ayse Goker · Dawei Song ·
    ABSTRACT: It has been shown that a query can be correlated with its context (in this case, the feedback images) to varying degrees. We introduce an adaptive weighting scheme in which the respective weights are automatically adjusted according to the strength of the relationship between the visual query and its visual context, and between the textual query and its textual context, measured as the number of terms or visual terms (mid-level visual features) co-occurring between the current query and its context. A user-simulation experiment has shown that this kind of adaptation can indeed further improve the effectiveness of hybrid CBIR models. Keywords: Hybrid Relevance Feedback, Visual Features, Textual Features, Early Fusion, Late Fusion, Re-Ranking, Adaptive Weighting Scheme
    Fusion 2015 - 18th International Conference on Information Fusion, Washington DC; 07/2015
  • Source
    ABSTRACT: Recent research has shown that the readability of documents plays an important role in the information seeking and acquisition process of users, particularly non-domain-expert users. Classical document readability measures are based on surface text features and are independent of users. In this paper, we propose to predict a document's readability by integrating traditional text readability features with users' eye movement features. We expect that the latter better encode users' reading levels in a personalized way. We have tested the proposed idea in a preliminary user evaluation and investigated the impact of different features on readability prediction. The results show that the combination of text readability features and eye movement features has a higher correlation with human judgments than either of them used individually.
    SIGIR workshop on NeuroIR 2015; 06/2015
  • Source
    Chen Yongqiang · Qingtao Ren · Peng Zhang · Dawei Song · Yuexian Hou ·

    SIGIR workshop on NeuroIR 2015; 05/2015
  • Source
    Thanh Vu · Alistair Willis · Dawei Song ·
    ABSTRACT: Recent research has shown that mining and modelling search tasks helps improve the performance of search personalisation. Many approaches have been proposed to model a search task using topics discussed in relevant documents, where the topics are usually obtained from a human-generated online ontology such as the Open Directory Project. A limitation of these approaches is that many documents may not contain the topics covered in the ontology. Moreover, previous studies have largely ignored the dynamic nature of the search task: as time passes, the search intent and user interests may also change. This paper addresses these problems by modelling search tasks with time-awareness using latent topics, which are automatically extracted from the task's relevant documents by an unsupervised topic modelling method (i.e., Latent Dirichlet Allocation). In the experiments, we utilise the time-aware search task to re-rank the result list returned by a commercial search engine and demonstrate a significant improvement in ranking quality.
    The 24th international conference on World Wide Web; 05/2015
  • Source
    Xiaozhao Zhao · Yuexian Hou · Dawei Song · Wenjie Li ·
    ABSTRACT: Typical dimensionality reduction (DR) methods are often data-oriented, focusing on directly reducing the number of random variables (features) while retaining the maximal variations in the high-dimensional data. In unsupervised situations, one of the main limitations of these methods lies in their dependency on the scale of data features. This paper aims to address the problem from a new perspective and considers model-oriented dimensionality reduction in parameter spaces of binary multivariate distributions. Specifically, we propose a general parameter reduction criterion, called the Confident-Information-First (CIF) principle, to maximally preserve confident parameters and rule out less confident parameters. Formally, the confidence of each parameter can be assessed by its contribution to the expected Fisher information distance within the geometric manifold over the neighbourhood of the underlying real distribution. We then revisit Boltzmann machines (BM) from a model selection perspective and theoretically show that both the fully visible BM (VBM) and the BM with hidden units can be derived from the general binary multivariate distribution using the CIF principle. This can help us uncover and formalize the essential parts of the target density that BM aims to capture and the non-essential parts that BM should discard. Guided by the theoretical analysis, we develop a sample-specific CIF for model selection of BM that is adaptive to the observed samples. The method is studied in a series of density estimation experiments and is shown to be effective in terms of estimation accuracy.
  •
    ABSTRACT: Query language modeling based on relevance feedback has been widely applied to improve the effectiveness of information retrieval. However, intra-query term dependencies (i.e., the dependencies between different query terms and term combinations) have not yet been sufficiently addressed in the existing approaches. This article aims to investigate this issue within a comprehensive framework, namely the Aspect Query Language Model (AM). We propose to extend the AM with a hidden Markov model (HMM) structure to incorporate the intra-query term dependencies and learn the structure of a novel aspect HMM (AHMM) for query language modeling. In the proposed AHMM, the combinations of query terms are viewed as latent variables representing query aspects. They further form an ergodic HMM, where the dependencies between latent variables (nodes) are modeled as the transitional probabilities. The segmented chunks from the feedback documents are considered as observables of the HMM. Then the AHMM structure is optimized by the HMM, which can estimate the prior of the latent variables and the probability distribution of the observed chunks. Our extensive experiments on three large-scale text retrieval conference (TREC) collections have shown that our method not only significantly outperforms a number of strong baselines in terms of both effectiveness and robustness but also achieves better results than the AM and another state-of-the-art approach, namely the latent concept expansion model. © 2014 Wiley Periodicals, Inc.
    Computational Intelligence 10/2014; DOI:10.1111/coin.12058 · 0.67 Impact Factor
  • Leszek Kaliciak · Hans Myrhaug · Ayse Goker · Dawei Song ·
    ABSTRACT: In this paper, we prove that specific early and specific late fusion strategies are interchangeable. In the case of late fusion, we consider not only linear but also nonlinear combinations of scores. Our findings are important from both theoretical and practical (applied) perspectives. The duality of specific fusion strategies also explains why, in the literature, the experimental results for early and late fusion are often similar. The most important aspect of our research, however, relates to the presumed drawbacks of the aforementioned fusion strategies. It is an accepted fact that the main drawback of early fusion is the curse of dimensionality (the generation of high-dimensional vectors), whereas the main drawback of late fusion is its inability to capture correlation between feature spaces. Our proof of the interchangeability of specific fusion schemes undermines this belief. One of two possibilities must hold: either late fusion is capable of capturing the correlation between feature spaces, or the interaction between the early fusion operators and the similarity measurements decorrelates the feature spaces. Keywords: Information and data fusion, early fusion, late fusion, Content-based Image Retrieval, Information Retrieval, Multimedia Retrieval, textual representation, visual representation
    The 17th International Conference on Information Fusion (Fusion 2014), Salamanca, Spain; 07/2014
  • Source
    ABSTRACT: Recent research has shown that the performance of search engines can be improved by enriching a user's personal profile with information about other users with shared interests. In the existing approaches, groups of similar users are often statically determined, e.g., based on the common documents that users clicked. However, these static grouping methods are query-independent and neglect the fact that users in a group may have different interests with respect to different topics. In this paper, we argue that common interest groups should be dynamically constructed in response to the user's input query. We propose a personalisation framework in which a user profile is enriched using information from other users dynamically grouped with respect to an input query. The experimental results on query logs from a major commercial web search engine demonstrate that our framework improves the performance of the web search engine and also achieves better performance than the static grouping method.
    The 37th international ACM SIGIR conference; 07/2014
  • Xiaozhao Zhao · Yuexian Hou · Dawei Song · Wenjie Li ·
    ABSTRACT: The principle of extreme physical information (EPI) can be used to derive many known laws and distributions in theoretical physics by extremizing the physical information loss K, i.e., the difference between the observed Fisher information I and the intrinsic information bound J of the physical phenomenon being measured. However, for complex cognitive systems of high dimensionality (e.g., human language processing and image recognition), the information bound J can be much larger than I (J >> I), due to insufficient observation, which would lead to serious over-fitting problems in the derivation of cognitive models. Moreover, there is a lack of an established exact invariance principle that gives rise to the bound information in universal cognitive systems. This limits the direct application of EPI. To narrow the gap between I and J, in this paper, we propose a confident-information-first (CIF) principle to lower the information bound J by preserving confident parameters and ruling out unreliable or noisy parameters in the probability density function being measured. The confidence of each parameter can be assessed by its contribution to the expected Fisher information distance between the physical phenomenon and its observations. In addition, given a specific parametric representation, this contribution can often be directly assessed by the Fisher information, which establishes a connection with the inverse variance of any unbiased estimate for the parameter via the Cramér-Rao bound. We then consider the dimensionality reduction in the parameter spaces of binary multivariate distributions. We show that the single-layer Boltzmann machine without hidden units (SBM) can be derived using the CIF principle. An illustrative experiment is conducted to show how the CIF principle improves the density estimation performance.
    Entropy 07/2014; 16(7):3670-3688. DOI:10.3390/e16073670 · 1.50 Impact Factor
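The connection the abstract draws between a parameter's confidence and the inverse variance of its unbiased estimates is the Cramér-Rao bound; in standard notation,

```latex
\[
\operatorname{Var}(\hat{\theta}_i) \;\ge\; \bigl[\mathcal{I}(\theta)^{-1}\bigr]_{ii},
\qquad
\mathcal{I}_{ij}(\theta) \;=\;
\mathbb{E}\!\left[\frac{\partial \log p(x;\theta)}{\partial \theta_i}\,
\frac{\partial \log p(x;\theta)}{\partial \theta_j}\right],
\]
```

so parameters carrying large Fisher information admit low-variance unbiased estimates and are the "confident" ones the CIF principle preserves.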
  • Peng Zhang · Dawei Song · Jun Wang · Yuexian Hou ·
    ABSTRACT: The estimation of query model is an important task in language modeling (LM) approaches to information retrieval (IR). The ideal estimation is expected to be not only effective in terms of high mean retrieval performance over all queries, but also stable in terms of low variance of retrieval performance across different queries. In practice, however, improving effectiveness can sacrifice stability, and vice versa. In this paper, we propose to study this tradeoff from a new perspective, i.e., the bias–variance tradeoff, which is a fundamental theory in statistics. We formulate the notion of bias–variance regarding retrieval performance and estimation quality of query models. We then investigate several estimated query models, by analyzing when and why the bias–variance tradeoff will occur, and how the bias and variance can be reduced simultaneously. A series of experiments on four TREC collections have been conducted to systematically evaluate our bias–variance analysis. Our approach and results will potentially form an analysis framework and a novel evaluation strategy for query language modeling.
    Information Processing & Management 01/2014; 50(1):199–217. DOI:10.1016/j.ipm.2013.08.004 · 1.27 Impact Factor
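The effectiveness-stability tradeoff can be illustrated with a toy decomposition. This is a hedged sketch, not the paper's formulation over estimated query models: it simply treats each query's average precision (AP) as an estimate of an ideal score of 1, so the squared bias reflects the mean effectiveness shortfall and the variance reflects instability across queries:

```python
import statistics

def bias_variance(ap_scores, ideal=1.0):
    """Toy bias-variance decomposition of per-query retrieval performance.

    bias^2: squared gap between mean AP and the ideal score.
    variance: spread of AP across queries (instability).
    """
    mean_ap = statistics.fmean(ap_scores)
    bias_sq = (ideal - mean_ap) ** 2
    var = statistics.pvariance(ap_scores)
    return bias_sq, var

# two hypothetical systems with the same mean AP but different stability
stable   = [0.40, 0.42, 0.38, 0.40]
unstable = [0.10, 0.70, 0.05, 0.75]
```

Both systems have the same bias term, yet the second is far less stable, which is exactly the distinction a mean-only metric such as MAP cannot make.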
  • Source
    ABSTRACT: This paper reports on an approach to the analysis of form (layout and formatting) during genre recognition, recorded using eye tracking. The researchers focused on eight different types of e-mail, such as calls for papers, newsletters and spam, chosen to represent different genres. The study involved the collection of oculographic behaviour data, based on scanpath duration and scanpath length metrics, to highlight the ways in which people view the features of genres. We found that genre analysis based on purpose and form (layout features, etc.) was an effective means of identifying the characteristics of these e-mails. The research, carried out on a group of 24 participants, highlighted their interaction with and interpretation of the e-mail texts and the visual cues or features perceived. In addition, the ocular strategies of scanning and skimming that they employed for processing the texts by block, genre and representation were evaluated.
    Information Processing & Management 01/2014; 50(1):175–198. DOI:10.1016/j.ipm.2013.08.005 · 1.27 Impact Factor
  • Jingfei Li · Dawei Song · Peng Zhang · Ji-Rong Wen · Zhicheng Dou ·
    ABSTRACT: Personalized search has recently attracted increasing attention. This paper focuses on utilizing click-through data to personalize web search results, from a novel perspective based on subspace projection. Specifically, we represent a user profile as a vector subspace spanned by a basis generated from a word-correlation matrix, which is able to capture the dependencies between words in the “satisfied click” (SAT Click) documents. A personalized score for each document in the original result list returned by a search engine is computed by projecting the document (represented as a vector or another word-correlation subspace) onto the user profile subspace. The personalized scores are then used to re-rank the documents through the Borda ranking fusion method. Empirical evaluation is carried out on a real user log data set collected from a prominent search engine (Bing). Experimental results demonstrate the effectiveness of our methods, especially for queries with high click entropy.
    Information Retrieval Technology, 01/2014: pages 160-171;
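The projection step can be sketched with a small linear-algebra example. This is a minimal, hypothetical construction: the paper builds the word-correlation matrix from SAT-click documents and fuses the resulting scores with Borda ranking, both of which are omitted here:

```python
import numpy as np

def profile_basis(word_corr, k=2):
    """Orthonormal basis for the user-profile subspace: the top-k left
    singular vectors of a word-correlation matrix (simplified sketch of
    the paper's construction from SAT-click documents)."""
    u, s, _ = np.linalg.svd(word_corr)
    return u[:, :k]

def personalized_score(doc_vec, basis):
    """Squared norm of the document vector's projection onto the profile
    subspace, normalised by the document's squared norm."""
    proj = basis @ (basis.T @ doc_vec)
    return float(np.dot(proj, proj) / np.dot(doc_vec, doc_vec))

# hypothetical word-correlation matrix over a 3-word vocabulary, where
# words 0 and 1 strongly co-occur in the user's SAT-click documents
corr = np.array([[1.0, 0.9, 0.0],
                 [0.9, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])
B = profile_basis(corr, k=2)
```

A document aligned with the correlated word pair scores near 1, while one orthogonal to the profile subspace scores near 0, which is the signal used for re-ranking.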
  •
    ABSTRACT: Modern search engines have been moving away from simplistic interfaces that aimed at satisfying a user's need with a single-shot query. Interactive features are now integral parts of web search engines. However, generating good query modification suggestions remains a challenging issue. Query log analysis is one of the major strands of work in this direction. Although much research has been performed on query logs collected on the web as a whole, query log analysis to enhance search on smaller and more focused collections has attracted less attention, despite its increasing practical importance. In this article, we report on a systematic study of different query modification methods applied to a substantial query log collected on a local website that already uses an interactive search engine. We conducted experiments in which we asked users to assess the relevance of potential query modification suggestions that have been constructed using a range of log analysis methods and different baseline approaches. The experimental results demonstrate the usefulness of log analysis to extract query modification suggestions. Furthermore, our experiments demonstrate that a more fine-grained approach than grouping search requests into sessions allows for extraction of better refinement terms from query log files.
    Journal of the American Society for Information Science and Technology 10/2013; 64(10):1975–1994. DOI:10.1002/asi.22901 · 1.85 Impact Factor
  • Teng Ma · Yuexian Hou · Xiaozhao Zhao · Dawei Song ·
    ABSTRACT: The road network design problem is to optimize a road network by selecting paths to improve, or adding paths to the existing network, under certain constraints, e.g., on the weighted sum of modification costs. Owing to its multi-objective nature, the road network design problem is often challenging for designers. Empirically, the smaller the diameter of a road network, the more connected and efficient it is. Based on this observation, we propose a set of constrained convex models for designing road networks with small diameters. Specifically, we theoretically prove that the diameter of the road network, evaluated w.r.t. the travel times in the network, can be bounded via the algebraic connectivity in spectral graph theory, since the upper and lower bounds of the diameter are inversely proportional to the algebraic connectivity. We can therefore focus on increasing the algebraic connectivity instead of directly reducing the network diameter, under the budget constraints. This formulation leads to a semi-definite program, whose global solution can be obtained efficiently. We present simulation experiments to show the correctness of our method, and compare it with an existing method based on a genetic algorithm.
    Proceedings of the Twenty-Third international joint conference on Artificial Intelligence; 08/2013
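The quantity being maximised here can be computed directly: the algebraic connectivity is the second-smallest eigenvalue of the graph Laplacian L = D - A. A minimal sketch on a toy network (the paper's semi-definite program for choosing which edges to improve under a budget is omitted):

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A.
    Larger values indicate a better-connected network; the paper bounds
    the (travel-time) diameter inversely by this quantity."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    eig = np.sort(np.linalg.eigvalsh(lap))
    return float(eig[1])

# toy example: closing a 4-node path into a ring adds one edge
# and raises the algebraic connectivity (shrinking the diameter)
path = np.array([[0, 1, 0, 0],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
ring = path.copy()
ring[0, 3] = ring[3, 0] = 1.0
```

Adding the single edge here roughly triples the algebraic connectivity, illustrating why edge improvements can be targeted at this spectral quantity instead of the diameter itself.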
  • Peng Zhang · Dawei Song · Jun Wang · Yuexian Hou ·
    ABSTRACT: It has been recognized that, when an information retrieval (IR) system achieves improvement in mean retrieval effectiveness (e.g. mean average precision (MAP)) over all the queries, the performance (e.g., average precision (AP)) of some individual queries could be hurt, resulting in retrieval instability. Some stability/robustness metrics have been proposed. However, they are often defined separately from the mean effectiveness metric. Consequently, there is a lack of a unified formulation of effectiveness, stability and overall retrieval quality (considering both). In this paper, we present a unified formulation based on the bias-variance decomposition. Correspondingly, a novel evaluation methodology is developed to evaluate the effectiveness and stability in an integrated manner. A case study applying the proposed methodology to evaluation of query language modeling illustrates the usefulness and analytical power of our approach.
    Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval; 07/2013
  • Yuexian Hou · Xiaozhao Zhao · Dawei Song · Wenjie Li ·
    ABSTRACT: The classical bag-of-word models for information retrieval (IR) fail to capture contextual associations between words. In this article, we propose to investigate pure high-order dependence among a number of words forming an unseparable semantic entity, that is, the high-order dependence that cannot be reduced to the random coincidence of lower-order dependencies. We believe that identifying these pure high-order dependence patterns would lead to a better representation of documents and novel retrieval models. Specifically, two formal definitions of pure dependence—unconditional pure dependence (UPD) and conditional pure dependence (CPD)—are defined. The exact decision on UPD and CPD, however, is NP-hard in general. We hence derive and prove the sufficient criteria that entail UPD and CPD, within the well-principled information geometry (IG) framework, leading to a more feasible UPD/CPD identification procedure. We further develop novel methods for extracting word patterns with pure high-order dependence. Our methods are applied to and extensively evaluated on three typical IR tasks: text classification and text retrieval without and with query expansion.
    ACM Transactions on Information Systems 07/2013; 31(3). DOI:10.1145/2493175.2493177 · 1.02 Impact Factor
  • Yong-Jin Liu · Xi Luo · Ajay Joneja · Cui-Xia Ma · Xiao-Lan Fu · DaWei Song ·
    ABSTRACT: 3-D CAD models are an important digital resource in the manufacturing industry. 3-D CAD model retrieval has become a key technology in product lifecycle management enabling the reuse of existing design data. In this paper, we propose a new method to retrieve 3-D CAD models based on 2-D pen-based sketch inputs. Sketching is a common and convenient method for communicating design intent during early stages of product design, e.g., conceptual design. However, converting sketched information into precise 3-D engineering models is cumbersome, and much of this effort can be avoided by reuse of existing data. To achieve this purpose, we present a user-adaptive sketch-based retrieval method in this paper. The contributions of this work are twofold. First, we propose a statistical measure for CAD model retrieval: the measure is based on sketch similarity and accounts for users' drawing habits. Second, for 3-D CAD models in the database, we propose a sketch generation pipeline that represents each 3-D CAD model by a small yet sufficient set of sketches that are perceptually similar to human drawings. User studies and experiments that demonstrate the effectiveness of the proposed method in the design process are presented.
    IEEE Transactions on Automation Science and Engineering 07/2013; 10(3):783-795. DOI:10.1109/TASE.2012.2228481 · 2.43 Impact Factor

Publication Stats

899 Citations
31.95 Total Impact Points


  • 2012-2014
    • Tianjin University
      • School of Computer Science and Technology
      • Department of Computer Science
      Tianjin, China
  • 2013
    • Shanghai Open University
      Shanghai, Shanghai Shi, China
    • Tianjin Open University
      Tianjin, China
  • 2005-2013
    • The Open University (UK)
      • Knowledge Media Institute
      Milton Keynes, England, United Kingdom
  • 2008-2012
    • The Robert Gordon University
      • School of Computing Science and Digital Media
      Aberdeen, Scotland, United Kingdom
    • Interamerican Open University
      Buenos Aires, Buenos Aires F.D., Argentina
  • 2006-2009
    • Milton Keynes College
      Milton Keynes, England, United Kingdom
  • 1970-2006
    • University of Queensland 
      • Distributed Systems Technology Centre
      • School of Information Technology and Electrical Engineering
      Brisbane, Queensland, Australia
  • 1999
    • The Chinese University of Hong Kong
      • Department of Systems Engineering and Engineering Management
      Hong Kong, Hong Kong