Dawei Song

Tianjin University, Tianjin, China

Publications (161)

  • Article · Apr 2016 · Entropy
  • Article · Apr 2016 · Entropy
  • Peng Zhang · Qian Yu · Yuexian Hou · Dawei Song · Jingfei Li · Bin Hu
    Article · Apr 2016 · Entropy
  • Jingfei Li · Peng Zhang · Dawei Song · Yuexian Hou
    ABSTRACT: User interactions in a search system represent a rich source of implicit knowledge about the user's cognitive state and an information need that continuously evolves over time. Despite the massive effort that has gone into exploiting and incorporating this implicit knowledge in information retrieval, it remains a challenge to effectively capture term dependencies and the user's dynamic information need (reflected by query modifications) in the context of user interaction. To tackle these issues, motivated by the recent Quantum Language Model (QLM), we develop a QLM-based retrieval model for session search, which naturally incorporates the complex term dependencies occurring in the user's historical queries and clicked documents via density matrices. To capture the dynamic information within a user's search session, we propose a density matrix transformation framework and further develop an adaptive QLM ranking model. Extensive comparative experiments show the effectiveness of our session quantum language models. (See the sketch below.)
    Article · Mar 2016 · Physica A: Statistical Mechanics and its Applications
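    A minimal sketch of the density-matrix idea referenced in the abstract above, assuming a toy vocabulary, compound term vectors and a trace-based similarity; it illustrates the general QLM machinery, not the paper's exact session model or its transformation framework.
```python
import numpy as np

def density_matrix(token_groups, vocab):
    """Mix rank-1 projectors of (single or compound) term vectors into a trace-1 density matrix."""
    dim = len(vocab)
    rho = np.zeros((dim, dim))
    n = 0
    for tokens in token_groups:            # e.g. a past query or a clicked-document window
        v = np.zeros(dim)
        for t in tokens:                   # a compound "term" spans several words, which is
            if t in vocab:                 # how term dependencies enter the matrix
                v[vocab[t]] += 1.0
        if v.any():
            v /= np.linalg.norm(v)
            rho += np.outer(v, v)          # rank-1 projector |v><v|
            n += 1
    return rho / max(n, 1)

def score(rho_session, rho_doc):
    """Similarity between session and document density matrices (illustrative choice)."""
    return float(np.trace(rho_session @ rho_doc))

vocab = {w: i for i, w in enumerate(["quantum", "language", "model", "session", "search"])}
rho_s = density_matrix([["quantum", "language"], ["session", "search"]], vocab)
rho_d = density_matrix([["quantum", "language", "model"]], vocab)
print(round(score(rho_s, rho_d), 3))
```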
  • Peng Zhang · Qian Yu · Yuexian Hou · Dawei Song · Jingfei Li · Bin Hu
    ABSTRACT: Recently, a Distribution Separation Method (DSM) has been proposed for relevance feedback in information retrieval, which aims to approximate the true relevance distribution by separating a seed irrelevance distribution from the mixture distribution. While DSM has achieved promising empirical performance, its theoretical properties still need further study, as does its relation to other retrieval models. In this article, we first generalize DSM's theoretical properties by proving that its minimum correlation assumption is equivalent to a maximum (original and symmetrized) KL-divergence assumption. Second, we analytically show that the EM algorithm in a well-known mixture model is essentially a distribution separation process and can be simplified using the linear separation algorithm in DSM. Empirical results are also presented to support our theoretical analysis. (See the sketch below.)
    Article · Oct 2015
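    A minimal sketch of the linear separation step that DSM builds on, assuming a known mixing weight `lam` and a simple clip-and-renormalise correction; both are illustrative assumptions rather than the paper's estimation procedure.
```python
import numpy as np

def separate(p_mix, p_irrel, lam):
    """Assume p_mix = (1 - lam) * p_rel + lam * p_irrel and solve for p_rel."""
    p_rel = (p_mix - lam * p_irrel) / (1.0 - lam)
    p_rel = np.clip(p_rel, 0.0, None)         # remove negative mass caused by estimation noise
    return p_rel / p_rel.sum()

p_mix   = np.array([0.40, 0.30, 0.20, 0.10])  # term distribution of the feedback (mixture) documents
p_irrel = np.array([0.10, 0.10, 0.30, 0.50])  # seed irrelevance distribution
print(separate(p_mix, p_irrel, lam=0.3))      # approximate relevance distribution
```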
  • Jingfei Li · Dawei Song · Peng Zhang · Yuexian Hou
    ABSTRACT: Session search aims to improve ranking effectiveness by incorporating user interaction information, including short-term interactions within one session and global interactions from other sessions (or other users). While various session search models have been developed and a large number of interaction features have been used, there has been no systematic investigation of how different features influence session search. In this paper, we propose to classify typical interaction features into four categories (current query, current session, query change, and collective intelligence). Their impact on session search performance is investigated through a systematic empirical study under the widely used Learning-to-Rank framework. One of our key findings, different from what has been reported in the literature, is that features based on the current query and collective intelligence have a more positive influence than features based on query change and the current session. This provides insights for the development of future session search techniques. (See the sketch below.)
    Chapter · Oct 2015
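    A minimal sketch of the kind of feature-category ablation described above, using synthetic interaction features and a simple pointwise ranker in place of a full Learning-to-Rank pipeline; the four group names follow the abstract, everything else is an illustrative assumption.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
groups = {
    "current_query":           slice(0, 3),
    "current_session":         slice(3, 6),
    "query_change":            slice(6, 9),
    "collective_intelligence": slice(9, 12),
}
X = rng.normal(size=(500, 12))                                   # synthetic interaction features
y = (X[:, 0] + X[:, 9] + rng.normal(size=500) > 0).astype(int)   # synthetic relevance labels

for name, cols in groups.items():                                # train on one category at a time
    model = LogisticRegression().fit(X[:, cols], y)
    print(f"{name:>24}: train accuracy {model.score(X[:, cols], y):.2f}")
```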
  • ABSTRACT: In this paper, we prove that specific widely used information fusion models in Content-based Image Retrieval are interchangeable. In addition, we show that even advanced, non-standard fusion strategies can be represented in dual forms. These models are often classified as representing early or late fusion strategies. We also prove that the standard Rocchio algorithm, under specific similarity measurements, can be represented in a late fusion form. (See the sketch below.)
    Article · Aug 2015
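    A minimal numeric check of the Rocchio claim above under one specific choice (dot-product similarity and standard Rocchio weights): by linearity, scoring with the modified query equals a weighted, late-fusion style combination of per-vector scores. All vectors and weights are illustrative.
```python
import numpy as np

rng = np.random.default_rng(1)
q     = rng.normal(size=8)          # query vector
rels  = rng.normal(size=(3, 8))     # relevant feedback vectors
nrels = rng.normal(size=(2, 8))     # non-relevant feedback vectors
d     = rng.normal(size=8)          # document vector
alpha, beta, gamma = 1.0, 0.75, 0.15

q_rocchio = alpha * q + beta * rels.mean(axis=0) - gamma * nrels.mean(axis=0)
early = q_rocchio @ d                                            # score the modified query
late  = alpha * (q @ d) + beta * (rels @ d).mean() - gamma * (nrels @ d).mean()
print(np.isclose(early, late))                                   # True: the two forms coincide
```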
  • Yongqiang Chen · Peng Zhang · Dawei Song · Benyou Wang
    ABSTRACT: Formulating and reformulating reliable textual queries has been recognized as a challenging task in Information Retrieval (IR), even for experienced users. Most existing query expansion methods, especially those based on implicit relevance feedback, utilize the user's historical interaction data, such as clicks, scrolling and viewing time on documents, to derive a refined query model. The user's search experience could be further improved if we could uncover the user's latent query intent in real time, by capturing the user's current interaction directly at the term level. In this paper, we propose a real-time eye-tracking-based query expansion method, which is able to: (1) automatically capture the terms that the user is viewing by utilizing eye tracking techniques; and (2) derive the user's latent intent from the viewed terms using the Latent Dirichlet Allocation (LDA) approach. A systematic user study has been carried out, and the experimental results demonstrate the effectiveness of the proposed methods. (See the sketch below.)
    Conference Paper · Jul 2015
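    A minimal sketch of the LDA step described above: the terms captured by the eye tracker are treated as a pseudo-document, its topic mixture is inferred, and high-probability words of the dominant topic become expansion candidates. The corpus, the gaze terms and all parameters are illustrative assumptions; the paper's actual pipeline and data differ.
```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = [
    "eye tracking gaze fixation saccade reading",
    "query expansion retrieval relevance feedback ranking",
    "topic model latent dirichlet allocation inference",
]
vec = CountVectorizer()
X = vec.fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

gaze_terms = "relevance ranking feedback"                  # terms the user's gaze dwelt on
theta = lda.transform(vec.transform([gaze_terms]))[0]      # inferred topic mixture
topic = lda.components_[theta.argmax()]                    # word weights of the dominant topic
vocab = np.array(vec.get_feature_names_out())
print(vocab[topic.argsort()[::-1][:5]])                    # candidate expansion terms
```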
  • ABSTRACT: It has been shown that a query can be correlated with its context (in this case, the feedback images) to varying extents. We introduce an adaptive weighting scheme in which the respective weights are automatically adjusted according to the strength of the relationship between the visual query and its visual context, and between the textual query and its textual context, measured by the number of terms or visual terms (mid-level visual features) co-occurring between the current query and its context. A user simulation experiment has shown that this kind of adaptation can further improve the effectiveness of hybrid CBIR models. (See the sketch below.) Keywords: Hybrid Relevance Feedback, Visual Features, Textual Features, Early Fusion, Late Fusion, Re-Ranking, Adaptive Weighting Scheme
    Conference Paper · Jul 2015
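    A minimal sketch of the adaptive weighting idea, assuming a simple overlap-based strength measure and a linear interpolation of query and context scores; the actual weighting functions used in the paper may differ.
```python
def context_weight(query_terms, context_terms, max_weight=0.8):
    """Weight of the context grows with its (visual or textual) term overlap with the query."""
    overlap = len(set(query_terms) & set(context_terms))
    return max_weight * overlap / max(len(set(query_terms)), 1)

def fused_score(score_query, score_context, query_terms, context_terms):
    """Interpolate query-only and context-based scores with the adaptive weight."""
    w = context_weight(query_terms, context_terms)
    return (1.0 - w) * score_query + w * score_context

print(fused_score(0.6, 0.9, ["red", "car", "street"], ["car", "street", "night"]))
```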
  • ABSTRACT: Recent research has shown that the readability of documents plays an important role in the information seeking and acquisition process of users, particularly non-domain-expert users. Classical document readability measures are based on surface text features and are independent of users. In this paper, we propose to predict a document's readability by integrating traditional text readability features with users' eye movement features. We expect the latter to better encode a user's reading level in a personalized way. We have tested the proposed idea in a preliminary user evaluation and investigated the impact of different features on readability prediction. The results show that the combination of text readability features and eye movement features correlates more highly with human judgments than either feature set used individually. (See the sketch below.)
    Conference Paper · Jun 2015
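    A minimal sketch of the feature combination described above, using synthetic text and eye-movement features, a ridge regressor and Pearson correlation against synthetic "human judgments"; feature names and data are illustrative assumptions, not the study's measurements.
```python
import numpy as np
from sklearn.linear_model import Ridge
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 80
text_feats = rng.normal(size=(n, 3))   # e.g. avg sentence length, avg word length, rare-word ratio
eye_feats  = rng.normal(size=(n, 3))   # e.g. fixation count, mean fixation duration, regressions
judgments  = text_feats[:, 0] + 0.8 * eye_feats[:, 1] + rng.normal(scale=0.5, size=n)

for name, X in [("text only", text_feats),
                ("eye only", eye_feats),
                ("combined", np.hstack([text_feats, eye_feats]))]:
    pred = Ridge().fit(X, judgments).predict(X)
    r, _ = pearsonr(pred, judgments)
    print(f"{name:>9}: r = {r:.2f}")
```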
  • Conference Paper · May 2015
  • Thanh Vu · Alistair Willis · Dawei Song
    ABSTRACT: Recent research has shown that mining and modelling search tasks helps improve the performance of search personalisation. Many approaches have been proposed to model a search task using topics discussed in relevant documents, where the topics are usually obtained from a human-generated online ontology such as the Open Directory Project. A limitation of these approaches is that many documents may not contain the topics covered in the ontology. Moreover, previous studies have largely ignored the dynamic nature of the search task: as time passes, the search intent and user interests may also change. This paper addresses these problems by modelling search tasks with time-awareness using latent topics, which are automatically extracted from the task's relevant documents by an unsupervised topic modelling method (i.e., Latent Dirichlet Allocation). In the experiments, we utilise the time-aware search task to re-rank the result list returned by a commercial search engine and demonstrate a significant improvement in ranking quality. (See the sketch below.)
    Conference Paper · May 2015
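    A minimal sketch of the time-aware re-ranking step, assuming documents and the task are represented by LDA-style topic mixtures, an exponential time decay over earlier relevant documents, and a linear interpolation with the original engine score; the decay rate and interpolation weight are illustrative assumptions.
```python
import numpy as np

def task_profile(doc_topics, ages_in_days, decay=0.1):
    """Time-aware task model: recently relevant documents count more."""
    w = np.exp(-decay * np.asarray(ages_in_days, dtype=float))
    profile = (w[:, None] * np.asarray(doc_topics)).sum(axis=0)
    return profile / profile.sum()

def rerank_score(engine_score, doc_topic, profile, alpha=0.6):
    """Combine the original engine score with topic similarity to the task profile."""
    sim = float(np.dot(doc_topic, profile))
    return alpha * engine_score + (1.0 - alpha) * sim

profile = task_profile([[0.7, 0.2, 0.1], [0.2, 0.7, 0.1]], ages_in_days=[1, 10])
print(rerank_score(0.55, np.array([0.6, 0.3, 0.1]), profile))
```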
  • Xiaozhao Zhao · Yuexian Hou · Dawei Song · Wenjie Li
    ABSTRACT: Typical dimensionality reduction (DR) methods are often data-oriented, focusing on directly reducing the number of random variables (features) while retaining the maximal variations in the high-dimensional data. In unsupervised situations, one of the main limitations of these methods lies in their dependency on the scale of data features. This paper aims to address the problem from a new perspective and considers model-oriented dimensionality reduction in parameter spaces of binary multivariate distributions. Specifically, we propose a general parameter reduction criterion, called the Confident-Information-First (CIF) principle, to maximally preserve confident parameters and rule out less confident parameters. Formally, the confidence of each parameter can be assessed by its contribution to the expected Fisher information distance within the geometric manifold over the neighbourhood of the underlying real distribution. We then revisit Boltzmann machines (BM) from a model selection perspective and theoretically show that both the fully visible BM (VBM) and the BM with hidden units can be derived from the general binary multivariate distribution using the CIF principle. This helps us uncover and formalize the essential parts of the target density that a BM aims to capture and the non-essential parts that a BM should discard. Guided by the theoretical analysis, we develop a sample-specific CIF for model selection of BMs that is adaptive to the observed samples. The method is studied in a series of density estimation experiments and has been shown to be effective in terms of estimation accuracy. (See the formula sketch below.)
    Article · Feb 2015
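    A hedged, simplified rendering of the quantities the CIF principle relies on (notation ours, not necessarily the paper's): the Fisher information metric measures the distance between nearby distributions, and a parameter's "confidence" is read off from its contribution to that distance.
```latex
% Fisher information metric on the manifold of distributions p_\theta:
\[
  ds^2 \;=\; \sum_{i,j} g_{ij}(\theta)\, d\theta_i\, d\theta_j,
  \qquad
  g_{ij}(\theta) \;=\; \mathbb{E}_{p_\theta}\!\left[
      \frac{\partial \log p_\theta(x)}{\partial \theta_i}\,
      \frac{\partial \log p_\theta(x)}{\partial \theta_j}\right].
\]
% CIF-style reduction (informally): keep the parameters whose contribution to the
% expected distance ds^2 around the underlying distribution is largest, and rule
% out the remaining, less confident ones.
```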
  • Qiuchi Li · Jingfei Li · Peng Zhang · Dawei Song
    Conference Paper · Jan 2015
  • Ning Xing · Yuexian Hou · Peng Zhang · Wenjie Li · Dawei Song
    Conference Paper · Jan 2015
  • ABSTRACT: Query language modeling based on relevance feedback has been widely applied to improve the effectiveness of information retrieval. However, intra-query term dependencies (i.e., the dependencies between different query terms and term combinations) have not yet been sufficiently addressed in the existing approaches. This article aims to investigate this issue within a comprehensive framework, namely the Aspect Query Language Model (AM). We propose to extend the AM with a hidden Markov model (HMM) structure to incorporate the intra-query term dependencies and learn the structure of a novel aspect HMM (AHMM) for query language modeling. In the proposed AHMM, the combinations of query terms are viewed as latent variables representing query aspects. They further form an ergodic HMM, where the dependencies between latent variables (nodes) are modeled as the transition probabilities. The segmented chunks from the feedback documents are considered as observables of the HMM. The AHMM structure is then optimized by the HMM, which can estimate the prior of the latent variables and the probability distribution of the observed chunks. Our extensive experiments on three large-scale Text REtrieval Conference (TREC) collections have shown that our method not only significantly outperforms a number of strong baselines in terms of both effectiveness and robustness, but also achieves better results than the AM and another state-of-the-art approach, namely the latent concept expansion model. (See the sketch below.) © 2014 Wiley Periodicals, Inc.
    Article · Oct 2014 · Computational Intelligence
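    A minimal sketch of the ergodic-HMM machinery the AHMM builds on, with latent states standing in for query aspects and feedback-document chunks as observations; the probabilities below are fixed illustrative numbers, whereas the AHMM learns them from the feedback documents.
```python
import numpy as np

A   = np.array([[0.7, 0.3],       # transition probabilities between two aspects (ergodic)
                [0.4, 0.6]])
pi  = np.array([0.5, 0.5])        # prior over aspects
B   = np.array([[0.6, 0.3, 0.1],  # P(chunk type | aspect)
                [0.1, 0.3, 0.6]])
obs = [0, 2, 1]                   # indices of observed feedback chunks

alpha = pi * B[:, obs[0]]                     # forward variables
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]             # standard forward recursion
print("chunk-sequence likelihood:", alpha.sum())
print("posterior over aspects:", alpha / alpha.sum())
```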
  • ABSTRACT: In this paper, we prove that specific early and specific late fusion strategies are interchangeable. In the case of late fusion, we consider not only linear but also nonlinear combinations of scores. Our findings are important from both theoretical and practical (applied) perspectives. The duality of specific fusion strategies also answers the question of why, in the literature, the experimental results for early and late fusion are often similar. The most important aspect of our research is, however, related to the presumed drawbacks of the aforementioned fusion strategies. It is an accepted fact that the main drawback of early fusion is the curse of dimensionality (the generation of high-dimensional vectors), whereas the main drawback of late fusion is its inability to capture the correlation between feature spaces. Our proof of the interchangeability of specific fusion schemes undermines this belief. Only one of two possibilities exists: either late fusion is capable of capturing the correlation between feature spaces, or the interaction between the early fusion operators and the similarity measurements decorrelates the feature spaces. (See the sketch below.) Keywords: information and data fusion, early fusion, late fusion, Content-based Image Retrieval, Information Retrieval, Multimedia Retrieval, textual representation, visual representation
    Conference Paper · Jul 2014
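    A minimal numeric check of the interchangeability claim for one specific pair of strategies (dot-product similarity; early fusion by feature concatenation vs. late fusion by score summation); the paper's result covers a broader family, including nonlinear score combinations, which this sketch does not attempt to reproduce.
```python
import numpy as np

rng = np.random.default_rng(2)
q_text, q_vis = rng.normal(size=50), rng.normal(size=20)   # textual / visual query features
d_text, d_vis = rng.normal(size=50), rng.normal(size=20)   # textual / visual document features

early = np.concatenate([q_text, q_vis]) @ np.concatenate([d_text, d_vis])
late  = q_text @ d_text + q_vis @ d_vis
print(np.isclose(early, late))   # True for this particular (linear) choice
```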
  • ABSTRACT: Recent research has shown that the performance of search engines can be improved by enriching a user's personal profile with information about other users with shared interests. In the existing approaches, groups of similar users are often statically determined, e.g., based on the common documents that users clicked. However, these static grouping methods are query-independent and neglect the fact that users in a group may have different interests with respect to different topics. In this paper, we argue that common-interest groups should be constructed dynamically in response to the user's input query. We propose a personalisation framework in which a user profile is enriched using information from other users dynamically grouped with respect to an input query. The experimental results on query logs from a major commercial web search engine demonstrate that our framework improves the performance of the web search engine and also achieves better performance than the static grouping method. (See the sketch below.)
    Conference Paper · Jul 2014
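    A minimal sketch of query-dependent grouping: user-to-user similarity is measured only on the topics activated by the input query, the closest users form the group, and their profiles are mixed into the current user's profile. Profiles, topic masks and the mixing weight are illustrative assumptions, not the paper's framework.
```python
import numpy as np

profiles = {                                   # per-user topic-interest vectors
    "u1": np.array([0.8, 0.1, 0.1]),
    "u2": np.array([0.7, 0.2, 0.1]),
    "u3": np.array([0.1, 0.1, 0.8]),
}

def enrich(user, query_topics, k=1, mix=0.3):
    """Enrich a user's profile with the profiles of users similar w.r.t. the query's topics."""
    mask = np.asarray(query_topics, dtype=float)        # which topics the query activates
    me = profiles[user] * mask
    sims = {u: float(me @ (p * mask)) for u, p in profiles.items() if u != user}
    group = sorted(sims, key=sims.get, reverse=True)[:k]
    enriched = (1 - mix) * profiles[user] + mix * np.mean([profiles[u] for u in group], axis=0)
    return enriched / enriched.sum()

print(enrich("u1", query_topics=[1, 0, 0]))    # u2 (not u3) is grouped for this query
```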
  • Yuexian Hou · Bo Wang · Dawei Song · Xiaochun Cao · Wenjie Li
    ABSTRACT: In density estimation, the Maximum Entropy (Maxent) model can effectively use reliable prior information via nonparametric constraints, that is, linear constraints without empirical parameters. However, reliable prior information is often insufficient, and parametric constraints become necessary but pose considerable implementation complexity. Improper setting of parametric constraints can result in overfitting or underfitting. To alleviate this problem, a generalization of Maxent, under the Tsallis entropy framework, is proposed. The proposed method introduces a convex quadratic constraint for the correction of the (expected) quadratic Tsallis Entropy Bias (TEB). Specifically, we demonstrate that the expected quadratic Tsallis entropy of sampling distributions is smaller than that of the underlying real distribution under the frequentist, Bayesian prior, and Bayesian posterior frameworks, respectively. This expected entropy reduction is exactly the (expected) TEB, which can be expressed in closed form and acts as a consistent and unbiased correction with an appropriate convergence rate. TEB indicates that the entropy of a specific sampling distribution should be increased accordingly. This entails a quantitative reinterpretation of the Maxent principle. By compensating for TEB and meanwhile forcing the resulting distribution to be close to the sampling distribution, our generalized quadratic Tsallis Entropy Bias Compensation (TEBC) Maxent can be expected to alleviate both overfitting and underfitting. We also present a connection between TEB and the Lidstone estimator. As a result, a TEB-Lidstone estimator is developed by analytically identifying the rate of probability correction in Lidstone. Extensive empirical evaluation shows promising performance of both TEBC Maxent and TEB-Lidstone in comparison with various state-of-the-art density estimation methods. (See the worked formula below.)
    Article · Jul 2014 · Computational Intelligence
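    A hedged worked example of the frequentist case of the bias described above (notation ours, not necessarily the paper's): for the quadratic Tsallis entropy and the empirical distribution of n i.i.d. samples, the expected entropy shrinks by a closed-form factor, consistent with the abstract's claim.
```latex
% Quadratic Tsallis entropy: S_2(p) = 1 - \sum_i p_i^2.
% With \hat{p} the empirical distribution of n i.i.d. samples from p,
% E[\hat{p}_i^2] = p_i^2 + p_i(1 - p_i)/n, hence
\[
  \mathbb{E}\big[S_2(\hat p)\big]
  \;=\; \Big(1 - \tfrac{1}{n}\Big)\, S_2(p)
  \;\le\; S_2(p),
  \qquad
  \text{bias} \;=\; \frac{S_2(p)}{n},
\]
% a closed-form gap vanishing at rate 1/n: the sampling distribution's expected
% quadratic Tsallis entropy is systematically smaller than that of the real distribution.
```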
  • Xiaozhao Zhao · Yuexian Hou · Dawei Song · Wenjie Li
    ABSTRACT: The principle of extreme physical information (EPI) can be used to derive many known laws and distributions in theoretical physics by extremizing the physical information loss K, i.e., the difference between the observed Fisher information I and the intrinsic information bound J of the physical phenomenon being measured. However, for complex cognitive systems of high dimensionality (e.g., human language processing and image recognition), the information bound J could be excessively larger than I (J >> I) due to insufficient observation, which would lead to serious over-fitting problems in the derivation of cognitive models. Moreover, there is a lack of an established exact invariance principle that gives rise to the bound information in universal cognitive systems. This limits the direct application of EPI. To narrow the gap between I and J, in this paper we propose a confident-information-first (CIF) principle to lower the information bound J by preserving confident parameters and ruling out unreliable or noisy parameters in the probability density function being measured. The confidence of each parameter can be assessed by its contribution to the expected Fisher information distance between the physical phenomenon and its observations. In addition, given a specific parametric representation, this contribution can often be directly assessed by the Fisher information, which establishes a connection with the inverse variance of any unbiased estimate for the parameter via the Cramer-Rao bound. We then consider dimensionality reduction in the parameter spaces of binary multivariate distributions, and show that the single-layer Boltzmann machine without hidden units (SBM) can be derived using the CIF principle. An illustrative experiment is conducted to show how the CIF principle improves the density estimation performance.
    Article · Jul 2014 · Entropy

Publication Stats

1k Citations
41.78 Total Impact Points

Institutions

  • 2012-2014
    • Tianjin University
      • • School of Computer Science and Technology
      • • Department of Computer Science
Tianjin, China
  • 2013
    • Tianjin Open University
Tianjin, China
    • The Open University (UK)
      Milton Keynes, England, United Kingdom
    • Shanghai Open University
      Shanghai, Shanghai Shi, China
  • 2008-2012
    • The Robert Gordon University
      • School of Computing Science and Digital Media
      Aberdeen, Scotland, United Kingdom
    • Interamerican Open University
      Buenos Aires, Buenos Aires F.D., Argentina
  • 2008-2009
    • Milton Keynes College
      Milton Keynes, England, United Kingdom
  • 2001-2006
    • University of Queensland 
      • Distributed Systems Technology Centre
      Brisbane, Queensland, Australia
  • 1999
    • The Chinese University of Hong Kong
      • Department of Systems Engineering and Engineering Management
      Hong Kong, Hong Kong