Conference Paper

Experiments with Interactive Question-Answering

DOI: 10.3115/1219840.1219866 Conference: ACL 2005, 43rd Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, 25-30 June 2005, University of Michigan, USA
Source: DBLP

ABSTRACT This paper describes a novel framework for interactive question-answering (Q/A) based on predictive questioning. Generated off-line from topic representations of complex scenarios, predictive questions represent requests for information that capture the most salient (and diverse) aspects of a topic. We present experimental results from large user studies (featuring a fully-implemented interactive Q/A system named FERRET) that demonstrate that surprising performance is achieved by integrating predictive questions into the context of a Q/A dialogue.
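The abstract does not say how a system like FERRET would surface its off-line predictive questions during a dialogue. Purely as a hypothetical sketch (the function names, the question pool, and the bag-of-words cosine measure are all illustrative assumptions, not taken from the paper), one simple way to connect a user's question to a pre-generated pool is lexical similarity ranking:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(a) & set(b)
    num = sum(a[w] * b[w] for w in common)
    denom = (math.sqrt(sum(v * v for v in a.values()))
             * math.sqrt(sum(v * v for v in b.values())))
    return num / denom if denom else 0.0

def rank_predictive_questions(user_question, predictive_questions):
    """Rank off-line-generated questions by lexical similarity
    to the user's question; drop questions with no overlap."""
    q_vec = Counter(user_question.lower().split())
    scored = [(cosine(q_vec, Counter(p.lower().split())), p)
              for p in predictive_questions]
    return [p for score, p in sorted(scored, reverse=True) if score > 0]

# Hypothetical pool of predictive questions for a scenario topic.
pool = [
    "What groups are suspected of producing biological weapons?",
    "Which countries export dual-use equipment?",
    "Who funds the research program?",
]
suggestions = rank_predictive_questions(
    "what equipment do these countries export", pool)
```

A real system would use richer semantic matching, but the sketch shows the basic retrieval step: unrelated pool questions are filtered out and the closest predictive question is offered first.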

  • ABSTRACT: Researchers and scientists increasingly find themselves in the position of having to quickly understand large amounts of technical material. Our goal is to effectively serve this need by using bibliometric text mining and summarization techniques to generate summaries of scientific literature. We show how we can use citations to produce automatically generated, readily consumable, technical extractive summaries. We first propose C-LexRank, a model for summarizing single scientific articles based on citations, which employs community detection and extracts salient information-rich sentences. Next, we further extend our experiments to summarize a set of papers, which cover the same scientific topic. We generate extractive summaries of a set of Question Answering (QA) and Dependency Parsing (DP) papers, their abstracts, and their citation sentences and show that citations have unique information amenable to creating a summary.
    Journal of Artificial Intelligence Research 02/2014; 46(1).
  • ABSTRACT: This paper proposes a supervised approach for automatically learning good decompositions of complex questions. The training data generation phase mainly builds on three steps to produce a list of simple questions corresponding to a complex question: i) the extraction of the most important sentences from a given set of relevant documents (which contains the answer to the complex question), ii) the simplification of the extracted sentences, and iii) their transformation into questions containing candidate answer terms. Such questions, considered as candidate decompositions, are manually annotated (as good or bad candidates) and used to train a Support Vector Machine (SVM) classifier. Experiments on the DUC data sets prove the effectiveness of our approach.
    Proceedings of the 17th international conference on Applications of Natural Language Processing and Information Systems; 06/2012
  • ABSTRACT: We present a novel scheme of spoken dialogue systems which uses up-to-date information on the web. The scheme is based on information extraction which is defined by the predicate-argument (P-A) structure and realized by semantic parsing. Based on the information structure, the dialogue system can perform question answering and also proactive information presentation. Feasibility of this scheme is demonstrated with experiments using a domain of baseball news. In order to automatically select useful domain-dependent P-A templates, statistical measures are introduced, resulting in completely unsupervised learning of the information structure given a corpus. Similarity measures of P-A structures are also introduced to select relevant information. An experimental evaluation shows that the proposed system can make more relevant responses compared with the conventional "bag-of-words" scheme.
    Proceedings of the SIGDIAL 2011 Conference, The 12th Annual Meeting of the Special Interest Group on Discourse and Dialogue, June 17-18, 2011, Oregon Health & Science University, Portland, Oregon, USA; 01/2011
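The C-LexRank model described in the first abstract above combines a citation-sentence similarity graph with community detection. As a much-reduced, hypothetical sketch (plain degree centrality stands in for the full LexRank-plus-community machinery, and all names and example sentences are invented), salient citation sentences can be ranked by their summed similarity to the other sentences in the graph:

```python
import math
from collections import Counter

def similarity(s1, s2):
    """Cosine similarity over bag-of-words term counts."""
    a, b = Counter(s1.lower().split()), Counter(s2.lower().split())
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    denom = (math.sqrt(sum(v * v for v in a.values()))
             * math.sqrt(sum(v * v for v in b.values())))
    return num / denom if denom else 0.0

def rank_by_centrality(sentences, threshold=0.1):
    """Score each citation sentence by summed similarity to the others,
    keeping only edges above a threshold (a degree-centrality stand-in
    for the full LexRank / community-detection step)."""
    scored = []
    for i, s in enumerate(sentences):
        sims = [similarity(s, t) for j, t in enumerate(sentences) if j != i]
        scored.append((sum(x for x in sims if x >= threshold), s))
    return [s for score, s in sorted(scored, reverse=True)]

# Invented citation sentences about a single target paper.
citations = [
    "Smith et al. propose a graph based summarizer for citations",
    "The graph based summarizer of Smith et al. extracts salient sentences",
    "An unrelated parser improves dependency accuracy",
]
ranked = rank_by_centrality(citations)
```

Sentences that echo what many other citing sentences say end up central, which is the intuition behind extracting them into a summary; the off-topic sentence falls to the bottom.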
