Conference Paper

Experiments with Interactive Question-Answering.

DOI: 10.3115/1219840.1219866 Conference: ACL 2005, 43rd Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, 25-30 June 2005, University of Michigan, USA
Source: DBLP

ABSTRACT This paper describes a novel framework for interactive question-answering (Q/A) based on predictive questioning. Generated off-line from topic representations of complex scenarios, predictive questions represent requests for information that capture the most salient (and diverse) aspects of a topic. We present experimental results from large user studies (featuring a fully-implemented interactive Q/A system named FERRET) that demonstrate that surprising performance is achieved by integrating predictive questions into the context of a Q/A dialogue.
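The abstract does not specify how FERRET matches a user's question against the off-line pool of predictive questions. As a purely illustrative sketch (all function names and example questions below are invented, not taken from the paper), a suggestion step could rank the pool by bag-of-words cosine similarity to the user's question:

```python
import math
import re
from collections import Counter

def _bag(text):
    """Lowercased bag-of-words with punctuation stripped."""
    return Counter(re.findall(r"\w+", text.lower()))

def _cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def suggest(user_question, predictive_questions, top_k=2):
    """Rank the off-line predictive-question pool by similarity to the
    user's question and return the top-k as dialogue suggestions."""
    q = _bag(user_question)
    ranked = sorted(predictive_questions,
                    key=lambda p: _cosine(q, _bag(p)), reverse=True)
    return [p for p in ranked[:top_k] if _cosine(q, _bag(p)) > 0]

# Invented example pool for one scenario topic:
pool = [
    "What countries possess chemical weapons?",
    "How are chemical weapons stockpiles destroyed?",
    "Who inspects chemical weapons facilities?",
]
print(suggest("Which countries have chemical weapons?", pool))
```

A real system would use far richer matching (the paper's user studies evaluate the dialogue as a whole), but the sketch shows the basic retrieve-and-suggest loop: pre-generated questions are scored against the live question and the closest ones are offered to the user.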

  • ABSTRACT: Researchers and scientists increasingly find themselves in the position of having to quickly understand large amounts of technical material. Our goal is to effectively serve this need by using bibliometric text mining and summarization techniques to generate summaries of scientific literature. We show how we can use citations to produce automatically generated, readily consumable, technical extractive summaries. We first propose C-LexRank, a model for summarizing single scientific articles based on citations, which employs community detection and extracts salient information-rich sentences. Next, we further extend our experiments to summarize a set of papers, which cover the same scientific topic. We generate extractive summaries of a set of Question Answering (QA) and Dependency Parsing (DP) papers, their abstracts, and their citation sentences and show that citations have unique information amenable to creating a summary.
    Journal of Artificial Intelligence Research, 46(1); 02/2014
  • ABSTRACT: This paper proposes a supervised approach for automatically learning good decompositions of complex questions. The training data generation phase mainly builds on three steps to produce a list of simple questions corresponding to a complex question: i) the extraction of the most important sentences from a given set of relevant documents (which contains the answer to the complex question), ii) the simplification of the extracted sentences, and iii) their transformation into questions containing candidate answer terms. Such questions, considered as candidate decompositions, are manually annotated (as good or bad candidates) and used to train a Support Vector Machine (SVM) classifier. Experiments on the DUC data sets prove the effectiveness of our approach.
    Proceedings of the 17th international conference on Applications of Natural Language Processing and Information Systems; 06/2012
  • ABSTRACT: This paper describes a new methodology for enhancing the quality and relevance of suggestions provided to users of interactive Q/A systems. We show that by using Conditional Random Fields to combine relevance feedback gathered from users along with information derived from discourse structure and coherence, we can accurately identify irrelevant suggestions with nearly 90% F-measure.
    01/2006;
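The C-LexRank entry above scores citation sentences by their centrality in a similarity graph before extracting a summary. As a minimal stdlib sketch of that centrality step only (plain degree centrality over bag-of-words cosine; C-LexRank's community-detection stage and the example sentences below are respectively omitted and invented):

```python
import math
import re
from collections import Counter

def _bag(text):
    """Lowercased bag-of-words with punctuation stripped."""
    return Counter(re.findall(r"\w+", text.lower()))

def _cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def central_sentences(sentences, threshold=0.2, top_k=1):
    """Build a cosine-similarity graph over sentences (an edge where
    similarity exceeds the threshold) and return the top-k sentences
    by degree centrality."""
    bags = [_bag(s) for s in sentences]
    degree = [sum(1 for j in range(len(bags))
                  if j != i and _cosine(bags[i], bags[j]) > threshold)
              for i in range(len(bags))]
    ranked = sorted(range(len(sentences)), key=lambda i: -degree[i])
    return [sentences[i] for i in ranked[:top_k]]

# Invented citation sentences about one target paper:
cites = [
    "Smith et al. present a QA system built on predictive questions.",
    "The predictive questions QA system of Smith et al. improves recall.",
    "Separate work studies dependency parsing of long sentences.",
]
print(central_sentences(cites))
```

Sentences repeated (in paraphrase) by many citing papers sit in dense regions of the graph, which is why citation sentences make good raw material for an extractive summary.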
