Article

Querying Web Data - The WebQA Approach

12/2002
Source: CiteSeer

ABSTRACT: The common paradigm for searching and retrieving information on the Web is keyword-based search using one or more search engines, followed by browsing through the large number of returned URLs. This is significantly weaker than the declarative querying supported by DBMSs. The lack of a schema and the high volatility of the Web make "database-like" querying of Web data difficult. In this paper we report on our work in building a system, called WebQA, that provides a declarative, query-based approach to Web data retrieval and uses question-answering technology to extract information from the Web sites returned by search engines. The approach consists of first using meta-search techniques in an open environment to gather candidate responses from search engines and other on-line databases, and then using information extraction techniques to find the answer to the specific question among these candidates. A prototype system has been developed to test this approach. Testing includes evaluation of its performance as a question-answering system using a well-known evaluation, TREC-9. Its accuracy on TREC-9 data for simple questions is high and its retrieval performance is good. The system employs an open architecture that allows for ongoing improvements in various aspects.
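The approach described above is a two-phase pipeline: meta-search to pool candidate responses from several engines, then information extraction to pick an answer from those candidates. The sketch below illustrates only that control flow; every name in it (ENGINES, fetch_candidates, extract_answer) is hypothetical, and the voting heuristic is a toy stand-in for WebQA's actual extraction components, which the abstract does not specify.

```python
# Minimal sketch of the two-phase flow described in the abstract above -- NOT the
# actual WebQA implementation. All names here are hypothetical.
from collections import Counter
from typing import Callable

# Each "engine" is modelled as a callable mapping a query string to text snippets
# (in reality: URLs returned by a search engine, fetched and split into passages).
ENGINES: list[Callable[[str], list[str]]] = []

def fetch_candidates(question: str) -> list[str]:
    """Phase 1: meta-search. Fan the question out to every configured engine
    and pool the returned snippets as candidate passages."""
    candidates: list[str] = []
    for engine in ENGINES:
        candidates.extend(engine(question))
    return candidates

def extract_answer(question: str, candidates: list[str]) -> str | None:
    """Phase 2: information extraction, reduced here to a toy heuristic:
    vote for capitalized tokens that recur across candidates and do not
    already appear in the question."""
    votes = Counter()
    for passage in candidates:
        for token in passage.split():
            word = token.strip(".,;:!?")
            if word.istitle() and word.lower() not in question.lower():
                votes[word] += 1
    top = votes.most_common(1)
    return top[0][0] if top else None

def answer(question: str) -> str | None:
    """End-to-end flow: meta-search, then extraction."""
    return extract_answer(question, fetch_candidates(question))

if __name__ == "__main__":
    # A stand-in "engine" returning canned snippets, just to exercise the pipeline.
    ENGINES.append(lambda q: ["The telephone was invented by Alexander Graham Bell.",
                              "Alexander Graham Bell patented the telephone in 1876."])
    print(answer("Who invented the telephone?"))
```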

  • ABSTRACT: The MultiText QA System performs question answering using a two-step passage selection method. In the first step, an arbitrary passage retrieval algorithm efficiently identifies hotspots in a large target corpus where the answer might be located. In the second step, an answer selection algorithm analyzes these hotspots, considering such factors as answer type and candidate redundancy, to extract short answer snippets. This chapter describes both steps in detail, with the goal of providing sufficient information to allow independent implementation. The method is evaluated using the test collection developed for the TREC 2001 question answering track. (A minimal illustrative sketch of this two-step structure appears after this list.)
    12/2005: pages 259-283
  • ABSTRACT: This paper is the first analysis of caching architectures for Question Answering (QA). We introduce the novel concept of multi-layer collaborative caches, where: (a) each resource-intensive QA component is allocated a distinct segment of the cache, and (b) the overall cache is transparently spread across all nodes of the distributed system. We empirically analyze the proposed architecture using a real-world QA system installed on a cluster of 16 nodes. Our analysis indicates that multi-layer collaborative caches induce an almost twofold reduction in QA execution time compared to a QA system with local cache. (A toy sketch of the multi-layer collaborative cache idea appears after this list.)
    Euro-Par 2007, Parallel Processing, 13th International Euro-Par Conference, Rennes, France, August 28-31, 2007, Proceedings; 01/2007
  • International Journal of Application or Innovation in Engineering & Management. 12/2013; 2(12):101-108.
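Below is a minimal sketch of the two-step structure described in the MultiText abstract above (hotspot retrieval, then answer selection driven by candidate redundancy). The function names, the overlap-based ranking, and the capitalized-token stand-in for answer typing are assumptions for illustration, not the published MultiText algorithms.

```python
# Illustrative-only sketch of a two-step passage-selection pipeline:
# step 1 finds "hotspots", step 2 picks a redundant candidate from them.
from collections import Counter

def retrieve_hotspots(query_terms: set[str], corpus: list[str], k: int = 20) -> list[str]:
    """Step 1: passage retrieval. Rank passages by query-term overlap and keep
    the top-k as hotspots where an answer might be located."""
    ranked = sorted(corpus,
                    key=lambda passage: len(query_terms & set(passage.lower().split())),
                    reverse=True)
    return ranked[:k]

def select_answer(hotspots: list[str]) -> str | None:
    """Step 2: answer selection. Favor candidates that recur across hotspots
    (redundancy), using a crude capitalized-token filter as the answer-type check."""
    counts = Counter()
    for passage in hotspots:
        for token in passage.split():
            word = token.strip(".,;:!?")
            if word.istitle():
                counts[word] += 1
    top = counts.most_common(1)
    return top[0][0] if top else None
```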
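The second abstract above describes multi-layer collaborative caches: a distinct cache segment per resource-intensive QA component, spread transparently across the cluster's nodes. The toy sketch below only illustrates that partitioning idea under assumed names (CollaborativeCache, put/get, hash-based node placement); the real system would also handle eviction, replication, and network transport, none of which the abstract details.

```python
# Toy sketch of a multi-layer, node-partitioned cache: one layer per QA component,
# entries placed on nodes by hashing the key. Names and structure are assumptions.
import hashlib

class CollaborativeCache:
    def __init__(self, components: list[str], num_nodes: int):
        # layer name -> list of per-node stores (plain dicts stand in for remote stores)
        self.layers = {c: [dict() for _ in range(num_nodes)] for c in components}
        self.num_nodes = num_nodes

    def _node_for(self, key: str) -> int:
        # Deterministic placement: hash the key to pick the owning node.
        return int(hashlib.md5(key.encode()).hexdigest(), 16) % self.num_nodes

    def put(self, component: str, key: str, value) -> None:
        self.layers[component][self._node_for(key)][key] = value

    def get(self, component: str, key: str):
        return self.layers[component][self._node_for(key)].get(key)

# Example: two QA components sharing a cache spread over a 16-node cluster.
cache = CollaborativeCache(["passage_retrieval", "answer_extraction"], num_nodes=16)
cache.put("passage_retrieval", "who invented the telephone", ["...candidate passages..."])
print(cache.get("passage_retrieval", "who invented the telephone"))
```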