ABSTRACT: Sentence-level novelty detection aims at spotting sentences with novel information in an ordered sentence list. In the task, sentences appearing later in the list with no new meanings are eliminated. For the task of novelty detection, the contributions of this paper are three-fold. First, conceptually, this paper reveals the computational nature of the task currently overlooked by the Novelty community: Novelty as a combination of the partial overlap (PO) and complete overlap (CO) relations between sentences. We define partial overlap between two sentences as the sharing of common facts, while complete overlap holds when one sentence covers all of the meanings of the other. Second, technically, a novel approach, the selected pool method, is provided, which follows naturally from the PO-CO computational structure. We provide a formal error analysis for the selected pool and other methods based on this PO-CO framework, and address the question of how accurate the PO judgments must be to outperform the baseline pool method. Third, experimentally, results are presented for all three novelty datasets currently available. The results show that the selected pool is significantly better than, or no worse than, the current methods, an indication that the term-overlap criterion for the PO judgments may be adequately accurate.
Article · Jan 2006 · Information Retrieval
ABSTRACT: Sentence-level novelty detection aims at reducing redundant sentences in a sentence list. In the task, sentences appearing later in the list with no new meanings are eliminated. Aiming at better accuracy in detecting redundancy, this paper reveals the nature of the novelty detection task currently overlooked by the Novelty community: Novelty as a combination of the partial overlap (PO, two sentences sharing common facts) and complete overlap (CO, the first sentence covering all the facts of the second) relations. By formalizing novelty detection as a combination of these two relations between sentences, new viewpoints on techniques for Novelty are proposed. Among the methods discussed, the similarity, overlap, pool and language-modeling approaches are the most commonly used. Furthermore, a novel approach, the selected pool method, is provided, which follows immediately from the nature of the task. Experimental results obtained on all three currently available novelty datasets show that the selected pool is significantly better than, or no worse than, the current methods. Knowledge about the nature of the task also affects evaluation methodology: we propose new evaluation measures for Novelty that reflect the nature of the task, as well as possible directions for future study.
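As an illustration of the selected-pool idea described above, here is a minimal sketch: a later sentence is kept only if no previously selected sentence completely overlaps it, with the CO judgment approximated by a simple term-overlap criterion. The tokenization and the threshold value are assumptions for illustration, not details taken from the papers.

```python
# Hedged sketch of a "selected pool" novelty filter. A sentence is novel
# unless some already-selected sentence covers (completely overlaps) it;
# complete overlap is approximated here by term overlap above a threshold.

def tokens(sentence):
    """Lowercased word set of a sentence (assumed tokenization)."""
    return set(sentence.lower().split())

def covered(candidate, selected, threshold=0.8):
    """True if `selected` covers at least `threshold` of `candidate`'s terms."""
    cand = tokens(candidate)
    if not cand:
        return True  # an empty sentence carries no new meaning
    return len(cand & tokens(selected)) / len(cand) >= threshold

def selected_pool(sentences, threshold=0.8):
    """Return sentences judged novel against the pool of selected ones."""
    pool = []
    for s in sentences:
        if not any(covered(s, p, threshold) for p in pool):
            pool.append(s)
    return pool
```

Note that, unlike the plain pool method, only sentences already judged novel participate in later overlap comparisons, which is the structural difference the abstracts emphasize.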
ABSTRACT: Novelty detection systems aim at reducing redundant documents or sentences in a chronologically ordered list of documents. In the task, sentences appearing later in the list with no new meanings are eliminated. In an accompanying paper, the nature of novelty detection was revealed: Novelty as a combination of the PO (partial overlap) and CO (complete overlap) relations, which can be treated as two classification tasks; the theoretical implications were given there. This paper shows what the nature of the task means empirically. One new method, the selected pool, which implements the nature of the task, gained improvements on the TREC Novelty datasets. New evaluation criteria are given, which follow naturally from this view of novelty detection.
ABSTRACT: The IR group of Tsinghua University this year used its TMiner text retrieval system for indexing and retrieval in the Terabyte track ad hoc and named-page subtasks. For both tasks, we used the in-link anchor texts (the anchors of the URLs that point to the current page in the collection) together with the content texts of the web pages to build the indices. For retrieval, the word-pair method (1) was used and proved effective on the 2004 and 2005 Terabyte ad hoc task topics and the 2005 named-page task. We provide further analysis of the performance of the word-pair method in comparison with the Markov random field term-dependence model of (2) and another generative phrase model we proposed, which is more natural in the language-modeling framework (3). 1. TMiner at Terabyte 2005. On a PC with 2 GB of memory, one CPU and IDE hard disks, TMiner could index 50 GB of text (about 200 GB of HTML files) in tolerable time. But since the Terabyte collection contains about 100 GB of pure text (110 GB including anchor texts), building a single index for such a large collection would cost TMiner too much time. We therefore built 27 indices for the 27 parts of the collection in our experiments. At retrieval time, we summed the DF values of the query terms from each index and assigned the BM2500 RSV to documents in the collection according to the DF sum. This distributed index system returns exactly the same RSV as if a single index had been constructed for the whole collection (at the expense of additional query-processing time). In the ad hoc and named-page tasks, the index of in-link anchor text combined with page content was used. This is the most effective way of combining anchor text for retrieval in our observation, and we did not build indices without in-link anchor text for comparison. In addition to the use of anchor text, since the indices we built contain full position information for the index terms, the word-pair method (1) was used in both tasks.
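The distributed-index trick described above (summing per-shard document frequencies so that shard-local scoring matches a monolithic index) can be sketched as follows. The BM25-style scoring function, the parameter values, and the shard layout are illustrative assumptions; the abstract's BM2500 formula is not reproduced exactly.

```python
import math

# Hedged sketch: per-shard DF tables are summed into a global DF, so a
# BM25-style RSV computed against any shard equals what a single merged
# index would produce. k1/b values are conventional defaults, assumed here.

def global_df(shard_dfs, term):
    """Sum a term's document frequency over all shard DF tables."""
    return sum(df.get(term, 0) for df in shard_dfs)

def bm25_idf(N, df):
    """A common BM25 IDF form (Robertson-Sparck Jones style)."""
    return math.log((N - df + 0.5) / (df + 0.5) + 1.0)

def bm25_score(query_terms, doc_tf, doc_len, avg_len, N, shard_dfs,
               k1=1.2, b=0.75):
    """Score one document; DFs come from the summed shard tables."""
    score = 0.0
    for t in query_terms:
        tf = doc_tf.get(t, 0)
        if tf == 0:
            continue
        df = global_df(shard_dfs, t)
        norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_len))
        score += bm25_idf(N, df) * norm
    return score
```

Because only DF statistics need to be exchanged between shards, the extra cost is one round of DF lookups per query, which matches the "additional query processing time" the abstract mentions.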
ABSTRACT: in Novelty task of TREC. Our research on this year's novelty track mainly focused on four aspects: (1) text feature selection and reduction; (2) improved sentence classification in finding relevant information; (3) efficient sentence redundancy computing; (4) effective result filtering. All experiments were performed on the TMiner IR system, developed by the THU IR group last year. 1. Text feature selection and reduction. The work on text feature selection and reduction serves as the basis of both the relevant and the new steps. In novelty/passage retrieval, the data sparseness problem is dominant. In a sentence, different terms play different roles. Which terms carry the most important information for a given user query? What kinds of features are most useful for identifying the core information? This is what we try to find out in our research on text feature selection and reduction. Three kinds of approaches have been carried out: (1) using Named Entities as significant features; (2) using POS-tagging information for feature selection; (3) using a PCA transform for feature reduction. The former two approaches take NLP information into account. In our experiments, both the NE- and PCA-based approaches improve system performance, while POS-tagging information does not help. In the following, we give some more detailed descriptions of NE-based feature selection and PCA-based feature reduction. 1.1 Using Named Entities as significant features. The basis of the NE-based approach is to recognize which phrases are Named Entities and what type of Named Entity each is. In our experiments, 34 types of Named Entities have been recognized, such
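The PCA-based feature reduction mentioned in approach (3) can be sketched as a projection of sentence feature vectors onto the top principal components. The abstract gives no implementation details, so the SVD-based formulation below and the choice of dimensionality are assumptions.

```python
import numpy as np

# Hedged sketch of PCA-based feature reduction for sentence vectors:
# center the feature matrix, take the SVD, and keep the top-k components.

def pca_reduce(X, k):
    """Project the rows of X (n_samples x n_features) onto the
    top-k principal components, returning an n_samples x k matrix."""
    Xc = X - X.mean(axis=0)                  # center each feature
    # right singular vectors of the centered data are the principal axes
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```

In a novelty setting, the reduced vectors would then feed the relevance and redundancy computations in place of the raw sparse term vectors, which is one way to counter the data sparseness problem the abstract describes.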
ABSTRACT: Introduction. Anchor text has proved efficient in former TREC experiments on the homepage-finding task and somewhat useful for ad hoc retrieval through result combination [2]. This year, our conclusion was consistent with the former results. Besides, the use of the URL and of the links inside the web page was also investigated. Again, results on the training set are encouraging. We made the assumption that a key resource is more likely to link to multiple relevant documents. The out-degree of the page and the similarities of the documents the page points to were then used as the two factors for key-resource selection. Experimental results were quite good, showing their ability to find key resources on one server. Two site-uniting (SU) approaches have been studied to select proper pages as the representation of one server: (1) the document which has index characteristics and a high enough similarity is retained as a key resource; (2) documents from the same server in the result list are given different reliability
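The key-resource heuristic above combines two signals: a page's out-degree and the mutual similarity of the documents it links to. A minimal sketch follows; the Jaccard similarity, the linear combination, and the weight alpha are assumptions made for illustration, not the authors' actual formula.

```python
# Hedged sketch of key-resource scoring: a page scores higher when it
# links to many documents and those documents are similar to each other.

def jaccard(a, b):
    """Jaccard similarity of two term collections (assumed similarity)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def key_resource_score(out_links, doc_terms, alpha=0.5):
    """out_links: ids of documents linked from the page;
    doc_terms: mapping id -> term list; alpha: assumed mixing weight."""
    if len(out_links) < 2:
        return 0.0  # mutual similarity undefined for fewer than 2 links
    sims = [jaccard(doc_terms[u], doc_terms[v])
            for i, u in enumerate(out_links)
            for v in out_links[i + 1:]]
    mean_sim = sum(sims) / len(sims)
    return alpha * len(out_links) + (1 - alpha) * mean_sim
```

Under this sketch, a page linking to two near-duplicate relevant documents outranks one linking to two unrelated documents, which matches the assumption stated in the abstract.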
ABSTRACT: Introduction. This is the first time that Tsinghua University took part in TREC. In this year's novelty track, our basic idea was to find the key factors that help people find relevant and new information in a set of documents with noise. We paid attention to three points: 1. how to get full information from a short sentence; 2. how to supplement hidden well-known knowledge in the sentences; 3. how to make the determination of duplication. Accordingly, expansion-based technologies are the key points. Studies of expansion technologies have been performed on three levels: efficient query expansion based on a thesaurus and statistics, replacement-based document expansion, and a term-expansion-related duplication-elimination strategy based on overlapping measurement. Besides, two issues have been studied: finding key information in topics, and dynamic result selection. A new IR system has been developed for the task. In the system, four weighting strategies have been used, among them similarity and overlapping
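The expansion-based duplication elimination described above can be sketched as follows: sentence terms are expanded through a synonym table before the overlap test, so paraphrases with little literal overlap can still be judged duplicates. The synonym table, tokenization, and threshold below are hypothetical stand-ins; the abstract does not specify them.

```python
# Hedged sketch of term-expansion-based duplicate detection. SYNONYMS is
# a toy stand-in for the thesaurus/statistical expansion the abstract cites.

SYNONYMS = {"car": {"automobile"}, "automobile": {"car"}}  # illustrative table

def expand(terms, table):
    """Add each term's (hypothetical) synonyms to the term set."""
    out = set(terms)
    for t in terms:
        out |= table.get(t, set())
    return out

def is_duplicate(sent_a, sent_b, threshold=0.7, table=SYNONYMS):
    """True if sent_b covers enough of sent_a's expanded terms."""
    a = expand(sent_a.lower().split(), table)
    b = expand(sent_b.lower().split(), table)
    if not a:
        return True
    return len(a & b) / len(a) >= threshold
```

With the toy table, "the car stopped" and "the automobile stopped" are caught as duplicates even though their literal term overlap falls below the threshold, which is the effect term expansion is meant to achieve.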