Search Arguments with TARGER: query box, tag selector, and a result with the link to the original document

Source publication
Article
Full-text available
Question answering platforms such as Yahoo! Answers or Quora have always contained questions that ask other humans for help with comparing two or more options. Since nowadays more and more people also “talk” to their devices, such comparative questions are also part of the query stream that major search engines receive. Interestingly, major search engin...

Context in source publication

Context 1
... identify more "argumentative" sentences (or even documents) for the CAM answer, we have developed TARGER [5]: a neural argument tagger, coming with a web interface and a RESTful API. The tool can tag arguments in free text inputs (cf. Fig. 2) and can retrieve arguments from the DepCC corpus that is also used in the CAM prototype (cf. Fig. 3). TARGER is based on a BiLSTM-CNN-CRF neural tagger [10] pre-trained on the persuasive essays (Essays) [7], web discourse (WebD) [8], or IBM Debater (IBM) [9] datasets and is able to identify argument components in text and classify them as claims or premises. Using TARGER's web interface or API, researchers and practitioners can thus ...
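
For illustration, querying such a RESTful tagging service from Python could look roughly like the sketch below. The endpoint URL, request payload, and response schema are placeholder assumptions and are not taken from the TARGER documentation; only the general pattern of sending raw text and receiving per-token argument labels such as B-Claim or I-Premise reflects what the tool offers.

import requests

# Placeholder endpoint -- consult the TARGER documentation for the real base URL,
# the available pre-trained models (Essays, WebD, IBM), and the response format.
TARGER_URL = "https://example.org/targer/api/tag"

text = "Smartphones with larger batteries are better because they last a full day."

# Assumed request/response shape: raw text in, a list of {"token": ..., "label": ...} out.
response = requests.post(TARGER_URL, json={"text": text}, timeout=30)
response.raise_for_status()

for item in response.json():
    # Labels would follow a BIO scheme, e.g. B-Claim, I-Claim, B-Premise, I-Premise, O.
    print(f'{item["token"]}\t{item["label"]}')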

Citations

... Extracting the arguments stated in the document is one way to clearly capture the grounded statements (premises) and the final conclusion (claim) presented in the text. Therefore, many recent works focus on arguments as a potential tool for improving comparative question answering [9][10][11], and more generally, on building an argument-based search engine as in the work of Daxenberger et al. [12] with their summetix project (formerly known as ArgumenText). Improving a retrieval model for argument-based comparison systems is the target of Touché Task 2, which was also addressed in 2020 [13]. ...
Conference Paper
In the current world, individuals are faced with decision-making problems and opinion formation processes on a daily basis. Nevertheless, answering a comparative question by retrieving documents based only on traditional measures (such as TF-IDF and BM25) does not always satisfy the information need. In this paper, we propose a multi-layer architecture to answer comparative questions based on arguments. Our approach consists of a pipeline of query expansion, an argument mining model, and sorting of the documents by a combination of different ranking criteria. Given the crucial role of the argument mining step, we examined two models: DistilBERT and an ensemble learning approach using stacking of SVM and DistilBERT. We compare the results of both models on two argumentation corpora for the argument identification task, and further on the dataset of the CLEF 2021 Touché Lab shared task 2 for answering comparative questions.
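
As a rough illustration of the stacking idea mentioned in this abstract, the sketch below combines a TF-IDF/SVM baseline with a DistilBERT text classifier through a logistic-regression meta-learner for sentence-level argument identification. The toy data, the Hugging Face checkpoint (a sentiment model standing in for a DistilBERT model fine-tuned on argument data), and the feature choices are assumptions, not the authors' actual setup.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from transformers import pipeline

# Toy sentences labelled 1 (argumentative) / 0 (non-argumentative).
train_texts = [
    "Camera A is the better choice because its sensor is larger.",
    "The store opens at nine in the morning.",
]
train_labels = [1, 0]

# Base model 1: TF-IDF features with a linear SVM.
vectorizer = TfidfVectorizer()
svm = LinearSVC().fit(vectorizer.fit_transform(train_texts), train_labels)

# Base model 2: a DistilBERT classifier; a generic checkpoint stands in for a
# model fine-tuned on argument identification data.
bert = pipeline("text-classification",
                model="distilbert-base-uncased-finetuned-sst-2-english")

def base_features(texts):
    # Stack the SVM decision values and the DistilBERT confidence scores.
    svm_scores = svm.decision_function(vectorizer.transform(texts))
    bert_scores = [pred["score"] for pred in bert(list(texts))]
    return np.column_stack([svm_scores, bert_scores])

# Meta-learner on top of the base predictions (proper stacking would use
# out-of-fold base predictions instead of refitting on the same training data).
meta = LogisticRegression().fit(base_features(train_texts), train_labels)
print(meta.predict(base_features(["Phone B lasts longer, so it is the better buy."])))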
... Extracting the arguments stated in the document is one way to clearly capture the grounded statements (premises) and the final conclusion (claim) presented in the text. Therefore, many recent works focus on arguments as a potential tool for improving comparative question answering [10,11]. Improving a retrieval model for argument-based comparison systems is the target of Touché Task 2, which was also addressed in 2020 [12]. ...
Conference Paper
Full-text available
In the current world, individuals are faced with decision-making problems and opinion formation processes on a daily basis, for example when debating or choosing between two similar products. However, answering a comparative question by retrieving documents based only on traditional measures (such as TF-IDF and BM25) does not always satisfy the information need. Thus, introducing the argumentation aspect into the information retrieval procedure has recently gained significant attention. In this paper, we present our participation in the CLEF 2021 Touché Lab for the second shared task, which tackles answering comparative questions based on arguments. We propose a novel multi-layer architecture in which the argument extraction task is the main engine: a pipeline of query expansion, argument identification based on a DistilBERT model, and sorting of the documents by a combination of different ranking criteria.
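
The final step of that pipeline, sorting documents by a combination of different ranking criteria, can be pictured as a weighted combination of per-document scores. The criteria names and weights in the sketch below are illustrative assumptions, not values reported by the authors.

from dataclasses import dataclass

@dataclass
class ScoredDoc:
    doc_id: str
    relevance: float   # e.g. a normalized BM25 score from first-stage retrieval
    arg_ratio: float   # share of sentences identified as claims or premises (0..1)
    quality: float     # any additional quality estimate (0..1)

# Illustrative weights; the actual combination used in the paper may differ.
WEIGHTS = {"relevance": 0.6, "arg_ratio": 0.3, "quality": 0.1}

def combined_score(doc: ScoredDoc) -> float:
    return (WEIGHTS["relevance"] * doc.relevance
            + WEIGHTS["arg_ratio"] * doc.arg_ratio
            + WEIGHTS["quality"] * doc.quality)

docs = [
    ScoredDoc("d1", relevance=0.8, arg_ratio=0.1, quality=0.5),
    ScoredDoc("d2", relevance=0.6, arg_ratio=0.7, quality=0.6),
]
ranking = sorted(docs, key=combined_score, reverse=True)
print([d.doc_id for d in ranking])  # d2 ranks first despite lower lexical relevance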