About the lab

The NLP research group at Hochschule Hannover (HsH, Hanover University of Applied Sciences and Arts) is embedded in the Department of Information Management and is part of the Research Cluster Smart Data Analytics. The group focuses on the extraction of information from texts, with distributional semantics in its various forms as a common thread. On the application side, keyword extraction and keyword similarity have been central topics: keywords are extracted from image captions in the NOA project, for example, and several papers on controlled vocabularies have been published. Further topics include acronym disambiguation and the structural analysis of legal texts.

Featured projects (1)

In jurisprudence, texts play a central role and always have to be interpreted in a specific context. In JuVer we aim to develop methods that find relations between texts and make these relations explicit. Within the project we will build a corpus of annotated legal texts, develop algorithms, and implement a pilot application.

Featured research (8)

Legal documents often have a complex layout with many different headings, headers and footers, side notes, etc. For further processing, it is important to extract these individual components correctly from a legally binding document, for example a signed PDF. A common approach is to classify each (text) region of a page using its geometric and textual features. This approach works well when the training and test data have a similar structure and when the documents of the collection to be analyzed have a rather uniform layout. We show that the use of global page properties can improve the accuracy of text element classification: we first classify each page into one of three layout types and then train a separate classifier for each page type, thereby improving the accuracy on a manually annotated collection of 70 legal documents consisting of 20,938 text elements. When we split by page type, the accuracy improves from 0.95 to 0.98 for single-column pages with left marginalia and from 0.95 to 0.96 for double-column pages. We developed our own feature-based method for page layout detection, which we benchmark against a standard implementation of a CNN image classifier. The approach presented here is based on a corpus of freely available German contracts and general terms and conditions. Both the corpus and all manual annotations are made freely available. The method is language agnostic.
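The two-stage idea above — first detect the page layout type, then route each text region to a layout-specific classifier — can be sketched as follows. Feature names, thresholds, and the rule-based classifiers are purely illustrative stand-ins for the paper's trained models.

```python
# Two-stage classification sketch: route each page to a layout-specific
# text-element classifier. Features and rules are illustrative only.

def classify_page_layout(page):
    """Assign one of three layout types from simple geometric cues."""
    if page["n_columns"] == 2:
        return "double-column"
    if page["has_left_marginalia"]:
        return "single-column-marginalia"
    return "single-column"

def make_element_classifier(layout_type):
    """Return a per-layout classifier for text regions (toy rules)."""
    def classify(region):
        # In the marginalia layout, far-left regions are side notes.
        if layout_type == "single-column-marginalia" and region["x0"] < 0.2:
            return "marginalia"
        if region["font_size"] > 14:
            return "heading"
        return "body"
    return classify

def classify_elements(page, regions):
    layout = classify_page_layout(page)
    clf = make_element_classifier(layout)
    return [clf(r) for r in regions]

page = {"n_columns": 1, "has_left_marginalia": True}
regions = [{"x0": 0.05, "font_size": 10}, {"x0": 0.3, "font_size": 18}]
print(classify_elements(page, regions))  # ['marginalia', 'heading']
```

Training one model per detected layout type, rather than one global model, is what yields the accuracy gains reported above.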
In this paper we investigate how concreteness and abstractness are represented in word embedding spaces. We use data for English and German and show that concreteness and abstractness can be determined independently and turn out to be exactly opposite directions in the embedding space. Various methods can be used to determine the direction of concreteness, always resulting in roughly the same vector. Though concreteness is a central aspect of the meaning of words and can be detected clearly in embedding spaces, it does not seem as easy to add or subtract concreteness to obtain other words or word senses, as can be done with a semantic property such as gender.
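One simple way to determine such a direction — averaging the embeddings of known concrete words, subtracting the average of known abstract words, and ranking words by their projection onto the difference — can be sketched as below. The toy 3-d vectors and seed words stand in for real word embeddings and concreteness norms.

```python
# Sketch: find a "concreteness direction" in an embedding space as the
# difference of seed-word centroids, then score words by projection.
# Vectors and seed lists are toy data, not real embeddings.

def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

emb = {
    "stone":  [0.90, 0.10, 0.00],
    "apple":  [0.80, 0.20, 0.10],
    "hammer": [0.85, 0.15, 0.05],
    "truth":  [0.10, 0.90, 0.20],
    "idea":   [0.20, 0.80, 0.10],
}

concrete_seed = ["stone", "apple"]
abstract_seed = ["truth", "idea"]

# Direction = concrete centroid minus abstract centroid.
direction = sub(mean([emb[w] for w in concrete_seed]),
                mean([emb[w] for w in abstract_seed]))

# Projection onto the direction serves as a concreteness score.
scores = {w: dot(v, direction) for w, v in emb.items()}
print(max(scores, key=scores.get))  # stone
```

The paper's observation is that several such estimation methods (centroid differences, regression, etc.) converge on roughly the same direction vector.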
In this paper, we present our approach for the KONVENS 2021 shared task Disambiguation of German Verbal Idioms. Our model is a decision-tree-based classifier that uses static word embeddings and computed concreteness values to predict whether a verbal idiom is used figuratively or literally.
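The intuition behind using concreteness for this task can be illustrated with a single decision-tree-style split: a concrete filler in the idiom's open slot suggests literal use, an abstract one figurative use. The concreteness scores, threshold, and function below are hypothetical toy values, not the shared-task model.

```python
# Toy sketch of one decision-tree split for idiom disambiguation,
# assuming precomputed concreteness scores (hypothetical 1-5 scale).

CONCRETENESS = {"Handtuch": 4.6, "Hoffnung": 1.8}  # towel, hope

def classify_idiom_usage(slot_filler, threshold=3.0):
    """A concrete slot filler suggests literal use, an abstract one
    figurative use. Unknown words fall back to the threshold (literal)."""
    score = CONCRETENESS.get(slot_filler, threshold)
    return "literal" if score >= threshold else "figurative"

# "das Handtuch werfen" with a concrete vs. an abstract object:
print(classify_idiom_usage("Handtuch"))  # literal
print(classify_idiom_usage("Hoffnung"))  # figurative
```

The actual model combines many such features, including the static embeddings of the context words, in a full decision tree.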
In order to ensure validity in legal texts such as contracts and case law, lawyers rely on standardised formulations that are written carefully but also represent a kind of code whose meaning and function are known to all legal experts. Using directed (acyclic) graphs to represent standardised text fragments, we are able to capture variations concerning time specifications, slight rephrasings, names, places, and also OCR errors. We show how such text fragments can be found by sentence clustering, pattern detection, and clustering of patterns. To test the proposed methods, we use two corpora of German contracts and court decisions, specially compiled for this purpose; the entire process for representing standardised text fragments is, however, language-agnostic. We analyze and compare both corpora, give a quantitative and qualitative analysis of the text fragments found, and present a number of examples from both corpora.
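The core representation — a graph in which shared token runs of clause variants become fixed nodes and the varying spans (dates, names, rephrasings) become branch points — can be sketched by aligning two variants. The alignment via `difflib.SequenceMatcher` and the example clauses are illustrative assumptions, not the paper's pipeline.

```python
# Sketch: merge two variants of a standardised clause into a DAG-like
# segment list; shared runs become fixed nodes, differing runs become
# alternative branches (slots). Clauses and alignment are illustrative.
from difflib import SequenceMatcher

def merge_variants(a, b):
    """Merge two token sequences into segments: 'fixed' for shared
    runs, 'slot' for positions where the variants diverge."""
    sm = SequenceMatcher(a=a, b=b)
    graph = []
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "equal":
            graph.append({"type": "fixed", "tokens": a[i1:i2]})
        else:
            graph.append({"type": "slot",
                          "alternatives": [a[i1:i2], b[j1:j2]]})
    return graph

v1 = "signed in Hannover on 1 March 2020".split()
v2 = "signed in Berlin on 7 May 2021".split()
for segment in merge_variants(v1, v2):
    print(segment)
```

Merging further variants into the same segment list would grow the alternative sets at each slot, giving the directed acyclic graph described above.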

Lab head

Christian Wartena
  • Faculty of Information and Communication

Members (3)

Jean Charbonnier
  • Hochschule Hannover
Frieda Josi
  • Hochschule Hannover

Alumni (3)

John Rothman
Birte Rohden
Rosa Tsegaye Aga
  • Hochschule Hannover