
Kai Labusch, PhD
Berlin State Library
About
27 Publications · 5,598 Reads · 323 Citations
Publications (27)
In all domains and sectors, the demand for intelligent systems to support the processing and generation of digital content is rapidly increasing. The availability of vast amounts of content and the pressure to publish new content quickly and in rapid succession requires faster, more efficient and smarter processing and generation methods. With a co...
We apply a pre-trained transformer-based representational language model, i.e. BERT (Devlin et al., 2018), to named entity recognition (NER) in contemporary and historical German text and observe state-of-the-art performance for both text categories. We further improve the recognition performance for historical German by unsupervised pre-training...
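As a rough illustration of the approach, here is a minimal sketch of running a BERT checkpoint for German NER with the Hugging Face transformers library. The model name is an assumed base checkpoint, not the paper's model; as the abstract describes, it would first be fine-tuned (and, for historical German, further pre-trained) before its predictions are useful.

```python
# Minimal sketch: NER with a pre-trained German BERT via transformers.
# "bert-base-german-cased" is an assumed base checkpoint; it needs NER
# fine-tuning on labeled data before the output is meaningful.
from transformers import pipeline

ner = pipeline(
    "ner",
    model="bert-base-german-cased",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

print(ner("Die Staatsbibliothek zu Berlin wurde 1661 gegründet."))
```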
Sparse coding has become a widely used framework in signal processing and pattern recognition. After motivating the principle of sparse coding, we show its relation to Vector Quantization and Neural Gas and describe how this relation can be used to generalize Neural Gas so that it successfully learns sparse coding dictionaries. We explore applications o...
Particular classes of signals, as for example natural images, can be encoded sparsely if appropriate dictionaries are used. Finding such dictionaries based on data samples, however, is a difficult optimization task. In this paper, it is shown that simple stochastic gradient descent, besides being much faster, leads to superior dictionaries compared...
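To make the stochastic gradient idea concrete, a hedged sketch of one descent step on the dictionary follows. Obtaining the coefficients by keeping the k largest correlations is a simplifying assumption standing in for a proper sparse pursuit.

```python
import numpy as np

def sgd_dictionary_step(D, x, k=5, lr=0.01):
    """One stochastic gradient step on dictionary D for a single sample x.

    Sketch only: coefficients are approximated by keeping the k largest
    correlations, a crude stand-in for a real sparse approximation method.
    """
    a = D.T @ x                          # correlations of atoms with the sample
    a[np.argsort(np.abs(a))[:-k]] = 0.0  # keep only the k largest -> k-sparse
    r = x - D @ a                        # reconstruction residual
    D += lr * np.outer(r, a)             # descent step on ||x - D a||^2 w.r.t. D
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)  # renormalize atoms
    return D

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))       # overcomplete: 128 atoms, 64-dim signals
D /= np.linalg.norm(D, axis=0)
for _ in range(1000):
    D = sgd_dictionary_step(D, rng.standard_normal(64))
```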
We propose a new algorithm for the design of overcomplete dictionaries for sparse coding, neural gas for dictionary learning (NGDL), which uses a set of solutions for the sparse coefficients in each update step of the dictionary. In order to obtain such a set of solutions, we additionally propose the bag of pursuits (BOP) method for sparse approxim...
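A hedged reading of the bag of pursuits idea as a sketch: rather than the single greedy OMP solution, collect several alternative sparse solutions by forcing different first atoms. The actual method explores a tree of pursuits; restricting the variation to the first atom is a simplification assumed here.

```python
import numpy as np

def omp(D, x, k, first_atom=None):
    """Plain orthogonal matching pursuit; optionally force the first atom.
    Sketch under simple assumptions (fixed sparsity k, no tolerance)."""
    support, r = [], x.copy()
    for step in range(k):
        if step == 0 and first_atom is not None:
            j = first_atom
        else:
            j = int(np.argmax(np.abs(D.T @ r)))     # best-matching atom
        support.append(j)
        a, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        r = x - D[:, support] @ a                   # orthogonalized residual
    coeff = np.zeros(D.shape[1])
    coeff[support] = a
    return coeff

def bag_of_pursuits(D, x, k, n_candidates=4):
    """Collect a set of candidate sparse solutions, one per forced first atom."""
    firsts = np.argsort(np.abs(D.T @ x))[::-1][:n_candidates]
    return [omp(D, x, k, first_atom=int(j)) for j in firsts]
```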
Genome-wide association (GWA) studies provide large amounts of high-dimensional data. GWA studies aim to identify variables that increase the risk for a given phenotype. Univariate examinations have provided some insights, but it appears that most diseases are affected by interactions of multiple factors, which can only be identified through a mult...
We propose a new algorithm for the design of overcomplete dictionaries for sparse coding that generalizes the Sparse Coding Neural Gas (SCNG) algorithm such that it is not bound to a particular approximation method for the coefficients of the dictionary elements. In an application to image reconstruction, a dictionary that has been learned using th...
Sparse coding employs low-dimensional subspaces in order to encode high-dimensional signals. Finding the optimal subspaces is a difficult optimization task. We show that stochastic gradient descent is superior in finding the optimal subspaces compared to MOD and K-SVD, which are both state-of-the-art methods. The improvement is most significant in...
The well-known MinOver algorithm is a slight modification of the perceptron algorithm and provides the maximum-margin classifier without a bias in linearly separable two-class classification problems. DoubleMinOver, an extension of MinOver that includes a bias, is introduced. An O(t⁻¹) convergence is shown, where t is the number of learni...
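For orientation, a minimal sketch of the underlying MinOver update on a toy, linearly separable dataset; the bias handling that DoubleMinOver adds is omitted here, so this is the bias-free variant only.

```python
import numpy as np

def minover(X, y, n_iter=10000):
    """MinOver sketch: repeatedly reinforce the training sample with the
    smallest margin. For linearly separable data the direction of w
    converges to the maximum-margin classifier (without bias).

    X: (n_samples, n_features); y: labels in {-1, +1}. Toy setup assumed.
    """
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        margins = y * (X @ w)        # signed margins of all samples
        i = int(np.argmin(margins))  # sample with minimal overlap
        w += y[i] * X[i]             # perceptron-style reinforcement
    return w / np.linalg.norm(w)
```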
We show how the “Online Sparse Coding Neural Gas” algorithm can be applied to a more realistic model of the “Cocktail Party Problem”. We consider a setting where more sources than observations are given and additive noise is present. Furthermore, we make the model even more realistic by allowing the mixing matrix to change slowly over time. We als...
We consider the problem of learning an unknown (overcomplete) basis from data that are generated from unknown and sparse linear combinations. Introducing the Sparse Coding Neural Gas algorithm, we show how to employ a combination of the original Neural Gas algorithm and Oja's rule in order to learn a simple sparse code that represents each training...
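A hedged sketch of the Sparse Coding Neural Gas update described above: atoms are ranked by their overlap with the sample, and each atom is moved by a rank-weighted Oja learning step. Restricting the code to a single pursuit step (sparsity 1) is an assumption made here for brevity.

```python
import numpy as np

def scng_step(D, x, lr=0.05, lam=2.0):
    """One Sparse Coding Neural Gas update, sketched from the abstract:
    neural gas soft-rank weighting combined with Oja's rule per atom."""
    overlaps = D.T @ x                                 # atom/sample correlations
    ranks = np.argsort(np.argsort(-np.abs(overlaps)))  # 0 = best-matching atom
    h = np.exp(-ranks / lam)                           # neighborhood weights
    for l in range(D.shape[1]):
        a = overlaps[l]
        D[:, l] += lr * h[l] * a * (x - a * D[:, l])   # Oja's rule: keeps ||d||~1
    return D
```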
We consider the problem of separating noisy overcomplete sources from linear mixtures, i.e., we observe N mixtures of M > N sparse sources. We show that the "Sparse Coding Neural Gas" (SCNG) algorithm (8, 9) can be employed in order to estimate the mixing matrix. Based on the learned mixing matrix the sources are obtained by orthogonal matching pur...
In this brief paper, we propose a method of feature extraction for digit recognition that is inspired by vision research: a sparse-coding strategy and a local maximum operation. We show that our method, despite its simplicity, yields state-of-the-art classification results on a highly competitive digit-recognition benchmark. We first employ the uns...
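The local maximum operation mentioned above can be sketched as plain non-overlapping max pooling over a filter's response map; the exact window size is an assumption here, not taken from the paper.

```python
import numpy as np

def local_max(feature_map, pool=2):
    """Local maximum operation: keep only the strongest response in each
    non-overlapping pool x pool neighborhood (plain max pooling)."""
    h, w = feature_map.shape
    h, w = h - h % pool, w - w % pool               # crop to a multiple of pool
    blocks = feature_map[:h, :w].reshape(h // pool, pool, w // pool, pool)
    return blocks.max(axis=(1, 3))

# Toy usage: responses of one sparse-coding filter on a 28x28 digit image.
responses = np.random.default_rng(2).random((28, 28))
print(local_max(responses).shape)                   # -> (14, 14)
```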
We consider the problem of separating noisy overcomplete sources from linear mixtures, i.e., we observe N mixtures of M > N sparse sources. We show that the “Sparse Coding Neural Gas” (SCNG) algorithm [1] can be employed in order to estimate the mixing matrix. Based on the learned mixing matrix the sources are obtained by orthogonal matching pursui...
We introduce the OneClassMaxMinOver (OMMO) algorithm for the problem of one-class support vector classification. The algorithm is extremely simple and therefore a convenient choice for practitioners. We prove that in the hard-margin case the algorithm converges with O(1/√t) to the maximum margin solution of the support vector approach for one-class...
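A heavily hedged sketch of the OMMO idea: maintain an expansion w = Σᵢ αᵢxᵢ, repeatedly reinforce the sample with the smallest projection onto w, and forget from the best-separated one. The reinforce/forget step sizes and schedule below follow the general MaxMinOver pattern and are assumptions, not the paper's exact procedure.

```python
import numpy as np

def one_class_maxminover(X, n_iter=2000):
    """Sketch of a MaxMinOver-style one-class iteration (details assumed).
    X: (n_samples, n_features). Returns normalized expansion coefficients."""
    alpha = np.zeros(len(X))
    alpha[0] = 1.0                      # start from an arbitrary sample
    for _ in range(n_iter):
        w = alpha @ X                   # current hyperplane normal
        proj = X @ w
        i_min = int(np.argmin(proj))    # least-separated sample: reinforce
        i_max = int(np.argmax(proj))    # best-separated sample: forget
        alpha[i_min] += 2.0
        alpha[i_max] = max(alpha[i_max] - 1.0, 0.0)
    return alpha / alpha.sum()
```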
We consider the problem of learning an unknown (overcomplete) basis from an unknown sparse linear combination. Introducing the "sparse coding neural gas" algorithm, we show how to employ a combination of the original neural gas algorithm and Oja's rule in order to learn a simple sparse code that represents each training sample by a multiple of...
The optimal coding hypothesis proposes that the human visual system has adapted to the statistical properties of the environment by the use of relatively simple optimality criteria. Here we (i) discuss how the properties of different models of image coding, i.e. sparseness, decorrelation, and statistical independence, are related to each other, (ii)...
The well-known MinOver algorithm is a simple modification of the perceptron algorithm and provides the maximum margin classifier without a bias in linearly separable two-class classification problems. In [1] and [2] we presented DoubleMinOver and MaxMinOver as extensions of MinOver which provide the maximal margin solution in the primal and the Sup...
The well-known MinOver algorithm is a simple modification of the perceptron algorithm and provides the maximum margin classifier without a bias in linearly separable two-class classification problems. DoubleMinOver, a slight modification of MinOver that includes a bias, is introduced. It is shown how this simple and iterative procedure can b...
We study simulated Braitenberg agents, controlled by a homeokinetic dynamic, which evolve the ability to discriminate between two different types of objects. The free parameters of the homeokinetic control are varied by an evolutionary strategy. Two mirrored scenarios are used to show adaptation. Using a simple test scenario, we are able to evaluate...