Conference Paper

Sparse Kernel Logistic Regression using Incremental Feature Selection for Text-Independent Speaker Identification

IESK, Magdeburg Univ.
DOI: 10.1109/ODYSSEY.2006.248115 Conference: IEEE Odyssey 2006: The Speaker and Language Recognition Workshop
Source: IEEE Xplore

ABSTRACT Logistic regression is a well-known classification method in the field of statistical learning. Recently, a kernelized version of logistic regression has become very popular, because it allows non-linear probabilistic classification and shows promising results on several benchmark problems. In this paper we show that kernel logistic regression (KLR) and especially its sparse extensions (SKLR) are useful alternatives to standard Gaussian mixture models (GMMs) and support vector machines (SVMs) in speaker recognition. While the classification results of KLR and SKLR are similar to those of SVMs, we show that SKLR produces highly sparse models. Unlike SVMs, kernel logistic regression also provides an estimate of the conditional probability of class membership. In speaker identification experiments the SKLR methods outperform the SVM and GMM baseline systems on the POLYCOST database.
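The property the abstract highlights, that KLR (unlike an SVM) directly outputs conditional class-membership probabilities, follows from fitting a logistic model in a kernel-induced feature space. A minimal self-contained sketch of this idea; the RBF kernel, gradient-descent settings, and XOR-style toy data below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Gram matrix K[i, j] = exp(-gamma * ||a_i - b_j||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_klr(X, y, gamma=0.5, lam=1e-2, lr=0.5, iters=1000):
    # Gradient descent on the regularized negative log-likelihood
    # over dual weights alpha; the decision function is f(x) = K(x, X) @ alpha.
    K = rbf_kernel(X, X, gamma)
    alpha = np.zeros(len(X))
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-K @ alpha))        # P(y=1 | x) on training data
        grad = K @ (p - y) / len(X) + lam * K @ alpha
        alpha -= lr * grad
    return alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)           # non-linear (XOR-like) labels
alpha = fit_klr(X, y)

# Unlike a plain SVM score, the sigmoid output is a probability estimate.
p = 1.0 / (1.0 + np.exp(-rbf_kernel(X, X) @ alpha))
acc = ((p > 0.5) == y).mean()
```

A sparse variant (SKLR) would additionally restrict most entries of `alpha` to zero, e.g. by incremental selection of kernel basis functions, which is what yields the compact models reported above.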

  • ABSTRACT: Multi-class classification problems can be efficiently solved by partitioning the original problem into sub-problems involving only two classes: for each pair of classes, a (potentially small) neural network is trained using only the data of these two classes. We show how to combine the outputs of the two-class neural networks in order to obtain posterior probabilities for the class decisions. The resulting probabilistic pairwise classifier is part of a handwriting recognition system which is currently applied to check reading. We present results on real-world databases and show that, from a practical point of view, these results compare favorably to other neural network approaches. 1 Introduction Generally, a pattern classifier consists of two main parts: a feature extractor and a classification algorithm. Both parts have the same ultimate goal, namely to transform a given input pattern into a representation that is easily interpretable as a class decision. In the case of feedforwar...
    06/1998
  • ABSTRACT: NIST has coordinated annual evaluations of text-independent speaker recognition since 1996. During the course of this series of evaluations there have been notable milestones related to the development of the evaluation paradigm and the performance achievements of state-of-the-art systems. We document here the variants of the speaker detection task that have been included in the evaluations and the history of the best performance results for this task. Finally, we discuss the data collection and protocols for the 2004 evaluation and beyond.
  • ABSTRACT: The purpose of this document is to define a common ground for speaker recognition experiments on the POLYCOST database. It is done by defining a set of baseline experiments for which results should always be included when presenting evaluations made on this database. By including these results and by presenting the differences introduced in new experiments, a comparison between systems tested at different sites is made possible. Four baseline experiments are defined: text-dependent speaker verification (SV) on a fixed password sentence, text-prompted SV on digit sequences, text-independent SV on free speech in the subject's mother tongue, and finally text-independent speaker identification on the same free speech. The definition of the baseline experiments includes the definition of client and impostor speakers and speakers for training a world model; sessions for enrollment and test; which speech items to use and how to compute and present results. 1. Introduction The purpose of th...
    02/1997
