Article

Independent Component Analysis by Entropy Bound Minimization

Xi-Lin Li, Tülay Adali — Dept. of CSEE, UMBC, Baltimore, MD, USA
IEEE Transactions on Signal Processing (Impact Factor: 3.2). 11/2010; DOI: 10.1109/TSP.2010.2055859
Source: IEEE Xplore

ABSTRACT A novel (differential) entropy estimator is introduced in which the maximum entropy bound is used to approximate the entropy given the observations; the bound is computed using a numerical procedure, resulting in accurate entropy estimates. We show that such an estimator exists for a wide class of measuring functions and provide a number of design examples to demonstrate its flexible nature. We then derive a novel independent component analysis (ICA) algorithm that uses the entropy estimate thus obtained: ICA by entropy bound minimization (ICA-EBM). The algorithm adopts a line search procedure and initially uses updates that constrain the demixing matrix to be orthogonal for robust performance. We demonstrate the superior performance of ICA-EBM and its ability to match sources that come from a wide range of distributions using simulated and real-world data.
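
The abstract above names the main ingredients of ICA-EBM: a marginal-entropy estimate obtained from a maximum entropy bound over several measuring functions, minimized over demixing matrices by a line search with an orthogonality constraint. The Python sketch below illustrates only that structure; it is not the published algorithm. The paper's numerically computed entropy bound is replaced here by Hyvärinen's fixed-form maximum-entropy approximation of negentropy (proportionality constants dropped), the gradient is a crude finite-difference one, and all function names, step sizes, and iteration counts are illustrative assumptions.

import numpy as np

def whiten(x):
    """Remove the mean and decorrelate the mixtures x (n_sources x n_samples)."""
    x = x - x.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(x))
    V = E @ np.diag(1.0 / np.sqrt(d)) @ E.T              # whitening matrix
    return V @ x, V

# Measuring functions G and their expected values under a standard Gaussian.
G_FUNS = [
    (lambda y: np.log(np.cosh(y)), 0.3746),              # E[log cosh(v)], v ~ N(0,1)
    (lambda y: -np.exp(-0.5 * y ** 2), -1.0 / np.sqrt(2.0)),
]
H_GAUSS = 0.5 * np.log(2.0 * np.pi * np.e)               # entropy of a unit-variance Gaussian

def entropy_estimate(y):
    """Entropy of a (roughly) unit-variance signal: Gaussian entropy minus the largest
    negentropy term over the measuring functions (proportionality constants dropped)."""
    neg = max((g(y).mean() - g_nu) ** 2 for g, g_nu in G_FUNS)
    return H_GAUSS - neg

def ica_ebm_sketch(x, n_iter=100, step0=1.0, eps=1e-4):
    """Minimize the summed marginal entropy estimates over orthogonal demixing matrices."""
    z, V = whiten(x)
    n = z.shape[0]
    W = np.linalg.qr(np.random.randn(n, n))[0]           # random orthogonal start
    cost = lambda M: sum(entropy_estimate(m @ z) for m in M)
    c = cost(W)
    for _ in range(n_iter):
        grad = np.zeros_like(W)                          # finite-difference gradient (slow but simple)
        for i in range(n):
            for j in range(n):
                Wp = W.copy()
                Wp[i, j] += eps
                grad[i, j] = (cost(Wp) - c) / eps
        step = step0
        while step > 1e-6:                               # backtracking line search
            U, _, Vt = np.linalg.svd(W - step * grad)
            W_new = U @ Vt                               # project back onto the orthogonal group
            c_new = cost(W_new)
            if c_new < c:
                W, c = W_new, c_new
                break
            step *= 0.5
    return W @ V                                         # demixing matrix for the centered data

After whitening, the mutual-information cost reduces to the sum of marginal entropies whenever the demixing matrix is orthogonal, which is why each trial step is projected back onto the orthogonal group before the cost is re-evaluated.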

  • ABSTRACT: Since the discovery of functional connectivity in fMRI data (i.e., temporal correlations between spatially distinct regions of the brain), there has been a considerable amount of work in this field. One important focus has been the analysis of brain connectivity using the concept of networks instead of regions. Approximately ten years ago, two important research areas grew out of this concept. First, Raichle proposed "a default mode of brain function," a network since dubbed the default mode network. Second, multisubject or group independent component analysis (ICA) provided a data-driven approach to studying properties of brain networks, including the default mode network. In this paper, we provide a focused review of how ICA has contributed to the study of intrinsic networks. We discuss some methodological considerations for group ICA and highlight multiple analytic approaches for studying brain networks. We also show examples of some of the differences observed in the default mode and resting networks in the diseased brain. In summary, we are in exciting times and are still just beginning to reap the benefits of the richness of functional brain networks and the available analytic approaches.
    IEEE Reviews in Biomedical Engineering 01/2012; 5:60-73. DOI:10.1109/RBME.2012.2211076
  • ABSTRACT: Adaptive filtering has been extensively studied under the assumption that the noise is Gaussian. The most commonly used least-mean-square-error (LMSE) filter is optimal when the noise is Gaussian. However, in many practical applications the noise can be modeled more accurately using a non-Gaussian distribution. In this correspondence, we consider non-Gaussian distributions for the noise model and show that a filter based on entropy bound minimization (EBM) leads to significant performance gains compared to the LMSE filter. The least mean p-norm (LMP) filter, which uses the α-stable distribution to model the noise, is shown to be the maximum-likelihood solution when the noise is modeled by the generalized Gaussian distribution (GGD). The GGD noise model allows us to compute the Cramér–Rao lower bound (CRLB) for the error in estimating the weights. Simulations show that both the EBM and LMP filters achieve the CRLB as the sample size increases. The EBM filter is shown to be less committed with respect to unseen data, yielding generally superior performance in online learning compared to LMP. We also show that, when the noise comes from impulsive α-stable distributions, both the EBM and LMP filters provide better performance than LMSE. In addition, the EBM filter offers the advantage that it does not assume a particular parametric model for the noise and, by proper selection of the measuring functions, can be adapted to a wide range of noise distributions. (A generic sketch of the LMP update appears after this list.)
    IEEE Transactions on Signal Processing 04/2012; 60(4):2049-2055. DOI:10.1109/TSP.2011.2182345 · 3.20 Impact Factor
  • ABSTRACT: Recently we introduced the concept of neural network learning on the Stiefel-Grassmann manifold for MLP-like networks. Contributions by other authors on this topic have also appeared in the scientific literature. The aim of this paper is to present a general theory for such learning and to illustrate how existing theories may be explained within the general framework proposed here. In a multilayer-perceptron-like network formed by the interconnection of basic neurons, whose only adjustable part consists of weight vectors, learning the optimal set of connection patterns may be interpreted as selecting the best directions among all possible ones in the space that the weight vectors belong to (Fyfe, 1995). This interpretation is useful in that, if a learning error criterion is defined over the weight space, it measures how interesting the directions are, so that the rule with which the network learns may ultimately be conceived as a searching procedure allowing one to find out... (A one-step manifold-update sketch appears after this list.)
    Neural Computation 01/2001; DOI:10.1162/089976601750265036 · 1.69 Impact Factor
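
For the adaptive-filtering item above, the least mean p-norm (LMP) recursion mentioned in its abstract has a compact textbook form: stochastic gradient descent on |e|^p. The sketch below is that generic version, not code from the cited correspondence; the tap count, step size mu, and norm p are placeholder choices, and the EBM filter's nonparametric entropy-bound cost is not reproduced.

import numpy as np

def lmp_filter(x, d, n_taps=8, mu=0.01, p=1.2):
    """Adapt an FIR filter so that its output tracks the desired signal d by
    stochastic gradient descent on |e|^p; p = 2 recovers the ordinary LMS/LMSE filter."""
    w = np.zeros(n_taps)
    y = np.zeros(len(d))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]            # current regressor, newest sample first
        y[n] = w @ u
        e = d[n] - y[n]                      # instantaneous error
        # gradient of |e|^p with respect to w is -p * |e|^(p-1) * sign(e) * u
        w += mu * p * np.abs(e) ** (p - 1) * np.sign(e) * u
    return w, y

Choosing p below 2 tempers the influence of impulsive (heavy-tailed) errors, which is the regime in which the abstract compares LMP and EBM against LMSE.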
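For the Stiefel-Grassmann item above, the core operation in any such learning scheme is a gradient step that keeps the weight matrix orthonormal. The sketch below shows one standard way to do this (tangent-space projection followed by a QR retraction); it is a generic illustration of manifold learning, not the specific update rules derived in the cited paper, and the learning rate and function name are assumptions.

import numpy as np

def stiefel_step(W, grad, lr=0.05):
    """One descent step that keeps the weight matrix W (n x p, orthonormal columns)
    on the Stiefel manifold."""
    sym = 0.5 * (W.T @ grad + grad.T @ W)
    riem_grad = grad - W @ sym               # project the gradient onto the tangent space at W
    Q, R = np.linalg.qr(W - lr * riem_grad)  # retract back to the manifold via QR
    return Q @ np.diag(np.sign(np.diag(R)))  # fix the QR sign ambiguity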

Available from: Tülay Adali