Michael S. Lewicki

Case Western Reserve University | CWRU · Department of Electrical Engineering and Computer Science

Ph.D.

About

66 Publications
19,054 Reads
7,657 Citations
Since 2017: 0 research items, 1,703 citations
[Chart: citations per year, 2017-2023]
Additional affiliations
August 2008 - July 2009
July 2008 - present
Case Western Reserve University
  • Associate Professor
July 2005 - June 2008
Carnegie Mellon University
  • Associate Professor (tenured June 2008)
Education
August 1989 - February 1996
California Institute of Technology
  • Computation and Neural Systems
August 1985 - May 1989
Carnegie Mellon University
  • Mathematics and Cognitive Science

Publications (66)
Article
Scene analysis is a complex process involving a hierarchy of computational problems ranging from sensory representation to feature extraction to active perception. It is studied in a wide range of fields using different approaches, but we still have only limited insight into the computations used by biological systems. Experimental approaches often...
Article
Full-text available
[This corrects the article on p. 199 in vol. 5, PMID: 24744740.].
Article
Full-text available
A fundamental task of a sensory system is to infer information about the environment. It has long been suggested that an important goal of the first stage of this process is to encode the raw sensory signal efficiently by reducing its redundancy in the neural representation. Some redundancy, however, would be expected because it can provide robustn...
Article
Full-text available
The problem of scene analysis has been studied in a number of different fields over the past decades. These studies have led to important insights into problems of scene analysis, but not all of these insights are widely appreciated, and there remain critical shortcomings in current approaches that hinder further progress. Here we take the view tha...
Article
Full-text available
Natural sounds possess considerable statistical structure. Lewicki (2002, Nature Neurosci. 5(4), 356-363) used independent components analysis (ICA) to reveal the statistical structure of environmental sounds, animal vocalizations, and human speech. Each sound class exhibited distinct statistical properties, but filters that optimally encoded speec...
Article
Full-text available
Occlusion boundaries and junctions provide important cues for inferring three-dimensional scene organization from two-dimensional images. Although several investigators in machine vision have developed algorithms for detecting occlusions and other edges in natural images, relatively few psychophysics or neurophysiology studies have investigated wha...
Article
Full-text available
Robust coding has been proposed as a solution to the problem of minimizing decoding error in the presence of neural noise. Many real-world problems, however, have degradation in the input signal, not just in neural representations. This generalized problem is more relevant to biological sensory coding where internal noise arises from limited neural...
Article
Full-text available
Approaches that abandon traditional speech categories offer promise for developing statistical descriptions that encapsulate how speech conveys information. Grandparents would be among the beneficiaries.
Conference Paper
Full-text available
Multiresolution (MR) representations have been very successful in image encoding, due to both their algorithmic performance and coding efficiency. However, these transforms are fixed, suggesting that coding efficiency could be further improved if a multiresolution code could be adapted to a specific signal class. Among adaptive coding methods, i...
Article
Full-text available
This paper addresses the problem of adaptively deriving optimally sparse image representations, using a dictionary composed of shiftable kernels. Algorithmic advantages of our solution make possible the computation of an approximately shift-invariant adaptive image representation. Learned kernels can have different sizes and adapt to different sca...
Article
Full-text available
A fundamental function of the visual system is to encode the building blocks of natural scenes - edges, textures and shapes - that subserve visual tasks such as object recognition and scene understanding. Essential to this process is the formation of abstract representations that generalize from specific instances of visual input. A common view holds t...
Article
Full-text available
This paper presents a statistical data-driven method for learning intrinsic structures of impact sounds. The method applies principal and independent component analysis to learn low-dimensional representations that model the distribution of both the time-varying spectral and amplitude structure. As a result, the method is able to decompose sounds i...
Article
Full-text available
We address the problem of robust coding in which the signal information should be preserved in spite of intrinsic noise in the representation. We present a theoretical analysis for 1- and 2-D cases and characterize the optimal linear encoder and decoder in the mean-squared error sense. Our analysis allows for an arbitrary number of coding units, th...
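The scalar case of this robust-coding setup is easy to sketch. The example below is a toy illustration, not the paper's analysis: all parameters (signal std, channel-noise std, encoding gain) are invented. It encodes a Gaussian signal with a fixed gain, adds channel noise, and compares the MSE-optimal linear decoder against naive inversion.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical parameters: signal std, channel-noise std, encoding gain.
sigma_x, sigma_n, w = 1.0, 0.5, 2.0
n = 200_000

x = rng.normal(0.0, sigma_x, n)            # source signal
y = w * x + rng.normal(0.0, sigma_n, n)    # noisy channel output

# MSE-optimal linear decoder (Wiener gain): minimizes E[(x - d*y)^2].
d = w * sigma_x**2 / (w**2 * sigma_x**2 + sigma_n**2)
mse_opt = np.mean((x - d * y) ** 2)

# Naive decoder that simply inverts the encoder, ignoring the noise.
mse_naive = np.mean((x - y / w) ** 2)
```

The optimal decoder shrinks estimates toward zero relative to plain inversion, trading a small bias for lower noise-induced variance.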
Conference Paper
Full-text available
Efficient coding models predict that the optimal code for natural images is a population of oriented Gabor receptive fields. These results match response properties of neurons in primary visual cortex, but not those in the retina. Does the retina use an optimal code, and if so, what is it optimized for? Previous theories of retinal coding have assu...
Article
Full-text available
The auditory neural code must serve a wide range of auditory tasks that require great sensitivity in time and frequency and be effective over the diverse array of sounds present in natural acoustic environments. It has been suggested that sensory systems might have evolved highly efficient coding strategies to maximize the information conveyed to t...
Conference Paper
Full-text available
Biological sensory systems are faced with the problem of encoding a high-fidelity sensory signal with a population of noisy, low-fidelity neurons. This problem can be expressed in information theoretic terms as coding and transmitting a multi-dimensional, analog signal over a set of noisy channels. Previously, we have shown that robust, overcomplet...
Article
Full-text available
Natural images are not random; instead, they exhibit statistical regularities. Assuming that our vision is designed for tasks on natural images, computation in the visual system should be optimized for such regularities. Recent theoretical investigations along this line have provided many insights into the visual response properties in the early v...
Article
Full-text available
Capturing statistical regularities in complex, high-dimensional data is an important problem in machine learning and signal processing. Models such as principal component analysis (PCA) and independent component analysis (ICA) make few assumptions about the structure in the data and have good scaling properties, but they are limited to representing...
Article
Full-text available
Nonstationary acoustic features provide essential cues for many auditory tasks, including sound localization, auditory stream analysis, and speech recognition. These features can best be characterized relative to a precise point in time, such as the onset of a sound or the beginning of a harmonic periodicity. Extracting these types of features is a...
Conference Paper
Full-text available
Linear implementations of the efficient coding hypothesis, such as independent component analysis (ICA) and sparse coding models, have provided functional explanations for properties of simple cells in V1 (1, 2). These models, however, ignore the non-linear behavior of neurons and fail to match individual and population properties of neural re...
Conference Paper
Full-text available
The representation of acoustic signals at the cochlear nerve must serve a wide range of auditory tasks that require exquisite sensitivity in both time and frequency. Lewicki (2002) demonstrated that many of the filtering properties of the cochlea could be explained in terms of efficient coding of natural sounds. This model, however, did not account...
Article
Full-text available
The theoretical principles that underlie the representation and computation of higher-order structure in natural images are poorly understood. Recently, there has been considerable interest in using information theoretic techniques, such as independent component analysis, to derive representations for natural images that are optimal in the sense of...
Article
Full-text available
We present a hierarchical Bayesian model for learning efficient codes of higher-order structure in natural images. The model, a non-linear generalization of independent component analysis, replaces the standard assumption of independence for the joint distribution of coefficients with a distribution that is adapted to the variance structure of the...
Article
The theoretical principles that underlie the representation and computation of higher-order structure in natural images are poorly understood. Recently, there has been considerable interest in using information theoretic techniques, such as independent component analysis, to derive representations for natural images that are optimal in the sense of...
Article
Full-text available
The auditory system encodes sound by decomposing the amplitude signal arriving at the ear into multiple frequency bands whose center frequencies and bandwidths are approximately exponential functions of the distance from the stapes. This organization is thought to result from the adaptation of cochlear mechanisms to the animal's auditory environmen...
Article
Full-text available
An unsupervised classification algorithm is derived by modeling observed data as a mixture of several mutually exclusive classes that are each described by linear combinations of independent, non-Gaussian densities. The algorithm estimates the data density in each class by using parametric nonlinear functions that fit to the non-Gaussian structure...
Article
Full-text available
We show how a wavelet basis may be adapted to best represent natural images in terms of sparse coefficients. The wavelet basis, which may be either complete or overcomplete, is specified by a small number of spatial functions which are repeated across space and combined in a recursive fashion so as to be self-similar across scale. These functions a...
Article
We show how a wavelet basis may be adapted to best represent natural images in terms of sparse coefficients. The wavelet basis, which may be either complete or overcomplete, is specified by a small number of spatial functions which are repeated across space and combined in a recursive fashion so as to be self-similar across scale. These functions a...
Article
A Bayesian method for inferring an optimal basis is applied to the problem of finding efficient codes for natural images. The key to the algorithm is multivariate non-Gaussian density estimation. This is equivalent, in various forms, to sparse coding or independent component analysis. The basis functions learned by the algorithm are oriented and l...
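The kind of code this abstract describes can be illustrated with a toy ICA computation. The two Laplacian sources and the 2x2 mixing matrix below are invented for illustration (real experiments use high-dimensional image patches), and symmetric FastICA with a tanh nonlinearity stands in for the paper's Bayesian density-estimation scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two independent, sparse (Laplacian) sources, linearly mixed.
n = 5000
S = rng.laplace(size=(2, n))                 # hypothetical sources
A = np.array([[1.0, 0.5], [0.5, 1.0]])       # hypothetical mixing matrix
X = A @ S                                    # observed signals

# Whiten: zero mean, identity covariance.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Xw = E @ np.diag(d ** -0.5) @ E.T @ X

# Symmetric FastICA fixed-point iteration with the tanh nonlinearity.
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(W @ Xw)
    W = (G @ Xw.T) / n - np.diag((1.0 - G**2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W)
    W = U @ Vt                               # re-orthogonalize W

Y = W @ Xw   # recovered components (up to sign and permutation)
```

Each row of Y should match one original source up to sign; on whitened natural-image patches the same procedure yields oriented, localized basis functions.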
Article
We are interested in learning efficient codes to represent classes of different images. The image classes are modeled using an ICA mixture model that assumes that the data was generated by several mutually exclusive data classes whose components are a mixture of non-Gaussian sources. The parameters of the model can be adapted using an approximate ex...
Article
A Bayesian method for inferring an optimal basis is applied to the problem of finding efficient codes for natural sounds. The key to the algorithm is multivariate non-Gaussian density estimation, which is equivalent to independent component analysis when the form of the marginal density is fixed. An important advantage of the probabilistic framewor...
Article
We show how a wavelet basis may be adapted to best represent natural images in terms of sparse coefficients. The wavelet basis, which may be either complete or overcomplete, is specified by a small number of spatial functions which are repeated across space and combined in a recursive fashion so as to be self-similar across scale. These functions are ad...
Article
Full-text available
An unsupervised classification algorithm is derived by modeling observed data as a mixture of several mutually exclusive classes that are each described by linear combinations of independent, non-Gaussian densities. The algorithm estimates the density of each class and is able to model class distributions with non-Gaussian structure. The new algori...
Article
Full-text available
An extension of the Gaussian mixture model is presented using Independent Component Analysis (ICA) and the generalized Gaussian density model. The mixture model assumes that the observed data can be categorized into mutually exclusive classes whose components are generated by a linear combination of independent sources. The source densities are mod...
Article
Full-text available
In an overcomplete basis, the number of basis vectors is greater than the dimensionality of the input, and the representation of an input is not a unique combination of basis vectors. Overcomplete representations have been advocated because they have greater robustness in the presence of noise, can be sparser, and can have greater flexibility in ma...
Article
Full-text available
We present an unsupervised classification algorithm based on an ICA mixture model. A mixture model is a model in which the observed data can be categorized into several mutually exclusive data classes. In an ICA mixture model, it is assumed that the data in each class are generated by a linear mixture of independent sources. The algorithm finds the...
Article
Full-text available
An unsupervised classification algorithm is derived from an ICA mixture model assuming that the observed data can be categorized into several mutually exclusive data classes whose components are generated by linear mixtures of independent non-Gaussian sources. The algorithm finds the independent sources, the mixing matrix for each class and also co...
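The classification rule implied by such a model is simple once the per-class mixing matrices are known. The sketch below is a toy illustration: the diagonal "mixing" matrices and unit-scale Laplacian sources are invented, and the actual algorithm also learns the matrices rather than assuming them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical classes, each generated as x = A_k s with
# independent unit-scale Laplacian sources s.
A1 = np.array([[3.0, 0.0], [0.0, 1.0 / 3.0]])
A2 = np.array([[1.0 / 3.0, 0.0], [0.0, 3.0]])
n = 500
X1 = A1 @ rng.laplace(size=(2, n))
X2 = A2 @ rng.laplace(size=(2, n))

def class_log_likelihood(X, A):
    """log p(x | class) for x = A s, s_i ~ iid Laplace(0, 1):
    p(x) = p(A^-1 x) / |det A|, with log p(s) = -sum(|s_i|) - d*log 2."""
    s = np.linalg.solve(A, X)
    return (-np.abs(s).sum(axis=0)
            - s.shape[0] * np.log(2.0)
            - np.log(abs(np.linalg.det(A))))

def classify(X):
    # Assign each column of X to the class with the higher likelihood.
    ll = np.vstack([class_log_likelihood(X, A1),
                    class_log_likelihood(X, A2)])
    return ll.argmax(axis=0)     # 0 -> class 1, 1 -> class 2
```

Because the Laplacian tails overlap, a small fraction of samples is inherently misclassified even with the true matrices.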
Article
Full-text available
Empirical results were obtained for the blind source separation of more sources than mixtures using a previously proposed framework for learning overcomplete representations. This technique assumes a linear mixing model with additive noise and involves two steps: (1) learning an overcomplete representation for the observed data and (2) inferring so...
Article
We present an unsupervised classification algorithm based on an ICA mixture model. The ICA mixture model assumes that the observed data can be categorized into several mutually exclusive data classes in which the components in each class are generated by a linear mixture of independent sources. The algorithm finds the independent sources, the mixin...
Article
Full-text available
A common way to represent a time series is to divide it into short-duration blocks, each of which is then represented by a set of basis functions. A limitation of this approach, however, is that the temporal alignment of the basis functions with the underlying structure in the time series is arbitrary. We present an algorithm for encoding a time ser...
Article
Full-text available
We apply a Bayesian method for inferring an optimal basis to the problem of finding efficient image codes for natural scenes. The basis functions learned by the algorithm are oriented and localized in both space and frequency, bearing a resemblance to Gabor functions, and increasing the number of basis functions results in a greater sampling density in...
Article
Full-text available
The detection of neural spike activity is a technical challenge that is a prerequisite for studying many types of brain function. Measuring the activity of individual neurons accurately can be difficult due to large amounts of background noise and the difficulty in distinguishing the action potentials of one neuron from those of others in the local...
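A toy matched-filter detector conveys the flavor of the problem. The spike waveform, noise level, and spike times below are synthetic and purely illustrative; the Bayesian method in the paper is considerably more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(3)

fs = 10_000                                   # samples per second
t = np.arange(30) / fs
# Synthetic biphasic spike waveform (hypothetical shape).
waveform = (np.exp(-(((t - 0.0010) / 0.0004) ** 2))
            - 0.5 * np.exp(-(((t - 0.0018) / 0.0006) ** 2)))

# One second of Gaussian background noise with spikes at known times.
trace = rng.normal(0.0, 0.1, fs)
true_times = [1000, 4000, 7500]
for s in true_times:
    trace[s:s + waveform.size] += waveform

# Matched filter: correlate with the normalized template, then keep
# local maxima of the score above a threshold.
template = waveform - waveform.mean()
template /= np.linalg.norm(template)
score = np.correlate(trace, template, mode="valid")
threshold = 5.0 * score.std()
peaks = np.flatnonzero((score > threshold)
                       & (score >= np.roll(score, 1))
                       & (score >= np.roll(score, -1)))
```

With overlapping spikes from multiple units, simple thresholding like this breaks down, which is the regime the approach above addresses.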
Article
The detection of neural spike activity is a technical challenge that is a prerequisite for studying many types of brain function. Measuring the activity of individual neurons accurately can be difficult due to large amounts of background noise and the difficulty in distinguishing the action potentials of one neuron from those of others in the local...
Article
We derive a learning algorithm for inferring an overcomplete basis by viewing it as a probabilistic model of the observed data. Overcomplete bases allow for better approximation of the underlying statistical density. Using a Laplacian prior on the basis coefficients removes redundancy and leads to representations that are sparse and are a nonlinear f...
Conference Paper
Full-text available
A common way to represent a time series is to divide it into short-duration blocks, each of which is then represented by a set of basis functions. A limitation of this approach, however, is that the temporal alignment of the basis functions with the underlying structure in the time series is arbitrary. We present an algorithm for encoding a time...
Article
Full-text available
Multilayer architectures such as those used in Bayesian belief networks and Helmholtz machines provide a powerful framework for representing and learning higher order statistical relations among inputs. Because exact probability calculations with these models are often intractable, there is much interest in finding approximate algorithms. We presen...
Conference Paper
We derive a learning algorithm for inferring an overcomplete basis by viewing it as a probabilistic model of the observed data. Overcomplete bases allow for better approximation of the underlying statistical density. Using a Laplacian prior on the basis coefficients removes redundancy and leads to representations that are sparse and are a nonlinear functi...
Article
Some of the most complex auditory neurons known are contained in the songbird forebrain nucleus HVc. These neurons are highly sensitive to auditory temporal context: they respond strongly to the bird's own song, but respond weakly or not at all when the sequence of the song syllables is altered. It is not known whether this property arises de novo...
Article
Full-text available
Auditory neurons in the forebrain nucleus HVc (hyperstriatum ventrale pars caudale) are highly sensitive to the temporal structure of the bird's own song. These "song-specific" neurons respond strongly to forward song, weakly to the song with the order of the syllables reversed, and little or not at all to reversed song. To investigate the cellular...
Article
Full-text available
Neurons in the songbird forebrain area HVc (hyperstriatum ventrale pars caudale or high vocal center) are sensitive to the temporal structure of the bird's own song and are capable of integrating auditory information over a period of several hundred milliseconds. Extracellular studies have shown that the responses of some HVc neurons depend on the...
Article
Neurons in the songbird forebrain area HVc (hyperstriatum ventrale pars caudale or high vocal center) are sensitive to the temporal structure of the bird's own song and are capable of integrating auditory information over a period of several hundred milliseconds. Extracellular studies have shown that the responses of some HVc neurons depend on the...
Article
Full-text available
Signal processing and classification algorithms often have limited applicability resulting from an inaccurate model of the signal's underlying structure. We present here an efficient, Bayesian algorithm for modeling a signal composed of the superposition of brief, Poisson-distributed functions. This methodology is applied to the specific problem of...
Article
We present a statistical model for learning efficient codes of higher-order structure in natural images. The model, a non-linear generalization of independent component analysis, replaces the standard assumption of independence for the joint distribution of coefficients with a distribution that is adapted to the variance structure of the coeffi...
Article
We present a statistical model for learning efficient codes of higher-order structure in natural images. The model, a non-linear generalization of independent component analysis, replaces the standard assumption of independence for the joint distribution of coefficients with a distribution that is adapted to the variance structure of the coeffi...
Article
We apply a probabilistic method for learning efficient image codes to the problem of unsupervised classification, segmentation and de-noising of images. The method is based on the Independent Component Analysis (ICA) mixture model proposed for unsupervised classification and automatic context switching in blind source separation (I). In this pape...
