
Sparse time-frequency representations.

Center for Studies in Physics and Biology, The Rockefeller University, 1230 York Avenue, New York, NY 10021.
Proceedings of the National Academy of Sciences (Impact Factor: 9.81). 05/2006; 103(16):6094-9. DOI: 10.1073/pnas.0601707103
Source: PubMed

ABSTRACT: Auditory neurons preserve exquisite temporal information about sound features, but we do not know how the brain uses this information to process the rapidly changing sounds of the natural world. Simple arguments for effective use of temporal information led us to consider the reassignment class of time-frequency representations as a model of auditory processing. Reassigned time-frequency representations can track isolated simple signals with accuracy unlimited by the time-frequency uncertainty principle, but lack of a general theory has hampered their application to complex sounds. We describe the reassigned representations for white noise and show that even spectrally dense signals produce sparse reassignments: the representation collapses onto a thin set of lines arranged in a froth-like pattern. Preserving phase information allows reconstruction of the original signal. We define a notion of "consensus," based on stability of reassignment to time-scale changes, which produces sharp spectral estimates for a wide class of complex mixed signals. As the only currently known class of time-frequency representations that is always "in focus," this methodology has general utility in signal analysis. It may also help explain the remarkable acuity of auditory perception: many details of complex sounds that are virtually undetectable in standard sonograms are readily perceptible and visible in reassignment.
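As a concrete illustration of the technique the abstract describes, the sketch below implements the standard Auger-Flandrin reassignment formulas in NumPy. This is a generic textbook construction, not the paper's code: the function name, window choice, and parameters are illustrative, and the signs of the correction terms depend on the STFT convention used. Each spectrogram cell is relocated to a locally estimated time and frequency, computed from ratios of STFTs taken with a derivative window and a time-weighted window.

```python
import numpy as np

def reassigned_spectrogram(x, sr, n_fft=1024, hop=256):
    """Reassigned STFT (Auger-Flandrin method): returns magnitudes plus
    per-cell reassigned time (seconds) and frequency (Hz) estimates."""
    n = np.arange(n_fft)
    h = np.hanning(n_fft)                  # analysis window
    dh = np.gradient(h)                    # approx. dh/dt (per sample)
    th = (n - n_fft / 2) * h               # time-weighted window

    frames = np.lib.stride_tricks.sliding_window_view(x, n_fft)[::hop]
    X  = np.fft.rfft(frames * h,  axis=1)  # ordinary STFT
    Xd = np.fft.rfft(frames * dh, axis=1)  # derivative-window STFT
    Xt = np.fft.rfft(frames * th, axis=1)  # time-weighted STFT

    eps = np.finfo(float).eps              # guard against division by zero
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)                     # bin centers (Hz)
    times = (np.arange(frames.shape[0]) * hop + n_fft / 2) / sr  # frame centers (s)

    # Reassignment: move each cell to its local center of energy.
    f_hat = freqs[None, :] - np.imag(Xd / (X + eps)) * sr / (2 * np.pi)
    t_hat = times[:, None] + np.real(Xt / (X + eps)) / sr
    return np.abs(X), t_hat, f_hat

# A pure tone is reassigned to its true frequency, not the nearest bin center,
# illustrating accuracy beyond the nominal bin resolution for simple signals.
sr, f0 = 8000, 440.0
x = np.sin(2 * np.pi * f0 * np.arange(sr) / sr)
mag, t_hat, f_hat = reassigned_spectrogram(x, sr)
m = mag.shape[0] // 2                      # a frame in the middle of the signal
k = np.argmax(mag[m])                      # strongest bin in that frame
print(f_hat[m, k])                         # close to 440.0, though 440 Hz lies between bins
```

For dense or noisy signals, the reassigned coordinates collapse onto the sparse, froth-like line sets the abstract describes; the paper's "consensus" refinement (stability under time-scale changes) is not sketched here.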

Available from: Marcelo Osvaldo Magnasco, Jun 18, 2015
  •
    ABSTRACT: We present here an innovative hypothesis and report preliminary evidence that the sound of NMR signals could provide an alternative to the current representation of the individual metabolic fingerprint and supply equally significant information. The NMR spectra of the urine samples provided by four healthy donors were converted into audio signals that were analyzed in two audio experiments by listeners with both musical and non-musical training. The listeners were first asked to cluster the audio signals of two donors on the basis of perceived similarity and then to classify unknown samples after having listened to a set of reference signals. In the clustering experiment, the probability of obtaining the same results by pure chance was 7.04% and 0.05% for non-musicians and musicians, respectively. In the classification experiment, musicians scored 84% accuracy, which compared favorably with the 100% accuracy attained by sophisticated pattern recognition methods. The results were further validated and confirmed by analyzing the NMR metabolic profiles belonging to two other donors. These findings support our hypothesis that the uniqueness of the metabolic phenotype is preserved even when reproduced as an audio signal, and they warrant further consideration and testing in larger study samples.
    Omics A Journal of Integrative Biology 03/2015; 19(3):147-156. DOI:10.1089/omi.2014.0131 · 2.73 Impact Factor
  •
    ABSTRACT: This paper considers the analysis of multicomponent signals, defined as superpositions of real or complex modulated waves. It introduces two new post-transformations for the short-time Fourier transform that achieve a compact time-frequency representation while allowing for the separation and the reconstruction of the modes. These two new transformations thus benefit from both the synchrosqueezing transform (which allows for reconstruction) and the reassignment method (which achieves a compact time-frequency representation). Numerical experiments on real and synthetic signals demonstrate the efficiency of these new transformations, and illustrate their differences.
    IEEE Transactions on Signal Processing 03/2015; 63(5):1-1. DOI:10.1109/TSP.2015.2391077 · 3.20 Impact Factor
  •
    ABSTRACT: We examined the spatiotemporal dynamics of word processing by recording the electrocorticogram (ECoG) from the lateral frontotemporal cortex of neurosurgical patients chronically implanted with subdural electrode grids. Subjects engaged in a target detection task where proper names served as infrequent targets embedded in a stream of task-irrelevant verbs and nonwords. Verbs described actions related to the hand (e.g., throw) or mouth (e.g., blow), while unintelligible nonwords were sounds which matched the verbs in duration, intensity, temporal modulation, and power spectrum. Complex oscillatory dynamics were observed in the delta, theta, alpha, beta, low gamma, and high gamma (HG) bands in response to presentation of all stimulus types. HG activity (80-200 Hz) in the ECoG tracked the spatiotemporal dynamics of word processing and identified a network of cortical structures involved in early word processing. HG was used to determine the relative onset, peak, and offset times of local cortical activation during word processing. Listening to verbs, compared to nonwords, sequentially activates first the posterior superior temporal gyrus (post-STG), then the middle superior temporal gyrus (mid-STG), followed by the superior temporal sulcus (STS). We also observed strong phase-locking between pairs of electrodes in the theta band, with weaker phase-locking occurring in the delta, alpha, and beta frequency ranges. These results provide details on the first few hundred milliseconds of the spatiotemporal evolution of cortical activity during word processing and provide evidence consistent with the hypothesis that an oscillatory hierarchy coordinates the flow of information between distinct cortical regions during goal-directed behavior.
    Frontiers in Neuroscience 12/2007; 1(1):185-96. DOI:10.3389/neuro.01.1.1.014.2007
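The IEEE Transactions on Signal Processing item above contrasts synchrosqueezing, which preserves invertibility by relocating energy along the frequency axis only, with reassignment, which moves energy in both time and frequency. A minimal, hypothetical NumPy sketch of the frequency-only squeezing step follows; the names and parameters are illustrative (Hann-window STFT assumed), and practical implementations threshold small-magnitude cells rather than relying on an `eps` guard.

```python
import numpy as np

def synchrosqueeze(x, sr, n_fft=1024, hop=256):
    """Frequency-only relocation of STFT magnitude: each cell is moved to
    the bin nearest its instantaneous-frequency estimate, while the time
    axis is left untouched (unlike full reassignment)."""
    h = np.hanning(n_fft)                    # analysis window
    dh = np.gradient(h)                      # approx. window derivative
    frames = np.lib.stride_tricks.sliding_window_view(x, n_fft)[::hop]
    X = np.fft.rfft(frames * h, axis=1)      # ordinary STFT
    Xd = np.fft.rfft(frames * dh, axis=1)    # derivative-window STFT

    eps = np.finfo(float).eps
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    # Instantaneous-frequency estimate per STFT cell, in Hz.
    f_hat = freqs[None, :] - np.imag(Xd / (X + eps)) * sr / (2 * np.pi)

    # Squeeze: pile each cell's magnitude onto its nearest frequency bin.
    f_hat = np.clip(f_hat, 0.0, sr / 2)      # guard ill-conditioned cells
    k_hat = np.minimum(np.rint(f_hat * n_fft / sr).astype(int), X.shape[1] - 1)
    T = np.zeros(X.shape)
    for m in range(X.shape[0]):
        np.add.at(T[m], k_hat[m], np.abs(X[m]))
    return T, freqs

# For a pure 440 Hz tone, the squeezed energy concentrates in the bin
# nearest 440 Hz even though the tone falls between bin centers.
sr, f0 = 8000, 440.0
x = np.sin(2 * np.pi * f0 * np.arange(sr) / sr)
T, freqs = synchrosqueeze(x, sr)
m = T.shape[0] // 2
peak = freqs[np.argmax(T[m])]
```

Because each column of the squeezed transform is built by summing (not discarding) the original STFT values, mode reconstruction is possible by integrating over a ridge, which is the property the abstract's two post-transformations aim to combine with the sharpness of reassignment.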