Publications (47) · 37.1 Total Impact
ABSTRACT: We address the problem of super-resolution frequency recovery using prior knowledge of the structure of a spectrally sparse, undersampled signal. In many applications of interest, some structural information about the signal spectrum is often known. The prior information might be simply knowing precisely some signal frequencies or the likelihood of a particular frequency component in the signal. We devise a general semidefinite program to recover these frequencies using theories of positive trigonometric polynomials. Our theoretical analysis shows that, given sufficient prior information, perfect signal reconstruction is possible using signal samples no more than thrice the number of signal frequencies. Numerical experiments demonstrate great performance enhancements using our method. We show that the nominal resolution necessary for the grid-free results can be improved if prior information is suitably employed.
09/2014.
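A minimal numerical sketch of the setting above, under the strongest form of prior information (the frequencies themselves are known, so only the amplitudes need recovery); all frequencies, amplitudes, and sizes are illustrative, and the paper's semidefinite program is not implemented here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: a spectrally sparse signal with 3 continuous-valued frequencies.
freqs = np.array([0.1234, 0.4321, 0.777])      # hypothetical frequencies in [0, 1)
amps  = np.array([1.0, 0.5 + 0.5j, -0.8])      # complex amplitudes
N = 64                                         # nominal full sample grid
n = np.arange(N)
x_full = np.exp(2j * np.pi * np.outer(n, freqs)) @ amps

# Undersample: keep far fewer samples than N (here thrice the number of frequencies).
obs = rng.choice(N, size=9, replace=False)

# With the frequencies known a priori, amplitude recovery is an overdetermined
# least-squares problem on a Vandermonde-like matrix at the observed times.
V = np.exp(2j * np.pi * np.outer(obs, freqs))
amps_hat, *_ = np.linalg.lstsq(V, x_full[obs], rcond=None)

print(np.allclose(amps_hat, amps))             # exact recovery from 9 of 64 samples
```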
ABSTRACT: We address the problem of super-resolution line spectrum estimation of an undersampled signal with block prior information. The component frequencies of the signal are assumed to take arbitrary continuous values in known frequency blocks. We formulate a general semidefinite program to recover these continuous-valued frequencies using theories of positive trigonometric polynomials. The proposed semidefinite program achieves super-resolution frequency recovery by taking advantage of the known structure of the frequency blocks. Numerical experiments show great performance enhancements using our method.
04/2014.
ABSTRACT: In this paper, we consider the variable selection problem for a nonlinear nonparametric system. Two approaches are proposed: a top-down approach and a bottom-up approach. The top-down algorithm selects a variable by detecting whether the corresponding partial derivative is zero at the point of interest. The algorithm is shown to have not only parameter convergence but also set convergence. This is critical because the variable selection problem is binary: a variable is either selected or not. The bottom-up approach is based on forward/backward stepwise selection, which is designed to work when the data length is limited. Both approaches determine the most important variables locally and allow the unknown nonparametric nonlinear system to have different local dimensions at different points of interest. Further, two potential applications along with numerical simulations are provided to illustrate the usefulness of the proposed algorithms.
Automatica 01/2014; 50(1):100–113.
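The top-down idea (select a variable when its local partial derivative is nonzero at the point of interest) can be sketched with a plain local linear fit; the test function, bandwidth, and threshold below are illustrative assumptions, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical nonlinear system: y depends only on variables 0 and 2 near x0.
def f(X):
    return np.sin(X[:, 0]) + X[:, 2] ** 2

d, n_samples = 5, 8000
X = rng.uniform(-1, 1, size=(n_samples, d))
y = f(X) + 0.01 * rng.standard_normal(n_samples)

x0 = np.array([0.5, 0.0, 0.5, 0.0, 0.0])   # point of interest

# Top-down sketch: fit a local linear model around x0; the fitted slopes estimate
# the partial derivatives, and near-zero slopes mark locally irrelevant variables.
h = 0.5                                    # bandwidth (an illustrative choice)
mask = np.max(np.abs(X - x0), axis=1) < h
A = np.hstack([np.ones((mask.sum(), 1)), X[mask] - x0])
coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
slopes = np.abs(coef[1:])

selected = np.where(slopes > 0.2)[0]       # threshold is a tuning choice
print(selected)                            # variables 0 and 2 should be selected
```

Note how selection is local: at a different x0 the same system can have a different set of relevant variables, which is exactly the "different local dimensions" point above.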
ABSTRACT: Recent research in off-the-grid compressed sensing (CS) has demonstrated that, under certain conditions, one can successfully recover a spectrally sparse signal from a few time-domain samples even though the dictionary is continuous. In particular, atomic norm minimization was proposed in \cite{tang2012csotg} to recover $1$-dimensional spectrally sparse signals. However, in spite of existing research efforts \cite{chi2013compressive}, it remained an open problem how to formulate an equivalent positive semidefinite program for atomic norm minimization in recovering signals with $d$-dimensional ($d\geq 2$) off-the-grid frequencies. In this paper, we settle this problem by proposing equivalent semidefinite programming formulations of atomic norm minimization to recover signals with $d$-dimensional ($d\geq 2$) off-the-grid frequencies.
12/2013.
ABSTRACT: Recent research in off-the-grid compressed sensing (CS) has demonstrated that, under certain conditions, one can successfully recover a spectrally sparse signal from a few time-domain samples even though the dictionary is continuous. In this paper, we extend off-the-grid CS to applications where some prior information about the spectrally sparse signal is known. We specifically consider cases where a few contributing frequencies or poles, but not their amplitudes or phases, are known a priori. Our results show that equipping off-the-grid CS with the known-poles algorithm can increase the probability of recovering all the frequency components.
11/2013.
ABSTRACT: In this paper we introduce an optimized Markov Chain Monte Carlo (MCMC) technique for solving integer least-squares (ILS) problems, which include Maximum Likelihood (ML) detection in Multiple-Input Multiple-Output (MIMO) systems. Two factors contribute to the speed of finding the optimal solution by the MCMC detector: the probability of the optimal solution in the stationary distribution, and the mixing time of the MCMC detector. Firstly, we compute the optimal value of the "temperature" parameter, in the sense that the temperature has the desirable property that once the Markov chain has mixed to its stationary distribution, there is a polynomially small probability ($1/\mbox{poly}(N)$, instead of exponentially small) of encountering the optimal solution. This temperature is shown to be at most $O(\sqrt{SNR}/\ln(N))$, where $SNR$ is the signal-to-noise ratio and $N$ is the problem dimension. Secondly, we study the mixing time of the underlying Markov chain of the proposed MCMC detector. We find that the mixing time of MCMC is closely related to whether there is a local minimum in the lattice structures of ILS problems. For some lattices without local minima, the mixing time of the Markov chain is independent of $SNR$ and grows polynomially in the problem dimension; for lattices with local minima, the mixing time grows unboundedly as $SNR$ grows when the temperature is set, as in conventional wisdom, to be the standard deviation of the noise. Our results suggest that, to ensure fast mixing for a fixed dimension $N$, the temperature for MCMC should instead be set as $\Omega(\sqrt{SNR})$ in general. Simulation results show that the optimized MCMC detector efficiently achieves approximately ML detection in MIMO systems having a huge number of transmit and receive dimensions.
IEEE Transactions on Signal Processing 10/2013; 62(17). · 2.81 Impact Factor
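A toy Gibbs-sampler MCMC detector for a binary integer least-squares problem, compared against exhaustive ML search; the temperature here is a fixed constant rather than the paper's SNR-dependent choice, and all sizes are illustrative:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

# Toy MIMO model y = H x + w with x in {-1,+1}^N: a small integer least-squares problem.
N = 6
H = rng.standard_normal((N, N))
x_true = rng.choice([-1.0, 1.0], size=N)
y = H @ x_true + 0.1 * rng.standard_normal(N)

def cost(x):
    r = y - H @ x
    return r @ r

# Exhaustive ML solution for reference (feasible only for tiny N).
x_ml = min((np.array(v) for v in product([-1.0, 1.0], repeat=N)), key=cost)

# Gibbs-sampler MCMC detector with fixed temperature; the paper's point is that
# the temperature should scale like sqrt(SNR), but here it is simply a constant.
alpha2 = 0.5
x = rng.choice([-1.0, 1.0], size=N)
best = x.copy()
for _ in range(200):                 # sweeps over all coordinates
    for i in range(N):
        x[i] = 1.0
        c_plus = cost(x)
        x[i] = -1.0
        c_minus = cost(x)
        # Conditional flip probability from the Boltzmann-style stationary distribution.
        delta = np.clip((c_plus - c_minus) / (2 * alpha2), -50.0, 50.0)
        x[i] = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(delta)) else -1.0
        if cost(x) < cost(best):
            best = x.copy()

print(np.array_equal(best, x_ml))    # the chain visits the ML solution
```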
ABSTRACT: In compressed sensing problems, $\ell_1$ minimization or Basis Pursuit was known to have the best provable phase transition performance of recoverable sparsity among polynomial-time algorithms. It is of great theoretical and practical interest to find alternative polynomial-time algorithms which perform better than $\ell_1$ minimization. \cite{Icassp reweighted l_1}, \cite{Isit reweighted l_1}, \cite{XuScaingLaw} and \cite{iterativereweightedjournal} have shown that a two-stage reweighted $\ell_1$ minimization algorithm can boost the phase transition performance for signals whose nonzero elements follow an amplitude probability density function (pdf) $f(\cdot)$ whose $t$-th derivative $f^{(t)}(0) \neq 0$ for some integer $t \geq 0$. However, for signals whose nonzero elements are strictly suspended from zero in distribution (for example, constant-modulus signals, only taking values `$+d$' or `$-d$' for some nonzero real number $d$), no polynomial-time signal recovery algorithms were known to provide better phase transition performance than plain $\ell_1$ minimization, especially for dense sensing matrices. In this paper, we show that a polynomial-time algorithm can universally elevate the phase-transition performance of compressed sensing, compared with $\ell_1$ minimization, even for signals with constant-modulus nonzero elements. Contrary to the conventional wisdom that compressed sensing matrices are desired to be isometric, we show that non-isometric matrices are not necessarily bad sensing matrices. In this paper, we also provide a framework for recovering sparse signals when sensing matrices are not isometric.
07/2013.
Article: Precisely Verifying the Null Space Conditions in Compressed Sensing: A Sandwiching Algorithm
ABSTRACT: In this paper, we propose new efficient algorithms to verify the null space condition in compressed sensing (CS). Given an $(n-m) \times n$ ($m>0$) CS matrix $A$ and a positive integer $k$, we are interested in computing $\alpha_k = \max_{\{z: Az=0, z\neq 0\}}\ \max_{\{K: |K|\leq k\}} \frac{\|z_K\|_{1}}{\|z\|_{1}}$, where $K$ represents subsets of $\{1,2,\ldots,n\}$ and $|K|$ is the cardinality of $K$. In particular, we are interested in finding the maximum $k$ such that $\alpha_k < \frac{1}{2}$. However, computing $\alpha_k$ is known to be extremely challenging. In this paper, we first propose a series of new polynomial-time algorithms to compute upper bounds on $\alpha_k$. Based on these new polynomial-time algorithms, we further design a new sandwiching algorithm to compute the \emph{exact} $\alpha_k$ with greatly reduced complexity. When needed, this new sandwiching algorithm also achieves a smooth tradeoff between computational complexity and result accuracy. Empirical results show the performance improvements of our algorithm over existing known methods; our algorithm outputs precise values of $\alpha_k$ with much lower complexity than exhaustive search.
06/2013.
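For intuition about the quantity $\alpha_k$: when the null space of $A$ is one-dimensional, spanned by a single vector $z$, $\alpha_k$ reduces to a closed form (the sum of the $k$ largest entries of $|z|$ divided by $\|z\|_1$). The sketch below checks only this special case numerically; the general case is what the sandwiching algorithm addresses:

```python
import numpy as np

rng = np.random.default_rng(3)

# A generic (n-1) x n matrix has a one-dimensional null space spanned by z.
n = 10
A = rng.standard_normal((n - 1, n))

_, _, Vt = np.linalg.svd(A)          # null space = last right singular vector
z = Vt[-1]
assert np.allclose(A @ z, 0, atol=1e-10)

# Closed-form alpha_k for a 1-D null space: k largest entries of |z| over ||z||_1.
abs_z = np.sort(np.abs(z))[::-1]
alphas = np.cumsum(abs_z) / abs_z.sum()
for k, a in enumerate(alphas, start=1):
    status = "recovery guaranteed" if a < 0.5 else "not guaranteed"
    print(f"k = {k}: alpha_k = {a:.3f} ({status})")
```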
ABSTRACT: The problem of sequentially finding an independent and identically distributed (i.i.d.) sequence that is drawn from a probability distribution $F_1$, by searching over multiple sequences some of which are drawn from $F_1$ and the others of which are drawn from a different distribution $F_0$, is considered. The sensor is allowed to take one observation at a time. It has been shown in a recent work that if each observation comes from one sequence, the Cumulative Sum (CUSUM) test is optimal. In this paper, we propose a new approach in which each observation can be a linear combination of samples from multiple sequences. The test has two stages. In the first stage, the scanning stage, one takes a linear combination of a pair of sequences with the hope of scanning through sequences that are unlikely to be generated from $F_1$ and quickly identifying a pair of sequences such that at least one of them is highly likely to be generated by $F_1$. In the second stage, the refinement stage, one examines the pair identified in the first stage more closely and picks one sequence to be the final choice. The problem under this setup belongs to a class of multiple stopping time problems; in particular, it is an ordered, two-concatenated Markov stopping time problem. We obtain the optimal solution using tools from multiple stopping time theory. Numerical simulation results show that this search strategy can significantly reduce the search time, especially when sequences from $F_{1}$ are rare.
02/2013.
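A simplified single-observation scanning baseline (one sequence at a time with a CUSUM-style statistic), not the paper's mixed-observation two-stage test; the Gaussian distributions and threshold are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy search problem: many sequences, most from F0 = N(0,1), one from F1 = N(1,1).
num_seq, target = 50, 17
mus = np.zeros(num_seq)
mus[target] = 1.0

def llr(x, mu0=0.0, mu1=1.0):
    # Log-likelihood ratio of one unit-variance Gaussian sample, F1 vs F0.
    return (mu1 - mu0) * x - (mu1**2 - mu0**2) / 2

# CUSUM-style scan, one observation at a time: when the running statistic hits
# zero, abandon the current sequence and move on; when it crosses the threshold,
# declare the current sequence to be from F1.
threshold = 12.0
seq, stat, samples_used = 0, 0.0, 0
while True:
    x = rng.normal(mus[seq], 1.0)
    samples_used += 1
    stat = max(0.0, stat + llr(x))
    if stat >= threshold:
        break                          # declare this sequence is from F1
    if stat == 0.0:
        seq = (seq + 1) % num_seq      # scan onward

print(seq, samples_used)               # with high probability, seq == target
```

Null sequences are abandoned after only a couple of samples on average (the LLR drifts down under $F_0$), which is the intuition the paper's scanning stage accelerates further by mixing pairs of sequences.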
ABSTRACT: In this paper, we consider using total variation minimization to recover signals whose gradients have a sparse support, from a small number of measurements. We establish the proof of the performance guarantee of total variation (TV) minimization in recovering \emph{one-dimensional} signals with sparse gradient support. This partially answers the open problem of proving the fidelity of total variation minimization in such a setting \cite{TVMulti}. In particular, we have shown that the recoverable gradient sparsity can grow linearly with the signal dimension when TV minimization is used. Recoverable sparsity thresholds of TV minimization are explicitly computed for 1-dimensional signals by using the Grassmann angle framework. We also extend our results to TV minimization for multidimensional signals. Stability of recovering the signal itself using 1D TV minimization has also been established through a property called the "almost Euclidean property for the 1-dimensional TV norm". We further give a lower bound on the number of random Gaussian measurements for recovering 1-dimensional signal vectors with $N$ elements and $K$-sparse gradients. Interestingly, the number of needed measurements is lower bounded by $\Omega((NK)^{\frac{1}{2}})$, rather than the $O(K\log(N/K))$ bound frequently appearing in recovering $K$-sparse signal vectors.
01/2013.
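A small sketch of the objects involved: the 1D TV seminorm, the gradient sparsity $K$, and a numeric comparison of the $\Omega((NK)^{1/2})$ lower bound against $K\log(N/K)$ for one illustrative signal:

```python
import numpy as np

# A piecewise-constant signal has a sparse gradient, the structure TV minimization targets.
x = np.concatenate([np.zeros(40), 3 * np.ones(30), -2 * np.ones(30)])
grad = np.diff(x)
K = np.count_nonzero(grad)        # gradient sparsity: 2 jumps
tv_norm = np.abs(grad).sum()      # 1D total variation: |3-0| + |-2-3| = 8
print(K, tv_norm)

# The paper's lower bound vs. the familiar sparse-recovery rate, for this N and K.
N = len(x)
print(np.sqrt(N * K), K * np.log(N / K))   # sqrt(NK) exceeds K*log(N/K) here
```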
Article: Toeplitz Matrix Based Sparse Error Correction in System Identification: Outliers and Random Noises
ABSTRACT: In this paper, we consider robust system identification under sparse outliers and random noises. In our problem, system parameters are observed through a Toeplitz matrix. All observations are subject to random noises and a few are corrupted with outliers. We reduce this problem of system identification to a sparse error correction problem using a Toeplitz-structured real-numbered coding matrix. We prove the performance guarantee of Toeplitz-structured matrices in sparse error correction. Thresholds on the percentage of correctable errors for Toeplitz-structured matrices are also established. When both outliers and observation noise are present, we show that the estimation error goes to 0 asymptotically as long as the probability density function of the observation noise is not "vanishing" around 0.
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 12/2012. · 4.63 Impact Factor
ABSTRACT: In this paper, we study the hypothesis testing problem of determining, among $n$ random variables, the $k$ random variables which have different probability distributions from the remaining $(n-k)$ random variables. Instead of using separate measurements of each individual random variable, we propose to use mixed measurements which are functions of multiple random variables. It is demonstrated that $O\left(\frac{k \log(n)}{\min_{P_i, P_j} C(P_i, P_j)}\right)$ observations are sufficient for correctly identifying the $k$ anomalous random variables with high probability, where $C(P_i, P_j)$ is the Chernoff information between two possible distributions $P_i$ and $P_j$ for the proposed mixed observations. We characterize the Chernoff information under fixed time-invariant mixed observations, random time-varying mixed observations, and deterministic time-varying mixed observations, respectively; in our derivations, we introduce the \emph{inner and outer conditional Chernoff information} for time-varying measurements. It is demonstrated that mixed observations can strictly improve the error exponent of hypothesis testing over separate observations of individual random variables. We also characterize the optimal mixed observations maximizing the error exponent, and derive an explicit construction of the optimal mixed observations for the case of Gaussian random variables. These results imply that mixed observations of random variables can reduce the number of required samples in hypothesis testing applications. Compared with compressed sensing problems, this paper considers random variables which are allowed to dramatically change values across different measurements.
08/2012.
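The Chernoff information for two unit-variance Gaussians can be checked numerically against its closed form $(\mu_1-\mu_0)^2/8$; this is the exponent appearing in the $O(k\log(n)/\min C(P_i,P_j))$ sample count above. The means and integration grid below are illustrative:

```python
import numpy as np

# Chernoff information C(P0, P1) = max_s -log( integral of p0^s * p1^(1-s) ),
# computed on a grid for two unit-variance Gaussians.
mu0, mu1 = 0.0, 2.0
t = np.linspace(-12.0, 14.0, 50001)
dt = t[1] - t[0]
p0 = np.exp(-((t - mu0) ** 2) / 2) / np.sqrt(2 * np.pi)
p1 = np.exp(-((t - mu1) ** 2) / 2) / np.sqrt(2 * np.pi)

# Maximize over s by grid search; for equal variances the optimum is s = 1/2.
s_grid = np.linspace(0.01, 0.99, 99)
C = max(-np.log(np.sum(p0**s * p1 ** (1 - s)) * dt) for s in s_grid)

print(C, (mu1 - mu0) ** 2 / 8)    # numeric value matches the closed form, 0.5
```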
ABSTRACT: In this paper, we consider robust system identification under sparse outliers and random noises. In this problem, system parameters are observed through a Toeplitz matrix. All observations are subject to random noises and a few are corrupted with outliers. We reduce this problem of system identification to a sparse error correction problem using a Toeplitz-structured real-numbered coding matrix. We prove the performance guarantee of Toeplitz-structured matrices in sparse error correction. Thresholds on the percentage of correctable errors for Toeplitz-structured matrices are established. When both outliers and observation noise are present, we show that the estimation error goes to 0 asymptotically as long as the probability density function of the observation noise is not "vanishing" around 0. No probabilistic assumptions are imposed on the outliers.
07/2012.
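A toy noise-free instance of the model above: parameters observed through a Toeplitz matrix built from an input sequence, with a few observations hit by gross outliers. The decoder below is an exhaustive support search for illustration only; the paper's guarantees concern $\ell_1$-style decoding, and all sizes and values are illustrative:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)

# Row t of the Toeplitz matrix T holds input samples u[t+p-1], ..., u[t],
# and y = T @ theta + e, where e has a few gross outliers.
p, m, num_outliers = 3, 12, 2
u = rng.standard_normal(m + p - 1)
T = np.array([[u[t + p - 1 - j] for j in range(p)] for t in range(m)])
theta = np.array([0.8, -0.5, 0.3])
y = T @ theta
bad = rng.choice(m, size=num_outliers, replace=False)
y[bad] += 5.0 * rng.standard_normal(num_outliers)   # sparse outliers, no dense noise

# Noise-free decoding sketch: find a small support whose removal leaves a
# consistent overdetermined system (exhaustive here, feasible only for tiny m).
for S in combinations(range(m), num_outliers):
    keep = np.setdiff1d(np.arange(m), S)
    est, *_ = np.linalg.lstsq(T[keep], y[keep], rcond=None)
    if np.allclose(T[keep] @ est, y[keep], atol=1e-8):
        break

print(np.allclose(est, theta))   # exact parameter recovery despite outliers
```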
ABSTRACT: This paper proposes a low-complexity algorithm for blind equalization of data in OFDM-based wireless systems with general constellations. The proposed algorithm is able to recover the data even when the channel changes on a symbol-by-symbol basis, making it suitable for fast fading channels. The proposed algorithm does not require any statistical information about the channel and thus does not suffer from the latency normally associated with blind methods. We also demonstrate how to reduce the complexity of the algorithm, which becomes especially low at high SNR. Specifically, we show that in the high-SNR regime, the number of operations is of the order $O(LN)$, where $L$ is the cyclic prefix length and $N$ is the total number of subcarriers. Simulation results confirm the favorable performance of our algorithm.
IEEE Transactions on Signal Processing 07/2012; 60(12). · 2.81 Impact Factor
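The structural fact an OFDM equalizer relies on, checked numerically: with a cyclic prefix at least as long as the channel memory, linear convolution with the channel becomes circular, so each subcarrier sees a single complex gain. Sizes and constellation below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

N, L = 16, 4                                               # subcarriers, CP length
h = rng.standard_normal(L) + 1j * rng.standard_normal(L)   # channel taps

X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N) # QPSK data
x = np.fft.ifft(X)                                         # OFDM modulation
tx = np.concatenate([x[-L:], x])                           # prepend cyclic prefix

full = np.convolve(tx, h)                                  # channel (noise-free)
rx = full[L : L + N]                                       # strip the prefix
Y = np.fft.fft(rx)

H = np.fft.fft(h, N)                                       # per-subcarrier gains
print(np.allclose(Y, H * X))                               # diagonal channel model holds
```

This diagonal (one-tap-per-subcarrier) structure is what keeps the per-symbol processing cheap; the paper's contribution is estimating the data blindly, without knowing `H`.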
ABSTRACT: In this paper, we study the mixing time of Markov Chain Monte Carlo (MCMC) for integer least-squares (LS) optimization problems. It is found that the mixing time of MCMC for integer LS problems depends on the structure of the underlying lattice. More specifically, the mixing time of MCMC is closely related to whether there is a local minimum in the lattice structure. For some lattices, the mixing time of the Markov chain is independent of the signal-to-noise ratio ($SNR$) and grows polynomially in the problem dimension; while for some lattices, the mixing time grows unboundedly as $SNR$ grows. Both theoretical and empirical results suggest that to ensure fast mixing, the temperature for MCMC should often grow positively as the $SNR$ increases. We also derive the probability that there exist local minima in an integer least-squares problem, which can be as high as $\frac{1}{3}-\frac{1}{\sqrt{5}}+\frac{2\arctan(\sqrt{5/3})}{\sqrt{5}\pi}$.
03/2012.
ABSTRACT: Determining the susceptibility distribution from the magnetic field measured in a magnetic resonance (MR) scanner is an ill-posed inverse problem, because of the presence of zeroes in the convolution kernel of the forward problem. An algorithm called morphology enabled dipole inversion (MEDI), which incorporates spatial prior information, has been proposed to generate a quantitative susceptibility map (QSM). The accuracy of QSM can be validated experimentally. However, there is not yet a rigorous mathematical demonstration of accuracy for a general regularized approach or for MEDI specifically. The error in the susceptibility map reconstructed by MEDI is expressed in terms of the acquisition noise and the error in the spatial prior information. A detailed analysis demonstrates that the error in the susceptibility map reconstructed by MEDI is bounded by a linear function of these two error sources. Numerical analysis confirms that the error of the susceptibility map reconstructed by MEDI is on the same order as the noise in the original MRI data, and that comprehensive edge detection leads to reduced model error in MEDI. Additional phantom validation and human brain imaging demonstrated the practicality of the MEDI method.
IEEE Transactions on Medical Imaging 03/2012; 31(3):816-24.
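The source of the ill-posedness can be seen directly in the k-space dipole kernel $D(k)=1/3-k_z^2/|k|^2$, which vanishes on the magic-angle cone $k_z^2=|k|^2/3$; a small numerical check (grid size and tolerance are illustrative):

```python
import numpy as np

# Build the dipole kernel on a discrete k-space grid; frequencies on the cone
# carry (almost) no information, which is why spatial priors such as MEDI are needed.
n = 64
k = np.fft.fftfreq(n)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = np.inf              # avoid 0/0 at the k-space origin
D = 1.0 / 3.0 - kz**2 / k2

near_zero = np.abs(D) < 1e-2      # grid points close to the zero cone
print(near_zero.mean())           # a nontrivial fraction of k-space is nearly unobservable
```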
ABSTRACT: It is well known that $\ell_1$ minimization can be used to recover sufficiently sparse unknown signals from compressed linear measurements. In fact, exact thresholds on the sparsity, as a function of the ratio between the system dimensions, such that with high probability almost all sparse signals can be recovered from i.i.d. Gaussian measurements, have been computed and are referred to as "weak thresholds" \cite{D}. In this paper, we introduce a reweighted $\ell_1$ recovery algorithm composed of two steps: a standard $\ell_1$ minimization step to identify a set of entries where the signal is likely to reside, and a weighted $\ell_1$ minimization step where entries outside this set are penalized. For signals where the non-sparse component entries are independent and identically drawn from certain classes of distributions (including most well-known continuous distributions), we prove a \emph{strict} improvement in the weak recovery threshold. Our analysis suggests that the level of improvement in the weak threshold depends on the behavior of the distribution at the origin. Numerical simulations verify the distribution dependence of the threshold improvement very well, and suggest that in the case of i.i.d. Gaussian nonzero entries, the improvement can be quite impressive: over 20% in the example we consider.
CoRR 11/2011; abs/1111.1396.
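A sketch of the two-stage scheme using an LP formulation of weighted $\ell_1$ minimization (requires SciPy; the problem sizes, the support-detection rule, and the penalty weight 10 are illustrative assumptions, not the paper's tuned choices):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)

def weighted_l1(A, y, w):
    """min sum_i w_i |z_i|  s.t.  A z = y, as an LP in the split (z+, z-)."""
    m, n = A.shape
    res = linprog(np.concatenate([w, w]),
                  A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]

n, m, k = 40, 18, 8
A = rng.standard_normal((m, n))
x = np.zeros(n)
supp = rng.choice(n, size=k, replace=False)
x[supp] = rng.standard_normal(k)
y = A @ x

# Stage 1: plain l1 minimization (all weights equal).
z1 = weighted_l1(A, y, np.ones(n))

# Stage 2: penalize entries outside the apparent support of the stage-1 estimate.
w = np.where(np.abs(z1) > 0.1 * np.abs(z1).max(), 1.0, 10.0)
z2 = weighted_l1(A, y, w)

print(np.linalg.norm(z1 - x), np.linalg.norm(z2 - x))   # stage 2 typically does better
```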
ABSTRACT: $\ell_1$ minimization is often used for recovering sparse signals from an underdetermined linear system. In this paper, we focus on finding sharp performance bounds on recovering approximately sparse signals using $\ell_1$ minimization under noisy measurements. While the restricted isometry property is powerful for the analysis of recovering approximately sparse signals with noisy measurements, the known bounds on the achievable sparsity level can be quite loose. The neighborly polytope analysis which yields sharp bounds for perfectly sparse signals cannot be readily generalized to approximately sparse signals. We start from analyzing a necessary and sufficient condition, the "balancedness" property of linear subspaces, for achieving a certain signal recovery accuracy. Then we give a unified null space Grassmann angle-based geometric framework to give sharp bounds on this "balancedness" property of linear subspaces. By investigating the "balancedness" property, this unified framework characterizes sharp quantitative tradeoffs between signal sparsity and the recovery accuracy of $\ell_1$ minimization for approximately sparse signals. As a consequence, this generalizes the neighborly polytope result for perfectly sparse signals. Besides the robustness in the "strong" sense for all sparse signals, we also discuss the notions of "weak" and "sectional" robustness. Our results concern fundamental properties of linear subspaces and so may be of independent mathematical interest.
IEEE Transactions on Information Theory 11/2011. · 2.62 Impact Factor
ABSTRACT: We investigate the problem of reconstructing a high-dimensional nonnegative sparse vector from lower-dimensional linear measurements. While much work has focused on dense measurement matrices, sparse measurement schemes can be more efficient with respect to both signal sensing and reconstruction complexity. Known constructions use the adjacency matrices of expander graphs, which often lead to recovery algorithms much more efficient than $\ell_1$ minimization. However, prior constructions of sparse measurement matrices rely on expander graphs with very high expansion coefficients, which make the construction of such graphs difficult and the size of the recoverable sets very small. In this paper, we introduce sparse measurement matrices for the recovery of nonnegative vectors, using perturbations of the adjacency matrices of expander graphs requiring much smaller expansion coefficients, hereafter referred to as minimal expanders. We show that when $\ell_1$ minimization is used as the reconstruction method, these constructions allow the recovery of signals that are almost three orders of magnitude larger compared to the existing theoretical results for sparse measurement matrices. We provide, for the first time, tight upper bounds for the so-called weak and strong recovery thresholds when $\ell_1$ minimization is used. We further show that the success of $\ell_1$ optimization is equivalent to the existence of a "unique" vector in the set of solutions to the linear equations, which enables alternative algorithms for $\ell_1$ minimization. We further show that the defined minimal expansion property is necessary for all measurement matrices for compressive sensing (even when the nonnegativity assumption is removed), therefore implying that our construction is tight.
We finally present a novel recovery algorithm that exploits expansion and is much more computationally efficient compared to $\ell_1$ minimization.
IEEE Transactions on Signal Processing 02/2011. · 2.81 Impact Factor
IEEE Transactions on Information Theory 01/2011; 57:6894-6919.
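A sketch of the kind of sparse measurement matrix discussed above: the 0/1 biadjacency matrix of a random $d$-left-regular bipartite graph, with a brute-force check of the neighborhoods of small left sets (all sizes are illustrative, and this does not verify the paper's minimal expansion property itself):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(8)

# Each of the n left nodes (signal entries) connects to d of the m right nodes
# (measurements); the resulting sparse 0/1 matrix is the kind the paper perturbs.
n, m, d = 20, 12, 3
A = np.zeros((m, n), dtype=int)
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1

# Brute-force check of expansion for all left sets of size 2: a set S should
# reach close to |S|*d right nodes; report the worst-case neighborhood size.
worst = min(np.count_nonzero(A[:, list(S)].sum(axis=1))
            for S in combinations(range(n), 2))
print((A.sum(axis=0) == d).all(), worst)   # left-regularity holds: True
```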
Publication Stats
509 Citations
37.10 Total Impact Points
Institutions

2014
University of Iowa, Department of Electrical and Computer Engineering
Iowa City, Iowa, United States

2007–2011
California Institute of Technology, Department of Electrical Engineering
Pasadena, CA, United States

2010
Technical University of Denmark, Department of Informatics and Mathematical Modelling
Copenhagen, Capital Region, Denmark

2008
Purdue University
West Lafayette, Indiana, United States