Jason T. Parker

Air Force Research Laboratory, Washington, D.C., United States

Publications (8) · 12.87 Total impact

  • Jason T. Parker · Yan Shou · Philip Schniter ·
    ABSTRACT: We propose a scheme to estimate the parameters $b_i$ and $c_j$ of the bilinear form $z_m=\sum_{i,j} b_i z_m^{(i,j)} c_j$ from noisy measurements $\{y_m\}_{m=1}^M$, where $y_m$ and $z_m$ are related through an arbitrary likelihood function and $z_m^{(i,j)}$ are known. Our scheme is based on generalized approximate message passing (G-AMP): it treats $b_i$ and $c_j$ as random variables and $z_m^{(i,j)}$ as an i.i.d. Gaussian tensor in order to derive a tractable simplification of the sum-product algorithm in the large-system limit. It generalizes previous instances of bilinear G-AMP, such as those that estimate matrices $\boldsymbol{B}$ and $\boldsymbol{C}$ from a noisy measurement of $\boldsymbol{Z}=\boldsymbol{BC}$, allowing the application of AMP methods to problems such as self-calibration, blind deconvolution, and matrix compressive sensing. Numerical experiments confirm the accuracy and computational efficiency of the proposed approach.
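
The bilinear model above becomes linear after "lifting" to the outer product $W = bc^T$. A minimal numpy sketch under a Gaussian likelihood, using lifting plus a rank-one projection as a toy stand-in for the paper's G-AMP estimator (all sizes and the noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
M, Nb, Nc = 200, 5, 4

Z = rng.standard_normal((M, Nb, Nc))       # known tensor z_m^{(i,j)}
b_true = rng.standard_normal(Nb)
c_true = rng.standard_normal(Nc)

# z_m = sum_{i,j} b_i Z[m,i,j] c_j, observed through Gaussian noise.
z = np.einsum('mij,i,j->m', Z, b_true, c_true)
y = z + 0.01 * rng.standard_normal(M)

# Toy stand-in: lift to W = b c^T, which makes the model linear in W,
# then project W back to rank one via the SVD.  (The paper's G-AMP
# scheme instead works directly on b and c with priors.)
W, *_ = np.linalg.lstsq(Z.reshape(M, Nb * Nc), y, rcond=None)
U, s, Vt = np.linalg.svd(W.reshape(Nb, Nc))
b_hat = U[:, 0] * np.sqrt(s[0])
c_hat = Vt[0] * np.sqrt(s[0])

z_hat = np.einsum('mij,i,j->m', Z, b_hat, c_hat)
print(np.linalg.norm(z - z_hat) / np.linalg.norm(z))   # small residual
```

Note that $b$ and $c$ are only identifiable up to a scalar, but the product $b_i z_m^{(i,j)} c_j$, and hence $\hat{z}$, is unaffected by that ambiguity.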
  • Jason T. Parker · Philip Schniter · Volkan Cevher ·
    ABSTRACT: We extend the generalized approximate message passing (G-AMP) approach, originally proposed for high-dimensional generalized-linear regression in the context of compressive sensing, to the generalized-bilinear case, which enables its application to matrix completion, robust PCA, dictionary learning, and related matrix-factorization problems. In the first part of the paper, we derive our Bilinear G-AMP (BiG-AMP) algorithm as an approximation of the sum-product belief propagation algorithm in the high-dimensional limit, where central-limit theorem arguments and Taylor-series approximations apply, and under the assumption of statistically independent matrix entries with known priors. In addition, we propose an adaptive damping mechanism that aids convergence under finite problem sizes, an expectation-maximization (EM)-based method to automatically tune the parameters of the assumed priors, and two rank-selection strategies. In the second part of the paper, we discuss the specializations of EM-BiG-AMP to the problems of matrix completion, robust PCA, and dictionary learning, and present the results of an extensive empirical study comparing EM-BiG-AMP to state-of-the-art algorithms on each problem. Our numerical results, using both synthetic and real-world datasets, demonstrate that EM-BiG-AMP yields excellent reconstruction accuracy (often best in class) while maintaining competitive runtimes and avoiding the need to tune algorithmic parameters.
    IEEE Transactions on Signal Processing 10/2013; 62(22). DOI:10.1109/TSP.2014.2357776 · 2.79 Impact Factor
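
BiG-AMP itself is an involved message-passing algorithm; as a hedged illustration of the matrix-completion problem it targets, here is a toy baseline that recovers Z = BC from a subset of entries using spectral initialization and alternating least squares (sizes, rank, and the 40% sampling rate are arbitrary choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 40, 30, 3
Z = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.4            # observe ~40% of Z = BC

# Spectral initialization: rank-r SVD of the zero-filled, rescaled data.
U, s, Vt = np.linalg.svd(np.where(mask, Z, 0.0) / 0.4, full_matrices=False)
Bh = U[:, :r] * s[:r]
Ch = np.zeros((r, n))

# Alternating least squares over the observed entries only.
for _ in range(50):
    for j in range(n):                     # refit each column of Ch
        rows = mask[:, j]
        Ch[:, j], *_ = np.linalg.lstsq(Bh[rows], Z[rows, j], rcond=None)
    for i in range(m):                     # refit each row of Bh
        cols = mask[i]
        Bh[i], *_ = np.linalg.lstsq(Ch[:, cols].T, Z[i, cols], rcond=None)

err = np.linalg.norm(Z - Bh @ Ch) / np.linalg.norm(Z)
print(err)
```

Unlike this noiseless baseline, EM-BiG-AMP additionally handles generalized (non-Gaussian) likelihoods, learns the prior parameters, and selects the rank.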
  • Gregory Arnold · Matthew Ferrara · Jason T. Parker ·
    ABSTRACT: Shape- and motion-reconstruction is inherently ill-conditioned such that estimates rapidly degrade in the presence of noise, outliers, and missing data. For moving-target radar imaging applications, methods which infer the underlying geometric invariance within back-scattered data are the only known way to recover completely arbitrary target motion. We previously demonstrated algorithms that recover the target motion and shape, even with very high data drop-out (e.g., greater than 75%), which can happen due to self-shadowing, scintillation, and destructive-interference effects. We did this by combining our previous results, that a set of rigid scattering centers forms an elliptical manifold, with new methods to estimate low-rank subspaces via convex optimization routines. This result is especially significant because it will enable us to utilize more data, ultimately improving the stability of the motion-reconstruction process. Since then, we developed a feature-based shape- and motion-estimation scheme based on newly developed object-image relations (OIRs) for moving targets collected in bistatic measurement geometries. In addition to generalizing the previous OIR-based radar imaging techniques from monostatic to bistatic geometries, our formulation allows us to image multiple closely-spaced moving targets, each of which is allowed to exhibit missing data due to target self-shadowing as well as extreme outliers (scattering centers that are inconsistent with the assumed physical or geometric models). The new method is based on exploiting the underlying structure of the model equations, that is, far-field radar data matrices can be decomposed into multiple low-rank subspaces while simultaneously locating sparse outliers.
    SPIE Defense, Security, and Sensing; 05/2013
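
The low-rank-plus-sparse-outlier decomposition described above can be illustrated with a toy numerical sketch. Here a crude threshold-and-refit alternation stands in for the convex-optimization routines the paper uses; the matrix size, rank, outlier fraction, and outlier magnitude are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 30, 2
L0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # low-rank part
S0 = np.zeros((n, n))
out = rng.random((n, n)) < 0.05                                 # 5% outliers
S0[out] = 20 * np.sign(rng.standard_normal(out.sum()))
M = L0 + S0                                                     # observed data

# Crude alternation: flag large residuals as outliers, then refit a
# rank-r approximation to what remains.
L = np.zeros((n, n))
for _ in range(10):
    S = np.where(np.abs(M - L) > 6, M - L, 0.0)     # sparse outlier estimate
    U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
    L = (U[:, :r] * s[:r]) @ Vt[:r]                 # low-rank estimate

err = np.linalg.norm(L - L0) / np.linalg.norm(L0)
print(err)
```

The convex formulations (nuclear norm plus l1) cited in the abstract achieve the same separation with recovery guarantees and without a known rank or threshold.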
  • Matthew Ferrara · Jason T Parker · Margaret Cheney ·
    ABSTRACT: Image acquisition systems such as synthetic aperture radar (SAR) and magnetic resonance imaging often measure irregularly spaced Fourier samples of the desired image. In this paper we show the relationship between sample locations, their associated backprojection weights, and image resolution as characterized by the resulting point spread function (PSF). Two new methods for computing data weights, based on different optimization criteria, are proposed. The first method, which solves a maximal-eigenvector problem, optimizes a PSF-derived resolution metric which is shown to be equivalent to the volume of the Cramer–Rao (positional) error ellipsoid in the uniform-weight case. The second approach utilizes as its performance metric the Frobenius error between the PSF operator and the ideal delta function, and is an extension of a previously reported algorithm. Our proposed extension appropriately regularizes the weight estimates in the presence of noisy data and eliminates the superfluous issue of image discretization in the choice of data weights. The Frobenius-error approach results in a Tikhonov-regularized inverse problem whose Tikhonov weights are dependent on the locations of the Fourier data as well as the noise variance. The two new methods are compared against several state-of-the-art weighting strategies for synthetic multistatic point-scatterer data, as well as an 'interrupted SAR' dataset representative of in-band interference commonly encountered in very high frequency radar applications.
    Inverse Problems 04/2013; 29(5):054007. DOI:10.1088/0266-5611/29/5/054007 · 1.32 Impact Factor
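
For irregular Fourier sample locations $k_m$ with backprojection weights $w_m$, the point spread function is $\mathrm{PSF}(x)=\sum_m w_m e^{j2\pi k_m\cdot x}$, the standard relationship the paper builds its weight-design criteria on. A minimal sketch with uniform weights (the sample locations and image grid below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
M = 300
K = rng.uniform(-1, 1, size=(M, 2))   # irregular 2-D Fourier sample locations
w = np.ones(M) / M                    # uniform backprojection weights

# Evaluate PSF(x) = sum_m w_m exp(j 2*pi K_m . x) on an image grid.
g = np.linspace(-2, 2, 81)
X, Y = np.meshgrid(g, g, indexing='ij')
grid = np.stack([X.ravel(), Y.ravel()], axis=1)
psf = (w * np.exp(2j * np.pi * grid @ K.T)).sum(axis=1).reshape(X.shape)

# The mainlobe peaks at the origin with value sum(w); the sidelobe
# structure depends on the sample geometry and the chosen weights.
peak = np.unravel_index(np.argmax(np.abs(psf)), psf.shape)
print(peak, np.abs(psf).max())
```

The paper's two methods choose $w$ to shape this PSF, one by optimizing a resolution metric via a maximal-eigenvector problem and one by minimizing the Frobenius error to an ideal delta function.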
  • Hatim F Alqadah · Matthew Ferrara · Howard Fan · Jason T Parker ·
    ABSTRACT: The linear sampling method (LSM) is a qualitative image reconstruction approach and a viable alternative, for obstacle support identification, to the well-studied filtered backprojection (FBP), which depends on a linearized forward scattering model. Of practical interest is the imaging of obstacles from sparse-aperture far-field data under a fixed single-frequency mode of operation. Under this scenario, the Tikhonov regularization typically applied to LSM produces poor images that fail to capture the obstacle boundary. In this paper, we employ an alternative regularization strategy based on constraining the sparsity of the solution's spatial gradient. Two regularization approaches based on the spatial gradient are developed. A numerical comparison to FBP demonstrates that the new method's ability to account for aspect-dependent scattering permits more accurate reconstruction of concave obstacles, whereas a comparison to Tikhonov-regularized LSM demonstrates that the proposed approach significantly improves obstacle recovery with sparse-aperture data.
    IEEE Transactions on Image Processing 12/2011; 21(4):2062-74. DOI:10.1109/TIP.2011.2177992 · 3.63 Impact Factor
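
The gradient-sparsity idea can be sketched in 1-D (the paper works with 2-D LSM indicator functions; the reparameterization, signal, and regularization weight below are purely illustrative). Writing a signal as the cumulative sum of its jumps turns an l1 penalty on the spatial gradient into an ordinary LASSO, here solved with FISTA:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40
x_true = np.concatenate([np.zeros(20), np.ones(20)])  # piecewise constant
y = x_true + 0.05 * rng.standard_normal(n)

# x = S u with S the cumulative-sum operator, so the gradient of x is u:
# gradient sparsity of x becomes ordinary sparsity of u.
S = np.tril(np.ones((n, n)))
lam = 0.1
t = 1.0 / np.linalg.norm(S, 2) ** 2    # step size 1/L

# FISTA for  min_u 0.5*||S u - y||^2 + lam*||u||_1
u = np.zeros(n)
z = u.copy()
s = 1.0
for _ in range(3000):
    v = z - t * (S.T @ (S @ z - y))
    u_new = np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)
    s_new = (1 + np.sqrt(1 + 4 * s * s)) / 2
    z = u_new + ((s - 1) / s_new) * (u_new - u)
    u, s = u_new, s_new

x_hat = S @ u
print(np.argmax(np.abs(u[1:])) + 1)    # index of the dominant recovered jump
```

A Tikhonov (l2) penalty on the gradient would instead smear the jump, which is the qualitative failure mode the paper attributes to Tikhonov-regularized LSM at obstacle boundaries.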
  • Jason T. Parker · Volkan Cevher · Philip Schniter ·
    ABSTRACT: In this work, we consider a general form of noisy compressive sensing (CS) when there is uncertainty in the measurement matrix as well as in the measurements. Matrix uncertainty is motivated by practical cases in which there are imperfections or unknown calibration parameters in the signal acquisition hardware. While previous work has focused on analyzing and extending classical CS algorithms like the LASSO and Dantzig selector for this problem setting, we propose a new algorithm whose goal is either minimization of mean-squared error or maximization of posterior probability in the presence of these uncertainties. In particular, we extend the Approximate Message Passing (AMP) approach originally proposed by Donoho, Maleki, and Montanari, and recently generalized by Rangan, to the case of probabilistic uncertainties in the elements of the measurement matrix. Empirically, we show that our approach performs near oracle bounds. We then show that our matrix-uncertain AMP can be applied in an alternating fashion to learn both the unknown measurement matrix and signal vector. We also present a simple analysis showing that, for suitably large systems, it suffices to treat uniform matrix uncertainty as additive white Gaussian noise.
    Conference Record of the 45th Asilomar Conference on Signals, Systems and Computers; 11/2011; DOI:10.1109/ACSSC.2011.6190118
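
The abstract's final claim, that uniform matrix uncertainty can be treated as additive white Gaussian noise for suitably large systems, is easy to check numerically. In this sketch the dimensions, sparsity, and uncertainty level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 200, 400
sigma_E = 0.1

A = rng.standard_normal((M, N)) / np.sqrt(M)   # nominal measurement matrix
x = np.zeros(N)
x[rng.choice(N, 20, replace=False)] = rng.standard_normal(20)

# Uniform matrix uncertainty: every entry of A gets an i.i.d. perturbation.
E = sigma_E * rng.standard_normal((M, N)) / np.sqrt(M)
y = (A + E) @ x                                # = A @ x + w, with w = E @ x

# Each entry of w = E @ x is Gaussian with variance sigma_E^2 ||x||^2 / M,
# i.e., the perturbation acts like additive white Gaussian noise.
w = E @ x
pred_var = sigma_E ** 2 * np.dot(x, x) / M
print(w.var(), pred_var)
```

The matrix-uncertain AMP algorithm goes further than this equivalence by exploiting per-entry uncertainty statistics and by alternating to learn the matrix itself.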
  • Lee C Potter · Emre Ertin · Jason T Parker · Mujdat Cetin ·
    ABSTRACT: Remote sensing with radar is typically an ill-posed linear inverse problem: a scene is to be inferred from limited measurements of scattered electric fields. Parsimonious models provide a compressed representation of the unknown scene and offer a means for regularizing the inversion task. The emerging field of compressed sensing combines nonlinear reconstruction algorithms and pseudorandom linear measurements to provide reconstruction guarantees for sparse solutions to linear inverse problems. This paper surveys the use of sparse reconstruction algorithms and randomized measurement strategies in radar processing. Although the two themes have a long history in radar literature, the accessible framework provided by compressed sensing illuminates the impact of joining these themes. Potential future directions are conjectured both for extension of theory motivated by practice and for modification of practice based on theoretical insights.
    Proceedings of the IEEE 07/2010; 98(6):1006-1020. DOI:10.1109/JPROC.2009.2037526 · 4.93 Impact Factor
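
The combination the survey describes, pseudorandom linear measurements plus a nonlinear sparse reconstruction, can be sketched with a textbook example: iterative soft thresholding (ISTA) for the LASSO on a Gaussian measurement matrix. All dimensions and the regularization weight are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
M, N, K = 80, 200, 8
A = rng.standard_normal((M, N)) / np.sqrt(M)   # pseudorandom measurements
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)   # K-sparse scene
y = A @ x                                      # M << N noiseless measurements

# ISTA for the LASSO:  min_x 0.5*||A x - y||^2 + lam*||x||_1
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2
xh = np.zeros(N)
for _ in range(5000):
    v = xh - step * (A.T @ (A @ xh - y))
    xh = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)

print(np.linalg.norm(xh - x) / np.linalg.norm(x))
```

In the radar setting surveyed here, A encodes the (randomized) waveform and geometry rather than a pure Gaussian draw, and the sparse vector x models a scene of point scatterers.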
  • Jason T. Parker · Matthew Ferrara · Justin Bracken · Braham Himed ·
    ABSTRACT: Traditional high-value monostatic imaging systems employ frequency-diverse pulses to form images from small synthetic apertures. In contrast, RF tomography utilizes a network of spatially diverse sensors to trade geometric diversity for bandwidth, permitting images to be formed with narrowband waveforms. Such a system could use inexpensive sensors with minimal ADC requirements, provide multiple viewpoints into urban canyons and other obscured environments, and offer graceful performance degradation under sensor attrition. However, numerous challenges must be overcome to field and operate such a system, including multistatic autofocus, precision timing requirements, and the development of appropriate image formation algorithms for large, sparsely populated synthetic apertures with anisotropic targets. AFRL has recently constructed an outdoor testing facility to explore these challenges with measured data. Preliminary experimental results are provided for this system, along with a description of remaining challenges and future research directions.
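
The core RF tomography trade, geometric diversity in place of bandwidth, can be sketched with single-frequency bistatic measurements of a point scatterer backprojected by matched filtering. The geometry, wavelength, and transmit/receive pairing below are invented for illustration and do not describe the AFRL testbed:

```python
import numpy as np

rng = np.random.default_rng(6)
k = 2 * np.pi / 1.0                    # single narrowband tone, wavelength 1

# Spatially diverse sensors on a ring; pair each tx with a different rx.
angles = rng.uniform(0, 2 * np.pi, 40)
tx = 50 * np.stack([np.cos(angles), np.sin(angles)], axis=1)
rx = np.roll(tx, 1, axis=0)

# Single-frequency measurements: phase of the bistatic round trip.
target = np.array([1.5, -2.0])
d = np.linalg.norm(target - tx, axis=1) + np.linalg.norm(target - rx, axis=1)
meas = np.exp(-1j * k * d)

# Matched-filter backprojection onto an image grid.
g = np.linspace(-5, 5, 101)
X, Y = np.meshgrid(g, g, indexing='ij')
img = np.zeros(X.shape, complex)
for t, r, ph in zip(tx, rx, meas):
    D = np.hypot(X - t[0], Y - t[1]) + np.hypot(X - r[0], Y - r[1])
    img += ph * np.exp(1j * k * D)

peak = np.unravel_index(np.argmax(np.abs(img)), img.shape)
print(g[peak[0]], g[peak[1]])          # near the true target location
```

With only 40 narrowband looks the image has the high sidelobes of a sparsely populated synthetic aperture, which is exactly the image-formation challenge, along with autofocus and timing, that the abstract identifies.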