Jason T. Parker

Air Force Research Laboratory, Washington, D.C., United States

Publications (12) · 12.87 Total impact

  • Jason T. Parker · Yan Shou · Philip Schniter
    ABSTRACT: We propose a scheme to estimate the parameters $b_i$ and $c_j$ of the bilinear form $z_m=\sum_{i,j} b_i z_m^{(i,j)} c_j$ from noisy measurements $\{y_m\}_{m=1}^M$, where $y_m$ and $z_m$ are related through an arbitrary likelihood function and $z_m^{(i,j)}$ are known. Our scheme is based on generalized approximate message passing (G-AMP): it treats $b_i$ and $c_j$ as random variables and $z_m^{(i,j)}$ as an i.i.d.\ Gaussian tensor in order to derive a tractable simplification of the sum-product algorithm in the large-system limit. It generalizes previous instances of bilinear G-AMP, such as those that estimate matrices $\boldsymbol{B}$ and $\boldsymbol{C}$ from a noisy measurement of $\boldsymbol{Z}=\boldsymbol{BC}$, allowing the application of AMP methods to problems such as self-calibration, blind deconvolution, and matrix compressive sensing. Numerical experiments confirm the accuracy and computational efficiency of the proposed approach.
    No preview · Article · Aug 2015
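    A minimal numerical sketch of the bilinear measurement model above, using an alternating least-squares baseline rather than the authors' G-AMP scheme; all problem sizes, variable names, and the noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
M, Nb, Nc = 200, 5, 4                      # measurements, dim(b), dim(c)

# Known tensor z_m^{(i,j)} and ground-truth parameters b, c
Z_tensor = rng.standard_normal((M, Nb, Nc))
b_true = rng.standard_normal(Nb)
c_true = rng.standard_normal(Nc)

# Bilinear form z_m = sum_{i,j} b_i z_m^{(i,j)} c_j, observed through AWGN
z = np.einsum('i,mij,j->m', b_true, Z_tensor, c_true)
y = z + 0.01 * rng.standard_normal(M)

# Alternating least squares: with c fixed, z is linear in b, and vice versa
b = rng.standard_normal(Nb)
c = rng.standard_normal(Nc)
for _ in range(50):
    A_b = np.einsum('mij,j->mi', Z_tensor, c)      # y ~ A_b @ b
    b = np.linalg.lstsq(A_b, y, rcond=None)[0]
    A_c = np.einsum('i,mij->mj', b, Z_tensor)      # y ~ A_c @ c
    c = np.linalg.lstsq(A_c, y, rcond=None)[0]

# b and c are only identifiable up to a scalar, so compare the outer product
err = np.linalg.norm(np.outer(b, c) - np.outer(b_true, c_true))
print("relative error:", err / np.linalg.norm(np.outer(b_true, c_true)))
```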
  • Jason T. Parker · Philip Schniter · Volkan Cevher
    ABSTRACT: We extend the generalized approximate message passing (G-AMP) approach, originally proposed for high-dimensional generalized-linear regression in the context of compressive sensing, to the generalized-bilinear case, which enables its application to matrix completion, robust PCA, dictionary learning, and related matrix-factorization problems. In the first part of the paper, we derive our Bilinear G-AMP (BiG-AMP) algorithm as an approximation of the sum-product belief propagation algorithm in the high-dimensional limit, where central-limit theorem arguments and Taylor-series approximations apply, and under the assumption of statistically independent matrix entries with known priors. In addition, we propose an adaptive damping mechanism that aids convergence under finite problem sizes, an expectation-maximization (EM)-based method to automatically tune the parameters of the assumed priors, and two rank-selection strategies. In the second part of the paper, we discuss the specializations of EM-BiG-AMP to the problems of matrix completion, robust PCA, and dictionary learning, and present the results of an extensive empirical study comparing EM-BiG-AMP to state-of-the-art algorithms on each problem. Our numerical results, using both synthetic and real-world datasets, demonstrate that EM-BiG-AMP yields excellent reconstruction accuracy (often best in class) while maintaining competitive runtimes and avoiding the need to tune algorithmic parameters.
    Preview · Article · Oct 2013 · IEEE Transactions on Signal Processing
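    To make the matrix-completion specialization concrete, the sketch below recovers a low-rank matrix from a random subset of its entries with plain alternating least squares, a simple baseline rather than BiG-AMP itself; the sizes, observation rate, and known rank are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
M, L, N = 100, 80, 3                             # Z = B C with B (M x N), C (N x L), rank N
B_true = rng.standard_normal((M, N))
C_true = rng.standard_normal((N, L))
Z = B_true @ C_true

mask = rng.random((M, L)) < 0.3                  # observe a random 30% of the entries
Y = np.where(mask, Z, 0.0)

# Alternating least squares on the observed entries (a baseline, not BiG-AMP)
B = rng.standard_normal((M, N))
C = rng.standard_normal((N, L))
for _ in range(100):
    for i in range(M):                           # refit row i of B from its observed entries
        obs = mask[i]
        B[i] = np.linalg.lstsq(C[:, obs].T, Y[i, obs], rcond=None)[0]
    for j in range(L):                           # refit column j of C from its observed entries
        obs = mask[:, j]
        C[:, j] = np.linalg.lstsq(B[obs], Y[obs, j], rcond=None)[0]

print("relative completion error:", np.linalg.norm(B @ C - Z) / np.linalg.norm(Z))
```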
  • Gregory Arnold · Matthew Ferrara · Jason T. Parker
    ABSTRACT: Shape- and motion-reconstruction is inherently ill-conditioned such that estimates rapidly degrade in the presence of noise, outliers, and missing data. For moving-target radar imaging applications, methods which infer the underlying geometric invariance within back-scattered data are the only known way to recover completely arbitrary target motion. We previously demonstrated algorithms that recover the target motion and shape, even with very high data drop-out (e.g., greater than 75%), which can happen due to self-shadowing, scintillation, and destructive-interference effects. We did this by combining our previous results, that a set of rigid scattering centers forms an elliptical manifold, with new methods to estimate low-rank subspaces via convex optimization routines. This result is especially significant because it will enable us to utilize more data, ultimately improving the stability of the motion-reconstruction process. Since then, we developed a feature-based shape- and motion-estimation scheme based on newly developed object-image relations (OIRs) for moving targets collected in bistatic measurement geometries. In addition to generalizing the previous OIR-based radar imaging techniques from monostatic to bistatic geometries, our formulation allows us to image multiple closely-spaced moving targets, each of which is allowed to exhibit missing data due to target self-shadowing as well as extreme outliers (scattering centers that are inconsistent with the assumed physical or geometric models). The new method is based on exploiting the underlying structure of the model equations, that is, far-field radar data matrices can be decomposed into multiple low-rank subspaces while simultaneously locating sparse outliers.
    No preview · Conference Paper · May 2013
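    The final point, decomposing a data matrix into a low-rank part plus sparse outliers, can be illustrated with a standard robust-PCA baseline (principal component pursuit solved by ADMM); this is a generic sketch with assumed sizes and parameters, not the OIR-based algorithm described above:

```python
import numpy as np

def soft(X, t):
    """Elementwise soft threshold (prox of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def svt(X, t):
    """Singular value threshold (prox of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

rng = np.random.default_rng(2)
m, n, r = 60, 60, 2
L_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # low-rank component
S_true = np.zeros((m, n))
idx = rng.random((m, n)) < 0.05                                       # 5% sparse outliers
S_true[idx] = 10.0 * rng.standard_normal(int(idx.sum()))
D = L_true + S_true

# Principal component pursuit:  min ||L||_* + lam*||S||_1  s.t.  L + S = D  (ADMM)
lam = 1.0 / np.sqrt(max(m, n))
mu = 0.25 * m * n / np.abs(D).sum()
L = np.zeros((m, n)); S = np.zeros((m, n)); Y = np.zeros((m, n))
for _ in range(300):
    L = svt(D - S + Y / mu, 1.0 / mu)
    S = soft(D - L + Y / mu, lam / mu)
    Y = Y + mu * (D - L - S)

print("recovered rank:", np.linalg.matrix_rank(L, tol=1e-6))
print("relative low-rank error:", np.linalg.norm(L - L_true) / np.linalg.norm(L_true))
```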
  • Matthew Ferrara · Jason T Parker · Margaret Cheney
    ABSTRACT: Image acquisition systems such as synthetic aperture radar (SAR) and magnetic resonance imaging often measure irregularly spaced Fourier samples of the desired image. In this paper we show the relationship between sample locations, their associated backprojection weights, and image resolution as characterized by the resulting point spread function (PSF). Two new methods for computing data weights, based on different optimization criteria, are proposed. The first method, which solves a maximal-eigenvector problem, optimizes a PSF-derived resolution metric which is shown to be equivalent to the volume of the Cramer–Rao (positional) error ellipsoid in the uniform-weight case. The second approach utilizes as its performance metric the Frobenius error between the PSF operator and the ideal delta function, and is an extension of a previously reported algorithm. Our proposed extension appropriately regularizes the weight estimates in the presence of noisy data and eliminates the superfluous issue of image discretization in the choice of data weights. The Frobenius-error approach results in a Tikhonov-regularized inverse problem whose Tikhonov weights are dependent on the locations of the Fourier data as well as the noise variance. The two new methods are compared against several state-of-the-art weighting strategies for synthetic multistatic point-scatterer data, as well as an 'interrupted SAR' dataset representative of in-band interference commonly encountered in very high frequency radar applications.
    No preview · Article · Apr 2013 · Inverse Problems
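    The central object in the abstract, the point spread function induced by a set of irregular Fourier sample locations and their backprojection weights, can be computed directly; the sketch below uses uniform weights and assumed sample locations, and is not either of the proposed weight-design methods:

```python
import numpy as np

rng = np.random.default_rng(3)
K = 200
k = rng.uniform(-1.0, 1.0, size=(K, 2))   # irregular 2-D Fourier sample locations
w = np.ones(K) / K                         # uniform backprojection weights (baseline)

# PSF of weighted backprojection: PSF(x) = sum_k w_k * exp(+j*2*pi*<k_k, x>)
x = np.linspace(-5.0, 5.0, 101)
X, Y = np.meshgrid(x, x)
grid = np.stack([X.ravel(), Y.ravel()], axis=1)
psf = (np.exp(2j * np.pi * grid @ k.T) @ w).reshape(X.shape)

# Crude resolution/sidelobe summary; designed weights trade these quantities off
peak = np.abs(psf).max()
sidelobes = np.abs(psf)[np.abs(psf) < 0.5 * peak]
print("peak:", round(float(peak), 3), " mean sidelobe level:", round(float(sidelobes.mean()), 3))
```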
  • Hatim F Alqadah · Matthew Ferrara · Howard Fan · Jason T Parker
    ABSTRACT: The linear sampling method (LSM) offers a qualitative image reconstruction approach that is a viable alternative, for obstacle support identification, to the well-studied filtered backprojection (FBP), which depends on a linearized forward scattering model. Of practical interest is the imaging of obstacles from sparse aperture far-field data under a fixed single frequency mode of operation. Under this scenario, the Tikhonov regularization typically applied to LSM produces poor images that fail to capture the obstacle boundary. In this paper, we employ an alternative regularization strategy based on constraining the sparsity of the solution's spatial gradient. Two regularization approaches based on the spatial gradient are developed. A numerical comparison to the FBP demonstrates that the new method's ability to account for aspect-dependent scattering permits more accurate reconstruction of concave obstacles, whereas a comparison to Tikhonov-regularized LSM demonstrates that the proposed approach significantly improves obstacle recovery with sparse-aperture data.
    No preview · Article · Dec 2011 · IEEE Transactions on Image Processing
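    For context, the Tikhonov-regularized LSM indicator that the paper uses as its baseline can be written in a few lines; the toy far-field matrix below comes from Born point scatterers purely so the example runs (the LSM itself does not rely on the Born approximation), and the grid, wavenumber, and regularization parameter are assumed:

```python
import numpy as np

k0 = 2 * np.pi                                   # wavenumber (unit wavelength)
Mang = 36                                        # far-field directions (sparse aperture)
ang = np.linspace(0, 2 * np.pi, Mang, endpoint=False)
dirs = np.stack([np.cos(ang), np.sin(ang)], axis=1)

# Toy far-field matrix F from two point scatterers (placeholder for measured data)
scatterers = np.array([[0.6, 0.2], [-0.4, -0.5]])
F = np.zeros((Mang, Mang), dtype=complex)
for yscat in scatterers:
    F += np.outer(np.exp(-1j * k0 * dirs @ yscat),   # receive steering
                  np.exp(+1j * k0 * dirs @ yscat))   # transmit steering

# Tikhonov-regularized LSM: solve (F^H F + alpha I) g_z = F^H phi_z for each test point z
alpha = 1e-3
reg_inv = np.linalg.inv(F.conj().T @ F + alpha * np.eye(Mang))
xs = np.linspace(-1.0, 1.0, 81)
indicator = np.zeros((xs.size, xs.size))
for iy, zy in enumerate(xs):
    for ix, zx in enumerate(xs):
        phi = np.exp(-1j * k0 * dirs @ np.array([zx, zy]))    # far field of a point source at z
        g = reg_inv @ (F.conj().T @ phi)
        indicator[iy, ix] = 1.0 / np.linalg.norm(g)           # large on/inside the obstacle

flat = int(np.argmax(indicator))
print("indicator peak near:", (round(float(xs[flat % xs.size]), 2), round(float(xs[flat // xs.size]), 2)))
```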
  • Jason T. Parker · Volkan Cevher · Philip Schniter
    ABSTRACT: In this work, we consider a general form of noisy compressive sensing (CS) when there is uncertainty in the measurement matrix as well as in the measurements. Matrix uncertainty is motivated by practical cases in which there are imperfections or unknown calibration parameters in the signal acquisition hardware. While previous work has focused on analyzing and extending classical CS algorithms like the LASSO and Dantzig selector for this problem setting, we propose a new algorithm whose goal is either minimization of mean-squared error or maximization of posterior probability in the presence of these uncertainties. In particular, we extend the Approximate Message Passing (AMP) approach originally proposed by Donoho, Maleki, and Montanari, and recently generalized by Rangan, to the case of probabilistic uncertainties in the elements of the measurement matrix. Empirically, we show that our approach performs near oracle bounds. We then show that our matrix-uncertain AMP can be applied in an alternating fashion to learn both the unknown measurement matrix and signal vector. We also present a simple analysis showing that, for suitably large systems, it suffices to treat uniform matrix uncertainty as additive white Gaussian noise.
    Preview · Article · Nov 2011 · Asilomar Conference on Signals, Systems and Computers
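    The closing claim, that uniform matrix uncertainty behaves like additive white Gaussian noise for suitably large systems, is easy to check numerically: writing y = (A + E)x + w = Ax + (Ex + w), an i.i.d. zero-mean perturbation E with per-entry variance sigma_E^2 contributes per-measurement variance sigma_E^2 * ||x||^2. A short demo with assumed sizes:

```python
import numpy as np

rng = np.random.default_rng(5)
M, N, trials = 400, 1000, 100
sigma_E, sigma_w = 0.05, 0.1

x = rng.standard_normal(N)
x[rng.random(N) > 0.1] = 0.0                     # sparse signal, ~10% support

# Lump matrix uncertainty into the noise: y = (A + E)x + w = Ax + (Ex + w)
predicted_var = sigma_E**2 * np.sum(x**2) + sigma_w**2

samples = []
for _ in range(trials):
    E = sigma_E * rng.standard_normal((M, N))    # i.i.d. matrix perturbation
    w = sigma_w * rng.standard_normal(M)
    samples.append(E @ x + w)
print("predicted effective noise variance:", round(float(predicted_var), 4))
print("empirical effective noise variance:", round(float(np.var(np.concatenate(samples))), 4))
```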
  •
    ABSTRACT: The Linear Sampling Method (LSM) is a relatively novel method of solving the inverse acoustic or electromagnetic scattering problem. The linear formulation of the inverse problem, which is due to an exact linear relationship that is satisfied by far-field data, makes LSM imaging in the presence of multiple scattering a straightforward linear algebra procedure. The main drawback of the LSM is its dependence on copious data. In this work we seek to improve LSM reconstruction performance by considering undersampled single frequency multi-static far-field data coupled with a spatial gradient constraint on the regularized image. The resulting total-variation-type optimization problem is then solved by means of an alternating minimization scheme.
    No preview · Conference Paper · Dec 2010
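    The alternating minimization idea mentioned above can be illustrated on a generic total-variation-regularized inverse problem via half-quadratic splitting: with the gradient copied into an auxiliary variable d, the scheme alternates a linear solve in the image and a shrinkage step in d. This is a simplified 1-D sketch with an assumed forward operator and parameters, not the paper's LSM formulation:

```python
import numpy as np

rng = np.random.default_rng(10)
N, M = 200, 80
A = rng.standard_normal((M, N)) / np.sqrt(M)     # undersampled linear measurements
x_true = np.zeros(N)
x_true[60:100] = 1.0                             # piecewise-constant profile
x_true[140:160] = -0.5
y = A @ x_true + 0.01 * rng.standard_normal(M)

D = np.diff(np.eye(N), axis=0)                   # 1-D spatial gradient operator

# Alternating minimization for  min_x 0.5*||A x - y||^2 + mu*||D x||_1
# via the split  D x ~ d  with quadratic coupling  beta/2 * ||D x - d||^2
mu, beta = 0.02, 10.0
x, d = np.zeros(N), np.zeros(N - 1)
lhs = A.T @ A + beta * D.T @ D
for _ in range(200):
    x = np.linalg.solve(lhs, A.T @ y + beta * D.T @ d)          # image subproblem
    g = D @ x
    d = np.sign(g) * np.maximum(np.abs(g) - mu / beta, 0.0)     # shrinkage subproblem

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```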
  • Lorenzo Lo Monte · Jason T. Parker
    ABSTRACT: Underground imaging involving RF tomography is generally severely ill-posed. Tikhonov regularization is perhaps the most common method to address this ill-posedness. The proposed methods are based upon the realistic assumptions that targets (e.g. tunnels) are sparse and clustered in the scene, and have known electrical properties. Therefore, we explore the use of alternative regularization strategies leveraging sparsity of the signal and its spatial gradient, while also imposing physically-derived amplitude constraints. By leveraging this prior knowledge, cleaner scene reconstructions are obtained.
    No preview · Conference Paper · Sep 2010
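    One way to combine the sparsity and amplitude priors described above is an l1-penalized least-squares reconstruction with a physically-motivated box constraint, solved by projected ISTA; the sketch below is a generic illustration with an assumed (random) forward operator, not the authors' RF tomography model, and it omits the spatial-gradient term:

```python
import numpy as np

rng = np.random.default_rng(6)
M, N = 120, 400                                  # measurements, image pixels
A = rng.standard_normal((M, N)) / np.sqrt(M)     # placeholder forward operator

x_true = np.zeros(N)
support = rng.choice(N, size=8, replace=False)
x_true[support] = rng.uniform(0.5, 1.0, size=8)  # sparse scene with bounded amplitudes
y = A @ x_true + 0.01 * rng.standard_normal(M)

# Projected ISTA for  min_x 0.5*||y - A x||^2 + lam*||x||_1  s.t.  0 <= x <= x_max
lam, x_max = 0.01, 1.0
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1/L with L the gradient Lipschitz constant
x = np.zeros(N)
for _ in range(500):
    x = x - step * (A.T @ (A @ x - y))                          # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)    # l1 proximal step
    x = np.clip(x, 0.0, x_max)                                  # amplitude constraint

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```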
  • Jason T. Parker · Matthew Ferrara · Justin Bracken · Braham Himed
    ABSTRACT: Traditional high-value monostatic imaging systems employ frequency-diverse pulses to form images from small synthetic apertures. In contrast, RF tomography utilizes a network of spatially diverse sensors to trade geometric diversity for bandwidth, permitting images to be formed with narrowband waveforms. Such a system could use inexpensive sensors with minimal ADC requirements, provide multiple viewpoints into urban canyons and other obscured environments, and offer graceful performance degradation under sensor attrition. However, numerous challenges must be overcome to field and operate such a system, including multistatic autofocus, precision timing requirements, and the development of appropriate image formation algorithms for large, sparsely populated synthetic apertures with anisotropic targets. AFRL has recently constructed an outdoor testing facility to explore these challenges with measured data. Preliminary experimental results are provided for this system, along with a description of remaining challenges and future research directions.
    No preview · Article · Sep 2010
  • Lee C Potter · Emre Ertin · Jason T Parker · Mujdat Cetin
    ABSTRACT: Remote sensing with radar is typically an ill-posed linear inverse problem: a scene is to be inferred from limited measurements of scattered electric fields. Parsimonious models provide a compressed representation of the unknown scene and offer a means for regularizing the inversion task. The emerging field of compressed sensing combines nonlinear reconstruction algorithms and pseudorandom linear measurements to provide reconstruction guarantees for sparse solutions to linear inverse problems. This paper surveys the use of sparse reconstruction algorithms and randomized measurement strategies in radar processing. Although the two themes have a long history in radar literature, the accessible framework provided by compressed sensing illuminates the impact of joining these themes. Potential future directions are conjectured both for extension of theory motivated by practice and for modification of practice based on theoretical insights.
    Full-text · Article · Jul 2010 · Proceedings of the IEEE
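    The sparse-reconstruction theme surveyed here reduces, in its simplest form, to recovering a sparse vector from pseudorandom linear measurements with a nonlinear algorithm. The sketch below uses orthogonal matching pursuit as one representative reconstruction method; the sizes and sparsity level are assumed:

```python
import numpy as np

rng = np.random.default_rng(7)
M, N, K = 60, 256, 6
A = rng.standard_normal((M, N)) / np.sqrt(M)     # pseudorandom measurement matrix
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = A @ x_true

# Orthogonal matching pursuit: greedily select the column most correlated with the
# residual, then refit the coefficients by least squares on the selected support.
support, residual = [], y.copy()
for _ in range(K):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coeffs = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    residual = y - A[:, support] @ coeffs

x_hat = np.zeros(N)
x_hat[support] = coeffs
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```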
  • Jason T. Parker · Lee C. Potter
    ABSTRACT: Traditional Space-Time Adaptive Processing (STAP) formulations cast the problem as a detection task, which results in an optimal decision statistic for a single target in colored Gaussian noise. In the present work, inspired by recent theoretical and algorithmic advances in the field known as compressed sensing, we impose a Laplacian prior on the targets themselves, which encourages sparsity in the resulting reconstruction of the angle/Doppler plane. By casting the problem in a Bayesian framework, it becomes readily apparent that sparse regularization can be applied as a post-processing step after the use of a traditional STAP algorithm for clutter estimation. Simulation results demonstrate that this approach allows closely spaced targets to be more easily distinguished.
    Full-text · Conference Paper · Jun 2010
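    A toy rendering of the post-processing idea above, assuming a simple uniform-array/constant-PRF space-time model, simulated ridge clutter, and illustrative parameter values: the clutter-plus-noise covariance is estimated as in a traditional STAP stage, the data and the angle/Doppler dictionary are whitened with it, and the Laplacian-prior MAP estimate is the solution of an l1-regularized least-squares problem (computed here by iterative soft thresholding):

```python
import numpy as np

rng = np.random.default_rng(8)
Nel, Np = 8, 8                                   # array elements, pulses
M = Nel * Np
fgrid = np.linspace(-0.5, 0.5, 33)               # normalized spatial / Doppler frequencies

def steer(fs, fd):
    """Space-time steering vector for spatial frequency fs and Doppler fd."""
    a = np.exp(2j * np.pi * fs * np.arange(Nel))
    b = np.exp(2j * np.pi * fd * np.arange(Np))
    return np.kron(b, a) / np.sqrt(M)

A = np.stack([steer(fs, fd) for fd in fgrid for fs in fgrid], axis=1)   # angle/Doppler dictionary

def snapshot():
    """Clutter ridge (Doppler tied to angle) plus unit-variance thermal noise."""
    c = sum(10.0 * rng.standard_normal() * steer(fs, fs) for fs in np.linspace(-0.5, 0.5, 91))
    return c + (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

# Traditional STAP stage: clutter-plus-noise covariance from training snapshots
R_hat = sum(np.outer(s, s.conj()) for s in (snapshot() for _ in range(3 * M))) / (3 * M)
R_hat += 0.1 * np.eye(M)                         # diagonal loading

# Test snapshot containing two nearby targets at the same Doppler
i1, i2 = 20 * 33 + 16, 20 * 33 + 19
y = 5.0 * A[:, i1] + 5.0 * A[:, i2] + snapshot()

# Whiten, then compute the Laplacian-prior MAP estimate (l1-regularized least squares)
w_eig, V = np.linalg.eigh(R_hat)
W = V @ np.diag(w_eig ** -0.5) @ V.conj().T      # R_hat^{-1/2}
Aw, yw = W @ A, W @ y

def csoft(z, t):                                 # complex soft threshold
    mag = np.abs(z)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * z, 0.0)

lam = 1.0
step = 1.0 / np.linalg.norm(Aw, 2) ** 2
x = np.zeros(A.shape[1], dtype=complex)
for _ in range(300):
    x = csoft(x - step * (Aw.conj().T @ (Aw @ x - yw)), lam * step)

print("magnitudes at the two true cells:", np.round(np.abs(x[[i1, i2]]), 2))
print("largest magnitude elsewhere:", round(float(np.delete(np.abs(x), [i1, i2]).max()), 2))
```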
  • Ross W. Deming · Jason T. Parker
    ABSTRACT: The Kaczmarz method is widely used in computed tomography applications to iteratively solve large inverse problems for which a direct solution is computationally prohibitive. In this paper, the Kaczmarz method is generalized to handle sparse frequency data collected from sparse, spatially distributed, multistatic sensors. In addition, the developed algorithm provides a mathematical solution to the underlying diffraction tomography problem described by the scalar wave equation under the Born approximation. The formulation avoids computing the pseudo-inverse of a large forward operator while still converging to the true minimum norm solution to the scattering problem, and the resulting reconstructed images are superior to the matched filter results often employed in SAR/ISAR applications. A fast version of the algorithm that exploits circular symmetries in the sensor geometry is also described.
    No preview · Conference Paper · Jun 2009
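    The core Kaczmarz iteration the paper builds on is a sequence of projections onto the measurement hyperplanes; started from zero it converges to the minimum-norm solution without forming a pseudo-inverse. The sketch below applies the classical iteration to a generic underdetermined complex system (an assumed stand-in for the Born scattering operator), not the paper's fast geometry-exploiting variant:

```python
import numpy as np

rng = np.random.default_rng(9)
M, N = 80, 400                                   # few measurements, many unknowns
A = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
x_ref = rng.standard_normal(N) + 1j * rng.standard_normal(N)
y = A @ x_ref                                    # consistent data

# Classical Kaczmarz sweeps: project the iterate onto one hyperplane a_i^T x = y_i at a time
x = np.zeros(N, dtype=complex)
row_norms = np.sum(np.abs(A) ** 2, axis=1)
for _ in range(200):
    for i in range(M):
        x = x + ((y[i] - A[i] @ x) / row_norms[i]) * A[i].conj()

# Started from zero, the iterates approach the minimum-norm solution A^+ y
x_mn = np.linalg.pinv(A) @ y
print("distance to minimum-norm solution:",
      np.linalg.norm(x - x_mn) / np.linalg.norm(x_mn))
```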