Article

The L1-norm best-fit hyperplane problem.

Virginia Commonwealth University, 1015 Floyd Avenue, P.O. Box 843083, Richmond, VA 23284.
Applied Mathematics Letters (Impact Factor: 1.48). 01/2012; 26(1):51-56. DOI: 10.1016/j.aml.2012.03.031
Source: PubMed

ABSTRACT: We formalize an algorithm for solving the L1-norm best-fit hyperplane problem, derived using first principles and geometric insights about L1 projection and L1 regression. The procedure follows from a new proof of global optimality and relies on the solution of a small number of linear programs. The procedure is implemented for validation and testing. This analysis of the L1-norm best-fit hyperplane problem makes the procedure accessible to applications in areas such as location theory, computer vision, and multivariate statistics.
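
The abstract's claim that the optimum is reached through "a small number of linear programs" can be illustrated concretely. The sketch below assumes one common form of that reduction: for each coordinate j, fit a least-absolute-deviations (LAD) regression of x_j on the remaining coordinates with the other coefficients bounded in [-1, 1]. This pins the L-infinity norm of the hyperplane normal at 1, so the LP objective equals the summed L1 distances (the L1 distance from a point x to {z : w^T z = gamma} is |w^T x - gamma| / ||w||_inf); the best of the m fits is kept. This is a minimal sketch, not the paper's implementation; the function name and the use of scipy.optimize.linprog are illustrative assumptions.

    # Minimal sketch of the LP-based reduction described above; not the
    # authors' code. Requires numpy and scipy.
    import numpy as np
    from scipy.optimize import linprog

    def l1_best_fit_hyperplane(X):
        """Fit a hyperplane w^T x = gamma minimizing the sum of L1 distances.

        X is an n x m data matrix (rows are points). Fixing w_j = -1 and
        constraining the other coefficients to [-1, 1] makes the LAD
        objective equal to the total L1 distance.
        """
        n, m = X.shape
        best = None
        for j in range(m):
            others = [k for k in range(m) if k != j]
            A, b = X[:, others], X[:, j]
            # LP variables: (beta in R^{m-1}, intercept g, residual bounds t in R^n)
            c = np.concatenate([np.zeros(m), np.ones(n)])
            A_ub = np.block([
                [ A,  np.ones((n, 1)), -np.eye(n)],   #  (A beta + g) - b <= t
                [-A, -np.ones((n, 1)), -np.eye(n)],   #  b - (A beta + g) <= t
            ])
            b_ub = np.concatenate([b, -b])
            bounds = [(-1, 1)] * (m - 1) + [(None, None)] + [(0, None)] * n
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
            if best is None or res.fun < best[0]:
                w = np.zeros(m)
                w[others], w[j] = res.x[:m - 1], -1.0
                best = (res.fun, w, -res.x[m - 1])  # gamma = -intercept
        err, w, gamma = best
        return w, gamma, err

Each LP has only n + m variables and 2n inequality constraints, consistent with the abstract's description of a procedure that relies on a small number of modest linear programs.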

  • ABSTRACT: This survey highlights the recent advances in algorithms for numerical linear algebra that have come from the technique of linear sketching, whereby, given a matrix, one first compresses it to a much smaller matrix by multiplying it by a (usually) random matrix with certain properties. Much of the expensive computation can then be performed on the smaller matrix, thereby accelerating the solution of the original problem. In this survey we consider least squares as well as robust regression problems, low-rank approximation, and graph sparsification. We also discuss a number of variants of these problems. Finally, we discuss the limitations of sketching methods. (A generic sketch-and-solve illustration appears after this list.)
    11/2014;
  • ABSTRACT: We describe ways to define and calculate L1-norm signal subspaces that are less sensitive to outlying data than L2-calculated subspaces. We start with the computation of the L1 maximum-projection principal component of a data matrix containing N signal samples of dimension D. We show that while the general problem is formally NP-hard for asymptotically large N and D, the case of engineering interest, fixed dimension D with asymptotically large sample size N, is not. In particular, for the case where the sample size is less than the fixed dimension (N < D), we present in explicit form an optimal algorithm of computational cost 2^N. For the case N ≥ D, we present an optimal algorithm of complexity O(N^D). We generalize to multiple L1-max-projection components and present an explicit optimal L1-subspace calculation algorithm of complexity O(N^(DK−K+1)), where K is the desired number of L1 principal components (subspace rank). We conclude with illustrations of L1-subspace signal processing in the fields of data dimensionality reduction, direction-of-arrival estimation, and image conditioning/restoration. (A naive sign-search sketch of the 2^N algorithm appears after this list.)
    IEEE Transactions on Signal Processing 05/2014; · 3.20 Impact Factor
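
The "sketch-and-solve" idea in the survey above is easy to demonstrate for least squares: compress A and b with a random matrix S and solve the small problem. The snippet below is a generic illustration with a dense Gaussian sketch, not code from the survey; the problem sizes and sketch size k are arbitrary choices here, and the survey covers far more refined sketches (e.g., sparse embeddings).

    # Illustrative sketch-and-solve least squares with a Gaussian sketch.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, k = 10_000, 20, 400          # tall problem; sketch size k << n (assumed)
    A = rng.standard_normal((n, d))
    b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

    S = rng.standard_normal((k, n)) / np.sqrt(k)   # random sketching matrix
    x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
    x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

    # The sketched solution's residual is near-optimal with high probability.
    print(np.linalg.norm(A @ x_exact - b), np.linalg.norm(A @ x_sketch - b))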
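
To make the 2^N result in the second abstract concrete: the exhaustive algorithm can be phrased as a search over binary sign vectors, using the identity max over unit w of Σ_i |w^T x_i| = max over b in {±1}^N of ||X b||_2, whose maximizer is w = X b* normalized. The snippet below is a naive sketch of that search (practical only for small N), not the authors' optimized implementation.

    # Naive 2^N search for the L1 max-projection principal component of a
    # D x N data matrix X (columns are samples); illustration only.
    import itertools
    import numpy as np

    def l1_principal_component(X):
        D, N = X.shape
        best_val, best_b = -np.inf, None
        for signs in itertools.product((-1.0, 1.0), repeat=N):
            b = np.asarray(signs)
            val = np.linalg.norm(X @ b)   # ||X b||_2 for this sign pattern
            if val > best_val:
                best_val, best_b = val, b
        w = X @ best_b
        return w / np.linalg.norm(w)      # unit-norm L1 principal component

    # Example: D = 2, N = 5; the search is exact for this small N.
    X = np.array([[1.0, 2.0, -1.0, 0.5, 3.0],
                  [0.5, 1.0, -2.0, 0.0, 2.5]])
    print(l1_principal_component(X))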
