An improved method for the computation of the Moore–Penrose inverse matrix

Applied Mathematics and Computation (Impact Factor: 1.6). 08/2011; DOI: 10.1016/j.amc.2011.04.080
Source: arXiv

ABSTRACT In this article we provide a fast computational method for calculating the Moore–
Penrose inverse of singular square matrices and of rectangular matrices. The proposed
method proves to be much faster and significantly more accurate than previously
proposed methods, and it works for both full and sparse matrices.
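As a rough illustration of how a QR factorization can yield the Moore–Penrose inverse without a full SVD, the following is a minimal Python/NumPy sketch for the full-column-rank case (an illustrative simplification, not the paper's exact algorithm, which also handles singular and sparse matrices):

```python
import numpy as np

def pinv_qr(A):
    """Moore-Penrose inverse via QR factorization, assuming A has full
    column rank: if A = QR with R invertible, then A+ = R^{-1} Q^T."""
    Q, R = np.linalg.qr(A)          # reduced QR: Q is m x n, R is n x n
    return np.linalg.solve(R, Q.T)  # R^{-1} Q^T without forming an explicit inverse

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))     # random matrix: full column rank with probability 1
A_plus = pinv_qr(A)

# The four Penrose conditions characterize A+ uniquely:
assert np.allclose(A @ A_plus @ A, A)
assert np.allclose(A_plus @ A @ A_plus, A_plus)
assert np.allclose((A @ A_plus).T, A @ A_plus)
assert np.allclose((A_plus @ A).T, A_plus @ A)
```

For full-column-rank input this agrees with the SVD-based `np.linalg.pinv` while avoiding the more expensive singular value decomposition.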

Available from: Vasilios N Katsikis, Aug 23, 2015
  • Source
    • "Therefore the only parameters that should be learned are the weights between the hidden layer and the output layer. The pseudo inverse method, which is a fast algorithm and does not fall into a local minimum, is used for computing the weights between the hidden layer and the output one [8]. Efficient algorithms for computing the pseudo inverse are discussed in [11] [12]. In this case the number of hidden layer neurons is determined experimentally. "
    ABSTRACT: This paper presents a method for constructing a Radial Basis Function (RBF) network that uses normalized cut clustering to determine the centers and widths of the radial basis functions. Normalized cut clustering can separate clusters that are not linearly separable in the input space, so it can construct an RBF network classifier with a reduced number of hidden layer neurons in comparison with a conventional RBF network obtained by the k-means method. The well-known pseudo inverse method is used to adjust the weights of the output layer of the RBF network. Quantitative and qualitative evaluations show that the proposed method reduces the number of hidden units while preserving classification accuracy in comparison with a conventional RBF network generated by the k-means method. Keywords: radial basis function networks, normalized cut clustering, centers and widths of radial basis functions, number of hidden layer neurons.
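The role of the pseudo inverse in training the output layer can be sketched as follows (a hypothetical Python/NumPy illustration; the centers, width, and data are arbitrary stand-ins, not the cited paper's setup):

```python
import numpy as np

def rbf_activations(X, centers, width):
    """Gaussian radial basis function activations of the hidden layer."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 2))   # training inputs (illustrative)
T = rng.standard_normal((50, 1))   # training targets (illustrative)
centers = X[:5]                    # 5 hidden neurons; a real method would cluster

# Once centers and widths are fixed, the hidden activations H are known and the
# output weights solve H W = T in the least-squares sense: W = H+ T, computed
# in one step with no iterative training and hence no local minima.
H = rbf_activations(X, centers, width=1.0)
W = np.linalg.pinv(H) @ T
```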
  • Source
    • "Several methods have been proposed to speed up the computation of the Moore-Penrose inverse (for example, see [10], [11]). In [10], the computation is optimized by using a special type of tensor product and QR factorization, whereas the method proposed in [11] is based on a full-rank Cholesky decomposition. Although such approaches significantly improve the time needed to compute the Moore-Penrose inverse, their time complexity is still equal to that of the SVD method. "
    Proceedings - 1st BRICS Countries Congress on Computational Intelligence, BRICS-CCI 2013; 09/2013
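The Cholesky route mentioned above can be sketched in its simplest form (a Python/NumPy illustration assuming A has full column rank, so that A^T A is positive definite; the cited full-rank Cholesky method also handles rank-deficient matrices):

```python
import numpy as np

def pinv_cholesky(A):
    """Pseudoinverse via Cholesky factorization for full-column-rank A:
    A+ = (A^T A)^{-1} A^T, applying the inverse through the factor L L^T = A^T A."""
    G = A.T @ A
    L = np.linalg.cholesky(G)       # G = L L^T, L lower triangular
    Y = np.linalg.solve(L, A.T)     # forward substitution: L Y = A^T
    return np.linalg.solve(L.T, Y)  # back substitution: L^T X = Y, so X = G^{-1} A^T

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 4))     # full column rank with probability 1
A_plus = pinv_cholesky(A)
```

The two triangular solves replace an explicit matrix inversion, which is the main source of the speed-up over naive normal-equation approaches.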
  • Source
    • "However, it has been observed that their method is not able to produce the correct Moore-Penrose inverse, due to large errors, when applied to random singular matrices as well as to a collection of singular test matrices with large condition numbers obtained from the matrix computation toolbox (mctoolbox) (see Higham, 2002). Katsikis et al. (2011) also presented a very fast and reliable method to compute the Moore-Penrose inverse. By using a general framework in which analytic functions of scalars are first developed and then matrices are substituted for the scalars, Katsaggelos and Efstratiadis (1990) produced convergence faster than quadratic, for restricted initial estimates. "
    ABSTRACT: A third order iterative method for estimating the Moore-Penrose generalised inverse is developed by extending the second order iterative method described in Petković and Stanimirović (2011). Convergence analysis, along with error estimates for the method, is presented. Three numerical examples, two for full-rank simple and randomly generated singular rectangular matrices and a third for rank-deficient singular square matrices with large condition numbers from the matrix computation toolbox, are worked out to demonstrate the efficacy of the method. The performance measures used are the number of iterations and the CPU time taken by the method. On comparing the results obtained by our method with those obtained by the method given in Petković and Stanimirović (2011), it is observed that our method gives improved performance.
    International Journal of Computing Science and Mathematics 01/2013; 4(2):140-151. DOI:10.1504/IJCSM.2013.055209
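A standard third-order scheme of the family this abstract describes is the hyperpower iteration (an illustrative Python/NumPy sketch of the general technique, not the authors' exact method):

```python
import numpy as np

def pinv_hyperpower3(A, iters=40):
    """Third-order hyperpower iteration for the Moore-Penrose inverse:
        X_{k+1} = X_k (3I - A X_k (3I - A X_k)),
    started from X_0 = A^T / (||A||_1 ||A||_inf), a classical scaling that
    guarantees convergence since ||A||_2^2 <= ||A||_1 ||A||_inf."""
    m, n = A.shape
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(m)
    for _ in range(iters):
        AX = A @ X                          # m x m
        X = X @ (3 * I - AX @ (3 * I - AX))  # cubic convergence per step
    return X

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3))
X = pinv_hyperpower3(A)
```

Each step costs a few matrix products but cuts the error cubically, which is the trade-off such higher-order methods make against the second-order (Newton–Schulz) iteration.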