An improved method for the computation of the Moore–Penrose inverse matrix

Applied Mathematics and Computation (Impact Factor: 1.6). 08/2011; DOI: 10.1016/j.amc.2011.04.080
Source: arXiv

ABSTRACT In this article we provide a fast computational method for calculating the Moore–
Penrose inverse of singular square matrices and of rectangular matrices. The proposed
method proves to be much faster and significantly more accurate than previously
proposed methods, and it works for both full and sparse matrices.
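The abstract does not reproduce the method itself, but for context, the Moore–Penrose inverse of a (possibly singular or rectangular) matrix is commonly computed via the singular value decomposition; a minimal NumPy sketch of that standard approach (not the paper's algorithm) is:

```python
import numpy as np

def pinv_svd(A, tol=1e-12):
    """Moore-Penrose inverse via SVD: A+ = V diag(1/s_i) U^T,
    inverting only singular values above a relative tolerance."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > tol * s.max(), 1.0 / s, 0.0)
    return Vt.T @ (s_inv[:, None] * U.T)

# Works for rectangular and rank-deficient matrices alike
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
Ap = pinv_svd(A)
```

The tolerance cutoff is what makes the computation well defined for singular matrices: singular values that are numerically zero are excluded rather than inverted.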


Available from: Vasilios N Katsikis, Jun 16, 2015
  • ABSTRACT: This paper presents a method for constructing a Radial Basis Function (RBF) network that uses normalized cut clustering to determine the centers and widths of the radial basis functions. Normalized cut clustering can separate clusters that are not linearly separable in the input space, so it can construct an RBF network classifier with fewer hidden-layer neurons than a conventional RBF network obtained by the k-means method. The well-known pseudoinverse method is used to adjust the weights of the network's output layer. Quantitative and qualitative evaluations show that the proposed method reduces the number of hidden units while preserving classification accuracy compared with a conventional RBF network generated by the k-means method. Keywords: radial basis function networks, normalized cut clustering, center and width of Radial Basis Functions, number of hidden layer neurons.
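The pseudoinverse weight adjustment mentioned in that abstract can be sketched in a few lines (a minimal illustration assuming Gaussian basis functions and arbitrary fixed centers; the normalized cut clustering step from the paper is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                          # toy 2-D inputs
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(float)  # binary labels

# Hypothetical fixed centers and width (the paper obtains these
# by normalized cut clustering, which is not reproduced here)
centers = X[:10]
width = 1.0

# Hidden-layer activations: Gaussian radial basis functions
d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
H = np.exp(-d2 / (2.0 * width ** 2))

# Output-layer weights by the pseudoinverse (least-squares) method
W = np.linalg.pinv(H) @ y
preds = H @ W
```

Because the hidden layer is fixed, fitting the output weights is a linear least-squares problem, which is exactly what the pseudoinverse solves in one step.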
  • ABSTRACT: In this work, a novel supervised learning method, the Minimal Learning Machine (MLM), is proposed. Learning in the MLM consists of building a linear mapping between input and output distance matrices. In the generalization phase, the learned distance map is used to estimate the distances from K output reference points to the unknown target output value. The output estimation is then formulated as a multilateration problem based on the predicted output distances and the locations of the reference points. Given its general formulation, the Minimal Learning Machine is inherently capable of operating on nonlinear regression problems as well as on multidimensional response spaces. In addition, an intuitive extension of the MLM is proposed to deal with classification problems. A comprehensive set of computer experiments illustrates that the proposed method achieves accuracies comparable to those of more traditional machine learning methods for regression and classification, thus offering a computationally valid alternative to such approaches.
    Neurocomputing; 01/2014
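The distance-mapping idea in that abstract can be illustrated with a small sketch (a toy 1-D regression; the random reference-point selection and grid-search multilateration here are simplifying assumptions, not the paper's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data: samples of y = sin(x)
X = rng.uniform(-3.0, 3.0, size=(60, 1))
Y = np.sin(X)

# K = 10 reference points (chosen at random in this sketch)
idx = rng.choice(60, size=10, replace=False)
R, T = X[idx], Y[idx]

Dx = np.linalg.norm(X[:, None, :] - R[None, :, :], axis=2)  # input distances (60 x 10)
Dy = np.abs(Y - T.T)                                        # output distances (60 x 10)
B = np.linalg.pinv(Dx) @ Dy            # linear map between distance matrices

def predict(x_new):
    """Estimate y by multilateration over a 1-D grid of candidate outputs."""
    d_hat = np.linalg.norm(x_new - R, axis=1) @ B  # predicted output distances
    grid = np.linspace(-1.5, 1.5, 301)
    cost = ((np.abs(grid[:, None] - T.T) - d_hat) ** 2).sum(axis=1)
    return grid[np.argmin(cost)]
```

The training step is again a pseudoinverse-based least-squares fit, which ties this citing paper back to the subject of the main article.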
  • ABSTRACT: In this paper, we propose new iterative schemes for the computation of the outer inverse which reduce the total number of matrix multiplications per iteration. In particular, we consider how the hyper-power methods of orders 5 and 9 can be accelerated so that they require 4 and 5 matrix multiplications per iteration, respectively. These improvements are tested against the quadratically convergent Schulz method and the fastest Horner-scheme hyper-power method of order three. Numerical results show the superiority and practical applicability of the proposed methods. Finally, it is shown that a possibly more efficient method would have to be of order at least , making it useless for practical applications.
    Journal of Computational and Applied Mathematics (Impact Factor: 1.08). 04/2015; 278. DOI: 10.1016/
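The Schulz method referenced in that abstract (the hyper-power iteration of order 2) can be sketched as follows; the accelerated order-5 and order-9 schemes of the paper are not reproduced here:

```python
import numpy as np

def schulz_pinv(A, iters=50):
    """Schulz iteration (hyper-power method of order 2):
    X_{k+1} = X_k (2I - A X_k), converging to the Moore-Penrose inverse.
    The scaled start X0 = A^T / (||A||_1 * ||A||_inf) is a standard
    choice that guarantees convergence."""
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(iters):
        X = X @ (2.0 * I - A @ X)  # two matrix multiplications per step
    return X
```

Each order-2 step costs two matrix multiplications; the point of the cited paper is to raise the order of convergence faster than the multiplication count grows.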