An improved method for the computation of the Moore–Penrose inverse matrix

Applied Mathematics and Computation (Impact Factor: 1.55). 08/2011; DOI: 10.1016/j.amc.2011.04.080
Source: arXiv


In this article we present a fast computational method for calculating the Moore–Penrose inverse of singular square matrices and of rectangular matrices. The proposed method is considerably faster and significantly more accurate than previously proposed methods, and it works for both full and sparse matrices.

Available from: Vasilios N Katsikis
  • Source
    • "where ψ†k is the pseudoinverse or Moore–Penrose inverse [40] of ψk. Equation (12) can also be solved by the recursive least squares method [41]."
    ABSTRACT: The emergence of smart grids has posed great challenges to traditional power system control given the multitude of new risk factors. This paper proposes an online supplementary learning controller (OSLC) design method to compensate the traditional power system controllers for coping with the dynamic power grid. The proposed OSLC is a supplementary controller based on approximate dynamic programming, which works alongside an existing power system controller. By introducing an action-dependent cost function as the optimization objective, the proposed OSLC is a nonidentifier-based method to provide an online optimal control adaptively as measurement data become available. The online learning of the OSLC enjoys the policy-search efficiency during policy iteration and the data efficiency of the least squares method. For the proposed OSLC, the stability of the controlled system during learning, the monotonic nature of the performance measure of the iterative supplementary controller, and the convergence of the iterative supplementary controller are proved. Furthermore, the efficacy of the proposed OSLC is demonstrated in a challenging power system frequency control problem in the presence of high penetration of wind generation.
    IEEE transactions on neural networks and learning systems 06/2015; DOI:10.1109/TNNLS.2015.2431734 · 4.29 Impact Factor
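The excerpt's ψ† is used in the standard way: applying the pseudoinverse to the right-hand side yields the minimum-norm least-squares solution of a linear system. A minimal NumPy sketch (the names `Psi` and `y` are illustrative, not taken from the cited paper):

```python
import numpy as np

rng = np.random.default_rng(0)
Psi = rng.standard_normal((20, 5))   # overdetermined coefficient matrix
y = rng.standard_normal(20)          # right-hand side

# w = Psi^dagger y is the minimum-norm least-squares solution of Psi w = y
w = np.linalg.pinv(Psi) @ y

# It coincides with the dedicated least-squares solver
w_lstsq = np.linalg.lstsq(Psi, y, rcond=None)[0]
assert np.allclose(w, w_lstsq)
```

Recursive least squares, as the excerpt notes, reaches the same solution incrementally as data arrive, avoiding the batch pseudoinverse.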
  • Source
    • "A large number of different methods for computing generalized inverses are available in the literature. Direct methods are usually based on SVD (Singular Value Decomposition), QR factorization [4], Gaussian elimination [11] [13], etc. On the other hand, there are certain iterative methods, mainly based on appropriate generalizations of the well-known hyper-power method, with the Schultz method as its particular case."
    ABSTRACT: In this paper, we propose new iterative schemes for the computation of the outer inverse which reduce the total number of matrix multiplications per iteration. In particular, we consider how the hyper-power methods of orders 5 and 9 can be accelerated so that they require 4 and 5 matrix multiplications per iteration, respectively. These improvements are tested against the quadratically convergent Schultz method and the fastest Horner-scheme hyper-power method of order three. Numerical results show the superiority and practical applicability of the proposed methods. Finally, it is shown that a possibly more efficient method would have to be of order at least , making it useless for practical applications.
    Journal of Computational and Applied Mathematics 04/2015; 278. DOI:10.1016/ · 1.27 Impact Factor
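The Schultz method mentioned in the excerpt is the quadratically convergent iteration X ← X(2I − AX). A minimal sketch; the starting value X₀ = Aᵀ/σ_max(A)², which guarantees convergence, is a standard choice assumed here, not taken from the cited paper:

```python
import numpy as np

def schultz_pinv(A, iters=50):
    """Schultz iteration X <- X(2I - AX), converging quadratically to pinv(A)."""
    # Starting guess alpha * A^T with 0 < alpha < 2 / sigma_max(A)^2 ensures convergence
    alpha = 1.0 / (np.linalg.norm(A, 2) ** 2)
    X = alpha * A.T
    I = np.eye(A.shape[0])
    for _ in range(iters):
        X = X @ (2 * I - A @ X)   # two matrix multiplications per iteration
    return X

A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])          # rectangular, full column rank
assert np.allclose(schultz_pinv(A), np.linalg.pinv(A), atol=1e-8)
```

Each iteration costs two matrix multiplications; the schemes in the cited paper trade a higher convergence order against a sub-linear growth in multiplications per step.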
  • Source
    • "Therefore the only parameters that need to be learned are the weights between the hidden layer and the output layer. The pseudoinverse method, which is fast and does not fall into a local minimum, is used to compute the weights between the hidden layer and the output layer [8]. Efficient algorithms for computing the pseudoinverse are discussed in [11] [12]. In this case the number of hidden-layer neurons is determined experimentally."
    ABSTRACT: This paper presents a method for constructing a Radial Basis Function network based on normalized cut clustering for determining the centers and widths of the Radial Basis Functions. Normalized cut clustering can separate clusters that are non-linearly separable in the input space, so it can construct an RBF network classifier with a reduced number of hidden-layer neurons in comparison with a conventional RBF network obtained by the k-means method. The well-known pseudoinverse method is used to adjust the weights of the output layer of the RBF network. Quantitative and qualitative evaluations show that the proposed method reduces the number of hidden units and preserves classification accuracy in comparison with a conventional RBF network generated by the k-means method. Keywords: radial basis function networks, normalized cut clustering, center and width of Radial Basis Functions, number of hidden layer neurons.
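The pseudoinverse step in such RBF networks can be sketched as a single linear solve once the hidden-layer activations are fixed. The Gaussian design, randomly chosen centers, and unit width below are illustrative assumptions, not details from the cited paper:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 2))                  # input samples
centers = X[rng.choice(100, 10, replace=False)]    # hypothetical RBF centers
width = 1.0                                        # hypothetical common width

# Hidden-layer activation matrix: Gaussian RBFs evaluated at every sample
d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
H = np.exp(-d2 / (2 * width ** 2))

# Output weights in one shot via the pseudoinverse: no iterative
# training of the output layer, hence no local minima
t = (X[:, 0] > 0).astype(float)                    # toy binary targets
W = np.linalg.pinv(H) @ t

pred = (H @ W > 0.5).astype(float)                 # network output on the training set
```

Because only the output layer is learned, W is the exact least-squares minimizer of ||H W − t||, which is what makes the method fast compared with gradient-based training.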