Article

Abstract

The Moore–Penrose inverse of an arbitrary matrix (including singular and rectangular) has many applications in statistics, prediction theory, control system analysis, curve fitting and numerical analysis. In this paper, an algorithm based on the conjugate Gram–Schmidt process and the Moore–Penrose inverse of partitioned matrices is proposed for computing the pseudoinverse of an m×n real matrix A with m≥n and rank r≤n. Numerical experiments show that the resulting pseudoinverse matrix is reasonably accurate and its computation time is significantly less than that of pseudoinverses obtained by the other methods for large sparse matrices.
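The pseudoinverse the abstract refers to can be illustrated with a small, self-contained sketch: a modified Gram-Schmidt pass yields a rank factorization A = BC, from which A† = C^T (C C^T)^(-1) B^T. This is only in the spirit of the paper's CGS-MPi algorithm (which works with conjugate directions of A^T A and the Moore-Penrose inverse of partitioned matrices, not reproduced here); all names and tolerances are illustrative.

```python
import numpy as np

def pinv_rank_factorization(A, tol=1e-10):
    # Orthonormal basis of range(A) via modified Gram-Schmidt,
    # skipping (numerically) dependent columns.
    m, n = A.shape
    basis = []
    for j in range(n):
        v = A[:, j].astype(float)
        for q in basis:
            v = v - (q @ v) * q
        nv = np.linalg.norm(v)
        if nv > tol * max(1.0, np.linalg.norm(A[:, j])):
            basis.append(v / nv)
    if not basis:                          # A = 0  =>  A+ = 0
        return np.zeros((n, m))
    B = np.column_stack(basis)             # m x r, orthonormal columns
    C = B.T @ A                            # r x n, full row rank, A = B C
    # Rank factorization with B^T B = I gives A+ = C^T (C C^T)^{-1} B^T.
    return C.T @ np.linalg.solve(C @ C.T, B.T)

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [0., 1., 1.]])              # rank 2: col3 = col1 + col2
print(np.allclose(pinv_rank_factorization(A), np.linalg.pinv(A)))  # True
```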


... The pseudoinverse of the non-square matrix C shown in Equation (49), i.e., C† in Equation (47), can be derived by the Moore-Penrose inverse [34]. Hence, â_l (l = 2, ...) ...
... The numbers of delayed outputs and delayed inputs are both selected as 4. The nonlinear estimator of the NARX model is selected as the sigmoid network. The other nonlinear model used for comparison is the NARMAX model with the FROLS (forward regression orthogonal least squares) algorithm [34]. The maximum numbers of delayed outputs and delayed inputs are both selected as 4, and the maximum delayed noise is selected as 3. ...
... To quantify the accuracy of the identified models, the following equation is applied to evaluate the validation fit [34]. The maximum numbers of delayed outputs and delayed inputs are both selected as 4, and the maximum delayed noise is selected as 3. ...
Article
Full-text available
A systematic identification approach for the rotor/radial active magnetic bearing (rotor/RAMB) system is presented in this study. First, the system identification of the controller of a commercial TMP is undertaken, and the corresponding linear dynamic models are constructed. To fully excite the nonlinearities of the rotor/RAMB system, a parallel amplitude-modulated pseudo-random binary sequence (PAPRBS) generator, whose perturbation signals have the merit of being mutually uncorrelated, is employed. The dynamics of the rotor/RAMB system are identified with a Hammerstein–Wiener model. To reduce the difficulty of identifying the two nonlinear blocks, the output nonlinear characteristics are estimated prior to the recursive process. Two conventional nonlinear model structures, i.e., NARX and NARMAX, are employed for comparison to verify the effectiveness of the identified Hammerstein–Wiener model. The averaged fit values of the Hammerstein–Wiener model, NARX model, and NARMAX model are 93.25%, 88.36%, and 76.91%, respectively.
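The fit percentages quoted in this abstract are of the kind produced by the normalized root-mean-square (NRMSE) fit criterion (as used, e.g., by MATLAB's compare function); the exact formula is not shown in the snippets above, so the following is a minimal sketch under that assumption:

```python
import numpy as np

def nrmse_fit_percent(y, y_hat):
    # 100% = perfect reproduction of y; 0% = no better than the mean of y.
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return 100.0 * (1.0 - np.linalg.norm(y - y_hat)
                    / np.linalg.norm(y - y.mean()))
```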
... Several attempts have been made towards increasing the computational speed of the Moore-Penrose generalized inverse [36][37][38]. Katsikis and Pappas [38] constructed a fast and more reliable method, the ginv function in Matlab, for computing the Moore-Penrose inverse of a rank-n tensor-product matrix. ...
... The following is the CGS-MPi algorithm [37]: ...
... Toutounian and Ataei [37] reported that their experimental data reveal that, for large sparse matrices, the Moore-Penrose inverses computed by this technique are reasonably accurate, with a computation speed greater than that of pseudoinverses computed by the other techniques (Tables 2 and 3). ...
Article
In spite of the prominence of the extreme learning machine model, and its excellent features such as insignificant intervention for learning and model tuning, simplicity of implementation, and high learning speed, which make it a fascinating alternative method for Artificial Intelligence, including Big Data Analytics, it is still limited in certain aspects. These aspects must be treated towards achieving an effective and cost-sensitive model. This review discusses the major drawbacks of ELM, which include difficulty in determining the hidden layer structure, prediction instability and imbalanced data distributions, poor capability of sample structure preserving (SSP), and difficulty in accommodating lateral inhibition by direct random feature mapping. Other drawbacks include multi-graph complexity, one-by-one or chunk-by-chunk (a block of data) learning, global memory size limitation, and challenges with big data. The recent trends proposed by experts for each drawback are discussed in detail towards achieving an effective and cost-sensitive model.
... The algorithm is fast, but it nevertheless works only for well-conditioned matrices. Recently, some studies (Courrieu 2005; Stanimirović and Tasić 2008; Toutounian and Ataei 2009; Katsikis and Pappas 2008; Katsikis et al. 2011) presented improved algorithms to compute the Moore-Penrose inverse for rank deficient matrices. Toutounian and Ataei (2009) proposed a new algorithmic procedure based on the conjugate Gram-Schmidt process and the Moore-Penrose inverse of partitioned matrices, while Katsikis et al. (2011) provided a fast and accurate computational tool for the Moore-Penrose inverse matrix, based on a specific QR factorization method as well as the reverse order law for generalized inverses. On the basis of the second Penrose equation, Petković and Stanimirović (2011) proposed a new iterative method to compute the Moore-Penrose inverse. ...
... To obtain the Moore-Penrose inverse of an arbitrary matrix at a lower computational cost than the SVD method, some new methods have been established. On the basis of the conjugate Gram-Schmidt process, Toutounian and Ataei (2009) proposed the CGS-MPi algorithm to compute the Moore-Penrose inverse of an n × m matrix with m ≥ n, and they proved the algorithm is efficient. Recently, Katsikis and Pappas (2008) and Katsikis et al. (2011) presented a new algorithm based on the QR decomposition method, and they verified that the algorithm is much more robust and reliable than other published methods. ...
Article
The Moore-Penrose inverse has many applications in civil engineering, such as structural control, nonlinear buckling, and form-finding. However, solving the generalized inverse requires ample computational resources, especially for large-sized matrices. An efficient method based on group theory for the Moore-Penrose inverse problems for symmetric structures is proposed, which can deal with not only well-conditioned but also rank deficient matrices. First, the QR decomposition algorithm is chosen to evaluate the generalized inverse of any sparse and rank deficient matrix. In comparison with other well established algorithms, the QR method has superiority in computation efficiency and accuracy. Then, a group-theoretic approach to computing the Moore-Penrose inverse for problems involving symmetric structures is described. Based on the inherent symmetry and the irreducible representations, the orthogonal transformation matrices are deduced to express the inverse problem in a symmetry-adapted coordinate system. The original problem is transferred into computing the generalized inverse of many independent submatrices. Numerical experiments on three different types of structures with cyclic or dihedral symmetry are carried out. It is concluded from the numerical results and comparisons with two conventional methods that the proposed technique is efficient and accurate.
... There are several methods for computing the Moore-Penrose inverse matrix (cf. [2,4,5,8,9,11,13,14]). One of the most commonly used methods is the Singular Value Decomposition (SVD) method. ...
... This method is very accurate but time-intensive since it requires a large amount of computational resources, especially in the case of large matrices. In a recent work, Toutounian and Ataei [14] presented an algorithm based on the conjugate Gram-Schmidt process and the Moore-Penrose inverse of partitioned matrices, the CGS-MPi algorithm, and they concluded that this algorithm is a robust and efficient tool for computing the Moore-Penrose inverse of large sparse and rank deficient matrices. Also, the recent work [13] of Petković and Stanimirović proposes a new iterative method, derived from the second Penrose equation, for the computation of the Moore-Penrose inverse. ...
... In the present manuscript, we construct a very fast and reliable method (see the qrginv function in the Appendix) in order to estimate the Moore-Penrose inverse matrix. The computational effort required by the qrginv function to obtain the Moore-Penrose inverse is substantially lower, particularly for large matrices, compared to that of the SVD method and the methods presented by Toutounian and Ataei in [14], by Petković and Stanimirović in [13], and by Courrieu in [4]. In addition, we obtain reliable and very accurate approximations in all tested cases. ...
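The QR route mentioned here is easy to sketch: a column-pivoted QR factorization A P = Q_r R_r at numerical rank r is a rank factorization, so A† = P R_r^T (R_r R_r^T)^(-1) Q_r^T. This follows the spirit of the qrginv approach, not its published code; the tolerance rule and names are illustrative.

```python
import numpy as np
from scipy.linalg import qr

def qr_pinv(A, tol=1e-12):
    # Column-pivoted economic QR: A[:, piv] = Q @ R, |diag(R)| decreasing.
    Q, R, piv = qr(A, mode='economic', pivoting=True)
    r = int(np.sum(np.abs(np.diag(R)) > tol * max(1.0, abs(R[0, 0]))))
    Qr, Rr = Q[:, :r], R[:r, :]
    P = np.eye(A.shape[1])[:, piv]     # permutation matrix: A @ P = A[:, piv]
    # A = Qr (Rr P^T) is a rank factorization, hence:
    return P @ Rr.T @ np.linalg.solve(Rr @ Rr.T, Qr.T)
```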
Article
In this article we provide a fast computational method in order to calculate the Moore–Penrose inverse of singular square matrices and of rectangular matrices. The proposed method proves to be much faster and has significantly better accuracy than the previously proposed methods, while it works for both full and sparse matrices.
... The method provided by Katsikis and Pappas [33] can be utilized to solve large sparse matrices. The algorithm from Courrieu [34] was based on Cholesky factorization. Toutounian and Ataei [35] presented a CGS-MPi algorithm based on the conjugate Gram-Schmidt process, which is relatively robust for large sparse and rank deficient matrices. ...
... For the Moore-Penrose inverse computation problem of the small rectangular matrices that occur during buffer landings, the methods of generalized inverse (Ginv) [33], tensor product matrix (TPM) [38], and improved Qrginv (IMqrg) [39] have the fastest solution speed. The methods of singular value decomposition (SVD), QR generalized inverse (Qrg) [36], and VTLSA have medium speed, and the generalized inverse (Geninv) [34] method has the slowest solution speed. However, the difference among them is small, so they can all be used for the stabilizer design of the FLLWR. ...
Article
Full-text available
The prober with an immovable lander and a movable rover is commonly used to explore the Moon's surface. The rover can complete detection on the relatively flat terrain of the lunar surface well, but its detection efficiency on deep craters and mountains is relatively low due to the difficulty of reaching such places. A lightweight four-legged landing and walking robot called "FLLWR" is designed in this study. It can take off and land repeatedly between any two sites, whether on deep craters, mountains or other challenging landforms that are difficult to reach by direct ground movement. The robot integrates the functions of a lander and a rover, including folding, deploying, repetitive landing, and walking. A landing control method via compliance control is proposed to solve the critical problem of impact energy dissipation and realize buffer landing. Repetitive landing experiments on a five-degree-of-freedom lunar gravity testing platform are performed. Under landing conditions with a vertical velocity of 2.1 m/s and a loading weight of 140 kg, the torque safety margin is 10.3% and 16.7%, and the height safety margin is 36.4% and 50.1% for the cases with or without an additional horizontal disturbance velocity of 0.4 m/s, respectively. The study provides a novel insight into next-generation lunar exploration equipment.
... Hence, many researchers have focused on developing alternate methods to compute the Moore-Penrose pseudoinverse using matrix decomposition techniques. Comparative studies of different matrix decomposition methods such as SVD, QR, and Cholesky to compute the Moore-Penrose pseudoinverse in ELM for classification problems are discussed in [18][19][20]. ...
... Step 5: Calculate A using Eq. (20). Step 6: Calculate B using Eq. (21). Step 7: Calculate C using Eq. (22). ...
Article
Extreme Learning Machine (ELM) is an efficient and effective least-square-based learning algorithm for classification and regression problems based on a single hidden layer feed-forward neural network (SLFN). It has been shown in the literature that it has faster convergence and good generalization ability for moderate datasets. However, there is a great deal of challenge involved in computing the pseudoinverse when there are large numbers of hidden nodes or a large number of instances to train complex pattern recognition problems. To address this problem, a few approaches such as EM-ELM and DF-ELM have been proposed in the literature. In this paper, a new rank-based matrix decomposition of the hidden layer matrix is introduced to attain the optimal training time and reduce the computational complexity for a large number of hidden nodes in the hidden layer. The results show that it has a constant training time which is close to the minimal training time and far from the worst-case training time of the DF-ELM algorithm, which has been shown to be efficient in the recent literature.
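For context, the pseudoinverse enters ELM only in the output-weight solve β = H†T. A minimal sketch of that training step (layer sizes, the sigmoid choice, and all names are illustrative, not the paper's rank-based decomposition):

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden layer output matrix
    beta = np.linalg.pinv(H) @ T                     # output weights via H+
    return W, b, beta
```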
... Due to its practical importance, the numerical determination of the generalized inverse remains an active topic of research [3,4]. The pseudoinverse A⁺ ∈ C^(m×n) of any matrix A ∈ C^(n×m) is uniquely [5] characterized by the four Penrose equations [6]: ...
... Theorem 4. If Γ_(n+1) ≠ 0 is linearly independent from previous observations, the least-squares solution X_(n+1) can be updated in O(mr) if X_n, C_n and C̄_n are ... To be convinced, consider A_n = I_n the identity matrix and Γ_(n+1) = ...
Preprint
Updating a linear least squares solution can be critical for near real-time signal-processing applications. The Greville algorithm proposes a simple formula for updating the pseudoinverse of a matrix A ∈ R^(n×m) with rank r. In this paper, we explicitly derive a similar formula by maintaining a general rank factorization, which we call rank-Greville. Based on this formula, we implemented a recursive least squares algorithm exploiting the rank-deficiency of A, achieving the update of the minimum-norm least-squares solution in O(mr) operations and, therefore, solving the linear least-squares problem from scratch in O(nmr) operations. We empirically confirmed that this algorithm displays a better asymptotic time complexity than LAPACK solvers for rank-deficient matrices. The numerical stability of rank-Greville was found to be comparable to Cholesky-based solvers. Nonetheless, our implementation supports exact numerical representations of rationals, due to its remarkable algebraic simplicity.
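The classical Greville recursion that rank-Greville refines fits in a few lines. The sketch below appends one column a to A given A† (the paper's per-observation update is the transposed view); names are ours and the tolerance is illustrative:

```python
import numpy as np

def greville_append_column(A, A_pinv, a, tol=1e-12):
    # Returns the pseudoinverse of [A | a] from A+ (one Greville step).
    d = A_pinv @ a                        # coordinates of a in range(A)
    c = a - A @ d                         # part of a outside range(A)
    if np.linalg.norm(c) > tol:           # a brings a new direction
        b = c / (c @ c)
    else:                                 # a is already in range(A)
        b = (A_pinv.T @ d) / (1.0 + d @ d)
    return np.vstack([A_pinv - np.outer(d, b), b])
```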
... Both direct and iterative methods for computing A† have been proposed [3,7-9,21]. The most popular direct methods are based on the singular value decomposition (SVD) [3], QR factorization [8], the conjugate Gram-Schmidt process [21], LDL* factorization [13,18], Gaussian elimination [20], etc. Usually these methods give very accurate results but take a large amount of computational resources and time, especially in the case of large matrices. For this reason, iterative methods to compute A† for large-scale problems have been proposed. ...
Article
A new iterative scheme for the computation of the Moore-Penrose generalized inverse of an arbitrary rectangular or singular complex matrix is proposed. The method uses appropriate error bounds and is applicable without restrictions on the rank of the matrix. But, it requires that the rank of the matrix is known in advance or computed beforehand. The method computes a sequence of monotonic inclusion interval matrices which contain the Moore-Penrose generalized inverse and converge to it. Successive interval matrices are constructed by using previous approximations generated from the hyperpower iterative method of an arbitrary order and appropriate error bounds of the Moore-Penrose inverse. A convergence theorem of the introduced method is established. Numerical examples involving randomly generated matrices are presented to demonstrate the efficacy of the proposed approach. The main property of our method is that the successive interval matrices are not defined using principles of interval arithmetic, but using accurately defined error bounds of the Moore-Penrose inverse.
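The hyperpower iteration of order p underlying this interval method can be sketched as follows (only the bare iteration; the interval enclosures and error bounds of the paper are not reproduced). The Ben-Israel-style start X₀ = A^T/(‖A‖₁‖A‖∞) guarantees convergence:

```python
import numpy as np

def hyperpower_pinv(A, order=3, iters=50):
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(iters):
        R = I - A @ X                     # residual of the current iterate
        S = np.eye(A.shape[0])
        for _ in range(order - 1):        # Horner form of I + R + ... + R^(p-1)
            S = I + R @ S
        X = X @ S                         # X_{k+1} = X_k (I + R + ... + R^(p-1))
    return X
```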
... where the matrix † is computed using the ginv method. Following [34], the authors compute the rank of 11 as the number of columns in which at least one coefficient with absolute value greater than the set tolerance exists. As in [34], the tolerance is assumed to be equal to 10^(-5). ...
Article
Full-text available
Computing the pseudoinverse of a matrix is an essential component of many computational methods. It arises in statistics, graphics, robotics, numerical modeling, and many more areas. Therefore, it is desirable to select reliable algorithms that can perform this operation efficiently and robustly. A demanding benchmark test for pseudoinverse computation is introduced. The stiffness matrices for higher order approximation turned out to be such tough problems and can therefore serve as good benchmarks for pseudoinverse algorithms. It was found that only one algorithm, out of five known from the literature, enabled us to obtain acceptable results for the pseudoinverse of the proposed benchmark test.
... Let (3,2), (4,4), (5,3.5), (6,6), (7,7) be the points in the data collection. ...
Article
Full-text available
We convert a polynomial function of degree n into imprecise form to obtain an important point called the conversion point. For some particular region, we collect a finite number of data points to obtain the most economical function, called the imprecise function. The conversion point of the functions is shown with the help of a MuPAD graph. Further, we study the area of the imprecise function produced by multiplication with the sine function, to see how much the imprecise functions vary over the respective intervals. For different imprecise polynomials we study the level of the rate of convergence.
... A lot of work concerning generalized inverses has been carried out, in finite and infinite dimensions (e.g., [1][2][3]). There are several methods for computing the Moore-Penrose inverse matrix [2,4-8]. In a recent article [9], an improved method for the computation of the Moore-Penrose inverse matrix was provided. ...
... We follow the same method as in [8], and we have the rank deficient matrices as ...
Article
Full-text available
Katsikis et al. presented a computational method in order to calculate the Moore-Penrose inverse of an arbitrary matrix (including singular and rectangular) (2011). In this paper, an improved version of this method is presented for computing the pseudoinverse of an m × n real matrix A with rank r > 0. Numerical experiments show that the resulting pseudoinverse matrix is reasonably accurate and its computation time is significantly less than that obtained by Katsikis et al.
... Later, Wei [9,10] gave different methods to compute the weighted M-P inverse. In the recent works [24][25][26][27], some papers propose iterative methods to compute the M-P inverse for large and sparse matrices. In this paper we design an iterative method based on gradients to solve the matrix equation AXA = A; when the initial matrix X_0 = A* is taken, the M-P inverse A† can be obtained with maximal convergence rate. ...
... In the following table, we perform numerical experiments to compare Petković and Stanimirović's method [26] (PSI) and Toutounian and Ataei's method [27] (CGSI) with the proposed method GBMC. ...
Article
Full-text available
In this paper, we present an iterative method based on the gradient with maximal convergence rate to compute the Moore-Penrose inverse A† of a given matrix A. By this iterative method, when the initial matrix X_0 = A* is taken, the M-P inverse A† can be obtained with maximal convergence rate in the absence of roundoff errors. In the end, a numerical example is given to illustrate the effectiveness, accuracy and computation time of the method, which are all superior to those of the other methods for large singular matrices.
... The Moore-Penrose inverse is one of the most important generalized inverses of an arbitrary singular square or rectangular (real or complex) matrix. It has been extensively studied by many researchers [4,2,6,9,5,3] and many methods have been proposed in the literature. Accordingly, it is important both practically and theoretically to find good higher order algorithms for computing the Moore-Penrose inverse of a given arbitrary matrix. ...
... The unique matrix A† satisfies the following four equations: (i) AXA = A, (ii) XAX = X, (iii) (AX)* = AX, (iv) (XA)* = XA. (1) Both direct and iterative methods (cf. [11,10,2,3,9]) can be used to compute A†. One of the most commonly used direct methods is the Singular Value Decomposition (SVD) method. ...
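The four equations above are easy to check numerically; a convenience sketch (X equals A† exactly when all four residuals vanish):

```python
import numpy as np

def penrose_residuals(A, X):
    # Residuals of (i) AXA = A, (ii) XAX = X, (iii) (AX)* = AX, (iv) (XA)* = XA.
    AX, XA = A @ X, X @ A
    return (np.linalg.norm(AX @ A - A),
            np.linalg.norm(XA @ X - X),
            np.linalg.norm(AX.conj().T - AX),
            np.linalg.norm(XA.conj().T - XA))

A = np.random.default_rng(0).standard_normal((5, 3))
print(penrose_residuals(A, np.linalg.pinv(A)))   # four values near 1e-15
```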
Article
A higher order iterative method to compute the Moore-Penrose inverses of arbitrary matrices using only the Penrose equation (ii) is developed by extending the iterative method described in [1]. Convergence properties as well as error estimates of the method are studied. The efficacy of the method is demonstrated by working out four numerical examples, two involving a full rank matrix and an ill-conditioned Hilbert matrix, and the other two involving randomly generated full rank and rank deficient matrices. The performance measures are the number of iterations and the CPU time in seconds used by the method. It is observed that the number of iterations always decreases as expected, and the CPU time first decreases gradually and then increases as the order of the method increases, for all examples considered.
... However, previous simulation research has shown that these existing numerical approaches are time-consuming (Courrieu 2005; Katsikis et al. 2011). In the field of mathematics, novel methodologies to quickly calculate the generalized inverse matrix (Kantún-Montiel 2014; Katsikis and Pappas 2008; McCullagh 2019; Petković and Stanimirović 2011; Soleymani 2015; Toutounian and Ataei 2009) have been proposed. The theory of generalized inverses and linear mappings was introduced into structural engineering (Kawaguchi 2011). ...
... According to Courrieu, the computation cost is proportional to the third or fourth power of the matrix size [4]. To reduce the computation cost, several approximation methods have been presented: the Tikhonov regularized matrix [5,6], higher order iterative methods [7], an algorithm for bidiagonal matrices by Demmel and Kahan [8], an algorithm based on the conjugate Gram-Schmidt process [9], and the QR factorization with the reverse order law for generalized inverses [10]. Recently, Xia et al. [11] proposed a novel iterative method. ...
Article
Full-text available
This study proposes an efficient approximation method for the Moore–Penrose pseudo-inverse when, for a matrix, the eigenvectors associated with the eigenvalue zero (referred to herein as zeros eigenvectors) are known in advance. The method reduces the computational cost by several orders of magnitude. The approximation is performed by the addition of a small-amplitude diagonal matrix to regularise the matrix and multiplication with a projection matrix after its regular inversion. The projection removes the components of the zeros eigenvectors. The condition for obtaining a good approximation is that the amplitude of the small-amplitude matrix should be sufficiently smaller than the smallest non-zero eigenvalue of the matrix. When the matrix is a stiffness matrix in a support-free elasticity problem (a problem whereby the elastic body is unsupported), the zeros eigenvectors indicate rigid-body motions. The method was applied to robust support-free topology optimization, revealing its excellent accuracy and efficiency. The observed computational time was found to be proportional to the size of the stiffness matrix. Furthermore, conducting robust topology optimization for fine-mesh problems resulted in structures that exhibited biological features.
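The construction described in this abstract can be sketched directly for a symmetric matrix K whose kernel basis V (e.g., rigid-body modes) is known; the symmetric-K setting and all names are assumptions of this sketch:

```python
import numpy as np

def approx_pinv_known_kernel(K, V, alpha=1e-8):
    # V: orthonormal basis of the kernel (n x k). The projector P removes
    # kernel components; a good approximation requires alpha to be much
    # smaller than the smallest nonzero eigenvalue of K.
    n = K.shape[0]
    P = np.eye(n) - V @ V.T
    return P @ np.linalg.solve(K + alpha * np.eye(n), P)
```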
... The complexity of computing S = I − A † A is more delicate. There is extensive research on finding efficient and reliable methods to find A † , see for example [41][42][43] . One of the most commonly used methods is the Singular Value Decomposition (SVD) which is very accurate but time and memory intensive especially in the case of large matrices. ...
Article
Full-text available
A common problem in machine learning and pattern recognition is the process of identifying the most relevant features, specifically in dealing with high-dimensional datasets in bioinformatics. In this paper, we propose a new feature selection method, called Singular-Vectors Feature Selection (SVFS). Let D be a labeled dataset, where b is the class label and the features (attributes) are columns of a matrix A. We show that the signature matrix S = I − A†A can be used to partition the columns of A into clusters so that columns in a cluster correlate only with the columns in the same cluster. In the first step, SVFS uses the signature matrix of D to find the cluster that contains b. We reduce the size of A by discarding features in the other clusters as irrelevant features. In the next step, SVFS uses the signature matrix of the reduced A to partition the remaining features into clusters and chooses the most important features from each cluster. Besides working perfectly on synthetic datasets, comprehensive experiments on real-world benchmark and genomic datasets show that SVFS exhibits overall superior performance compared to the state-of-the-art feature selection methods in terms of accuracy, running time, and memory usage. A Python implementation of SVFS along with the datasets used in this paper are available at https://github.com/Majid1292/SVFS .
... These methods include the Singular Value Decomposition (SVD), the Cholesky factorization of the singular matrix, and QR factorization. The relational algorithms are essentially based on several methods used to compute the Moore-Penrose inverse matrix [10, 11, 12 and 13], and these methods are applied to the extreme learning machine. They are the singular matrix ELM (Geninv-ELM), QR factorization, and the Greville method. ...
Chapter
Full-text available
1. INTRODUCTION As universal approximators, feedforward neural networks have been widely studied and used in many fields because of their evident merits. Feedforward neural networks can approximate complex nonlinear mappings directly from input samples. Moreover, they can provide models for many natural and artificial phenomena that are difficult to handle with classical parametric techniques. However, since all parameters of a feedforward network must be tuned, there are dependencies among the parameters of different layers, which makes training feedforward neural networks time-consuming. Single-hidden-layer feedforward neural networks, among the most popular feedforward architectures, have been studied intensively, both theoretically and practically, to understand their learning and fault-tolerance capabilities. Nevertheless, the most popular learning algorithms for training single-hidden-layer feedforward networks are relatively slow, because all parameters of such networks must be adjusted through iterative procedures; these algorithms can therefore easily get stuck in local minima. Recently, a new fast-learning neural network algorithm called the extreme learning machine (ELM) [1, 2] has been developed to increase the efficiency of single-hidden-layer feedforward networks. Unlike traditional learning algorithms for neural networks (such as back-propagation), which may encounter difficulties with manually tuned control parameters (learning rate, number of learning iterations, etc.) and/or local minima, ELM runs fully automatically without iterative tuning and, in theory, requires no intervention from the user. Moreover, the learning speed of ELM is very high compared with other traditional methods. In the ELM algorithm, the learning parameters of the hidden nodes, including the input weights and biases, can be assigned independently, and the output weights of the network can be determined analytically by a simple generalized inverse operation. The training phase can be completed efficiently through a fixed nonlinear transformation without a time-consuming learning process. In addition, the ELM algorithm can achieve good generalization performance, and the universal approximation capability of the standard ELM with additive or radial basis activation functions has been proven [3-4]. The classification boundary of extreme learning machines may not be optimal, because the learning parameters of the hidden nodes are assigned randomly [5]. For this reason, some samples, especially those close to the classification boundary, may be misclassified by ELM. Furthermore, ELM often requires more hidden neurons than traditional tuning-based algorithms in many cases [6]. In this study, the extreme learning machine algorithm is examined in detail, with the aim of showing that it can be used as an alternative solution for classification problems, one of the problems of daily life. To demonstrate the effectiveness of extreme learning machines, a comparison is made with a back-propagation multilayer artificial neural network, one of the traditional methods. In addition, several techniques are tried for the computation of the generalized inverse matrix, the key step of the extreme learning algorithm and the one that affects its computation time. The rest of the study is as follows
... The singular values σ_i are uniquely determined, and if A is square and the σ_i are distinct, then u_i and v_i, the columns of the matrices U and V in (2.2), are uniquely determined up to complex signs. Definition 2.6 [16]. Let A ∈ M_(m×n); then there exists a unique matrix A⁺ ∈ M_(n×m) satisfying the following conditions: ...
... While we have reason to admire brains, they are also unable to perform certain very useful computations. In artificial neural networks we often employ non-local matrix operations like inversion to calculate optimal weights (Toutounian and Ataei 2009): these computations are not possible to perform locally in a distributed manner. Gradient descent algorithms such as backpropagation are unrealistic in a biological sense, but clearly very successful in deep learning. ...
Article
Full-text available
Do the energy requirements of the human brain impose energy constraints that give reason to doubt the feasibility of artificial intelligence? This report reviews some relevant estimates of brain bioenergetics and analyzes some of the methods of estimating brain emulation energy requirements. Turning to AI, there are reasons to believe the energy requirements for de novo AI have little correlation with brain (emulation) energy requirements, since the cost could depend merely on the cost of processing higher-level representations rather than billions of neural firings. Unless one thinks the human way of thinking is the most optimal or most easily implementable way of achieving software intelligence, we should expect de novo AI to make use of different, potentially very compressed and fast, processes.
... Courrieu [15] proposed a fast computation of Moore-Penrose generalized inverse matrices based on a full rank Cholesky factorization. Toutounian and Ataei [13] presented the CGS-MPi algorithm, which is based on the Moore-Penrose inverse of partitioned matrices and the conjugate Gram-Schmidt process. They proved that this algorithm is an efficient tool for computing the Moore-Penrose inverse and that it is robust, especially when dealing with rank deficient and large sparse matrices. ...
... Many methods for the generalized inverse were developed. They are divided into two types: continuous-time recurrent neural networks and learning algorithms (Cichocki & Unbehauen, 1992; Wei, 2000; Huang, Zhu, & Siew, 2006; Wang, 1997) and numerical algorithms (Boulmaarouf, Zmiranda, & Labrousse, 1997; Guo & Huang, 2010; Huang & Zhang, 2006; Petkovic & Stanimirovic, 2011; Stanimirovic & Tasic, 2008; Toutounian & Ataei, 2009; Katsikis & Pappas, 2008; Katsikis, Pappas, & Petralias, 2011; Najafi & Solary, 2006; Chen & Wang, 2011; Li & Li, 2010). The continuous-time algorithm has emerged in parallel distributed computational models, but it has relatively slow speed due to its continuous-time nature. ...
... For β < 1, the method has linear convergence, while for β = 1 the method reduces to the well-known Schultz method. Toutounian and Ataei (2009) presented an algorithm based on the conjugate Gram-Schmidt process and the Moore-Penrose inverse of partitioned matrices. They concluded that this algorithm is a robust and efficient tool for computing the Moore-Penrose inverse of large sparse and rank deficient matrices. ...
Article
Full-text available
A third order iterative method for estimating the Moore-Penrose generalised inverse is developed by extending the second order iterative method described in Petković and Stanimirović (2011). Convergence analysis along with the error estimates of the method are investigated. Three numerical examples, two for full rank simple and randomly generated singular rectangular matrices and a third for rank deficient singular square matrices with large condition numbers from the matrix computation toolbox, are worked out to demonstrate the efficacy of the method. The performance measures used are the number of iterations and the CPU time used by the method. On comparing the results obtained by our method with those obtained with the method given in Petković and Stanimirović (2011), it is observed that our method gives improved performance.
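The second order scheme being extended reads X_(k+1) = (1+β)X_k − βX_k A X_k, which for β = 1 reduces to the Schultz iteration mentioned above. A bare sketch with the standard scaled-A^T start (iteration count and names are illustrative):

```python
import numpy as np

def beta_iteration_pinv(A, beta=1.0, iters=200):
    X = A.conj().T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(iters):
        X = (1.0 + beta) * X - beta * (X @ A @ X)   # from the 2nd Penrose eq.
    return X
```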
Article
Full-text available
In this letter, we propose a novel iterative method for computing the generalized inverse, based on a novel KKT formulation. The proposed iterative algorithm requires four matrix and vector multiplications at each iteration and thus has low computational complexity. The proposed method is proved to be globally convergent without any condition. Furthermore, for fast computation of the generalized inverse, we present an acceleration scheme based on the proposed iterative method. The global convergence of the proposed acceleration algorithm is also proved. Finally, the effectiveness of the proposed iterative algorithm is evaluated numerically.
... From the box and whisker plot in Figure 6(b) we see that the worst ESN obtained by using the new method with = 0.1 (experiment 5) performed better than the best ESN obtained with the original method trained on 5 repetitions of the YMCA (experiment 2). Due to the computation time of the pseudo-inverse calculations, the training time of a sequence of length * is longer than training a sequence of length times [17]. This implies that the running time of experiment 5 (sequence of 313 steps run 3 * 10 times) is also shorter than the running time of experiment 2 (sequence of 5 * 313 steps run 10 times). ...
Article
Full-text available
Echo state networks are a relatively new type of recurrent neural networks that have shown great potentials for solving non-linear, temporal problems. The basic idea is to transform the low dimensional temporal input into a higher dimensional state, and then train the output connection weights to make the system output the target information. Because only the output weights are altered, training is typically quick and computationally efficient compared to training of other recurrent neural networks. This paper investigates using an echo state network to learn the inverse kinematics model of a robot simulator with feedback-error-learning. In this scheme teacher forcing is not perfect, and joint constraints on the simulator makes the feedback error inaccurate. A novel training method which is less influenced by the noise in the training data is proposed and compared to the traditional ESN training method.
... There are several methods for computing the Moore-Penrose inverse of a matrix. Some of the most commonly used methods are based on the Singular Value Decomposition method (MATLAB's pinv function), the conjugate Gram-Schmidt process and the Moore-Penrose inverse of partitioned matrices (see [23]), and iterative methods which are derived from the second Penrose equation (see [20]). In this work, for the determination of the Moore-Penrose inverse matrix, we use the results of a recent work, [12], where a very fast and reliable method is presented. ...
Article
Full-text available
We introduce the T-restricted weighted generalized inverse of a singular matrix A with respect to a positive semidefinite matrix T, which defines a seminorm for the space. The new approach proposed is that since T is positive semidefinite, the minimal seminorm solution is considered for all vectors perpendicular to the kernel of T.
... The Moore-Penrose pseudoinverse finds a least squares solution to the matrix inversion problem. The Moore-Penrose pseudoinverse of an m×n matrix G is the unique n×m matrix G⁺ satisfying the four Penrose equations [8], which are given as ...
Conference Paper
Full-text available
We present the application of linear minimum mean square error (LMMSE) estimation to GPR data for achieving buried object detection. Without employing any empirical assumptions, nonstationary form of Wiener-Hopf equations is applied to GPR signals to estimate the next sample in normal conditions. A large deviation from this estimation indicates the presence of a buried object. The technique is causal, which allows it to be used in real-time applications. Our approach is theoretically optimal in linear minimum mean square error sense, and it is also validated with the tests that are carried out on a comprehensive data set of GPR signals.
Article
Full-text available
Let A be a matrix with Moore-Penrose pseudo-inverse A†. It is proved that, after re-ordering the columns of A, the projector P = I − A†A has a block-diagonal form; that is, there is a permutation matrix Π such that ΠPΠ^T = diag(S_1, S_2, …, S_k). It is further proved that each block S_i corresponds to a cluster of columns of A that are linearly dependent on each other. A clustering algorithm is provided that allows one to partition the columns of A into clusters such that columns in a cluster correlate only with columns within the same cluster. Some applications in supervised and unsupervised learning, especially feature selection, clustering, and the sensitivity of least squares solutions, are discussed.
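That clustering idea translates almost verbatim into code: columns i and j share a cluster whenever entry (i, j) of P = I − A†A is (numerically) nonzero, and clusters are the connected components of that pattern. A sketch with an illustrative tolerance and traversal:

```python
import numpy as np

def projector_column_clusters(A, tol=1e-10):
    n = A.shape[1]
    P = np.eye(n) - np.linalg.pinv(A) @ A
    adj = np.abs(P) > tol                 # nonzero pattern of the projector
    clusters, seen = [], set()
    for i in range(n):                    # connected components of adj
        if i in seen:
            continue
        stack, comp = [i], set()
        while stack:
            k = stack.pop()
            if k not in comp:
                comp.add(k)
                stack.extend(np.nonzero(adj[k])[0].tolist())
        seen |= comp
        clusters.append(sorted(comp))
    return clusters
```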
Article
In this article, time-varying matrix equation problems, including the Lyapunov equation, matrix inversion, and generalized matrix inversion are investigated in a future (or say, discrete time-varying) perspective. Then, in order to develop a unified solution model for the above three future problems, a future matrix equation (FME) is investigated. The discrete-time unified solution (DTUS) model, which is based on the zeroing neural dynamics (ZND) method and a new nine-instant Zhang et al. discretization (ZeaD) formula, is thus proposed and termed the nine-instant DTUS (9IDTUS) model. Meanwhile, theoretical analyses on the stability and precision of the 9IDTUS model are provided. In addition, conventional DTUS models obtained from the Euler forward formula, Taylor–Zhang discretization formula, and a seven-instant discretization formula are also presented for comparisons. Furthermore, numerical experiments including the robot motion generation, are conducted and analyzed to substantiate the efficacy and superiority of the proposed 9IDTUS model.
Article
Full-text available
We are concerned with a kind of iterative method for computing the Moore-Penrose inverse, which can be considered as a discrete-time form of recurrent neural networks. We study the momentum learning scheme of the method and discuss its semi-convergence when computing the Moore-Penrose inverse of a rank-deficient matrix. We prove the semi-convergence for our new acceleration algorithm and obtain the optimal momentum factor which yields the fastest semi-convergence. Numerical tests demonstrate the effectiveness of our new acceleration algorithm.
Article
We propose an algorithm for solving the basis pursuit problem min_{u∈C^n} {‖u‖₁ : Au = f}. Our starting motivation is the algorithm for compressed sensing proposed by Qiao, Li and Wu, which is based on linearized Bregman iteration with a generalized inverse. Qiao, Li and Wu defined a new algorithm for solving the basis pursuit problem in compressive sensing using a linearized Bregman iteration and the iterative formula of linear convergence for computing the matrix generalized inverse. In our proposed approach, we combine a partial application of Newton's second order iterative scheme for computing the generalized inverse with the Bregman iteration. Our scheme takes less computational time and gives more accurate results in most cases. The effectiveness of the proposed scheme is illustrated in two applications: signal recovery from noisy data and image deblurring.
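For orientation, the plain linearized Bregman iteration for basis pursuit looks as follows (real arithmetic, illustrative parameters; the paper's contribution, replacing the inner generalized-inverse computation with a partial Newton second-order scheme, is not reproduced here):

```python
import numpy as np

def linearized_bregman(A, f, mu=5.0, iters=5000):
    delta = 1.0 / np.linalg.norm(A, 2) ** 2          # safe step size
    shrink = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
    u = np.zeros(A.shape[1])
    v = np.zeros(A.shape[1])
    for _ in range(iters):
        v = v + A.T @ (f - A @ u)                    # Bregman accumulation
        u = delta * shrink(v, mu)                    # soft thresholding
    return u
```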
Article
Many Network Inference (NI) problems are modeled as Under-Determined Linear Inverse (UDLI) problems where the number of observations or measurements is less than the number of unknowns. In this paper, a new technique for solving NI problems in dynamic network environments is presented. This technique is called Optimal-Coherent Network Inference (OCNI) and is applied in two stages. In the first stage, called the learning phase, the Optimal Observation Matrix (OOM) of network measurements is computed. In the second stage, called the Measurement and Inference Phase (MIP), the OOM is used to compute the least-norm solution and estimate the unknowns of interest. The optimal observation matrix can be adaptively modified to improve the estimation accuracy. In this paper, first, the principles of OCNI are explained, and its properties are mathematically proved and experimentally justified. In addition, a new framework for traffic matrix estimation in Software Defined Networks (SDN) is developed where OCNI is the main technique for estimating the size of network flows. This framework is called OCcASION. Under the hard resource constraint of the size of Ternary Content Addressable Memory (TCAM) in SDN switches, OCcASION mainly uses the readily and reliably available link-load measurements to estimate the size of network flows, where link-loads are provided via the Simple Network Management Protocol (SNMP). In the learning phase, OCcASION computes the optimal observation matrix of SNMP link-loads. In the measurement and inference phase, OCcASION adaptively identifies and measures the most informative flows; moreover, it modifies the original OOM and accurately estimates the unknown traffic matrix. For this purpose, OCcASION uses the flexibility provided by the SDN to adaptively re-program a set of TCAM/flow-table entries of OpenFlow switches. The performance of the OCcASION framework is evaluated using synthetic and real traffic traces of three practical network topologies. It is shown that this framework can significantly improve the accuracy of traffic matrix estimation. For example, on the Geant network the estimation error is reduced by approximately 83% compared to regular minimum-norm estimation. Furthermore, the principles of OCNI are applied to estimate network link-delays where, in the learning phase, the OOM is computed using the network topology information. In the measurement and inference phase, a set of path-delay measurements is measured at each measurement interval, and the OOM is used to coherently estimate unknown link-delays.
Article
Zeroing dynamics (ZD, or termed Zhang dynamics after its inventor), being a special type of neurodynamic methodology, has shown powerful abilities to solve a great variety of time-varying problems with monotonically increasing odd activation functions. In this paper, two limitations of existing ZD are identified, i.e., the convex restriction on projection operations of activation functions and the low convergence speed with relatively redundant formulations on activation functions. This work breaks them by proposing modified ZD models, allowing nonconvex sets for projection operations in activation functions and possessing accelerated finite-time convergence. Theoretical analyses reveal that the proposed ZD models are of global stability with timely convergence. Finally, illustrative simulation examples, including an application to the motion generation of a robot manipulator, are provided and analyzed to substantiate the efficacy and superiority of the proposed ZD models for real-time varying matrix pseudoinversion.
Article
Much research has been devoted to complex-variable optimization problems due to their engineering applications. However, the complex-valued optimization method for solving complex-variable optimization problems is still an active research area. This paper proposes two efficient complex-valued optimization methods for solving constrained nonlinear optimization problems of real functions in complex variables, respectively. One solves the complex-valued nonlinear programming problem with linear equality constraints. Another solves the complex-valued nonlinear programming problem with both linear equality constraints and an ℓ₁-norm constraint. Theoretically, we prove the global convergence of the proposed two complex-valued optimization algorithms under mild conditions. The proposed two algorithms can solve the complex-valued optimization problem completely in the complex domain and significantly extend existing complex-valued optimization algorithms. Numerical results further show that the proposed two algorithms have a faster speed than several conventional real-valued optimization algorithms.
Article
In recent years, model order reduction (MOR) of interconnect systems has become an important technique to reduce the computational complexity and improve the verification efficiency in nanometer VLSI design. The Krylov subspace techniques in existing MOR methods are efficient and have become the methods of choice for generating small-scale macro-models of the large-scale multi-port RCL networks that arise in VLSI interconnect analysis. Although the Krylov subspace projection-based MOR methods have been widely studied over the past decade in the electrical computer-aided design community, none of them provides a best optimal solution for a given order. In this paper, a minimum norm least-squares solution for MOR by Krylov subspace methods is proposed. The method is based on generalized inverse (or pseudo-inverse) theory. This enables a new criterion for MOR-based Krylov subspace projection methods. Two numerical examples are used to test the PRIMA method based on the method proposed in this paper as a standard model.
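The minimum-norm least-squares solution that generalized-inverse theory supplies here is exactly x = A†b; a tiny illustration of that property on a rank-deficient system:

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.]])            # rank 1: second row = 2 x first row
b = np.array([1., 2.])              # consistent right-hand side
x = np.linalg.pinv(A) @ b           # minimum-norm least-squares solution
print(x, np.allclose(A @ x, b))     # [0.2 0.4] True
```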
Article
In this paper, we propose two fast complex-valued optimization algorithms for solving complex quadratic programming problems: 1) with linear equality constraints and 2) with both an l₁-norm constraint and linear equality constraints. By using Brandwood's analytic theory, we prove the convergence of the two proposed algorithms under mild assumptions. The two proposed algorithms significantly generalize the existing complex-valued optimization algorithms for solving complex quadratic programming problems with an l₁-norm constraint only and unconstrained complex quadratic programming problems, respectively. Numerical simulations are presented to show that the two proposed algorithms have a faster speed than conventional real-valued optimization algorithms.
Article
Extreme learning machine (ELM) is a learning algorithm for single-hidden layer feedforward neural networks (SLFNs) which randomly chooses hidden nodes and analytically determines the output weights of SLFNs. After the input weights and the hidden layer biases are chosen randomly, ELM can be simply considered a linear system. However, the learning time of ELM is mainly spent on calculating the Moore-Penrose inverse of the hidden layer output matrix. This paper focuses on effective computation of the Moore-Penrose inverse for ELM, and several methods are proposed: the reduced QR factorization with column pivoting and Geninv ELM (QRGeninv-ELM) and the tensor product matrix ELM (TPM-ELM). We compare QRGeninv-ELM and TPM-ELM with the relational algorithms for the Moore-Penrose inverse in ELM; the relational algorithms are: Cholesky factorization of singular matrix ELM (Geninv-ELM), QR factorization and Ginv ELM (QRGinv-ELM), and the conjugate Gram-Schmidt process ELM (CGS-ELM). The experimental results and the statistical analysis of the experimental results both demonstrate that QRGeninv-ELM, TPM-ELM and Geninv-ELM are faster than the other kinds of ELM and can reach comparable generalization performance.
Article
In this paper, an iterative scheme is proposed to find the roots of a nonlinear equation. It is shown that this iterative method has fourth order convergence in the neighborhood of the root. Based on this iterative scheme, we propose the main contribution of this paper as a new high-order computational algorithm for finding an approximate inverse of a square matrix. The analytical discussions show that this algorithm has fourth-order convergence as well. Next, the iterative method will be extended by theoretical analysis to find the pseudo-inverse (also known as the Moore-Penrose inverse) of a singular or rectangular matrix. Numerical examples are also made on some practical problems to reveal the efficiency of the new algorithm for computing a robust approximate inverse of a real (or complex) matrix.
Conference Paper
Extreme learning machine (ELM) is a learning algorithm for single-hidden layer feedforward neural networks (SLFNs) which randomly chooses hidden nodes and analytically determines the output weights of SLFNs. However, when dealing with large datasets, more hidden nodes are needed to enhance training and testing accuracy; in this case the algorithm can no longer achieve high speed, and sometimes training cannot be executed at all because the bias matrix runs out of memory. We focus on this issue and use the rank-reduced matrix (MMR) method to calculate the hidden layer output matrix. The results show that this method not only reaches much higher speed but also improves the generalization performance, whether or not the number of hidden nodes is large.
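Not the paper's MMR method, but a common memory-friendly alternative in this regime is to solve the regularized normal equations, which only ever forms an n_hidden × n_hidden system regardless of the number of samples; a minimal sketch:

```python
import numpy as np

def elm_output_weights(H, T, lam=1e-6):
    """H: (n_samples, n_hidden) hidden output matrix, T: (n_samples, n_outputs).

    Solves (H^T H + lam I) beta = H^T T instead of forming pinv(H),
    so memory use is governed by n_hidden, not the dataset size.
    """
    n_hidden = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)
```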
Article
The standard PLSR is presented from a geometric point of view as consisting of two projections. In the first, the scores are obtained after an oblique projection of the spectra onto the loadings. In the second, the vector of response values is projected orthogonally onto the scores. A metric is introduced for the oblique projection, and a new algorithm for the calculation of the loadings in the variable space is proposed. This work also develops a new parameter, a vector, whose different values lead to different regression models with their own predictive abilities; one of them is exactly the standard PLSR. Two applications are described to illustrate the performance of the proposed method, called VODKA regression, which is also a way to build least squares regressions by introducing additional knowledge into the models.
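For reference, the standard single-response PLSR (PLS1) fits this two-projection picture: scores from the deflated predictors, then the response regressed onto the scores. A minimal NIPALS-style sketch of standard PLSR only, not the VODKA generalization:

```python
import numpy as np

def pls1(X, y, n_comp):
    """Standard PLS1. X: (n, p) centered predictors, y: (n,) centered response."""
    X = np.asarray(X, float).copy()
    y = np.asarray(y, float).copy()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = X.T @ y
        w /= np.linalg.norm(w)
        t = X @ w                      # scores
        p = X.T @ t / (t @ t)          # loadings
        c = y @ t / (t @ t)            # response projected onto the scores
        X -= np.outer(t, p)            # deflate predictors
        y -= c * t                     # deflate response
        W.append(w); P.append(p); q.append(c)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)   # regression coefficients
```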
Article
A natural generalization of the classical Moore–Penrose inverse is presented. The so-called S-Moore–Penrose inverse of an m×n complex matrix A, denoted by A†_S, is defined for any linear subspace S of the matrix vector space C^{n×m}. The S-Moore–Penrose inverse A†_S is characterized using either the singular value decomposition or (for the full rank case) the orthogonal complements with respect to the Frobenius inner product. These results are applied to the preconditioning of linear systems based on Frobenius norm minimization and to the linearly constrained linear least squares problem.
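Since the construction generalizes the classical inverse, taking S to be the full space C^{n×m} recovers the ordinary Moore-Penrose inverse, which the SVD characterization computes directly; a minimal sketch of that special case:

```python
import numpy as np

def svd_pinv(A, rtol=1e-12):
    """Classical Moore-Penrose inverse via the SVD (the S = C^{n x m} case)."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > rtol * s[0], 1.0 / s, 0.0)   # invert only nonzero singular values
    return Vh.conj().T @ (s_inv[:, None] * U.conj().T)
```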
Article
An efficient algorithm, based on the LDL* factorization, for computing {1,2,3} and {1,2,4} inverses and the Moore–Penrose inverse of a given rational matrix A, is developed. We consider the matrix products A*A and AA* and the corresponding LDL* factorizations in order to compute the generalized inverse of A. By considering the matrix products (R*A)†R* and T*(AT*)†, where R and T are arbitrary rational matrices with appropriate dimensions and ranks, we characterize the classes A{1,2,3} and A{1,2,4}. Some evaluation times for our algorithm are compared with the corresponding times for several known algorithms for computing the Moore–Penrose inverse.
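The route through A*A rests on the identity A† = (A*A)†A* (dually, A† = A*(AA*)†), which the LDL*-based algorithm exploits symbolically; a quick numerical check of the identity on a rank-deficient real example:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
A[:, -1] = A[:, 0]                         # force rank deficiency

lhs = np.linalg.pinv(A)
rhs = np.linalg.pinv(A.T @ A) @ A.T        # A^dagger = (A*A)^dagger A*  (real case)
print(np.allclose(lhs, rhs, atol=1e-8))    # True
```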
Article
Full-text available
This paper describes a technique for constructing robust preconditioners for the CGLS method applied to the solution of large and sparse least squares problems. The algorithm computes an incomplete LDLᵀ factorization of the normal equations matrix without the need to form the normal matrix itself. The preconditioner is reliable (pivot breakdowns cannot occur) and has low intermediate storage requirements. Numerical experiments illustrating the performance of the preconditioner are presented. A comparison with incomplete QR preconditioners is also included.
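For context, the unpreconditioned CGLS iteration that such a factorization is meant to accelerate; a minimal sketch that works only with products by A and Aᵀ and never forms AᵀA:

```python
import numpy as np

def cgls(A, b, maxiter=500, tol=1e-10):
    """Solve min ||A x - b||_2 without forming the normal matrix A^T A."""
    x = np.zeros(A.shape[1])
    r = b.copy()                 # residual b - A x
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(maxiter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```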
Article
Full-text available
Many neural learning algorithms require solving large least squares systems in order to obtain synaptic weights. Moore-Penrose inverse matrices allow for solving such systems, even with rank deficiency, and they provide minimum-norm vectors of synaptic weights, which contribute to the regularization of the input-output mapping. It is thus of interest to develop fast and accurate algorithms for computing Moore-Penrose inverse matrices. In this paper, an algorithm based on a full-rank Cholesky factorization is proposed. The resulting pseudoinverse matrices are similar to those provided by other algorithms, but the computation time is substantially shorter, particularly for large systems.
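A sketch of the approach in the real case: compute a full-rank Cholesky factor L of the Gram matrix G = AᵀA (or AAᵀ), dropping the columns with zero pivots, then form A† = L(LᵀL)⁻¹(LᵀL)⁻¹LᵀAᵀ. Details such as the tolerance are choices of this sketch, not prescriptions of the paper:

```python
import numpy as np

def geninv(A, tol_factor=1e-9):
    """Moore-Penrose inverse via a full-rank Cholesky factorization (real A)."""
    m, n = A.shape
    transpose = m < n
    G = A @ A.T if transpose else A.T @ A      # square Gram matrix
    d = np.diag(G)
    tol = d[d > 0].min() * tol_factor
    L = np.zeros_like(G)
    r = 0                                      # rank counter
    for k in range(G.shape[0]):
        col = G[k:, k] - L[k:, :r] @ L[k, :r]  # Cholesky update of column k
        if col[0] > tol:                       # keep only nonzero pivots
            L[k:, r] = col / np.sqrt(col[0])
            r += 1
    L = L[:, :r]
    M = np.linalg.inv(L.T @ L)                 # small, well-conditioned r x r inverse
    core = L @ M @ M @ L.T
    return A.T @ core if transpose else core @ A.T
```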
Article
We present a unified representation theorem for the weighted Moore–Penrose inverse. Specific expressions and computational procedures for the weighted Moore–Penrose inverse can be uniformly derived.
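One well-known representation that such a theorem covers expresses the weighted inverse through an ordinary pseudoinverse, A†_{M,N} = N^{-1/2}(M^{1/2} A N^{-1/2})† M^{1/2}, for Hermitian positive definite weights M and N; a direct sketch of that formula (not necessarily the paper's procedure):

```python
import numpy as np
from scipy.linalg import sqrtm

def weighted_pinv(A, M, N):
    """Weighted Moore-Penrose inverse via the square-root representation.

    M (m x m) and N (n x n) are assumed symmetric positive definite,
    so their matrix square roots are real.
    """
    M_half = np.real(sqrtm(M))
    N_half_inv = np.linalg.inv(np.real(sqrtm(N)))
    return N_half_inv @ np.linalg.pinv(M_half @ A @ N_half_inv) @ M_half
```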
Book
The first iterative methods used for solving large linear systems were based on relaxation of the coordinates. Beginning with a given approximate solution, these methods modify the components of the approximation, one or a few at a time and in a certain order, until convergence is reached. Each of these modifications, called relaxation steps, is aimed at annihilating one or a few components of the residual vector. Now these techniques are rarely used separately. However, when combined with the more efficient methods described in later chapters, they can be quite successful. Moreover, there are a few application areas where variations of these methods are still quite popular.
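The classic instance of such a relaxation step is the Gauss-Seidel sweep, which updates one component at a time so as to annihilate the corresponding residual component using the latest available values; a minimal sketch:

```python
import numpy as np

def gauss_seidel(A, b, iters=200, tol=1e-10):
    """Relaxation by coordinate sweeps; assumes a nonzero diagonal."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x_prev = x.copy()
        for i in range(len(b)):
            # annihilate the i-th residual component with current values
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_prev, np.inf) < tol:
            break
    return x
```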
Article
Computation of the generalised inverse A⁺ and rank of an arbitrary (including singular and rectangular) matrix A has many applications. This paper derives an iterative scheme to approximate the generalised inverse which can be expressed in the form of successive squaring of a composite matrix T. Given an m by n matrix A with m ≈ n, we show that the generalised inverse of A can be computed in parallel time ranging from O(log n) to O(log² n), similar to previous methods. The rank of matrix A is obtained along with the generalised inverse.
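A sketch of the successive-squaring idea (the weighted variant in the next entry follows the same pattern): embed P = I − βAᵀA and Q = βAᵀ into a composite matrix T whose repeated squaring accumulates the Neumann-type sum Σᵢ PⁱQ, which converges to A⁺; the step size β here is one safe conventional choice:

```python
import numpy as np

def sms_pinv(A, squarings=30):
    """Successive matrix squaring approximation of the Moore-Penrose inverse (real A)."""
    m, n = A.shape
    beta = 1.0 / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))  # safe step size
    P = np.eye(n) - beta * (A.T @ A)
    Q = beta * A.T
    # T = [[P, Q], [0, I]]; T^k holds sum_{i<k} P^i Q in its top-right block
    T = np.block([[P, Q],
                  [np.zeros((m, n)), np.eye(m)]])
    for _ in range(squarings):
        T = T @ T                      # the number of accumulated terms doubles
    return T[:n, n:]                   # top-right block -> A^+
```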
Article
We derive a successive matrix squaring (SMS) algorithm to approximate the weighted generalized inverse, which can be expressed in the form of successive squaring of a composite matrix T. Given an m by n matrix A with m ≈ n, we show that the weighted generalized inverse of A can be computed in parallel time ranging from O(log n) to O(log² n), provided that there are enough processors to support matrix multiplication in time O(log n).
Book
From the Publisher: What is the most accurate way to sum floating point numbers? What are the advantages of IEEE arithmetic? How accurate is Gaussian elimination and what were the key breakthroughs in the development of error analysis for the method? The answers to these and many related questions are included here. This book gives a thorough, up-to-date treatment of the behavior of numerical algorithms in finite precision arithmetic. It combines algorithmic derivations, perturbation theory, and rounding error analysis. Software practicalities are emphasized throughout, with particular reference to LAPACK and MATLAB. The best available error bounds, some of them new, are presented in a unified format with a minimum of jargon. Because of its central role in revealing problem sensitivity and providing error bounds, perturbation theory is treated in detail. Historical perspective and insight are given, with particular reference to the fundamental work of Wilkinson and Turing, and the many quotations provide further information in an accessible format. The book is unique in that algorithmic developments and motivations are given succinctly and implementation details minimized, so that attention can be concentrated on accuracy and stability results. Here, in one place and in a unified notation, is error analysis for most of the standard algorithms in matrix computations. Not since Wilkinson's Rounding Errors in Algebraic Processes (1963) and The Algebraic Eigenvalue Problem (1965) has any volume treated this subject in such depth. A number of topics are treated that are not usually covered in numerical analysis textbooks, including floating point summation, block LU factorization, condition number estimation, the Sylvester equation, powers of matrices, finite precision behavior of stationary iterative methods, Vandermonde systems, and fast matrix multiplication. Although not designed specifically as a textbook, this volume is a suitable reference for an advanced course, and could be used by instructors at all levels as a supplementary text from which to draw examples, historical perspective, statements of results, and exercises (many of which have never before appeared in textbooks). The book is designed to be a comprehensive reference and its bibliography contains more than 1100 references from the research literature.

Audience: Specialists in numerical analysis as well as computational scientists and engineers concerned about the accuracy of their results will benefit from this book. Much of the book can be understood with only a basic grounding in numerical analysis and linear algebra.

About the Author: Nicholas J. Higham is a Professor of Applied Mathematics at the University of Manchester, England. He is the author of more than 40 publications and is a member of the editorial boards of the SIAM Journal on Matrix Analysis and Applications and the IMA Journal of Numerical Analysis. His book Handbook of Writing for the Mathematical Sciences was published by SIAM in 1993.
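On the opening question about summing floating point numbers, one standard answer from the error-analysis literature is compensated (Kahan) summation, which carries a correction term for the low-order bits lost at each addition; a minimal sketch:

```python
def kahan_sum(values):
    """Compensated summation: tracks and re-adds the rounding error of each step."""
    total = 0.0
    comp = 0.0                  # running compensation for lost low-order bits
    for v in values:
        y = v - comp
        t = total + y
        comp = (t - total) - y  # what was rounded away in total + y
        total = t
    return total
```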
Book
Preface
1. Background in linear algebra
2. Discretization of partial differential equations
3. Sparse matrices
4. Basic iterative methods
5. Projection methods
6. Krylov subspace methods, Part I
7. Krylov subspace methods, Part II
8. Methods related to the normal equations
9. Preconditioned iterations
10. Preconditioning techniques
11. Parallel implementations
12. Parallel preconditioners
13. Multigrid methods
14. Domain decomposition methods
Bibliography
Index
Chapter
The principles of finite precision computation were discussed. The effects of finite precision arithmetic on numerical algorithms in numerical linear algebra were studied. The algorithms were expressed using a pseudocode based on the MATLAB language. The relative error was connected with the notion of correct significant digits: the significant digits in a number are its first nonzero digit and all succeeding digits.
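The connection is roughly that a relative error of about 10⁻ᵈ corresponds to d correct significant digits; a quick illustration:

```python
import math

x_true = 3.141592653589793
x_hat = 3.14159                      # six significant digits kept
rel_err = abs(x_hat - x_true) / abs(x_true)
print(rel_err)                       # ~8.5e-07
print(-math.log10(rel_err))          # ~6.1 -> about six correct digits
```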