Publications (12) · 14.23 Total Impact
ABSTRACT: We consider the convergence of the GMRES algorithm of Saad and Schultz for solving linear equations Bx = b, where B ∈ C^{n×n} is nonsingular and diagonalizable, and b ∈ C^n. Our analysis explicitly includes the initial residual vector r0. We show that the GMRES residual norm satisfies a weighted polynomial least-squares problem on the spectrum of B, and that GMRES convergence reduces to an ideal GMRES problem on a rank-1 modification of the diagonal matrix of eigenvalues of B. Numerical experiments show that the new bounds can accurately describe GMRES convergence.
IMA Journal of Numerical Analysis 04/2014; 34(2). DOI:10.1093/imanum/drt025 · 1.70 Impact Factor
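The residual-minimization property underlying this kind of analysis can be sketched numerically. This is a toy illustration only (not the paper's bounds); the test matrix and all names are made up, and the Krylov basis is built naively with the power sequence rather than Arnoldi.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
B = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # nonsingular, eigenvalues clustered near 1
b = rng.standard_normal(n)
r0 = b.copy()  # take x0 = 0, so the initial residual is b

# GMRES residual norm at step k equals min over z in K_k(B, r0) of ||r0 - B z||_2.
res_norms = [np.linalg.norm(r0)]
K = np.empty((n, 0))
v = r0
for k in range(1, 11):
    K = np.column_stack([K, v])  # naive Krylov basis [r0, B r0, ..., B^{k-1} r0]
    y, *_ = np.linalg.lstsq(B @ K, r0, rcond=None)
    res_norms.append(np.linalg.norm(r0 - B @ K @ y))
    v = B @ v

# The minimization property forces (numerically) non-increasing residual norms.
assert all(res_norms[k + 1] <= res_norms[k] * (1 + 1e-8) + 1e-12 for k in range(10))
```

Because the spectrum of B is clustered near 1, the residual norms decay rapidly here, consistent with spectrum-based convergence bounds.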
ABSTRACT: The method of conjugate gradients (CG) is widely used for the iterative solution of large sparse systems of equations Ax = b, where A ∈ R^{n×n} is symmetric positive definite. Let x_k denote the kth iterate of CG; it is a nonlinear differentiable function of b. In this paper we obtain expressions for J_k, the Jacobian matrix of x_k with respect to b. We use these expressions to obtain bounds on ‖J_k‖_2, and hence on the conditioning of x_k, and discuss algorithms to compute or estimate J_k v and J_k^T v for a given vector v.
SIAM Journal on Matrix Analysis and Applications 01/2014; 35(1). DOI:10.1137/120889848 · 1.59 Impact Factor

01/2014; 2(1):763-783. DOI:10.1137/140973827
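A cheap way to probe J_k v without any of the paper's closed-form expressions is finite differencing of the CG iterate. The sketch below is an assumption-laden illustration (hand-rolled CG, made-up test data), not the estimation algorithms proposed in the paper.

```python
import numpy as np

def cg(A, b, k):
    """Plain conjugate gradients, run for exactly k iterations from x0 = 0."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(k):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(1)
n = 20
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)  # symmetric positive definite, well conditioned
b = rng.standard_normal(n)
v = rng.standard_normal(n)

# x_k is a differentiable function of b; approximate J_k v by central differences.
k, eps = 5, 1e-6
Jv = (cg(A, b + eps * v, k) - cg(A, b - eps * v, k)) / (2 * eps)

# Sanity check: at k = n, CG has converged, so J_n ~ A^{-1} and J_n v ~ A^{-1} v.
Jv_full = (cg(A, b + eps * v, n) - cg(A, b - eps * v, n)) / (2 * eps)
assert np.allclose(Jv_full, np.linalg.solve(A, v), rtol=1e-3, atol=1e-6)
```

Finite differences cost two extra CG runs per direction v; the point of the paper's expressions is to do better than that.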
ABSTRACT: We present an explicit expression for the condition number of the truncated total least squares (TLS) solution of Ax ≈ b. This expression is obtained using the notion of the Fréchet derivative. We also give upper bounds on the condition number, which are simple to compute and interpret. These results generalize those in the literature for the untruncated TLS problem. Numerical experiments demonstrate that our bounds are often a very good estimate of the condition number, and provide a significant improvement to known bounds.
SIAM Journal on Matrix Analysis and Applications 07/2013; 34(3). DOI:10.1137/120895019 · 1.59 Impact Factor
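For context, the classical (untruncated) TLS solution comes from the SVD of the augmented matrix [A b], and its sensitivity can be probed crudely by perturbation. The sketch below is illustrative only: it uses the untruncated solution and a one-sample finite-difference probe, not the paper's Fréchet-derivative expression or its bounds.

```python
import numpy as np

def tls(A, b):
    """Basic (untruncated) total least squares solution of Ax ~ b via the SVD of [A b]."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]                # right singular vector for the smallest singular value
    return -v[:n] / v[n]

rng = np.random.default_rng(2)
m, n = 50, 5
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 1e-3 * rng.standard_normal(m)  # nearly compatible data

x = tls(A, b)

# Crude sensitivity probe: relative change in x under a small random perturbation of b.
eps = 1e-7
db = rng.standard_normal(m)
dx = tls(A, b + eps * db) - x
est = (np.linalg.norm(dx) / np.linalg.norm(x)) / (eps * np.linalg.norm(db) / np.linalg.norm(b))
```

A single random probe like `est` only gives a lower bound on the condition number, which is one reason computable explicit expressions and upper bounds are valuable.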
ABSTRACT: We consider two upper bounds on the normwise backward error (BE) for linear least-squares problems. The advantage of these bounds is their simplicity. Their behaviour in commonly used iterative methods can be analyzed more easily than that of the BE itself, and the bounds can also be estimated very cheaply in such methods. It is known that each of these upper bounds can be orders of magnitude larger than the BE. Then one may ask: under which conditions is each of the bounds a good estimate of the BE? We partially answer this question by giving sufficient conditions for each bound to be a good estimate of the BE. We illustrate these results with some numerical examples.
Linear Algebra and its Applications 07/2013; 439(1):78-89. DOI:10.1016/j.laa.2013.03.007 · 0.94 Impact Factor
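One classical example of a simple upper bound (not necessarily one of the two studied in the paper) comes from an explicit rank-one construction: perturbing A by E = r yᵀ/(yᵀy) makes the approximate solution y exact, so ‖r‖/‖y‖ bounds the backward error from above. A minimal sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 40, 8
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
# an approximate least-squares solution: the true LS solution plus a small error
y = np.linalg.lstsq(A, b, rcond=None)[0] + 1e-4 * rng.standard_normal(n)
r = b - A @ y

# With E = r y^T / (y^T y) we get (A + E) y = b exactly, so y solves the
# perturbed problem with zero residual; hence ||E||_2 = ||r||_2 / ||y||_2
# is an upper bound on the normwise backward error in A.
E = np.outer(r, y) / (y @ y)
assert np.allclose((A + E) @ y, b)
eta = np.linalg.norm(r) / np.linalg.norm(y)
assert np.isclose(np.linalg.norm(E, 2), eta)
```

Bounds of this flavour are cheap precisely because they need only quantities (r, y) that iterative methods already have at hand.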
ABSTRACT: We consider the backward error associated with a given approximate solution of a linear least squares problem. The backward error can be very expensive to compute, as it involves the minimal singular value of a certain matrix that depends on the problem data and the approximate solution. An estimate based on a regularized projection of the residual vector has been proposed in the literature and analyzed by several authors. Although numerical experiments in the literature suggest that it is a reliable estimate of the backward error for any given approximate least squares solution, to date no satisfactory explanation for this behavior has been found. We derive new bounds which confirm this experimental observation.
SIAM Journal on Matrix Analysis and Applications 07/2012; 33(3):822-836. DOI:10.1137/110825467 · 1.59 Impact Factor
ABSTRACT: Cone beam computed tomography (CBCT) enables volumetric image reconstruction from 2D projection data and plays an important role in image guided radiation therapy (IGRT). Filtered back projection is still the most frequently used algorithm in applications. The algorithm discretizes the scanning process (forward projection) into a system of linear equations, which must then be solved to recover images from measured projection data. The conjugate gradients (CG) algorithm and its variants can be used to solve (possibly regularized) linear systems of equations Ax = b and linear least squares problems min_x ‖b − Ax‖_2, especially when the matrix A is very large and sparse. Their applications can be found in a general CT context, but they have not been widely used in tomography problems such as CBCT reconstruction. Hence, CBCT reconstruction using the CG-type algorithm LSQR was implemented and studied in this paper. In CBCT reconstruction, the main computational challenge is that the matrix A usually is very large, and storing it in full requires an amount of memory well beyond the reach of commodity computers. Because of these memory capacity constraints, only a small fraction of the weighting matrix A is typically used, leading to a poor reconstruction. In this paper, to overcome this difficulty, the matrix A is partitioned and stored blockwise, and blockwise matrix-vector multiplications are implemented within LSQR. This implementation allows us to use the full weighting matrix A for CBCT reconstruction without exceeding the memory capacity of commodity hardware. Tikhonov regularization can also be implemented in this fashion, and can produce significant improvement in the reconstructed images.
Computer Methods and Programs in Biomedicine 02/2012; 108(2):669-678. DOI:10.1016/j.cmpb.2011.12.002 · 1.90 Impact Factor
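The blockwise idea can be sketched with SciPy's LSQR: LSQR only ever touches A through products A·x and Aᵀ·y, so it suffices to supply those as callbacks over row blocks. This is a toy sketch with small dense in-memory blocks standing in for the paper's large on-disk blocks; the names `matvec`/`rmatvec` and all sizes are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

rng = np.random.default_rng(4)
m, n, nblocks = 1200, 100, 4
# Row blocks of A; in a real CBCT setting these could be loaded one at a time.
blocks = [rng.standard_normal((m // nblocks, n)) for _ in range(nblocks)]
b = rng.standard_normal(m)

def matvec(x):
    # A @ x computed one row block at a time
    return np.concatenate([Ai @ x for Ai in blocks])

def rmatvec(y):
    # A.T @ y as a sum of per-block contributions
    chunks = np.split(y, nblocks)
    return sum(Ai.T @ yi for Ai, yi in zip(blocks, chunks))

A_op = LinearOperator((m, n), matvec=matvec, rmatvec=rmatvec)
x = lsqr(A_op, b, atol=1e-10, btol=1e-10)[0]

# Same least-squares solution as with the fully assembled matrix
x_ref = np.linalg.lstsq(np.vstack(blocks), b, rcond=None)[0]
assert np.allclose(x, x_ref, atol=1e-5)
```

Tikhonov regularization fits the same pattern via LSQR's `damp` parameter, again without assembling A.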
ABSTRACT: We propose practical stopping criteria for the iterative solution of sparse linear least squares (LS) problems. Although we focus our discussion on the algorithm LSQR of Paige and Saunders, the ideas discussed here may also be applicable to other algorithms. We review why the 2-norm of the projection of the residual vector onto the range of A is a useful measure of convergence, and show how this projection can be estimated efficiently at every iteration of LSQR. We also give practical and cheaply computable estimates of the backward error for the LS problem.
SIAM Journal on Matrix Analysis and Applications 05/2010; 31(4):2055-2074. DOI:10.1137/090770655 · 1.59 Impact Factor
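A rough illustration of why residual-norm stopping rules are the wrong yardstick for LS problems (this is not the paper's estimator): for inconsistent systems ‖r‖ stagnates at its minimum, while optimality is characterized by Aᵀr = 0, which is what SciPy's LSQR monitors through its returned `arnorm` estimate.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(5)
A = rng.standard_normal((200, 30))
b = rng.standard_normal(200)  # generic b: the system is inconsistent

# Tight tolerances so LSQR iterates until A^T r is essentially zero.
x, istop, itn, r1norm, r2norm, anorm, acond, arnorm, xnorm, var = lsqr(
    A, b, atol=1e-12, btol=1e-12)

r = b - A @ x
# lsqr's recurred residual-norm estimate matches the true residual norm,
# which is far from zero here even though x is (numerically) optimal.
assert np.isclose(r1norm, np.linalg.norm(r), rtol=1e-6)
assert np.linalg.norm(r) > 1.0
assert np.linalg.norm(A.T @ r) < 1e-6  # optimality: A^T r ~ 0 at the LS solution
```

LSQR's estimates of these quantities cost O(1) extra work per iteration, which is the spirit of the cheap stopping criteria proposed in the paper.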

ABSTRACT: Most examples of cycling in the simplex method are given without explanation of how they were constructed. An exception is Beale's example, built around the geometry of the dual simplex method in the plane [Beale, E. 1955. Cycling in the dual simplex method. Naval Res. Logist. Quart. 2(4):269-275]. Using this approach, we give a simple geometric explanation for a number of examples of cycling in the simplex method, including Hoffman's original example [Hoffman, A. 1953. Cycling in the Simplex Algorithm. National Bureau of Standards, Washington, D.C.]. This gives rise to a simple method for generating examples with cycles.
Operations Research 04/2008; 56(2):512-518. DOI:10.1287/opre.1070.0474 · 1.74 Impact Factor
ABSTRACT: For given vectors b and y, we give a characterization of all matrices F such that y is an exact least squares solution to Fy ≈ b. This complements the original, less constructive derivation of Waldén, Karlson and Sun [Numerical Linear Algebra with Applications, 2:271-286 (1995)]. We do the equivalent for the data least squares problem, the other extreme case of the scaled total least squares problem. Not only can the results be used as indicated above for the compatible case, but the constructive technique we use could also be applicable to other backward problems, such as those for underdetermined systems, the singular value decomposition, and the eigenproblem.
SIAM Journal on Matrix Analysis and Applications 01/2008; 30(4):1406-1420. DOI:10.1137/060675691 · 1.59 Impact Factor
Publication Stats
25 Citations
14.23 Total Impact Points
Institutions
2012-2014: University of Oxford, Mathematical Institute, Oxford, England, United Kingdom
2008-2010: McGill University, School of Computer Science, Montréal, Quebec, Canada