KSSOLV—a Matlab toolbox for solving the Kohn–Sham equations

ACM Transactions on Mathematical Software (Impact Factor: 1.86). 03/2009; 36(2). DOI: 10.1145/1499096.1499099
Source: DBLP


We describe the design and implementation of KSSOLV, a MATLAB toolbox for solving a class of nonlinear eigenvalue problems known as the Kohn-Sham equations. These types of problems arise in electronic structure calculations, which are nowadays essential for studying the microscopic quantum mechanical properties of molecules, solids, and other nanoscale materials. KSSOLV is well suited for developing new algorithms for solving the Kohn-Sham equations and is designed to enable researchers in computational and applied mathematics to investigate the convergence properties of the existing algorithms. The toolbox makes use of the object-oriented programming features available in MATLAB so that the process of setting up a physical system is straightforward and the amount of coding effort required to prototype, test, and compare new algorithms is significantly reduced. All of these features should also make this package attractive to other computational scientists and students who wish to study small- to medium-size systems.
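The Kohn–Sham equations are solved self-consistently: the Hamiltonian depends on the electron density, which in turn is built from the Hamiltonian's lowest eigenvectors. KSSOLV itself is a MATLAB toolbox; as a language-neutral sketch of the kind of self-consistent field (SCF) loop with simple density mixing that such solvers iterate, here is a pure-Python toy on a 2×2 model (the "Hamiltonian", mixing parameter, and tolerances are invented for illustration and are not KSSOLV's API):

```python
import math

def lowest_eig_2x2(a, b, d):
    """Smallest eigenpair of the symmetric 2x2 matrix [[a, b], [b, d]]."""
    mean = 0.5 * (a + d)
    r = math.hypot(0.5 * (a - d), b)
    lam = mean - r
    v0, v1 = b, lam - a            # first row of (A - lam*I) v = 0
    n = math.hypot(v0, v1)
    return lam, (v0 / n, v1 / n)

def scf(h0=((1.0, 0.5), (0.5, 2.0)), beta=0.5, tol=1e-12, maxit=500):
    """Toy SCF loop: H(rho) = H0 + diag(rho), rho rebuilt from the ground state."""
    rho = [0.5, 0.5]                         # initial density guess
    for _ in range(maxit):
        # the "Hamiltonian" depends on the density through its diagonal
        lam, v = lowest_eig_2x2(h0[0][0] + rho[0], h0[0][1], h0[1][1] + rho[1])
        rho_out = [v[0] ** 2, v[1] ** 2]     # density from the ground state
        if max(abs(ro - r) for ro, r in zip(rho_out, rho)) < tol:
            return lam, v, rho
        # simple (linear) mixing damps the fixed-point iteration
        rho = [(1 - beta) * r + beta * ro for r, ro in zip(rho, rho_out)]
    raise RuntimeError("SCF did not converge")

lam, v, rho = scf()
```

At convergence the density reproduced from the eigenvector agrees with the input density, which is exactly the self-consistency the Kohn–Sham iteration seeks; real codes replace the 2×2 eigensolver with a planewave Hamiltonian and the linear mixing with more robust schemes.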

Available from: Juan C Meza
  • Source
    • "The general case, as well as the proof of Theorem 4.1, which quantifies the improvement of the Kohn–Sham ground state energy obtained by the post-processing in the asymptotic regime, will be detailed in a mathematical analysis oriented paper [5]. In Section 5, we report numerical simulations on a simple system, a CO2 molecule, obtained with the KSSOLV package [18], showing that our post-processing method leads to a significant gain in accuracy (typically one order of magnitude on the energy) for a small extra cost (a few percent of the overall cost). "
    ABSTRACT: In this article, we propose a post-processing of the planewave solution of the Kohn–Sham LDA model with pseudopotentials. This post-processing is based upon the fact that the exact solution can be interpreted as a perturbation of the approximate solution, allowing us to compute corrections for both the eigenfunctions and the eigenvalues of the problem in order to increase the accuracy. Indeed, this post-processing only requires the computation of the residual of the solution on a finer grid so that the additional computational cost is negligible compared to the initial cost of the planewave-based method needed to compute the approximate solution. Theoretical estimates certify an increased convergence rate in the asymptotic convergence range. Numerical results confirm the low computational cost of the post-processing and show that this procedure improves the energy accuracy of the solution even in the pre-asymptotic regime which comprises the target accuracy of practitioners.
    Full-text · Article · Apr 2015
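The post-processing described above rests on a standard fact of eigenvalue perturbation theory: an O(ε) error in the eigenfunction induces only an O(ε²) error in the Rayleigh-quotient eigenvalue, which is why a cheap residual-based correction can buy roughly an order of magnitude in accuracy. A minimal pure-Python illustration on a 2×2 symmetric matrix (the matrix and perturbation size are invented for the demonstration):

```python
import math

A = ((2.0, 1.0), (1.0, 3.0))

def rayleigh(x):
    """Rayleigh quotient x^T A x / x^T x for the 2x2 matrix A."""
    ax = (A[0][0] * x[0] + A[0][1] * x[1], A[1][0] * x[0] + A[1][1] * x[1])
    return (x[0] * ax[0] + x[1] * ax[1]) / (x[0] ** 2 + x[1] ** 2)

# exact smallest eigenpair of A: lam = (5 - sqrt(5)) / 2, v parallel to (1, lam - 2)
lam = (5 - math.sqrt(5)) / 2
n = math.hypot(1.0, lam - 2.0)
v = (1.0 / n, (lam - 2.0) / n)

eps = 1e-3                                  # size of the eigenvector error
u = (-v[1], v[0])                           # unit vector orthogonal to v
x = (v[0] + eps * u[0], v[1] + eps * u[1])  # perturbed eigenvector

vec_err = eps / math.sqrt(1 + eps ** 2)     # O(eps) error in the vector
val_err = abs(rayleigh(x) - lam)            # O(eps^2) error in the value
```

Here a 10⁻³ eigenvector error yields an eigenvalue error near 2·10⁻⁶ (the spectral gap times ε²), mirroring the increased convergence rate the abstract's theoretical estimates certify.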
  • Source
    • "In the experiments reported in Section 5, effective choices of m ranged from three (with n = 3) to 50 (with n = 16, 384). For possibly modifying m k , our implementation follows a strategy used by Yang et al. [43] that is intended to maintain acceptable conditioning of the least-squares problem. In this, the condition number of the least-squares coefficient matrix (which is just the condition number of R in the QR decomposition) is monitored, and left-most columns of the matrix are dropped (and the QR decomposition updated) as necessary to keep the condition number below a prescribed threshold. "
    ABSTRACT: This paper concerns an acceleration method for fixed-point iterations that originated in work of D. G. Anderson [J. Assoc. Comput. Mach., 12 (1965), pp. 547–560], which we accordingly call Anderson acceleration here. This method has enjoyed considerable success and wide usage in electronic structure computations, where it is known as Anderson mixing; however, it seems to have been untried or underexploited in many other important applications. Moreover, while other acceleration methods have been extensively studied by the mathematics and numerical analysis communities, this method has received relatively little attention from these communities over the years. A recent paper by H. Fang and Y. Saad [Numer. Linear Algebra Appl., 16 (2009), pp. 197–221] has clarified a remarkable relationship of Anderson acceleration to quasi-Newton (secant updating) methods and extended it to define a broader Anderson family of acceleration methods. In this paper, our goals are to shed additional light on Anderson acceleration and to draw further attention to its usefulness as a general tool. We first show that, on linear problems, Anderson acceleration without truncation is "essentially equivalent" in a certain sense to the generalized minimal residual (GMRES) method. We also show that the Type 1 variant in the Fang–Saad Anderson family is similarly essentially equivalent to the Arnoldi (full orthogonalization) method. We then discuss practical considerations for implementing Anderson acceleration and illustrate its performance through numerical experiments involving a variety of applications.
    Preview · Article · Aug 2011 · SIAM Journal on Numerical Analysis
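As a minimal sketch of the idea (not the implementation benchmarked in the paper), the following pure-Python function applies Anderson acceleration with history length m = 1 to a scalar fixed-point problem x = g(x); for m = 1 the least-squares step over the residual history reduces to a closed-form mixing coefficient. The test problem x = cos(x) is chosen only for illustration:

```python
import math

def anderson1(g, x0, tol=1e-12, maxit=100):
    """Anderson acceleration with history m = 1 for the scalar fixed point x = g(x)."""
    x_prev, gx_prev = x0, g(x0)
    f_prev = gx_prev - x_prev            # residual at the previous iterate
    x = gx_prev                          # first step is a plain fixed-point step
    for k in range(maxit):
        gx = g(x)
        f = gx - x                       # current residual
        if abs(f) < tol:
            return x, k
        denom = f_prev - f
        if denom == 0.0:                 # degenerate history; stop
            return x, k
        theta = f_prev / denom           # minimizes |theta*f + (1 - theta)*f_prev|
        x_next = theta * gx + (1 - theta) * gx_prev   # mix the g-values accordingly
        x_prev, gx_prev, f_prev = x, gx, f
        x = x_next
    return x, maxit

# converges to the fixed point of cos in a handful of iterations,
# versus roughly 70 for the plain iteration x <- cos(x) at this tolerance
x_star, iters = anderson1(math.cos, 1.0)
```

For m = 1 on a scalar problem this coincides with the secant method applied to g(x) - x = 0, which is one concrete instance of the quasi-Newton connection the abstract highlights.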
  • Source
    ABSTRACT: Minimization with orthogonality constraints (e.g., XᵀX = I) and/or spherical constraints (e.g., ‖x‖₂ = 1) has wide applications in polynomial optimization, combinatorial optimization, eigenvalue problems, sparse PCA, p-harmonic flows, 1-bit compressive sensing, matrix rank minimization, etc. These problems are difficult because the constraints are not only non-convex but numerically expensive to preserve during iterations. To deal with these difficulties, we propose to use a Crank–Nicolson-like update scheme to preserve the constraints and based on it, develop curvilinear search algorithms with lower per-iteration cost compared to those based on projections and geodesics. The efficiency of the proposed algorithms is demonstrated on a variety of test problems. In particular, for the maxcut problem, it exactly solves a decomposition formulation for the SDP relaxation. For polynomial optimization, nearest correlation matrix estimation and extreme eigenvalue problems, the proposed algorithms run very fast and return solutions no worse than those from their state-of-the-art algorithms. For the quadratic assignment problem, a gap of 0.842% to the best known solution on the largest problem "tai256c" in QAPLIB can be reached in 5 minutes on a typical laptop.
    Preview · Article · Dec 2010 · Mathematical Programming
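The Crank–Nicolson-like update the abstract refers to can be sketched in the simplest setting, a spherical constraint ‖x‖₂ = 1 in R²: with the skew-symmetric matrix W = g xᵀ - x gᵀ built from the gradient g, the update y = (I + (τ/2)W)⁻¹(I - (τ/2)W)x is an orthogonal (Cayley) transform, so the constraint is preserved exactly at every step with only a small linear solve. The quadratic objective (minimized on the circle at the smallest eigenvector) and the fixed step size below are invented for illustration; the paper's algorithms add a curvilinear line search and handle general matrix constraints:

```python
A = ((2.0, 1.0), (1.0, 3.0))   # objective f(x) = 0.5 * x^T A x on the unit circle

def grad(x):
    """Gradient of f, here simply A x."""
    return (A[0][0] * x[0] + A[0][1] * x[1], A[1][0] * x[0] + A[1][1] * x[1])

def cayley_step(x, g, tau):
    """One constraint-preserving update y = (I + tau/2 W)^{-1} (I - tau/2 W) x,
    with W = g x^T - x g^T skew-symmetric, so the map is orthogonal."""
    w = g[0] * x[1] - x[0] * g[1]        # W = [[0, w], [-w, 0]] in 2D
    s = 0.5 * tau * w
    r0, r1 = x[0] - s * x[1], s * x[0] + x[1]   # (I - tau/2 W) x
    det = 1.0 + s * s                    # closed-form inverse of [[1, s], [-s, 1]]
    return ((r0 - s * r1) / det, (s * r0 + r1) / det)

x = (1.0, 0.0)                           # feasible starting point on the circle
for _ in range(300):
    x = cayley_step(x, grad(x), tau=0.1)

# Rayleigh quotient of the final iterate: approaches the smallest eigenvalue of A
rq = x[0] * (A[0][0] * x[0] + A[0][1] * x[1]) + x[1] * (A[1][0] * x[0] + A[1][1] * x[1])
```

To first order in τ the step is projected gradient descent, but unlike a projection-based step the iterate never leaves the constraint set, which is the scheme's selling point over projections and geodesics.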