
arXiv:cs/0607105v4 [cs.NA] 17 Sep 2009

Nearly-Linear Time Algorithms for Preconditioning and Solving Symmetric, Diagonally Dominant Linear Systems∗

Daniel A. Spielman

Department of Computer Science

Program in Applied Mathematics

Yale University

Shang-Hua Teng

Department of Computer Science

Boston University

September 17, 2009

Abstract

We present a randomized algorithm that, on input a symmetric, weakly diagonally dominant n-by-n matrix A with m non-zero entries and an n-vector b, produces an $\tilde{x}$ such that $\|\tilde{x} - A^{\dagger}b\|_{A} \leq \epsilon \|A^{\dagger}b\|_{A}$ in expected time $m \log^{O(1)} n \log(1/\epsilon)$.

The algorithm applies subgraph preconditioners in a recursive fashion. These preconditioners improve upon the subgraph preconditioners first introduced by Vaidya (1990). For any symmetric, weakly diagonally dominant matrix A with non-positive off-diagonal entries and $k \geq 1$, we construct in time $m \log^{O(1)} n$ a preconditioner B of A with at most
$$2(n-1) + (m/k)\log^{O(1)} n$$
non-zero off-diagonal entries such that the finite generalized condition number $\kappa_f(A,B)$ is at most k. If the non-zero structure of the matrix is planar, then the condition number is at most
$$O\left((n/k)\log n\,(\log\log n)^2\right),$$
and the corresponding linear system solver runs in expected time
$$O\left(n\log^2 n + n\log n\,(\log\log n)^2 \log(1/\epsilon)\right).$$
Similar bounds are obtained on the running time of algorithms computing approximate Fiedler vectors.

∗This paper is the last in a sequence of three papers expanding on material that appeared first under the title

“Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems” [ST04].

The second paper, “Spectral Sparsification of Graphs” [ST08c] contains algorithms for constructing sparsifiers

of graphs, which we use in this paper to build preconditioners. The first paper, “A Local Clustering Algorithm

for Massive Graphs and its Application to Nearly-Linear Time Graph Partitioning” [ST08b] contains graph

partitioning algorithms that are used to construct sparsifiers in the second paper.

This material is based upon work supported by the National Science Foundation under Grant Nos. 0325630,

0324914, 0634957, 0635102 and 0707522. Any opinions, findings, and conclusions or recommendations expressed in

this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.


1 Introduction

We design an algorithm with nearly optimal asymptotic complexity for solving linear systems in symmetric, weakly diagonally dominant (SDD0) matrices. The algorithm applies a classical iterative solver, such as the Preconditioned Conjugate Gradient or the Preconditioned Chebyshev Method, with a novel preconditioner that we construct and analyze using techniques from graph theory. Linear systems in these preconditioners may be reduced to systems of smaller size in linear time by use of a direct method. The smaller linear systems are solved recursively. The resulting algorithm solves linear systems in SDD0-matrices in time almost linear in their number of non-zero entries. Our analysis does not make any assumptions about the non-zero structure of the matrix, and thus may be applied to the solution of the systems in SDD0-matrices that arise in any application, such as the solution of elliptic partial differential equations by the finite element method [Str86, BHV04], the solution of maximum flow problems by interior point algorithms [FG04, DS08], or the solution of learning problems on graphs [BMN04, ZBL+03, ZGL03].
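The outer loop of such a solver is the standard preconditioned iteration. As a minimal sketch (not the paper's algorithm), here is Preconditioned Conjugate Gradient with the preconditioner solve abstracted as a callback; in the paper that callback would be the recursive solve in the subgraph preconditioner, while the usage below substitutes a simple diagonal (Jacobi) preconditioner for illustration. The function names are ours.

```python
def pcg(matvec, b, precond_solve, tol=1e-10, max_iter=1000):
    """Preconditioned Conjugate Gradient for A x = b, A symmetric
    positive definite.  matvec(x) applies A; precond_solve(r) applies
    the (approximate) inverse of the preconditioner to a residual."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                               # residual b - A x with x = 0
    z = precond_solve(r)
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / sum(pi * Api for pi, Api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = precond_solve(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x

# Usage: a 2x2 SPD system with a Jacobi (diagonal) preconditioner.
A = [[4.0, 1.0], [1.0, 3.0]]
mv = lambda v: [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]
x = pcg(mv, [1.0, 2.0], lambda r: [r[0] / 4.0, r[1] / 3.0])
```

For the singular Laplacian systems considered in this paper one works with the pseudo-inverse, restricting attention to vectors orthogonal to the all-ones vector; the sketch above assumes a nonsingular system.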

Graph theory drives the construction of our preconditioners. Our algorithm is best understood by first examining its behavior on Laplacian matrices: the symmetric matrices with non-positive off-diagonals and zero row sums. Each n-by-n Laplacian matrix A may be associated with a weighted graph, in which the weight of the edge between distinct vertices i and j is $-A_{i,j}$ (see Figure 1). We precondition the Laplacian matrix A of a graph G by the Laplacian matrix B of a subgraph H of G that resembles a spanning tree of G plus a few edges. The subgraph H is called an ultra-sparsifier of G, and its corresponding Laplacian matrix is a very good preconditioner for A: the finite generalized condition number $\kappa_f(A,B)$ is $\log^{O(1)} n$. Moreover, it is easy to solve linear equations in B. As the graph H resembles a tree plus a few edges, we may use partial Cholesky factorization to eliminate most of the rows and columns of B while incurring only a linear amount of fill. We then solve the reduced system recursively.

Figure 1: A Laplacian matrix and its corresponding weighted graph. The matrix is
$$\begin{pmatrix} 1.5 & -1.5 & 0 & 0 \\ -1.5 & 4 & -2 & -0.5 \\ 0 & -2 & 3 & -1 \\ 0 & -0.5 & -1 & 1.5 \end{pmatrix},$$
and its graph on vertices 1 through 4 has an edge of weight 1.5 between vertices 1 and 2, weight 2 between vertices 2 and 3, weight 0.5 between vertices 2 and 4, and weight 1 between vertices 3 and 4.
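The correspondence in Figure 1 is mechanical to implement. A small sketch (function name ours) builds the Laplacian of a weighted graph; run on the figure's four-vertex graph (vertices renumbered 0 through 3), it reproduces the matrix in the figure.

```python
def laplacian(n, edges):
    """Laplacian of a weighted graph on vertices 0..n-1: the (i,j)
    off-diagonal entry is -w(i,j), and each diagonal entry is the
    weighted degree, so every row sums to zero."""
    L = [[0.0] * n for _ in range(n)]
    for i, j, w in edges:
        L[i][j] -= w
        L[j][i] -= w
        L[i][i] += w
        L[j][j] += w
    return L

# The graph of Figure 1, with vertices renumbered 0..3.
L = laplacian(4, [(0, 1, 1.5), (1, 2, 2.0), (1, 3, 0.5), (2, 3, 1.0)])
```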

The technical meat of this paper lies in the construction of ultra-sparsifiers for Laplacian matrices, which appears in Sections 7 through 10. In the remainder of the introduction, we formally define ultra-sparsifiers, and the sparsifiers from which they are built. In Section 2, we survey the contributions upon which we build, and mention other related work. We devote Section 3 to recalling the basics of support theory, defining the finite condition number, and explaining why we may restrict our attention to Laplacian matrices.

In Section 4, we state the properties we require of partial Cholesky factorizations, and we present our first algorithms for solving equations in SDD0-matrices. These algorithms directly solve equations in the preconditioners, rather than using a recursive approach, and take time roughly $O(m^{5/4}\log^{O(1)} n)$ for general SDD0-matrices and $O(n^{9/8}\log^{1/2} n)$ for SDDM0-matrices with planar non-zero structure. To accelerate these algorithms, we apply our preconditioners in a recursive fashion. We analyze the complexity of these recursive algorithms in Section 5, obtaining our main algorithmic results. In Section 6, we observe that these linear system solvers yield efficient algorithms for computing approximate Fiedler vectors, when applied inside the inverse power method.
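The inverse power method mentioned above can be sketched as power iteration on the pseudo-inverse, restricted to vectors orthogonal to the all-ones vector (the kernel of a connected graph's Laplacian). The helper below is our illustration, with the linear solve abstracted as a callback that the paper would instantiate with its nearly-linear-time solver; the function names are ours.

```python
import random

def fiedler_vector(solve, n, iters=100, seed=0):
    """Approximate Fiedler vector (eigenvector of the second-smallest
    Laplacian eigenvalue) by inverse power iteration.  solve(b) must
    return an (approximate) solution of L x = b for b orthogonal to
    the all-ones vector."""
    rng = random.Random(seed)
    v = [rng.random() for _ in range(n)]
    for _ in range(iters):
        mean = sum(v) / n                 # project out the all-ones kernel
        v = [vi - mean for vi in v]
        v = solve(v)
        norm = sum(vi * vi for vi in v) ** 0.5
        v = [vi / norm for vi in v]
    mean = sum(v) / n                     # final projection and normalization
    v = [vi - mean for vi in v]
    norm = sum(vi * vi for vi in v) ** 0.5
    return [vi / norm for vi in v]
```

Each iteration costs one Laplacian solve, so a nearly-linear-time solver yields a nearly-linear-time approximate Fiedler vector computation.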

We do not attempt to optimize the exponent of $\log n$ in the complexity of our algorithm. Rather, we present the simplest analysis we can find of an algorithm of complexity $m\log^{O(1)} n\log(1/\epsilon)$. We expect that the exponent of $\log n$ can be substantially reduced through advances in the constructions of low-stretch spanning trees, sparsifiers, and ultra-sparsifiers. Experimental work is required to determine whether a variation of our algorithm will be useful in practice.

1.1 Ultra-sparsifiers

To describe the quality of our preconditioners, we employ the notation $A \preceq B$ to indicate that $B - A$ is positive semi-definite. We define an SDDM0-matrix to be an SDD0-matrix with no positive off-diagonal entries. When positive definite, the SDDM0-matrices are M-matrices and in particular are Stieltjes matrices.

Definition 1.1 (Ultra-Sparsifiers). A (k,h)-ultra-sparsifier of an n-by-n SDDM0-matrix A with 2m non-zero off-diagonal entries is an SDDM0-matrix $A_s$ such that

(a) $A_s \preceq A \preceq k \cdot A_s$.

(b) $A_s$ has at most $2(n-1) + 2hm/k$ non-zero off-diagonal entries.

(c) The set of non-zero entries of $A_s$ is a subset of the set of non-zero entries of A.
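Conditions (b) and (c) are purely combinatorial and easy to verify directly; condition (a) is the spectral sandwich that does the real work and requires an eigenvalue computation. A sketch (function names ours) of the combinatorial checks:

```python
def off_diag_support(M):
    """Positions of the non-zero off-diagonal entries of a square matrix."""
    n = len(M)
    return {(i, j) for i in range(n) for j in range(n)
            if i != j and M[i][j] != 0}

def check_ultra_sparsifier_bc(A, As, k, h):
    """Conditions (b) and (c) of Definition 1.1.  The spectral condition
    (a), As preceq A preceq k*As, is not checked here."""
    n = len(A)
    supp_A, supp_As = off_diag_support(A), off_diag_support(As)
    m = len(supp_A) / 2          # A has 2m non-zero off-diagonal entries
    return (len(supp_As) <= 2 * (n - 1) + 2 * h * m / k
            and supp_As <= supp_A)
```

For example, the Laplacian of any spanning tree of A's graph satisfies (b) and (c), since a tree has exactly n − 1 edges and uses only edges of A.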

In Section 10, we present an expected $m\log^{O(1)} n$-time algorithm that on input a Laplacian matrix A and a $k \geq 1$ produces a (k,h)-ultra-sparsifier of A with probability at least $1 - 1/2n$, for
$$h = c_3 \log_2^{c_4} n, \qquad (1)$$
where $c_3$ and $c_4$ are some absolute constants. As we will use these ultra-sparsifiers throughout the paper, we will define a k-ultra-sparsifier to be a (k,h)-ultra-sparsifier where h satisfies (1).

For matrices whose graphs are planar, we present a simpler construction of (k,h)-ultra-sparsifiers, with $h = O(\log n\,(\log\log n)^2)$. This simpler construction exploits low-stretch spanning trees [AKPW95, EEST08, ABN08], and is presented in Section 9. Our construction of ultra-sparsifiers in Section 10 builds upon the simpler construction, but requires the use of sparsifiers. The following definition of sparsifiers will suffice for the purposes of this paper.

Definition 1.2 (Sparsifiers). A d-sparsifier of an n-by-n SDDM0-matrix A is an SDDM0-matrix $A_s$ such that

(a) $A_s \preceq A \preceq (5/4)A_s$.

(b) $A_s$ has at most dn non-zero off-diagonal entries.

(c) The set of non-zero entries of $A_s$ is a subset of the set of non-zero entries of A.

(d) For all i,
$$\sum_{j \neq i} \frac{A_s(i,j)}{A(i,j)} \leq 2\,|\{j : A(i,j) \neq 0\}|.$$
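Condition (d) bounds, row by row, the total reweighting of the surviving entries: since $A_s$ and A have negative off-diagonals, each ratio $A_s(i,j)/A(i,j)$ is non-negative. A sketch of the check (function name ours), with the sum taken over the non-zero entries of A:

```python
def check_row_reweighting(A, As):
    """Condition (d) of Definition 1.2: for every row i, the sum over
    j != i of As(i,j)/A(i,j) is at most twice the number of non-zero
    off-diagonal entries of A in that row."""
    n = len(A)
    for i in range(n):
        cols = [j for j in range(n) if j != i and A[i][j] != 0]
        if sum(As[i][j] / A[i][j] for j in cols) > 2 * len(cols):
            return False
    return True
```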

In a companion paper [ST08c], we present a randomized algorithm Sparsify2 that produces sparsifiers of Laplacian matrices in expected nearly-linear time. As explained in Section 3, this construction can trivially be extended to all SDDM0-matrices.

Theorem 1.3 (Sparsification). On input an $n \times n$ Laplacian matrix A with 2m non-zero off-diagonal entries and a $p > 0$, Sparsify2 runs in expected time $m\log(1/p)\log^{17} n$ and with probability at least $1 - p$ produces a $c_1\log^{c_2}(n/p)$-sparsifier of A, for $c_2 = 30$ and some absolute constant $c_1 > 1$.

We parameterize this theorem by the constants $c_1$ and $c_2$ as we believe that they can be substantially improved. In particular, Spielman and Srivastava [SS08] construct sparsifiers with $c_2 = 1$, but these constructions require the solution of linear equations in Laplacian matrices, and so cannot be used to help speed up the algorithms in this paper. Batson, Spielman and Srivastava [BSS09] have proved that there exist sparsifiers that satisfy conditions (a) through (c) of Definition 1.2 with $c_2 = 0$.

2 Related Work

In this section, we explain how our results relate to other rigorous asymptotic analyses of algorithms for solving systems of linear equations. For the most part, we restrict our attention to algorithms that make structural assumptions about their input matrices, rather than assumptions about the origins of those matrices.

Throughout our discussion, we consider an n-by-n matrix with m non-zero entries. When m is large relative to n and the matrix is arbitrary, the fastest algorithms for solving linear equations are those based on fast matrix multiplication [CW82], which take time approximately $O(n^{2.376})$. The fastest algorithm for solving general sparse positive semi-definite linear systems is the Conjugate Gradient. Used as a direct solver, it runs in time O(mn) (see [TB97, Theorem 28.3]). Of course, this algorithm can be used to solve a system in an arbitrary matrix A in a similar amount of time by first multiplying both sides by $A^T$. To the best of our knowledge, every faster algorithm requires additional properties of the input matrix.

2.1 Special non-zero structure

In the design and analysis of direct solvers, it is standard to represent the non-zero structure of a matrix A by an unweighted graph $G_A$ that has an edge between vertices $i \neq j$ if and only if $A_{i,j}$ is non-zero (see [DER86]). If this graph has special structure, there may be elimination orderings that accelerate direct solvers. If A is tri-diagonal, in which case $G_A$ is a path, then a linear system in A can be solved in time O(n). Similarly, when $G_A$ is a tree a linear system in A may be solved in time O(n) (see [DER86]). If the graph of non-zero entries $G_A$ is planar, one can use Generalized Nested Dissection [Geo73, LRT79, GT87] to find an elimination ordering under which Cholesky factorization can be performed in time $O(n^{1.5})$ and produces factors with at most $O(n\log n)$ non-zero entries.
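The O(n) bound for tri-diagonal systems is the classical forward-elimination/back-substitution scheme (often called the Thomas algorithm), which is stable without pivoting when A is diagonally dominant. A sketch:

```python
def solve_tridiagonal(a, b, c, d):
    """O(n) solve of a tridiagonal system (the Thomas algorithm).
    a = sub-diagonal (length n-1), b = diagonal (length n),
    c = super-diagonal (length n-1), d = right-hand side (length n).
    Assumes diagonal dominance, so no pivoting is needed."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]                       # forward elimination
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]                            # back substitution
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Each entry is touched a constant number of times, hence the O(n) running time; the O(n) tree solve mentioned above generalizes this by eliminating leaves first.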


We will exploit these results in our algorithms for solving planar linear systems in Section 4. We recall that a planar graph on n vertices has at most $3n - 6$ edges (see [Har72, Corollary 11.1 (c)]), so $m \leq 6n$.

2.2 Subgraph Preconditioners

Our work builds on a remarkable approach to solving linear systems in Laplacian matrices introduced by Vaidya [Vai90]. Vaidya demonstrated that a good preconditioner for a Laplacian matrix A can be found in the Laplacian matrix B of a subgraph of the graph corresponding to A. He then showed that one could bound the condition number of the preconditioned system by bounding the dilation and congestion of an embedding of the graph of A into the graph of B. By using preconditioners obtained by adding edges to maximum spanning trees, Vaidya developed an algorithm that finds $\epsilon$-approximate solutions to linear systems in SDDM0-matrices with at most d non-zero entries per row in time $O((dn)^{1.75}\log(1/\epsilon))$. When the graph corresponding to A had special structure, such as having bounded genus or avoiding certain minors, he obtained even faster algorithms. For example, his algorithm for solving planar systems runs in time $O((dn)^{1.2}\log(1/\epsilon))$.
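The congestion-dilation bound has a concrete combinatorial form in the simplest case: for an unweighted graph G and a spanning tree T of G, routing each edge of G along its unique tree path bounds the condition number of the preconditioned system by congestion times dilation. A sketch of computing that bound (names ours; Vaidya's analysis also handles edge weights and general subgraphs):

```python
def congestion_dilation_bound(edges, parent):
    """Upper bound (congestion * dilation) for an unweighted graph
    preconditioned by a spanning tree.  Each graph edge is routed along
    its unique tree path; dilation is the longest such path, congestion
    the largest number of paths sharing one tree edge.  parent[v] is
    v's parent in the tree rooted at 0 (parent[0] is None)."""
    def ancestors(u):
        chain = [u]
        while parent[chain[-1]] is not None:
            chain.append(parent[chain[-1]])
        return chain

    def path_edges(u, v):
        au, av = ancestors(u), ancestors(v)
        lca = next(x for x in au if x in set(av))   # lowest common ancestor
        up = [tuple(sorted((au[i], au[i + 1]))) for i in range(au.index(lca))]
        down = [tuple(sorted((av[i], av[i + 1]))) for i in range(av.index(lca))]
        return up + down

    load, dilation = {}, 0
    for u, v in edges:
        path = path_edges(u, v)
        dilation = max(dilation, len(path))
        for e in path:
            load[e] = load.get(e, 0) + 1
    return max(load.values()) * dilation
```

For the 4-cycle preconditioned by a spanning path, the single non-tree edge routes through all three tree edges, giving dilation 3 and congestion 2, hence a condition-number bound of 6.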

As Vaidya’s paper was never published and his manuscript lacked many proofs, the task of formally working out his results fell to others. Much of its content appears in the thesis of his student, Anil Joshi [Jos97], and a complete exposition along with many extensions was presented by Bern et al. [BGH+06]. Gremban, Miller and Zagha [Gre96, GMZ95] explain parts of Vaidya’s paper as well as extend Vaidya’s techniques. Among other results, they find ways of constructing preconditioners by adding vertices to the graphs. Maggs et al. [MMP+05] prove that this technique may be used to construct excellent preconditioners, but it is still not clear if they can be constructed efficiently.

The machinery needed to apply Vaidya’s techniques directly to matrices with positive off-diagonal elements is developed in [BCHT04]. An algebraic extension of Vaidya’s techniques for bounding the condition number was presented by Boman and Hendrickson [BH03b], and later used by them [BH01] to prove that the low-stretch spanning trees constructed by Alon, Karp, Peleg, and West [AKPW95] yield preconditioners for which the preconditioned system has condition number at most $m\,2^{O(\sqrt{\log n\log\log n})}$. They thereby obtained a solver for symmetric diagonally dominant linear systems that produces $\epsilon$-approximate solutions in time $m^{1.5+o(1)}\log(1/\epsilon)$. Through improvements in the construction of low-stretch spanning trees [EEST08, ABN08] and a careful analysis of the eigenvalue distribution of the preconditioned system, Spielman and Woo [SW09] show that when the Preconditioned Conjugate Gradient is applied with the best low-stretch spanning tree preconditioners, the resulting linear system solver takes time at most $O(mn^{1/3}\log^{1/2} n\log(1/\epsilon))$. The preconditioners in the present paper are formed by adding edges to these low-stretch spanning trees.

The recursive application of subgraph preconditioners was pioneered in the work of Joshi [Jos97] and Reif [Rei98]. Reif [Rei98] showed how to recursively apply Vaidya’s preconditioners to solve linear systems in SDDM0-matrices with planar non-zero structure and at most a constant number of non-zeros per row in time $O(n^{1+\beta}\log^{O(1)}(\kappa(A)/\epsilon))$, for every $\beta > 0$. While Joshi’s analysis is numerically much cleaner, he only analyzes preconditioners for simple model problems. Our recursive scheme uses ideas from both these works, with some simplification. Koutis and Miller [KM07] have developed recursive algorithms that solve linear systems in SDDM0-matrices
