A MULTILEVEL SCHWARZ PRECONDITIONER BASED ON A
HIERARCHY OF ROBUST COARSE SPACES
HUSSAM AL DAAS∗, LAURA GRIGORI∗, PIERRE JOLIVET†, AND PIERRE-HENRI TOURNIER‡
Abstract. In this paper we present a multilevel preconditioner based on overlapping Schwarz
methods for symmetric positive definite (SPD) matrices. Robust two-level Schwarz preconditioners
exist in the literature to guarantee fast convergence of Krylov methods. As long as the dimension of
the coarse space is reasonable, that is, exact solvers can be used efficiently, two-level methods scale
well on parallel architectures. However, the factorization of the coarse space matrix may become
costly at scale. An alternative is then to use an iterative method on the second level, combined with
an algebraic preconditioner, such as a one-level additive Schwarz preconditioner. Nevertheless, the
condition number of the resulting preconditioned coarse space matrix may still be large. One of the
difficulties of using more advanced methods, like algebraic multigrid or even two-level overlapping
Schwarz methods, to solve the coarse problem is that the matrix does not arise from a partial
differential equation (PDE) anymore. We introduce in this paper a robust multilevel additive Schwarz
preconditioner where at each level the condition number is bounded, ensuring a fast convergence for
each nested solver. Furthermore, our construction does not require any information beyond what is needed to build a two-level method, and may thus be seen as an algebraic extension.
Key words. domain decomposition, multilevel, elliptic problems, subspace correction
AMS subject classifications. 65F08, 65F10, 65N55
1. Introduction. We consider the solution of a linear system of equations
(1.1)    $Ax = b$,
where $A \in \mathbb{R}^{n \times n}$ is a symmetric positive definite (SPD) matrix, $b \in \mathbb{R}^n$ is the right-hand side, and $x \in \mathbb{R}^n$ is the vector of unknowns. To enhance convergence, it is common to solve the preconditioned system
\[ M^{-1} A x = M^{-1} b. \]
Standard domain decomposition preconditioners such as block Jacobi, additive
Schwarz, and restricted additive Schwarz methods are widely used [32,9,8]. In a
parallel framework, such preconditioners have the advantage of relatively low com-
munication costs. However, their effectiveness at lowering the condition number of the system typically deteriorates as the number of subdomains increases. Multilevel approaches have proven very effective at enhancing the convergence of Krylov methods
[33,12,7,25,20,10,21,1,15,23,34,30]. In multigrid and domain decomposition
communities, multilevel methods have proven their capacity of scaling up to large
numbers of processors and tackling ill-conditioned systems [37,4,19]. While some
preconditioners are purely algebraic [7,20,10,26,29,16,1], several multilevel meth-
ods are based on hierarchical meshing in both multigrid and domain decomposition
communities [35,9,25,15,23]. Mesh coarsening depends on the geometry of the
problem. One has to be careful when choosing a hierarchical structure since it can
have a significant impact on the iteration count [23,25]. In [23], the authors propose
∗ALPINES, INRIA, Paris, France (aldaas.hussam@gmail.com,laura.grigori@inria.fr).
†IRIT, CNRS, Toulouse, France (pierre.jolivet@enseeiht.fr).
‡LJLL, CNRS, Paris, France (tournier@ljll.math.upmc.fr).
a multilevel Schwarz domain decomposition solver for the elasticity problem. Based
on a heuristic approach and following the maximum independent set method [2], they
coarsen the fine mesh while preserving the boundary in order to obtain a two-level
method. This strategy is repeated recursively to build several levels. However, they
do not provide a bound on the condition number of the preconditioned matrix of the
multilevel method. Multilevel domain decomposition methods are mostly based on
non-overlapping approaches [35,9,25,23,37,4,30,34]. Two-level overlapping domain
decomposition methods are well studied and provide robust convergence estimates
[33,12,5]. However, extending such a construction to more than two levels while
preserving robustness is not straightforward. In [6], the authors propose an algebraic
multilevel additive Schwarz method. Their approach is inspired by algebraic multigrid
strategies. One drawback is that it is sensitive to the number of subdomains. In
[15], the authors suggest applying the two-level Generalized Dryja–Smith–Widlund
preconditioner recursively to build a multilevel method. In this case, the condition
number bound of the two-level approach depends on the width of the overlap, the
diameter of discretization elements, and the diameter of the subdomains. They focus
on the preconditioner for the three-level case. One drawback of their approach is that
the three-level preconditioner requires more iterations than the two-level variant. In
this paper, the only information from the PDE needed for the construction of the
preconditioner consists of the local Neumann matrices at the fine level. These ma-
trices correspond to the integration of the bilinear form in the weak formulation of
the studied PDE on the subdomain-decomposed input mesh. No further information
is necessary: except on the fine level, our method is algebraic and does not depend
on any coarsened mesh or auxiliary discretized operator. For problems not arising
from PDE discretization, one needs to supply the local SPSD matrices on the finest
level. In [3], a subset of the authors propose a fully algebraic approximation for such
matrices. However, their approximation strategy is heuristic and may not be effective
in some cases.
Our preconditioner is based on a hierarchy of coarse spaces and is defined as follows. At the first level, the set of unknowns is partitioned into $N_1$ subdomains, and each subdomain has an associated matrix $A_{1,j} = R_{1,j} A R_{1,j}^\top$ obtained by using appropriate restriction and prolongation operators $R_{1,j}$ and $R_{1,j}^\top$, respectively, defined in the following section. The preconditioner is formed as an additive Schwarz preconditioner coupled with an additive coarse space correction, defined as
\[ M^{-1} = M_1^{-1} = V_1 A_2^{-1} V_1^\top + \sum_{j=1}^{N_1} R_{1,j}^\top A_{1,j}^{-1} R_{1,j}, \]
where $V_1$ is a tall-and-skinny matrix spanning a coarse space obtained by solving, for each subdomain $j = 1$ to $N_1$, a generalized eigenvalue problem involving the matrix $A_{1,j}$ and the Neumann matrix associated with subdomain $j$. The coarse space matrix is $A_2 = V_1^\top A V_1$. This is equivalent to the GenEO preconditioner, which is described in detail in [33] and recalled briefly in section 2. The dimension of the coarse space is proportional to the number of subdomains $N_1$. When it increases, factorizing $A_2$ by using a direct method becomes prohibitive, and hence the application of $A_2^{-1}$ to a vector should also be performed through an iterative method.
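As an illustration, the following sketch (in Python with NumPy/SciPy, which is not the implementation used in this paper; the experiments in section 6 rely on FreeFEM and HPDDM) shows how such a two-level preconditioner could be applied to a vector. The names A, V1, and subdomain_dofs are hypothetical inputs.

# Minimal sketch of applying the two-level additive Schwarz preconditioner
# M1^{-1} v = V1 A2^{-1} V1^T v + sum_j R_{1,j}^T A_{1,j}^{-1} R_{1,j} v.
import numpy as np
import scipy.sparse as sp

def two_level_asm_apply(v, A, V1, subdomain_dofs):
    """A: sparse SPD matrix; V1: dense n x n2 coarse basis;
    subdomain_dofs[j]: overlapping unknowns of subdomain j (rows selected by R_{1,j})."""
    A = sp.csr_matrix(A)
    # Coarse correction V1 A2^{-1} V1^T v with A2 = V1^T A V1 (in practice, A2 and the
    # local matrices are assembled and factorized once during the setup phase).
    A2 = V1.T @ (A @ V1)
    y = V1 @ np.linalg.solve(A2, V1.T @ v)
    # Local corrections R_{1,j}^T A_{1,j}^{-1} R_{1,j} v, one per subdomain.
    for dofs in subdomain_dofs:
        A1j = A[dofs, :][:, dofs].toarray()   # A_{1,j} = R_{1,j} A R_{1,j}^T
        y[dofs] += np.linalg.solve(A1j, v[dofs])
    return y

In a distributed setting, each local solve is carried out concurrently by the process owning the corresponding subdomain, and only the coarse correction requires global communication.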
Our multilevel approach defines a hierarchy of coarse spaces $V_i$ and coarse space matrices $A_i$ for $i = 2$ up to a given depth $L+1$, and defines a preconditioner $M_i^{-1}$ such that the condition number of $M_i^{-1} A_i$ is bounded. The depth $L+1$ is chosen such that the coarse space matrix $A_{L+1}$ can be factorized efficiently by using a direct method. At each level $i$, the graph of the coarse space matrix $A_i$ is partitioned into $N_i$ subdomains, and each subdomain $j$ is associated with a local matrix $A_{i,j} = R_{i,j} A_i R_{i,j}^\top$ obtained by using appropriate restriction and prolongation operators $R_{i,j}$ and $R_{i,j}^\top$, respectively. The preconditioner at level $i$ is defined as
\[ M_i^{-1} = V_i A_{i+1}^{-1} V_i^\top + \sum_{j=1}^{N_i} R_{i,j}^\top A_{i,j}^{-1} R_{i,j}, \]
where the coarse space matrix is $A_{i+1} = V_i^\top A_i V_i$.
One of the main contributions of the paper concerns the construction of the hierarchy of coarse spaces $V_i$ for levels $i$ going from 2 to $L$, which are built algebraically from the coarse space of the previous level $V_{i-1}$. This construction is based on the definition of local symmetric positive semi-definite (SPSD) matrices associated with each subdomain $j$ at each level $i$ that we introduce in this paper. These matrices are obtained by using the local SPSD matrices of the previous level $i-1$ and the previous coarse space $V_{i-1}$. They are then involved, together with the local matrices $A_{i,j}$, in concurrent generalized eigenvalue problems solved for each subdomain $j$, which allow us to compute the local eigenvectors contributing to the coarse space $V_i$.
We show in Theorem 5.3, section 5, that the condition number of $M_i^{-1} A_i$ is bounded and depends on the maximum number of subdomains at the first level that share an unknown, the number of distinct colors required to color the graph of $A_i$ so that $\operatorname{span}\{R_{i,j}^\top\}_{1 \le j \le N_i}$ of the same color are mutually $A_i$-orthogonal, and a user-defined tolerance $\tau$. It is thus independent of the number of subdomains $N_i$.
The main contribution of this paper is based on the combination of two previous
works on two-level additive Schwarz methods [3,33]. The coarse space proposed in
[33] guarantees an upper bound on the condition number that can be prescribed by
the user. The SPSD splitting in the context of domain decomposition presented in
[3] provides an algebraic view for the construction of coarse spaces. The combination
of these two works leads to a robust multilevel additive Schwarz method. Here,
robustness refers to the fact that at each level, an upper bound on the condition
number of the associated matrix can be prescribed by the user a priori. The rest
of the paper is organized as follows. In the next section, we present the notations
used throughout the paper. In section 2, we present a brief review of the theory of
one- and two-level additive Schwarz methods. We extend in section 3 the class of
SPSD splitting matrices presented in [3] in order to make it suitable for multilevel
methods. Afterwards, we define the coarse space at level $i$ based on the extended class of local SPSD splitting matrices associated with this level. Section 4 describes the partitioning of the domain at level $i+1$ from the partitioning at level $i$. In section 5, we explain the computation of the local SPSD matrices associated with each subdomain at level $i+1$; we compute them using those associated with subdomains at level $i$. Section 6 presents numerical experiments on highly challenging diffusion and linear elasticity problems in two and three dimensions. We illustrate the theoretical robustness and practical usage of our proposed method by performing strong scalability tests up to 8,192 processes.
Context and notation. By convention, the finest level, on which (1.1) is de-
fined, is the first level. A subscript index is used in order to specify which level
an entity is defined on. In the case where additional subscripts are used, the first
subscript always denotes the level. For the sake of clarity, we omit the subscript cor-
responding to level 1 when it is clear from context, e.g., matrix $A$. Furthermore, the subscripts $i$ and $j$ always refer to a specific level $i$ and its subdomain $j$, respectively. The number of levels is $L+1$. Let $A_i \in \mathbb{R}^{n_i \times n_i}$ denote symmetric positive definite matrices, each corresponding to level $i = 1, \ldots, L+1$. We suppose that a direct solver can be used at level $L+1$ to compute an exact factorization of $A_{L+1}$.
Let $B \in \mathbb{R}^{p \times q}$ be a matrix and let $P \subset [\![1; p]\!]$ and $Q \subset [\![1; q]\!]$ be two sets of indices. The concatenation of $P$ and $Q$ is represented by $[P, Q]$; we note that the order of the concatenation is important. $B(P, :)$ is the submatrix of $B$ formed by the rows whose indices belong to $P$; $B(:, Q)$ is the submatrix of $B$ formed by the columns whose indices belong to $Q$; and $B(P, Q) = (B(P, :))(:, Q)$. The identity matrix of size $p$ is denoted $I_p$. We suppose that the graph of $A_i$ is partitioned into $N_i$ non-overlapping subdomains, where $N_i \ll n_i$ and $N_{i+1} \le N_i$ for $i = 1, \ldots, L$. We note that
partitioning at level 1 can be performed by using a graph partitioning library such as
ParMETIS [22] or PT-SCOTCH [11]. Partitioning at greater levels will be described
later in section 4. In the following, we define, for each level $i = 1, \ldots, L$, notations for subsets and restriction operators that are associated with the partitioning. Let $\Omega_i = [\![1; n_i]\!]$ be the set of unknowns at level $i$ and let $\Omega_{i,j,I}$ for $j = 1, \ldots, N_i$ be the subset of $\Omega_i$ that represents the unknowns in subdomain $j$. We refer to $\Omega_{i,j,I}$ as the interior unknowns of subdomain $j$. Let $\Gamma_{i,j}$ for $j = 1, \ldots, N_i$ be the subset of $\Omega_i$ that represents the neighbor unknowns of subdomain $j$, i.e., the unknowns at distance 1 from subdomain $j$ through the graph of $A_i$. We refer to $\Gamma_{i,j}$ as the overlapping unknowns of subdomain $j$. We denote $\Omega_{i,j} = [\Omega_{i,j,I}, \Gamma_{i,j}]$, for $j = 1, \ldots, N_i$, the concatenation of interior and overlapping unknowns of subdomain $j$. We denote $\Delta_{i,j}$, for $j = 1, \ldots, N_i$, the complement of $\Omega_{i,j}$ in $\Omega_i$, i.e., $\Delta_{i,j} = \Omega_i \setminus \Omega_{i,j}$. In Figure 1.1, a triangular mesh is used to discretize a square domain. The set of nodes of the mesh is partitioned into 16 disjoint subsets $\Omega_{1,j,I}$, which represent a non-overlapping decomposition, for $j = 1, \ldots, 16$ (left). On the right, a matrix $A_1$ whose connectivity graph corresponds to the mesh is illustrated. The submatrix $A_1(\Omega_{1,j,I}, \Omega_{1,j,I})$ is associated with the non-overlapping subdomain $j$. Each submatrix $A_1(\Omega_{1,j,I}, \Omega_{1,j,I})$ is colored with a distinct color. The same color is used to color the region that contains the nodes in the non-overlapping subdomain $\Omega_{1,j,I}$. Note that if two subdomains $j_1, j_2$ are neighbors, the submatrix $A_1(\Omega_{1,j_1,I}, \Omega_{1,j_2,I})$ has nonzero elements. For $j = 1, \ldots, N_i$, we denote by $n_{i,j,I}$, $\gamma_{i,j}$, and $n_{i,j}$ the cardinality of $\Omega_{i,j,I}$, $\Gamma_{i,j}$, and $\Omega_{i,j}$, respectively.
Let $R_{i,j,I} \in \mathbb{R}^{n_{i,j,I} \times n_i}$ be defined as $R_{i,j,I} = I_{n_i}(\Omega_{i,j,I}, :)$.
Let $R_{i,j,\Gamma} \in \mathbb{R}^{\gamma_{i,j} \times n_i}$ be defined as $R_{i,j,\Gamma} = I_{n_i}(\Gamma_{i,j}, :)$.
Let $R_{i,j} \in \mathbb{R}^{n_{i,j} \times n_i}$ be defined as $R_{i,j} = I_{n_i}(\Omega_{i,j}, :)$.
Let $R_{i,j,\Delta} \in \mathbb{R}^{(n_i - n_{i,j}) \times n_i}$ be defined as $R_{i,j,\Delta} = I_{n_i}(\Delta_{i,j}, :)$.
Let $P_{i,j} = I_{n_i}([\Omega_{i,j,I}, \Gamma_{i,j}, \Delta_{i,j}], :) \in \mathbb{R}^{n_i \times n_i}$ be a permutation matrix associated with subdomain $j$, for $j = 1, \ldots, N_i$. The matrix of the overlapping subdomain $j$, $R_{i,j} A_i R_{i,j}^\top$, is denoted $A_{i,j}$. We denote $D_{i,j} \in \mathbb{R}^{n_{i,j} \times n_{i,j}}$, $j = 1, \ldots, N_i$, any set of non-negative diagonal matrices such that
\[ I_{n_i} = \sum_{j=1}^{N_i} R_{i,j}^\top D_{i,j} R_{i,j}. \]
We refer to $\{D_{i,j}\}_{1 \le j \le N_i}$ as the algebraic partition of unity. Let $V_i \in \mathbb{R}^{n_i \times n_{i+1}}$ be a tall-and-skinny matrix of full rank. We denote $\mathcal{S}_i$ the subspace spanned by the columns of $V_i$. This subspace will stand for the coarse space associated with level $i$. By convention, we refer to $\mathcal{S}_i$ as subdomain 0 at level $i$. Thus, we have $n_{i,0} = n_{i+1}$.
Fig. 1.1. Left: a triangular mesh is used to discretize the unit square. The set of nodes of the mesh is partitioned into 16 disjoint subsets (non-overlapping subdomains) $\Omega_{1,j,I}$ for $j = 1, \ldots, 16$. Right: illustration of the matrix $A_1$ whose connectivity graph corresponds to the mesh on the left. The diagonal block $j$ of $A_1$ corresponds to the non-overlapping subdomain $\Omega_{1,j,I}$. Each submatrix $A_1(\Omega_{1,j,I}, \Omega_{1,j,I})$ is colored with a distinct color. The same color is used to color the region of the square that contains the nodes in $\Omega_{1,j,I}$.
The interpolation operator at level $i$ is defined as
(1.2) \[ R_{i,2} : \prod_{j=0}^{N_i} \mathbb{R}^{n_{i,j}} \to \mathbb{R}^{n_i}, \qquad (u_j)_{0 \le j \le N_i} \mapsto \sum_{j=0}^{N_i} R_{i,j}^\top u_j. \]
Finally, we denote $\mathcal{V}_{i,j}$ the set of neighboring subdomains of each subdomain $j$ at level $i$, for $(i,j) \in [\![1; L]\!] \times [\![1; N_i]\!]$:
\[ \mathcal{V}_{i,j} = \{ k \in [\![1; N_i]\!] : \Omega_{i,j} \cap \Omega_{i,k} \ne \emptyset \}. \]
As previously mentioned, partitioning at level 1 can be performed by graph partitioning libraries such as ParMETIS [22] or PT-SCOTCH [11]. Partitioning at further levels will be defined later: the sets $\Omega_{i,j,I}$, $\Omega_{i,j,\Gamma}$, $\Omega_{i,j}$, and $\Delta_{i,j}$ for $i > 1$ are defined in subsection 4.2. The coarse spaces $\mathcal{S}_i$ as well as the projection and prolongation operators $V_i^\top$ and $V_i$ are defined in subsection 3.2. We suppose that the connectivity graph between the subdomains on each level is sparse. This assumption is not true in general; however, it is valid in structures based on locally constructed coarse spaces in domain decomposition, as we show in this paper, see [18, Section 4.1 p.81] for the case of two levels.
2. Background. In this section, we review briefly several theoretical results
related to additive Schwarz preconditioners. We introduce them for the sake of com-
pleteness.
Lemma 2.1 (fictitious subspace lemma). Let $A \in \mathbb{R}^{n_A \times n_A}$ and $B \in \mathbb{R}^{n_B \times n_B}$ be two symmetric positive definite matrices. Let $R$ be an operator defined as
\[ R : \mathbb{R}^{n_B} \to \mathbb{R}^{n_A}, \qquad v \mapsto R v, \]
and let $R^\top$ be its transpose. Suppose that the following conditions hold:
1. The operator $R$ is surjective.
2. There exists $c_u > 0$ such that
\[ (R v)^\top A (R v) \le c_u\, v^\top B v, \quad \forall v \in \mathbb{R}^{n_B}. \]
3. There exists $c_l > 0$ such that for all $v_{n_A} \in \mathbb{R}^{n_A}$ there exists $v_{n_B} \in \mathbb{R}^{n_B}$ with $v_{n_A} = R v_{n_B}$ and
\[ c_l\, v_{n_B}^\top B v_{n_B} \le (R v_{n_B})^\top A (R v_{n_B}) = v_{n_A}^\top A v_{n_A}. \]
Then, the spectrum of the operator $R B^{-1} R^\top A$ is contained in the segment $[c_l, c_u]$.
Proof. We refer the reader to [12, Lemma 7.4 p.164] or [28,27,13] for a detailed
proof.
Lemma 2.2. The operator $R_{i,2}$ as defined in (1.2) is surjective.
Proof. The proof follows from the definition of $R_{i,2}$ in (1.2).
Lemma 2.3. Let $k_{i,c}$ for $i = 1, \ldots, L$ be the minimum number of distinct colors so that $\operatorname{span}\{R_{i,j}^\top\}_{1 \le j \le N_i}$ of the same color are mutually $A_i$-orthogonal. Then, we have
\[ (R_{i,2} u_{B_i})^\top A_i (R_{i,2} u_{B_i}) \le (k_{i,c} + 1) \sum_{j=0}^{N_i} u_j^\top R_{i,j} A_i R_{i,j}^\top u_j, \quad \forall\, u_{B_i} = (u_j)_{0 \le j \le N_i} \in \prod_{j=0}^{N_i} \mathbb{R}^{n_{i,j}}. \]
Proof. We refer the reader to [9, Theorem 12 p.93] for a detailed proof.
We note that at level $i$, the number $k_{i,c}$ is smaller than the maximum number of neighbors over the set of subdomains $[\![1; N_i]\!]$:
\[ k_{i,c} \le \max_{1 \le j \le N_i} \# \mathcal{V}_{i,j}. \]
Due to the sparse structure of the connectivity graph between the subdomains at level $i$, the maximum number of neighbors over the set of subdomains $[\![1; N_i]\!]$ is independent of the number of subdomains $N_i$. Then, so is $k_{i,c}$.
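To make the role of $k_{i,c}$ concrete, the following sketch (Python; the adjacency structure neighbors, i.e., the sets $\mathcal{V}_{i,j}$, is a hypothetical input) greedily colors the subdomain connectivity graph so that neighboring subdomains never share a color; the number of colors it uses is an upper bound on $k_{i,c}$, itself bounded by the maximum number of neighbors as stated above.

# Greedy coloring of the subdomain connectivity graph: subdomains whose supports
# overlap (hence are not A_i-orthogonal) receive distinct colors.
def greedy_coloring(neighbors):
    """neighbors[j]: set of neighbors V_{i,j} of subdomain j (illustrative input)."""
    colors = {}
    for j in sorted(neighbors, key=lambda s: -len(neighbors[s])):  # largest degree first
        used = {colors[k] for k in neighbors[j] if k in colors}
        colors[j] = next(c for c in range(len(neighbors) + 1) if c not in used)
    return colors

# Example: 4 subdomains arranged in a 2 x 2 grid, each touching the other three.
neighbors = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
print(max(greedy_coloring(neighbors).values()) + 1)  # number of colors used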
Lemma 2.4. Let $u_{A_i} \in \mathbb{R}^{n_i}$ and $u_{B_i} = (u_j)_{0 \le j \le N_i} \in \prod_{j=0}^{N_i} \mathbb{R}^{n_{i,j}}$ be such that $u_{A_i} = R_{i,2} u_{B_i}$. The additive Schwarz operator, without any other restriction on the coarse space $\mathcal{S}_i$, verifies the following inequality:
\[ \sum_{j=0}^{N_i} u_j^\top R_{i,j} A_i R_{i,j}^\top u_j \le 2\, u_{A_i}^\top A_i u_{A_i} + (2 k_{i,c} + 1) \sum_{j=1}^{N_i} u_j^\top R_{i,j} A_i R_{i,j}^\top u_j, \]
where $k_{i,c}$ is defined in Lemma 2.3.
Proof. We refer the reader to [12, Lemma 7.12, p. 175] for a detailed proof.
Lemma 2.5. Let $A, B \in \mathbb{R}^{m \times m}$ be two symmetric positive semi-definite matrices. Let $\ker(A)$ and $\operatorname{range}(A)$ denote the null space and the range of $A$, respectively. Let $P_0$ be an orthogonal projection on $\operatorname{range}(A)$. Let $\tau$ be a positive real number. Consider the generalized eigenvalue problem
\[ P_0 B P_0 u_k = \lambda_k A u_k, \qquad (u_k, \lambda_k) \in \operatorname{range}(A) \times \mathbb{R}. \]
Let $P_\tau$ be an orthogonal projection on the subspace
\[ Z = \ker(A) \oplus \operatorname{span}\{ u_k \mid \lambda_k > \tau \}; \]
then, the following inequality holds:
(2.1) \[ (u - P_\tau u)^\top B (u - P_\tau u) \le \tau\, u^\top A u, \quad \forall u \in \mathbb{R}^m. \]
Proof. We refer the reader to [3, Lemma 2.4] and [12, Lemma 7.7] for a detailed proof.
2.1. GenEO coarse space. In [33, 12] the authors present the GenEO coarse space, which relies on defining appropriate symmetric positive semi-definite (SPSD) matrices $\tilde{A}_j \in \mathbb{R}^{n \times n}$ for $j = 1, \ldots, N$. These are the unassembled Neumann matrices, corresponding to the integration on each subdomain of the operator defined in the variational form of the PDE. These matrices are local, i.e., $R_{j,\Delta} \tilde{A}_j = 0$. Furthermore, they verify the relations
\[ u^\top \tilde{A}_j u \le u^\top A u, \quad \forall u \in \mathbb{R}^n, \]
\[ u^\top \sum_{j=1}^{N} \tilde{A}_j u \le k_{\mathrm{GenEO}}\, u^\top A u, \quad \forall u \in \mathbb{R}^n, \]
where $k_{\mathrm{GenEO}} \le N$ is the maximum number of subdomains that share an unknown.
2.2. Local SPSD splitting of an SPD matrix. In [3], the authors present the local SPSD splitting of an SPD matrix. Given the permutation matrix $P_j$, a local SPSD splitting matrix $\tilde{A}_j$ of $A$ associated with subdomain $j$ is defined as
(2.2) \[ P_j \tilde{A}_j P_j^\top = \begin{pmatrix} R_{j,I} A R_{j,I}^\top & R_{j,I} A R_{j,\Gamma}^\top & 0 \\ R_{j,\Gamma} A R_{j,I}^\top & \tilde{A}^j_\Gamma & 0 \\ 0 & 0 & 0 \end{pmatrix}, \]
where $\tilde{A}^j_\Gamma \in \mathbb{R}^{\gamma_j \times \gamma_j}$ satisfies the two following conditions: for all $u \in \mathbb{R}^{\gamma_j}$,
\[ u^\top R_{j,\Gamma} A R_{j,I}^\top \left( R_{j,I} A R_{j,I}^\top \right)^{-1} R_{j,I} A R_{j,\Gamma}^\top u \le u^\top \tilde{A}^j_\Gamma u, \]
\[ u^\top \tilde{A}^j_\Gamma u \le u^\top \left( R_{j,\Gamma} A R_{j,\Gamma}^\top - R_{j,\Gamma} A R_{j,\Delta}^\top \left( R_{j,\Delta} A R_{j,\Delta}^\top \right)^{-1} R_{j,\Delta} A R_{j,\Gamma}^\top \right) u. \]
The authors prove that the matrices $\tilde{A}_j$ defined in such a way verify the following relations:
(2.3) \[ R_{j,\Delta} \tilde{A}_j = 0, \]
(2.4) \[ u^\top \tilde{A}_j u \le u^\top A u, \quad \forall u \in \mathbb{R}^n, \]
(2.5) \[ u^\top \sum_{j=1}^{N} \tilde{A}_j u \le k\, u^\top A u, \quad \forall u \in \mathbb{R}^n, \]
where $k$ is a number that depends on the local SPSD splitting matrices and can be at most equal to the number of subdomains, $k \le N$. The authors also show that the local matrices defined in GenEO [33, 12] can be seen as a local SPSD splitting.
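To illustrate the definition, the sketch below (Python; the index sets interior and overlap are assumed inputs, and the particular choice of $\tilde{A}^j_\Gamma$, namely the lower bound of the admissible interval above, is only one possibility among the splittings allowed by (2.2)) assembles such a matrix for a small model problem and checks relation (2.4) numerically.

# Sketch: one admissible local SPSD splitting \tilde{A}_j from (2.2), with
# \tilde{A}^j_Gamma = A_{Gamma I} A_{II}^{-1} A_{I Gamma} (lower admissible bound),
# followed by a numerical check that A - \tilde{A}_j is positive semi-definite, i.e., (2.4).
import numpy as np

def local_spsd_splitting(A, interior, overlap):
    """A: dense SPD matrix; interior, overlap: index arrays of Omega_{j,I} and Gamma_j."""
    A_II = A[np.ix_(interior, interior)]
    A_IG = A[np.ix_(interior, overlap)]
    A_GI = A[np.ix_(overlap, interior)]
    Atilde = np.zeros_like(A)                       # zero rows and columns on Delta_j
    Atilde[np.ix_(interior, interior)] = A_II
    Atilde[np.ix_(interior, overlap)] = A_IG
    Atilde[np.ix_(overlap, interior)] = A_GI
    Atilde[np.ix_(overlap, overlap)] = A_GI @ np.linalg.solve(A_II, A_IG)
    return Atilde

# Toy check on a 1D Laplacian: subdomain j owns unknowns 0..3, its overlap is {4}.
n = 10
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Atilde = local_spsd_splitting(A, interior=np.arange(4), overlap=np.array([4]))
print(np.linalg.eigvalsh(A - Atilde).min() >= -1e-12)  # relation (2.4) holds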
In [3], the authors highlight that the key idea to construct a coarse space relies on the ability to identify the so-called local SPSD splitting matrices. They present a class of algebraically constructed coarse spaces based on the local SPSD splitting matrices. Moreover, this class can be extended to a larger variety of local SPSD matrices. This extension has the advantage of allowing the construction of efficient coarse spaces for a multilevel structure in a practical way. This is discussed in the following section.
3. Extension of the class of coarse spaces. In this section we extend the class of coarse spaces presented in [3]. To do so, we present a class of matrices that is larger than the class of local SPSD splitting matrices. This will be our main building block in the construction of efficient coarse spaces. Furthermore, this extension can lead to a straightforward construction of hierarchical coarse spaces in a multilevel Schwarz preconditioner setting.
3.1. Extension of the class of local SPSD splitting matrices. Regarding the two-level additive Schwarz method, the authors of [3] introduced the local SPSD splitting related to a subdomain as defined in (2.2). As can be seen from the theory presented in that paper, it is not necessary to have the exact matrices $R_{j,I} A R_{j,I}^\top$, $R_{j,I} A R_{j,\Gamma}^\top$, and $R_{j,\Gamma} A R_{j,I}^\top$ in the definition of the local SPSD splitting in order to build an efficient coarse space. Indeed, the one and only necessary condition is to define for each subdomain $j = 1, \ldots, N$ an SPSD matrix $\tilde{A}_j$ such that
(3.1) \[ R_{j,\Delta} \tilde{A}_j = 0, \qquad u^\top \sum_{j=1}^{N} \tilde{A}_j u \le k\, u^\top A u, \quad \forall u \in \mathbb{R}^n, \]
where $k$ is a number that depends on the local SPSD matrices $\tilde{A}_j$ for $j = 1, \ldots, N$.
The first condition means that $\tilde{A}_j$ has the local SPSD structure associated with subdomain $j$, i.e., it has the following form:
\[ P_j \tilde{A}_j P_j^\top = \begin{pmatrix} \tilde{A}^j_{I,\Gamma} & 0 \\ 0 & 0 \end{pmatrix}, \]
where $\tilde{A}^j_{I,\Gamma} \in \mathbb{R}^{n_j \times n_j}$. The second condition is associated with the stable decomposition property [36, 12]. Note that with regard to the local SPSD matrices, the authors in [33] only use these two conditions. That is to say, with matrices that verify conditions (3.1), the construction of the coarse space is straightforward through the theory presented in either [33] or [3]. To this end, we define in the following the local SPSD (LSPSD) matrix associated with subdomain $j$, as well as the associated local filtering subspace that contributes to the coarse space.
Definition 3.1 (local SPSD matrices). An SPSD matrix $\tilde{A}_{i,j} \in \mathbb{R}^{n_i \times n_i}$ is called local SPSD (LSPSD) with respect to subdomain $j$ if
• $R_{i,j,\Delta} \tilde{A}_{i,j} = 0$,
• $u^\top \sum_{j=1}^{N_i} \tilde{A}_{i,j} u \le k_i\, u^\top A_i u$ for all $u \in \mathbb{R}^{n_i}$,
where $k_i > 0$.
We note that the local SPSD splitting matrices form a subset of the local SPSD
matrices.
3.2. Multilevel coarse spaces. This section summarizes the steps to be per-
formed in order to construct the coarse space at level $i$ once we have the LSPSD
matrices associated with each subdomain at that level.
Definition 3.2 (coarse space based on LSPSD matrices). Let $\tilde{A}_{i,j} \in \mathbb{R}^{n_i \times n_i}$ for $j = 1, \ldots, N_i$ be LSPSD matrices. Let $D_{i,j} \in \mathbb{R}^{n_{i,j} \times n_{i,j}}$ for $j = 1, \ldots, N_i$ be the partition of unity. Let $\tau_i > 0$ be a given number. For a subdomain $j \in [\![1; N_i]\!]$, let
\[ G_{i,j} = D_{i,j} R_{i,j} A_i R_{i,j}^\top D_{i,j}. \]
Let $\tilde{P}_{i,j}$ be the projection on $\operatorname{range}(R_{i,j} \tilde{A}_{i,j} R_{i,j}^\top)$ parallel to $\ker(R_{i,j} \tilde{A}_{i,j} R_{i,j}^\top)$. Let $K_{i,j} = \ker(R_{i,j} \tilde{A}_{i,j} R_{i,j}^\top)$. Consider the generalized eigenvalue problem:
(3.2) \[ \tilde{P}_{i,j} G_{i,j} \tilde{P}_{i,j} u_{i,j,k} = \lambda_{i,j,k}\, R_{i,j} \tilde{A}_{i,j} R_{i,j}^\top u_{i,j,k}, \qquad (u_{i,j,k}, \lambda_{i,j,k}) \in \operatorname{range}(R_{i,j} \tilde{A}_{i,j} R_{i,j}^\top) \times \mathbb{R}. \]
Set
(3.3) \[ Z_{i,j} = K_{i,j} \oplus \operatorname{span}\{ u_{i,j,k} \mid \lambda_{i,j,k} > \tau_i \}. \]
Then, the coarse space associated with the LSPSD matrices $\tilde{A}_{i,j}$ for $j = 1, \ldots, N_i$ at level $i$ is defined as:
(3.4) \[ \mathcal{S}_i = \bigoplus_{j=1}^{N_i} R_{i,j}^\top D_{i,j} Z_{i,j}. \]
Following notations from section 1, the columns of $V_i$ span the coarse space $\mathcal{S}_i$. The matrix $A_{i+1}$ is defined as:
(3.5) \[ A_{i+1} = V_i^\top A_i V_i. \]
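As an illustration of Definition 3.2, the sketch below (Python with SciPy; a serial, dense simplification in which each $R_{i,j} \tilde{A}_{i,j} R_{i,j}^\top$ is assumed nonsingular, so that the projection $\tilde{P}_{i,j}$ is the identity and $K_{i,j}$ is trivial; all input names are hypothetical) solves the generalized eigenvalue problems (3.2), forms the local contributions of (3.3)-(3.4), and assembles the coarse matrix (3.5). Handling a nontrivial kernel requires the projected eigenproblem of Lemma 2.5 instead.

# Sketch: coarse space of Definition 3.2 from dense local data. For each subdomain j,
# solve G_{i,j} u = lambda (R Atilde R^T) u and keep the eigenvectors with lambda > tau;
# the columns of V are the local contributions R^T D Z.
import numpy as np
import scipy.linalg as sla

def coarse_space(A, Atilde_list, dofs_list, D_list, tau):
    """A: dense SPD n x n matrix A_i; Atilde_list[j]: LSPSD matrix of subdomain j (n x n);
    dofs_list[j]: unknowns Omega_{i,j}; D_list[j]: diagonal entries of D_{i,j}."""
    n = A.shape[0]
    columns = []
    for dofs, Atilde, d in zip(dofs_list, Atilde_list, D_list):
        Aj = A[np.ix_(dofs, dofs)]                 # R_{i,j} A_i R_{i,j}^T
        Bj = Atilde[np.ix_(dofs, dofs)]            # R_{i,j} Atilde_{i,j} R_{i,j}^T, assumed SPD
        Gj = np.diag(d) @ Aj @ np.diag(d)          # G_{i,j} = D R A R^T D
        lam, U = sla.eigh(Gj, Bj)                  # generalized eigenvalue problem (3.2)
        Z = U[:, lam > tau]                        # filtering subspace (3.3), kernel omitted
        for z in Z.T:                              # local contribution R^T D z, cf. (3.4)
            col = np.zeros(n)
            col[dofs] = d * z
            columns.append(col)
    V = np.column_stack(columns) if columns else np.zeros((n, 0))
    return V, V.T @ A @ V                          # coarse basis and coarse matrix (3.5)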
The local SPSD splitting matrices at level 1 will play an important role in the
construction of the LSPSD matrices at subsequent levels. In the following, we present
an efficient approach for computing LSPSD matrices for levels greater than 1.
4. Partitioning for levels strictly greater than 1. In this section, we explain how to obtain the partitioning sets $\Omega_{i,j,I}$ for $(i,j) \in [\![2; L]\!] \times [\![1; N_i]\!]$. Once the sets $\Omega_{i,j,I}$ for $j = 1, \ldots, N_i$ are defined at level $i$, the following elements are readily available: the sets $\Gamma_{i,j}$, $\Delta_{i,j}$, and $\Omega_{i,j}$; the restriction operators $R_{i,j,I}$, $R_{i,j,\Gamma}$, $R_{i,j,\Delta}$, and $R_{i,j}$; and the permutation matrices $P_{i,j}$ for $j = 1, \ldots, N_i$. The partition of unity is constructed in an algebraic way: the $m$th diagonal element of $D_{i,j}$ is 1 if $m \le n_{i,j,I}$ and 0 otherwise.
4.1. Superdomains as unions of several subdomains. In this section, we introduce the notion of a superdomain. It refers to the union of several neighboring subdomains. Let $\mathcal{G}_{i,1}, \ldots, \mathcal{G}_{i,N_{i+1}}$ be disjoint subsets of $[\![1; N_i]\!]$ such that $\bigcup_{j=1}^{N_{i+1}} \mathcal{G}_{i,j} = [\![1; N_i]\!]$. We call the union of the subdomains $\{k \in [\![1; N_i]\!] : k \in \mathcal{G}_{i,j}\}$ superdomain $j$, for $j = 1, \ldots, N_{i+1}$. Figure 4.1 gives an example of how to set superdomains. Though this definition of superdomains may look somewhat related to the fine mesh, it is in practice done at the algebraic level, as explained later on. Note that the indices of columns and rows of $A_{i+1}$ are associated with the vectors contributed by the subdomains at level $i$ in order to build the coarse space $\mathcal{S}_i$, see Figure 4.2. Hence, defining subdomains on the structure of $A_{i+1}$ is natural once we have the subsets $\mathcal{G}_{i,j}$, for $j = 1, \ldots, N_{i+1}$.
Fig. 4.1. Left: 16 subdomains at level 1. Right: 4 superdomains at level 1, $\mathcal{G}_{1,j} = [\![4(j-1)+1; 4(j-1)+4]\!]$.
Fig. 4.2. Illustration of the correspondence of indices between the columns of $V_i$ (left) and the rows and columns of $A_{i+1}$ (right). Having no overlap in $V_i$ is possible through a non-overlapping partition of unity.
4.2. Heritage from superdomains. Let $e_{i,j}$ be the set of indices of the vectors that span $R_{i,j}^\top D_{i,j} Z_{i,j}$ in the matrix $V_i$, for some $(i,j) \in [\![1; L-1]\!] \times [\![1; N_i]\!]$, see Figure 4.2. We define $\Omega_{i+1,j,I} = \bigcup_{k \in \mathcal{G}_{i,j}} e_{i,k}$, for $j = 1, \ldots, N_{i+1}$. We denote $\Omega_{i+1,j,\Gamma}$ the subset of $[\![1; n_{i+1}]\!] \setminus \Omega_{i+1,j,I}$ whose elements are at distance 1 from $\Omega_{i+1,j,I}$ through the graph of $A_{i+1}$. We note that
\[ \Omega_{i+1,j,\Gamma} \subset \bigcup_{p \in \mathcal{G}_{i,j}} \bigcup_{k \in \mathcal{V}_{i,p}} e_{i,k}, \]
where $\mathcal{V}_{i,j}$ represents the set of subdomains that are neighbors of subdomain $j$ at level $i$, for $j = 1, \ldots, N_i$. The overlapping subdomain $j$ is defined by the set $\Omega_{i+1,j} = [\Omega_{i+1,j,I}, \Omega_{i+1,j,\Gamma}]$. The rest of the sets, restriction, and prolongation operators can be defined as given in section 1.
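The following sketch (Python with SciPy sparse; the per-subdomain column index sets $e_{i,j}$ and the superdomain grouping $\mathcal{G}_{i,j}$ are assumed inputs) derives, for each superdomain, the interior set $\Omega_{i+1,j,I}$, the overlapping unknowns at distance 1 through the graph of $A_{i+1}$, and the diagonal of the non-overlapping partition of unity described at the beginning of this section.

# Sketch: level-(i+1) partitioning inherited from superdomains.
# e[k]: columns of V_i contributed by subdomain k (the sets e_{i,k});
# groups[j]: level-i subdomains forming superdomain j (the sets G_{i,j}).
import numpy as np
import scipy.sparse as sp

def next_level_partitioning(A_next, e, groups):
    """A_next: sparse coarse matrix A_{i+1}; returns, per superdomain,
    (interior set, overlapping set Omega_{i+1,j}, diagonal of D_{i+1,j})."""
    Acsr = sp.csr_matrix(A_next)
    partitions = []
    for members in groups:
        interior = np.unique(np.concatenate([e[k] for k in members]))   # Omega_{i+1,j,I}
        reachable = np.unique(Acsr[interior, :].indices)                # distance <= 1 in the graph
        gamma = np.setdiff1d(reachable, interior)                       # Omega_{i+1,j,Gamma}
        omega = np.concatenate([interior, gamma])                       # Omega_{i+1,j}
        d = np.concatenate([np.ones(len(interior)), np.zeros(len(gamma))])  # diag(D_{i+1,j})
        partitions.append((interior, omega, d))
    return partitions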
5. LSPSD matrices for levels strictly greater than 1. In [33, 12, 3], different methods are suggested to obtain local SPSD splitting matrices at level 1. These matrices are used to construct efficient two-level additive Schwarz preconditioners. In this section, we do not discuss the construction of these matrices at level 1; we suppose that we have the local SPSD matrices $\tilde{A}_{1,j} \in \mathbb{R}^{n_1 \times n_1}$ for $j = 1, \ldots, N_1$. We focus on computing LSPSD matrices $\tilde{A}_{i,j} \in \mathbb{R}^{n_i \times n_i}$ for $(i,j) \in [\![2; L]\!] \times [\![1; N_i]\!]$. We also suppose that the coarse space $\mathcal{S}_1$ is available, i.e., the matrices $V_1$ and $A_2 = V_1^\top A_1 V_1$ are known explicitly.
Proposition 5.1. Let $i$ be a fixed level index, and let $\tilde{A}_{i,j}$ be an LSPSD matrix of $A_i$ (see Definition 3.1) associated with subdomain $j$, for $j = 1, \ldots, N_i$. Let $\mathcal{G}_{i,1}, \ldots, \mathcal{G}_{i,N_{i+1}}$ be a set of superdomains at level $i$ associated with the partitioning at level $i+1$, see subsection 4.1. Let $V_i^\top$ be the restriction matrix to the coarse space at level $i$. Then, the matrix $\tilde{A}_{i+1,j}$ defined as
\[ \tilde{A}_{i+1,j} = \sum_{k \in \mathcal{G}_{i,j}} V_i^\top \tilde{A}_{i,k} V_i \]
satisfies the conditions in Definition 3.1. That is, $\tilde{A}_{i+1,j}$ is LSPSD of $A_{i+1}$ with respect to subdomain $j$, for $j = 1, \ldots, N_{i+1}$.
Proof. To prove that $\tilde{A}_{i+1,j}$ is LSPSD of $A_{i+1}$ with respect to subdomain $j$, we have to prove the following:
• $R_{i+1,j,\Delta} \tilde{A}_{i+1,j} = 0$,
• $u^\top \sum_{j=1}^{N_{i+1}} \tilde{A}_{i+1,j} u \le k_{i+1}\, u^\top A_{i+1} u$ for all $u \in \mathbb{R}^{n_{i+1}}$.
First, note that $R_{i,k} \tilde{A}_{i,j} = 0$ for all non-neighboring subdomains $k$ of subdomain $j$. This yields $Z_{i,k}^\top D_{i,k} R_{i,k} \tilde{A}_{i,j} = 0$ for these subdomains $k$.
Now, let $m \in [\![1; n_{i+1}]\!] \setminus \Omega_{i+1,j}$. We will show that the $m$th row of $\tilde{A}_{i+1,j}$ is zero. Following the partitioning of subdomains at level $i+1$, there exists a subdomain $\Omega_{p_0}$ such that the $m$th column of $V_i$ is part of $R_{i,p_0}^\top D_{i,p_0} Z_{i,p_0}$. We denote this column vector by $v_m$. Furthermore, the subdomain $p_0$ is not a neighbor of any subdomain that is part of the superdomain $\mathcal{G}_{i,j}$. Hence, $v_m^\top \tilde{A}_{i,k} = 0$ for $k \in \mathcal{G}_{i,j}$. The $m$th row of $\tilde{A}_{i+1,j}$ is given as $v_m^\top \sum_{k \in \mathcal{G}_{i,j}} \tilde{A}_{i,k} V_i$. Then, $v_m^\top \sum_{k \in \mathcal{G}_{i,j}} \tilde{A}_{i,k} = 0$, and the $m$th row of $\tilde{A}_{i+1,j}$ is zero.
To prove the second condition, we have
\[ u^\top \sum_{j=1}^{N_{i+1}} \tilde{A}_{i+1,j} u = u^\top \sum_{j=1}^{N_{i+1}} \sum_{k \in \mathcal{G}_{i,j}} V_i^\top \tilde{A}_{i,k} V_i u. \]
Since $\{\mathcal{G}_{i,j}\}_{1 \le j \le N_{i+1}}$ form a disjoint partitioning of $[\![1; N_i]\!]$, we can write
\[ u^\top \sum_{j=1}^{N_{i+1}} \tilde{A}_{i+1,j} u = u^\top \sum_{k=1}^{N_i} V_i^\top \tilde{A}_{i,k} V_i u = u^\top V_i^\top \left( \sum_{k=1}^{N_i} \tilde{A}_{i,k} \right) V_i u. \]
$\tilde{A}_{i,k}$ is an LSPSD matrix of $A_i$ for $k = 1, \ldots, N_i$. Hence, we have
\[ u^\top \sum_{j=1}^{N_{i+1}} \tilde{A}_{i+1,j} u \le k_i\, u^\top V_i^\top A_i V_i u \le k_i\, u^\top A_{i+1} u. \]
We finish the proof by setting $k_{i+1} = k_i$.
Figure 5.1 gives an illustration of the LSPSD construction provided by Proposition 5.1. Figure 5.1 (top left) represents the matrix $A_1$. The graph of $A_1$ is partitioned into 16 subdomains. Each subdomain is represented by a different color. Figure 5.1 (top right) represents the matrix $V_1$ whose column vectors form a basis of the coarse space $\mathcal{S}_1$. Colors of columns of $V_1$ correspond to those of subdomains in $A_1$. Figure 5.1 (bottom left) represents the matrix $A_2 = V_1^\top A_1 V_1$. Note that column and row indices of $A_2$ are associated with column indices of $V_1$. Four subdomains are used at level 2. The partitioning at level 2 is related to the superdomains $\mathcal{G}_{1,j} = [\![4(j-1)+1; 4(j-1)+4]\!]$ for $j = 1, \ldots, 4$. Figure 5.1 (bottom right) represents an LSPSD matrix of $A_2$ with respect to subdomain 1 at level 2.
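In code, Proposition 5.1 amounts to a projection of the level-$i$ LSPSD matrices onto the coarse basis, as in the minimal sketch below (Python, dense, with hypothetical inputs): the result is a small $n_{i+1} \times n_{i+1}$ matrix whose nonzero rows and columns are confined to the coarse unknowns contributed by the superdomain.

# Sketch of Proposition 5.1: Atilde_{i+1,j} = sum over k in G_{i,j} of V_i^T Atilde_{i,k} V_i.
import numpy as np

def lspsd_next_level(V, Atilde_list, superdomain):
    """V: dense n_i x n_{i+1} coarse basis V_i; Atilde_list[k]: level-i LSPSD matrices;
    superdomain: index set G_{i,j} (illustrative inputs)."""
    return sum(V.T @ Atilde_list[k] @ V for k in superdomain)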
Theorem 5.2 shows that the third condition of the fictitious subspace lemma (Lemma 2.1) holds at level $i$ for $i = 1, \ldots, L$.
Theorem 5.2. Let $\tilde{A}_{i,j}$ be an LSPSD matrix of $A_i$ associated with subdomain $j$, for $(i,j) \in [\![1; L]\!] \times [\![1; N_i]\!]$. Let $\tau_i > 0$, let $Z_{i,j}$ be the subspace associated with $\tilde{A}_{i,j}$, and let $P_{i,j}$ be the projection on $Z_{i,j}$ as defined in Lemma 2.5. Let $u_i \in \mathbb{R}^{n_i}$ and let $u_{i,j} = D_{i,j} \left( I_{n_{i,j}} - P_{i,j} \right) R_{i,j} u_i$ for $(i,j) \in [\![1; L]\!] \times [\![1; N_i]\!]$. Let $u_{i,0}$ be defined as
\[ u_{i,0} = \left( V_i^\top V_i \right)^{-1} V_i^\top \left( \sum_{j=1}^{N_i} R_{i,j}^\top D_{i,j} P_{i,j} R_{i,j} u_i \right). \]
Let $m_i = \left( 2 + (2 k_{i,c} + 1) k_i \tau_i \right)^{-1}$. Then,
\[ u_i = \sum_{j=0}^{N_i} R_{i,j}^\top u_{i,j}, \]
and
(5.1) \[ m_i \sum_{j=0}^{N_i} u_{i,j}^\top R_{i,j} A_i R_{i,j}^\top u_{i,j} \le u_i^\top A_i u_i. \]
Proof. We have
\[ \sum_{j=0}^{N_i} R_{i,j}^\top u_{i,j} = V_i \left( V_i^\top V_i \right)^{-1} V_i^\top \left( \sum_{j=1}^{N_i} R_{i,j}^\top D_{i,j} P_{i,j} R_{i,j} u_i \right) + \sum_{j=1}^{N_i} R_{i,j}^\top u_{i,j}. \]
Fig. 5.1. Illustration of the LSPSD construction provided by Proposition 5.1. Top left: the matrix $A_1$; top right: $V_1$; bottom left: the matrix $A_2 = V_1^\top A_1 V_1$; bottom right: $\tilde{A}_{2,1} = \sum_{j \in \mathcal{G}_{1,1}} V_1^\top \tilde{A}_{1,j} V_1$, where $\mathcal{G}_{1,1} = \{1, \ldots, 4\}$.
Since for all $y \in \mathcal{S}_i$, $V_i \left( V_i^\top V_i \right)^{-1} V_i^\top y = y$, we have
\[ \sum_{j=0}^{N_i} R_{i,j}^\top u_{i,j} = \sum_{j=1}^{N_i} R_{i,j}^\top D_{i,j} P_{i,j} R_{i,j} u_i + \sum_{j=1}^{N_i} R_{i,j}^\top D_{i,j} \left( I_{n_{i,j}} - P_{i,j} \right) R_{i,j} u_i = \sum_{j=1}^{N_i} R_{i,j}^\top D_{i,j} R_{i,j} u_i = u_i. \]
To prove the inequality (5.1), we start with the inequality from Lemma 2.4. We have
(5.2) \[ \sum_{j=0}^{N_i} u_{i,j}^\top R_{i,j} A_i R_{i,j}^\top u_{i,j} \le 2\, u_i^\top A_i u_i + (2 k_{i,c} + 1) \sum_{j=1}^{N_i} u_{i,j}^\top R_{i,j} A_i R_{i,j}^\top u_{i,j}, \]
where we chose $u_{B_i}$ in Lemma 2.4 to be $(u_{i,j})_{j=0,\ldots,N_i}$ and $u_{A_i} = u_i$. In Definition 3.2, we defined $Z_{i,j}$ such that for all $w \in \mathbb{R}^{n_{i,j}}$ we have
\[ \left( \left( I_{n_{i,j}} - P_{i,j} \right) w \right)^\top D_{i,j} R_{i,j} A_i R_{i,j}^\top D_{i,j} \left( I_{n_{i,j}} - P_{i,j} \right) w \le \tau_i\, w^\top R_{i,j} \tilde{A}_{i,j} R_{i,j}^\top w. \]
Hence, in the special case $w = R_{i,j} u_i$, we can write
\[ \left( \left( I_{n_{i,j}} - P_{i,j} \right) R_{i,j} u_i \right)^\top D_{i,j} R_{i,j} A_i R_{i,j}^\top D_{i,j} \left( I_{n_{i,j}} - P_{i,j} \right) R_{i,j} u_i \le \tau_i \left( R_{i,j} u_i \right)^\top R_{i,j} \tilde{A}_{i,j} R_{i,j}^\top \left( R_{i,j} u_i \right). \]
Equivalently,
\[ u_{i,j}^\top R_{i,j} A_i R_{i,j}^\top u_{i,j} \le \tau_i \left( R_{i,j} u_i \right)^\top R_{i,j} \tilde{A}_{i,j} R_{i,j}^\top \left( R_{i,j} u_i \right). \]
Plugging this inequality into (5.2) gives
\[ \sum_{j=0}^{N_i} u_{i,j}^\top R_{i,j} A_i R_{i,j}^\top u_{i,j} \le 2\, u_i^\top A_i u_i + (2 k_{i,c} + 1)\, \tau_i \sum_{j=1}^{N_i} \left( R_{i,j} u_i \right)^\top R_{i,j} \tilde{A}_{i,j} R_{i,j}^\top \left( R_{i,j} u_i \right). \]
Since $\tilde{A}_{i,j}$ is local, we have
\[ \left( R_{i,j} u_i \right)^\top R_{i,j} \tilde{A}_{i,j} R_{i,j}^\top \left( R_{i,j} u_i \right) = u_i^\top \tilde{A}_{i,j} u_i, \quad \text{for } j = 1, \ldots, N_i. \]
By using the fact that $\tilde{A}_{i,j}$ is LSPSD of $A_i$ for $j = 1, \ldots, N_i$, we obtain the following:
\[ \sum_{j=0}^{N_i} u_{i,j}^\top R_{i,j} A_i R_{i,j}^\top u_{i,j} \le 2\, u_i^\top A_i u_i + (2 k_{i,c} + 1)\, k_i \tau_i\, u_i^\top A_i u_i. \]
Multiplying both sides by $m_i$ ends the proof, i.e.,
\[ m_i \sum_{j=0}^{N_i} u_{i,j}^\top R_{i,j} A_i R_{i,j}^\top u_{i,j} \le u_i^\top A_i u_i. \]
In [3], the authors presented the minimal subspace that can replace $Z_{i,j}$ (defined in (3.3) and used in Theorem 5.2) while still allowing Theorem 5.2 to be proved. The main difference with respect to the subspace that we define in (3.3) is that it is not necessary to include the entire kernel of the LSPSD matrix, $K_{i,j}$, in $Z_{i,j}$, see Definition 3.2. Nevertheless, in this work, we include the entire kernel of the LSPSD matrix in the definition of $Z_{i,j}$. This allows us to ensure that the kernels of the Neumann matrices are transferred across the levels, see Theorem 5.4. In addition, this corresponds to the definition used in GenEO [12, Lemma 7.7] and to its implementation in the HPDDM library [19].
Theorem 5.3 provides an upper bound on the condition number of the preconditioned matrix $M_i^{-1} A_i$ for $i = 1, \ldots, L$.
Theorem 5.3. Let $M_i$ be the additive Schwarz preconditioner at level $i$ combined with the coarse space correction induced by $\mathcal{S}_i$ defined in (3.4). The following inequality holds:
\[ \kappa\!\left( M_i^{-1} A_i \right) \le (k_{i,c} + 1) \left( 2 + (2 k_{i,c} + 1) k_i \tau_i \right). \]
Proof. Lemma 2.2, Lemma 2.3, and Theorem 5.2 prove that the multilevel preconditioner verifies the conditions in Lemma 2.1 at each level $i$. Hence, the spectrum of the preconditioned matrix $M_i^{-1} A_i$ is contained in the interval $\left[ \left( 2 + (2 k_{i,c} + 1) k_i \tau_i \right)^{-1},\, k_{i,c} + 1 \right]$. Equivalently, the condition number of the preconditioned matrix at level $i$ verifies the following inequality:
\[ \kappa\!\left( M_i^{-1} A_i \right) \le (k_{i,c} + 1) \left( 2 + (2 k_{i,c} + 1) k_i \tau_i \right). \]
Proposition 5.1 shows that the constant $k_i$ associated with the LSPSD matrices at level $i$ is independent of the number of levels and bounded by the number of subdomains at level 1. Indeed,
\[ k_1 \ge k_i \quad \text{for } i = 2, \ldots, L. \]
Furthermore, in the case where the LSPSD matrices at the first level are the Neumann matrices, $k_i$ is bounded by the maximum number of subdomains at level 1 that share an unknown.
The constant $k_{i,c}$ for $i = 1, \ldots, L$ is the minimum number of distinct colors so that $\operatorname{span}\{R_{i,j}^\top\}_{1 \le j \le N_i}$ of the same color are mutually $A_i$-orthogonal. Both constants $k_i$ and $k_{i,c}$ are independent of the number of subdomains for each level $i$.
The constant $\tau_i$ can be chosen such that the condition number of the preconditioned system at level $i$ is bounded above by a prescribed value. Hence, this allows for a robust convergence of the preconditioned Krylov solver at each level.
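Purely as an illustration with assumed values, if $k_{i,c} = 3$ and $k_i = 4$, choosing $\tau_i = 0.5$ yields $\kappa(M_i^{-1} A_i) \le (3+1)\,\bigl(2 + (2 \cdot 3 + 1) \cdot 4 \cdot 0.5\bigr) = 4 \cdot 16 = 64$, while $\tau_i = 0.25$ lowers the bound to $4 \cdot 9 = 36$ at the price of retaining more eigenvectors, hence a larger coarse space.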
Algorithm 5.1 presents the construction of the multilevel additive Schwarz method
by using GenEO. The algorithm iterates over the levels. At each level, three main
operations are performed. First, the construction of the LSPSD matrices. At level 1,
the LSPSD matrices are the Neumann matrices, otherwise, Proposition 5.1 is used
to compute them. Once the LSPSD matrix is available, the generalized eigenvalue
problem in (3.2) has to be solved concurrently. Given the prescribed upper bound on
the condition number, Zi,j can be set. Finally, the coarse space is available and the
coarse matrix is assembled.
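To complement the setup described by Algorithm 5.1, the following sketch (Python with SciPy; dense, serial, and with hypothetical input names) illustrates how the resulting hierarchy can be applied recursively: at every level except the coarsest, $A_{i+1}^{-1}$ is replaced by an inner Krylov solve preconditioned by the next level, in the spirit of the outer flexible GMRES used in section 6 (SciPy's GMRES is not flexible and is used here only for the sketch).

# Sketch: recursive application of M_i^{-1} v = V_i A_{i+1}^{-1} V_i^T v
#                                              + sum_j R_{i,j}^T A_{i,j}^{-1} R_{i,j} v,
# where A_{i+1}^{-1} is applied iteratively except on the last level (direct solve).
import numpy as np
import scipy.sparse.linalg as spla

def apply_preconditioner(v, level, A_levels, V_levels, dofs_levels):
    """A_levels: [A_1, ..., A_{L+1}] (dense); V_levels[i]: coarse basis of level i;
    dofs_levels[i][j]: overlapping unknowns of subdomain j at level i (assumed inputs)."""
    A, V = A_levels[level], V_levels[level]
    rhs = V.T @ v
    if level + 1 == len(A_levels) - 1:                 # next level is the coarsest one
        coarse = np.linalg.solve(A_levels[level + 1], rhs)
    else:                                              # inner Krylov solve, preconditioned
        op = spla.aslinearoperator(A_levels[level + 1])
        prec = spla.LinearOperator(op.shape, matvec=lambda r: apply_preconditioner(
            r, level + 1, A_levels, V_levels, dofs_levels))
        coarse, _ = spla.gmres(op, rhs, M=prec, maxiter=20)
    y = V @ coarse
    for dofs in dofs_levels[level]:                    # local additive Schwarz corrections
        y[dofs] += np.linalg.solve(A[np.ix_(dofs, dofs)], v[dofs])
    return y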
The following theorem, Theorem 5.4, describes how the kernels of the Neumann matrices are transferred across the levels.
Theorem 5.4. Suppose that $\tilde{A}_{1,j}$ is the Neumann matrix associated with the subdomain $\Omega_{1,j}$ for $j \in [\![1; N_1]\!]$. For $(i,j) \in [\![2; L]\!] \times [\![1; N_i]\!]$, let
• $\tilde{A}_{i,j}$ be the LSPSD matrix associated with $A_{i,j}$ defined in Proposition 5.1,
• $\mathcal{G}_{i-1,j}$ be the corresponding superdomain,
• $\mathcal{G}^1_{i-1,j}$ be the union of subdomains at level 1 which contribute hierarchically to obtain $\mathcal{G}_{i-1,j}$,
• $\tilde{A}_{\mathcal{G}_{i-1,j}}$ be the Neumann matrix associated with $\mathcal{G}^1_{i-1,j}$ (seeing $\mathcal{G}^1_{i-1,j}$ as a subdomain),
• $A_{\mathcal{G}_{i-1,j}}$ be the restriction of $A$ to the subdomain $\mathcal{G}^1_{i-1,j}$.
Then, the kernel of $\tilde{A}_{\mathcal{G}_{i-1,j}}$ is included in the kernel of $\left( \prod_{l=1}^{i-1} V_l \right) \tilde{A}_{i,j} \left( \prod_{l=1}^{i-1} V_l \right)^{\!\top}$.
Proof. First, note that for any LSPSD matrix computed as in Proposition 5.1, we have
\[ \left( \prod_{l=1}^{i-1} V_l \right) \tilde{A}_{i,j} \left( \prod_{l=1}^{i-1} V_l \right)^{\!\top} = \left( \prod_{l=1}^{i-1} V_l \right) \left( \prod_{l=1}^{i-1} V_l \right)^{\!\top} \left( \sum_{k \in \mathcal{G}^1_{i-1,j}} \tilde{A}_{1,k} \right) \left( \prod_{l=1}^{i-1} V_l \right) \left( \prod_{l=1}^{i-1} V_l \right)^{\!\top}. \]
Algorithm 5.1 Multilevel GenEO
Require: $A_1 = A \in \mathbb{R}^{n \times n}$ SPD, $L+1$ the number of levels, $N_i$ the number of subdomains at each level, $\mathcal{G}_{i,j}$ the sets of superdomains
Ensure: a preconditioner $M_i^{-1}$ at each level $i$, with a bounded condition number of $M_i^{-1} A_i$
1: for $i = 1, \ldots, L$ do
2:   for each subdomain $j = 1, \ldots, N_i$ do
3:     $A_{i,j} = R_{i,j} A_i R_{i,j}^\top$ (local matrix associated with subdomain $j$)
4:     if $i = 1$ then
5:       the local SPSD matrix $\tilde{A}_{i,j}$ is the Neumann matrix of subdomain $j$
6:     else
7:       compute the local SPSD matrix as $\tilde{A}_{i,j} = \sum_{k \in \mathcal{G}_{i-1,j}} V_{i-1}^\top \tilde{A}_{i-1,k} V_{i-1}$
8:     end if
9:     solve the generalized eigenvalue problem (3.2), set $Z_{i,j}$ as in (3.3)
10:   end for
11:   $\mathcal{S}_i = \bigoplus_{j=1}^{N_i} R_{i,j}^\top D_{i,j} Z_{i,j}$, $V_i$ a basis of $\mathcal{S}_i$
12:   coarse matrix $A_{i+1} = V_i^\top A_i V_i$, $A_{i+1} \in \mathbb{R}^{n_{i+1} \times n_{i+1}}$
13: end for
14: $M_i^{-1} = V_i A_{i+1}^{-1} V_i^\top + \sum_{j=1}^{N_i} R_{i,j}^\top A_{i,j}^{-1} R_{i,j}$
Moreover, due to the fact that $\tilde{A}_{\mathcal{G}_{i-1,j}}$ and $\tilde{A}_{1,k}$ are Neumann matrices, we have
\[ u^\top \tilde{A}_{\mathcal{G}_{i-1,j}} u \le u^\top \sum_{k \in \mathcal{G}^1_{i-1,j}} \tilde{A}_{1,k} u \le k_1\, u^\top \tilde{A}_{\mathcal{G}_{i-1,j}} u. \]
On the one hand, the kernels of $\tilde{A}_{1,k}$ for $k \in \mathcal{G}^1_{i-1,j}$ are included, by construction, in the image of $V_1$, see Definition 3.2. So is their intersection, which is the kernel of $\sum_{k \in \mathcal{G}^1_{i-1,j}} \tilde{A}_{1,k}$. On the other hand, the previous two-sided inequality implies that the kernels of $\tilde{A}_{\mathcal{G}_{i-1,j}}$ and $\sum_{k \in \mathcal{G}^1_{i-1,j}} \tilde{A}_{1,k}$ are identical. Hence, the kernel of $\tilde{A}_{\mathcal{G}_{i-1,j}}$ is included in the image of $Q Q^\top$, where $Q = \prod_{l=1}^{i-1} V_l$.
Theorem 5.4 proves that the kernel of the Neumann matrix of a union of subdomains at level 1 that hierarchically contribute to form a subdomain at level $i$ is preserved by the construction of the hierarchical coarse spaces. For example, in the case of linear elasticity, it is essential to include the rigid body motions in the coarse space in order to have fast convergence. As these are included in the kernel of the Neumann matrix of the subdomain, the hierarchical coarse space consequently includes them.
6. Numerical experiments. In this section, the developed theory is validated
numerically with FreeFEM [14] for finite element discretizations and HPDDM [19]
for domain decomposition methods. We present numerical experiments on two highly
challenging problems illustrating the efficiency and practical usage of the proposed
method. For both problems, we use $N_1 = 2{,}048$ MPI processes (equal to the number
of subdomains at level 1), and the domain partitioning is performed using ParMETIS
[22], with no control on the alignments of subdomain interfaces. We compare the
two-level GenEO preconditioner and its multilevel extension by varying $N_2$ between 4 and 256. For the two-level method, $N_2$ corresponds to the number of MPI processes that solve the coarse problem in a distributed fashion using MKL CPARDISO [17]. For the multilevel method, $N_3$ is set to 1, i.e., a three-level method is used. The goal
of these numerical experiments is to show that when one switches from a two-level
method with an exact coarse solver, to our proposed multilevel method, the number
of outer iterations is not impacted. Thus, three levels are sufficient. As an outer
solver, since all levels but the coarsest are solved approximately, the flexible GMRES
[31] is used. It is stopped when relative unpreconditioned residuals are lower than
$10^{-6}$. Subdomain matrices $\{A_{i,j}\}_{1 \le i \le 2,\, 1 \le j \le N_i}$ are factorized concurrently using MKL PARDISO, and eigenvalue problems are solved using ARPACK [24]. In both two- and three-level GenEO, we factorize the local matrices $A_{1,j}$ for $j \in [\![1; N_1]\!]$ and solve the generalized eigenvalue problems concurrently at the first level. For this reason, we do not take into account the time needed for these two steps, which are performed without any communication between MPI processes. We compare the time needed to assemble and factorize $A_2$ in the two-level approach against the time needed to assemble $A_2$ and the local SPSD matrices $\tilde{A}_{2,j}$ for $j \in [\![1; N_2]\!]$, solve the generalized eigenvalue problems concurrently on the second level, and assemble and factorize the matrix $A_3$ in the three-level approach. We also compare the time spent in the outer Krylov solver during the solution phase. Readers interested in a comparison of the efficiency of GenEO and multigrid methods such as GAMG [1] are referred to [18]. FreeFEM scripts used to produce the following results are available at the following URL: https://github.com/prj-/aldaas2019multi¹.
6.1. Diffusion test cases. The scalar diffusion equation with a highly heterogeneous coefficient $\kappa$ is solved in $[0,1]^d$ ($d = 2$ or $3$). The strong formulation of the equation is:
\[ -\nabla \cdot (\kappa \nabla u) = 1 \ \text{in } \Omega, \qquad u = 0 \ \text{on } \Gamma_D, \qquad \frac{\partial u}{\partial n} = 0 \ \text{on } \Gamma_N. \]
The exterior normal vector to the boundary of $\Omega$ is denoted $n$. $\Gamma_D$ is the subset of the boundary of $\Omega$ corresponding to $x = 0$ in 2D and 3D. $\Gamma_N$ is defined as the complement of $\Gamma_D$ with respect to the boundary of $\Omega$. We discretize the equation using $\mathbb{P}_2$ and $\mathbb{P}_4$ finite elements in the 3D and 2D test cases, respectively. The number of unknowns is $441 \times 10^6$ and $784 \times 10^6$, with approximately 28 and 24 nonzero elements per row in the 3D and 2D cases, respectively. The heterogeneity is due to the jumps in the diffusion coefficient $\kappa$, see Figure 6.1, which is modeled using a combination of jumps and channels, cf. the file coefficients.idp from https://github.com/prj-/aldaas2019multi.
The results in two dimensions are reported in Table 6.1. The number of outer
iterations for both two- and three-level GenEO is 32. The size of the level 2 operator
is $n_2 = 25 \times 2{,}048 = 51{,}200$. In all numerical results, the number of eigenvectors per
subdomain, here 25, is fixed. This is because ARPACK cannot a priori compute all
eigenpairs below a certain threshold, and an upper bound has to be provided instead.
¹ Note to reviewers: the repository is now public.
Fig. 6.1. Variation of the coefficient $\kappa$ used for the diffusion test case ($\kappa$ ranges from 1 to $1.7 \times 10^6$).
HPDDM is capable of filtering the eigenpairs for which eigenvalues are above the user-
specified GenEO threshold from Lemma 2.5. However, this means that the coarse
operator may be unevenly distributed. With a fixed number of eigenvectors per sub-
domain, it is possible to use highly optimized uniform MPI routines and block matrix
formats. Hence, for performance reasons, all eigenvectors computed by ARPACK are
kept when building coarse operators. It is striking that the multilevel method does not
deteriorate the numerical performance of the outer solver. For the two-level method,
the first column corresponds to the time needed to assemble the Galerkin operator $A_2$ from (3.5) (assuming $V_1$ has already been computed by ARPACK) and to factorize it using $N_2$ MPI processes. For the three-level method, the first column corresponds to the time needed to assemble the level 2 local subdomain matrices $\{A_{2,j}\}_{1 \le j \le N_2}$ and the level 2 local SPSD matrices, solve the generalized eigenvalue problem (3.2) concurrently, assemble the Galerkin operator $A_3$, and factorize it on a single process. The size of the level 3 operator is $n_3 = 20 \times N_2$. For both two- and three-level methods, the
second column is the time spent in the outer Krylov solver once the preconditioner
has been set up. In the last column of the three-level method, the number of inner
iterations for solving systems involving A2, which is not inverted exactly anymore,
is reported. For all tables, this column is an average over all successive outer itera-
tions. Another important numerical property of our method is that, thanks to fully
controlled bounds at each level, the number of inner iterations is low, independently
of the number of superdomains $N_2$. Because this problem is not large enough, it is still tractable by a two-level method, for which HPDDM is highly optimized. Thus, there is no performance gain to be expected at this scale. However, one can notice that the construction of the coarse operator(s) scales nicely with $N_2$ for the
three-level method, whereas the performance of the direct solver MKL CPARDISO
quickly stagnates because of the finer and finer parallel workload granularity.
The results in three dimensions are reported in Table 6.2. The number of outer
iterations for both the two- and three-level GenEO is 19. The observations made
in two dimensions still hold, and the dimensions of $A_2$ and $A_3$ are the same. Once
again, it is important to note that the number of outer iterations is the same for both
methods.
                 two-level GenEO                      three-level GenEO
 N2       CS     solve   % of nnz A2          CS     solve   inner it.   % of nnz A3
   4      2.4    11.9    0.19                 6.5    27.4    14          56.0
  16      1.8    11.3    0.19                 3.6    15.4    15          19.0
  64      1.9    12.1    0.19                 3.0    16.7    14           5.5
 256      2.4    18.4    0.19                 2.8    13.9    13           1.4
Table 6.1
Diffusion 2D test case, comparison between two- and three-level GenEO. CS and solve times are in seconds. The percentage of nonzero entries in A1 is 0.3%.
                 two-level GenEO                      three-level GenEO
 N2       CS     solve   % of nnz A2          CS     solve   inner it.   % of nnz A3
   4      7.0    20.9    0.36                16.9    43.6    17          62.0
  16      5.0    19.8    0.36                 7.7    26.7    17          28.0
  64      5.1    20.1    0.36                 5.8    32.7    15           8.9
 256      5.2    24.1    0.36                 5.3    22.6    14           2.6
Table 6.2
Diffusion 3D test case, comparison between two- and three-level GenEO. CS and solve times are in seconds. The percentage of nonzero entries in A1 is 0.5%.
6.2. Linear elasticity test cases. The system of linear elasticity with highly heterogeneous elastic moduli is solved in 2D and 3D. The strong formulation of the equation is given as:
(6.1) \[ \operatorname{div} \sigma(u) + f = 0 \ \text{in } \Omega, \qquad u = 0 \ \text{on } \Gamma_D, \qquad \sigma(u) \cdot n = 0 \ \text{on } \Gamma_N. \]
The physical domain $\Omega$ is a beam of dimensions $[0,10] \times [0,1]$, extruded for $z \in [0,1]$ in 3D. The Cauchy stress tensor $\sigma(\cdot)$ is given by Hooke's law: it can be expressed in terms of Young's modulus $E$ and Poisson's ratio $\nu$,
\[ \sigma_{ij}(u) = \begin{cases} 2\mu\, \varepsilon_{ij}(u), & i \ne j, \\ 2\mu\, \varepsilon_{ii}(u) + \lambda \operatorname{div}(u), & i = j, \end{cases} \]
where
\[ \varepsilon_{ij}(u) = \frac{1}{2}\left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \right), \qquad \mu = \frac{E}{2(1+\nu)}, \qquad \text{and} \qquad \lambda = \frac{E\nu}{(1+\nu)(1-2\nu)}. \]
The exterior normal vector to the boundary of $\Omega$ is denoted $n$. $\Gamma_D$ is the subset of the boundary of $\Omega$ corresponding to $x = 0$ in 2D and 3D. $\Gamma_N$ is defined as the complement of $\Gamma_D$ with respect to the boundary of $\Omega$. We discretize (6.1) using the following vectorial finite elements: $(\mathbb{P}_2, \mathbb{P}_2, \mathbb{P}_2)$ in 3D and $(\mathbb{P}_3, \mathbb{P}_3)$ in 2D. The number of unknowns is $146 \times 10^6$ and $847 \times 10^6$, with approximately 82 and 34 nonzero elements per row in the 3D and 2D cases, respectively. The heterogeneity is due to the jumps in $E$ and $\nu$. We consider discontinuous piecewise constant values for $E$ and $\nu$: $(E_1, \nu_1) = (2 \times 10^{11}, 0.25)$, $(E_2, \nu_2) = (10^7, 0.45)$, see Figure 6.2.
Results in two (resp. three) dimensions are reported in Table 6.3 (resp. Table 6.4). The numbers of outer iterations are 73 and 45, respectively. For these test cases, we
Fig. 6.2. Variation of the structure coefficients used for the elasticity test case.
                 two-level GenEO                      three-level GenEO
 N2       CS     solve   % of nnz A2          CS     solve    inner it.   % of nnz A3
   4      4.8    52.7    0.18                22.5    179.3    31          43.0
  16      3.9    50.3    0.18                 9.3    124.9    57          17.0
  64      4.0    53.1    0.18                 7.2     71.5    34           4.9
 256      4.8    63.2    0.18                 6.8     71.2    44           1.4
Table 6.3
Elasticity 2D test case, comparison between two- and three-level GenEO. CS and solve times are in seconds. The percentage of nonzero entries in A1 is 0.4%.
slightly relaxed the criterion for selecting eigenvectors in coarse spaces, which explains
why the iteration counts increase. However, the same observations as for the diffusion
test cases still hold. The dimension of the level 2 matrix is $n_2 = 50 \times 2{,}048 = 1.02 \cdot 10^5$, while for the level 3 matrix it is $n_3 = 20 \times N_2$. This means that 50 (resp. 20) eigenvectors are kept per level 1 (resp. level 2) subdomain. We observe that the
number of iterations of the inner solver increases slowly when increasing the number
of subdomains from 4 to 256 in the 2D case and remains almost constant in the 3D
case. In terms of runtime, the two-level GenEO is faster than three-level GenEO for
these matrices of medium dimensions.
To show the potential of our method at larger scales, a three-dimensional linear elasticity problem of size $593 \times 10^6$ is now solved on $N_1 = 16{,}384$ processes and $N_2 = 256$ superdomains. With the two-level method, $A_2$ is assembled and factorized in 40.8 seconds. With the three-level method, this step now takes 35.1 seconds, see Table 6.5. There is a difference of two iterations in the iteration count. Not taking into account the preconditioner setup, the problem is solved in 222.5 seconds in the two-level case and 90.1 seconds in the multilevel case. In this test case, the cost of applying the two-level preconditioner to a given vector is approximately twice the cost of applying the multilevel variant. At this regime, it is clear that there are important gains for the solution phase. At even greater scales, gains for the setup phase are also expected. Moreover, another interesting fact to note regarding computation time is that the generalized eigenvalue problems solved concurrently at the first level to obtain $V_1$ actually represent a significant part of the total time of 377.6 seconds (resp. 244.8 seconds) with the two- (resp. three-) level method: 78.2 seconds. This cost can be reduced by taking a larger number of (smaller) subdomains, with the drawback of increasing the size of $V_1$ and thus $A_2$. This drawback represents a clear bottleneck for the two-level method but is alleviated by using the three-level method, making it a good candidate for problems at greater scales.
7. Conclusion. In this paper, we reviewed general properties of overlapping Schwarz preconditioners and presented a framework for their multilevel extension.
                 two-level GenEO                      three-level GenEO
 N2       CS     solve   % of nnz A2          CS     solve    inner it.   % of nnz A3
   4     28.5    46.9    0.38                78.9    296.7    23          43.0
  16     17.3    35.4    0.38                24.5    124.5    23          19.0
  64     15.0    33.2    0.38                15.4     62.2    21           7.9
 256     13.6    40.7    0.38                10.6     50.7    23           2.5
Table 6.4
Elasticity 3D test case, comparison between two- and three-level GenEO. CS and solve times are in seconds. The percentage of nonzero entries in A1 is 3.3%.
          two-level GenEO         three-level GenEO
 N2       CS     solve            CS     solve   inner it.
 256      40.8   222.5            35.1   90.1    11
Table 6.5
Elasticity 3D test case, comparison between two- and three-level GenEO. CS and solve times are in seconds.
We generalized the local SPSD splitting presented in [3] to cover a larger set of matrices, leading to more flexibility for building robust coarse spaces. Based on local SPSD matrices on the first level, we presented how to compute local SPSD matrices for coarser levels. The multilevel solver based on hierarchical local SPSD matrices is robust and guarantees a bound, depending on predefined values, on the condition number of the preconditioned matrix at each level. Numerical experiments illustrate the theory and demonstrate the efficiency of the method on challenging large-scale problems arising from heterogeneous linear elasticity and diffusion problems with jumps of multiple orders of magnitude in the coefficients.
8. Acknowledgments. We would like to thank the anonymous referees for their
comments and remarks that helped us improve the clarity of this manuscript. This
work was granted access to the HPC resources of TGCC@CEA under the allocation
A0050607519 made by GENCI. The work of the second author was supported by the
NLAFET project as part of European Union’s Horizon 2020 research and innovation
program under grant 671633.
REFERENCES
[1] M. F. Adams, H. H. Bayraktar, T. M. Keaveny, and P. Papadopoulos,Ultrascalable
Implicit Finite Element Analyses in Solid Mechanics with over a Half a Billion Degrees of
Freedom, in Proceedings of the 2004 ACM/IEEE Conference on Supercomputing, SC ’04,
IEEE Computer Society, 2004.
[2] M. F. Adams and J. W. Demmel,Parallel Multigrid Solver for 3D Unstructured Finite El-
ement Problems, in Proceedings of the 1999 ACM/IEEE Conference on Supercomputing,
SC ’99, ACM, 1999.
[3] H. Al Daas and L. Grigori,A class of efficient locally constructed preconditioners based on
coarse spaces, SIAM Journal on Matrix Analysis and Applications, 40 (2019), pp. 66–91.
[4] S. Badia, A. Martín, and J. Principe, Multilevel balancing domain decomposition at extreme scales, SIAM Journal on Scientific Computing, 38 (2016), pp. C22–C52.
[5] P. E. Bjørstad, M. J. Gander, A. Loneland, and T. Rahman,Does SHEM for Additive
Schwarz Work Better than Predicted by Its Condition Number Estimate?, in International
Conference on Domain Decomposition Methods, Springer, 2017, pp. 129–137.
[6] A. Borzì, V. De Simone, and D. di Serafino, Parallel algebraic multilevel Schwarz preconditioners for a class of elliptic PDE systems, Computing and Visualization in Science, 16 (2013), pp. 1–14.
[7] M. Brezina, A. Cleary, R. Falgout, V. Henson, J. Jones, T. Manteuffel, S. McCormick,
and J. Ruge,Algebraic Multigrid Based on Element Interpolation (AMGe), SIAM Journal
on Scientific Computing, 22 (2001), pp. 1570–1592.
[8] X.-C. Cai and M. Sarkis,A restricted additive Schwarz preconditioner for general sparse
linear systems, SIAM Journal on Scientific Computing, 21 (1999), pp. 792–797.
[9] T. F. Chan and T. P. Mathew, Domain decomposition algorithms, Acta Numerica, 3 (1994), pp. 61–143.
[10] T. Chartier, R. D. Falgout, V. E. Henson, J. Jones, T. Manteuffel, S. McCormick,
J. Ruge, and P. S. Vassilevski,Spectral AMGe (ρAMGe), SIAM Journal on Scientific
Computing, 25 (2003), pp. 1–26.
[11] C. Chevalier and F. Pellegrini,PT-SCOTCH: A tool for efficient parallel graph ordering,
Parallel Computing, 34 (2008), pp. 318–331. Parallel Matrix Algorithms and Applications.
[12] V. Dolean, P. Jolivet, and F. Nataf,An introduction to domain decomposition methods,
Society for Industrial and Applied Mathematics, 2015. Algorithms, theory, and parallel
implementation.
[13] M. Griebel and P. Oswald,On the abstract theory of additive and multiplicative Schwarz
algorithms, Numerische Mathematik, 70 (1995), pp. 163–180.
[14] F. Hecht,New development in FreeFem++, Journal of Numerical Mathematics, 20 (2012),
pp. 251–266.
[15] A. Heinlein, A. Klawonn, O. Rheinbach, and F. Röver, A Three-Level Extension of the GDSW Overlapping Schwarz Preconditioner in Three Dimensions, technical report, Universität zu Köln, November 2018.
[16] V. E. Henson and U. M. Yang,BoomerAMG: A parallel algebraic multigrid solver and pre-
conditioner, Applied Numerical Mathematics, 41 (2002), pp. 155–177. Developments and
Trends in Iterative Methods for Large Systems of Equations.
[17] Intel,MKL web page.https://software.intel.com/en-us/intel-mkl, 2019.
[18] P. Jolivet, Domain decomposition methods. Application to high-performance computing, PhD thesis, Université de Grenoble, Oct. 2014.
[19] P. Jolivet, F. Hecht, F. Nataf, and C. Prud’homme,Scalable domain decomposition pre-
conditioners for heterogeneous elliptic problems, in Proceedings of the International Con-
ference on High Performance Computing, Networking, Storage and Analysis, SC13, ACM,
2013.
[20] J. Jones and P. Vassilevski,AMGe Based on Element Agglomeration, SIAM Journal on
Scientific Computing, 23 (2001), pp. 109–133.
[21] D. Kalchev, C. Lee, U. Villa, Y. Efendiev, and P. Vassilevski,Upscaling of mixed finite
element discretization problems by the spectral AMGe method, SIAM Journal on Scientific
Computing, 38 (2016), pp. A2912–A2933.
[22] G. Karypis and V. Kumar,Multilevel k-way partitioning scheme for irregular graphs, Journal
of Parallel and Distributed Computing, 48 (1998), pp. 96–129.
[23] F. Kong and X.-C. Cai,A highly scalable multilevel Schwarz method with boundary geometry
preserving coarse spaces for 3D elasticity problems on domains with complex geometry,
SIAM Journal on Scientific Computing, 38 (2016), pp. C73–C95.
[24] R. Lehoucq, D. Sorensen, and C. Yang,ARPACK users’ guide: solution of large-scale
eigenvalue problems with implicitly restarted Arnoldi methods, vol. 6, Society for Industrial
and Applied Mathematics, 1998.
[25] J. Mandel, B. Sousedík, and C. R. Dohrmann, Multispace and multilevel BDDC, Computing, 83 (2008), pp. 55–85.
[26] O. Marques, A. Druinsky, X. S. Li, A. T. Barker, P. Vassilevski, and D. Kalchev,Tuning
the coarse space construction in a spectral AMG solver, Procedia Computer Science, 80
(2016), pp. 212–221. International Conference on Computational Science 2016, ICCS 2016,
6-8 June 2016, San Diego, California, USA.
[27] S. V. Nepomnyaschikh,Mesh theorems of traces, normalizations of function traces and their
inversions, Russian Journal of Numerical Analysis and Mathematical Modelling, 6 (1991),
pp. 1–25.
[28] S. V. Nepomnyaschikh, Decomposition and fictitious domains methods for elliptic boundary value problems, 1992.
[29] Y. Notay,An aggregation-based algebraic multigrid method, Electronic Transactions on Nu-
merical Analysis, 37 (2010), pp. 123–146.
[30] C. Pechstein,Finite and boundary element tearing and interconnecting solvers for multiscale
problems, vol. 90, Springer Science & Business Media, 2012.
[31] Y. Saad, A Flexible Inner–Outer Preconditioned GMRES Algorithm, SIAM Journal on Scientific Computing, 14 (1993), pp. 461–469.
[32] Y. Saad,Iterative Methods for Sparse Linear Systems, Society for Industrial and Applied
Mathematics, 2nd ed., 2003.
[33] N. Spillane, V. Dolean, P. Hauret, F. Nataf, C. Pechstein, and R. Scheichl,Abstract
robust coarse spaces for systems of PDEs via generalized eigenproblems in the overlaps,
Numerische Mathematik, 126 (2014), pp. 741–770.
[34] J. Toivanen, P. Avery, and C. Farhat, A multilevel FETI-DP method and its performance for problems with billions of degrees of freedom, International Journal for Numerical Methods in Engineering, 116 (2018), pp. 661–682.
[35] A. Toselli and O. Widlund,Domain Decomposition Methods - Algorithms and Theory,
Springer Series in Computational Mathematics, Springer Berlin Heidelberg, 2005.
[36] J. Xu,Theory of Multilevel Methods, PhD thesis, Cornell University, 1989.
[37] S. Zampini,PCBDDC: A class of robust dual-primal methods in PETSc, SIAM Journal on
Scientific Computing, 38 (2016), pp. S282–S306.