arXiv:cond-mat/0304363v2 28 Jan 2004
DFTT 10/03
Random matrix theory and symmetric spaces
M. Caselle¹ and U. Magnea²

¹Department of Theoretical Physics, University of Torino
and INFN, Sez. di Torino
Via P. Giuria 1, I-10125 Torino, Italy

²Department of Mathematics, University of Torino
Via Carlo Alberto 10, I-10123 Torino, Italy

caselle@to.infn.it
magnea@dm.unito.it
Abstract
In this review we discuss the relationship between random matrix theories and symmetric
spaces. We show that the integration manifolds of random matrix theories, the eigenvalue
distribution, and the Dyson and boundary indices characterizing the ensembles are in strict
correspondence with symmetric spaces and the intrinsic characteristics of their restricted
root lattices. Several important results can be obtained from this identification. In par-
ticular the Cartan classification of triplets of symmetric spaces with positive, zero and
negative curvature gives rise to a new classification of random matrix ensembles. The re-
view is organized into two main parts. In Part I the theory of symmetric spaces is reviewed
with particular emphasis on the ideas relevant for appreciating the correspondence with
random matrix theories. In Part II we discuss various applications of symmetric spaces
to random matrix theories and in particular the new classification of disordered systems
derived from the classification of symmetric spaces. We also review how the mapping from
integrable Calogero–Sutherland models to symmetric spaces can be used in the theory of
random matrices, with particular consequences for quantum transport problems. We conclude
by indicating some interesting new directions of research based on these identifications.
Contents

1 Introduction
2 Lie groups and root spaces
2.1 Lie groups and manifolds
2.2 The tangent space
2.3 Coset spaces
2.4 The Lie algebra and the adjoint representation
2.5 Semisimple algebras and root spaces
2.6 The Weyl chambers
2.7 The simple root systems
3 Symmetric spaces
3.1 Involutive automorphisms
3.2 The action of the group on the symmetric space
3.3 Radial coordinates
3.4 The metric on a Lie algebra
3.5 The algebraic structure of symmetric spaces
4 Real forms of semisimple algebras
4.1 The real forms of a complex algebra
4.2 The classification machinery
5 The classification of symmetric spaces
5.1 The curvature tensor and triplicity
5.2 Restricted root systems
5.3 Real forms of symmetric spaces
6 Operators on symmetric spaces
6.1 Casimir operators
6.2 Laplace operators
6.3 Zonal spherical functions
6.4 The analog of Fourier transforms on symmetric spaces
7 Integrable models related to root systems
7.1 The root lattice structure of the CS models
7.2 Mapping to symmetric spaces
8 Random matrix theories and symmetric spaces
8.1 Introduction to the theory of random matrices
8.1.1 What is random matrix theory?
8.1.2 Some of the applications of random matrix theory
8.1.3 Why are random matrix models successful?
8.2 The basics of matrix models
8.3 Identification of the random matrix integration manifolds
8.3.1 Circular ensembles
8.3.2 Gaussian ensembles
8.3.3 Chiral ensembles
8.3.4 Transfer matrix ensembles
8.3.5 The DMPK equation
8.3.6 BdG and p–wave ensembles
8.3.7 S–matrix ensembles
8.4 Identification of the random matrix eigenvalues and universality indices
8.4.1 Discussion of the Jacobians of various types of matrix ensembles
8.5 Fokker–Planck equation and the Coulomb gas analogy
8.5.1 The Coulomb gas analogy
8.5.2 Connection with the Laplace–Beltrami operator
8.5.3 Random matrix theory description of parametric correlations
8.6 A dictionary between random matrix ensembles and symmetric spaces
9 On the use of symmetric spaces in random matrix theory
9.1 Towards a classification of random matrix ensembles
9.2 Symmetries of random matrix ensembles
9.3 Orthogonal polynomials
9.4 Use of symmetric spaces in quantum transport
9.4.1 Exact solvability of the DMPK equation in the β = 2 case
9.4.2 Asymptotic solutions in the β = 1, 4 cases
9.4.3 Magnetic dependence of the conductance
9.4.4 Density of states in disordered quantum wires
10 Beyond symmetric spaces
10.1 Non–Cartan parametrization of symmetric spaces and S–matrix ensembles
10.1.1 Non–Cartan parametrization of SU(N)/SO(N)
10.2 Clustered solutions of the DMPK equation
10.3 Triplicity of the Weierstrass potential
11 Summary and conclusion
A Appendix: Zonal spherical functions
A.1 The Itzykson–Zuber–Harish–Chandra integral
A.2 The Duistermaat–Heckman theorem
1 Introduction
The study of symmetric spaces has recently attracted interest in various branches of physics,
ranging from condensed matter physics to lattice QCD. This is mainly due to the gradual
understanding during the past few years of the deep connection between random matrix
theories and symmetric spaces. Indeed, the idea behind this connection is rather old: it
traces back to Dyson [1] and has subsequently been pursued by several authors, notably by
Hüffmann [2]. Recently it has led to several interesting results, for instance a tentative
classification of the universality classes of disordered systems. The latter topic is the main
subject of this review.
The connection between random matrix theories and symmetric spaces is obtained simply
through the coset spaces defining the symmetry classes of the random matrix ensembles.
Although Dyson was the first to recognize that these coset spaces are symmetric spaces,
the subsequent emergence of new random matrix symmetry classes and their classification
in terms of Cartan’s symmetric spaces is relatively recent [3, 4, 5, 6, 7]. Since symmetric
spaces are rather well understood mathematical objects, the main outcome of such an
identification is that several non–trivial results concerning the behavior of the random
matrix models, as well as the physical systems that these models are expected to describe,
can be obtained.
In this context an important tool, which will be discussed in the following, is a class of
integrable models known as Calogero–Sutherland models [8]. In the early eighties, Olshanetsky
and Perelomov showed that these models, too, are in one–to–one correspondence with sym-
metric spaces through the reduced root systems of the latter [9]. Thanks to this chain of
identifications (random matrix ensemble – symmetric space – Calogero–Sutherland model)
several of the results obtained in the last twenty years within the framework of Calogero–
Sutherland models can also be applied to random matrix theories.
The aim of this review is to allow the reader to follow this chain of correspondences. To
this end we will devote the first half of the paper (sections 2 through 7) to the necessary
mathematical background and the second part (sections 8 through 10) to the applications
in random matrix theory. In particular, in the last section we discuss some open directions
of research. The reader who is not interested in the mathematical background could skip
the first part and go directly to the later sections where we list and discuss the main results.
This review is organized as follows:
The first five sections of Part I (sections 2–6) are devoted to an elementary introduction
to symmetric spaces. These sections consist of the material presented in [10], which is a
self–contained introductory review of symmetric spaces from
a mathematical point of view. The material on symmetric spaces should be accessible to
physicists with only elementary background in the theory of Lie groups. We have included
quite a few examples to illustrate all aspects of the material. In the last section of Part I,
section 7, we briefly introduce the Calogero–Sutherland models with particular emphasis
on their connection with symmetric spaces.
After this introductory material we then move on in Part II to random matrix theories
and their connection with symmetric spaces (section 8). Let us stress that this paper
is not intended as an introduction to random matrix theory, for which very good and
thorough references already exist [11, 26, 27, 28, 29]. In this review we will assume that the
reader is already acquainted with the topic, and we will only recall some basic information
(definitions of the various ensembles, main properties, and main physical applications).
The main goal of this section is instead to discuss the identifications that give rise to the
close relationship between random matrix ensembles and symmetric spaces.
Section 9 is devoted to a discussion of some of the consequences of the above mentioned
identifications. In particular we will deduce, starting from the Cartan classification of
symmetric spaces, the analogous classification of random matrix ensembles. We discuss
the symmetries of the ensembles in terms of the underlying restricted root system, and
see how the orthogonal polynomials belonging to a certain ensemble are determined by
the root multiplicities. In this section we also give some examples of how the connection
between random matrix ensembles on the one hand, and symmetric spaces and Calogero–
Sutherland models on the other hand, can be used to obtain new results in the theoretical
description of physical systems, more precisely in the theory of quantum transport.
The last section of the paper is devoted to some new results that show that the mathe-
matical tools discussed in this paper (or suitable generalizations of these) can be useful for
going beyond the symmetric space paradigm, and to explore some new connections between
random matrix theory, group theory, and differential geometry. Here we discuss clustered
solutions of the Dorokhov–Mello–Pereyra–Kumar equation, and then we go on to discuss
the most general Calogero–Sutherland potential, given by the Weierstrass P–function, and
show that it covers the three cases of symmetric spaces of positive, zero and negative cur-
vature. Finally, in the appendix we discuss some intriguing exact results for the so called
zonal spherical functions, which not only play an important role in our discussion, but are
also of great relevance in several other branches of physics.
There are some important and interesting topics that we will not review because of lack of
space and competence. For these we refer the reader to the existing literature. In particular
we shall not discuss:
• the supersymmetric approach to random matrix theories and in particular their clas-
sification in terms of supersymmetric spaces. Here we refer the reader to the original
paper by M. Zirnbauer [4], while a good introduction to the use of supersymmetry
in random matrix theory and a complete set of the relevant references can be found
in [12];
• the very interesting topic of phase transitions. For this we refer to the recent and
thorough review by G. Cicuta [13];
• the extension to two–dimensional models of the classification of symmetric spaces,
and more generally the methods of symmetric space analysis [14];
• the generalization of the classification of symmetric spaces to non–hermitean random
matrices [15] (see however a discussion in the concluding section 11);
• the so called q–ensembles [16];
• the two–matrix models [17] and multi–matrix models [18] and their continuum limit
generalization.
The last item in the list given above is a very interesting topic, which has several physical
applications and would indeed deserve a separate review. The common feature of these
two– and multi–matrix models which is of relevance for the present review, is that they all
can be mapped onto suitably chosen Calogero–Sutherland systems. These models represent
a natural link to two classes of matrix theories which are of great importance in high energy
physics: on the one hand, the matrix models describing two–dimensional quantum gravity
(possibly coupled to matter) [19], and on the other hand, the matrix models pertaining
to large N QCD, which trace back to the original seminal works of ’t Hooft [20]. In
particular, a direct and explicit connection exists between multi–matrix models (the so
called Kazakov–Migdal models) for large N QCD [21] and the exactly solvable models of
two–dimensional QCD on the lattice [22].
The mapping of these models to Calogero–Sutherland systems of the type discussed in this
review can be found for instance in [23]. The relevance of these models, and in particular of
their Calogero–Sutherland mappings, for the condensed matter systems like those discussed
in the second part of this review, was first discussed in [24]. A recent review on this aspect,
and more generally on the use of Calogero–Sutherland models for low-dimensional models,
can be found in [25].
We will necessarily be rather sketchy in discussing the many important physical applica-
tions of the random matrix ensembles to be described in section 8. We refer the reader
to some excellent reviews that have appeared in the literature during the last few years:
the review by Beenakker [26] for the solid state physics applications, the review by Ver-
baarschot [27] for QCD–related applications, and [28, 29] for extensive reviews including a
historical outline.
Part I
The theory of symmetric spaces has a long history in mathematics. In this first part of the
paper we will introduce the reader to some of the most fundamental concepts in the theory
of symmetric spaces. We have tried to keep the discussion as simple as possible without
assuming any previous familiarity of the reader with symmetric spaces. The review should
be particularly accessible to physicists. In the hope of addressing a wider audience, we
have almost completely avoided using concepts from differential geometry, and we have
presented the subject mostly from an algebraic point of view. In addition we have inserted
a large number of simple examples in the text, that will hopefully help the reader visualize
the ideas.
Since our aim in Part II will be to introduce the reader to the application of symmetric
spaces in physical integrable systems and random matrix models, we have chosen the
background material presented here with this in mind. Therefore we have put emphasis
not only on fundamental issues but on subjects that will be relevant in these applications
as well. Our treatment will be somewhat rigorous; however, we skip proofs that can be
found in the mathematical literature and concentrate on simple examples that illustrate
the concepts presented. The reader is referred to Helgason’s book [30] for a rigorous
treatment; however, this book may not be immediately accessible to physicists. For the
reader with little background in differential geometry we recommend the book by Gilmore
[31] (especially Chapter 9) for an introduction to symmetric spaces of exceptional clarity.
In section 2, after reviewing the basics about Lie groups, we will present some of the most
important properties of root systems. In section 3 we define symmetric spaces and discuss
their main characteristics, defining involutive automorphisms, spherical decomposition of
the group elements, and the metric on the Lie algebra. We also discuss the algebraic
structure of the coset space.
In section 4 we show how to obtain all the real forms of a complex semisimple Lie algebra.
The same techniques will then be used to classify the real forms of symmetric spaces in
section 5. In this section we also define the curvature of a symmetric space, and discuss
triplets of symmetric spaces with positive, zero and negative curvature, all corresponding
to the same symmetric subgroup. We will see why curved symmetric spaces arise from
semisimple groups, whereas the flat spaces are associated to non–semisimple groups. In
addition, in section 5 we will define restricted root systems. The restricted root systems are
associated to symmetric spaces, just like ordinary root systems are associated to groups.
As we will discuss in detail in Part II of this paper, they are key objects when considering
the integrability of Calogero–Sutherland models.
In section 6 we discuss Casimir and Laplace operators on symmetric spaces and men-
tion some known properties of the eigenfunctions of the latter, so called zonal spherical
functions. These functions play a prominent role in many physical applications.
The introduction to symmetric spaces we present contains the basis for understanding the
developments to be discussed in more detail in Part II. The reader already familiar with
symmetric spaces is invited to start reading in the last section of Part I, section 7, where
we give a brief introduction to Calogero–Sutherland models.
2 Lie groups and root spaces
In this introductory section we define the basic concepts relating to Lie groups. We will
build on the material presented here when we discuss symmetric spaces in the next section.
The reader with a solid background in group theory may want to skip most or all of this
section.
2.1 Lie groups and manifolds
A manifold can be thought of as the generalization of a surface, but we do not in general
consider it as embedded in a higher–dimensional euclidean space. A short introduction to
differentiable manifolds can be found in ref. [32], and a more elaborate one in refs. [33] and
[34] (Ch. III). The points of an N–dimensional manifold can be labelled by real coordinates
$(x^1, ..., x^N)$. Suppose that we take an open set $U_\alpha$ of this manifold, and we introduce local
real coordinates on it. Let $\psi_\alpha$ be the function that attaches N real coordinates to each
point in the open set $U_\alpha$. Suppose now that the manifold is covered by overlapping open
sets, with local coordinates attached to each of them. If for each pair of open sets $U_\alpha$, $U_\beta$,
the function $\psi_\alpha \circ \psi_\beta^{-1}$ is differentiable in the overlap region $U_\alpha \cap U_\beta$, it means that we can
go smoothly from one coordinate system to another in this region. Then the manifold is
differentiable.
Consider a group G acting on a space V . We can think of G as being represented by
matrices, and of V as a space of vectors on which these matrices act. A group element
g ∈ G transforms the vector v ∈ V into gv = v′.
If G is a Lie group, it is also a differentiable manifold. The fact that a Lie group is a
differentiable manifold means that for two group elements $g, g' \in G$, the product $(g, g') \in
G \times G \to gg' \in G$ and the inverse $g \to g^{-1}$ are smooth ($C^\infty$) mappings, that is, these
mappings have continuous derivatives of all orders.
Example: The space $\mathbf{R}^n$ is a smooth manifold and at the same time an abelian group.
The "product" of elements is addition, $(x, x') \to x + x'$, and the inverse of x is $-x$. These
operations are smooth.
Example: The set GL(n, $\mathbf{R}$) of nonsingular real $n \times n$ matrices M, $\det M \neq 0$, with
matrix multiplication $(M, N) \to MN$ and multiplicative matrix inverse $M \to M^{-1}$ is a
non–abelian group manifold. Any such matrix can be represented as $M = e^{\sum_i t_i X_i}$, where
the $X_i$ are generators of the GL(n, $\mathbf{R}$) algebra and the $t_i$ are real parameters.
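As a quick numerical aside (our own sketch, not part of the original text), the nonsingularity of such exponentials can be checked directly from the identity $\det e^X = e^{\mathrm{tr}\, X}$, which is strictly positive for any real X:

```python
import math

def mat_mul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(X, terms=30):
    """Matrix exponential of a 2x2 matrix via its power series."""
    E = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starts at the identity
    T = [[1.0, 0.0], [0.0, 1.0]]   # current term X^n / n!
    for n in range(1, terms):
        T = mat_mul(T, X)
        T = [[T[i][j] / n for j in range(2)] for i in range(2)]
        E = [[E[i][j] + T[i][j] for j in range(2)] for i in range(2)]
    return E

# A generic element X of the gl(2,R) algebra
X = [[0.3, -1.1], [0.7, 0.2]]
M = mat_exp(X)

# det(e^X) = e^{tr X} > 0, so M is always nonsingular
det_M = M[0][0] * M[1][1] - M[0][1] * M[1][0]
assert abs(det_M - math.exp(X[0][0] + X[1][1])) < 1e-10
```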
2.2 The tangent space
In each point of a differentiable manifold, we can define the tangent space. If a curve
through a point P in the manifold is parametrized by $t \in \mathbf{R}$,

$$ x^a(t) = x^a(0) + \lambda^a t, \qquad a = 1, ..., N \qquad (2.1) $$

where $P = (x^1(0), ..., x^N(0))$, then $\lambda = (\lambda^1, ..., \lambda^N) = (\dot{x}^1(0), ..., \dot{x}^N(0))$ is a tangent vector
at P. Here $\dot{x}^a(0) = \frac{d}{dt} x^a(t)|_{t=0}$. The space spanned by all tangent vectors at P is the
tangent space. In particular, the tangent vectors to the coordinate curves (the curves
obtained by keeping all the coordinates fixed except one) through P are called the natural
basis for the tangent space.
Example: In euclidean 3–space the natural basis is $\{\hat{e}_x, \hat{e}_y, \hat{e}_z\}$. On a patch of the unit
2–sphere parametrized by polar coordinates it is $\{\hat{e}_\theta, \hat{e}_\phi\}$.
For a Lie group, the tangent space at the origin is spanned by the generators, which play the
role of (contravariant) vector fields (also called derivations), expressed in local coordinates
on the group manifold as $X = X^a(x)\partial_a$ (for an introduction to differential geometry see
ref. [35], Ch. 5, or [34]). Here the partial derivatives $\partial_a = \frac{\partial}{\partial x^a}$ form a basis for the vector
field. That the generators span the tangent space at the origin can easily be seen from the
exponential map. Suppose X is a generator of a Lie group. The exponential map then
maps X onto $e^{tX}$, where t is a parameter. This mapping is a one–parameter subgroup,
and it defines a curve x(t) in the group manifold. The tangent vector of this curve at the
origin is then

$$ \frac{d}{dt} e^{tX}\Big|_{t=0} = X \qquad (2.2) $$
All the generators together span the tangent space at the origin (also called the identity
element).
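Eq. (2.2) is easy to verify numerically. The sketch below (our illustration, not from the original text) takes the rotation generator X of SO(2), for which the one–parameter subgroup $e^{tX}$ is known in closed form, and recovers X as the finite–difference derivative of the curve at t = 0:

```python
import math

# One-parameter subgroup exp(tX) generated by the SO(2) generator
# X = [[0,-1],[1,0]]: ordinary rotations by the angle t.
def curve(t):
    return [[math.cos(t), -math.sin(t)],
            [math.sin(t), math.cos(t)]]

X = [[0.0, -1.0], [1.0, 0.0]]

# Tangent vector at the identity: d/dt exp(tX)|_{t=0} by central difference
h = 1e-6
tangent = [[(curve(h)[i][j] - curve(-h)[i][j]) / (2 * h) for j in range(2)]
           for i in range(2)]

for i in range(2):
    for j in range(2):
        assert abs(tangent[i][j] - X[i][j]) < 1e-9   # eq. (2.2)
```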
2.3 Coset spaces
The isotropy subgroup $G_{v_0}$ of a group G at the point $v_0 \in V$ is the subset of group elements
that leave $v_0$ fixed. The set of points that can be reached by applying elements $g \in G$ to
$v_0$ is the orbit of G at $v_0$, denoted $Gv_0$. If $Gv_0 = V$ for one point $v_0$, then this is true for
every $v \in V$. We then say that G acts transitively on V.
In general, a symmetric space can be represented as a coset space. Suppose H is a subgroup
of a Lie group G. The coset space G/H is the set of subsets of G of the form gH, for g ∈ G.
G acts on this coset space: g1(gH) is the coset (g1g)H. We will refer to the elements of the
coset space by g instead of by gH, when the subgroup H is understood from the context,
because of the natural mapping described in the next paragraph. If $g \notin H$, gH corresponds
to a point on the manifold G/H away from the origin, whereas hH = H (h ∈ H) is the
identity element identified with the origin of the symmetric space. This point is the north
pole in the example below.
If G acts transitively on V, then V = Gv for any $v \in V$. Since the isotropy subgroup
$G_{v_0}$ leaves a fixed point $v_0$ invariant, $gG_{v_0}v_0 = gv_0 = v \in V$, we see that the action of the
group G on V defines a bijective action of elements of $G/G_{v_0}$ on V. Therefore the space
V on which G acts transitively can be identified with $G/G_{v_0}$, since there is one–to–one
correspondence between the elements of V and the elements of $G/G_{v_0}$. There is a natural
mapping from the group element g onto the point $gv_0$ on the manifold.
Example: The SO(2) subgroup of SO(3) is the isotropy subgroup at the north pole of
a unit 2–sphere imbedded in 3–dimensional space, since it keeps the north pole fixed. On
the other hand, the north pole is mapped onto any point on the surface of the sphere by
elements of the coset SO(3)/SO(2). This can be seen from the explicit form of the coset
representatives. As we will see in eq. (3.20) in subsection 3.5, the general form of the
elements of the coset is
M = exp
?
0C
0−CT
?
=
? √I2− XXT
−XT
X
√1 − XTX
?
(2.3)
where C is the matrix

$$ C = \begin{pmatrix} t_2 \\ t_1 \end{pmatrix} \qquad (2.4) $$
and $t_1$, $t_2$ are real coordinates. $I_2$ in eq. (2.3) is the $2 \times 2$ unit matrix. For the coset space
SO(3)/SO(2), M is equal to

$$ M = \exp\left(\sum_{i=1}^{2} t_i L_i\right), \qquad L_1 = \frac{1}{2}\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}, \qquad L_2 = \frac{1}{2}\begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix} \qquad (2.5) $$
The third SO(3) generator

$$ L_3 = \frac{1}{2}\begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \qquad (2.6) $$

spans the algebra of the stability subgroup SO(2), which keeps the north pole fixed:

$$ \exp(t_3 L_3)\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \qquad (2.7) $$

The generators $L_i$ ($i = 1, 2, 3$) satisfy the SO(3) commutation relations $[L_i, L_j] = \frac{1}{2}\epsilon_{ijk} L_k$.
Note that since the $L_i$ and the $t_i$ are real, $C^\dagger = C^T$.
In (2.3), M is a general representative of the coset SO(3)/SO(2). By expanding the
exponential we see that the explicit form of M is
$$ M = \begin{pmatrix}
1 + \frac{(t_2)^2 (\cos r - 1)}{r^2} & \frac{t_1 t_2 (\cos r - 1)}{r^2} & \frac{t_2 \sin r}{r} \\[4pt]
\frac{t_1 t_2 (\cos r - 1)}{r^2} & 1 + \frac{(t_1)^2 (\cos r - 1)}{r^2} & \frac{t_1 \sin r}{r} \\[4pt]
-\frac{t_2 \sin r}{r} & -\frac{t_1 \sin r}{r} & \cos r
\end{pmatrix}, \qquad r \equiv \sqrt{(t_1)^2 + (t_2)^2} \qquad (2.8) $$
Thus the matrix $X = \begin{pmatrix} x \\ y \end{pmatrix}$ is given in terms of the components of C by (cf. eq. (3.21)):

$$ X = \begin{pmatrix} x \\ y \end{pmatrix} = \frac{\sin\sqrt{(t_1)^2 + (t_2)^2}}{\sqrt{(t_1)^2 + (t_2)^2}} \begin{pmatrix} t_2 \\ t_1 \end{pmatrix} \qquad (2.9) $$
Defining now $z = \cos\sqrt{(t_1)^2 + (t_2)^2}$, we see that the variables x, y, z satisfy the equation
of the 2–sphere:

$$ x^2 + y^2 + z^2 = 1 \qquad (2.10) $$
When the coset space representative M acts on the north pole it is easily seen that the
orbit is all of the 2–sphere:

$$ M \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} \cdot & \cdot & x \\ \cdot & \cdot & y \\ \cdot & \cdot & z \end{pmatrix} \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} x \\ y \\ z \end{pmatrix} \qquad (2.11) $$
This shows that there is one–to–one correspondence between the elements of the coset and
the points of the 2–sphere. The coset SO(3)/SO(2) can therefore be identified with a unit
2–sphere imbedded in 3–dimensional space.
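This correspondence can be verified numerically. The sketch below (our own check, not part of the original text) writes the exponent directly in the antisymmetric block form of eq. (2.3), with top–right block $C = (t_2, t_1)^T$ as in eq. (2.4), exponentiates it by its power series, and confirms that the image of the north pole is the point $(t_2 \sin r/r,\, t_1 \sin r/r,\, \cos r)$ on the unit sphere, with $r = \sqrt{(t_1)^2 + (t_2)^2}$:

```python
import math

def mat_exp3(K, terms=40):
    """3x3 matrix exponential via its power series."""
    E = [[float(i == j) for j in range(3)] for i in range(3)]
    T = [[float(i == j) for j in range(3)] for i in range(3)]
    for n in range(1, terms):
        T = [[sum(T[i][k] * K[k][j] for k in range(3)) / n for j in range(3)]
             for i in range(3)]
        E = [[E[i][j] + T[i][j] for j in range(3)] for i in range(3)]
    return E

# Exponent with the antisymmetric block structure of eq. (2.3):
# top-right block C = (t2, t1)^T, bottom-left block -C^T.
t1, t2 = 0.4, -1.3
K = [[0.0, 0.0, t2],
     [0.0, 0.0, t1],
     [-t2, -t1, 0.0]]
M = mat_exp3(K)

# Act on the north pole (0,0,1)
north = [0.0, 0.0, 1.0]
v = [sum(M[i][j] * north[j] for j in range(3)) for i in range(3)]

# Compare with the closed form read off from eq. (2.8)
r = math.hypot(t1, t2)
expected = [t2 * math.sin(r) / r, t1 * math.sin(r) / r, math.cos(r)]
for a, b in zip(v, expected):
    assert abs(a - b) < 1e-10

# The orbit lies on the unit 2-sphere, eq. (2.10)
assert abs(sum(c * c for c in v) - 1.0) < 1e-10
```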
2.4 The Lie algebra and the adjoint representation
A Lie algebra G is a vector space over a field F. Multiplication in the Lie algebra is given
by the bracket [X,Y ]. It has the following properties:
[1] If X, Y ∈ G, then [X,Y ] ∈ G,
[2] [X,αY + βZ] = α[X,Y ] + β[X,Z] for α, β ∈ F,
[3] [X,Y ] = −[Y,X],
[4] [X,[Y,Z]] + [Y,[Z,X]] + [Z,[X,Y ]] = 0 (the Jacobi identity).
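When the algebra elements are matrices and the bracket is the matrix commutator $[X, Y] = XY - YX$, properties [1]–[4] hold automatically; the sketch below (our illustration, not from the original text) verifies antisymmetry and the Jacobi identity numerically for three random $2 \times 2$ real matrices:

```python
import random

def bracket(X, Y):
    """Lie bracket [X,Y] = XY - YX for 2x2 matrices."""
    def mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    XY, YX = mul(X, Y), mul(Y, X)
    return [[XY[i][j] - YX[i][j] for j in range(2)] for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

random.seed(0)
rand_mat = lambda: [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
X, Y, Z = rand_mat(), rand_mat(), rand_mat()

# Property [3], antisymmetry: [X,Y] = -[Y,X]
assert all(abs(bracket(X, Y)[i][j] + bracket(Y, X)[i][j]) < 1e-12
           for i in range(2) for j in range(2))

# Property [4], the Jacobi identity: [X,[Y,Z]] + [Y,[Z,X]] + [Z,[X,Y]] = 0
J = add(add(bracket(X, bracket(Y, Z)), bracket(Y, bracket(Z, X))),
        bracket(Z, bracket(X, Y)))
assert all(abs(J[i][j]) < 1e-12 for i in range(2) for j in range(2))
```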
The algebra G generates a group through the exponential mapping. A general group
element is
$$ M = \exp\left(\sum_i t_i X_i\right); \qquad t_i \in F,\; X_i \in G \qquad (2.12) $$
We define a mapping adX from the Lie algebra to itself by $\mathrm{ad}X : Y \to [X, Y]$. The
mapping $X \to \mathrm{ad}X$ is a representation of the Lie algebra called the adjoint representation.
It is easy to check that it is a homomorphism: it follows from the Jacobi identity that
$[\mathrm{ad}X_i, \mathrm{ad}X_j] = \mathrm{ad}[X_i, X_j]$. Suppose we choose a basis $\{X_i\}$ for G. Then

$$ \mathrm{ad}X_i(X_j) = [X_i, X_j] = C^k_{ij} X_k \qquad (2.13) $$

where we sum over k. The $C^k_{ij}$ are called structure constants. Under a change of basis, they
transform as mixed tensor components. They define the matrices $(M_i)_{jk} = C^j_{ik}$ associated
with the adjoint representation of $X_i$. One can show that there exists a basis for any
complex semisimple algebra in which the structure constants are real. This means the
adjoint representation is real. Note that the dimension of the adjoint representation is
equal to the dimension of the group.
Example: Let’s construct the adjoint representation of SU(2). The generators in the
defining representation are
$$ J_3 = \frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad J_\pm = \frac{1}{2}\left[\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \pm i \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}\right] \qquad (2.14) $$
and the commutation relations are

$$ [J_3, J_\pm] = \pm J_\pm, \qquad [J_+, J_-] = 2 J_3 \qquad (2.15) $$
The structure constants are therefore $C^+_{3+} = -C^+_{+3} = -C^-_{3-} = C^-_{-3} = 1$, $C^3_{+-} = -C^3_{-+} = 2$,
and the adjoint representation is given by $(M_3)_{++} = 1$, $(M_3)_{--} = -1$, $(M_+)_{+3} = -1$,
$(M_+)_{3-} = 2$, $(M_-)_{-3} = 1$, $(M_-)_{3+} = -2$, and all other matrix elements equal to 0:

$$ M_3 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix}, \qquad M_+ = \begin{pmatrix} 0 & 0 & 2 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad M_- = \begin{pmatrix} 0 & -2 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix} \qquad (2.16) $$
These representation matrices are real, have the same dimension as the group, and satisfy
the SU(2) commutation relations [M3,M±] = ±M±, [M+,M−] = 2M3.
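This closure is quickly confirmed by direct computation; the following sketch (ours, not from the original text, with the basis ordered (3, +, −) as in eq. (2.16)) checks the three commutators exactly in integer arithmetic:

```python
# Check that the adjoint matrices of eq. (2.16) reproduce the su(2)
# commutation relations [M3, M±] = ±M±, [M+, M-] = 2 M3.
def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def comm(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(len(A))] for i in range(len(A))]

# Basis order (3, +, -)
M3 = [[0, 0, 0], [0, 1, 0], [0, 0, -1]]
Mp = [[0, 0, 2], [-1, 0, 0], [0, 0, 0]]
Mm = [[0, -2, 0], [0, 0, 0], [1, 0, 0]]

assert comm(M3, Mp) == Mp                                    # [M3, M+] = +M+
assert comm(M3, Mm) == [[-x for x in row] for row in Mm]     # [M3, M-] = -M-
assert comm(Mp, Mm) == [[2 * x for x in row] for row in M3]  # [M+, M-] = 2 M3
```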
2.5 Semisimple algebras and root spaces
In this paragraph we will briefly recall the basic facts about root spaces and the classifica-
tion of complex simple Lie algebras, to set the stage for our discussion of real forms of Lie
algebras and finally symmetric spaces.
An ideal, or invariant subalgebra I is a subalgebra such that [G,I] ⊂ I. An abelian ideal
also satisfies [I,I] = 0. A simple Lie algebra has no proper ideal. A semisimple Lie algebra
is the direct sum of simple algebras, and has no proper abelian ideal (by proper we mean
different from {0}).
A Lie algebra is a linear vector space over a field F, with an antisymmetric product defined
by the Lie bracket (cf. subsection 2.4). If F is the field of real, complex or quaternion
numbers, the Lie algebra is called a real, complex or quaternion algebra. A complexification
of a real Lie algebra is obtained by taking linear combinations of its elements with complex
coefficients. A real Lie algebra H is a real form of the complex algebra G if G is the
complexification of H.
In any simple algebra there are two kinds of generators: there is a maximal abelian subalgebra,
called the Cartan subalgebra $H_0 = \{H_1, ..., H_r\}$, with $[H_i, H_j] = 0$ for any two elements
of the Cartan subalgebra. There are also raising and lowering operators denoted $E_\alpha$, where
α is an r–dimensional vector $\alpha = (\alpha_1, ..., \alpha_r)$ and r is the rank of the algebra.¹ The latter
are eigenoperators of the $H_i$ in the adjoint representation belonging to eigenvalue $\alpha_i$:
$[H_i, E_\alpha] = \alpha_i E_\alpha$. For each eigenvalue, or root, $\alpha_i$, there is another eigenvalue $-\alpha_i$ and a
corresponding eigenoperator $E_{-\alpha}$ under the action of $H_i$.
Suppose we represent each element of the Lie algebra by an $n \times n$ matrix. Then $[H_i, H_j] = 0$
means the matrices $H_i$ can all be diagonalized simultaneously. Their eigenvalues $\mu_i$ are
given by $H_i |\mu\rangle = \mu_i |\mu\rangle$, where the eigenvectors are labelled by the weight vectors
$\mu = (\mu_1, ..., \mu_r)$ [36].
A weight whose first non–zero component is positive is called a positive weight. Also, a
weight µ is greater than another weight µ′ if µ − µ′ is positive. Thus we can define the
highest weight as the one which is greater than all the others. The highest weight is unique
in any representation.
The roots $\alpha_i \equiv \alpha(H_i)$ of the algebra G are the weights of the adjoint representation. Recall
that in the adjoint representation, the states on which the generators act are defined by
¹The rank of an algebra is defined through the secular equation (see subsection 6.1). For a non–semisimple
algebra, the maximal number of mutually commuting generators can be greater than the rank of the algebra.
the generators themselves, and the action is defined by
$$ X_a |X_b\rangle \equiv \mathrm{ad}X_a(X_b) \equiv [X_a, X_b] \qquad (2.17) $$
The roots are functionals on the Cartan subalgebra satisfying
$$ \mathrm{ad}H_i(E_\alpha) = [H_i, E_\alpha] = \alpha(H_i) E_\alpha \qquad (2.18) $$
where $H_i$ is in the Cartan subalgebra. The eigenvectors $E_\alpha$ are called the root vectors.
These are exactly the raising and lowering operators $E_{\pm\alpha}$ for the weight vectors µ. There
are canonical commutation relations defining the system of roots belonging to each simple
rank–r algebra. These are summarized below:²

$$ [H_i, H_j] = 0, \qquad [H_i, E_\alpha] = \alpha_i E_\alpha, \qquad [E_\alpha, E_{-\alpha}] = \alpha_i H_i \qquad (2.19) $$
One can prove the fundamental relation [35, 36]

$$ \frac{2\alpha \cdot \mu}{\alpha^2} = -(p - q) \qquad (2.20) $$

where α is a root, µ is a weight, and p, q are positive integers such that $E_\alpha |\mu + p\alpha\rangle = 0$,
$E_{-\alpha} |\mu - q\alpha\rangle = 0$.³ This relation gives rise to the strict properties of root lattices, and
permits the complete classification of all the complex (semi)simple algebras.
2For the reader who wants to understand more about the origin of the structure of Lie algebras, we
recommend Chapter 7 of Gilmore [31].
3Here the scalar product · can be defined in terms of the metric on the Lie algebra. For the adjoint representation, µ is a root β and

2α·β/α² = 2K(Hα,Hβ)/K(Hα,Hα) ≡ 2β(Hα)/α(Hα)   (2.21)
where K denotes the Killing form (see paragraph 3.4). There is always a unique element Hα in the algebra such that K(H,Hα) = α(H) for each H ∈ H0 (see for example [35], Ch. 10). In general, for a linear form µ on the Lie algebra,

2α·µ/α² = 2µ(Hα)/α(Hα)   (2.22)

Then µ is a highest weight for some representation if and only if this expression is an integer for each positive root α.
Eq. (2.20) is true for any representation, but has particularly strong implications for the
adjoint representation. In this case µ is a root. As a consequence of eq. (2.20), the possible
angle between two root vectors of a simple Lie algebra is limited to a few values: these turn out to be multiples of π/6 and π/4 (see e.g. [36], Ch. VI). The root lattice is invariant under reflections in the hyperplanes orthogonal to the roots (the Weyl group). As we will shortly see, this is true not only for the root lattice, but for the weight lattice of any representation.
Note that the roots α are real–valued linear functionals on the Cartan subalgebra. Therefore they are in the space dual to H0. A subset of the positive roots spans the root lattice. These are called simple roots. Obviously, since the roots are in the space dual to H0, the number of simple roots is equal to the rank of the algebra.
The same relation (2.20) determines the highest weights of all irreducible representations. Setting p = 0, choosing a positive integer q, and letting α run through the simple roots, α = αi (i = 1,...,r), we find the highest weights µi of all the irreducible representations corresponding to the given value of q [36]. For example, for q = 1 we get the highest weights of the r fundamental representations of the group, each corresponding to a simple root αi. For higher values of q we get the highest weights of higher–dimensional representations of the same group.
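As a quick numerical illustration of this use of eq. (2.20) (a sketch added here for concreteness, using the SU(3,C) root and weight data worked out in the example of subsection 2.5; the function name `pairing` is ours), one can check that 2α·µ/α² is an integer for the fundamental highest weights, equal to q = 1 for one simple root and 0 for the other:

```python
import numpy as np

# Simple roots of A2 = su(3) and the two fundamental highest weights,
# taken from the SU(3,C) example of subsection 2.5
alpha1 = np.array([0.5,  np.sqrt(3)/2])
alpha2 = np.array([0.5, -np.sqrt(3)/2])
mu    = np.array([0.5,  1/(2*np.sqrt(3))])   # highest weight of D
mubar = np.array([0.5, -1/(2*np.sqrt(3))])   # highest weight of D-bar

def pairing(alpha, mu):
    """2 alpha.mu / alpha^2, the integer q - p of eq. (2.20)."""
    return 2*np.dot(alpha, mu)/np.dot(alpha, alpha)

# q = 1 with respect to one simple root, q = 0 with respect to the other
assert abs(pairing(alpha1, mu) - 1) < 1e-12
assert abs(pairing(alpha2, mu)) < 1e-12
assert abs(pairing(alpha1, mubar)) < 1e-12
assert abs(pairing(alpha2, mubar) - 1) < 1e-12
```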
The set of all possible simple root systems is classified by means of Dynkin diagrams, each of which corresponds to an equivalence class of isomorphic Lie algebras. The classical Lie algebras SU(n + 1,C), SO(2n + 1,C), Sp(2n,C) and SO(2n,C) correspond to root systems An, Bn, Cn, and Dn, respectively. In addition there are five exceptional algebras corresponding to root systems E6, E7, E8, F4 and G2. Each of these complex algebras in
general has several real forms associated with it (see section 4). These real forms correspond
to the same Dynkin diagram and root system as the complex algebra. Since we will not
make reference to Dynkin diagrams in the following, we will not discuss them here. The
interested reader can find sufficient material for example in the book by Georgi [36].
The (semi)simple complex algebra G decomposes into a direct sum of root spaces [35]:

G = H0 ⊕ (⊕α Gα)   (2.23)

where Gα is generated by {E±α}. This will be evident in the example given below.
Example: The root system An−1 corresponds to the complex Lie algebra SL(n,C) and all its real forms. In a later section we will see how to construct all the real forms associated with a given complex Lie algebra. Let’s see here explicitly how to construct the root lattice of SU(3,C), which is one of the real forms of SL(3,C).
The generators are determined by the commutation relations. In physics it is common to
write the commutation relations in the form
[Ti,Tj] = ifijkTk   (2.24)

(an alternative form is to define the generators as Xi = iTi and write the commutation relations as [Xi,Xj] = −fijkXk) where fijk are the structure constants of the algebra SU(3,C). Using the notation g = e^(itaTa) for the group elements (with ta real and a sum over a implied), the generators Ta in the fundamental representation of this group are hermitean4:
T1 = (1/2)[ 0 1 0 ; 1 0 0 ; 0 0 0 ],   T2 = (1/2)[ 0 −i 0 ; i 0 0 ; 0 0 0 ],   T3 = (1/2)[ 1 0 0 ; 0 −1 0 ; 0 0 0 ],

T4 = (1/2)[ 0 0 1 ; 0 0 0 ; 1 0 0 ],   T5 = (1/2)[ 0 0 −i ; 0 0 0 ; i 0 0 ],   T6 = (1/2)[ 0 0 0 ; 0 0 1 ; 0 1 0 ],

T7 = (1/2)[ 0 0 0 ; 0 0 −i ; 0 i 0 ],   T8 = (1/(2√3))[ 1 0 0 ; 0 1 0 ; 0 0 −2 ]   (2.25)

(here [ r1 ; r2 ; r3 ] denotes the 3 × 3 matrix with rows r1, r2, r3)
In high energy physics the matrices 2Ta are known as Gell–Mann matrices. The generators are normalized in such a way that tr(TaTb) = (1/2)δab. Note that T1, T2, T3 form an SU(2,C) subalgebra. We take the Cartan subalgebra to be H0 = {T3,T8}. The rank of this group is r = 2.
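These statements are easy to check numerically. The following sketch (our illustration, not part of the review) builds the generators of eq. (2.25) and verifies the normalization tr(TaTb) = (1/2)δab and that T3 and T8 commute:

```python
import numpy as np

# The eight su(3) generators T_a of eq. (2.25), half the Gell-Mann matrices
I = 1j
T = {}
T[1] = np.array([[0,1,0],[1,0,0],[0,0,0]])/2
T[2] = np.array([[0,-I,0],[I,0,0],[0,0,0]])/2
T[3] = np.array([[1,0,0],[0,-1,0],[0,0,0]])/2
T[4] = np.array([[0,0,1],[0,0,0],[1,0,0]])/2
T[5] = np.array([[0,0,-I],[0,0,0],[I,0,0]])/2
T[6] = np.array([[0,0,0],[0,0,1],[0,1,0]])/2
T[7] = np.array([[0,0,0],[0,0,-I],[0,I,0]])/2
T[8] = np.array([[1,0,0],[0,1,0],[0,0,-2]])/(2*np.sqrt(3))

# Normalization tr(Ta Tb) = delta_ab / 2
for a in range(1, 9):
    for b in range(1, 9):
        want = 0.5 if a == b else 0.0
        assert abs(np.trace(T[a] @ T[b]) - want) < 1e-12

# T3 and T8 commute: they span the Cartan subalgebra, so the rank is 2
assert np.allclose(T[3] @ T[8] - T[8] @ T[3], 0)
```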
Let’s first find the weight vectors of the fundamental representation. To this end we look for the eigenvalues µi of the operators in the abelian subalgebra H0:
4Note that we have written an explicit factor of i in front of the generators in the expression for the
group elements. This is often done for compact groups; since the Killing form (subsection 3.4) has to
be negative definite, the coordinates of the algebra spanned by the generators must be purely imaginary.
Here we use this notation because it is conventional. If we absorb the factor of i into the generators, we
get antihermitean matrices Xa = iTa; we will do this in the example in subsection 3.1 to comply with
eq. (3.1). Of course, the matrices in the algebra are always antihermitean.
T3 (1,0,0)^T = (1/2) (1,0,0)^T,   T8 (1,0,0)^T = (1/(2√3)) (1,0,0)^T   (2.26)

therefore the eigenvector (100)^T corresponds to the state |µ⟩ where

µ ≡ (µ1,µ2) = (1/2, 1/(2√3))   (2.27)
is distinguished by its eigenvalues under the operators Hi of the Cartan subalgebra. In the same way we find that (010)^T and (001)^T correspond to the states labelled by weight vectors

µ′ = (−1/2, 1/(2√3)),   µ′′ = (0, −1/√3)   (2.28)

respectively. µ, µ′, and µ′′ are the weights of the fundamental representation ρ = D and they form an equilateral triangle in the plane. The highest weight of the representation D is µ = (1/2, 1/(2√3)).
There is also another fundamental representation D̄ of the algebra SU(3,C), since it generates a group of rank 2. Indeed, from eq. (2.20), for p = 0, q = 1, there is one highest weight µi, and one fundamental representation, for each simple root αi. The highest weight µ̄ of the representation D̄ is

µ̄ = (1/2, −1/(2√3))   (2.29)
The highest weights of the representations corresponding to any positive integer q can be
obtained as soon as we know the simple roots. Then, by operating with lowering operators
on this weight, we obtain other weights, on which we can further operate with lowering
operators until we have obtained all the weights in the representation. For an example of
this procedure see [36], Ch. IX.
Let’s see now how to obtain the roots of SU(3,C). Each root vector Eα corresponds to either a raising or a lowering operator: Eα is the eigenvector belonging to the root αi ≡ α(Hi) under the adjoint representation of Hi, as in eq. (2.32). Each raising or lowering operator is a linear combination of generators Ti that takes one state of the fundamental representation to another state of the same representation: E±α|µ⟩ = N±α,µ|µ ± α⟩. Therefore the root vectors α will be differences of weight vectors in the fundamental representation. We find the raising and lowering operators E±α to be
E±(1,0) = (1/√2)(T1 ± iT2)
E±(1/2, √3/2) = (1/√2)(T4 ± iT5)
E±(−1/2, √3/2) = (1/√2)(T6 ± iT7)   (2.30)
These generate the subspaces Gα in eq. (2.23). In the fundamental representation, we find using the Gell–Mann matrices that these are matrices with only one non–zero element. For example, the raising operator Eα that corresponds to the root α = (1,0) is
E+(1,0) = (1/√2)[ 0 1 0 ; 0 0 0 ; 0 0 0 ]   (2.31)
This operator takes us from the state |µ′⟩ = |−1/2, 1/(2√3)⟩ to the state |µ⟩ = |1/2, 1/(2√3)⟩. The components of the root vectors of SU(3,C) are the eigenvalues αi of these under the adjoint representation of the Cartan subalgebra. That is,
Hi|Eα⟩ ≡ adHi(Eα) ≡ [Hi,Eα] = αi|Eα⟩   (2.32)
This way we easily find the roots: we can either explicitly use the structure constants of SU(3) in [Ta,Tb] = ifabcTc = −iC^c_ab Tc (note the explicit factor of i due to our conventions regarding the generators) or we can use an explicit representation for Hi, Eα like in eqs. (2.25), (2.30), (2.31), to calculate the commutators:
adH1(E±(1,0)) = [H1,E±(1,0)] = [T3, (1/√2)(T1 ± iT2)] = (1/√2)(iT2 ± T1) = ±E±(1,0) ≡ α±1 E±(1,0)

adH2(E±(1,0)) = [H2,E±(1,0)] = [T8, (1/√2)(T1 ± iT2)] = 0 ≡ α±2 E±(1,0)   (2.33)
The root vector corresponding to the raising operator E+(1,0) is thus α = (α+1, α+2) = (1,0), and the root vector corresponding to the lowering operator E−(1,0) is −α = (α−1, α−2) = (−1,0). These root vectors are indeed the differences between the weight vectors µ = (1/2, 1/(2√3)) and µ′ = (−1/2, 1/(2√3)) of the fundamental representation.
In the same way we find the other root vectors (±1/2, ±√3/2), (∓1/2, ±√3/2), and (0,0) (with multiplicity 2), by operating with H1 and H2 on the remaining E±α’s and on the Hi’s. The last root, with multiplicity 2, has as its components the eigenvalues under H1, H2 of the states |H1⟩ and |H2⟩: Hi|Hj⟩ = [Hi,Hj] = 0; i, j ∈ {1,2}. The root vectors form a regular hexagon in the plane. The positive roots are (1,0), α1 = (1/2, √3/2), and α2 = (1/2, −√3/2). The latter two are simple roots. (1,0) is not simple because it is the sum of the other positive roots. There are two simple roots, since the rank of SU(3) is 2 and the root lattice is two–dimensional.
The root lattice of SU(3) is invariant under reflections in the hyperplanes orthogonal to the
root vectors. This is true of any weight or root lattice; the symmetry group of reflections in
hyperplanes orthogonal to the roots is called the Weyl group. It is obtained from eq. (2.20):
since for any root α and any weight µ, 2(α·µ)/α² is the integer q − p,

µ′ = µ − (2(α·µ)/α²) α   (2.34)

is also a weight. Eq. (2.34) is exactly the above mentioned reflection, as can easily be seen.
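The invariance of the SU(3) root hexagon under these reflections is simple to verify in a few lines. The sketch below (an illustration we add; `weyl_reflect` is our name for the map of eq. (2.34)) checks that every Weyl reflection maps the root system onto itself:

```python
import numpy as np

def weyl_reflect(mu, alpha):
    """Eq. (2.34): reflect mu in the hyperplane orthogonal to the root alpha."""
    return mu - 2*np.dot(alpha, mu)/np.dot(alpha, alpha)*alpha

# The six nonzero roots of SU(3): a regular hexagon in the plane
roots = [np.array(v) for v in
         [(1, 0), (-1, 0),
          ( 0.5,  np.sqrt(3)/2), (-0.5, -np.sqrt(3)/2),
          (-0.5,  np.sqrt(3)/2), ( 0.5, -np.sqrt(3)/2)]]

# The root system is mapped onto itself by every Weyl reflection
for a in roots:
    for m in roots:
        image = weyl_reflect(m, a)
        assert any(np.allclose(image, r) for r in roots)
```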
2.6 The Weyl chambers
The roots are linear functionals on the Cartan subalgebra. We may denote the Cartan subalgebra by H0 and its dual space by H∗0. A Weyl reflection like the one in (2.34) can be defined not only for the weights or roots µ in the space H∗0, but for an arbitrary vector q ∈ H∗0 or, in all generality, for a vector q in an arbitrary finite–dimensional vector space:

sα(q) = q − α∗(q)α   (2.35)

Note that q ∈ H∗0 is in the space dual to H0 and may denote a root. In (2.35) the function α∗(q) is a linear functional on H∗0 such that α∗(α) = 2. We will be concerned only with the crystallographic case, when α∗(q) is an integer. We denote the hyperplanes in H∗0 where the function α∗(q) vanishes by H(α):
H(α) = {q ∈ H∗0 : α∗(q) = 0}   (2.36)
H(α) is orthogonal to the root α, and sα(q) is a reflection in this hyperplane.
By identifying the dual spaces H0 and H∗0 (this is possible since they have the same dimension), we can consider hyperplanes like the ones in (2.36) in the space H0. The role of the linear functional α∗(q) is then played by

α∗(q) = 2q·α/α² = 2q(Hα)/α(Hα)   (2.37)
where α(Hα) = K(Hα,Hα). Here K is the Killing form (a metric on the algebra to be defined in paragraph 3.4) and Hα is the unique element in H0 such that K(H,Hα) = α(H). The open subsets of H0 where all roots are nonzero are called Weyl chambers. Consequently, the walls of the Weyl chambers are the hyperplanes in H0 where the roots α(H) vanish.
2.7 The simple root systems
We have just shown by an example, in subsection 2.5, how to obtain a root system of type An. In general, for any simple algebra the commutation relations determine the Cartan subalgebra and raising and lowering operators, which in turn determine a unique root system, and correspond to a given Dynkin diagram. In this way we can classify all the simple algebras according to the type of root system they possess. The root systems for the four infinite series of classical non–exceptional Lie groups can be characterized as follows [36] (denote the r–dimensional space spanned by the roots by V and let {e1,...,en} be a canonical basis in Rn):
An−1: Let V be the hyperplane in Rn that passes through the points (1,0,0,...,0), (0,1,0,...,0), ..., (0,0,...,0,1) (the endpoints of the ei, i = 1,...,n). Then the root lattice contains the vectors {ei − ej, i ≠ j}.

Bn: Let V be Rn; then the roots are {±ei, ±ei ± ej, i ≠ j}.

Cn: Let V be Rn; then the roots are {±2ei, ±ei ± ej, i ≠ j}.

Dn: Let V be Rn; then the roots are {±ei ± ej, i ≠ j}.
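These four families (and the non-reduced BCn system introduced below) are easy to enumerate programmatically. The sketch below (our illustration; the function `root_system` and its labels are names of our choosing) generates the root vectors in the canonical basis and counts them for rank 3:

```python
import itertools
import numpy as np

def root_system(kind, n):
    """Roots of A_{n-1} ('A'), B_n, C_n, D_n, BC_n in the canonical basis of R^n."""
    e = np.eye(n)
    if kind == 'A':   # {e_i - e_j, i != j}; these lie in a hyperplane of R^n
        return [e[i] - e[j] for i in range(n) for j in range(n) if i != j]
    R = [s*e[i] + t*e[j]                      # ±e_i ± e_j, i != j: common core
         for i, j in itertools.combinations(range(n), 2)
         for s in (1, -1) for t in (1, -1)]
    extra = {'B': (1,), 'C': (2,), 'D': (), 'BC': (1, 2)}[kind]
    R += [s*c*e[i] for i in range(n) for c in extra for s in (1, -1)]
    return R

counts = {kind: len(root_system(kind, 3)) for kind in ('A', 'B', 'C', 'D', 'BC')}
print(counts)   # {'A': 6, 'B': 18, 'C': 18, 'D': 12, 'BC': 24}
```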
The root lattice BCn, which we will discuss in conjunction with restricted root systems, is the union of Bn and Cn. It is characterized as follows:

BCn: Let V be Rn; then the roots are {±ei, ±2ei, ±ei ± ej, i ≠ j}.

Because this system contains both ei and 2ei, it is called non–reduced (normally the only root collinear with α is −α). However, it is irreducible in the usual sense, which means it is not the direct sum of two disjoint root systems Bn and Cn. This can be seen from the root multiplicities (cf. Table 1).
The semisimple algebras are direct sums of simple ones. That means the simple constituent
algebras commute with each other, and the root systems are direct sums of the correspond-
ing simple root systems. Therefore, knowing the properties of the simple Lie algebras, we
also know the semisimple ones.
3 Symmetric spaces
In the previous section, we have reminded ourselves of some elementary facts concerning
root spaces and the classification of the complex semisimple algebras. In this section we
will define and discuss symmetric spaces.
A symmetric space is associated to an involutive automorphism of a given Lie algebra.
As we will see, several different involutive automorphisms can act on the same algebra.
Therefore we normally have several different symmetric spaces deriving from the same Lie
algebra. The involutive automorphism defines a symmetric subalgebra and a remaining
complementary subspace of the algebra. Under general conditions, the complementary
subspace is mapped onto a symmetric space through the exponential map. In the following
subsections we make these statements more precise. We discuss how the elements of the Lie
group can act as transformations on the elements of the symmetric space. This naturally
leads to the definition of two coordinate systems on symmetric spaces: the spherical and
the horospheric coordinate systems. The radial coordinates associated to each element of
a symmetric space through its spherical or horospheric decomposition will be of relevance
when we discuss the radial parts of differential operators on symmetric spaces in section
6. In the same section we explain why these operators are important in applications to
physical problems, and in Part II we will discuss some of their uses.
In all of this paper we will distinguish between compact and non–compact symmetric
spaces. In order to give a precise notion of compactness, we will define the metric tensor
on a Lie algebra in terms of the Killing form in subsection 3.4. The latter is defined as a
symmetric bilinear trace form on the adjoint representation, and is therefore expressible
in terms of the structure constants. We will give several examples of Killing forms later,
as we discuss the various real forms of a Lie algebra. The metric tensor serves to define
the curvature tensor on a symmetric space (subsection 5.1). It is also needed in computing
the Jacobian of the transformation to radial coordinates. This Jacobian is relevant in
calculating the radial part of the Laplace–Beltrami operator (see paragraph 6.2).
We will close this section with a discussion of the general algebraic form of coset represen-
tatives in subsection 3.5.
3.1 Involutive automorphisms
An automorphism of a Lie algebra G is a mapping from G onto itself such that it preserves
the algebraic operations on the Lie algebra. For example, if σ is an automorphism, it
preserves multiplication: [σ(X),σ(Y )] = σ([X,Y ]), for X, Y ∈ G.
Suppose that the linear automorphism σ : G → G is such that σ2= 1, but σ is not the
identity. That means that σ has eigenvalues ±1, and it splits the algebra G into orthogonal
eigensubspaces corresponding to these eigenvalues. Such a mapping is called an involutive
automorphism.
Suppose now that G is a compact simple Lie algebra, σ is an involutive automorphism of
G, and G = K ⊕ P where
σ(X) = X for X ∈ K,   σ(X) = −X for X ∈ P   (3.1)
From the properties of automorphisms mentioned above, it is easy to see that K is a
subalgebra, but P is not. In fact, the commutation relations
[K,K] ⊂ K,   [K,P] ⊂ P,   [P,P] ⊂ K   (3.2)
hold. A subalgebra K satisfying (3.2) is called a symmetric subalgebra. If we now multiply
the elements in P by i (the “Weyl unitary trick”), we construct a new noncompact algebra
G∗= K ⊕ iP. This is called a Cartan decomposition, and K is a maximal compact
subalgebra of G∗. The coset spaces G/K and G∗/K are symmetric spaces.
Example: Suppose G = SU(n,C), the group of unitary complex matrices with determinant +1. The algebra of this group then consists of complex antihermitean5 matrices of zero trace (this follows by differentiating the identities UU† = 1 and det U = 1 with respect to t, where U(t) is a curve passing through the identity at t = 0); a group element is written as g = e^(taXa) with ta real. Therefore any matrix X in the Lie algebra of this group can
as g = etaXawith tareal. Therefore any matrix X in the Lie algebra of this group can
be written X = A + iB, where A is real, skew–symmetric, and traceless and B is real,
symmetric and traceless. This means the algebra can be decomposed as G = K⊕P, where
K is the compact connected subalgebra SO(n,R) consisting of real, skew–symmetric and
traceless matrices, and P is the subspace of matrices of the form iB, where B is real,
symmetric, and traceless. P is not a subalgebra.
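This Cartan decomposition of su(n) can be exhibited numerically. The sketch below (an illustration we add) builds a generic antihermitean traceless X and splits it as X = A + iB with A real skew–symmetric and B real symmetric traceless:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A generic element X of su(n): antihermitean (X^dag = -X) and traceless
M = rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))
X = (M - M.conj().T)/2
X = X - (np.trace(X)/n)*np.eye(n)    # trace of X is purely imaginary

# The split X = A + iB of the example: A real skew-symmetric (in K = so(n)),
# B real symmetric and traceless (so that iB lies in the subspace P)
A, B = X.real, X.imag
assert np.allclose(A, -A.T)
assert np.allclose(B, B.T)
assert abs(np.trace(B)) < 1e-12
assert np.allclose(A + 1j*B, X)
```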
Referring to the example for SU(3,C) in subsection 2.5 we see, setting Xa = iTa, that the {Xa} split into two sets under the involutive automorphism σ defined by complex conjugation, σ = K. This splits the compact algebra G into K ⊕ P, since P consists of imaginary matrices:
K = {X2,X5,X7} = { (1/2)[ 0 1 0 ; −1 0 0 ; 0 0 0 ],  (1/2)[ 0 0 1 ; 0 0 0 ; −1 0 0 ],  (1/2)[ 0 0 0 ; 0 0 1 ; 0 −1 0 ] }

P = {X1,X3,X4,X6,X8} = { (i/2)[ 0 1 0 ; 1 0 0 ; 0 0 0 ],  (i/2)[ 1 0 0 ; 0 −1 0 ; 0 0 0 ],  (i/2)[ 0 0 1 ; 0 0 0 ; 1 0 0 ],  (i/2)[ 0 0 0 ; 0 0 1 ; 0 1 0 ],  (i/(2√3))[ 1 0 0 ; 0 1 0 ; 0 0 −2 ] }   (3.3)
K spans the real subalgebra SO(3,R). Setting X2 ≡ L3, X5 ≡ L2, X7 ≡ L1, the commutation relations for the subalgebra are [Li,Lj] = (1/2)ǫijkLk. The Cartan subalgebra iH0 = {X3,X8} is here entirely in the subspace P.
Going back to the general case of G = SU(n,C), we obtain from G by the Weyl unitary trick the non–compact algebra G∗ = K ⊕ iP. iP is now the subspace of real, symmetric, and traceless matrices B. The Lie algebra G∗ = SL(n,R) is then the set of n × n real matrices of zero trace, and generates the linear group of transformations represented by real n × n matrices of unit determinant.

The involutive automorphism that split the algebra G above was defined to be complex conjugation σ = K. The involutive automorphism that splits G∗ is defined by σ̃(g) = (g^T)−1 for g ∈ G∗, as we will now see. On the level of the algebra, σ̃(g) = (g^T)−1 means
5See the footnote in subsection 2.5.
σ̃(X) = −X^T. Suppose now g = e^(tX) ∈ G∗ with X real and traceless and t a real parameter. If X is an element of the subalgebra K, we then have σ̃(X) = +X, i.e. −X^T = X and X is skew–symmetric. If instead X ∈ iP, we have σ̃(X) = −X^T = −X, i.e. X is symmetric. The decomposition G∗ = K ⊕ iP is the usual decomposition of an SL(n,R) matrix into symmetric and skew–symmetric parts.
G/K = SU(n,C)/SO(n,R) is a symmetric space of compact type, and the related sym-
metric space of non–compact type is G∗/K = SL(n,R)/SO(n,R).
3.2 The action of the group on the symmetric space
Let G be a semisimple Lie group and K a compact symmetric subgroup. As we saw in
the preceding paragraph, the coset spaces G/K and G∗/K represent symmetric spaces.
Just as we have defined a Cartan subalgebra and the rank of a Lie algebra, we can define,
in an exactly analogous way, a Cartan subalgebra and the rank of a symmetric space. A
Cartan subalgebra of a symmetric space is a maximal abelian subalgebra of the subspace
P (see paragraph 5.2), and the rank of a symmetric space is the number of generators in
this subalgebra.
If G is connected and G = K⊕P where K is a compact symmetric subalgebra, then each
group element can be decomposed as g = kp (right coset decomposition) or g = pk (left
coset decomposition), with k ∈ K = eK, p ∈ P = eP. P is not a subgroup, unless it is
abelian and coincides with its Cartan subalgebra. However, if the involutive automorphism
that splits the algebra is denoted σ, one can show ([37], Ch. 6) that gpσ(g−1) ∈ P. This defines G as a transformation group on P. Since σ(k−1) = k−1 for k ∈ K, this means

p′ = kpk−1 ∈ P   (3.4)
if k ∈ K, p ∈ P. Now suppose there are no other elements in G that satisfy σ(g) = g than
those in K. This will happen if the set of elements satisfying σ(g) = g is connected. Then
P is isomorphic to G/K. Also, G acts transitively on P in the manner defined above (cf.
subsection 2.3). The tangent space of G/K at the origin (identity element) is spanned by
the subspace P of the algebra.
3.3 Radial coordinates
In this paragraph we define two coordinate systems frequently used on symmetric spaces.
Let G = K ⊕ P be a Cartan decomposition of a semisimple algebra and let H0⊂ P be a
maximal abelian subalgebra in the subspace P. Define M to be the subgroup of elements
in K such that
M = {k ∈ K : kHk−1 = H, H ∈ H0}   (3.5)
This set is called the centralizer of H0in K. Under conjugation by k ∈ K, each element
H of the Cartan subalgebra is preserved. Further, denote
M′ = {k ∈ K : kHk−1 = H′, H, H′ ∈ H0}   (3.6)
This is a larger subgroup than M that preserves the Cartan subalgebra as a whole, but
not necessarily each element separately, and is called the normalizer of H0in K. If K is
a compact symmetric subgroup of G, one can show ([37], Ch. 6) that every element p of P ≃ G/K is conjugate to some element h = e^H, for some H ∈ H0, by means of the adjoint representation6 of the stationary subgroup K:
where k ∈ K/M and H is defined up to the elements in the factor group M′/M. This factor group coincides with the Weyl group that was defined in eq. (2.34): since the space H0 can be identified with its dual space H∗0, we can identify M′/M with the Weyl group of the restricted root system (see paragraph 5.2). The effect of the Weyl group is to transform the algebra H0 ⊂ P into another Cartan subalgebra H′0 ⊂ P conjugate with the original one. This amounts to a permutation of the roots of the restricted root lattice corresponding to a Weyl reflection. Equation (3.8) means that every element g ∈ G can be decomposed as g = pk = k′hk′−1k = k′hk′′, and this is very much like the Euler angle decomposition of SO(n).
Thus, if x0 is the fixed point of the subgroup K, an arbitrary point x ∈ P can be written

6Note that

e^K H e^(−K) = e^(adK) H ≡ Σ_{n=0}^∞ ((adK)^n/n!) H   (3.7)
x = khk−1 x0 = khx0   (3.9)
The coordinates (k(x),h(x)) are called spherical coordinates. k(x) is the angular coordinate
and h(x) is the spherical radial coordinate of the point x. Eq. (3.8) defines the so called
spherical decomposition of the elements in the coset space. Of course, a similar reasoning
is true for the space P∗≃ G∗/K.
This means every matrix p in the coset space G/K can be diagonalized by a similarity
transformation by the subgroup K, and the radial coordinates are exactly the set of eigen-
values of the matrix p. These “eigenvalues” are not necessarily real numbers. This is easily
seen in the example in eq. (3.3). It can also be seen in the adjoint representation. Suppose
the algebra G = K⊕P is compact. From eq. (2.13), in the adjoint representation Hi∈ H0
has the form
Hi = diag(0, ..., 0, αi, −αi, ..., ηi, −ηi)   (3.10)
where the matrix is determined by the structure constants ([Hi,Hj] = 0, [Hi,E±α] = ±αiE±α, ..., and ±αi,...,±ηi are the roots corresponding to Hi). Since the Killing form must be negative (see subsection 3.4) for a compact algebra, the coordinates of the Cartan subalgebra must be purely imaginary and the group elements corresponding to H0 must have the form
e^(it·H) = diag(1, ..., 1, e^(it·α), ..., e^(−it·η))   (3.11)
with t = (t1,t2,...,tr) and ti real parameters. In particular, while the eigenvalues are real for p ∈ P∗, they are complex numbers for p ∈ P.
Example: In the example we gave in the preceding subsection, the coset space G∗/K = SL(n,R)/SO(n) ≃ P∗ = e^(iP) consists of real positive–definite symmetric matrices. Note that G = K ⊕ P implies that G can be decomposed as G = PK and G∗ as G∗ = P∗K. The decomposition G∗ = P∗K in this case is the decomposition of an SL(n,R) matrix into a positive–definite symmetric matrix and an orthogonal one. Each positive–definite symmetric matrix can be further decomposed: it can be diagonalized by an SO(n) similarity transformation. This is the content of eq. (3.8) for this case, and we know it to be true from linear algebra. Similarly, according to eq. (3.8) the complex symmetric matrices in G/K = SU(n,C)/SO(n) ≃ P = e^P can be diagonalized by the group K = SO(n) to a form where the eigenvalues are similar to those in eq. (3.11).
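For the non–compact case this is just the spectral theorem, which the following sketch (our illustration) makes explicit: it exponentiates a real symmetric traceless matrix into P∗ and recovers the decomposition p = khk−1 of eq. (3.8) with k ∈ SO(n) and real positive radial coordinates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# An element of iP: a real symmetric traceless matrix B ...
S = rng.standard_normal((n, n))
B = (S + S.T)/2
B = B - (np.trace(B)/n)*np.eye(n)

# ... exponentiated into P* = exp(iP): positive-definite symmetric, det = 1
lam, V = np.linalg.eigh(B)
p = V @ np.diag(np.exp(lam)) @ V.T

# Eq. (3.8): p = k h k^{-1} with k orthogonal and h diagonal with
# real positive entries (the radial coordinates)
w, k = np.linalg.eigh(p)
h = np.diag(w)
assert np.allclose(k @ h @ k.T, p)
assert np.all(w > 0)
assert abs(np.linalg.det(p) - 1) < 1e-9
```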
In terms of the subspace P of the algebra, eq. (3.8) amounts to saying that any two Cartan subalgebras H0, H′0 of the symmetric space are conjugate under a similarity transformation by K, and we can choose the Cartan subalgebra in any way we please. However, the number of elements that we can diagonalize simultaneously will always be equal to the rank of the symmetric space.
There is also another coordinate system valid only for spaces of the type P∗∼ G∗/K. This
coordinate system is called horospheric and is based on the so called Iwasawa decomposition
[37] of the algebra:
G = N+ ⊕ H0 ⊕ K   (3.12)

Here K, H0, N+ are three subalgebras of G. K is a maximal compact subalgebra, H0 is a Cartan subalgebra, and

N+ = ⊕_{α∈R+} G′α   (3.13)

is an algebra of raising operators corresponding to the positive roots α(H) > 0 with respect to H0 (G′α is the space generated by Eα). As a consequence, the group elements can be decomposed g = nhk, in an obvious notation. This means that if x0 is the fixed point of K, any point x ∈ G∗/K can be written
x = nhkx0 = nhx0   (3.14)
The coordinates (n(x),h(x)) are called horospheric coordinates and the element h = h(x)
is called the horospheric projection of the point x or the horospheric radial coordinate.
3.4 The metric on a Lie algebra
A metric tensor can be defined on a Lie algebra [30, 31, 35, 37]. For our purposes, it will
eventually serve to define the curvature of a symmetric space and be useful in computing
the Jacobian of the transformation to radial coordinates. In sections 6 and 8 we will see
the importance of this Jacobian in physical applications in connection with the radial part
of the Laplace–Beltrami operator.
If {Xi} form a basis for the Lie algebra G, the metric tensor is defined by
gij = K(Xi,Xj) ≡ tr(adXi adXj) = C^r_is C^s_jr   (3.15)
The symmetric bilinear form K(Xi,Xj) is called the Killing form. It is intrinsically asso-
ciated with the Lie algebra, and since the Lie bracket is invariant under automorphisms of
the algebra, so is the Killing form.
Example: The generators X7 ≡ L1, X5 ≡ L2, X2 ≡ L3 of SO(3) given in eq. (3.3) obey the commutation relations [Li,Lj] = C^k_ij Lk = (1/2)ǫijkLk. From eq. (3.15), the metric for this algebra is gij = −(1/2)δij. The generators and the structure constants can be normalized so that the metric takes the canonical form gij = −δij.
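The contraction in eq. (3.15) is a one-line computation. The sketch below (our illustration) assembles the structure constants of the example and recovers gij = −(1/2)δij:

```python
import numpy as np

# Structure constants C^k_ij = (1/2) eps_ijk of [L_i, L_j] = (1/2) eps_ijk L_k;
# storage convention: C[a, b, c] = C^c_ab
C = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    C[i, j, k], C[j, i, k] = 0.5, -0.5

# g_ij = C^r_is C^s_jr, eq. (3.15): the trace of ad X_i ad X_j
g = np.einsum('isr,jrs->ij', C, C)
assert np.allclose(g, -0.5*np.eye(3))
```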
Just as we defined the Killing form K(Xi,Xj) for the algebra G in eq. (3.15) using the adjoint representation, we can define a similar trace form Kρ and a metric tensor gρ for any representation ρ by

gρ,ij = Kρ(Xi,Xj) = tr(ρ(Xi)ρ(Xj))   (3.16)

where ρ(X) is the matrix representative of the Lie algebra element X. If ρ is an automorphism of G, Kρ(Xi,Xj) = K(Xi,Xj).
Suppose the Lie algebra is semisimple (this is true for all the classical Lie algebras except the Lie algebras GL(n,C), U(n,C)). According to Cartan’s criterion, the Killing form is non–degenerate for a semisimple algebra. This means that det gij ≠ 0, so that the inverse of gij, denoted by g^ij, exists. Since it is also real and symmetric, it can be reduced to
canonical form gij = diag(−1,...,−1, 1,...,1) with p entries −1 and (n − p) entries +1, where n is the dimension of the algebra.
p is an invariant of the quadratic form. In fact, for any real form of a complex algebra, the
trace of the metric, called the character of the particular real form (see below and in [31])
distinguishes the real forms from each other (though it can be degenerate for the classical
Lie algebras [31]). The character ranges from −n, where n is the dimension of the algebra,
to +r, where r is its rank. All the real forms of the algebra have a character that lies
in between these values. In subsection 4.1 we will see several explicit examples of Killing
forms.
A famous theorem by Weyl states that a simple Lie group G is compact if and only if the Killing form on G is negative definite. Otherwise it is non–compact. This is actually quite intuitive and natural (see [31], Ch. 9, paragraph I.2). On a compact algebra, the metric can be chosen to be minus the Killing form, if it is required to be positive–definite.
The metric on the Lie algebra can be extended to the whole coset space P ≃ G/K,
P∗≃ G∗/K as follows. At the origin of G/K and G∗/K, the identity element I, the
metric is identified with the metric in the algebra, restricted to the respective tangent
spaces P, iP. Since the group acts transitively on the coset space (cf. paragraph 2.3), and
the orbit of the origin is the entire space, we can use a group transformation to map the
metric at the origin to any point M in the space. The metric tensor at M will depend on
the coset representative M. It is given by
grs(M) = gij(I) (∂x^i(I)/∂x^r(M)) (∂x^j(I)/∂x^s(M))   (3.17)
where gij(I) is the metric at the origin (identity element) of the coset space. (3.17) follows from the invariance of the line element ds² = gij dx^i dx^j under translations. If {Xi} is a basis in the tangent space, and dM = exp(dx^i Xi) is a coset representative infinitesimally close to the identity, we need to know how dx^i transforms under translations by the coset representative M. We will not discuss that here, but some generalities can be found for example in Ch. 9, paragraph V.4 of ref. [31]. In general, it is not an easy problem unless the coset has rank 1.
Example: The line element ds² on the radius–1 2–sphere SO(3)/SO(2) in polar coordinates is ds² = dθ² + sin²θ dφ². The metric at the point (θ,φ) is

gij = [ 1 0 ; 0 sin²θ ],   g^ij = [ 1 0 ; 0 sin⁻²θ ]   (3.18)
where the rows and columns are labelled in the order θ, φ.
The distance between points on the symmetric space is defined as follows. The length of a vector X = Σi t^i Xi in the tangent space P (this object is well–defined because P is endowed with a definite metric) is identified with the length of the geodesic connecting the identity element in the coset space with the element M = exp(X) [31].
3.5 The algebraic structure of symmetric spaces
Except for the two algebras SL(n,R) and SU∗(2n) (and their dual spaces related by the Weyl trick), for which the subspace representatives of K, P and iP consist of square, irreducible matrices (for SL(n,R), we saw this in the example in subsection 3.1 and for SU(n,C) explicitly in eq. (3.3)), the matrix representatives of the subalgebra K and of the subspaces P and iP in the fundamental representation consist of block–diagonal matrices X ∈ K, Y ∈ P, Y′ ∈ iP of the form [31]

X = [ A 0 ; 0 B ],   Y = [ 0 C ; −C† 0 ],   Y′ = [ 0 C̃ ; C̃† 0 ]   (3.19)
in the Cartan decomposition. Here A† = −A, B† = −B and C̃ = iC. In fact, for any finite–dimensional representation, the matrix representatives of K and P are antihermitean (thus they become antisymmetric if the representation of P is real) and as a consequence, those of iP are hermitean (symmetric in case the representation of iP is real) [31]. This is true irrespective of whether the matrix representatives are block–diagonal or square.
The exponential maps of the subspaces P and iP are isomorphic to coset spaces G/K
and G∗/K, respectively (see for example [30, 37]). The exponential map of the algebra
maps the subspaces P and iP into unitary and hermitean matrices, respectively. In the
fundamental representation, these spaces are mapped onto [31]
exp(P) = exp ( 0  C ; −C†  0 ) = ( √(I − XX†)   X ; −X†   √(I − X†X) )

exp(iP) = exp ( 0  C̃ ; C̃†  0 ) = ( √(I + X̃X̃†)   X̃ ; X̃†   √(I + X̃†X̃) )    (3.20)

where X is a spherical and X̃ a hyperbolic function of the submatrix C (C̃):

X = C sin√(C†C) / √(C†C),    X̃ = C̃ sinh√(C̃†C̃) / √(C̃†C̃)    (3.21)
This shows explicitly that the range of parameters parametrizing the two cosets is bounded
for the compact coset and unbounded for the non–compact coset, respectively. We already
saw an explicit example of these formulas in subsection 2.3.
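A numerical sketch of eqs. (3.20)-(3.21) (ours, not from the review; numpy and scipy assumed): exponentiate a random block matrix in P and compare with the closed-form spherical expression. The helper herm_func applies a scalar function to a hermitean matrix through its eigendecomposition.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
p = q = 2
# small random p x q block so that all angles stay below pi/2
C = (rng.standard_normal((p, q)) + 1j*rng.standard_normal((p, q))) / 4

def herm_func(M, f):
    # apply the scalar function f to the hermitean matrix M via eigendecomposition
    w, V = np.linalg.eigh(M)
    return (V * f(w)) @ V.conj().T

Z = np.block([[np.zeros((p, p)), C],
              [-C.conj().T, np.zeros((q, q))]])

# X = C sin(sqrt(C^dag C))/sqrt(C^dag C); note np.sinc(x) = sin(pi x)/(pi x)
X = C @ herm_func(C.conj().T @ C, lambda w: np.sinc(np.sqrt(w)/np.pi))
top = herm_func(np.eye(p) - X @ X.conj().T, np.sqrt)   # sqrt(I - X X^dag)
bot = herm_func(np.eye(q) - X.conj().T @ X, np.sqrt)   # sqrt(I - X^dag X)

M = np.block([[top, X], [-X.conj().T, bot]])
print(np.allclose(expm(Z), M))  # True
```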
4 Real forms of semisimple algebras
In this section we will introduce the tools needed to find all the real forms of any (semi)simple
algebra. The same tools will then be used in the next section to find the real forms of a
symmetric space. When thinking of a real form, it is convenient to visualize it in terms of
its metric. As we saw in paragraph 3.4 the trace of the metric is called the character of the
real form and it distinguishes the real forms from each other. In the following subsection
we discuss various real forms of an algebra and we see how to go from one form to another.
In each case, we compute the metric and the character explicitly. We also give the simplest
possible example of this procedure, the rank–1 algebra. In subsection 4.2 we enumerate
the involutive automorphisms needed to classify all real forms of semisimple algebras and
again, we illustrate it with two examples.
4.1 The real forms of a complex algebra
In general a semisimple complex algebra has several distinct real forms. Recall from sub-
section 2.5 that a real form of an algebra is obtained by taking linear combinations of its
elements with real coefficients. The real forms of the complex Lie algebra

G = Σ_i c_i H_i + Σ_α c_α E_α    (c_i, c_α complex),    (4.1)

where H_0 = {H_i} is the Cartan subalgebra and {E_±α} are the sets of raising and lowering
operators, can be classified according to all the involutive automorphisms of G obeying
σ^2 = 1. Two distinctive real forms are the normal real form and the compact real form.
The normal real form of the algebra (4.1), which is also the least compact real form,
consists of the subspace in which the coefficients c_i, c_α are real. The metric in this case
with respect to the basis {H_i, E_±α} is (with appropriate normalization of the elements of
the Lie algebra to make the entries of the metric equal to ±1)

g_ij = diag( 1, ..., 1, ( 0 1 ; 1 0 ), ..., ( 0 1 ; 1 0 ) )    (4.2)

where the r 1's on the diagonal correspond to the elements of the Cartan subalgebra (r is
obviously the rank of the algebra), and the 2 × 2 matrices on the diagonal correspond to
the pairs E_±α of raising and lowering operators. This structure reflects the decomposition
of the algebra G into a direct sum of the root spaces: G = H_0 ⊕ Σ_α G_α. This metric
tensor can be transformed to diagonal form if we choose the generators to be
K = { (E_α − E_−α)/√2 },    iP = { H_i, (E_α + E_−α)/√2 }    (4.3)
Example: In our example with SU(3,C), K and iP are exactly the subspaces spanned by
{X_2, X_5, X_7} and {iX_1, iX_3, iX_4, iX_6, iX_8} (cf. eq. (3.3)), and (E_α − E_−α) and −i(E_α + E_−α)
are exactly the Gell-Mann matrices (cf. eq. (2.30)).
Then g_ij takes the form
g_ij = diag( 1, ..., 1, ( 1 0 ; 0 −1 ), ..., ( 1 0 ; 0 −1 ) )    (4.4)
where the entries with a minus sign correspond to the generators of the compact subalgebra
K, the first r entries equal to +1 correspond to the Cartan subalgebra, and the remaining
ones to the operators in iP not in the Cartan subalgebra. This is the diagonal metric
tensor corresponding to the normal real form. The character of the normal real form is
plus the rank of the algebra.
The compact real form of G is obtained from the normal real form by the Weyl unitary
trick:
K = { (E_α − E_−α)/√2 },    P = { iH_i, i(E_α + E_−α)/√2 }    (4.5)
The character of the compact real form is minus the dimension of the algebra, and the
metric tensor is gij= diag(−1,...,−1).
Example: We will use as an example the well-known SU(2,C) algebra with Cartan
subalgebra H_0 = {J_3} and raising and lowering operators {J_±}. We have chosen the
normalization such that the non-zero entries of g_ij are all equal to 1:
J_3 = (1/2√2) τ_3,    J_± = (1/4)(τ_1 ± iτ_2)    (4.6)

where in the defining representation of SU(2,C)

τ_3 = ( 1 0 ; 0 −1 ),    τ_1 = ( 0 1 ; 1 0 ),    τ_2 = ( 0 −i ; i 0 )    (4.7)

The normalization is such that

[J_3, J_±] = ±(1/√2) J_±,    [J_+, J_−] = (1/√2) J_3    (4.8)
In equation (2.16) we constructed the adjoint representation of this algebra, albeit with a
different normalization. Using the present normalization to set the entries of the metric
equal to 1, we see that the non-zero structure constants are C^+_{3+} = −C^+_{+3} = −C^−_{3−} = C^−_{−3} =
C^3_{+−} = −C^3_{−+} = 1/√2. The entries of the metric are given by eq. (3.15), g_ij = K(J_i, J_j) =
C^r_{is} C^s_{jr} with summation over repeated indices, so we see that the metric of the normal real
form SU(2,R) in this basis is
g_ij = ( 1 0 0 ; 0 0 1 ; 0 1 0 )    (4.9)
where the rows and columns are labelled by 3,+,− respectively. This corresponds to
eq. (4.2).
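The computation leading to eq. (4.9) can be automated. The sketch below (ours, not from the review; numpy assumed) builds J_3, J_± from eqs. (4.6)-(4.7), extracts the structure constants numerically, and contracts them as in eq. (3.15):

```python
import numpy as np

# Build J3, J+, J- in the defining representation, eqs. (4.6)-(4.7)
t3 = np.array([[1, 0], [0, -1]], dtype=complex)
t1 = np.array([[0, 1], [1, 0]], dtype=complex)
t2 = np.array([[0, -1j], [1j, 0]], dtype=complex)

basis = [t3/(2*np.sqrt(2)),       # J3
         (t1 + 1j*t2)/4,          # J+
         (t1 - 1j*t2)/4]          # J-
B = np.array([b.flatten() for b in basis]).T   # columns = flattened basis

def coeffs(X):
    # expand the matrix X in the basis {J3, J+, J-}
    c, *_ = np.linalg.lstsq(B, X.flatten(), rcond=None)
    return c

# structure constants C[r, i, j] defined by [X_i, X_j] = C^r_{ij} X_r
C = np.zeros((3, 3, 3), dtype=complex)
for i in range(3):
    for j in range(3):
        C[:, i, j] = coeffs(basis[i] @ basis[j] - basis[j] @ basis[i])

# Killing metric g_ij = C^r_{is} C^s_{jr}, eq. (3.15)
g = np.einsum('ris,sjr->ij', C, C)
print(np.round(g.real, 10))  # the matrix of eq. (4.9)
```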
To pass now to a diagonal metric, we just have to set

Σ_3 = J_3,    Σ_1 = (J_+ + J_−)/√2 = (1/2√2) τ_1,    Σ_2 = (J_+ − J_−)/√2 = (i/2√2) τ_2    (4.10)
like in eq. (4.3). The commutation relations then become

[Σ_1, Σ_2] = −(1/√2) Σ_3,    [Σ_2, Σ_3] = −(1/√2) Σ_1,    [Σ_3, Σ_1] = (1/√2) Σ_2    (4.11)
These commutation relations characterize the algebra SO(2,1;R). From here we find the
structure constants C^3_{12} = −C^3_{21} = C^1_{23} = −C^1_{32} = −C^2_{31} = C^2_{13} = −1/√2, and the diagonal
metric of the normal real form with rows and columns labelled 3,1,2 (in order to comply
with the notation in eq. (4.4)) is

g_ij = ( 1 0 0 ; 0 1 0 ; 0 0 −1 )    (4.12)
which is to be compared with eq. (4.4). According to eq. (4.3), the Cartan decomposition
of G∗ is G∗ = K ⊕ iP where K = {Σ_2} and iP = {Σ_3, Σ_1}. The Cartan subalgebra
consists of Σ_3.
Finally, we arrive at the compact real form by multiplying Σ_3 and Σ_1 with i. Setting
iΣ_1 = Σ̃_1, Σ_2 = Σ̃_2, iΣ_3 = Σ̃_3, the commutation relations become those of the special
orthogonal group:
[Σ̃_1, Σ̃_2] = −(1/√2) Σ̃_3,    [Σ̃_2, Σ̃_3] = −(1/√2) Σ̃_1,    [Σ̃_3, Σ̃_1] = −(1/√2) Σ̃_2    (4.13)
The last commutation relation in eq. (4.11) has changed sign whereas the others are
unchanged. C^2_{31}, C^2_{13}, and consequently g_33 and g_11, change sign and we get the metric
for SO(3,R):

g_ij = ( −1 0 0 ; 0 −1 0 ; 0 0 −1 )    (4.14)
This is the compact real form. The subspaces of the compact algebra G = K ⊕ P are
K = {Σ̃_2} and P = {Σ̃_3, Σ̃_1}. Weyl's theorem states that a simple Lie group G is compact
if and only if the Killing form on G is negative definite; otherwise it is non-compact. In
the present example we see this explicitly.
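Weyl's theorem can be checked directly on this example (our sketch, not from the review; numpy assumed): encoding the structure constants of eqs. (4.11) and (4.13) and contracting them as in eq. (3.15) reproduces the indefinite metric (4.12) and the negative definite metric (4.14):

```python
import numpy as np

s = 1/np.sqrt(2)

def killing(C):
    # C[r, i, j]: [X_i, X_j] = C^r_{ij} X_r ;  g_ij = C^r_{is} C^s_{jr}
    return np.einsum('ris,sjr->ij', C, C)

# non-compact SO(2,1), basis order (Sigma3, Sigma1, Sigma2), eq. (4.11)
C_nc = np.zeros((3, 3, 3))
C_nc[2, 0, 1] =  s; C_nc[2, 1, 0] = -s   # [S3, S1] =  s S2
C_nc[0, 1, 2] = -s; C_nc[0, 2, 1] =  s   # [S1, S2] = -s S3
C_nc[1, 2, 0] = -s; C_nc[1, 0, 2] =  s   # [S2, S3] = -s S1

# compact SO(3): only [S3, S1] flips sign, eq. (4.13)
C_c = C_nc.copy()
C_c[2, 0, 1] = -s; C_c[2, 1, 0] = s

print(np.round(killing(C_nc), 12))  # diag(1, 1, -1), eq. (4.12)
print(np.round(killing(C_c), 12))   # diag(-1, -1, -1), eq. (4.14)
```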
4.2 The classification machinery
To classify all the real forms of any complex Lie algebra, with characters lying between
the character of the normal real form and the compact real form (the intermediate real
forms obviously have an indefinite metric), it suffices to enumerate all the involutive au-
tomorphisms of its compact real form. A detailed and almost complete account of these
procedures for the non–exceptional groups can be found in [31], Chapter 9, paragraph 3.
To summarize, if G is the compact real form of a complex semisimple Lie algebra GC, G∗
runs through all its associated non–compact real forms G∗, G′∗, ... with corresponding
maximal compact subgroups K, K′, ... and complementary subspaces iP, iP′, ... as σ
runs through all the involutive automorphisms of G.
One such automorphism is complex conjugation σ1= K, which is used to split the compact
real algebra into subspaces K and P in eq. (4.5). (To avoid confusion: the generators can
be complex even though the field of real numbers is used to multiply the generators in a
real form of an algebra. If the generators are also real, we speak of a real representation.
However, whether we consider the field to be R and the generators to be complex, or the
opposite, also depends on our definition of basis; cf. one of the footnotes in subsection 2.5).
The involutive automorphisms σ satisfy σGσ^{-1} = G, σ^2 = 1, which implies that σ either
commutes or anticommutes with the elements of the compact algebra G: if σXσ^{-1} = X′,
then σX′σ^{-1} = X, and we get X′ = ±X for X, X′ ∈ G (see the example below). One
can show [38] (Ch. VII) that it suffices to consider the following three possibilities for σ:
σ_1 = K (complex conjugation), σ_2 = I_{p,q} and σ_3 = J_{p,p}, where
I_{p,q} = ( I_p  0 ; 0  −I_q ),    J_{p,p} = ( 0  I_p ; −I_p  0 )    (4.15)

and I_p denotes the p × p unit matrix. By operating with one (or two successive ones) of
these automorphisms on the elements of G, we can construct the subspaces K and P, and
K and iP of the corresponding non–compact real form G∗. A complex algebra and all its
real forms (the compact and the various non–compact ones) correspond to the same root
lattice and Dynkin diagram.
Example: The normal real form of the complex algebra G^C = SL(n,C) is the non-compact
algebra G∗ = SL(n,R). As we saw in subsection 3.1, this algebra can be decomposed
as K ⊕ iP where K is the algebra consisting of real, skew-symmetric and traceless
n × n matrices and iP is the algebra consisting of real, symmetric and traceless n × n
matrices. Under the Weyl unitary trick we constructed, in a previous example, this algebra
from the compact real form of G^C, SU(n,C) = G = K ⊕ P.
Starting with the compact real form G, we can construct all the various non-compact
real forms G∗, G′∗, ... from it by applying the involutive automorphisms σ_1, σ_2, σ_3 to the
elements of G. All the real forms related to the root system A_{n−1} are obtained by applying
the three involutions to G = SU(n,C):
σ1) The involutive automorphism σ_1 = K (complex conjugation) splits G = SU(n,C)
into K ⊕ P (we recall this from the example in paragraph 3.1). The non-compact real
form obtained this way, by the Weyl unitary trick, is exactly the normal real form G∗ =
K ⊕ iP = SL(n,R).
σ2) A general matrix in the Lie algebra SU(n,C) can be written in the form

X = ( A  B ; −B†  C )    (4.16)

where A, C are complex p × p and q × q matrices satisfying A† = −A, C† = −C, trA + trC =
0 (since the determinant of the group elements must be +1), and B is an arbitrary complex
p × q matrix (p + q = n). In eq. (4.16), the matrices A, B and C are all linear combinations
of submatrices in both subspaces K = { (E_{α_i} − E_{−α_i})/√2 } and P = { iH_j, (i/√2)(E_{α_i} + E_{−α_i}) }.
The action of the involution σ_2 = I_{p,q} on X is

I_{p,q} X I_{p,q}^{-1} = ( A  −B ; B†  C )    (4.17)
Therefore, we see that the subspaces K′ and P′ are given by the matrices

( A  0 ; 0  C ) ∈ K′,    ( 0  B ; −B†  0 ) ∈ P′    (4.18)

Indeed, we see that I_{p,q} transforms the Lie algebra elements in K′ into themselves, and
those in P′ into minus themselves. The transformation by I_{p,q} mixes the subspaces K and
P, and splits the algebra in a different way into K′ ⊕ P′. The matrices

( A  iB ; −iB†  C ) ∈ K′ ⊕ iP′    (4.19)

define the non-compact real form G′∗. This algebra is called SU(p,q;C) and its maximal
compact subalgebra K′ is SU(p) ⊗ SU(q) ⊗ U(1).
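A numerical illustration of the σ_2 construction (ours, not from the review; numpy assumed): split a random element of SU(n,C) into its I_{p,q}-even and odd parts as in eq. (4.18), and verify that the resulting element of K′ ⊕ iP′ of eq. (4.19) satisfies the defining condition of SU(p,q;C), namely Y† I_{p,q} + I_{p,q} Y = 0:

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 2, 1
n = p + q
eta = np.diag([1.0]*p + [-1.0]*q)   # I_{p,q} of eq. (4.15)

# random element of su(n): antihermitean and traceless
A = rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))
X = (A - A.conj().T)/2
X -= np.trace(X)/n * np.eye(n)

# sigma_2-even part K' commutes with I_{p,q}; odd part P' anticommutes, eq. (4.18)
Xk = (X + eta @ X @ eta)/2
Xp = (X - eta @ X @ eta)/2

Y = Xk + 1j*Xp   # element of K' + iP' = su(p,q), eq. (4.19)
# su(p,q) condition: antihermitean with respect to the indefinite form eta
print(np.allclose(Y.conj().T @ eta + eta @ Y, 0))  # True
```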
σ3) By the involutive automorphism σ_3σ_1 = J_{p,p}K one constructs in a similar way (for
details see [31]) a third non-compact real form (for even n = 2p) G′′∗ = K′′ ⊕ iP′′ associated
to the algebra G = SU(2p,C). G′′∗ is the algebra SU∗(2p) and its maximal compact
subalgebra is USp(2p).7
This procedure, summarized in the formula below, exhausts all the real forms of the simple
algebras.
G^C → G = K ⊕ P, and then under the involutions:

σ_1:  G∗ = K ⊕ iP
σ_2:  G′∗ = K′ ⊕ iP′
σ_3:  G′′∗ = K′′ ⊕ iP′′    (4.22)
7The algebra SU∗(2p) is represented by complex 2p × 2p matrices of the form

X = ( A  B ; −B∗  A∗ )    (4.20)

where trA + trA∗ = 0. USp(2p) denotes the complex 2p × 2p matrix algebra of the group with both unitary
and symplectic symmetry (USp(2p,C) can also be denoted U(p,Q) where Q is the field of quaternions).
A matrix in the algebra USp(2p,C) can be written as

X = ( A  B ; −B†  −A^R )    (4.21)

where A† = −A, B^R = B, and the superscript R denotes reflection in the minor diagonal.
Example: Note that it may not always be possible to apply all the above involutions
σ_1, σ_2, σ_3 to the algebra. For example, complex conjugation σ_1 does not do anything to
SO(2n + 1,R), because it is represented by real matrices; nor is σ_3 a symmetry of this
algebra, since the adjoint representation is odd-dimensional and σ_3 has to act on a 2p × 2p
matrix. The only possibility that remains is σ_2 = I_{p,q}. For a second, even more concrete
example, let's look at the algebra SO(3,R), belonging to the root lattice B_1. This algebra
is spanned by the generators L_1, L_2, L_3 given in subsection 2.3. A general element of the
algebra is

X = t · L = (1/2) ( 0  t^3  t^2 ; −t^3  0  t^1 ; −t^2  −t^1  0 )
          = (1/2) ( 0  t^3  0 ; −t^3  0  0 ; 0  0  0 ) ⊕ (1/2) ( 0  0  t^2 ; 0  0  t^1 ; −t^2  −t^1  0 )    (4.23)
This splitting of the algebra is caused by the involution I_{2,1} acting on the representation:

I_{2,1} X I_{2,1}^{-1} = diag(1, 1, −1) · (1/2) ( 0  t^3  t^2 ; −t^3  0  t^1 ; −t^2  −t^1  0 ) · diag(1, 1, −1)
                     = (1/2) ( 0  t^3  −t^2 ; −t^3  0  −t^1 ; t^2  t^1  0 )    (4.24)
and it splits it into SO(3) = K ⊕ P = SO(2) ⊕ SO(3)/SO(2). Exponentiating, as we saw
in subsection 2.3, the coset representative is a point on the 2-sphere (writing
r ≡ √((t^1)^2 + (t^2)^2) and showing only the last column of the matrix):

M = ( . .  t^2 sin r / r ; . .  t^1 sin r / r ; . .  cos r ) = ( . . x ; . . y ; . . z ),    x^2 + y^2 + z^2 = 1    (4.25)
By the Weyl unitary trick we now get the non-compact real form G∗ = K ⊕ iP: SO(2,1) =
SO(2) ⊕ SO(2,1)/SO(2). This algebra is represented by

( 0  t^3  it^2 ; −t^3  0  it^1 ; −it^2  −it^1  0 ) = ( 0  t^3  0 ; −t^3  0  0 ; 0  0  0 ) ⊕ ( 0  0  it^2 ; 0  0  it^1 ; −it^2  −it^1  0 )    (4.26)
and after exponentiation of the coset generators (again with r ≡ √((t^1)^2 + (t^2)^2))

M = ( . .  it^2 sinh r / r ; . .  it^1 sinh r / r ; . .  cosh r ) = ( . . ix ; . . iy ; . . z ),    (ix)^2 + (iy)^2 + z^2 = 1    (4.27)
The surface in R^3 consisting of points (x, y, z) satisfying this equation is the hyperboloid
H^2. Similarly, we get the isomorphic space SO(1,2)/SO(2) by applying I_{1,2}: SO(1,2) =
K̃ ⊕ iP̃ = SO(2) ⊕ SO(1,2)/SO(2), and in terms of the algebra

X̃ = (1/2) ( 0  0  0 ; 0  0  t^1 ; 0  −t^1  0 ) ⊕ (1/2) ( 0  −it^3  −it^2 ; it^3  0  0 ; it^2  0  0 )    (4.28)
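The exponentiations (4.25) and (4.27) can be verified numerically (our sketch, not from the review; numpy and scipy assumed; the 1/2 factors are dropped, as in eq. (4.26)):

```python
import numpy as np
from scipy.linalg import expm

t1, t2 = 0.6, -0.8   # then r = sqrt(t1^2 + t2^2) = 1

# compact coset generator of SO(3)/SO(2): last column lands on the sphere S^2
P_compact = np.array([[0, 0, t2],
                      [0, 0, t1],
                      [-t2, -t1, 0]])
M = expm(P_compact)
x, y, z = M[:, 2]
print(np.isclose(x**2 + y**2 + z**2, 1.0))            # True

# non-compact generator of SO(2,1)/SO(2): last column lands on the hyperboloid H^2
P_noncompact = np.array([[0, 0, 1j*t2],
                         [0, 0, 1j*t1],
                         [-1j*t2, -1j*t1, 0]])
N = expm(P_noncompact)
ix, iy, zz = N[:, 2]   # the entries (ix, iy, z) of eq. (4.27)
print(np.isclose((ix**2 + iy**2 + zz**2).real, 1.0))  # True
```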
5 The classification of symmetric spaces
In this section we introduce the curvature tensor and the sectional curvature of a symmetric
space, and we extend the family of symmetric spaces to include also flat or Euclidean–type
spaces. These are identified with the subspace P of the Lie algebra itself, and the group
that acts on it is a semidirect product of the subgroup K and the subspace P. As we
will learn, to each compact subgroup K corresponds a triplet of symmetric spaces with
positive, zero and negative curvature. The classification of these symmetric spaces is in
exact correspondence with the new classification of random matrix models to be discussed
in Part II. These spaces exhaust the Cartan classification and have a definite metric. They
are listed in Table 1 together with some of their properties.
In paragraph 5.2 we introduce restricted root systems. In the same way as a Lie algebra
corresponds to a given root system, the “algebra” (subspace P or iP) of each symmetric
space corresponds to a restricted root system. These root systems are of primary impor-
tance in the physical applications to be discussed in Part II. The restricted root system can
be of an entirely different type from the root system inherited from the complex extension
algebra, and its rank may be different. We work out a specific example of a restricted
root system as an illustration. In spite of their importance, we have not been able to find
any explicit reference in the literature that explains how to obtain the restricted root sys-
tems. Instead, we found that they are often referred to in tables and in mathematical texts
without explicitly mentioning that they are restricted, which could easily lead to confusion
with the inherited root systems. In reference [31] the root system that is associated to
each symmetric space is the one inherited from the complex extension algebra, whereas for
example in Table B1 of reference [9] and in [38] the restricted root systems are listed.
There are also symmetric spaces with an indefinite metric, so called pseudo–Riemannian
spaces, corresponding to a maximal non–compact subgroup H. For completeness, we will
briefly discuss how these are obtained as real forms of symmetric spaces corresponding to
compact symmetric subgroups. This does not require any tools beyond the ones we have
already introduced, namely the involutive automorphisms.
5.1 The curvature tensor and triplicity
Suppose that K is a maximal compact subalgebra of the non–compact algebra G∗in the
Cartan decomposition G∗= K ⊕ iP, where iP is a complementary subspace. K and P
(alternatively K and iP) satisfy eq. (3.2):
[K,K] ⊂ K, [K,P] ⊂ P, [P,P] ⊂ K(5.1)
K is called a symmetric subalgebra and the coset spaces exp(P) ≃ G/K and exp(iP) ≃
G∗/K are globally symmetric Riemannian spaces. Globally symmetric means that every
point on the manifold can be moved to any other point by a particular group operation (we
discussed this in paragraph 2.3; for a rigorous definition of globally symmetric spaces see
Helgason [30], paragraph IV.3). In the same way, the metric can be defined in any point
of the manifold by moving the metric at the origin to this point, using a group operation
(cf. eq. (3.17) in paragraph 3.4). The Killing form restricted to the tangent spaces P and
iP at any point in the coset manifold has a definite sign. The manifold is then called
“Riemannian”. The metric can be taken to be either plus or minus the Killing form so
that it is always positive definite (cf. paragraph 3.4).
A curvature tensor with components R^i_{jkl} can be defined on the manifold G/K or G∗/K
in the usual way [30, 32]. It is a function of the metric tensor and its derivatives. It was
proved for instance in [30], Ch. IV, that the components of the curvature tensor at the
origin of a globally symmetric coset manifold are given by the expression

R^n_{ijk} X_n = [X_i, [X_j, X_k]] = C^n_{im} C^m_{jk} X_n    (5.2)
where {Xi} is a basis for the Lie algebra. The sectional curvature at a point p is equal to
K = g([[X, Y], X], Y)    (5.3)
where g is an arbitrary symmetric and nondegenerate metric (such a metric is also called
a pseudo–Riemannian structure, or simply a Riemannian structure if it has a definite
sign) on the tangent space at p, invariant under the action of the group elements. In
(5.3), g(X_i, X_j) ≡ g_ij and {X, Y} is an orthonormal basis for a two-dimensional subspace
S of the tangent space at the point p (assuming it has dimension ≥ 2). The sectional
curvature is equal to the gaussian curvature on a 2–dimensional manifold. If the manifold
has dimension ≥ 2, (5.3) gives the sectional curvature along the section S.
Eqs. (5.2) and (5.3), together with eq. (5.1) show that the curvature of the spaces G/K
and G∗/K has a definite and opposite sign ([30], par. V.3). Thus, we see that if G is a
compact semisimple group, to the same subgroup K there corresponds a positive curvature
space P ≃ G/K and a dual negative curvature space P∗≃ G∗/K. The reason for this
is exactly the same as the reason why the sign changes for the components of the metric
corresponding to the generators in iP as we go to the dual space P. We remind the reader
that the sign of the metric can be chosen positive or negative for a compact space. The
issue here is that the sign changes in going from G∗/K to G/K.
Example: We can use the example of SU(2) in paragraph 4.1 to see that the sectional
curvature is the opposite for the two spaces G/K and G∗/K. If we take {X,Y } = {Σ3,Σ1}
as the basis in the space iP and {˜Σ3,˜Σ1} (˜Σi≡ iΣi) as the basis in the space P, we see
by comparing the signs of the entries of the metrics we computed in eqs. (4.12) and (4.14)
that the sectional curvature K at the origin has the opposite sign for the two spaces
SO(2,1)/SO(2) and SO(3)/SO(2).
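The sign flip can be computed directly from eq. (5.3) (our sketch, not from the review; numpy assumed). With the Σ matrices of eq. (4.10), the Killing form in the defining representation is K(A,B) = 4 tr(AB) in this normalization, and the metric is taken as +K on the non-compact space and −K on the compact one so that both are positive definite:

```python
import numpy as np

tau1 = np.array([[0, 1], [1, 0]], dtype=complex)
tau3 = np.array([[1, 0], [0, -1]], dtype=complex)

S3, S1 = tau3/(2*np.sqrt(2)), tau1/(2*np.sqrt(2))   # Sigma_3, Sigma_1 of eq. (4.10)

def comm(a, b):
    return a @ b - b @ a

def sec_curv(X, Y, sign):
    # eq. (5.3): K = g([[X,Y],X],Y) with g = sign * (Killing form) = sign * 4 tr
    return sign * 4 * np.trace(comm(comm(X, Y), X) @ Y).real

# non-compact SO(2,1)/SO(2): tangent basis {S3, S1} in iP, g = +Killing
print(round(sec_curv(S3, S1, +1), 10))        # -0.5
# compact SO(3)/SO(2): tangent basis {iS3, iS1} in P, g = -Killing
print(round(sec_curv(1j*S3, 1j*S1, -1), 10))  # 0.5
```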
Actually, there is also a zero-curvature symmetric space X_0 = G_0/K related to X_+ = G/K
and X_− = G∗/K, so that we can speak of a triplet of symmetric spaces related to the same
symmetric subgroup K. The zero-curvature spaces were discussed in [9] and in Ch. V of
Helgason's book [30], where they are referred to as "symmetric spaces of the euclidean
type". That their curvature is zero was proved in Theorem 3.1 of [30], Ch. V.
The flat symmetric space X_0 can be identified with the subspace P of the algebra. The
group G_0 is a semidirect product of the subgroup K and the invariant subspace P of the
algebra, and its elements g = (k, a) act on the elements of X_0 in the following way:

g(x) = kx + a,    k ∈ K, x, a ∈ X_0    (5.4)

if the x's are vectors, and
g(x) = kxk^{-1} + a,    k ∈ K, x, a ∈ X_0    (5.5)

if the x's are matrices. We will see one example of each below.
The elements of the algebra P now define an abelian additive group, and X_0 is a vector space
with euclidean geometry. In the above scenario, the subspace P contains only the operators
of the Cartan subalgebra and no others: P = H_0, so that P is a subalgebra of G_0. The
algebra G_0 = K ⊕ P belongs to a non-semisimple group G_0, since it has an abelian ideal
P: [K,K] ⊂ K, [K,P] ⊂ P, [P,P] = 0. Note that K and P still satisfy the commutation
relations (5.1). In this case the coset space X_0 is flat, since by (5.1), R^n_{ijk} = 0 for all the
elements X ∈ P. Eq. (5.2) is valid for any space with a Riemannian structure. Indeed, it
is easy to see from eqs. (5.2), (5.3) that R^n_{ijk} = K = 0 if the generators are abelian. Even
though the Killing form on non-semisimple algebras is degenerate, it is trivial to find a
non-degenerate metric on the symmetric space X_0 that can be used in (5.3) to find that
the sectional curvature at any point is zero. For example, as we pass from the sphere to
the plane, the metric becomes degenerate in the limit as [L_1, L_2] ∼ L_3 → [P_1, P_2] = 0 (see
the example below). Obviously, we do not inherit this degenerate metric from the tangent
space on R^2 like in the case of the sphere, but the usual metric for R^2, g_ij = δ_ij, provides
the Riemannian structure on the plane.
Examples: An example of a flat symmetric space is E_2/K, where G_0 = E_2 is the euclidean
group of motions of the plane R^2: g(x) = kx + a, g = (k, a) ∈ G_0 where k ∈ K = SO(2)
and a ∈ R^2. The generators of this group are translations P_1, P_2 ∈ H_0 = P and a rotation
J ∈ K satisfying [P_1, P_2] = 0, [J, P_i] = −ε_ij P_j, [J, J] = 0, in agreement with eq. (5.1)
defining a symmetric subgroup. The abelian algebra of translations Σ_{i=1}^2 t^i P_i, t^i ∈ R, is
isomorphic to the plane R^2, and can be identified with it.
The commutation relations for E_2 are a kind of limiting case of the commutation relations
for SO(3) ∼ SU(2) and SO(2,1). If in the limit of infinite radius of the sphere S^2 we
identify Σ̃_1 with P_1, Σ̃_2 with P_2, and Σ̃_3 with J, we see that the commutation relations
resemble the ones described in eqs. (4.11) and (4.13); we only have to set [Σ̃_1, Σ̃_2] = 0,
which amounts to setting C^3_{12} = −C^3_{21} → 0. From here we get the degenerate metric of the
non-semisimple algebra E_2:

g_ij = ( −1 0 0 ; 0 0 0 ; 0 0 0 )    (5.6)
where the only nonzero element is g33. This is to be confronted with eqs. (4.12) and (4.14)
which are the metrics for SO(2,1) and SO(3). This is an example of contraction of an
algebra.
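The contraction can be made concrete (our sketch, not from the review; numpy assumed): dropping the C^3_{12} entries from the SO(3)-type structure constants and contracting as in eq. (3.15) yields the degenerate metric (5.6):

```python
import numpy as np

# basis order (J, P1, P2); normalization 1/sqrt(2) as in the text
s = 1/np.sqrt(2)
C = np.zeros((3, 3, 3))
C[2, 0, 1] = -s; C[2, 1, 0] =  s    # [J, P1] = -s P2
C[1, 0, 2] =  s; C[1, 2, 0] = -s    # [J, P2] =  s P1
# [P1, P2] = 0: the C^J entries of SO(3) have been contracted away

g = np.einsum('ris,sjr->ij', C, C)  # g_ij = C^r_{is} C^s_{jr}, eq. (3.15)
print(np.round(g, 12))              # diag(-1, 0, 0): only g_JJ survives
```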
An example of a triplet {X+,X0,X−} corresponding to the same subgroup K = SO(n)
is:
1) X+= SU(n,C)/SO(n), the set of symmetric unitary matrices with unit determinant;
it is the space exp(P) where P are real, symmetric and traceless n × n matrices. (Cf. the
example in subsection 3.1.)
2) X0is the set P of real, symmetric and traceless n×n matrices and the non–semisimple
group G0is the group whose action is defined by g(x) = kxk−1+a, g = (k,a) ∈ G0where
k ∈ K = SO(n) and x,a ∈ X0. The involutive automorphism maps g = (k,a) ∈ G0into
g′= (k,−a).
3) X−= SL(n,R)/SO(n) is the set of real, positive, symmetric matrices with unit deter-
minant; it is the space exp(iP) where P are real, symmetric and traceless n × n matrices.
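A quick check of the matrix action (5.5) for this triplet (our sketch, not from the review; numpy assumed): the action preserves X_0 and composes as a semidirect product, (k_2, a_2)(k_1, a_1) = (k_2 k_1, k_2 a_1 k_2^{-1} + a_2):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

def rand_orth(n):
    # random orthogonal k (k^{-1} = k^T); stands in for an element of K = SO(n),
    # although the identities below hold for any orthogonal matrix
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

def rand_x0(n):
    # random point of X0: real symmetric traceless n x n matrix
    a = rng.standard_normal((n, n)); a = (a + a.T)/2
    return a - np.trace(a)/n * np.eye(n)

def act(k, a, x):            # eq. (5.5): g(x) = k x k^{-1} + a
    return k @ x @ k.T + a

k1, k2 = rand_orth(n), rand_orth(n)
a1, a2, x = rand_x0(n), rand_x0(n), rand_x0(n)

y = act(k1, a1, x)           # the action stays inside X0 ...
print(np.allclose(y, y.T), abs(np.trace(y)) < 1e-12)

# ... and composes as (k2, a2)(k1, a1) = (k2 k1, k2 a1 k2^T + a2)
lhs = act(k2, a2, act(k1, a1, x))
rhs = act(k2 @ k1, k2 @ a1 @ k2.T + a2, x)
print(np.allclose(lhs, rhs))  # True
```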
We remark that the zero–curvature symmetric spaces correspond to the integration mani-
folds of many known matrix models with physical applications.
The pairs of dual symmetric spaces of positive and negative curvature listed in each row
of Table 1 originate in the same complex extension algebra [31] with a given root lattice.
This “inherited” root lattice is listed in the first column of the table. In our example in
paragraph 4.2 this was the root lattice of the complex algebra G^C = SL(n,C). The same
root lattice A_{n−1} characterizes the real forms of SL(n,C): as we saw in the example these
are the algebras SU(n,C), SL(n,R), SU(p,q;C) and SU∗(2n), and we have seen how
to construct them using involutive automorphisms.
However, also listed in Table 1 is the restricted root system corresponding to each symmetric
space. This root system may be different from the one inherited from the complex extension
algebra. Below, we will define the restricted root system and see an explicit example of one
such system. While the original root lattice characterizes the complex extension algebra
and its real forms, the restricted root lattice characterizes a particular symmetric space
originating from one of its real forms. The root lattices of the classical simple algebras
are the infinite sequences A_n, B_n, C_n, D_n, where the index n denotes the rank of the
corresponding group. The root multiplicities m_o, m_l, m_s listed in Table 1 (where the
subscripts refer to ordinary, long and short roots, respectively) are characteristic of the
restricted root lattices. In general, in the root lattice of a simple algebra (or in the graphical
representation of any irreducible representation), the roots (weights) may be degenerate
and thus have a multiplicity greater than 1. This happens if the same weight µ = (µ1,...,µr)
corresponds to different states in the representation. In that case one can arrive at that
particular weight using different sets of lowering operators E−αon the highest weight of
the representation. Indeed, we saw in the example of SU(3,C) in subsection 2.5, that the
roots can have a multiplicity different from 1. The same is true for the restricted roots.
The sets of simple roots of the classical root systems (briefly listed in subsection 2.7)
have been obtained for example in [31, 36]. In the canonical basis in R^n, the roots of
type {±e_i ± e_j, i ≠ j} are ordinary while the roots {±2e_i} are long and the roots {±e_i}
are short. Only a few sets of root multiplicities are compatible with the strict properties
characterizing root lattices in general.
5.2 Restricted root systems
The restricted root systems play an important role in connection with matrix models and
integrable Calogero–Sutherland models (these models will be introduced in section 7). We
will discuss this in detail in Part II. In this subsection we will explain how restricted root
systems are obtained and how they are related to a given symmetric space.8
As we have repeatedly seen in the examples using the compact algebra SU(n,C) (in
particular in subsection 4.2), the algebra SU(p,q;C) (p + q = n) is a non-compact real
form of the former. This means they share the same rank-(n − 1) root system A_{n−1}.
However, to the symmetric space SU(p,q;C)/(SU(p) ⊗ SU(q) ⊗ U(1)) one can associate
another rank-r′ root system, where r′ = min(p,q) is the rank of the symmetric space.
For some symmetric spaces, it is the same as the root system inherited from the complex
extension algebra (see Table 1 for a list of the restricted root systems), but this need not be
the case. For example, in the case of SU(p,q;C)/(SU(p) ⊗ SU(q) ⊗ U(1)) the restricted
root system is BC_{r′}. When it is the same and when it is different, as well as why the rank
can change, will be obvious from the example we will give below.
In general the restricted root system will be different from the original, inherited root
system if the Cartan subalgebra is a subset of K. The procedure to find the restricted root
system is then to define an alternative Cartan subalgebra that lies partly (or entirely) in P
(or iP).
To achieve this, we first look for a different representation of the original Cartan subalgebra,
that gives the same root lattice as the original one (i.e., An−1for the SU(p,q;C) algebra).
In general, this root lattice is an automorphism of the original root lattice of the same kind,
obtained by a permutation of the roots. Unless we find this new representation, we will
8The authors are indebted to Prof. Simon Salamon for explaining how the restricted root systems are
obtained.
46
Page 48
not be able to find a new, alternative Cartan subalgebra that lies partly in the subspace
P.
Once this has been done, we take a maximal abelian subalgebra of P (the number of
generators in it will be equal to the rank r′of the symmetric space G/K or G∗/K) and
find the generators in K that commute with it. These generators will be among the ones
that are in the new representation of the original Cartan subalgebra. These commuting
generators now form our new, alternative Cartan subalgebra that lies partly in P, partly
in K. Let’s call it A0.
The new root system is defined with respect to the part of the maximal abelian subalgebra
that lies in P. Therefore its rank is normally smaller than the rank of the root system
inherited from the complex extension. We can define raising and lowering operators E′_α in
the whole algebra G that satisfy

[X′_i, E′_α] = α′_i E′_α    (X′_i ∈ A_0 ∩ P)    (5.7)

The roots α′_i define the restricted root system.
Example: Let’s now look at a specific example. We will start with the by now familiar
algebra SU(3,C). As before, we use the convention of regarding the Ti’s as the generators,
without the factor of i (recall that the algebra consists of elements of the form?
constructed its root lattice A2. Let’s write down the generators again:
ataXa=
i?
ataTa; cf. the footnote in conjuction with eq. (2.25)). In subsection 2.5 we explicitly
T1=1
2
0 1 0
1 0 0
0 0 0
,
,T2=1
2
0 −i 0
i0
00
0
0
,
0
0
,T3=1
2
1
0 −1 0
00
00
0
,
T4=1
2
0 0 1
0 0 0
1 0 0
T5=1
2
0 0 −i
0 0
i0
0
0
T6=1
2
0 0 0
0 0 1
0 1 0
,
T7=1
2
0 0
0 0 −i
0i
0
0
,
T8=
1
2√3
1 0
0 1
0 0 −2
(5.8)
The splitting of the SU(3,C) algebra in terms of the subspaces K and P was given in
eq. (3.3):

K = {iT_2, iT_5, iT_7},    P = {iT_1, iT_3, iT_4, iT_6, iT_8}    (5.9)
The Cartan subalgebra is {iT_3, iT_8}. The raising and lowering operators were given in
(2.30) in terms of the T_i:

E_±(1,0) = (T_1 ± iT_2)/√2,    E_±(1/2,√3/2) = (T_4 ± iT_5)/√2,    E_±(−1/2,√3/2) = (T_6 ± iT_7)/√2    (5.10)
Now let us construct the Cartan decomposition of G′∗ = K′ ⊕ iP′ = SU(2,1;C). We
know from paragraph 4.2 that K′ and P′ are given by matrices of the form

( A  0 ; 0  C ) ∈ K′,    ( 0  B ; −B†  0 ) ∈ P′    (5.11)
where A and C are antihermitean and trA + trC = 0. Combining the generators to form
this kind of block structure (or alternatively, using the involution σ_2 = I_{2,1}) we need
to take linear combinations of the X_i's with real coefficients, and we then see that the
subspaces K′ and iP′ are spanned by

K′ = { (i/2) ( 0 1 0 ; 1 0 0 ; 0 0 0 ),  (1/2) ( 0 1 0 ; −1 0 0 ; 0 0 0 ),  (i/2) ( 1 0 0 ; 0 −1 0 ; 0 0 0 ),
       (i/2√3) ( 1 0 0 ; 0 1 0 ; 0 0 −2 ) } = {iT_1, iT_2, iT_3, iT_8}

iP′ = { (1/2) ( 0 0 1 ; 0 0 0 ; 1 0 0 ),  (i/2) ( 0 0 −1 ; 0 0 0 ; 1 0 0 ),  (1/2) ( 0 0 0 ; 0 0 1 ; 0 1 0 ),
        (i/2) ( 0 0 0 ; 0 0 −1 ; 0 1 0 ) } = {T_4, T_5, T_6, T_7}    (5.12)

K′ spans the algebra of the symmetric subgroup SU(2) ⊗ U(1) and iP′ spans the comple-
mentary subspace corresponding to the symmetric space SU(2,1)/(SU(2) ⊗ U(1)). iP′ is
spanned by matrices of the form

( 0  B̃ ; B̃†  0 )    (5.13)
We see that the Cartan subalgebra iH_0 = {iT_3, iT_8} lies entirely in K′. It is easy to see
that by using the alternative representation

T′_3 = (1/2) ( 1 0 0 ; 0 0 0 ; 0 0 −1 ),    T′_8 = (1/2√3) ( 1 0 0 ; 0 −2 0 ; 0 0 1 )    (5.14)
of the Cartan subalgebra (note that this is a valid representation of SU(3,C) generators)
while the other T_i's are unchanged, we still get the same root lattice A_2. The eigenvectors
under the adjoint representation, the E_α's, are still given by eq. (5.10). However, their
eigenvalues (roots) are permuted under the new adjoint representation of the Cartan subal-
gebra, so that they no longer correspond to the root subscripts in (5.10). This permutation
is a Weyl reflection; more specifically, it is the reflection in the hyperplane orthogonal to
the root (−1/2, √3/2).
Now we choose the alternative Cartan subalgebra to consist of the generators T_4, T′_8:

A_0 = {T_4, T′_8},    [T_4, T′_8] = 0,    iT_4 ∈ P′,    iT′_8 ∈ K′    (5.15)
(Note that unless we first take a new representation of the original Cartan subalgebra, we
are not able to find the alternative Cartan subalgebra that lies partly in P′.) The restricted
root system is now about to be revealed. We define raising and lowering operators E′_α in
the whole algebra according to

E′_±1 ∼ (T_5 ± iT_3),    E′_±1/2 ∼ (T_6 ± iT_2),    Ẽ′_±1/2 ∼ (T_7 ± iT_1)    (5.16)

The ±α subscripts are the eigenvalues of T_4 ∈ iP′ in the adjoint representation:

[T_4, E′_±1] = ±E′_±1,    [T_4, E′_±1/2] = ±(1/2)E′_±1/2,    [T_4, Ẽ′_±1/2] = ±(1/2)Ẽ′_±1/2    (5.17)
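The restricted root system just derived can be read off numerically (our sketch, not from the review; numpy assumed): diagonalizing ad(T_4) on the whole eight-dimensional algebra yields the eigenvalues 0 (twice), ±1/2 (each twice) and ±1 (each once), i.e. the BC_1 pattern of short roots ±1/2 with multiplicity 2 and long roots ±1 with multiplicity 1:

```python
import numpy as np

# Gell-Mann generators T_a of eq. (5.8)
T = np.zeros((8, 3, 3), dtype=complex)
T[0][0, 1] = T[0][1, 0] = 1
T[1][0, 1] = -1j; T[1][1, 0] = 1j
T[2][0, 0] = 1;   T[2][1, 1] = -1
T[3][0, 2] = T[3][2, 0] = 1
T[4][0, 2] = -1j; T[4][2, 0] = 1j
T[5][1, 2] = T[5][2, 1] = 1
T[6][1, 2] = -1j; T[6][2, 1] = 1j
T[7] = np.diag([1, 1, -2])/np.sqrt(3)
T = T/2

# adjoint matrix of T4: [T4, Ta] = sum_c A[c,a] Tc, using tr(Tb Tc) = delta/2;
# A = i * (real antisymmetric) is hermitean, so its eigenvalues are real
A = np.zeros((8, 8), dtype=complex)
for a in range(8):
    br = T[3] @ T[a] - T[a] @ T[3]
    for c in range(8):
        A[c, a] = 2*np.trace(br @ T[c])

roots = np.sort(np.linalg.eigvalsh(A))
print(np.round(roots, 6))  # +-1 once each, +-1/2 twice each, 0 twice
```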