The Analytic Theory of Matrix Orthogonal Polynomials
David Damanik, Alexander Pushnitski, and Barry Simon
January 30, 2008
Abstract
We survey the analytic theory of matrix orthogonal polynomials.
MSC: 42C05, 47B36, 30C10
keywords: orthogonal polynomials, matrix-valued measures, block Jacobi matrices, block
CMV matrices
Contents
1 Introduction
1.1 Introduction and Overview
1.2 Matrix-Valued Measures
1.3 Matrix Möbius Transformations
1.4 Applications and Examples
2 Matrix Orthogonal Polynomials on the Real Line
2.1 Preliminaries
2.1.1 Polynomials, Inner Products, Norms
2.1.2 Monic Orthogonal Polynomials
2.1.3 Expansion
2.1.4 Recurrence Relations for Monic Orthogonal Polynomials
2.1.5 Normalized Orthogonal Polynomials
2.2 Block Jacobi Matrices
2.2.1 Block Jacobi Matrices as Matrix Representations
2.2.2 Basic Properties of Block Jacobi Matrices
2.2.3 Special Representatives of the Equivalence Classes
2.2.4 Favard’s Theorem
2.3 The m-Function
2.3.1 The Definition of the m-Function
2.3.2 Coefficient Stripping
2.4 Second Kind Polynomials
2.5 Solutions to the Difference Equations
2.6 Wronskians and the Christoffel–Darboux Formula
2.7 The CD Kernel
2.8 Christoffel Variational Principle
2.9 Zeros
2.10 Lower Bounds on p and the Stieltjes–Weyl Formula for m
2.11 Wronskians of Vector-Valued Solutions
2.12 The Order of Zeros/Poles of m(z)
2.13 Resolvent of the Jacobi Matrix
3 Matrix Orthogonal Polynomials on the Unit Circle
3.1 Definition of MOPUC
3.2 The Szegő Recursion
3.3 Second Kind Polynomials
3.4 Christoffel–Darboux Formulas
3.5 Zeros of MOPUC
3.6 Bernstein–Szegő Approximation
3.7 Verblunsky’s Theorem
3.8 Matrix POPUC
3.9 Matrix-Valued Carathéodory and Schur Functions
3.10 Coefficient Stripping, the Schur Algorithm, and Geronimus’ Theorem
3.11 The CMV Matrix
3.11.1 The CMV basis
3.11.2 The CMV matrix
3.11.3 The LM-representation
3.12 The Resolvent of the CMV Matrix
3.13 Khrushchev Theory
4 The Szegő Mapping and the Geronimus Relations
5 Regular MOPRL
5.1 Upper Bound and Definition
5.2 Density of Zeros
5.3 General Asymptotics
5.4 Weak Convergence of the CD Kernel and Consequences
5.5 Widom’s Theorem
5.6 A Conjecture
References
1 Introduction
1.1 Introduction and Overview
Orthogonal polynomials on the real line (OPRL) were developed in the nineteenth century and
orthogonal polynomials on the unit circle (OPUC) were initially developed around 1920 by Szegő.
Their matrix analogues are of much more recent vintage. They were originally developed in the
MOPUC case indirectly in the study of prediction theory [116, 117, 129, 131, 132, 138, 196] in
the period 1940–1960. The connection to OPUC in the scalar case was discovered by Krein [131].
Much of the theory since is in the electrical engineering literature [36, 37, 38, 39, 40, 41, 120, 121,
122, 123, 203]; see also [84, 86, 87, 88, 142].
The corresponding real line theory (MOPRL) is still more recent: Following early work of Krein
[133] and Berezan’ski [9] on block Jacobi matrices, mainly as applied to self-adjoint extensions, there
was a seminal paper of Aptekarev–Nikishin [4] and a flurry of papers since the 1990s [10, 11, 12,
14, 16, 17, 19, 20, 21, 22, 29, 35, 43, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61,
62, 64, 65, 66, 67, 68, 69, 71, 73, 74, 75, 76, 77, 79, 83, 85, 102, 103, 104, 105, 106, 107, 108, 109,
110, 111, 112, 113, 137, 139, 140, 143, 144, 145, 148, 149, 150, 155, 156, 157, 154, 161, 162, 179,
186, 198, 200, 201, 202, 204]; see also [7].
There is very little on the subject in monographs — the more classical ones (e.g., [23, 82, 93, 184])
predate most of the subject; see, however, Atkinson [5, Section 6.6]. Ismail [118] has no discussion
and Simon [167, 168] has a single section! Because of the use of MOPRL in [33], we became
interested in the subject and, in particular, we needed some basic results for that paper which
we couldn’t find in the literature or which, at least, weren’t very accessible. Thus, we decided to
produce this comprehensive review that we hope others will find useful.
As with the scalar case, the subject breaks into two parts, conveniently called the analytic theory
(general structure results) and the algebraic theory (the set of non-trivial examples). This survey
deals entirely with the analytic theory. We note, however, that one of the striking developments in
recent years has been the discovery that there are rich classes of genuinely new MOPRL, even at
the classical level of Bochner’s theorem; see [20, 55, 70, 72, 102, 109, 110, 111, 112, 113, 156, 161]
and the forthcoming monograph [63] for further discussion of this algebraic side.
In this introduction, we will focus mainly on the MOPRL case. For scalar OPRL, a key issue
is the passage from measure to monic OPRL, then to normalized OPRL, and finally to Jacobi
parameters. There are no choices in going from measure to monic OP, $P_n(x)$. They are determined by
$P_n(x) = x^n + \text{lower order}, \qquad \langle x^j, P_n\rangle = 0, \quad j = 0, 1, \dots, n-1.$   (1.1)
However, the basic condition on the orthonormal polynomials, namely,
$\langle p_n, p_m\rangle = \delta_{nm},$   (1.2)
does not uniquely determine the $p_n(x)$. The standard choice is
$p_n(x) = \frac{P_n(x)}{\|P_n\|}.$
However, if $\theta_0, \theta_1, \dots$ are arbitrary real numbers, then
$\tilde p_n(x) = \frac{e^{i\theta_n} P_n(x)}{\|P_n\|}$   (1.3)
also obey (1.2). If the recursion coefficients (aka Jacobi parameters) are defined via
$x p_n = a_{n+1} p_{n+1} + b_{n+1} p_n + a_n p_{n-1},$   (1.4)
then the choice (1.3) leads to
$\tilde b_n = b_n, \qquad \tilde a_n = e^{i\theta_n} a_n e^{-i\theta_{n-1}}.$   (1.5)
The standard choice is, of course, most natural here; for example, if
$p_n(x) = \kappa_n x^n + \text{lower order},$   (1.6)
then $a_n > 0$ implies $\kappa_n > 0$. It would be crazy to make any other choice.
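As a concrete illustration of (1.1)–(1.6), here is a minimal numerical sketch that computes the monic $P_n$ by Gram–Schmidt for a small discrete measure and reads off the Jacobi parameters of (1.4) under the standard normalization; the nodes, weights, and numpy usage are arbitrary choices made only for this example.

import numpy as np

# A discrete probability measure (nodes and weights chosen only for illustration).
nodes = np.array([-2.0, -0.5, 0.3, 1.0, 2.5])
weights = np.array([0.1, 0.3, 0.2, 0.3, 0.1])

def ip(f, g):
    # <f, g> = integral of conj(f) g dmu for functions stored by their node values
    return np.sum(np.conj(f) * g * weights)

N = 4                                    # compute P_0, ..., P_{N-1}; needs N <= number of nodes
P = [np.ones_like(nodes)]                # P_0 = 1
for n in range(1, N):
    Q = nodes * P[-1]                    # x * P_{n-1} is monic of degree n
    for Pm in P:                         # orthogonalize against P_0, ..., P_{n-1}, cf. (1.1)
        Q = Q - ip(Pm, Q) / ip(Pm, Pm) * Pm
    P.append(Q)

# Standard normalization p_n = P_n / ||P_n|| and the Jacobi parameters of (1.4)
a = [np.sqrt(ip(P[n], P[n]) / ip(P[n - 1], P[n - 1])) for n in range(1, N)]   # a_1, a_2, ...
b = [ip(P[n], nodes * P[n]) / ip(P[n], P[n]) for n in range(N)]               # b_1, b_2, ...
print("a_n:", np.round(a, 4))
print("b_n:", np.round(b, 4))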
For MOPRL, these choices are less clear. As we will explain in Section 1.2, there are now two matrix-valued “inner products” formally written as
$\langle\langle f, g\rangle\rangle_R = \int f(x)^\dagger\, d\mu(x)\, g(x),$   (1.7)
$\langle\langle f, g\rangle\rangle_L = \int g(x)\, d\mu(x)\, f(x)^\dagger,$   (1.8)
where now $\mu$ is a matrix-valued measure and $\dagger$ denotes the adjoint, and correspondingly two sets of monic OPRL: $P^R_n(x)$ and $P^L_n(x)$. The orthonormal polynomials are required to obey
$\langle\langle p^R_n, p^R_m\rangle\rangle_R = \delta_{nm}\mathbf{1}.$   (1.9)
The analogue of (1.3) is
$\tilde p^R_n(x) = P^R_n(x)\,\langle\langle P^R_n, P^R_n\rangle\rangle_R^{-1/2}\,\sigma_n$   (1.10)
for a unitary $\sigma_n$. For the immediately following, use $p^R_n$ to be the choice $\sigma_n \equiv \mathbf{1}$. For any such choice, we have a recursion relation,
$x p^R_n(x) = p^R_{n+1}(x)\,A^\dagger_{n+1} + p^R_n(x)\,B_{n+1} + p^R_{n-1}(x)\,A_n$   (1.11)
with the analogue of (1.5) (comparing $\sigma_n \equiv \mathbf{1}$ to general $\sigma_n$)
$\tilde B_n = \sigma_n^\dagger B_n \sigma_n, \qquad \tilde A_n = \sigma_{n-1}^\dagger A_n \sigma_n.$   (1.12)
The obvious analogue of the scalar case is to pick $\sigma_n \equiv \mathbf{1}$, which makes $\kappa_n$ in
$p^R_n(x) = \kappa_n x^n + \text{lower order}$   (1.13)
obey $\kappa_n > 0$. Note that (1.11) implies
$\kappa_n = \kappa_{n+1} A^\dagger_{n+1}$   (1.14)
or, inductively,
$\kappa_n = (A^\dagger_n \cdots A^\dagger_1)^{-1}.$   (1.15)
In general, this choice does not lead to $A_n$ positive or even Hermitian. Alternatively, one can pick $\sigma_n$ so that $\tilde A_n$ is positive. Besides these two “obvious” choices, $\kappa_n > 0$ or $A_n > 0$, there is a third, that $A_n$ be lower triangular, which, as we will see in Section 1.4, is natural. Thus, in the study of MOPRL one needs to talk about equivalent sets of $p^R_n$ and of Jacobi parameters, and this is a major theme of Chapter 2. Interestingly enough, for MOPUC the commonly picked choice equivalent to $A_n > 0$ (namely, $\rho_n > 0$) seems to suffice for applications, so we do not discuss equivalence classes for MOPUC.
Associated to a set of matrix Jacobi parameters is a block Jacobi matrix, that is, a matrix which, when written in $l \times l$ blocks, is tridiagonal; see (2.29) below.
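The same computation can be done in the matrix case. The following sketch, a hedged illustration rather than anything from the survey itself, builds the monic and orthonormal right MOPRL for a discrete matrix-valued measure, uses the $\kappa_n > 0$ (i.e., $\sigma_n \equiv \mathbf{1}$) convention, and extracts $B_1$ and $A_1$ from (1.11); the nodes, matrix weights, and helper routines are assumptions made for the example.

import numpy as np

l, K = 2, 6
rng = np.random.default_rng(0)
nodes = np.linspace(-1.0, 1.0, K)
W = np.empty((K, l, l), dtype=complex)
for k in range(K):                                    # positive definite matrix weights (illustrative)
    G = rng.normal(size=(l, l)) + 1j * rng.normal(size=(l, l))
    W[k] = G @ G.conj().T + 0.1 * np.eye(l)

def ipR(F, G):                                        # <<F, G>>_R = sum_k F(x_k)^dag W_k G(x_k), cf. (1.7)
    return sum(F[k].conj().T @ W[k] @ G[k] for k in range(K))

def times_x(F):                                       # (xF)(x_k) = x_k F(x_k)
    return nodes[:, None, None] * F

def inv_sqrt(M):                                      # M^{-1/2} for Hermitian positive definite M
    w, V = np.linalg.eigh(M)
    return V @ np.diag(w ** -0.5) @ V.conj().T

# Monic right orthogonal polynomials P^R_0, ..., P^R_3 via right-module Gram-Schmidt
Ps = [np.broadcast_to(np.eye(l), (K, l, l)).astype(complex)]
for n in range(1, 4):
    Q = times_x(Ps[-1])
    for Pm in Ps:
        Q = Q - Pm @ np.linalg.solve(ipR(Pm, Pm), ipR(Pm, Q))
    Ps.append(Q)

# Orthonormal p^R_n (kappa_n > 0 choice), then block Jacobi parameters read off from (1.11)
ps = [P @ inv_sqrt(ipR(P, P)) for P in Ps]
B1 = ipR(ps[0], times_x(ps[0]))                       # B_{n+1} = <<p^R_n, x p^R_n>>_R at n = 0
A1 = ipR(ps[0], times_x(ps[1]))                       # A_n = <<p^R_{n-1}, x p^R_n>>_R at n = 1
print("B_1 Hermitian:", np.allclose(B1, B1.conj().T))
print("A_1 =", np.round(A1, 3))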
In Chapter 2, we discuss the basics of MOPRL while Chapter 3 discusses MOPUC. Chapter 4 discusses the Szegő mapping connection of MOPUC and MOPRL. Finally, Chapter 5 discusses the extension of the theory of regular OPs [180] to MOPRL.
While this is mainly a survey, it does have numerous new results, of which we mention:
(a) The clarification of equivalent Jacobi parameters and several new theorems (Theorems 2.8 and 2.9).
(b) A new result (Theorem 2.28) on the order of poles or zeros of $m(z)$ in terms of eigenvalues of $J$ and the once-stripped $J^{(1)}$.
(c) Formulas for the resolvent in the MOPRL (Theorem 2.29) and MOPUC (Theorem 3.24) cases.
(d) A theorem on zeros of $\det(\Phi^R_n)$ (Theorem 3.7) and eigenvalues of a cutoff CMV matrix (Theorem 3.10).
(e) A new proof of the Geronimus relations (Theorem 4.2).
(f) Discussion of regular MOPRL (Chapter 5).
There are numerous open questions and conjectures in this paper, of which we mention:
(1) We prove that type 1 and type 3 Jacobi parameters in the Nevai class have $A_n \to \mathbf{1}$ but do not know if this is true for type 2 and, if so, how to prove it.
(2) Determine which monic matrix polynomials, $\Phi$, can occur as monic MOPUC. We know $\det(\Phi(z))$ must have all of its zeros in the unit disk in $\mathbb{C}$, but unlike the scalar case where this is sufficient, we do not know necessary and sufficient conditions.
(3) Generalize Khrushchev theory [125, 126, 101] to MOPUC; see Section 3.13.
(4) Provide a proof of the Geronimus relations for MOPUC that uses the theory of canonical moments [43]; see the discussion at the start of Chapter 4.
(5) Prove Conjecture 5.9, extending a result of Stahl–Totik [180] from OPRL to MOPRL.
1.2 Matrix-Valued Measures
Let $\mathcal{M}_l$ denote the ring of all $l \times l$ complex-valued matrices; we denote by $\alpha^\dagger$ the Hermitian conjugate of $\alpha \in \mathcal{M}_l$. (Because of the use of $^*$ for the Szegő dual in the theory of OPUC, we do not use it for the adjoint.) For $\alpha \in \mathcal{M}_l$, we denote by $\|\alpha\|$ its Euclidean norm (i.e., the norm of $\alpha$ as a linear operator on $\mathbb{C}^l$ with the usual Euclidean norm). Consider the set $\mathcal{P}$ of all polynomials in $z \in \mathbb{C}$ with coefficients from $\mathcal{M}_l$. The set $\mathcal{P}$ can be considered either as a right or as a left module over $\mathcal{M}_l$; clearly, conjugation makes the left and right structures isomorphic. For $n = 0, 1, \dots$, $\mathcal{P}_n$ will denote those polynomials in $\mathcal{P}$ of degree at most $n$. The set $\mathcal{V}$ denotes the set of all polynomials in $z \in \mathbb{C}$ with coefficients from $\mathbb{C}^l$. The standard inner product in $\mathbb{C}^l$ is denoted by $\langle\cdot,\cdot\rangle_{\mathbb{C}^l}$.
A matrix-valued measure, $\mu$, on $\mathbb{R}$ (or $\mathbb{C}$) is the assignment of a positive semi-definite $l \times l$ matrix $\mu(X)$ to every Borel set $X$ which is countably additive. We will usually normalize it by requiring
$\mu(\mathbb{R}) = \mathbf{1}$   (1.16)
(or $\mu(\mathbb{C}) = \mathbf{1}$), where $\mathbf{1}$ is the $l \times l$ identity matrix. (We use $\mathbf{1}$ in general for an identity operator, whether in $\mathcal{M}_l$ or in the operators on some other Hilbert space, and $\mathbf{0}$ for the zero operator or matrix.) Normally, our measures for MOPRL will have compact support and, of course, our measures for MOPUC will be supported on all or part of $\partial\mathbb{D}$ ($\mathbb{D}$ is the unit disk in $\mathbb{C}$).
Associated to any such measure is a scalar measure
$\mu_{\mathrm{tr}}(X) = \mathrm{Tr}(\mu(X))$   (1.17)
(the trace normalized by $\mathrm{Tr}(\mathbf{1}) = l$). $\mu_{\mathrm{tr}}$ is normalized by $\mu_{\mathrm{tr}}(\mathbb{R}) = l$.
Applying the Radon–Nikodym theorem to the matrix elements of $\mu$, we see there is a positive semi-definite matrix function $M(x)$ so that
$d\mu_{ij}(x) = M_{ij}(x)\, d\mu_{\mathrm{tr}}(x).$   (1.18)
Clearly, by (1.17),
$\mathrm{Tr}(M(x)) = 1$   (1.19)
for $d\mu_{\mathrm{tr}}$-a.e. $x$. Conversely, any scalar measure with $\mu_{\mathrm{tr}}(\mathbb{R}) = l$ and positive semi-definite matrix-valued function $M$ obeying (1.19) define a matrix-valued measure normalized by (1.16).
Given $l \times l$ matrix-valued functions $f, g$, we define the $l \times l$ matrix $\langle\langle f, g\rangle\rangle_R$ by
$\langle\langle f, g\rangle\rangle_R = \int f(x)^\dagger M(x)\, g(x)\, d\mu_{\mathrm{tr}}(x),$   (1.20)
that is, its $(j, k)$ entry is
$\sum_{n,m} \int \overline{f_{nj}(x)}\, M_{nm}(x)\, g_{mk}(x)\, d\mu_{\mathrm{tr}}(x).$   (1.21)
Since $f^\dagger M f \geq 0$, we see that
$\langle\langle f, f\rangle\rangle_R \geq 0.$   (1.22)
One might be tempted to think of $\langle\langle f, f\rangle\rangle_R^{1/2}$ as some kind of norm, but that is doubtful. Even if $\mu$ is supported at a single point, $x_0$, with $M = l^{-1}\mathbf{1}$, this “norm” is essentially the absolute value of $A = f(x_0)$, which is known not to obey the triangle inequality! (See [169, Sect. I.1] for an example.)
However, if one looks at
$\|f\|_R = (\mathrm{Tr}\,\langle\langle f, f\rangle\rangle_R)^{1/2},$   (1.23)
one does have a norm (or, at least, a semi-norm). Indeed,
$\langle f, g\rangle_R = \mathrm{Tr}\,\langle\langle f, g\rangle\rangle_R$   (1.24)
is a sesquilinear form which is positive semi-definite, so (1.23) is the semi-norm corresponding to an inner product and, of course, one has a Cauchy–Schwarz inequality
$|\mathrm{Tr}\,\langle\langle f, g\rangle\rangle_R| \leq \|f\|_R\, \|g\|_R.$   (1.25)
We have not specified which $f$'s and $g$'s can be used in (1.20). We have in mind mainly polynomials in $x$ in the real case and Laurent polynomials in $z$ in the $\partial\mathbb{D}$ case although, obviously, continuous functions are okay. Indeed, it suffices that $f$ (and $g$) be measurable and obey
$\int \mathrm{Tr}(f(x)^\dagger f(x))\, d\mu_{\mathrm{tr}}(x) < \infty$   (1.26)
for the integrals in (1.21) to converge. The set of equivalence classes under $f \sim g$ if $\|f - g\|_R = 0$ defines a Hilbert space, $\mathcal{H}$, and $\langle f, g\rangle_R$ is the inner product on this space.
Instead of (1.20), we use the suggestive shorthand
$\langle\langle f, g\rangle\rangle_R = \int f(x)^\dagger\, d\mu(x)\, g(x).$   (1.27)
The use of $R$ here comes from “right,” for if $\alpha \in \mathcal{M}_l$,
$\langle\langle f, g\alpha\rangle\rangle_R = \langle\langle f, g\rangle\rangle_R\, \alpha,$   (1.28)
$\langle\langle f\alpha, g\rangle\rangle_R = \alpha^\dagger\, \langle\langle f, g\rangle\rangle_R,$   (1.29)
but, in general, $\langle\langle f, \alpha g\rangle\rangle_R$ is not related to $\langle\langle f, g\rangle\rangle_R$.
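The right-module relations (1.28)–(1.29) and the Cauchy–Schwarz inequality (1.25) are easy to confirm numerically. The short sketch below does so for an arbitrary discrete matrix measure; the random weights and test functions are purely illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
l, K = 2, 4
W = np.empty((K, l, l), dtype=complex)
for k in range(K):                        # dmu given by point masses with matrix weights W_k (assumed)
    G = rng.normal(size=(l, l)) + 1j * rng.normal(size=(l, l))
    W[k] = G @ G.conj().T

def ipR(F, G):                            # <<F, G>>_R, cf. (1.27)
    return sum(F[k].conj().T @ W[k] @ G[k] for k in range(K))

def normR(F):                             # ||F||_R = (Tr <<F, F>>_R)^{1/2}, cf. (1.23)
    return np.sqrt(np.trace(ipR(F, F)).real)

f = rng.normal(size=(K, l, l)) + 1j * rng.normal(size=(K, l, l))
g = rng.normal(size=(K, l, l)) + 1j * rng.normal(size=(K, l, l))
alpha = rng.normal(size=(l, l)) + 1j * rng.normal(size=(l, l))

print(np.allclose(ipR(f, g @ alpha), ipR(f, g) @ alpha))             # (1.28)
print(np.allclose(ipR(f @ alpha, g), alpha.conj().T @ ipR(f, g)))    # (1.29)
print(abs(np.trace(ipR(f, g))) <= normR(f) * normR(g))               # (1.25)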
While $(\mathrm{Tr}\,\langle\langle f, f\rangle\rangle_R)^{1/2}$ is a natural analogue of the norm in the scalar case, it will sometimes be useful to instead consider
$[\det \langle\langle f, f\rangle\rangle_R]^{1/2}.$   (1.30)
Indeed, this is a stronger “norm” in that $\det > 0 \Rightarrow \mathrm{Tr} > 0$ but not vice-versa.
When $d\mu$ is a “direct sum,” that is, each $M(x)$ is diagonal, one can appreciate the difference. In that case, $d\mu = d\mu_1 \oplus \cdots \oplus d\mu_l$ and the MOPRL are direct sums (i.e., diagonal matrices) of scalar OPRL
$P^R_n(x, d\mu) = P_n(x, d\mu_1) \oplus \cdots \oplus P_n(x, d\mu_l).$   (1.31)
Then
$\|P^R_n\|_R = \Bigl(\sum_{j=1}^{l} \|P_n(\cdot, d\mu_j)\|^2_{L^2(d\mu_j)}\Bigr)^{1/2},$   (1.32)
while
$(\det\langle\langle P^R_n, P^R_n\rangle\rangle_R)^{1/2} = \prod_{j=1}^{l} \|P_n(\cdot, d\mu_j)\|_{L^2(d\mu_j)}.$   (1.33)
In particular, in terms of extending the theory of regular measures [180], $\|P^R_n\|_R^{1/n}$ is only sensitive to $\max_j \|P_n(\cdot, d\mu_j)\|^{1/n}_{L^2(d\mu_j)}$ while $(\det\langle\langle P^R_n, P^R_n\rangle\rangle_R)^{1/2}$ is sensitive to them all. Thus, $\det$ will be needed for that theory (see Chapter 5).
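A toy computation makes the contrast concrete. In the sketch below, with two scalar measures chosen arbitrarily (one concentrated near zero, one spread out), the trace-based quantity (1.32) is dominated by the larger of the two scalar norms, while the determinant-based quantity (1.33) retains information about both factors.

import numpy as np

nodes = np.linspace(-1.0, 1.0, 9)
w1 = np.exp(-20 * nodes ** 2)
w1 = w1 / w1.sum()                          # dmu_1: concentrated near 0
w2 = np.ones_like(nodes) / nodes.size       # dmu_2: uniform on the nodes

def monic_norms(w, N):
    """Return ||P_0||, ..., ||P_{N-1}|| for the monic OPs of the measure sum_k w_k delta_{x_k}."""
    def ip(f, g):
        return np.sum(f * g * w)
    P = [np.ones_like(nodes)]
    for n in range(1, N):
        Q = nodes * P[-1]
        for Pm in P:
            Q = Q - ip(Pm, Q) / ip(Pm, Pm) * Pm
        P.append(Q)
    return np.array([np.sqrt(ip(Pn, Pn)) for Pn in P])

n1, n2 = monic_norms(w1, 4), monic_norms(w2, 4)
trace_based = np.sqrt(n1 ** 2 + n2 ** 2)    # ||P^R_n||_R for the 2x2 direct sum, cf. (1.32)
det_based = n1 * n2                         # (det <<P^R_n, P^R_n>>_R)^{1/2}, cf. (1.33)
print("trace-based:", np.round(trace_based, 6))
print("det-based:  ", np.round(det_based, 6))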
There will also be a left inner product and, correspondingly, two sets of MOPRL and MOPUC. We discuss this further in Sections 2.1 and 3.1.
Occasionally, for $\mathbb{C}^l$ vector-valued functions $f$ and $g$, we will want to consider the scalar
$\sum_{k,j} \int \overline{f_k(x)}\, M_{kj}(x)\, g_j(x)\, d\mu_{\mathrm{tr}}(x),$   (1.34)
which we will denote
$\int d\langle f(x), \mu(x)\, g(x)\rangle_{\mathbb{C}^l}.$   (1.35)
We next turn to module Fourier expansions. A set $\{\varphi_j\}_{j=1}^N$ in $\mathcal{H}$ ($N$ may be infinite) is called orthonormal if and only if
$\langle\langle \varphi_j, \varphi_k\rangle\rangle_R = \delta_{jk}\mathbf{1}.$   (1.36)
This natural terminology is an abuse of notation since (1.36) implies orthogonality in $\langle\cdot,\cdot\rangle_R$ but not normalization, and is much stronger than orthogonality in $\langle\cdot,\cdot\rangle_R$.
Suppose for a moment that $N < \infty$. For any $a_1, \dots, a_N \in \mathcal{M}_l$, we can form $\sum_{j=1}^N \varphi_j a_j$ and, by the right multiplication relations (1.28), (1.29), and (1.36), we have
$\Bigl\langle\Bigl\langle\, \sum_{j=1}^N \varphi_j a_j,\ \sum_{j=1}^N \varphi_j b_j \Bigr\rangle\Bigr\rangle_R = \sum_{j=1}^N a_j^\dagger b_j.$   (1.37)
We will denote the set of all such $\sum_{j=1}^N \varphi_j a_j$ by $\mathcal{H}(\varphi_j)$—it is a vector subspace of $\mathcal{H}$ of dimension (over $\mathbb{C}$) $Nl^2$.
Define for $f \in \mathcal{H}$,
$\pi_{(\varphi_j)}(f) = \sum_{j=1}^N \varphi_j \langle\langle \varphi_j, f\rangle\rangle_R.$   (1.38)
It is easy to see it is the orthogonal projection in the scalar inner product $\langle\cdot,\cdot\rangle_R$ from $\mathcal{H}$ to $\mathcal{H}(\varphi_j)$.
By the standard Hilbert space calculation, taking care to only multiply on the right, one finds the Pythagorean theorem,
$\langle\langle f, f\rangle\rangle_R = \langle\langle f - \pi_{(\varphi_j)}f,\ f - \pi_{(\varphi_j)}f\rangle\rangle_R + \sum_{j=1}^N \langle\langle \varphi_j, f\rangle\rangle_R^\dagger\, \langle\langle \varphi_j, f\rangle\rangle_R.$   (1.39)
As usual, this proves for infinite $N$ that
$\sum_{j=1}^N \langle\langle \varphi_j, f\rangle\rangle_R^\dagger\, \langle\langle \varphi_j, f\rangle\rangle_R \leq \langle\langle f, f\rangle\rangle_R$   (1.40)
and the convergence of
$\sum_{j=1}^N \varphi_j \langle\langle \varphi_j, f\rangle\rangle_R \equiv \pi_{(\varphi_j)}(f),$   (1.41)
allowing the definition of $\pi_{(\varphi_j)}$ and of $\mathcal{H}(\varphi_j) \equiv \mathrm{Ran}\, \pi_{(\varphi_j)}$ for $N = \infty$.
An orthonormal set is called complete if $\mathcal{H}(\varphi_j) = \mathcal{H}$. In that case, equality holds in (1.40) and $\pi_{(\varphi_j)}(f) = f$.
For orthonormal bases, we have the Parseval relation from (1.39)
$\langle\langle f, f\rangle\rangle_R = \sum_{j=1}^{\infty} \langle\langle \varphi_j, f\rangle\rangle_R^\dagger\, \langle\langle \varphi_j, f\rangle\rangle_R$   (1.42)
and
$\|f\|^2_R = \sum_{j=1}^{\infty} \mathrm{Tr}\bigl(\langle\langle \varphi_j, f\rangle\rangle_R^\dagger\, \langle\langle \varphi_j, f\rangle\rangle_R\bigr).$   (1.43)
1.3 Matrix Möbius Transformations
Without an understanding of matrix Möbius transformations, the form of the MOPUC Geronimus theorem we will prove in Section 3.10 will seem strange-looking. To set the stage, recall that scalar fractional linear transformations (FLT) are associated to matrices $T = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ with $\det T \neq 0$ via
$f_T(z) = \frac{az + b}{cz + d}.$   (1.44)
Without loss, one can restrict to
$\det(T) = 1.$   (1.45)
Indeed, $T \mapsto f_T$ is a 2 to 1 map of $\mathrm{SL}(2,\mathbb{C})$ to maps of $\mathbb{C} \cup \{\infty\}$ to itself. One advantage of the matrix formalism is that the map is a matrix homomorphism, that is,
$f_{TS} = f_T \circ f_S,$   (1.46)
which shows that the group of FLTs is $\mathrm{SL}(2,\mathbb{C})/\{\mathbf{1}, -\mathbf{1}\}$.
While (1.46) can be checked by direct calculation, a more instructive way is to look at the complex projective line. $u, v \in \mathbb{C}^2 \setminus \{0\}$ are called equivalent if there is $\lambda \in \mathbb{C}\setminus\{0\}$ so that $u = \lambda v$. Let $[\cdot]$ denote equivalence classes. Except for $\bigl[\binom{1}{0}\bigr]$, every equivalence class contains exactly one point of the form $\binom{z}{1}$ with $z \in \mathbb{C}$. If $\bigl[\binom{1}{0}\bigr]$ is associated with $\infty$, the set of equivalence classes is naturally associated with $\mathbb{C} \cup \{\infty\}$. $f_T$ then obeys
$\Bigl[\, T\binom{z}{1} \Bigr] = \Bigl[ \binom{f_T(z)}{1} \Bigr]$   (1.47)
from which (1.46) is immediate.
By Möbius transformations we will mean those FLTs that map $\mathbb{D}$ onto itself. Let
$J = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$   (1.48)
Then $[u] = \bigl[\binom{z}{1}\bigr]$ with $|z| = 1$ (resp. $|z| < 1$) if and only if $\langle u, Ju\rangle = 0$ (resp. $\langle u, Ju\rangle < 0$). From this, it is not hard to show that if $\det(T) = 1$, then $f_T$ maps $\mathbb{D}$ invertibly onto $\mathbb{D}$ if and only if
$T^\dagger J T = J.$   (1.49)
If $T$ has the form $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$, this is equivalent to
$|a|^2 - |c|^2 = 1, \qquad |b|^2 - |d|^2 = -1, \qquad \bar{a}b - \bar{c}d = 0.$   (1.50)
The set of $T$'s obeying $\det(T) = 1$ and (1.49) is called $\mathrm{SU}(1,1)$. It is studied extensively in [168, Sect. 10.4].
The self-adjoint elements of $\mathrm{SU}(1,1)$ are parametrized by $\alpha \in \mathbb{D}$ via $\rho = (1 - |\alpha|^2)^{1/2}$,
$T_\alpha = \frac{1}{\rho}\begin{pmatrix} 1 & \alpha \\ \bar\alpha & 1 \end{pmatrix}$   (1.51)
associated to
$f_{T_\alpha}(z) = \frac{z + \alpha}{1 + \bar\alpha z}.$   (1.52)
Notice that
$T_\alpha^{-1} = T_{-\alpha}$   (1.53)
and that
$\forall z \in \mathbb{D}\ \ \exists!\,\alpha$ such that $f_{T_\alpha}(0) = z$, namely, $\alpha = z$.
It is a basic theorem that every holomorphic bijection of $\mathbb{D}$ to $\mathbb{D}$ is an $f_T$ for some $T$ in $\mathrm{SU}(1,1)$ (unique up to $\pm 1$).
With this in place, we can turn to the matrix case. Let $\mathcal{M}_l$ be the space of $l \times l$ complex matrices with the Euclidean norm induced by the vector norm $\langle\cdot,\cdot\rangle_{\mathbb{C}^l}^{1/2}$. Let
$\mathbb{D}_l = \{A \in \mathcal{M}_l : \|A\| < 1\}.$   (1.54)
We are interested in holomorphic bijections of $\mathbb{D}_l$ to itself, especially via a suitable notion of FLT. There is a huge (and diffuse) literature on the subject, starting with its use in analytic number theory. It has also been studied in connection with electrical engineering filters and indefinite matrix Hilbert spaces. Among the huge literature, we mention [1, 3, 78, 99, 114, 166]. Especially relevant to MOPUC is the book of Bakonyi–Constantinescu [6].
Consider $\mathcal{M}_l \oplus \mathcal{M}_l \equiv \mathcal{M}_l^{[2]}$ as a right module over $\mathcal{M}_l$. The $\mathcal{M}_l$-projective line is defined by saying $\binom{X}{Y} \sim \binom{X'}{Y'}$, both in $\mathcal{M}_l^{[2]} \setminus \{0\}$, if and only if there exists $\Lambda \in \mathcal{M}_l$, $\Lambda$ invertible, so that
$X' = X\Lambda, \qquad Y' = Y\Lambda.$   (1.55)
Let $T$ be a map of $\mathcal{M}_l^{[2]}$ of the form
$T = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$   (1.56)
acting on $\mathcal{M}_l^{[2]}$ by
$T\binom{X}{Y} = \binom{AX + BY}{CX + DY}.$   (1.57)
Because this acts on the left and $\Lambda$ equivalence on the right, $T$ maps equivalence classes to themselves. In particular, if $CX + D$ is invertible, $T$ maps the equivalence class of $\binom{X}{\mathbf{1}}$ to the equivalence class of $\binom{f_T[X]}{\mathbf{1}}$, where
$f_T[X] = (AX + B)(CX + D)^{-1}.$   (1.58)
So long as $CX + D$ remains invertible, (1.46) remains true. Let $J$ be the $2l \times 2l$ matrix in $l \times l$ block form
$J = \begin{pmatrix} \mathbf{1} & \mathbf{0} \\ \mathbf{0} & -\mathbf{1} \end{pmatrix}.$   (1.59)
Note that (with $\binom{X}{\mathbf{1}}^\dagger = (X^\dagger\ \ \mathbf{1})$)
$\binom{X}{\mathbf{1}}^\dagger J \binom{X}{\mathbf{1}} \leq 0 \iff X^\dagger X \leq \mathbf{1} \iff \|X\| \leq 1.$   (1.60)
Therefore, if we define $\mathrm{SU}(l,l)$ to be those $T$'s with $\det T = 1$ and
$T^\dagger J T = J,$   (1.61)
then
$T \in \mathrm{SU}(l,l) \Rightarrow f_T[\mathbb{D}_l] = \mathbb{D}_l \text{ as a bijection}.$   (1.62)
If $T$ has the form (1.56), then (1.61) is equivalent to
$A^\dagger A - C^\dagger C = D^\dagger D - B^\dagger B = \mathbf{1},$   (1.63)
$A^\dagger B = C^\dagger D$   (1.64)
(the fourth relation $B^\dagger A = D^\dagger C$ is equivalent to (1.64)).
This depends on
Proposition 1.1. If $T = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$ obeys (1.61) and $\|X\| < 1$, then $CX + D$ is invertible.
Proof. (1.61) implies that
$T^{-1} = J T^\dagger J$   (1.65)
$\;\; = \begin{pmatrix} A^\dagger & -C^\dagger \\ -B^\dagger & D^\dagger \end{pmatrix}.$   (1.66)
Clearly, (1.61) also implies $T^{-1} \in \mathrm{SU}(l,l)$. Thus, by (1.63) for $T^{-1}$,
$DD^\dagger - CC^\dagger = \mathbf{1}.$   (1.67)
This implies first that $DD^\dagger \geq \mathbf{1}$, so $D$ is invertible, and second that
$\|D^{-1}C\| \leq 1.$   (1.68)
Thus, $\|X\| < 1$ implies $\|D^{-1}CX\| < 1$, so $\mathbf{1} + D^{-1}CX$ is invertible, and thus so is $D(\mathbf{1} + D^{-1}CX) = CX + D$.
It is a basic result of Cartan [18] (see Helgason [114] and the discussion therein) that
Theorem 1.2. A holomorphic bijection, $g$, of $\mathbb{D}_l$ to itself is either of the form
$g(X) = f_T(X)$   (1.69)
for some $T \in \mathrm{SU}(l,l)$ or
$g(X) = f_T(X^t).$   (1.70)
Given $\alpha \in \mathcal{M}_l$ with $\|\alpha\| < 1$, define
$\rho^L = (\mathbf{1} - \alpha^\dagger\alpha)^{1/2}, \qquad \rho^R = (\mathbf{1} - \alpha\alpha^\dagger)^{1/2}.$   (1.71)
Lemma 1.3. We have
$\alpha\,\rho^L = \rho^R\alpha, \qquad \alpha^\dagger\rho^R = \rho^L\alpha^\dagger,$   (1.72)
$\alpha\,(\rho^L)^{-1} = (\rho^R)^{-1}\alpha, \qquad \alpha^\dagger(\rho^R)^{-1} = (\rho^L)^{-1}\alpha^\dagger.$   (1.73)
Proof. Let $f$ be analytic in $\mathbb{D}$ with $f(z) = \sum_{n=0}^{\infty} c_n z^n$ its Taylor series at $z = 0$. Since $\|\alpha^\dagger\alpha\| < 1$, we have
$f(\alpha^\dagger\alpha) = \sum_{n=0}^{\infty} c_n (\alpha^\dagger\alpha)^n$   (1.74)
norm convergent, so $\alpha(\alpha^\dagger\alpha)^n = (\alpha\alpha^\dagger)^n\alpha$ implies
$\alpha f(\alpha^\dagger\alpha) = f(\alpha\alpha^\dagger)\,\alpha,$   (1.75)
which implies the first halves of (1.72) and (1.73). The other halves follow by taking adjoints.
Theorem 1.4. There is a one-one correspondence between $\alpha$'s in $\mathcal{M}_l$ obeying $\|\alpha\| < 1$ and positive self-adjoint elements of $\mathrm{SU}(l,l)$ via
$T_\alpha = \begin{pmatrix} (\rho^R)^{-1} & (\rho^R)^{-1}\alpha \\ (\rho^L)^{-1}\alpha^\dagger & (\rho^L)^{-1} \end{pmatrix}.$   (1.76)
Proof. A straightforward calculation using Lemma 1.3 proves that $T_\alpha$ is self-adjoint and $T_\alpha^\dagger J T_\alpha = J$.
Conversely, if $T$ is self-adjoint, $T = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$, and in $\mathrm{SU}(l,l)$, then $T = T^\dagger$ implies $A = A^\dagger$, $B = C^\dagger$, so (1.63) becomes
$A^\dagger A - BB^\dagger = \mathbf{1},$   (1.77)
so if
$\alpha = A^{-1}B,$   (1.78)
then (1.77) becomes
$A^{-1}(A^{-1})^\dagger + \alpha\alpha^\dagger = \mathbf{1}.$   (1.79)
Since $T \geq 0$, $A \geq 0$, so (1.79) implies $A = (\rho^R)^{-1}$, and then (1.78) implies $B = (\rho^R)^{-1}\alpha$.
By Lemma 1.3,
$C = B^\dagger = \alpha^\dagger(\rho^R)^{-1} = (\rho^L)^{-1}\alpha^\dagger$   (1.80)
and then (by $D = D^\dagger$, $C = B^\dagger$, and (1.63)) $D^\dagger D - CC^\dagger = \mathbf{1}$ plus $D > 0$ implies $D = (\rho^L)^{-1}$.
Corollary 1.5. For each $\alpha \in \mathbb{D}_l$, the map
$f_{T_\alpha}(X) = (\rho^R)^{-1}(X + \alpha)(\mathbf{1} + \alpha^\dagger X)^{-1}\rho^L$   (1.81)
takes $\mathbb{D}_l$ to $\mathbb{D}_l$. Its inverse is given by
$f_{T_\alpha}^{-1}(X) = f_{T_{-\alpha}}(X) = (\rho^R)^{-1}(X - \alpha)(\mathbf{1} - \alpha^\dagger X)^{-1}\rho^L.$   (1.82)
There is an alternate form for the right side of (1.81).
Proposition 1.6. The following identity holds for any $X$ with $\|X\| \leq 1$:
$\rho^R(\mathbf{1} + X\alpha^\dagger)^{-1}(X + \alpha)(\rho^L)^{-1} = (\rho^R)^{-1}(X + \alpha)(\mathbf{1} + \alpha^\dagger X)^{-1}\rho^L.$   (1.83)
Proof. By the definition of $\rho^L$ and $\rho^R$, we have
$X(\rho^L)^{-2}(\mathbf{1} - \alpha^\dagger\alpha) = (\rho^R)^{-2}(\mathbf{1} - \alpha\alpha^\dagger)X.$
Expanding, using (1.73), and rearranging, we get
$X(\rho^L)^{-2} + \alpha(\rho^L)^{-2}\alpha^\dagger X = (\rho^R)^{-2}X + X\alpha^\dagger(\rho^R)^{-2}\alpha.$
Adding $\alpha(\rho^L)^{-2} + X(\rho^L)^{-2}\alpha^\dagger X$ to both sides and using (1.73) again, we obtain
$X(\rho^L)^{-2} + \alpha(\rho^L)^{-2} + X(\rho^L)^{-2}\alpha^\dagger X + \alpha(\rho^L)^{-2}\alpha^\dagger X = (\rho^R)^{-2}X + (\rho^R)^{-2}\alpha + X\alpha^\dagger(\rho^R)^{-2}X + X\alpha^\dagger(\rho^R)^{-2}\alpha,$
which is the same as
$(X + \alpha)(\rho^L)^{-2}(\mathbf{1} + \alpha^\dagger X) = (\mathbf{1} + X\alpha^\dagger)(\rho^R)^{-2}(X + \alpha).$
Multiplying by $(\mathbf{1} + X\alpha^\dagger)^{-1}$ on the left and by $(\mathbf{1} + \alpha^\dagger X)^{-1}$ on the right, we get
$(\mathbf{1} + X\alpha^\dagger)^{-1}(X + \alpha)(\rho^L)^{-2} = (\rho^R)^{-2}(X + \alpha)(\mathbf{1} + \alpha^\dagger X)^{-1}$
and the statement follows.
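Since the placement of adjoints in (1.71)–(1.83) is easy to get wrong, a numerical sanity check can be useful. The sketch below, in which the random matrices, the use of the spectral norm, and the tolerances are assumptions of the example, verifies that $T_\alpha$ of (1.76) satisfies (1.61), that $f_{T_\alpha}$ of (1.81) maps $\mathbb{D}_l$ into $\mathbb{D}_l$ with inverse $f_{T_{-\alpha}}$ as in (1.82), and checks the alternate form (1.83).

import numpy as np

rng = np.random.default_rng(3)
l = 3
I = np.eye(l)

def contraction(scale=0.9):                     # random matrix with spectral norm < 1
    A = rng.normal(size=(l, l)) + 1j * rng.normal(size=(l, l))
    return scale * A / np.linalg.norm(A, 2)

def sqrtm_herm(M):                              # square root of a Hermitian positive definite matrix
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.conj().T

alpha, X = contraction(), contraction()
rhoL = sqrtm_herm(I - alpha.conj().T @ alpha)   # rho^L, cf. (1.71)
rhoR = sqrtm_herm(I - alpha @ alpha.conj().T)   # rho^R
Rinv, Linv = np.linalg.inv(rhoR), np.linalg.inv(rhoL)

T = np.block([[Rinv, Rinv @ alpha], [Linv @ alpha.conj().T, Linv]])   # T_alpha, cf. (1.76)
J = np.block([[I, np.zeros((l, l))], [np.zeros((l, l)), -I]])
print(np.allclose(T.conj().T @ J @ T, J))       # (1.61)

def f_moebius(a, Y):                            # f_{T_a}(Y), cf. (1.81)
    rL = sqrtm_herm(I - a.conj().T @ a)
    rR = sqrtm_herm(I - a @ a.conj().T)
    return np.linalg.inv(rR) @ (Y + a) @ np.linalg.inv(I + a.conj().T @ Y) @ rL

Y = f_moebius(alpha, X)
print(np.linalg.norm(Y, 2) < 1)                 # maps D_l into D_l
print(np.allclose(f_moebius(-alpha, Y), X))     # inverse map, cf. (1.82)
lhs = rhoR @ np.linalg.inv(I + X @ alpha.conj().T) @ (X + alpha) @ Linv
print(np.allclose(lhs, Y))                      # alternate form (1.83)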
1.4 Applications and Examples
There are a number of simple examples which show that beyond their intrinsic mathematical
interest, MOPRL and MOPUC have wide application.
(a) Jacobi matrices on a strip
Let $\Lambda \subset \mathbb{Z}^\nu$ be a subset (perhaps infinite) of the $\nu$-dimensional lattice $\mathbb{Z}^\nu$ and let $\ell^2(\Lambda)$ be the square summable sequences indexed by $\Lambda$. Suppose a real symmetric matrix $\alpha_{ij}$ is given for all $i, j \in \Lambda$ with $\alpha_{ij} = 0$ unless $|i - j| = 1$ (nearest neighbors). Let $\beta_i$ be a real sequence indexed by $i \in \Lambda$. Suppose
$\sup_{i,j} |\alpha_{ij}| + \sup_i |\beta_i| < \infty.$   (1.84)
Define a bounded operator, $J$, on $\ell^2(\Lambda)$ by
$(Ju)_i = \sum_j \alpha_{ij} u_j + \beta_i u_i.$   (1.85)
The sum is finite with at most $2\nu$ elements.
The special case $\Lambda = \{1, 2, \dots\}$ with $b_i = \beta_i$, $a_i = \alpha_{i,i+1} > 0$ corresponds precisely to classical semi-infinite tridiagonal Jacobi matrices.
Now consider the situation where $\Lambda_\infty \subset \mathbb{Z}^{\nu-1}$ is a finite set with $l$ elements and
$\Lambda = \{j \in \mathbb{Z}^\nu : j_1 \in \{1, 2, \dots\};\ (j_2, \dots, j_\nu) \in \Lambda_\infty\},$   (1.86)
a “strip” with cross-section $\Lambda_\infty$. $J$ then has a block $l \times l$ matrix Jacobi form where ($\gamma, \delta \in \Lambda_\infty$)
$(B_i)_{\gamma\delta} = \beta_{(i,\gamma)}, \qquad \gamma = \delta,$   (1.87)
$(B_i)_{\gamma\delta} = \alpha_{(i,\gamma),(i,\delta)}, \qquad \gamma \neq \delta,$   (1.88)
$(A_i)_{\gamma\delta} = \alpha_{(i,\gamma),(i+1,\delta)}.$   (1.89)
The nearest neighbor condition says $(A_i)_{\gamma\delta} = 0$ if $\gamma \neq \delta$. If
$\alpha_{(i,\gamma),(i+1,\gamma)} > 0$   (1.90)
for all $i, \gamma$, then $A_i$ is invertible and we have a block Jacobi matrix of the kind described in Section 2.2 below.
By allowing general $A_i, B_i$, we obtain an obvious generalization of this model—an interpretation of general MOPRL.
Schrödinger operators on strips have been studied in part as approximations to $\mathbb{Z}^\nu$; see [31, 95, 130, 134, 151, 164]. From this point of view, it is also natural to allow periodic boundary conditions in the vertical directions. Furthermore, there is closely related work on Schrödinger (and other) operators with matrix-valued potentials; see, for example, [8, 24, 25, 26, 27, 28, 30, 96, 97, 165].
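To see (1.86)–(1.89) concretely, the sketch below assembles the block Jacobi form of a finite piece of a width-two strip and checks it against the scalar operator (1.85) under the natural grouping of sites into columns; the couplings, the truncation length, and the site ordering are assumptions made for this example.

import numpy as np

rng = np.random.default_rng(4)
n1, l = 5, 2                                    # truncated strip length and cross-section size l = 2
sites = [(i, g) for i in range(1, n1 + 1) for g in range(l)]
idx = {s: k for k, s in enumerate(sites)}       # site (i, gamma) -> index l*(i-1) + gamma

beta = {s: rng.normal() for s in sites}
alpha = {}
for (i, g) in sites:                            # nearest-neighbor couplings only
    if g + 1 < l:
        alpha[((i, g), (i, g + 1))] = rng.normal()              # vertical bond inside column i
    if i + 1 <= n1:
        alpha[((i, g), (i + 1, g))] = abs(rng.normal()) + 0.1    # horizontal bond, > 0 as in (1.90)

def a(s, t):                                    # symmetric lookup of alpha_{st}
    return alpha.get((s, t), alpha.get((t, s), 0.0))

# The scalar operator (1.85) on l^2(Lambda), truncated to the finite piece
H = np.zeros((len(sites), len(sites)))
for s in sites:
    H[idx[s], idx[s]] = beta[s]
    for t in sites:
        if t != s:
            H[idx[s], idx[t]] = a(s, t)

# The block Jacobi form (1.87)-(1.89)
B = [np.array([[beta[(i, g)] if g == d else a((i, g), (i, d)) for d in range(l)]
               for g in range(l)]) for i in range(1, n1 + 1)]
A = [np.diag([a((i, g), (i + 1, g)) for g in range(l)]) for i in range(1, n1)]
Jblock = np.zeros_like(H)
for i in range(n1):
    Jblock[l * i:l * (i + 1), l * i:l * (i + 1)] = B[i]
    if i + 1 < n1:
        Jblock[l * i:l * (i + 1), l * (i + 1):l * (i + 2)] = A[i]
        Jblock[l * (i + 1):l * (i + 2), l * i:l * (i + 1)] = A[i].T
print(np.allclose(H, Jblock))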
(b) Two-sided Jacobi matrices
This example goes back at least to Nikishin [153]. Consider the case $\nu = 2$, $\Lambda_\infty = \{0, 1\} \subset \mathbb{Z}$, and $\Lambda$ as above. Suppose (1.90) holds, and in addition,
$\alpha_{(1,0),(1,1)}$