arXiv:0911.0941v1 [nlin.SI] 4 Nov 2009
The multicomponent 2D Toda hierarchy: generalized matrix
orthogonal polynomials, multiple orthogonal polynomials and
Riemann–Hilbert problems
Carlos Álvarez-Fernández†, Ulises Fidalgo‡ and Manuel Mañas†
†Departamento de Física Teórica II, Universidad Complutense,
28040-Madrid, Spain
‡Departamento de Matemática Aplicada, Universidad Carlos III,
28911-Madrid, Spain
manuel.manas@fis.ucm.es
Abstract
We consider the relation of the multi-component 2D Toda hierarchy with matrix orthogonal and biorthogonal polynomials. The multi-graded Hankel reduction of this hierarchy is considered and the corresponding generalized matrix orthogonal polynomials are studied. In particular, for these polynomials we derive the recursion relations and, for rank-one weights, their relation with multiple orthogonal polynomials of mixed type with a type II normalization, together with the corresponding link with a Riemann–Hilbert problem.
1 Introduction
Recently, multiple orthogonal polynomials, the related Riemann–Hilbert problems and their applications to different areas, for example Brownian motions, have attracted much attention [1]. The field of matrix orthogonal polynomials has also been a growing area of research, with some similarities with the scalar case but much more richness, see [2]. The relation of multiple orthogonal polynomials with multicomponent KP hierarchies was noticed in [3], and the string equation formalism of integrable systems has been applied in [4]. Ten years ago Adler and van Moerbeke [5] (in the context of the so-called discrete KP hierarchy) introduced what they named generalized orthogonal polynomials, together with what they claimed to be the corresponding Riemann–Hilbert problem; later on they studied the related Darboux transformations [6]. Recently, for the Toeplitz case, Cafasso [7] extended this work to block matrices within the non-Abelian Ablowitz–Ladik lattice.
Following Ueno and Takasaki [8] and the seminal paper of Mulase [9], in [10] we gave a description of the infinite multicomponent 2D Toda lattice hierarchy in terms of a Gaussian factorization (also known as Borel factorization) in an infinite-dimensional Lie group, while later on, in [11], we analyzed the dispersionless limit of the hierarchy with the aid of the factorization problem. (See [12] for a discussion of different cases where this factorization makes sense.)
Following [5] we could argue as follows: i) on the one hand, the multicomponent Toda hierarchy may be viewed as an LU factorization of a certain deformed infinite-dimensional matrix, and ii) on the other hand, the same matrix can be thought of as a moment matrix, whose LU factorization gives the corresponding generalized matrix orthogonal polynomials.
In this manner we build a bridge between the multicomponent Toda hierarchy and matrix orthogonal polynomials. This is the main idea developed in this paper. First, we connect matrix orthogonal and biorthogonal polynomials with the multi-component 2D Toda lattice hierarchy, focusing in particular on the Hankel reduction. Second, we generalize the band condition of [5] to the multicomponent case and consider what we refer to as the multi-graded Hankel condition. This leads to a multi-component extension of generalized orthogonal polynomials which in some cases can be described in terms of multiple orthogonal polynomials of mixed type with a type II normalization. This connection allows us to give an appropriate Riemann–Hilbert problem for these generalized orthogonal polynomials (notice that the one discussed in [5] is not correct).
The layout of the paper is as follows. In this introduction we give an overview of the Gaussian factorization and the semi-infinite multi-component 2D Toda lattice hierarchy. Then, in §2 we discuss how matrix orthogonal polynomials and the multi-component 2D Toda lattice hierarchy are connected. Finally, in §3 we consider the multi-graded Hankel reduction, extended generalized orthogonal polynomials, mixed multiple orthogonal polynomials and the corresponding Riemann–Hilbert problems.
1.1 Gaussian factorization and the semi-infinite multi-component 2D Toda
lattice hierarchy
For the construction of a Lie group theoretical setting we denote by $\Lambda$ the shift operator for matrix-valued sequences. The associative algebra of linear operators on these sequences can be identified with the associative algebra of semi-infinite matrices with entries taking values in $\mathbb{C}^{N\times N}$, the set of $N\times N$ complex matrices. With the usual commutator for linear operators this algebra is also a Lie algebra, denoted by $\mathfrak{g}$, whose Lie group $G$ is the group of invertible linear operators in $\mathfrak{g}$. Let us take $g\in G$ and consider the following Gaussian factorization problem
$$ g = S^{-1}\bar S, $$
where $S$ is a block lower triangular matrix with $S_{ii}=\mathbb{I}_N$ ($\mathbb{I}_N\in\mathbb{C}^{N\times N}$ being the identity matrix) and $\bar S$ is a block upper triangular matrix. In [12] it is proven that the Borel decomposition holds if all the principal minors do not vanish. Thus, the factorization survives "small" continuous deformations and we can consider the factorization $g(t)=S(t)^{-1}\bar S(t)$, where $t$ stands for a set of complex variables. As was discussed in [10], this factorization problem leads to an integrable hierarchy of nonlinear PDEs known as the multicomponent 2D Toda lattice hierarchy. Let us discuss these issues in more depth. Observe that the matrix associated with the shift operator is the block matrix $(\mathbb{I}_N\delta_{i,i+1})$ and $\Lambda^t$ is the operator associated with the transposed matrix $(\mathbb{I}_N\delta_{i+1,i})$. If $E_{ab}$, $a,b=1,\dots,N$, is the canonical basis of $\mathbb{C}^{N\times N}$ and $t=(\{t_{ja}\},\{\bar t_{ja}\})$, $j=1,2,\dots$, $a=1,\dots,N$, is a collection of complex parameters, we introduce
$$ W_0 := \sum_{a=1}^{N} E_{aa}\exp\Big(\sum_{j=1}^{\infty} t_{ja}\Lambda^j\Big), \qquad \bar W_0 := \sum_{a=1}^{N} E_{aa}\exp\Big(\sum_{j=1}^{\infty}\bar t_{ja}(\Lambda^t)^j\Big), $$
and consider the Gaussian factorization of $g(t):=W_0(t)\,g\,\bar W_0(t)^{-1}$.
Following [10] we define the Lax operators
$$ \begin{aligned}
L &:= S\Lambda S^{-1} = \Lambda + u_0 + u_1\Lambda^t + u_2(\Lambda^t)^2 + \cdots, &
C_a &:= S E_{aa} S^{-1} = E_{aa} + c_{a1}\Lambda^t + \cdots, \\
\bar L &:= \bar S\Lambda^t\bar S^{-1} = \mathrm{e}^{\phi}\Lambda^t + \bar u_0 + \bar u_1\Lambda + \cdots, &
\bar C_a &:= \bar S E_{aa}\bar S^{-1} = \bar c_{a0} + \bar c_{a1}\Lambda + \cdots,
\end{aligned} \tag{1} $$
where all the coefficients in the $\Lambda$-expansions belong to $\mathbb{C}^{N\times N}$. The multi-component 2D Toda hierarchy has the following Lax representation
$$ \begin{aligned}
\frac{\partial L}{\partial t_{ja}} &= [(L^j C_a)_+, L], &
\frac{\partial C_b}{\partial t_{ja}} &= [(L^j C_a)_+, C_b], &
\frac{\partial \bar L}{\partial t_{ja}} &= [(L^j C_a)_+, \bar L], &
\frac{\partial \bar C_b}{\partial t_{ja}} &= [(L^j C_a)_+, \bar C_b], \\
\frac{\partial L}{\partial \bar t_{ja}} &= [(\bar L^j \bar C_a)_-, L], &
\frac{\partial C_b}{\partial \bar t_{ja}} &= [(\bar L^j \bar C_a)_-, C_b], &
\frac{\partial \bar L}{\partial \bar t_{ja}} &= [(\bar L^j \bar C_a)_-, \bar L], &
\frac{\partial \bar C_b}{\partial \bar t_{ja}} &= [(\bar L^j \bar C_a)_-, \bar C_b],
\end{aligned} $$
where the sub-indices $+$ and $-$ denote the block upper triangular and strictly block lower triangular projections, respectively.
2 Matrix orthogonal polynomials and the multi-component 2D Toda lattice hierarchy
Following Adler and van Moerbeke [5] we construct families of matrix orthogonal and biorthogonal polynomials associated with the 2D Toda lattice hierarchy.
In the first place, we define the following families of (time-dependent) matrix polynomials
$$ p(z) \equiv \{p_i(z)\}_{i\geq 0} := S\chi(z), \qquad \bar p(z) \equiv \{\bar p_i(z)\}_{i\geq 0} := (\bar S^{-1})^\dagger\chi(z), $$
where $\chi(z) := (\mathbb{I}_N, z\mathbb{I}_N, z^2\mathbb{I}_N,\dots)^t$ and the symbol $\dagger$ denotes Hermitian conjugation. Next we consider a matrix-valued bilinear pairing between matrix polynomials. Given matrix polynomials $P(z) = \sum_{k=0}^{i} P_k z^k$ and $Q(z) = \sum_{l=0}^{j} Q_l z^l$ (of degrees $i,j$, respectively) we have
$$ \langle P(z), Q(z)\rangle = \sum_{\substack{k=0,\dots,i\\ l=0,\dots,j}} P_k\,\langle z^k\mathbb{I}_N, z^l\mathbb{I}_N\rangle\, Q_l^\dagger, $$
where $\langle z^k\mathbb{I}_N, z^l\mathbb{I}_N\rangle$ denotes the matrix of the bilinear pairing in the canonical basis, which for each $(k,l)$ is an $N\times N$ complex matrix.
This pairing has the following properties:
1. It is linear in the first component:
$$ \langle c_1 P_1(z) + c_2 P_2(z), Q(z)\rangle = c_1\langle P_1(z), Q(z)\rangle + c_2\langle P_2(z), Q(z)\rangle, \qquad \forall c_1, c_2\in\mathbb{C}^{N\times N}. $$
2. It is skew-linear in the second component:
$$ \langle P(z), c_1 Q_1(z) + c_2 Q_2(z)\rangle = \langle P(z), Q_1(z)\rangle c_1^\dagger + \langle P(z), Q_2(z)\rangle c_2^\dagger, \qquad \forall c_1, c_2\in\mathbb{C}^{N\times N}. $$
Proposition 1.
1. If $\langle z^i\mathbb{I}_N, z^j\mathbb{I}_N\rangle = g(t)_{ij}$, where $g_{ij}$ is the $\mathbb{C}^{N\times N}$ block in the position $(i,j)$, then the families $p(z)$ and $\bar p(z)$ are biorthogonal matrix polynomials for the bilinear pairing, i.e.
$$ \langle p_i(z), \bar p_j(z)\rangle = \delta_{ij}\mathbb{I}_N. $$
Moreover,
$$ \langle p_i(z), z^l\mathbb{I}_N\rangle = 0, \quad l = 0,\dots,i-1, \qquad \langle z^l\mathbb{I}_N, \bar p_j(z)\rangle = 0, \quad l = 0,\dots,j-1. \tag{2} $$
2. In addition, if the time-dependent initial condition $g(t)$ is Hermitian for all $t$, then $p(z)$ and $\bar p(z)$ are two families of matrix orthogonal polynomials; moreover, the two families are proportional.
Proof. 1. With the previous definitions for $p(z)$ and $\bar p(z)$ we have
$$ p_i(z) = \sum_{k=0}^{i} S_{ik} z^k, \qquad \bar p_j(z) = \sum_{l=0}^{j} (\bar S^{-1}_{lj})^\dagger z^l, $$
where $S_{ik}$ and $\bar S^{-1}_{lj}$ are the blocks $(i,k)$ and $(l,j)$ of $S$ and $\bar S^{-1}$, respectively. Hence
$$ \langle p_i(z), \bar p_j(z)\rangle = \sum_{k,l=0}^{i,j} S_{ik}\,\langle z^k\mathbb{I}_N, z^l\mathbb{I}_N\rangle\,\bar S^{-1}_{lj} = \sum_{k,l\geq 0} S_{ik}\,\langle z^k\mathbb{I}_N, z^l\mathbb{I}_N\rangle\,\bar S^{-1}_{lj} = (S g(t)\bar S^{-1})_{ij} = (S S^{-1}\bar S\bar S^{-1})_{ij} = \delta_{ij}\mathbb{I}_N, $$
as desired. Finally, (2) is proven by induction. First we have that $\langle p_i(z), \bar p_0(z)\rangle = 0$ (for $i > 0$), but $\bar p_0(z) = (\bar S^{-1}_{00})^\dagger$ is invertible and therefore we conclude $\langle p_i(z), \mathbb{I}_N\rangle = 0$. Now, $\langle p_i(z), \bar p_1(z)\rangle = 0$, but $\bar p_1(z) = (\bar S^{-1}_{11})^\dagger z + (\bar S^{-1}_{10})^\dagger$, and using the skew-linearity, the previous result and the fact that $(\bar S^{-1}_{11})^\dagger$ is invertible we deduce that $\langle p_i(z), z\mathbb{I}_N\rangle = 0$, and so forth.
2. Let us study the conditions under which $p(z)$ is a family of matrix orthogonal polynomials. If we take two polynomials in the family, such as $p_i(z) = \sum_{k=0}^{i} S_{ik} z^k$ and $p_j(z) = \sum_{l=0}^{j} S_{jl} z^l$, we have
$$ \langle p_i(z), p_j(z)\rangle = \sum_{k,l=0}^{i,j} S_{ik}\,\langle z^k\mathbb{I}_N, z^l\mathbb{I}_N\rangle\,(S_{jl})^\dagger = \sum_{k,l\geq 0} S_{ik}\,\langle z^k\mathbb{I}_N, z^l\mathbb{I}_N\rangle\,(S^\dagger)_{lj} = (S g(t) S^\dagger)_{ij} = (\bar S S^\dagger)_{ij}. $$
Observe that $\bar S S^\dagger$ is clearly block upper triangular, with its Hermitian conjugate given by
$$ (\bar S S^\dagger)^\dagger = S\bar S^\dagger = S g(t)(g(t)^{-1})^\dagger\bar S^\dagger = \bar S(\bar S g(t)^{-1})^\dagger = \bar S S^\dagger. $$
Therefore $\bar S S^\dagger$ is Hermitian and block upper triangular, which implies that $\bar S S^\dagger$ is a block diagonal matrix whose diagonal blocks are Hermitian matrices in $\mathbb{C}^{N\times N}$.
We conclude that $\langle p_i(z), p_j(z)\rangle = \delta_{ij}(h_i)^{-1}$, where $h_i$ is a Hermitian matrix. Notice also that as a consequence $\bar p(z) = h\, p(z)$, where $h = \operatorname{diag}(h_1, h_2,\dots)$.
2.1 The Hankel case
We choose $g$ to be a block Hankel matrix, so that $\Lambda g = g\Lambda^t$, or
$$ (\Lambda g)_{ij} = g_{i+1,j} = g_{i,j+1} = (g\Lambda^t)_{ij}. $$
In this case the blocks of the moment matrix $g$ satisfy
$$ g_{ij} = \gamma^{(i+j)}, $$
for some matrices $\gamma^{(j)}\in\mathbb{C}^{N\times N}$. From
$$ \Lambda g(t) = \Lambda W_0\, g\,\bar W_0^{-1} = W_0\,\Lambda g\,\bar W_0^{-1} = W_0\, g\,\Lambda^t\,\bar W_0^{-1} = W_0\, g\,\bar W_0^{-1}\Lambda^t = g(t)\Lambda^t, $$
we easily deduce that $g(t)$ is block Hankel if $g$ is. We have
Proposition 2. Assume that $g$ is block Hankel with $\gamma^{(j)}$ a block moment matrix, i.e. $\gamma^{(j)} = \int_{\mathbb{R}} x^j\rho(x)\,\mathrm{d}x$; then the pairing can be viewed as a scalar product on the real line whose moment matrix is $g$, i.e.,
$$ \langle P(x), Q(x)\rangle = \int_{\mathbb{R}} P(x)\rho(x)Q(x)^\dagger\,\mathrm{d}x. $$
Proof. On one hand we have $\langle P(x), Q(x)\rangle = \sum_{ij} P_i\gamma^{(i+j)}Q_j^\dagger$. Using the previous definition $\int_{\mathbb{R}} x^{j+k}\rho(x)\,\mathrm{d}x = \langle x^j\mathbb{I}_N, x^k\mathbb{I}_N\rangle = \gamma^{(j+k)}$ (the Hankel symmetry ensures that there is only dependence on $j+k$) we have
$$ \langle P(x), Q(x)\rangle = \sum_{ij} P_i\gamma^{(i+j)}Q_j^\dagger = \sum_{ij} P_i\int_{\mathbb{R}} x^{i+j}\rho(x)\,\mathrm{d}x\; Q_j^\dagger = \int_{\mathbb{R}} P(x)\rho(x)Q(x)^\dagger\,\mathrm{d}x. $$
In general, arbitrary continuous deformations do not preserve the Hermitian character of $g$. If we look for families of matrix orthogonal polynomials on the real line we should make restricted deformations. Let us make this point clear. In what follows $z^*$ denotes the complex conjugate of $z\in\mathbb{C}$.
Proposition 3. If the matrix $g$ is block Hankel and the matrices $\gamma^{(j)}$ are Hermitian, then
1. The families $p(z)$ and $\bar p(z)$ are proportional, and both are families of matrix orthogonal polynomials on the real line.
2. Moreover, if the continuous deformation parameters satisfy one of the two following conditions:
(a) $t_{ja}, \bar t_{ja}\in\mathbb{R}$ and $t_{ja} = t_j$, $\bar t_{ja} = \bar t_j$, $a = 1,\dots,N$;
(b) $t_{ja}, \bar t_{ja}$ satisfy $t_{ja} + \bar t_{ja}^{\,*} = 0$;
then the result holds for the time-dependent moment matrix.
Proof. 1. If the matrix $g$ is block Hankel and its blocks are Hermitian, then $g$ is itself Hermitian. Indeed, given a pair of indices $i,j$ for an entry of $g$, there exist four integer indices $(k,l),(m,n)$ with $k,l\geq 0$ and $m,n = 1,\dots,N$ such that $a_{ij} = (A_{kl})_{mn} = (A_{kl})^*_{nm} = (A_{lk})^*_{nm} = a^*_{ji}$ (we use $A\in\mathbb{C}^{N\times N}$ for a block of $g$); as a consequence $g = g^\dagger$, and by Proposition 1 the first part of the result holds.
2. Let us compute the time evolution of $g$. We have
$$ W_0 = \sum_{a=1}^{N} E_{aa}\exp\Big(\sum_{j=1}^{\infty} t_{ja}\Lambda^j\Big) = \sum_{a=1}^{N} E_{aa}\sum_{j=0}^{\infty} s^{(a)}_j\Lambda^j, $$
where $s^{(a)}_j$ is the $j$-th Schur¹ polynomial for the component $a$. Consequently, taking $s_j := \sum_{a=1}^{N} E_{aa}\, s^{(a)}_j$,
$$ W_0 = \sum_{j=0}^{\infty}\sum_{a=1}^{N} E_{aa}\, s^{(a)}_j\Lambda^j = \sum_{j=0}^{\infty} s_j\Lambda^j, $$
so it is straightforward to see the block structure of $W_0$, whose blocks are given by $(W_0)_{ij} = s_{j-i}$ if $j - i\geq 0$ and $0_N$ otherwise. A similar argument for $\bar W_0^{-1}$ leads to $(\bar W_0^{-1})_{ij} = \bar s_{i-j}$ if $i - j\geq 0$ and $0_N$ otherwise (here $\bar s_j := \sum_{a=1}^{N} E_{aa}\,\bar s^{(a)}_j$ and $\bar s^{(a)}_j$ is the $j$-th Schur polynomial for the component $a$ but now in the variables $-\bar t_{ja}$). We are now ready to compute
$$ g(t)_{ij} = \sum_{k,l\geq 0} (W_0)_{ik}\, g_{kl}\,(\bar W_0^{-1})_{lj} = \sum_{k\geq i,\; l\geq j} s_{k-i}\,\gamma^{(k+l)}\,\bar s_{l-j} = \sum_{k,l\geq 0} s_k\,\gamma^{(i+j+k+l)}\,\bar s_l. $$
Then
(a) If the first condition holds then all the matrices $s_j$ and $\bar s_j$ are real and scalar, so
$$ g(t)^\dagger_{ij} = \sum_{k,l\geq 0}\bar s^\dagger_l\,\gamma^{(i+j+k+l)}\, s^\dagger_k = \sum_{k,l\geq 0} s^\dagger_k\,\gamma^{(i+j+k+l)}\,\bar s^\dagger_l = \sum_{k,l\geq 0} s_k\,\gamma^{(i+j+k+l)}\,\bar s_l = g(t)_{ij}. $$
(b) If the second condition holds then $s_j = \bar s^\dagger_j$ for all $j$, so
$$ g(t)^\dagger_{ij} = \sum_{k,l\geq 0}\bar s^\dagger_l\,\gamma^{(i+j+k+l)}\, s^\dagger_k = \sum_{l,k\geq 0}\bar s^\dagger_k\,\gamma^{(i+j+l+k)}\, s^\dagger_l = \sum_{k,l\geq 0} s_k\,\gamma^{(i+j+k+l)}\,\bar s_l = g(t)_{ij}. $$
¹We remind the reader that the Schur polynomials are given by $\exp\big(\sum_{j=1}^{\infty} t_{ja}\Lambda^j\big) = \sum_{j=0}^{\infty} s^{(a)}_j\Lambda^j$, so that $s^{(a)}_0 = 1$ and $s^{(a)}_j = \sum_{p=1}^{j}\frac{1}{p!}\sum_{j_1+\cdots+j_p = j} t_{j_1 a}\cdots t_{j_p a} = t_{ja} + \cdots$, where the missing terms form a polynomial of degree at most $j$ in the variables $t_{1a}, t_{2a},\dots,t_{j-1,a}$.
Under either of the two conditions $g(t)$ is block Hankel with Hermitian blocks, so it is itself Hermitian. This proves the second part of the proposition.
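The Schur polynomials used above can be generated directly from the exponential series. The following sketch is illustrative (the time values are arbitrary choices): it multiplies the truncated series of each factor $\exp(t_j\Lambda^j)$ and checks the first coefficients against the closed forms $s_2 = t_2 + t_1^2/2$ and $s_3 = t_3 + t_1 t_2 + t_1^3/6$:

```python
import math
import numpy as np

def schur(ts, jmax):
    """Coefficients s_0, ..., s_jmax of exp(Σ_{j>=1} t_j Λ^j) as a power
    series in Λ, built by multiplying the series of each factor exp(t_j Λ^j)."""
    s = np.zeros(jmax + 1)
    s[0] = 1.0
    for j, tj in enumerate(ts, start=1):
        f = np.zeros(jmax + 1)
        for p in range(jmax // j + 1):
            f[j * p] = tj**p / math.factorial(p)   # series of exp(t_j Λ^j)
        s = np.convolve(s, f)[: jmax + 1]
    return s

t1, t2, t3 = 0.3, -0.5, 0.2     # illustrative time values
s = schur([t1, t2, t3], 4)
assert abs(s[1] - t1) < 1e-12
assert abs(s[2] - (t2 + t1**2 / 2)) < 1e-12
assert abs(s[3] - (t3 + t1 * t2 + t1**3 / 6)) < 1e-12
```

The same routine with the arguments $-\bar t_{ja}$ produces the $\bar s^{(a)}_j$ of the text.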
We now discuss the recursion formulae. Using the Hankel condition $\Lambda g = g\Lambda^t$ and the usual definition of the Lax operators we conclude that $L = \bar L$, and hence $L = \Lambda + u(n) + v(n)\Lambda^t$.
Proposition 4. The polynomials $p(z)$, $\bar p(z)$ satisfy a three-term recurrence law, given for $p$ by
$$ p_{n+1}(z) = z\, p_n(z) - u(n)\, p_n(z) - v(n)\, p_{n-1}(z). $$
Proof. From the definition of $L$ we have $L p(z) = L S\chi(z) = z\, p(z) = (\Lambda + u(n) + v(n)\Lambda^t)\, p(z)$. Taking the sequence terms $\{p_n(z)\}_n$ we conclude $z\, p_n(z) = p_{n+1}(z) + u(n)\, p_n(z) + v(n)\, p_{n-1}(z)$. For the polynomials $\bar p(z)$ we have $\bar L^\dagger\bar p(z) = z\,\bar p(z) = (\Lambda v^\dagger(n) + u^\dagger(n) + \Lambda^t)\,\bar p(z)$, and using the sequence $\{\bar p_n(z)\}_n$ we obtain another recurrence law.
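For $N = 1$ the three-term recurrence of Proposition 4 can be verified numerically. The sketch below is an illustration, not the paper's construction: the weight is $\mathrm{e}^{-x^2}$, whose moments are $\Gamma((k+1)/2)$ for even $k$; the monic orthogonal polynomials are obtained from the orthogonality conditions, and the coefficients $u(n)$, $v(n)$ from the standard inner-product ratios for monic orthogonal polynomials:

```python
import math
import numpy as np

# Moments of the weight w(x) = exp(-x^2) on R: m_k = Γ((k+1)/2) for even k,
# 0 for odd k (illustrative scalar N = 1 example).
mom = [math.gamma((k + 1) / 2) if k % 2 == 0 else 0.0 for k in range(12)]

def monic_op(i):
    """Coefficients (low degree first) of the monic degree-i orthogonal
    polynomial, from the conditions <p_i, x^l> = 0, l = 0, ..., i-1."""
    if i == 0:
        return np.array([1.0])
    A = np.array([[mom[k + l] for k in range(i)] for l in range(i)])
    b = -np.array([mom[i + l] for l in range(i)])
    return np.append(np.linalg.solve(A, b), 1.0)

def pair(c1, c2):
    """<P, Q> = ∫ P Q w dx expressed through the moments."""
    return sum(c1[k] * c2[l] * mom[k + l]
               for k in range(len(c1)) for l in range(len(c2)))

def ev(c, z):
    return sum(ck * z**k for k, ck in enumerate(c))

i = 3
p_prev, p_cur, p_next = monic_op(i - 1), monic_op(i), monic_op(i + 1)
# standard recurrence coefficients for monic orthogonal polynomials
u = pair(np.concatenate([[0.0], p_cur]), p_cur) / pair(p_cur, p_cur)
v = pair(p_cur, p_cur) / pair(p_prev, p_prev)

# check z p_i(z) = p_{i+1}(z) + u p_i(z) + v p_{i-1}(z) at a sample point
z = 0.7
lhs = z * ev(p_cur, z)
rhs = ev(p_next, z) + u * ev(p_cur, z) + v * ev(p_prev, z)
assert abs(lhs - rhs) < 1e-10
```

For this symmetric weight the polynomials are monic Hermite polynomials, so $u = 0$ and $v = i/2$.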
3 Multi-graded Hankel reduction, generalized orthogonality, multiple orthogonal polynomials and Riemann–Hilbert problems
In this section we will study generalized Hankel-type conditions. Given a multi-index $\bar n = (n_1,\dots,n_N)$ with $n_a$ non-negative integers, we define for $A\in\mathfrak{g}$ the power $A^{\bar n} = \sum_{a=1}^{N} A^{n_a}E_{aa}$. For two multi-indices $\bar n$ and $\bar m$, a matrix $g$ is said to be $(\bar n,\bar m)$ multi-graded Hankel if
$$ \Lambda^{\bar n} g = g(\Lambda^t)^{\bar m}. \tag{3} $$
If, as before, $g_{ij}\in\mathbb{C}^{N\times N}$ denotes a block of $g$, we can write $g_{ij} = (g_{ij,ab})_{1\leq a,b\leq N}$, and the multi-graded Hankel condition reads $g_{i+n_a\,j,ab} = g_{i\,j+m_b,ab}$. An ample family of multi-graded Hankel matrices can be constructed in terms of weights $\rho_{j,ab}$ as the moments
$$ g_{ij,ab} = \int_{\mathbb{R}} x^i\rho_{j,ab}(x)\,\mathrm{d}x, \tag{4} $$
where the weights satisfy a generalized periodicity condition of the form
$$ \rho_{j+m_b,ab}(x) = x^{n_a}\rho_{j,ab}(x). \tag{5} $$
Thus, given the weights $\rho_{0,ab},\dots,\rho_{m_b-1,ab}$, all the others are fixed by (5). From now on we concentrate only on these cases of multi-graded Hankel matrices. We notice that for the one-component case with $n_1 = m_1$ these moment matrices were studied in [5]-[6].
Proposition 5. For multi-graded Hankel matrices the matrix polynomials $p_i$ satisfy the following generalized orthogonality conditions
$$ \int_{\mathbb{R}} p_i(x)\rho_j(x)\,\mathrm{d}x = 0, \qquad j = 0,\dots,i-1, \qquad \rho_j := (\rho_{j,ab})\in\mathbb{C}^{N\times N}. \tag{6} $$
Proof. From $S g(t)\bar S^{-1} = \mathbb{I}$ we get $\sum_{j=0}^{i}\sum_{b=1}^{N} p_{ij,ab}\, g_{jl,bc} = 0$ for $a,c = 1,\dots,N$ and $l = 0,\dots,i-1$. Now recalling (4) we get the result.
Using the Euclidean division $i = \theta_c m_c + \sigma_c$, with $\theta_c\geq 0$ and $0\leq\sigma_c < m_c$, we get a better insight into the orthogonality relations (6): for $a,c = 1,\dots,N$,
$$ \begin{aligned}
\sum_{b=1}^{N}\int_{\mathbb{R}} p_{i,ab}(x)\,\rho_{j,bc}(x)\,(x^{n_b})^l\,\mathrm{d}x &= 0, & j &= 0,\dots,m_c-1, \quad l = 0,\dots,\theta_c-1, \\
\sum_{b=1}^{N}\int_{\mathbb{R}} p_{i,ab}(x)\,\rho_{j,bc}(x)\,(x^{n_b})^{\theta_c}\,\mathrm{d}x &= 0, & j &= 0,\dots,\sigma_c-1.
\end{aligned} \tag{7} $$
3.1 Evolution
From (3) we conclude that $g(t)$ is of $(\bar n,\bar m)$ multi-graded Hankel type if $g$ is. In general, the evolution of $g$ is given in terms of Schur polynomials by
$$ g_{ij}(t) = \sum_{k,l\geq 0} s_k\, g_{i+k,j+l}\,\bar s_l, $$
which, recalling (4), leads to the following evolution of the weights
$$ \rho_j(t) = \sum_{k,l\geq 0} x^k s_k\,\rho_{j+l}\,\bar s_l = \exp(t(x))\sum_{l\geq 0}\rho_{j+l}\,\bar s_l, \tag{8} $$
where $t(x) = \sum_{a=1}^{N} t_a(x)E_{aa}$ and $t_a(x) := \sum_{j\geq 1} t_{ja}x^j$. It can be easily checked that $\rho_j(t)$ satisfies the periodicity condition (5) if $\rho_j$ does. From (8) we infer that
$$ \begin{pmatrix}\rho_{0,ab}(t)\\ \rho_{1,ab}(t)\\ \vdots\\ \rho_{m_b-1,ab}(t)\end{pmatrix} = \exp(t_a(x))\begin{pmatrix}
S_{0,ab} & S_{1,ab} & S_{2,ab} & \cdots & S_{m_b-1,ab}\\
x^{n_a}S_{m_b-1,ab} & S_{0,ab} & S_{1,ab} & \cdots & S_{m_b-2,ab}\\
x^{n_a}S_{m_b-2,ab} & x^{n_a}S_{m_b-1,ab} & S_{0,ab} & \cdots & S_{m_b-3,ab}\\
\vdots & & & \ddots & \vdots\\
x^{n_a}S_{1,ab} & x^{n_a}S_{2,ab} & x^{n_a}S_{3,ab} & \cdots & S_{0,ab}
\end{pmatrix}\begin{pmatrix}\rho_{0,ab}\\ \rho_{1,ab}\\ \vdots\\ \rho_{m_b-1,ab}\end{pmatrix} \tag{9} $$
where
$$ S_{i,ab} = \sum_{j\geq 0}\bar s^{(b)}_{i+m_b j}\,(x^{n_a})^j = \frac{1}{m_b\,x^{i n_a/m_b}}\sum_{k=0}^{m_b-1}\varepsilon_b^{-ik}\exp\big(-\bar t_b(\varepsilon_b^k x^{n_a/m_b})\big), \qquad \varepsilon_b^{m_b} = 1. $$
If we denote
$$ \bar t_a^{(l)}(x) = \sum_{j\geq 0}\bar t_{j m_b+l,a}\,x^{j m_b+l}, \qquad l = 0,1,\dots,m_b-1, $$
we have
$$ \bar t_a(\varepsilon_b^k x) = \bar t_a^{(0)}(x) + \bar t_a^{[k]}(x), \qquad \bar t_a^{[k]}(x) = \sum_{\substack{j\geq 0\\ l=1,\dots,m_b-1}}\bar t_{j m_b+l,a}\,\varepsilon_b^{kl}\,x^{j m_b+l}, $$
and therefore
$$ S_{i,ab} = \frac{1}{m_b\,x^{i n_a/m_b}}\exp\big(-\bar t_b^{(0)}(x^{n_a/m_b})\big)\sum_{k=0}^{m_b-1}\varepsilon_b^{-ik}\exp\big(-\bar t_b^{[k]}(x^{n_a/m_b})\big). \tag{10} $$
Finally, we deduce
$$ \rho_{j,ab}(t) = \exp\big(t_a(x) - \bar t_b^{(0)}(x^{n_a/m_b})\big)\sum_{k=0}^{m_b-1}\hat\rho^{(k)}_{j,ab}\exp\big(-\bar t_b^{[k]}(x^{n_a/m_b})\big), \tag{11} $$
where we have used the discrete Fourier transform of the weights
$$ \hat\rho^{(k)}_{j,ab} := \frac{1}{m_b}\sum_{i=0}^{m_b-1}\varepsilon_b^{-ik}\,x^{-i n_a/m_b}\,\rho_{j+i,ab}. $$
3.2 Recursion relations and symmetries
In terms of Lax operators the multi-graded Hankel reduction reads [10]
$$ \mathcal{L} := \sum_{a=1}^{N} L^{n_a}C_a = \sum_{b=1}^{N}\bar L^{m_b}\bar C_b. \tag{12} $$
Within this subsection we assume that
$$ n_1\geq\cdots\geq n_N\geq 1, \qquad m_1\geq\cdots\geq m_N\geq 1, $$
and suppose that $n_1 = \cdots = n_r$ and $n_r > n_{r+1}$. Given (1), from (12) we deduce that
$$ \mathcal{L} = (E_{11}+\cdots+E_{rr})\Lambda^{n_1} + \mathcal{L}_{n_1-1}\Lambda^{n_1-1} + \cdots + \mathcal{L}_0 + \mathcal{L}_{-1}\Lambda^t + \mathcal{L}_{-2}(\Lambda^t)^2 + \cdots, $$
while we also have $\mathcal{L} = \sum_{b=1}^{N}\bar L^{m_b}\bar C_b$ and therefore
$$ \mathcal{L} = \mathcal{L}_{-m_1}(\Lambda^t)^{m_1} + \mathcal{L}_{-m_1+1}(\Lambda^t)^{m_1-1} + \cdots + \mathcal{L}_0 + \mathcal{L}_1\Lambda + \mathcal{L}_2\Lambda^2 + \cdots. $$
We conclude the block band structure
$$ \mathcal{L} = (E_{11}+\cdots+E_{rr})\Lambda^{n_1} + \mathcal{L}_{n_1-1}\Lambda^{n_1-1} + \cdots + \mathcal{L}_{-m_1+1}(\Lambda^t)^{m_1-1} + \mathcal{L}_{-m_1}(\Lambda^t)^{m_1}. \tag{13} $$
Proposition 6. The polynomials $p_i(z)$ are subject to the recursion
$$ (E_{11}+\cdots+E_{rr})\, p_{i+n_1}(z) + \cdots + \mathcal{L}_{-m_1}\, p_{i-m_1}(z) = p_i(z)\Big(\sum_{a=1}^{N} z^{n_a}E_{aa}\Big). $$
Proof. We only have to show that $\mathcal{L}p = p\big(\sum_{a=1}^{N} z^{n_a}E_{aa}\big)$. But this follows from
$$ \mathcal{L}p = S\Big(\sum_{a=1}^{N}\Lambda^{n_a}E_{aa}\Big)S^{-1}S\chi = S\chi\Big(\sum_{a=1}^{N} z^{n_a}E_{aa}\Big) = p(z)\Big(\sum_{a=1}^{N} z^{n_a}E_{aa}\Big). $$
Similarly, from $\mathcal{L}^\dagger\bar p(z) = \bar p(z)\big(\sum_{b=1}^{N} z^{m_b}E_{bb}\big)$ a recursion relation follows for the $\bar p_i$.
Finally, we notice that in this case the following symmetry conditions hold [10]:
$$ \Big(\sum_{a=1}^{N}\frac{\partial}{\partial t_{i n_a a}} + \sum_{a=1}^{N}\frac{\partial}{\partial\bar t_{i m_a a}}\Big)L = \Big(\sum_{a=1}^{N}\frac{\partial}{\partial t_{i n_a a}} + \sum_{a=1}^{N}\frac{\partial}{\partial\bar t_{i m_a a}}\Big)\bar L = 0, $$
$$ \Big(\sum_{a=1}^{N}\frac{\partial}{\partial t_{i n_a a}} + \sum_{a=1}^{N}\frac{\partial}{\partial\bar t_{i m_a a}}\Big)C_b = \Big(\sum_{a=1}^{N}\frac{\partial}{\partial t_{i n_a a}} + \sum_{a=1}^{N}\frac{\partial}{\partial\bar t_{i m_a a}}\Big)\bar C_b = 0, $$
for $i\geq 1$ and $b = 1,\dots,N$.
3.3 Relation with multiple orthogonal polynomials
We will see that when the weights $\rho_j$ are particular rank-one matrices there is a nice correspondence with multiple orthogonal polynomials of mixed type with a normalization of type II. Following [1] we take two sets of non-negative multi-indices $\bar\nu = (\nu_1,\dots,\nu_p)$, $\bar\mu = (\mu_1,\dots,\mu_q)$ and write $|\nu| = \sum_{J=1}^{p}\nu_J$ and $|\mu| = \sum_{K=1}^{q}\mu_K$. We also take weights $\{w_{1J}\}_{J=1}^{p}$ and $\{w_{2K}\}_{K=1}^{q}$, which are assumed to be non-negative functions on the real line. For a fixed pair $\bar\nu,\bar\mu$ we say that $\{A_{\bar\nu,\bar\mu,J}\}_{J=1,\dots,p}$ is a set of multiple orthogonal polynomials of mixed type if $\deg A_{\bar\nu,\bar\mu,J}\leq\nu_J - 1$ and the following orthogonality relations are satisfied:
$$ \int_{\mathbb{R}}\sum_{J=1}^{p} A_{\bar\nu,\bar\mu,J}(x)\,w_{1J}(x)\,w_{2K}(x)\,x^\alpha\,\mathrm{d}x = 0, \qquad \alpha = 0,\dots,\mu_K-1, \quad K = 1,\dots,q. \tag{14} $$
Alternatively, defining the linear forms $Q_{\bar\nu,\bar\mu}(x) := \sum_{J=1}^{p} A_{\bar\nu,\bar\mu,J}(x)\,w_{1J}(x)$, the orthogonality relations can be written as
$$ \int_{\mathbb{R}} Q_{\bar\nu,\bar\mu}(x)\,w_{2K}(x)\,x^\alpha\,\mathrm{d}x = 0, \qquad \alpha = 0,\dots,\mu_K-1, \quad K = 1,\dots,q. $$
For multiple orthogonal polynomials as described above we will take $|\nu| = |\mu|$ and assume that the following conditions hold:
1. For each $K = 1,\dots,p$ the orthogonality relations for the multi-indices $\bar\nu + e_K,\bar\mu$ have a unique solution with $A_{\bar\nu,\bar\mu,K}$ monic and $\deg A_{\bar\nu,\bar\mu,K} = \nu_K - 1$, with $\deg A_{\bar\nu,\bar\mu,J} < \nu_J - 1$ if $J\neq K$. We will call this the type II normalization with respect to the $K$-th component and write the normalized solution as $\{A^{(II,K)}_{\bar\nu,\bar\mu,J}\}_{J=1,\dots,p}$.
2. For each $K = 1,\dots,q$ the orthogonality relations for the multi-indices $\bar\nu,\bar\mu - e_K$ have a unique solution with the following normalization: $\deg A_{\bar\nu,\bar\mu,J} = \nu_J - 1$ and $\int_{\mathbb{R}} Q_{\bar\nu,\bar\mu}(x)\,w_{2K}(x)\,x^{\mu_K}\,\mathrm{d}x = 1$. We will call this the normalization of type I with respect to the $K$-th component and write the normalized solution as $\{A^{(I,K)}_{\bar\nu,\bar\mu,J}\}_{J=1,\dots,p}$.
In order to connect (7) with multiple orthogonal polynomials we consider the Euclidean division $i = q_a n_a + r_a$, with $q_a\geq 0$ and $0\leq r_a < n_a$, and write
$$ p_{i,ab}(z) = \sum_{j=1}^{n_b} z^{j-1}\,\Pi_{ij,ab}(z^{n_b}), \tag{15} $$
where $\Pi_{ij,ab}(z^{n_b})$ are polynomials in $z^{n_b}$ such that
$$ \deg\Pi_{ij,ab}\leq\begin{cases} q_b, & j\leq r_b, \text{ or } j = r_b+1 \text{ and } b = a,\\ q_b - 1, & j > r_b+1, \text{ or } j = r_b+1 \text{ and } b\neq a.\end{cases} $$
Notice that the monic character of $p_i$ gives the normalization of $\Pi_{i\,r_a+1,aa}$, which happens to be a monic polynomial with $\deg\Pi_{i\,r_a+1,aa} = q_a$.
The inversion formula for (15) can be deduced as follows. If we denote by $\epsilon_b := \exp(2\pi\mathrm{i}/n_b)$ a primitive $n_b$-th root of unity, and evaluate at $\epsilon_b^k z$, $k = 0,\dots,n_b-1$, we get the following system of equations
$$ p_{i,ab}(\epsilon_b^k z) = \sum_{j=1}^{n_b}(\epsilon_b^k z)^{j-1}\,\Pi_{ij,ab}(z^{n_b}), \qquad k = 0,\dots,n_b-1, $$
which we solve in order to obtain the polynomials $\Pi_{ik,ab}$ in terms of a discrete Fourier transform of the polynomial $p_i$ through the formula
$$ \Pi_{i\,j+1,ab}(z^{n_b}) = \frac{1}{n_b\, z^j}\sum_{k=0}^{n_b-1}\epsilon_b^{-jk}\, p_{i,ab}(\epsilon_b^k z). $$
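The inversion formula is a plain discrete Fourier transform over the $n_b$-th roots of unity and can be checked directly. In the sketch below (with $n_b = 3$ and arbitrarily chosen component polynomials $\Pi_j$, not from the paper), the decomposition $p(z) = \sum_j z^{j-1}\Pi_j(z^{n_b})$ is recovered term by term:

```python
import numpy as np

n = 3                                 # n_b in the text (illustrative)
eps = np.exp(2j * np.pi / n)          # primitive n-th root of unity

def p(z):
    """A sample polynomial with components Π_1(y) = 1 + 2y, Π_2(y) = -y,
    Π_3(y) = 4, assembled as p(z) = Σ_{j=1}^{n} z^{j-1} Π_j(z^n)."""
    y = z**n
    return (1 + 2 * y) + z * (-y) + z**2 * 4

z = 0.9 + 0.4j
# inversion: Π_{j+1}(z^n) = (1 / (n z^j)) Σ_k ε^{-jk} p(ε^k z)
Pi = [sum(eps**(-j * k) * p(eps**k * z) for k in range(n)) / (n * z**j)
      for j in range(n)]
y = z**n
assert np.allclose(Pi, [1 + 2 * y, -y, 4])
```

The averaging over $\epsilon_b^k z$ kills every power of $z$ not congruent to $j$ modulo $n_b$, which is exactly why the formula isolates $\Pi_{i\,j+1,ab}$.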
Then, (7) can be written as
$$ \begin{aligned}
\sum_{b=1}^{N}\sum_{j=1}^{n_b}\int_{\mathbb{R}}\Pi_{ij,ab}(x^{n_b})\,x^{j-1}\rho_{k,bc}(x)\,(x^{n_b})^l\,\mathrm{d}x &= 0, & k &= 0,\dots,m_c-1, \quad l = 0,\dots,\theta_c-1, \\
\sum_{b=1}^{N}\sum_{j=1}^{n_b}\int_{\mathbb{R}}\Pi_{ij,ab}(x^{n_b})\,x^{j-1}\rho_{k,bc}(x)\,(x^{n_b})^{\theta_c}\,\mathrm{d}x &= 0, & k &= 0,\dots,\sigma_c-1.
\end{aligned} \tag{16} $$
These equations strongly suggest performing a change of variables $y = x^{n_b}$ in each integrand. For that aim, it is relevant that when $n_b$ is an even number then $\operatorname{supp}(\rho_{j,bc})\subset\mathbb{R}_+$; otherwise the change of variables is ill defined. In fact, for these even cases one easily sees that the weights must be supported on the positive axis, or uniqueness of the orthogonal polynomials is not ensured. Moreover, for $n_b$ odd it is also necessary to assume that the weight is supported either only on the positive real numbers or only on the negative real line; this requirement comes from the positivity condition on the weights and the use of (19). From here on we will assume that all weights are supported on the positive real semiline. When the mentioned change of variables is performed in (16) we get
$$ \begin{aligned}
\sum_{b=1}^{N}\sum_{j=1}^{n_b}\int_{\mathbb{R}}\Pi_{ij,ab}(y)\,\tilde\rho_{jk,bc}(y)\,y^l\,\mathrm{d}y &= 0, & k &= 0,\dots,m_c-1, \quad l = 0,\dots,\theta_c-1, \\
\sum_{b=1}^{N}\sum_{j=1}^{n_b}\int_{\mathbb{R}}\Pi_{ij,ab}(y)\,\tilde\rho_{jk,bc}(y)\,y^{\theta_c}\,\mathrm{d}y &= 0, & k &= 0,\dots,\sigma_c-1,
\end{aligned} \tag{17} $$
with
$$ \tilde\rho_{jk,bc}(y) = \frac{1}{n_b}\, y^{\frac{j}{n_b}-1}\,\rho_{k,bc}\big(y^{\frac{1}{n_b}}\big). $$
Now, if the matrix weights $\rho_k$ are rank-one matrices of the following particular form
$$ \rho_{k,bc}(x) = v_{1,b}(x)\, w_{2,kc}(x^{n_b}), $$
we get
$$ \begin{aligned}
\sum_{b=1}^{N}\sum_{j=1}^{n_b}\int_{\mathbb{R}}\Pi_{ij,ab}(y)\,w_{1,jb}(y)\,w_{2,kc}(y)\,y^l\,\mathrm{d}y &= 0, & k &= 0,\dots,m_c-1, \quad l = 0,\dots,\theta_c-1, \\
\sum_{b=1}^{N}\sum_{j=1}^{n_b}\int_{\mathbb{R}}\Pi_{ij,ab}(y)\,w_{1,jb}(y)\,w_{2,kc}(y)\,y^{\theta_c}\,\mathrm{d}y &= 0, & k &= 0,\dots,\sigma_c-1,
\end{aligned} \tag{18} $$
where
$$ w_{1,jb}(y) = \frac{1}{n_b}\, y^{\frac{j}{n_b}-1}\, v_{1,b}\big(y^{\frac{1}{n_b}}\big). \tag{19} $$
We are now ready to describe the relation between multiple orthogonal polynomials of type II and generalized matrix polynomials. First, given $(a,j)$ with $a = 1,\dots,N$ and $j\in\mathbb{N}$, we make the definitions $N(a,j) := n_1+\cdots+n_{a-1}+j$ and $M(a,j) := m_1+\cdots+m_{a-1}+j+1$.
Proposition 7. Relations (18) are particular cases of (14) with:
1. $J = N(b,j)$ for $b = 1,\dots,N$, $j = 1,\dots,n_b$; and $K = M(c,k)$ for $c = 1,\dots,N$, $k = 0,\dots,m_c-1$. We therefore have the identifications $J = n_1+\cdots+n_{b-1}+j$ and $K = m_1+\cdots+m_{c-1}+k+1$.
2. $p = |\bar n| = n_1+\cdots+n_N$ and $q = |\bar m| = m_1+\cdots+m_N$.
3.
$$ \nu_{N(b,j)} = \begin{cases} q_b+1, & j\leq r_b, \text{ or } j = r_b+1 \text{ and } b = a,\\ q_b, & j > r_b+1, \text{ or } j = r_b+1 \text{ and } b\neq a,\end{cases} \qquad \mu_{M(c,k)} = \begin{cases}\theta_c+1, & 0\leq k\leq\sigma_c-1,\\ \theta_c, & \sigma_c\leq k\leq m_c-1.\end{cases} $$
4. $|\bar\nu| = \sum_J\nu_J = Ni+1$ and $|\bar\mu| = \sum_K\mu_K = Ni$.
5. $A_{\bar\nu,\bar\mu,J} = \Pi_{ij,ab}$ and $p_{i,ab}(z) = \sum_{j=1}^{n_b} z^{j-1}A_{\bar\nu,\bar\mu,N(b,j)}(z^{n_b})$.
6. $\deg A_{\bar\nu,\bar\mu,J} = \nu_J - 1$ if $J = N(a,r_a+1)$, so that $A_{\bar\nu,\bar\mu,J} = A^{(II,N(a,r_a+1))}_{\bar\nu,\bar\mu,J}$ and we have a type II normalization with respect to $N(a,r_a+1)$. Thus, $p_{i,ab}(z) = \sum_{j=1}^{n_b} z^{j-1}A^{(II,N(a,r_a+1))}_{\bar\nu,\bar\mu,N(b,j)}(z^{n_b})$.
The reader should notice that the evolution of the weights given through (8) induces the following evolution of $w_{1,jb}$ and $w_{2,kc}$:
$$ w_{1,jb}(t) = \exp\Big(\sum_{k\geq 1} y^{k/n_b}\, t_{kb}\Big)\, w_{1,jb}, \qquad w_{2,kc}(t) = \sum_{l\geq 0} w_{2,k+l\,c}\;\bar s^{(c)}_l. $$
Using (11) we may write
$$ w_{1,jb}(t) = \exp\Big(\sum_{k\geq 1} y^{k/n_b}\, t_{kb}\Big)\, w_{1,jb}, \qquad w_{2,kc}(t) = \exp\big(-\bar t_c^{(0)}(y^{1/m_c})\big)\sum_{l=0}^{m_c-1}\hat w^{(l)}_{2,kc}\exp\big(-\bar t_c^{[l]}(y^{1/m_c})\big), $$
with
$$ \hat w^{(l)}_{2,kc} := \frac{1}{m_c}\sum_{i=0}^{m_c-1}\varepsilon_c^{-il}\, y^{-i/m_c}\, w_{2\,k+i,c}. $$
These evolved weights fulfill the positivity condition (recall that their support is included in the positive real line) when $t_{jb},\bar t_{j m_c\,c}\in\mathbb{R}$ and $\bar t_{jc} = 0$ for $j\neq 0 \pmod{m_c}$.
3.4 Riemann–Hilbert problems
As we have just shown, the generalized matrix polynomials are connected with a family of multiple orthogonal polynomials for a particular rank-one moment matrix. In [1] the Riemann–Hilbert problem for multiple orthogonal polynomials of mixed type was presented, see also [13]. We will discuss its relation with the generalized orthogonal polynomials $p_i(z)$.
Let us recall that the Cauchy transform is defined by
$$ \hat Q_{\bar\nu',\bar\mu',K}(z) := -\frac{1}{2\pi\mathrm{i}}\int_{\mathbb{R}}\frac{Q_{\bar\nu',\bar\mu'}(x)}{z-x}\, w_{2K}(x)\,\mathrm{d}x. $$
Now we can make the following definition for the $(p+q)\times(p+q)$ complex-valued matrix $Y(z)$:
$$ \begin{aligned}
Y_{K,J} &:= A^{(II,K)}_{\bar\nu'+e_K,\bar\mu',J}, & J &= 1,\dots,p, & K &= 1,\dots,p, \\
Y_{K,J+p} &:= \hat Q^{(II,K)}_{\bar\nu'+e_K,\bar\mu',J}, & J &= 1,\dots,q, & K &= 1,\dots,p, \\
Y_{K+p,J} &:= -2\pi\mathrm{i}\, A^{(I,K)}_{\bar\nu',\bar\mu'-e_K,J}, & J &= 1,\dots,p, & K &= 1,\dots,q, \\
Y_{K+p,J+p} &:= -2\pi\mathrm{i}\,\hat Q^{(I,K)}_{\bar\nu',\bar\mu'-e_K,J}, & J &= 1,\dots,q, & K &= 1,\dots,q.
\end{aligned} $$
We will also use the following real-valued $(p+q)\times(p+q)$ matrix $D(x)$, defined by blocks:
$$ D(x) := \begin{pmatrix}\mathbb{I}_p & W(x)\\ 0_{q\times p} & \mathbb{I}_q\end{pmatrix}, \qquad W_{JK}(x) = w_{1J}(x)\,w_{2K}(x). $$
We adapt to the present situation a result of [1], taking into account the support of the weights.
Theorem 1. Let $\bar\nu',\bar\mu'$ be two multi-indices such that $|\bar\nu'| = |\bar\mu'|$, and suppose that the normality conditions hold. Let also $\{w_{1J}\}_{J=1,\dots,p}$ and $\{w_{2K}\}_{K=1,\dots,q}$ be two sets of weight functions such that for every $J,K$ the products $w_{1J}(x)w_{2K}(x)$ are differentiable a.e. in $\mathbb{R}_+$ and $x^j w_{1J}, x^j w_{2K}\in H_1(\mathbb{R}_+)$, $j = 0,\dots,\nu'_K - 1$. At $x = 0$ we require the weight functions to be bounded. Then the matrix $Y(z)$ is the only solution of the following Riemann–Hilbert problem:
1. $Y(z)$ is analytic in $\mathbb{C}\setminus\mathbb{R}_+$.
2. $Y(x)_+ = Y(x)_-\, D(x)$ for all $x > 0$.
3. $Y(z)\operatorname{diag}(z^{-\nu'_1},\dots,z^{-\nu'_p}, z^{\mu'_1},\dots,z^{\mu'_q}) = \mathbb{I}_{p+q} + O(z^{-1})$ as $z\to\infty$.
4. As $z\to 0$, $Y_{ij}(z) = O(1)$ for $j = 1,\dots,p$ and $Y_{ij}(z) = O(\log|z|)$ for $j = p+1,\dots,p+q$.
Proof. First we prove uniqueness of the solution. Since $\det D(x) = 1$, $\det Y(z)$ is analytic across the integration contour, so the only possible singularity of $\det Y(z)$ is at $z = 0$. As $z\det Y(z)\to 0$ when $z\to 0$, the isolated singularity must be removable and $\det Y(z)$ is an entire function. The asymptotic behavior and Liouville's theorem give $\det Y(z) = 1$. Consequently $Y(z)^{-1}$ exists. Given two possible solutions $Y(z)$ and $\tilde Y(z)$ of the problem, the matrix $Y(z)\tilde Y(z)^{-1}$ can be singular only at $z = 0$. As before, the singularity must be removable, so $Y(z)\tilde Y(z)^{-1}$ is entire and bounded, and hence $Y(z)\tilde Y(z)^{-1} = \mathbb{I}_{p+q}$.
Now we prove that the matrix $Y(z)$ is a solution of the Riemann–Hilbert problem. Condition 1 follows from the fact that polynomials are entire functions, while the general theory of Cauchy integrals [14] gives analytic behavior outside the integration contour. Next, observe that condition 2 is a jump condition on the positive real axis and is a consequence of the Plemelj formulae. For condition 3 we notice that $Y(z)_{ii}$ is a monic polynomial of degree $\nu'_i$ for $i = 1,\dots,p$. We also see that the leading term of $Y(z)_{ii}$ is $z^{-\mu'_{i-p}}$ for $i = p+1,\dots,p+q$, due to the orthogonality relations and the type I normalization. Consequently, the diagonal elements of $Y(z)\operatorname{diag}(z^{-\nu'_1},\dots,z^{-\nu'_p}, z^{\mu'_1},\dots,z^{\mu'_q})$ are equal to $1$ and the off-diagonal ones vanish like $\frac{1}{z}$ asymptotically. Finally, condition 4 is a consequence of the behavior of the polynomials and of the Cauchy transforms: the boundedness of the weights at $z = 0$ makes the Cauchy integrals have at most log-type singularities at $z = 0$.
The theorem applies to our situation, giving
Proposition 8. If $\bar\nu' = \bar\nu - e_{N(a,r_a+1)}$ and $\bar\mu' = \bar\mu$, with $\bar\nu$ and $\bar\mu$ as in Proposition 7, we have $\Pi_{ij,ab} = Y_{N(a,r_a+1),N(b,j)}$ and
$$ p_{i,ab}(z) = \sum_{j=1}^{n_b} z^{j-1}\, Y_{N(a,r_a+1),N(b,j)}(z^{n_b}), \qquad Y_{N(a,r_a+1),N(b,j)}(z^{n_b}) = \frac{1}{n_b\, z^{j-1}}\sum_{l=0}^{n_b-1}\epsilon_b^{-l(j-1)}\, p_{i,ab}(\epsilon_b^l z). $$
We observe that in [5] and [6] these generalized polynomials were considered for the one-component case $N = 1$ with $n = m$; they therefore fit as a particular example of our polynomials and consequently are related to multiple orthogonal polynomials of mixed type. Apparently the authors noticed this fact later, as in [3] they claim that they considered multiple orthogonal polynomials in [5]. However, we must say that the Riemann–Hilbert problem derived there is different from the Daems–Kuijlaars problem considered here for multiple orthogonal polynomials. In fact, the matrix $Y$ considered in [5] is not analytic in the upper half plane and hence fails to satisfy the Riemann–Hilbert problem posed there.
Acknowledgements
MM thanks the Spanish Ministerio de Ciencia e Innovación for financial support through research project FIS2008-00200, and UF thanks the Spanish Ministerio de Ciencia e Innovación for financial support through research projects MTM2006-13000-C03-02 and MTM2007-62945, as well as the Comunidad de Madrid/Universidad Carlos III de Madrid project CCG07-UC3M/ESP-3339. MM also acknowledges several clarifying discussions with Dr. Mattia Cafasso.
14
Page 15
References
[1] E. Daems and A. B. J. Kuijlaars, J. Approx. Theory 146 (2007) 91.
[2] A. Durán and F. A. Grünbaum, J. Comput. Appl. Math. 178 (2005) 169.
[3] M. Adler, P. van Moerbeke and P. Vanhaecke, Commun. Math. Phys. 286 (2008) 1.
[4] L. Martínez Alonso and E. Medina, J. Phys. A: Math. Theor. 42 (2009) 205204.
[5] M. Adler and P. van Moerbeke, Commun. Math. Phys. 207 (1999) 589.
[6] M. Adler and P. van Moerbeke, Int. Math. Res. Not. 18 (2001) 935.
[7] M. Cafasso, J. Phys. A: Math. Theor. 42 (2009) 365211.
[8] K. Ueno and K. Takasaki, Adv. Stud. Pure Math. 4 (1984) 1.
[9] M. Mulase, Adv. Math. 54 (1984) 57.
[10] M. Mañas, L. Martínez Alonso and C. Álvarez-Fernández, Inverse Problems 25 (2009) 065007.
[11] M. Mañas and L. Martínez Alonso, Inverse Problems, to appear (2009).
[12] R. Felipe and F. Ongay, Lin. Alg. Appl. 338 (2001) 1.
[13] W. Van Assche, K. T.-R. McLaughlin, A. B. J. Kuijlaars and M. Vanlessen, Adv. Math. 188 (2004) 337.
[14] F. D. Gakhov, Boundary Value Problems, Pergamon Press, New York (1966); reprinted by Dover (1990).