Está en la página 1de 57

General Jacobi corners process and the Gaussian Free Field.

Alexei Borodin

Vadim Gorin

May 11, 2013


Abstract
We prove that the twodimensional Gaussian Free Field describes the asymptotics
of global uctuations of a multilevel extension of the general Jacobi random matrix
ensembles. Our approach is based on the connection of the Jacobi ensembles to a de-
generation of the Macdonald processes that parallels the degeneration of the Macdonald
polynomials to to the HeckmanOpdam hypergeometric functions (of type A). We also
discuss the limit.
Contents
1 Introduction 2
1.1 Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 The model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 The main result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 The method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.5 Matrix models for multilevel ensembles . . . . . . . . . . . . . . . . . . . . . . 7
1.6 Further results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.7 Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2 Setup 8
2.1 Macdonald processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2 Elementary asymptotic relations . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3 HeckmanOpdam processes and Jacobi distributions . . . . . . . . . . . . . . 12
3 Integral operators 19
3.1 Step 1: Integral form of operators T
k
N
. . . . . . . . . . . . . . . . . . . . . . . 21
3.2 Step 2: Operators T
k
N
as sums over labeled graphs . . . . . . . . . . . . . . . 23
3.3 Step 3: Cancelations in terms corresponding to a given labeled graph . . . . . 26
4 Central Limit Theorem 31
4.1 Formulation of GFF-type asymptotics . . . . . . . . . . . . . . . . . . . . . . 31
4.2 A warm up: Moments of p
1
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.3 Gaussianity lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.4 Proof of Theorem 4.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.5 Preliminaries on the twodimensional Gaussian Free Field . . . . . . . . . . . 38
4.6 Identication of the limit object . . . . . . . . . . . . . . . . . . . . . . . . . . 40

Department of Mathematics, Massachusetts Institute of Technology, USA, and Institute for Information
Transmission Problems of Russian Academy of Sciences, Russia

e-mail: borodin@math.mit.edu

e-mail: vadicgor@gmail.com
1
5 Appendix: CLT as 45
6 Appendix: Heckman-Opdam hypergeometric functions. 48
1 Introduction
1.1 Preface
The goal of this article is twofold. First, we want to show how the Macdonald measures and
Macdonald processes [BC], [BCGS], [BG] (that generalize the Schur measures and Schur pro-
cesses of Okounkov and Reshetikhin [Ok], [OR1]) are related to classical general ensembles
of random matrices. The connection is obtained through a limit transition which is parallel
to the degeneration of the Macdonald polynomials to the HeckmanOpdam hypergeometric
functions. This connection brings certain new tools to the domain of random matrices, which
we further exploit.
Second, we extend the known Central Limit Theorems for the global uctuations of (clas-
sical) general- random matrix ensembles to corners processes, one example of which is the
joint distribution of spectra of the GUErandom matrix and its principal submatrices. We
show that the global uctuations of certain general corners processes can be described via
the twodimensional Gaussian Free Field. This suggests a new viewpoint on several known
Central Limit Theorems of random matrix theory: there is a unique twodimensional uni-
versal limit object (the Gaussian Free Field) and in dierent models one sees its various
onedimensional slices.
Let us proceed to a more detailed description of our model and results.
1.2 The model
The general ensemble of rank N is the distribution on the set of Ntuples of reals (particles
or eigenvalues) x
1
< x
2
< < x
N
with density (with respect to the Lebesgue measure)
proportional to

1i<jN
(x
j
x
i
)

i=1
w(x
i
), (1.1)
where w(x) is the weight of the ensemble. Perhaps, the most well-known case is = 2,
w(x) = exp(x
2
/2) (often called Gaussian or Hermite ensemble), which corresponds to the
eigenvalue distribution of the complex Hermitian matrix from the Gaussian Unitary Ensemble
(GUE). Other wellstudied weights are w(x) = x
p
e
x
on R
>0
, p > 1, known as Laguerre
(or Wishart) ensemble, and w(x) = x
p
(1 x)
q
on (0, 1), p, q > 1, known as Jacobi (or
MANOVA) ensemble. These three weight functions correspond to classical random matrix
ensembles because of their relation to the classical orthogonal polynomials. In the present
article we focus on the Jacobi ensemble, but it is very plausible that all our results extend to
other ensembles by suitable limit transitions.
For = 1, 2, 4, and the three classical choices of w(x) above, the distribution (1.1) ap-
pears as the distribution of eigenvalues of natural classes of random matrices, see Anderson
GuionnetZeitouni [AGZ], Forrester [F], Mehta [Me]. Here the parameter corresponds
to the dimension of the base eld over R, and one speaks about real, complex or quater-
nion matrices, respectively. There are also dierent points of view on the density function
(1.1), relating it to the Coulomb loggas, to the squared ground state wave function of the
CalogeroSutherland quantum manybody system, or to random tridiagonal and unitary
Hessenberg matrices. These viewpoints naturally lead to considering > 0 as a continuous
real parameter, see e.g. Forrester [Ox, Chapter 20 Beta Ensembles] and references therein.
2
The above random matrix ensembles at = 1, 2, 4 come with an additional structure,
which is a natural coupling between the distributions (1.1) with varying number of particles
N. In the case of the Gaussian Unitary Ensemble, take M = [M
ij
]
N
i,j=1
to be a random
Hermitian matrix with density of the distribution proportional to exp
_
Trace(M
2
/2)
_
. Let
x
k
1
x
k
k
, k = 1, . . . , N, denote the set of (real) eigenvalues of the topleft k k
corner [M
ij
]
k
i,j=1
. The joint probability density of (x
k
1
, . . . , x
k
k
) is given by (1.1) with = 2,
w(x) = exp(x
2
/2). The eigenvalues satisfy the interlacing conditions x
j
i
x
j1
i
x
j
i+1
for
all meaningful values of i and j. (Although, the inequalities are not strict in general, they
are strict almost surely.) The joint distribution of the N(N + 1)/2dimensional vector x
j
i
,
i = 1, . . . , j, j = 1, . . . , N is known as the GUEcorners process (the term GUEminors
process is also used), explicit formulas for this distribution can be found in GelfandNaimark
[GN], Baryshnikov [Bar], Neretin [N], JohanssonNordenstam [JN].
Similar constructions are available for the Hermite (Gaussian) ensemble with = 1, 4.
One can notice that in the resulting formulas for the distribution of the corners process x
j
i
,
i = 1, . . . , j, j = 1, . . . , N, the parameter enters in a simple way (see e.g. [N, Proposition
1.1]), which readily leads to the generalization of the denition of the corners process to the
case of general > 0, see Neretin [N] and OkounkovOlshanski [OO, Section 4].
From a dierent direction, if one considers the restriction of the corners process (both for
classical = 1, 2, 4 and for a general ) to two neighboring levels, i.e. the joint distribution
of vectors (x
k
1
, . . . , x
k
k
) and (x
k1
1
, . . . , x
k1
k1
), then one nds formulas that are well-known in
the theory of Selberg integrals. Namely, this distribution appears in the DixonAnderson
integration formula, see Dixon [Di], Anderson [A], Forrester [F, chapter 4]. More recently
the same twolevel distributions were studied by Forrester and Rains [FR] in relation with
nding random recurrences for classical matrix ensembles (1.1) with varying N, and with
percolation models.
All of the above constructions presented for the Hermite ensemble admit a generalization
to the Jacobi weight. This leads to a multilevel general Jacobi ensemble, or Jacobi
corners process, which is the main object of the present paper and whose denition we now
present. In Section 1.5 we will further comment on the relation of this multilevel ensemble
to matrix models.
Fix two integer parameters N > 0, M > 0 and a real parameter > 0. Also set
= /2 > 0. Let 1
M
(N) denote the set of families x
1
, x
2
, . . . , x
N
, such that for each
1 n N, x
n
is a sequence of real numbers of length min(n, M), satisfying:
0 < x
n
1
< x
n
2
< < x
n
min(n,M)
< 1.
We also require that for each 1 n N 1 the sequences x
n
and x
n+1
interlace x
n
x
n+1
,
which means that
x
n+1
1
< x
n
1
< x
n+1
2
< x
n
2
< . . . .
Note that for n M the length of sequence x
n
does not change and equals M, this is a
specic feature of the Laguerre and Jacobi ensembles which is not present in the Hermite
ensemble.
Denition 1.1. The Jacobi corners process of rank N with parameters M, , as above is
a probability distribution on the set 1
M
(N) with density with respect to the Lebesgue measure
3
proportional to

1i<jmin(N,M)
(x
N
j
x
N
i
)
min(N,M)

i=1
(x
N
i
)
(+N1)1
min(N,M)

i=1
_
1 x
min(N,M)
i
_
1+(NM)
+

N1

k=1
_
_
min(k,M)

i=1
(x
k
i
)
2

1i<jmin(k,M)
(x
k
j
x
k
i
)
22
min(k,M)

a=1
min(k+1,M)

b=1
[x
k
a
x
k+1
b
[
1
_
_
,
(1.2)
where (N M)
+
= max(N M, 0).
Similarly to the GUEcorners, the projection of -Jacobi corners process to a single level
k is given by the Jacobi ensemble whose density in our notations is given by

1i<jmin(k,M)
(x
k
j
x
k
i
)
2
k

i=1
(x
k
i
)
1
(1 x
k
i
)
([Mk[+1)1
. (1.3)
Further, the denition of Jacobi corners process is consistent for various N 1, meaning
that the restriction of the corners process of rank N to the rst n < N levels is the corners
process of rank n.
1.3 The main result
Our main result concerns the global (Gaussian) uctuations of the Jacobi corners process.
Let us start, however, with a discussion of similar results for singlelevel Jacobi ensembles
that have been obtained before.
1
Fix two reals u
0
, u
1
0 and let x
1
< < x
N
be sampled from the -Jacobi ensemble of
rank N with parameters u
0
N, u
1
N, i.e. (1.1) with w(x) = x
u
0
N
(1 x)
u
1
N
. Dene the
height function 1(x, N), x [0, 1], N = 1, 2, . . . as the number of eigenvalues x
i
which are
less than x. The computation of the leading term of the asymptotics of 1(x, N) as N
can be viewed as the Law of Large Numbers.
Proposition 1.2 (DumitriuPaquette [DP], Jiang [Ji], Killip [Ki]). For any values of > 0,
u
0
, u
1
0, the normalized height function 1(x, N)/N tends (in supnorm, in probability) to
an explicit deterministic limit function

1(x; u
0
, u
1
), which does not depend on .
In the case = 1 Proposition 1.2 was rst established by Wachter [Wa], nowadays other
proofs for classical = 1, 2, 4 exist, see e.g. Collins [C]. In the case of Hermite (Gaussian)
ensemble and = 1, 2, 4 an analogue of Proposition 1.2 dates back to the work of Wigner
[Wi] and is known as the Wigner semicircle law.
One feature of the limit prole

1(x; u
0
, u
1
) is that its derivative (in x) is non-zero only
inside a certain interval (l(u
0
, u
1
), r(u
0
, u
1
)) (0, 1). In other words, outside this interval
asymptotically as N we see no eigenvalues, and

1(x; u
0
, u
1
) is constant. Somewhat
abusing the notation we call [l(u
0
, u
1
), r(u
0
, u
1
)] the support of

1(x; u
0
, u
1
).
Studying the next term in the asymptotic expansion of 1(x, N) leads to various Central
Limit Theorems. One could either study 1(x, N) E1(x, N) as N for a xed x or a
smoothed version
_
1
0
f(x)
_
1(x, N) E1(x, N)
_
dx,
1
For a discussion of the remarkable recent progress in understanding local uctuations of general ensem-
bles see e.g. ErdosYau [EY], Vaik oVirag [VV], RamirezRiderVirag [RRV] and references therein.
4
for a (smooth) function f(x). It turns out that these two variations leads to dierent scalings
and dierent limits. We will concentrate on the second (smoothed) version of the central
limit theorems, see Killip [Ki] for some results in the nonsmoothed case.
Central Limit Theorems for random matrix ensembles at = 2 go back to Szegos the-
orems on the asymptotics of Toeplitz determinants, see Szego [Sz1], Forrester [F, Section
14.4.2], Krasovsky [Kr]. Nowadays several approaches exist, see [AGZ], [F], [Ox] and ref-
erences therein. The rst result for general was obtained by Johansson [Jo] who studied
the distribution (1.1) with analytic potential. Formally, the Jacobi case is out of the scope
of the results of [Jo] (although the approach is likely to apply) and here the Central Limit
Theorem was obtained very recently by Dumitriu and Paquette [DP] using the tridiagonal
matrices approach to general ensembles, cf. DumitriuEdelman [DE1], KillipNenciu [KN],
EdelmanSutton [ES].
Proposition 1.3 ([DP]). Take k 1 and any k continuously dierentiable functions f
1
, . . . ,
f
k
on [0, 1]. In the settings of Proposition 1.2 with u
0
+u
1
> 0, the vector
_
1
0
f
i
(x)
_
1(x, N) E1(x, N)
_
dx, i = 1, . . . , k,
converges as N to a Gaussian random vector.
Dumitriu and Paquette [DP] also prove that the covariance matrix diagonalizes when f()
ranges over suitably scaled Chebyshev polynomials of the rst kind.
Let us now turn to the multilevel ensembles, i.e. to the -Jacobi corners processes, and
state our result. Fix parameters

M > 0, > 0. Suppose that as our large parameter L ,
parameters M, of Denition 1.1 grow linearly in L:
M L

M, L .
Let 1(x, k), x [0, 1], k 1, denote the height function of the Jacobi corners process, i.e.
1(x, k) counts the number of eigenvalues from (x
]k|
1
, . . . , x
]k|
]k|
) that are less than x. Observe
that Proposition 1.2 readily implies the Law of Large Numbers for 1(x, L

N),

N > 0, as
L . Let [l(

N), r(

N)] denote the support of lim
L
1(x, L

N) (both endpoints depend
on ,

M, but we omit this dependence from the notations). Further, let D denote the region
inside [0, 1] R
>0
on (x,

N) plane dened by the inequalities l(

N) x r(

N).
Now we are ready to state our main theorem, giving the asymptotics of the uctuations
of Jacobi corners process in terms of the twodimensional Gaussian Free Field (GFF, for
short). We briey recall the denition and basic properties of the GFF in Section 4.5.
Theorem 1.4. Suppose that as our large parameter L , parameters M, grow linearly
in L:
M L

M, L ;

M > 0, > 0.
Then the centered random (with respect to the measure of Denition 1.1) height function

_
1(x, L

N) E1(x, L

N)
_
converges to the pullback of the Gaussian Free Field with Dirichlet boundary conditions on
the upper halfplane H with respect to a map : D H (see Denition 4.11 for the explicit
formulas) in the following sense: For any set of polynomials R
1
, . . . , R
k
C[x] and positive
numbers

N
1
, . . . ,

N
k
, the joint distribution of
_
1
0
R
i
(x)
_
1(x, L

N
i
) E1(x, L

N
i
)
_
dx, i = 1, . . . , k,
5
converges to the joint distribution of the similar averages
_
1
0
T((x,

N
i
))R
i
(x)dx, i = 1, . . . , k,
of the pullback of GFF.
There are several reasons why one might expect the appearance of the GFF in the study
of the general random matrix corners processes.
First, the GFF is believed to be a universal scaling limit for various models of random
surfaces in R
3
. Now the appearance of the GFF is rigorously proved for several models of
random stepped surfaces, see Kenyon [Ken], BorodinFerrari [BF], Petrov [P], Duits [Dui],
Kuan [Ku], ChhitaJohanssonYoung [CJY], BorodinBufetov [BB]. On the other hand, it is
known that random matrix ensembles for = 2 can be obtained as a certain limit of stepped
surfaces, see OkounkovReshetikhin [OR2], JohanssonNordenstam [JN], FlemingForrester
Nordenstam [FFR], Gorin [G], GorinPanova [GP], hence one should expect the presence of
GFF in random matrices.
Second, for random normal matrices, whose eigenvalues are no longer real, but the in-
teraction potential is still logarithmic, the convergence of the uctuations to the GFF was
established by AmeurHedenmalmMakarov [AHM1], [AHM2], see also RiderVir ag [RV].
Further, Spohn [Sp] found the GFF in the asymptotics of the general circular Dyson
Brownian Motion.
Finally, in [B1], [B2] one of the authors proved an analogue of Theorem 1.4 for = 1, 2
Wigner random matrices. This result can also be accessed through a suitable degeneration
of our results, see Remark 2 after Proposition 4.15 for more details.
Note that other classical ensembles can be obtained from -Jacobi through a suitable
limit transition and, thus, one should expect that similar results hold for them as well.
1.4 The method
Let us outline our approach to the proof of Theorem 1.4.
Recall that Macdonald polynomials P

(x
1
, . . . , x
N
; q, t) and Q

(x
1
, . . . , x
N
; q, t) are certain
symmetric polynomials in variables x
1
, . . . , x
N
depending on two parameters q, t > 0 and
parameterized by Young diagrams , see e.g. Macdonald [M, Chapter VI]. Given two sets of
parameters a
1
, . . . , a
N
> 0, b
1
, . . . , b
M
> 0 satisfying a
i
b
j
< 1, 1 i N, 1 j M, the
Macdonald measure on the set of all Young diagrams Y is dened as a probability measure
assigning to Young diagram the weight proportional to
P

(a
1
, . . . , a
N
; q, t)Q

(b
1
, . . . , b
M
; q, t). (1.4)
It turns out that (for a specic choice of a
1
, . . . , a
N
, b
1
, . . . , b
M
) when q, t 1 in such a way
that t = q

, the measure (1.4) weakly converges to the (single level) Jacobi distribution
with = 2, see Theorem 2.8 for the exact statement. This fact was rst noticed by Forrester
and Rains in [FR].
Further, the Macdonald measures admit multilevel generalizations called ascending Mac-
donald processes. The same limit transition as above yields the Jacobi corners process of
Denition 1.1.
In BorodinCorwin [BC] an approach for studying Macdonald measures through the Mac-
donald dierence operators, which are diagonalized by the Macdonald polynomials, was sug-
gested. This approach survives in the above limit transition and allows us to compute the
expectations of certain observables (essentially, moments) of Jacobi ensembles as results
of the application of explicit dierence operators to explicit functions, see Theorem 2.10 and
6
Theorem 2.11 for the details. Moreover, as we will explain, a modication of this approach
allows us to study the Macdonald processes and, thus, the Jacobi corners process, see also
BorodinCorwinGorinShakirov [BCGS] for further generalizations.
The next step is to express the action of the obtained dierence operators through contour
integrals, which are generally convenient for taking asymptotics. Here the approach of [BC]
fails (the needed contours cease to exist for our choice of parameters a
i
, b
j
of the Macdonal
measures), and we have to proceed in a dierent way. This is explained in Section 3.
The Central Limit Theorem itself is proved in Section 4 using a combinatorial lemma,
which is one of the important new ingredients of the present paper. Informally, this lemma
shows that when the joint moments of a family of random variables can be written via nested
contour integrals typical for the Macdonald processes, then the asymptotics of these moments
is given by Isserliss theorem (also known as Wicks formula), which proves the asymptotic
Gaussianity. See Lemma 4.2 for more details.
We also note that the convergence of Macdonald processes to Jacobi corners processes
is a manifestation of a more general limit transition that takes Macdonald polynomials to the
so-called Heckman-Opdam hypergeometric functions (see HeckmanOpdam [HO], Opdam
[Op], HeckmanSchlichtkrull [HS] for the general information about these functions), and
general Macdonald processes to certain probability measures that we call Heckman-Opdam
processes
2
. We discuss this limit transition in more detail in the Appendix.
1.5 Matrix models for multilevel ensembles
There are many ways to obtain Jacobi ensembles at = 1, 2, 4, with various special
exponents p, q, through random matrix models, see e.g. Forrester [F, Chapter 3], Duenez
[Due], Collins [C]. In most of them there is a natural way of extending the probability
measure to multilevel settings. We hope that a non-trivial subset of these situations would
yield the Jacobi corners process of Denition 1.1, but we do not know how to prove that.
For example, take two innite matrices X
ij
, Y
ij
, i, j = 1, 2, . . . with i.i.d. Gaussian entries
(either real, complex or quaternion). Fix three integers A, B, C > 0, let X
AC
be the A C
topleft corner of X, and let Y
BC
be B C topleft corner of of Y . Then the distribution
of (distinct from 0 and 1) eigenvalues x
1
x
2
x
N
, N = min(A, B, C), of
/
ABC
= (X
AC
)

X
AC
_
(X
AC
)

X
AC
+ (Y
BC
)

Y
BC
_
1
is given by the Jacobi ensemble ( = 1, 2, 4, respectively) with density (see e.g. [F, Section
3.6])

1i<jN
(x
j
x
i
)

i=1
(x
i
)

2
([AC[+1)1
(1 x
i
)

2
([BC[+1)1
.
Comparing this formula with (1.3) it is reasonable to expect that the joint distribution of
the above eigenvalues for matrices /
AnC
, n = 1, 2, . . . , N, is given by Denition 1.1 with
= /2, M = C, = A + 1. However, we were unable to locate this statement in the
literature, thus, we leave it as a conjecture here. A similar statement in the limiting case
of the Hermite ensemble is proven e.g. by Neretin [N], while for the Laguerre ensemble with
= 2 this is discussed by BorodinPeche [BP], DiekerWarren [DW], ForresterNagao [FN].
Another matrix model for Denition 1.1 at = /2 = 1 was suggested by Adlervan
MoerbekeWang [AMW]. In the above settings (with complex matrices) set

X
ij
=
_
X
ij
, if 1 i 2j +A,
0, otherwise.
2
In spite of a similar name, they are dierent from HeckmanOpdam Markov processes of Schapira [Sch].
7
Let

X
AC
be (2C +A) C topleft corner of

X and Y
BC
be as above, and set

/
ABC
= (

X
AC
)


X
AC
_
(

X
AC
)

X
AC
+ (Y
BC
)

Y
BC
_
1
.
Suppose N M; then the joint distribution of eigenvalues of

/
ABn
, n = 1, . . . , N, is given
by Denition 1.1 with = /2, M = B, = A + 1, as seen from [AMW, Theorem 1] (one
should take
n
= A+n 1 in this theorem).
Let us also remark on the connection with tridiagonal models for classical general
ensembles of DumitriuEdelman [DE1], KillipNenciu [KN], EdelmanSutton [ES]. One may
try to produce an alternative denition of multilevel -Jacobi (Laguere, Hermite) ensembles
using tridiagonal models for the single level ensembles and taking the joint distribution of
eigenvalues of suitable submatrices. However, note that these models produce the ensemble
of rank N out of linear in N number of independent random variables, while the dimension
of the set of interlacing congurations grows quadratically in N. Therefore, this construction
would produce a distribution concentrated on a lower dimensional subset and, thus, singular
with respect to the Lebesgue measure, which is not what we need. On the other hand, it is
possible that the marginals of the Jacobi corners processes on two neighboring levels can
be obtained as eigenvalues of a random tridiagonal matrix and its submatrix of size by one
less, see ForresterRains [FR] for some results in this direction.
Let us nally mention that Edelman [Ed] discovered an ingenious random matrix algo-
rithm that (conjecturally for ,= 1, 2, 4) yields the Jacobi corners process of Denition
1.1.
1.6 Further results
Let us list a few other results proved below.
First, in addition to Theorem 1.4, we express the limit covariance of the Jacobi corners
process in terms of Chebyshev polynomials (in the spirit of the results of DumitiuPaquette
[DP]), see Proposition 4.15 for the details.
Further, we use the same techniques as in the proof of Theorem 1.4 to analyze the behavior
of the (multilevel) Jacobi ensemble as . It is known that the eigenvalues of the
Jacobi ensemble concentrate near roots of the corresponding Jacobi orthogonal polynomials
as (see e.g. Szego [Sz, Section 6.7], Kerov [Ker]), and in the Appendix we sketch a
proof of the fact that the uctuations (after rescaling by

) are asymptotically Gaussian,
see Theorem 5.1 for the details. Similar results for the singlelevel Hermite and Laguerre
ensembles were previously obtained in DumitriuEdelman [DE2]. We have so far been unable
to produce simple formulas for the limit covariance or to identify the limit Gaussian process
with a known object.
1.7 Acknowledgements
The authors are very grateful to I. Corwin, A. Edelman and G. Olshanski for many invaluable
discussions. A. B. was partially supported by the NSF grant DMS-1056390. V. G. was
partially supported by RFBR-CNRS grant 11-01-93105.
2 Setup
The aim of this section is to put general Jacobi random matrix ensembles in the context of
the Macdonald processes and their degenerations that we will refer to as the HeckmanOpdam
processes.
8
2.1 Macdonald processes
Let GT
+
N
denote the set of all N tuples of non-negative integers
GT
+
N
=
1

2

N
0 [
i
Z.
We say that GT
+
N
and GT
+
N1
interlace and write , if
3

1

1

2

N1

N
.
Sometimes (when it leads to no confusion) we also say that GT
+
N
and GT
+
N
(note
the change in index) interlace and write if

1

1

2

N

N
.
Informally, in this case we complement with a single zero coordinate.
Let
N
denote the algebra of symmetric polynomials in N variables x
1
, . . . , x
N
with
complex coecients.
N
has a distinguished (linear) basis formed by Macdonald polynomials
P

(; q, t), GT
+
N
, see e.g. [M, Chapter VI]. Here q and t are parameters (that may also be
considered formal). We also need the dual Macdonald polynomials Q

(; q, t). By denition
Q

(; q, t) = b

(; q, t),
where b

= b

(q, t) is a certain explicit constant, see [M, Chapter VI, (6.19)].


We also need skew Macdonald polynomials P
/
(, GT
+
N
,
i

i
for all i) and Q
/
,
they can be dened through the identities
P

(x
1
, . . . , x
N
, y
1
, . . . , y
N
; q, t) =

GT
+
N
P
/
(x
1
, . . . , x
N
; q, t)P

(y
1
, . . . , y
N
; q, t), (2.1)
Q

(x
1
, . . . , x
N
, y
1
, . . . , y
N
; q, t) =

GT
+
N
Q
/
(x
1
, . . . , x
N
; q, t)Q

(y
1
, . . . , y
N
; q, t).
Somewhat abusing the notations, in what follows we write P

(x
1
, . . . , x
M
; q, t), with GT
+
N
,
N M, for P

(x
1
, . . . , x
M
; q, t), where

GT
+
M
is obtained from adding M N zero
coordinates; similarly for Q

, P
/
, Q
/
.
Assume that 0 < q < 1, 0 < t < 1, and x the following set of parameters: an integer
M > 0, positive reals a
1
, a
2
, . . . and positive reals b
1
, . . . , b
M
. The following denition is a
slight generalization of [BC, Denition 2.2.7].
Denition 2.1. The innite ascending Macdonald process indexed by M, a
i
, b
j
is a
(random) sequence
1
,
2
, . . . such that
1. For each N 1,
N
GT
+
min(N,M)
and also
N

N+1
.
2. For each N 1 the (marginal) distribution of
N
is given by
Prob
N
= =
1
Z
N
P

(a
1
, . . . , a
N
; q, t)Q

(b
1
, . . . , b
M
; q, t), (2.2)
where
Z
N
=

GT
+
min(N,M)
P

(a
1
, . . . , a
N
; q, t)Q

(b
1
, . . . , b
M
; q, t) =
N

i=1
M

j=1
(ta
i
b
j
; q)

(a
i
b
j
; q)

. (2.3)
3
The notation GT
+
comes from GelfandTsetlin patterns, which are interlacing sequence of elements of
GT
+
i
, i = 1, . . . , N and parameterize the same named basis in the irreducible representations of the unitary
group U(N). In the representation theory the above interlacing condition appears in the branching rule for
the restriction of an irreducible representation to the subgroup.
9
3.
N

N1
is a trajectory of a Markov chain with (backward) transition probabilities
Prob
N1
= [
N
= = P
/
(a
N
; q, t)
P

(a
1
, . . . , a
N1
; q, t)
P

(a
1
, . . . , a
N
; q, t)
. (2.4)
Proposition 2.2. If sequences a
i

i=1
and b
j

M
j=1
of positive parameters are such that
a
i
b
j
< 1 for all i, j, then the innite ascending Macdonald process indexed by M 1, a
i
, b
j

is well dened.
Proof. The nonegativity of (2.2), (2.4) follows from the combinatorial formula for the (skew)
Macdonald polynomials [M, Chapter VI, Section 7]. The identity (2.3) is the Cauchy identity
for Macdonald polynomials [M, Chapter VI, (2.7)]. Note that the absolute convergence of
the series

in (2.3) follows from the fact that it is a rearrangement of the absolutely


convergent power series in a
i
, b
j
for the product form of Z
N
. The consistency of properties
(2.4) and (2.2) follows from the denition of skew Macdonald polynomials, cf. [BG], [BC].
Let f
N
be any symmetric polynomial. For GT
+
N
we dene
f() = f(q

1
t
N1
, q

2
t
N2
, . . . q

N
).
Further, for any subset I 1, . . . , N dene
A
I
(z
1
, . . . , z
N
; t) =

iI

j,I
z
i
tz
j
z
i
z
j
.
Dene the shift operator T
q
i
through
[T
q
i
f](z
1
, . . . , z
N
) = f(z
1
, . . . , z
i1
, qz
i
, z
i+1
, . . . , z
N
).
For any k N dene the kth Macdonald dierence operator Mac
k
N
through
Mac
k
N
=

[I[=k
A
I
(z
1
, . . . , z
N
; t)

iI
T
q
i
.
Theorem 2.3. Fix any integers m 1, N
1
N
2
N
m
1 and 1 k
i
N
i
,
i = 1, . . . , m. Suppose that
1
,
2
, . . . is an innite ascending Macdonald process indexed by
M 1, a
i

i=1
, b
j

M
j=1
as described above. Then
E
_
m

i=1
e
k
i
_

N
i
_
_
=
Mac
k
m
N
m
Mac
k
2
N
2
Mac
k
1
N
1
_
N
1

i=1
H(z
i
)
_

N
1
i=1
H(z
i
)
z
i
=a
i
, (2.5)
where
H(z) =
M

i=1
(tzb
i
; q)

(zb
i
; q)

,
and e
k
is the degree k elementary symmetric polynomial.
For N
1
= N
2
= = N
m
this statement coincides with the observation of [BC, Section
2.2.3]. For general N
j
s a proof can be found in [BCGS], however, it is quite simple and we
reproduce its outline below.
10
Sketch of the proof of Theorem 2.3. Macdonald polynomials are the eigenfunctions of Mac-
donald dierence operators (see [M, Chapter VI, Section 4]):
Mac
k
n
(P

(x
1
, . . . , x
n
; q, t)) = e
k
(q

1
t
n1
, . . . , q

n
)P

(x
1
, . . . , x
n
; q, t). (2.6)
So, rst expand

N
1
i=1
H(z
i
) using the Cauchy identity [M, Chapter VI, (2.7)]
N
1

i=1
H(z
i
) =

GT
+
min(N
1
,M)
P

(x
1
, . . . , x
N
1
; q, t)Q

(b
1
, . . . , b
M
; q, t)
and then apply

r1
i=1
Mac
k
i
N
i
to the sum, where r is the maximal number such that N
1
=
N
2
= = N
r1
. Using (2.6) we get
r1

i=1
Mac
k
i
N
i
_
N
1

i=1
H(z
i
)
_
=

N
1GT
+
min(N
1
,M)
_
r1

i=1
e
k
i
_
q

N
1
1
t
N
1
1
, . . . , q

N
1
N
1
_
_
P

N
1
(x
1
, . . . , x
N
1
; q, t)Q

N
1
(b
1
, . . . , b
M
; q, t) (2.7)
Now substitute in (2.7) the decomposition (which is a version of the denition (2.1))
P

N
1
(x
1
, . . . , x
N
1
; q, t) =

N
r
GT
+
min(N
r
,M)
P

N
k
(x
1
, . . . , x
N
r
; q, t)P

N
1/
N
r
(x
N
r
+1
, . . . , x
N
1
; q, t)
and apply (again using (2.6))

h1
i=r
Mac
k
i
N
i
to the resulting sum, where h is the maximal
number such that N
r
= N
r+1
= = N
h1
. Iterating this procedure we arrive at the desired
statement.
2.2 Elementary asymptotic relations
In what follows we will use the following technical lemmas.
Lemma 2.4. For any a, b C and complexvalued function u() dened in a neighborhood
of 1 and such that
lim
q1
u(q) = u
with 0 < u < 1, we have
lim
q1
(q
a
u(q); q)

(q
b
u(q); q)

= (1 u)
ba
.
Proof. For q approaching 1 we have (using ln(1 +x) x for small x)
4
(q
a
u(q); q)

(q
b
u(q); q)

= exp
_

m=0
ln
1 q
a+m
u(q)
1 q
b+m
u(q)
_
= exp
_

m=0
ln
_
1 +
q
m
u(q)(q
b
q
a
)
1 q
b+m
u(q)
_
_
exp
_

m=0
q
m
u(q)(q
b
q
a
)
1 q
b+m
u(q)
_
= exp
_

m=0
(q
m
q
m+1
)
u(q)(q
b
q
a
)
(1 q
b+m
u(q))(1 q)
_
(2.8)
4
We use the notation f(x) g(x) as x a if lim
xa
f(x)
g(x)
= 1.
11
Note that as q 1
u(q)(q
b
q
a
)
(1 q
b+m
u(q))(1 q)

u(a b)
1 q
m
u
,
the last sum in (2.8) turns into a Riemannian sum for an integral, and we get (omitting a
standard uniformity of convergence estimate)
exp
__
1
0
u(a b)
1 ux
dx
_
= exp((a b) ln(1 u)) = (1 u)
ba
.
Remark. The convergence in Lemma 2.4 is uniform in u bounded away from 1.
Lemma 2.5. For any x C 0, 1, 2, . . . we have
lim
q1
(q; q)

(q
x
; q)

(1 q)
1x
= (x).
Proof. E.g. [KLS, Section 1.9], [AAR, Section 10.3].
2.3 HeckmanOpdam processes and Jacobi distributions
Throughout this section we x two parameters M Z
+
and > 0.
Let 1
M
denote the set of families r
1
, r
2
, . . . , such that for each N 1, r
N
is a sequence
of real numbers of length min(N, M), satisfying:
0 < r
N
1
< r
N
2
< < r
N
min(N,M)
< 1.
We also require that for each N the sequences r
N
and r
N+1
interlace r
N
r
N+1
, which
means that
r
N+1
1
< r
N
1
< r
N+1
2
< r
N
2
< . . . .
Denition 2.6. The probability distribution P
,M,
on 1
M
is the unique distribution satis-
fying two conditions:
1. For each N 1 the distribution of r
N
is given by the following density (with respect to
the Lebesgue measure)
P
,M,
(r
N
[z, z +dz]) =
const

1i<jmin(N,M)
(z
j
z
i
)
2
min(N,M)

i=1
z
1
i
(1 z
i
)
[MN[+1
dz
i
, (2.9)
where 0 < z
1
< < z
min(N,M)
< 1 and const is an (explicit) normalizing constant.
2. Under P
,M,
, r
N

N1
, is a trajectory of a Markov chain with backward transition
probabilities having the following density:
P
,M,
(r
N1
[z, z +dz] [ r
N
= y) =
(N)
()
N

j=1
y
j
(N1)

1i<j<N
(z
j
z
i
)

1i<jN
(y
j
y
i
)
12
N1

i=1
N

j=1
[y
j
z
i
[
1
N1

i=1
dz
i
z
N
i
,
(2.10)
12
for N M and
P
,M,
(r
N1
[z, z+dz] [ r
N
= y) =
(N)
()
M
(N M)

1i<jM
(z
j
z
i
)(y
j
y
i
)
12

j=1
y
j
(N1)
(1 y
j
)
(MN1)+1
M

i,j=1
[y
j
z
i
[
1
M

i=1
(1 z
i
)
(NM)1
dz
i
z
N
i
, (2.11)
for N > M, where z y in both formulas.
Remark 1. The distribution (2.9) is known as the general Jacobi ensemble.
Remark 2. Straightforward computation shows that the restriction of P
,M,
on the rst
N 1 levels gives the Jacobi corners process of Denition 1.1.
Remark 3. The backward transitional probabilities (2.10) are known in the theory of
Selberg integrals. They appear in the integration formulas due to Dixon [Di] and Anderson
[A]. More recently, twolevel distribution of the above kind was studied by Forrester and
Rains [FR].
Remark 4. Alternatively, one can write down forward transitional probabilities of the
Markov chain r
N

N1
; together with the distribution of r
1
given by (2.9) they uniquely
dene P
,M,
. In particular, for 1 N M we have
P
,M,
(r
N
[y, y +dy] [ r
N1
= z) = const

1i<j<N
(z
j
z
i
)
12

1i<jN
(y
j
y
i
)

N1

i=1
N

j=1
[y
j
z
i
[
1
N1

i=1
z
(N+)+1
i
(1z
i
)
(NM)+1
N

j=1
y
j
(N1+)1
(1y
j
)
(NM+1)1
dy
j
.
Proposition 2.7. The distribution P
,M,
is well-dened.
Proof. We should check three properties here. First, we want to check that the density (2.9)
is integrable and nd the normalizing constant in this formula. This is known as the Selberg
Integral evaluation, see [Se], [A], [F, Chapter 4]:
S
n
(
0
,
1
, ) :=
_
1
0

_
1
0
n

i=1
t

0
1
i
(1 t
i
)

1
1

1i<jn
[t
i
t
j
[
2
dt
1
. . . dt
n
=
n1

j=0
(
0
+j)(
1
+j)(1 + (j + 1))
(
0
+
1
+ (n +j 1))(1 +)
.
We conclude that const in (2.9) is given by
const =
(min(N, M))!
S
min(N,M)
(, [M N[ +, )
.
Next, we want to check that (2.10) denes a probability distribution, i.e. that the integral
over zs is 1. For that we use a particular case of the Dixon integration formula (see [Di], [F,
Exercise 4.2, q. 2]) which reads
_
T

1i<jn
(t
i
t
j
)
n

i=1
n+1

j=1
[t
i
a
j
[

j
1
[b t
i
[

j
dt
1
dt
n
=
n+1

j=1
(
j
)

_
n+1

j=1

j
_

1i<jn+1
(a
i
a
j
)

i
+
j
1
n+1

i=1
[b a
i
[

n+1
j=1

j
, (2.12)
13
where the domain of integration T is given by
a
1
< t
1
< a
2
< t
2
< t
n
< a
n+1
.
Setting in (2.12), n = N1, b = 0,
j
= , j = 1, . . . , N, we arrive at the required statement.
Finally, we want to prove the consistency of formulas (2.9) and (2.10), i.e. that for prob-
abilities dened through those formulas we have
_
y
P
,M,
(r
N1
[z, z +dz] [ r
N
= y)P
,M,
(r
N
[y, y +dy])
= P
,M,
(r
N1
[z, z +dz]).
Assuming N M, this is equivalent to
_

_
N

j=1
y
j
(N1)

i<j
(z
j
z
i
)

i<j
(y
j
y
i
)
12

i,j
[y
j
z
i
[
1
N1

i=1
z
N
i

1i<jN
(y
j
y
i
)
2
N

i=1
y
1
i
(1 y
i
)
[MN[+1
dy
i
,
= const

1i<jN1
(z
j
z
i
)
2
N1

i=1
z
1
i
(1 z
i
)
[MN+1[+1
dz
i
, (2.13)
where the integration goes over all ys such that
0 < y
1
< z
1
< y
2
< < z
N1
< y
N
< 1.
In order to prove (2.13) we use another particular case of the Dixon integration formula (this
is b limit of (2.12)), which was also proved by Anderson [A]. This formula reads
_

_

i<j
(y
j
y
i
)

i,j
[y
j
a
i
[

i
1
dy
1
dy
n
=
n+1

j=1
(
j
)

_
n+1

j=1

j
_

1i<jn+1
(a
i
a
j
)

i
+
j
1
,
(2.14)
where the integration is over all y
i
such that
a
1
< y
1
< a
2
< < y
n
< a
n+1
.
Choosing
n = N, a
1
= 0, a
2
= z
1
, a
3
= z
2
, . . . , a
n
= z
n1
, a
n+1
= 1
and appropriate
i
we arrive at (2.13). For N > M the argument is similar.
Our next aim is to show that P
,M,
is a scaling limit of the ascending Macdonald processes
from Section 2.1.
Theorem 2.8. Fix two positive reals , and a positive integer M. Consider two sequences
a
i
= t
i1
, i = 1, 2, . . . and b
i
= t

t
i1
, i = 1, . . . , M, and let
1
,
2
, . . . be distributed
according to the innite ascending Macdonald process of Denition 2.1. For > 0 set
q = exp(), t = q

,
14
and dene
r
i
j
() = exp(
i
j
).
Then as 0 the nite-dimensional distributions of r
i
j
(), i = 1, 2, . . . , j =
1, . . . , min(M, i) weakly converge to those of P
,M,
.
Remark. The result of Theorem 2.8 is a manifestation of a more general limit transition
that takes Macdonald polynomials to the so-called Heckman-Opdam hypergeometric func-
tions and general Macdonald processes to certain probability measures that we call Heckman-
Opdam processes. In particular, P
,M,
is a Heckman-Opdam process. As all we shall need
in the sequel is the above theorem, we moved the discussion of these more general limiting
relations to the appendix.
Proof of Theorem 2.8. We need to prove that (2.2) converges to (2.9) and (2.4) converges to
(2.10), (2.11). Let us start from the former.
For any GT
+
N
and M N we have with the agreement that
i
= 0 for i > N (see
[M, Chapter VI, (6.11)])
P

(1, . . . , t
M1
; q, t) = t

N
i=1
(i1)
i

i<jM
(q

j
t
ji
; q)

(q

j
t
ji+1
; q)

(t
ji+1
; q)

(t
ji
; q)

= t

N
i=1
(i1)
i

i<jN
(q

j
t
ji
; q)

(q

j
t
ji+1
; q)

i=1
M

j=N+1
(q

i
t
ji
; q)

(q

i
t
ji+1
; q)

iN; jM

i<j
(t
ji+1
; q)

(t
ji
; q)

.
In the limit regime
q = exp(), t = q

,
i
=
1
log(r
i
), 0,
using Lemma 2.4 we have
t

N
i=1

i
(i1)

i=1
(r
i
)
(i1)
,
(q

j
t
ji
; q)

(q

j
t
ji+1
; q)

_
1 r
i
/r
j
_

,
M

j=N+1
(q

i
t
ji
; q)

(q

i
t
ji+1
; q)

=
(q

i
t
N+1i
; q)

(q

i
t
M+1i
; q)

_
1 r
i
_
(MN)
,
and using Lemma 2.5 we get
(t
ji+1
; q)

(t
ji
; q)

((j i))
((j i + 1))

.
We also need (see [M, Chapter VI, (6.19)])
Q

(; q, t)
P

(; q, t)
= b

1ij()
f(q

j
t
ji
)
f(q

j+1
t
ji
)
, f(u) =
(tu; q)

(qu; q)

. (2.15)
Thus, canceling asymptotically equal factors, we get
b

i=1
f(1)
f(q

i
t
Ni
)


N(1)
()
N
N

i=1
(1 r
i
)
1
.
15
Also
(ta
i
b
j
; q)

(a
i
b
j
; q)

=
(t t
i1
t
+j1
; q)

(t
i1
t
+j1
; q)

= f(t
i1
t
+j1
)

_
((i 1 +j 1 +)
((i 1 +j 1 + + 1))

_
.
We conclude that for N M as 0
N

i=1
M

j=1
(t
i1
t
+j1
; q)

(t t
i1
t
+j1
; q)

(1, . . . , t
N1
; q, t)Q

(t

, . . . , t
+M1
; q, t)
const
N

1i<jN
(r
i
r
j
)
2
N

i=1
(r
i
)

(1 r
i
)
(MN)+1
where const is a certain (explicit) constant (depending on N, M, and ). Taking into the
account that dr
i
r
i
d
i
, that the convergence in all the above formulas is uniform over
compact subsets of the set 0 < r
1
< < r
N
< 1, and that both prelimit and limit measures
have mass one, we conclude that (2.2) weakly converges to (2.9). For N > M the argument
is similar.
It remains to prove that (2.4) weakly converges to (2.10). Using [M, Chapter VI, (7.13)]
we have
P
/
(t
N1
; q, t) =
/
(t
N1
), (2.16)
where (f() was dened above, see (2.15))

/
(x) = x
[[[[
f(1)
N1

i<j<N
f(q

j
t
ji
)

ij<N
f(q

j+1
t
ji
)
f(q

j+1
t
ji
)f(q

j
t
ji
)
.
In the limit regime
q = exp(), t = q

,
i
=
1
log(r
i
),
i
=
1
log(r
t
i
), 0,
f(1)

1
()
, f(q

j
t
ji
) (1 r
t
i
/r
t
j
)
1
,
f(q

j+1
t
ji
)
f(q

j+1
t
ji
)f(q

j
t
ji
)

_
1 r
i
/r
j+1
_
1 r
t
i
/r
j+1
__
1 r
i
/r
t
j
_
_
1
,
t
(N1)([[[[)

_
i
r
i

i
r
t
i
_
(N1)
.
Therefore,

/
(t
N1
)

(N1)(1)
()
N1

i
r
(N1)
i

i
(r
t
i
)
1N

i<j<N
(r
t
j
r
t
i
)
1

ij<N
_
r
j+1
r
i
_
r
j+1
r
t
i
__
r
t
j
r
i
_
_
1
. (2.17)
Taking into the account that dr
t
i
r
t
i
d
i
and the above formulas for the asymptotic
behavior of P

(1, . . . , t
N1
; q, t), the uniformity of convergence on compact subsets of the
set dened by interlacing condition r
t
r, and the fact that we started with a probability
measure and obtained a probability density, we conclude that (2.4) weakly converges to (2.10).
For N > M the argument is similar.
16
Let denote the algebra of symmetric functions, which can be viewed as the algebra of
symmetric polynomials of bounded degree in innitely many variables x
1
, x
2
, . . . , see e.g. [M,
Chapter I, Section 2]. One way to view is as an algebra of polynomials in Newton power
sums
p
k
=

i
(x
i
)
k
, k = 1, 2, . . . .
Denition 2.9. For a symmetric function f , let f(N; ) denote the function on 1
M
given by
f(N; r) =
_

_
f(r
N
1
, r
N
2
, . . . , r
N
N
, 0, 0, . . . ), N M,
f(r
N
1
, r
N
2
, . . . , r
N
M
, 1, . . . , 1
. .
NM
, 0, 0, . . . ), N > M.
For example,
p
k
(N; r) =
_

_
N

i=1
(r
N
i
)
k
, N M,
M

i=1
(r
N
i
)
k
+N M, N > M.
For every M, N 1 and > 0 dene functions H(y; , M) and H
N
(; , M) in variables
y
1
, . . . , y
N
through
H
N
(y
1
, . . . , y
N
; , M) =
N

i=1
H(y
i
; , M) =
N

i=1
(y +)
(y + +M)
. (2.18)
For any subset I 1, . . . , N dene
B
I
(y
1
, . . . , y
N
; ) =

iI

j,I
+y
i
y
j
y
i
y
j
.
Dene the shift operator T
i
through
[T
i
f](y
1
, . . . , y
N
) = f(y
1
, . . . , y
i1
, y
i
1, y
i+1
, . . . , y
N
).
For any k N dene the kth order dierence operator T
k
N
acting on functions in variables
y
1
, . . . , y
N
through
T
k
N
=

[I[=k
B
I
(y
1
, . . . , y
N
)

iI
T
i
. (2.19)
The following statement is parallel to Theorem 2.3.
Theorem 2.10. Fix any integers m 1, N
1
N
2
N
m
1 and 1 k
i
N
i
,
i = 1, . . . , m. With E taken with respect to P
,M,
of Denition 2.6 we have
E
_
m

i=1
e
k
i
(N
i
, r)
_
=
T
k
m
N
m
T
k
2
N
2
T
k
1
N
1
_
N
1

i=1
H(y
i
; , M)
_

N
1
i=1
H(y
i
; , M)
y
i
=(1i)
, (2.20)
where e
k
is the kth degree elementary symmetric polynomial.
Proof. We start from Theorem 2.3 and perform the limit transition of Theorem 2.8.
17
Note that q

i
t
Ni
< 1, therefore, e
k
i
(
N
i
) < N
k
i
i
and Theorem 2.8 implies that left side
of (2.5) converges to the left side of (2.20). Turning to the right sides, observe that for
b
i
= t
+i1
we have
M

i=1
(tzb
i
; q)

(zb
i
; q)

=
(zt
+M
; q)

(zt

; q)

.
Set
q = exp(), t = q

, z
i
= exp(y
i
).
Note that for any function g(z
1
, . . . , z
N
) we have as 0
Mac
k
N
g(z
1
, . . . , z
n
) T
k
N
g(exp(y
1
), . . . , exp(y
N
)).
Further, Lemma 2.5 implies that as 0
(z
i
t
+M
; q)

(z
i
t

; q)


M
(y
i
+)
(y
i
+ +M)
=
M
H(y
i
).
It follows that
lim
0
Mac
k
m
N
m
Mac
k
2
N
2
Mac
k
1
N
1
_
N
1

i=1
(z
i
t
+M
;q)

(z
i
t

;q)

N
1
i=1
(z
i
t
+M
;q)

(z
i
t

;q)
z
i
=t
i1
=
T
k
m
N
m
T
k
2
N
2
T
k
1
N
1
_
N
1

i=1
H(y
i
; , M)
_

N
1
i=1
H(y
i
; , M)
y
i
=(1i)
.
Next, we aim to dene operators T
k
N
which will help us in studying the limiting behavior
of observables p
k
(N, r), cf. Denition 2.9.
Recall that partition of number n 0 is a sequence of integers
1

2
0 such
that

i=1

i
= n. The number of non-zero parts
i
is denoted () and called its length.
The number n is called the size of and denoted [[. For a partition = (
1
,
2
, . . . ) let
e

=
()

i=1
e

i
.
Elements e

with running over the set Y of all partitions, form a linear basis of , cf. [M,
Chapter I, Section 2].
Let PE(k, ) denote the transitional coecients between e and p bases in the algebra of
symmetric functions, cf. [M, Chapter I, Section 6]:
p
k
=

Y: [[=k
PE(k, )e

.
Dene
T
k
N
=

Y: [[=k
PE(k, )
1
()!

S
()
()

i=1
T

(i)
N
, (2.21)
where S
m
is the symmetric group of rank m.
Remark. Recall that operators T
k
N
, k = 1, . . . , N commute (because they are limits of
Mac
k
N
that are all diagonalized by the Macdonald polynomials, cf. [M, Chapter VI, (4.15)-
(4.16)]). However, it is convenient for us to think that the products of operators in (2.21) are
ordered (and we sum over all orderings).
Theorem 2.10 immediately implies the following statement.
18
Theorem 2.11. Fix any integers m 1, N
1
N
2
N
m
1 and k
i
1, i = 1, . . . , m.
With E taken with respect to P
,M,
of Denition 2.6 we have
E
_
m

i=1
p
k
i
(N
i
, r)
_
=
T
k
m
N
m
T
k
2
N
2
T
k
1
N
1
_
N
1

i=1
H(y
i
; , M)
_

N
1
i=1
H(y
i
; , M)
y
i
=(1i)
. (2.22)
3 Integral operators
The aim of this section is to express expectations of certain observables with respect to mea-
sure P
,M,
as contour integrals. This was done in [BC] for certain expectations of Macdonald
processes; however, the approach of [BC] fails in our case (the contours of integration required
in that paper do not exist) and we have to proceed dierently.
Since in (2.20), (2.22) the expectations of observables are expressed in terms of the action
of dierence operators on products of univariate functions, we will produce integral formulas
for the latter.
Let S
n
denote the set of all set partitions of 1, . . . , n. An element s S
N
is a collection
S
1
, . . . , S
k
of disjoint subsets of 1, . . . , n such that
k
_
m=1
S
m
= 1, . . . , n.
The number of non-empty sets in s S
n
will be called the length of s and denoted as (s).
The parameter n itself will be called the size of s and denoted as [s[. We will also denote by
[n] the set partition of 1, . . . , n consisting of the single set 1, . . . , n.
Let g(z) be a meromorphic function of a complex variable z, let y = (y
1
, . . . , y
N
)
C
N
, and let d > 0 be a parameter. The system of closed positively oriented contours
(
1
(y; g), . . . , (
k
(y; g) in the complex plane is called (g, y, d)-admissible (d will be called dis-
tance parameter), if
1. For each i, (
i+1
(y; g) is inside the inner boundary of the dneighborhood of (
i
(y; g).
(Hence, (
k
(y; g) is the smallest contour.)
2. All points y
m
are inside the smallest contour (
k
(y; g) (hence, inside all contours) and
g(z1)
g(z)
is analytic inside the largest contour (
1
(y; g). (Thus, potential singularities of
g(z1)
g(z)
have to be outside all the contours.)
From now on we assume that such contours do exist for every k (and this will be indeed true
for our choices of g and y
m
s).
Let G
(k)
be the following formal expression (which can be viewed as a kdimensional
dierential form)
G
(k)
(v
1
, . . . , v
k
; y
1
, . . . , y
N
; g) =

i<j
(v
i
v
j
)
2
(v
i
v
j
)
2

2
k

i=1
_
N

m=1
v
i
y
m

v
i
y
m

g(v
i
1)
g(v
i
)
dv
i
_
(the dependence on is omitted from the notations). Note that G
(k)
is symmetric in v
i
; this
will be important in what follows.
Take any set partition s = (S
1
, . . . , S
(s)
) S
k
and let G
s
denote the expression in (s)
variables w
1
, . . . , w
(s)
obtained by taking for each h = 1, . . . , (s) the residue of G
(k)
at
v
i
1
= v
i
2
= = v
i
r
(i
r
1), S
h
= i
1
, . . . , i
r
,
19
and renaming the remaining variable v
i
1
by w
h
. Here i
1
, . . . , i
r
are all elements of the set S
h
.
Note that the symmetry of G
(k)
implies that the order of elements in S
h
as well as the ordering
of the resulting variables w
h
are irrelevant. However, we need to specify some ordering of w
h
;
let us assume that the ordering is by the smallest elements of sets of the partitions, i.e. if S
i
corresponds to w
i
and S
j
corresponds to w
j
then the order in pair (i, j) is the same as that
of the minimal elements in S
i
and S
j
.
Observe that, in particular, G
1
k = G
(k)
with v
i
= w
i
, i = 1, . . . , k.
Denition 3.1. An admissible integral operator J
N
is an operator which acts on the functions
of the form

N
i=1
g(y
i
) via pdimensional integral
J
N
_
N

i=1
g(y
i
)
_
=
N

i=1
g(y
i
) c
_

i<j
Cr
i,j
(w
i
w
j
)
(w
i
w
j
)
d
ij
p

j=1
G
[k
j
]
(w
j
; y
1
, . . . , y
N
; g),
where p is the dimension of integral, k
j

p
j=1
is a sequence of positive integral parameters, d
ij
are non-negative integral parameters, Cr
i,j
(z) are analytic functions of z which have limits
as z , and c is a constant. The integration goes over nested admissible contours with
large enough distance parameter, and g(z) is assumed to be such that the admissible contours
exist. We call

i,j
d
ij
the degree of operator J
N
.
Remark. When we have a series of integral operators J
N
, N = 1, 2, . . . , we additionally
assume that all the above data (p, k
j
, d
ij
, Cr
i,j
(z)) does not depend on N. When N is
irrelevant, we sometime write simply J.
A subclass of admissible integral operators is given by the following denition.
Denition 3.2. For a set partition s S
n
dene the integral operator TJ
s
N
acting on the
product functions
_

N
i=1
g(y
i
)
_
via
TJ
s
N
_
N

i=1
g(y
i
)
_
=
_
N

i=1
g(y
i
)
_
1
(2i)
(s)
_
G
s
(w
1
, . . . , w
(s)
; y
1
, . . . , y
N
; g), (3.1)
where each variable w
i
, 1 i (s) is integrated over the (positively oriented, encircling
y
m

N
m=1
) contour (
i
(y; g), and the system of contours (
i
(y; g) is (g, y, [s[)admissible, as
dened above.
Remark. The dimension of TJ
s
N
is (s) and its degree is 0.
Now we are ready to state the main theorem of this section.
Theorem 3.3. The action of the dierence operator T
k
N
(dened by (2.21)) on any product
function

N
i=1
g(y
i
) can be written as a (nite) sum of admissible integral operators
T
k
N
= ()
k
TJ
[k]
N
+

G
J
N
(G),
where the summation goes over a certain nite set ( (independent of N). All integral operators
J
N
(G), G ( are such that the dierences of their dimensions and degrees are non-positive.
Remark. Theorem 3.3 implies that as N , the leading term of T
k
N
is given
by ()
k
TJ
[k]
N
as can be seen by dilating the integration contours by a large parameter.
Moreover, as we will see in the next section the same is true for the compositions of T
k
N
(as
in Theorem 2.11), their leading term will be given by the compositions of ()
k
TJ
[k]
N
. This
20
property is crucial for the proof of the Central Limit Theorem that we present in the next
section.
The rest of this section is devoted to the proof of Theorem 3.3 and is subdivided into
three steps. In Step 1 we nd the decomposition of operators T
k
N
(dened by (2.19)) into the
linear combination of admissible integral operators TJ
s
N
. In Step 2 we substitute the result
of Step 1 into the denition of operators T
k
N
(2.21) and obtain the expansion of T
k
N
as a big
sum of admissible integral operators. We also encode each term in this expansion by a certain
labeled graph. Finally, in Step 3 we observe massive cancelations in sums of Step 2, showing
that all integral operators in the decomposition of T
k
N
, whose dierence of the dimension and
degree is positive (except for ()
k
TJ
[k]
N
) vanish.
3.1 Step 1: Integral form of operators T
k
N
.
Proposition 3.4. For any N k 1 on product functions
_

N
i=1
g(y
i
)
_
we have the identity
T
k
N
=
()
k
k!

s=(S
1
,S
2
,... )S
k
_
_
(1)
k(s)
(s)

h=1
([S
h
[ 1)!
_
_
TJ
s
N
,
where T
k
N
is given by (2.19) and TJ
s
N
is given by (3.1).
Proof. Let us evaluate TJ
s
N
as a sum of residues. First, assume that s = 1
k
, i.e. this is the
partition of 1, . . . , k into singletons. We will rst integrate over the smallest contour, then
the second smallest one, etc. When we integrate over the smallest contour we get the sum
of residues at points y
1
, . . . , y
N
. If we pick the y
m
term at this rst step, then at the second
one we could either take the residue at y
j
with j ,= m or at y
m
; there is no residue at
y
m
thanks to (v
i
v
j
)
2
in the denition of G
(k)
, and there is no residue at y
m
+ thanks to
the factor v
i
y
m
in G
(k)
. When we continue, we see the formation of strings of residue
locations of the form
y
m
, y
m
, y
m
2, . . . .
Observe that the sum of the residues in the decomposition of TJ
1
k
N
can be mimicked by the
decomposition of the product
k

i=1
(x
i
1
+x
i
2
+ +x
i
N
)
into the sum of monomials in variables x
i
j
(here "i" is the upper index, not an exponent). A
general monomial
x
1
i
1
x
2
i
2
x
k
i
k
is identied with the residue of G
(k)
at
y
1
, y
1
, . . . , y
1
(m
1
1), y
2
, y
2
, . . . , y
2
(m
2
1), . . . ,
where m
c
is the multiplicity of cs in (i
1
, . . . , i
k
).
More generally, when we evaluate TJ
s
N
for a general s S
k
we obtain sums of similar
residues, and now the decomposition is mimicked by the product
(s)

h=1
_
x
i
1
(h)
1
x
i
2
(h)
1
x
i
r
(h)
1
+x
i
1
(h)
2
x
i
2
(h)
2
. . . x
i
r
(h)
2
+ +x
i
1
(h)
N
x
i
2
(h)
N
x
i
r
(h)
N
_
,
where i
1
(h), . . . , i
r
(h) are all elements of the set S
h
in s.
Now we will use the following combinatorial lemma which will be proved a bit later.
21
Lemma 3.5. We have

s=(S
1
,S
2
,... )S
k
(1)
k(s)
(s)

h=1
([S
h
[ 1)!
_
x
i
1
(h)
1
x
i
r
(h)
1
+ +x
i
1
(h)
N
x
i
r
(h)
N
_
=

x
1
(1)
x
2
(2)
x
k
(k)
, (3.2)
where runs over all injective maps from 1, . . . , k to 1, . . . , N.
Applying Lemma 3.5 we conclude that

s=(S
1
,S
2
,... )S
k
_
_
(1)
k(s)

j
([S
j
[ 1)!
_
_
TJ
s
N
is the sum of residues of G
(k)
at collections of distinct points y
(1)
, . . . , y
(k)
as in (3.2).
Computing the residues explicitly, comparing with the denition of T
k
N
, and noting that the
factor k! appears because of the ordering of (1), . . . , (k) (we need to sum over kpoint
subsets, not over ordered ktuples), we are done.
We now prove Lemma 3.5.
Proof of Lemma 3.5. Pick m
1
, . . . , m
k
and compare the coecient of x
1
m
1
x
k
m
k
in both sides
of (3.2). Clearly, if all m
i
are distinct, then the coecient is 1 in the right side. It is also
1 in left side, because such monomial appears only in the decomposition of TJ
1
k
N
and with
coecient 1. If some of m
i
s coincide, then the corresponding coecient in the right side of
(3.2) is zero. Let us prove that it also vanishes in the left side.
Let s = (

S
1
,

S
2
. . . ) S
k
denote the partition of 1, . . . k into sets formed by equal
m
i
s (i.e. a and b belong to the same set of partition i m
a
= m
b
). Then the coecient of
x
1
m
1
x
k
m
k
in the left side of (3.2) is

s=(S
1
,S
2
... )S
k
(1)
k(s)
(s)

h=1
([S
h
[ 1)!, (3.3)
where the summation goes over s such that s is a renement of s, i.e. sets of s are unions of
the sets of s. Clearly, (3.3) is equal to
( s)

i=1

s runs over set partitions of



S
i
s=(S
1
,S
2
,... )
(1)
[S
i
[(s)
(s)

h=1
([S
h
[ 1)!.
Now it remains to prove that for any n 1

s=(S
1
,S
2
,... )S
n
(1)
[S
i
[(s)
(s)

h=1
([S
h
[ 1)! = 0. (3.4)
For that consider the well-known summation over symmetric group S(n)

S(n)
(1)
sgn()
= 0. (3.5)
Under the map : S(n) S
n
mapping a permutation into the set partition that corre-
sponds to the cyclic structure of , (3.5) turns into (3.4).
22
3.2 Step 2: Operators T
k
N
as sums over labeled graphs
Let us substitute the statement of Proposition 3.4 into (2.21). We obtain
T
k
N
= ()
k

[[=k
PE(k, )

i
!

1
()!

S
()
()

i=1
_
_
_

s=(S
1
,S
2
,... )S

(i)
_
_
(1)
k(s)

j
([S
j
[ 1)!
_
_
TJ
s
N
_
_
_
(3.6)
The aim of this section is to understand the combinatorics of the resulting expression.
We start with the following proposition.
Proposition 3.6. Take any p 1 set partitions s
1
, . . . , s
p
, and let s
i
= (S
i
1
, S
i
2
, . . . ), i =
1, . . . , p. Then
TJ
s
p
N
TJ
s
1
N
_
N

m=1
g(y
i
)
_
=
_
N

m=1
g(y
i
)
_
1
(2i)

p
j=1
(s
j
)

_

1i<jp
Cr(i, j)
p

j=1
G
s
j (w
j
1
, . . . , w
j
(s
j
)
; y
1
, . . . , y
N
; g), (3.7)
where each variable w
j
i
is integrated over the (positively oriented, encircling y
m
) contour
(
j
i
(y; g). The contours are nested by lexicographical order on (j, i) (the (1, 1) contour is the
largest one) and are admissible with large enough distance parameter; the function g() is
assumed to be such that the contours exist. Furthermore,
Cr(i, j) =
(s
i
)

a=1
(s
j
)

b=1
[S
j
b
[

c=1
w
i
a
w
j
b
c +[S
i
a
[
w
i
a
w
j
b
c

w
i
a
w
j
b
c + 1
w
i
a
w
j
b
c +[S
i
a
[ + 1
.
Proof. The formula is obtained by iterating the application of operators TJ
s
N
. First, we
apply TJ
s
1
N
using Denition 3.2 and renaming the variable w
a
, a = 1, . . . , (s
1
) into w
1
a
. The
ydependent part in the right-hand side of (3.1) is

N
m=1
g
t
(y
m
) with
g
t
(y
m
) := g(y
m
)
(s
1
)

a=1
w
1
a
y
m

w
1
a
y
m

w
1
a
y
m
w
a
y
m
+

w
1
a
y
m
+([S
1
a
[ 2)
w
1
a
y
m
+([S
1
a
[ 1)
= g(y
m
)
(s
1
)

a=1
w
1
a
y
m

w
1
a
y
m
+([S
1
a
[ 1)
,
where s
1
= (S
1
1
, S
1
2
, . . . ). We have
g
t
(v 1)
g
t
(v)
=
g(v 1)
g(v)
(s
1
)

a=1
w
1
a
v +([S
1
a
[ 1)
w
1
a
v

w
1
a
v + 1
w
1
a
v + 1 +([S
1
a
[ 1)
. (3.8)
In order to iterate, this function must not have poles inside the contour of integration. To
achieve this it suces to choose on the next step the contours which are much closer to y
m
(i.e. at the rst step the contours are large, on the second step the are smaller, etc).
23
When we further apply TJ
s
2
N
, the product
(s
2
)

b=1
g
t
(w
2
b
1)
g
t
(w
2
b
)

g
t
(w
2
b
1 +([S
2
b
[ 1))
g
t
(w
2
b
+([S
2
b
[ 1))
(3.9)
appears, where s
2
= (S
2
1
, S
2
2
, . . . ). Substituting the gindependent part of the right side of
(3.8) into (3.9) we get Cr(1, 2). Further applying operators TJ
s
3
,. . . , TJ
s
p
we arrive at
(3.7).
Our next step is to expand the products in (3.6) using Proposition 3.6 to arrive at a big
sum. Each term in this sum involves an integral encoded by a collection of set partitions
s
1
, . . . , s
p
. The dimension of integral equals

p
j=1
(s
j
) and each integration variable corre-
sponds to one of the sets in one of partitions s
j
. Let us enumerate the nested contours of
integration by numbers from 1 to

j
(s
j
) (contour number 1 is the largest one as above)
and rename the variable on contour number i by z
i
. Then the integrand has the following
product structure:
1. For each i there is a factor, which is a function of z
i
. The exact form of this func-
tion depends only on the size of the corresponding set in one of s
j
. We denote this
multiplicative factor by M
r
(z
i
) (where r is the size of the corresponding set).
2. For each pair of indices i < j corresponding to two sets from the same set partition
there is a factor, which is a function of z
i
and z
j
. The exact form of this factor depends
on the sizes of the corresponding sets. We call is cross-factor of type I and denote
Cr
I
r,r

(z
i
, z
j
) (where r and r
t
are the sizes of the corresponding sets).
3. For each pair of indices i < j corresponding to two sets from distinct set partitions
there is a factor, which is a function of z
i
and z
j
. The exact form of this factor depends
on the sizes of the corresponding sets. We say that this is the cross-factor of type II
and denote in by Cr
II
r,r

(z
i
, z
j
) (where r and r
t
are the sizes of the corresponding sets).
Altogether there are

p
j=1
(s
j
) multiplicative factors and
_

p
j=1
(s
j
)
2
_
cross factors.
Now we expand each cross-factor in power series in (z
i
z
j
)
1
. We have
Cr
I
r,r
(z
i
, z
j
) =
r

a=1
r

b=1
_
z
i
z
j
+(a b)
_
2
_
z
i
z
j
+(a b)
_
2

2
= 1 +
k1

m=2
a
I
r,r

(m)
(z
i
z
j
)
m
+
A
I
r,r

(z
i
z
j
)
(z
i
z
j
)
k
,
(3.10)
where a
I
r,r

(m) vanishes when m is odd (i.e. the summation in (3.10) goes only over even
powers) and A
I
r,r

(z
i
z
j
) tends to a nite limit as (z
i
z
j
) . Also
Cr
II
r,r
(z
i
, z
j
) =
r

c=1
_
z
i
z
j
c +r
z
i
z
j
c

z
i
z
j
c + 1
z
i
z
j
c +r + 1
_
= 1 +
k1

m=2
a
II
r,r

(m)
(z
i
z
j
)
m
+
A
II
r,r

(z
i
z
j
)
(z
i
z
j
)
k
, (3.11)
where A
II
r,r

(z
i
z
j
) tends to a nite limit as (z
i
z
j
) . It is crucial for us that both
series have no rst order term.
24
We again substitute the above expansions into (3.6) and expand everything into even
bigger sum of integrals. Our next aim is to provide a description for a general term in this
sum. The general term is related to several choices that we can make:
1. Young diagram ;
2. Permutation of the set 1, . . . , ();
3. For each 1 i (), the set partition s
i
of S

(i)
;
4. For each pair of sets of (the same or dierent) set partitions, one of
the terms in expansions (3.10), (3.11).
(3.12)
It is convenient to illustrate all these choices graphically. For that purpose we take k ver-
tices enumerated by numbers 1, . . . , k. They are united into groups (clusters) of lengths
(i)
,
i.e. the rst group has the vertices 1, . . . ,
(1)
, the second one has
(1)
+1, . . . ,
(1)
+
(2)
,
etc. This symbolizes the choices of and . Some of the vertices inside clusters are united
into multivertices this symbolizes the set partitions. Finally, each pair of multivertices can
be either not joined by an edge, which means the choice of term 1 in (3.10) or (3.11), or joined
by a red edge with label m, which means the choice of term with (z
i
z
j
)
m
in (3.11), or joined
by a black edge with label m, which means the choice of term with (z
i
z
j
)
m
in (3.10). An
example of such a graphical illustration is shown at Figure 1. An integral is reconstructed
2
3
5
4
1
2
6
4
Figure 1: Graphical illustration for a general integral term. Here = (4, 1, 1), so we have 3
clusters (separated by green dashed contours), the set partition from S
4
has one set of size 3
and one set of size 1. The edges are shown in thin red and thick black.
by the picture via the following procedure: Each integration variable z
i
corresponds to one
of the multivertices; the variables are ordered by the minimal numeric labels of vertices they
contain and are integrated over admissible nested contours with respect to this order. As
described above, for each variable z
i
we have a multiplicative factor M
r
(z
i
), where r is the
number of vertices in the corresponding multivertex (this number r will be called the rank
of multivertex). For each pair of variables z
i
, z
j
, if corresponding vertices are joined by an
edge, then we also have a crossterm depending on the color and label of the edge:
1. The edge is black if the term comes from Cr
I
r,r

(z
i
, z
j
) and red if the term comes from
Cr
II
r,r

(z
i
, z
j
).
2. The number m 2 indicates the power of (z
i
z
j
)
1
in (3.10) or (3.11).
Next, we note that many features of the picture are irrelevant for the resulting integral
(in other words, dierent pictures might give the same integrals). These features are:
25
1. Decomposition into clusters;
2. All numbers on each multivertex except for the minimal one;
3. The order (i.e. nesting of integration contours) between dierent connected components;
i.e. only the order inside each component matters.
So let us remove all of the above irrelevant features of the picture. After that we end up
with the following object that we denote by G: We have a collection of multivertices; each
multivertex should be viewed as a set of r ordinary vertices (we call r the rank). Some of
the multivertices are joined by red or black edges with labels. Each connected component of
the resulting graph has a linear order on its multivertices. In other words, there is a partial
order on multivertices of G such that only vertices in the same connected component can be
compared. We call the resulting object labeled graph. The fact that a labeled graph appeared
from the above graphical illustration of the integral implies the following properties:
1. Sum of the ranks of all multivertices is k;
2. Multivertices of one black connected component can not be joined by a red edge (because
they came from the same cluster);
3. Suppose that A, B, C are three multivertices of one (uncolored) connected component
and, moreover, A and B belong to the same black connected component. In this case
A < C if and only if B < C (thus, also A > C i B > C).
The labeled graph obtained after removing all irrelevant features of the picture of Figure 1 is
shown in Figure 2.
3
2
4
2
1
Figure 2: Graph with one multivertex of rank 3 and three multivertices of rank 1
Given a labeled graph G we can reconstruct the integral by the same procedure as before,
we denote the resulting integral via J(G).
Note that in our sum each integral corresponding to a given graph G comes with a
prefactor
(1)
k

v is a multivertex of G
(1)(r(v) 1)!.
It is important for us that this prefactor depends only on the labeled graph G (and not on the
data we removed to obtain G). Because of that property we can forget about the prefactor
when analyzing the sum of the integrals corresponding to a given graph.
3.3 Step 3: Cancelations in terms corresponding to a given labeled graph
Our aim now is to compute the total coecient of J(G) for a given graph G, i.e. we want
to compute the (weighted) sum of all integrals corresponding to G. As we will see, for many
graphs G this sum vanishes.
26
First, x some with () = n and a permutation S(n) (equivalently x two out
of four choices in (3.12)). Let W(G, , ) denote the number of integral terms corresponding
to a graph G and these two choices; in other words, this is the number of ways to make the
remaining two choices in (3.12) in such a way as to get the integral term of the type J(G).
Let

G denote the subgraph of G whose multivertices either have rank at least 2 or has at
least one edge attached to it (i.e. we exclude multivertices of rank 1 that have no adjacent red
or black edges). Let B denote the set of all black connected components of

G
5
. Note that each
black component arises from one of the clusters (which correspond to the coordinates of )
according to our denitions. Thus, each element of B corresponds to one of the coordinates of
, i.e. we have a map from B into 1, . . . , n (each i will further correspond to
(i)
). This
map must satisfy the following property: if the partial order on G is such that members of a
black component b
1
precede members of a black component b
2
, then (b
1
) < (b
2
). In other
words, B is equipped with a partial order (which is a projection of the partial order on G)
and is an order-preserving map. Dierent s correspond to dierent cluster congurations
that G may have originated from. Let = (G, n) denote the set of such maps .
Example: Consider the graph of Figure 2. It has three connected black components
b
1
, b
2
, b
3
: b
1
has 2 multivertices of ranks 3 and 1, b
2
has one multivertex of rank 1 (joined
by a red edge), b
3
has one isolated multivertex of rank 1. Therefore, B consists of two
elements B = b
1
, b
2
. The partial order has the only inequality b
2
< b
1
. Also take to be
a partition with two nonzero parts. Now should be a map from b
1
, b
2
to 1, 2 such that
(b
2
) < (b
1
). This means that (b
2
) = 1 and (b
1
) = 2. Therefore, there is a single such ,
see Figure 3.
2
3
2
4
1
Figure 3: Graphical illustration for the unique possible when has two parts. Equivalently,
this is a unique decompositions of graph of Figure 2 (without single isolated vertices) into
two clusters.
Now suppose that is xed, and let W(G, , , ) denote the number of integrals corre-
sponding to it. We have
W(G, , ) =
1
[Aut(

G)[

(G,n)
W(G, , , ), n = (), (3.13)
where Aut(

G) is the group of all automorphisms of graph



G.
5
Up to now we were considering graphs G and

G up to isomorphism. But here, in order to dene B and
then , we x some representative of the isomorphism class of

G. This choice is also responsible for the factor
|Aut(

G)|
1
in (3.13).
27
Let us call a vertex v of G simple if v is isolated and the corresponding multivertex has
rank 1.
Lemma 3.7. W(G, , , ) is a polynomial in n variables
(i)
(with coecients depending
on G and only) of degree equal to the number of nonsimple vertices in G, i.e. the number
of vertices in

G.
Proof. For each coordinate
(i)
of we have chosen (via ) which black components of B
belong to it. After that, we claim that the total number number of ways to choose a set
partition s S

(i)
and factors from the integral corresponding to it in such a way as to
get factors corresponding to these black components, is a polynomial in
(i)
depending on
the set
1
(i); we use the notation P

1
(i)
(
(i)
) for it. Indeed, if the black components of

1
(i) have d vertices altogether (recall that all these vertices are not simple), then we choose
d (unordered) elements out of the set with
(i)
elements; there are
_

(i)
d
_
ways to do this.
After that there is a xed (depending solely on the set
1
(i) and G) number of ways to do
all the other choices, i.e. to choose a set partition of these d elements which would agree with
G on
1
(i). Therefore,
P

1
(i)
(
(i)
) = c(
1
(i))
_

(i)
d
_
. (3.14)
Note that this polynomial automatically vanishes if
(i)
< d. It is also convenient to set
P to be 1 if
1
(i) = . Since all the choices for dierent i are independent, and there is
always a unique way to add required by G red edges to the picture, we conclude that the
total number of integrals is

i
P

1
(i)
(
(i)
).
Note that this is a polynomial of
i
of degree equal to the total number of non-simple vertices
in G.
Example: Continuing the above example with the graph of Figure 2, for the unique ,
multivertertices of b
1
are in one cluster (corresponding to
2
) and multivertex of b
2
is in
another cluster (corresponding to
1
). In the rst cluster of size
2
we choose 4 elements
(corresponding to 4 vertices of b
1
); there are
2
(
2
1)(
2
2)(
2
3)/24 ways to do this.
Having chosen these 4 elements we should subdivide them into those 3 corresponding to the
rank 3 multivertex and 1 corresponding to the rank 1 multivertex in b
1
. When doing this
we have a restriction: rank 1 multivertex should have a greater number, than the minimal
number among the vertices of the rank 3 multivertex. This simply means that the rank 1
multivertex is either number 2, number 3 or number 4 of our set with 4 elements. Thus,
P

1
(2)
(
2
) = 3
_

2
4
_
.
In the second cluster of size
1
we choose one element corresponding to b
2
. There are
1
ways to do this. Note that we ignore all simple vertices. Indeed, there is no need to specify
whats happening with simple vertices: the parts of set partitions corresponding to them are
just a decomposition of a set into singletons, which is is automatically uniquely dened as
soon as we do all the choices for non-simple vertices. We conclude that yields the following
polynomial of degree 5
P

1
(1)
(
1
)P

1
(2)
(
2
) =
3
24

2
(
2
1)(
2
2)(
2
3).
We proceed summing over all (G, n), n = ().
28
Lemma 3.8. W(G, , ) is a polynomial in n variables
(i)
(with coecients depending on
G) of degree equal to the number of nonsimple vertices in G.
Moreover, if G has an isolated multivertex of degree r > 1, then the highest order compo-
nent of W(G, , ) is divisible by

n
i=1
(
i
)
r
.
Proof. The rst part is an immediate consequence of (3.13) and Lemma 3.7.
As for the second part, x an isolated multivertex v of G of degree r. Consider the process
of constructing the polynomial for W(G v; , ) through (3.13) and Lemma 3.7. Note that
any order-preserving
G\v
(G v, n) for the graph G v corresponds to exactly n order-
preserving
G
s for G: they dier by the image of v, while images of all other black components
are the same as in
G\v
. Moreover, if
G
(v) = t, then the polynomial corresponding to
G
is
the one for
G\v
times
_

(t)
d
r
_
,
where d is the total number of non-simple vertices in the black components of
1
G\v
(t), as
in Lemma 3.7. We conclude that the highest order term of the sum of all polynomials
corresponding to xed
G\v
and various choices of t = 1, . . . , n is divisible by

r
i
. Clearly,
this property survives when we further sum over all
G\v
.
Now let us also sum over . For a graph G, let U(G, ) denote
1
()!
times the total number
of the integrals given by graph G in the decomposition of

S
()
()

i=1
T

(i)
N
.
Lemma 3.9. For any labeled graph G there exists a symmetric function f
G
(x
1
, x
2
, . . . )
of degree equal to the number of nonsimple vertices in G, such that
U(G, ) = f
G
(
1
, . . . ,
()
, 0, 0, . . . ).
Moreover, if G has an isolated multivertex of rank r then the highest degree component of f
G
is divisible by p
r
=

i
x
r
i
.
Proof. Combining Lemma 3.8 and identity

S(n)
1
n!
W(G, , ) = U(G, ), n = ()
we get a symmetric polynomial f
n
G
(x
1
, . . . , x
N
) (of desired degree and with the desired divis-
ibility of highest degree component) dened by
f
n
G
(
1
, . . . ,
n
) = U(G, ).
Note that if we now add a zero coordinate to (thus, extending its length by 1), then the
polynomial does not change, i.e.
f
n+1
G
(x
1
, . . . , x
n
, 0) = f
n
G
(x
1
, . . . , x
n
). (3.15)
Indeed, when
n+1
= 0, (3.14) vanishes unless
1
(
1
(n+1)) = , therefore, the summation
over (G, n +1) is in reality over such that
1
(
1
(n +1)) = . These are eectively s
from (G, n) and the sum remains stable.
The property (3.15) implies that the sequence f
n
G
, n = 1, 2, . . . denes an element f
G
,
cf. [M, Chapter I, Section 2].
29
The next crucial cancelation step is given in the following statement.
Proposition 3.10. Take k > 1 and let f be a symmetric function of degree at most k
such that f C[p
1
, . . . , p
k1
]. We have

Y
k
PE(k, )f(
1
,
2
, . . . )

i
!
= 0,
where Y
k
is the set of all partitions with [[ = k.
Proof. Let : C be an (algebra-) homomorphism sending p
1
1 and p
k
0, k 2.
With the notation
e

=
()

i=1
e

i
,
where e
m
, is the elementary symmetric function of degree m, we have
(e

) =
1

i
!
,
as follows from the identity of generating series (see e.g. [M, Chapter I, Section 2])

k=0
e
k
z
k
= exp
_

k=1
p
k
(z)
k
k
_
.
Let d[i
1
, . . . , i
m
] denote the dierential operator on (which is viewed as an algebra of
polynomials in p
k
here) of the form
d[i
1
, . . . , i
m
] =
m

j=1
(1)
i
j
i
j

p
i
j
, i
1
, . . . , i
j
1.
Then for any n 1 (see [M, Chapter I, Section 5, Example 3])
d[n]e
r
= (1)
n
n

p
n
(e
r
) =
_
e
rn
, r n,
0, otherwise.
Let E

()
i=1
(e

i
!) (note that (E

) = 1). Then
d[n]E

j1
E
nv
j

j
(
j
1) (
j
n + 1),
where v
i
is the vector with the ith coordinate equal to 1 and all other coordinates equal to
0, and we view as vector (
1
,
2
, . . . ). More generally,
d[i
1
, . . . , i
m
]E

j
1
,...,j
m
1
E
i
1
v
j
1
i
2
v
j
2
...
m

q=1

(q)
j
q
(
(q)
j
q
1) (
(q)
j
q
i
q
+ 1),
where

(q)
= i
q+1
v
j
q+1
i
q+2
v
j
q+2
. . . , 1 q m,
and, in particular,
(m)
= . Observe that (d[i
1
, . . . , i
m
]E

) is a symmetric polynomial
in
i
with highest order part being p
i
1
p
i
m
. Therefore, any symmetric function f
in variables
i
of degree at most k and such that f C[p
1
, . . . , p
k1
] can be obtained as
30
a linear combination of (d[i
1
, . . . , i
m
]E

) with i
j
< k. (Indeed, after we subtract a linear
combination of (d[i
1
, . . . , i
m
]E

) that agree with the highest order part of f, we get a


polynomial f
t
of degree at most k 1, which is automatically in C[p
1
, . . . , p
k1
]. After that
we repeat for f
t
, etc.). It remains to apply this linear combination of d[i
1
, . . . , i
m
] to the
identity
p
k
=

Y
k
PE(k, )
E

i
!
.
Now we are ready to prove the Theorem 3.3.
Proof of Theorem 3.3. As we explained above, labeled graph G denes an admissible integral
operator J(G). The dimension of this operator is equal to the number of multivertices in
G and the degree equals the sum of labels of all edges of G. We will show that the sum in
Theorem 3.3 is over the set ( of all labeled graphs with k vertices and such that the dimension
minus the degree of the corresponding integral operator is a non-positive number.
We start from the decomposition (3.6) of T
k
N
into the sum of integrals. Note that the
coecient of TJ
[k]
N
is ()
k
PE(k, k)(1)
k1
/k. The second factor is PE(k, k) = (1)
k1
k,
therefore, the coecient of TJ
[k]
N
is ()
k
.
We further use Proposition 3.6 and then expand crossterms in the resulting integrals as
in (3.10), (3.11). Note that A
I
r,r

(z
i
z
j
) and A
II
r,r

(z
i
z
j
) in these expansions by the very
denition have limits as z
i
z
j
. Since all the integrals in our expansion are at most k
dimensional, all terms where A
I
r,r

(z
i
z
j
) or A
II
r,r

(z
i
z
j
) are present satisfy the assumption
that dimension minus degree is non-positive.
As is explained above, all other terms in the expansion are enumerated by certain labeled
graphs. Our aim is to show that if the contribution of a given graph is non-zero, then the
dimension minus degree of the corresponding integral operator is non-positive. If a graph
G has less than k non-simple vertices, then combining Lemma 3.9 with Proposition 3.10
we conclude that the contribution of this graph vanishes. On the other hand, if a graph G
has k non-simple vertices which form M k multivertices, then the corresponding integral
is Mdimensional. If G has no isolated multivertices, then it has at least ,M/2| edges.
Since each edge increases the degree at least by 2, we conclude that the dimension minus
degree of the corresponding integral operator is non-positive. Finally, if G has k vertices and
an isolated multivertex of degree less than k (isolated multivertex of degree k corresponds
precisely to TJ
[k]
N
), then we can again use Lemma 3.9 and Proposition 3.10 concluding that
the contribution of this graph vanishes.
4 Central Limit Theorem
4.1 Formulation of GFF-type asymptotics
The main goal of this section is to prove the following statement.
Theorem 4.1. Suppose that we have a large parameter L, parameters M 1, > 0,
N
1
N
2
N
h
1 grow linearly in it:
M L

M, L , N
i
L

N
i
, L ;

M 0, 0, N
h
> 0.
Let r 1
M
be distributed according to P
,M,
of Denition 2.6. Then for any integers k
i
1,
i = 1, . . . , h, the random vector (p
k
i
(N
i
; r) Ep
k
i
(N
i
; r))
h
i=1
converges, in the sense of joint
31
moments, thus weakly, to the Gaussian vector with mean 0 and covariance given by
lim
L
E
_
[p
k
1
(N
1
; r) Ep
k
1
(N
1
; r)] [p
k
2
(N
2
; r) Ep
k
2
(N
2
; r)]
_
=

1
(2i)
2

_ _
du
1
du
2
(u
1
u
2
)
2
2

r=1
_
u
r
(u
r
+

N
r
)

u
r

(u
r


M)
_
k
r
, (4.1)
where both integration contours are closed and positively oriented, they enclose the poles of
the integrand at u
1
=

N
1
, u
2
=

N
2
(

N
1


N
2
, as above), but not at u
r
= +

M, r = 1, 2,
and u
2
contour is contained in the u
1
contour.
Remark 1. We can change p
k
(N; r) in the statement of theorem by removing N M
ones from its denition (cf. Denition 2.9) when N > M.
Remark 2. The limit covariance depends on only via the prefactor
1
.
Remark 3. Our methods also give the asymptotics of Ep
k
i
(N
i
; r)/L, which provides a
limit shape theorem (or law of large numbers) for Jacobi ensemble. We do not pursue this
further as this was already done in [DP], [Ki], [Ji].
In Sections 4.2, 4.3, 4.4 we prove Theorem 4.1, and in Section 4.6 we identify the limit
covariance with that of a pullback of the Gaussian Free Field (whose basic properties are
discussed in Section 4.5) and also give an alternative expression for the covariance in terms
of Chebyshev polynomials.
4.2 A warm up: Moments of p
1
.
In order to see what kind of objects we are working with take N
1
= N =

NL as in Theorem
4.1 and consider the limit distribution of p
1
(N; r).
For p
1
the situation is simplied by the fact that
T
1
N
= T
1
N
= TJ
[1]
N
.
We study E[p
1
(N; r)]
m
using Theorem 2.11. Applying m 1 times the operator T
1
N
,
using Proposition 3.6 and changing the variables w
i
1
= Lu
i
we arrive at the following formula
E[p
1
(N; r)]
m
=
L
m
()
m
(2i)
m
_
. . .
_

i<j
Cr
L
(u
i
, u
j
)
m

i=1
F
L
(u
i
)du
i
, (4.2)
where
F
L
(u) =
H(Lu 1; , M)
H(Lu; , M)
N

m=1
Lu + (m2)
Lu + (m1)
,
with H from (2.18) given by
H(y; , M) =
N

i=1
(y +)
(y + +M)
.
We conclude that
F
L
(u) =
(Lu + +M)
(Lu + 1 + +M)
(Lu + 1 +)
(Lu +)
N

m=1
Lu + (m2)
Lu + (m1)
=
Lu
Lu + (N 1)

Lu
Lu M
=
u

L
u +
(N1)
L

u

L
u
+M
L
.
32
Also we have
Cr
L
(u
1
, u
2
) =
(u
1
u
2
+
1
L
)(u
1
u
2
)
(u
1
u
2
+
1
L
)(u
1
u
2


L
)
,
and the integration in (4.2) is performed over nested contours (the smallest index corresponds
to the largest contour) enclosing the singularity of F
L
in
(N1)
L
.
Note that as L , F
L
converges to an analytic limit F given by
F(u) =
u
u +

N

u
u

M
.
Dene the function V
L
(u
1
, u
2
) through
V
L
(u
1
, u
2
) = Cr
L
(u
1
, u
2
) 1.
Note that as L
V
L
(u
1
, u
2
)
1
L
2


(u
1
u
2
)
2
.
Therefore, as L ,
E[p
1
(N; r)]
2
[E(p
1
(N; r))]
2
=
L
2
()
2
(2i)
2
_ _
V
L
(u
1
, u
2
)F
L
(u
1
)F
L
(u
2
)du
1
du
2


1
(2i)
2
_ _
F(u
1
)F(u
2
)
(u
1
u
2
)
2
du
1
du
2
. (4.3)
Changing the variables u
i
= u
t
i
, i = 1, 2, we arrive at the limit covariance formula (4.1) for
p
1
.
The proof of the fact that p
1
(N; r) Ep
1
(N; r) is asymptotically Gaussian follows from a
general lemma that we present in the next section.
4.3 Gaussianity lemma
Let us explain the features of the formula (4.2) that are important for us. The integration
in (4.2) goes over m contours = (
1
, . . . ,
m
), such that belongs to a certain class
m
.
As long as
m
, the actual choice of is irrelevant. The crucial property of classes
m
is that if (
1
, . . . ,
m
)
m
and 1 i
1
< < i
l
m, then (
i
1
,
i
2
, . . . ,
i
l
)
l
. Further,
if (
1
, . . . ,
m
)
m
, then F
L
(u) converges to a limit function F(u) uniformly over u
i
,
i = 1, . . . , m, as L , and also V
L
(u
1
, u
2
)
1
L
2


(u
1
u
2
)
2
uniformly over (u
1
, u
2
)
i

j
,
1 i < j m.
Let us now generalize the above properties. We will need this generalization when dealing
with p
k
(N; r), k 2, see Section 4.4. Fix an integral parameter q > 0 (in the above example
with p
1
, q = 1) and take q random variables
6

1
(L), . . . ,
q
(L) depending on an auxiliary
parameter L. Suppose that the following data is given. (In what follows we use the term
multicontour for a nite collection of closed positively oriented contours in C, and we call
the number of elements of a multicontour its dimension.)
1. For each k = 1, . . . , q, we have an integer l(k) > 0. In the above example with p
1
,
l(1) = 1
6
Throughout this section random variable is just a name for the collection of moments, in other words, it
is not important whether moments that we specify indeed dene a conventional random variable. To avoid
the confusion we write random variables and moments with the quotation marks when speaking about
such virtual random variables.
33
2. For any n 1 and any ntuple of integers K = (1 k
1
k
2
k
n
q), we
have a class of multicontours
K
, such that
K
is a family of n multicontours
= (
1
, . . . ,
n
) and
i
is a l(k
i
)dimensional multicontour.
3. If = (
1
, . . . ,
n
)
K
and 1 i
1
< < i
t
n, then (
i
1
,
i
2
, . . . ,
i
t
)
K
, where
K
t
= (k
i
1
k
i
2
k
i
t
).
4. For each k = 1, . . . , q, and each value of L we have a continuous function of l(k)
variables: F
k
L
(u
1
, . . . , u
l(k)
). If = (
1
, . . . ,
n
)
K
, 1 i n and k = k
i
, then
F
k
L
(u
1
, . . . , u
l(k)
) converges as L to a (continuous) function F
k
(u
1
, . . . , u
l(k)
) uni-
formly over (u
1
, . . . , u
l(k)
)
i
.
5. For each pair 1 k, r q and each value of L we have a continuous function
Cr
k,r
L
(u
1
, . . . , u
i
l(k)
; v
1
, . . . , v
l(r)
). If = (
1
, . . . ,
n
)
K
, 1 i < j n, and
a = k
i
, b = k
j
, then Cr
a,b
L
(u
1
, . . . , u
l(a)
; v
1
, . . . , v
l(b)
) converges as L to a (con-
tinuous) function Cr
a,b
(u
1
, . . . , u
i
l(a)
; v
1
, . . . , v
l(b)
) uniformly over (u
1
, . . . , u
l(a)
)
i
,
(v
1
, . . . , v
l(b)
)
j
.
6. For each k = 1, . . . , q, we have certain (Ldependent) constants c
L
(k).
7. An additional real parameter > 0 is xed.
Suppose now that for any n 1 and any ntuple of integers K = (1 k
1
k
2
k
n

q), there exists L(K) > 0 such that for L > L(K) the joint moments of
i
(L) corresponding
to K have the form
E(
k
1
(L)
k
n
(L)) =
n

i=1
c
L
(k
i
)
_

_

i<j
Cr
L
(k
i
, i; k
j
, j)
n

i=1
F
L
(k
i
, i),
where
F
L
(k, i) = F
k
L
(u
i
1
, . . . , u
i
l(k)
)du
i
1
du
i
l(k)
,
Cr
L
(k, i; r, j) = 1 +L

V
L
(k, i; r, j) = 1 +L

Cr
k,r
L
(u
i
1
, . . . , u
i
l(k)
; u
j
1
, . . . , u
j
l(r)
),
and the integration goes over any set of contours
K
.
Lemma 4.2. In the above settings, as L , the random vector

1
(L) E
1
(L)
c
L
(1)L
/2
, . . . ,

q
(L) E
q
(L)
c
L
(k
i
)L
/2
converges (in the sense of moments) to the Gaussian random vector
1
, . . . ,
q
with mean 0
and covariance (here k r)
E
k

r
=
_

_
Cr
k,r
(u
1
1
, . . . , u
1
l(k)
; u
2
1
, . . . , u
2
l(r)
)
F
k
(u
1
1
, . . . , u
1
l(k)
)F
r
(u
2
1
, . . . , u
2
l(r)
)du
1
1
du
1
l(k)
du
2
1
du
2
l(r)
,
where the integration goes over
(k,r)
. The answer does not depend on the choice of

(k,r)
.
Remark. Note that Lemma 4.2 is merely a manipulation with integrals and their asymp-
totics, and we use moments just as names for these integrals.
34
Proof of Lemma 4.2. Take any K = 1 k
1
k
2
k
n
q and let us compute the
corresponding centered moment. We have
E
n

i=1

k
i
E
k
i
c
L
(k
i
)L
/2
= L
n/2
_

_
n

i=1
F
L
(k
i
, i)

A1,...,n
(1)
n[A[

i,jA, i<j
(1 +L

V
L
(k
i
, i; k
j
, j)), (4.4)
with integration over
K
(here we use the hypothesis that classes are closed under the
operation of taking subsets). For a set A 1, . . . , n let A
(2)
denote the set of all pairs i < j
with i, j A,
A
(2)
= (i, j) [ i, j A, i < j.
Also for any set B of pairs of numbers, let S(B) denote the set of all rst and second
coordinates of elements from B (support of B). With this notation (4.4) transforms into
L
n/2
_
. . .
_
_
n

i=1
F
L
(k
i
, i)
_

A1,...,n
(1)
n[A[

BA
(2)
L
[B[

(i,j)B
V
L
(k
i
, i; k
j
, j)
= L
n/2
_
. . .
_
_
n

i=1
F
L
(k
i
, i)
_

B1,...,n
(2)

(i,j)B
L
[B[
V
L
(k
i
, i; k
j
, j)

A[ S(B)A
(1)
n[A[
.
(4.5)
Note that for any two nite sets I
1
I
2
we have

A: I
1
AI
2
(1)
[I
2
[[A[
=
_
1, I
1
= I
2
,
0, otherwise.
Hence, (4.5) is
L
n/2
_
. . .
_
_
n

i=1
F
L
(k
i
, i)
_

B1,...,n
(2)
[ S(B)=1,...,n
L
[B[

(i,j)B
V
L
(k
i
, i; k
j
, j). (4.6)
Note that the set of pairs B such that S(B) = 1, . . . , n must have at least ,
n
2
| elements.
Therefore, if n is odd, then the factor L
n/2
L
[B[
in (4.6) converges to 0. If n is even, then
similarly, the products with [B[ > n/2 are negligible. If [B[ = n/2 then B is just a perfect
matching of the set 1, . . . , n. We conclude that for even n, (4.6) converges as L to

Bperfect matchings of 1,...,n

(i,j)B
_

_
Cr
k
i
,k
j
(u
1
1
, . . . , u
1
l(k
i
)
; u
2
1
, . . . , u
2
l(r
i
)
)
F
k
i
(u
1
1
, . . . , u
1
l(k
i
)
)F
r
(u
2
1
, . . . , u
2
l(r
i
)
)du
1
1
du
1
l(k
i
)
du
2
1
du
2
l(r
i
)
.
This is precisely Wicks formula (known also as Isserliss theorem, see [Is]) for the joint
moments of Gaussian random variables
1
, . . . ,
q
.
4.4 Proof of Theorem 4.1
Throughout this section we x k
1
, . . . , k
h
1 and N
1
N
2
N
h
1. Our aim is to
prove that the moments of the vector
(p
k
1
(N
1
; r) Ep
k
1
(N
1
; r), p
k
2
(N
2
; r) Ep
k
2
(N
2
; r), . . . , p
k
h
(N
h
; r) Ep
k
h
(N
h
; r))
35
converge to those of the Gaussian vector with covariance given by (4.1). Clearly, this would
imply Theorem 4.1.
The proof is a combination of Theorem 2.11, Theorem 3.3 and Lemma 4.2.
Theorem 3.3 yields that operator T
k
N
is a sum of R = R(k) terms with leading term being
()
k
TJ
[k]
N
. Let us denote through T
k
N
j, j = 1, . . . , R(k), all the terms in T
k
N
, with j = 1
corresponding to ()
k
TJ
[k]
N
.
Now the joint moments of random variables p
k
(N; r) (with varying k and N) can be
written as (cf. Theorem 2.11)
E
_
m

i=1
p
k
i
(N
i
; r)
_
=
m

i=1
_

R(k
j
)
j=1
T
k
i
N
i
j
_
_
N
m

i=1
H(y
i
; , M)
_

N
m
i=1
H(y
i
; , M)
y
i
=(1i)
. (4.7)
Introduce formal random variables p
k
(N)j, j = 1, . . . , R(k), such that
E
_
m

i=1
p
k
i
(N
i
)j
i

_
=
m

i=1
T
k
i
N
i
j
i

_
N
m

i=1
H(y
i
; , M)
_

N
m
i=1
H(y
i
; , M)
y
i
=(1i)
. (4.8)
The word formal here means that at this time a set of random variables for us is just a
collection of numbers their joint moments.
Clearly, we have (formally, in the sense of moments)
p
k
(N; r) =
R(k)

j=1
p
k
(N)j.
Lemma 4.3. Random variables p
k
(N)j satisfy the assumptions of Lemma 4.2 with = 2
and coecients c
L
(k; N; j) corresponding to p
k
(N)j being of order L
d(k,N,j)
as L ,
where d(k, N, 1) = 1 and d(k, N, j) is a non-positive integer for j > 1.
Proof. We want to compute the moments of p
k
(N)j. For that we use Theorem 3.3 and
Denition 3.1 to write p
k
(N)j as an integral operator, then apply the formula (4.8) and
Proposition 3.6. Finally, we change the variables w
j
= Lu
j
. Let us specialize all the data
required for the application of Lemma 4.2.
1. The dimension l corresponding to p
k
(N)j is the dimension of jth integral operator
in the decomposition of T
k
N
(see Theorem 3.3).
2. The contours of integration are (scaled by L) nested admissible contours arising in
Proposition 3.6. We further assume that the distance parameter grows linearly in L,
thus, after rescaling by L, the admissibility condition does not depend on L.
3. The denition of admissible contours (see beginning of Section 3) readily implies this
property.
4. The functions T
k
L
are integrands in Denition 3.1 after the change of variables w
j
= Lu
j
.
If we extract the prefactor L
dimdeg
(where dim is the dimension of the integral and
deg is its degree), which will be absorbed by contants C
L
(k), then the functions T
k
L
clearly converge to analytic limits.
36
5. The cross-terms in the formulas for joint moments were explicitly computed in Propo-
sition 3.6 and one can easily see that after change of variables w
j
= Lu
j
and expansion
in power series in L
1
, the rst order terms cancel out and we get the required expansion
with = 2.
6. Each constant C
L
(k) is the product of c from Denition 3.2 and L
dimdeg
from property
4. Note that Theorem 3.3 yields that this dim deg corresponding to p
k
(N)j is 1
for j = 1 and is less than 1 for j > 1.
7. = 2.
Now we apply Lemma 4.2 to the random variables p
k
t
(N
t
)j, t = 1, . . . , h, j =
1, . . . R(k
t
), and conclude that their moments converge (after rescaling) to those of Gaus-
sian random variables. Since
p
k
t
(N
t
; r) Ep
k
t
(N
t
; r) =
R(k
t
)

j=1
(p
k
t
(N
t
)j Ep
k
t
(N
t
)j), (4.9)
and by Lemma 4.3 in the last sum the jth term is of order L
d(k
t
,N
t
,j)1
(where 1 = /2),
we conclude that as L all the terms except for j = 1 vanish. Therefore, the moments of
random vector (p
k
t
(N
t
) Ep
k
t
(N
t
))
h
t=1
converge to those of the Gaussian random vector with
mean 0 and the same covariance as the limit (centered) covariance of variables p
k
t
(N
t
)1.
In the rest of the proof we compute this limit covariance. By the denition, T
k
(N)1 is
()
k
TJ
[k]
N
, and the operator TJ
[k]
N
acts on product functions via (we are using Denition
3.2)
TJ
[k]
N
N

i=1
g(y
i
) =
_
N

i=1
g(y
i
)
_

1i<jk

2
(j i)
2

1i<jk
(j i + 1)

1i<j1k
(j i 1)

1
2i
_
N

m=1
v y
m

v + (k 1) y
m
k1

i=0
g(v +i 1)
g(v +i)
dv
=
_
N

i=1
g(y
i
)
_

k1
2ki
_
N

m=1
v y
m

v + (k 1) y
m
k1

i=0
g(v +i 1)
g(v +i)
dv. (4.10)
Therefore, as in Proposition 3.6,
E
_
p
k
1
(N
1
)1p
k
2
(N
2
)1
_
E(p
k
1
(N
1
)1)E(p
k
2
(N
2
)1)
=

2
k
1
k
2

1
(2i)
2
_ _
dv
1
dv
2
2

r=1
_
N
r

m=1
v
r
y
m

v
r
+ (k
r
1) y
m
k
r
1

i=0
g(v
r
+i 1)
g(v
r
+i)
_
(4.11)

_
k
2
1

i=0
v
1
v
2
i + 1
v
1
v
2
+ (k
1
1) i + 1

v
1
v
2
+ (k
1
1) i
v
1
v
2
i
1
_
, (4.12)
where y
i
= (1 i), contours are nested (v
1
is larger) and enclose y
i
. Using g(z) =
H(z; , M) from (2.18) we have
g(z 1)
g(z)
=
z
z M
.
37
The part (4.11) of the integrand simplies to
2

r=1
_
(v )v(v +) (v + (k
r
2))
(v +(N
r
2 + 1)) (v +(N
r
2 +k
r
))
k
r
1

i=0
v
r
+i
v
r
+i M
_
Elementary computations reveal that the part (4.12) of the integrand as a power series in
(v
1
v
2
)
1
is
1 +
k
2
1

i=0
_
1
i
v
1
v
2
+

2
(i + 1)
2
(v
1
v
2
)
2
+O
_
(v
1
v
2
)
3
_
__
1 +
i + 1
v
1
v
2
_

_
1
(k
1
1 i) + 1
v
1
v
2
+
((k
1
1 i) + 1)
2
(v
1
v
2
)
2
+O
_
(v
1
v
2
)
3
_
_

_
1 +
(k
1
1 i)
v
1
v
2
_
=
k
1
k
2
(v
1
v
2
)
2
+O
_
(v
1
v
2
)
3
_
. (4.13)
Changing the variables v
i
= Lu
i
transforms (4.11), (4.12) into

1
(2i)
2

_ _
du
1
du
2
(u
1
u
2
)
2
2

r=1
_
u
r
(u
r
+

N
r
)

u
r

(u
r


M)
_
k
r
+O(L
1
), (4.14)
where contours are nested (u
2
is smaller) and enclose the singularities at

N
1
,

N
2
(but not
at +

M) . Sending L to innity completes the proof.
4.5 Preliminaries on the twodimensional Gaussian Free Field
In this section we briey recall what is the 2d Gaussian Free Field. An interested reader is
referred to [She], [Dub, Section 4], [HMP, Section 2] and references therein for a more detailed
discussion.
Denition 4.4. The Gaussian Free Field with Dirichlet boundary conditions in the upper
halfplane H is a (generalized) centered Gaussian random eld T on H with covariance given
by
E(T(z)T(w)) =
1
2
ln

z w
z w

, z, w H. (4.15)
We note that although T can be viewed as a random element of a certain functional space,
there is no such thing as a value of T at a given point z (this is related to the singularity of
(4.15) at z = w).
Nevertheless, T inherits an important property of conventional functions: it can be inte-
grated with respect to (smooth enough) measures. Omitting the description of the required
smoothness of measures, we record this property in the following two special cases that we
present without proofs.
Lemma 4.5. Let be an absolutely continuous (with respect to the Lebesgue measure) mea-
sure on H whose density is a smooth function g(z) with compact support. Then
_
H
Td =
_
H
T(u)g(u)du
is a well-dened centered Gaussian random variable. Moreover, if we have two such measures

1
,
2
(with densities g
1
, g
2
), then
E
_
_
_
H
T(u)g
1
(u)du
_

_
_
H
T(u)g
2
(u)du
_
_
=
_
H
2
g
1
(z)
_

1
2
ln

z w
z w

_
g
2
(w)dzdw =
_
H
g
1
(u)
1
g
2
(u)du,
38
where
1
is the inverse of the Laplace operator with Dirichlet boundary conditions in H.
Lemma 4.6. Let be a measure on H whose support is a smooth curve and whose density
with respect to the natural (arc-length) measure on is given by a smooth function g(z) such
that
_ _

g(z)
_

1
2
ln

z w
z w

_
g(w)dzdw < . (4.16)
Then
_
H
Td =
_

T(u)g(u)du
is a well-dened Gaussian centered random variable of variance given by (4.16). Moreover,
if we have two such measures
1
,
2
(with two curves
1
,
2
and two densities g
1
, g
2
), then
E
_
_
_

1
T(u)g
1
(u)du
_

_
_

2
T(u)g
2
(u)du
_
_
=
_ _

2
g
1
(z)
_

1
2
ln

z w
z w

_
g
2
(w)dzdw.
In principle, the above two lemmas can be taken as an alternative denition of the Gaus-
sian Free Field as a random functional on (smooth enough) measures.
Another property of functions that T inherits is the notion of pullback.
Denition 4.7. Given a domain D and a bijection : D H, the pullback T is a
generalized centered Gaussian Field on D with covariance
E(T((z))T((w))) =
1
2
ln

(z) (w)
(z) (w)

, z, w D.
Integrals of T with respect to measures can be computed through
_
D
(T ) d =
_
H
Td(),
where d() stands for the pushforward of measure .
The above denition immediately implies the following analogue of Lemma 4.6 (there is
a similar analogue of Lemma 4.5).
Lemma 4.8. In notation of Denition 4.7, let be a measure on D whose support is a
smooth curve and whose density with respect to the natural (length) measure on is given
by a smooth function g(z) such that
_ _

g(z)
_

1
2
ln

(z) (w)
(z) (w)

_
g(w)dzdw < . (4.17)
Then
_
D
(T ) d =
_

T((u))g(u)du
is a well-dened Gaussian centered random variable of variance given by (4.17). Moreover,
if we have two such measures
1
,
2
(with two curves
1
,
2
and two functions g
1
, g
2
), then
E
_
_
_

1
T((u))g
1
(u)du
_

_
_

2
T((u))g
2
(u)du
_
_
=
_ _

2
g
1
(z)
_

1
2
ln

(z) (w)
(z) (w)

_
g
2
(w)dzdw.
As a nal remark of this section, we note that the Gaussian Free Field is a conformally
invariant object: if is a conformal automorphism of H (i.e. a real Moebius transformation),
then the distributions of T and T are the same.
39
4.6 Identication of the limit object
The aim of this section is to interpret Theorem 4.1 as convergence of a random height function
to a pullback of the Gaussian Free Field.
Lemma 4.9. We have
u
u +a

u b
u b c
= q
1
+q
2
u b c
u +a
+q
3
u +a
u b c
,
where
q
1
=
2ac +ba +b
2
+bc
(a +b +c)
2
, q
2
=
(b +a)a
(a +b +c)
2
, q
3
=
c(b +c)
(a +b +c)
2
.
Proof. Straightforward computations.
Lemma 4.10. For a, b, c > 0 the functon
u
u
u +a

u b
u b c
(4.18)
is real on the circle with center
a(b+c)
ac
and radius

ac(a+b)(c+b)
[ac[
. When a = c, this circle
becomes a vertical line 1u = b/2.
Proof. Lemma 4.9 yields that expression (4.18) is real when
ubc
u+a
belongs to the circle of
radius
_
q
3
q
2
=

c(b +c)
a(b +a)
.
with center at 0. Reinterpreting this as a condition on u completes the proof.
We use Lemma 4.10 to give the following denition.
Denition 4.11. Dene the map from the subset D of [0, 1] [0, +] in (x,

N) plane
dened by the inequalities
C
1
(

N, ,

M) 2
_
C
2
(

N, ,

M) x C
1
(

N, ,

M) + 2
_
C
2
(

N, ,

M),
C
1
(

N, ,

M) =

N

M + (

N + )(

M + )
(

N + +

M)
2
,
C
2
(

N, ,

M) =

M(

M +)

N(

N +)
(

N + +

M)
4
,
to the upper half-plane H through the requirement that the horizontal section at height

N is
mapped to the upper half-plane part of the circle with center

N( +

M)

N

M
and radius
_

M(

M +)

N(

N +)
[

N

M[
(when

N =

M the circle is replaced by the vertical line 1(u) = /2) in such a way that point
u H is the image of point
_
u
u +

N

u
u

M
;

N
_
D.
40
Figure 4: The boundary of domain D (i.e. frozen boundary) for parameters (from left to
right)

M = 0.1, = 1;

M = 1, = 1;

M = 1, = 0.1
One easily shows that is a bijection between D and H . The boundary of
the set D for some values of parameters is shown in Figure 4.
Denition 4.12. Suppose that r 1
M
is distributed according to P
,M,
. For a pair of
numbers (x, N) [0, 1] R
>0
dene the height function 1(x, N) as the (random) number of
points r
]N|
i
which are less than x.
Theorem 4.13. Suppose that as our large parameter L , parameters M, grow linearly
in L:
M L

M, L ;

M > 0, > 0.
Then the centered random (with respect to measure P
,M,
) height function

(1(x, L

N) E1(x, L

N))
converges to the pullback of the Gaussian Free Field on H with respect to map of Denition
4.11 in the following sense: For any set of polynomials R
1
, . . . , R
k
C[x] and positive numbers

N
1
, . . . ,

N
k
, the joint distribution of
_
1
0
(1(x, L

N
i
) E1(x, L

N
i
))R
i
(x)dx, i = 1, . . . , k,
converges to the joint distribution of the similar averages of the pullback of GFF given by
_
1
0
T((x,

N
i
))R
i
(x)dx, i = 1, . . . , k.
Proof. We assume that

N
1


N
2


N
k
> 0 and set N
i
= L

N
i
|. Clearly, it suces to
consider monomials R
i
(x) = x
m
i
. For those we have
_
1
0
(1(x, L

N
i
) E1(x, L

N
i
))x
m
i
dx =
_
1
0
1(x, L

N
i
)x
m
i
dx E
_
1
0
1(x, L

N
i
)x
m
i
dx.
Integrating by parts, we get
_
1
0
1(x, L

N
i
)x
m
i
dx = 1(1, L

N
i
)
1
m
i
+1
m
i
+ 1

_
1
0
1
t
(x, L

N
i
)
x
m
i
+1
m
i
+ 1
dx
=
min(N
i
, M)
m
i
+ 1

min(N
i
,M)

j=1
(r
N
i
j
)
m
i
+1
m
i
+ 1
.
41
Therefore,
_
1
0
(1(x, L

N
i
) E1(x, L

N
i
))x
m
i
dx =
1
m
i
+ 1
_
p
m
i
+1
(N
i
; r) Ep
m
i
+1
(N
i
; r)
_
.
Applying Theorem 4.1, we conclude, that random variables

_
1
0
(1(x, L

N
i
) E1(x, L

N
i
))x
m
i
dx
are asymptotically Gaussian with the covariance of ith and jth (i < j) given by

(2i)
2
(m
i
+ 1)(m
j
+ 1)
_ _
du
1
du
2
(u
1
u
2
)
2

_
u
1
(u
1
+

N
i
)

u
1

(u
1


M)
_
m
i
+1
_
u
2
(u
2
+

N
j
)

u
2

(u
2


M)
_
m
j
+1
(4.19)
where contours are nested (u
1
is larger) and enclose the singularities at

N
i
,

N
j
. We claim
that we can deform the contours, so that u
1
is integrated over the circle with center

N
i
( +

M)

N
i


M
and radius


M(

M+)

N
i
(

N
i
+)
[

N
i


M[
, while u
2
is integrated over the circle with center

N
j
( +

M)

N
j


M
and radius


M(

M+)

N
j
(

N
j
+)
[

N
j


M[
. Indeed, if

N
i
,

N
j
< M, then the rst contour is larger and
both contours are to the left from the vertical line 1u = /2 and, thus, do not enclose the
singularity at +

M that could have potentially been an obstacle for the deformation. Cases

N
i
> M >

N
j
and

N
i
,

N
j
> M are similar and when

N
i
=

M or

N
j
=

M, then the circles
are replaced by vertical lines 1(u) = /2 as in Denition 4.11.
The top halves of the above two circles can be parameterized via images of the horizontal
segments with respect to the map ; to parameterize the whole circles we also need the
conjugates

. Hence, (4.19) transforms into

(2i)
2
(m
i
+ 1)(m
j
+ 1)
_ _
x
m
i
+1
1
x
m
j
+1
2
dx
1
dx
2
((x
1
,

N
i
) (x
2
,

N
j
))
2

x
1
(x
1
,

N
j
)

x
2
(x
2
,

N
j
)


(2i)
2
(m
i
+ 1)(m
j
+ 1)
_ _
x
m
i
+1
1
x
m
j
+1
2
dx
1
dx
2
((x
1
,

N
i
) (x
2
,

N
j
))
2

x
1
(x
1
,

N
j
)

x
2
(x
2
,

N
j
)


(2i)
2
(m
i
+ 1)(m
j
+ 1)
_ _
x
m
i
+1
1
x
m
j
+1
2
dx
1
dx
2
((x
1
,

N
i
) (x
2
,

N
j
))
2

x
1
(x
1
,

N
j
)

x
2
(x
2
,

N
j
)
+

(2i)
2
(m
i
+ 1)(m
j
+ 1)
_ _
x
m
i
+1
1
x
m
j
+1
2
dx
1
dx
2
((x
1
,

N
i
) (x
2
,

N
j
))
2

x
1
(x
1
,

N
j
)

x
2
(x
2
,

N
j
) (4.20)
with x
1
, x
2
integrated over the horizontal slices of domain D at heights

N
i
and

N
j
, respec-
tively. Integrating twice by parts in (4.20) and noting that boundary terms cancel out (since
is real at the ends of the integration interval) we arrive at

1
4
_ _
x
m
i
1
x
m
j
2
dx
1
dx
2
_
ln(((x
1
,

N
i
) (x
2
,

N
j
)))
ln(((x
1
,

N
i
) (x
2
,

N
j
))) ln(((x
1
,

N
i
) (x
2
,

N
j
))) + ln((x
1
,

N
i
) (x
2
,

N
j
))
_
.
(4.21)
42
Since we know that the expression (4.21) is real, the choice of branches of ln is irrelevant
here. Real parts in (4.21) give

1
2
_ _
ln

(x
1
,

N
i
) (x
2
,

N
j
)
(x
1
,

N
i
) (x
2
,

N
j
)

x
m
i
1
x
m
j
2
dx
1
dx
2
, (4.22)
which (by denition) is the desired covariance of the averages over the pullback of the Gaussian
Free Field with respect to .
An alternative way to write down the limit covariance is through the classical Chebyshev
polynomials:
Denition 4.14. Dene Chebyshev polynomials T
n
, n 0, of the rst kind through
T
n
_
z +z
1
2
_
=
z
n
+z
n
2
.
Equivalently,
T
n
(x) = cos(narccos(x)).
Proposition 4.15. Let r 1
M
be distributed according to P
,M,
and suppose that as the
large parameter L , parameters M, grow linearly in L:
M L

M, L ;

M > 0, 0.
If we set (with constant C
1
, C
2
given in Denition 4.11)

T

N, ,

M
n
(x) = T
n
_
_
x C
1
(

N, ,

M)
2
_
C
2
(

N, ,

M)
_
_
,
then the limiting Gaussian random variables
T
,

M
(n,

N) = lim
L
_
_
min(]L

N|,M)

i=1

T

N, ,

M
n
_
r
]L

N|
i
_
E
min(]L

N|,M)

i=1

T

N, ,

M
n
_
r
]L

N|
i
_
_
_
have the following covariance (

N
1


N
2
)
E
_
T
,

M
(n
1
,

N
1
)T
,

M
(n
2
,

N
2
)
_
=
n
1
!
4(n
2
1)!(n
1
n
2
)!

_

N
2


N
1

N
2
+ +

M


M( +

M)

N
1
( +

N
1
)
_
n
1
n
2
_
_
(

N
1
+ +

M)
_

N
2
( +

N
2
)
(

N
2
+ +

M)
_

N
1
( +

N
1
)
_
_
n
2
, (4.23)
in n
1
n
2
, and the covariance is 0 when n
1
< n
2
.
Remark 1. When

N
1
=

N
2
, the formula above simplies to
n
1
4

n
1
,n
2
, which agrees with
[DP, Theorem 1.2].
Remark 2. When both

N
1
and

N
2
are innitesimally small, the formula matches the
one from [B1, Proposition 3] proved there for = 1/2, 1 (the cases of GOE and GUE).
43
Proof of Proposition 4.15. Theorem 4.1 yields
E
_
T
,

M
(n
1
,

N
1
)T
,

M
(n
2
,

N
2
)
_
=

1
(2i)
2
_ _
du
1
du
2
(u
1
u
2
)
2
2

r=1
T

N
r
, ,

M
n
r
_
u
r
(u
r
+

N
r
)

u
r

(u
r


M)
_
. (4.24)
Using Lemma 4.9 and changing the variables
v
r
=


M( +

M)

N
r
( +

N
r
)

u
r
+

N
r
u
r


M
,
we have
u
r
(u
r
+

N
r
)

u
r

(u
r


M)
= C
1
(

N
r
, ,

M) + 2
_
C
2
(

N
r
, ,

M)
v
r
+v
1
r
2
.
Thus,
T

N
r
, ,

M
n
r
_
u
r
(u
r
+

N
r
)

u
r

(u
r


M)
_
=
v
n
r
r
+v
n
r
r
2
.
Also with the notation A
r
=
_

M( +

M)

N
r
( +

N
r
)
, B
r
=

N
r
, C = +

M, we have
v
r
= A
r
u
r
+B
r
u
r
C
= A
r
_
1 +
B
r
+C
u
r
C
_
,
A
r
(B
r
+C)
v
r
A
r
= u
r
C, u
r
= C +
A
r
(B
r
+C)
v
r
A
r
,
du
r
=
A
r
(B
r
+C)
(v
r
A
r
)
2
,
u
1
u
2
=
v
2
A
1
(B
1
+C) v
1
A
2
(B
2
+C) +A
1
A
2
(B
2
B
1
)
(v
1
A
1
)(v
2
A
2
)
.
In particular, if

N
1
=

N
2
, then A
1
= A
2
, B
1
= B
2
and
du
1
du
2
(u
1
u
2
)
2
=
dv
1
dv
2
(v
1
v
2
)
2
.
Therefore, for

N
1
=

N
2
(4.24) transforms into

1
(2i)
2
_ _
dv
1
dv
2
(v
1
v
2
)
2
v
n
1
1
+v
n
1
1
2

v
n
2
2
+v
n
2
2
2
with nested contours surrounding zero ( v
2
contour is smaller ). We get

1
(2i)
2
_ _
dv
1
dv
2
1
v
2
1
_
1 + 2
v
2
v
1
+ 3
_
v
2
v
1
_
2
+. . .
_
v
n
1
1
+v
n
1
1
2

v
n
2
2
+v
n
2
2
2
=

1
2i
_
dv
1
v
2
1
v
n
2
1
+v
n
2
1
2

n
1
2
v
1n
1
1
=

1
n
1
4

n
1
,n
2
.
If

N
1
>

N
2
, then (4.24) transforms into

1
(2i)
2
_ _
A
1
A
2
(B
1
+C)(B
2
+C)dv
1
dv
2
(v
2
A
1
(B
1
+C) v
1
A
2
(B
2
+C) +A
1
A
2
(B
2
B
1
))
2
v
n
1
1
+v
n
1
1
2

v
n
2
2
+v
n
2
2
2
,
44
where v
2
is integrated over a small circle containing the origin and v
1
over a large circle.
Since
1
(v
2
A
1
(B
1
+C) v
1
A
2
(B
2
+C) +A
1
A
2
(B
2
B
1
))
2
=
1 + 2
v
2
A
1
(B
1
+C)
v
1
A
2
(B
2
+C)A
1
A
2
(B
2
B
1
)
+ 3
_
v
2
A
1
(B
1
+C)
v
1
A
2
(B
2
+C)A
1
A
2
(B
2
B
1
)
_
2
+. . .
(v
1
A
2
(B
2
+C) A
1
A
2
(B
2
B
1
))
2
,
the integral over v
2
gives

1
n
2
8i
_
v
n
1
1
+v
n
1
1
(v
1
A
2
(B
2
+C) A
1
A
2
(B
1
B
2
))
n
2
+1
A
2
(B
2
+C) (A
1
(B
1
+C))
n
2
dv
1
(4.25)
with v
1
to be integrated over a large circle. If n
1
< n
2
, then this is 0 because of the decay of
the integrand at innity; otherwise we can use
1
(v
1
A
2
(B
2
+C) A
1
A
2
(B
2
B
1
))
n
2
+1
=
(1 A
1
B
2
B
1
B
2
+C
1
v
1
)
n
2
1
v
n
2
+1
1
A
n
2
+1
2
(B
2
+C)
n
2
+1
=
1 + (n
2
+ 1)A
1
B
2
B
1
B
2
+C
1
v
1
+
(n
2
+1)(n
2
+2)
2!
_
A
1
B
2
B
1
B
2
+C
1
v
1
_
2
+. . .
v
n
2
+1
1
A
n
2
+1
2
(B
2
+C)
n
2
+1
and (4.25) gives

1
4
n
2
(n
2
+ 1) n
1
(n
1
n
2
)!
_
A
1
B
2
B
1
B
2
+C
_
n
1
n
2
_
A
1
(B
1
+C)
A
2
(B
2
+C)
_
n
2
.
5 Appendix: CLT as
Throughout this section parameters and M of measure P
;M;
are xed, while changes.
We aim to study what happens when .
Let
N
be Jacobi orthogonal polynomial of degree min(N, M) corresponding to the weight
function x
1
(1 x)
[MN[
, see e.g. [Sz], [KLS] for the general information on Jacobi polyno-
mials.
N
has min(N, M) real roots on the interval (0, 1) that we enumerate in the increasing
order. Let j
N
i
denote the ith root of
N
.
Theorem 5.1. Let r 1
M
be distributed according to P
;M;
of Denition 2.6. As ,
r
N
i
converges (in probability) to j
N
i
. Moreover, the random vector

(r
N
i
j
N
i
), i = 1, . . . , min(N, M), N = 1, 2, . . . ,
converges (in nitedimensional distributions) to a centered Gaussian random vector
N
i
,
i = 1, . . . , min(N, M), N = 1, 2, . . . .
We do not have any simple formulas for the covariance of
N
i
. Some formulas, in
principle, could be obtained from our argument below, see also [DE2] where a dierent form
of the covariance (for singlelevel distribution for the Hermite and Laguerre ensembles which
are degenerations of the Jacobi ensemble) is given.
In the rest of this section we give a sketch of the proof of Theorem 5.1.
We start by proving that vector

_
e
k
(N; r) Ee
k
(N; r)
_
, k = 1, . . . , min(N, M), N =
1, 2, . . . , is asymptotically Gaussian. The proof is another application of Lemma 4.2 and is
45
similar to that of Theorem 4.1. Our starting point is Theorem 2.10 which expresses joint
moments of random variables e
k
(N, r) in terms of applications of operators T
k
N
. Proposition
3.4 gives an expansion of T
k
N
in terms of integral operators. We further dene formal random
variables e
k
(N)s, s S
k
, through their joint moments given by (here N
1
N
2

N
m
1)
E
_
m

i=1
e
k
i
(N
i
)s
i

_
=
m

i=1

[s
i
[
TJ
s
i
N
i
_
N
1

i=1
H(y
i
; , M)
_

N
1
i=1
H(y
i
; , M)
y
i
=(1i)
. (5.1)
Clearly, the joint distribution of e
k
(N, r) (understood in the sense of moments) coincides with
that of the sums
e
k
(N, r)
(1)
k
k!

s=(S
1
,... )S
k
_
_
(1)
k(s)

j
([S
j
[ 1)!
_
_
e
k
(N)s. (5.2)
Lemma 5.2. Random variables e
k
(N)s satisfy the assumptions of Lemma 4.2 with =
(L) = L (parameters M, N and do not depend on L here), with = 1 and coecients
c
L
(k; N; j) corresponding to e
k
(N) being of order 1 as L .
Proof. This follows from Proposition 3.6. After change of variables w
j
= Lu
j
the integrand
in Denition 3.1 converges to an analytic limit, while the prefactor L
[s
i
[
arising as we change
variables, cancels with
[s
i
[
in the denition of variables e
k
(N)s. As for the cross-terms,
after our change of variables they behave like 1 +O(L
1
).
Now Lemma 4.2 implies that random vector

(e
k
(N)s Ee
k
(N)s) converges (in
the sense of moments) to the centered Gaussian vector e
k
(N)s (in the sense that the limit
moments satisfy the Wick formula). Therefore,

(e
k
(N; r) Ee
k
(N; r)) converges (in the
sense of moments) to the limit Gaussian vector e
k
(N) such that
e
k
(N) =
(1)
k
k!

s=(S
1
,... )S
k
_
_
(1)
k(s)

j
([S
j
[ 1)!
_
_
e
k
(N)s. (5.3)
One can also compute the covariance of e
k
(N)s (thus also of e
k
(N), k = 1, . . . , N) similarly
to the covariance computation in the proof of Theorem 4.1.
Let us now explain that the Cental Limit Theorem for e
k
(N; r) implies the Central Limit
Theorem for r
i
j
.
Lemma 5.3. For any N 1 and k min(N, M) the expectation E(e
k
(N; r)) does not depend
on .
Proof. Observe that E(e
k
(N; r)) can be computed using formulas (5.2) and (5.1). When we
change the variables w
j
= Lu
j
in integral representations for the operators TJ
s
N
we nd that
the resulting expression does not depend on .
Corollary 5.4. For any N 1 and 1 i min(N, M), r
N
i
converges (in probability) as
to a deterministic limit.
Proof. Since moments of

(e
k
(N; r) E(e
k
(N; r)) converge to those of a Gaussian random
variable, and E(e
k
(N; r)) does not depend on , e
k
(N; r) E(e
k
(N; r)). It remains to note
that r
N
1
, . . . , r
N
min(N,M)
can be reconstructed as roots of polynomial
min(N,M)

i=1
(x r
N
i
) =
min(N,M)

j=0
x
min(N,M)j
(1)
j
e
j
(N; r),
46
therefore, they converge to the roots of polynomial
min(N,M)

j=0
x
min(N,M)j
(1)
j
Ee
j
(N; r).
Another proof of Corollary 5.4. There is another way to see that as the random
vector r 1
M
distributed according to P
;M;
exhibits a law of large numbers. Indeed, (2.9)
implies that as the numbers r
N
1
, . . . , r
N
min(N,M)
(for foxed N) concentrate near the
vector z
1
< z
2
< < z
min(N,M)
which maximizes

1i<jmin(N,M)
(z
j
z
i
)
min(N,M)

i=1
z
/2
i
(1 z
i
)
[MN[/2+1/2
(5.4)
The maximum of (5.4) is known to be unique, and the minimizing conguration is precisely
the set of roots of Jacobi orthogonal polynomial of degree min(N, M) corresponding to the
weight function x
1
(1 x)
[MN[
. This statement dates back to the work of T. Stieltjes, cf.
[Sz, Section 6.7], [Ker].
Now we are ready to prove that

(r
N
i
j
N
i
) converges to a Gaussian vector. For any
K 1 consider the map

K
: W
K
R
K
:
K
(x
1
, . . . , x
K
) = (e
1
(x
1
, . . . , x
K
), . . . , e
K
(x
1
, . . . , x
K
)),
where W
K
is the Weyl chamber of rank K
W
K
= x
1
x
2
x
K
.
Observe that
K
is invertible (x
1
, . . . , x
K
are reconstructed as roots of polynomial

K
i=1
(x
x
i
), whose coecients are (1)
k
e
k
), moreover,
K
is a (local) dieomorphism in the neigh-
borhood of every point in the interior of W
K
. For x
1
< x
2
< < x
k
let [x
1
, . . . , x
K
]
denote the dierential of
1
near the point (x
1
, . . . , x
k
) (e
1
, . . . , e
k
). [x
1
, . . . , x
K
] is a
linear map which can be represented via KK matrix, a straightforward computation gives
an explicit formula for this matrix.
For small enough numbers x
1
,. . . x
k
we have
(x
1
+ x
1
, . . . , x
K
+ x
K
)
= (x
1
, . . . , x
K
) +[x
1
, . . . , x
K
](e
1
, . . . , e
K
) +o
_
_
(e
1
)
2
+ + (e
K
)
2
_
, (5.5)
where e
i
are dened through
(e
1
(x
1
, . . . , x
K
) + e
1
, . . . , e
k
(x
1
, . . . , x
K
) + e
K
) =
K
(x
1
+ x
1
, . . . , x
K
+ x
K
).
Now take in (5.5) K = min(N, M), x
i
= j
N
i
, x
i
= r
N
i
j
N
i
and send . Corol-
lary 5.4 implies that the vectors (x
1
, . . . , x
K
) and (e
1
, . . . , e
K
) tend to zero vector in
probability. Therefore, with probability tending to 1 the estimate of the remainder in (5.5)
becomes uniform. Now Central Limit Theorem for

(e
1
, . . . , e
K
) implies through (5.5)
the Central Limit Theorem (in the sense of weak limits) for

(x
1
, . . . , x
K
).
47
6 Appendix: Heckman-Opdam hypergeometric functions.
Let

GT
N
denote the set of all decreasing Ntuples of non-negative reals. Our notation is
explained by the fact that the limiting transition below realize this set as a scaling limit of
GT
+
N
from the previous sections.
Denition 6.1. For any r

GT
N
and > 0 the function T
r
(y
1
, . . . , y
N
; ) is dened through
integral representation:
T
r
(y
1
, . . . , y
N
; ) =
1
()
N(N1)/2
_

N
=r
exp
_
N

k=1
y
k
_
k

i=1

k
i

k1

i=1

k1
i
_
_

i<j
_
exp(
N
i
) exp(
N
j
)
_
1
N1

k=1
_

1i<jk
_
exp(
k
i
) exp(
k
j
)
_
22

a=1
k+1

b=1

exp(
k
a
) exp(
k+1
b
)

1
k

i=1
_
exp(( 1)
k
i
)d
k
i
_
_
, (6.1)
where the integration goes over the Lebesgue measure on N(N 1)/2 dimensional polytope

1

2

N
= r, each
k
is kdimensional vector
k
1
>
k
2
> >
k
k
, and all the
coordinates interlace, i.e.
k
i
>
k1
i
>
k
i+1
for all meaningful k and i.
We are interested in functions T
r
because they are limits of Macondald polynomials
P

(x
1
, . . . , x
N
; q, t) in limit regime of Theorem 2.8.
Proposition 6.2. Suppose that
t = q

, q = exp(), =
1
(r
1
, . . . , r
N
)|, x
i
= exp(y
i
), (6.2)
where r
1
> r
2
> > r
N
. Then there exists a limit
T
r
(y
1
, . . . , y
N
; ) = lim
0

N(N1)/2
P

(x
1
, . . . , x
N
; q, t),
and this limit is uniform over r and x belonging to compact sets.
Sketch of the proof. Induction in N. The combinatorial formula (i.e. branching rule, see [M,
Chapter VI]) for Macdonald polynomials yields
P

(x
1
, . . . , x
N
; q, t) =

/
(x
N
)P

(x
1
, . . . , x
N1
; q, t), (6.3)
where (with notation f(u) =
(tu;q)

(qu;q)

/
(x) = x
[[[[
f(1)
N1

1i<j<N
f(q

j
t
ji
)

1ij<N
f(q

j+1
t
ji
)
f(q

j+1
t
ji
)f(q

j
t
ji
)
.
Suppose that as 0, =
1
(m
1
, . . . , m
N1
). Then by Lemma 2.4 and 2.5
x
[[[[
exp(y([r[ [m[)), f(1)

1
()
, f(q

j
t
ji
) (1 exp((m
i
m
j
)))
1
,
f(q

j+1
t
ji
)
f(q

j+1
t
ji
)f(q

j
t
ji
)

_
1 exp((r
i
r
j+1
))
_
1 exp((m
i
r
j+1
))
__
1 exp((r
i
m
j
))
_
_
1
.
48
Therefore,

/
(x) g
r/m
(y)
(N1)(1)
,
where
g
r/m
(y) =
exp(y([r[ [m[))
()
N1

1i<j<N
(1 exp((m
i
m
j
)))
1

1ij<N
_
1 exp((r
i
r
j+1
))
_
1 exp((m
i
r
j+1
))
__
1 exp((r
i
m
j
))
_
_
1
(6.4)
As 0 the summation in (6.3) turns into a Riemannian sum (with step ) for an integral.
Therefore, omitting uniformity estimates,
lim
0

N(N1)/2
P

(x
1
, . . . , x
N
; q, t) =
_
mr
g
r/m
(y
N
)T
m
(y
1
, . . . , y
N1
; ).
There are several ways to link the functions $\mathcal{T}_r$ to known objects. One way is through the fact that these functions are analytic continuations (in the index) of Jack polynomials $J_\lambda(\,\cdot\,; \theta)$ (see e.g. [M, Chapter VI, Section 10]), as seen from the formula
$$\frac{\mathcal{T}_r(-\lambda_1 - \theta(n-1), \ldots, -\lambda_{n-1} - \theta, -\lambda_n; \theta)}{\mathcal{T}_r(-\theta(n-1), \ldots, -\theta, 0; \theta)} = \frac{J_\lambda(\exp(-r_1), \ldots, \exp(-r_n); \theta)}{J_\lambda(1, \ldots, 1; \theta)}, \quad (6.5)$$
which is a limit of the well-known argument-index symmetry relation, cf. [M, Chapter VI, (6.6)],
$$\frac{P_\lambda(q^{\mu_1} t^{n-1}, \ldots, q^{\mu_n} t^0; q, t)}{P_\lambda(t^{n-1}, \ldots, 1; q, t)} = \frac{P_\mu(q^{\lambda_1} t^{n-1}, \ldots, q^{\lambda_n} t^0; q, t)}{P_\mu(t^{n-1}, \ldots, 1; q, t)}$$
for Macdonald polynomials.
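As a minimal check of (6.5) (ours): for $n = 1$ the left-hand side equals $\mathcal{T}_r(-\lambda_1; \theta) / \mathcal{T}_r(0; \theta) = e^{-\lambda_1 r_1}$, while the right-hand side is $J_\lambda(e^{-r_1}; \theta) / J_\lambda(1; \theta) = e^{-\lambda_1 r_1}$ as well.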
Another way is through the connection to the Calogero–Sutherland Hamiltonian:
Proposition 6.3. For every $y_1, \ldots, y_N \in \mathbb{C}$, $\mathcal{T}_r(y_1, \ldots, y_N; \theta)$ is an eigenfunction of the differential operator (acting in the $r_i$'s)
$$\sum_{i=1}^{N} \left( \frac{\partial}{\partial r_i} \right)^2 + \sum_{i<j} \frac{\theta(1-\theta)}{2 \sinh^2\!\big( \frac{r_i - r_j}{2} \big)} \quad (6.6)$$
with eigenvalue $\sum_{i=1}^{N} (y_i)^2$.
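A quick sanity check (ours): at $\theta = 1$ the interaction term in (6.6) vanishes, and the $N = 2$ example written after Definition 6.1 gives $\mathcal{T}_{(r_1, r_2)}(y_1, y_2; 1) = \frac{e^{y_1 r_1 + y_2 r_2} - e^{y_1 r_2 + y_2 r_1}}{y_1 - y_2}$; each of the two exponentials is an eigenfunction of $\big( \frac{\partial}{\partial r_1} \big)^2 + \big( \frac{\partial}{\partial r_2} \big)^2$ with eigenvalue $y_1^2 + y_2^2$, as claimed.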
Sketch of the proof. The Pieri formula for the Macdonald polynomials (see [M, Chapter VI, Section 6]) yields
$$P_\lambda(x_1, \ldots, x_N; q, t)\, e_1(x_1, \ldots, x_N) = P_\lambda(x_1, \ldots, x_N; q, t)\, (x_1 + \cdots + x_N)$$
$$= \sum_{\nu = \lambda + (a,b)} \prod_{i=1}^{a-1} \frac{(1 - q^{\lambda_i - \lambda_a - 1} t^{a-i+1})(1 - q^{\lambda_i - \lambda_a} t^{a-i-1})}{(1 - q^{\lambda_i - \lambda_a} t^{a-i})(1 - q^{\lambda_i - \lambda_a - 1} t^{a-i})}\, P_\nu(x_1, \ldots, x_N; q, t), \quad (6.7)$$
where the summation goes over the Young diagrams $\nu$ which differ from $\lambda$ by adding a single box $(a, b)$ (in row $a$ and column $b$). In the limit regime of Proposition 6.2, $P_\lambda$ turns into $\mathcal{T}_m$, where $\lambda_i = \lfloor m_i \varepsilon^{-1} \rfloor$, and when $\nu = \lambda + (a, b)$, $P_\nu$ turns into
$$\mathcal{T}_m + \varepsilon\, \frac{\partial}{\partial m_a} \mathcal{T}_m + \frac{\varepsilon^2}{2} \left( \frac{\partial}{\partial m_a} \right)^2 \mathcal{T}_m + \cdots.$$
On the other hand,
$$\frac{(1 - q^{\lambda_i - \lambda_a - 1} t^{a-i+1})(1 - q^{\lambda_i - \lambda_a} t^{a-i-1})}{(1 - q^{\lambda_i - \lambda_a} t^{a-i})(1 - q^{\lambda_i - \lambda_a - 1} t^{a-i})} = 1 + \varepsilon^2\, \theta(1-\theta)\, \frac{\exp(m_a - m_i)}{\big( 1 - \exp(m_a - m_i) \big)^2} + O(\varepsilon^3).$$
Therefore, up to terms of order $\varepsilon^3$, the identity (6.7) gives (omitting $(y_1, \ldots, y_N)$ from the argument of $\mathcal{T}_m$)
$$\left( N + \varepsilon \sum_{i=1}^{N} y_i + \frac{\varepsilon^2}{2} \sum_{i=1}^{N} (y_i)^2 \right) \mathcal{T}_m = \sum_{i=1}^{N} \left( \mathcal{T}_m + \varepsilon\, \frac{\partial}{\partial m_i} \mathcal{T}_m + \frac{\varepsilon^2}{2} \left( \frac{\partial}{\partial m_i} \right)^2 \mathcal{T}_m \right) + \varepsilon^2 \sum_{i<j} \frac{\theta(1-\theta) \exp(m_j - m_i)}{\big( 1 - \exp(m_j - m_i) \big)^2}\, \mathcal{T}_m. \quad (6.8)$$
Note that on the level of a formal computation one could just compare the coefficients of $\varepsilon^2$ on both sides of (6.8) and get the right answer. However, in order to give a proof, one needs to subtract from (6.8) certain operators which will cancel the low order terms. For that we also need another identity for Macdonald polynomials, namely
$$P_\lambda(x_1, \ldots, x_N; q, t)\, e_N(x_1, \ldots, x_N) = P_\lambda(x_1, \ldots, x_N; q, t)\, x_1 \cdots x_N = P_{\lambda + 1^N}(x_1, \ldots, x_N; q, t).$$
As $\varepsilon \to 0$, up to the terms of order $\varepsilon^3$ this gives
$$\left( 1 + \varepsilon \sum_{i=1}^{N} y_i + \frac{\varepsilon^2}{2} \Big( \sum_{i=1}^{N} y_i \Big)^2 \right) \mathcal{T}_m = \mathcal{T}_m + \varepsilon \sum_{i=1}^{N} \frac{\partial}{\partial m_i} \mathcal{T}_m + \frac{\varepsilon^2}{2} \Big( \sum_{i=1}^{N} \frac{\partial}{\partial m_i} \Big)^2 \mathcal{T}_m. \quad (6.9)$$
Combining (6.8) and (6.9) we conclude that the leading order (in $\varepsilon$) of the operator of multiplication by
$$\frac{2}{\varepsilon^2} \left( e_1 - N - (e_N - 1) + \frac{1}{2}(e_N - 1)^2 \right)$$
gives
$$\Big( \sum_{i=1}^{N} y_i^2 \Big)\, \mathcal{T}_m = \sum_{i=1}^{N} \left( \frac{\partial}{\partial m_i} \right)^2 \mathcal{T}_m + \sum_{i<j} \frac{2\theta(1-\theta) \exp(m_j - m_i)}{\big( 1 - \exp(m_j - m_i) \big)^2}\, \mathcal{T}_m.$$
In principle, neither (6.5) nor (6.6) uniquely defines the functions $\mathcal{T}_m$. Indeed, the analytic continuation in (6.5) might fail to be unique, while some of the eigenvalues in (6.6) coincide. However, with additional arguments one shows that $\mathcal{T}_m(y_1, \ldots, y_N)$ can be identified with the Heckman–Opdam hypergeometric functions for the root system of type $A$, see [HO], [Op], [HS].
Yet another link to well-understood objects is obtained through the limit $\theta \to \infty$. Straightforward computations show that in this limit transition (6.1) converges to the Givental integral formula [Gi] for $\mathfrak{gl}_N$ Whittaker functions. From a different point of view, the convergence of Heckman–Opdam hypergeometric functions to Whittaker functions is established in [Shi].
If some of the $r_i$'s coincide, then the integral in (6.1) gives identical zero. But if we do a rescaling, then we still get a nontrivial object, which is also a limit of Macdonald polynomials. One example is given in the following proposition.
Proposition 6.4. Let $M \ge N$ and suppose that
$$t = q^\theta, \quad q = \exp(-\varepsilon), \quad \lambda = \lfloor \varepsilon^{-1}(r_1, \ldots, r_N, 0, \ldots, 0) \rfloor, \quad x_i = \exp(\varepsilon y_i), \quad (6.10)$$
where $r_1 > r_2 > \cdots > r_N > 0$ and $\lambda \in \mathbb{GT}^+_M$. Then there exists a limit
$$\mathcal{T}_r(y_1, \ldots, y_M; \theta) = \lim_{\varepsilon \to 0} \varepsilon^{\theta(N(N-1)/2 + N(M-N))}\, P_\lambda(x_1, \ldots, x_M; q, t),$$
and this limit is uniform over $r$ and $x$ belonging to compact sets.
Sketch of the proof. Induction in $M$. When $M = N$ the statement coincides with that of Proposition 6.2. For $M > N$ we again use the branching rule:
$$P_\lambda(x_1, \ldots, x_M; q, t) = \sum_{\mu \prec \lambda} \psi_{\lambda/\mu}(x_M)\, P_\mu(x_1, \ldots, x_{M-1}; q, t), \quad (6.11)$$
where we note that since $\lambda$ has $N$ non-zero coordinates, $\mu$ also has $N$ non-zero coordinates. In other words, the summation is $N$-dimensional. The asymptotics of $\psi_{\lambda/\mu}(x)$ also changes because of the zeros in $\lambda$ and $\mu$. We have
$$\psi_{\lambda/\mu}(x) = x^{|\lambda| - |\mu|}\, f(1)^{N} \prod_{1 \le i < j < N+1} f(q^{\mu_i - \mu_j} t^{j-i}) \prod_{1 \le i \le j < N+1} \frac{f(q^{\lambda_i - \lambda_{j+1}} t^{j-i})}{f(q^{\mu_i - \lambda_{j+1}} t^{j-i})\, f(q^{\lambda_i - \mu_j} t^{j-i})}.$$
Thus,
$$\psi_{\lambda/\mu}(x) \approx g_{r/m}(y)\, \varepsilon^{N(1-\theta)},$$
where $g_{r/m}$ is as in Proposition 6.2, but with the agreement that $r$ is of length $N+1$ with the last coordinate equal to zero. As $\varepsilon \to 0$ the summation in (6.11) turns into a Riemann sum (with step $\varepsilon$) for an integral. Therefore,
$$\lim_{\varepsilon \to 0} \varepsilon^{\theta(N(N-1)/2 + N(M-N))}\, P_\lambda(x_1, \ldots, x_M; q, t) = \int_{m \prec r} g_{r/m}(y_M)\, \mathcal{T}_m(y_1, \ldots, y_{M-1}; \theta)\, dm, \quad (6.12)$$
where $r$ is thought of as having length $N+1$ with the last coordinate equal to zero and $m$ has length $N$.
Similarly, one can pass to the limit in the Macdonald $Q$-polynomials (which differ from the $P$-polynomials by multiplication by an explicit constant, see [M, Chapter VI]).
Proposition 6.5. Let $M \ge N$ and suppose that
$$t = q^\theta, \quad q = \exp(-\varepsilon), \quad \lambda = \lfloor \varepsilon^{-1}(r_1, \ldots, r_N, 0, \ldots, 0) \rfloor, \quad x_i = \exp(\varepsilon y_i), \quad (6.13)$$
where $r_1 > r_2 > \cdots > r_N > 0$ and $\lambda$ is a signature of size $M$. Then there exists a limit
$$\widehat{\mathcal{T}}_r(y_1, \ldots, y_M; \theta) = \lim_{\varepsilon \to 0} \varepsilon^{N(\theta-1) + \theta(N(N-1)/2 + N(M-N))}\, Q_\lambda(x_1, \ldots, x_M; q, t),$$
and this limit is uniform over $r$ and $x$ belonging to compact sets.
The functions $\mathcal{T}_r$ inherit various properties of the Macdonald polynomials. Let us summarize some of those (we agree that a $\mathcal{T}$-function vanishes when some of the indexing coordinates coincide).
Proposition 6.6. The functions $\mathcal{T}_r$ and $\widehat{\mathcal{T}}_r$ have the following properties:
I. $\mathcal{T}_r$ and $\widehat{\mathcal{T}}_r$ are symmetric functions of their arguments.
II. Homogeneity:
$$\mathcal{T}_r(y_1 + A, \ldots, y_N + A; \theta) = \exp(A|r|)\, \mathcal{T}_r(y_1, \ldots, y_N; \theta), \qquad \widehat{\mathcal{T}}_r(y_1 + A, \ldots, y_N + A; \theta) = \exp(A|r|)\, \widehat{\mathcal{T}}_r(y_1, \ldots, y_N; \theta).$$
III. Cauchy-type identity: take $N$ parameters $a_1, \ldots, a_N$ and $M$ parameters $b_1, \ldots, b_M$ such that $a_i + b_j < 0$ for all $i, j$. Then
$$\int_{r \in \widehat{\mathbb{GT}}_{\min(N,M)}} \widehat{\mathcal{T}}_r(a_1, \ldots, a_N; \theta)\, \mathcal{T}_r(b_1, \ldots, b_M; \theta) \prod_{i=1}^{\min(N,M)} dr_i = \prod_{i,j} \frac{\Gamma(-a_i - b_j)}{\Gamma(\theta - a_i - b_j)}.$$
IV. Principal specialization: let $r \in \widehat{\mathbb{GT}}_N$ and $M \ge N$; then
$$\mathcal{T}_r(0, -\theta, \ldots, \theta(1-M); \theta) = \prod_{\substack{i \le N,\ j \le M \\ i < j}} \frac{\Gamma(\theta(j-i))}{\Gamma(\theta(j-i+1))} \prod_{i<j\le N} \big( e^{-r_j} - e^{-r_i} \big)^{\theta} \prod_{i=1}^{N} \big( 1 - e^{-r_i} \big)^{\theta(M-N)} \quad (6.14)$$
and
$$\widehat{\mathcal{T}}_r(0, -\theta, \ldots, \theta(1-M); \theta) = \frac{1}{\Gamma(\theta)^{N}} \prod_{\substack{i \le N,\ j \le M \\ i < j}} \frac{\Gamma(\theta(j-i))}{\Gamma(\theta(j-i+1))} \prod_{i<j\le N} \big( e^{-r_j} - e^{-r_i} \big)^{\theta} \prod_{i=1}^{N} \big( 1 - e^{-r_i} \big)^{\theta(M-N) + (\theta-1)}. \quad (6.15)$$
V. Difference operators: for any subset $I \subset \{1, \ldots, N\}$ define
$$B_I(y_1, \ldots, y_N; \theta) = \prod_{i \in I} \prod_{j \notin I} \frac{\theta + y_i - y_j}{y_i - y_j}.$$
Define the shift operator $T_i$ through
$$[T_i f](y_1, \ldots, y_N) = f(y_1, \ldots, y_{i-1}, y_i + 1, y_{i+1}, \ldots, y_N).$$
For any $k \le N$ define the $k$th difference operator $D^k_N$, acting on symmetric functions in the variables $y_1, \ldots, y_N$, through
$$D^k_N = \sum_{|I| = k} B_I(y_1, \ldots, y_N) \prod_{i \in I} T_i.$$
Then for any $r \in \widehat{\mathbb{GT}}_n$ with $n \le N$ we have
$$D^k_N\, \mathcal{T}_r(y_1, \ldots, y_N; \theta) = e_k\big( \exp(r_1), \ldots, \exp(r_n), \underbrace{1, \ldots, 1}_{N-n} \big)\, \mathcal{T}_r(y_1, \ldots, y_N; \theta),$$
where $e_k$ is the $k$th elementary symmetric polynomial (in $N$ variables).
Proof. All the properties are straightforward limits of similar statements for the Macdonald polynomials. For instance, III is a limit of the Cauchy identity for Macdonald polynomials
$$\sum_{\lambda:\, \ell(\lambda) \le \min(N,M)} P_\lambda(x_1, \ldots, x_N; q, t)\, Q_\lambda(y_1, \ldots, y_M; q, t) = \prod_{i,j} \frac{(t x_i y_j; q)_\infty}{(x_i y_j; q)_\infty}, \quad (6.16)$$
IV is a limit of the evaluation formula for Macdonald polynomials
$$P_\lambda(1, t, \ldots, t^{M-1}; q, t) = t^{\sum_i (i-1)\lambda_i} \prod_{i<j\le M} \frac{(q^{\lambda_i - \lambda_j} t^{j-i}; q)_\infty\, (t^{j-i+1}; q)_\infty}{(q^{\lambda_i - \lambda_j} t^{j-i+1}; q)_\infty\, (t^{j-i}; q)_\infty}$$
$$= t^{\sum_i (i-1)\lambda_i} \prod_{i<j\le N} \frac{(q^{\lambda_i - \lambda_j} t^{j-i}; q)_\infty}{(q^{\lambda_i - \lambda_j} t^{j-i+1}; q)_\infty} \prod_{i=1}^{N} \prod_{j=N+1}^{M} \frac{(q^{\lambda_i} t^{j-i}; q)_\infty}{(q^{\lambda_i} t^{j-i+1}; q)_\infty} \prod_{\substack{i \le N,\ j \le M \\ i < j}} \frac{(t^{j-i+1}; q)_\infty}{(t^{j-i}; q)_\infty}$$
and the formula
$$Q_\lambda = b_\lambda P_\lambda, \qquad b_\lambda = \prod_{1 \le i \le j \le \ell(\lambda)} \frac{f(q^{\lambda_i - \lambda_j} t^{j-i})}{f(q^{\lambda_i - \lambda_{j+1}} t^{j-i})}.$$
Finally, V is a limit of the Macdonald difference operators.
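The eigenrelation in V is easy to test numerically. The following is a minimal sketch (ours, not from the paper) that checks $D^1_2\, \mathcal{T}_r = e_1(e^{r_1}, e^{r_2})\, \mathcal{T}_r$ for $N = n = 2$, with $\mathcal{T}_r$ computed from the $N = 2$ case of (6.1) written out after Definition 6.1; the parameter values are arbitrary, and the shift $y_i \mapsto y_i + 1$ and the sign of $\theta + y_i - y_j$ in $B_I$ are the conventions fixed above.

# Numerical sanity check (ours) of Proposition 6.6.V for N = n = 2:
# D_2^1 T_r(y_1, y_2; theta) = (exp(r_1) + exp(r_2)) T_r(y_1, y_2; theta),
# with T_r evaluated via the N = 2 case of the integral representation (6.1).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

theta = 1.5          # Jack parameter theta > 0 (arbitrary test value)
r1, r2 = 1.3, 0.4    # a point r = (r_1, r_2) with r_1 > r_2

def T(y1, y2):
    """N = 2 case of (6.1): one integral over the interlacing segment r_2 < l < r_1."""
    pref = (np.exp(r1) - np.exp(r2)) ** (1 - theta) / gamma(theta)
    def integrand(l):
        return (np.exp(y1 * l + y2 * (r1 + r2 - l))
                * ((np.exp(r1) - np.exp(l)) * (np.exp(l) - np.exp(r2))) ** (theta - 1)
                * np.exp((1 - theta) * l))
    val, _ = quad(integrand, r2, r1)
    return pref * val

def D1(f, y1, y2):
    """The difference operator D_2^1 from property V (shift y_i -> y_i + 1)."""
    return ((theta + y1 - y2) / (y1 - y2) * f(y1 + 1, y2)
            + (theta + y2 - y1) / (y2 - y1) * f(y1, y2 + 1))

y1, y2 = 0.8, -0.3
lhs = D1(T, y1, y2)
rhs = (np.exp(r1) + np.exp(r2)) * T(y1, y2)  # eigenvalue e_1(exp(r_1), exp(r_2))
print(lhs, rhs)  # the two numbers should agree up to quadrature error

Under these conventions the two printed numbers agree up to quadrature accuracy; flipping the direction of the shift or the sign of $\theta$ in $B_I$ breaks the agreement, which is a convenient way to pin down the conventions.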
References
[A] G. W. Anderson, A short proof of Selberg's generalized beta formula, Forum Math. 3 (1991), 415–417.
[AAR] G. E. Andrews, R. Askey, and R. Roy, Special Functions, Encyclopedia of Mathematics and Its Applications, Cambridge University Press, Cambridge, 1999.
[AGZ] G. W. Anderson, A. Guionnet, O. Zeitouni, An Introduction to Random Matrices, Cambridge University Press, 2010.
[AHM1] Y. Ameur, H. Hedenmalm, N. Makarov, Fluctuations of eigenvalues of random normal matrices, Duke Math. J. 159 (2011), 31–81, arXiv:0807.0375.
[AHM2] Y. Ameur, H. Hedenmalm, N. Makarov, Random normal matrices and Ward identities, arXiv:1109.5941.
[AMW] M. Adler, P. van Moerbeke, D. Wang, Random matrix minor processes related to percolation theory, arXiv:1301.7017.
[Bar] Y. Baryshnikov, GUEs and queues, Probability Theory and Related Fields 119, no. 2 (2001), 256–274.
[B1] A. Borodin, CLT for spectra of submatrices of Wigner random matrices, arXiv:1010.0898.
[B2] A. Borodin, CLT for spectra of submatrices of Wigner random matrices II. Stochastic evolution, arXiv:1011.3544.
[BB] A. Borodin, A. Bufetov, Plancherel representations of U(∞) and correlated Gaussian Free Fields, arXiv:1301.0511.
[BC] A. Borodin, I. Corwin, Macdonald processes, to appear in Probability Theory and Related Fields, arXiv:1111.4408.
[BCGS] A. Borodin, I. Corwin, V. Gorin, S. Shakirov, Observables of Macdonald processes, in preparation.
[BF] A. Borodin, P. Ferrari, Anisotropic growth of random surfaces in 2+1 dimensions, to appear in Communications in Mathematical Physics, arXiv:0804.3035.
[BG] A. Borodin, V. Gorin, Lectures on Integrable Probability, arXiv:1212.3351.
[BP] A. Borodin, S. Péché, Airy kernel with two sets of parameters in directed percolation and random matrix theory, Journal of Statistical Physics 132, no. 2 (2008), 275–290, arXiv:0712.1086.
[CJY] S. Chhita, K. Johansson, B. Young, Asymptotic domino statistics in the Aztec diamond, arXiv:1212.5414.
[C] B. Collins, Product of random projections, Jacobi ensembles and universality problems arising from free probability, Probability Theory and Related Fields 133, no. 3 (2005), 315–344, arXiv:math/0406560.
[DW] A. B. Dieker, J. Warren, On the largest-eigenvalue process for generalized Wishart random matrices, ALEA 6 (2009), 369–376, arXiv:0812.1504.
[Di] A. L. Dixon, Generalizations of Legendre's formula KE′ − (K − E)K′ = ½π, Proc. London Math. Soc. 3 (1905), 206–224.
[Dub] J. Dubédat, SLE and the free field: Partition functions and couplings, Journal of the American Mathematical Society 22, no. 4 (2009), 995–1054, arXiv:0712.3018.
[Due] E. Dueñez, Random Matrix Ensembles Associated to Compact Symmetric Spaces, Commun. Math. Phys. 244 (2004), 29–61, arXiv:math-ph/0111005.
[Dui] M. Duits, The Gaussian free field in an interlacing particle system with two jump rates, Communications on Pure and Applied Mathematics 66, no. 4 (2013), 600–643, arXiv:1105.4656.
[DE1] I. Dumitriu, A. Edelman, Matrix Models for Beta Ensembles, Journal of Mathematical Physics 43, no. 11 (2002), 5830–5847, arXiv:math-ph/0206043.
[DE2] I. Dumitriu, A. Edelman, Eigenvalues of Hermite and Laguerre ensembles: Large Beta Asymptotics, Annales de l'Institut Henri Poincaré (B) Probability and Statistics 41, no. 6 (2005), 1083–1099, arXiv:math-ph/0403029.
[DP] I. Dumitriu, E. Paquette, Global Fluctuations for Linear Statistics of β-Jacobi Ensembles, arXiv:1203.6103.
[Ed] A. Edelman, private communications.
[ES] A. Edelman, B. D. Sutton, The beta-Jacobi matrix model, the CS decomposition, and generalized singular value problems, Foundations of Computational Mathematics 8, no. 2 (2008), 259–285.
[FFR] B. J. Fleming, P. J. Forrester, E. Nordenstam, A finitization of the bead process, Probability Theory and Related Fields 152, no. 1-2 (2012), 321–356, arXiv:0902.0709.
[EY] L. Erdős, H.-T. Yau, Gap Universality of Generalized Wigner and beta-Ensembles, arXiv:1211.3786.
[F] P. J. Forrester, Log-gases and Random Matrices, Princeton University Press, 2010.
[FN] P. J. Forrester, T. Nagao, Determinantal correlations for classical projection processes, J. Stat. Mech. (2011) P08011, arXiv:0801.0100.
[FR] P. J. Forrester, E. M. Rains, Interpretations of some parameter dependent generalizations of classical matrix ensembles, Probab. Theory Relat. Fields 131 (2005), 1–61, arXiv:math-ph/0211042.
[FW] P. J. Forrester, S. O. Warnaar, The importance of the Selberg Integral, Bulletin (New Series) of the American Mathematical Society 45, no. 4 (2008), 489–534.
[GN] I. M. Gelfand, M. A. Naimark, Unitary representations of the classical groups, Trudy Mat. Inst. Steklov, Leningrad, Moscow (1950) (in Russian). (German transl.: Akademie-Verlag, Berlin, 1957.)
[Gi] A. Givental, Stationary phase integrals, quantum Toda lattices, flag manifolds and the mirror conjecture, Topics in Singularity Theory, AMS Transl. Ser. 2, 180, 103–115, AMS, Providence, RI, 1997.
[G] V. Gorin, Noncolliding Jacobi processes as limits of Markov chains on the Gelfand–Tsetlin graph, Journal of Mathematical Sciences (New York) 158 (2009), no. 6, 819–837 (translated from Zapiski Nauchnykh Seminarov POMI 360 (2008), 91–123), arXiv:0812.3146.
[GP] V. Gorin, G. Panova, Asymptotics of symmetric polynomials with applications to statistical mechanics and representation theory, arXiv:1301.0634.
[HO] G. J. Heckman, E. M. Opdam, Root systems and hypergeometric functions. I, II, Compositio Mathematica 64, no. 3 (1987), 329–352 and 353–373.
[HS] G. J. Heckman, H. Schlichtkrull, Harmonic Analysis and Special Functions on Symmetric Spaces, Academic Press, 1994.
[HMP] X. Hu, J. Miller, Y. Peres, Thick points of the Gaussian free field, Annals of Probability 38, no. 2 (2010), 896–926, arXiv:0902.3842.
[Is] L. Isserlis, On a formula for the product-moment coefficient of any order of a normal frequency distribution in any number of variables, Biometrika 12 (1918), 134–139.
[Jo] K. Johansson, On fluctuations of eigenvalues of random Hermitian matrices, Duke Math. J. 91 (1998), 151–204.
[JN] K. Johansson, E. Nordenstam, Eigenvalues of GUE minors, Electron. J. Probab. 11, no. 50 (2006), 1342–1371 (electronic), arXiv:math/0606760.
[Ji] T. Jiang, Limit Theorems for Beta-Jacobi Ensembles, to appear in Bernoulli, arXiv:0911.2262.
[Ken] R. Kenyon, Height Fluctuations in the Honeycomb Dimer Model, Communications in Mathematical Physics 281, no. 3 (2008), 675–709, arXiv:math-ph/0405052.
[Ker] S. V. Kerov, Equilibrium and orthogonal polynomials (Russian), Algebra i Analiz 12:6 (2000), 224–237; English translation: St. Petersburg Mathematical Journal 12:6 (2001), 1049–1059.
[Ki] R. Killip, Gaussian fluctuations for β ensembles, Int. Math. Res. Not. 8 (2008), arXiv:math/0703140.
[KN] R. Killip, I. Nenciu, Matrix models for circular ensembles, Int. Math. Res. Not. 50 (2004), 2665–2701, arXiv:math/0410034.
[KLS] R. Koekoek, P. Lesky, R. F. Swarttouw, Hypergeometric Orthogonal Polynomials and Their q-Analogues, Springer, 2010.
[Kr] I. Krasovsky, Aspects of Toeplitz Determinants, in: Random Walks, Boundaries and Spectra, Progress in Probability 64 (2011), 305–324, arXiv:1007.1128.
[Ku] J. Kuan, The Gaussian free field in interlacing particle systems, arXiv:1109.4444.
[M] I. G. Macdonald, Symmetric Functions and Hall Polynomials, Second Edition, Oxford University Press, 1999.
[Me] M. L. Mehta, Random Matrices (3rd ed.), Amsterdam: Elsevier/Academic Press, 2004.
[N] Yu. A. Neretin, Rayleigh triangles and non-matrix interpolation of matrix beta integrals, Sbornik: Mathematics 194(4) (2003), 515–540.
[Ok] A. Okounkov, Infinite wedge and random partitions, Selecta Math. 7 (2001), 57–81, arXiv:math/9907127.
[OO] A. Okounkov, G. Olshanski, Shifted Jack polynomials, binomial formula, and applications, Mathematical Research Letters 4 (1997), 69–78, arXiv:q-alg/9608020.
[OR1] A. Okounkov, N. Reshetikhin, Correlation functions of Schur process with application to local geometry of a random 3-dimensional Young diagram, J. Amer. Math. Soc. 16 (2003), 581–603, arXiv:math.CO/0107056.
[OR2] A. Yu. Okounkov, N. Yu. Reshetikhin, The birth of a random matrix, Mosc. Math. J. 6, no. 3 (2006), 553–566.
[Op] E. M. Opdam, Root systems and hypergeometric functions. III, IV, Compositio Mathematica 67 (1988), no. 1, 21–49; no. 2, 191–209.
[Ox] The Oxford Handbook of Random Matrix Theory, edited by G. Akemann, J. Baik, P. Di Francesco, Oxford University Press, 2011.
[P] L. Petrov, Asymptotics of Uniformly Random Lozenge Tilings of Polygons. Gaussian Free Field, to appear in Annals of Probability, arXiv:1206.5123.
[RRV] J. Ramirez, B. Rider, B. Virág, Beta ensembles, stochastic Airy spectrum, and a diffusion, J. Amer. Math. Soc. 24 (2011), no. 4, 919–944, arXiv:math/0607331.
[RV] B. Rider, B. Virág, The noise in the circular law and the Gaussian free field, Int. Math. Res. Not. 2007, no. 2, arXiv:math/0606663.
[Sch] B. Schapira, The Heckman–Opdam Markov processes, Probability Theory and Related Fields 138, no. 3-4 (2007), 495–519.
[She] S. Sheffield, Gaussian free fields for mathematicians, Probability Theory and Related Fields 139 (2007), 521–541, arXiv:math.PR/0312099.
[Shi] N. Shimeno, A limit transition from the Heckman–Opdam hypergeometric functions to the Whittaker functions associated with root systems, arXiv:0812.3773.
[Se] A. Selberg, Bemerkninger om et multipelt integral, Norsk. Mat. Tidsskr. 24 (1944), 71–78.
[Sp] H. Spohn, Dyson's Model of Interacting Brownian Motions of arbitrary coupling strength, Markov Processes and Related Fields (1998), 649–661.
[Sz1] G. Szegő, On certain Hermitian forms associated with the Fourier series of a positive function, Marcel Riesz Volume, Lund, 1952, 228–237.
[Sz] G. Szegő, Orthogonal Polynomials, Fourth edition, American Mathematical Society, 2003.
[VV] B. Valkó, B. Virág, Continuum limits of random matrices and the Brownian carousel, Invent. Math. 177 (2009), no. 3, 463–508, arXiv:0712.2000.
[Wa] K. W. Wachter, The limiting empirical measure of multiple discriminant ratios, Annals of Statistics 8, no. 5 (1980), 937–957.
[Wi] E. P. Wigner, On the distribution of the roots of certain symmetric matrices, Ann. of Math. 67 (1958), 325–327.