PreprintPDF Available

Low lying zeros of Rankin-Selberg L-functions

Authors:
Preprints and early-stage research may not have been peer reviewed yet.

Abstract

We study the low lying zeros of GL(2)×GL(2)GL(2) \times GL(2) Rankin-Selberg L-functions. Assuming the generalized Riemann hypothesis, we compute the 1-level density of the low-lying zeroes of L(s,fg)L(s, f \otimes g) averaged over families of Rankin-Selberg convolutions, where f,gf, g are cuspidal newforms with even weights k1,k2k_1, k_2 and prime levels N1,N2N_1, N_2, respectively. The Katz-Sarnak density conjecture predicts that in the limit, the 1-level density of suitable families of L-functions is the same as the distribution of eigenvalues of corresponding families of random matrices. The 1-level density relies on a smooth test function ϕ\phi whose Fourier transform ϕ^\widehat\phi has compact support. In general, we show the Katz-Sarnak density conjecture holds for test functions ϕ\phi with suppϕ^(12,12)\operatorname{supp} \widehat\phi \subset (-\frac{1}{2}, \frac{1}{2}). When N1=N2N_1 = N_2, we prove the density conjecture for suppϕ^(54,54)\operatorname{supp} \widehat\phi \subset (-\frac{5}{4}, \frac{5}{4}) when k1k2k_1 \ne k_2, and suppϕ^(2928,2928)\operatorname{supp} \widehat\phi \subset (-\frac{29}{28}, \frac{29}{28}) when k1=k2k_1 = k_2. A lower order term emerges when the support of ϕ^\widehat\phi exceeds (1,1)(-1, 1), which makes these results particularly interesting. The main idea which allows us to extend the support of ϕ^\widehat\phi beyond (1,1)(-1, 1) is an analysis of the products of Kloosterman sums arising from the Petersson formula. We also carefully treat the contributions from poles in the case where k1=k2k_1 = k_2. Our work provides conditional lower bounds for the proportion of Rankin-Selberg L-functions which are non-vanishing at the central point and for a related conjecture of Keating and Snaith on central L-values.
arXiv:2308.16302v1 [math.NT] 23 Aug 2023
LOW LYING ZEROS OF RANKIN-SELBERG L-FUNCTIONS
ALEXANDER SHASHKOV
Abstract. We study the low lying zeros of GL(2) ×GL(2) Rankin-Selberg L-functions.
Assuming the generalized Riemann hypothesis, we compute the 1-level density of the low-
lying zeroes of L(s, f g) averaged over families of Rankin-Selberg convolutions, where
f, g are cuspidal newforms with even weights k1, k2and prime levels N1, N2, respectively.
The Katz-Sarnak density conjecture predicts that in the limit, the 1-level density of suitable
families of L-functions is the same as the distribution of eigenvalues of corresponding families
of random matrices. The 1-level density relies on a smooth test function φwhose Fourier
transform b
φhas compact support. In general, we show the Katz-Sarnak density conjecture
holds for test functions φwith supp b
φ(1
2,1
2). When N1=N2, we prove the density
conjecture for supp b
φ(5
4,5
4) when k16=k2, and supp b
φ(29
28 ,29
28 ) when k1=k2.
A lower order term emerges when the support of b
φexceeds (1,1), which makes these
results particularly interesting. The main idea which allows us to extend the support of b
φ
beyond (1,1) is an analysis of the products of Kloosterman sums arising from the Petersson
formula. We also carefully treat the contributions from poles in the case where k1=k2. Our
work provides conditional lower bounds for the proportion of Rankin-Selberg L-functions
which are non-vanishing at the central point and for a related conjecture of Keating and
Snaith on central L-values.
Contents
1. Introduction 1
2. Preliminaries 6
3. The explicit formula 10
4. Applying the Petersson formula 12
5. Proofs of Theorems 1.1 and 1.4 14
6. Products of Kloosterman sums 15
7. Surpassing (-1, 1): proof of Theorem 1.2 22
8. Handling the poles 27
References 31
1. Introduction
Since Montgomery and Dyson’s discovery that the two point correlation of the zeros of
the Riemann zeta function agrees with the pair correlation function for eigenvalues of the
Gaussian Unitary Ensemble (see [Mon73]), the connection between the zeros of L-functions
and the eigenvalues of random matrices has been a major area of study. It is now widely
Date: September 1, 2023.
2020 Mathematics Subject Classification. 11M26, 11M50.
Key words and phrases. Low-lying zeros, Rankin-Selberg convolutions, n-level densities, Katz-Sarnak con-
jectures.
1
2 ALEXANDER SHASHKOV
believed that the statistical behavior of families of L-functions can be modeled by ensembles
of random matrices. Based on the observation that the spacing statistics of high zeros
associated with cuspidal L-functions agree with the corresponding statistics for eigenvalues
of random unitary matrices under Haar measure (see [RS96], for example), it was originally
believed that only the unitary ensemble was important to number theory. However, Katz and
Sarnak [KS99a, KS99b] showed that these statistics are the same for all classical compact
groups. These statistics, the n-level correlations, are unaffected by finite numbers of zeros.
In particular, they fail to identify differences in behavior near s= 1/2.
The n-level density statistic was introduced to distinguish the behavior of families of L-
functions close to the central point. Based partially on an analogy with the function field
setting, Katz and Sarnak conjectured that the low-lying zeros of families of L-functions
behave like the eigenvalues near 1 of classical compact groups (unitary, symplectic, and
orthogonal). The behavior of the eigenvalues near 1 is different for each matrix group. A
growing body of evidence has shown that this conjecture holds for test functions with suitably
restricted support for a wide range of families of L-functions. For a non-exhaustive list, see
[AM15, AAI+15, BBD+17, CDG+22, DM06, DM09, ERGR12, FM15, Gao13, Gul05, HM07,
ILS00, Mil04, MP10, ¨
OS93, ¨
OS99, RR07, Roy01, Rub01, ST12, You04].
We study the 1-level density of families of Rankin-Selberg L-functions, which are the
L-functions associated with Rankin-Selberg convolutions of cusp forms. In particular, let
H
k(N) denote the set of cusp forms of weight kwhich are newforms of prime level N(see
the next section for more detail). We assume that the level Nis prime in order to make
computations easier, but our results should hold for any N; see [BBD+17].
Let φbe an even smooth test function whose Fourier transform has compact support.
Take fHk1(N1), gHk2(N2), and let L(s, f g) be the Rankin-Selberg convolution
L-function. See Section 3.1 for a precise definition. By the work of Rankin [Ran39] and
Selberg [Sel40], L(s, f g) is holomorphic in the entire complex plane except for a pole at
s= 1 when f=g. Note that since our forms have trivial central character, we have that
f=fso that L(s, f g) has a pole if and only if f=g. We are interested in the quantity
D(fg;φ) := X
ρfg
φγfg
2πlog R(1.1)
where the sum is over the non-trivial zeros ρfg=1
2+fgof L(s, f g) and Ris the
analytic conductor of fg. We have that
R=((N1, N2)2(k1k2)2(k1+k2)2k16=k2
(N1, N2)2k2
1k1=k2.(1.2)
See Section 3.1 for more information on the conductor. Because much of our analysis relies
on bounding prime sums up to Rσwhere supp b
φ(σ, σ), we are able to obtain better
results when the conductor is small. These small conductor cases allow us to obtain Fourier
support up to and beyond (1,1) below, while we are restricted to (1/2,1/2) in general.
For the purposes of this paper, we assume the generalized Riemann hypothesis (GRH)
for L(s, f g), so that γfgis always real. We assume GRH for L(s, f g) as well as
L(s, sym2(f)) and L(s, sym2(f)sym2(g)) in order to obtain better estimates on prime
sums in Section 3, but this also makes our results easier to interpret. As such, all of the
results stated in this paper are dependent on GRH for these L-functions. In order to prove
Theorem 1.2, we also assume GRH for Dirichlet L-functions.
LOW LYING ZEROS OF RANKIN-SELBERG L-FUNCTIONS 3
We are interested in averages of D(fg;φ) over families of Rankin-Selberg convolutions
of cusp forms. In particular, let
H(k1, N1, k2, N2) = {fg|fH
k1(N1), g H
k2(N1)}(1.3)
be the family of Rankin-Selberg convolutions of cusp forms from H
k1(N1) and H
k2(N2). These
are GL(4) automorphic forms, which are difficult to study in general. However, by studying
Rankin-Selberg convolutions, we are able to apply the GL(2) Petersson trace formula in
order to make our calculations tractable. As such, our paper mostly follows the method of
[ILS00], where the 1-level density was studied for families of cusp forms. We utilize results
from [ILS00] wherever possible for brevity. The main novelty in our method comes from
studying the interaction between the terms arising from applying the Petersson formula,
and from our analysis of the pole terms (which did not appear in [ILS00]).
The families of forms we study exhibit symplectic symmetry, as shown by Due˜nez and
Miller [DM09]. Due˜nez and Miller study convolutions of families of L-functions in general,
and are able to determine the symmetry type for a large variety of families. However, their
results do not give explicit bounds on the support, and their methods are not strong enough
in order to obtain the support proved in this paper. In particular, assuming GRH and using
the estimates in [ILS00] to obtain explicit results for our family of study, the maximum
support obtainable using the methods of [DM09] is (1/5,1/5) in general, with an extension
to (2/5,2/5) in certain cases. Notably, this is not strong enough to show that a positive
proportion of L-functions in the family vanish at the central point, and weaker than the
results of this paper. Due˜nez and Miller [DM06] also study convolutions of families of cusp
forms by a single, fixed, Hecke-Maass cusp form, but do not obtain an explicit bound on
the support of the test function. Shin and Templier [ST12] study 1-level densities for very
general families of automorphic forms, but also do not obtain explicit support.
The density function for the symplectic group equals
W(Sp)(x):= 1 sin 2πx
2πx (1.4)
The Katz-Sarnak density conjecture predicts that in the limit, as D(fg, φ) is averaged
over an increasingly large family, the 1-level density equals
Z
−∞
φ(x)W(Sp(x))dx =Z
−∞ b
φ(y)c
W(Sp(y))dy. (1.5)
The Fourier transform of the symplectic density function is
c
W(Sp)(y) = δ(y)1
2η(y),(1.6)
where η(y) = 1 if |y|<1 and zero otherwise. It follows that
Z
−∞
φ(x)W(Sp(x))dx =b
φ(0) 1
2φ(0),supp b
φ(1,1).(1.7)
Because of the discontinuity of c
W(Sp)(y) at ±1, of particular interest are results which allow
us to take the support of b
φbeyond the interval (1,1), which we give in Theorem 1.2.
We first state a general result with more restricted support.
4 ALEXANDER SHASHKOV
Theorem 1.1. Assume the generalized Riemann hypothesis. Fix a test function φwith
supp b
φ(1/2,1/2), let k1, k2be even integers and let N1, N2be primes. We have
lim
N1N2→∞
1
|H(k1, N1, k2, N2)|X
fgH(k1,N1,k2,N2)
D(fg;φ) = Z
−∞
φ(x)W(Sp(x))dx. (1.8)
Note that we just need N1N2 , so that we can hold N1or N2fixed and let the other
grow, or allow them both to grow in unison.
When N1=N2, we are able to extend the support because the analytic conductor of our
cusp forms is smaller; see (1.2).
Theorem 1.2. Assume the generalized Riemann hypothesis and let k1, k2be even integers.
If k16=k2, fix a test function φwith supp b
φ(5/4,5/4). If k1=k2, then take supp b
φ
(29/28,29/28). Then as N through the primes, we have
lim
N→∞
1
|H(k1, N, k2, N )|X
fgH(k1,N,k2,N )
D(fg;φ) = Z
−∞
φ(x)W(Sp(x))dx. (1.9)
Remark 1.3. When k1=k2, for each fH
k1(N), the L-function L(s, f f) is in our
family and has a pole at s= 1. This pole contributes to the explicit formula, and cannot be
bounded away if the support of b
φexceeds (1,1). See Remark 4.3 for more details.
In general, this phenomenon makes it difficult to obtain 1-level density results with support
exceeding (1,1) when the family contains non-entire L-functions, and there are only a
limited number of results of this type. Fouvry and Iwaniec [FI03] study Hecke L-functions
(some of which have poles) and are able to obtain support (4/3,4/3) by utilizing GRH and
additional averaging.
If we take the weight of our forms to infinity, we can prove a similar result.
Theorem 1.4. Assume the generalized Riemann hypothesis. Fix a test function φwith
supp b
φ(1/2,1/2) and let N1, N2be 1 or prime. Then we have
lim
k1k2→∞
1
|H(k1, N1, k2, N2)|X
fgH(k1,N1,k2,N2)
D(fg;φ) = Z
−∞
φ(x)W(Sp(x))dx. (1.10)
If k1, k2 with |k1k2|bounded, we can take supp b
φ(1,1).
In the above theorem, we can take Fourier support (1,1) when |k1k2|is bounded
because in this case the analytic conductor for L(s, f g) is small. When |k1k2|is
bounded, the conductor is of size k2
1. When |k1k2|is unbounded, the conductor can be as
large as max(k4
1, k4
2).
As noted in [ILS00], an application of 1-level density results is to lower bound the pro-
portion of L-functions in a given family which vanish at the central point. We use the test
function
φ(x) = sin(πσx)
πσx 2
,(1.11)
which has Fourier transform
b
φ(y) = (1
σ|y|
σ|y|< σ
0|y| σ(1.12)
LOW LYING ZEROS OF RANKIN-SELBERG L-FUNCTIONS 5
Note that this test function is not optimal for bounding order of vanishing when σ > 1, so that
the constants in (1.15) and (1.16) can be slightly improved. See [BCD+23, CCM22, DM22]
for work on optimal test functions. As shown in [ILS00], we have that the proportion of
L-functions vanishing at the central point is lower bounded by
(5
41
2σσ < 1
11
4σ2σ1.(1.13)
Combining this bound with the earlier theorems gives the following result.
Corollary 1.5. Assume the generalized Riemann hypothesis. Set
Z(k1, N1, k2, N2) = |{fgH(k1, N1, k2, N2) : L(1/2, f g)6= 0}|
|H(k1, N1, k2, N2)|(1.14)
to be the proportion of L-functions in the family which are nonvanishing at the central point.
We have that
lim
N→∞
k16=k2
Z(k1, N, k2, N )21
25 = 0.84 (1.15)
lim
N→∞ Z(k, N, k, N )645
841 = 0.7669 ... (1.16)
lim
k1,k2→∞
|k1k2|bounded
Z(k1, N1, k2, N2)3
4= 0.75 (1.17)
lim
k1,k2,N1,N2→∞ Z(k1, N1, k2, N2)1
4= 0.25.(1.18)
Because our L-functions have even functional equation, it is conjectured Z(k1, N1, k2, N2) =
1 in the limit. In fact, this would follow from the conjecture of Keating and Snaith described
below. [KMV02] show that Z(k1, N1, k2, N2) is positive in the limit but do not obtain an
explicit constant. We are able to give an explicit (but conditional) lower bound.
As was recently demonstrated by Radziwi l l and Soundararajan [RS23], we can apply
Corollary 1.5 to obtain conditional lower bounds for a conjecture of Keating and Snaith
[KS00] on the distribution of L(1/2, f g). First, set
N(k1, N1, k2, N2, α, β) = nfgH(k1, N1, k2, N2) : log L(1/2,fg)1
2log log R
log log R(α, β)o
|H(k1, N1, k2, N2)|(1.19)
where we say that log 0 = by convention. The Keating-Snaith conjecture predicts that
as R N(k1, N1, k2, N2, α, β) = 1
2πZβ
α
ex2/2dx +o(1).(1.20)
In other words, log L(1/2, f g) should be distributed approximately normally with mean
1
2log log Rand variance log log R. The mean of the distribution is expected to depend on
the symmetry type of the family. For orthogonal families, the mean is predicted to be
1
2log log R, as opposed to +1
2log log Rfor our symplectic family. Radziwi l l and Soundarara-
jan show that assuming the generalized Riemann hypothesis, we can obtain lower bounds
for the Keating-Snaith conjecture using lower bounds for the proportion of L-functions in a
family which are non-vanishing at the central point.
6 ALEXANDER SHASHKOV
Corollary 1.6. Assume the generalized Riemann hypothesis and fix an interval (α, β). Then
as R , we have that
N(k1, N1, k2, N2, α, β)c0
1
2πZβ
α
ex2/2dx +o(1) (1.21)
where
c0=
0.84 N1=N2 with k16=k2fixed
0.7669 N1=N2 with k1=k2fixed
0.75 k1, k2 with |k1k2|bounded and N1, N2fixed
0.25 in general.
(1.22)
Corollary 1.6 follows from modifying the methods of [RS23] to the family H(k1, N1, k2, N2)
and using the constants in Corollary 1.5. These modifications are fairly straightforward, and
the only required inputs are standard estimates for sums over Fourier coefficients (which are
developed in this paper). As such, we omit details to avoid replicating their arguments.
The structure of this paper is as follows. In Section 2, we state some important definitions
and review several facts about cusp forms from [ILS00]. In Section 3, we go over Rankin-
Selberg L-functions and develop the explicit formula relating the 1-level density to prime
sums. In Section 4, we apply the Petersson trace formula to average the explicit formula over
our family. Then, in Section 5 we prove Theorems 1.1 and 1.4. The remainder of the paper
is devoted to proving Theorem 1.2. In Section 6.1, we use GRH for Dirichlet L-functions
to eliminate the Kloosterman sums which arise from the Petersson formula, and carefully
analyze the remaining character sums. In the process, we develop new identities related
to sums over products of Gauss, Kloosterman, and Ramanujan sums. Then in Section 7,
we prove Theorem 1.2 in the case where k16=k2by evaluating the Bessel integral in order
to obtain a closed form for an off-diagonal term which only contributes for support outside
(1,1). Lastly, in Section 8, we complete the proof of Theorem 1.2 in the case where k1=k2
by handling the contribution from the poles.
Acknowledgments. The author would like to thank Steven J. Miller for supervising this
project and Leo Goldmakher for helpful conversations on character sums.
2. Preliminaries
In this section we go over some basic facts which will be useful later in the paper.
2.1. Notation. Throughout this paper, we use the following notation for sums over residue
classes:
X
a(q)
f(a) =
q
X
a=1
f(a) (2.1)
X
a(q)
f(a) =
q
X
a=1
(a,q)=1
f(a).(2.2)
Often the restriction to (a, q) = 1 is implicit when the summand involves a Dirichlet character
modulo q, as in this case f(a) = 0 if (a, q)>1.
LOW LYING ZEROS OF RANKIN-SELBERG L-FUNCTIONS 7
Definition 2.1 (Gauss Sums).For χa character modulo qand e(x) = e2πix ,
Gχ(n):=X
a(q)
χ(a)e(an/q).(2.3)
By Theorem 9.12 of [MV07], we have that |Gχ(n)| (n, q)q.
Definition 2.2 (Ramanujan Sums).If χ=χ0(the principal character modulo q) in (2.3),
then Gχ0(n) becomes the Ramanujan sum
R(n, q):=X
a(q)
e(an/q) = X
d|(n,q)
µ(q/d)d, (2.4)
where restricts the summation to be over all arelatively prime to q.
The Ramanujan sum satisfies the following identity:
R(n, q) = µq
(q, n)ϕ(q)
ϕq
(q,n).(2.5)
Definition 2.3 (Kloosterman Sums).For integers mand n,
S(m, n;q):=X
d(q)
emd
q+nd
q,(2.6)
where dd1 mod q. We have
|S(m, n;q)| (m, n, q)smin q
(m, q),q
(n, q)τ(q),(2.7)
where τ(q) is the number of divisors of q; see Equation 2.13 of [ILS00].
Definition 2.4 (Fourier Transform).We use the following normalization:
b
φ(y):=Z
−∞
φ(x)e2πixy dx, φ(x):=Z
−∞ b
φ(y)e2πixy dy. (2.8)
Definition 2.5 ((Infinite) GCD).For x, y Z, let (x, y) denote the greatest common divisor
of xand y. Set (x, y) = sup nN(x, yn) and (x, y) = sup nN(xn, y).
The Bessel function of the first kind occurs frequently in this paper, and so we collect here
some standard bounds for it (see, for example, [GR14, Wat22]).
Lemma 2.6. Let k2be an integer. The Bessel function satisfies
(1) Jk1(x)1.
(2) Jk1(x)min 1,x
kk1/3,
(3) Jk1(x)x2k,0< k x
3.
Throughout the paper, there will be a special case when the weights and levels of the two
families we convolve are the same, as in this case some of the L-functions in the family have
poles. We use the following indicator function to indicate this.
Definition 2.7. We have
δpole :=δpole(k1, N1, k2, N2):=(1 (k1, N1) = (k2, N2)
0 otherwise. (2.9)
8 ALEXANDER SHASHKOV
2.2. Cusp forms. We recall some of important facts about cusp forms from [ILS00]. For
more information on cusp forms, see [Iwa97, Ono04, DS05]. Let Sk(N) denote the set of
cusp forms of even weight kand prime level Nfor the Hecke congruence subgroup Γ0(N).
Note that we assume the level Nis prime; most of the arguments should hold for squarefree
level Nas in [ILS00]. These are cusp forms for the congruence subgroup Γ1(N) with trivial
nebentypus (central character). Each fSk(N) has Fourier expansion
f(z) =
X
n=1
af(n)e(nz),(2.10)
and we can normalize fso that af(1) = 1, and then set
λf(n):=af(n)n(k1)/2.(2.11)
If nN, each cusp form fSk(N) is an eigenfunction of all the Hecke operators Tnwith
eigenvalue λf(n). The Hecke eigenvalues are multiplicative, and in particular we have
λf(m)λf(n) = X
d|(m,n)
(d,N)=1
λfmn
d2(2.12)
so that λf(m)λf(n) = λf(mn) if (m, n) = 1.
Let H
k(N) denote the set of fSk(N) which are newforms of level Nand which are
normalized so that af(1) = 1. Since we assume that Nis prime, for our purposes this means
that fis a normalized cusp form of level Nwhich is not a cusp form of level 1. Corollary
2.14 of [ILS00] characterizes the size of the set H
k(N).
Lemma 2.8 ([ILS00], Corollary 2.14).Let H
k(N)denote the set of normalized cusp forms
which are newforms of level N. Then
|H
k(N)|=k1
12 ϕ(N) + O(kN )2/3.(2.13)
2.3. Petersson formula. Essential to our results will be the Petersson formula [Pet32],
which allows us to calculate averages over Fourier coefficients. Set
ψf(n):=Γ(k1)
(4πn)k11/2
||f||1af(n) (2.14)
where ||f||2=hf, f iis the Petersson inner product on Sk(N). Next, put
k,N (m, n):=X
f∈Bk(N)
ψf(m)ψf(n) (2.15)
where the sum is over an orthogonal basis Bk(N) for Sk(N). The classical Petersson formula
gives
k,N (m, n) = δ(m, n) + 2πik
X
b=1
S(m, n;bN)
bN Jk14πmn
bN .(2.16)
[ILS00] gives an alternative characterization of the Petersson formula which will be useful
later.
LOW LYING ZEROS OF RANKIN-SELBERG L-FUNCTIONS 9
Lemma 2.9 ([ILS00], Lemma 2.7).Set
ν(N):= 1(N) : Γ0(N)] = NY
p|N1 + p1(2.17)
and define the following zeta functions
Z(s, f ):=
X
n=1
λf(n2)ns, ZN(s, f ):=X
n|N
λf(n2)ns.(2.18)
Let (m, n, N) = 1 and (mn, N2)|N. Then
k,N (m, n) = 12
(k1)NX
LM=NX
fH
k(M)
λf(m)λf(n)
ν((mn, L))
ZN(1, f )
Z(1, f ).(2.19)
Of particular interest to us are the pure sums
k,N (n):=X
fH
k(N)
λf(n).(2.20)
We use the following result from [ILS00].
Lemma 2.10 ([ILS00], Proposition 2.11).If (n, N 2)|N, then
k,N (n) = k1
12 X
LM=N
µ(L)M
ν((n, L)) X
(m,M)=1
m1k,M (m2, n).(2.21)
Remark 2.11. In our case when Nis prime, the main contribution to (2.19) and (2.21)
comes from when M=N. If we fix kand take N , then we can easily show that the
M= 1 term vanishes in the limit (it is bounded by an absolute constant). If we take k
as in Theorem 1.4, we eventually show that the M=Nterm vanishes, and showing that
the M= 1 term vanishes is nearly identical. Thus for the remainder of the paper we will
ignore the M= 1 term for simplicity.
We split the sum into two pieces as
k,N (n) =
k,N (N) +
k,N (n) (2.22)
where
k,N (n) = X
(m,N)=1
mY
m1k,N (m2, n) (2.23)
and
k,N (n) is the complementary sum (the terms with m > Y ). Here, Yis a param-
eter which we set to (k1k2N1N2)ǫ. Lemma 2.12 of [ILS00] allows us to bound away the
complementary sum.
Lemma 2.12 ([ILS00], Lemma 2.12).Let (aq)be a sequence such that for all fH
k(N)
we have X
(q,N )=1
λf(q)aq(kN )ǫ.(2.24)
If (n, N2)|N, then X
(q,N )=1
k,N (q)aqkN Y 1/2(kNY )ǫ.(2.25)
We prove a similar lemma (with the sum over two families of cusp forms) in Lemma 4.2.
10 ALEXANDER SHASHKOV
3. The explicit formula
In this section we develop the explicit formula for Rankin-Selberg L-functions to relate
the 1-level densities to Bessel-Kloosterman sums. Many of our results about Rankin-Selberg
L-functions come from [Li79] and Section 4 of [KMV02].
3.1. Convolution L-functions. We consider two families of cusp forms H
k1(N1) and H
k2(N2),
both with even weights k1and k2and prime levels N1and N2.
Let fH
k1(N1) and gH
k2(N2). We are interested in studying the convolution fg.
As the forms in our original family are self–dual, the convolution fgis as well. The
Rankin-Selberg convolution L-function is
L(s, f g) : = L(2s, χN1N2
0)X
n1
λf(n)λg(n)
ns(3.1)
=Y
p
2
Y
i=1
2
Y
j=1 1αf,i(p)αg,j (p)ps1,
where χN
0denotes the principal character modulo Nand αf,i denotes the ith Satake parameter
of λf(p). These are the roots of the equation
x2λf(p)x+χN1
0(p) = 0.(3.2)
The analogous definition holds for αg,j (p). We set
L(s, f g) := (N1, N2)
4π2s
Γs+|k1k2|
2Γs+k1+k2
21.(3.3)
By the duplication formula for the gamma function, we can write
L(s, f g) = (N1, N2)
π2s2k1
8πΓs
2+|k1k2|
4Γs
2+|k1k2|+ 2
4
×Γs
2+k1+k22
4Γs
2+k1+k2
4.(3.4)
The completed L-function is
Λ(s, f g) := L(s, f g)L(s, f g).(3.5)
It satisfies the functional equation
Λ(s, f g) = Λ(1 s, f g).(3.6)
3.2. Explicit formula. Let φbe an even test function whose Fourier transform is compactly
supported in some fixed interval (σ, σ). Set
D(fg;φ) := X
ρfg
φγfg
2πlog R(3.7)
as in (1.1), where the sum is over the nontrivial zeros ρfg=1
2+fgof L(s, f g). R
is a normalization factor which we set to the analytic conductor of our L-functions. The
conductor is
R=((k1k2)2(k1+k2)2(N1, N2)2k16=k2
k2
1(N1, N2)2k1=k2
(3.8)
LOW LYING ZEROS OF RANKIN-SELBERG L-FUNCTIONS 11
which comes from the gamma factors in (3.4). We apply the explicit formula as in Section
4 of [ILS00]. We apply the argument principle to Λ(s, f g) multiplied by the normalized
test function
φs1
2log R
2πi .(3.9)
If f6=gthen by (4.11) of [ILS00] we have
D(fg;φ) = A
log R2X
p
X
ν=1 X
i,j
αν
f,i(p)αν
g,j (p)!b
φνlog p
log Rpν/2log p
log R(3.10)
where
A=b
φ(0) log R+O(1).(3.11)
If f=g, there is an additional term from a pole at s= 1. The contribution of the pole is
2φlog R
4πi , where we extend the definition of φto Cusing the inverse Fourier transform:
φ(z) = Z
−∞ b
φ(y)e2πizy dy, z C.(3.12)
By the Ramanujan conjectures for fand g, we have that αf ,i(p), αg,j (p)1, so the ν3
terms in (3.10) are Olog1R. For the ν= 1 terms, we have that
X
i,j
αf,i(p)αg,j (p) = X
i
αf,i(p)! X
j
αg,j (p)!(3.13)
=λf(p)λg(p).
For the ν= 2 terms, we have that
X
i,j
α2
f,i(p)α2
g,j (p) = X
i
α2
f,i(p)! X
j
α2
g,j (p)!(3.14)
=λf(p2)χN1
0(p)λg(p2)χN2
0(p).
Putting this all together we have that
D(fg;φ) = b
φ(0) X
p
λf(p)λg(p)b
φlog p
log R2 log p
plog R(3.15)
X
pλf(p2)λg(p2)λf(p2)λg(p2)b
φ2 log p
log R2 log p
plog R
X
pb
φ2 log p
log R2 log p
plog R+Olog1R.
We use that χN1
0(p) = 1 if p6=N1and 0 otherwise, and the terms with p=N1or p=N2
are trivially absorbed into the error term.
Now, by the Riemann Hypothesis for L(s, sym2(f)), L(s, sym2(g)), and L(s, sym2(f)
sym2(g)), we have that the second term is O(log log R/ log R) if f6=g. If f=g, then the
12 ALEXANDER SHASHKOV
second term will be O(1) (but it will still vanish after averaging over the family). Lastly, we
have by the prime number theorem and partial summation that
X
pb
φ2 log p
log R2 log p
plog R=1
2φ(0) + Olog1R.(3.16)
Combining these bounds gives the main result of the section.
Proposition 3.1. We have
D(fg;φ) = b
φ(0) 1
2φ(0) S(fg;φ)2δpoleφlog R
4πi +Olog log R
log R(3.17)
where
S(fg;φ):=X
p
λf(p)λg(p)b
φlog p
log R2 log p
plog R.(3.18)
4. Applying the Petersson formula
In this section we utilize Proposition 3.1 and the Petersson formula in order to average
S(fg;φ) over the forms in the convolved family. Our main result is the following.
Proposition 4.1. We have that
X
f,g
S(fg;φ):=(k11)
12
(k21)
12 4π2ik1+k2X
m1,m2Y
1
m1m2X
b1,b21
1
b1b2
Q(m2
1, b1N1, m2
2, b2N2)
+Ok1N1k2N2
log log R
log R+δpolek1N1Rσ/2(4.1)
where
Q(m2
1, c1, m2
2, c2) = X
p
S(m2
1, p;c1)S(m2
2, p;c2)Jk114πm1p
c1Jk214πm2p
c2
×2 log p
plog Rb
φlog p
log R.(4.2)
We first sum over f. To do so, we use (2.22) to find
X
f
S(fg;φ) = X
p
(∆
k1,N1(p) +
k1,N1(p))λg(p)b
φlog p
log R2 log p
plog R.(4.3)
If (k1, N1)6= (k2, N2), we can apply Lemma 2.12 and the Riemann hypothesis for L(s, f g)
to find that the sum over
k1,N1(p) is O(k1N1/log R). If (k1, N1) = (k2, N2), then we can
separately bound the f=gterm and add in an additional error term of Rσ/2. Simplifying
gives
X
f
S(fg;φ) = X
p
k1,N1(p)λg(p)b
φlog p
log R2 log p
plog R+Ok1N1
log R+δpoleRσ/2.(4.4)
Next we want to sum over g. Doing so gives
LOW LYING ZEROS OF RANKIN-SELBERG L-FUNCTIONS 13
X
f,g
S(fg;φ) = X
p
k1,N1(p)∆
k2,N2(p)b
φlog p
log R2 log p
plog R(4.5)
+X
p
k1,N1(p)∆
k2,N2(p)b
φlog p
log R2 log p
plog R
+Ok1N1k2N2
log log R
log R+δpolek1N1Rσ/2.
The first sum is the main term, and we use a method similar to Lemma 2.12 to show that
the second sum vanishes in the limit.
Lemma 4.2. Set
S:=X
p
k1,N1(p)∆
k2,N2(p)b
φlog p
log R2 log p
plog R.(4.6)
We have that
SY1/2+ǫ(k1k2N1N2)1+ǫ+δpoleY1/2k1N1Rσ/2.(4.7)
Proof. Expanding the and using Lemma 2.10 gives
SX
p
N1k1X
m1Y
(m1,N1)=1
m1
1k1,N1(m2
1, p)N2k2X
m2>Y
(m2,N2)=1
m1
2k2,N2(m2
2, p) (4.8)
×b
φlog p
log Rlog p
plog R.
Next we apply Lemma 2.9 and rearrange to give
SX
f,g
ZN1(1, f )
Z(1, f )X
m1Y
(m1,N1)=1
m1
1λf(m2
1)
ZN2(1, g)
Z(1, g)X
m2>Y
(m2,N2)=1
m1
2λg(m2
2)
(4.9)
×X
p
λf(p)λg(p) log p
plog Rb
φlog p
log R.
By the Riemann hypothesis for L(s, sym2(f)), the first sum in brackets is (k1N1Y)ǫ.
Likewise, GRH for L(s, sym2(g)) gives that the second sum in brackets is Y1/2(k2N2Y)ǫ.
Lastly, if f6=g, by GRH for L(s, f g) the sum over pis (k1k2N1N2)ǫlog1R. If f=g,
then the sum over pis of size Rσ/2. If (k1, N1) = (k2, N2), there will be |Hk1(N1)| k1N1
terms in the sum for which f=g, so their contribution is k1N1Rσ/2. Combining these
bounds gives the lemma.
Remark 4.3. When f=g, the lack of square root cancellation in the sums in (4.9) means
that Scontributes when b
φis supported outside (1,1) and Y= (k1k2N1N2)ǫ. In Section
8, we account for this by taking Y=Nαwith α= 1/14. However, in this case m1and m2
have non-negligible size, which requires more careful bounding of the associated sums.
14 ALEXANDER SHASHKOV
Setting Y= (k1k2N1N2)ǫ, we find that Sis absorbed by the error term in (4.5). Now,
we want to expand the s using the Petersson formula. By (2.16) and (2.23) we have
k,N (p) = N(k1)
12 2πikX
mY
1
m
X
b=1
S(m2, p;bN)
bN Jk14πmp
bN .(4.10)
Applying this to (4.5) completes the proof of Proposition 4.1.
5. Proofs of Theorems 1.1 and 1.4
In this section, we use Proposition 4.1 to complete the proofs of Theorems 1.1 and 1.4.
Because Theorems 1.1 and 1.4 require that b
φbe supported in (1,1), we need to show
that
1
|H
k1(N1)||H
k2(N2)|X
f,g
S(fg;φ) = o(1) (5.1)
and then the theorems will follow from Proposition 3.1, (2.13), and comparing with (1.6).
We also need to account for the contribution from any potential poles. We can bound the
contribution from the poles to (4.1) by
|H
k1(N1)|φlog R
4πi =Z
−∞ b
φ(y)Ry/2k1N1Rσ/2.(5.2)
A pole occurs only if (k1, N1) = (k2, N2), in which case R=k2
1N2
1. After dividing by the
size of the family with (2.13), we find that the contribution from the poles is Okσ1
1Nσ1
1,
which vanishes in the limit if σ1. Note that if σ > 1, we cannot bound away the
contribution from the pole. In Section 8, we show that a new term emerges from the average
over S(fg;φ) which cancels the contribution from the pole when σ > 1.
Now, to complete the proof of the theorem we need to bound S(fg;φ) averaged over
the family. We bound Qwith (2.7) and Jk1(x)x, giving
Q(m1, c1, m2, c2)(m1m2c1c2)ǫm1m2(c1c2)1/2R3σ/2(5.3)
Applying this bound to (4.1) gives
X
f,g
S(fg;φ)k1k2R3σ/2(N1N2)1/2Y2+Ok1N1k2N2
log log R
log R.(5.4)
Recall that if k1, k2are fixed, then RN2
1N2
2if N16=N2and RN2
1if N1=N2. Further,
recall that Y= (k1k2N1N2)ǫ, so we have that
X
f,g
S(fg;φ)k1k2(N1N2)1/2(N1N2)3σ(k1k2N1N2)ǫ+Ok1N1k2N2
log log R
log R.(5.5)
When σ < 1/2, the main term is absorbed by the error term. By (2.13), we have that our
family is of size k1k2N1N2, which completes the proof of Theorem 1.1. To prove Theorem
1.4, we use the stronger bound Jk1(x)x2k, which holds when x < k/3. To use this
LOW LYING ZEROS OF RANKIN-SELBERG L-FUNCTIONS 15
bound, we need one of the following two inequalities to hold (we can just use Jk1(x)x
for the other one):
Rσ/2k1N1Y1(5.6)
Rσ/2k2N2Y1.(5.7)
Fix N1, N2and let |k1k2|be bounded by an absolute constant. Then we have Rk2
1and
Rk2
2, so that (5.6) and (5.7) hold when σ < 1. In this case, we have that
X
f,g
S(fg;φ)2k1k2k1k2(N1N2)1/2(k1k2)3σ(k1k2N1N2)ǫ+Ok1N1k2N2
log log R
log R,
(5.8)
so the main term in (4.1) is absorbed by the error term. If |k1k2|is unbounded, then
Rk4
1so that the main term is absorbed by the error term when σ < 1/2. The proof of
Theorem 1.4 follows after dividing by the size of the family using (2.13).
6. Products of Kloosterman sums
In this section we analyze the Kloosterman sums arising from the Petersson formula. We
are interested in the sum
Q(m2
1, b1N, m2
2, b2N):=X
p
S(m2
1, p;b1N)S(m2
2, p;b2N)Jk114πm1p
b1NJk214πm2p
b2N
×2 log p
plog Rb
φlog p
log R.(6.1)
We will later use (5.3) to bound (6.1) when when Ndivides b1or b2, so for the rest of the
section we assume that (b1, N) = (b2, N ) = 1. Additionally, we have that m1, m2< N , so
we also assume (m1, N) = (m2, N ) = 1. Our main result is the following.
Proposition 6.1. Let b1, b2, m1, m2be integers not divisible by the prime N. Set r= (b1, b2)
so that we can write b1=d1rand b2=d2rwith (d1, d2) = 1. We have that
Q(m2
1, b1N, m2
2, b2N)
=4ψ(m2
1d2
2, m2
2d2
1, Nr)
ϕ(d1d2rN )R(m2
1, d1)R(m2
2, d2)µ(d1d2)χd1d2
0(r)I(b1, b2, m1, m2, N)
+Om3
1m3
2N2σ1/2+ǫ(b1b2)ǫ(6.2)
where Ris the Ramanujan sum (2.2),χd1d2
0is the principal character modulo d1d2,ψis given
in Lemma 6.6, and
I(b1, b2, m1, m2, N):=Z
0
Jk114πm1y
b1NJk214πm2y
b2Nb
φ2log y
log Rdy
log R.(6.3)
To prove the proposition, we analyze the product of Kloosterman sums in (6.1). In Section
6.1 we decompose the Kloosterman sums in terms of Gauss sums in order to prove Lemma
6.3. In Section 6.2 we apply the generalized Riemann hypothesis for Dirichlet L-functions
in order to effectively bound our error terms. In Section 6.3 we develop identities for sums
over Gauss and Ramanujan sums in order to prove Lemma 6.6. Finally, in Section 6.4 we
apply partial summation to complete the proof of Proposition 6.1.
16 ALEXANDER SHASHKOV
6.1. Decomposing Kloosterman sums. First, since (b1, N) = 1, we can write
S(m2
1, p;b1N) = S(Nm2
1, Np;b1)S(b1m2
1, b1p;N) (6.4)
where the overline denotes the multiplicative inverse modulo the period of the Kloosterman
sum. The analogous result holds for S(m2
2, p;b2N). We use the following lemma from [ILS00]
for S(Nm2
1, Np;b1), and Lemma 6.3 for S(b1m2
1, b1p;N).
Lemma 6.2 ([ILS00], Section 6).Let pbe a prime with (p, b) = 1 and let (n, b) = 1. Then
S(nm, np;b) = 1
ϕ(b)X
χ(b)
χ(p)Gχ(n2m)Gχ(1) (6.5)
where for a Dirichlet character χmodulo b,Gχis the Gauss sum defined in (2.3).
For the second term in (6.4) we combine the terms from each Kloosterman sum. If pN,
we have that
S(m2
1b1, pb1;N)S(m2
2b2, pb2;N) = 1
ϕ(N)X
χ(N)
χ(p)X
a(N)
χ(a)S(m2
1b1, ab1;N)S(m2
2b2, ab2;N).
(6.6)
Since b1and b2are relatively prime to N, we can write this as
1
ϕ(N)X
χ(N)
χ(p)X
a(N)
χ(a)S(m2
1b1
2, a;N)S(m2
2b2
2, a;N).(6.7)
Recall that we assume Ndoes not divide b1, b2, m1, m2so that Ndoes not divide m2
1b1
2and
m2
2b2
2. We need the following result.
Lemma 6.3. Let Nbe a prime not dividing integers n1, n2,χa Dirichlet character modulo
N, and set
K(n1, n2, χ):=X
a(N)
χ0(a)S(n1, a;N)S(n2, a;N) (6.8)
If χ0is the principal character modulo N, then
K(n1, n2, χ0) = (ϕ(N)2+ϕ(N)1n1n20(N)
ϕ(N)2otherwise.(6.9)
If χis a non-principal Dirichlet character modulo N, we have that
K(n1, n2, χ)N3/2+ǫ.(6.10)
Remark 6.4. When χ0is principal and n1n20(N), we have that S(n1, a;N) =
S(n2, a;N) so that all the terms in (6.8) are positive. In the other cases when χ0is principal,
the sign of the Kloosterman sum changes, which leads to better than square root cancellation.
When χis non-principal, the sum (6.10) exhibits square root cancellation.
Proof. Expanding the Kloosterman sums and rearranging gives
K(n1, n2, χ) = X
a(N)
χ(a)X
u1(N)
eau1+n1u1
NX
u2(N)
eau2+n2u2
N
=X
u1(N)
X
u2(N)
en1u1+n2u2
NGχ(u1+u2).(6.11)
LOW LYING ZEROS OF RANKIN-SELBERG L-FUNCTIONS 17
Since Nis prime, we have that
Gχ(u1+u2) = (δχϕ(N)u1+u20 mod (N)
χ(u1+u2)Gχ(1) otherwise (6.12)
so we can write
K(n1, n2, χ) = Gχ(1) X
u1(N)
X
u2(N)
en1u1+n2u2
Nχ(u1+u2)
+δχϕ(N)X
u1(N)
eu1(n1n2)
N(6.13)
where δχis the indicator function for the principal character. This second sum equals ϕ(N)
when n1n20 mod Nand is µ(N) otherwise. For the first sum, we do a change of
variables u1u1u2which gives
X
u1(N)
X
u2(N)
en1u1+n2u2
Nχ(u1+u2) = X
u1(N)
X
u2(N)
en1u1u2+n2u2
Nχ(u1u2+u2)
=X
u1(N)
χ(u1+ 1) X
u2(N)
eu2(n1u1+n2)
Nχ(u2)
=X
u1(N)
χ(u1+ 1)Gχ(n1u1+n2).(6.14)
We again case on the value of Gχusing (6.12), which gives that (6.14) equals
Gχ(1) X
u1(N)
χ(u1+ 1)χ(n1u1+n2) + χ0(n1n2+ 1)δχϕ(N).(6.15)
Putting this all together gives
K(n1, n2, χ) = Gχ(1)2X
u1(N)
χ(u1+ 1)χ(n1u1+n2) + δχD(n1, n2) (6.16)
where
D(n1, n2) = (ϕ(N)2n1n20(N)
2ϕ(N) otherwise. (6.17)
Thus when χis principal we have
K(n1, n2, χ) = (ϕ(N)2+ϕ(N)1n1n20(N)
ϕ(N)2 otherwise (6.18)
as desired. When χis non-principal, we have by the Riemann hypothesis for finite fields (see
[IK21], Section 12) that
X
u1(N)
χ(u1+ 1)χ(d2
1u1+d2
2)N1/2+ǫ.(6.19)
The proof follows from the fact that |Gχ(1)|=Nwhen χis primitive modulo N.
18 ALEXANDER SHASHKOV
6.2. Applying GRH for Dirichlet L-functions. We study a modified version of (6.1)
defined as
A:=A(x, m2
1, m2
2, b1, b2, N):=X
px
S(m2
1, p;b1N)S(m2
2, p;b2N) log p(6.20)
and then derive a closed form for Qusing partial summation. We study this sum using
GRH for Dirichlet L-functions. This implies for a Dirichlet character χmodulo cthat
X
px
χ(p) log p=δχx+Ox1/2(cx)ǫ(6.21)
where δχis the the indicator for the principal character. Applying Lemmas 6.2 and 6.3 gives
A=1
ϕ(b1)ϕ(b2)ϕ(N)X
χ1(b1)X
χ2(b2)X
χ3(N)
B"X
px
χ1χ2χ3(p) log p#(6.22)
where Bis some expression not dependent on p. Note that we do not account for when p|b1N
or p|b2N, but these terms are absorbed by the error term (6.27). χ1χ2χ3is principal when
χ3is principal and χ1is induced by some character χmodulo (b1, b2), and χ2is induced by
χ. Thus we can write the main term of Aas
ψ(m2
1b2
2, m2
2b2
1, N)x
ϕ(b1)ϕ(b2)ϕ(N)X
χ(b1,b2)
Gχ1(m2
1N2)Gχ1(1)Gχ2(m2
2N2)Gχ2(1) (6.23)
where χ1is the character modulo b1induced by χ,χ2is the character modulo b2induced by
χ, and
ψ(n1, n2, N):=(ϕ(N)2+ϕ(N)1n1n20(N)
ϕ(N)2 otherwise.(6.24)
Note that this is the same ψas in Lemma 6.6; it is easy to verify that the definitions agree.
We have that
Gχ1(m2
1N2) = χ1(N)2Gχ1(m2
1) (6.25)
so we can simplify the main term as
ψ(m2
1b2
2, m2
2b2
1, N)x
ϕ(b1)ϕ(b2)ϕ(N)X
χ(b1,b2)
Gχ1(m2
1)Gχ1(1)Gχ2(m2
2)Gχ2(1) (6.26)
since χ1(N)χ2(N) = χb1
0(N)χb2
0(N)χ(N)χ(N) = 1.
Given a character χmodulo b, we have that Gχ(n)(n, b)b. Thus we have that the
error term can be bounded by
x1/2b1b2m2
1m2
2N3/2(b1b2Nx)ǫ.(6.27)
This gives the following expression for A:
A=ψ(m2
1b2
2, m2
2b2
1, N)x
ϕ(b1)ϕ(b2)ϕ(N)X
χ(b1,b2)
Gχ1(m2
1)Gχ1(1)Gχ2(m2
2)Gχ2(1)
+Ox1/2b1b2m2
1m2
2N3/2(b1b2Nx)ǫ.(6.28)
LOW LYING ZEROS OF RANKIN-SELBERG L-FUNCTIONS 19
6.3. Sums over Gauss sums. We want to analyze the sum over Gauss sums in (6.28).
We begin by reducing the induced characters χ1and χ2to the character χmodulo (b1, b2).
Before we begin, we need to introduce some notation. Set r= (b1, b2), r1= (b1, r), and
r2= (b2, r). We first prove the following lemma.
Lemma 6.5. Let b1, r, r1, χ, χ1be as above. We have that
Gχ1(m2
1) = (χ(b1/r1)R(m2
1, b1/r1)r1
rGχ(m2
1r/r1)r1|m2
1r
0otherwise.(6.29)
Proof. We have that (r1, b1/r1) = 1, so we can write χ1=χb1/r1
0χ, where χn
0denotes the
principal character modulo n. We then have that
Gχ1(m2
1) = X
u(b1)
χ1(u)eum2
1
b1
=X
u1(r1)X
u2(b1/r1)
χ1(u1b1/r1+u2r1)eu1m2
1
r1eu2m2
1
b1/r1
=χ(b1/r1)χb1/r1
0(r1)X
u1(r1)
χ(u1)eu1m2
1
r1X
u2(b1/r1)
χb1/r1
0(u2)eu2m2
1
b1/r1
=χ(b1/r1)R(m2
1, b1/r1)X
u1(r1)
χ(u1)eu1m2
1
r1.(6.30)
Now, we have that r|r1, so we can write u1=u2+ru3with u2going from 1 to rand u3
going from 1 to r1/r, so that
X
u1(r1)
χ(u1)eu1m2
1
r1=X
u2(r)X
u3(r1/r)
χ(u2+u3r)e(u2+u3r)m2
1
r1
=Gχm2
1r
r1X
u3(r1/r)
eu3rm2
1
r1.(6.31)
This final sum equals r1/r if r1|rm2
1and is 0 otherwise. Substituting this back into (6.30)
completes the proof.
Now, if m1= 1, we have that Gχ1(1) is 0 unless r=r1, so that (r, b1/r) = 1. In this case,
we also have that R(1, b1/r1) = µ(b1/r), so that
Gχ1(1) = χ(b1/r)χr
0(b1/r)µ(b1/r)Gχ(1).(6.32)
But if r=r1, then r1|m2
1r, so that
Gχ1(m2
1) = χ(b1/r)R(m2
1, b1/r)Gχm2
1(6.33)
by Lemma 6.5. Of course, the analog of Lemma 6.5 and (6.32) holds for χ2. Set d1=b1/r
and d2=b2/r so that (d1, d2) = 1. Applying (6.32) and (6.33) gives
X
χ(r)
Gχ1(m2
1)Gχ1(1)Gχ2(m2
2)Gχ2(1) = R(m2
1, d1)R(m2
2, d2)µ(d1d2)χr
0(d1d2)
×X
χ(r)
χ(d2
1d2
2)Gχ(m2
1)Gχ(1)Gχ(m2
2)Gχ(1).(6.34)
20 ALEXANDER SHASHKOV
Because of the term χr
0(d1d2), we may assume that (r, d1) = (r, d2) = 1. Now, we have
Gχ(m2
1)Gχ(1) = X
u1(r)
χ(u1)eu1m2
1
rX
u2(r)
χ(u2)eu2
r
=X
u1(r)
χ(u1)eu1m2
1
rX
u2(r)
χ(u1u2)eu1u2
r
=X
u2(r)
χ(u2)X
u1(r)
eu1(u2+m2
1)
r
=X
u2(r)
χ(u2)R(u2+m2
1, r).(6.35)
Applying this gives
X
χ(r)
χ(d2
1d2
2)Gχ(1)Gχ(m2
1)Gχ(1)Gχ(m2
2) = X
u1(r)
R(u1+m2
1, r)X
u2(r)
R(u2+m2
2, r)X
χ(r)
χ(u1u2d2
1d2
2).
(6.36)
By orthogonality the inner sum equals 0 unless u2=u1d2
1d2
2, in which case it is ϕ(r). Thus
we have that
X
χ(r)
χ(d2
1d2
2)Gχ(1)Gχ(m2
1)Gχ(1)Gχ(m2
2) = ϕ(r)X
u1(r)
R(u1+m2
1, r)R(u1d2
1d2
2+m2
2, r)
=ϕ(r)X
u1(r)
R(u1+m2
1d2
2, r)R(u1+m2
2d2
1, r).
(6.37)
Now, we want to study sums of the type
ψ(n1, n2, r):=X
u1(r)
R(u1+n1, r)R(u1+n2, r).(6.38)
We obtain the following result.
Lemma 6.6. The function ψ(n1, n2, r)defined in (6.38) is multiplicative in rso it can be
defined by its values when r=pα. We have that
ψ(n1, n2, pα) =
pR(n1n2, p)R(n1, p)R(n2, p)α= 1
0α > 1 and p|n1n2
pαR(n1n2, pα) otherwise.
(6.39)
Proof. First we show that ψis multiplicative. Write r=st with (s, t) = 1. As the Ramanujan
sums are multiplicative, R(a, r) = R(a, s)R(a, t). Writing u1=u2s+u3tin the sum gives
ψ(n1, n2, r) = X
u2(t)
X
u3(s)
R(u2s+u3t+n1, s)R(u2s+u3t+n1, t)
×R(u2s+u3t+n2, s)R(u2s+u3t+n2, t)
=X
u2(t)
R(u2s+n1, t)R(u2s+n2, t)X
u3(s)
R(u3t+n1, s)R(u3t+n2, s) (6.40)
LOW LYING ZEROS OF RANKIN-SELBERG L-FUNCTIONS 21
since R(a, r) is periodic modulo r. Doing a change of variables u2u2sand u3u3tgives
ψ(n1, n2, r) = ψ(n1, n2, s)ψ(n1, n2, t) (6.41)
as desired.
Now we evaluate R(d, r) when r=pαwith α1. We can write
ψ(n1, n2, pα) = X
u1(pα)
R(u1+n1, pα)R(u1+n2, pα)X
u1(pα1)
R(u1p+n1, pα)R(u1p+n2, pα).
(6.42)
Call the first sum S1and the second S2. We have that
S1=X
u1(pα)X
u2(pα)
eu1u2+n1u2
pαX
u3(pα)
u1u3+n2u3
pα
=X
u2(pα)
X
u3(pα)
en1u2+n2u3
pαX
u1(pα)
eu1(u2+u3)
pα.(6.43)
The inner sum is 0 unless u2+u3= 0, so we have that
S1=pαX
u2(pα)
eu2(n1n2)
pα=pαR(n1n2, pα).(6.44)
If α= 1, we have that S2=R(n1, p)R(n2, p). If α > 1, we have
S2=X
u2(pα)
X
u3(pα)
en1u2+n2u3
pαX
u1(pα1)
eu1(u2+u3)
pα1.(6.45)
The inner sum is 0 unless u2+u30(pα1). Thus we can write u3=u2+u4pα1so that
S2=pα1X
u2(pα)
eu2(n1n2)
pαX
u4(p)
eu4n2
p=pαR(n1n2, pα) (6.46)
if p|n2, and 0 otherwise. Taking S1S2completes the lemma.
Now, applying Lemma 6.6 to (6.37) and then plugging into (6.34) gives
X
χ(r)
Gχ1(m2
1)Gχ1(1)Gχ2(m2
2)Gχ2(1) = R(m2
1, d1)R(m2
2, d2)µ(d1d2)χr
0(d1d2)ϕ(r)ψ(m2
1d2
2, m2
2d2
1, r).
(6.47)
Applying this to (6.28) and using the identity ψ(m2
1b2
2, m2
2b2
1, N) = ψ(mn
12d2
2, m2
2d2
1, N) we
finally have
A=ψ(m2
1d2
2, m2
2d2
1, Nr)
ϕ(d1d2Nr)R(m2
1, d1)R(m2
2, d2)µ(d1d2)χd1d2
0(r)x
+Ox1/2b1b2m2
1m2
2N3/2(b1b2Nx)ǫ.(6.48)
22 ALEXANDER SHASHKOV
6.4. Evaluating Q.We use summation by parts to express (6.1) in terms of (6.48). Doing
so gives
Q=Z
0ψ(m2
1d2
2, m2
2d2
1, Nr)
ϕ(d1d2Nr)R(m2
1, d1)R(m2
2, d2)µ(d1d2)χd1d2
0(r)x+Ox1/2b1b2m2
1m2
2N3/2(b1b2Nx)ǫ
dJk114πm1x
b1NJk214πm2x
b2N2
xlog Rb
φlog x
log R.(6.49)
Integrating by parts and setting y=xgives that the main term is
4ψ(m2
1d2
2, m2
2d2
1, Nr)
ϕ(d1d2rN )R(m2
1, d1)R(m2
2, d2)µ(d1d2)χd1d2
0(r)
×Z
0
Jk114πm1y
b1NJk214πm2y
b2Nb
φ2log y
log Rdy
log R.(6.50)
Similarly, we can bound the error term by
b1b2m2
1m2
2N3/2(b1b2N)ǫZRσ
0Jk114πm1x
b1NJk214πm2x
b2N
dx
x(6.51)
and using Jk1(x)xgives that this is bounded by
m3
1m3
2N1/2Rσ(b1b2N)ǫm3
1m3
2N2σ1/2+ǫ(b1b2)ǫ.(6.52)
Putting this together gives Proposition 6.1.
7. Surpassing (-1, 1): proof of Theorem 1.2
In this section, we complete the proof of Theorem 1.2 in the case where k16=k2by proving
the following proposition.
Proposition 7.1. Let k16=k2and set
P(k1, k2, N):=1
|H(k1, N, k2, N )|X
f,g
S(fg;φ)
=4π2ik1+k2
ϕ(N)2X
m1,m2Y
1
m1m2X
b1,b21
1
b1b2
Q(m2
1, b1N, m2
2, b2N) + Olog log R
log R.
(7.1)
If supp b
φ(5/4,5/4), we have that
lim
N→∞ P(k1, k2, N) = Z
−∞
φ(x)sin(2πx)
2πx dx 1
2φ(0).(7.2)
The formula for P(k1, k2, N) comes from (2.13) and Proposition 4.1. Combining Proposi-
tion 7.1 with Proposition 3.1 completes the proof of Theorem 1.2 in the case where k16=k2
(after comparing with (1.4)). The key insight which allows us to obtain a closed form for
P(k1, k2, N) is Lemma 7.3, in which we apply Proposition 6.1. In doing so, we are able
to remove many lower order subterms, and the integral which remains involves a product
of Bessel functions with a relatively simple Mellin transform. We evaluate this integral in
Section 7.2 using methods similar to Section 7 of [ILS00].
LOW LYING ZEROS OF RANKIN-SELBERG L-FUNCTIONS 23
7.1. Removing subterms. We begin by using (5.3) to bound terms where b1and b2are
large so that we may apply Lemma 6.1.
Lemma 7.2. If supp(b
φ)(4/3,4/3), we have that
P(k1, k2, N) = 4π2ik1+k2
ϕ(N)2X
m1,m2Y
1
m1m2X
1b1,b2<N
1
b1b2
Q(m2
1, b1N, m2
2, b2N)+Olog log R
log R.
(7.3)
Proof. We need to bound the terms in (7.1) with b1Nor b2N. By (5.3) we have that
Q(m2
1, b1N, m2
2, b2N)(m1m2b1Nb2N)ǫm1m2(b1Nb2N)1/2N3σ
N3σ+ǫ1(b1b2)1/2+ǫ.(7.4)
We can bound the terms with b1Nor b2Nin (7.1) by
N2NǫN3σ+ǫ1X
b1N
1
b3/2ǫ
1X
b21
1
b3/2ǫ
2N3σ+ǫ4.(7.5)
This is O(Nǫ) if σ < 4/3.
We are now ready to remove additional terms and simplify using Proposition 6.1. The
remaining terms are those where m1is a multiple of d1and m2is a multiple of d2.
Lemma 7.3. If supp(b
φ)(5/4,5/4), we have that
P(k1, k2, N) = 16π2ik1+k2ψ(1,1, N )
ϕ(N)3X
mY
1
m2X
d1,d2Y/m
µ(d1d2)
d2
1d2
2X
(r,d1d2)=1
ψ(m2, m2, r)
r2ϕ(r)I(m, r, N)
+Olog log R
log R(7.6)
where
I(m, r, N):=Z
0
Jk114πmy
rN Jk214πmy
rN b
φ2log y
log Rdy
log R.(7.7)
Proof. First we want a general purpose bound for Qusing Lemma 6.1. We have that
ϕ(n)n1ǫ,ψ(n1, n2, r)r2,R(n, d)ϕ(d) and using Jν(x)x, we can bound the
integral piece by
m1m2
b1b2N2ZRσ/2
0
y2dy m1m2(b1b2)1N3σ2.(7.8)
This gives the bound
Q(m2
1, b1N, m2
2, b2N)ψ(m2
1d2
2, m2
2d2
1, N)r1+ǫ(d1d2)1m1m2N3σ+ǫ3+m3
1m3
2N2σ1/2+ǫ(b1b2)ǫ.
(7.9)
Now, if m2
1d2
2m2
2d2
16≡ 0(N), we have that ψ(m2
1d2
2, m2
2d2
1, N)N, so we have the bound
Q(m2
1, b1N, m2
2, b2N)r1+ǫ(d1d2)1m1m2N3σ+ǫ2+m3
1m3
2N2σ1/2+ǫ(b1b2)ǫ.(7.10)
Using this bound and m1, m2Y=Nǫ, we find that we can bound the terms in (7.3) with
m2
1d2
2m2
2d2
16≡ 0 by
N2+ǫX
1r,d1,d2<N N3σ+ǫ2
r3ǫd2
1d2
2
+N2σ+ǫ1/2
r2ǫd1ǫ
1d1ǫ
2N3σ+ǫ4+N2σ5/2+ǫ.(7.11)
24 ALEXANDER SHASHKOV
This is O(Nǫ) if supp(b
φ)(5/4,5/4).
For the remaining terms, we have that m2
1d2
2m2
2d2
10(N), so
Q(m2
1, b1N, m2
2, b2N)r1+ǫ(d1d2)1m1m2N3σ+ǫ1+m3
1m3
2N2σ1/2+ǫ(b1b2)ǫ.(7.12)
If m2
1d2
2m2
2d2
10(N), we have that m1d2 ±m2d1(N). If m1d26=m2d1, we have that
either m1d2> N/2 or m2d1> N/2. Since m1, m2Nǫthis means either d1> N 1ǫ/2
or d2> N1ǫ/2. Thus we can bound the terms in (7.3) with m2
1d2
2m2
2d2
10(N) and
m1d26=m2d1by
N2+ǫX
1r,d2<N X
N1ǫ/2<d1<N N3σ+ǫ1
r3ǫ(d1d2)2+N2σ+ǫ1/2
r2ǫd1ǫ
1d1ǫ
2N3σ+ǫ4+N2σ5/2+ǫ.(7.13)
This is O(Nǫ) if supp(b
φ)(5/4,5/4). Thus the only terms left are those with m1d2=
m2d1. But since (d1, d2) = 1, we must have that m1=md1and m2=md2for some m1.
Using Proposition 6.1, we can write the remaining terms as
P(k1, k2, N) = 16π2ik1+k2ψ(1,1, N )
ϕ(N)3X
mY
1
m2X
d1,d2Y/m
µ(d1d2)
d2
1d2
2X
(r,d1d2)=1
r<min(N/d1,N/d2)
ψ(m2, m2, r)
r2ϕ(r)I(m, r, N)
+Olog log R
log R.(7.14)
Finally, we extend the sum to be over all rwith (r, d1d2) = 1 using Jk1(x)x, which gives
I(m, r, N)m2r2N3σ2.(7.15)
7.2. Evaluating the integral. Unfolding the Fourier transform gives
I(m, r, N) = Z
0
Jk114πmy
rN Jk214πmy
rN Z
−∞
φ(x)y4πix/ log Rdx dy
log R.(7.16)
After doing a change of variables xxlog R, we want to interchange the integrals. However,
the integral does not converge absolutely, so we introduce a parameter ǫ, giving
I(m, r, N) = lim
ǫ0Z
−∞
φ(xlog R)Z
0
Jk114πmy
rN Jk214πmy
rN yǫ4πix dydx. (7.17)
Set
H(ν, µ, s):=Z
0
Jν(x)Jµ(x)xsdx (7.18)
which is essentially a Mellin transform. Setting u= 4πmy/rN gives
I(m, r, N) = lim
ǫ0
rN
4πm Z
−∞
φ(xlog R)4πm
rN ǫ+4πix
H(k11, k21, ǫ + 4πix)dx. (7.19)
Setting s=ǫ+ 4πix, we reinterpret this as a contour integral:
I(m, r, N) = lim
ǫ0
rN
16π2im ZRe(s)=ǫ
φ(sǫ) log R
4πi 4πm
rN s
H(k11, k21, s)ds. (7.20)
LOW LYING ZEROS OF RANKIN-SELBERG L-FUNCTIONS 25
Section 6.8 (33) of [Bat54] (which is given in (6.8) of [KMV02]) gives:
H(ν, µ, s) = 2sΓ(s)Γ( ν+µ+1s
2)
Γ(νµ+1+s
2)Γ(µν+1+s
2)Γ(ν+µ+1+s
2),0<Re(s)< ν +µ+ 1.(7.21)
Using the identity
Γ(1/2 + s)Γ(1/2s) = π
cos πs (7.22)
gives
H(ν, µ, s) = 2scos(πsν+µ
2)Γ(s)Γ(ν+µ+1s
2)Γ(νµ+1s
2)
πΓ(ν+µ+1+s
2)Γ(νµ+1+s
2).(7.23)
For our case where ν=k11 and µ=k21 with k1, k22 even, we have that
H(k11, k21, s) = 2sik1+k2cos(πs
2)Γ(s)Γ(k1+k21s
2)Γ(k1k2+1s
2)
πΓ(k1+k21+s
2)Γ(k1k2+1+s
2),0<Re(s)<3.
(7.24)
We want to interchange the integral with the sum over rin (7.6). In order to make everything
absolutely convergent, we shift the contour to the line Re(s) = 2 + ǫand rearrange, giving
X
(r,d1d2)=1
ψ(m2, m2, r)
r2ϕ(r)I(m, r, N)
= lim
ǫ0
N
16π2im ZRe(s)=2+ǫ
φ(sǫ) log R
4πi 4πm
Ns
χ(s)H(k11, k21, s)ds
(7.25)
where
χ(s) = X
(r,d1d2)=1
ψ(m2, m2, r)
(r)rs(7.26)
is a Dirichlet series absolutely convergent when Re(s)>1.
Now, by Lemma 6.6, we have that
χ(s) = Y
pmd1d21
(p)ps+1
1psY
p|m
pd1d2
1 + 1
ps+1
=ζ(s)ζmd1d2(s)1αmd1d2(s)βm/(m,d1d2)(s1) (7.27)
where
ζd(s):=Y
p|d
1
1ps(7.28)
αd(s):=Y
pd11ps
(p)ps(7.29)
βd(s):=Y
p|d1 + ps.(7.30)
26 ALEXANDER SHASHKOV
By the properties of infinite products (see [SS10], for example), αd(s) converges when
X
pd
1ps
(p)ps<,(7.31)
which is satisfied when Re(s)>1/2. Now, by the functional equation for the Riemann
zeta function we have that
ζ(1 s) = 21sπscos πs
2Γ(s)ζ(s).(7.32)
Thus applying (7.24) and (7.27) to (7.25) gives
X
(r,d1d2)=1
ψ(1,1, r)
r2ϕ(r)I(m, r, N) = lim
ǫ0
Nik1+k2
32π3im ZRe(s)=2+ǫ
φ(sǫ) log R
4πi 4π2m
Ns
×ζmd1d2(s)1αmd1d2(s)βm/(m,d1d2)(s+ 1)ζ(1 s)Γ(k1+k21s
2)Γ(k1k2+1s
2)
Γ(k1+k21+s
2)Γ(k1k2+1+s
2)ds.
(7.33)
We want to shift the contour back to the line Re(s) = ǫ. If k16=k2, there are no poles. If
k1=k2, there is a pole at s= 1 coming from the term Γ(k1k2+1s
2) with residue
φ(1 ǫ) log R
4πi 4π2m
N1
ζmd1d2(1)1αmd1d2(1)βm/(m,d1d2)(2)ζ(0) Γ(k11)
Γ(k1)Γ(1) ·(2).(7.34)
Taking ǫ0, using the functional equation for the Gamma function and ζ(0) = 1/2, the
above equals
4π2m
N(k11)φ(1 ǫ) log R
4πi ζmd1d2(1)1αmd1d2(1)βm/(m,d1d2)(2).(7.35)
We finish treating the contribution from the pole in Section 8. Now, after shifting the
contour, we want to use the Laurent expansion for our functions around s= 0. We have
that
ζ(1 s) = s1+O(1) (7.36)
ζd(s)1=δ(1, d) + O(slog d) (7.37)
αd(s) = 1 + O(s) (7.38)
βd(s+ 1) = ν(d)
d+O(s) (7.39)
and by Section 7 of [ILS00]
Γks
2= Γ k+s
2k
2sh1 + Os
ki.(7.40)
Applying these to (7.33) gives
X
(r,d1d2)=1
ψ(m2, m2, r)
r2ϕ(r)I(m, r, N) = lim
ǫ0
Nik1+k2
32π3im δ(1, md1d2)ZRe(s)=ǫ
φ(sǫ) log R
4πi As/2ds
s
+ONlog(md1d2) log1R(7.41)
LOW LYING ZEROS OF RANKIN-SELBERG L-FUNCTIONS 27
where
A:=(k1+k21)2(k1k2+ 1)2N2
256π4.(7.42)
Applying this to (7.6) gives
P(k1, k2, N) = N ψ(1,1, N )
ϕ(N)32πi lim
ǫ0ZRe(s)=ǫ
φ(sǫ) log R
4πi As/2ds
s+Olog log R
log R.
(7.43)
Now, setting s=ǫ+ 4πix and doing a change of variables ǫ2ǫgives
P(k1, k2, N) = N ψ(1,1, N )
ϕ(N)3lim
ǫ0AǫZ
−∞
φ(xlog R)A2πix dx
ǫ+ 2πix +Olog log R
log R.
(7.44)
By Section 7 of [ILS00], we have that
lim
ǫ0AǫZ
−∞
φ(xlog R)A2πix dx
ǫ+ 2πix =Z
−∞
φ(x)sin(2πx)
2πx dx +1
2φ(0) + Olog1R
(7.45)
and by Lemma 6.6 we have that
Nψ(1,1, N )
ϕ(N)3= 1 + O(N1).(7.46)
Applying this to (7.44), we finally have
P(k1, k2, N) = Z
−∞
φ(x)sin(2πx)
2πx dx 1
2φ(0) + Olog log R
log R(7.47)
as desired.
8. Handling the poles
In this section, we complete the proof of Theorem 1.2 by accounting for the case where
k1=k2. For the remainder of the section, set N=N1=N2and k=k1=k2with kfixed.
We begin by bounding the complementary sum
k,N appearing in (4.3) and (4.5). In
Section 4, we bounded this sum using Lemmas 2.12 and 4.2 and setting Y=Nǫ. The
complementary sums are larger in this case, so we set Y=Nαwith α= 1/14 so that
we can use the Ydecay and extend the support slightly past (1,1). We need to choose
αsmall enough so that the error terms vanish in the limit, but large enough so that the
complementary m1, m2sums vanish. This amounts to solving the system of inequalities
σα/210 (8.1)
2σ+ 6α5/20 (8.2)
with optimal solution α= 1/14, σ= 29/28. The first equation comes from (8.3) and the
second from (8.6).
Using Lemmas 2.12 and 4.2 and dividing by the size of the family, we have that the
contribution from the complementary sums is bounded by
Y1/2+ǫ(kN )ǫ+Y1/2(kN )1Rσ/2Nα/2+ǫ+Nσα/21,(8.3)
which vanishes in the limit if supp b
φ(29/28,29/28).
Now, we repeat the analysis in Section 7 in the case where k1=k2. Our analysis is mostly
the same as in that section, so we omit some details. The main difference is that the size of
28 ALEXANDER SHASHKOV
m1, m2is no longer negligible, as now we have that m1, m2Nαinstead of m1, m2Nǫ.
Our main result is the following.
Proposition 8.1. Let supp(b
φ)(29/28,29/28) and set
P(k, N ):=1
|H(k, N, k, N )|X
f,g
S(fg;φ)
=4π2
ϕ(N)2X
m1,m2Nα
1
m1m2X
b1,b21
1
b1b2
Q(m2
1, b1N, m2
2, b2N) + Olog log R
log R.(8.4)
We have that
P(k, N ) = Z
−∞
φ(x)sin(2πx)
2πx dx 1
2φ(0) + 2
|H
k(N)|φlog R
4πi +Olog log R
log R.(8.5)
Combining Proposition 8.1 with Proposition 3.1 completes the proof of Theorem 1.2 in
the case where k1=k2after comparing with (1.4).
Proof. As in Lemma 7.2, we restrict the sum over b1, b2to be up to N, which introduces an
error term of size N3σ+2α4. This is O(Nǫ) if supp b
φ(29/28,29/28). Next, we remove
all subterms except where m1=md1and m2=md2as in Lemma 7.3. Using (7.9), we find
that this introduces an error term of
N3σ+2α4+N2σ+6α5/2.(8.6)
This is O(Nǫ) if supp b
φ(29/28,29/28).Applying Proposition 6.1 gives
P(k, N ) = 16π2ψ(1,1, N )
ϕ(N)3X
mNα
1
m2X
d1,d2Nα/m
µ(d1d2)
d2
1d2
2X
r<min(N/d1,N/d2)
(r,d1d2)=1
ψ(m2, m2, r)
r2ϕ(r)I(m, r, N)
+Olog log R
log R.(8.7)
Using the bound (7.15), we extend the sum to be over all rwith (r, d1d2) = 1. This introduces
an error term of size
N1X
mNα
1X
d1,d2Nα/m
1
d2
1d2
2X
rN1α
1
r3N3σ2N3σ+3α4.(8.8)
This gives
P(k, N ) = 16π2ψ(1,1, N )
ϕ(N)3X
mNα
1
m2X
d1,d2Nα/m
µ(d1d2)
d2
1d2
2X
(r,d1d2)=1
ψ(m2, m2, r)
r2ϕ(r)I(m, r, N)
+Olog log R
log R.(8.9)
Our analysis of the sum over ris the same as in Section 7.2 except for the contribution from
the pole in (7.33). By (7.35) and the Cauchy residue theorem, the contribution of the pole
LOW LYING ZEROS OF RANKIN-SELBERG L-FUNCTIONS 29
to P(k, N ) is
2πi ×16π2ψ(1,1, N )
ϕ(N)3X
mNα
1
m2X
d1,d2Nα/m
µ(d1d2)
d2
1d2
2
N
32π3im
×4π2m
N(k1)φlog R
4πi ζmd1d2(1)1αmd1d2(1)βm/(m,d1d2)(2).(8.10)
Using that Nψ(1,1, N )(N)31 and (2.13), we can simplify this as
2
|H
k(N)|φlog R
4πi π2
6X
mNα
1
m2X
d1,d2Nα/m
µ(d1d2)
d2
1d2
2
ζmd1d2(1)1αmd1d2(1)βm/(m,d1d2)(2).
(8.11)
Using (5.2) and the bound
ζmd1d2(1)1αmd1d2(1)βm/(m,d1d2)(2) 1,(8.12)
we extend the sum over d1, d2to be over all positive integers. This introduces an error term
of size
Nσ1X
mNα
1
m2X
d11
1
d2
1X
d2>Nα/m
1
d2
2Nσα1+ǫ.(8.13)
Likewise, we extend the sum over mto be over all positive integers, which introduces an
error term of the same size. These error terms are O(Nǫ) if supp b
φ(29/28,29/28).
Thus we have that the contribution from the pole term is
2
|H
k(N)|φlog R
4πi π2
6X
d1,d21
µ(d1d2)
d2
1d2
2X
m1
1
m2ζmd1d2(1)1αmd1d2(1)βm/(m,d1d2)(2) + ONǫ.
(8.14)
Now, we have that
ζmd1d2(1)1αd(1) = ζ(3)1Y
p|d1d2
p3p2
p31Y
q|m
qd1d2
q3q2
q31(8.15)
βm/(m,d1d2)(2) = Y
p|m
pd1d2
p2+ 1
p2.(8.16)
This gives
X
m1
1
m2ζmd1d2(1)1αmd1d2(1)βm/(m,d1d2)(2) = ζ(3)1Y
p|d1d2
p3p2
p31X
m1
1
m2Y
q|m
qd1d2
q3q2
q31
q2+ 1
q2.
(8.17)
Factoring the sum into an Euler product, we have that the above equals
ζ(3)1Y
p|d1d2
p3p2
p31
1
1p2f(p)1Y
q
f(q) (8.18)
where
f(p) = 1 + p3p
p31
p2+ 1
p2
1
p21.(8.19)
30 ALEXANDER SHASHKOV
Thus we have that the sum over d1, d2in (8.14) equals
X
d1,d21
=ζ(3)1Y
q
f(q)X
d1,d21
µ(d1d2)
d2
1d2
2Y
p|d1d2
p3p2
p31
1
1p2f(p)1.(8.20)
We want to count the number of times that a fixed value of d1d2appears in the above sum.
Since d1d2is squarefree, if d1d2has aprime factors, then it appears 2atimes. Thus we have
that X
d1,d21
=ζ(3)1Y
q
f(q)Y
p12
p2
p3p2
p31
1
1p2f(p)1
=ζ(3)1Y
pf(p)2
p2
p3p2
p31
1
1p2.(8.21)
Now, we have that
f(p)2
p2
p3p2
p31
1
1p2=1p2
1p3(8.22)
so that X
d1,d21
=ζ(3)1Y
p
1p2
1p3
=ζ(3)1ζ(3)ζ(2)1(8.23)
=ζ(2)1=6
π2.(8.24)
Plugging this into (8.14) gives that the contribution from the pole is
2
|H
k(N)|φlog R
4πi +ONǫ.(8.25)
Combining this with the analysis of the integral from Section 7.2 gives the proposition.
Department of Mathematics, Williams College
Email address: aes7@williams.edu