# A characterization of the arcsine distribution



Karl Michael Schmidt∗ and Anatoly Zhigljavsky∗

Abstract. The following characterization of the arcsine density is established: let ξ be a r.v. supported on (−1,1); then ξ has the arcsine density p(t) = 1/(π√(1 − t²)), −1 < t < 1, if and only if E log(ξ − x)² has the same value for almost all x ∈ [−1,1].

## 1 Introduction

The arcsine density on the interval (−1,1) is

$$p(t) = \frac{1}{\pi\sqrt{1-t^2}}\,, \qquad -1 < t < 1. \tag{1}$$

To define a r.v. ξ with the arcsine density (1) we can use the formula ξ = cos(πα), where α is a r.v. with uniform distribution on (0,1). The arcsine density has several non-trivial appearances in probability theory and statistics. For example, for a general random walk {Sₙ} satisfying the Lindeberg–Lévy condition, the limiting distribution of $\frac1n \sum_{i=1}^{n} \mathbf{1}[S_i > 0]$ (as n → ∞) has the arcsine density on (0,1); see §11 in Billingsley (1968), Erdős and Kac (1947), Lévy (1948). The arcsine density (1) is an invariant density for a number of maps of the interval (−1,1) onto itself, see e.g. Rivlin (1990), Theorem 4.5. This density is also the limiting density of the roots of orthogonal polynomials defined on (−1,1) and orthogonal with respect to any weight function w(·) continuous on (−1,1), see Ullman (1972), Erdős and Freud (1974), van Assche (1987).
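The sampling formula ξ = cos(πα) is easy to verify empirically. The sketch below (plain Python; a hypothetical self-check, not part of the paper) draws samples this way and compares their empirical c.d.f. with the arcsine c.d.f. F(x) = 1 − arccos(x)/π.

```python
import math
import random

def sample_arcsine(n, rng):
    """Draw n variates with the arcsine density on (-1, 1) via xi = cos(pi * alpha)."""
    return [math.cos(math.pi * rng.random()) for _ in range(n)]

def arcsine_cdf(x):
    """C.d.f. of the arcsine law on (-1, 1): F(x) = 1 - arccos(x)/pi."""
    return 1.0 - math.acos(x) / math.pi

rng = random.Random(0)
xs = sorted(sample_arcsine(100_000, rng))

# Kolmogorov-type discrepancy between the empirical and the analytic c.d.f.
disc = max(abs((i + 1) / len(xs) - arcsine_cdf(x)) for i, x in enumerate(xs))
print(f"max |F_n - F| = {disc:.4f}")  # should be O(n^{-1/2}), i.e. small
```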

In probability theory, the arcsine density has a number of characterizations, see Norton (1975, 1978), Arnold and Groeneveld (1980), Kemperman and Skibinsky (1982). Below we consider a characterization of the arcsine density that is of a different nature than the ones considered in these papers.

Our main result is as follows.

**Theorem.** Let ξ be a r.v. supported on (−1,1). This r.v. has the arcsine density (1) if and only if E log(ξ − x)² has the same value for almost all x ∈ [−1,1].
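To illustrate the Theorem numerically (a hypothetical check, not from the paper), one can approximate E log(ξ − x)² by the integral (1/π)∫₀^π log(cos ϕ − x)² dϕ, obtained by the substitution t = cos ϕ, and observe that the result is essentially independent of x; the common value turns out to be −2 log 2 (see Corollary 1 below).

```python
import math

def expected_log_sq(x, n=100_000):
    """Midpoint-rule approximation of (1/pi) * int_0^pi log((cos(phi) - x)^2) d(phi),
    which equals E log(xi - x)^2 for xi with the arcsine density on (-1, 1)."""
    h = math.pi / n
    total = 0.0
    for i in range(n):
        phi = (i + 0.5) * h
        total += math.log((math.cos(phi) - x) ** 2)
    return total * h / math.pi

values = [expected_log_sq(x) for x in (-0.9, -0.3, 0.0, 0.5, 0.8)]
print([round(v, 4) for v in values])  # all close to -2*log(2) ≈ -1.3863
```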

As a motivation for the above theorem, assume that we have a sequence of points x₁, x₂, ... in the interval (−1,1) that has an asymptotic c.d.f. F(·) in the sense that

$$\lim_{k\to\infty} \frac1k \sum_{j=1}^{k} h(x_j) = \int_{-1}^{1} h(t)\,dF(t) \tag{2}$$

for any continuous function h(·) such that ∫|h(x)| dF(x) < ∞. Consider an associated sequence of polynomials Hₖ(x) = (x − x₁)²(x − x₂)²···(x − xₖ)². Then the result of the Theorem implies that the values of the normalized ratios Rₖ(x,y) = [Hₖ(x)/Hₖ(y)]^{1/k} tend to 1 (as k → ∞) for almost all x, y ∈ (−1,1) if and only if the c.d.f. F(·) has the arcsine density (1). Indeed,

$$\log R_k(x,y) = \log [H_k(x)]^{1/k} - \log [H_k(y)]^{1/k} = \frac1k \sum_{j=1}^{k} \log(x - x_j)^2 - \frac1k \sum_{j=1}^{k} \log(y - x_j)^2 .$$

∗School of Mathematics, Cardiff University, Senghennydd Road, Cardiff, CF24 4YH, UK (SchmidtKM@cf.ac.uk, ZhigljavskyAA@cf.ac.uk)


Using (2), for almost all x, y ∈ [−1,1] we obtain

$$\log\Bigl(\lim_{k\to\infty} R_k(x,y)\Bigr) = \lim_{k\to\infty} \log R_k(x,y) = \int_{-1}^{1} \log(x-t)^2\,dF(t) - \int_{-1}^{1} \log(y-t)^2\,dF(t). \tag{3}$$

The theorem implies that the r.h.s. of (3) is zero for almost all x, y ∈ [−1,1] if and only if the c.d.f. F(·) has the arcsine density (1).

The fact that Rₖ(x,y) → 1 (as k → ∞) for almost all x, y ∈ (−1,1) means that the ratios Hₖ(x)/Hₖ(y) are almost never very large (these ratios are smaller than δᵏ with any δ > 1 for sufficiently large k: k > k∗(x,y)) and very rarely are very close to 0 (they are larger than δᵏ with any δ < 1 and k > k∗(x,y)). Note that if k is fixed and xⱼ = cos(π(2j − 1)/(2k)) for j = 1,...,k, then Hₖ(x) = cₖTₖ²(x), where cₖ is some constant and Tₖ(x) = cos[k arccos(x)] is the k-th Chebyshev polynomial. In this case, the fact that Rₖ(x,y) ≅ 1 (for large k) for typical x, y ∈ (−1,1) follows from the properties of the Chebyshev polynomials.
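The Chebyshev-node example can be checked directly. The sketch below (hypothetical, not from the paper) computes log Rₖ(x,y) as a difference of averages of log(x − xⱼ)², which avoids evaluating the huge products Hₖ themselves.

```python
import math

def log_ratio(x, y, k):
    """log R_k(x, y) for the Chebyshev nodes x_j = cos(pi*(2j - 1)/(2k)),
    computed via averages of log(x - x_j)^2 to avoid overflow in H_k itself."""
    nodes = [math.cos(math.pi * (2 * j - 1) / (2 * k)) for j in range(1, k + 1)]
    avg = lambda z: sum(math.log((z - t) ** 2) for t in nodes) / k
    return avg(x) - avg(y)

for k in (10, 100, 1000):
    print(k, log_ratio(0.3, -0.7, k))  # tends to 0 as k grows
```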

## 2 Auxiliary statements and proofs

The proof of the theorem is based on three lemmas. In Lemma 1 we observe that the expected value of log(ξ − x)² is finite for almost all x ∈ [−1,1]. In Lemma 2 we derive a specific characterization of the uniform measure on the interval [0,π]. To prove Lemma 2 we use a general characterization of the Lebesgue measure on the interval [0,π] established in Lemma 3.

Lemma 3 uses Fourier series, which may seem surprising but is a natural reflection of the intrinsic relationship between the arcsine distribution and trigonometric powers, as apparent in the Chebyshev polynomials.

**Lemma 1.** For any r.v. ξ supported on (−1,1), the expectation E log(ξ − x)² is finite for almost all x ∈ [−1,1].

Proof of Lemma 1. Let F(·) be the c.d.f. of the r.v. ξ and −1 < t < 1. The integral

$$\int_{-1}^{1} \log(t-x)^2\,dx = (1+t)\log(1+t)^2 + (1-t)\log(1-t)^2 - 4$$

is bounded and continuous as a function of t, so the integral

$$\int_{-1}^{1}\int_{-1}^{1} \log(t-x)^2\,dx\,dF(t)$$

exists. By the Fubini–Tonelli theorem,

$$E\log(\xi - x)^2 = \int_{-1}^{1} \log(t-x)^2\,dF(t) \in L^1([-1,1]),$$

so in particular it is finite for almost all x ∈ [−1,1].
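The closed form for ∫₋₁¹ log(t − x)² dx used in the proof can be confirmed numerically; the sketch below (a hypothetical check, not from the paper) compares a midpoint-rule approximation with the stated expression. The log singularity at x = t is integrable, so a plain midpoint rule suffices.

```python
import math

def lhs(t, n=400_000):
    """Midpoint rule for int_{-1}^{1} log((t - x)^2) dx."""
    h = 2.0 / n
    return sum(math.log((t - (-1.0 + (i + 0.5) * h)) ** 2) for i in range(n)) * h

def rhs(t):
    """Closed form from the proof of Lemma 1."""
    return (1 + t) * math.log((1 + t) ** 2) + (1 - t) * math.log((1 - t) ** 2) - 4

diffs = [abs(lhs(t) - rhs(t)) for t in (-0.5, 0.0, 0.7)]
print(max(diffs))  # small: the quadrature matches the closed form
```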

**Lemma 2.** Let ϕ be a r.v. distributed according to a probability measure µ(·) on [0,π]. Then the expectation

$$E_x(\mu) = E\log(\cos\varphi - x)^2$$

is constant for almost all x ∈ [−1,1] if and only if the measure µ(·) is uniform on [0,π]; in this case, the expectation $E_x(\mu)$ has the same value for all x ∈ [−1,1].


Proof of Lemma 2. As x ∈ [−1,1], we can set ψ := arccos x ∈ [0,π]. Let us extend µ to [0,2π] as an even measure (that is, we set µ(A) = µ(2π − A) for all Borel sets A ⊂ [π,2π]) and note that µ([0,2π]) = 2. Using cos ϕ = cos(2π − ϕ) for all ϕ ∈ ℝ, we calculate

$$\begin{aligned}
E_x(\mu) &= \frac12 \int_0^{2\pi} \log(\cos\varphi - x)^2\,\mu(d\varphi)
= \frac12 \int_0^{2\pi} \log\Bigl(2\sin\frac{\varphi-\psi}{2}\,\sin\frac{\varphi+\psi}{2}\Bigr)^{2}\mu(d\varphi) \\
&= \frac12\Bigl(\int_0^{2\pi} 2\log 2\,\mu(d\varphi) + \int_0^{2\pi} \log\Bigl(\sin\frac{\varphi-\psi}{2}\Bigr)^{2}\mu(d\varphi) + \int_0^{2\pi} \log\Bigl(\sin\frac{\varphi+\psi}{2}\Bigr)^{2}\mu(d\varphi)\Bigr) \\
&= 2\log 2 + \int_0^{\pi} \log\bigl(\sin^2(\varphi - \psi/2)\bigr)\,\tilde\mu(d\varphi) + \int_0^{\pi} \log\bigl(\sin^2(\varphi + \psi/2)\bigr)\,\tilde\mu(d\varphi),
\end{aligned} \tag{4}$$

where $\tilde\mu(A) = \frac12\mu(2A)$ for all Borel sets A ⊂ [0,π].

As $\tilde\mu$ and sin² are π-periodic and even, we obtain by making the substitution $\tilde\varphi = \pi - \varphi$:

$$\int_0^{\pi} \log\bigl(\sin^2(\varphi + \psi/2)\bigr)\,\tilde\mu(d\varphi) = \int_0^{\pi} \log\bigl(\sin^2(\pi - \tilde\varphi + \psi/2)\bigr)\,\tilde\mu(d\tilde\varphi) = \int_0^{\pi} \log\bigl(\sin^2(\tilde\varphi - \psi/2)\bigr)\,\tilde\mu(d\tilde\varphi).$$

This implies that the two integrals in (4) are identical and therefore

$$E_x(\mu) = 2\log 2 + 2\int_0^{\pi} \log\bigl(\sin^2(\varphi - \psi/2)\bigr)\,\tilde\mu(d\varphi).$$

Hence the expectation $E_x(\mu)$ is constant for almost all x ∈ [−1,1] if and only if

$$(\log\sin^2 \star\, \tilde\mu)(y) = \int_0^{\pi} \log\bigl(\sin^2(\varphi - y)\bigr)\,\tilde\mu(d\varphi) \quad\text{is constant for almost all } y \in [0,\pi]. \tag{5}$$

The Fourier series for log sin² is not uniformly convergent, but it converges in the L²([0,π]) sense, as log sin² ∈ L²([0,π]) and $\{e^{2ikx} \mid k \in \mathbb{Z}\}$ is an orthonormal basis of this Hilbert space. Moreover, all (complex) Fourier coefficients of log sin² are real and non-zero. Indeed,

$$\int_0^{\pi} \log\bigl(\sin^2\varphi\bigr)\sin(2k\varphi)\,d\varphi = 0 \quad \forall k \in \mathbb{Z}$$

and

$$\int_0^{\pi} \log\bigl(\sin^2\varphi\bigr)\cos(2k\varphi)\,d\varphi = 2\pi\int_0^1 \log(\sin(\pi t))\cos(2\pi k t)\,dt = \begin{cases} -2\pi\log 2, & k = 0\\ -\pi/|k|, & k \in \mathbb{Z}\setminus\{0\}, \end{cases}$$

see formula 4.384.3 in Gradshteyn and Ryzhik (1965). The statement of Lemma 2 now follows from Lemma 3 below.
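The Fourier coefficients of log sin² quoted above can be verified numerically (a hypothetical check, not from the paper); the log singularities at the endpoints are integrable, so a midpoint rule suffices.

```python
import math

def fourier_cos(k, n=400_000):
    """Midpoint rule for int_0^pi log(sin^2(phi)) * cos(2*k*phi) d(phi)."""
    h = math.pi / n
    total = 0.0
    for i in range(n):
        phi = (i + 0.5) * h
        total += math.log(math.sin(phi) ** 2) * math.cos(2 * k * phi)
    return total * h

c0, c1, c2 = fourier_cos(0), fourier_cos(1), fourier_cos(2)
print(c0, c1, c2)  # ≈ -2*pi*log(2), -pi, -pi/2
```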

**Lemma 3.** Let $\tilde\mu$ be a probability measure on [0,π] and f ∈ L²([0,π]) be such that

$$f(x) = \mathop{\mathrm{l.i.m.}}_{N\to\infty} \sum_{k=-N}^{N} \theta_k e^{2ikx} \qquad (x \in [0,\pi])$$

where all Fourier coefficients are non-zero: $\theta_k \ne 0$ ∀k ∈ ℤ. Then, extending f to ℝ as a π-periodic function, the convolution $(f \star \tilde\mu)(\cdot) := \int_0^{\pi} f(\cdot - t)\,\tilde\mu(dt)$ is constant almost everywhere if and only if $\tilde\mu$ is the uniform measure on [0,π]; in this case, $f \star \tilde\mu$ is constant everywhere.


Proof of Lemma 3. Assume

$$(f \star \tilde\mu)(x) = \int_0^{\pi} f(x-t)\,\tilde\mu(dt) = C = \text{const} \quad \text{(for almost all } x \in [0,\pi]).$$

Then, for all k ∈ ℤ \ {0},

$$\begin{aligned}
0 &= \int_0^{\pi} e^{2ikx}\,C\,dx = \int_0^{\pi} e^{2ikx}\Bigl(\int_0^{\pi} f(x-t)\,\tilde\mu(dt)\Bigr)dx
= \int_0^{\pi} \Bigl(\int_0^{\pi} e^{2ikx}\,\mathop{\mathrm{l.i.m.}}_{N\to\infty}\sum_{n=-N}^{N} \theta_n e^{2in(x-t)}\,dx\Bigr)\tilde\mu(dt) \\
&= \int_0^{\pi} \lim_{N\to\infty}\sum_{n=-N}^{N} \theta_n e^{-2int}\Bigl(\int_0^{\pi} e^{2i(k+n)x}\,dx\Bigr)\tilde\mu(dt)
= \int_0^{\pi} \lim_{N\to\infty}\sum_{n=-N}^{N} \theta_n e^{-2int}\,\pi\,\delta_{n,-k}\,\tilde\mu(dt)
= \pi\,\theta_{-k}\int_0^{\pi} e^{2ikt}\,\tilde\mu(dt).
\end{aligned}$$

As $\theta_{-k} \ne 0$ ∀k ∈ ℤ \ {0} we get $\int_0^{\pi} e^{2ikt}\,\tilde\mu(dt) = 0$ ∀k ∈ ℤ \ {0}.

Now set $\mu^\star = \tilde\mu - \tilde\mu([0,\pi])\lambda/\pi$, where λ is the Lebesgue measure on [0,π]. Then $\int_0^{\pi} \mu^\star(dt) = 0$ and

$$\int_0^{\pi} e^{2ikt}\,\mu^\star(dt) = \int_0^{\pi} e^{2ikt}\,\tilde\mu(dt) - \frac{\tilde\mu([0,\pi])}{\pi}\int_0^{\pi} e^{2ikt}\,dt = 0, \quad \forall k \in \mathbb{Z}\setminus\{0\}$$

as shown above, so $\int_0^{\pi} e^{2ikt}\,\mu^\star(dt) = 0$ ∀k ∈ ℤ. As every continuous function on [0,π] can be uniformly approximated by a linear combination of $\{e^{2ikt} \mid k \in \mathbb{Z}\}$ and $\mu^\star$ is finite, this implies

$$\int_0^{\pi} f(t)\,\mu^\star(dt) = 0 \quad \forall f \in C([0,\pi])$$

and hence $\mu^\star = 0$. This completes the proof of the ‘only if’ part of Lemma 3. The converse is obvious, bearing in mind that f is π-periodic and f(x − ·) ∈ L¹([0,π]) for all x ∈ ℝ.
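A discrete analogue may make Lemma 3 concrete (a hypothetical illustration, not from the paper): circular convolution of sampled values of a π-periodic f with uniform weights produces exactly the constant vector of the mean of f, while a non-uniform weight vector generally does not.

```python
import math

def circ_conv(f, w):
    """Circular convolution (f * w)[i] = sum_j f[(i - j) mod n] * w[j]."""
    n = len(f)
    return [sum(f[(i - j) % n] * w[j] for j in range(n)) for i in range(n)]

n = 64
# samples of log(sin^2) on a uniform grid of [0, pi) -- a pi-periodic f with all
# Fourier coefficients non-zero, as in Lemma 2
f = [math.log(math.sin((i + 0.5) * math.pi / n) ** 2) for i in range(n)]

uniform = [1.0 / n] * n
flat = circ_conv(f, uniform)
mean = sum(f) / n
print(max(abs(v - mean) for v in flat))  # ~0: uniform weights give a constant

skew = [2.0 / n if j < n // 2 else 0.0 for j in range(n)]  # still sums to 1
bumpy = circ_conv(f, skew)
print(max(bumpy) - min(bumpy))  # clearly positive: constancy fails
```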

Proof of the Theorem. Consider the expectation

$$I_x = E\log(\xi - x)^2 = \int_{-1}^{1} \frac{\log(t-x)^2}{\pi\sqrt{1-t^2}}\,dt, \tag{6}$$

where the r.v. ξ has the arcsine density (1). By changing the variable t = cos ϕ we obtain

$$I_x = \int_0^{\pi} \frac{\log(\cos\varphi - x)^2}{\pi\sin\varphi}\,\sin\varphi\,d\varphi = \frac{1}{\pi}\int_0^{\pi} \log(\cos\varphi - x)^2\,d\varphi = E_x(\mu_0), \tag{7}$$

where µ₀ is the uniform measure on [0,π]. Hence, by applying Lemma 2 with µ = µ₀, we conclude that $I_x$ has the same value for all x ∈ [−1,1]. This proves the ‘only if’ statement in the Theorem.

To complete the proof of the Theorem, we now show the converse, i.e. that if, for a random variable ξ supported on (−1,1), E log(ξ − x)² has the same value for almost all x ∈ [−1,1], then ξ has the arcsine density. In view of Lemma 1 the constant value of E log(ξ − x)² must be finite. Denote by F(·) the c.d.f. of ξ. Then F(−1) = 0, F(1) = 1 and

$$E\log(\xi - x)^2 = \int_{-1}^{1} \log(t-x)^2\,dF(t) = \int_0^{\pi} \log(\cos\varphi - x)^2\,d\tilde F(\varphi),$$

where t = cos ϕ and $\tilde F(\varphi) = 1 - F(\cos\varphi)$. By Lemma 2, the probability measure generated by $\tilde F$ is uniform on [0,π]; that is, $\tilde F(\varphi) = \varphi/\pi$ ∀ϕ ∈ [0,π]. This implies

$$F(x) = 1 - (\arccos x)/\pi \quad \forall x \in (-1,1),$$

so the density of ξ is F′(x) = 1/(π√(1 − x²)).


## 3 Explicit formulae for the integrals and a generalization

### 3.1 Explicit formulae for the expectations

The value of the expectation (6) can be easily computed based on our result that it is independent of x

in the interval [−1,1].

**Corollary 1.** Let the r.v. ξ have density (1). Then

$$I_x = E\log(\xi - x)^2 = \begin{cases} -2\log 2 & \text{if } |x| \le 1\\[2pt] 2\log\bigl(|x| + \sqrt{x^2 - 1}\bigr) - 2\log 2 & \text{if } |x| \ge 1. \end{cases} \tag{8}$$

Proof. For |x| ≤ 1 we use $I_x = I_0 = -2\log 2$, obtained by evaluating the integral $I_0$:

$$I_x = I_0 = \frac{1}{\pi}\int_0^{\pi} \log\bigl(\sin^2\varphi\bigr)\,d\varphi = \frac{2}{\pi}\int_0^{\pi} \log(\sin\varphi)\,d\varphi = -2\log 2. \tag{9}$$

Let now x ≥ 1. From (9) we have $I_1 = -2\log 2$. Differentiating $I_x$ we get

$$I'_x = \Bigl(\int_{-1}^{1} \frac{\log(x-t)^2}{\pi\sqrt{1-t^2}}\,dt\Bigr)' = \int_{-1}^{1} \frac{2}{\pi(x-t)\sqrt{1-t^2}}\,dt = 2\int_0^{1} \frac{ds}{\pi(x+1-2s)\sqrt{s(1-s)}} = \frac{2}{\sqrt{x^2-1}}$$

(see Gradshteyn and Ryzhik (1965), 3.121.2 — note that interchanging the differentiation and integration is justified as the derivative of the integrand is bounded by an integrable function of t locally uniformly in x, |x| > 1). Therefore, for x > 1,

$$I_{-x} = I_x = I_1 + \int_1^{x} I'_z\,dz = -2\log 2 + \int_1^{x} \frac{2}{\sqrt{z^2-1}}\,dz = 2\log\Bigl(\frac{x + \sqrt{x^2-1}}{2}\Bigr). \tag{10}$$

Combining (9) and (10) we obtain (8).
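Formula (8) can be spot-checked against the defining integral (6) (a hypothetical numerical check, not from the paper), using the substitution t = cos ϕ from (7); for |x| > 1 the integrand is smooth, and for |x| < 1 the log singularity is integrable.

```python
import math

def I_quad(x, n=200_000):
    """Midpoint rule for (1/pi) * int_0^pi log((cos(phi) - x)^2) d(phi) = E log(xi - x)^2."""
    h = math.pi / n
    return sum(math.log((math.cos((i + 0.5) * h) - x) ** 2) for i in range(n)) * h / math.pi

def I_formula(x):
    """Right-hand side of (8)."""
    if abs(x) <= 1:
        return -2 * math.log(2)
    return 2 * math.log(abs(x) + math.sqrt(x * x - 1)) - 2 * math.log(2)

pairs = [(x, I_quad(x), I_formula(x)) for x in (0.4, -0.9, 1.5, 3.0)]
for x, q, f in pairs:
    print(x, q, f)  # the two computed columns agree closely
```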

### 3.2 Arcsine density on an arbitrary interval

The arcsine density on an interval (a,b) is

$$p(t) = \frac{1}{\pi\sqrt{(t-a)(b-t)}}\,, \qquad a < t < b. \tag{11}$$

If a = −1 and b = 1 then (11) reduces to (1). A simple change of variables generalizes the Theorem to the following statement.

**Corollary 2.** Let −∞ < a < b < ∞ and let ζ be a r.v. supported on the interval (a,b). The r.v. ζ has the arcsine density (11) if and only if E log(ζ − z)² has the same value for almost all z ∈ [a,b].

Corollary 1 is generalized as follows.

**Corollary 3.** Let −∞ < a < b < ∞ and let the r.v. ζ have density (11). Then

$$E\log(\zeta - z)^2 = \begin{cases} 2\log(b-a) - 4\log 2 & \text{if } a \le z \le b\\[2pt] 2\log(b-a) + 2\log\bigl(|x_z| + \sqrt{x_z^2 - 1}\bigr) - 4\log 2 & \text{if } z < a \text{ or } z > b, \end{cases} \tag{12}$$

where $x_z = -1 + 2(z-a)/(b-a)$.


Proof. By changing variables t = −1 + 2(u − a)/(b − a) and x = −1 + 2(z − a)/(b − a) in the integral

$$\int_a^b \frac{\log(u-z)^2}{\pi\sqrt{(u-a)(b-u)}}\,du = E\log(\zeta - z)^2,$$

we obtain $E\log(\zeta - z)^2 = 2\log(b-a) - 2\log 2 + I_x$, where $I_x$ is defined in (8). This immediately implies (12).
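Corollary 3 can likewise be checked numerically (a hypothetical sketch, not from the paper): representing ζ = a + (b − a)(1 + cos ϕ)/2 with ϕ uniform on (0, π) turns E log(ζ − z)² into a one-dimensional integral.

```python
import math

def E_quad(a, b, z, n=200_000):
    """Midpoint rule for (1/pi) * int_0^pi log((a + (b-a)*(1+cos(phi))/2 - z)^2) d(phi),
    i.e. E log(zeta - z)^2 for zeta with the arcsine density on (a, b)."""
    h = math.pi / n
    total = 0.0
    for i in range(n):
        zeta = a + (b - a) * (1 + math.cos((i + 0.5) * h)) / 2
        total += math.log((zeta - z) ** 2)
    return total * h / math.pi

def E_formula(a, b, z):
    """Right-hand side of (12), with x_z = -1 + 2*(z - a)/(b - a)."""
    xz = -1 + 2 * (z - a) / (b - a)
    base = 2 * math.log(b - a) - 4 * math.log(2)
    if a <= z <= b:
        return base
    return base + 2 * math.log(abs(xz) + math.sqrt(xz * xz - 1))

cases = [((a, b, z), E_quad(a, b, z), E_formula(a, b, z))
         for a, b, z in ((0, 4, 5), (-2, 1, 0), (1, 3, -1))]
for abz, q, f in cases:
    print(abz, q, f)  # the two computed columns agree closely
```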


## References

[1] B.C. Arnold and R.A. Groeneveld. Some properties of the arcsine distribution. J. Amer. Statist. Assoc., 75(369):173–175, 1980.

[2] P. Billingsley. Convergence of Probability Measures. John Wiley & Sons, New York, 1968.

[3] P. Erdős and G. Freud. On orthogonal polynomials with regularly distributed zeros. Proc. London Math. Soc. (3), 29:521–537, 1974.

[4] P. Erdős and M. Kac. On the number of positive sums of independent random variables. Bull. Amer. Math. Soc., 53:1011–1020, 1947.

[5] I.S. Gradshteyn and I.M. Ryzhik. Table of Integrals, Series, and Products. Academic Press, New York, 1965.

[6] J.H.B. Kemperman and M. Skibinsky. On the characterization of an interesting property of the arcsin distribution. Pacific J. Math., 103(2):457–465, 1982.

[7] P. Lévy. Processus Stochastiques et Mouvement Brownien. Gauthier-Villars, Paris, 1948.

[8] R.M. Norton. On properties of the arc sine law. Sankhyā Ser. A, 37(2):306–308, 1975.

[9] R.M. Norton. Moment properties and the arc sine law. Sankhyā Ser. A, 40(2):192–198, 1978.

[10] T.J. Rivlin. Chebyshev Polynomials. John Wiley & Sons, New York, second edition, 1990.

[11] J.L. Ullman. On the regular behaviour of orthogonal polynomials. Proc. London Math. Soc. (3), 24:119–148, 1972.

[12] W. Van Assche. Asymptotics for Orthogonal Polynomials. Springer-Verlag, Berlin, 1987.

