
# Multivariate Chebyshev Inequalities

The Annals of Mathematical Statistics 01/1960; DOI:10.1214/aoms/1177705673

ABSTRACT If $X$ is a random variable with $EX^2 = \sigma^2$, then by Chebyshev's inequality, \begin{equation*}\tag{1.1}P\{|X| \geqq \epsilon\} \leqq \sigma^2/\epsilon^2.\end{equation*} If in addition $EX = 0$, one obtains a corresponding one-sided inequality \begin{equation*}\tag{1.2}P\{X \geqq \epsilon\} \leqq \sigma^2/(\epsilon^2 + \sigma^2)\end{equation*} (see, e.g., [8], p. 198). In each case a distribution for $X$ is known that results in equality, so the bounds are sharp. By a change of variable we can take $\epsilon = 1$. There are many possible multivariate extensions of (1.1) and (1.2). Those providing bounds for $P\{\max_{1 \leqq j \leqq k} |X_j| \geqq 1\}$ and $P\{\max_{1 \leqq j \leqq k} X_j \geqq 1\}$ have been investigated in [3, 5, 9] and [4], respectively. We consider here various inequalities involving (i) the minimum component or (ii) the product of the components of a random vector. Derivations and proofs of sharpness for these two classes of extensions show remarkable similarities. Some of each type occur as special cases of a general theorem in Section 3. Bounds are given under various assumptions concerning variances, covariances and independence.

Notation. We denote the vector $(1, \cdots, 1)$ by $e$ and $(0, \cdots, 0)$ by $0$; the dimensionality will be clear from the context. If $x = (x_1, \cdots, x_k)$ and $y = (y_1, \cdots, y_k)$, we write $x \geqq y$ ($x > y$) to mean $x_j \geqq y_j$ ($x_j > y_j$), $j = 1, 2, \cdots, k$. If $\Sigma = (\sigma_{ij}): k \times k$ is a moment matrix, for convenience we write $\sigma_{jj} = \sigma^2_j$, $j = 1, \cdots, k$. Unless otherwise stated, we assume that $\Sigma$ is positive definite.
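
As a quick numerical illustration of the sharpness claims, the sketch below simulates the standard equality-attaining distributions for (1.1) and (1.2) with $\epsilon = 1$ and $\sigma^2 \leqq 1$; the particular three-point and two-point constructions are the classical ones, assumed here rather than quoted from the paper.

```python
# Sketch: check that the classical extremal distributions attain the
# Chebyshev bounds (1.1) and (1.2) at epsilon = 1 (assumes sigma2 <= 1).
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 0.25          # assumed variance for the illustration
n = 10**6

# (1.1): X = +/-1 with probability sigma2/2 each, 0 otherwise.
# Then E X^2 = sigma2 and P{|X| >= 1} = sigma2, matching the bound.
x = rng.choice([-1.0, 0.0, 1.0], size=n,
               p=[sigma2 / 2, 1 - sigma2, sigma2 / 2])
print(np.mean(np.abs(x) >= 1), "vs bound", sigma2)

# (1.2): X = 1 w.p. sigma2/(1+sigma2), X = -sigma2 w.p. 1/(1+sigma2).
# Then E X = 0, E X^2 = sigma2, and P{X >= 1} = sigma2/(1+sigma2).
p1 = sigma2 / (1 + sigma2)
y = rng.choice([1.0, -sigma2], size=n, p=[p1, 1 - p1])
print(np.mean(y >= 1), "vs bound", sigma2 / (1 + sigma2))
```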

##### Article: Chebyshev's inequality for Banach-space-valued random elements
ABSTRACT: In this paper, we obtain a new generalization of Chebyshev's inequality for random elements taking values in a separable Banach space.
Statistics &amp; Probability Letters 05/2012; 82(5):925-931.
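
The abstract above does not state the form of the new inequality; the baseline it generalizes is the norm version $P\{\|X\| \geqq \epsilon\} \leqq E\|X\|^2/\epsilon^2$, which follows from Markov's inequality applied to $\|X\|^2$. A minimal sketch, using $\mathbb{R}^2$ with the Euclidean norm as a stand-in Banach space and a test distribution chosen purely for illustration:

```python
# Baseline norm form of Chebyshev's inequality for a vector-valued X:
# P{||X|| >= eps} <= E||X||^2 / eps^2, i.e., Markov applied to ||X||^2.
# This is the classical version the paper generalizes, not its new bound.
import numpy as np

rng = np.random.default_rng(1)
n, eps = 10**6, 2.0
X = rng.standard_normal((n, 2))          # illustrative choice of law
norms = np.linalg.norm(X, axis=1)
print("empirical P{||X|| >= eps}:", np.mean(norms >= eps))
print("Markov/Chebyshev bound  :", np.mean(norms**2) / eps**2)
```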
##### Article: Mixed 0-1 Linear Programs Under Objective Uncertainty: A Completely Positive Representation.
ABSTRACT: In this paper, we analyze mixed 0-1 linear programs under objective uncertainty. The mean vector and the second-moment matrix of the nonnegative objective coefficients are assumed to be known, but the exact form of the distribution is unknown. Our main result shows that computing a tight upper bound on the expected value of a mixed 0-1 linear program in maximization form with random objective is a completely positive program. This naturally leads to semidefinite programming relaxations that are solvable in polynomial time but provide weaker bounds. The result can be extended to deal with uncertainty in the moments and more complicated objective functions. Examples from order statistics and project networks highlight the applications of the model. Our belief is that the model will open an interesting direction for future research in discrete and linear optimization under uncertainty.
Operations Research 01/2011; 59:713-728.
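
To see why the supremum over distributions matters here, the toy sketch below constructs two nonnegative objective-coefficient distributions with identical mean vector and second-moment matrix that nonetheless yield different expected optima. The feasible set (pick exactly one of two items) and both distributions are illustrative assumptions, not the paper's completely positive formulation.

```python
# Two nonnegative objective distributions with the same mean vector (1, 1)
# and second-moment matrix [[2, 1], [1, 2]] can give different expected
# optima, motivating bounds that hold over all moment-consistent laws.
# Toy feasible set: choose exactly one of two items, so the optimum is
# max(c1, c2).
import numpy as np

rng = np.random.default_rng(2)
n = 10**6

# Distribution A: each c_i is 0 or 2 with probability 1/2 (iid).
cA = rng.choice([0.0, 2.0], size=(n, 2))
# Distribution B: each c_i is 0.5 w.p. 0.8 or 3 w.p. 0.2 (iid).
cB = rng.choice([0.5, 3.0], size=(n, 2), p=[0.8, 0.2])

for name, c in [("A", cA), ("B", cB)]:
    print(name, "E c =", c.mean(axis=0).round(3),
          "E[max] =", c.max(axis=1).mean().round(3))
# E[max] is 1.5 under A but 1.4 under B despite the matched moments.
```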
##### Article: Bounds on the Bayes Error Given Moments
ABSTRACT: We show how to compute lower bounds for the supremum Bayes error if the class-conditional distributions must satisfy moment constraints, where the supremum is with respect to the unknown class-conditional distributions. Our approach makes use of Curto and Fialkow's solutions for the truncated moment problem. The lower bound shows that the popular Gaussian assumption is not robust in this regard. We also construct an upper bound for the supremum Bayes error by constraining the decision boundary to be linear.
IEEE Transactions on Information Theory 05/2011; 58(6).
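
For context, the sketch below computes the Bayes error under the Gaussian assumption for two univariate classes with equal priors and a shared variance, a standard closed form rather than the paper's moment-based bound; the paper's point is that the supremum Bayes error given only moments can exceed this Gaussian value.

```python
# Bayes error under the Gaussian assumption: for classes N(mu0, sigma^2)
# and N(mu1, sigma^2) with equal priors, the optimal rule thresholds at
# the midpoint, giving error Phi(-|mu1 - mu0| / (2*sigma)).
# (A baseline only; the paper's moment-based supremum bounds, which show
# this assumption is not robust, are not reproduced here.)
from math import erf, sqrt

def gaussian_bayes_error(mu0: float, mu1: float, sigma: float) -> float:
    """Bayes error for N(mu0, sigma^2) vs N(mu1, sigma^2), equal priors."""
    z = abs(mu1 - mu0) / (2 * sigma)
    return 0.5 * (1 - erf(z / sqrt(2)))   # standard normal CDF at -z

print(gaussian_bayes_error(0.0, 2.0, 1.0))  # ~0.1587
```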