Article

Multivariate Chebyshev Inequalities

The Annals of Mathematical Statistics 12/1960; DOI: 10.1214/aoms/1177705673

ABSTRACT If $X$ is a random variable with $EX^2 = \sigma^2$, then by Chebyshev's inequality, \begin{equation*}\tag{1.1}P\{|X| \geqq \epsilon\} \leqq \sigma^2/\epsilon^2.\end{equation*} If in addition $EX = 0$, one obtains a corresponding one-sided inequality \begin{equation*}\tag{1.2}P\{X \geqq \epsilon\} \leqq \sigma^2/(\epsilon^2 + \sigma^2)\end{equation*} (see, e.g., [8] p. 198). In each case a distribution for $X$ is known that results in equality, so that the bounds are sharp. By a change of variable we can take $\epsilon = 1$. There are many possible multivariate extensions of (1.1) and (1.2). Those providing bounds for $P\{\max_{1 \leqq j \leqq k} |X_j| \geqq 1\}$ and $P\{\max_{1 \leqq j \leqq k} X_j \geqq 1\}$ have been investigated in [3, 5, 9] and [4], respectively. We consider here various inequalities involving (i) the minimum component or (ii) the product of the components of a random vector. Derivations and proofs of sharpness for these two classes of extensions show remarkable similarities. Some of each type occur as special cases of a general theorem in Section 3. Bounds are given under various assumptions concerning variances, covariances and independence.

Notation. We denote the vector $(1, \cdots, 1)$ by $e$ and $(0, \cdots, 0)$ by $0$; the dimensionality will be clear from the context. If $x = (x_1, \cdots, x_k)$ and $y = (y_1, \cdots, y_k)$, we write $x \geqq y$ ($x > y$) to mean $x_j \geqq y_j$ ($x_j > y_j$), $j = 1, 2, \cdots, k$. If $\Sigma = (\sigma_{ij}): k \times k$ is a moment matrix, for convenience we write $\sigma_{jj} = \sigma^2_j, j = 1, \cdots, k$. Unless otherwise stated, we assume that $\Sigma$ is positive definite.
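
The abstract notes that distributions attaining equality in (1.1) and (1.2) are known, so the bounds are sharp. The following minimal numerical sketch (in Python, not taken from the paper) checks this with the standard discrete extremal distributions; the particular values of $\epsilon$ and $\sigma^2$ are arbitrary illustrative choices.

```python
# Minimal numerical check (not from the paper) that the classical bounds
# (1.1) and (1.2) are attained by simple discrete distributions.
# The extremal distributions below are standard textbook constructions.

eps, sigma2 = 1.0, 0.25   # epsilon = 1 as in the abstract; sigma2 <= eps**2 needed for (1.1)

# One-sided bound (1.2), assuming EX = 0:
#   X = eps with probability p, X = -sigma2/eps with probability 1 - p,
#   where p = sigma2 / (eps**2 + sigma2).
p = sigma2 / (eps**2 + sigma2)
support, probs = [eps, -sigma2 / eps], [p, 1 - p]
print("EX   =", sum(x * q for x, q in zip(support, probs)))        # 0.0
print("EX^2 =", sum(x * x * q for x, q in zip(support, probs)))    # sigma2
print("P{X >= eps} =", p, "  bound =", sigma2 / (eps**2 + sigma2))

# Two-sided bound (1.1):
#   X = +eps or -eps, each with probability sigma2 / (2 eps**2), X = 0 otherwise.
q = sigma2 / (2 * eps**2)
support, probs = [eps, -eps, 0.0], [q, q, 1 - 2 * q]
print("EX^2 =", sum(x * x * r for x, r in zip(support, probs)))    # sigma2
print("P{|X| >= eps} =", 2 * q, "  bound =", sigma2 / eps**2)
```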

  • ABSTRACT: In this paper we propose an output-feedback Model Predictive Control (MPC) algorithm for linear discrete-time systems affected by a possibly unbounded additive noise and subject to probabilistic constraints. When the noise distribution is unknown, the chance constraints on the input and state variables are reformulated by means of the Chebyshev-Cantelli inequality. The recursive feasibility of the proposed algorithm is guaranteed, and the convergence of the state to a suitable neighborhood of the origin is proved under mild assumptions. Implementation issues are thoroughly addressed, showing that, with a proper choice of the design parameters, the computational load can be made similar to that of a standard stabilizing MPC algorithm. Two examples are discussed in detail, with the aim of providing insight into the performance achievable by the proposed control scheme. (A minimal sketch of this Cantelli-based constraint tightening appears after this list.)
    08/2014;
  • ABSTRACT: This paper presents a new feature selection framework based on the L0-norm, in which the data are summarized by the moments of the class-conditional densities. The discontinuity of the L0-norm, however, makes the optimal solution difficult to find. We apply a suitable approximation of the L0-norm, together with a bound on the misclassification probability involving the mean and covariance of the dataset, to derive a robust difference-of-convex-functions (DC) programming formulation, which is solved effectively with a DC optimization algorithm. A kernelized version of the problem is also presented. Experimental results on both real and synthetic datasets show that the proposed formulations can select fewer features than the traditional Minimax Probability Machine and the L1-norm state. (The mean-covariance misclassification bound used here is illustrated in the second sketch after this list.)
    International Journal of Computational Intelligence Systems 03/2014; 7(1):12-24.
  • ABSTRACT: In this paper, a robust support vector regression (RSVR) method with uncertain input and output data is studied. First, the data uncertainties are investigated under a stochastic framework and two linear robust formulations are derived. Linear formulations robust to ellipsoidal uncertainties are also considered from a geometric perspective. Second, kernelized RSVR formulations are established for nonlinear regression problems. Both the linear and nonlinear formulations are converted to second-order cone programming problems, which can be solved efficiently by interior point methods. Simulations demonstrate that the proposed method outperforms existing RSVRs in the presence of both input and output data uncertainties. (The second-order cone form of such mean-variance constraints is also shown in the second sketch after this list.)
    IEEE Transactions on Neural Networks and Learning Systems 11/2012; 23(11):1690-1700.
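
The first item above reformulates chance constraints through the Chebyshev-Cantelli inequality, i.e., the one-sided bound (1.2). The sketch below is only the generic scalar tightening that this bound yields, not the MPC algorithm of the cited paper: if $Y$ has mean $\mu$ and standard deviation $\sigma$, then $P\{Y > b\} \leqq \delta$ whenever $b \geqq \mu + \sigma\sqrt{(1 - \delta)/\delta}$. The Gaussian noise used in the empirical check is an arbitrary illustrative choice.

```python
import math
import random

def cantelli_threshold(mu, sigma, delta):
    """Smallest b guaranteed by the Chebyshev-Cantelli inequality to satisfy
    P{Y > b} <= delta for every distribution of Y with mean mu and
    standard deviation sigma; any larger b is also safe."""
    return mu + sigma * math.sqrt((1.0 - delta) / delta)

# Illustrative check with one particular (Gaussian) noise model; the bound
# itself is distribution-free and therefore typically conservative.
mu, sigma, delta = 0.0, 1.0, 0.1
b = cantelli_threshold(mu, sigma, delta)
samples = [random.gauss(mu, sigma) for _ in range(100_000)]
violations = sum(y > b for y in samples) / len(samples)
print("tightened threshold b =", round(b, 3))               # 3.0
print("empirical P{Y > b}    =", violations, " <= delta =", delta)
```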
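
The second and third items rest on the same bound applied to a linear functional $a'X$ of a random vector with known mean $\mu$ and covariance $\Sigma$: for $b > a'\mu$, $P\{a'X \geqq b\} \leqq a'\Sigma a/(a'\Sigma a + (b - a'\mu)^2)$ over all such distributions, and requiring this bound to be at most $\delta$ is a second-order cone condition. The sketch below (NumPy assumed; all numbers are hypothetical) illustrates both readings.

```python
import numpy as np

def worst_case_tail(a, mu, Sigma, b):
    """Distribution-free bound on P{a'X >= b} over all random vectors X with
    mean mu and covariance Sigma (Chebyshev-Cantelli applied to a'X)."""
    m = float(a @ mu)             # mean of a'X
    v = float(a @ Sigma @ a)      # variance of a'X
    t = b - m
    if t <= 0:
        return 1.0                # the bound carries no information below the mean
    return v / (v + t * t)

# Hypothetical numbers: bound the chance that a linear score a'X exceeds b.
a = np.array([1.0, -0.5])
mu = np.array([0.0, 1.0])
Sigma = np.array([[1.0, 0.2],
                  [0.2, 0.5]])
b = 2.0
print("worst-case P{a'X >= b} <=", round(worst_case_tail(a, mu, Sigma, b), 4))

# Requiring the bound to be at most delta is the second-order cone condition
#   sqrt((1 - delta)/delta) * ||Sigma^{1/2} a|| <= b - a'mu,
# which is how such constraints enter convex (SOCP) formulations.
delta = 0.2
kappa = np.sqrt((1.0 - delta) / delta)
L = np.linalg.cholesky(Sigma)     # Sigma = L @ L.T, so ||L.T @ a||^2 = a' Sigma a
print("SOC condition holds:", bool(kappa * np.linalg.norm(L.T @ a) <= b - a @ mu))
```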