Brown et al. (2006) derive a Stein-type inequality for the multivariate Student's $t$-distribution. We generalize their result to the family of (multivariate) generalized hyperbolic distributions and derive a lower bound for the variance of a function of a random variable.

In this paper, we discuss downside risk optimization in the context of portfolio selection. We derive explicit solutions for the optimal portfolios that minimize downside risk with respect to constant targets and random targets. In doing so, we propose using portfolio amplitude, a measure new to the literature, to characterize portfolio selection under downside risk optimization. In particular, we demonstrate the mechanism by which the random target exerts its impact on the system and alters the optimal solution. Our results explain why investors prefer holding some specific assets when following random targets and account for some special investment strategies, such as constructing a stock portfolio that follows a bond index. We present numerical examples of stock portfolio management to support our theoretical results.

Now is an opportune time to revisit Stein's (1973) beautiful lemma all over again. It is especially so since researchers have recently begun discovering a great deal of potential in the closely related Stein's unbiased risk estimate (SURE) in a number of directions involving novel applications. In recognition of the importance and elegance of Stein's (1973, 1981) research, we include a selective review from the field. The process of rereading Stein's lemma and reliving its awesome simplicity as well as versatility rekindled a number of personal thoughts, queries, and observations. A number of new and interesting insights are highlighted in the spirit of providing updated and futuristic versions of the celebrated lemma, largely focusing on univariate continuous distributions not belonging to an exponential family. In doing so, a number of new identities have emerged for parent populations that are continuous but highly non-normal. Last, but not least, we argue that there is no big foundational difference between the basic messages obtained via Stein's identity and the Cramér-Rao identity.

We introduce a new class of multivariate elliptically symmetric distributions, including elliptically symmetric logistic distributions and Kotz-type distributions. We investigate their various probabilistic properties, including marginal distributions, conditional distributions, linear transformations, characteristic functions, and a dependence measure, from the perspective of the inconsistency property. In addition, we provide a real-data example to show that the new distributions have reasonable flexibility.

Inspired by the work of Adcock, Landsman, and Shushi (2019), which established Stein's lemma for generalized skew-elliptical random vectors, we derive Stein-type lemmas for location-scale mixtures of generalized skew-elliptical random vectors. Some special cases, such as the location-scale mixture of elliptical random vectors, the location-scale mixture of generalized skew-normal random vectors, and the location-scale mixture of normal random vectors, are also considered. As an application in risk theory, we give a result for optimal portfolio selection.

We describe a construction of Stein kernels using moment maps, which are solutions to a variant of the Monge-Ampère equation. As a consequence, we show how regularity bounds on these maps control the rate of convergence in the classical central limit theorem, and derive new rates in Kantorovitch-Wasserstein distance in the log-concave situation, with explicit polynomial dependence on the dimension.

In this letter we derive the multivariate Stein's lemma for truncated elliptical random vectors. The results generalize Stein's lemma for elliptical random vectors given in Landsman and Nešlehová (2008) and the tail Stein's lemma given in Landsman and Valdez (2016). We give conditional Stein-type inequalities and a conditional version of Siegel's formula for elliptical distributions, thereby generalizing results obtained in Landsman et al. (2013) and Landsman et al. (2015). Furthermore, we show applications of the main results to risk theory.

This note consists of two parts. In the first part, we provide a pedagogic review of the multivariate generalized hyperbolic (MGH) distribution. We show that this probability family is closed under marginalization, conditioning, and linear transformations; however, this property does not hold for its subclasses. In the second part, we obtain a Stein-type inequality in the context of the MGH distribution. Moreover, we apply the Stein-type inequality to prove a lower bound for Var[h(X)]. In particular, we present examples where X belongs to some well-known subclasses of the MGH family.

Stein operators are differential operators which arise within the so-called
Stein's method for stochastic approximation. We propose a new mechanism for
constructing such operators for arbitrary (continuous or discrete) parametric
distributions with continuous dependence on the parameter. We provide explicit
general expressions for location, scale and skewness families. We also provide
a general expression for discrete distributions. For specific choices of target
distributions (including the Gaussian, Gamma and Poisson) we compare the
operators hereby obtained with those provided by the classical approaches from
the literature on Stein's method. We use properties of our operators to provide
upper and lower variance bounds (only lower bounds in the discrete case) on
functionals $h(X)$ of random variables $X$ following parametric distributions.
These bounds are expressed in terms of the first two moments of the derivatives
(or differences) of $h$. We provide general variance bounds for location, scale
and skewness families and apply our bounds to specific examples (namely the
Gaussian, exponential, Gamma and Poisson distributions). The results obtained
via our techniques are systematically competitive with, and sometimes improve
on, the best bounds available in the literature.

The implementation of sound quantitative risk models is a vital concern for all financial institutions, and this trend has accelerated in recent years with regulatory processes such as Basel II. This book provides a comprehensive treatment of the theoretical concepts and modelling techniques of quantitative risk management and equips readers--whether financial risk analysts, actuaries, regulators, or students of quantitative finance--with practical tools to solve real-world problems. The authors cover methods for market, credit, and operational risk modelling; place standard industry approaches on a more formal footing; and describe recent developments that go beyond, and address main deficiencies of, current practice. The book's methodology draws on diverse quantitative disciplines, from mathematical finance through statistics and econometrics to actuarial mathematics. Main concepts discussed include loss distributions, risk measures, and risk aggregation and allocation principles. A main theme is the need to satisfactorily address extreme outcomes and the dependence of key risk drivers. The techniques required derive from multivariate statistical analysis, financial time series modelling, copulas, and extreme value theory. A more technical chapter addresses credit derivatives. Based on courses taught to masters students and professionals, this book is a unique and fundamental reference that is set to become a standard in the field.

Let X be an absolutely continuous random variable from the integrated Pearson
family and assume that X has finite moments of any order. Equivalently, X is a
linear (non-constant) transformation of Y where Y follows a Normal, a Beta or a
Gamma density. Using some properties of the associated orthonormal polynomial
system we provide a class of strengthened Chernoff-type variance bounds.

For an absolutely continuous (integer-valued) r.v. $X$ of the Pearson (Ord)
family, we show that, under natural moment conditions, a Stein-type covariance
identity of order $k$ holds (cf. [Goldstein and Reinert, J. Theoret. Probab. 18
(2005) 237--260]). This identity is closely related to the corresponding
sequence of orthogonal polynomials, obtained by a Rodrigues-type formula, and
provides convenient expressions for the Fourier coefficients of an arbitrary
function. Application of the covariance identity yields some novel expressions
for the corresponding lower variance bounds for a function of the r.v. $X$,
expressions that seem to be known only in particular cases (for the Normal, see
[Houdré and Kagan, J. Theoret. Probab. 8 (1995) 23--30]; see also
[Houdré and Pérez-Abreu, Ann. Probab. 23 (1995) 400--419] for
corresponding results related to the Wiener and Poisson processes). Some
applications are also given.

When two random variables have a bivariate normal distribution, Stein's lemma (Stein, 1973, 1981) provides, under certain regularity conditions, an expression for the covariance of the first variable with a function of the second. An extension of the lemma due to Liu (1994), as well as to Stein himself, establishes an analogous result for a vector of variables which has a multivariate normal distribution. The extension leads in turn to a generalization of Siegel's (1993) formula for the covariance of an arbitrary element of a multivariate normal vector with its minimum element. This article describes extensions to Stein's lemma for the case when the vector of random variables has a multivariate skew-normal distribution. The corollaries to the main result include an extension of Siegel's formula. This article was motivated originally by the issue of portfolio selection in finance. Under multivariate normality, the implication of Stein's lemma is that all rational investors will select a portfolio which lies on Markowitz's mean-variance efficient frontier. A consequence of the extension to Stein's lemma is that under multivariate skew-normality, rational investors will select a portfolio which lies on a single mean-variance-skewness efficient hyper-surface.

De Moivre gave a simple closed form expression for the mean absolute deviation of the binomial distribution. Later authors showed that similar closed form expressions hold for many of the other classical families. We review the history of these identities and extend them to obtain summation formulas for the expectations of all polynomials orthogonal to the constants.
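De Moivre's identity referred to above can be stated and checked directly: for $X \sim \mathrm{Binomial}(n, p)$, $E|X - np| = 2\nu \binom{n}{\nu} p^{\nu} q^{n-\nu+1}$ with $\nu = \lfloor np \rfloor + 1$. This standard form of the identity is assumed here, as it is not quoted in the abstract; a minimal sketch comparing it against direct enumeration:

```python
from math import comb, floor

def binomial_mad_closed_form(n, p):
    """De Moivre's closed form for E|X - np|, X ~ Binomial(n, p)."""
    q = 1.0 - p
    nu = floor(n * p) + 1          # smallest integer strictly above the mean
    return 2.0 * nu * comb(n, nu) * p ** nu * q ** (n - nu + 1)

def binomial_mad_direct(n, p):
    """E|X - np| by direct enumeration over the support {0, ..., n}."""
    q = 1.0 - p
    return sum(comb(n, k) * p ** k * q ** (n - k) * abs(k - n * p)
               for k in range(n + 1))
```

For example, with n = 3 and p = 0.4 both routines return 0.6912, and they agree to floating-point precision on the other test cases tried.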

An inequality due to Chernoff is generalized, and a related Cramér-Rao-type inequality is studied.

We provide some necessary and some sufficient conditions for the validity of the inequality of Simes in models with elliptical dependencies. Necessary conditions are presented in terms of sufficient conditions for the reverse Simes inequality. One application of our main results concerns the problem of model misspecification, in particular the case that the assumption of Gaussianity of test statistics is violated. Since our sufficient conditions require non-negativity of correlation coefficients between test statistics, we also develop two exact tests for vectors of correlation coefficients and compare their powers in computer simulations.
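For context, the Simes inequality concerns the probability that some sorted p-value $p_{(i)}$ falls below $i\alpha/n$; under independent uniform p-values the global test based on this rule has size exactly $\alpha$. A minimal sketch of the rejection rule (illustrative only, not the exact tests developed in the paper):

```python
import random

def simes_reject(pvalues, alpha):
    """Simes global test: reject when some sorted p-value p_(i) <= i * alpha / n."""
    n = len(pvalues)
    return any(p <= (i + 1) * alpha / n for i, p in enumerate(sorted(pvalues)))

# Size check under independent uniform p-values (where the Simes identity is exact)
random.seed(0)
alpha, n, reps = 0.05, 5, 20_000
rate = sum(simes_reject([random.random() for _ in range(n)], alpha)
           for _ in range(reps)) / reps
# rate is close to alpha
```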

A formula is provided for the covariance of $X_1$ with the minimum of the multivariate normal vector $(X_1, X_2, \ldots, X_n)$. The resulting expression has an intuitive interpretation as the weighted average of the $n$ covariances of $X_1$ with $X_1, X_2, \ldots, X_n$, where the $i$th weight equals the probability that $X_i$ is the minimum. The formula is surprising for several reasons. First, results involving extrema of order statistics in the presence of correlation are usually much more complex. Second, an attempt at a direct proof runs into difficulties involving the nontrivial distinction between a conditional expectation and an ordinary covariance. Finally, although these difficulties are genuine on a term-by-term basis, they cancel out when weighted and combined. The geometric interpretation in $n$-dimensional space is that although the vector of covariances of $X_1$ with $(X_1, X_2, \ldots, X_n)$ is in general different from the vector of appropriate conditional expectations, these vectors always have the same projection onto the vector of probabilities that $X_i$ is the smallest, $i = 1, 2, \ldots, n$. The formula arises in the analysis of commodity and financial futures contracts.
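Siegel's formula as described above reads $\operatorname{Cov}(X_1, \min_i X_i) = \sum_i \operatorname{Cov}(X_1, X_i)\, P(X_i \text{ is the minimum})$. A minimal Monte Carlo sketch, with a mean vector and covariance matrix chosen here purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
mu = np.array([0.0, 0.5, -0.3])              # illustrative means (assumed)
A = np.array([[1.0, 0.3, 0.0],
              [0.3, 1.0, 0.2],
              [0.0, 0.2, 1.0]])              # illustrative covariance (assumed)
X = rng.multivariate_normal(mu, A, size=400_000)

m = X.min(axis=1)                            # the minimum component
lhs = np.cov(X[:, 0], m)[0, 1]               # Cov(X_1, min)

which = X.argmin(axis=1)
probs = np.bincount(which, minlength=3) / len(X)   # P(X_i is the minimum)
rhs = (A[0, :] * probs).sum()                # sum_i Cov(X_1, X_i) P(X_i = min)
# lhs and rhs agree up to Monte Carlo error
```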

When two random variables are bivariate normally distributed, Stein's original lemma allows one to express conveniently the covariance of the first variable with a function of the second. Landsman and Nešlehová (2008) extend this seminal result to the family of multivariate elliptical distributions. In this paper we use the technique of conditioning to provide a more elegant proof of their result. In doing so, we also present a new proof of the classical linear regression result that holds for the elliptical family.
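The bivariate form of Stein's lemma mentioned above is $\operatorname{Cov}(X, g(Y)) = \operatorname{Cov}(X, Y)\, E[g'(Y)]$ for jointly normal $(X, Y)$ and suitably smooth $g$. A minimal Monte Carlo sketch with the test function $g(y) = y^3$ (the covariance values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.6],
                [0.6, 2.0]])                 # illustrative covariance (assumed)
X, Y = rng.multivariate_normal([0.0, 0.0], cov, size=1_000_000).T

def g(y):
    return y ** 3                            # smooth test function

def g_prime(y):
    return 3 * y ** 2

lhs = np.cov(X, g(Y))[0, 1]                  # Cov(X, g(Y))
rhs = cov[0, 1] * g_prime(Y).mean()          # Cov(X, Y) * E[g'(Y)]
# both sides are near 3.6 = 0.6 * 3 * Var(Y)
```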

The characteristics of the one-dimensional generalized hyperbolic distributions are discussed, and the questions of maximum likelihood estimation for the hyperbolic distribution are considered in some detail. Various ways of approximating a theoretical distribution by one of the hyperbolic or generalized hyperbolic distributions are outlined and as an application of this an approximation is obtained to the distribution of the sum of a sample of observations from the hyperbolic distribution.

This article presents two expectation identities and a series of applications. One of the identities uses the heat equation, and we show that in some families of distributions the identity characterizes the normal distribution. We also show that it is essentially equivalent to Stein's identity. The applications we have presented are of a broad range. They include exact formulas and bounds for moments, an improvement and a reversal of Jensen's inequality, linking unbiased estimation to elliptic partial differential equations, applications to decision theory and Bayesian statistics, and an application to counting matchings in graph theory. Some examples are also given.
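The heat-equation identity alluded to above can be made concrete: $u(x, t) = E[g(x + \sqrt{t}\, Z)]$, with $Z$ standard normal, satisfies $\partial u/\partial t = \tfrac{1}{2}\, \partial^2 u/\partial x^2$. For the particular choice $g(x) = x^3$ one gets $u(x, t) = x^3 + 3xt$ in closed form, so both sides of the heat equation equal $3x$. A minimal sketch comparing the closed form against a Monte Carlo estimate:

```python
import random

def u_closed(x, t):
    """u(x, t) = E[(x + sqrt(t) Z)^3], from E[Z] = E[Z^3] = 0 and E[Z^2] = 1."""
    return x ** 3 + 3 * x * t

def u_mc(x, t, reps=200_000, seed=1):
    """Monte Carlo version of the same expectation."""
    rng = random.Random(seed)
    s = t ** 0.5
    return sum((x + s * rng.gauss(0.0, 1.0)) ** 3 for _ in range(reps)) / reps

# Heat-equation check for this g: du/dt = 3x and (1/2) d^2u/dx^2 = (1/2)(6x) = 3x,
# so u satisfies u_t = (1/2) u_xx identically.
```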

After a brief review of the work on Chernoff-type inequalities, bounds for the variance of functions g(X, Y) of a bivariate random vector (X, Y) are derived when the marginal distribution of X is normal, gamma, binomial, negative binomial, or Poisson, assuming that the variance of g(X, Y) is finite. These results follow as a consequence of the Chernoff inequality, the Stein identity for the normal distribution, and their analogues for other distributions as obtained by Cacoullos, Papathanasiou, Prakasa Rao, and Sreehari, among others. Some interesting inequalities in real analysis are derived as special cases.

In the course of solving a variational problem, Chernoff (Ann. Probab. 9 (1981) 533) obtained what appears to be a specialized inequality for a variance, namely, that for a standard normal variable $X$, $\operatorname{Var}[g(X)] \geq E^2[g'(X)]$. However, both the simplicity and usefulness of the inequality have generated a plethora of extensions, as well as alternative proofs. All previous papers have focused on a single function. We provide here an inequality for the covariance matrix of $k$ functions, which leads to a matrix inequality in the sense of Loewner.
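The Loewner-order statement can be probed numerically: with $\mu_i = E[g_i'(X)]$, the covariance matrix of $(g_1(X), \ldots, g_k(X))$ dominates $\mu\mu^{\mathsf{T}}$. A minimal sketch for two functions chosen here purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal(500_000)

g1, d1 = np.sin(X), np.cos(X)    # g_1 and its derivative (illustrative choice)
g2, d2 = X ** 2, 2 * X           # g_2 and its derivative (illustrative choice)

C = np.cov(np.vstack([g1, g2]))              # covariance matrix of (g_1(X), g_2(X))
mu = np.array([d1.mean(), d2.mean()])        # (E[g_1'(X)], E[g_2'(X)])

gap = C - np.outer(mu, mu)                   # should be positive semidefinite
eigvals = np.linalg.eigvalsh(gap)            # eigenvalues >= 0 up to Monte Carlo error
```

The diagonal entries recover the scalar bound, e.g. $\operatorname{Var}[\sin X] \approx 0.432 \geq (E[\cos X])^2 \approx 0.368$.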

The CreditRisk+ model is one of the industry standards for estimating the credit default risk for a portfolio of credit loans. The natural parameterization of this model requires the default probability to be apportioned using a number of (non-negative) factor loadings. However, in practice only default correlations are often available but not the factor loadings. In this paper we investigate how to deduce the factor loadings from a given set of default correlations. This is a novel approach and it requires the non-negative factorization of a positive semi-definite matrix which is by no means trivial. We also present a numerical optimization algorithm to achieve this.
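The paper's own optimization algorithm is not reproduced here. As a generic illustration of the underlying problem (finding non-negative $W$ with $WW^{\mathsf{T}} \approx C$ for a given matrix $C$), the projected-gradient sketch below uses a standard technique assumed for illustration, not necessarily the authors' method:

```python
import numpy as np

def nonneg_factorize(C, rank, steps=8000, lr=1e-3, seed=0):
    """Projected-gradient sketch for C ~= W @ W.T with W >= 0 (illustrative only)."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(0.1, 1.0, size=(C.shape[0], rank))
    for _ in range(steps):
        grad = 4.0 * (W @ W.T - C) @ W        # gradient of ||C - W W^T||_F^2
        W = np.maximum(W - lr * grad, 0.0)    # projection keeps loadings non-negative
    return W

# Toy correlation-style matrix; it is diagonally dominant, so an exact
# non-negative factorization exists (its Cholesky factor is itself non-negative).
C = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.4],
              [0.2, 0.4, 1.0]])
W = nonneg_factorize(C, rank=3)
err = np.linalg.norm(C - W @ W.T)             # small residual on this toy example
```

As the abstract notes, such a factorization need not exist for an arbitrary positive semi-definite matrix, which is what makes the problem non-trivial in general.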

The distribution of a continuous r.v. $X$ is characterized by the function $w$ appearing in the lower bound $\sigma^2 E^2[w(X)g'(X)]$ for the variance of a function $g(X)$; for a discrete $X$, $g'(x)$ is replaced by $\Delta g(x) = g(x+1) - g(x)$. The same characterizations are obtained by considering the upper bound $\sigma^2 E\{w(X)[g'(X)]^2\} \geq \operatorname{Var}[g(X)]$. The special case $w(x) \equiv 1$ gives the normal [A. A. Borovkov and S. A. Utev, Teor. Veroyatn. Primen. 28, No. 2, 209-218 (1983; Zbl 0511.60016); English translation in Theory Probab. Appl. 28, 219-228 (1984)] and the Poisson [B. L. S. Prakasa Rao and M. Sreehari, Aust. J. Stat. 29, 38-41 (1987; Zbl 0624.62021)] cases. The results extend to independent random variables.

Using the representation theorem and inversion formula for Stieltjes transforms, we give a simple proof of the infinite divisibility of the student $t$-distribution for all degrees of freedom by showing that $x^{-\frac{1}{2}}K_\nu(x^{\frac{1}{2}})/K_{\nu+1}(x^{\frac{1}{2}})$ is completely monotonic for $\nu \geqq -1$. Our approach proves the stronger and new result, that $x^{-\frac{1}{2}}K_\nu (x^{\frac{1}{2}}) /K_{\nu+1}(x^{\frac{1}{2}})$ is a completely monotonic function of $x$ for all real $\nu$. We also derive a new integral representation.

The following inequality is useful in studying a variation of the classical isoperimetric problem. Let $X$ be normally distributed with mean 0 and variance 1. If $g$ is absolutely continuous and $g(X)$ has finite variance, then $E \{\lbrack g'(X)\rbrack^2\} \geq \operatorname{Var}\lbrack g(X)\rbrack$ with equality if and only if $g(X)$ is linear in $X$. The proof involves expanding $g(X)$ in Hermite polynomials.
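The inequality stated above, $\operatorname{Var}[g(X)] \leq E\{[g'(X)]^2\}$ with equality iff $g$ is linear, is easy to probe by simulation; a minimal sketch with the nonlinear choice $g(x) = x^2$, where the two sides converge to 2 and 4:

```python
import random

random.seed(7)
xs = [random.gauss(0.0, 1.0) for _ in range(200_000)]

def mean(vals):
    return sum(vals) / len(vals)

gx = [x ** 2 for x in xs]                          # g(x) = x^2, so g'(x) = 2x
var_g = mean([v ** 2 for v in gx]) - mean(gx) ** 2 # Var[g(X)]   -> 2
e_dg2 = mean([(2 * x) ** 2 for x in xs])           # E[(g'(X))^2] -> 4

# For linear g(x) = a*x + b the two sides coincide: Var[g(X)] = a^2 = E[a^2].
```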

Chernoff (1981) obtained an upper bound for the variance of a function of a standard normal random variable, using Hermite polynomials. Chen (1980) gave a different proof, using the Cauchy-Schwarz inequality, and extended the inequality to the case of a multivariate normal. Here it is shown how similar upper bounds can be obtained for other distributions, including discrete ones. Moreover, by using a variation of the Cramér-Rao inequality, analogous lower bounds are given for the variance of a function of a random variable which satisfies the usual regularity conditions. Matrix inequalities are also obtained. All these bounds involve the first two moments of derivatives or differences of the function.

Upper bounds for the distance in variation between an arbitrary probability measure and the standard normal one are established via some integrodifferential functionals including information. The results are illustrated by gamma- and $t$-distributions. Moreover, as a by-product, another proof of the central limit theorem is obtained.

Estimation of the means of independent normal random variables is considered, using sum of squared errors as loss. An unbiased estimate of risk is obtained for an arbitrary estimate, and certain special classes of estimates are then discussed. The results are applied to smoothing by use of moving averages and to trimmed analogs of the James-Stein estimate. A suggestion is made for calculating approximate confidence sets for the mean vector centered at an arbitrary estimate.
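A concrete instance of the unbiased risk estimate: for the James-Stein estimate $\delta(x) = (1 - (p-2)/\lVert x \rVert^2)\,x$, the unbiased risk estimate is $p - (p-2)^2/\lVert x \rVert^2$, so its average over replications matches the Monte Carlo risk. A minimal sketch (the mean vector is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(11)
p = 8
theta = np.linspace(-1.0, 1.0, p)            # illustrative mean vector (assumed)
reps = 100_000
X = theta + rng.standard_normal((reps, p))   # X ~ N(theta, I_p), one row per replication

norm2 = (X ** 2).sum(axis=1)
delta = (1.0 - (p - 2) / norm2)[:, None] * X            # James-Stein estimate

true_risk = ((delta - theta) ** 2).sum(axis=1).mean()   # Monte Carlo risk of delta
sure = (p - (p - 2) ** 2 / norm2).mean()                # averaged unbiased risk estimate
# true_risk and sure agree up to Monte Carlo error, both below the MLE risk p
```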

For the family of multivariate normal distribution functions, Stein's Lemma presents a useful tool for calculating covariances between functions of the component random variables. Motivated by applications to corporate finance, we prove a generalization of Stein's Lemma to the family of elliptical distributions.

Stein's Lemma, important in statistics and also in capital asset pricing models, is generalized to the elliptical class of distributions. The case where the covariance matrix of the underlying distribution does not exist is also considered. The results are illustrated by the multivariate generalized Student-t family.

Schmidt, R., 2003. Dependencies of extreme events in finance: modeling, statistics, and data analysis. Dissertation, University of Ulm.

Stein, C.M., 1973. Estimation of the mean of a multivariate normal distribution. In: Proc. Prague Symp. Asymptotic Statist., pp. 345–381.

Bodnar, T., Dickhaus, T., 2010. On the Simes inequality in elliptical models. Preprint, available online at wias-berlin.de.

Brown, L.D., DasGupta, A., Haff, L.R., Strawderman, W.E., 2006. The heat equation and Stein's identity: Connections, applications. J. Statist. Plann. Inference 136, 2254–2278.