Preprint

Analytic solution to variance optimization with no short-selling


Abstract

A large portfolio of independent returns is optimized under the variance risk measure with a ban on short positions. The no-short-selling constraint acts as an asymmetric ℓ1 regularizer, setting some of the portfolio weights to zero and keeping the out-of-sample estimator for the variance bounded, avoiding the divergence present in the non-regularized case. However, the susceptibility, i.e. the sensitivity of the optimal portfolio weights to changes in the returns, diverges at a critical value r = N/T = 2, where N is the number of assets and T the length of the time series. This means that a ban on short positions does not prevent the phase transition in the optimization problem; it merely shifts the critical point from its non-regularized value of r = 1 to r = 2. At r = 2 the out-of-sample estimator for the portfolio variance stays finite and the estimated in-sample variance vanishes. We have performed numerical simulations to support the analytic results and found perfect agreement for N/T < 2. Numerical experiments on finite-size samples of symmetrically distributed returns show that above this critical point the probability of finding solutions with zero in-sample variance increases rapidly with increasing N, becoming one in the large-N limit. However, these are not legitimate solutions of the optimization problem, as they are infinitely sensitive to any change in the input parameters; in particular, they will wildly fluctuate from sample to sample. We also calculate the distribution of the optimal weights over the random samples and show that the regularizer preferentially removes the assets with large variances, in accord with one's natural expectation.
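The setup described in the abstract can be reproduced numerically. Below is a minimal sketch of a no-short-selling minimum-variance optimization on synthetic independent returns; the data-generating process, sample sizes, and the SLSQP solver are all illustrative choices, not the authors' replica calculation. Note how the non-negativity bounds push some weights exactly to zero, the ℓ1-like effect the paper analyzes.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, T = 10, 40                                # N assets, T observations (r = N/T = 0.25)
# Independent returns with heterogeneous variances (illustrative)
returns = rng.standard_normal((T, N)) * rng.uniform(0.5, 2.0, size=N)
cov = np.cov(returns, rowvar=False)          # sample covariance estimate

# Minimize the in-sample variance w' C w subject to the budget
# constraint sum(w) = 1 and the no-short-selling bounds w >= 0.
res = minimize(lambda w: w @ cov @ w,
               x0=np.full(N, 1.0 / N),
               jac=lambda w: 2 * cov @ w,
               bounds=[(0, None)] * N,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}],
               method="SLSQP")
w = res.x
print("weights:", np.round(w, 3))
print("fraction of weights at zero:", np.mean(w < 1e-6))
```

For r above the critical value discussed in the abstract, the same optimization admits solutions with vanishing in-sample variance, but their weights fluctuate wildly from sample to sample.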


References
Article
Full-text available
The optimization of a large random portfolio under the expected shortfall risk measure with an ℓ2 regularizer is carried out by analytical calculation for the case of uncorrelated Gaussian returns. The regularizer reins in the large sample fluctuations and the concomitant divergent estimation error, and eliminates the phase transition where this error would otherwise blow up. In the data-dominated region, where the number N of different assets in the portfolio is much less than the length T of the available time series, the regularizer plays a negligible role even if its strength η is large, while in the opposite limit, where the size of samples is comparable to, or even smaller than the number of assets, the optimum is almost entirely determined by the regularizer. We construct the contour map of estimation error on the N/T versus η plane and find that for a given value of the estimation error the gain in N/T due to the regularizer can reach a factor of about four for a sufficiently strong regularizer.
Article
Full-text available
We show that including a term which accounts for finite liquidity in portfolio optimization naturally mitigates the instabilities that arise in the estimation of coherent risk measures on finite samples. This is because taking into account the impact of trading in the market is mathematically equivalent to introducing a regularization on the risk measure. We show here that the impact function determines which regularizer is to be used. We also show that any regularizer based on the norm ℓp with p > 1 makes the sensitivity of coherent risk measures to estimation error disappear, while regularizers with p < 1 do not. The ℓ1 norm represents a border case: its "soft" implementation does not remove the instability, but rather shifts its locus, whereas its "hard" implementation (including hard limits or a ban on short selling) eliminates it. We demonstrate these effects on the important special case of expected shortfall (ES) which has recently become the global regulatory market risk measure.
Research
Full-text available
The determination of correlation matrices is typically affected by in-sample noise. We propose a simple, yet optimal, estimator of the true underlying correlation matrix and show that this new cleaning recipe outperforms all existing estimators in terms of the out-of-sample risk of synthetic portfolios.
Article
Full-text available
The contour maps of the error of historical resp. parametric estimates for large random portfolios optimized under the risk measure Expected Shortfall (ES) are constructed. Similar maps for the sensitivity of the portfolio weights to small changes in the returns as well as the VaR of the ES-optimized portfolio are also presented, along with results for the distribution of portfolio weights over the random samples and for the out-of-sample and in-the-sample estimates for ES. The contour maps allow one to quantitatively determine the sample size (the length of the time series) required by the optimization for a given number of different assets in the portfolio, at a given confidence level and a given level of relative estimation error. The necessary sample sizes invariably turn out to be unrealistically large for any reasonable choice of the number of assets and the confidence level. These results are obtained via analytical calculations based on methods borrowed from the statistical physics of random systems, supported by numerical simulations.
Article
Full-text available
We use a replica approach to deal with portfolio optimization problems. A given risk measure is minimized using empirical estimates of asset values correlations. We study the phase transition which happens when the time series is too short with respect to the size of the portfolio. We also study the noise sensitivity of portfolio allocation when this transition is approached. We consider explicitly the cases where the absolute deviation and the conditional value-at-risk are chosen as a risk measure. We show how the replica method can study a wide range of risk measures, and deal with various types of time series correlations, including realistic ones with volatility clustering.
Book
Full-text available
Risk control and derivative pricing have become of major concern to financial institutions, and there is a real need for adequate statistical tools to measure and anticipate the amplitude of the potential moves of the financial markets. Summarising theoretical developments in the field, this 2003 second edition has been substantially expanded. Additional chapters now cover stochastic processes, Monte-Carlo methods, Black-Scholes theory, the theory of the yield curve, and Minority Game. There are discussions on aspects of data analysis, financial products, non-linear correlations, and herding, feedback and agent based models. This book has become a classic reference for graduate students and researchers working in econophysics and mathematical finance, and for quantitative analysts working on risk management, derivative pricing and quantitative trading strategies.
Article
Full-text available
In portfolio analysis, uncertainty about parameter values leads to suboptimal portfolio choices. The resulting loss in the investor's utility is a function of the particular estimator chosen for expected returns. So, this is a problem of simultaneous estimation of normal means under a well-specified loss function. In this situation, as Stein has shown, the classical sample mean is inadmissible. This paper presents a simple empirical Bayes estimator that should outperform the sample mean in the context of a portfolio. Simulation analysis shows that these Bayes-Stein estimators provide significant gains in portfolio selection problems.
Article
Full-text available
It is shown that the axioms for coherent risk measures imply that whenever there is a pair of portfolios such that one of them dominates the other in a given sample (which happens with finite probability even for large samples), then there is no optimal portfolio under any coherent measure on that sample, and the risk measure diverges to minus infinity. This instability was first discovered in the special example of Expected Shortfall which is used here both as an illustration and as a springboard for generalization.
Article
Full-text available
We consider the problem of portfolio optimization in the presence of market impact, and derive optimal liquidation strategies. We discuss in detail the problem of finding the optimal portfolio under Expected Shortfall (ES) in the case of linear market impact. We show that, once market impact is taken into account, a regularized version of the usual optimization problem naturally emerges. We characterize the typical behavior of the optimal liquidation strategies, in the limit of large portfolio sizes, and show how the market impact removes the instability of ES in this context.
Article
Full-text available
We consider sample covariance matrices S_N = (1/p) Σ_N^(1/2) X_N X_N^* Σ_N^(1/2), where X_N is an N × p real or complex matrix with i.i.d. entries with finite 12th moment and Σ_N is an N × N positive definite matrix. In addition we assume that the spectral measure of Σ_N almost surely converges to some limiting probability distribution as N → ∞ and p/N → γ > 0. We quantify the relationship between sample and population eigenvectors by studying the asymptotics of functionals of the type (1/N) Tr(g(Σ_N)(S_N − zI)^(−1)), where I is the identity matrix, g is a bounded function and z is a complex number. This is then used to compute the asymptotically optimal bias correction for sample eigenvalues, paving the way for a new generation of improved estimators of the covariance matrix and its inverse.
Article
Full-text available
We review connections between phase transitions in high-dimensional combinatorial geometry and phase transitions occurring in modern high-dimensional data analysis and signal processing. In data analysis, such transitions arise as abrupt breakdown of linear model selection, robust data fitting or compressed sensing reconstructions, when the complexity of the model or the number of outliers increases beyond a threshold. In combinatorial geometry these transitions appear as abrupt changes in the properties of face counts of convex polytopes when the dimensions are varied. The thresholds in these very different problems appear in the same critical locations after appropriate calibration of variables. These thresholds are important in each subject area: for linear modelling, they place hard limits on the degree to which the now-ubiquitous high-throughput data analysis can be successful; for robustness, they place hard limits on the degree to which standard robust fitting methods can tolerate outliers before breaking down; for compressed sensing, they define the sharp boundary of the undersampling/sparsity tradeoff in undersampling theorems. Existing derivations of phase transitions in combinatorial geometry assume the underlying matrices have independent and identically distributed (iid) Gaussian elements. In applications, however, it often seems that Gaussianity is not required. We conducted an extensive computational experiment and formal inferential analysis to test the hypothesis that these phase transitions are universal across a range of underlying matrix ensembles. The experimental results are consistent with an asymptotic large-n universality across matrix ensembles; finite-sample universality can be rejected.
Article
Full-text available
We consider the problem of portfolio selection within the classical Markowitz mean-variance framework, reformulated as a constrained least-squares regression problem. We propose to add to the objective function a penalty proportional to the sum of the absolute values of the portfolio weights. This penalty regularizes (stabilizes) the optimization problem, encourages sparse portfolios (i.e., portfolios with only few active positions), and allows accounting for transaction costs. Our approach recovers as special cases the no-short-positions portfolios, but does allow for short positions in limited number. We implement this methodology on two benchmark data sets constructed by Fama and French. Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by Sharpe ratio, is consistently and significantly better than that of the naïve evenly weighted portfolio.
Article
Full-text available
We address the problem of portfolio optimization under the simplest coherent risk measure, i.e. the expected shortfall. As is well known, one can map this problem into a linear programming setting. For some values of the external parameters, when the available time series is too short, portfolio optimization is ill-posed because it leads to unbounded positions, infinitely short on some assets and infinitely long on others. As first observed by Kondor and coworkers, this phenomenon is actually a phase transition. We investigate the nature of this transition by means of a replica approach.
Article
Full-text available
According to standard portfolio theory, the tangency portfolio is the only efficient stock portfolio. However, empirical studies show that an investment in the global minimum variance portfolio often yields better out-of-sample results than does an investment in the tangency portfolio and suggest investing in the global minimum variance portfolio. But little is known about the distributions of the weights and return parameters of this portfolio. Our contribution is to determine these distributions. By doing so, we answer several important questions in asset management.
Article
This note corrects an error in the proof of Proposition 2 of “Risk Reduction in Large Portfolios: Why Imposing the Wrong Constraint Helps” that appeared in the Journal of Finance, August 2003.
Article
In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics, and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.
Article
We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the dimension of the portfolio and T the length of the time series used to estimate the covariance matrix. At the critical point r = 1 a phase transition is taking place. The out-of-sample estimation error blows up at this point as 1/(1 − r), independently of the covariance matrix or the expected return, displaying the universality not only of the critical index, but also the critical point. As a conspicuous illustration of the dangers of in-sample estimates, the optimal in-sample variance is found to vanish at the critical point inversely proportional to the divergent estimation error.
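The 1/(1 − r) blow-up of the out-of-sample error quoted above can be checked by direct simulation. A sketch for the simplest case of a unit true covariance matrix, where the true minimum-variance portfolio is equal weights with variance 1/N; the estimator, sample sizes, and repetition count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def oos_ratio(N, T, n_rep=200):
    """Average out-of-sample variance of the sample minimum-variance
    portfolio, relative to the true optimum 1/N (true covariance = I)."""
    ratios = []
    for _ in range(n_rep):
        X = rng.standard_normal((T, N))
        C = np.cov(X, rowvar=False)           # noisy sample covariance
        w = np.linalg.solve(C, np.ones(N))
        w /= w.sum()                          # budget constraint w'1 = 1
        ratios.append((w @ w) / (1.0 / N))    # true oos variance vs optimum
    return np.mean(ratios)

for r in (0.2, 0.5, 0.8):
    N, T = int(100 * r), 100
    print(f"r = {r}: simulated {oos_ratio(N, T):.2f}  vs  1/(1-r) = {1/(1-r):.2f}")
```

As r approaches 1 from below the simulated ratio grows without bound, in line with the universal 1/(1 − r) divergence.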
Article
In the present paper, the minimal investment risk for a portfolio optimization problem with imposed budget and investment concentration constraints is considered using replica analysis. Since the minimal investment risk is influenced by the investment concentration constraint (as well as the budget constraint), it is intuitive that the minimal investment risk for the problem with an investment concentration constraint be larger than that without the constraint (that is, with only the budget constraint). Moreover, a numerical experiment shows the effectiveness of our proposed analysis.
Article
Many statistical applications require an estimate of a covariance matrix and/or its inverse. When the matrix dimension is large compared to the sample size, which happens frequently, the sample covariance matrix is known to perform poorly and may suffer from ill-conditioning. There already exists an extensive literature concerning improved estimators in such situations. In the absence of further knowledge about the structure of the true covariance matrix, the most successful approach so far, arguably, has been shrinkage estimation. Shrinking the sample covariance matrix to a multiple of the identity, by taking a weighted average of the two, turns out to be equivalent to linearly shrinking the sample eigenvalues to their grand mean, while retaining the sample eigenvectors. Our paper extends this approach by considering nonlinear transformations of the sample eigenvalues. We show how to construct an estimator that is asymptotically equivalent to an oracle estimator suggested in previous work. As demonstrated in extensive Monte Carlo simulations, the resulting bona fide estimator can result in sizeable improvements over the sample covariance matrix and also over linear shrinkage.
Article
The contour map of estimation error of Expected Shortfall (ES) is constructed. It allows one to quantitatively determine the sample size (the length of the time series) required by the optimization under ES of large institutional portfolios for a given size of the portfolio, at a given confidence level and a given estimation error.
Book
The value of robust statistical methods in portfolio construction arises because asset returns and other financial quantities often contain outliers. Outliers are data values that are well-separated from the bulk of the data values and are not predicted by univariate or multivariate normal distributions. Under normal distribution models, such an outlier occurs with exceedingly small probability. For example, if we fit a normal distribution to S&P 500 daily returns for various periods of time prior to the stock market crash of 1987, we find that the probability of occurrence of an event of that magnitude is so small that one would have to wait much longer than the history of civilization for another such occurrence. Large outliers of this type are not limited to situations with extreme market movements—one can find many such examples in individual asset returns.
Article
Recent empirical research indicates that many convex optimization problems with random constraints exhibit a phase transition as the number of constraints increases. For example, this phenomenon emerges in the ℓ1 minimization method for identifying a sparse vector from random linear samples. Indeed, this approach succeeds with high probability when the number of samples exceeds a threshold that depends on the sparsity level; otherwise, it fails with high probability. This paper provides the first rigorous analysis that explains why phase transitions are ubiquitous in random convex optimization problems. It also describes tools for making reliable predictions about the quantitative aspects of the transition, including the location and the width of the transition region. These techniques apply to regularized linear inverse problems with random measurements, to demixing problems under a random incoherence model, and also to cone programs with random affine constraints. These applications depend on foundational research in conic geometry. This paper introduces a new summary parameter, called the statistical dimension, that canonically extends the dimension of a linear subspace to the class of convex cones. The main technical result demonstrates that the sequence of conic intrinsic volumes of a convex cone concentrates sharply near the statistical dimension. This fact leads to an approximate version of the conic kinematic formula that gives bounds on the probability that a randomly oriented cone shares a ray with a fixed cone.
Article
In a recent paper Galluccio, Bouchaud and Potters demonstrated that a certain portfolio problem with a nonlinear constraint maps exactly onto finding the ground states of a long-range spin glass, with the concomitant nonuniqueness and instability of the optimal portfolios. Here we put forward geometric arguments that lead to qualitatively similar conclusions, without recourse to the methods of spin glass theory, and give two more examples of portfolio problems with convex nonlinear constraints.
Article
We study the feasibility and noise sensitivity of portfolio optimization under some downside risk measures (value-at-risk, expected shortfall, and semivariance) when they are estimated by fitting a parametric distribution on a finite sample of asset returns. We find that the existence of the optimum is a probabilistic issue, depending on the particular random sample, in all three cases. At a critical combination of the parameters of these problems we find an algorithmic phase transition, separating the phase where the optimization is feasible from the one where it is not. This transition is similar to the one discovered earlier for expected shortfall based on historical time series. We employ the replica method to compute the phase diagram, as well as to obtain the critical exponent of the estimation error that diverges at the critical point. The analytical results are corroborated by Monte Carlo simulations.
Article
In this paper, we derive two shrinkage estimators for minimum-variance portfolios that dominate the traditional estimator with respect to the out-of-sample variance of the portfolio return. The presented results hold for any number of assets d≥4 and number of observations n≥d+2. The small-sample properties of the shrinkage estimators as well as their large-sample properties for fixed d but n→∞ and n,d→∞ but n/d→q≤∞ are investigated. Furthermore, we present a small-sample test for the question of whether it is better to completely ignore time series information in favor of naive diversification.
Article
We study the sensitivity to estimation error of portfolios optimized under various risk measures, including variance, absolute deviation, expected shortfall and maximal loss. We introduce a measure of portfolio sensitivity and test the various risk measures by considering simulated portfolios of varying sizes N and for different lengths T of the time series. We find that the effect of noise is very strong in all the investigated cases, asymptotically it only depends on the ratio N/T, and diverges (goes to infinity) at a critical value of N/T, that depends on the risk measure in question. This divergence is the manifestation of a phase transition, analogous to the algorithmic phase transitions recently discovered in a number of hard computational problems. The transition is accompanied by a number of critical phenomena, including the divergent sample to sample fluctuations of portfolio weights. While the optimization under variance and mean absolute deviation is always feasible below the critical value of N/T, expected shortfall and maximal loss display a probabilistic feasibility problem, in that they can become unbounded from below already for small values of the ratio N/T, and then no solution exists to the optimization problem under these risk measures. Although powerful filtering techniques exist for the mitigation of the above instability in the case of variance, our findings point to the necessity of developing similar filtering procedures adapted to the other risk measures where they are much less developed or non-existent. Another important message of this study is that the requirement of robustness (noise-tolerance) should be given special attention when considering the theoretical and practical criteria to be imposed on a risk measure.
Article
This paper investigates the efficiency of minimum variance portfolio optimization for stock price movements following the Constant Conditional Correlation GARCH process proposed by Bollerslev. Simulations show that the quality of portfolio selection can be improved substantially by computing optimal portfolio weights from conditional covariances instead of unconditional ones. Measurement noise can be further reduced by applying some filtering method on the conditional correlation matrix (such as Random Matrix Theory based filtering). As an empirical support for the simulation results, the analysis is also carried out for a time series of S&P500 stock prices.
Article
Many applied problems require a covariance matrix estimator that is not only invertible, but also well-conditioned (that is, inverting it does not amplify estimation error). For large-dimensional covariance matrices, the usual estimator—the sample covariance matrix—is typically not well-conditioned and may not even be invertible. This paper introduces an estimator that is both well-conditioned and more accurate than the sample covariance matrix asymptotically. This estimator is distribution-free and has a simple explicit formula that is easy to compute and interpret. It is the asymptotically optimal convex linear combination of the sample covariance matrix with the identity matrix. Optimality is meant with respect to a quadratic loss function, asymptotically as the number of observations and the number of variables go to infinity together. Extensive Monte Carlo simulations confirm that the asymptotic results tend to hold well in finite sample.
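The linear shrinkage described above amounts to a convex combination of the sample covariance matrix with a multiple of the identity. A minimal sketch; the shrinkage intensity here is fixed by hand for illustration, whereas Ledoit and Wolf derive its asymptotically optimal value from the data:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 50, 60                            # dimension comparable to sample size
X = rng.standard_normal((T, N))
S = np.cov(X, rowvar=False)              # ill-conditioned sample covariance

# Shrink toward a multiple of the identity: S* = (1 - a) S + a mu I,
# with mu the grand mean of the sample eigenvalues, so the trace is preserved.
mu = np.trace(S) / N
a = 0.5                                  # illustrative intensity, not the optimal one
S_shrunk = (1 - a) * S + a * mu * np.eye(N)

print("condition number, sample :", np.linalg.cond(S))
print("condition number, shrunk :", np.linalg.cond(S_shrunk))
```

Shrinking pulls the extreme sample eigenvalues toward their grand mean, which is what makes the estimator well-conditioned.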
Article
This paper proposes to estimate the covariance matrix of stock returns by an optimally weighted average of two existing estimators: the sample covariance matrix and single-index covariance matrix. This method is generally known as shrinkage, and it is standard in decision theory and in empirical Bayesian statistics. Our shrinkage estimator can be seen as a way to account for extra-market covariance without having to specify an arbitrary multifactor structure. For NYSE and AMEX stock returns from 1972 to 1995, it can be used to select portfolios with significantly lower out-of-sample variance than a set of existing estimators, including multifactor models.
Article
In the proposed research, our objective is to provide a general framework for identifying portfolios that perform well out-of-sample even in the presence of estimation error. This general framework relies on solving the traditional minimum-variance problem (based on the sample covariance matrix) but subject to the additional constraint that the p-norm of the portfolio-weight vector be smaller than a given threshold. In particular, we consider the 1-norm constraint, which is that the sum of the absolute values of the weights be smaller than a given threshold, and the 2-norm constraint that the sum of the squares of the portfolio weights be smaller than a given threshold. Our contribution will be to show that our unifying theoretical framework nests as special cases the shrinkage approaches of Jagannathan and Ma (2003) and Ledoit and Wolf (2004), and the 1/N portfolio studied in DeMiguel, Garlappi, and Uppal (2007). We also use our general framework to propose several new portfolio strategies. For these new portfolios, we provide a moment-shrinkage interpretation and a Bayesian interpretation where the investor has a prior belief on portfolio weights rather than on moments of asset returns. Finally, we compare empirically (in terms of portfolio variance, Sharpe ratio, and turnover), the out-of-sample performance of the new portfolios we propose to nine strategies in the existing literature across five datasets. Our preliminary results indicate that the norm-constrained portfolios we propose have a lower variance and a higher Sharpe ratio than the portfolio strategies in Jagannathan and Ma (2003) and Ledoit and Wolf (2004), the 1/N portfolio, and also other strategies in the literature such as factor portfolios and the parametric portfolios in Brandt, Santa-Clara, and Valkanov (2005).
Article
Traditional portfolio optimization has been often criticized since it does not account for estimation risk. Theoretical considerations indicate that estimation risk is mainly driven by the parameter uncertainty regarding the expected asset returns rather than their variances and covariances. This is also demonstrated by several numerical studies. The global minimum variance portfolio has been advocated by many authors as an appropriate alternative to the traditional Markowitz approach since there are no expected asset returns which have to be estimated and thus the impact of estimation errors can be substantially reduced. But in many practical situations an investor is not willing to choose the global minimum variance portfolio, especially in the context of top down portfolio optimization. In that case the investor has to minimize the variance of the portfolio return by satisfying some specific constraints for the portfolio weights. Such a portfolio will be called 'local minimum variance portfolio'. Some finite sample hypothesis tests for global and local minimum variance portfolios are presented as well as the unconditional finite sample distribution of the estimated portfolio weights and the first two moments of the estimated expected portfolio returns.
Article
We evaluate the out-of-sample performance of the sample-based mean-variance model, and its extensions designed to reduce estimation error, relative to the naive 1/N portfolio. Of the 14 models we evaluate across seven empirical datasets, none is consistently better than the 1/N rule in terms of Sharpe ratio, certainty-equivalent return, or turnover, which indicates that, out of sample, the gain from optimal diversification is more than offset by estimation error. Based on parameters calibrated to the US equity market, our analytical results and simulations show that the estimation window needed for the sample-based mean-variance strategy and its extensions to outperform the 1/N benchmark is around 3000 months for a portfolio with 25 assets and about 6000 months for a portfolio with 50 assets. This suggests that there are still many “miles to go” before the gains promised by optimal portfolio choice can actually be realized out of sample.
Article
This paper proposes a multivariate shrinkage estimator for the optimal portfolio weights. The estimated classical Markowitz weights are shrunk to the deterministic target portfolio weights. Assuming log asset returns to be i.i.d. Gaussian, explicit solutions are derived for the optimal shrinkage factors. The properties of the estimated shrinkage weights are investigated both analytically and using Monte Carlo simulations. The empirical study compares the competing portfolio selection approaches. Both simulation and empirical studies show that the proposed shrinkage estimator is robust and provides significant gains to the investor compared to benchmark procedures.
Article
The central message of this paper is that nobody should be using the sample covariance matrix for the purpose of portfolio optimization. It contains estimation error of the kind most likely to perturb a mean-variance optimizer. In its place, we suggest using the matrix obtained from the sample covariance matrix through a transformation called shrinkage. This tends to pull the most extreme coefficients towards more central values, thereby systematically reducing estimation error where it matters most. Statistically, the challenge is to know the optimal shrinkage intensity, and we give the formula for that. Without changing any other step in the portfolio optimization process, we show on actual stock market data that shrinkage reduces tracking error relative to a benchmark index, and substantially increases the realized information ratio of the active portfolio manager.
Article
We develop a jackknife estimator for the conditional variance of a minimum-tracking-error-variance portfolio constructed using estimated covariances. We empirically evaluate the performance of our estimator using an optimal portfolio of 200 stocks that has the lowest tracking error with respect to the S&P 500 benchmark when three years of daily return data are used for estimating covariances. We find that our jackknife estimator provides more precise estimates and suffers less from in-sample optimism when compared to conventional estimators.
Article
In this paper, we prove several distributional properties for optimal portfolio weights. The weights are estimated by replacing the parameters with the sample counterparts. All results for finite samples are made assuming normally distributed returns. We calculate the exact covariances for the weights obtained by the expected quadratic utility. Additionally we derive the multivariate density function of the global minimum variance portfolio and the univariate density of the tangency portfolio. We obtain the conditional density for the Sharpe ratio optimal weights and show that the expectations of the Sharpe ratio optimal weights do not exist. Moreover, we determine the asymptotic distributions of the estimated weights assuming that the returns follow a multivariate stationary Gaussian process.
Article
Suppose we wish to recover a vector x_0 ∈ R^m (e.g., a digital signal or image) from incomplete and contaminated observations y = Ax_0 + e; A is an n by m matrix with far fewer rows than columns (n ≪ m) and e is an error term. Is it possible to recover x_0 accurately based on the data y? To recover x_0, we consider the solution x^# to the ℓ1-regularization problem min ‖x‖_ℓ1 subject to ‖Ax − y‖_ℓ2 ≤ ε, where ε is the size of the error term e. We show that if A obeys a uniform uncertainty principle (with unit-normed columns) and if the vector x_0 is sufficiently sparse, then the solution is within the noise level, ‖x^# − x_0‖_ℓ2 ≤ Cε. As a first example, suppose that A is a Gaussian random matrix; then stable recovery occurs for almost all such A's provided that the number of nonzeros of x_0 is of about the same order as the number of observations. As a second instance, suppose one observes few Fourier samples of x_0; then stable recovery occurs for almost any set of n coefficients provided that the number of nonzeros is of the order of n/[log m]^6. In the case where the error term vanishes, the recovery is of course exact, and this work actually provides novel insights into the exact recovery phenomenon discussed in earlier papers. The methodology also explains why one can also very nearly recover approximately sparse signals.
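Sparse recovery of this kind can be demonstrated with a standard ℓ1 solver. The sketch below uses ISTA (iterative soft thresholding) on the penalized form min ½‖Ax − y‖² + λ‖x‖₁, which is a common numerical route to such problems rather than the paper's own constrained formulation; the dimensions, noise level, and λ are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 80, 200, 5                     # n observations, m unknowns, k nonzeros
A = rng.standard_normal((n, m)) / np.sqrt(n)
x0 = np.zeros(m)
x0[rng.choice(m, k, replace=False)] = rng.standard_normal(k) * 3
y = A @ x0 + 0.01 * rng.standard_normal(n)   # noisy underdetermined observations

# ISTA: gradient step on the least-squares term, then soft thresholding,
# with step size 1/L set by the Lipschitz constant of the gradient.
lam = 0.05
L = np.linalg.norm(A, 2) ** 2
x = np.zeros(m)
for _ in range(500):
    z = x - A.T @ (A @ x - y) / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

rel_err = np.linalg.norm(x - x0) / np.linalg.norm(x0)
print("relative recovery error:", rel_err)
```

With k nonzeros well below the number of observations, the recovered vector lands within a small multiple of the noise level of x_0, as the stability result predicts.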
Book
M. Mézard, G. Parisi, and M. A. Virasoro. Spin Glass Theory and Beyond. World Scientific Lecture Notes in Physics Vol. 9, World Scientific, Singapore, 1987.