Article

Risk reduction in large portfolios: why imposing the wrong constraints helps


Abstract

This note corrects an error in the proof of Proposition 2 of "Risk Reduction in Large Portfolios: Why Imposing the Wrong Constraints Helps" that appeared in the Journal of Finance, August 2003.


... Markowitz portfolio optimization with no short sales (Jagannathan and Ma, 2003). Since the FSPD procedure is applicable to any covariance matrix estimator, including precision matrix estimators, we briefly discuss this extension in Section 6. ...
... The author proposes to choose a portfolio that minimizes risk, i.e., the standard deviation of return. Since the MVP problem may yield an optimal allocation with short sales or leverage, Jagannathan and Ma (2003) propose to add a no-short-sale constraint to the MVP optimization formulation. ...
... Note that (18) allows its solution to have a weight that is negative (a short sale) or greater than 1 (leverage). Jagannathan and Ma (2003) point out that short sales and leverage are sometimes impractical because of legal constraints and consider the MVP optimization problem with a no-short-sale constraint: ...
Preprint
In this work, we study the positive definiteness (PDness) problem in covariance matrix estimation. For high dimensional data, many regularized estimators are proposed under structural assumptions on the true covariance matrix including sparsity. They are shown to be asymptotically consistent and rate-optimal in estimating the true covariance matrix and its structure. However, many of them do not take into account the PDness of the estimator and produce a non-PD estimate. To achieve the PDness, researchers consider additional regularizations (or constraints) on eigenvalues, which make both the asymptotic analysis and computation much harder. In this paper, we propose a simple modification of the regularized covariance matrix estimator to make it PD while preserving the support. We revisit the idea of linear shrinkage and propose to take a convex combination between the first-stage estimator (the regularized covariance matrix without PDness) and a given form of diagonal matrix. The proposed modification, which we denote as FSPD (Fixed Support and Positive Definiteness) estimator, is shown to preserve the asymptotic properties of the first-stage estimator, if the shrinkage parameters are carefully selected. It has a closed form expression and its computation is optimization-free, unlike existing PD sparse estimators. In addition, the FSPD is generic in the sense that it can be applied to any non-PD matrix including the precision matrix. The FSPD estimator is numerically compared with other sparse PD estimators to understand its finite sample properties as well as its computational gain. It is also applied to two multivariate procedures relying on the covariance matrix estimator -- the linear minimax classification problem and the Markowitz portfolio optimization problem -- and is shown to substantially improve the performance of both procedures.
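The convex-combination step at the heart of the FSPD idea can be written down directly. Below is a minimal numpy sketch of that idea, assuming distance-to-positive-definiteness is measured by the smallest eigenvalue and using a scaled identity as the diagonal target; the function name and the tolerance eps are illustrative, not from the paper.

```python
import numpy as np

def fspd_shrink(S, eps=1e-4):
    """Shrink a (possibly non-PD) symmetric first-stage estimate S toward
    a diagonal target mu*I just far enough that the smallest eigenvalue
    reaches eps. The target is diagonal, so the off-diagonal support
    (sparsity pattern) of S is preserved. Assumes mu > eps."""
    p = S.shape[0]
    lam_min = np.linalg.eigvalsh(S).min()
    mu = np.trace(S) / p                 # level of the diagonal target
    if lam_min >= eps:
        return S                         # already sufficiently PD
    # Eigenvalues of a*S + (1-a)*mu*I are a*lam + (1-a)*mu, so solve
    # a*lam_min + (1-a)*mu = eps for the combination weight a.
    a = (mu - eps) / (mu - lam_min)
    return a * S + (1 - a) * mu * np.eye(p)
```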
... Consistent with the existing literature (see, e.g., Jagannathan and Ma 2003, and DeMiguel, Garlappi, and Uppal 2007), the economic performance obtained by restricting the portfolio weights tends to improve across different industries, regardless of the model combination scheme. Yet, our decouple-recouple model synthesis scheme allows a representative investor to obtain a larger performance than BMA and equal-weight linear pooling. ...
... Notice, however, that imposing no-short constraints substantially improves the out-of-sample real-time economic performance of the alternative specifications as well. In this respect, results are consistent with the existing evidence that by restricting portfolio weights we obtain a regularization effect in the model estimation which reduces the effect of the estimation error (see, e.g., Jagannathan and Ma 2003, DeMiguel et al. 2007, and the references therein). ...
Preprint
We develop a novel "decouple-recouple" dynamic predictive strategy and contribute to the literature on forecasting and economic decision making in a data-rich environment. Under this framework, clusters of predictors generate different latent states in the form of predictive densities that are later synthesized within an implied time-varying latent factor model. As a result, the latent inter-dependencies across predictive densities and biases are sequentially learned and corrected. Unlike sparse modeling and variable selection procedures, we do not assume a priori that there is a given subset of active predictors which characterizes the predictive density of a quantity of interest. We test our procedure by investigating the predictive content of a large set of financial ratios and macroeconomic variables on both the equity premium across different industries and the inflation rate in the U.S., two contexts of topical interest in finance and macroeconomics. We find that our predictive synthesis framework generates both statistically and economically significant out-of-sample benefits while maintaining interpretability of the forecasting variables. In addition, the main empirical results highlight that our proposed framework outperforms LASSO-type shrinkage regressions, factor-based dimension reduction, sequential variable selection, and equal-weight linear pooling methodologies.
... This difficulty has been around in portfolio selection from the early days and a plethora of methods have been proposed to cope with it, e.g. single and multi-factor models [3], Bayesian estimators [4,5,6,7,8,9,10,11,12,13,14,15,16,17], or, more recently, tools borrowed from random matrix theory [18,19,20,21,22,23]. In the thermodynamic regime, estimation errors are large, sample to sample fluctuations are huge, results obtained from one sample do not generalize well and can be quite misleading concerning the true process. ...
... It is easy to show that when the L2 norm is used as a regularizer, the resulting method is closely related to Bayesian ridge regression, which uses a Gaussian prior on the weights (with the difference being the additional budget constraint). The work on covariance shrinkage, such as [8,9,10,11], falls into the same category. Other priors can be used [17], which can be expected to lead to different results (for an insightful comparison see e.g. ...
... The estimation problem for the mean is so serious [31,32] as to make the trade-off between risk and return largely illusory. Therefore, following a number of authors [8,9,33,34,35], we focus on the minimum variance portfolio and drop the usual constraint on the expected return. This is also in line with previous work (see [36] and references therein), and makes the treatment simpler without compromising the main conclusions. ...
Preprint
The optimization of large portfolios displays an inherent instability to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification "pressure". This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade-off between the two, depending on the size of the available data set.
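The abstract's central device, an L2 "diversification pressure" on the weights alongside the budget constraint, has a simple closed form when the objective is the variance. The sketch below is a minimal numpy illustration under those assumptions (it uses the variance rather than the paper's expected-shortfall objective, and the penalty strength eta is an illustrative parameter):

```python
import numpy as np

def l2_regularized_minvar(returns, eta=0.1):
    """Minimum-variance weights under a budget constraint with an L2
    penalty eta*||w||^2, which acts as a diversification pressure.
    Minimizing w'Sw + eta*w'w subject to 1'w = 1 gives the closed form
    w proportional to (S + eta*I)^{-1} 1."""
    S = np.cov(returns, rowvar=False)
    A = S + eta * np.eye(S.shape[0])
    w = np.linalg.solve(A, np.ones(S.shape[0]))
    return w / w.sum()
```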
... We show that with the gross-exposure constraint the theoretical optimal portfolios have similar performance to the empirically selected ones based on estimated covariance matrices, and there is no error accumulation effect from the estimation of vast covariance matrices. This gives theoretical justification to the empirical results in Jagannathan and Ma (2003). We also show that the no-short-sale portfolio is not diversified enough and can be improved by allowing some short positions. ...
... These problems get more pronounced when the portfolio size is large. In fact, Jagannathan and Ma (2003) showed that the optimal no-short-sale portfolio outperforms the Markowitz portfolio when the portfolio size is large. ...
Preprint
Markowitz (1952, 1959) laid down the ground-breaking work on mean-variance analysis. Under his framework, the theoretical optimal allocation vector can be very different from the estimated one for large portfolios due to the intrinsic difficulty of estimating a vast covariance matrix and return vector. This can result in adverse performance of portfolios selected on the basis of empirical data, due to the accumulation of estimation errors. We address this problem by introducing the gross-exposure constrained mean-variance portfolio selection. We show that with the gross-exposure constraint the theoretical optimal portfolios have similar performance to the empirically selected ones based on estimated covariance matrices, and there is no error accumulation effect from the estimation of vast covariance matrices. This gives theoretical justification to the empirical results in Jagannathan and Ma (2003). We also show that the no-short-sale portfolio is not diversified enough and can be improved by allowing some short positions. As the constraint on short sales relaxes, the number of selected assets varies from a small number to the total number of stocks, when tracking portfolios or selecting assets. This achieves the optimal sparse portfolio selection, which has close performance to the theoretical optimal one. Among 1000 stocks, for example, we are able to identify all optimal subsets of portfolios of different sizes, their associated allocation vectors, and their estimated risks. The utility of our new approach is illustrated by simulation and empirical studies on the 100 Fama-French industrial portfolios and 400 stocks randomly selected from the Russell 3000.
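The gross-exposure program itself is a small convex problem. A minimal cvxpy sketch is shown below, assuming a given positive semidefinite covariance estimate S; the exposure bound c = 1.6 is an arbitrary illustration. Setting c = 1 together with the budget constraint forces all weights to be nonnegative, recovering the Jagannathan-Ma no-short-sale portfolio, while letting c grow recovers the unconstrained Markowitz solution.

```python
import cvxpy as cp
import numpy as np

def gross_exposure_minvar(S, c=1.6):
    """Minimum-variance portfolio under the budget constraint and the
    gross-exposure constraint ||w||_1 <= c.  S must be symmetric PSD."""
    n = S.shape[0]
    w = cp.Variable(n)
    prob = cp.Problem(cp.Minimize(cp.quad_form(w, S)),
                      [cp.sum(w) == 1, cp.norm1(w) <= c])
    prob.solve()
    return w.value
```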
... One of the major shortcomings of the mean-variance approach is the fact that optimized weights are highly sensitive to estimation errors and to the presence of multicollinearity in the inputs. In particular, it is acknowledged that estimating the expected returns is more challenging than just focusing on risk minimization and thereby looking for the portfolios with minimum risk, i.e. the so-called global minimum variance portfolios (GMV) (Merton 1980, Chopra and Ziemba 1993, Jagannathan and Ma 2003). But even in the GMV set-up, the sample covariance matrix might exhibit estimation error that can easily accumulate, especially when dealing with a large number of assets (Michaud 1989, Ledoit and Wolf 2003, DeMiguel and Nogales 2009, Fan et al. 2012). ...
... In the portfolio context, the LASSO framework typically relies on adding to the Markowitz formulation a penalty proportional to the ℓ1-norm of the asset weight vector (Brodie et al. 2009, DeMiguel et al. 2009a, Carrasco and Noumon 2012, Fan et al. 2012). DeMiguel et al. (2009a) provide a general framework that nests regularized portfolio strategies based on the ℓ1-norm with the approaches introduced by Ledoit and Wolf (2003) and Jagannathan and Ma (2003), and advocate their superior performance in an out-of-sample setting. Brodie et al. (2009) and Fan et al. (2012) show that the LASSO results in constraining the gross exposures, can be used to implicitly account for transaction costs, and sets an upper bound on the portfolio risk that depends only on the maximum estimation error of the covariance matrix. ...
... Brodie et al. (2009) and Fan et al. (2012) show that the LASSO results in constraining the gross exposures, can be used to implicitly account for transaction costs, and sets an upper bound on the portfolio risk that depends only on the maximum estimation error of the covariance matrix. Moreover, the shrinkage covariance estimation of Jagannathan and Ma (2003), obtained by adding a no-short-sale constraint (the so-called GMV long only (GMV-LO)), can be considered a special case of the LASSO. Next to the LASSO penalty, Brodie et al. (2009) and DeMiguel et al. (2009b) also consider a portfolio with an ℓ2-norm penalty on the weight vector, known in the statistical literature as RIDGE (Hoerl and Kennard 1988). ...
Preprint
We introduce a financial portfolio optimization framework that allows us to automatically select the relevant assets and estimate their weights by relying on a sorted ℓ1-norm penalization, henceforth SLOPE. Our approach is able to group constituents with similar correlation properties, and with the same underlying risk factor exposures. We show that by varying the intensity of the penalty, SLOPE can span the entire set of optimal portfolios on the risk-diversification frontier, from minimum variance to the equally weighted. To solve the optimization problem, we develop a new efficient algorithm, based on the Alternating Direction Method of Multipliers. Our empirical analysis shows that SLOPE yields optimal portfolios with good out-of-sample risk and return performance properties, by reducing the overall turnover through more stable asset weight estimates. Moreover, using the automatic grouping property of SLOPE, new portfolio strategies, such as SLOPE-MV, can be developed to exploit the data-driven detected similarities across assets.
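The sorted-ℓ1 penalty itself is simple to state and evaluate; solving the full SLOPE portfolio problem requires the ADMM machinery the abstract mentions, which is not reproduced here. A minimal numpy sketch of the penalty evaluation:

```python
import numpy as np

def slope_penalty(w, lambdas):
    """Sorted-l1 (SLOPE) penalty: sum_i lambda_i * |w|_(i), where
    |w|_(1) >= |w|_(2) >= ... are the absolute weights in decreasing
    order and lambda_1 >= lambda_2 >= ... >= 0.  With all lambdas equal
    it reduces to the plain l1 (lasso) penalty; distinct lambdas are
    what allow SLOPE to group assets with similar exposures."""
    abs_sorted = np.sort(np.abs(w))[::-1]
    lam_sorted = np.sort(np.asarray(lambdas))[::-1]
    return float(np.dot(lam_sorted, abs_sorted))
```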
... It is well known that the selected portfolios depend too sensitively on the expected future returns and the volatility matrix (Klein and Bawa, 1976; Best and Grauer, 1991; Chopra and Ziemba, 1993). This leads to the puzzle postulated by Jagannathan and Ma (2003) of why the no-short-sale portfolio outperforms the efficient Markowitz portfolio. See also De Roon et al. (2001) for a study of the optimal no-short-sale portfolio in emerging markets. ...
... The results are demonstrated also by both simulation and empirical studies. This gives not only a theoretical answer to the puzzle postulated by Jagannathan and Ma (2003) but also paves a way for optimal portfolio selection in practice. ...
... Then, ‖w‖₁ = w⁺ + w⁻ is the gross exposure of the portfolio, where w⁺ and w⁻ denote the total long and short positions. To simplify the problem, following Jagannathan and Ma (2003) and Fan et al. (2008b) and other papers in the literature, we consider only the risk optimization problem. In practice, the expected return constraint can be replaced by constraints on sectors or industries, to avoid unreliable estimates of the expected return vector. ...
Preprint
Portfolio allocation with gross-exposure constraint is an effective method to increase the efficiency and stability of selected portfolios among a vast pool of assets, as demonstrated in Fan et al. (2008). The required high-dimensional volatility matrix can be estimated by using high-frequency financial data. This enables us to better adapt to the local volatilities and local correlations among a vast number of assets and to increase significantly the sample size for estimating the volatility matrix. This paper studies volatility matrix estimation using high-dimensional high-frequency data from the perspective of portfolio selection. Specifically, we propose the use of the "pairwise-refresh time" and "all-refresh time" methods of Barndorff-Nielsen et al. (2008) for the estimation of vast covariance matrices and compare their merits in portfolio selection. We also establish concentration inequalities for the estimates, which guarantee desirable properties of the estimated volatility matrix in vast asset allocation with gross-exposure constraints. Extensive numerical studies are made via carefully designed simulations. Compared with methods based on low-frequency daily data, our methods can capture the most recent trend of the time-varying volatility and correlation, and hence provide more accurate guidance for the portfolio allocation in the next time period. The advantage of using high-frequency data is significant in our simulation and empirical studies, which consist of 50 simulated assets and 30 constituent stocks of the Dow Jones Industrial Average index.
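To make the gross-exposure identity quoted in the context above concrete, here is a tiny numpy check (the example weights are arbitrary):

```python
import numpy as np

w = np.array([0.6, 0.5, -0.1])      # 110% long, 10% short
w_plus = w[w > 0].sum()             # total long position
w_minus = -w[w < 0].sum()           # total short position, as a magnitude
gross = np.abs(w).sum()             # ||w||_1, the gross exposure
assert np.isclose(gross, w_plus + w_minus)
print(gross)                        # 1.2; equals 1 exactly when no shorts are held
```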
... Portfolio optimization often assumes that weights should satisfy a further constraint that fixes the expected return to some target value. However, it has been remarked [6,51] that estimation error in sample expected returns is so large that nothing much is lost in ignoring this constraint altogether, with no appreciable effect on out-of-sample performance [3]. Also, in applications such as index tracking, where the objective is to mimic a benchmark portfolio as closely as possible, this constraint is not needed. ...
... Let us now comment on the generality of the result. Concerning the linear assumption in Eq. (6), which is standard in the econometric literature [61], we observe that the estimation of market impact functions is a matter of active current research. Most of the empirical evidence suggests a concave shape [62], with the price impact growing more slowly than linearly in w. ...
... A linear impact, as the one assumed in Eq. (6), then corresponds to an order book with a constant density of limit orders. Hence, a measure of η is given by the density of the order book close to the best bid/ask. ...
Preprint
We consider the problem of portfolio optimization in the presence of market impact, and derive optimal liquidation strategies. We discuss in detail the problem of finding the optimal portfolio under Expected Shortfall (ES) in the case of linear market impact. We show that, once market impact is taken into account, a regularized version of the usual optimization problem naturally emerges. We characterize the typical behavior of the optimal liquidation strategies, in the limit of large portfolio sizes, and show how the market impact removes the instability of ES in this context.
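For readers who want the baseline the abstract builds on: portfolio Expected Shortfall over a scenario matrix can be minimized with the standard Rockafellar-Uryasev linear program. The cvxpy sketch below shows that baseline only, without the paper's market-impact term; the confidence level beta is illustrative.

```python
import cvxpy as cp
import numpy as np

def min_expected_shortfall(R, beta=0.95):
    """Minimize Expected Shortfall at level beta over scenario returns R
    (T x N), subject to the budget constraint, via the standard
    Rockafellar-Uryasev LP.  Losses are negative portfolio returns."""
    T, N = R.shape
    w = cp.Variable(N)
    z = cp.Variable()                       # Value-at-Risk proxy
    u = cp.Variable(T, nonneg=True)         # scenario excess losses
    losses = -R @ w
    es = z + cp.sum(u) / ((1 - beta) * T)
    prob = cp.Problem(cp.Minimize(es),
                      [u >= losses - z, cp.sum(w) == 1])
    prob.solve()
    return w.value, prob.value
```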
... In fact, Jagannathan and Ma (2003) showed explicitly that the error in mean estimation is so large that nothing is lost when one ignores the mean altogether. For this reason, recent research has focused on global minimum-variance portfolios (MVP), relying solely on the covariance estimate, as the analytical formulation of the MVP does not require the mean as an input at all. ...
... On top of these methodologies, other methods have been explored to reduce the sampling errors in the covariance matrix. Jagannathan and Ma (2003) reduced the sampling errors in the covariance matrix by constraining portfolio weights, motivated by the observation that unconstrained portfolio weights tend to take large positive and negative positions in the underlying assets. ...
... As the optimal portfolio weights take extreme values, they generate considerable turnover and are unstable. Thus, Jagannathan and Ma (2003) constrained portfolio weights by adding no-short-sale constraints, forcing the weights to take only nonnegative values. This implies that the MVP is exclusively long-only, meaning that investors can only purchase, not short-sell, the assets involved. ...
Preprint
Full-text available
In portfolio risk minimization, the inverse covariance matrix of returns is often unknown and has to be estimated in practice. This inverse covariance matrix also prescribes the hedge trades in which a stock is hedged by all the other stocks in the portfolio. In practice with finite samples, however, multicollinearity gives rise to considerable estimation errors, making the hedge trades too unstable and unreliable for use. By adopting ideas from current methodologies in the existing literature, we propose two new estimators of the inverse covariance matrix, one which relies only on the ℓ2 norm while the other utilizes both the ℓ1 and ℓ2 norms. These two estimators are classified as shrinkage estimators in the literature. Comparing favorably with other methods (sample-based estimation, equal weighting, estimation based on Principal Component Analysis), a portfolio formed on the proposed estimators achieves substantial out-of-sample risk reduction and improves the out-of-sample risk-adjusted returns of the portfolio, particularly in high-dimensional settings. Furthermore, the proposed estimators can still be computed even in instances where the sample covariance matrix is ill-conditioned or singular.
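As a minimal illustration of the ℓ2-only route the abstract describes, the sketch below ridges the sample covariance before inversion and plugs the resulting precision estimate into the GMVP closed form; lam is an illustrative tuning parameter, not the paper's calibrated choice.

```python
import numpy as np

def gmvp_weights_ridge(returns, lam=0.1):
    """Global minimum-variance weights from an l2-shrunk (ridge)
    precision estimate: invert S + lam*I instead of the possibly
    singular sample covariance S.  GMVP closed form: w = P1 / (1'P1),
    with P the precision estimate."""
    S = np.cov(returns, rowvar=False)
    P = np.linalg.inv(S + lam * np.eye(S.shape[0]))
    ones = np.ones(S.shape[0])
    w = P @ ones
    return w / (ones @ w)
```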
... In this regard, the minVar portfolio minimizes portfolio variance irrespective of the level of portfolio return, making it an appealing strategy for risk-averse investors (Clarke et al., 2006; DeMiguel et al., 2009). As for the MVP strategy, it balances risk and return through mean-variance optimization, which generally leads to a higher risk-adjusted return measure compared to heuristic-based allocations (Jagannathan and Ma, 2003). Accordingly, it is not surprising that our proposed IFCP/IFCPL portfolio strategy underperforms those two portfolio strategies when it comes to risk-adjusted returns. ...
... While the above results reflect the difficulty in beating minVar and MVP strategies due to their mathematical optimization properties (Jagannathan and Ma, 2003; DeMiguel et al., 2009), our proposed partial correlation connectedness-based approach to portfolio construction provides a competitive performance, particularly regarding downside risk mitigation. Furthermore, in light of the sensitivity of mean-variance optimization to estimation errors, it is important to note the comparable performance of IFCP(L) across bootstrap samples and its potential robustness in a practical application. ...
Article
Full-text available
We propose a partial correlation-based connectedness approach to study the directional connectedness under normal and extreme market conditions among the returns of 22 commodities and compare it with the well-known Diebold and Yilmaz (i.e. generalized forecast error variance decomposition (GFEVD)) connectedness approach estimated at the mean and tails. Considering four groups of commodities, namely energy, agricultural, precious metals, and industrial metals, and daily data from September 1, 2005 to June 5, 2024, covering various crisis periods, we draw filtered networks and measures of directional connectedness. The main results are summarized as follows. Firstly, the total connectedness index captures the significant commodities related shocks, and intensifies during crises episodes, notably at the extreme lower quantile. Secondly, using partial correlations in the approach of connectedness leads to a surge of the total connectedness level at the extreme lower quantile and identifies the beginnings of major crises earlier than the GFEVD measure of connectedness. Thirdly, the connectedness structure of commodities based on partial correlation is unstable during turbulent market conditions, a feature that is ignored when the GFEVD approach of connectedness is used. Fourthly, in terms of practical implications, the partial correlation-based connectedness portfolio outperforms the GFEVD based minimum connectedness portfolio on a risk adjusted basis.
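The building block of the paper's connectedness measure, the partial correlation matrix, can be read off the precision matrix. A minimal numpy sketch of that standard construction (the paper's quantile/tail extensions and the GFEVD comparison are not reproduced here):

```python
import numpy as np

def partial_correlations(X):
    """Partial correlation matrix from the precision matrix Omega of
    the T x N data X: rho_ij = -Omega_ij / sqrt(Omega_ii * Omega_jj).
    Assumes the sample covariance is invertible (T > N)."""
    Omega = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(Omega))
    rho = -Omega / np.outer(d, d)
    np.fill_diagonal(rho, 1.0)
    return rho
```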
... Another strand of literature considers various constraints on large portfolio optimization. Jagannathan and Ma (2003) proposed no-short-selling constraints to reduce portfolio risk. Inspired by Jagannathan and Ma (2003), DeMiguel et al. (2009) imposed norm constraints on the portfolios. ...
... Jagannathan and Ma (2003) proposed no-short-selling constraints to reduce portfolio risk. Inspired by Jagannathan and Ma (2003), DeMiguel et al. (2009) imposed norm constraints on the portfolios. Fan et al. (2012) proposed a gross-exposure constraint method to modify the single-period Markowitz mean-variance optimization problem and reduce the sensitivity of the resulting allocation to the input vectors. ...
Preprint
This paper considers the finite-horizon portfolio rebalancing problem in terms of mean-variance optimization, where decisions are made based on current information on asset returns and transaction costs. The study's novelty is that the transaction costs are integrated within the optimization problem in a high-dimensional portfolio setting where the number of assets is larger than the sample size. We propose portfolio construction and rebalancing models with a nonconvex penalty, considering two types of transaction cost: the proportional transaction cost and the quadratic transaction cost. We establish the desired theoretical properties under mild regularity conditions. Monte Carlo simulations and empirical studies using S&P 500 and Russell 2000 stocks show the satisfactory performance of the proposed portfolio and highlight the importance of involving the transaction costs when rebalancing a portfolio.
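The paper's penalties are nonconvex; as a simpler convex stand-in that shows where transaction costs enter the rebalancing objective, the following cvxpy sketch adds a proportional-cost term to a one-period mean-variance problem. All parameter values are illustrative.

```python
import cvxpy as cp
import numpy as np

def rebalance_with_costs(mu, S, w_old, gamma=5.0, tc=0.002):
    """One-period mean-variance rebalancing with a proportional
    transaction-cost term tc * ||w - w_old||_1 (a convex stand-in for
    the paper's nonconvex penalty).  S must be symmetric PSD."""
    n = len(mu)
    w = cp.Variable(n)
    objective = cp.Maximize(mu @ w
                            - 0.5 * gamma * cp.quad_form(w, S)
                            - tc * cp.norm1(w - w_old))
    prob = cp.Problem(objective, [cp.sum(w) == 1])
    prob.solve()
    return w.value
```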
... Although the Markowitz model seems ideal in theory, many constraints indeed influence the portfolios in practical investments. Black and Litterman (1992) found that short-sale constraints often cause no investment in many stocks, and Jagannathan and Ma (2003) found that constraining portfolio weights to be nonnegative can reduce the risk in estimated optimal portfolios even when the constraints are wrong. To solve this problem, DeMiguel et al. (2009a) proposed a generalized approach to optimizing portfolios under general norm constraints. ...
... The Sharpe ratio, defined as the ratio of the expected value of the excess returns to their standard deviation (Sharpe 1994), is widely used in practice to evaluate the performance of portfolios (Jagannathan and Ma 2003; Scholz 2007). Table 3 shows Sharpe ratios of portfolios with different investment horizons, in which no specific horizon length maximizes the Sharpe ratio in all circumstances. ...
Preprint
The problem of portfolio optimization is one of the most important issues in asset management. This paper proposes a new dynamic portfolio strategy based on the time-varying structures of MST networks in Chinese stock markets, where the market condition is further considered when using the optimal portfolios for investment. A portfolio strategy comprises two stages: selecting the portfolios by choosing central and peripheral stocks in the selection horizon using five topological parameters, i.e., degree, betweenness centrality, distance on degree criterion, distance on correlation criterion and distance on distance criterion, then using the portfolios for investment in the investment horizon. The optimal portfolio is chosen by comparing central and peripheral portfolios under different combinations of market conditions in the selection and investment horizons. Market conditions in our paper are identified by the ratios of the number of trading days with rising index, or the sum of the amplitudes of the trading days with rising index, to the total number of trading days. We find that central portfolios outperform peripheral portfolios when the market is under a drawup condition, or when the market is stable or drawup in the selection horizon and is under a stable condition in the investment horizon. We also find that the peripheral portfolios gain more than central portfolios when the market is stable in the selection horizon and is drawdown in the investment horizon. Empirical tests are carried out based on the optimal portfolio strategy. Among all the possible optimal portfolio strategies based on different parameters to select portfolios and different criteria to identify market conditions, 65% of our optimal portfolio strategies outperform the random strategy for the Shanghai A-Share market and the proportion is 70% for the Shenzhen A-Share market.
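The first stage the abstract describes, building an MST on correlation distances and ranking stocks by centrality, can be sketched with networkx. This is a minimal illustration using the standard distance d_ij = sqrt(2(1 - rho_ij)); the paper's remaining three distance criteria and the market-condition logic are not reproduced.

```python
import networkx as nx
import numpy as np

def mst_centralities(returns):
    """Build the minimum spanning tree on correlation distances
    d_ij = sqrt(2 * (1 - rho_ij)) and return degree and betweenness
    centralities, two of the five topological parameters used to
    classify stocks as central or peripheral."""
    rho = np.corrcoef(returns, rowvar=False)
    D = np.sqrt(2.0 * (1.0 - rho))
    G = nx.from_numpy_array(D)                      # weighted complete graph
    mst = nx.minimum_spanning_tree(G, weight="weight")
    return nx.degree_centrality(mst), nx.betweenness_centrality(mst)
```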
... As the tunable coefficient is decreased, the optimal solutions are given more and more latitude to include short positions. The optimal solutions for our penalized objective function can thus be seen as natural generalizations of the "no-short-positions" portfolios considered by Jagannathan and Ma (2003). In addition to stabilizing the optimization problem (Daubechies, Defrise, and De Mol, 2004) and generalizing no-short-positions-constrained optimization, the ℓ1 penalty facilitates treatment of transaction costs. ...
... Given that the ℓ1-penalty term takes on a fixed value in this case, minimizing the quadratic term only (as in (1)) is thus equivalent to minimizing the penalized objective function in (2), for non-negative weights wᵢ. This is consistent with the observation made by Jagannathan and Ma (2003) that a restriction to non-negative weights only can have a regularizing effect on Markowitz's portfolio construction. ...
Preprint
We consider the problem of portfolio selection within the classical Markowitz mean-variance framework, reformulated as a constrained least-squares regression problem. We propose to add to the objective function a penalty proportional to the sum of the absolute values of the portfolio weights. This penalty regularizes (stabilizes) the optimization problem, encourages sparse portfolios (i.e. portfolios with only few active positions), and allows one to account for transaction costs. Our approach recovers as special cases the no-short-positions portfolios, but does allow for short positions in limited number. We implement this methodology on two benchmark data sets constructed by Fama and French. Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by Sharpe ratio, is consistently and significantly better than that of the naive evenly-weighted portfolio which constitutes, as shown in recent literature, a very tough benchmark.
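A minimal cvxpy sketch of the penalized least-squares formulation the abstract describes, assuming a T x N return matrix R and a per-period target return rho; tau is the sparsity-controlling penalty weight and is illustrative.

```python
import cvxpy as cp
import numpy as np

def l1_penalized_portfolio(R, rho, tau):
    """Markowitz reformulated as penalized least-squares regression:
    track the target return rho in every period while penalizing the
    l1 norm of the weights, under the budget constraint.  Larger tau
    yields sparser portfolios with fewer short positions."""
    T, N = R.shape
    w = cp.Variable(N)
    objective = cp.Minimize(cp.sum_squares(rho - R @ w) / T
                            + tau * cp.norm1(w))
    prob = cp.Problem(objective, [cp.sum(w) == 1])
    prob.solve()
    return w.value
```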
... Such an understanding can come from an analytic approach. Analytic calculations of the optimal estimated portfolio have been performed by various groups under the assumption that the underlying statistical distribution is normal, the objective function is the variance, and the optimization is subject to the budget constraint and, in some cases, an ℓ2 regularizer [2][3][4][5][6][7][8][9][10][11][12][13][14][15]. The most recent, nonlinear realization of ℓ2 shrinkage [16][17][18] has turned out to be particularly effective in suppressing sample fluctuations. ...
... regularization from machine learning [33]. Jagannathan and Ma [4] were the first to notice that a ban on short positions, which can be regarded as a special case of ℓ1 regularization, improves the stability of estimated optimal portfolios. Subsequently, Brodie et al. [34] applied an ℓ1 regularizer on the portfolio weights in an empirical study of real-life portfolios in various markets and demonstrated its satisfactory performance. ...
Preprint
A large portfolio of independent returns is optimized under the variance risk measure with a ban on short positions. The no-short-selling constraint acts as an asymmetric ℓ1 regularizer, setting some of the portfolio weights to zero and keeping the out-of-sample estimator for the variance bounded, avoiding the divergence present in the non-regularized case. However, the susceptibility, i.e. the sensitivity of the optimal portfolio weights to changes in the returns, diverges at a critical value r = 2. This means that a ban on short positions does not prevent the phase transition in the optimization problem, it merely shifts the critical point from its non-regularized value of r = 1 to 2. At r = 2 the out-of-sample estimator for the portfolio variance stays finite and the estimated in-sample variance vanishes. We have performed numerical simulations to support the analytic results and found perfect agreement for N/T < 2. Numerical experiments on finite-size samples of symmetrically distributed returns show that above this critical point the probability of finding solutions with zero in-sample variance increases rapidly with increasing N, becoming one in the large N limit. However, these are not legitimate solutions of the optimization problem, as they are infinitely sensitive to any change in the input parameters, in particular they will wildly fluctuate from sample to sample. We also calculate the distribution of the optimal weights over the random samples and show that the regularizer preferentially removes the assets with large variances, in accord with one's natural expectation.
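The regularization effect discussed above is easy to observe numerically. The sketch below is a toy experiment, not the paper's replica calculation: it solves the no-short minimum-variance problem on i.i.d. simulated returns and reports the fraction of weights the constraint sets to zero.

```python
import cvxpy as cp
import numpy as np

def noshort_zero_fraction(N=100, T=80, seed=0):
    """Fraction of weights zeroed out by the no-short constraint in a
    minimum-variance problem on i.i.d. normal returns.  The objective
    ||Xw||^2 / T equals w'Sw for the sample covariance S, and stays
    DCP-valid even when S is rank-deficient (N > T)."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((T, N))
    w = cp.Variable(N, nonneg=True)          # the ban on short positions
    cp.Problem(cp.Minimize(cp.sum_squares(X @ w) / T),
               [cp.sum(w) == 1]).solve()
    return float(np.mean(w.value < 1e-8))
```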
... Different methods have been suggested to repair the Markowitz procedure, such as shrinkage estimators (Golosnoy and Okhrin 2009, Frahm and Memmel 2010) or constraining portfolio weights (Jagannathan and Ma, 2003, Behr et al., 2013). Another recent popular approach is to impose an additional norm (quadratic form) constraint on the vector of portfolio weights (Brodie et al., 2009, Fan et al., 2012), which is equivalent to adding a penalty component to the mean-variance objective function. ...
... Taking into account that Cov(ωᵢ'x, ωⱼ'x) = ωᵢ' Cov(x, x) ωⱼ = ωᵢ' Σ ωⱼ, we rewrite (22) and get ...
Preprint
We consider a group of mean-variance investors with a mimicking desire, such that each investor is willing to penalize deviations of his portfolio composition from the compositions of other group members. Penalizing norm constraints are already applied for statistical improvement of the Markowitz portfolio procedure in order to cope with estimation risk. We relate these penalties to individuals' wish for social learning and introduce a mutual fund (investment club) aggregating group member preferences unknown to individual savers. We derive the explicit analytical solution for the fund's optimal portfolio weights and show the advantages of investing in such a fund for individuals willing to mimic.
... Lasso is known to lead to sparse estimates, reducing the effective dimension of the problem and stabilizing the estimator. Jagannathan and Ma [12] considered portfolio optimization under a constraint excluding short positions. Although they did not speak about regularization, a no-short constraint is, in fact, a special case of an asymmetric ℓ1 regularizer. ...
... where w*ᵢ are the true optimal weights given in (12). We see then that in the limit r → 0 our results derived via the replica method perfectly coincide with the results found in Section 2, thereby providing an important consistency check. ...
Preprint
The optimization of the variance supplemented by a budget constraint and an asymmetric ℓ1 regularizer is carried out analytically by the replica method borrowed from the theory of disordered systems. The asymmetric regularizer allows us to penalize short and long positions differently, so the present treatment includes the no-short-constrained portfolio optimization problem as a special case. Results are presented for the out-of-sample and the in-sample estimator of the regularized variance, the relative estimation error, the density of the assets eliminated from the portfolio by the regularizer, and the distribution of the optimal portfolio weights. We have studied the dependence of these quantities on the ratio r of the portfolio's dimension N to the sample size T, and on the strength of the regularizer. We have checked the analytic results by numerical simulations, and found general agreement. Regularization extends the interval where the optimization can be carried out, and suppresses the large sample fluctuations, but the performance of ℓ1 regularization is rather disappointing: if the sample size is large relative to the dimension, i.e. r is small, the regularizer does not play any role, while for r's where the regularizer starts to be felt the estimation error is already so large as to make the whole optimization exercise pointless. We find that the ℓ1 regularization can eliminate at most half the assets from the portfolio; correspondingly, there is a critical ratio r = 2 beyond which the ℓ1-regularized variance cannot be optimized: the regularized variance becomes constant over the simplex. These facts do not seem to have been noticed in the literature.
... Accordingly, related studies often construct the minimum-variance portfolio or an equal-weighted portfolio to avoid this problem. Many studies, including Jagannathan and Ma (2003), show that the minimum-variance strategy not only minimizes variance but also improves overall performance. The changing environment brings about unprecedented volatilities and uncertainties in global financial markets, rendering traditional prediction models less effective (Gaan et al. 2023; Guru et al. 2023; Piwowar-Sulej et al. 2023; Rao et al. 2023; Urom et al. 2021). ...
... Accordingly, related studies often construct a minimum-variance portfolio or an equal-weighted portfolio to avoid this problem. Many studies, including Jagannathan and Ma (2003), show that the minimum-variance strategy not only minimizes variance but also improves overall performance. The optimization problem and optimal weight vector of the minimum-variance strategy can be calculated as ...
Article
Full-text available
This study employs a variety of machine learning models and a wide range of economic and financial variables to enhance the forecasting accuracy of the Korean won–U.S. dollar (KRW/USD) exchange rate and the U.S. and Korean stock market returns. We construct international asset allocation portfolios based on these forecasts and evaluate their performance. Our analysis finds that the Elastic Net and LASSO regression models outperform traditional benchmark models in predicting exchange rate and stock market returns, as evidenced by their superior out-of-sample R-squared values. We also identify the key factors crucial for improving the accuracy of forecasting the KRW/USD exchange rate and stock market returns. Furthermore, a machine learning-driven global portfolio that accounts for exchange rate fluctuations demonstrated superior performance. Global portfolios constructed using LASSO (Sharpe ratio = 3.45) and Elastic Net (Sharpe ratio = 3.48) exhibit a notable performance advantage over traditional benchmark portfolios. This suggests that machine learning models outperform traditional global portfolio construction methods.
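The paper's headline metric, out-of-sample R-squared against a historical-mean benchmark, is straightforward to compute. Below is a minimal sklearn/numpy sketch in the style of the commonly used Campbell-Thompson measure; the Elastic Net penalty strength is an illustrative value, not the paper's tuned one.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def oos_r2(X_train, y_train, X_test, y_test):
    """Out-of-sample R^2: compare the model's squared forecast error
    against the historical-mean benchmark.  Positive values mean the
    model beats the naive mean forecast."""
    model = ElasticNet(alpha=0.01).fit(X_train, y_train)
    sse_model = np.sum((y_test - model.predict(X_test)) ** 2)
    sse_mean = np.sum((y_test - y_train.mean()) ** 2)
    return 1.0 - sse_model / sse_mean
```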
... Heston (1993) developed a stochastic volatility model (Lin & SenGupta, 2023), which can be used to model the time-varying volatility of asset prices. Jagannathan and Wang (2003) proposed a four-factor asset pricing model, which includes a liquidity factor (Jagannathan & Ma, 2003). Goldfarb and Iyengar (2003) developed an algorithm for robust portfolio optimisation (Xidonas et al., 2020). ...
... Zhou, 2017) to meet the dynamic and contemporary market. Still, risk management (Ghanbari et al., 2023a; Jagannathan & Ma, 2003; Risk Management in Indian Stock Market, n.d.; Yang et al., n.d.) will mitigate risk with advanced models and multimodal risk measures. ...
Article
Full-text available
Purpose: This study provides a comprehensive analysis of the evolution of portfolio optimization over the last three decades, employing systematic review and advanced bibliometric techniques to map key trends, influential works, and significant contributors in the field. Design/Methodology/Approach: Adhering to PRISMA guidelines, we conducted a systematic review and bibliometric analysis of 1,000 articles sourced from the Web of Science database, spanning from 1989 to 2023. Advanced bibliometric tools, including citation analysis, co-occurrence analysis, and network visualization, were utilized to identify prominent authors, influential journals, and emerging research themes. Findings: Our analysis reveals a significant growth in portfolio optimization literature, particularly in recent years. Key findings include the identification of pivotal authors, foundational papers, and leading journals that have shaped the field. The study also traces the methodological evolution from traditional models, like Markowitz's Modern Portfolio Theory, to contemporary approaches incorporating artificial intelligence and machine learning. Practical Implications: This study offers valuable insights for researchers and practitioners by highlighting critical developments in portfolio optimization. It also suggests areas for future research, particularly in integrating advanced data analytics and AI-driven methodologies into portfolio management. Originality/Value: This paper stands out by combining systematic review with a comprehensive bibliometric analysis, offering a holistic view of the portfolio optimization landscape. It not only synthesizes past research but also identifies emerging trends and gaps, providing a foundation for future explorations in this dynamic field.
... Researchers suggest using heuristic methods such as genetic algorithms and simulated annealing to improve the portfolio return and the speed of solving mean-variance optimization models [38]. Additionally, they recommend incorporating constraints, such as upper and lower bounds on portfolio weights or limits on the number of securities to select, to stabilize performance and improve solving time [39]. ...
Article
Full-text available
Since the introduction of Modern Portfolio Theory (MPT) in 1952, its practical applications, associated challenges, and computational efficiency on large and high-frequency datasets, particularly datasets not used to develop and optimize the model, have drawn extensive research interest. This study has examined the performance of various portfolio models that have explored the concept of MPT on U.S. stock and cryptocurrency markets, i.e., discrete Markowitz portfolio selection (DMPS), the optimal dynamic portfolio (ODP), the binary unconstrained ODP (BUODP) with a quantum annealing solver, and the 1/N naive diversification (ND). Their performance is then compared to the indices that measure the performance of corresponding market exchange-traded funds (ETFs) for stock markets. Our findings show that the DMPS and ODP perform better than other models, delivering better returns in a shorter period. Both run significantly faster than the BUODP (with quantum annealing), with computation times of approximately 0.5 seconds for the S&P 400, 500, and 600 markets, whereas the BUODP takes 30 seconds; they also mitigate risk and outperform the ETFs and the ND model in the out-of-sample test with diversified portfolios combining NASDAQ stocks and cryptocurrencies. Furthermore, we have analyzed the impact of data with different frequency intervals, i.e., weekly, daily, hourly, and one-minute, on portfolio performance in the stock markets. The results suggest that data collection frequencies do not make a difference in portfolio selections and weights. This study contributes to the advancement of portfolio theory, providing insights and practical value, especially in addressing computational efficiency for high-frequency and large-scale datasets and saving computational costs.
... Since the introduction of Markowitz's mean-variance (MV) model (Markowitz 1952), many models have been proposed, all of which are built upon the foundation laid by Markowitz's MV model. Nevertheless, the sample mean and the sample covariance matrix are susceptible to error, especially when dealing with a large portfolio size (Chopra & Ziemba 1993; DeMiguel, Garlappi & Uppal 2011; Jagannathan & Ma 2003), so the MV portfolio cannot consistently dominate the naive diversification strategy (Hwang, Xu & In 2018). ...
Article
In this paper, we introduce a modified norm-constrained mean-variance portfolio selection method. First, we use the Augmented Lagrangian method (ALM) to convert the objective function to an unconstrained objective function. Then we apply the proximal spectral gradient method (PSG) to the unconstrained objective function to find an optimal sparse portfolio. This novel sparse portfolio optimization procedure encourages sparsity in the entire portfolio via a norm penalty. The PSG utilizes a multiple damping gradient (MDG) method to solve the smooth terms of the function. The step size is computed using the Lipschitz constant. Also, PSG uses the iterative thresholding method (ITH) to handle the norm penalty and induce sparsity in the portfolio. The performance of the PSG is illustrated by its application to the Malaysian stock market. It is found that PSG's sparse portfolio outperforms the equally weighted portfolio when the initial portfolio size is around 100 stocks and is prefiltered using the Sharpe ratio or the coefficient of variation.
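The thresholding step at the core of methods like the one described above is a one-liner for the ℓ1 case. A minimal numpy sketch (the paper's exact norm, damping scheme, and step sizes may differ):

```python
import numpy as np

def soft_threshold(w, t):
    """Proximal operator of t*||.||_1 (soft thresholding): shrinks every
    entry toward zero by t and sets entries with |w_i| <= t exactly to
    zero.  This is the building block of iterative-thresholding steps
    that induce sparsity in the portfolio weights."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)
```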
... One prominent approach is robust optimization, pioneered by Ben-Tal and Nemirovski. For example, Tütüncü & Koenig (2004) propose box uncertainty sets, while Bertsimas & Sim (2004) suggest the use of polyhedral uncertainty sets. An alternative, more straightforward strategy to alleviate the error-maximization property involves constraining the portfolio weights to prevent extreme allocations and enhance diversification against estimation errors (see, e.g., Frost & Savarino, 1988; Jagannathan & Ma, 2003). However, while such constraints may help stabilize the optimization process, they also inherently limit portfolio performance, potentially resulting in suboptimal solutions (Zhao et al., 2019). ...
Thesis
Full-text available
In my dissertation, I develop new portfolio optimization methodologies designed to enhance empirical performance in practical applications. Instead of relying on asset characteristics as intermediary variables to estimate the distribution of asset returns—an approach that introduces estimation errors—this work explores methods that directly incorporate asset characteristics as input variables in the optimization program. By bypassing the estimation step, these approaches aim to improve robustness and efficiency in portfolio construction. This dissertation contributes to both active and passive portfolio management.
... We focus on three types of constraints: (I) The No-shorting Constraint: We set A_t = {π ∈ R^N | π ≥ 0}, which represents the no-shorting constraint. Besides some markets that outright prohibit short selling, the no-shorting constraint is usually added to portfolio optimization models to enhance out-of-sample performance (see [46]). Clearly, when A_t = R^N, the portfolio is unconstrained. ...
Preprint
Motivated by practical applications, we explore the constrained multi-period mean-variance portfolio selection problem within a market characterized by a dynamic factor model. This model captures predictability in asset returns driven by state variables and incorporates cone-type portfolio constraints that are crucial in practice. The model is broad enough to encompass various dynamic factor frameworks, including practical considerations such as no-short-selling and cardinality constraints. We derive a semi-analytical optimal solution using dynamic programming, revealing it as a piecewise linear feedback policy to wealth, with all factors embedded within the allocation vectors. Additionally, we demonstrate that the portfolio policies are determined by two specific stochastic processes resulting from the stochastic optimizations, for which we provide detailed algorithms. These processes reflect the investor's assessment of future investment opportunities and play a crucial role in characterizing the time consistency and efficiency of the optimal policy through the variance-optimal signed supermartingale measure of the market. We present numerical examples that illustrate the model's application in various settings. Using real market data, we investigate how the factors influence portfolio policies and demonstrate that incorporating the factor structure may enhance out-of-sample performance.
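For the no-shorting set A_t quoted in the context above, combined with full investment of wealth, the Euclidean projection has a classic sort-based solution. A small numpy sketch, illustrative only (the paper treats general cone constraints, not just this special case):

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto {pi >= 0, sum(pi) = 1}, i.e. the
    budget-constrained no-shorting set, via the standard sort-based
    algorithm (Duchi et al., 2008)."""
    u = np.sort(v)[::-1]                      # coordinates, descending
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    k = np.nonzero(u * idx > (css - 1.0))[0][-1]
    theta = (css[k] - 1.0) / (k + 1.0)        # uniform shift
    return np.maximum(v - theta, 0.0)
```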
... The third model is the optimal or tangency portfolio model. This model provides an optimal and efficient portfolio by maximizing the Sharpe ratio (Sharpe 1966), because the portfolio that maximizes the Sharpe ratio lies on the mean-variance efficient frontier, at the point where the capital market line is tangent to the efficient frontier (Markowitz 1952; Jagannathan and Ma 2003). For this reason, the optimal portfolio is also called the tangency portfolio. ...
Article
Full-text available
This study assesses the portfolio concentration of socially responsible investment (SRI) pension funds, which may be subject to a potentially limited asset universe and have a higher concentration and lower performance than conventional funds. Nonetheless, in contrast to previous studies on SRI funds, this study considers the information-advantage theory, positing that skilled managers should increase their concentration in assets in which they possess valuable information, departing from optimization models to achieve outperformance. This study first compares actual fund concentration with concentration obtained from several traditional and modern portfolio optimization techniques (minimum variance, global minimum variance, optimal portfolio, naive diversification, risk parity, and reward-to-risk timing) to understand whether SRI pension funds concentrate portfolios and deviate from optimization model solutions. Unlike previous studies, the actual fund assets are considered in the optimization models to take into account the real investment profiles of SRI funds. The results indicate that SRI pension funds are less concentrated than conventional funds, and SRI and conventional pension funds largely diversify their portfolios, presenting lower concentration than portfolios formed with the optimization models. Furthermore, concentration strategies positively influence performance in SRI and conventional funds, revealing the use of information advantage. However, SRI and conventional fund managers present poor skills (picking, timing, and trading) to exploit information advantages due to overconfidence issues, which affect performance with concentration strategies. This situation may be modified if SRI funds follow modern optimization models and conventional funds follow traditional optimization models, improving managers' performance and skills.
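The tangency portfolio described in the context above has a textbook closed form. A minimal numpy sketch, assuming a nonsingular covariance estimate and a positive normalizing sum; the risk-free rate rf is an illustrative input.

```python
import numpy as np

def tangency_weights(mu, S, rf=0.0):
    """Tangency (maximum Sharpe ratio) portfolio: w proportional to
    S^{-1}(mu - rf), normalized to sum to one -- the point where the
    capital market line touches the efficient frontier.  Assumes the
    unnormalized weights have a positive sum."""
    excess = mu - rf                    # expected excess returns
    w = np.linalg.solve(S, excess)
    return w / w.sum()
```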
... Early efforts focused primarily on improving covariance matrix estimation. Chan et al. (1999) and Löffler (2003) proposed utilizing high-frequency data for enhanced volatility forecasts, while Jagannathan and Ma (2003) made the crucial observation that imposing portfolio constraints could effectively shrink extreme covariance estimates. The recognition of parameter uncertainty led to more sophisticated approaches, such as the shrinkage method (Ledoit and Wolf 2003, 2004; Kourtis et al. 2012), which combines sample estimates with structured estimators to reduce estimation error. ...
Preprint
Full-text available
This paper addresses the critical disconnect between prediction and decision quality in portfolio optimization by integrating Large Language Models (LLMs) with decision-focused learning. We demonstrate both theoretically and empirically that minimizing the prediction error alone leads to suboptimal portfolio decisions. We aim to exploit the representational power of LLMs for investment decisions. An attention mechanism processes asset relationships, temporal dependencies, and macro variables, which are then directly integrated into a portfolio optimization layer. This enables the model to capture complex market dynamics and align predictions with the decision objectives. Extensive experiments on S&P 100 and DOW 30 datasets show that our model consistently outperforms state-of-the-art deep learning models. In addition, gradient-based analyses show that our model prioritizes the assets most crucial to decision making, thus mitigating the effects of prediction errors on portfolio performance. These findings underscore the value of integrating decision objectives into predictions for more robust and context-aware portfolio management.
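The shrinkage idea mentioned in the context above combines the sample covariance with a structured target. Here is a minimal numpy sketch with a scaled-identity target and a hand-fixed shrinkage intensity; Ledoit and Wolf derive a data-driven optimal intensity, which is not reproduced here.

```python
import numpy as np

def shrink_to_identity(returns, delta=0.2):
    """Linear shrinkage of the sample covariance toward a scaled
    identity, in the spirit of Ledoit-Wolf:
    S_shrunk = delta*mu*I + (1 - delta)*S, with mu = trace(S)/p.
    delta = 0 returns the sample covariance; delta = 1 the target."""
    S = np.cov(returns, rowvar=False)
    mu = np.trace(S) / S.shape[0]
    return delta * mu * np.eye(S.shape[0]) + (1 - delta) * S
```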
... The Sortino ratios further illustrate this adaptive strength, which echoes findings by Kritzman and Li (2010), who emphasized that dynamic portfolios typically yield better risk-adjusted returns in turbulent markets. In contrast, Jagannathan and Ma (2003) reported limited differences in downside risk between static and rebalanced portfolios, attributing this to lower-frequency rebalancing intervals. This suggests that the rolling portfolio's higher-frequency adjustments might be essential to achieve superior performance. ...
Article
Purpose: This study aims to explore the strategic integration of Sharia-compliant and environmental, social and governance (ESG)-focused investments within global equity portfolio optimization frameworks, with a particular emphasis on variance minimization and dynamic rebalancing techniques.
Design/methodology/approach: The research uses historical data from Sharia-compliant, ESG-focused and conventional equity exchange-traded funds (ETFs). Advanced mean-variance optimization methodologies via quadratic programming are employed, encompassing static optimization with and without a 50% cap on individual asset weights, dynamic optimization with monthly rebalancing and rolling window optimization.
Findings: Portfolios integrating Sharia-compliant investments frequently outperform those composed solely of conventional equity ETFs. Dynamic optimization with monthly rebalancing achieved the highest Sharpe ratio (1.3708) and demonstrated enhanced portfolio resilience during market turbulence, such as the COVID-19 pandemic. Sharia-compliant investments showed substantial allocations during key periods, with weights reaching up to 100% in the first half of 2020. In contrast, ESG-focused investments exhibited more limited and sporadic allocations, reflecting a more opportunistic role in the portfolio.
Practical implications: The findings reaffirm the critical role of Sharia-compliant investments in well-diversified, risk-conscious portfolios while also providing nuanced insights into the more selective integration of ESG-focused assets. The results offer practical guidance for portfolio managers seeking to integrate ethical and sustainable investment principles within advanced portfolio optimization frameworks, particularly when focusing on minimizing variance and dynamically responding to evolving market conditions.
Social implications: The study contributes to the growing body of literature on ethical and sustainable investments, demonstrating that it is possible to balance ethical considerations with robust financial performance. The research underscores the potential for Sharia-compliant investments to play a significant role in global portfolios, potentially fostering greater financial inclusion and cross-cultural understanding in the investment community.
Originality/value: This research provides novel insights by focusing on Sharia-compliant investments within non-Muslim countries, an area that has been relatively underexplored. It also compares the outcomes of static, dynamic and rolling optimizations, highlighting the dynamic interplay between ethical investment principles and financial performance.
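A skeleton of the rolling-window rebalancing loop that abstracts like this one evaluate, in numpy. The window and step lengths, the minimum-variance objective, and the small ridge term are illustrative choices, not the paper's exact setup (which uses quadratic programming with weight caps).

```python
import numpy as np

def rolling_minvar_backtest(returns, window=252, step=21):
    """Rolling-window minimum-variance backtest skeleton: re-estimate
    the covariance on a trailing window, rebalance every `step` days
    (roughly monthly), and hold the weights until the next rebalance.
    A tiny ridge keeps the covariance invertible."""
    T, N = returns.shape
    realized = []
    for t in range(window, T - step, step):
        S = np.cov(returns[t - window:t], rowvar=False)
        w = np.linalg.solve(S + 1e-6 * np.eye(N), np.ones(N))
        w /= w.sum()
        realized.append(returns[t:t + step] @ w)   # out-of-sample returns
    return np.concatenate(realized)
```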
... The Robust Sparse Mean-Variance (RSMV) model extends the classical MV framework by incorporating robustness and sparsity through an ellipsoidal uncertainty set and fixed transaction costs, respectively. The MV portfolio is known to be unstable with respect to the estimated expectations of asset returns, where a slight perturbation may lead to a dramatic change in portfolio weights (Best & Grauer 1991, Jagannathan & Ma 2003, Chopra & Ziemba 2013). ...
Preprint
Full-text available
We extend the classical mean-variance (MV) framework and propose a robust and sparse portfolio selection model incorporating an ellipsoidal uncertainty set to reduce the impact of estimation errors and fixed transaction costs to penalize over-diversification. In the literature, the MV model under fixed transaction costs is referred to as the sparse or cardinality-constrained MV optimization, which is a mixed integer problem and is challenging to solve when the number of assets is large. We develop an efficient semismooth Newton-based proximal difference-of-convex algorithm to solve the proposed model and prove its convergence to at least a local minimizer with a locally linear convergence rate. We explore properties of the robust and sparse portfolio both analytically and numerically. In particular, we show that the MV optimization is indeed a robust procedure as long as an investor makes the proper choice on the risk-aversion coefficient. We contribute to the literature by proving that there is a one-to-one correspondence between the risk-aversion coefficient and the level of robustness. Moreover, we characterize how the number of traded assets changes with respect to the interaction between the level of uncertainty on model parameters and the magnitude of transaction cost.
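A minimal cvxpy sketch of mean-variance optimization with an ellipsoidal uncertainty set around the estimated mean, using the standard worst-case reduction. Taking the uncertainty shape from the covariance itself and the values of kappa and gamma are illustrative simplifications, and the paper's sparsity (fixed transaction cost) term is omitted.

```python
import cvxpy as cp
import numpy as np

def robust_mean_variance(mu_hat, S, kappa=0.5, gamma=5.0):
    """Robust mean-variance: the worst case of mu'w over the ellipsoid
    {mu_hat + d : ||S^{-1/2} d|| <= kappa} equals
    mu_hat'w - kappa * ||S^{1/2} w||, so robustness enters as a norm
    penalty on the weights.  S must be symmetric positive definite."""
    L = np.linalg.cholesky(S)               # S = L L'
    n = len(mu_hat)
    w = cp.Variable(n)
    worst_case_mean = mu_hat @ w - kappa * cp.norm(L.T @ w)
    objective = cp.Maximize(worst_case_mean - 0.5 * gamma * cp.quad_form(w, S))
    prob = cp.Problem(objective, [cp.sum(w) == 1])
    prob.solve()
    return w.value
```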
... Finance: reducing the risk in large portfolios of stocks (Jagannathan and Ma, 2003). ...
Preprint
This paper deals with certain estimation problems involving the covariance matrix in large dimensions. Due to the breakdown of finite-dimensional asymptotic theory when the dimension is not negligible with respect to the sample size, it is necessary to resort to an alternative framework known as large-dimensional asymptotics. Recently, Ledoit and Wolf (2015) have proposed an estimator of the eigenvalues of the population covariance matrix that is consistent according to a mean-square criterion under large-dimensional asymptotics. It requires numerical inversion of a multivariate nonrandom function which they call the QuEST function. The present paper explains how to numerically implement the QuEST function in practice through a series of six successive steps. It also provides an algorithm to compute the Jacobian analytically, which is necessary for numerical inversion by a nonlinear optimizer. Monte Carlo simulations document the effectiveness of the code.
... The success of this strategy is at odds with modern portfolio theory, because it takes only the portfolio variance into account. Yet many empirical studies show that portfolios that focus on minimizing volatility generate superior out-of-sample results (see Clarke et al. 2006, 2011; Jagannathan and Ma 2003; Ledoit and Wolf 2004, among others). That is why it makes sense to provide a statistical test of whether the current portfolio composition differs from the conventional GMVP, taking into account both the uncertainty of the asset returns and the large dimensionality of the portfolio. ...
Preprint
In this study, we construct two tests for the weights of the global minimum variance portfolio (GMVP) in a high-dimensional setting, namely, when the number of assets p depends on the sample size n such that $\frac{p}{n}\to c \in (0,1)$ as n tends to infinity. In the case of a singular covariance matrix with rank equal to q we assume that $q/n\to \tilde{c}\in(0,1)$ as $n\to\infty$. The considered tests are based on the sample estimator and on the shrinkage estimator of the GMVP weights. We derive the asymptotic distributions of the test statistics under the null and alternative hypotheses. Moreover, we provide a simulation study where the power functions and the receiver operating characteristic curves of the proposed tests are compared with other existing approaches. We observe that the test based on the shrinkage estimator performs well even for values of c close to one.
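For reference, the GMVP weights being estimated and tested have the closed form $w_{GMVP} = \Sigma^{-1}\mathbf{1} / (\mathbf{1}^\top \Sigma^{-1}\mathbf{1})$. A minimal numpy sketch of the sample plug-in estimator (the function name is illustrative):

```python
import numpy as np

def gmvp_weights(returns):
    """Plug-in GMVP weights: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)."""
    Sigma = np.cov(returns, rowvar=False)  # sample covariance matrix
    ones = np.ones(Sigma.shape[0])
    x = np.linalg.solve(Sigma, ones)       # Sigma^{-1} 1 without explicit inversion
    return x / x.sum()                     # normalize so weights sum to one
```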
... It is worth mentioning that adding a non-negativity constraint (i.e., no short sales) has been observed to have a similar regularizing effect; see Jagannathan and Ma (2003). ...
Preprint
Distributional approximations of (bi-)linear functions of sample variance-covariance matrices play a critical role to analyze vector time series, as they are needed for various purposes, especially to draw inference on the dependence structure in terms of second moments and to analyze projections onto lower dimensional spaces as those generated by principal components. This particularly applies to the high-dimensional case, where the dimension d is allowed to grow with the sample size n and may even be larger than n. We establish large-sample approximations for such bilinear forms related to the sample variance-covariance matrix of a high-dimensional vector time series in terms of strong approximations by Brownian motions. The results cover weakly dependent as well as many long-range dependent linear processes and are valid for uniformly $\ell_1$-bounded projection vectors, which arise, either naturally or by construction, in many statistical problems extensively studied for high-dimensional series. Among those problems are sparse financial portfolio selection, sparse principal components, the LASSO, shrinkage estimation and change-point analysis for high-dimensional time series, which matter for the analysis of big data and are discussed in greater detail.
... Perhaps surprisingly, shrinkage methods turn out to be related to placing constraints on the portfolio weights in the Markowitz optimization. Jagannathan & Ma (2003) show that imposing a positivity constraint typically shrinks the large entries of the sample covariance downward. As already mentioned, factor analysis and PCA in particular play a prominent role in the literature. ...
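Up to notation, the result in question can be sketched as follows: the long-only minimum-variance weights coincide with the unconstrained minimum-variance weights computed from the adjusted matrix

$$\tilde{S} \;=\; S \;-\; \left(\lambda\,\mathbf{1}^\top + \mathbf{1}\,\lambda^\top\right), \qquad \lambda \ge 0,$$

where $\lambda_i$ is the Lagrange multiplier of the constraint $w_i \ge 0$. Since $\lambda_i > 0$ only when the constraint binds, which typically happens for assets with large estimated covariances, every entry in the corresponding row and column of $S$ is pulled down by $\lambda_i$; this is the shrinkage effect described above.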
Preprint
Estimation error has plagued quantitative finance since Harry Markowitz launched modern portfolio theory in 1952. Using random matrix theory, we characterize a source of bias in the sample eigenvectors of financial covariance matrices. Unchecked, the bias distorts weights of minimum variance portfolios and leads to risk forecasts that are severely biased downward. To address these issues, we develop an eigenvector bias correction. Our approach is distinct from the regularization and eigenvalue shrinkage methods found in the literature. We provide theoretical guarantees on the improvement our correction provides as well as estimation methods for computing the optimal correction from data.
... In SPO, the parameters that appear in the transaction and holding costs can be motivated by our estimates of their true values, but it is better to think of them as 'knobs' that we turn to achieve trading behavior that we like (see, e.g., [17, chapter 8], [35, 19, 41]), with the resulting behavior verified by back-testing, what-if simulation, and stress-testing. ...
Preprint
We consider a basic model of multi-period trading, which can be used to evaluate the performance of a trading strategy. We describe a framework for single-period optimization, where the trades in each period are found by solving a convex optimization problem that trades off expected return, risk, transaction cost and holding cost such as the borrowing cost for shorting assets. We then describe a multi-period version of the trading method, where optimization is used to plan a sequence of trades, with only the first one executed, using estimates of future quantities that are unknown when the trades are chosen. The single-period method traces back to Markowitz; the multi-period methods trace back to model predictive control. Our contribution is to describe the single-period and multi-period methods in one simple framework, giving a clear description of the development and the approximations made. In this paper we do not address a critical component in a trading algorithm, the predictions or forecasts of future quantities. The methods we describe in this paper can be thought of as good ways to exploit predictions, no matter how they are made. We have also developed a companion open-source software library that implements many of the ideas and methods described in the paper.
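The single-period trade-off the abstract describes can be sketched in a few lines of cvxpy. This is a simplified stand-in, not the companion library's API; the proportional-cost model and parameter values are illustrative:

```python
import numpy as np
import cvxpy as cp

def single_period_trades(w_prev, mu, Sigma, gamma=5.0, cost=0.001):
    """Trade to weights that balance expected return, risk, and transaction cost."""
    w = cp.Variable(len(w_prev))                   # post-trade portfolio weights
    z = w - w_prev                                 # trade vector
    objective = (mu @ w                            # expected return
                 - gamma * cp.quad_form(w, Sigma)  # risk aversion times variance
                 - cost * cp.norm1(z))             # proportional transaction cost
    cp.Problem(cp.Maximize(objective), [cp.sum(w) == 1]).solve()
    return z.value
```

The multi-period version plans a sequence of such trades and executes only the first, in the spirit of model predictive control.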
... Various types of high-dimensional data are encountered in multiple disciplines when solving practical problems, for example, gene expression data for disease classifications (Golub et al., 1999), financial market data for portfolio construction and assessment (Jagannathan and Ma, 2003), and spatial earthquake data for geographical analysis (van der Hilst et al., 2007), among many others. To meet the challenges in analyzing high-dimensional data, penalized likelihood methods have been extensively studied; see Hastie et al. (2009) and Fan and Lv (2010) for overviews among a large amount of recent literature. ...
Preprint
Determining how to appropriately select the tuning parameter is essential in penalized likelihood methods for high-dimensional data analysis. We examine this problem in the setting of penalized likelihood methods for generalized linear models, where the dimensionality of covariates p is allowed to increase exponentially with the sample size n. We propose to select the tuning parameter by optimizing the generalized information criterion (GIC) with an appropriate model complexity penalty. To ensure that we consistently identify the true model, a range for the model complexity penalty is identified in GIC. We find that this model complexity penalty should diverge at the rate of some power of logp\log p depending on the tail probability behavior of the response variables. This reveals that using the AIC or BIC to select the tuning parameter may not be adequate for consistently identifying the true model. Based on our theoretical study, we propose a uniform choice of the model complexity penalty and show that the proposed approach consistently identifies the true model among candidate models with asymptotic probability one. We justify the performance of the proposed procedure by numerical simulations and a gene expression data analysis.
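In common notation (the paper's exact form may differ), a GIC-type criterion for selecting the tuning parameter $\lambda$ takes the form

$$\mathrm{GIC}_{a_n}(\lambda) \;=\; \frac{1}{n}\left\{ D\big(\hat{\mu}_\lambda\big) \;+\; a_n\,\big|\operatorname{supp}(\hat{\beta}_\lambda)\big| \right\},$$

where $D$ is the deviance of the fitted model and the support size measures model complexity; $a_n = 2$ and $a_n = \log n$ recover AIC and BIC, and the abstract's point is that consistent model identification requires $a_n$ to diverge like a power of $\log p$.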
... The usefulness of Exhibit 2 can be validated with the following example taken from the article by Elton and Gruber (1977). This is an interesting example, since it has been discussed extensively in the literature (e.g., Statman 1987; Jagannathan and Ma 2003). More recently, it was included (with minor variations) in the authors' finance book (Elton et al. 2014). ...
... These estimators are closely related to Ledoit-Wolf shrinkage (Ledoit and Wolf, 2003, 2004b), which itself has undergone numerous improvements (e.g., Ledoit and Wolf, 2018, 2020a). In tandem, shrinkage methods have been known to impart effects akin to extra constraints in the portfolio optimization as early as Jagannathan and Ma (2003). An insightful example of such robust portfolio optimization, relating (3) to the convergence of the covariance matrix estimator, is developed in Fan, Zhang and Yu (2012). ...
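For instance, the Ledoit-Wolf linear shrinkage estimator mentioned here is available off the shelf in scikit-learn (toy data for illustration):

```python
import numpy as np
from sklearn.covariance import LedoitWolf

X = np.random.default_rng(0).normal(size=(250, 50))  # 250 days, 50 assets (toy)
lw = LedoitWolf().fit(X)
Sigma_shrunk = lw.covariance_  # convex combination of sample cov and scaled identity
print(lw.shrinkage_)           # estimated shrinkage intensity in [0, 1]
```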
Preprint
Full-text available
We describe a puzzle involving the interactions between an optimization of a multivariate quadratic function and a "plug-in" estimator of a spiked covariance matrix. When the largest eigenvalues (i.e., the spikes) diverge with the dimension, the gap between the true and the out-of-sample optima typically also diverges. We show how to "fine-tune" the plug-in estimator in a precise way to avoid this outcome. Central to our description is a "quadratic optimization bias" function, the roots of which determine this fine-tuning property. We derive an estimator of this root from a finite number of observations of a high dimensional vector. This leads to a new covariance estimator designed specifically for applications involving quadratic optimization. Our theoretical results have further implications for improving low dimensional representations of data, and principal component analysis in particular.
... This paper studies the large-dimensional minimum variance portfolio (MVP), inspired by real-world investment scenarios that fund managers frequently encounter, where they must manage a vast universe of assets, often exceeding the sample size. The MVP focuses primarily on the covariance structure, a more straightforward task in such complex environments (Jagannathan and Ma, 2003). Specifically, we propose a robust minimum variance portfolio (R-MVP) to deal with outliers (or heavy tails) in the financial return data. ...
Preprint
Full-text available
This paper proposes a robust, shocks-adaptive portfolio in a large-dimensional assets universe where the number of assets could be comparable to or even larger than the sample size. It is well documented that portfolios based on optimizations are sensitive to outliers in return data. We deal with outliers by proposing a robust factor model, contributing methodologically through the development of a robust principal component analysis (PCA) for factor model estimation and a shrinkage estimation for the random error covariance matrix. This approach extends the well-regarded Principal Orthogonal Complement Thresholding (POET) method (Fan et al., 2013), enabling it to effectively handle heavy tails and sudden shocks in data. The novelty of the proposed robust method is its adaptiveness to both global and idiosyncratic shocks, without the need to distinguish them, which is useful in forming portfolio weights when facing outliers. We develop the theoretical results of the robust factor model and the robust minimum variance portfolio. Numerical and empirical results show the superior performance of the new portfolio.
... While there are arguably better alternatives for dealing with trading costs via multi-period models (Li et al., 2022), our addition provides a minimal yet effective solution. Regarding weight constraints, numerous studies, including Frost and Savarino (1988), Jagannathan and Ma (2003), and Levy and Levy (2014), have demonstrated their effectiveness in enhancing out-of-sample performance for both MV and MinVar portfolios by acting similarly to shrinkage estimation on the covariance matrix. ...
Article
Full-text available
This article introduces a novel hybrid regime identification-forecasting framework designed to enhance multi-asset portfolio construction by integrating asset-specific regime forecasts. Unlike traditional approaches that focus on broad economic regimes affecting the entire asset universe, our framework leverages both unsupervised and supervised learning to generate tailored regime forecasts for individual assets. Initially, we use the statistical jump model, a robust unsupervised regime identification model, to derive regime labels for historical periods, classifying them into bullish or bearish states based on features extracted from an asset return series. Following this, a supervised gradient-boosted decision tree classifier is trained to predict these regimes using a combination of asset-specific return features and cross-asset macro-features. We apply this framework individually to each asset in our universe. Subsequently, return and risk forecasts which incorporate these regime predictions are input into Markowitz mean-variance optimization to determine optimal asset allocation weights. We demonstrate the efficacy of our approach through an empirical study on a multi-asset portfolio comprising twelve risky assets, including global equity, bond, real estate, and commodity indexes spanning from 1991 to 2023. The results consistently show outperformance across various portfolio models, including minimum-variance, mean-variance, and naive-diversified portfolios, highlighting the advantages of integrating asset-specific regime forecasts into dynamic asset allocation.
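A heavily simplified sketch of the supervised stage, assuming the regime labels from the unsupervised jump-model step are already in hand; the feature construction, split, and hyperparameters here are placeholders, not the paper's:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 8))       # asset + cross-asset macro features (toy)
regimes = (features[:, 0] > 0).astype(int)  # stand-in for jump-model labels (1 = bullish)

clf = GradientBoostingClassifier().fit(features[:800], regimes[:800])
p_bull = clf.predict_proba(features[800:])[:, 1]  # out-of-sample P(bullish) per period
# p_bull then feeds the return/risk forecasts used in the mean-variance step
```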
... Silverstein (1995) proves the validity of the Marcenko-Pastur (MP) equation under more general assumptions and shows the strong convergence of the spectral measure of the sample covariance matrix. Recently, high-dimensional optimal portfolio theory has attracted the attention of researchers and practitioners in the financial sector (see, e.g., Jagannathan and Ma 2003; El Karoui 2010; Bodnar et al. 2022a, 2022b, 2023, 2024; Kan and Wang 2024; Lassance et al. 2024). ...
Preprint
In this paper, we analyze the asymptotic behavior of the main characteristics of the mean-variance efficient frontier employing random matrix theory. Our particular interest covers the case when the dimension p and the sample size n tend to infinity simultaneously and their ratio p/n tends to a positive constant $c\in(0,1)$. We neither impose any distributional nor structural assumptions on the asset returns. For the developed theoretical framework, some regularity conditions, like the existence of the 4th moments, are needed. It is shown that two out of three quantities of interest are biased and overestimated by their sample counterparts under the high-dimensional asymptotic regime. This becomes evident based on the asymptotic deterministic equivalents of the sample plug-in estimators. Using them we construct consistent estimators of the three characteristics of the efficient frontier. It is shown that the additive and/or the multiplicative biases of the sample estimates are solely functions of the concentration ratio c. Furthermore, the asymptotic normality of the considered estimators of the parameters of the efficient frontier is proved. Verifying the theoretical results in an extensive simulation study, we show that the proposed estimator for the efficient frontier is a valuable alternative to the sample estimator for high dimensional data. Finally, we present an empirical application, where we estimate the efficient frontier based on the stocks included in the S&P 500 index.
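For orientation, one common parameterization (not necessarily the paper's notation) writes the efficient frontier in the mean-variance plane through exactly three characteristics, the GMV return, the GMV variance, and the frontier slope:

$$\left(R - R_{GMV}\right)^2 = s\left(V - V_{GMV}\right),\qquad R_{GMV}=\frac{\mathbf{1}^\top\Sigma^{-1}\mu}{\mathbf{1}^\top\Sigma^{-1}\mathbf{1}},\qquad V_{GMV}=\frac{1}{\mathbf{1}^\top\Sigma^{-1}\mathbf{1}},\qquad s=\mu^\top\!\left(\Sigma^{-1}-\frac{\Sigma^{-1}\mathbf{1}\mathbf{1}^\top\Sigma^{-1}}{\mathbf{1}^\top\Sigma^{-1}\mathbf{1}}\right)\!\mu.$$

The deterministic equivalents in the abstract describe how the sample plug-ins of these three quantities are biased as functions of the concentration ratio $c = p/n$.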
... The poor performance of the sample covariance GMV portfolios is not a novel result, especially when no constraint on the possibility of exploiting short-selling strategies is imposed (see Frost and Savarino 1988; Eichhorn et al. 1998; Britten-Jones 1999; Jagannathan and Ma 2003). Note that, from a purely mathematical perspective, imposing constraints is equivalent to letting a shrinkage operator act on the covariance matrix of the assets, which helps when the number of parameters to estimate is too large and, as a consequence, estimation errors are large as well. ...
[Table 1 caption: Reliability R for each strategy adopted and for each sample size, under different time spans, ranging from the N = 50 case to the whole sample case (N = 450). The mesoscopic approach closely tracks the reliability of the 1/N heuristic, which is in turn higher than that of the classical GMV portfolio based on the plugged-in sample covariance matrix.]
Article
Full-text available
The idiosyncratic and systemic components of market structure have been shown to be responsible for the departure of the optimal mean-variance allocation from the heuristic ‘equally weighted’ portfolio. In this paper, we exploit clustering techniques derived from Random Matrix Theory to study a third, intermediate (mesoscopic) market structure that turns out to be the most stable over time and provides important practical insights from a portfolio management perspective. First, we illustrate the benefits, in terms of predicted and realized risk profiles, of constructing portfolios by filtering out both random and systemic co-movements from the correlation matrix. Second, we redefine the portfolio optimization problem in terms of stock clusters that emerge after filtering. Finally, we propose a new wealth allocation scheme that attaches equal importance to stocks belonging to the same community and show that it further increases the reliability of the constructed portfolios. Results are robust across different time spans, cross-sectional dimensions and sets of constraints defining the optimization problem.
... Although many studies have combined rolling-window models with portfolio construction, systematic research remains limited. For example, much of the existing work focuses on specific markets or asset classes and lacks systematic evaluation of model stability under different market conditions [4, 5]. Additionally, many studies have limitations in how the method is applied and fail to fully consider the various constraints present in practical operation [6]. ...
Article
Full-text available
This paper explores a method for constructing dynamically adjusted investment portfolios by combining rolling windows with mean-variance models. Portfolio construction plays a crucial role in financial markets by effectively diversifying risk and achieving stable investment returns. This study analyzed adjusted closing price data from five stocks (AAPL, JPM, JNJ, XOM, and PG), sourced from Yahoo Finance. The rolling-window method was employed to predict future stock prices and construct investment portfolios based on these predictions. For each window, the annualized average return and covariance matrix were calculated; the mean-variance model and optimization algorithms were then used to determine the optimal weights for the daily portfolio composition. The results demonstrate the high accuracy of the rolling-window method in predicting short-term stock prices, with Procter & Gamble (PG) and Johnson & Johnson (JNJ) exhibiting the lowest prediction errors. Portfolios based on rolling-window predictions performed well over certain periods, although they held no substantial advantage over the S&P 500 index in the long run. This research provides dynamic portfolio optimization methods of practical value for investors in financial markets.
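A minimal sketch of the rolling-window workflow the abstract describes; the window length and the closed-form mean-variance step are illustrative choices, and the paper's optimizer may differ:

```python
import numpy as np

def rolling_mv_weights(returns, window=252):
    """Each day, estimate mean and covariance on a trailing window and
    take the normalized mean-variance solution w proportional to Sigma^{-1} mu."""
    T, n = returns.shape
    weights = np.full((T, n), np.nan)
    for t in range(window, T):
        R = returns[t - window:t]
        mu = R.mean(axis=0) * 252              # annualized average return
        Sigma = np.cov(R, rowvar=False) * 252  # annualized covariance matrix
        w = np.linalg.solve(Sigma, mu)
        weights[t] = w / w.sum()               # normalize to full investment
    return weights
```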
... The Global Minimum Variance Portfolio (GMVP) model minimizes portfolio variance without considering portfolio returns; this strategy is explored and supported extensively by Jagannathan and Ma (2003). Another widely used approach consists of maximizing the so-called Sharpe ratio, i.e., the risk-adjusted log-return (Sharpe 1998). ...
Article
Full-text available
Portfolio allocation represents a significant challenge within financial markets, traditionally relying on correlation or covariance matrices to delineate relationships among stocks. However, these methodologies assume time stationarity and capture only linear relationships among stocks. In this study, we propose to substitute the conventional Pearson correlation or covariance matrix in portfolio optimization with a similarity matrix derived from the signature. The signature, a concept from path theory, provides a unique representation of time series data, encoding their geometric patterns and inherent properties. Furthermore, we undertake a comparative analysis of network structures derived from the correlation matrix versus those obtained from the signature-based similarity matrix. Through numerical evaluation on the Standard & Poor’s 500, we find that portfolio allocation utilizing the signature-based similarity matrix yielded superior results in terms of cumulative log-returns and Sharpe ratio compared to the baseline network approach based on Pearson correlation. This assessment was conducted across various portfolio optimization strategies. This research contributes to portfolio allocation and financial network representation by proposing the use of signature-based similarity matrices over traditional correlation or covariance matrices.
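As a sketch of how such a similarity matrix could be built, using the iisignature library for truncated path signatures; the time-augmentation of the paths and the cosine kernel are illustrative choices here, not necessarily the paper's:

```python
import numpy as np
import iisignature  # one of several available path-signature libraries

def signature_similarity(prices, level=3):
    """Similarity matrix from truncated path signatures of each stock's
    (time, price) path."""
    sigs = np.array([
        iisignature.sig(np.column_stack([np.arange(len(p)), p]), level)
        for p in prices.T                    # one path per stock (columns)
    ])
    sigs /= np.linalg.norm(sigs, axis=1, keepdims=True)
    return sigs @ sigs.T                     # cosine similarity of signatures
```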
... According to [24], the estimation error associated with the sample mean is significant enough that disregarding the mean entirely does not result in a substantial loss when no additional information is available about the population. Furthermore, the authors in [25] presented empirical evidence demonstrating that, except for one dataset, the minimum-variance method consistently outperforms the mean-variance approach. ...
Article
Full-text available
Mean-variance portfolio optimization is widely used by financial professionals as a fundamental strategy for constructing portfolios that achieve the highest returns for a given degree of risk tolerance. This traditional approach suffers from computational instability issues, which are often addressed by integrating regularization-based methods. However, regularized mean-variance models overlook sector considerations, emphasizing a need for market-informed portfolio optimization. In this paper, we propose an extended mean-variance portfolio selection framework that incorporates a new adaptive sparse-group least absolute shrinkage and selection operator (LASSO) regularization as a penalty term. In the proposed model, the sparse optimal portfolio is selected considering both sectors and assets simultaneously. Moreover, the inclusion of adaptive weights in the regularization, estimated based on chosen criteria, enables the use of more market data in a Markowitz portfolio model. We also develop an efficient alternating direction method of multipliers (ADMM) algorithm that guarantees convergence for finding the optimal sparse portfolio. The effectiveness of the suggested portfolio selection model is validated by numerical results derived from the constituents of the Standard and Poor’s 500. These results demonstrate superior performance over our selected benchmark models across various evaluation measures.
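In common notation (a sketch; the paper's exact objective may differ in details), a sparse-group LASSO penalized Markowitz problem with sectors as groups reads

$$\min_{w\,:\,\mathbf{1}^\top w = 1}\; w^\top \Sigma w \;-\; \tau\,\mu^\top w \;+\; \lambda_1 \sum_{i=1}^{p} v_i\,\lvert w_i\rvert \;+\; \lambda_2 \sum_{g\in\mathcal{G}} u_g\,\lVert w_g\rVert_2,$$

where each group $g$ collects the assets of one sector, and the adaptive weights $v_i$ and $u_g$ are estimated from market data. ADMM handles this by alternating between the smooth quadratic part and the proximal step of the nonsmooth penalty.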
... With the short-selling restriction, the GMV problem is a quadratic optimization problem that can be solved numerically. Restricting short sales is equivalent to shrinking towards zero the larger elements of the covariance matrix that would otherwise imply negative weights (Jagannathan and Ma, 2003), offering an interesting point of comparison. ...
Preprint
This paper develops a large-scale inference approach for the regularization of stock return covariance matrices. The framework allows for the presence of heavy tails and multivariate GARCH-type effects of unknown form among the stock returns. The approach involves simultaneous testing of all pairwise correlations, followed by setting non-statistically significant elements to zero. This adaptive thresholding is achieved through sign-based Monte Carlo resampling within multiple testing procedures, controlling either the traditional familywise error rate, a generalized familywise error rate, or the false discovery proportion. Subsequent shrinkage ensures that the final covariance matrix estimate is positive definite and well-conditioned while preserving the achieved sparsity. Compared to alternative estimators, this new regularization method demonstrates strong performance in simulation experiments and real portfolio optimization.
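A simplified sketch of the threshold-then-shrink idea, with an ordinary t-test and Bonferroni correction standing in for the paper's sign-based Monte Carlo resampling:

```python
import numpy as np
from scipy import stats

def thresholded_correlation(X, alpha=0.05):
    """Set pairwise correlations that fail a Bonferroni-corrected test to zero."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    t = R * np.sqrt((n - 2) / np.clip(1 - R**2, 1e-12, None))
    pvals = 2 * stats.t.sf(np.abs(t), df=n - 2)
    m = p * (p - 1) / 2                  # number of distinct pairwise tests
    keep = pvals < alpha / m             # familywise error control (Bonferroni)
    np.fill_diagonal(keep, True)
    return np.where(keep, R, 0.0)
    # a subsequent shrinkage step (not shown) would restore positive definiteness
```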
... The threshold for the maximum security weight was set at 20%, similar to Chen et al. (2021), while a 1% minimum weight was used. Jagannathan and Ma (2003) argued that imposing upper- and lower-bound constraints can prevent extreme exposure to individual stocks. The effect is similar to shrinkage, thus addressing the problems of covariance matrix estimation discussed by Ledoit and Wolf (2003). ...
Preprint
We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint on the expected return, applying the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for $r = N/T < 1$, where N is the dimension of the portfolio and T the length of the time series used to estimate the covariance matrix. At the critical point $r = 1$ a phase transition takes place. The out-of-sample estimation error blows up at this point as $1/(1-r)$, independently of the covariance matrix or the expected return, displaying the universality not only of the critical index but also of the critical point. As a conspicuous illustration of the dangers of in-sample estimates, the optimal in-sample variance is found to vanish at the critical point, inversely proportional to the divergent estimation error.
Article
Full-text available
We investigate portfolio selection performance, as in Markowitz, by evaluating covariance matrix estimation criteria in the currency market. This study challenges theoretically rigorous shrinkage covariance estimators using multiple evaluation metrics: systematic loss function, risk profile of minimum variance portfolios, Herfindahl index, financial efficiency, and concentration level. We assess out-of-sample performance across conventional models, factor models, linear shrinkage estimators, and equally weighted portfolios by applying mean-variance criteria and the minimum-variance framework to the 10 most traded currencies. Our findings reveal that mean-variance optimal portfolios are concentrated, counterintuitive, and highly sensitive to optimizer input choices in currency markets. We find that shrinkage estimators do not provide additional benefits to investors and fund managers in terms of the systematic loss function or the risk profiles of minimum variance portfolios. The research highlights critical limitations in traditional portfolio construction approaches, demonstrating that portfolios built using mean-variance criteria are prone to significant input-data sensitivity and tend to create overly concentrated investments. Consequently, the study suggests that investors and fund managers should exercise caution when selecting covariance estimators and consider exploring more diversified strategies to optimize portfolio performance in foreign exchange markets.
Article
Full-text available
We study non-linear predictability of stock returns arising from the dividend-price ratio and its implications for asset allocation decisions. Using data from five countries — U.S., U.K., France, Germany and Japan — we find empirical evidence supporting non-linear and time-varying models for the equity risk premium. Building on this, we examine several model specifications that can account for non-linear return predictability, including Markov switching models, regression trees, random forests and neural networks. Although in-sample return regressions and portfolio allocation results support the use of non-linear predictability models, the out-of-sample evidence is notably weaker, highlighting the difficulty in exploiting non-linear predictability in real time.
Conference Paper
Full-text available
This paper presents the development of a Portfolio Management System (PMS) designed to enhance investment management through a web-based application. Utilizing HTML, CSS, JavaScript, and Python, the PMS offers functionalities such as stock tracking, performance analysis, back-testing strategies, and personalized investment recommendations. This research aims to address the gap in accessible financial tools for individual investors, promoting informed decision-making and portfolio optimization.
Preprint
Full-text available
Sambar (Cervus unicolor) is considered an important prey species of Tiger and other carnivores throughout South and Southeast Asia. However, the ecological information on Sambar deer in Bhutan is scarce. Therefore, this study represents a pioneering effort to evaluate habitat preferences and assess the impacts of both natural and anthropogenic disturbances on Sambar occurrence within Phrumsengla National Park, a designated National Protected Area. The study area was stratified into three forest types and the standard line transect method was employed for data collection. A total of 90 plots were established, comprising 45 already used by Sambar and 45 control plots representing other available habitats. The analysis revealed that Sambar strongly preferred an elevation range of 2356–2680 masl with northeast-facing slopes of 0–15°, a canopy cover of 51–75%, a ground cover of 61–76%, and a distance to water sources of 101–200 m. The study also identified preferred vegetation types, with Sambar showing a marked association with tree species such as Taxus baccata and Betula utilis, shrub species like Sarcococca hookeriana, and herbaceous plants including Stellaria vestita. Furthermore, Sambar favored cool broadleaved forests over mixed conifer and fir forests. Livestock grazing was the prominent anthropogenic disturbance, followed by timber extraction, which had a strong negative correlation with significant difference (p < .05). The findings underscore the necessity for stringent regulations regarding livestock grazing and resource extraction within protected areas. To build on this foundational study, long-term monitoring and further research are essential for developing targeted conservation strategies and interventions, ensuring the sustainability of this vital species and its habitat.
Preprint
Full-text available
This study proposes a novel approach for intelligent decision-making in automotive platform-based projects by integrating advanced techniques such as Long Short-Term Memory (LSTM) for time series forecasting, Genetic Algorithms for portfolio optimization, and Multi-Objective Optimization for balancing conflicting objectives. The use of LSTM enables accurate prediction of future trends in automotive platform performance metrics, while Genetic Algorithms efficiently search for the optimal portfolio composition that maximizes returns and minimizes costs. By incorporating Multi-Objective Optimization, decision-makers can explore trade-offs between multiple objectives such as maximizing returns, minimizing costs, and ensuring diversification. The proposed framework offers a comprehensive solution for optimizing automotive platform portfolios and facilitating strategic decision-making in the automotive industry projects.
Article
Full-text available
While machine learning's role in financial trading has advanced considerably, algorithmic transparency and explainability challenges still exist. This research enriches prior studies focused on high‐frequency financial data prediction by introducing an explainable reinforcement learning model for portfolio management. This model transcends basic asset prediction, formulating concrete, actionable trading strategies. The methodology is applied in a custom trading environment mimicking the CAC‐40 index's financial conditions, allowing the model to adapt dynamically to market changes based on iterative learning from historical data. Empirical findings reveal that the model outperforms an equally weighted portfolio in out‐of‐sample tests. The study offers a dual contribution: it elevates algorithmic planning while significantly boosting transparency and interpretability in financial machine learning. This approach tackles the enduring ‘black‐box’ issue and provides a holistic, transparent framework for managing investment portfolios.
Article
Green and Hollifield (1992) argue that the presence of a dominant factor would result in extreme negative weights in mean-variance efficient portfolios even in the absence of estimation errors. In that case, imposing no-short-sale constraints should hurt, whereas empirical evidence is often to the contrary. We reconcile this apparent contradiction. We explain why constraining portfolio weights to be nonnegative can reduce the risk in estimated optimal portfolios even when the constraints are wrong. Surprisingly, with no-short-sale constraints in place, the sample covariance matrix performs as well as covariance matrix estimates based on factor models, shrinkage estimators, and daily data. Copyright (c) 2003 by the American Finance Association.