
The 1/N investment strategy, i.e. the strategy of splitting one’s wealth uniformly between the available investment possibilities, has recently received plenty of attention in the literature. In this paper, we demonstrate that the uniform investment strategy is rational in situations where an agent is faced with a sufficiently high degree of model uncertainty in the form of ambiguous loss distributions. More specifically, we use a classical risk minimization framework to show that, for a broad class of risk measures, as the uncertainty concerning the probabilistic model increases, the optimal decisions tend to the uniform investment strategy. To illustrate the theoretical results of the paper, we investigate the Markowitz portfolio selection model as well as Conditional Value-at-Risk minimization with ambiguous loss distributions. Subsequently, we set up a numerical study using real market data to demonstrate the convergence of optimal portfolio decisions to the uniform investment strategy.
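The convergence result described in the abstract can be illustrated with a toy computation. The sketch below is not the paper's model: it uses a simple ridge-style ambiguity penalty `delta` on a minimum-variance problem (the covariance matrix `Sigma` and the penalty form are illustrative assumptions) to show the qualitative effect that as the ambiguity penalty grows, the optimizer is driven toward the uniform 1/N portfolio.

```python
import numpy as np

# Toy illustration (not the paper's exact formulation): a Markowitz-style
# minimum-variance problem with an ambiguity penalty delta * ||x||^2.
# Minimizing x' (Sigma + delta*I) x subject to sum(x) = 1 has the
# closed-form solution x proportional to (Sigma + delta*I)^{-1} 1,
# which tends to the uniform portfolio 1/N as delta grows.
def robust_min_variance(Sigma, delta):
    N = Sigma.shape[0]
    ones = np.ones(N)
    w = np.linalg.solve(Sigma + delta * np.eye(N), ones)
    return w / w.sum()

Sigma = np.array([[0.10, 0.02, 0.01],   # hypothetical covariance matrix
                  [0.02, 0.05, 0.00],
                  [0.01, 0.00, 0.20]])

for delta in (0.0, 1.0, 100.0):
    w = robust_min_variance(Sigma, delta)
    print(delta, np.round(w, 3))
# As delta increases, the weights approach the 1/N portfolio.
```

With `delta = 0` the solution is the classical minimum-variance portfolio; for very large `delta` the weights become numerically indistinguishable from (1/3, 1/3, 1/3).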


... Explicit incorporation of a risk measure into a DRO model has also received attention in the literature. We refer to Pflug et al. [303], Pichler [305], Pichler and Xu [307], Wozabal [412] for spectral and distortion risk measures, Calafiore [72] for variance and mean absolute deviation, Hanasusanto et al. [178], Wiesemann et al. [410] for optimized certainty equivalent, Hanasusanto et al. [175] for CVaR, and Postek et al. [311] for a variety of risk measures. Delage and Li [103] study a risk minimization problem where there is ambiguity on the underlying risk measure. ...

... When the ambiguity set contains all discrete distributions around the empirical distribution in the sense of the Wasserstein metric, Pflug and Wozabal [302] and Pflug et al. [303] propose to choose the level of robustness based on a probabilistic statement on the Wasserstein metric between the empirical and true distributions, due to Dudley [124], as $\varepsilon = C N^{-1/d} \alpha^{-1}$. This choice of $\varepsilon$ guarantees that $\mathbb{P}^N\{d_{W_c}(P, P_N) \geq \varepsilon\} \leq \alpha$, and consequently, a finite-sample guarantee with confidence $1 - \alpha$ can be achieved. ...
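The radius calibration described in this excerpt can be sketched numerically. The code below is an assumption-laden illustration: Dudley's moment bound E[d_W(P, P_N)] ≤ C·N^(−1/d), combined with Markov's inequality, gives P{d_W(P, P_N) ≥ ε} ≤ C·N^(−1/d)/ε, so a radius of order C·N^(−1/d)/α delivers the (1 − α) finite-sample guarantee. The constant C is distribution-dependent and is simply treated as an input here.

```python
# Hedged sketch of the Wasserstein-radius calibration sketched above.
# The constant C depends on the underlying distribution and is taken
# as given; the formula follows from Dudley's bound plus Markov's
# inequality, not from any particular paper's exact statement.
def wasserstein_radius(N, d, alpha, C=1.0):
    """Radius giving P{d_W(P, P_N) >= radius} <= alpha."""
    return C * N ** (-1.0 / d) / alpha

# The radius shrinks as the sample size N grows, and grows with the
# dimension d and the required confidence 1 - alpha:
print(wasserstein_radius(N=100, d=2, alpha=0.05))
print(wasserstein_radius(N=10_000, d=2, alpha=0.05))
```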

... We now turn our attention to the connection between DRO and regularization in statistical learning. Pflug et al. [303], Pichler [305], Wozabal [412] draw the connection between robustification and regularization, where, as in Theorem 14, (i) the shape of the transportation cost in the definition of the optimal transport discrepancy directly implies the type of regularization, and (ii) the size of the ambiguity set dictates the regularization parameter. Pichler [305] studies worst-case values of lower semicontinuous and law-invariant risk measures, including spectral and distortion risk measures, over an ambiguity set of distributions formed via the p-Wasserstein metric utilizing an arbitrary norm around the empirical distribution. ...

The concepts of risk aversion, chance-constrained optimization, and robust optimization have developed significantly over the last decade. The statistical learning community has also witnessed a rapid theoretical and applied growth by relying on these concepts. A modeling framework, called distributionally robust optimization (DRO), has recently received significant attention in both the operations research and statistical learning communities. This paper surveys the main concepts and contributions to DRO, and its relationships with robust optimization, risk aversion, chance-constrained optimization, and function regularization. Various approaches to model the distributional ambiguity and their calibrations are discussed. The paper also describes the main solution techniques used to solve the resulting optimization problems.

... In the situation with logarithmic utility and uncertainty sets that are balls in some p-norm, p ∈ [1, ∞), it is possible to carry over methods from a one-period risk minimization problem as in Pflug et al. [21] to our continuous-time robust utility maximization problem. If $K = \{\mu \in \mathbb{R}^d \mid \|\mu - \nu\|_p \leq \kappa\}$, then for every $\varepsilon > 0$ there exists a $\kappa_0 > 0$ such that for all $\kappa \geq \kappa_0$ the strategy $\pi^*(\kappa)$ that is optimal for ...

... This approach has several drawbacks. Firstly, we can follow the ideas from Pflug et al. [21] in continuous time only for logarithmic utility and uncertainty sets K that are balls in p-norm. Secondly, we have to restrict to the class of deterministic strategies to be able to use their methods. ...

... In the special case where K is a ball, this leads to a uniform diversification strategy. This result is in line with Pflug et al. [21] who show convergence of the optimal strategy to the uniform diversification strategy in a risk minimization setting with increasing model uncertainty. ...

In this paper we investigate a utility maximization problem with drift uncertainty in a multivariate continuous-time Black–Scholes type financial market which may be incomplete. We impose a constraint on the admissible strategies that prevents a pure bond investment and we include uncertainty by means of ellipsoidal uncertainty sets for the drift. Our main results consist firstly in finding an explicit representation of the optimal strategy and the worst-case parameter, secondly in proving a minimax theorem that connects our robust utility maximization problem with the corresponding dual problem. Thirdly, we show that, as the degree of model uncertainty increases, the optimal strategy converges to a generalized uniform diversification strategy.

... An investment strategy that is widely used in financial markets is the uniform investment strategy or 1∕N rule, which divides the budget among assets equally. Pflug et al. (2012) demonstrated that the uniform investment strategy is the best strategy for investment under uncertainty. They proposed robust mean-CVaR and mean-variance PSPs where the distribution function of asset returns is uncertain and belongs to a Kantorovich or Wasserstein metric-based ambiguity set. ...

... Hence, the optimal investment strategy in a high ambiguity situation is the uniform investment or 1∕N rule. However, Pflug et al. (2012) assumed that all assets are subject to uncertainty, even though it is possible to include fixed-income assets with no ambiguity or uncertainty in the portfolio. Therefore, Paç and Pınar (2018) extended the robust uniform strategy of Pflug et al. (2012) by considering both ambiguous and unambiguous assets. They showed that as the ambiguity level, measured by the radius of the ambiguity set, increases, the optimal portfolio tends to use equal weights for all assets. ...

This paper reviews recent advances in robust portfolio selection problems and their extensions, from both operational research and financial perspectives. A multi-dimensional classification of the models and methods proposed in the literature is presented, based on the types of financial problems, uncertainty sets, robust optimization approaches, and mathematical formulations. Several open questions and potential future research directions are identified.

... The definition (BALL-C) was chosen both for notational convenience, and to emphasize that distributions in continuous spaces can be specified via varying outcome mappings (as opposed to varying probability measures). This approach is taken by Pflug et al. (2012) to constructively prove the crucially important proposition that underlies our development for the continuous ball case (see Online Appendix C). ...

... We now turn our attention to a class of problems where outcome mapping G has a bilinear structure, and the ambiguity set is a continuous Wasserstein-p ball. Our principal tool to obtain potentially tractable formulations for problems in this class will be a result due to Pflug et al. (2012) (Proposition C.2 in Online Appendix C), which provides a closed-form robustification of convex risks for the case of a bilinear outcome mapping. A detailed discussion along with the formulations of the problems under consideration are presented in Online Appendix C. ...

... Our eventual goal is to similarly convert the problem (DRO-RAD), which arises when the ambiguity set is a discrete EMD ball of type (BALL-D). The primary difficulty lies in the fact that the key result of Pflug et al. (2012), which provided an elegant way to robustify risk measures in a continuous context by replacing the supremum over the ambiguity set with a closed-form formula (see (34) in Online Appendix C), is no longer valid in a discrete setting, as the following example shows. ...

We introduce a new class of distributionally robust optimization problems under decision-dependent ambiguity sets. In particular, as our ambiguity sets, we consider balls centered on a decision-dependent probability distribution. The balls are based on a class of earth mover’s distances that includes both the total variation distance and the Wasserstein metrics. We discuss the main computational challenges in solving the problems of interest and provide an overview of various settings leading to tractable formulations. Some side results that arise along the way, such as the mathematical programming expressions for robustified risk measures in a discrete space, are also of independent interest. Finally, we rely on state-of-the-art modeling techniques from machine scheduling and humanitarian logistics to arrive at potentially practical applications, and present a numerical study for a novel risk-averse scheduling problem with controllable processing times.
Summary of Contribution: In this study, we introduce a new class of optimization problems that simultaneously address distributional and decision-dependent uncertainty. We present a unified modeling framework along with a discussion on possible ways to specify the key model components, and discuss the main computational challenges in solving the complex problems of interest. Special care has been devoted to identifying the settings and problem classes where these challenges can be mitigated. In particular, we provide model reformulation results, including mathematical programming expressions for robustified risk measures, and describe how these results can be utilized to obtain tractable formulations for specific applied problems from the fields of humanitarian logistics and machine scheduling. Toward demonstrating the value of the modeling approach and investigating the performance of the proposed mixed-integer linear programming formulations, we conduct a computational study on a novel risk-averse machine scheduling problem with controllable processing times. We derive insights regarding the decision-making impact of our modeling approach and key parameter choices.

... Probably the paper closest to the results presented here is Pflug et al. [2012], where the authors consider a robust maximisation problem for risk measures. Their main examples concern the Markowitz functional and the conditional value at risk. ...

... However, their setting does not easily translate to ours: indeed, [Pflug et al., 2012, Proposition 1] essentially assumes that the map $Q \mapsto \|U'(X)\|_{L^p(Q)}$ is constant on $B_k(P)$, which is clearly not satisfied for general utility functions of interest. Furthermore, results similar to Pflug et al. [2012] are proved in Sass and Westphal [2021] for the power and logarithmic utility functions when there is drift uncertainty in a multivariate continuous-time Black-Scholes type financial market. However, to the best of our knowledge, the case of general utility maximization has not been addressed, even in a one-period setup. ...

We investigate an expected utility maximization problem under model uncertainty in a one-period financial market. We capture model uncertainty by replacing the baseline model $\mathbb{P}$ with an adverse choice from a Wasserstein ball of radius $k$ around $\mathbb{P}$ in the space of probability measures and consider the corresponding Wasserstein distributionally robust optimization problem. We show that optimal solutions converge to the uniform diversification strategy when uncertainty is increasingly large, i.e. when the radius $k$ tends to infinity.

... In this study, five methodologies were selected to calculate portfolios: the naive portfolio (NP), tangent (Tang), minimum variance (MinV), risk parity (ParR) and VolT, following Pflug, Pichler, and Wozabal (2012) and Harvey et al. (2018). Equations (4)-(8) give the formulas for calculating the portfolio weights under the aforementioned methodologies, respectively. ...
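Two of the five weighting schemes named in this excerpt (NP and MinV) have standard closed forms; the sketch below is a minimal illustration and does not reproduce the cited study's equations (4)-(8). The covariance matrix is a hypothetical example.

```python
import numpy as np

# Minimal sketch of two standard portfolio weighting schemes:
# NP (naive/equally weighted) and MinV (minimum variance).
def naive_weights(n):
    return np.full(n, 1.0 / n)          # NP: weight 1/n per asset

def min_variance_weights(Sigma):
    # MinV: w proportional to Sigma^{-1} 1, normalized to sum to one
    ones = np.ones(Sigma.shape[0])
    w = np.linalg.solve(Sigma, ones)
    return w / w.sum()

Sigma = np.diag([0.04, 0.09, 0.16])     # hypothetical asset covariances
print(naive_weights(3))
print(np.round(min_variance_weights(Sigma), 3))
```

With a diagonal covariance matrix, MinV overweights the lowest-variance asset, whereas NP ignores the covariances entirely.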

... Regarding the performance of Black-Litterman portfolios for conservative profiles, NP is a viable alternative, validating previous studies such as those by Pflug et al. (2012) and Iquiapaza et al. (2016), who comment on this investment strategy. Conversely, the greater the investor's risk tolerance, the better the performance of the market proxy, confirming the importance of considering different levels of risk in the context of the study. ...

Purpose
The aim of the study was to analyze the performance of Black-Litterman (BL) portfolios using a views estimation procedure that simulates investor forecasts based on technical analysis.
Design/methodology/approach
Ibovespa, S&P500, Bitcoin and interbank deposit rate (IDR) indexes were respectively considered proxies for the national, international, cryptocurrency and fixed income stock markets. Forecasts were made out of the sample aiming at incorporating them in the BL model, using several portfolio weighting methods from June 13, 2013 to August 30, 2022.
Findings
The Sharpe, Treynor and Omega ratios point out that the proposed model, considering only variable return assets, generates portfolios with performances superior to their traditionally calculated counterparts, with emphasis on the risk parity portfolio. Nonetheless, the inclusion of the IDR leads to performance losses, especially in scenarios with lower risk tolerance. And finally, given the impact of turnover, the naive portfolio was also detected as a viable alternative.
Practical implications
The results obtained can contribute to improving investors' practices, specifically by validating both the performance improvement from including foreign assets and cryptocurrencies and the application of the BL model for asset pricing.
Originality/value
The main contributions of the study are: performance analysis incorporating cryptocurrencies and international assets in an uncertain recent period; the use of a methodology to compute the views simulating the behavior of managers using technical analysis; and comparing the performance of portfolio management strategies based on the BL model, taking into account different levels of risk and uncertainty.

... This type of ambiguity set is constructed to contain all the distributions centering around a nominal distribution (center) within a certain distance threshold, where the distance can be measured by various metrics such as φ-divergence and the Wasserstein metric that is of interest in this paper. In the application of portfolio allocation, this center is often chosen as the empirical distribution (see Pflug et al. 2012, Blanchet et al. 2022), which converges to the true distribution as the sample size goes to infinity if the returns of assets are assumed to be i.i.d. across time. ...

... We can change the second constraint above similarly to reformulate it as a set of linear and second-order cone constraint(s) when m = 1 and m = 2, respectively. Proof of Proposition 2. The proof is similar to what has been shown in Pflug et al. (2012), where they prove the case when K = 1. Denote the center of the Wasserstein ambiguity set as P ∈ U_rs(0). ...

... Thereafter, we propose an ex-post empirical analysis based on the DEA-selected assets to create a uniform portfolio of the stock markets (investing 1/n in each efficient asset) (DeMiguel et al. 2007; Pflug et al. 2012). For the empirical analysis, we use two alternative datasets: the components of the S&P500 and the Fama and French 100 portfolios formed on Size and Book to Market. ...

... (1) the strategy that invests uniformly among all the preselected assets (see DeMiguel et al. 2007; Pflug et al. 2012); (2) the strategy that maximizes the MSG Sharpe ratio; (3) the strategy that maximizes the MSG stable ratio; and (4) the strategy that maximizes the MSG Pearson ratio. Thus, we denote with x = [x_1, ..., x_n] the vector of portfolio weights, and we optimize these strategies applied to the portfolio x, where no short sales are allowed (i.e., x_i ≥ 0). ...

This paper uses the data envelopment analysis (DEA) approach as a nonparametric efficiency analysis tool to preselect efficient assets in large-scale portfolio problems. Thus, we reduce the dimensionality of portfolio problems, considering multiple asset performance criteria in a linear DEA model. We first introduce several reward/risk criteria that are typically used in the portfolio literature to identify features of financial returns. Secondly, we suggest some DEA input/output sets for preselecting efficient assets in a large-scale portfolio framework. Then, we evaluate the impact of the preselected assets in different portfolio optimization strategies. In particular, we propose an ex-post empirical analysis based on two alternative datasets: the components of the S&P500 and the Fama and French 100 portfolios formed on size and book to market. According to this empirical analysis, we observe better performance of the DEA preselection than of the classic PCA factor models for large-scale portfolio selection problems. Moreover, the proposed model outperforms the S&P500 index and the strategy based on the fully diversified portfolio.

... The easiest, fastest portfolio allocation strategy is the equally weighted technique, which does not require solving an optimization problem and allocates the same amount of wealth to each asset. This approach follows the principle of not putting all eggs in one basket and can be appropriate when neither the risks nor the expected returns can be forecasted or when the estimation error is large (De Carvalho et al., 2012; Pflug et al., 2012; Battaglia and Leal, 2017). ...

Portfolio allocation is an important tool for portfolio managers and investors interested in diversification as well as improvements in out-of-sample portfolio performance. Recently, new portfolio allocation strategies based on unsupervised machine learning have been proposed in the literature, being hierarchical risk parity one of the most popular. This article uses assets from the Brazilian financial market to perform an extensive out-of-sample comparison of hierarchical risk parity against widely-known, traditional portfolio allocation techniques. The results suggest that, in general, hierarchical risk parity does not report the best performance but, in some performance measures, performs equally well to other approaches. Overall, hierarchical risk parity outperforms the market index.

... These measures, however, are often predicated on strong assumptions about the underlying return distributions (Daníelsson and Zigrand, 2006). Moreover, they may not fully capture extreme events or the complex, nonlinear dependencies that often characterize financial assets (Pflug et al., 2012). The following section extends our discussion beyond these conventional measures. ...

This study introduces a multivariate entropic Value at Risk (mEVaR) risk measure, broadening the conventional Value at Risk scope to a multi-asset scenario. The mEVaR is coherent and encapsulates the integrated risk of various assets in a portfolio. In addition, a new theoretical result incorporates mutual information into the mEVaR to capture tail dependence during extreme market events. The findings suggest that greater mutual dependence among assets increases risk as the benefit of diversification decreases. Examples, simulations, and empirical studies illustrate the applicability of these risk measures as tools for managing and optimizing investment portfolios.

... This type of ambiguity set is constructed to contain all the distributions centering around a nominal distribution (center) within a certain distance threshold, where the distance can be measured by various metrics, such as φ-divergence and the Wasserstein metric, that are of interest in this paper. In the application of portfolio allocation, this center is often chosen as the empirical distribution (see Pflug et al. 2012, Blanchet et al. 2022), which converges to the true distribution as the sample size goes to infinity if the returns of assets are assumed to be i.i.d. across time. ...

Problem definition: Nonstationarity of the random environment is a critical yet challenging concern in decision-making under uncertainty. We illustrate the challenge from the nonstationarity and the solution framework using the portfolio selection problem, a typical decision problem in a time-varying financial market. Methodology/Results: This paper models the nonstationarity by a regime-switching ambiguity set. In particular, we incorporate the time-varying feature of the stochastic environment into the traditional Wasserstein ambiguity set to build our regime-switching ambiguity set. This modeling framework has strong financial interpretations because the financial market is exposed to different economic cycles. We show that the proposed distributional optimization framework is computationally tractable. We further provide a general data-driven portfolio allocation framework based on a covariate-based estimation and a hidden Markov model. We prove that the approach can include the underlying distribution with a high probability when the sample size is larger than a quantitative bound, from which we further analyze the quality of the obtained portfolio. Extensive empirical studies are conducted to show that the proposed portfolio consistently outperforms the equally weighted portfolio (the 1/N strategy) and other benchmarks across both time and data sets. In particular, we show that the proposed portfolio exhibited a prompt response to the regime change in the 2008 financial crisis by reallocating the wealth into appropriate asset classes on account of the time-varying feature of our proposed model. Managerial implications: The proposed framework helps decision-makers hedge against time-varying uncertainties. Specifically, applying the proposed framework to portfolio selection problems helps investors respond promptly to the regime change in financial markets and adjust their portfolio allocation accordingly.
Funding: This work was supported by the Neptune Orient Lines Fellowship [NOL21RP04], Singapore Ministry of Education Academic Research Fund Tier 2 [MOE-T2EP20220-0013], and Singapore Ministry of Education Academic Research Fund Tier 1 [Grant RG17/21].
Supplemental Material: The e-companion is available at https://doi.org/10.1287/msom.2023.1229

... Moreover, the instability of the model is considered one of the main reasons for its poor out-of-sample performance when compared to naive allocation models, see DeMiguel et al. (2009). In the case of the Markowitz model, the uncertainty arising from parameter estimation has been discussed in depth, see among others Pflug et al. (2012). Due to parameter uncertainty, the task of identifying the tangency portfolio can also become challenging, see for example Muhinyuza et al. (2020). ...

In the Markowitz model, the expected return of a portfolio is a parameter that can be chosen arbitrarily by the investor when the vector of assets' expected returns is not constant. While this assumption is usually verified in practice, in real-data applications it often happens that the two linear restrictions of the Markowitz model, the budget constraint and the restriction on the portfolio's expected return, are badly scaled and/or almost collinear, causing numerical instability of the model. Using numerical arguments, we propose a set of suitable values for the portfolio expected return that restricts the standard mean-variance efficient frontier to a subset of portfolios that are numerically stable with respect to a given parameter δ. These portfolios are the stable solutions of the optimal allocation problem and are derived by imposing a condition that preserves the numerical rank of the matrices involved in the calculation. The proposal is applied to both long-only and long-short portfolios. An extensive application performed on different databases of real financial data highlights the effectiveness of our proposal in reducing the numerical instability of the model.

... But in addition to this immediate application, optimal transport theory has also led to the notion of the Wasserstein distance [33,64,66], which defines a metric between different probability distributions. Over the years, optimal transport has found applications in different areas of economics [16,26,53], probability theory [55,56], statistics [25,27,44], differential geometry [22,24,61], robust optimization [10,42,69], and machine learning and data science [4,19,50,60], just to name a few. At the same time, various variants and extensions of optimal transport have emerged, like multi-marginal versions [2,23,47], optimal transport with additional constraints [9,14,15,20,36,45], optimal transport between measures with different masses [17,59], relaxations [12,39] and regularizations [18,40]. ...

In this paper we introduce a variant of optimal transport adapted to the causal structure given by an underlying directed graph. Different graph structures lead to different specifications of the optimal transport problem. For instance, a fully connected graph yields standard optimal transport, a linear graph structure corresponds to adapted optimal transport, and an empty graph leads to a notion of optimal transport related to CO-OT, Gromov-Wasserstein distances and factored OT. We derive different characterizations of causal transport plans and introduce Wasserstein distances between causal models that respect the underlying graph structure. We show that average treatment effects are continuous with respect to causal Wasserstein distances and small perturbations of structural causal models lead to small deviations in causal Wasserstein distance. We also introduce an interpolation between causal models based on causal Wasserstein distance and compare it to standard Wasserstein interpolation.
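The Wasserstein distances recurring throughout these excerpts admit a simple closed form in one dimension, which makes the metric easy to illustrate. The sketch below is a minimal, hedged example (the sample sizes and distributions are illustrative assumptions, not taken from any of the cited papers).

```python
import numpy as np

# Minimal sketch: the 1-Wasserstein (Kantorovich) distance between two
# one-dimensional empirical distributions with equally many, equally
# weighted atoms. On the real line the optimal coupling matches sorted
# samples, so the distance is the mean absolute difference between the
# order statistics.
def wasserstein_1d(u, v):
    u, v = np.sort(u), np.sort(v)
    return np.mean(np.abs(u - v))

rng = np.random.default_rng(0)
sample_p = rng.normal(0.0, 1.0, size=1000)
sample_q = rng.normal(0.5, 1.0, size=1000)   # location-shifted law

# For a pure location shift, the 1-Wasserstein distance between the
# underlying laws equals the shift size (about 0.5 here, up to
# sampling error).
print(wasserstein_1d(sample_p, sample_q))
```

This quantile-coupling formula is what makes Wasserstein balls around an empirical distribution cheap to evaluate in one dimension; in higher dimensions the distance requires solving an optimal transport problem.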

... Introducing higher moments in the allocation problem has a strong impact on model identification. The number of parameters to estimate has been widely recognized in the literature as one of the main reasons for the poor out-of-sample performance of optimization-based models when compared to naive portfolio strategies, see DeMiguel et al. (2009), causing model uncertainty and misspecification, see Pflug et al. (2012); many empirical papers focus on this issue, proposing techniques to reduce the number of parameters, see among others Lassance and Vrins (2021), Jondeau et al. (2018), and Hitaj et al. (2015). The present paper proposes a general theoretical approach able to incorporate into the portfolio optimization problem the moments of the returns distribution up to a given order N. ...

... DeMiguel et al. (2009) find that the SMV portfolio and 13 robust extensions cannot outperform EW consistently. Pflug et al. (2012), Yan and Zhang (2017), and Yuan and Zhou (2022) explain that EW is not so naive because it is nearly optimal under high model ambiguity, in the absence of mispricing, and under a one-factor model, respectively. Tu and Zhou (2011) and Yuan and Zhou (2022) combine the EW and SMV portfolios to optimize out-of-sample utility and Sharpe ratio, respectively. ...

We study how to best combine the sample mean-variance portfolio with the naive equally weighted portfolio to optimize out-of-sample performance. We show that the seemingly natural convexity constraint that Tu and Zhou (2011) impose---the two combination coefficients must sum to one---is undesirable because it severely constrains the allocation to the risk-free asset relative to the unconstrained portfolio combination. However, we demonstrate that relaxing the convexity constraint inflates estimation errors in combination coefficients, which we alleviate using a shrinkage estimator of the unconstrained combination scheme. Empirically, the constrained combination outperforms the unconstrained one in a range of generally small degrees of risk aversion, but severely deteriorates otherwise. In contrast, the shrinkage unconstrained combination enjoys the best of both strategies and performs consistently well for all levels of risk aversion.
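The constrained combination discussed in this abstract can be sketched in a few lines. This is not the authors' shrinkage estimator; the weights below are hypothetical, and the only point illustrated is that a convex combination (coefficients summing to one) of two fully invested portfolios remains fully invested, which is what constrains the allocation to the risk-free asset.

```python
import numpy as np

# Hedged sketch (not the authors' estimator): the constrained
# combination of the sample mean-variance portfolio w_mv with the
# equally weighted portfolio w_ew, with coefficients c and 1 - c
# summing to one.
def combine(w_mv, w_ew, c):
    return c * w_mv + (1.0 - c) * w_ew

w_ew = np.full(4, 0.25)                      # equally weighted, 4 assets
w_mv = np.array([0.60, 0.30, 0.20, -0.10])   # hypothetical MV weights

w = combine(w_mv, w_ew, 0.5)
print(np.round(w, 3))
# The combined weights still sum to 1: no wealth is left for the
# risk-free asset, which is the constraint the paper proposes to relax.
```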

... Our theoretical results show that high ambiguity aversion, in a setting where currencies are treated as ambiguous assets, can explain why investor holdings are biased toward their base currencies (i.e., the puzzle of insufficient currency diversification is driven by investors' elevated ambiguity aversion). Our findings are in line with Pflug et al. [2012], who, using a different modeling framework, theoretically showed that a uniform investment strategy is optimal when an investor exhibits high model uncertainty. ...

... Despite being very naive, this strategy is also the most robust against estimation errors [58], since the allocation simply remains constant no matter the circumstances. Note that the individual investments d are not considered as fractions relative to the current wealth W here, and so a small enough unit d ≪ W has to be chosen so that it is possible to invest in all profitable opportunities. ...

It is a common misconception that in order to make consistent profits as a trader, one needs to possess some extra information leading to an asset value estimation that is more accurate than that reflected by the current market price. While the idea makes intuitive sense and is also well substantiated by the widely popular Kelly criterion, we prove that it is generally possible to make systematic profits with a completely inferior price-predicting model. The key idea is to alter the training objective of the predictive models to explicitly decorrelate them from the market. By doing so, we can exploit inconspicuous biases in the market maker’s pricing, and profit from the inherent advantage of the market taker. We introduce the problem setting throughout the diverse domains of stock trading and sports betting to provide insights into the common underlying properties of profitable predictive models, their connections to standard portfolio optimization strategies, and the commonly overlooked advantage of the market taker. Consequently, we prove the desirability of the decorrelation objective across common market distributions, translate the concept into a practical machine learning setting, and demonstrate its viability with real-world market data.

... To construct a confidence set for the ambiguous distribution, several approaches have been studied, such as moment-information-based sets [18], [19] and probability-metric-based sets, including the L_1 and L_∞ metrics [8], [20] and the Wasserstein metric [21], [22]. Among these, the Wasserstein-metric-based confidence set has been extensively studied in recent years, due to its good convergence properties and full utilization of historical data. ...

In this paper, we propose a distributionally robust chance constrained (DRCC) optimization problem for the operation of an active distribution network (ADN). The ADN’s operator uses the proposed problem to centrally optimize the dispatch plan of his resources, namely photovoltaic (PV) and battery energy storage (BES) systems, and to participate in wholesale real/reactive power and flexibility markets. We model the uncertainties in the problem by knowing a set of probability distributions, i.e., an ambiguity set. The uncertainties include the production capability of PV systems, end-users’ consumption, the flexibility requested by the external network’s operator, and the voltage magnitude at the point of common coupling (PCC). The resulting formulation is a DRCC optimization problem for which a solution methodology based on freely available solvers is presented. We evaluate the performance of the proposed solution in the numerical results section by comparing it with two benchmark models based on stochastic and chance constrained (CC) optimization.

... In the out-of-sample framework, however, optimal portfolio strategies might underperform when compared to heuristic approaches, as shown in DeMiguel et al. (2009). This phenomenon has been extensively discussed in the literature and can be attributed to model uncertainty, see Pflug et al. (2012). One further stream of research relates the difficult implementation of optimal approaches to numerical instability of the solution, see Torrente and Uberti (2021). ...

... Using the ideas in [31, Example 2] and considering measures on R^d × R^d, we can recast the problem as (1.1). While [31] focused on the asymptotic regime δ → ∞, their non-asymptotic statements are related to our Theorem 2.2, and either result could be used here to obtain that V(δ) ≈ V(0) + ((1 − γ)/2) δ for small δ. ...

We consider sensitivity of a generic stochastic optimization problem to model uncertainty. We take a non-parametric approach and capture model uncertainty using Wasserstein balls around the postulated model. We provide explicit formulae for the first-order correction to both the value function and the optimizer and further extend our results to optimization under linear constraints. We present applications to statistics, machine learning, mathematical finance and uncertainty quantification. In particular, we provide an explicit first-order approximation for square-root LASSO regression coefficients and deduce coefficient shrinkage compared to the ordinary least-squares regression. We consider robustness of call option pricing and deduce a new Black–Scholes sensitivity, a non-parametric version of the so-called Vega. We also compute sensitivities of optimized certainty equivalents in finance and propose measures to quantify robustness of neural networks to adversarial examples.

... This captures the idea that no asset can lose more than 100% of its value. Furthermore, one can show that for the 1-Wasserstein metric defined with the ℓ1-norm, there exists a threshold value θ* such that the optimal portfolio converges to the uniform portfolio when θ ≥ θ* (see Proposition 3 in Pflug et al. 2012). (v) The incorporation of BTDC has a substantial impact on the portfolio weights but little to no impact on the objective value β. ...
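The convergence toward the uniform portfolio described in this snippet can be illustrated with a toy calculation. The sketch below is not the paper's Wasserstein formulation: it adds a quadratic penalty θ‖w‖² to a mean-variance objective as a simple stand-in for ambiguity robustification. Since minimizing ‖w‖² under the budget constraint alone yields w = 1/N, the solution is pulled toward the uniform portfolio as θ grows. All numbers are hypothetical.

```python
import numpy as np

def robust_weights(mu, sigma, theta, lam=1.0):
    """Minimize w' Sigma w - lam * mu' w + theta * ||w||^2  s.t.  sum(w) = 1.

    The theta-term is a stand-in for ambiguity robustification: as theta
    grows, the quadratic penalty pulls the solution toward w = 1/N.
    """
    n = len(mu)
    A = 2.0 * (sigma + theta * np.eye(n))      # Hessian of the quadratic part
    Ainv_mu = np.linalg.solve(A, lam * mu)
    Ainv_one = np.linalg.solve(A, np.ones(n))
    # Lagrange multiplier enforcing the budget constraint 1'w = 1
    nu = (1.0 - Ainv_mu.sum()) / Ainv_one.sum()
    return Ainv_mu + nu * Ainv_one

mu = np.array([0.02, 0.05, 0.08, 0.11])        # hypothetical mean returns
sigma = np.diag([0.04, 0.09, 0.16, 0.25])      # hypothetical covariance matrix
w_plain = robust_weights(mu, sigma, theta=0.0)     # classical mean-variance
w_robust = robust_weights(mu, sigma, theta=100.0)  # heavily robustified
```

With θ = 0 the weights reflect the assets' individual risk-return profiles; at θ = 100 they sit within one percentage point of the uniform allocation 1/4.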

We study the distributionally robust linearized stable tail adjusted return ratio (DRLSTARR) portfolio optimization problem, in which the objective is to maximize the worst-case linearized stable tail adjusted return ratio (LSTARR) performance measure under data-driven Wasserstein ambiguity. We consider two types of imperfectly known uncertainties, named uncertain probabilities and continuum of realizations, associated with the losses of assets. We account for two typical combinatorial trading constraints, called buy-in threshold and diversification constraints, to reflect stock market restrictions. Leveraging conic duality theory to tackle the distributionally robust worst-case expectation, the proposed problems are reformulated into mixed-integer linear programming problems. We carry out a series of empirical tests to illustrate the scalability and effectiveness of the proposed solution framework, and to evaluate the performance of the DRLSTARR-constructed portfolios. The cross-validation results obtained using a rolling-horizon procedure show the superior out-of-sample performance of the DRLSTARR portfolios under an uncertain continuum of realizations.

This paper proposes a generalization of Markowitz model that incorporates skewness and kurtosis into the classical mean–variance allocation framework. The principal appeal of the present approach is that it provides the closed-form solution of the optimization problem. The four moments optimal portfolio is then decomposed into the sum of three portfolios: the mean–variance optimal portfolio plus two self-financing portfolios, respectively, accounting for skewness and kurtosis. Theoretical properties of the optimal solution are discussed together with the economic interpretation. Finally, an empirical exercise on real financial data shows the contribution of the two portfolios accounting for skewness and kurtosis when financial returns depart from Normal distribution.

Portfolio optimization aims to minimize risk and maximize return on investment by determining the best combination of securities and proportions. The variance in portfolio optimization models is typically used as a measure of risk. Over the last few decades, portfolio optimization utilizing a variety of risk measures has grown significantly, and many studies have been conducted. Therefore, this paper provides a systematic review of risk measures for portfolio optimization, using bibliometric analysis and maps to analyze the evolution and trends of 682 articles published between 2000 and 2022. Throughout this analysis, communication networks among articles, authors, sources, countries, and keywords are explored. Furthermore, a classification of risks and risk measures is presented to give a comprehensive overview of the field, and the top 50 papers are analyzed to determine which risk measures were most often used in recent studies.

This textbook shows how to bring theoretical concepts from finance and econometrics to the data. Focusing on coding and data analysis with R, we show how to conduct research in empirical finance from scratch. We start by introducing the concepts of tidy data and coding principles using the tidyverse family of R packages. We then provide the code to prepare common open source and proprietary financial data sources (CRSP, Compustat, Mergent FISD, TRACE) and organize them in a database. We reuse these data in all the subsequent chapters, which we keep as self-contained as possible. The empirical applications range from key concepts of empirical asset pricing (beta estimation, portfolio sorts, performance analysis, Fama-French factors) to modeling and machine learning applications (fixed effects estimation, clustering standard errors, difference-in-difference estimators, ridge regression, Lasso, Elastic net, random forests, neural networks) and portfolio optimization techniques.

Purpose
This paper aims to provide a novel explorative perspective on fund managers’ decisions under uncertainty. The current COVID pandemic is used as a unique reference frame to study how heuristics are used in institutional financial practice.
Design/methodology/approach
This study follows a grounded theory approach. A total of 282 diverse publications between October 2019 and October 2020 for 20 German mutual funds are qualitatively analyzed. A theory of adaptive heuristics for fund managers is developed.
Findings
Fund managers adapt their heuristics during a crisis, and this adaptive process flows through three stages. Increasing complexity in the environment leads to the adaptation of the simplest heuristics around investment decisions. Three distinct stages of adaptation emerge from the data: precrisis, uncertainty, and stabilization.
Research limitations/implications
This study’s data is based on publicly available information. There might be a discrepancy between publicly stated and internal reasoning.
Practical implications
Money managers can use the provided framework to assess their decision-making in crises. The developed adaptive processes of heuristics can assist capital allocators who choose and rate fund managers. Policymakers and regulators can learn about the aspects of investor decisions that their actions and communication address. Teaching can use this study to exemplify the nature of financial markets as adaptive systems rather than static structures.
Originality/value
To the best of the authors' knowledge, this study is the first to systematically explore the heuristics of professional money managers as they navigate a large-scale exogenous crisis.

We identify a source of numerical instability of quadratic programming problems that is hidden in their linear equality constraints. We propose a new theoretical approach to rewrite the original optimization problem in an equivalent reformulation, using the singular value decomposition and substituting the ill-conditioned original matrix of restrictions with a suitable, optimally conditioned one. The proposed approach is shown, both empirically and theoretically, to solve ill-conditioning-related numerical issues, not only when they depend on bad scaling and are relatively easy to handle, but also when they result from near collinearity or when numerically rank-deficient matrices are involved. Furthermore, our strategy looks very promising even when additional inequality constraints are considered in the optimization problem, as occurs in several practical applications. In this framework, even if no closed-form solution is available, we show, through empirical evidence, how the equivalent reformulation of the original problem greatly improves the performance of MatLab®'s quadratic programming solver and Gurobi®. The experimental validation is provided through numerical examples performed on real financial data in the portfolio optimization context.
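The core substitution can be sketched in a few lines of NumPy: if A = UΣVᵀ, the constraints Ax = b are equivalent to Vᵣᵀx = Σᵣ⁻¹Uᵣᵀb, and the new constraint matrix Vᵣᵀ has orthonormal rows, hence condition number 1. This is a minimal illustration of the SVD idea on a made-up system, not the authors' full procedure.

```python
import numpy as np

# Two nearly collinear equality constraints make the system ill-conditioned.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 1.0 + 1e-6]])
x_true = np.array([0.2, 0.3, 0.5])   # a feasible point (e.g. portfolio weights)
b = A @ x_true

U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = int(np.sum(s > 1e-12))           # numerical rank (here r = 2)

# Ax = b  <=>  U diag(s) Vt x = b  <=>  Vt[:r] x = U[:, :r].T b / s[:r].
# Vt[:r] has orthonormal rows, so its condition number is exactly 1,
# while cond(A) = s[0] / s[1] blows up with the near collinearity.
B = Vt[:r]
c = (U[:, :r].T @ b) / s[:r]
```

The original feasible point satisfies the reformulated constraints up to floating-point error, while the condition number drops from above 10⁶ to 1.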

This study investigates the performance of the Markowitz and naïve diversification strategies in the Nigerian stock market. Thus, it examines which portfolio construction strategy generates superior performance regarding risk reduction and return maximization. These strategies were used to select 28 securities from the 159 equity stocks listed on the Nigerian stock exchange over six years of monthly data, from January 2011 to December 2016, equivalent to 72 periods. Using Welch's t-test to compare the mean performance of the Markowitz and naïve diversification strategies, the null hypothesis could not be rejected. Thus, the study found that there is no significant difference between the mean returns of the Markowitz and naïve diversification strategies using stocks quoted on the Nigerian stock market. The implication of this result is that both techniques are capable of minimizing risk and thereby maximizing expected return. The study therefore recommended the adoption of either strategy, since both are applicable to the Nigerian stock market.
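A comparison of this kind can be reproduced with `scipy.stats.ttest_ind` using `equal_var=False`, which is what makes it Welch's test rather than Student's. The return series below are simulated stand-ins with 72 monthly observations, not the Nigerian market data used in the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical monthly returns of two strategies over 72 periods.
markowitz = rng.normal(loc=0.010, scale=0.04, size=72)
naive = rng.normal(loc=0.009, scale=0.05, size=72)

# Welch's t-test: equal_var=False drops the equal-variance assumption,
# appropriate when the two strategies' return dispersions may differ.
t_stat, p_value = stats.ttest_ind(markowitz, naive, equal_var=False)
```

A large p-value, as in the study, means the null hypothesis of equal mean returns cannot be rejected.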

We introduce a risk‐reduction‐based procedure to identify a subset of funds with a resulting opportunity set that is at least as good as the original menu when short‐sales are imposed. Relying on Wald tests for mean‐variance spanning, we show that the better results for the subset can be explained by a higher concentration of covariance entries between its assets, ultimately leading to smaller Frobenius norms of the associated matrices. With data on U.S. defined contribution plans, where participants have limited financial literacy, tend to be overwhelmed and prefer to make decisions among fewer choices, we obtain a 75% average reduction. This article is protected by copyright. All rights reserved.

The optimal expansion of a power system with reduced carbon footprint entails dealing with uncertainty about the distribution of the random variables involved in the decision process. Optimisation under ambiguity sets provides a mechanism to suitably deal with such a setting. For two-stage stochastic linear programs, we propose a new model that is between the optimistic and pessimistic paradigms in distributionally robust stochastic optimisation. When using Wasserstein balls as ambiguity sets, the resulting optimisation problem has nonsmooth convex constraints depending on the number of scenarios and a bilinear objective function. We propose a decomposition method along scenarios that converges to a solution, provided a global optimisation solver for bilinear programs with polyhedral feasible sets is available. The solution procedure is applied to a case study on expansion of energy generation that takes into account sustainability goals for 2050 in Europe, under uncertain future market conditions.

Stochastic multistage decision problems appear in many, if not all, application areas of Operations Research. While such problems are easy to define, they are quite difficult to solve, since they are infinite-dimensional. A numerical solution can only be found by solving an approximate, easier problem. In this paper, we show that good approximations can be found, emphasizing the recursive structure of the involved algorithms and data structures.
In the second part, the problem of coping with the model error of approximations is discussed. We present algorithms for finding distributionally robust solutions to the model error problem. We also review some application cases of such situations from the literature.

We consider a joint distribution that decomposes asset returns into two independent components: an elliptical innovation (Gaussian) and a systematic non-elliptical latent process. The paper provides a tractable approach to estimate the underlying parameters and, hence, the assets’ exposures to the latent non-elliptical factor. Additionally, the framework incorporates higher-order moments, such as skewness and kurtosis, for portfolio selection. Taking into account estimation risk, we investigate the economic contribution of the non-elliptical term. Overall, we find weak empirical evidence to support the inclusion of the non-elliptical term and, hence, the higher-order comoments. Nonetheless, our findings support the mean–variance (MV) decision rule that incorporates the elliptical term alone. Excluding the non-elliptical term results in more robust mean–variance estimates and, thus, enhanced out-of-sample performance. This evidence is significant among stocks that exhibit a strong deviation from the Gaussian property. Moreover, it is most pronounced during market turmoils, when exposures to the latent factor are highest.

This paper studies a structured compound stochastic program (SP) involving multiple expectations coupled by nonconvex and nonsmooth functions. We present a successive convex programming-based sampling algorithm and establish its subsequential convergence. We describe stationary properties of the limit points for several classes of the compound SP. We further discuss probabilistic stopping rules based on the computable error bound for the algorithm. We present several risk measure minimization problems that can be formulated as such a compound stochastic program; these include generalized deviation optimization problems based on the optimized certainty equivalent and buffered probability of exceedance (bPOE), a distributionally robust bPOE optimization problem, and a multiclass classification problem employing the cost-sensitive error criteria with bPOE.

As the penetration of intermittent renewable energy increases in bulk power systems, flexible generation resources, such as quick-start gas units, become important tools for system operators to address the power imbalance problem. To better capture their flexibility, we proposed a two-stage distributionally robust unit commitment framework with both regular and flexible generation resources, in which the unit commitment decisions for flexible generation resources can be adjusted in the second stage to accommodate the renewable energy intermittency. In order to tackle this challenging two-stage distributionally robust mixed-binary model, to which traditional separation algorithms won’t apply, we designed a revised integer L-shaped algorithm with lift-and-project cutting plane techniques. In comparison to the traditional distributionally robust unit commitment, the proposed approach can reduce the system cost through an improved flexible resource quantification in the modeling.

We present a general framework for robust satisficing that favors solutions for which a risk-aware objective function would best attain an acceptable target even when the actual probability distribution deviates from the empirical distribution. The satisficing decision maker specifies an acceptable target, or loss of optimality compared with the empirical optimization model, as a trade-off for the model’s ability to withstand greater uncertainty. We axiomatize the decision criterion associated with robust satisficing, termed as the fragility measure, and present its representation theorem. Focusing on Wasserstein distance measure, we present tractable robust satisficing models for risk-based linear optimization, combinatorial optimization, and linear optimization problems with recourse. Serendipitously, the insights to the approximation of the linear optimization problems with recourse also provide a recipe for approximating solutions for hard stochastic optimization problems without relatively complete recourse. We perform numerical studies on a portfolio optimization problem and a network lot-sizing problem. We show that the solutions to the robust satisficing models are more effective in improving the out-of-sample performance evaluated on a variety of metrics, hence alleviating the optimizer’s curse.
Funding: D. Z. Long is supported by the Hong Kong Research Grants Council [Grant 14207819]. M. Sim and M. Zhou are supported by the Ministry of Education, Singapore, under its 2019 Academic Research Fund Tier 3 [Grant MOE-2019-T3-1-010].
Supplemental Material: The online appendices are available at https://doi.org/10.1287/opre.2021.2238.

In this chapter, the Markowitz mean-variance approach is proposed for examining the best portfolio diversification strategy within three subperiods: during the global financial crisis (GFC), after the global financial crisis, and during the non-crisis period. In our approach, we used 10 securities from five different industries to represent a risk-mitigation parameter. The naive diversification strategy serves as a comparison for the proposed approach. During the computation process, the correlation matrices revealed that portfolio risk is not well diversified during non-crisis periods, while the variance-covariance matrices indicated that volatility can be minimized during portfolio construction. On this basis, 10 efficient portfolios were constructed and the optimal portfolio was selected in each subperiod based on risk-averse preferences. Performance-wise, the optimal portfolios dominated the naïve strategy throughout the three subperiods tested. All the selected optimal portfolios yield higher returns than the naïve portfolio.

Diversification is one of the main pillars of investment strategies. The prominent equal-weight or one-over-N portfolio, which puts equal weight on each asset, is, apart from its simplicity, a strategy that is hard to outperform in realistic settings. But depending on the number of considered assets, it can lead to very large portfolios. An approach to reduce the number of chosen assets based on clustering is proposed, and its advantages and disadvantages are investigated. Using clustering techniques, the possible assets are separated into non-overlapping clusters and the assets within each cluster are ordered by their Sharpe ratio. Then the best asset of each cluster is chosen to be a member of the new equal-weight portfolio, the cluster portfolio. It is shown that this portfolio inherits the advantages of the equal-weight portfolio and that it can even outperform it empirically. To this end, different performance measures are used to compare the portfolios on simulated and real data. To explain the observations on real data, explanatory results are derived in an extreme model setting and analyzed in several simulation studies.
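A minimal sketch of the cluster-portfolio construction: assets are grouped, the asset with the highest Sharpe ratio is picked from each group, and the picks are equal-weighted. A simple greedy correlation grouping stands in for whatever clustering technique the paper employs, and the factor-driven returns are simulated for illustration.

```python
import numpy as np

def cluster_portfolio(returns, corr_threshold=0.5):
    """Greedy correlation clustering, then equal-weight the best-Sharpe
    asset of each cluster (a simplified sketch of the cluster portfolio)."""
    n = returns.shape[1]
    corr = np.corrcoef(returns, rowvar=False)
    sharpe = returns.mean(axis=0) / returns.std(axis=0)
    unassigned, clusters = list(range(n)), []
    while unassigned:
        seed = unassigned.pop(0)
        members = [seed] + [j for j in unassigned if corr[seed, j] > corr_threshold]
        unassigned = [j for j in unassigned if j not in members]
        clusters.append(members)
    picks = [max(c, key=lambda j: sharpe[j]) for c in clusters]
    weights = np.zeros(n)
    weights[picks] = 1.0 / len(picks)
    return clusters, picks, weights

rng = np.random.default_rng(0)
T = 500
f1, f2 = rng.standard_normal(T), rng.standard_normal(T)
# Six assets driven by two common factors -> two natural clusters.
block1 = np.stack([f1 + 0.1 * rng.standard_normal(T) + 0.02 for _ in range(3)], axis=1)
block2 = np.stack([f2 + 0.1 * rng.standard_normal(T) + 0.02 for _ in range(3)], axis=1)
returns = np.hstack([block1, block2])

clusters, picks, weights = cluster_portfolio(returns)
```

On this synthetic data the six assets collapse into two clusters, so the resulting cluster portfolio holds only two assets at 50% each instead of six at 1/6.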

This study aims to identify, by means of a systematic literature review, the main methods, tools, and techniques used for portfolio optimization under real-world constraints, and to analyze how the applications of this set have changed over time. The main distinction of this research is its coverage of a variety of aspects associated with the portfolio optimization problem—the types of analyses, constraints, algorithms, and languages/software used. The main results of the review show dynamic optimization as an emerging theme, as well as the hybridization of algorithms and the handling of realistic constraints. This chapter presents the main realistic restrictions addressed in the works found, the artificial bee colony and genetic programming algorithms, and the instances or markets where these algorithms were tested. A point of attention should be given to the input data of optimization models: depending on the degree of estimation error in these input parameters, the optimization results may fall below those of the 1/N trading strategy. Robust optimization, fuzzy logic, and prediction are examples of techniques used to reduce estimation errors. Regarding software/programming languages, Matlab appears as the most widely used, while Python facilitates the task of programming and building models for people who are not programming specialists. In addition, Python is also common due to its flexibility, object orientation, being free of charge, and calculation speed.

Maximum-loss (Max-loss) was recently introduced as a valuation functional in the context of systematic stress testing. The basic idea is to value a (financial) random variable by its worst-case expectation, where the most unfavourable probability measure—the ‘worst case distribution’—lies within a given Kullback–Leibler radius around a previously estimated distribution. The article gives an overview of the properties of this measure and analyses its relations to other risk and acceptability measures and to the well-known Esscher pricing principle used in insurance mathematics and option pricing. The main part of the article then focuses on optimal decision-making, in particular related to portfolio optimization, with Max-loss as the objective function to be minimized. A simple algorithm for dealing with the resulting saddle point problem is introduced and analysed.

Recent research has revealed a pattern of choice characterized as diversification bias: if people make combined choices of quantities of goods for future consumption, they choose more variety than if they make separate choices immediately preceding consumption. This phenomenon is explored in a series of experiments in which the researchers first eliminated several hypotheses holding that the discrepancy between combined and separate choice can be explained by traditional accounts of utility maximization. On the basis of the results of further experiments, it was concluded that the diversification bias is largely attributable to two mechanisms: time contraction, the tendency to compress time intervals and treat long intervals as if they were short, and choice bracketing, the tendency to treat choices that are framed together differently from those that are framed apart. The researchers describe how the findings can be applied in the domains of marketing and consumer education. (PsycINFO Database Record (c) 2012 APA, all rights reserved)

In this paper, single-stage stochastic programs with ambiguous distributions for the involved random variables are considered. Though the true distribution is unknown, the existence of a reference measure P̂ enables the construction of non-parametric ambiguity sets as Kantorovich balls around P̂. The resulting robustified problems are infinite-dimensional optimization problems and can therefore not be solved computationally. To solve these problems numerically, equivalent formulations as finite-dimensional non-convex, semidefinite saddle point problems are proposed. Finally, an application from portfolio selection is studied, for which methods to solve the robust counterpart problems explicitly are proposed and numerical results for sample problems are computed.

An important issue for solving multistage stochastic programs consists in the approximate representation of the (multivariate) stochastic input process in the form of a scenario tree. In this paper, forward and backward approaches are developed for generating scenario trees out of an initial fan of individual scenarios. Both approaches are motivated by the recent stability result in [15] for optimal values of multistage stochastic programs. They are based on upper bounds for the two relevant ingredients of the stability estimate, namely, the probabilistic and the filtration distance, respectively. These bounds allow one to control the process of recursive scenario reduction [13] and branching. Numerical experience is reported for constructing multivariate scenario trees in electricity portfolio management. Key Words: Stochastic programming, multistage, stability, Lr-distance, filtration, scenario tree, scenario reduction. 2000 MSC: 90C15

We consider a tree-based discretization technique utilizing conditional transportation distance, which is well suited for the approximation of multi-stage stochastic programming problems, and investigate corresponding convergence properties. We explain the relation between the approximation quality of the probability model and the quality of the solution.
1. Introduction. Dynamic stochastic optimization models are an up-to-date tool of modern management science and can be applied to a wide range of problems, such as financial portfolio optimization, energy contracts, insurance policies, supply chains, etc. While one-period models look for optimal decision values (or decision vectors) based on all information available now, multi-period models consider planned future decisions as functions of the information that will become available later. Hence, the natural decision spaces for multi-period dynamic stochastic models, except for the first-stage decisions, are spaces of functions. Consequently, only in some exceptional cases may solutions be found by analytical methods, which include investigating the necessary optimality conditions and solving a variational problem; i.e., only some functional equations have explicit solutions in the observed function spaces. In the vast majority of cases, it is impossible to find a solution in this way, and we reach for a numerical solution. However, numerical calculations on digital computers can never represent the underlying infinite-dimensional function spaces. The way out of this dilemma is to approximate the original problem by a simpler, surrogate finite-dimensional problem, which enables the calculation of a numerical solution. As the optimal decision at each stage is a function of the random components observed so far, the only way to reduce complexity is to reduce the range of the random components. If the random component of the decision model is discrete (i.e., takes a finite and in fact only small number of values), the variational problem reduces to a vector optimization problem, which may be solved by well-known vector optimization algorithms. The natural question that arises is how to reconstruct a solution of the basic problem out of the solution of the finite surrogate problem. Since every finite-valued stochastic process ξ̃1, …, ξ̃T is representable as a tree, we deal with tree approximations of stochastic processes. There are two contradicting goals to be considered: for the sake of approximation quality, the tree should be large and bushy, while for the sake of computational effort, it should be small. Thus the tree size is chosen as a compromise. The basic question in reaching this compromise is assessing the quality of the approximation in terms of the tree size. It is the purpose of this paper to shed some light on the relation between the approximation quality of the probability model for the random components and the quality of the solution.

This paper deals with a problem of guaranteed (robust) financial decision-making under model uncertainty. An efficient method is proposed for determining optimal robust portfolios of risky financial instruments in the presence of ambiguity (uncertainty) on the probabilistic model of the returns. Specifically, it is assumed that a nominal discrete return distribution is given, while the true distribution is only known to lie within a distance $d$ from the nominal one, where the distance is measured according to the Kullback-Leibler divergence. The goal in this setting is to compute portfolios that are worst-case optimal in the mean-risk sense, that is, to determine portfolios that minimize the maximum with respect to all the allowable distributions of a weighted risk-mean objective. The analysis in the paper considers both the standard variance measure of risk and the absolute deviation measure.
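The worst-case expectation over a Kullback-Leibler ball admits a one-dimensional convex dual, sup over Q with KL(Q‖P) ≤ d of E_Q[loss] equals min over λ > 0 of λ log E_P[exp(loss/λ)] + λd, which the sketch below evaluates numerically for a discrete nominal distribution. This illustrates only the ambiguity model; the paper's mean-risk objective adds variance or absolute-deviation terms, and the scenario data here are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp

def kl_worst_case_mean(losses, probs, d):
    """Worst-case expected loss over all Q with KL(Q || P) <= d, via the
    convex dual:  min_{lam > 0}  lam * log E_P[exp(loss / lam)] + lam * d."""
    def dual(lam):
        # logsumexp with weights b computes log sum_i probs[i] * exp(losses[i]/lam)
        return lam * logsumexp(losses / lam, b=probs) + lam * d
    res = minimize_scalar(dual, bounds=(1e-2, 50.0), method="bounded")
    return res.fun

losses = np.array([-0.05, 0.00, 0.02, 0.10, 0.25])  # hypothetical scenario losses
probs = np.full(5, 0.2)                             # nominal discrete distribution
nominal = float(losses @ probs)
worst = kl_worst_case_mean(losses, probs, d=0.1)
```

The worst-case value sits strictly between the nominal expectation and the maximum scenario loss; the optimal dual multiplier corresponds to an exponential (Esscher-type) tilting of the nominal distribution toward high losses.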

In this paper, we consider the problem of finding optimal portfolios in cases when the underlying probability model is not perfectly known. For the sake of robustness, a maximin approach is applied which uses a 'confidence set' for the probability distribution. The approach shows the tradeoff between return, risk and robustness in view of the model ambiguity. As a consequence, a monetary value of information in the model can be determined.

There is a worldwide trend toward defined contribution saving plans and growing interest in privatized Social Security plans. In both environments, individuals are given some responsibility to make their own asset-allocation decisions, raising concerns about how well they do at this task. This paper investigates one aspect of the task, namely diversification. We show that some investors follow the "1/n strategy": they divide their contributions evenly across the funds offered in the plan. Consistent with this naive notion of diversification, we find that the proportion invested in stocks depends strongly on the proportion of stock funds in the plan.

This note corrects an error in the proof of Proposition 2 of “Risk Reduction in Large Portfolios: Why Imposing the Wrong Constraint Helps” that appeared in the Journal of Finance, August 2003.

This book is the first on the market to treat single- and multi-period risk measures (risk functionals) in a thorough, comprehensive manner. It combines the treatment of the properties of risk measures with the related aspects of decision making under risk. The book introduces the theory of risk measures in a mathematically sound way. It contains properties, characterizations, and representations of risk functionals for single-period and multi-period activities, and also shows the embedding of such functionals in decision models and the properties of these models. © 2007 by World Scientific Publishing Co. Pte. Ltd. All rights reserved.

Two consumer strategies for the purchase of multiple items from a product class are contrasted. In one strategy (simultaneous choices/sequential consumption), the consumer buys several items on one shopping trip and consumes the items over several consumption occasions. In the other strategy (sequential choices/sequential consumption), the consumer buys one item at a time, just before each consumption occasion. The first strategy is posited to yield more variety seeking than the second. The greater variety seeking is attributed to forces operating in the simultaneous choices/sequential consumption strategy, including uncertainty about future preferences and a desire to simplify the decision. Evidence from three studies, two involving real products and choices, is consistent with these conjectures. The implications and limitations of the results are discussed.

Contents: Introduction; The Kantorovich duality; Geometry of optimal transportation; Brenier's polar factorization theorem; The Monge-Ampère equation; Displacement interpolation and displacement convexity; Geometric and Gaussian inequalities; The metric side of optimal transportation; A differential point of view on optimal transportation; Entropy production and transportation inequalities; Problems; Bibliography; Table of short statements; Index.

When studying convergence of measures, an important issue is the choice of probability metric. We provide a summary and some new results concerning bounds among several important probability metrics/distances used by statisticians and probabilists. Knowledge of these bounds provides a means of deriving, in an applied problem, bounds for one metric from another. Considering other metrics can also provide alternate insights. We also give examples showing that rates of convergence can depend strongly on the metric chosen. Careful consideration is necessary when choosing a metric.
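As a concrete illustration of the metrics discussed above, the sketch below (an illustrative addition, not taken from the paper) estimates the Wasserstein-1 and total-variation distances between two empirical samples. For equal-size samples, the empirical Wasserstein-1 distance reduces to the mean absolute difference of the order statistics; the histogram-based total-variation estimate and its bin grid are ad-hoc choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=10_000)   # sample from N(0, 1)
y = rng.normal(0.5, 1.0, size=10_000)   # sample from N(0.5, 1)

# Wasserstein-1 distance between two equal-size empirical samples:
# the mean absolute difference of the sorted samples.
w1 = np.mean(np.abs(np.sort(x) - np.sort(y)))

# Crude total-variation estimate via a shared histogram.
bins = np.linspace(-6.0, 6.0, 121)
p, _ = np.histogram(x, bins=bins)
q, _ = np.histogram(y, bins=bins)
tv = 0.5 * np.abs(p / p.sum() - q / q.sum()).sum()

print(f"W1 estimate: {w1:.3f}, TV estimate: {tv:.3f}")
```

For a pure location shift of 0.5, the true Wasserstein-1 distance is exactly 0.5, so the first estimate can be sanity-checked directly; the total-variation estimate carries both discretization and sampling bias, illustrating the paper's point that different metrics behave differently.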

Why should risk management systems account for parameter uncertainty? In addressing this question, the paper lets an investor in a credit portfolio face non-diversifiable uncertainty about two risk parameters - probability of default and asset-return correlation - and calibrates this uncertainty to a lower bound on estimation noise. In this context, a Bayesian inference procedure is essential for deriving and analyzing the main result, i.e. that parameter uncertainty raises substantially the tail risk perceived by the investor. Since a measure of tail risk that incorporates parameter uncertainty is computationally demanding, the paper also derives a closed-form approximation to such a measure.

This article uses Bayesian model averaging to study model uncertainty in hedge fund pricing. We show how to incorporate heteroscedasticity, developing a framework that jointly accounts for model uncertainty and heteroscedasticity. Relevant risk factors are identified and compared with those selected through standard model selection techniques. The analysis reveals that a model selection strategy that accounts for model uncertainty in hedge fund pricing regressions can be superior in estimation and inference. We explore the potential impact of our approach by analysing individual funds and show that it can be economically important.

This paper deals with a portfolio selection model in which robust optimization methodologies are used to minimize the conditional value at risk of a portfolio of shares. Conditional value at risk, being in essence the mean shortfall at a specified confidence level, is a coherent risk measure that can take account of the so-called "tail risk" and is therefore an efficient and synthetic risk measure, which can overcome the drawbacks of the more famous and widely used VaR. An important feature of our approach is the use of robust optimization techniques to deal with uncertainty, in place of the stochastic programming proposed by Rockafellar and Uryasev. Moreover, we obtain a linear robust counterpart of the bi-criteria minimization model proposed by Rockafellar and Uryasev. We suggest different approaches for generating input data, with special attention to the estimation of expected returns. The relevance of our methodology is illustrated by a portfolio selection experiment on the Italian market.
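The "mean shortfall at a specified confidence level" definition of CVaR can be computed directly from a loss sample. The sketch below is a minimal empirical estimator under equally weighted scenarios (an illustrative addition, not the Rockafellar–Uryasev optimization formulation used in the paper).

```python
import numpy as np

def var_cvar(losses, alpha=0.95):
    """Empirical VaR and CVaR (expected shortfall) of a loss sample at level alpha."""
    losses = np.sort(np.asarray(losses, dtype=float))
    var = np.quantile(losses, alpha)          # value-at-risk: the alpha-quantile
    cvar = losses[losses >= var].mean()       # mean loss beyond the VaR threshold
    return var, cvar

# Toy loss sample: losses of 1, 2, ..., 100 with equal probability.
losses = np.arange(1.0, 101.0)
var, cvar = var_cvar(losses, alpha=0.95)
print(f"VaR_95 = {var:.2f}, CVaR_95 = {cvar:.2f}")
```

On this toy sample the 95% VaR falls just above 95 (by linear quantile interpolation), and the CVaR is the mean of the five largest losses, 98.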

The modern portfolio theory pioneered by Markowitz (1952) is widely used in practice and extensively taught to MBAs. However, the estimated Markowitz portfolio rule and most of its extensions not only underperform the naive 1/N rule (that invests equally across N assets) in simulations, but also lose money on a risk-adjusted basis in many real data sets. In this paper, we propose an optimal combination of the naive 1/N rule with one of the four sophisticated strategies—the Markowitz rule, the Jorion (1986) rule, the MacKinlay and Pástor (2000) rule, and the Kan and Zhou (2007) rule—as a way to improve performance. We find that the combined rules not only have a significant impact in improving the sophisticated strategies, but also outperform the 1/N rule in most scenarios. Since the combinations are theory-based, our study may be interpreted as reaffirming the usefulness of the Markowitz theory in practice.

In the proposed research, our objective is to provide a general framework for identifying portfolios that perform well out-of-sample even in the presence of estimation error. This general framework relies on solving the traditional minimum-variance problem (based on the sample covariance matrix) but subject to the additional constraint that the p-norm of the portfolio-weight vector be smaller than a given threshold. In particular, we consider the 1-norm constraint, which is that the sum of the absolute values of the weights be smaller than a given threshold, and the 2-norm constraint that the sum of the squares of the portfolio weights be smaller than a given threshold. Our contribution will be to show that our unifying theoretical framework nests as special cases the shrinkage approaches of Jagannathan and Ma (2003) and Ledoit and Wolf (2004), and the 1/N portfolio studied in DeMiguel, Garlappi, and Uppal (2007). We also use our general framework to propose several new portfolio strategies. For these new portfolios, we provide a moment-shrinkage interpretation and a Bayesian interpretation where the investor has a prior belief on portfolio weights rather than on moments of asset returns. Finally, we compare empirically (in terms of portfolio variance, Sharpe ratio, and turnover), the out-of-sample performance of the new portfolios we propose to nine strategies in the existing literature across five datasets. Our preliminary results indicate that the norm-constrained portfolios we propose have a lower variance and a higher Sharpe ratio than the portfolio strategies in Jagannathan and Ma (2003) and Ledoit and Wolf (2004), the 1/N portfolio, and also other strategies in the literature such as factor portfolios and the parametric portfolios in Brandt, Santa-Clara, and Valkanov (2005).
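For context, the unconstrained building block of the framework above is the classical global minimum-variance portfolio, which has the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1); with a 1-norm threshold of exactly 1 together with full investment, the 1-norm constraint is equivalent to ruling out short sales (the Jagannathan–Ma case the abstract mentions). The sketch below implements only the standard closed form on a toy covariance matrix; the general norm-constrained problems would require a numerical solver.

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance weights: solve cov @ x = 1, then normalize."""
    ones = np.ones(cov.shape[0])
    x = np.linalg.solve(cov, ones)   # Σ^{-1} 1
    return x / x.sum()               # normalize to full investment (weights sum to 1)

cov = np.diag([1.0, 2.0, 4.0])       # toy diagonal covariance matrix
w = min_variance_weights(cov)
print(w)                             # weights inversely proportional to variance
```

With a diagonal covariance the weights are inversely proportional to the asset variances, here 4/7, 2/7, and 1/7; estimation error in the sample covariance feeds directly into these weights, which is the problem the norm constraints are designed to mitigate.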

We consider mean-variance portfolio choice of a robust investor. The investor receives advice from J experts, each with a different prior for expected returns and risk, and follows a min-max portfolio strategy. The robust investor endogenously combines the experts' estimates. When experts agree on the main return generating factors, the investor relies on the advice of the expert with the strongest prior. Dispersed advice leads to averaging of the alternative estimates. The robust investor is likely to outperform alternative strategies. The theoretical analysis is supported by numerical simulations for the 25 Fama-French portfolios and for 81 European country and value portfolios. Copyright 2010, Oxford University Press.

We propose a procedure to take model risk into account in the computation of capital reserves. This addresses the need to make the allocation of capital reserves to positions in given markets dependent on the extent to which reliable models are available. The proposed procedure can be used in combination with any of the standard risk measures, such as Value-at-Risk and expected shortfall. We assume that models are obtained by usual econometric methods, which allows us to distinguish between estimation risk and misspecification risk. We discuss an additional source of risk which we refer to as identification risk. By way of illustration, we carry out calculations for equity and FX data sets. In both markets, estimation risk and misspecification risk together explain about half of the multiplication factors employed by the Bank for International Settlements (BIS).

We evaluate the out-of-sample performance of the sample-based mean-variance model, and its extensions designed to reduce estimation error, relative to the naive 1/N portfolio. Of the 14 models we evaluate across seven empirical datasets, none is consistently better than the 1/N rule in terms of Sharpe ratio, certainty-equivalent return, or turnover, which indicates that, out of sample, the gain from optimal diversification is more than offset by estimation error. Based on parameters calibrated to the US equity market, our analytical results and simulations show that the estimation window needed for the sample-based mean-variance strategy and its extensions to outperform the 1/N benchmark is around 3000 months for a portfolio with 25 assets and about 6000 months for a portfolio with 50 assets. This suggests that there are still many “miles to go” before the gains promised by optimal portfolio choice can actually be realized out of sample.
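A rolling out-of-sample comparison of the kind described above can be sketched in a few lines. Everything in this snippet is an illustrative assumption rather than the paper's calibration: the i.i.d. return model, the 120-month estimation window, and the plain in-sample Sharpe ratio.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 240, 25                                   # 20 years of monthly returns, 25 assets
returns = rng.normal(0.008, 0.05, size=(T, N))   # toy i.i.d. returns

window = 120
oos_naive, oos_mv = [], []
for t in range(window, T):
    hist = returns[t - window:t]                 # rolling estimation window
    # 1/N rule: no estimation at all.
    w_naive = np.full(N, 1.0 / N)
    # Plug-in minimum-variance weights from the sample covariance.
    cov = np.cov(hist, rowvar=False)
    x = np.linalg.solve(cov, np.ones(N))
    w_mv = x / x.sum()
    oos_naive.append(returns[t] @ w_naive)       # next-period realized returns
    oos_mv.append(returns[t] @ w_mv)

def sharpe(r):
    r = np.asarray(r)
    return r.mean() / r.std()

print(f"1/N Sharpe: {sharpe(oos_naive):.3f}, plug-in MV Sharpe: {sharpe(oos_mv):.3f}")
```

Which rule wins on any given simulated path depends on the seed and the calibration; the paper's point is that with realistic parameters the plug-in optimizer needs implausibly long estimation windows before it beats 1/N reliably.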

The authors propose that what consumers buy can be systematically influenced by how much they buy. They hypothesize that, as the number of items purchased in a category on a shopping occasion increases, a consumer is more likely to select product variants (e.g., yogurt flavors) that he or she does not usually purchase. Yogurt scanner data support this hypothesis. The study also revealed that consumers were more likely to select their regular brands when purchasing more containers of yogurt on a given occasion. A laboratory experiment showed that this reflects the combined impact of purchase quantity and product-display format (i.e., the by-brand display of yogurt in supermarkets) on consumer choice. Copyright 1992 by the University of Chicago.

The psychological principles that govern the perception of decision problems and the evaluation of probabilities and outcomes produce predictable shifts of preference when the same problem is framed in different ways. Reversals of preference are demonstrated in choices regarding monetary outcomes, both hypothetical and real, and in questions pertaining to the loss of human lives. The effects of frames on preferences are compared to the effects of perspectives on perceptual appearance. The dependence of preferences on the formulation of decision problems is a significant concern for the theory of rational choice.

We develop a model for an investor with multiple priors and aversion to ambiguity. We characterize the multiple priors by a “confidence interval” around the estimated expected returns and we model ambiguity aversion via a minimization over the priors. Our model has several attractive features: (1) it has a solid axiomatic foundation; (2) it is flexible enough to allow for different degrees of uncertainty about expected returns for various subsets of assets and also about the return-generating model; and (3) it delivers closed-form expressions for the optimal portfolio. Our empirical analysis suggests that, compared with portfolios from classical and Bayesian models, ambiguity-averse portfolios are more stable over time and deliver a higher out-of-sample Sharpe ratio. (JEL G11)

I present a new approach to the dynamic portfolio and consumption problem of an investor who worries about model uncertainty (in addition to market risk) and seeks robust decisions along the lines of Anderson, Hansen, and Sargent (2002). In accordance with max-min expected utility, a robust investor insures against some endogenous worst case. I first show that robustness dramatically decreases the demand for equities and is observationally equivalent to recursive preferences when removing wealth effects. Unlike standard recursive preferences, however, robustness leads to environment-specific “effective” risk aversion. As an extension, I present a closed-form solution for the portfolio problem of a robust Duffie-Epstein-Zin investor. Finally, robustness increases the equilibrium equity premium and lowers the risk-free rate. Reasonable parameters generate a 4% to 6% equity premium.

We evaluate the performance of models for the covariance structure of stock returns, focusing on their use for optimal portfolio selection. We compare the models' forecasts of future covariances and the optimized portfolios' out-of-sample performance. A few factors capture the general covariance structure. Portfolio optimization helps for risk control, and a three-factor model is adequate for selecting the minimum-variance portfolio. Under a tracking error volatility criterion, which is widely used in practice, larger differences emerge across the models. In general more factors are necessary when the objective is to minimize tracking error volatility.

Records of over half a million participants in more than 600 401(k) plans indicate that participants tend to allocate their contributions evenly across the funds they use, with the tendency weakening with the number of funds used. The number of funds used, typically between three and four, is not sensitive to the number of funds offered by the plans, which ranges from 4 to 59. A participant's propensity to allocate contributions to equity funds is not very sensitive to the fraction of equity funds among offered funds. The paper also comments on limitations on inferences from experiments and aggregate-level data analysis. Copyright 2006 by The American Finance Association.

Green and Hollifield (1992) argue that the presence of a dominant factor would result in extreme negative weights in mean-variance efficient portfolios even in the absence of estimation errors. In that case, imposing no-short-sale constraints should hurt, whereas empirical evidence is often to the contrary. We reconcile this apparent contradiction. We explain why constraining portfolio weights to be nonnegative can reduce the risk in estimated optimal portfolios even when the constraints are wrong. Surprisingly, with no-short-sale constraints in place, the sample covariance matrix performs as well as covariance matrix estimates based on factor models, shrinkage estimators, and daily data. Copyright (c) 2003 by the American Finance Association.

When studying convergence of measures, an important issue is the choice of probability metric. In this review, we provide a summary and some new results concerning bounds among ten important probability metrics/distances that are used by statisticians and probabilists. We focus on these metrics because they are either well-known, commonly used, or admit practical bounding techniques. We summarize these relationships in a handy reference diagram, and also give examples to show how rates of convergence can depend on the metric chosen.

Zweig, J., 1998. Five investing lessons from America's top pension fund. Money, 115–118.

Ben-Tal, A., El Ghaoui, L., Nemirovski, A., 2009. Robust Optimization. Princeton Series in Applied Mathematics. Princeton University Press, Princeton, NJ.

Benartzi, S., Thaler, R., 2001. Naive diversification strategies in defined contribution saving plans. American Economic Review 91, 79–98.