Article

Applying Optimization Technology to Portfolio Management

Authors:
John M. Mulvey

Abstract

Multiperiod optimization models are typical in portfolio management. Prominent examples include fund construction, the investment/consumption problem for individual investors, and asset/liability management for global financial organizations. Powerful optimization technology can expand the range of solvable portfolio applications, especially for investment problems over time. Three primary frameworks (stochastic control, stochastic programs, and optimizing simulators) each have particular advantages. Advanced optimization tools will be useful in many future applications.
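
As a rough illustration of the third framework, the sketch below implements a toy "optimizing simulator": a fixed-mix stock/bond policy is evaluated over Monte Carlo return scenarios and the stock weight is chosen by grid search. The return parameters and the fixed-mix policy family are hypothetical assumptions, not taken from the paper.

```python
# Toy "optimizing simulator": evaluate a fixed-mix policy over Monte Carlo
# scenarios and pick the stock weight by grid search on expected log growth.
# All parameters are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_scenarios, n_periods = 5_000, 10
stock = rng.lognormal(mean=0.06, sigma=0.18, size=(n_scenarios, n_periods))  # gross returns
bond = rng.lognormal(mean=0.03, sigma=0.05, size=(n_scenarios, n_periods))

def expected_log_growth(w):
    """Expected annualized log growth of a fixed-mix policy rebalanced each period."""
    gross = w * stock + (1.0 - w) * bond        # per-period portfolio gross return
    wealth = np.prod(gross, axis=1)             # terminal wealth per scenario
    return np.mean(np.log(wealth)) / n_periods

weights = np.linspace(0.0, 1.0, 21)
best_w = max(weights, key=expected_log_growth)
print(f"grid-search fixed-mix stock weight: {best_w:.2f}")
```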

... The presence of ample data for estimating system parameters, such as investment return and risk, leads to RPO under aleatory uncertainty, while limited or no data, which necessitates expert judgment for parameter estimation, leads to RPO under epistemic uncertainty. The portfolio optimization field has seen an extensive volume of methods and applications (e.g., Ehrgott et al. 2004; Mulvey 2004; Boyle et al. 2008), with some incorporating robustness (Kawas and Thiele 2011; DeMiguel and Nogales 2009; Delage and Ye 2010), but most focus on aleatory uncertainty. Handling both aleatory and epistemic uncertainty is less common. ...
Article
Full-text available
This paper proposes three machine learning-based uncertainty set construction methods and a novel uncertainty quantification method for robustness-based optimization. The proposed methods are capable of capturing all possible types of uncertainties in the form of sparse point and/or interval data and efficiently quantify uncertainty in robustness-based optimization. In contrast to traditional approaches that rely on expert opinion or assumptions to construct uncertainty sets, our methods leverage machine learning algorithms to extract uncertainty patterns from historical data and effectively capture epistemic uncertainty in input variables. Another challenge in existing robustness-based optimization under epistemic uncertainty is the high computational cost associated with the iterative method of epistemic analysis. Moreover, these methods struggle with uncertainty sets containing a mixture of point and interval data. To overcome these limitations, we propose a unified probabilistic approach that utilizes maximum likelihood estimation to efficiently quantify uncertainty, regardless of the data form. Using the proposed methods, we introduce new approaches for robustness-based portfolio optimization (RPO) and multidisciplinary design optimization (RO/MDO), namely the worst-case maximum likelihood estimation (WMLE)-based single-loop RPO and WMLE-based RO/MDO approach, respectively. Illustrated through applications to minimum variance portfolios, our methods consistently demonstrate significant improvements over established formulations in terms of both return and risk. Additionally, when applied to a complex multidisciplinary engineering design, the WMLE-based RO/MDO approach showcases a computationally efficient solving technique, yielding more realistic results than the existing methodologies.
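
The WMLE machinery itself is not reproduced here; the snippet below is only a minimal sketch of the generic robust ingredient, namely worst-case mean-variance selection under a box uncertainty set on expected returns. All inputs (mu_hat, delta, cov, risk_aversion) are hypothetical.

```python
# Minimal robust mean-variance sketch under box uncertainty on expected returns.
# This is a standard worst-case formulation for illustration, not the paper's
# WMLE-based method; all numbers are hypothetical.
import numpy as np
from scipy.optimize import minimize

mu_hat = np.array([0.08, 0.10, 0.06])           # point estimates of expected return
delta  = np.array([0.02, 0.03, 0.01])           # half-widths of the uncertainty box
cov    = np.array([[0.04, 0.01, 0.00],
                   [0.01, 0.09, 0.02],
                   [0.00, 0.02, 0.03]])
risk_aversion = 4.0

def neg_worst_case_utility(w):
    # For long-only weights, the worst case over the box is mu_hat - delta.
    worst_mu = mu_hat - delta
    return -(w @ worst_mu - 0.5 * risk_aversion * w @ cov @ w)

n = len(mu_hat)
cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]   # fully invested
res = minimize(neg_worst_case_utility, x0=np.ones(n) / n,
               bounds=[(0.0, 1.0)] * n, constraints=cons, method="SLSQP")
print("robust weights:", np.round(res.x, 3))
```
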
... There is now an extensive volume of methods and applications available for portfolio optimization (e.g., Ehrgott et al. 2004; Mulvey 2004; Boyle et al. 2008; Anagnostopoulos and Mamanis 2010; Oliveira et al. 2011; Mansini et al. 2014; Bekiros et al. 2015; Tofighian et al. 2018). While some of the existing methods include robustness in the portfolio optimization framework (e.g., Kawas and Thiele 2008; DeMiguel and Nogales 2009; Delage and Ye 2010; Chen et al. 2011; Zymler et al. 2011), most of the existing methods for portfolio optimization can deal with only aleatory uncertainty. ...
Article
Full-text available
In this paper, we propose formulations and algorithms for robust portfolio optimization under both aleatory uncertainty (i.e., natural variability) and epistemic uncertainty (i.e., imprecise probabilistic information) arising from interval data. Epistemic uncertainty is represented using two approaches: (1) moment bounding approach and (2) likelihood-based approach. This paper first proposes a nested robustness-based portfolio optimization formulation using the moment bounding approach-based representation of epistemic uncertainty. The nested robust portfolio formulation is simple to implement; however, the computational cost is often high due to the epistemic analysis performed inside the optimization loop. A decoupled approach is then proposed to un-nest the robustness-based portfolio optimization from the analysis of epistemic variables to achieve computational efficiency. This paper also proposes a single-loop robust portfolio optimization formulation using the likelihood-based representation of epistemic uncertainty that completely separates the epistemic analysis from the portfolio optimization framework and thereby achieves further computational efficiency. The proposed robust portfolio optimization formulations are tested on real market data from five S&P 500 companies, and performance of the robust optimization models is discussed empirically based on portfolio return and risk.
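
As a hedged sketch of the likelihood-based representation of epistemic uncertainty, the snippet below fits a normal model by maximum likelihood to a mixture of point and interval observations, with interval data contributing probability-mass (CDF-difference) terms to the likelihood. The data and the normality assumption are illustrative, not the paper's exact formulation.

```python
# Sketch: maximum-likelihood fit of a normal model to mixed point/interval data,
# in the spirit of a likelihood-based representation of epistemic uncertainty.
# Illustrative assumption-laden example only.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

points = np.array([0.012, -0.004, 0.021, 0.008])        # exact observations
intervals = np.array([[-0.01, 0.02], [0.00, 0.03]])     # interval observations

def neg_log_likelihood(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)                            # keep sigma positive
    ll = norm.logpdf(points, mu, sigma).sum()            # point-data terms
    lo, hi = intervals[:, 0], intervals[:, 1]
    ll += np.log(norm.cdf(hi, mu, sigma) - norm.cdf(lo, mu, sigma)).sum()
    return -ll

res = minimize(neg_log_likelihood, x0=[0.0, np.log(0.01)], method="Nelder-Mead")
mu_mle, sigma_mle = res.x[0], np.exp(res.x[1])
print(f"MLE mean {mu_mle:.4f}, MLE std {sigma_mle:.4f}")
```
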
... Three optimisation technologies that can be applied to the portfolio selection problem, namely stochastic control, stochastic programming and Monte Carlo simulation, are also reviewed and compared by Mulvey in [92]. Stochastic control represents an ideal framework that is easy to understand and implement once a solution is obtained, but it is limited to small problems and is hard to solve in analytic form. ...
... Nevertheless, the introduction of real features involving the use of integer variables may increase problem complexity significantly and make LP-solvable models more competitive with respect to quadratic models, for which satisfactory solution methods are not available. Moreover, recent advances in computing capability have opened up new solution opportunities and led to extraordinary progress in statistics (see Efron [30]) as well as in optimization (see Mulvey [74]), with enormous effects in different application contexts, including finance. ...
Article
Markowitz formulated the portfolio optimization problem through two criteria: the expected return and the risk, as a measure of the variability of the return. The classical Markowitz model uses the variance as the risk measure and is a quadratic programming problem. Many attempts have been made to linearize the portfolio optimization problem. Several different risk measures have been proposed which are computationally attractive as (for discrete random variables) they give rise to linear programming (LP) problems. About twenty years ago, the mean absolute deviation (MAD) model drew a lot of attention, resulting in much research and speeding up the development of other LP models. Further, LP models based on the conditional value at risk (CVaR) had a great impact on new developments in portfolio optimization during the first decade of the 21st century. LP solvability may become relevant for real-life decisions when portfolios have to meet side constraints and take into account transaction costs, or when large instances have to be solved. In this paper we review the variety of LP-solvable portfolio optimization models presented in the literature, the real features that have been modeled, and the solution approaches to the resulting models, in most cases mixed integer linear programming (MILP) models. We also discuss the impact of the inclusion of the real features.
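
For concreteness, a minimal sketch of the MAD model as a linear program is shown below, using scipy.optimize.linprog. The scenario return matrix and the target return are hypothetical; real applications would use historical or simulated scenarios and add the side constraints discussed in the paper.

```python
# Minimal sketch of the mean absolute deviation (MAD) portfolio model as an LP.
# Scenario returns and the target return are hypothetical.
import numpy as np
from scipy.optimize import linprog

R = np.array([[0.05, 0.02, 0.03],      # scenario-by-asset return matrix (T x n)
              [0.01, 0.04, 0.02],
              [-0.02, 0.03, 0.01],
              [0.04, -0.01, 0.02]])
T, n = R.shape
mu = R.mean(axis=0)
target = 0.02                          # required expected portfolio return

# Decision variables: n asset weights x, then T absolute-deviation variables d.
c = np.concatenate([np.zeros(n), np.ones(T) / T])       # minimize average deviation
D = R - mu                                              # centered scenario returns
A_ub = np.vstack([
    np.hstack([ D, -np.eye(T)]),                        #  D x - d <= 0
    np.hstack([-D, -np.eye(T)]),                        # -D x - d <= 0
    np.concatenate([-mu, np.zeros(T)])[None, :],        # mu . x >= target
])
b_ub = np.concatenate([np.zeros(2 * T), [-target]])
A_eq = np.concatenate([np.ones(n), np.zeros(T)])[None, :]   # fully invested
b_eq = [1.0]
bounds = [(0, 1)] * n + [(0, None)] * T

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("MAD-optimal weights:", np.round(res.x[:n], 3))
```
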
Article
Purpose: This study aims to utilize the mean–variance optimization framework of Markowitz (1952) and the generalized reduced gradient (GRG) nonlinear algorithm to find the optimal portfolio that maximizes return while keeping risk at a minimum. Design/methodology/approach: This study applies the portfolio optimization concept of Markowitz (1952) and the GRG nonlinear algorithm to a portfolio consisting of the 30 leading stocks from three different sectors of the Amman Stock Exchange over the period from 2009 to 2013. Findings: The selected portfolios achieve a monthly return of 5 per cent while keeping risk at a minimum. However, if the short-selling constraint is relaxed, the monthly return rises to 9 per cent. Moreover, the GRG nonlinear algorithm enables the construction of a portfolio with a Sharpe ratio of 7.4. Practical implications: The results of this study are vital to both academics and practitioners, specifically Arab and Jordanian investors. Originality/value: To the best of the author's knowledge, this is the first study in Jordan and in the Arab world that constructs optimum portfolios based on the mean–variance optimization framework of Markowitz (1952) and the GRG nonlinear algorithm.
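
A minimal sketch in the same spirit follows: a long-only portfolio that maximizes the Sharpe ratio, with SLSQP used as a stand-in for the GRG nonlinear algorithm. The monthly return, covariance, and risk-free inputs are hypothetical.

```python
# Markowitz-style sketch: maximize the Sharpe ratio of a long-only portfolio.
# SLSQP is a stand-in for the GRG algorithm; all inputs are hypothetical.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.010, 0.015, 0.008, 0.012])      # hypothetical monthly mean returns
cov = np.diag([0.0009, 0.0016, 0.0004, 0.0012])  # hypothetical (diagonal) covariance
rf = 0.002                                       # hypothetical monthly risk-free rate

def neg_sharpe(w):
    return -(w @ mu - rf) / np.sqrt(w @ cov @ w)

n = len(mu)
cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
long_only = [(0.0, 1.0)] * n                     # relax these bounds to allow shorts
res = minimize(neg_sharpe, x0=np.ones(n) / n, method="SLSQP",
               bounds=long_only, constraints=cons)
print("weights:", np.round(res.x, 3), "Sharpe:", round(-res.fun, 3))
```
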
Article
Full-text available
We consider a financial market model with two assets. One has a deterministic rate of growth, while the rate of growth of the second asset is governed by a Brownian motion with drift. We can shift money from one asset to another; however, there are losses of money (brokerage fees) involved in shifting money from the risky to the nonrisky asset. We want to maximize the expected rate of growth of funds. It is proved that an optimal policy keeps the ratio of funds in the risky and nonrisky assets within a certain interval with minimal effort.
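
The snippet below is a simulation sketch of such a band policy under stated assumptions (hypothetical band edges, fee, and return parameters): the risky fraction is pushed back to the nearest edge of [low, high] only when it drifts outside, with a proportional fee charged on sales of the risky asset.

```python
# Simulation sketch of a band ("interval") rebalancing policy with a proportional
# fee on risky-asset sales. Band edges, fee, and return parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
low, high, fee = 0.45, 0.75, 0.005
r_safe, mu, sigma, dt = 0.02, 0.07, 0.20, 1 / 252
risky, safe = 0.6, 0.4                          # initial holdings

for _ in range(252 * 10):                       # ten years of daily steps
    risky *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal())
    safe *= np.exp(r_safe * dt)
    frac = risky / (risky + safe)
    if frac > high:
        # sell just enough so the post-trade fraction equals the upper edge,
        # accounting for the proportional fee on the sale
        sell = (frac - high) * (risky + safe) / (1 - high * fee)
        risky -= sell
        safe += sell * (1 - fee)
    elif frac < low:                            # buy up to the lower edge (no fee)
        buy = (low - frac) * (risky + safe)
        risky += buy
        safe -= buy

print(f"terminal wealth: {risky + safe:.3f}, risky fraction: {risky / (risky + safe):.2f}")
```
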
Article
Full-text available
This paper describes the formulation of the Russell-Yasuda Kasai financial planning model, including the motivation for the model. The presentation complements the discussion of the technical details of the financial modeling process and the managerial impact of its use to help allocate the firm’s assets over time discussed in [D. R. Carino et al., Interfaces 24, No. 1, 29-49 (1994); D. R. Carino, D. H. Myers and W. T. Ziemba, Oper. Res. 46, No. 4, 450-462 (1998)]. The multistage stochastic linear program incorporates Yasuda Kasai’s asset and liability mix over a five-year horizon followed by an infinite horizon steady-state end-effects period. The objective is to maximize expected long-run profits less expected penalty costs from constraint violations over the infinite horizon. Scenarios are used to represent the uncertain parameter distributions. The constraints represent the institutional, cash flow, legal, tax, and other limitations on the asset and liability mix over time.
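
A toy two-stage, scenario-based analogue of this kind of model is sketched below: choose an asset mix, then penalize the expected shortfall below a liability target across return scenarios. The scenarios, probabilities, penalty, and liability figure are hypothetical, and the real model is multistage with far richer constraints.

```python
# Toy two-stage stochastic program in the spirit of scenario-based ALM models:
# maximize expected wealth less an expected penalty for falling short of a
# liability target. All figures are hypothetical.
import numpy as np
from scipy.optimize import linprog

R = 1 + np.array([[0.06, 0.02],        # scenario gross returns (K scenarios x n assets)
                  [0.02, 0.03],
                  [-0.10, 0.04]])
p = np.array([0.4, 0.4, 0.2])          # scenario probabilities
budget, liability, penalty = 100.0, 103.0, 5.0
K, n = R.shape

# Variables: n asset holdings x, then K shortfall variables s (one per scenario).
c = np.concatenate([-(p @ R), penalty * p])          # -E[wealth] + penalty * E[shortfall]
A_ub = np.hstack([-R, -np.eye(K)])                   # R_k x + s_k >= liability
b_ub = -liability * np.ones(K)
A_eq = np.concatenate([np.ones(n), np.zeros(K)])[None, :]   # invest the full budget
b_eq = [budget]
bounds = [(0, None)] * (n + K)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("holdings:", np.round(res.x[:n], 2),
      "expected wealth:", round(float(p @ R @ res.x[:n]), 2))
```
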
Article
Towers Perrin-Tillinghast employs a stochastic asset-and-liability management system for helping its pension plan and insurance clients understand the risks and opportunities related to capital market investments and other major decisions. The system has three major components: (1) a stochastic scenario generator (CAP:Link); (2) a nonlinear optimization simulation model (OPT:Link); and (3) a flexible liability- and financial-reporting module (FIN:Link). Each part improves over existing technology as compared with traditional actuarial approaches. The integrated investment system links asset risks to liabilities so that company goals are best achieved. For example, US WEST saved $450 to $1,000 million in opportunity costs in its pension plan by following the advice of the asset-and-liability system.
Article
Optimal consumption and investment decisions are studied for an investor who has available a bank account paying a fixed rate of interest and a stock whose price is a log-normal diffusion. This problem was solved in the literature when transactions between bank and stock are costless. Here we suppose that there are charges on all transactions equal to a fixed percentage of the amount transacted. It is shown that the optimal buying and selling policies are the local times of the two-dimensional process of bank and stock holdings at the boundaries of a wedge-shaped region which is determined by the solution of a nonlinear free boundary problem. An algorithm for solving the free boundary problem is given.
Article
Leading pension plans employ asset and liability management systems for optimizing their strategic decisions. The multi-stage models link asset allocation decisions with payments to beneficiaries, changes to plan policies and related issues, in order to maximize the plan's surplus within a given risk tolerance. Temporal aspects complicate the problem but give rise to special opportunities for dynamic investment strategies. Within these models, the portfolio must be revised in the face of transaction and market impact costs. The re-balancing problem is posed as a generalized network with side conditions. We develop a specialized algorithm for solving the resulting problem. A real-world pension example illustrates the concepts.
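
A much-simplified, hypothetical stand-in for the rebalancing step is sketched below: a single-period LP that moves holdings into target bands while respecting a cash balance with proportional transaction costs and minimizing turnover. The generalized-network structure and side conditions of the paper are not modeled here.

```python
# Simplified single-period rebalancing LP with proportional transaction costs.
# Holdings, cost rate, and target bands are hypothetical.
import numpy as np
from scipy.optimize import linprog

h0 = np.array([40.0, 35.0, 25.0])        # current dollar holdings
cash0 = 0.0
cost = 0.005                             # proportional transaction cost
lo = np.array([30.0, 30.0, 30.0])        # lower target band (dollars)
hi = np.array([36.0, 34.0, 36.0])        # upper target band (dollars)
n = len(h0)

# Variables: buy_1..buy_n, sell_1..sell_n (nonnegative dollar amounts).
c = np.ones(2 * n)                                  # minimize total turnover
I = np.eye(n)
A_ub = np.vstack([
    np.hstack([ I, -I]),                            # h0 + buy - sell <= hi
    np.hstack([-I,  I]),                            # h0 + buy - sell >= lo
    np.concatenate([(1 + cost) * np.ones(n), -(1 - cost) * np.ones(n)])[None, :],
])                                                  # purchases funded by sales and cash
b_ub = np.concatenate([hi - h0, h0 - lo, [cash0]])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * n))
buy, sell = res.x[:n], res.x[n:]
print("net trades (buy - sell):", np.round(buy - sell, 2))
```
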
Article
The objective of a defined-benefit pension fund's asset allocation policy should be to fully fund accrued pension liabilities at the lowest cost to the plan sponsor, subject to sensible risk. A major risk plan sponsors face is that higher contributions will be required should the asset portfolio not be constructed properly. Specifically, the plan sponsor in establishing its asset allocation strategy should take into account both the present value of liabilities (cash flows) and the volatile behavior of the value of the liabilities due to changes in interest rates. While fluctuations in the present value of assets versus liabilities (funding ratios) represent high financial risk for all plan sponsors, most plan sponsors fail to recognize this risk because it is seriously attenuated by actuarial and accounting smoothing of financial statements. Instead, due to the way pension contributions are calculated and earnings reported, plan sponsors focus on the return-on-assets assumption rather than assets versus liabilities. The authors look at the performance of defined-benefit corporate pension plans in 2000 and 2001, and consider the implications of this performance for future corporate earnings. They then address issues associated with measuring pension liabilities and offer solutions to deal with this measurement problem.
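
A short worked sketch of the liability-measurement point follows: discounting a hypothetical stream of benefit payments at a market rate and recomputing the funding ratio after a one-point drop in that rate shows how sensitive the funding ratio is to interest rates.

```python
# Worked sketch: present value of projected benefit cash flows, the funding ratio,
# and its sensitivity to a drop in the discount rate. All figures are hypothetical.
import numpy as np

cash_flows = np.full(30, 8.0)            # $8M of benefit payments per year for 30 years
years = np.arange(1, 31)
assets = 110.0                           # current market value of plan assets ($M)

def liability_pv(rate):
    return float(np.sum(cash_flows / (1 + rate) ** years))

for rate in (0.06, 0.05):                # a 1-point drop in the discount rate
    pv = liability_pv(rate)
    print(f"rate {rate:.0%}: liability PV {pv:6.1f}M, funding ratio {assets / pv:.2f}")
```
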
Article
An agent can invest in a high-yield bond and a low-yield bond, holding either long or short positions in either asset. Any movement of money between these two assets incurs a transaction cost proportional to the size of the transaction. The low-yield bond is liquid in the sense that wealth invested in this bond can be consumed directly without a transaction cost; wealth invested in the high-yield bond can be consumed only by first moving it into the low-yield bond. The problem of optimal consumption and investment on an infinite planning horizon is solved for a class of utility functions larger than the class of power functions. Copyright 1991 Blackwell Publishers.
Article
This chapter reviews the optimal consumption-investment problem for an investor whose utility for consumption over time is a discounted sum of single-period utilities, with the latter being constant over time and exhibiting constant relative risk aversion (power-law functions or logarithmic functions). It presents a generalization of Phelps' model to include portfolio choice and consumption. The explicit form of the optimal solution is derived for the special case of utility functions having constant relative risk aversion. The optimal portfolio decision is independent of time, wealth, and the consumption decision at each stage. Most analyses of portfolio selection, whether they are of the Markowitz–Tobin mean-variance or of more general type, maximize over one period. The chapter only discusses special and easy cases that suffice to illustrate the general principles involved and presents the lifetime model that reveals that investing for many periods does not itself introduce extra tolerance for riskiness at early or any stages of life.
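
For reference, the constant-mix property described above has a well-known closed form in the continuous-time analogue (Merton's model) with constant relative risk aversion: the optimal fraction of wealth held in the risky asset is independent of wealth and time, as shown below.

```latex
% Classical constant-mix result for CRRA utility (continuous-time analogue of the
% model above): mu = risky-asset drift, r = riskless rate, sigma = volatility,
% gamma = coefficient of relative risk aversion.
\[
  \pi^{*} \;=\; \frac{\mu - r}{\gamma\,\sigma^{2}}
\]
```
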
Article
This article develops and empirically implements a stock valuation model. The model makes three assumptions: (i) dividend equals a fixed fraction of net earnings-per-share plus noise, (ii) the economy's pricing kernel is consistent with the Vasicek term structure of interest rates, and (iii) the expected earnings growth rate follows a mean-reverting stochastic process. Our parameterization of the earnings process distinguishes long-run earnings growth from current growth and separately measures the characteristics of the firm's business cycle. The resulting stock valuation formula has three variables as input: net earnings-per-share, expected earnings growth, and interest rate. Using a sample of individual stocks, our empirical exercise leads to the following conclusions: (1) the derived valuation formula produces significantly lower pricing errors than existing models, both in- and out-of-sample; (2) modeling earnings growth dynamics properly is the most crucial for achieving better perfor...
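
The closed-form valuation formula is not reproduced here; as a small sketch of assumption (iii), the snippet below simulates a mean-reverting (Ornstein-Uhlenbeck-type) expected earnings growth rate with hypothetical parameters.

```python
# Sketch of assumption (iii): a mean-reverting process for the expected earnings
# growth rate. Parameters are hypothetical; the valuation formula is not reproduced.
import numpy as np

rng = np.random.default_rng(2)
kappa, g_bar, sigma_g, dt = 1.5, 0.05, 0.03, 1 / 12    # speed, long-run level, vol, step
g = 0.12                                               # current expected growth rate
path = [g]
for _ in range(120):                                   # ten years of monthly steps
    g += kappa * (g_bar - g) * dt + sigma_g * np.sqrt(dt) * rng.standard_normal()
    path.append(g)
print(f"start {path[0]:.3f}, after 10y {path[-1]:.3f}, long-run level {g_bar:.3f}")
```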