
Institutional investors usually employ mean-variance analysis to determine optimal portfolio weights. Almost immediately upon implementation, however, the portfolio's weights become sub-optimal as changes in asset prices cause the portfolio to drift away from the optimal targets. We apply a quadratic heuristic to address the optimal rebalancing problem, and we compare it to a dynamic programming solution as well as to standard industry heuristics. The quadratic heuristic provides solutions that are remarkably close to the dynamic programming solution. Moreover, unlike the dynamic programming solution, the quadratic heuristic is scalable to as many as several hundred assets.
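As an illustration of the single-period trade-off such a quadratic heuristic balances (suboptimality cost against transaction cost), here is a minimal sketch. The covariance matrix, weights, and penalty parameters are hypothetical, and the closed-form update is a generic quadratic-penalty construction, not necessarily the authors' exact specification.

```python
import numpy as np

def quadratic_rebalance(w, w_target, cov, risk_aversion, cost_per_unit):
    """One step of a quadratic rebalancing heuristic: choose new weights x
    minimizing a suboptimality cost, quadratic in the deviation from the
    target, plus a quadratic transaction-cost penalty on the trade size:
        risk_aversion * (x - t)' C (x - t) + cost_per_unit * ||x - w||^2
    The first-order condition gives a linear system:
        (risk_aversion * C + cost_per_unit * I) x
            = risk_aversion * C t + cost_per_unit * w
    """
    n = len(w)
    A = risk_aversion * cov + cost_per_unit * np.eye(n)
    b = risk_aversion * cov @ w_target + cost_per_unit * w
    x = np.linalg.solve(A, b)
    return x / x.sum()  # renormalize to remain fully invested

# Hypothetical two-asset example: weights have drifted from a 60/40 target
cov = np.array([[0.04, 0.01], [0.01, 0.02]])
w = np.array([0.7, 0.3])   # drifted weights
t = np.array([0.6, 0.4])   # optimal targets
new_w = quadratic_rebalance(w, t, cov, risk_aversion=5.0, cost_per_unit=1.0)
```

Because trading is penalized, the heuristic moves the portfolio part of the way back toward the target rather than all the way.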


... Their method is widely referenced in recent studies of dynamic portfolio management, such as Tahar et al. (2007), Brito (2008), Branger et al. (2010), Israelov and Katz (2011), Holden and Holden (2013), and Carroll et al. (2017). In addition, Kritzman and Myrgren (2009) and Brown and Smith (2011) address and alleviate the curse-of-dimensionality problem that arises as the number of assets increases. This article suggests that the traditional uniformly distributed grid can be improved by allocating the grid points in a non-uniform fashion according to their importance. ...

Sophisticated predetermined ratios are used to allocate portfolio asset weights to strike a good trade-off between profitability and risk in trading. Rebalancing these weights due to market fluctuations without incurring excessive transaction costs and tracking errors is a vital financial engineering problem. Rebalancing strategies can be modeled by discretely enumerating portfolio weights to form a grid space and then optimized via the Bellman equation. Discretization errors are reduced by increasing the grid resolution at the cost of increased computational time. To minimize errors with constrained computational resources (e.g., grid nodes), we vary the grid resolution according to the probability distribution of asset weights. Specifically, a grid space is first divided into several areas, and each area’s probability is estimated. Then, the discretization error’s upper bound is minimized by inserting an adequate number of grid nodes determined by Lagrange multipliers in a non-uniform fashion. In experiments, the proposed multiresolution rebalancing outperforms traditional uniform-resolution rebalancing and popular benchmark strategies such as the periodic, tolerance-band, and buy-and-hold strategies.

... More advanced rebalancing processes tend to use optimisation techniques (such as minimising cost and/or minimising (conditional) value-at-risk, as in MacLean et al. [37] and Meghwani and Thakur [39]); consider derivatives, as in Israelov and Tummala [26]; use stochastic probability theory, dynamic programming, and heuristic approximations such as machine learning algorithms, as in Kritzman et al. [30], Perrin and Roncalli [45] and Sun et al. [53]; or use fuzzy logic models, as in Fang et al. [18]. In general, these processes rely on the ability of a portfolio manager to buy and sell with the market; in particular, they rely on a reasonable level of liquidity of their assets. ...

This paper describes multi-portfolio `internal' rebalancing processes used in the finance industry. Instead of trading with the market to `externally' rebalance, these internal processes detail how portfolio managers buy and sell between their portfolios to rebalance. We give an overview of currently used internal rebalancing processes, including one known as the `banker' process and another known as the `linear' process. We prove the banker process disadvantages the nominated banker portfolio in volatile markets, while the linear process may advantage or disadvantage portfolios. We describe an alternative process that uses the concept of `market-invariance'. We give analytic solutions for small cases, while in general show that the $n$-portfolio solution and its corresponding `market-invariant' algorithm solve a system of nonlinear polynomial equations. It turns out this algorithm is a rediscovery of the RAS algorithm (also called the `iterative proportional fitting procedure') for biproportional matrices. We show that this process is more equitable than the banker and linear processes, and demonstrate this with empirical results. The market-invariant process has already been implemented by industry due to the significance of these results.
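The RAS algorithm (iterative proportional fitting) that the paper's market-invariant process rediscovers can be sketched in a few lines: alternately rescale the rows and columns of a positive matrix until its margins match the targets. The seed matrix and target margins below are hypothetical.

```python
import numpy as np

def ipf(matrix, row_targets, col_targets, tol=1e-10, max_iter=1000):
    """RAS / iterative proportional fitting: rescale rows and columns of a
    positive matrix until its row and column sums match the given targets
    (which must share the same grand total)."""
    M = matrix.astype(float).copy()
    for _ in range(max_iter):
        M *= (row_targets / M.sum(axis=1))[:, None]   # match row sums
        M *= (col_targets / M.sum(axis=0))[None, :]   # match column sums
        if np.allclose(M.sum(axis=1), row_targets, atol=tol):
            break
    return M

# Hypothetical example: start from a uniform seed and fit both margins
seed = np.ones((3, 3))
balanced = ipf(seed,
               row_targets=np.array([2.0, 3.0, 5.0]),
               col_targets=np.array([4.0, 4.0, 2.0]))
```

For strictly positive seeds the iteration converges to the unique biproportional fit of the seed matrix to the two margins.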

... If, on the other hand, the fund is large, the market impact cost significantly affects the performance of the portfolio. This problem was first studied in a path-breaking paper by Perold [18]. Recently, Kritzman et al. [14] proposed a multi-period stochastic programming approach for calculating a minimal-transaction-cost rebalance schedule to a target portfolio. However, there is no guarantee of rebalancing to the optimal portfolio, since they solve the optimal portfolio construction problem and the optimal rebalancing problem separately. ...

This paper is concerned with an optimization problem associated with a rebalancing schedule of a large-scale fund subject to nonconvex transaction costs. We will formulate this problem as a 0-1 mixed integer programming problem under linear constraints using absolute deviation as the measure of risk. This problem can be solved by integer programming software if the size of the universe is small. However, it is still beyond the reach of the state-of-the-art technology to solve a large-scale rebalancing problem. We will show that we can now solve these problems almost exactly within a practical amount of time by using an elaborate heuristic approach.

... Dynamic revision strategies are generally complex, computationally intensive, and thus somewhat limited to small investment universes (see Donohue and Yip (2003)). Notably, the Markowitz and van Dijk (2003) and Kritzman et al. (2009) approximations represent scalable alternatives to dynamic revision strategies. However, they remain heuristic approximations to the underlying dynamic program. ...

This article proposes a novel approach to portfolio revision. The current literature on portfolio optimization uses a somewhat naive approach, where portfolio weights are always completely revised after a predefined fixed period. However, one shortcoming of this procedure is that it ignores parameter uncertainty in the estimated portfolio weights, as well as the bias of the in-sample portfolio mean and variance as estimates of the expected portfolio return and out-of-sample variance. To rectify this problem, we propose a Jackknife procedure to determine the optimal revision intensity, i.e., the percent of wealth that should be shifted to the new, in-sample optimal portfolio. We find that our approach leads to highly stable portfolio allocations over time, and can significantly reduce the turnover of several well-established portfolio strategies. Moreover, the observed turnover reductions lead to statistically and economically significant performance gains in the presence of transaction costs.

For the asset management industry, the last decade has been characterized by a strong focus on the creation of "alpha" through various approaches (Gupta et al. 2016) and methodologies. Despite this, we have all heard at one point or another that asset allocation is an important, if not the most important, decision when investing one's wealth. Various studies have attempted to disentangle the magnitude of the contribution of allocation versus security selection as drivers of the risk and return profile of a portfolio. The debate centers on a contribution of around 90%. Regardless of the exact number, we observe that the majority of the effort by asset managers (capital and resources allocated) focuses on generating alpha, while alpha accounts for only a residual part of portfolio performance.

Traditional approaches to asset-liability management have evolved substantially in recent years. Unfortunately, even the sophisticated, multi-period approaches in common use neglect important features of the underlying economic problem. This chapter describes a new approach to asset-liability management that combines four key elements, one of which is quite new to the finance literature.

A technique called dynamic programming can be used to identify an optimal rebalancing schedule, which significantly reduces rebalancing and suboptimality costs. Dynamic programming provides solutions to multi-stage decision processes in which decisions made in earlier periods affect the choices available in later periods, and it yields the optimal year-by-year decision policy by working backwards from year 10. Tests of the relative efficacy of dynamic programming and the MvD heuristic, using data on domestic equities, domestic fixed income, non-US equities, non-US fixed income, and emerging market equities, show that the MvD heuristic performs quite well compared to the dynamic programming solution in the two-asset case and substantially better than other heuristics. Increasing the number of assets narrows the advantage of dynamic programming over the MvD heuristic, and the ranking reverses at five assets. Dynamic programming cannot be applied beyond five assets, but the MvD heuristic can be extended to as many as 100 assets. The MvD heuristic reduces total costs relative to all of the other heuristics by substantial amounts. Its performance relative to the dynamic programming solution improves as more assets are added, although this improvement partly reflects a growing reliance on an approximation in the dynamic programming approach.

The pure form of the log-optimal investment strategy is often considered impractical due to the inherent need for continuous rebalancing. It is however possible to improve investor log utility by adopting a discrete-time periodic rebalancing strategy. Under the assumptions of geometric Brownian motion for assets and approximate log-normality for a sum of log-normal random variables, we find that the optimum rebalance frequency is a piecewise continuous function of investment horizon. One can construct this rebalance strategy function, called the optimal rebalance frequency function, up to a specified investment horizon given a limited trajectory of the expected log of portfolio growth when the initial portfolio is never rebalanced. We develop the analytical framework to compute the optimal rebalance strategy in linear time, a significant improvement from the previously proposed search-based quadratic time algorithm.

The goal of strategic rebalancing is to limit unintended drift or tracking error from the strategic policy benchmark without incurring large transaction costs. Traditional rebalancing, however, specifies fixed bands around each asset class and can result in significant tracking error and high transaction costs in stressed markets, as volatility and illiquidity increase. Tracking error rebalancing is an alternative approach in which investors directly monitor tracking error (rather than asset class misweights) and ensure that tracking error stays below a specified threshold using trades that minimize transaction costs. Rather than trading all assets that breach the fixed bands, investors use current estimates of volatilities and costs to determine the trades that result in the most risk reduction per unit cost. In stressed markets, this strategy can help avoid trades in illiquid assets and exploit asset class relationships to reduce risk at significantly lower costs.
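A minimal sketch of the tracking-error trigger described above, with hypothetical covariances, weights, and threshold; the full strategy additionally selects cost-minimizing trades, which is omitted here.

```python
import numpy as np

def tracking_error(w, w_policy, cov):
    """Tracking error of current weights vs the policy benchmark:
    the volatility of the misweight vector d, sqrt(d' C d)."""
    d = w - w_policy
    return float(np.sqrt(d @ cov @ d))

def needs_rebalance(w, w_policy, cov, threshold):
    """Tracking-error rebalancing trigger: trade only when tracking error
    breaches the threshold, rather than when any single asset breaches
    a fixed band."""
    return tracking_error(w, w_policy, cov) > threshold

# Hypothetical two-asset example: 20% equity vol, 5% bond vol, corr 0.6
cov = np.array([[0.0400, 0.0060],
                [0.0060, 0.0025]])
w_policy = np.array([0.60, 0.40])
w_drift  = np.array([0.66, 0.34])   # equities have drifted up 6 points
te = tracking_error(w_drift, w_policy, cov)
```

Note that because the two misweights are offsetting and correlated, the tracking error here (about 1%) is far smaller than the 6-point misweight alone would suggest, which is exactly the relationship the strategy exploits.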

We study the question of how often an investment portfolio should be rebalanced to maximize investor utility over a given investment horizon. We use the log-optimal strategy, which is appropriate when investors adhere to a logarithmic utility function. In its pure form, this active investment strategy is cost-prohibitive and even impractical due to the significant overhead of continuous rebalancing and trading costs. We develop an analytical framework to compute the expected value of portfolio growth for a log-optimal investor when a given periodic rebalance frequency is used. We show that it is possible to improve investor log utility using this quasi-passive or hybrid rebalancing strategy. We present an algorithm to compute the optimal rebalance frequency for the given portfolio assets following geometric Brownian motion. Simulation studies show that an investor can gain significantly by using the optimal rebalance frequency in lieu of continuous rebalancing.
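The kind of simulation study described above can be sketched as follows, assuming hypothetical GBM parameters: estimate expected log growth of a fixed-weight portfolio under two periodic rebalance frequencies by Monte Carlo. This is an illustration only and does not reproduce the paper's analytical optimal-frequency algorithm.

```python
import numpy as np

def simulate_log_growth(weights, mu, sigma, corr, horizon, n_rebalance,
                        n_paths=2000, seed=0):
    """Monte Carlo estimate of expected log portfolio growth when assets
    follow correlated GBM and the portfolio is rebalanced back to fixed
    weights n_rebalance times over the horizon (in years)."""
    rng = np.random.default_rng(seed)
    n_assets = len(weights)
    dt = horizon / n_rebalance
    chol = np.linalg.cholesky(corr)
    wealth = np.ones(n_paths)
    for _ in range(n_rebalance):
        z = rng.standard_normal((n_paths, n_assets)) @ chol.T
        # exact GBM gross returns over one rebalance interval
        gross = np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        wealth *= gross @ weights   # portfolio resets to target weights
    return np.log(wealth).mean()

# Hypothetical parameters: one risky asset, one stable asset, mild correlation
mu = np.array([0.08, 0.03]); sigma = np.array([0.20, 0.05])
corr = np.array([[1.0, 0.2], [0.2, 1.0]])
w = np.array([0.6, 0.4])
g_annual  = simulate_log_growth(w, mu, sigma, corr, horizon=10, n_rebalance=10)
g_monthly = simulate_log_growth(w, mu, sigma, corr, horizon=10, n_rebalance=120)
```

Comparing such estimates across a grid of frequencies gives the search-based view of the rebalance-frequency problem that the paper's analytical framework replaces.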

Portfolio rebalancing decisions are crucial to today's portfolio managers, especially in a high-frequency trading environment. These decisions must be made quickly under dynamic market conditions. We analyze the efficiency and scalability of a proposed discrete-time rebalancing algorithm suitable for log-optimal investors. These investors seek to maximize the expected value of the log of portfolio growth in the long run. We incrementally apply various computational and optimization techniques to develop a highly efficient version of the algorithm.

We determine an opportune time to rebalance a two-asset portfolio set up using the single-period Markowitz framework. This is achieved by studying and comparing the nature of portfolio evolution under two extreme rebalancing strategies: passive (buy-and-hold) and active (continuous rebalancing). We compute the rebalance time as the period during which the passive strategy generates higher expected investor utility, measured by the Sharpe ratio. We show that the rebalance time exists only for a certain class of assets, driven by their correlation coefficient.

This article advocates a systematic rebalancing process, Volatility-Driven Asset Allocation (VDAA), for dynamically managing the strategic asset allocation. The goal of the suggested algorithm is to adjust the asset exposures so as to reflect the assumptions investors used when determining their strategic allocation, in terms of the balance between risk contributions and expected returns. Such an idea makes sense from the economic point of view of a risk-averse investor who wishes to achieve a smooth long-run performance. The stable risk contribution is determined by a long-run target, with short-term deviations from this target driving the rebalancing of the portfolio exposure. Rebalancing between asset classes smooths the global volatility of the portfolio by decreasing exposure to asset classes with temporarily higher risk contributions and increasing weight in asset classes with temporarily lower risk contributions. Both our backtests and robustness study demonstrate that this risk rebalancing strategy is superior in terms of information ratio to traditional rebalancing rules.
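The risk contributions such a strategy monitors have a standard closed form: asset i's fractional contribution to portfolio variance is w_i (Σw)_i / (wᵀΣw), and the contributions sum to one. A minimal sketch with hypothetical numbers:

```python
import numpy as np

def risk_contributions(w, cov):
    """Fractional risk contributions: RC_i = w_i * (cov @ w)_i / (w' cov w).
    By Euler's theorem these sum to 1 across assets."""
    total_var = w @ cov @ w
    return w * (cov @ w) / total_var

# Hypothetical example: 20% and 8% vols, correlation 0.3, equal weights
cov = np.array([[0.0400, 0.0048],
                [0.0048, 0.0064]])
w = np.array([0.5, 0.5])
rc = risk_contributions(w, cov)   # the volatile asset dominates portfolio risk
```

In this example the high-volatility asset contributes 80% of portfolio risk despite a 50% weight, which is the kind of imbalance a volatility-driven rebalancing rule would trade against.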

We propose a simple analytical construct for incorporating liquidity into portfolio choice. If investors deploy liquidity to raise a portfolio’s expected utility beyond its original expected utility, we attach a shadow asset to tradable assets. If, instead, they deploy liquidity to prevent a portfolio’s expected utility from falling, we attach a shadow liability to assets that are not tradable. This construct enables investors to determine the optimal allocation to illiquid assets. Alternatively, they can use this construct to estimate the premium required of an illiquid asset, or the degree to which they must benefit from liquidity in order to justify foregoing investment in illiquid assets. This approach improves upon other methods of incorporating liquidity into portfolio choice in four fundamental ways: First, it mirrors what actually occurs within a portfolio. Second, it maps units of liquidity onto units of expected return and risk, so that investors can analyze liquidity within the same context as other portfolio decisions. Third, it distinguishes absolute illiquidity from partial illiquidity and enables investors to address these attributes within a single, unifying framework. Fourth, it recognizes that liquidity serves not only to meet demands for capital, but to exploit opportunities as well, thus revealing that investors bear an illiquidity cost to the extent any fraction of a portfolio is immobile.

The Environmental Stewardship Scheme provides payments to farmers for the provision of environmental services based on agricultural foregone income. This creates a potential incentive compatibility problem which, combined with an information asymmetry on farm land heterogeneity, could lead to adverse selection of farmers into the scheme. However, the Higher Level Scheme (HLS) design includes some features that potentially reduce adverse selection. This paper studies the adverse selection problem of the HLS using a principal agent framework at the regional level. It is found that, at the regional level, the enrolment of more land from lower payment regions for a given budget constraint has led to a greater overall contracted area (and thus potential environmental benefit) which has had the effect of reducing the adverse selection problem. In addition, for landscape regions with the same payment rate (i.e. of the same agricultural value), differential weighting of the public demand for environmental goods and services provided by agriculture (measured by weighting an environmental benefit function by the distance to main cities) appears to be reflected into the regulator’s allocation of contracts, thereby also reducing the adverse selection problem.

Institutional fund managers generally rebalance using ad hoc methods such as calendar periods or tolerance band triggers. Another approach is to quantify the cost of a rebalancing strategy in terms of risk-adjusted returns net of transaction costs. An optimal rebalancing strategy that actively seeks to minimize that cost uses certainty-equivalents and the transaction costs associated with a policy to define a cost-to-go function. Stochastic programming is then used to minimize expected cost-to-go. Monte Carlo simulations demonstrate that the method outperforms traditional rebalancing strategies such as periodic and 5% tolerance rebalancing.
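A minimal Monte Carlo sketch in the spirit of the comparison above, counting rebalancing events for calendar versus 5% tolerance-band strategies on a hypothetical 60/40 portfolio; the certainty-equivalent cost-to-go objective of the stochastic programming method is not modeled here.

```python
import numpy as np

def average_rebalance_events(strategy, n_months=120, band=0.05, period=12,
                             n_paths=500, seed=1):
    """Simulate lognormal monthly returns for a two-asset 60/40 portfolio
    and count how often each strategy triggers a rebalance back to target.
    'periodic' trades every `period` months; 'band' trades when either
    weight drifts more than `band` from its target."""
    rng = np.random.default_rng(seed)
    target = np.array([0.6, 0.4])
    mu = np.array([0.005, 0.002])    # hypothetical monthly mean log returns
    vol = np.array([0.045, 0.012])   # hypothetical monthly log volatilities
    events = 0
    for _ in range(n_paths):
        w = target.copy()
        for t in range(1, n_months + 1):
            r = np.exp(rng.normal(mu, vol))   # independent gross returns
            v = w * r
            w = v / v.sum()                   # let the weights drift
            calendar_due = (strategy == "periodic" and t % period == 0)
            band_breached = (strategy == "band"
                             and np.any(np.abs(w - target) > band))
            if calendar_due or band_breached:
                w = target.copy()
                events += 1
    return events / n_paths

periodic_events = average_rebalance_events("periodic")  # 10 per 10-year path
band_events = average_rebalance_events("band")
```

With these assumed parameters the band strategy trades less often than annual rebalancing; a full comparison would also track transaction costs and risk-adjusted returns, as the article does.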

Hedge funds have return peculiarities not commonly associated with traditional investment vehicles. They are more inclined to produce return distributions with significantly non-normal skewness and kurtosis. Investor preferences may be better represented by bilinear utility functions or S-shaped value functions than by neoclassical utility functions, and mean-variance optimization is thus not appropriate for forming portfolios that include hedge funds. Portfolios of hedge funds formed using both mean-variance and full-scale optimization, given a wide range of assumptions about investor preferences, reveal that higher moments of hedge funds do not meaningfully compromise the efficacy of mean-variance optimization if investors have power utility; mean-variance optimization is not particularly effective for identifying optimal hedge fund allocations if preferences are bilinear or S-shaped; and, contrary to conventional wisdom, investors with S-shaped preferences are attracted to kurtosis as well as negative skewness.

Ideally, financial analysts would like to be able to optimize a consumption-investment game with many securities, many time periods, transaction costs, and changing probability distributions. We cannot. For a small optimizable version of such a game, we consider in this article how much would be lost by following one or another heuristic that could be easily scaled to handle large games. For the games considered, a particular mean-variance heuristic does almost as well as the optimum strategy.


Bellman, R.E. "On the Theory of Dynamic Programming." Proceedings of the National Academy of Sciences, 38 (1952), pp. 716-719.

Cremers, Jan-Hein, Kritzman, and Page. "Portfolio Formation with Higher Moments and Plausible Utility." Revere Street Working Paper Series, Financial Economics 272-12 (2003).