META-HEURISTICS: Theory & Applications, edited by Ibrahim H. Osman and James P. Kelly, Kluwer Academic Publishers, Boston/London/Dordrecht.
Solving Dynamic Stochastic Control Problems in Finance Using
Tabu Search with Variable Scaling
Fred Glover
School of Business
University of Colorado at Boulder
Boulder, CO 80309
E-mail: fred.glover@colorado.edu
John M. Mulvey*
Department of Civil Engineering and Operations Research
Princeton University
Princeton, NJ 08544
E-mail: mulvey@macbeth.princeton.edu
Kjetil Hoyland
Department of Managerial Economics and Operations Research
University of Trondheim
N-7034 Trondheim, Norway
E-mail: kth@iok.unit.no
Abstract

Numerous multistage planning problems in finance involve nonlinear and nonconvex decision controls. One of the simplest is the fixed-mix investment strategy. At each stage during the planning horizon, an investor rebalances her/his portfolio in order to achieve a target mix of asset proportions. The decision variables represent the target percentages for the asset categories. We show that a combination of Tabu Search and Variable Scaling generates global optimal solutions for real world test cases, despite the presence of nonconvexities. Computational results demonstrate that the approach can be applied in a practical fashion to investment problems with over 20 stages (20 years), 100 scenarios, and 8 asset categories. The method readily extends to more complex investment strategies with varying forms of nonconvexities.

Key Words: Dynamic stochastic control, financial modeling, tabu search, variable scaling.

* Research supported in part by NSF grant CCR-9102660 and Air Force grant AFOSR-91-0359. We acknowledge the assistance of Michael T. Tapia in preparing this paper for publication.
1. Introduction
Efforts to explain financial markets often employ stochastic differential equations to model randomness over time. For example, interest rate modeling has been an active research area for the past decade, resulting in numerous approaches for modeling the yield curve. See Brennan and Schwartz (1982), Hull (1993), Ingersoll (1987) and Jarrow (1995) for details. These equations form the basis for various analytical studies such as estimating the "fair" market value of securities that display behavior conditional on the future path of interest rates. Often, for tractability (Hull 1993), these techniques discretize the planning horizon into a fixed number of time steps $t \in \{1, \ldots, T\}$ and the random variables into a finite number of outcomes.
Our goal is to employ a similar discretization of time and randomness. But instead of estimating fair market value, we are interested in analyzing alternative investment strategies over extended planning periods -- 10 or even 20 years. The basic decision problem is called asset allocation. The universe of investments is divided into a relatively small number of generic asset categories (US stocks, bonds, cash, international stocks) for the purposes of managing portfolio risks. There is much evidence that the most critical investment decision involves selecting the proportion of assets placed into these categories, especially for investors who are well diversified and for large institutions such as insurance companies and pension plans.
In our model, we assume that future economic conditions are presented in the form of a modest number of scenarios. A scenario is described as a complete path of all economic factors and accompanying returns for the assets over the planning horizon. Asset allocation decisions are made at the end of each of the T time periods. They cannot be changed during the period, but they can and do respond to the changing state of the world (including the condition of our portfolio) at the beginning of the next period. To keep the problem in a form that can be understood by financial planners and investment managers, we have assumed a decision rule that is intuitive and commonly used -- the fixed-mix rule. Under this rule, the portfolio has a predetermined mix at the beginning of each period -- for instance, 60% stock, 30% bonds, and 10% cash. The portfolio must be rebalanced by selling and buying assets until the proper proportions are attained. This strategy has been used successfully by Towers Perrin (Mulvey 1995) and other long term investment managers. See Perold and Sharpe (1988) for a detailed description of the fixed-mix and other dynamic strategies for asset allocation.
In this paper, we show that Tabu Search can be combined with Variable Scaling to solve important cases of the asset allocation problem. These decisions are difficult due to nonconvexities in the objective function. Also, the inclusion of real world considerations, such as taxes and other transaction costs (Mulvey 1993), may cause difficulties for continuous optimization algorithms. In contrast, Tabu Search takes advantage of the discretization of the solution space.
2. The Fixed-Mix Dynamic Control Problem
In this section we provide the mathematical description of the fixed-mix control problem, a dynamic stochastic control model. Other reallocation rules could be employed, such as the life cycle concept; these lead to similar nonconvex optimization problems.
To state the model, we define the following sets:

a set of discrete time steps in which the portfolio will be rebalanced: $t = \{1, 2, \ldots, T\}$;

a set of asset classes: $i = \{1, 2, \ldots, I\}$;

a set of scenarios, each of which describes the full economic situation for each asset category $i$, in each time period $t$: $s = \{1, 2, \ldots, S\}$.
Define the following decision variables:

$x_{i,t}^s$: amount of money (in dollars) invested in asset category $i$, in time period $t$, for scenario $s$;

$w_t^s$: wealth (in dollars) at the beginning of time period $t$ under scenario $s$;

$\lambda_i$: fraction of the wealth invested in asset category $i$ (constant across time), $0 \le \lambda_i \le 1$, $i = 1, 2, \ldots, I$, and $\sum_{i=1}^{I} \lambda_i = 1$ (not allowing short sales).
Define the following parameters:

$W_1$: initial wealth at the beginning of time period 1 (in dollars);

$r_{i,t}^s$: percentage return for asset $i$, in time period $t$, under scenario $s$;

$\bar{r}_t^s$: average return in time period $t$ under scenario $s$,

$$\bar{r}_t^s = \frac{\sum_{i=1}^{I} x_{i,t}^s \, r_{i,t}^s}{\sum_{i=1}^{I} x_{i,t}^s} ;$$

$p_s$: probability that scenario $s$ occurs, $\sum_{s=1}^{S} p_s = 1$;

$t_i$: percentage transaction cost for asset class $i$ -- to simplify the presentation, symmetric transaction costs are assumed (i.e. the cost of selling equals the cost of buying); the implemented model can also handle nonsymmetric transaction costs.
The fixed-mix control rule ensures that a fixed percentage of the wealth is invested in each asset category. Wealth at the beginning of the second period will be

$$w_2^s = \sum_{i=1}^{I} (1 + r_{i,1}^s)\, x_{i,1}^s \;-\; \sum_{i=1}^{I} t_i \left| r_{i,1}^s - \bar{r}_1^s \right| x_{i,1}^s . \qquad (2)$$

The second term of (2) takes the transaction cost into account, assuming linear transaction costs. To compute these values, the average return, $\bar{r}_t^s$, is calculated for each time period $t$ and for each scenario $s$. A portion of the asset classes with returns higher than the average return is sold, while the asset classes with below average returns are bought. The equation for rebalancing at the beginning of period $t$ is

$$x_{i,t}^s = \lambda_i\, w_t^s \qquad \forall i,\ \forall s,\ \text{and } t = \{2, 3, \ldots, T\}, \qquad (3)$$

$$w_t^s = \sum_{i=1}^{I} (1 + r_{i,t-1}^s)\, x_{i,t-1}^s \;-\; \sum_{i=1}^{I} t_i \left| r_{i,t-1}^s - \bar{r}_{t-1}^s \right| x_{i,t-1}^s \qquad \forall s \ \text{and } t = \{2, 3, \ldots, T\}. \qquad (4)$$
Typically the model includes linear constraints on the asset mix, which we for now assume constant over time. These constraints can take any linear form, for instance upper and lower bounds on the proportions. A typical example would be for investors to limit the international exposure to some value (say 30%).
The objective function depends on the investor's risk attitude. A multiperiod extension of the Markowitz mean-variance model is applied for this analysis. Two terms are needed -- the average total wealth, Mean(w_T), and the variance of the total wealth, Var(w_T), across the scenarios at the end of the planning horizon (at the end of time period T). Other objective functions can also be used. For example, we can maximize the von Neumann-Morgenstern expected utility of the wealth at time T (Keeney and Raiffa 1993). Alternatively, we can maximize a discounted utility function (Ziemba and Vickson 1975). Other objective function forms, such as multiattribute utility functions, can also be used to model multiple investor goals (Keeney and Raiffa 1993).
Our financial planning model is thus

$$\max \; Z = \beta\, \mathrm{Mean}(w_T) - (1 - \beta)\, \mathrm{Var}(w_T)$$

$$\text{s.t.} \quad \mathrm{Mean}(w_T) = \sum_{s=1}^{S} p_s\, w_T^s ,$$

$$\mathrm{Var}(w_T) = \sum_{s=1}^{S} p_s \left[ w_T^s - \mathrm{Mean}(w_T) \right]^2 ,$$

equations (1) - (5), and $0 \le \beta \le 1$.
Mean(w_T) measures the expected profit from the investment strategy, while Var(w_T) measures the risk of the strategy. A tradeoff exists between the expected profit and the risk of an investment strategy. By solving the problem while allowing β to vary between 0 and 1 we obtain the multiperiod efficient frontier. The quadratic equality constraint (3) causes the fixed-mix model to be nonconvex.
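To make the recursion in equations (2)-(4) and the objective concrete, the following Python sketch evaluates Z for a candidate fixed-mix vector by simulating wealth scenario by scenario. It is illustrative only: the function name, the array shapes, and the use of NumPy are our assumptions, not the authors' implementation.

```python
import numpy as np

def fixed_mix_objective(lam, returns, probs, tcost, w1=1.0, beta=0.5):
    """Z = beta * Mean(w_T) - (1 - beta) * Var(w_T) for a fixed-mix rule.

    lam     : (I,) target proportions lambda_i, summing to 1
    returns : (S, T, I) array of percentage returns r[s, t, i]
    probs   : (S,) scenario probabilities p_s
    tcost   : (I,) symmetric transaction-cost rates t_i
    """
    lam = np.asarray(lam, dtype=float)
    probs = np.asarray(probs, dtype=float)
    tcost = np.asarray(tcost, dtype=float)
    S, T, I = returns.shape
    w_T = np.empty(S)
    for s in range(S):
        w = w1
        for t in range(T):
            x = lam * w                       # rebalance to the fixed mix (eq. (3))
            r = returns[s, t]
            grown = (1.0 + r) * x             # asset values after this period's returns
            r_bar = np.dot(x, r) / x.sum()    # average return for period t, scenario s
            cost = np.sum(tcost * np.abs(r - r_bar) * x)   # linear rebalancing cost (eqs. (2), (4))
            w = grown.sum() - cost            # wealth at the start of the next period
        w_T[s] = w
    mean_wT = np.dot(probs, w_T)
    var_wT = np.dot(probs, (w_T - mean_wT) ** 2)
    return beta * mean_wT - (1.0 - beta) * var_wT
```

The search procedures of Sections 3 and 4 treat such an evaluation as a black-box objective f(λ) over candidate mixes.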
3. Specializations to Tabu Search
An advantage in solving the fixed-mix control problem with Tabu Search is that each application of Variable Scaling effectively discretizes the solution space. We thus define the solution space as a discrete set of possible investment proportions in each asset category. Let us define the following vectors:

λ(i): proportion of the total investment in asset category i;

l(i): lower bound on the investment proportion in asset category i;

u(i): upper bound on the investment proportion in asset category i.

Restricting the investment proportions in each asset category to be an integer percentage share of the total investment means that we allow λ(i) to take the following percentage values:

l(i), l(i) + 1, l(i) + 2, ..., u(i) - 1, u(i),

where 0 < l(i) < u(i) < 100 (not allowing short sales) and u(i) and l(i) are integer.
Tabu Search operates under the assumption that a neighborhood can be constructed to identify adjacent solutions. The fixed-mix problem is constrained such that the sum of all the investment proportions must equal 100%. We define a neighbor solution by choosing two variables, λ(up) and λ(down), and assigning them new values:

λ(up) = λ(up) + delta, and
λ(down) = λ(down) - delta,

such that λ(up) < u(up) and λ(down) > l(down), where up is the index of the variable increasing its value, down is the index of the variable decreasing its value, and delta > 0 is the stepsize for the neighborhood.

For a given discrete solution space, the following neighborhood search algorithm (Glover and Laguna 1993) is an attempt to solve the fixed-mix control problem. This approach gives the global optimal solution (in the discrete solution space) if the problem is convex. However, if the problem is nonconvex, as is the case with the fixed-mix control problem, the neighborhood search algorithm will terminate at the first (discrete) local optimal solution. The area of the solution space that has been searched will be limited using this approach.
Step 1: Initialization.
Select a feasible starting solution, λ_start.
Set current_solution = λ_start.
Calculate the objective value of this solution, f(λ_start), and set best_objective = f(λ_start).

Step 2: Stopping criterion.
Calculate the objective value of the neighbors of current_solution.
If no neighbors have a better objective value than best_objective, stop.
Otherwise, go to Step 3.

Step 3: Update.
Select a neighbor, λ_next, with the best objective value, f(λ_next) (or select any neighbor that gives an improving objective value).
Update: current_solution = λ_next, best_objective = f(λ_next).
Go to Step 2.
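A compact Python sketch of this neighborhood and of Steps 1-3 is given below. The helper names and the representation of a solution as a tuple of integer percentages are our own illustrative choices; f stands for an objective evaluation over integer percentages (for example, a wrapper around fixed_mix_objective above that divides by 100).

```python
def neighbors(lam, lower, upper, delta):
    """All moves that shift `delta` percentage points from one category (down) to another (up)."""
    I = len(lam)
    for down in range(I):
        if lam[down] - delta < lower[down]:
            continue
        for up in range(I):
            if up == down or lam[up] + delta > upper[up]:
                continue
            nxt = list(lam)
            nxt[up] += delta
            nxt[down] -= delta
            yield up, down, tuple(nxt)

def local_search(f, lam_start, lower, upper, delta=1):
    """Steps 1-3: move to the best neighbor until no neighbor improves best_objective."""
    current, best_objective = tuple(lam_start), f(lam_start)
    while True:
        candidates = [(f(nxt), nxt) for _, _, nxt in neighbors(current, lower, upper, delta)]
        if not candidates:
            break
        value, nxt = max(candidates)
        if value <= best_objective:            # Step 2: stop at a (discrete) local optimum
            break
        current, best_objective = nxt, value   # Step 3: update and repeat
    return current, best_objective
```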
Due to nonconvexities, we need a more intelligent method for finding a global solution. Tabu Search extracts useful information from previous moves to direct the search to productive areas of the solution space. It uses specialized memory structures to characterize moves with certain attributes as tabu, and assigns other moves values or penalties in order to guide the search through the solution space without becoming trapped at local optima.

We define the total neighborhood of the current solution, N(current_solution), as all possible ways of increasing the amount invested in one asset category and decreasing the amount invested in another asset category, while satisfying the constraints. This neighborhood is modified to N(H, current_solution), where H represents the history of the moves in the search process. Therefore, the search history determines which solutions might be reached from the current point. In short term Tabu Search strategies, N(H, current_solution) is typically a subset of N(current_solution). In intermediate and longer term strategies N(H, current_solution) may contain solutions not in N(current_solution). See Glover (1995) for further details.
3.1 The Memory Structures
Tabu Search makes use of adaptive (recency based and frequency based) memory structures to guide the search. The recency and the frequency memory structures each consist of two parts. The first part of the recency based memory keeps track of the most recent iteration at which each variable received each of the values assigned to it in the past (tabu_time(i, value)). The second part of the recency based memory keeps track of the most recent iteration at which each variable changed its value (tabu_last_time(i)).

The first part of the frequency based memory keeps track of the number of times each variable has been assigned its specific values (tabu_count(i, value)). The second part measures the number of iterations the variables have resided at the assigned values (tabu_res(i, value)). Let duration(i) measure the number of iterations since the variable received its current value.
3.2 Move Attributes
To make use of these memory structures, Tabu Search uses the concept of move attributes. Generally a move λ_next(up) = λ(up) + delta and λ_next(down) = λ(down) - delta has four attributes:

(1) λ(up): value of increasing variable, prior to increase;
(2) λ_next(up): value of increasing variable, after increase;
(3) λ(down): value of decreasing variable, prior to decrease;
(4) λ_next(down): value of decreasing variable, after decrease.

If a move attribute is tabu, this attribute is said to be tabu-active. A move is classified tabu as a function of the tabu status of its attributes. In the next section we show which combinations of tabu-active move attributes classify a move as tabu.
3.3 Tabu Rules Based on Recency Memory Structure
An important parameter of this process is tabu_tenure, that is, the number of iterations in which a move attribute is tabu-active. For our approach, tabu_tenure will have two values, t_from and t_to, where t_from is the number of iterations an assignment λ_i = value is tabu-active, to discourage moving λ_i "from" its present value (now_value), and t_to is the number of iterations an assignment λ_i = value is tabu-active, to discourage moving λ_i "to" a specific value (next_value) (Glover and Laguna 1993).
An attribute (i, value) is declared tabu-active when

tabu_time(i, value) ≠ 0, tabu_time(i, value) ≥ current_iteration - t,

where t = t_to if value = next_value. (This classification will discourage a variable from returning to a value it recently has moved away from.) Also, an attribute i (implicitly, an attribute (i, value) for value = now_value) is declared tabu-active when

tabu_last_time(i) ≠ 0, tabu_last_time(i) ≥ current_iteration - t,

where t = t_from. (This classification will discourage moving a variable from its present value if it recently received this value.)
Both dynamic and static rules can be applied to set tabu_tenure. For our purposes, we have used simple static rules, such as setting t_from to a constant value between 1 and 3 and t_to to a constant value between 4 and 6. Our choice of values for t_to and t_from is based on preliminary experimentation. We choose t_to relative to n as a rule of thumb in deciding the size of this parameter (in our test cases n = 8).

In the implementation we introduce one other kind of tabu-active classification: an attribute (i, value) is "strongly from tabu-active" if

tabu_last_time(i) ≥ current_iteration - 1.

For this study, the basic rule for defining a move tabu is that one of the following is true:

i) either attribute (1) or attribute (3) is strongly tabu-active;
ii) two or more of the remaining attributes are tabu-active.
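The recency-based rules above can be collected into a small bookkeeping class. The sketch below is one possible reading of the memory structures and of rules i) and ii); the class name, the dictionary layout, and the exact combination of the four attributes are our assumptions rather than the authors' code.

```python
class RecencyMemory:
    """Recency-based memory (section 3.1) and tabu classification (sections 3.2-3.3)."""

    def __init__(self, t_from=2, t_to=4):
        self.t_from, self.t_to = t_from, t_to
        self.tabu_time = {}        # (i, value) -> iteration at which variable i left `value`
        self.tabu_last_time = {}   # i -> most recent iteration at which variable i changed value

    def record_move(self, up, old_up, down, old_down, iteration):
        """Register the move that changed variables `up` and `down` away from their old values."""
        for i, old in ((up, old_up), (down, old_down)):
            self.tabu_time[(i, old)] = iteration
            self.tabu_last_time[i] = iteration

    def to_active(self, i, value, iteration):
        """Discourage returning variable i to a value it recently moved away from."""
        t = self.tabu_time.get((i, value), 0)
        return t != 0 and t >= iteration - self.t_to

    def from_active(self, i, iteration):
        """Discourage moving variable i away from a value it recently received."""
        t = self.tabu_last_time.get(i, 0)
        return t != 0 and t >= iteration - self.t_from

    def strongly_from_active(self, i, iteration):
        t = self.tabu_last_time.get(i, 0)
        return t != 0 and t >= iteration - 1

    def move_is_tabu(self, up, new_up, down, new_down, iteration):
        # Rule i): attribute (1) or (3) -- the current values of `up` and `down` -- strongly tabu-active.
        if self.strongly_from_active(up, iteration) or self.strongly_from_active(down, iteration):
            return True
        # Rule ii): two or more of the attributes tabu-active (one possible interpretation).
        active = sum([self.from_active(up, iteration),
                      self.from_active(down, iteration),
                      self.to_active(up, new_up, iteration),
                      self.to_active(down, new_down, iteration)])
        return active >= 2
```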
3.4 Tabu Rules Based on Frequency Memory Structure
The frequency memory structure is usually applied by assigning penalties and inducements to the choice of particular moves. Let S denote the sequence of solutions generated by the search process up to the present point and define two subsets, S1 and S2, which contain high quality and low quality solutions (in terms of objective function values), respectively. A high residence frequency in the subsequence S1 indicates that an attribute is frequently a participant in high quality solutions, while a high residence frequency in the subsequence S2 indicates that an attribute is frequently a participant in low quality solutions. These relationships guide the search in the direction of high quality solutions, by increasing or decreasing the incentive to choose particular moves based on the quality of past solutions that contain attributes provided by these moves. This constitutes an instance of a Tabu Search intensification process. Define another set, S3, containing both high and low quality solutions. Assigning a penalty to high frequency attributes in this set pushes the search into new regions, creating a diversification process. A high transition frequency might indicate that an attribute is a "crack filler," which is an attribute that alternates in and out of the solution to "perform a fine tuning function" (Glover and Laguna 1993).
4. Solution Strategy
Tabu Search methods often do not "turn on" their memory or restrictions until after reaching a first local optimum. In our solution strategy we take advantage of an efficient method to find the local optimum, namely a Variable Scaling approach. Our Variable Scaling procedure is an instance of a candidate list method, which is the name given to a class of strategies used in Tabu Search (by extension of related procedures sometimes used in optimization) to generate subsets of good moves without the effort of examining all moves possible. In the present setting, where the number of values available to each variable is effectively infinite, we elect a candidate list strategy that "scales" the variables to receive only a relatively small number of discrete values. However, we allow the scaling to be variable, permitting the scaling interval to change each time a local optimum is reached in the restricted part of the neighborhood defined by the current scaling. When no change of scaling (from the options specified) discloses an improving move -- so that the current solution is truly a local optimum relative to the scalings considered -- we apply a recency based Tabu Search approach to drive the solution away from its current location, and then again seek a local optimum by our Variable Scaling approach. The Variable Scaling approach is outlined in section 4.1, while the complete strategy is outlined in section 4.2.
4.1 Variable Scaling Approach
We define a set of stepsizes (or scaling intervals), each of which gives rise to a restricted neighborhood. A local neighborhood search is done over a given restricted neighborhood until no improvement is possible. When local optimality is reached in that neighborhood (for one particular stepsize), we switch to another restricted neighborhood via a new stepsize. We choose a set of stepsizes in decreasing order. (For our purposes, typically the biggest stepsize is 5% and the smallest stepsize is 1%.) The smallest stepsize determines the accuracy of identifying a solution as a local optimum.

For a convex problem, applying such a candidate list approach based on Variable Scaling speeds up the search since we move in bigger steps toward the optimal solution in the beginning of the search, and only as we get "close to the top" (given we are maximizing) do we decrease the stepsize. For a nonconvex problem (like ours) with several local optima, the approach can reduce the number of necessary iterations to get to a local optimum, and it can also move the search away from a local optimum. If one stepsize is "stuck" in a local solution, a change in the stepsize can take us away from the local optimum. All of these ideas are incorporated in Algorithm: VARIABLE SCALING.
Algorithm: VARIABLE SCALING

Step 0: Initialize.
Construct a stepsize list: stepsize_1, stepsize_2, ..., stepsize_NS, where NS is the number of stepsizes. Choose stepsizes in decreasing order.
Set stepsize = stepsize_1.
Construct an initial solution, λ_start. Let current_solution = λ_start.
Set current_iteration = 0.
Let best_objective = f(λ_start), the objective value of λ_start.

Step 1: Calculate neighbors.

Step 2: Evaluate neighbors.
If the objective value of at least 1 of the neighbors is better than best_objective:
Pick the neighbor with the best objective value, λ_next.
Update current_solution = λ_next and best_objective = f(λ_next);
Set current_iteration = current_iteration + 1;
Go to Step 1.
If none of the neighbors improves the solution:
If stepsize ≠ stepsize_NS, use the next stepsize from the list and go to Step 1;
If stepsize = stepsize_NS, check if the objective has improved for any of the last NS stepsizes: If the objective has improved, set stepsize to stepsize_1, and go to Step 1. If it has not improved, go to Step 3.

Step 3: STOP. Local optimal solution is obtained.
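A Python sketch of this loop over decreasing stepsizes follows. It reuses the local_search routine sketched in section 3, and it repeats a full pass over the stepsize list whenever any stepsize in the previous pass improved the objective; this pass-based restart is our simplification of the restart test in Step 2.

```python
def variable_scaling(f, lam_start, lower, upper, stepsizes=(5, 3, 1)):
    """Algorithm VARIABLE SCALING (sketch): descend through the stepsize list,
    restarting from the largest stepsize as long as some stepsize still improves."""
    current, best_objective = tuple(lam_start), f(lam_start)
    improved_in_pass = True
    while improved_in_pass:
        improved_in_pass = False
        for delta in stepsizes:                        # stepsize_1, ..., stepsize_NS
            current, value = local_search(f, current, lower, upper, delta)
            if value > best_objective:
                best_objective, improved_in_pass = value, True
    return current, best_objective
```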
4.2 The Complete Algorithm
The implemented algorithm is:

Algorithm: COMPLETE ALGORITHM

Step 0, Step 1 and Step 2 from the Variable Scaling approach.

Step 3: Incorporate Recency Based Tabu Search (diversifying search).
3.1: Set stepsize = stepsize_div (stepsize used for the diversifying search).
Set div_it_counter = 0.
3.2: Set div_it_counter = div_it_counter + 1.
For all neighborhood moves do
check if move is tabu.
if tabu, go to next neighbor;
if not tabu, calculate the objective value of the move.
3.3: Pick the (nontabu) neighbor with the best objective value. Denote this neighbor λ_best_div, and its objective value f(λ_best_div).
If f(λ_best_div) > best_objective, set stepsize = stepsize_1 and go to Step 1 in the Variable Scaling approach.
If f(λ_best_div) < best_objective, go to Step 3.4.
3.4: If div_it_counter < max_div_it, update tabu status (as done in Steps 1-3 in memory updating in section 3.1) and go to Step 3.2.
If div_it_counter = max_div_it, go to Step 4.

Step 4: STOP. The solution is equal or sufficiently close to the true (discrete) global optimal solution.
The procedure imposes the Tabu Search approach in conjunction with the candidate list strategy of Variable Scaling. This is just one of many ways that Tabu Search can be coordinated with a candidate list strategy. (More typical is to subject such a strategy directly to the guidance of Tabu Search memory, rather than invoking this guidance only at particular stages and for particular neighborhood instances.) Nevertheless, we have found this approach convenient for our present purposes. Specifically, in this alternating approach, we switch from the Variable Scaling method (over its chosen set of scalings) to the Tabu Search method (over another set of scalings) whenever Variable Scaling no longer improves the current solution. If Tabu Search generates a solution better than the one obtained from the Variable Scaling, we return to Variable Scaling, with the solution from Tabu Search as an initial solution. If Tabu Search fails to generate an improved solution, the algorithm will stop when the maximum number of iterations is reached. In this procedure the choice of the stepsizes is crucial for the algorithm's success. We typically choose small stepsizes in the range of 0.5% to 5% for the Variable Scaling, while a bigger one, typically 5% to 15%, is applied for the Tabu Search.
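Putting the pieces together, the sketch below alternates the Variable Scaling routine with a recency-restricted diversifying search at a coarser stepsize, in the spirit of the COMPLETE ALGORITHM. It relies on the neighbors, variable_scaling, and RecencyMemory sketches above; the reset of the memory after a successful diversification and the acceptance of non-improving diversifying moves are illustrative choices on our part.

```python
def complete_algorithm(f, lam_start, lower, upper,
                       vs_steps=(5, 3, 1), stepsize_div=10,
                       t_from=2, t_to=4, max_div_it=25):
    """Alternate Variable Scaling with a tabu-restricted diversifying search (section 4.2)."""
    current, best_objective = variable_scaling(f, lam_start, lower, upper, vs_steps)
    memory, iteration, div_it_counter = RecencyMemory(t_from, t_to), 0, 0
    while div_it_counter < max_div_it:
        div_it_counter += 1
        iteration += 1
        # Steps 3.2/3.3: best non-tabu neighbor at the diversifying stepsize.
        best_move, best_value = None, float("-inf")
        for up, down, nxt in neighbors(current, lower, upper, stepsize_div):
            if memory.move_is_tabu(up, nxt[up], down, nxt[down], iteration):
                continue
            value = f(nxt)
            if value > best_value:
                best_move, best_value = (up, down, nxt), value
        if best_move is None:
            break                                     # every diversifying move is tabu
        up, down, nxt = best_move
        memory.record_move(up, current[up], down, current[down], iteration)
        if best_value > best_objective:
            # Step 3.3: an improving region was found -- return to Variable Scaling.
            current, best_objective = variable_scaling(f, nxt, lower, upper, vs_steps)
            memory, iteration, div_it_counter = RecencyMemory(t_from, t_to), 0, 0
        else:
            current = nxt                             # Step 3.4: keep diversifying
    return current, best_objective
```

Calling such a routine for 22 values of β, as in section 5, would then trace out the multiperiod efficient frontier.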
5. Computational Results
The algorithm described in section 4.2 is applied to an investment problem with I = 8 asset categories, T = 20 time periods and S = 100 scenarios. The 8 different asset categories are cash equivalents, Treasury bonds, large capitalization US stocks, international bonds, international stocks, real estate, government/corporate bond index, and small capitalization US stocks. One hundred scenarios were generated by the technique introduced in Mulvey (1995). Each scenario is given equal probability p_s = 1/S = 1%. Each scenario consists of returns for each asset, in each time period. Hence the total number of returns generated is equal to 16000. The initial wealth is set equal to unity.

In all the experiments, each point on the multiperiod efficient frontier is obtained as explained in section 4.2. The entire efficient frontier is obtained by solving the problem for 22 values of β. Through all the experiments, we have a set of basic test parameters.¹

¹ Number of stepsizes for the Variable Scaling approach (NS): 3; Variable Scaling stepsizes: stepsize_1 = 5%, stepsize_2 = 3%, stepsize_3 = 1%; stepsize for the Tabu Search part of the algorithm (stepsize_div): 10%; maximum number of diversifying steps (max_div_it): 25; tabu tenures: t_from = 2, t_to = 4.
5.1 Comparison with Global Method
Our solutions are compared with the solutions obtained from the global method described in Androulakis et al. (1994). This deterministic global optimization algorithm guarantees finite ε-convergence to the global optimum. In this case we assume no transaction cost or tax. The solutions obtained by our approach are very close to the solutions obtained by the global method and are often slightly better. The optimal mixes generated by the two methods are indeed practically identical. These results are obtained despite the presence of nonconvexities.
5.2 Including Transaction Costs and Tax
We assumed percentage transaction costs for the asset categories.² The tax rate is assumed to be 28%. Figure 1 shows the efficient frontiers obtained by solving the model that takes tax and transaction cost into account (tax-model) and the one that does not (no-tax model). As expected, the efficient frontier obtained by the tax-model is tilted down. More surprisingly, perhaps, the efficient frontiers cross in the low end of the variance area (for low β's). Transaction costs and taxes dampen the variance of the expected value and therefore a lower variance can be achieved for the tax-model. From Figure 2 we see that for β > 0.94 in the model where transaction costs and tax are included, it is optimal to have all the funds in the asset category with the highest expected value. This is in contrast to the case without taxes or transaction costs, where a more diversified optimal solution was found as soon as the variance was assigned a weight in the objective function. There is an explanation for this phenomenon. When all of the funds are concentrated in one asset category, no buying or selling is necessary to update the portfolio at the end of each time period. When the funds are split between asset categories, however, trading must be done at the end of each time period to reconstruct the portfolio to the predetermined weights of the fixed-mix rule. This implies cash outflows because of the transaction costs and taxes on assets sold for profit. So for β > 0.94 the gain from reduced variance by diversifying is offset by a larger loss in the expected value.

² Cash equivalents: 0%; Treasury bonds: 0.25%; large capitalization stocks: 0.25%; international bonds: 0.5%; international stocks: 1.0%; real estate: 6.0%; government/corporate bond index: 1.0%; small capitalization stocks: 1.0%.
Figure 1: Efficient frontiers for the model with and without tax and transaction costs. Notice how the efficient frontier for the tax-model is tilted down (as one would expect). Also see how the efficient frontiers cross in the low variance end.
Figure 2: Optimal solutions for the model with and without tax and transaction costs. Notice how the solutions obtained from the tax-model are less diversified for high β's (i.e. in the high variance/high expected value end of the efficient frontier), and more diversified for low β's.
At the other end of the efficient frontier (low β, i.e. low variance), we see that the tax-model recommends solutions that are more diversified than the solutions from the no-tax model. The tax-model recommends portfolios consisting of all 8 asset categories for β ≤ 0.4, with no dominating asset category (except for β = 0). The no-tax model, however, recommends a portfolio concentrated in fewer asset categories. There is a simple explanation for this result. With many asset classes, the return in some of the asset classes will be close to the average return. Little trading is required in these asset categories to rebalance the portfolio to the prefixed weights. More trading is done in the asset categories with returns far from average, but these asset categories are a fraction of our total portfolio. This is not the case for a portfolio concentrated in fewer asset categories. Consider, for instance, the portfolio recommended by the no-tax model for β = 0.2 -- approximately 80% in cash equivalents and Treasury bonds. Since it is unlikely that both dominating asset categories have close to average returns, more trading is needed to rebalance the portfolio. This portfolio is likely to have more cash outflows from transaction costs and taxes than the more diversified one -- the gain in reduced variance by concentrating the portfolio in cash equivalents and Treasury bonds is offset by a loss in expected value caused by the increase in cash outflows.

As expected, the solution time increases considerably when transaction costs and taxes are included in the model. The average solution time per β-problem for the model with basic test parameters (see beginning of this section) is 46.7 CPU seconds (just over 17 CPU minutes to obtain the entire efficient frontier) on a Silicon Graphics Iris workstation. The results are highly attractive, particularly given that we are developing a long term projection system.
5.3 The Effect of Including Tabu Search Restrictions
To see the effect of the Tabu Search restrictions we consider the tax problem and compare the solutions of our approach (COMPLETE ALGORITHM) with the solutions of a pure Variable Scaling approach (the candidate list strategy used in our method). The largest stepsize in the pure Variable Scaling approach is set equal to the stepsize of the Tabu Search part of our algorithm (stepsize_div = 10%). The other stepsizes are also set equal for both approaches (5%, 3% and 1%). Hence, the sole difference between the two approaches is the recency memory restrictions on the 10% search applied in our algorithm. For β > 0.5 the solutions are identical. The pure Variable Scaling approach "gets stuck" in a local solution for β < 0.4, in contrast to the approach that includes Tabu Search memory guidance.
6. Conclusions
Tabu Search is an efficient method for obtaining the efficient frontier for the fixed-mix investment problem. The computational results show that the solutions obtained by the Tabu Search are very close to (often slightly better than) the ε-tolerance global optimal solutions obtained by the method of Androulakis et al. (1994) for the case with no taxes or transaction costs. In an expanded model, which addresses transaction costs and taxes for greater realism, our approach continues to obtain optimal solutions efficiently. The expanded model is beyond the capability of the global optimization approach, and its complicating features in general pose significant difficulties to global optimization solvers based on currently standard designs.

Some areas for future research are: (1) develop and test related dynamic stochastic control strategies (with nonconvexities); (2) design an approximation scheme for updating information between iterations in order to improve computational efficiencies; and (3) incorporate additional strategic elements of Tabu Search. Since the discretization of the solution space depends upon an investor's circumstances, research in this area is critical for successful use of this methodology.
7. References
J.P. Androulakis et al., Solving stochastic control problems in finance via global optimization, Working paper SOR-94-01, Princeton University (1994).

M.J. Brennan and E.S. Schwartz, An equilibrium model of bond pricing and a test of market efficiency, Journal of Financial and Quantitative Analysis, 17 (1982) 75.

F. Glover, Tabu search: fundamentals and usage, Working paper, University of Colorado, Boulder (1995).

F. Glover and M. Laguna, Tabu search, in: Heuristic Techniques for Combinatorial Problems, ed. C. Reeves (1993).

J.C. Hull, Options, Futures and Other Derivative Securities, (Prentice Hall, 1993).

J.E. Ingersoll, Jr., Theory of Financial Decision Making, (Rowman & Littlefield, 1987).

R. Jarrow, Pricing interest rate options, in: Finance, eds. R. Jarrow, V. Maksimovic, and W.T. Ziemba, (North Holland, Amsterdam, 1995).

R. Keeney and H. Raiffa, Decisions with Multiple Objectives: Preferences and Value Tradeoffs, (John Wiley, New York, 1976; reprinted by Cambridge University Press, 1993).

J.M. Mulvey, Incorporating transaction costs in models for asset allocation, in: Financial Optimization, ed. S. Zenios, (Cambridge University Press, 1993).

J.M. Mulvey, Generating scenarios for the Towers Perrin investment system, Working paper SOR-95-04, Princeton University (to appear in Interfaces, 1995).

A.F. Perold and W.F. Sharpe, Dynamic strategies for asset allocation, Financial Analysts Journal, (1988) 16.

W.T. Ziemba and R.G. Vickson (eds.), Stochastic Optimization Models in Finance, (Academic Press, New York, 1975).