Artif Life Robotics (1999) 3:27-31 © ISAROB 1999
R.G. Palmer · W. Brian Arthur · John H. Holland · Blake LeBaron
An artificial stock market
Received: January 19, 1998 / Accepted: February 19, 1998
Abstract The Santa Fe Artificial Stock Market consists of a central computational market and a number of artificially intelligent agents. The agents choose between investing in a stock and leaving their money in the bank, which pays a fixed interest rate. The stock pays a stochastic dividend and has a price which fluctuates according to agent demand. The agents make their investment decisions by attempting to forecast the future return on the stock, using genetic algorithms to generate, test, and evolve predictive rules. The artificial market shows two distinct regimes of behavior, depending on parameter settings and initial conditions. One regime corresponds to the theoretically predicted rational expectations behavior, with low overall trading volume, uncorrelated price series, and no possibility of technical trading. The other regime is more complex, and corresponds to realistic market behavior, with high trading volume, high intermittent volatility (including GARCH behavior), bubbles and crashes, and the presence of technical trading. One parameter that can be used to control the regime is the exploration rate, which governs how rapidly the agents explore new hypotheses with their genetic algorithms. At a low exploration rate the market settles into the rational expectations equilibrium. At a high exploration rate it falls into the more realistic complex regime. The transition is fairly sharp, but close to the boundary the outcome depends on the agents' initial "beliefs" - if they believe in rational expectations they occur and are a local attractor; otherwise the market evolves into the complex regime.

Key words Artificial markets · Volatility · Technical trading · Heterogeneous information · Rational expectations

R.G. Palmer (✉)
Department of Physics, Box 90305, Duke University, Durham, NC 27708, USA

W.B. Arthur
Santa Fe Institute, NM 87501, USA

J.H. Holland
University of Michigan, MI 48109, USA

B. LeBaron
Graduate School of International Economics and Finance, Brandeis University, MA 02453-2728, USA

This work was presented, in part, at the Third International Symposium on Artificial Life and Robotics, Oita, Japan, January 19-21, 1998
Today's standard economic theory, general equilibrium theory, or rational expectations, says in its shortest statement that agents - traders, firms, individuals, etc. - deduce their optimum behavior by logical processes from their circumstances. It further assumes that the agents have complete information, that they are perfectly rational, that they have common expectations, and that they know that everyone else has these properties too.
One of the consequences of this approach is that almost everything is decided at time zero. The agents first work out how the whole future should be, and then the world just plays itself out. There are no dynamics, no learning, and no evolution.
When this rational expectations approach is applied to a stock market,1 it implies that there should not be anything like market moods or psychologies. There should not be bubbles, crashes, or bursts, and volatility should be low. There should not be much trading volume; the only reason one person would trade with another would be if something happened externally, changing the assets available for investment. There should not be money to be made by technical trading, i.e., simply extrapolating patterns in a time series of price, because any regularity in the price series should have already been arbitraged away by the rational traders.
These ideas do not fit the facts of real stock markets very well. There do seem to be bubbles, crashes, and moods of the market. The volume and volatility are much higher than can be accounted for by external changes, and people on Wall Street do seem to make money by technical trading.2
There are some ways to modify the theory or its application to attempt to come to terms with these discrepancies, including a number of ideas of bounded rationality, and theories involving noise traders.3 However, none of these theories seem wholly convincing,4 and we describe here a very different approach.
In starting this project in 1989, 5 we asked what agents
do in markets, and more generally in the world. Our
answer, then and now, is that they (a) classify whatever
they see, (b) notice patterns, (c) generalize and form inter-
nal models, ideas, or rules of thumb, and (d) act on the basis
of those internal models. We decided to build a stock market
along these lines in the computer, whereby agents would
notice the patterns in the price (and in any other data they
had access to), form models, and then trade on that basis.
Of course, the agents have to evaluate and adapt their
internal models after seeing how well they work. Actually,
each agent has a number of different ways of predicting
the future, and is continually evaluating and comparing
them. The ones that work well gain more weight and are
used more often. The ones that fail are eventually thrown
out and replaced.
The agents buy and sell stock in the market, and thereby
affect the stock price. What the agents do affects the
market. What the market does affects the agents. So the
market behavior emerges from the collective behavior of
the agents, who are themselves coevolving.
From an economics viewpoint, the aim of our work is to
examine a market of interacting agents that can learn with
an open set of possibilities, and see whether it converges
to a rational expectations equilibrium or to something
else. The core result is that there are two equilibria. The
model can show rational expectations behavior, and it can
show realistic market behavior, but they are two separate
regimes.

Structure of the market
The basic structure of the model is N agents (i = 1, 2, ...,
N) interacting with a central market. Typically N = 50-100.
There may be several types of agents. In contrast to many
other interacting agent models, the agents do not interact
directly with each other, but only via the market.
In the market there is a single stock, with price p(t) per share at time t. Time is discrete (t = 1, 2, ...); period t lasts from time t until t + 1. The stock pays a dividend d(t + 1) per share at the end of period t. The dividend time series is itself a stochastic process defined independently of the market and the agents' actions. We normally use a simple random process with persistence called an AR-1 or Ornstein-Uhlenbeck process, given by

d̃(t + 1) = ρd̃(t) + ση(t)

where d̃(t) means the offset of the dividend from a fixed mean d̄, so d(t) = d̄ + d̃(t); ρ and σ are parameters, and η(t) is a Gaussian random variable, chosen independently at each time t from a normal distribution with mean 0 and variance 1.
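This dividend process is easy to simulate directly; a minimal sketch, where the parameter values (d̄ = 10, ρ = 0.95, σ = 0.2) are illustrative choices and not the paper's:

```python
import random

def simulate_dividends(T, d_bar=10.0, rho=0.95, sigma=0.2, seed=0):
    """Simulate d(t) = d_bar + offset(t), where the offset follows
    the AR-1 process offset(t+1) = rho*offset(t) + sigma*eta(t)."""
    rng = random.Random(seed)
    offset = 0.0
    dividends = []
    for _ in range(T):
        offset = rho * offset + sigma * rng.gauss(0.0, 1.0)
        dividends.append(d_bar + offset)
    return dividends

divs = simulate_dividends(1000)
```

With ρ close to 1 the series wanders persistently around d̄, which is what gives the agents exploitable short-term structure to forecast.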
There is also a fixed-rate asset,
the bank,
which simply
pays a constant rate of return r per period. The agents have
to decide how much money they want to put into the stock
(which has a fixed total number of shares - if somebody
buys, someone else has to sell), and how much they want
to leave in the bank. At any time t, each agent i holds some number of shares of stock hi(t), and has some amount of cash Mi(t) in the bank. The agent's total wealth is then

wi(t) = Mi(t) + hi(t)p(t)

At the end of the period, one time step later, this portfolio becomes worth

w̃i(t + 1) = (1 + r)Mi(t) + hi(t)p(t + 1) + hi(t)d(t + 1)

where the three terms are the money in the bank, with interest, the new value of the stock, and the dividend payout. w̃i(t + 1) and wi(t + 1) are not the same, because trading occurs in between, potentially moving assets between the bank and the stock.
The trading process is managed by a specialist inside the market. The specialist also has the job of setting the price p(t + 1). Its fundamental problem at each time step is that the number of bids to buy and offers to sell may not match, and yet the total number of shares of stock is fixed. We have explored several approaches to this issue, including rationing of bids or offers,6 having the specialist maintain a buffering inventory, and holding an auction in which the price at a given time is adjusted until the bids and the offers match closely. Only the last approach, an auction, is described further here. If there are more bids than offers, then the price is raised, so the bids drop and the offers increase, until they match closely.
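The auction can be sketched as an iterative price adjustment. The toy demand/supply functions and the multiplicative step rule below are my own illustrative stand-ins; they are not the authors' actual specialist, only the raise-price-on-excess-bids structure described above:

```python
def clear_market(bids_at, offers_at, price, max_iter=100, step=0.01, tol=1e-6):
    """Adjust the price until aggregate bids and offers (nearly) match.
    bids_at(p) and offers_at(p) give total bids/offers at price p."""
    for _ in range(max_iter):
        imbalance = bids_at(price) - offers_at(price)
        if abs(imbalance) < tol:
            break
        # More bids than offers: raise the price; otherwise lower it.
        price *= (1.0 + step * imbalance)
    return price

# Toy linear schedules: bids fall and offers rise with price;
# they cross at p = 75.
p_star = clear_market(lambda p: max(0.0, 100.0 - p),
                      lambda p: max(0.0, p - 50.0),
                      price=60.0)
```

In the model itself the bids and offers come from the agents' CARA demands rather than fixed schedules, so the specialist is iterating against a moving target.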
One more thing that is defined at the level of the overall
market structure is the information that is made available to
the agents for use in their decision making. In principle, this
information set (which we call
the world)
consists of the
price, dividend, total number of bids, and total number of
offers at each past time step. There are other variables that
we have tried adding too, including a predictor of the future
dividend (which can be done in the computer, by running
the stochastic process forward, but not in the real world!),
and a random "sunspot" variable around which the agents
might coordinate their actions.
However, we usually condense most of this information into a string of world bits. At any given time, the world that the agents see consists of a string of 80 or so bits, and some recent price and dividend information. Some examples of these bits, each of which is either true or false at each time t, are as follows:
1. rp(t)/d(t) > 1/4;
2. rp(t)/d(t) > 1;
3. rp(t)/d(t) > 7/8;
4. p(t) > MA10{p(t)};
5. p(t) > MA100{p(t)};
6. p(t) > MA500{p(t)};
7. Always true.
Here MAn{p(t)} means a moving average over the most recent n steps of the price:

MAn{p(t)} = (1/n) Σk=0..n−1 p(t − k)

The quantity rp(t)/d(t) would be 1 in a simple equilibrium notion of fundamental value, so the deviation from 1 gives a sense of how much the stock is underpriced or overpriced.
We classify bits into three categories: technical, fundamental, and control. Technical bits, by definition, just depend on the past price series, and are the only ones that a strict technical trader would use. Bits 4-6 in the above list are technical bits. Control bits are useless ones that we include as experimental controls, like bit 7 in the above list. Fundamental bits are anything else, generally involving the dividend series in some way. Bits 1-3 above are fundamental bits.
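A sketch of how such world bits might be computed from the recent price and dividend history; the thresholds and windows follow the examples above, but the encoding itself is my own illustration:

```python
def moving_average(prices, n):
    """MA_n{p(t)}: mean of the most recent n prices."""
    window = prices[-n:]
    return sum(window) / len(window)

def world_bits(prices, dividends, r):
    """Compute example world bits; prices[-1] and dividends[-1]
    are p(t) and d(t), and r is the interest rate."""
    p, d = prices[-1], dividends[-1]
    fundamental = p * r / d  # equals 1 at "fundamental value"
    return [
        fundamental > 0.25,             # fundamental bits
        fundamental > 1.0,
        fundamental > 0.875,
        p > moving_average(prices, 10),    # technical bits
        p > moving_average(prices, 100),
        p > moving_average(prices, 500),
        True,                           # control bit: always on
    ]
```

Each agent sees only this coarse boolean summary of the world, which is what makes the classifier-style condition matching below possible.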
Structure of the agents
Fundamentally, the agents have to decide whether to invest
in the stock or the bank. If, at any time step, they conclude
that they want to invest more in the stock than previously,
then they submit a bid to buy more shares. Conversely, they
may submit an offer to sell shares.
We have examined many types of agents, and our software can mix different types in the same market (a description of condition-action agents is given elsewhere6). However, this paper only treats forecasting agents that use a number of predictors, each of which attempts to predict the future return (price plus dividend). By seeing how well their predictors work, the agents can estimate their accuracy (prediction variance) and update or replace poor ones. Because they know the variance of their overall predictions, the agents can also perform a risk aversion analysis called CARA (constant absolute risk aversion). This is a standard computation, based on an exponential utility function and used in portfolio analysis, that gives an optimal division of funds between two possible assets when the mean and variance of the expected return is known for each asset. If agent i's estimate of the mean return is Ei[p(t + 1) + d(t + 1)] with variance σi², then under CARA (and an additional Gaussian assumption) the ideal number of shares to hold is given by

hi,desired(t) = (Ei[p(t + 1) + d(t + 1)] − p(t)(1 + r)) / (λσi²)

where λ is a parameter, the degree of risk aversion.
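The CARA demand rule translates directly into code; the numbers in the usage example are illustrative, not taken from the paper:

```python
def desired_holding(expected_return, variance, price, r, lam):
    """CARA/Gaussian optimal shares:
    h = (E[p(t+1) + d(t+1)] - p(t)(1 + r)) / (lam * variance)."""
    return (expected_return - price * (1.0 + r)) / (lam * variance)

# An agent forecasting a return of 110 on a stock priced 100,
# with forecast variance 4, interest rate 5%, risk aversion 0.5:
h = desired_holding(110.0, 4.0, 100.0, 0.05, 0.5)
# h = (110 - 105) / 2 = 2.5 shares
```

Note that demand shrinks both with risk aversion λ and with the agent's own forecast variance, so confident agents trade more aggressively.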
The agents' predictors actually consist of two parts, a condition part and a forecast part. The condition part determines when each particular predictor is activated, as explained below. Only activated predictors produce forecasts, using their forecast part. The forecast part, in the simplest case, is just a linear rule

Eij[p(t + 1) + d(t + 1)] = aij(p(t) + d(t)) + bij

where Eij means the expected (predicted) value for agent i's jth predictor, and aij and bij are the coefficients that constitute the forecast part of this predictor. Although this is itself a very simple linear form, the condition parts make the overall prediction only piecewise linear.
Every time a predictor is activated, the agent checks to see how well it performed when the period is over. This is used to maintain a variance σij² for each predictor, as a weighted moving average of its past squared errors.
There are several ways to combine the set of predictions and variances, Eij[p(t + 1) + d(t + 1)] and σij², for each activated predictor j, into the single overall forecast and variance, Ei[p(t + 1) + d(t + 1)] and σi², needed for the CARA calculation. The simplest, used for all the results described here, is to use the currently best predictor, the one with the smallest σij² across all activated j's.
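The variance update and the best-predictor selection can be sketched as follows; the decay constant θ is an illustrative choice, since the paper does not specify the weighting here:

```python
def update_variance(var, error, theta=0.95):
    """Update a predictor's variance estimate as a weighted moving
    average of its past squared forecast errors."""
    return theta * var + (1.0 - theta) * error * error

def best_forecast(activated):
    """Given (forecast, variance) pairs for the activated predictors,
    use the one with the smallest estimated variance."""
    return min(activated, key=lambda fv: fv[1])

forecast, variance = best_forecast([(105.0, 4.0), (98.0, 2.5), (110.0, 9.0)])
# picks the (98.0, 2.5) predictor
```

The selected pair (forecast, variance) is exactly what the CARA demand rule above consumes as Ei and σi².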
The condition part of each predictor is implemented with a classifier system, in which the condition part is represented by a ternary string of the symbols {0, 1, #}, one for each of the world bits that the agent can observe (we can restrict agents to see only a subset of all the world bits). A condition symbol of 0 means that the corresponding world bit must be false for the condition part to match, while conversely 1 requires true. A condition symbol of # is a don't care, and matches either value. For example, the condition string ##1###0# matches the world bits 01110100 (where 0 stands for false and 1 for true), but not 01110110.
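Matching a ternary condition string against the world bits is a one-line check; a minimal sketch using string encodings:

```python
def matches(condition, world):
    """True if a ternary condition string (over '0', '1', '#')
    matches a string of world bits ('0'/'1'); '#' is a don't care."""
    return all(c == '#' or c == w for c, w in zip(condition, world))
```

For instance, `matches("##1###0#", "01110100")` is true, while any world string whose seventh bit is 1 fails the `0` in the condition.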
Some of an agent's predictors may give good predictions when they are activated, while others may not. A genetic algorithm is used to adjust and evolve a better set of predictors. For each agent at each period we run the genetic algorithm with probability 1/K, where K is a parameter. The genetic algorithm eliminates some of the worst predictors (those that have the highest variance) and generates some new ones to replace them. Typically we replace 20 out of 100 predictors.
To generate new predictors we first clone some of the best existing ones in the current population. Then we either perform mutation or crossover (or sometimes both) on those cloned predictors. Mutation means changing a few condition bits, and modifying the aijs and bijs by a random amount. We use parameterized distributions of such changes.4 Crossover means selecting two parent predictors, taking some condition bits from each, and interpolating their aijs and bijs. It is not clear whether crossover has any positive effect beyond causing large jumps in the space of condition bits. We also sometimes perform generalization, i.e., changing some of the fixed bits (0s and 1s) to don't cares (#s) in predictors that have not been activated for a long time.
In choosing predictors for replacement and cloning, we mainly select according to variance; low variance means high fitness, but we also impose a small penalty for each bit that is not a #, giving a little pressure not to condition the predictors on too many bits.
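One period's GA step can be sketched as below. The predictor representation, single-symbol mutation, and coefficient noise are simplified stand-ins for the parameterized distributions described above; only the drop-worst, clone-and-mutate-best structure is taken from the text:

```python
import random

rng = random.Random(1)

def ga_step(predictors, n_replace=20):
    """Replace the highest-variance predictors with mutated clones of
    the best ones. Each predictor is a dict with keys 'cond' (ternary
    string), 'a', 'b' (forecast coefficients), and 'var' (variance)."""
    ranked = sorted(predictors, key=lambda q: q['var'])
    survivors = ranked[:-n_replace]      # drop the worst n_replace
    elite = ranked[:n_replace]           # clone from the best
    children = []
    for parent in elite:
        cond = list(parent['cond'])
        i = rng.randrange(len(cond))     # mutate one condition symbol
        cond[i] = rng.choice('01#')
        children.append({
            'cond': ''.join(cond),
            'a': parent['a'] + rng.gauss(0.0, 0.05),  # jitter coefficients
            'b': parent['b'] + rng.gauss(0.0, 0.05),
            'var': parent['var'],        # inherit the variance estimate
        })
    return survivors + children
```

Running this for each agent with probability 1/K per period is what the exploration-rate parameter K controls.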
Our main initial goal in this project was to look for realistic
market behavior. We asked in particular whether the
price series of our stock would look like a real stock
price series. We realized that goal several years ago, finding
reasonably realistic market behavior as shown by time se-
ries analysis.
In 1995, we started a second phase of experiments by asking whether our agents could also show rational expectations behavior. If homogeneity is assumed, so that all
the agents have the same beliefs, then the rational expecta-
tions equilibrium for this market can be computed, and has
a simple form in which the price is linear in the dividend.
Thus, rational expectations behavior is within the frame-
work of our agents' linear forecasts.
We tried giving the agents initial beliefs in the rational expectations result by setting the initial conditions for the aij and bij to the calculated rational expectations values. We
found that they stayed there, and that the rational expect-
ations equilibrium is in fact a local attractor - when we
initially started the agents fairly close to it, they went to that
state. It resulted in a very stable market, just as the theory
says, with very little trading going on, and homogeneous
agent behavior.
On the other hand, when we started the agents with
almost any other conditions, they never settled into rational
expectations behavior even after millions of periods, and
the system behaved much more like a real market. Thus
we had found two regimes of behavior, which we call the
"rational expectations regime" and the "complex regime,"
In the rational expectations regime, we see relatively
low trading volume, with very little information in the price
series that can be exploited for prediction. The forecast
parameters - the aijs and bijs - all converge to be the same;
the agents end up becoming homogeneous. The technical
bits are not useful, and are dropped from use.
In the complex regime, we find that the agents remain
heterogeneous and continuously coevolve. One of the tests
we did was to take a successful agent out of the market,
freeze it, and then reinsert it thousands of periods later. We
found that it did not do well at all. Even though the market
looks statistically much the same, an agent that was trained
in one period does not work well later, because the detailed
information that it is picking up in its bits and forecasts is
changing over time.
The trading volume remains much higher in the complex
regime. It varies greatly, and has GARCH behavior. It is
autocorrelated, and there are correlations between volume
and volatility. These are all features found in real markets.
There are sometimes bubbles and crashes, fairly minor ones
usually, and over-reactions, and the agents do use the
technical bits, despite the small cost to do so.
More recently, in a third phase of our experiments, 4 we
looked at what happens as we change certain parameters.
We found that we can force the market into either regime with appropriate parameter values, using the same random initial conditions; but with intermediate parameter values, the initial conditions dictate which regime is reached, as in the second phase of the experiments.

Table 1 Averages of explorations over 25 runs

        Fast exploration    Slow exploration
σ       2.147 ± 0.017       2.135 ± 0.008
κ       0.320 ± 0.020       0.072 ± 0.012
ρ1      0.007 ± 0.004       0.036 ± 0.002
ρ(2)    0.064 ± 0.004       0.017 ± 0.002
In most experiments, we varied only the parameter K that dictates how often the genetic algorithm is run, and controls how often the agents explore new possible ways of predicting the future. We frequently compared two values, K = 250 and K = 1000, that we call fast exploration and slow exploration, respectively. Fast exploration puts us in the complex regime, while slow exploration gives the rational expectations regime. We always used random initial conditions in these experiments.
An example of our time series analysis was a fit of the price series to a linear recurrence relation

p(t + 1) = A + Bp(t) + e(t + 1)

where e(t) represents the residual variation after fitting the
best values for A and B. In a rational expectations equilib-
rium, the residuals e(t) ought to be independent, identically
distributed Gaussian variables, because the price is driven
by the AR-1 dividend series. So we tested the e(t) series for
normality and correlations. Table 1 shows some results for
fast and slow exploration, from averages over 25 runs of
each case.
The first line, σ, shows the standard deviation of e(t). The second line, κ, shows the excess kurtosis, a measure of non-Gaussian behavior based on the fourth moment. Fast exploration gives much larger deviations from normality, in the direction of the "fat tails" seen in real market data. The third and fourth lines show two measures of single-step autocorrelation, ρ1 = <e(t)e(t + 1)>, and ρ(2) = <e(t)²e(t + 1)²> − σ⁴.
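The four statistics in Table 1 follow directly from their definitions; a straightforward sketch over a residual series e(t):

```python
def residual_stats(e):
    """Standard deviation, excess kurtosis, and the two single-step
    autocorrelation measures from the text, for a residual series e."""
    n = len(e)
    mean = sum(e) / n
    x = [v - mean for v in e]
    var = sum(v * v for v in x) / n
    sigma = var ** 0.5
    kurt = sum(v ** 4 for v in x) / n / var ** 2 - 3.0   # excess kurtosis
    rho1 = sum(x[t] * x[t + 1] for t in range(n - 1)) / (n - 1)
    rho2 = (sum((x[t] ** 2) * (x[t + 1] ** 2) for t in range(n - 1)) / (n - 1)
            - var ** 2)
    return sigma, kurt, rho1, rho2
```

For i.i.d. Gaussian residuals, as rational expectations predicts, κ, ρ1, and ρ(2) should all be near zero; the fast-exploration column's large κ and ρ(2) are the fat-tail and volatility-clustering signatures.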
The ARCH(1) test7 gives about 37 for the fast exploration case, compared with 3.2 for the slow exploration case.
This sort of analysis can be extended by including additional terms in the recurrence relation. For example, to test whether the rp(t)/d(t) > 3/4 bit is actually of any use in predicting the price series, we write

p(t + 1) = A + Bp(t) + Cb(t) + e(t + 1)

where the bit value b(t) is 0 or 1 as appropriate at each t. We then
fit the coefficients as before. In this case we found C = −0.44 ± 0.10 for the fast exploration case, compared with C = 0.05 ± 0.09 for the slow exploration case. Thus, this bit is useful in the fast but not the slow case.
There is much more yet to explore with this model.
Given a whole computational market in the computer, we
can experiment with what happens upon changing the mar-
ket structure, the specialist, the dividend series, and so on.
Some other future plans include those listed below.
1. Multiple stocks. It is not clear whether introducing
more than one stock will fundamentally change our results,
but the experiment seems worthwhile.
2. Impact of wealth. As the agents get more wealthy,
they do not actually have more influence in the market
under our CARA assumptions. There are several ways to
change those assumptions.
3. Improved prediction. There are many ways to im-
prove our agents' prediction methods. We have experi-
mented briefly with neural network predictors, but did not
find significantly different results.
4. Transition details. The transition between the two be-
havior regimes deserves detailed study. Is it really a sharp
transition, or is it gradual? What are the sizes of the basins
of attraction of the two regimes? How do they scale with the
number of agents, and with other parameters?
5. Information control. It is possible to give different agents different information sets, and thus explore the effect of private information. We can also provide information that is released periodically to all agents, like news releases.
6. Strategic behavior. We can have a longer time hori-
zon, so that the agents can look further ahead, rather than
just one period at a time. Then we need to allow the agents
to have strategic behavior over multiple periods.
Acknowledgments We thank Paul Tayler and Brandon Weber for their collaboration in this project. This work was supported in part by the Santa Fe Institute's Economics Program, including grants from Citicorp, Coopers and Lybrand, McKinsey and Company, the Russell Sage Foundation, and the Walker Foundation, and by core funding to the Santa Fe Institute from the John D. and Catherine T. MacArthur Foundation, the National Science Foundation, and the US Department of Energy, and by gifts and grants from individuals and members of the Institute's Business Network for Complex Systems research.
References

1. Lucas RE (1978) Asset prices in an exchange economy. Econometrica 46:1429-1445
2. Frankel JA, Froot KA (1990) Chartists, fundamentalists, and trading in the foreign exchange market. AEA Pap Proc 80:181
3. Shleifer A, Summers LH (1990) The noise trader approach to finance. J Econ Perspectives 4:19-33
4. Arthur WB, Holland JH, LeBaron B et al. (1997) Asset pricing under endogenous expectations in an artificial stock market. In: Arthur WB, Lane D, Durlauf SN (eds) The economy as an evolving complex system II. Addison-Wesley, Reading, pp 15-44
5. Arthur WB (1992) On learning and adaptation in the economy. Santa Fe Institute Paper 92-07-038
6. Palmer RG, Arthur WB, Holland JH et al. (1994) Artificial economic life: a simple model of a stockmarket. Physica D 75:264-274
7. Bollerslev T, Chou RY, Jayaraman N, Kroner KF (1990) ARCH modeling in finance: a review of the theory and empirical evidence. J Econometrics 52:5-59
... The model is developed by taking the seminal Santa Fe Artificial Stock Market (SFASM) model of LeBaron et al. [20] and modifying it significantly so that it can produce the probabilistic features and nonlinear/complex features of stock markets. The view of the stock market presented in Palmer et al. [26], LeBaron et al. [20], Palmer et al. [27] and Arthur et al. [3] is one of a market that can be approximated by an agent-based model with many different types of agents, each of whom behaves according to a different decision making rule. In these models, jointly referred to as the Santa Fe model, agents are represented by the parameters governing their linear forecasts of prices plus dividends and associated estimated variances. ...
... Using the influential Santa Fe Artificial Stock Market (SFASM) model of LeBaron et al. [20] as a base model is sensible because there is substantial pre-existing research, suggesting that the SFASM is a reasonable approximation of real-world financial markets in the sense that it can replicate some financial phenomena we observe. In particular, it has been documented in the literature that the SFASM model is capable of producing volatility persistence, significant trading volumes, bubbles, crashes and GARCH behaviour of stock returns, see LeBaron et al. [20], [27], Palmer et al. [26] and Arthur et al. [3]. Each of these stock market phenomena has been documented, in the empirical finance literature, to exist in real-world stock markets. ...
... To allow for evolutionary learning on the part of agents, a genetic algorithmic learning approach is incorporated into the agent-based model. The genetic algorithm is the same as that discussed in Palmer et al. [26] to ensure that effects of altering the structure of the stock model can be analysed and keeping other aspects constant. An overview of the genetic algorithm used by Palmer et al. [26] is provided here. ...
Full-text available
US stock returns exhibit mixed Gaussian probabilistic features as well as nonlinear and complex dynamics, but existing agent-based models of stock markets have not focused on replicating or explaining all these phenomena jointly. In this paper, a new agent-based model of the stock market is proposed that can replicate and explain such phenomena jointly. In the new model, stocks are a claim to a dividend process determined by a hidden state process which follows a Markov chain. The model produces a probability distribution of stock returns that is mixed Gaussian, like the US stock market. Using a generalized multiscale entropy method, it is shown that the simulated returns have similar complexity and entropy properties to US stock returns for plausible parameter values. Sensitivity analyses show that the simulated stock price series generated by the model varies in a plausible manner with various underlying important parameters such as agent risk aversion, agent beliefs, the underlying stock dividend process, returns to risk-free assets and dividend transition probabilities.
... First released in 1999, it remains maintained today. Whilst not directly designed for financial modelling, it has been used to create the Santa Fe Artificial Stock Market [13] that, for the first time, reproduced a number of stylized facts about the behaviour of traders and further emphasized the importance of modelling of financial markets. Unlike swarm, MAXE comes with an incorporated time-tracking unit that takes care of the delivery of messages between the agents involved and the advancement of simulation time. ...
We introduce a new software toolbox, called Multi-Agent eXchange Environment (MAXE), for agent-based simulation of limit order books. Offering both efficient C++ implementations and Python APIs, it allows the user to simulate large-scale agent-based market models while providing user-friendliness for rapid prototyping. Furthermore, it benefits from a versatile message-driven architecture that offers the flexibility to simulate a range of different (easily customisable) market rules and to study the effect of auxiliary factors, such as delays, on the market dynamics. Showcasing its utility for research, we employ our simulator to investigate the influence the choice of the matching algorithm has on the behaviour of artificial trader agents in a zero-intelligence model. In addition, we investigate the role of the order processing delay in normal trading on an exchange and in the scenario of a significant price change. Our results include the findings that (i) the variance of the bid-ask spread exhibits a behavior similar to resonance of a damped harmonic oscillator with respect to the processing delay and that (ii) the delay markedly affects the impact a large trade has on the limit order book.
... The Santa Fe Artificial Stock Market (Palmer et al., 1999) is one of the most cited agent-based systems applied to finance. With the development work taking place in the 90's, the system models a market with one risk-free bond and a single stock traded by agents, which follow a set of pre-defined basic rules. ...
... The Santa Fe Artificial Stock Market (Palmer et al., 1999) is one of the most cited agent-based systems applied to finance. With the development work taking place in the 90's, the system models a market with one risk-free bond and a single stock traded by agents, which follow a set of pre-defined basic rules. ...
This paper presents a new financial market simulator that may be used as a tool in both industry and academia for research in market microstructure. It allows multiple automated traders and/or researchers to simultaneously connect to an exchange-like environment, where they are able to asynchronously trade several financial assets at the same time. In its current iteration, this order-driven market implements the basic rules of U.S. equity markets, supporting both market and limit orders, and executing them in a first-in-first-out fashion. We overview the system architecture and we present possible use cases. We demonstrate how a set of automated agents is capable of producing a price process with characteristics similar to the statistics of real price from financial markets. Finally, we detail a market stress scenario and we draw, what we believe to be, interesting conclusions about crash events.
... Likewise, evolutionary computation and agent-based models have been used extensively to study the macro properties of artificial asset markets rather than specifically studying the * micro properties of individual trading strategies [13][14][15][16][17][18][19]. However, to our knowledge there has been no academic study of the possibility of developing in vivo trading strategies using purely ab initio methods-trading strategies that train or evolve using artificial data only, and then, at test time, trade using real asset prices-which is the approach that we take here. ...
Securities markets are quintessential complex adaptive systems in which heterogeneous agents compete in an attempt to maximize returns. Species of trading agents are also subject to evolutionary pressure as entire classes of strategies become obsolete and new classes emerge. Using an agent-based model of interacting heterogeneous agents as a flexible environment that can endogenously model many diverse market conditions, we subject deep neural networks to evolutionary pressure to create dominant trading agents. After analyzing the performance of these agents and noting the emergence of anomalous superdiffusion through the evolutionary process, we construct a method to turn high-fitness agents into trading algorithms. We backtest these trading algorithms on real high-frequency foreign exchange data, demonstrating that elite trading algorithms are consistently profitable in a variety of market conditions, even though these algorithms had never before been exposed to real financial data. These results provide evidence to suggest that developing ab initio trading strategies by repeated simulation and evolution in a mechanistic market model may be a practical alternative to explicitly training models with past observed market data.
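The evolutionary pressure described above can be illustrated with a minimal selection-and-mutation loop over agent "genomes". This is a generic elitist sketch under our own assumptions, not the paper's actual deep-network evolution; the `evolve` function and its parameters are hypothetical.

```python
import random

def evolve(fitness, pop_size=20, genome_len=4, generations=30,
           mutation_scale=0.1, seed=0):
    """Generic selection-and-mutation loop: each generation keeps the
    fitter half of the population and refills it with mutated copies.
    A sketch of evolutionary pressure, not the paper's algorithm."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # fittest first
        elite = pop[: pop_size // 2]                 # survivors
        children = [[g + rng.gauss(0.0, mutation_scale) for g in parent]
                    for parent in elite]             # mutated offspring
        pop = elite + children
    return max(pop, key=fitness)

# Illustrative fitness: reward genomes close to a hidden "target" strategy.
target = [0.5, -0.2, 0.1, 0.8]
fit = lambda g: -sum((a - b) ** 2 for a, b in zip(g, target))
best = evolve(fit)
```

Because the elite survive unchanged, the best fitness is monotone nondecreasing across generations; replacing the toy fitness with simulated trading profit in an artificial market gives the selection dynamic the abstract describes.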
We introduce a new software toolbox for agent-based simulation. Facilitating rapid prototyping by offering a user-friendly Python API, its core rests on an efficient C++ implementation to support simulation of large-scale multi-agent systems. Our software environment benefits from a versatile message-driven architecture. Originally developed to support research on financial markets, it offers the flexibility to simulate a wide range of different (easily customisable) market rules and to study the effect of auxiliary factors, such as delays, on the market dynamics. As a simple illustration, we employ our toolbox to investigate the role of the order processing delay in normal trading and for the scenario of a significant price change.
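The role of processing delays in a message-driven simulation can be illustrated with a minimal event queue that delivers messages in order of arrival time (send time plus delay), so higher latency can reorder otherwise-earlier messages. The function below is our own sketch, not the toolbox's actual engine.

```python
import heapq

def run_messages(events):
    """Minimal message-driven core. Each event is (send_time, delay,
    payload); messages are delivered in order of arrival time
    (send_time + delay). A sketch of how latency reorders messages,
    not the toolbox's real implementation."""
    queue = []
    for send_time, delay, payload in events:
        # Order delivery by arrival time; ties break on send time.
        heapq.heappush(queue, (send_time + delay, send_time, payload))
    delivered = []
    while queue:
        arrival, _, payload = heapq.heappop(queue)
        delivered.append((arrival, payload))
    return delivered

# An order sent later but with lower latency arrives first.
log = run_messages([(0, 5, "slow order"), (1, 1, "fast order")])
```

Here the order sent at time 0 with a delay of 5 arrives after the order sent at time 1 with a delay of 1, which is precisely the kind of reordering that makes processing delay matter for market dynamics.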
This chapter examines how to control the extreme events that occur when a complex adaptive logistics system is applied to used product remanufacturing, particularly in the used products transhipment stage. The chapter starts with an introduction motivating the need for a complex adaptive logistics system. Related studies dealing with similar issues are then discussed in the background section. Next, the focal problem of the chapter is stated in the problem statement section. A detailed description of the approach (i.e., agent-based modelling and simulation) can be found in the proposed methodology section. An illustrative simulation example is then discussed in the experimental study section. Potential research directions regarding the main problem considered in this chapter are highlighted in the future trends section. Finally, the conclusions drawn in the last section close the chapter.
The calculation of the influence of high-speed railway on knowledge spillover is based on the results of global instantaneous equilibrium in the mechanism explanation of knowledge spillover. In real production, however, the interaction between the high-speed railway and the regional innovation system is dynamic and local. To simulate the impact of high-speed railway on innovation activities over time, scenarios must be simulated under appropriate parameter assumptions. Based on the interaction of economic participants, a discrete evolutionary simulation model is established that draws on the theory of cellular automata to predict and estimate the evolution of the railway's spatial effects. The simulations indicate that high-speed railway accelerates the formation of knowledge innovation industry clusters during regional knowledge innovation and evolution, and that, under its influence, node cities gradually evolve into regional innovation centers. Comparing the evolution of the knowledge innovation system with and without high-speed railway shows that the railway's impact on knowledge spillover is more significant in environments with higher knowledge privatization. Under low labor migration rates, high-speed railway increases the potential for regional innovation to benefit from external knowledge spillover; under higher labor migration rates, its influence on the concentration of innovation converges more quickly.
The overshooting theory of exchange rates seems ideally designed to explain some important aspects of the movement of the dollar in recent years. Over the period 1981-84, for example, when real interest rates in the United States rose above those of its trading partners (presumably due to shifts in the monetary/fiscal policy mix), the dollar appreciated strongly. It was the higher rates of return that made U.S. assets more attractive to international investors and caused the dollar to appreciate. The overshooting theory would say that, as of 1984 for example, the value of the dollar was so far above its long-run equilibrium that expectations of future depreciation were sufficient to offset the higher nominal interest rate in the minds of international investors. Figure 1 shows the correlation of the real interest differential with the real value of the dollar since exchange rates began to float in 1973.
We describe a model of a stock market in which independent adaptive agents can buy and sell stock on a central market. The overall market behavior, such as the stock price time series, is an emergent property of the agents' behavior. This approach to modelling a market is contrasted with conventional rational expectations approaches. Our model does not necessarily converge to an equilibrium, and can show bubbles, crashes, and continued high trading volume.
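The idea of the price as an emergent property of agent behavior can be illustrated with a toy clearing rule in which the price is nudged in proportion to the order imbalance. This is a generic sketch, not the paper's actual market-clearing mechanism; the function name and the adjustment parameter `eta` are our own.

```python
def clear_market(price, forecasts, eta=0.005):
    """Toy price-formation step: each agent buys one unit if its forecast
    exceeds the current price and sells one unit otherwise; the price is
    then nudged in proportion to the net order imbalance. A generic
    sketch of price as an emergent property of agent demand, not the
    paper's actual mechanism."""
    imbalance = sum(1 if f > price else -1 for f in forecasts)
    return price * (1.0 + eta * imbalance)
```

Optimistic forecasts push the price up (e.g. `clear_market(100.0, [105, 110, 95])` returns 100.5) and pessimistic ones push it down; iterating this step while the agents adapt their forecasts to the resulting price series is what allows bubbles, crashes, and sustained trading volume to emerge.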
The standard mode of theorizing assumed in economics is deductive: it assumes that human agents derive their conclusions by logical processes from complete, consistent, and well-defined premises in a given problem. This works well in simple problems, but it breaks down beyond a "problem complexity boundary" where human computational abilities are exceeded or the assumptions of deductive rationality cannot be relied upon to hold. The paper draws upon what is known in psychology to argue that beyond this problem complexity boundary humans continue to reason well, but by using induction rather than deduction. That is, in difficult or complex decision problems, humans transfer experience from other, similar problems they have faced before; they look for patterns and analogies that help them construct internal models of, and hypotheses about, the situation they are in; and they act more or less deductively on the basis of these. In doing so they constantly update these models and hypotheses by importing feedback (new observations) from their environment. Thus, in dealing with problems of high complexity, humans live in a world of learning and adaptation. I illustrate these ideas by showing that the processes of pattern recognition, hypothesis formation, and refutation over time are perfectly amenable to analysis, and by using them to explain supposedly "anomalous" behavior in financial markets.
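The inductive cycle of hypothesis formation and refutation described above can be made concrete: keep a pool of rival predictors, score each by its recent one-step accuracy, and act on whichever has worked best lately. The function and scoring rule below are our own minimal illustration, not Arthur's exact formulation.

```python
def best_hypothesis(series, hypotheses, window=5):
    """Score each hypothesis (a function: past values -> prediction) by
    its mean absolute error over the last `window` one-step-ahead
    predictions, and return the index of the current best. A sketch of
    inductive 'act on what has worked lately' reasoning; the names and
    scoring rule are assumptions of this illustration."""
    def recent_error(h):
        errors = []
        for t in range(len(series) - window, len(series)):
            prediction = h(series[:t])   # predict series[t] from its past
            errors.append(abs(prediction - series[t]))
        return sum(errors) / len(errors)
    scores = [recent_error(h) for h in hypotheses]
    return min(range(len(hypotheses)), key=scores.__getitem__)

# Two rival hypotheses: "tomorrow equals today" vs. "prices revert to 100".
h_persist = lambda past: past[-1]
h_revert = lambda past: 100.0
trending = [100, 101, 102, 103, 104, 105, 106, 107]
winner = best_hypothesis(trending, [h_persist, h_revert])  # persistence wins
```

On a steadily trending series the persistence rule scores best, while on a series oscillating around 100 the mean-reversion rule takes over, so which hypothesis the agent acts on changes with the data, which is the refutation-and-update loop the abstract describes.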
We propose a theory of asset pricing based on heterogeneous agents who continually adapt their expectations to the market that these expectations aggregatively create. And we explore the implications of this theory computationally using our Santa Fe artificial stock market.
We use a multivariate generalized autoregressive conditional heteroskedasticity (M-GARCH) model to examine three stock indexes and their associated futures prices: the New York Stock Exchange Composite, S&P 500, and Toronto 35. The North American context is significant because markets in Canada and the United States share similar structures and regulatory environments. Our model allows examination of dependence in volatility as it captures time variation in volatility and cross-market influences. Estimated time variation in volatility is significant and the volatilities are highly positively correlated. Yet, we find that the correlation in North American index and futures markets has declined over time.
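For readers unfamiliar with the GARCH family, a univariate GARCH(1,1) simulator illustrates the time-varying volatility such models capture. This is a deliberate univariate simplification with illustrative parameters; the paper itself uses a multivariate (M-GARCH) specification across the index and futures series.

```python
import math
import random

def simulate_garch11(n, omega=0.05, alpha=0.1, beta=0.85, seed=0):
    """Simulate n returns from a univariate GARCH(1,1):
        var_t = omega + alpha * r_{t-1}**2 + beta * var_{t-1}
    Parameters are illustrative; alpha + beta < 1 keeps the variance
    stationary with unconditional level omega / (1 - alpha - beta)."""
    rng = random.Random(seed)
    var = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    returns, variances = [], []
    for _ in range(n):
        r = math.sqrt(var) * rng.gauss(0.0, 1.0)  # conditionally normal return
        returns.append(r)
        variances.append(var)
        # Volatility clusters: a large shock today raises tomorrow's variance.
        var = omega + alpha * r * r + beta * var
    return returns, variances
```

With the defaults, the unconditional variance is 0.05 / (1 - 0.1 - 0.85) = 1.0; simulated returns show the bursts of high and low volatility that motivate modeling "time variation in volatility" across related markets.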
This paper is a theoretical examination of the stochastic behavior of equilibrium asset prices in a one-good, pure exchange economy with identical consumers. A general method of constructing equilibrium prices is developed and applied to a series of examples.
A great amount of effort is spent in forecasting the outcome of sporting events, but few papers have focused exclusively on the characteristics of sports forecasts. Rather, many papers have been written about the efficiency of sports betting markets. As it turns out, it is possible to derive considerable information about the forecasts and the forecasting process from the studies that tested the markets for economic efficiency. Moreover, the huge number of observations provided by betting markets makes it possible to obtain robust tests of various forecasting hypotheses. This paper is concerned with a number of forecasting topics in horse racing and several team sports. The first topic involves the type of forecast that is made: picking a winner or predicting whether a particular team beats the point spread. Different evaluation procedures will be examined and alternative forecasting methods (models, experts, and the market) will be compared. The paper also examines the evidence about the existence of biases in the forecasts and concludes with the applicability of these results to forecasting in general.