AlphaStock: A Buying-Winners-and-Selling-Losers Investment
Strategy using Interpretable Deep Reinforcement Attention
Networks
Jingyuan Wang1,4, Yang Zhang1, Ke Tang2, Junjie Wu3,4,*, Zhang Xiong1
1. MOE Engineering Research Center of Advanced Computer Application Technology,
School of Computer Science and Engineering, Beihang University, Beijing, China
2. Institute of Economics, School of Social Sciences, Tsinghua University, Beijing, China
3. Beijing Key Laboratory of Emergency Support Simulation Technologies for City Operations,
School of Economics and Management, Beihang University, Beijing, China
4. Beijing Advanced Innovation Center for BDBC, Beihang University, Beijing, China. *Corresponding author.
ABSTRACT
Recent years have witnessed the successful marriage of finance innovations and AI techniques in various finance applications including quantitative trading (QT). Despite great research efforts devoted to leveraging deep learning (DL) methods for building better QT strategies, existing studies still face serious challenges especially from the side of finance, such as the balance of risk and return, the resistance to extreme loss, and the interpretability of strategies, which limit the application of DL-based strategies in real-life financial markets. In this work, we propose AlphaStock, a novel reinforcement learning (RL) based investment strategy enhanced by interpretable deep attention networks, to address the above challenges. Our main contributions are summarized as follows: i) We integrate deep attention networks with a Sharpe ratio-oriented reinforcement learning framework to achieve a risk-return balanced investment strategy; ii) We suggest modeling interrelationships among assets to avoid selection bias and develop a cross-asset attention mechanism; iii) To our best knowledge, this work is among the first to offer an interpretable investment strategy using deep reinforcement learning models. The experiments on long-periodic U.S. and Chinese markets demonstrate the effectiveness and robustness of AlphaStock over diverse market states. It turns out that AlphaStock tends to select as winners the stocks with high long-term growth, low volatility, high intrinsic value, and that have been undervalued recently.
CCS CONCEPTS
• Applied computing → Economics; • Computing methodologies → Reinforcement learning; Neural networks.
KEYWORDS
Investment Strategy, Reinforcement Learning, Deep Learning,
Interpretable Prediction
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
KDD '19, August 4–8, 2019, Anchorage, AK, USA
© 2019 Association for Computing Machinery.
ACM ISBN 978-1-4503-6201-6/19/08... $15.00
https://doi.org/10.1145/3292500.3330647
ACM Reference Format:
Jingyuan Wang, Yang Zhang, Ke Tang, Junjie Wu, Zhang Xiong. 2019. AlphaStock: A Buying-Winners-and-Selling-Losers Investment Strategy using Interpretable Deep Reinforcement Attention Networks. In The 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '19), August 4–8, 2019, Anchorage, AK, USA. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3292500.3330647
1 INTRODUCTION
Given their ability to handle transactions at scale and to offer rational decision-making, quantitative trading (QT) strategies have long been adopted by financial institutions and hedge funds, and have achieved spectacular successes. Traditional QT strategies are usually based on specific financial logics. For instance, the momentum phenomenon found by Jegadeesh and Titman in the stock market [14] was used to build momentum strategies. The mean reversion strategy [20] proposed by Poterba and Summers believes that asset prices tend to move to their averages over time, so the bias of asset prices from their means could be used to select investment targets. The multi-factor strategy [7] uses factor-based asset valuations to select assets. Most of these traditional QT strategies, though equipped with solid financial theories, can only leverage some specific characteristic of financial markets, and therefore might be vulnerable to complex markets with diverse states.
In recent years, deep learning (DL) has emerged as an effective way to extract multi-aspect characteristics from complex financial signals. Many supervised deep neural networks have been proposed in the literature to predict asset prices using various factors, such as the frequency of prices [11], economic news [12], social media [27], and financial events [4, 5]. Deep neural networks have also been adopted in reinforcement learning (RL) frameworks to enhance traditional shallow investment strategies [3, 6, 16]. Despite the rich studies above, applying DL to real-life financial markets still faces several challenges:
Challenge 1: Balancing return and risk. Most existing supervised deep learning models in finance focus on price prediction without risk awareness, which is not in line with fundamental investment principles and may lead to suboptimal performance [8]. While some RL-based strategies [8, 17] have considered this problem, how to adapt state-of-the-art DL approaches into risk-return-balanced RL frameworks has not yet been well studied.
Challenge 2: Modeling interrelationships among assets. Many financial tools in the market can be used to derive risk-aware profits from the interrelationships among assets, such as hedging, arbitrage, and the BWSL strategy used in this work. However, existing DL/RL-based investment strategies have paid little attention to this important information.
Challenge 3: Interpreting investment strategies. There is a long-standing argument that DL-based systems are "unexplainable black boxes" and therefore cannot be used in crucial applications like medicine, investment and the military [9]. RL-based strategies with deep structures make it even worse. How to extract interpretable rules from DL-enabled strategies remains an open problem.
In this paper, we propose AlphaStock, a novel reinforcement learning based strategy using deep attention networks, to overcome the above challenges. AlphaStock is essentially a buying-winners-and-selling-losers (BWSL) strategy for stock assets. It consists of three components. The first is a Long Short-Term Memory with History state Attention (LSTM-HA) network, which is used to extract asset representations from multiple time series. The second component is a Cross-Asset Attention Network (CAAN), which fully models the interrelationships among assets as well as the asset price rising prior. The third is a portfolio generator, which gives the investment proportion of each asset according to the winner scores output by the attention networks. We use an RL framework to optimize our model towards a return-risk-balanced objective, i.e., maximizing the Sharpe ratio. In this way, the merit of representation learning via deep attention models and the merit of risk-return balance via Sharpe-ratio-targeted reinforcement learning are integrated naturally. Moreover, to gain interpretability for AlphaStock, we propose a sensitivity analysis method to unveil how our model selects an asset to invest in according to its multi-aspect features.
Extensive experiments on long-periodic U.S. stock markets demonstrate that our AlphaStock strategy outperforms several state-of-the-art competitors in terms of a variety of evaluation measures. In particular, AlphaStock shows excellent adaptability to diverse market states (enabled by RL and the Sharpe ratio) and an exceptional ability for extreme loss control (enabled by CAAN). Extended experiments on Chinese stock markets further confirm the superiority of AlphaStock and its robustness. Interestingly, the interpretation analysis reveals that AlphaStock selects assets by following the principle of "selecting as winners the stocks with high long-term growth, low volatility, high intrinsic value, and that have been undervalued recently".
2 PRELIMINARIES
In this section, we first introduce the financial concepts used throughout this paper, and then formally define our problem.
2.1 Basic Financial Concepts
Definition 1 (Holding Period). A holding period is the minimum time unit for investing in an asset. We divide the time axis into sequential holding periods of fixed length, such as one day or one month. We call the starting time of the t-th holding period time t.
Definition 2 (Sequential Investment). A sequential investment is a sequence of holding periods. For the t-th holding period, a strategy uses its original capital to invest in assets at time t, and gets profits (which could be negative) at time t+1. The capital plus the profits of the t-th holding period is used as the original capital of the (t+1)-th holding period.
Definition 3 (Asset Price). The price of an asset is defined as a time series p^{(i)} = \{p^{(i)}_1, p^{(i)}_2, \ldots, p^{(i)}_t, \ldots\}, where p^{(i)}_t denotes the price of asset i at time t.
In this work, we use a stock as the asset type to describe our model, which could be extended to other types of assets by taking asset specificities and transaction rules into consideration.
Definition 4 (Long Position). A long position is the trading operation that first buys an asset at time t_1 and then sells it at t_2. The profit of a long position on asset i during the period from t_1 to t_2 is u_i (p^{(i)}_{t_2} - p^{(i)}_{t_1}), where u_i is the buying volume of asset i.

In a long position, traders expect an asset to rise in price, so they buy the asset first and wait for the price to rise in order to earn profits.
Definition 5 (Short Position). A short position is the trading operation that first sells an asset at t_1 and then buys it back at t_2. The profit of a short position on asset i during the period from t_1 to t_2 is u_i (p^{(i)}_{t_1} - p^{(i)}_{t_2}), where u_i is the selling volume of asset i.

A short position is the reverse operation of a long position. Traders' expectation in a short position is that the price will drop, so they sell at a price higher than the price at which they later buy the asset back. In the stock market, a short position trader borrows stocks from a broker and sells them at t_1. At t_2, the trader buys the sold stocks back and returns them to the broker.
Definition 6 (Portfolio). Given an asset pool with I assets, a portfolio is defined as a vector b = (b^{(1)}, \ldots, b^{(i)}, \ldots, b^{(I)})^\top, where b^{(i)} is the proportion of the investment on asset i, with \sum_{i=1}^{I} b^{(i)} = 1.

Assume we have a collection of portfolios \{b_{(1)}, \ldots, b_{(j)}, \ldots, b_{(J)}\}. The investment on portfolio b_{(j)} is M_{(j)}, with M_{(j)} \geq 0 when taking a long position on b_{(j)}, and M_{(j)} \leq 0 when taking a short position. We then have the following important definition.
Definition 7 (Zero-investment Portfolio). A zero-investment portfolio is a collection of portfolios that has a net total investment of zero when the portfolios are assembled. That is, for a zero-investment portfolio containing J portfolios, the total investment \sum_{j=1}^{J} M_{(j)} = 0.

For instance, an investor may borrow $1,000 worth of stocks in one set of companies and sell them as a short position, and then use the proceeds of the short selling to purchase $1,000 worth of stocks in another set of companies as a long position. The assembly of the long and short positions is a zero-investment portfolio. Note that while the name is "zero-investment", there still exists a budget constraint that limits the overall worth of stocks that can be borrowed from the broker. Also, we ignore real-world transaction costs for simplicity.
2.2 The BWSL Strategy
In this paper, we adopt the buying-winners-and-selling-losers (BWSL) strategy for stock trading [14], the key of which is to buy the assets with a high price rising rate (winners) and sell those with a low price rising rate (losers). We execute the BWSL strategy as a zero-investment portfolio consisting of two portfolios: a long portfolio for buying winners and a short portfolio for selling losers. Given a sequential investment with T periods, we denote the short portfolio for the t-th period as b^-_t and the long portfolio as b^+_t, t = 1, \ldots, T.
At time t, given a budget constraint \tilde{M}, we borrow the "loser" stocks from brokers according to the investment proportions in b^-_t. The volume of stock i that we can borrow is

    u^{-(i)}_t = \tilde{M} \cdot b^{-(i)}_t / p^{(i)}_t,    (1)
where b^{-(i)}_t is the proportion of stock i in b^-_t. Next, we sell the borrowed "loser" stocks and obtain the money \tilde{M}. After that, we use \tilde{M} to buy the "winner" stocks according to the long portfolio b^+_t. The volume of stock i that we can buy at time t is

    u^{+(i)}_t = \tilde{M} \cdot b^{+(i)}_t / p^{(i)}_t.    (2)
The money \tilde{M} used to buy the winner stocks is the proceeds of the short selling, so the net investment on the portfolio \{b^+_t, b^-_t\} is zero.
At the end of the t-th holding period, we sell the stocks in the long portfolio. The money we obtain is the proceeds of selling the stocks at the new prices at time t+1, i.e.,

    M^+_t = \sum_{i=1}^{I} u^{+(i)}_t p^{(i)}_{t+1} = \sum_{i=1}^{I} \tilde{M} \cdot b^{+(i)}_t \frac{p^{(i)}_{t+1}}{p^{(i)}_t}.    (3)
Next, we buy the stocks in the short portfolio back and return them to the broker. The money we spend on buying the short stocks back is

    M^-_t = \sum_{i=1}^{I'} u^{-(i)}_t p^{(i)}_{t+1} = \sum_{i=1}^{I'} \tilde{M} \cdot b^{-(i)}_t \frac{p^{(i)}_{t+1}}{p^{(i)}_t}.    (4)
The ensemble profit earned by the long and short portfolios is M_t = M^+_t - M^-_t. Let z^{(i)}_t = p^{(i)}_{t+1} / p^{(i)}_t denote the price rising rate of stock i in the t-th holding period. Then, the rate of return of the ensemble portfolio is calculated as

    R_t = \frac{M_t}{\tilde{M}} = \sum_{i=1}^{I} b^{+(i)}_t z^{(i)}_t - \sum_{i=1}^{I'} b^{-(i)}_t z^{(i)}_t.    (5)
Insight I. As shown in Eq. (5), a positive profit, i.e., R_t > 0, means that the average price rising rate of the stocks in the long portfolio is higher than that in the short portfolio, i.e.,

    \sum_{i=1}^{I} b^{+(i)}_t z^{(i)}_t > \sum_{i=1}^{I'} b^{-(i)}_t z^{(i)}_t.    (6)

A profitable BWSL strategy must ensure that the stocks in the portfolio b^+ have a higher average price rising rate than the stocks in b^-. That is to say, even if the prices of all stocks in the market are falling, as long as we can ensure that the prices of the stocks in b^+ fall more slowly than those in b^-, we can still earn profits. On the contrary, even if the prices of all stocks are rising, if the stocks in b^- rise faster than those in b^+, our strategy still loses money. This characteristic implies that the absolute price rise or fall of stocks is not the main concern of our strategy; rather, the relative price relations among stocks are much more important. As a consequence, we must design a mechanism that describes the interrelationships of stock prices in our model for the BWSL strategy.
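To make Eqs. (1)-(5) concrete, here is a minimal sketch of the zero-investment return computation; the function and variable names are illustrative, not part of the paper.

```python
import numpy as np

def bwsl_return(p_t, p_t1, b_long, b_short):
    """Rate of return R_t of a zero-investment BWSL portfolio, per Eq. (5).

    p_t, p_t1: asset prices at times t and t+1.
    b_long, b_short: long/short investment proportions (each sums to 1).
    """
    z = p_t1 / p_t                    # price rising rates z_t^(i)
    return b_long @ z - b_short @ z   # R_t = sum b+ z - sum b- z

# Toy example: in a falling market the longs fall more slowly than the
# shorts, so the strategy still earns a positive return.
p_t  = np.array([10.0, 20.0, 30.0, 40.0])
p_t1 = np.array([ 9.8, 19.0, 27.0, 34.0])
b_long  = np.array([0.5, 0.5, 0.0, 0.0])   # "winners"
b_short = np.array([0.0, 0.0, 0.5, 0.5])   # "losers"
print(bwsl_return(p_t, p_t1, b_long, b_short))  # 0.09 > 0
```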
2.3 Optimization Objective
In order to ensure that our strategy considers both the return and the risk of an investment, we adopt the Sharpe ratio, a measure of risk-adjusted return developed by the Nobel laureate William F. Sharpe [21] in 1994, to evaluate the performance of our strategy.
[Figure 1: The framework of the AlphaStock model. Stock history states feed per-stock LSTM-HA networks, whose representations enter the CAAN, followed by the portfolio generator.]
Definition 8 (Sharpe Ratio). The Sharpe ratio is the average return in excess of the risk-free return per unit of volatility. Given a sequential investment that contains T holding periods, its Sharpe ratio is calculated as

    H_T = \frac{A_T - \Theta}{V_T},    (7)

where A_T is the average rate of return per period of the investment, V_T is the volatility used to measure the risk of the investment, and \Theta is a risk-free return rate, such as the return rate of a bank deposit.
Given a sequential investment with T holding periods, A_T is calculated as

    A_T = \frac{1}{T} \sum_{t=1}^{T} (R_t - TC_t),    (8)

where TC_t is the transaction cost in the t-th period. The volatility V_T in Eq. (7) is defined as

    V_T = \sqrt{\frac{\sum_{t=1}^{T} (R_t - \bar{R})^2}{T}},    (9)

where \bar{R} = \sum_{t=1}^{T} R_t / T is the average of R_t.
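As a quick illustration of Eqs. (7)-(9), the sketch below computes the Sharpe ratio from per-period returns; the per-period cost list and variable names are our assumptions.

```python
import numpy as np

def sharpe_ratio(returns, costs, risk_free=0.0):
    """Sharpe ratio H_T of a T-period investment, per Eqs. (7)-(9)."""
    returns = np.asarray(returns)
    a_t = (returns - np.asarray(costs)).mean()   # A_T, Eq. (8)
    v_t = returns.std()                          # V_T, Eq. (9)
    return (a_t - risk_free) / v_t               # H_T, Eq. (7)

print(sharpe_ratio([0.02, -0.01, 0.03, 0.01], costs=[0.001] * 4))
```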
For a T-period investment, the optimization objective of our strategy is to generate the long and short portfolio sequences B^+ = \{b^+_1, \ldots, b^+_T\} and B^- = \{b^-_1, \ldots, b^-_T\} that maximize the Sharpe ratio of the investment:

    \arg\max_{\{B^+, B^-\}} H_T(B^+, B^-).    (10)
Insight II. The Sharpe ratio evaluates the performance of a strategy from both the profit and the risk perspectives. This profit-risk balance characteristic requires our model to not only focus on maximizing the return rate R_t of each period, but also consider the long-term volatility of R_t across all periods of an investment. In other words, designing a far-sighted, steady investment strategy is more valuable than a short-sighted strategy with short-term high profits.
3 THE ALPHASTOCK MODEL
In this section, we propose a reinforcement learning (RL) based model called AlphaStock to implement a BWSL strategy with the Sharpe ratio defined in Eq. (7) as the optimization objective. As shown in Fig. 1, AlphaStock contains three components. The first component is an LSTM with History state Attention network (LSTM-HA). For each stock i, we use the LSTM-HA model to extract a stock representation r^{(i)} from its history states X^{(i)}. The second component is a Cross-Asset Attention Network (CAAN) that describes the interrelationships among the stocks. The CAAN takes as input the representations r^{(i)} of all stocks, and estimates a winner score s^{(i)} for every stock, which indicates the degree to which stock i is a winner. The third component is a portfolio generator, which calculates the investment proportions in b^+ and b^- according to the scores s^{(i)} of all stocks. We use reinforcement learning to optimize the three components end-to-end as a whole, where the Sharpe ratio of a sequential investment is maximized in a far-sighted way.
3.1 Raw Stock Features
The stock features used in our model fall into two categories. The first category is the trading features, which describe the trading information of a stock. At time t, the trading features include:

• Price Rising Rate (PR): the price rising rate of a stock during the last holding period, defined as p^{(i)}_t / p^{(i)}_{t-1} for stock i.
• Fine-grained Volatility (VOL): a holding period can be further divided into many sub-periods. We set one month as a holding period in our experiments, so a sub-period can be a trading day. VOL is defined as the standard deviation of the prices over all sub-periods from t-1 to t.

• Trade Volume (TV): the total quantity of the stock traded from t-1 to t. It reflects the market activity of a stock.
The second category is the company features, which describe the financial condition of the company that issues a stock. At time t, the company features include:

• Market Capitalization (MC): for stock i, it is defined as the product of the price p^{(i)}_t and the outstanding shares of the stock.

• Price-earnings Ratio (PE): the ratio of the market capitalization of a company to its annual earnings.

• Book-to-market Ratio (BM): the ratio of the book value of a company to its market value.

• Dividend (Div): the reward from the company's earnings to stock holders during the (t-1)-th holding period.
Since the values of these features are not on the same scale, we standardize them into Z-scores.
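For instance, a Z-score standardization can be sketched as below; standardizing each feature across stocks within one holding period is our assumption, as the paper does not specify the normalization axis.

```python
import numpy as np

def zscore(features):
    """Standardize each feature column to zero mean and unit variance
    across stocks within one holding period (assumed axis).

    features: array of shape (num_stocks, num_features).
    """
    mu = features.mean(axis=0)
    sigma = features.std(axis=0) + 1e-8   # guard against constant features
    return (features - mu) / sigma
```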
3.2 Stock Representations Extraction
The performance of a stock is closely related to its history states. In the AlphaStock model, we propose a Long Short-Term Memory with History state Attention (LSTM-HA) network to learn the representation of a stock from its history features.
The sequential representation. In the LSTM-HA network, we use the vector \tilde{x}_t to denote the history state of a stock at time t, which consists of the stock features given in Section 3.1. We call the last K historical holding periods at time t, i.e., the period from time t-K to time t, the look-back window of t. The history states of a stock in the look-back window are denoted as a sequence X = \{x_1, \ldots, x_k, \ldots, x_K\}^1, where x_k = \tilde{x}_{t-K+k}. Our model uses a Long Short-Term Memory (LSTM) network [10] to recursively encode X into a vector as

    h_k = \mathrm{LSTM}(h_{k-1}, x_k), \quad k \in [1, K],    (11)

where h_k is the hidden state encoded by the LSTM at step k. The hidden state h_K of the last step is used as a representation of the stock. It captures the sequential dependence among the elements of X.

1. We also use X to denote the matrix (x_k); the two definitions are interchangeable.
The history state attention. The representation h_K can fully exploit the sequential dependence of the elements of X, but the global and long-range dependences within X are not effectively modeled. Therefore, we adopt a history state attention to enhance h_K using all intermediate hidden states h_k. Specifically, following the standard attention mechanism [22], the history-state-attention-enhanced representation, denoted as r, is calculated as

    r = \sum_{k=1}^{K} \mathrm{ATT}(h_K, h_k) \, h_k,    (12)

where \mathrm{ATT}(\cdot, \cdot) is an attention function defined as

    \mathrm{ATT}(h_K, h_k) = \frac{\exp(\alpha_k)}{\sum_{k'=1}^{K} \exp(\alpha_{k'})}, \quad \alpha_k = w^\top \tanh\left(W^{(1)} h_k + W^{(2)} h_K\right).    (13)

Here, w, W^{(1)} and W^{(2)} are parameters to learn.

For the i-th stock at time t, the history-state-attention-enhanced representation is denoted as r^{(i)}_t. It contains both the sequential and the global dependences of stock i's history states from time t-K+1 to time t. In our model, the representation vectors of all stocks are extracted by the same LSTM-HA network. The parameters w, W^{(1)}, W^{(2)} and those of the LSTM network in Eq. (11) are shared by all stocks. In this way, the representations extracted by LSTM-HA are relatively stable and general across all stocks rather than specific to a particular one.
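The LSTM-HA network of Eqs. (11)-(13) can be sketched in a few lines of PyTorch as follows; the layer dimensions and names are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class LSTMHA(nn.Module):
    """A sketch of LSTM with History state Attention (Eqs. (11)-(13))."""

    def __init__(self, feat_dim, hidden_dim):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.W1 = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.W2 = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.w = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, x):                 # x: (num_stocks, K, feat_dim)
        h, _ = self.lstm(x)               # all hidden states h_1..h_K, Eq. (11)
        h_K = h[:, -1:, :]                # last hidden state
        alpha = self.w(torch.tanh(self.W1(h) + self.W2(h_K)))  # Eq. (13)
        att = torch.softmax(alpha, dim=1)
        return (att * h).sum(dim=1)       # r, Eq. (12)
```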
Remark. A major advantage of LSTM-HA is that it learns both the sequential and the global dependences from stock history states. Compared with the existing studies that only use a recurrent neural network to extract the sequential dependence in history states [3, 17] or directly stack history states as an input vector of an MLP [16] to learn the global dependence, our model describes stock histories more comprehensively. It is worth mentioning that LSTM-HA is also an open framework. Representations learned from other types of information sources, such as news, events and social media [4, 12, 27], could also be concatenated or attended with r^{(i)}_t.
3.3 Winners and Losers Selection
In traditional RL-based strategy models, the investment portfolio is often directly generated from the stock representations through a softmax normalization [3, 6, 16]. The drawback of this type of method is that it does not fully exploit the interrelationships among stocks, which, however, are very important for the BWSL strategy, as analyzed in Insight I of Section 2.2. In light of this, we propose a Cross-Asset Attention Network (CAAN) to describe the interrelationships among stocks.
The basic CAAN model. The CAAN model adopts the self-attention mechanism proposed in Ref. [24] to model the interrelationships among stocks. Specifically, given the stock representation r^{(i)} (we omit the time t without loss of generality), we calculate a query vector q^{(i)}, a key vector k^{(i)} and a value vector v^{(i)} for stock i as

    q^{(i)} = W^{(Q)} r^{(i)}, \quad k^{(i)} = W^{(K)} r^{(i)}, \quad v^{(i)} = W^{(V)} r^{(i)},    (14)

where W^{(Q)}, W^{(K)} and W^{(V)} are parameters to learn. The interrelationship of stock j to stock i is modeled by using the query q^{(i)} of stock i to query the key k^{(j)} of stock j, i.e.,

    \beta_{ij} = \frac{q^{(i)\top} \cdot k^{(j)}}{\sqrt{D_k}},    (15)

where D_k is a re-scaling parameter set following Ref. [24]. Then, we use the normalized interrelationships \{\beta_{ij}\} as weights to sum the values \{v^{(j)}\} of the other stocks into an attention score:

    a^{(i)} = \sum_{j=1}^{I} \mathrm{SATT}\left(q^{(i)}, k^{(j)}\right) \cdot v^{(j)},    (16)

where the self-attention function \mathrm{SATT}(\cdot, \cdot) is a softmax normalization of the interrelationships \beta_{ij}, i.e.,

    \mathrm{SATT}\left(q^{(i)}, k^{(j)}\right) = \frac{\exp(\beta_{ij})}{\sum_{j'=1}^{I} \exp(\beta_{ij'})}.    (17)

We use a fully connected layer to transform the attention vector a^{(i)} into a winner score as

    s^{(i)} = \mathrm{sigmoid}\left(w^{(s)\top} \cdot a^{(i)} + e^{(s)}\right),    (18)

where w^{(s)} and e^{(s)} are the connection weights and the bias to learn. The winner score s^{(i)}_t indicates the degree to which stock i is a winner in the t-th holding period. A stock with a higher score is more likely to be a winner.
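A minimal PyTorch sketch of the basic CAAN (Eqs. (14)-(18)) is given below, again with illustrative dimensions and names.

```python
import torch
import torch.nn as nn

class CAAN(nn.Module):
    """A sketch of the basic Cross-Asset Attention Network (Eqs. (14)-(18))."""

    def __init__(self, rep_dim, d_k):
        super().__init__()
        self.W_q = nn.Linear(rep_dim, d_k, bias=False)
        self.W_k = nn.Linear(rep_dim, d_k, bias=False)
        self.W_v = nn.Linear(rep_dim, d_k, bias=False)
        self.score = nn.Linear(d_k, 1)   # w^(s) and e^(s) in Eq. (18)
        self.d_k = d_k

    def forward(self, r):                # r: (num_stocks, rep_dim)
        q, k, v = self.W_q(r), self.W_k(r), self.W_v(r)       # Eq. (14)
        beta = q @ k.T / self.d_k ** 0.5                      # Eq. (15)
        a = torch.softmax(beta, dim=-1) @ v                   # Eqs. (16)-(17)
        return torch.sigmoid(self.score(a)).squeeze(-1)       # s, Eq. (18)
```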
Incorporating the price rising rank prior. In the basic CAAN, the interrelationships modeled by Eq. (15) are directly learned from data. In fact, we can use prior knowledge to help our model learn the stock interrelationships. We use c^{(i)}_{t-1} to denote the rank of the price rising rate of stock i in the last holding period (from t-1 to t). Inspired by methods for modeling positional information in the NLP field, we use the relative positions of stocks on the coordinate axis of c^{(i)}_{t-1} as prior knowledge of the stock interrelationships. Specifically, given two stocks i and j, we calculate their discrete relative distance on the coordinate axis of c^{(i)}_{t-1} as

    d_{ij} = \left\lfloor \frac{c^{(i)}_{t-1} - c^{(j)}_{t-1}}{Q} \right\rfloor,    (19)

where Q is a preset quantization coefficient. We use a lookup matrix L = (l_1, \ldots, l_L) to represent each discretized value of d_{ij}. Using d_{ij} as the index, the corresponding column vector l_{d_{ij}} is an embedding vector of the relative distance d_{ij}.

For a pair of stocks i and j, we calculate a prior relation coefficient \psi_{ij} using l_{d_{ij}} as

    \psi_{ij} = \mathrm{sigmoid}\left(w^{(L)\top} l_{d_{ij}}\right),    (20)

where w^{(L)} is a learnable parameter. The relationship between i and j estimated by Eq. (15) is then rewritten as

    \beta_{ij} = \frac{\psi_{ij} \left(q^{(i)\top} \cdot k^{(j)}\right)}{\sqrt{D_k}}.    (21)

In this way, the relative positions of stocks in the price rising rate rank are introduced as weights to enhance or weaken the attention coefficients. Stocks with similar history price rising rates will have a stronger interrelationship in the attention, and thus will have similar winner scores.
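The rank prior of Eqs. (19)-(21) can be sketched as an embedding lookup over quantized rank differences; the number of bins and the clamping of out-of-range indices below are our assumptions.

```python
import torch
import torch.nn as nn

class RankPrior(nn.Module):
    """A sketch of the prior relation coefficients psi_ij (Eqs. (19)-(20))."""

    def __init__(self, num_bins, emb_dim, Q):
        super().__init__()
        self.L = nn.Embedding(num_bins, emb_dim)      # lookup matrix L
        self.w_L = nn.Linear(emb_dim, 1, bias=False)  # w^(L)
        self.Q = Q
        self.num_bins = num_bins

    def forward(self, ranks):                         # ranks: (num_stocks,)
        d = (ranks[:, None] - ranks[None, :]) // self.Q        # Eq. (19)
        d = d.clamp(0, self.num_bins - 1).long()               # assumed clamp
        psi = torch.sigmoid(self.w_L(self.L(d))).squeeze(-1)   # Eq. (20)
        return psi    # multiplied into beta_ij before softmax, Eq. (21)
```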
Remark. As shown in Eq. (16), for each stock i, the winner score s^{(i)} is calculated according to the attention over all other stocks. In this way, the interrelationships among all stocks are incorporated into CAAN. This special attention mechanism meets the model design requirement of Insight I in Section 2.2.
3.4 Portfolio Generator
Given the winner scores \{s^{(1)}, \ldots, s^{(i)}, \ldots, s^{(I)}\} of the I stocks, our AlphaStock model generally buys the stocks with high winner scores and sells those with low winner scores. Specifically, we first sort the stocks in descending order of their winner scores and obtain the sequence number o^{(i)} of each stock i. Let G denote the preset size of the portfolios b^+ and b^-. If o^{(i)} \in [1, G], stock i enters the long portfolio b^+, with the investment proportion calculated as

    b^{+(i)} = \frac{\exp(s^{(i)})}{\sum_{o^{(i')} \in [1, G]} \exp(s^{(i')})}.    (22)

If o^{(i)} \in (I-G, I], stock i enters the short portfolio b^- with the proportion

    b^{-(i)} = \frac{\exp(1 - s^{(i)})}{\sum_{o^{(i')} \in (I-G, I]} \exp(1 - s^{(i')})}.    (23)

The remaining stocks are not selected, for lack of clear buy/sell signals. For simplicity, we can use one vector to record all the information of the two portfolios. That is, we form the vector b^c of length I, with b^{c(i)} = b^{+(i)} if o^{(i)} \in [1, G], b^{c(i)} = b^{-(i)} if o^{(i)} \in (I-G, I], and b^{c(i)} = 0 otherwise, i = 1, \ldots, I. In what follows, we use b^c and \{b^+, b^-\} interchangeably as the output of our AlphaStock model, for clarity.
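The portfolio generator of Eqs. (22)-(23) reduces to a sort plus two softmaxes, as sketched below with illustrative names.

```python
import numpy as np

def make_portfolios(scores, G):
    """A sketch of the portfolio generator (Eqs. (22)-(23)).

    scores: winner scores s^(i) of the I stocks; G: portfolio size.
    Returns the long proportions b+ and the short proportions b-.
    """
    order = np.argsort(-scores)            # descending by winner score
    top, bottom = order[:G], order[-G:]
    b_plus = np.zeros_like(scores)
    b_minus = np.zeros_like(scores)
    b_plus[top] = np.exp(scores[top]) / np.exp(scores[top]).sum()  # Eq. (22)
    b_minus[bottom] = (np.exp(1 - scores[bottom])
                       / np.exp(1 - scores[bottom]).sum())         # Eq. (23)
    return b_plus, b_minus
```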
3.5 Optimization via Reinforcement Learning
We frame the AlphaStock strategy as an RL game with discrete agent actions to optimize the model parameters, where a T-period investment is modeled as a state-action-reward trajectory \pi of an RL agent, i.e., \pi = \{state_1, action_1, reward_1, \ldots, state_t, action_t, reward_t, \ldots, state_T, action_T, reward_T\}. The state_t is the history market state observed at t, which is expressed as X_t = (X^{(i)}_t). The action_t is an I-dimensional binary vector, of which the element action^{(i)}_t = 1 when the agent invests in stock i at t, and 0 otherwise.^2 According to state_t, the agent invests in stock i with the probability \Pr(action^{(i)}_t = 1), which is determined by AlphaStock as

    \Pr\left(action^{(i)}_t = 1 \mid X_t, \theta\right) = \frac{1}{2} G^{(i)}(X_t, \theta) = \frac{1}{2} b^{c(i)}_t,    (24)

where G^{(i)}(X_t, \theta) is the part of AlphaStock that generates b^{c(i)}_t, \theta denotes the model parameters, and the factor 1/2 ensures that \sum_{i=1}^{I} \Pr(action^{(i)}_t = 1) = 1. Let H_\pi denote the Sharpe ratio of \pi; then reward_t is the contribution of action_t to H_\pi, with \sum_{t=1}^{T} reward_t = H_\pi.
Over all possible trajectories \pi, the expected reward of the RL agent is

    J(\theta) = \int_{\pi} H_\pi \Pr(\pi \mid \theta) \, d\pi,    (25)

where \Pr(\pi \mid \theta) is the probability of generating \pi given \theta. The objective of the RL model optimization is then to find the optimal parameters \theta^* = \arg\max_\theta J(\theta).
2. In the RL game, the actions of the agent are discrete, with the probability b^{c(i)}_t / 2 indicating whether to invest in stock i. In a real investment, we allocate capital to stock i according to the continuous proportion b^{c(i)}_t. This approximation is for the sake of problem solving.
We use the gradient ascent approach to iteratively optimize \theta at round \tau as \theta_\tau = \theta_{\tau-1} + \eta \nabla J(\theta)|_{\theta = \theta_{\tau-1}}, where \eta is a learning rate. Given a training dataset that contains N trajectories \{\pi_1, \ldots, \pi_n, \ldots, \pi_N\}, \nabla J(\theta) can be approximately calculated as [23]

    \nabla J(\theta) = \int_{\pi} H_\pi \Pr(\pi \mid \theta) \, \nabla \log \Pr(\pi \mid \theta) \, d\pi
                   \approx \frac{1}{N} \sum_{n=1}^{N} H_{\pi_n} \sum_{t=1}^{T_n} \sum_{i=1}^{I} \nabla_\theta \log \Pr\left(action^{(i)}_t = 1 \mid X^{(n)}_t, \theta\right).    (26)

The gradient \nabla_\theta \log \Pr(action^{(i)}_t = 1 \mid X^{(n)}_t, \theta) = \nabla_\theta \log G^{(i)}(X^{(n)}_t, \theta), which is calculated by the back-propagation algorithm.
In order to ensure that the proposed model can beat the market, we introduce the threshold method [23] into our reinforcement learning. The gradient \nabla J(\theta) in Eq. (26) is then rewritten as

    \nabla J(\theta) = \frac{1}{N} \sum_{n=1}^{N} \left(H_{\pi_n} - H_0\right) \sum_{t=1}^{T_n} \sum_{i=1}^{I} \nabla_\theta \log G^{(i)}\left(X^{(n)}_t, \theta\right),    (27)

where the threshold H_0 is set to the Sharpe ratio of the overall market. In this way, the gradient ascent only encourages parameters that outperform the market.
Remark. Eq. (27) uses (H_{\pi_n} - H_0) to integrally weight the gradients \nabla_\theta \log G of all holding periods in \pi_n. The reward is not directly given to any isolated step of \pi_n but to all steps of \pi_n. This feature of our model meets the far-sight requirement of Insight II in Section 2.2.
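A sketch of the policy-gradient update of Eq. (27) is shown below; it assumes a hypothetical `model` that maps a market state X_t to the portfolio vector b^c_t (i.e., G(X_t, θ)), and that the Sharpe ratio H_π of each trajectory is precomputed.

```python
import torch

def reinforce_step(model, optimizer, trajectories, H_market):
    """One gradient-ascent step of Eq. (27), expressed as loss minimization.

    trajectories: list of (states, H_pi) pairs, where `states` is the list
    of market states X_t of one investment and H_pi its Sharpe ratio.
    H_market: the market Sharpe ratio H_0 used as the threshold.
    """
    optimizer.zero_grad()
    loss = 0.0
    for states, H_pi in trajectories:
        log_probs = 0.0
        for X_t in states:
            b_c = model(X_t)                 # b^c_t = G(X_t, theta)
            invested = b_c > 0               # stocks selected into b+ or b-
            log_probs = log_probs + torch.log(b_c[invested] / 2).sum()
        # maximizing (H_pi - H_0) * log-prob == minimizing its negation
        loss = loss - (H_pi - H_market) * log_probs
    (loss / len(trajectories)).backward()
    optimizer.step()
```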
4 MODEL INTERPRETATION
In the AlphaStock model, the LSTM-HA and CAAN networks cast the raw stock features into winner scores. The final investment portfolios are directly generated from the winner scores. A natural follow-up question is: what kind of stocks would be selected as winners by AlphaStock? To answer this question, we propose a sensitivity analysis method [1, 25, 26] to interpret how the history features of a stock influence its winner score in our model.
We use s = F(X) to express the function from the history features X of a stock to its winner score s. In our model, s = F(X) is the combined network of LSTM-HA and CAAN. We use x_q to denote an element of X, i.e., the value of one feature (defined in Section 3.1) at a particular time period of the look-back window, e.g., the price rising rate of a stock three months ago.

Given the history state X of a stock, the influence of x_q on its winner score s, i.e., the sensitivity of s to x_q, is expressed as

    \delta_{x_q}(X) = \lim_{\Delta x_q \to 0} \frac{F(X) - F\left(x_q + \Delta x_q, X_{\neg x_q}\right)}{x_q - (x_q + \Delta x_q)} = \frac{\partial F(X)}{\partial x_q},    (28)

where X_{\neg x_q} denotes the elements of X except x_q.
For all possible stock states in a market, the average influence of the stock state feature x_q on the winner score s is

    \bar{\delta}_{x_q} = \int_{D_X} \Pr(X) \, \delta_{x_q}(X) \, d\sigma,    (29)

where \Pr(X) is the probability density function of X, and \int_{D_X} \cdot \, d\sigma is an integral over all possible values of X. According to the Law of Large Numbers, given a dataset that contains the history states of I stocks over N holding periods, \bar{\delta}_{x_q} is approximated as

    \bar{\delta}_{x_q} = \frac{1}{I \times N} \sum_{n=1}^{N} \sum_{i=1}^{I} \delta_{x_q}\left(X^{(i)}_n \mid X^{(\neg i)}_n\right),    (30)

where X^{(i)}_n is the history state of the i-th stock in the n-th holding period, and X^{(\neg i)}_n denotes the concurrent history states of the other stocks.
We use \bar{\delta}_{x_q} to measure the overall influence of a stock feature x_q on the winner score. A positive value of \bar{\delta}_{x_q} indicates that our model tends to take a stock as a winner when x_q is large, and vice versa. For example, in the experiments to follow, we obtain \bar{\delta} < 0 for the fine-grained volatility feature, which means that our model tends to select low-volatility stocks as winners.
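Since δ in Eq. (28) is just the gradient of the winner score with respect to an input element, the average influence of Eq. (30) can be estimated with automatic differentiation, as sketched below; summing the scores before the backward pass, which also picks up cross-stock terms through CAAN, is our simplification.

```python
import torch

def average_sensitivity(model, states):
    """Approximate Eq. (30): average gradient of winner scores w.r.t. each
    input element, assuming `model` maps the states X of all stocks in one
    holding period to their winner scores s.
    """
    total = 0.0
    for X in states:                     # one (I, K, feat_dim) tensor per period
        X = X.clone().requires_grad_(True)
        s = model(X)                     # winner scores of all stocks
        s.sum().backward()               # gradients w.r.t. every x_q
        total = total + X.grad.mean(dim=0)   # average over stocks
    return total / len(states)           # (K, feat_dim) influence map
```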
5 EXPERIMENT
In this section, we empirically evaluate our AlphaStock model with data from the U.S. markets. Data from the Chinese stock markets are also used as a robustness check.
5.1 Data and Experimental Setup
The data of the U.S. stock market used in our experiments are obtained from Wharton Research Data Services (WRDS)^3. The time range of the data is from Jan. 1970 to Dec. 2016. This long time range covers several well-known market events, such as the dot-com bubble from 1995 to 2000 and the subprime mortgage crisis from 2007 to 2009, which enables evaluation over diverse market states. The stocks are from four markets: NYSE, NYSE American, NASDAQ, and NYSE Arca. The number of valid stocks is more than 1,000 per year. We use the data from Jan. 1970 to Jan. 1990 as the training and validation set, and the rest as the test set.
In the experiments, the holding period is set to one month, and the number of holding periods T in an investment is set to 12, i.e., the Sharpe ratio reward is calculated every 12 months for RL. The look-back window size K is set to 12, i.e., we look back on the 12-month history states of stocks. The size G of the portfolios is set to 1/4 of the number of all stocks.
5.2 Baseline Methods
AlphaStock is compared with a number of baselines including:

• Market: the uniform Buy-And-Hold strategy [13];
• Cross Sectional Momentum (CSM) [15] and Time Series Momentum (TSM) [18]: two classic momentum strategies;
• Robust Median Reversion (RMR): a recently reported reversion strategy [13];
• Fuzzy Deep Direct Reinforcement (FDDR): a recently reported RL-based BWSL strategy [3];
• AlphaStock-NC (AS-NC): the AlphaStock model without the CAAN, where the outputs of LSTM-HA are directly used as the inputs of the portfolio generator;
• AlphaStock-NP (AS-NP): the AlphaStock model without the price rising rank prior, i.e., using the basic CAAN.

The baselines TSM/CSM/RMR represent traditional financial strategies: TSM and CSM are based on the momentum logic, and RMR is based on the reversion logic. FDDR represents the state-of-the-art RL-based BWSL strategy. AS-NC and AS-NP are used as contrasts to verify the effectiveness of the CAAN and the price rising rank prior, respectively. The Market is used to indicate the states of the market.

3. https://wrds-web.wharton.upenn.edu/wrds/
5.3 Evaluation Measures
The most standard evaluation measure for investment strategies is Cumulative Wealth, which is defined as

    CW_T = \prod_{t=1}^{T} \left(R_t + 1 - TC\right),    (31)

where R_t is the rate of return defined in Eq. (5) and the transaction cost TC is set to 0.1% in our experiments, following Ref. [3].
As the preferences of different investors vary, we also use several other evaluation measures:

1) Annualized Percentage Rate (APR) is an annualized average rate of return. It is defined as APR_T = A_T \times N_Y, where N_Y is the number of holding periods in a year.

2) Annualized Volatility (AVOL) is an annualized average volatility. It is defined as AVOL_T = V_T \times \sqrt{N_Y} and is used to measure the average risk of a strategy per unit time.

3) Annualized Sharpe Ratio (ASR) is the risk-adjusted annualized return based on APR and AVOL, defined as ASR_T = APR_T / AVOL_T.

4) Maximum DrawDown (MDD) is the maximum loss from a peak to a trough of a portfolio before a new peak is attained. It is another way to measure investment risk. It is formally defined as

    MDD_T = \max_{\tau \in [1,T]} \frac{\max_{t \in [1,\tau]} APR_t - APR_\tau}{\max_{t \in [1,\tau]} APR_t}.    (32)

5) Calmar Ratio (CR) is the risk-adjusted APR based on Maximum DrawDown. It is calculated as CR_T = APR_T / MDD_T.

6) Downside Deviation Ratio (DDR) measures the downside risk of a strategy as the average of the returns when they fall below a minimum acceptable return (MAR). It is the risk-adjusted APR based on the Downside Deviation:

    DDR_T = \frac{APR_T}{\text{Downside Deviation}} = \frac{APR_T}{\sqrt{E\left[\min(R_t, MAR)^2\right]}}, \quad t \in [1, T].    (33)

In our experiments, the MAR is set to zero.
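The measures of this subsection can be computed as sketched below; note that the drawdown here is taken over the wealth curve, a common practical variant of Eq. (32), and all names are illustrative.

```python
import numpy as np

def evaluate(returns, periods_per_year=12, tc=0.001, mar=0.0):
    """A sketch of the evaluation measures of Section 5.3 from per-period
    returns R_t, under the stated assumptions."""
    r = np.asarray(returns)
    cw = np.prod(1 + r - tc)                        # Cumulative Wealth, Eq. (31)
    apr = (r - tc).mean() * periods_per_year        # APR
    avol = r.std() * np.sqrt(periods_per_year)      # AVOL
    asr = apr / avol                                # ASR
    wealth = np.cumprod(1 + r - tc)                 # wealth curve for drawdown
    peaks = np.maximum.accumulate(wealth)
    mdd = ((peaks - wealth) / peaks).max()          # Maximum DrawDown
    cr = apr / mdd if mdd > 0 else np.inf           # Calmar Ratio
    dd = np.sqrt(np.mean(np.minimum(r - mar, 0) ** 2))  # downside deviation
    ddr = apr / dd if dd > 0 else np.inf            # DDR
    return dict(CW=cw, APR=apr, AVOL=avol, ASR=asr, MDD=mdd, CR=cr, DDR=ddr)
```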
5.4 Performance in U.S. Markets
Fig. 2 compares the cumulative wealth of AlphaStock and the baselines. In general, the performance of AlphaStock (AS) is much better than that of the baselines, which verifies the effectiveness of our model. Some interesting observations are highlighted as follows:

1) AlphaStock performs better than AlphaStock-NP, and AlphaStock-NP performs better than AlphaStock-NC, which indicates that the stock rank priors and the interrelationships modeled by CAAN are very helpful for the BWSL strategy.

2) FDDR is also a deep RL investment strategy, which extracts fuzzy representations of stocks using a recurrent deep neural network. In our experiments, AlphaStock-NC performs better than FDDR, indicating the advantage of our LSTM-HA network in stock representation learning.
[Figure 2: The Cumulative Wealth in U.S. markets, comparing AS, AS-NP, AS-NC, FDDR, RMR, CSM, TSM, and Market over 1990-2015.]
Table 1: Performance comparison on U.S. markets.

          APR     AVOL    ASR     MDD     CR      DDR
Market    0.042   0.174   0.239   0.569   0.073   0.337
TSM       0.047   0.223   0.210   0.523   0.090   0.318
CSM       0.044   0.096   0.456   0.126   0.350   0.453
RMR       0.074   0.134   0.551   0.098   1.249   0.757
FDDR      0.063   0.056   1.141   0.070   0.900   2.028
AS-NC     0.101   0.052   1.929   0.068   1.492   1.685
AS-NP     0.133   0.065   2.054   0.033   3.990   4.618
AS        0.143   0.067   2.132   0.027   5.296   6.397
3) The TSM strategy performs well in the bull market but very poorly in the bear market (the financial crises in 2003 and 2008), while RMR shows the opposite behavior. This implies that traditional financial strategies can only adapt to a certain type of market state, lacking an effective forward-looking mechanism. This defect is greatly addressed by the RL strategies, including AlphaStock and FDDR, which perform much more stably across different market states.
The performances evaluated by the other measures are listed in Table 1. For AVOL and MDD, lower values indicate better performance, while for the other measures higher values are better. As shown in Table 1, AlphaStock, AlphaStock-NP and AlphaStock-NC outperform the other baselines on all measures, confirming the effectiveness and robustness of our strategy. The performances of AlphaStock, AlphaStock-NP and AlphaStock-NC are close in terms of ASR, which might be because all of these models are optimized to maximize the Sharpe ratio. The profits of AlphaStock and AlphaStock-NP measured by APR are higher than that of AlphaStock-NC, at the cost of slightly higher volatility.

More interestingly, the performance of AlphaStock measured by MDD, CR and DDR is much better than that of AlphaStock-NP. Similar results can be observed by comparing the MDD, CR and DDR of AlphaStock-NP and AlphaStock-NC. These three measures indicate the extreme loss of an investment, i.e., the maximum drawdown and the returns below the minimum acceptable threshold. The results suggest that the extreme loss control abilities of the three models rank as AlphaStock > AlphaStock-NP > AlphaStock-NC, which highlights the contribution of the CAAN component and the price rising rank prior. Indeed, CAAN with the price rising rank prior fully exploits the ranking relationships among stocks. This mechanism can protect our strategy from the error of "buying losers and selling winners", and therefore can greatly reduce extreme losses in investments. In summary, AlphaStock is a very competitive strategy for investors with different types of preferences.
[Figure 3: Influence of history trading features on winner scores: (a) Price Rising, (b) Trade Volume, (c) Fine-grained Volatility, each over the 12 months before t; (d) Company Features (MC, PE, BM, DIV).]
Table 2: Performance comparison on Chinese markets.

          APR     AVOL    ASR     MDD     CR      DDR
Market    0.037   0.260   0.141   0.595   0.062   0.135
TSM       0.078   0.420   0.186   0.533   0.147   0.225
CSM       0.023   0.392   0.058   0.633   0.036   0.064
RMR       0.079   0.279   0.282   0.423   0.186   0.289
FDDR      0.084   0.152   0.553   0.231   0.365   0.801
AS-NC     0.104   0.113   0.916   0.163   0.648   1.103
AS-NP     0.122   0.105   1.163   0.136   0.895   1.547
AS        0.125   0.103   1.220   0.135   0.296   1.704
5.5 Performance in Chinese Markets
In order to further test the robustness of our model, we run back-test experiments of our model and the baselines on the Chinese stock markets, which contain two exchanges: the Shanghai Stock Exchange (SSE) and the Shenzhen Stock Exchange (SZSE). The data are obtained from the WIND database^4. The stocks are the RMB-priced ordinary shares (A-shares), and the total number of stocks used in the experiments is 1,131. The time range of our data is from Jun. 2005 to Dec. 2018, with the period from Jun. 2005 to Dec. 2011 used as the training/validation set and the rest as the test set. Since short selling is not allowed in the Chinese markets, we only use the b^+ portfolio in this experiment.
The experimental results are given in Table 2. From the table we can see that the performances of AlphaStock, AlphaStock-NP and AlphaStock-NC are again better than those of the other baselines. This verifies the effectiveness of our model on the Chinese markets. Further comparing Table 2 with Table 1, it turns out that the risk of our model measured by AVOL and MDD is higher in the Chinese markets than in the U.S. markets. This might be attributable to the market imperfections of emerging countries like China, with more speculative capital but less effective governance. The lack of a short sell mechanism also contributes to the imbalance of market forces. The AVOL and MDD of the Market and the other baselines in the Chinese markets are also higher than those in the U.S. markets. Compared with these baselines, the risk control ability of our model is still competitive. To sum up, the experimental results in Table 2 indicate the robustness of our model on emerging markets.
5.6 Investment Strategies Interpretation
Here, we try to interpret the underlying investment strategies of AlphaStock, which is crucial for practitioners to better understand the model. To this end, we use \bar{\delta}_{x_q} in Eq. (30) to measure the influence of the stock features defined in Section 3.1 on AlphaStock's winner selection. Figures 3(a)-3(c) plot the influences of the trading features. The vertical axis denotes the influence strength indicated by \bar{\delta}_{x_q}, and the horizontal axis denotes how many months before the trading time. For example, the bar indexed by "-12" on the horizontal axis of Fig. 3(a) denotes the influence of the stock price rising rate (PR) at the time of twelve months ago.

4. http://www.wind.com.cn/en/Default.html
As shown in Fig. 3(a), the influence of the history price rising rate is heterogeneous along the time axis. The PR in the long-term months, i.e., 9 to 11 months before t, has a positive influence on winner scores, but for the short-term months, i.e., 1 to 8 months before t, the influence becomes negative. This result indicates that our model tends to buy stocks with a rapid long-term price increase (solid excellence) or a rapid short-term price retracement (recently oversold and hence undervalued). It implies that AlphaStock behaves like a mixed strategy of long-term momentum and short-term reversion. Moreover, since a price rise is usually accompanied by frequent stock trading, Fig. 3(b) shows that the \bar{\delta}_{x_q} of the trade volume (TV) has a similar tendency to that of the price rising rate (PR). Finally, as shown in Fig. 3(c), the fine-grained volatility (VOL) has a negative influence on winner scores for all history months. This means that our model tends to select low-volatility stocks as winners, which indeed explains why AlphaStock can adapt to diverse market states.
Fig. 3(d) further exhibits the average influences of the different company features on the winner score, i.e., the \bar{\delta}_{x_q} averaged over all history months. It turns out that Market Capitalization (MC), Price-earnings Ratio (PE), and Book-to-market Ratio (BM) have positive influences. These three features are important valuation factors for a listed company, which indicates that AlphaStock tends to select companies with sound fundamental values. In contrast, dividends mean that a part of the company's value is returned to shareholders, which could reduce the intrinsic value of a stock. That is why the influence of Dividends (DIV) is negative in our model.
To sum up, while AlphaStock is an AI-enabled investment strategy, the interpretation analysis proposed in Section 4 can help extract investment logics from it. Specifically, AlphaStock suggests selecting as winners the stocks with high long-term growth, low volatility, high intrinsic value, and that have been undervalued recently.
6 RELATED WORKS
Our work is related to the following research directions.
Financial Investment Strategy: Classic financial investment strategies include Momentum, Mean Reversion, and Multi-factor. In the first work on BWSL [14], Jegadeesh and Titman found that "momentum" could be used to select winners and losers. A momentum strategy buys as winners the assets that have had high returns over a past period, and sells as losers those that have had poor returns over the same period. Classic momentum strategies include Cross Sectional Momentum (CSM) [15] and Time Series Momentum (TSM) [18]. The mean reversion strategy [20] considers that asset prices always return to their means over a past period, so it buys assets priced under their historical means and sells those priced above. The multi-factor model [7] uses factors to compute a valuation for each asset, and buys/sells those assets priced under/above their valuations. Most of these financial investment strategies can only exploit a certain factor of financial markets and thus might fail in complex market environments.
Deep Learning in Finance: In recent years, deep learning approaches have begun to be applied in financial areas. In the literature, L. Zhang et al. proposed to exploit frequency information to predict stock prices [11]. News and social media were used for price prediction in Refs. [12, 27]. Information about events and corporation relationships was used to predict stock prices in Refs. [2, 4]. Most of these works focus on price prediction rather than end-to-end investment portfolio generation as we do.
Reinforcement Learning in Finance: The RL approaches used in investment strategies fall into two categories: value-based and policy-based [8]. Value-based approaches learn a critic to describe the expected outcomes of markets given trading actions. Typical value-based approaches in investment strategies include Q-learning [19] and deep Q-learning [16]. A defect of value-based approaches is that the market environment is too complex to be approximated by a critic. Therefore, policy-based approaches are considered more suitable for financial markets [8]. The AlphaStock model also belongs to this category. A classic policy-based RL algorithm in investment strategy is Recurrent Reinforcement Learning (RRL) [17]. The FDDR model [3] extends the RRL framework using deep neural networks. In the Investor-Imitator model [6], a policy-based deep RL framework was proposed to imitate the behaviors of different types of investors. Compared with RRL and its deep learning extensions, which focus on exploiting the sequential dependence in financial signals, our AlphaStock model pays more attention to the interrelationships among assets. Moreover, deep RL approaches are often hard to deploy in real-life applications due to their unexplainable deep network structures. The interpretation tools offered by our model help address this problem.
7 CONCLUSIONS
In this paper, we proposed an RL-based deep attention network to implement a BWSL strategy called AlphaStock. We also designed a sensitivity analysis method to interpret the investment logics of our model. Compared with existing RL-based investment strategies, AlphaStock fully exploits the interrelationships among stocks, and opens a door to solving the "black box" problem of using deep learning models in financial markets. The back-testing and simulation experiments on U.S. and Chinese stock markets showed that AlphaStock performs much better than the competing strategies. Interestingly, AlphaStock suggests buying stocks with high long-term growth, low volatility, high intrinsic value, and that have been undervalued recently.
ACKNOWLEDGMENTS
J. Wang’s work was partially supported by the National Natu-
ral Science Foundation of China (NSFC) (61572059), the Science
and Technology Project of Beijing (Z181100003518001), and the
CETC Union Fund (6141B08080401). Y. Zhang’s work was partially
supported by the National Key Research and Development Pro-
gram of China under Grant (2017YFC0820405) and the Fundamen-
tal Research Funds for the Central Universities. K. Tang’s work
was partially supported the National Social Sciences Foundation
of China (No.14BJL028). J. Wu’s work was partially supported by
NSFC (71725002, 71531001, U1636210).
REFERENCES
[1] Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. 2018. Sanity checks for saliency maps. In NIPS'18. 9525–9536.
[2] Yingmei Chen, Zhongyu Wei, and Xuanjing Huang. 2018. Incorporating Corporation Relationship via Graph Convolutional Neural Networks for Stock Price Prediction. In CIKM'18. ACM, 1655–1658.
[3] Yue Deng, Feng Bao, Youyong Kong, Zhiquan Ren, and Qionghai Dai. 2017. Deep direct reinforcement learning for financial signal representation and trading. IEEE TNNLS 28, 3 (2017), 653–664.
[4] Xiao Ding, Yue Zhang, Ting Liu, and Junwen Duan. 2015. Deep learning for event-driven stock prediction. In IJCAI'15. 2327–2333.
[5] Xiao Ding, Yue Zhang, Ting Liu, and Junwen Duan. 2016. Knowledge-driven event embedding for stock prediction. In COLING'16. 2133–2142.
[6] Yi Ding, Weiqing Liu, Jiang Bian, Daoqiang Zhang, and Tie-Yan Liu. 2018. Investor-Imitator: A Framework for Trading Knowledge Extraction. In KDD'18. ACM, 1310–1319.
[7] Eugene F Fama and Kenneth R French. 1996. Multifactor explanations of asset pricing anomalies. J. Finance 51, 1 (1996), 55–84.
[8] Thomas G Fischer. 2018. Reinforcement learning in financial markets - a survey. Technical Report. FAU Discussion Papers in Economics.
[9] Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and Dino Pedreschi. 2018. A survey of methods for explaining black box models. ACM Computing Surveys (CSUR) 51, 5 (2018), 93.
[10] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation 9, 8 (1997), 1735–1780.
[11] Hao Hu and Guo-Jun Qi. 2017. State-Frequency Memory Recurrent Neural Networks. In ICML'17. 1568–1577.
[12] Ziniu Hu, Weiqing Liu, Jiang Bian, Xuanzhe Liu, and Tie-Yan Liu. 2018. Listening to chaotic whispers: A deep learning framework for news-oriented stock trend prediction. In WSDM'18. ACM, 261–269.
[13] Dingjiang Huang, Junlong Zhou, Bin Li, Steven CH Hoi, and Shuigeng Zhou. 2016. Robust median reversion strategy for online portfolio selection. IEEE TKDE 28, 9 (2016), 2480–2493.
[14] Narasimhan Jegadeesh and Sheridan Titman. 1993. Returns to buying winners and selling losers: Implications for stock market efficiency. J. Finance 48, 1 (1993), 65–91.
[15] Narasimhan Jegadeesh and Sheridan Titman. 2002. Cross-sectional and time-series determinants of momentum returns. RFS 15, 1 (2002), 143–157.
[16] Olivier Jin and Hamza El-Saawy. 2016. Portfolio Management using Reinforcement Learning. Technical Report. Stanford University.
[17] John Moody, Lizhong Wu, Yuansong Liao, and Matthew Saffell. 1998. Performance functions and reinforcement learning for trading systems and portfolios. Journal of Forecasting 17, 5-6 (1998), 441–470.
[18] Tobias J Moskowitz, Yao Hua Ooi, and Lasse Heje Pedersen. 2012. Time series momentum. J. Financial Economics 104, 2 (2012), 228–250.
[19] Ralph Neuneier. 1995. Optimal Asset Allocation using Adaptive Dynamic Programming. In NIPS'95.
[20] James M Poterba and Lawrence H Summers. 1988. Mean reversion in stock prices: Evidence and implications. J. Financial Economics 22, 1 (1988), 27–59.
[21] William F Sharpe. 1994. The Sharpe ratio. JPM 21, 1 (1994), 49–58.
[22] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to Sequence Learning with Neural Networks. In NIPS'14. 3104–3112.
[23] Richard S Sutton and Andrew G Barto. 2018. Reinforcement Learning: An Introduction. MIT Press.
[24] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS'17. 5998–6008.
[25] Jingyuan Wang, Qian Gu, Junjie Wu, Guannan Liu, and Zhang Xiong. 2016. Traffic speed prediction and congestion source exploration: A deep learning method. In ICDM'16. IEEE, 499–508.
[26] Jingyuan Wang, Ze Wang, Jianfeng Li, and Junjie Wu. 2018. Multilevel wavelet decomposition network for interpretable time series analysis. In KDD'18. ACM, 2437–2446.
[27] Yumo Xu and Shay B Cohen. 2018. Stock movement prediction from tweets and historical prices. In ACL'18, Vol. 1. 1970–1979.