Computational Economics (2024) 64:2263–2354
https://doi.org/10.1007/s10614-023-10502-3
High‑Frequency Trading inBond Returns: AComparison
Across Alternative Methods andFixed‑Income Markets
DavidAlaminos1 · MaríaBelénSalas2,3· ManuelA.Fernández‑Gámez2,3
Accepted: 16 October 2023 / Published online: 2 December 2023
© The Author(s) 2023
Abstract
A properly performing and efficient bond market is widely considered important for
the smooth functioning of trading systems in general. An important feature of the
bond market for investors is its liquidity. High-frequency trading employs sophis-
ticated algorithms to explore numerous markets, such as fixed-income markets. In
this trading, transactions are processed more quickly, and the volume of trades rises
significantly, improving liquidity in the bond market. This paper presents a com-
parison of neural networks, fuzzy logic, and quantum methodologies for predicting
bond price movements through a high-frequency strategy in advanced and emerging countries. Our results indicate that, of the selected methods, QGA, DRCNN and DLNN-GA can correctly interpret the expected future bond price direction and rate changes satisfactorily, while QFuzzy tends to perform worse in forecasting the future direction of bond prices. Our work has a large potential impact on the possible directions of algorithmic trading strategies for investors and stakeholders in fixed-income markets, and all the methodologies proposed in this study could be valuable options for exploring other financial markets.
Keywords Fixed-income markets · Bond returns · High-frequency trading · Deep learning · Fuzzy logic · Quantum computing
JEL Classification C63 · G12 · G14
* David Alaminos
alaminos@ub.edu
1 Department ofBusiness, University ofBarcelona, Barcelona, Spain
2 Department ofFinance andAccounting, University ofMálaga, Málaga, Spain
3 Cátedra de Economía y Finanzas Sostenibles, University ofMálaga, Málaga, Spain
Content courtesy of Springer Nature, terms of use apply. Rights reserved.
1 Introduction
Fixed-income markets represent an important financing source for govern-
ments, domestic and international organizations, banks, and both private and
public companies with access to the fixed-income market. The development of
fixed-income trading can contribute to financial stability in general, and enhance
financial intermediation by increasing competition and developing the associ-
ated financial infrastructure, products, and services (Nunes et al., 2019; Inter-
national Monetary Fund, 2021). Government bonds are the main instrument of
most fixed-income asset markets, for developed and developing economies alike.
In the United States, the main data source for public securities trading activity is GovPX; in Europe, it is MTS. The MTS is a fully electronic, quote-driven interbank market comprising multiple trading platforms. All MTS platforms use identical technology for trading; however, each platform maintains its own set of rules and market participants (Biais & Green, 2019; Friewald & Nagler, 2019). Fixed-income securities are commonly traded over-the-counter (OTC), on inter-dealer wholesale platforms, and, less frequently, on retail platforms where liquidity is provided by designated dealers. Transactions are bilateral and not anonymous; therefore, the conditions of negotiation are determined by search and trading frictions. In the absence of a focal point, dealers have to proactively seek out and negotiate with possible counterparties to get the "best" offer (Darbha & Dufour, 2013; Glode & Opp, 2020; Neklyudov, 2019).
Fixed-income trading has generally received less attention from researchers than equity market trading, even though fixed-income markets involve significantly more capital raising than equity markets. Electronic bond trading, however, is on the rise. The impact of electronically supported trading on the performance of the fixed-income market will be interesting to evaluate as its contribution expands. In addition, some liquidity providers for corporate bonds fill requests for trades under specific trade size limits using algorithms instead of human participants (Bessembinder et al., 2020).
Technological advances have transformed how investors can operate in the financial markets. High-Frequency Trading (HFT), which is algorithmic trading (AT) distinguished by high-speed trade execution, exemplifies these technological changes (Frino et al., 2020). HFT is an approach to financial market intervention involving complex software tools that execute high-frequency trades, guided by mathematical algorithms, in the markets for stocks, options, bonds, derivative instruments, commodities, etc. (Rundo, 2019). Hendershott et al. (2011) claim that AT reduces trading costs and increases quote information. In addition, liquidity providers' revenues also increase with AT, although this effect seems to be transitory. In conclusion, financial trading demands that the AT scans the environment for suitable and prompt decisions in the absence of monitored data (Aloud & Alkhamees, 2021).
In academic research, the term "high-frequency trading" is often used to refer to asset price windows of 10 min or 60 min, for example (Christiansen & Ranaldo, 2007). For instance, a study published in the Journal of Financial Markets in 2013 defines high-frequency trading as "trading that takes place in intervals of a few seconds to a few minutes" (Aldrich, 2013). Similarly, a paper published in the Journal of Financial Economics in 2008 found that HFT can improve liquidity provision in corporate bond markets, particularly for less liquid bonds (Mahanti et al., 2008).
However, in practice, the term "high-frequency trading" is often used more strictly to refer to even shorter price windows. For example, in the US equities market, the Securities and Exchange Commission defines a high-frequency trader as someone who trades at least 2 million shares or $20 million in securities in a single day, with an average holding time of less than 0.5 s (SEC, 2014).
Despite the varying definitions of high-frequency trading used in academic research and in practice, there is general agreement that this type of trading has significant implications for bond markets. Some studies have suggested that HFT can increase market efficiency and liquidity, while others have argued that it can exacerbate market volatility and lead to market instability (e.g., Frino et al., 2013; Schestag et al., 2016). As HFT continues to evolve and shape financial markets, academic researchers and practitioners will likely continue to debate its effects and implications.
AT, including both algorithms driven by fundamental and technical indicator analysis and algorithms supported by machine learning techniques, has been examined by several researchers. According to Goldblum et al. (2021), Machine Learning (ML) is playing an important and growing role in financial business applications. Besides, Deep Learning (DL), a subclass of ML methods based on deep neural networks, provides algorithms that can be trained on complex data to predict outputs. Today numerous financial firms, ranging from hedge funds, investment banks, and retailers to modern FinTech providers, are investing in developing expertise in data science and ML (Goodell et al., 2021).
As market turmoil and uncertainty in financial markets have increased considerably, ML algorithms are quite applicable to the analysis of financial markets and, in particular, the fixed-income market. The marketplace is very complicated, and the only reliable forecast is its unpredictability. The financial market's unforeseeability is caused by the uncertainty of the many events that occur in it (Goldblum et al., 2021). Deep neural networks draw knowledge from the data, which can then be utilised to forecast and produce further data. This feedback decreases unreliability by indicating specific problem-solving. ML is especially useful for handling problems where an analytical solution is not explicitly available, such as complicated categorisation techniques or recognition of trends (Ghoddusi et al., 2019). The benefit of deep neural network methods over those offered by classical statistics and econometrics is that ML algorithms can handle a huge quantity of structured and non-structured information and provide quick predictions or conclusions (Milana & Ashta, 2021).
Publications on the use of ML techniques with specific applications to fixed-income markets are scarce. However, other financial areas have attracted much more interest in the research literature, particularly the equity and foreign exchange markets (Nunes et al., 2018). Most of these studies involve the stock market, mainly forecasting with artificial neural network (ANN), support vector machine (SVM), and random forest (RF) models. These methods have proven to produce excellent results for financial time series forecasting (Deng et al., 2021, 2022). For example, Kara et al. (2011) suggested an ANN-based model for predicting the daily price movement in the stock market, and it yielded high accuracy in the forecast. Akyildirim et al. (2022) compare the trading behaviour of several advanced forecasting techniques, such as ANN, autoregressive integrated moving average (ARIMA), nearest neighbors, the naïve Bayes method, and logistic regression, to forecast stock price movements relying on past prices. They apply these methods to high-frequency data of 27 blue-chip stocks traded on the Istanbul Stock Exchange. Their results highlight that, among the chosen methodologies, naïve Bayes, nearest neighbors, and ANN can detect future price directions as well as percentage changes at a satisfactory level, while ARIMA and logistic regression perform worse than the random walk model. In addition, these authors establish a future line of research to test the chosen methods in other markets to achieve more accurate and widespread results.
Some authors have made predictions about the performance of fixed-income assets through neural networks. Vukovic et al. (2020) analyze a neural network model that forecasts the Sharpe ratio. Their results demonstrate that neural networks are accurate in predicting nonlinear series, with an 82% precision in the test cases for forecasting future Sharpe ratio dynamics and the position of the investor's portfolio. For future research, they propose analyzing more data with stronger artificial intelligence technologies, such as Long Short-Term Memory (LSTM) neural networks. They conclude that these adaptive methodologies should provide more accurate analysis and forecasting, and that this area of study requires additional attention and effort in the future. Li et al. (2021) analyze sovereign CDS to prevent investment risks and propose a hybrid ensemble forecasting model. They employ an Autoregressive Integrated Moving Average (ARIMA) model to predict the trend component, while the Relevance Vector Machine (RVM) technique is used to forecast the volatility and noise components, giving the model excellent robustness. They establish that although the suggested model exhibits satisfactory prediction efficiency, there is scope for further improvement and, apart from sovereign CDS time series, the prediction model may be applied to other financial time series to test its generalizability. Nunes et al. (2019) concentrate on yield curve forecasting, currently the centerpiece of the bond markets. They apply ML to fixed-income markets, specifically multilayer perceptrons (MLPs), to analyze the yield curve overall. They show that MLPs can be effectively utilized to forecast the yield curve. They determine that, in terms of future work, an important area of interest is to keep exploring multitask learning, as they believe further research is required to identify the terms and conditions under which their methodology could be applied with enhanced performance.
To fill the gap in this research area, our study aims to predict bond price move-
ments based on past prices through high-frequency trading. We compare machine
learning methods applied to the fixed-income markets (sovereign, corporate and
high-yield debt) in advanced and emerging countries in the one-year bond market
for the period from 15 February 2000 to 12 April 2023.
Despite the limited number of observations at 10-min frequency, the methodologies applied in this work are capable of working and ultimately producing estimates, something that would be impossible for conventional statistical methodologies and even for some simple computational methodologies. Some previous works have found greater availability of such data from OTC markets. Although these limitations exist, several works have also appeared investigating corporate bond data with 10-min interval observations, such as Nowak et al. (2009), Aldana (2017), Gomber and Haferkorn (2015), Holden et al. (2018), and Gündüz et al. (2023).
We make at least two further contributions to the literature. First, we analyze the fixed-income market through HFT, comparing a wide range of innovative computational machine learning methodologies, since most of the previous studies employ statistical and econometric methods. In addition, the prior literature dealing with portfolio optimization using only fixed-income assets is scarce, and even fewer studies deal with the use of HFT. Within the ongoing advancement of financial markets, the proportion of HFT has increased steadily in recent years; it is generally characterized by fast update frequency and high trading speed. HFT can also produce beneficial market effects, such as increasing market liquidity and improving risk-handling ability (Deng et al., 2021). Second, our study makes predictions of bond price movements globally, not restricted to developed countries, which is interesting for those responsible for the economic policies of any country in the world. Whereas the relevance of public debt markets has led to innumerable papers on these markets in the United States and other advanced countries, comparatively limited research exists on emerging bond markets (Bai et al., 2013). In addition, our study considers not only sovereign bonds but also corporate and high-yield debt.
The rest of the paper is organized as follows. In Sect. 2, the methodologies are described. Section 3 details the sample and data involved in the research. Section 4 points out the results and findings obtained. Finally, Sect. 5 presents the conclusions reached.
2 Methodologies
We have used different methods to predict bond price movements through HFT. The application of various techniques aims to obtain a robust model, which is tested not just via one categorisation technique but using those that have proven successful in prior literature and other areas. Specifically, this study applies the Quantum-Fuzzy Approach, Adaptive Boosting-Genetic Algorithm, Support Vector Machine-Genetic Algorithm, Deep Learning Neural Network-Genetic Algorithm, Quantum Genetic Algorithm, Adaptive Neuro-Fuzzy Inference System-Quantum Genetic Algorithm, Deep Recurrent Convolutional Neural Networks, Convolutional Neural Networks-Long Short-Term Memory, Gated Recurrent Unit-Convolutional Neural Networks, and Quantum Recurrent Neural Networks. The Deep Recurrent Convolutional Neural Networks and Quantum Genetic Algorithm techniques obtained the best results, as will be shown in Sect. 4; therefore, these methodologies are explained below. The remaining methods are presented in Appendix 1 of this study.
2.1 Quantum Genetic Algorithm (QGA)
The quantum evolutionary algorithm (QEA) is an evolutionary algorithm built on the concept of quantum computing. It introduces notions like superposition states from quantum computing and incorporates a single encoding form to obtain improved experimental results in combinatorial optimisation problems. Nevertheless, when it comes to the optimisation of multimodal functions using QEA, in particular the high-dimensional multimodal function optimisation problem, it is likely to fall into a local optimum, and its computing efficiency is poor.
This study aims to improve the global optimisation capacity of the genetic algorithm and its local search ability according to the quantum probabilistic model by introducing a new type of quantum evolutionary algorithm, namely the "quantum genetic algorithm", to deal with the above deficiencies of QEA. This algorithm utilises the quantum probabilistic vector encoding mechanism and takes the crossover operator of the genetic algorithm and the updating strategy of quantum computation simultaneously to optimise the global search capacity of the quantum algorithm effectively.
The quantum genetic algorithm steps are:
2.1.1 Step 1: Population Initialisation
The lowest unit of information in QGA is a quantum bit (qubit). The state of a qubit can be |0⟩ or |1⟩, or any superposition of the two, expressed as

|Ψ⟩ = α|0⟩ + β|1⟩  (1)

where α and β are two complex numbers corresponding to the likelihood of occurrence of the respective state, with |α|² + |β|² = 1; |α|² and |β|² give the probability of the qubit being in the 0 and 1 state, respectively.

The most commonly adopted coding techniques in EA include binary coding, decimal coding, and symbolic coding. In QGA, a new method of coding is introduced using the qubit, namely the use of a pair of complex numbers to describe a quantum bit. A system with m qubits is expressed as

[ α_1  α_2  …  α_m ]
[ β_1  β_2  …  β_m ]  (2)

where |α_i|² + |β_i|² = 1 (i = 1, 2, …, m). This representation may be applied to describe any linear superposition of states. For instance, a system of three qubits with the probability amplitudes

[ 1/√2  √3/2  1/2  ]
[ 1/√2  1/2   √3/2 ]  (3)

has a system state that can be defined as

|Ψ⟩ = (1/(4√2)) (√3|000⟩ + 3|001⟩ + |010⟩ + √3|011⟩ + √3|100⟩ + 3|101⟩ + |110⟩ + √3|111⟩)  (4)

2.1.2 Step 2: Conduct Individual Coding and Measuring of the Population Generating Units

QGA is a probabilistic algorithm analogous to EA. The population is H(t) = {Q_1^t, Q_2^t, …, Q_h^t, …, Q_l^t} (h = 1, 2, …, l), with l the size of the population, and Q_h^t = {q_1^t, q_2^t, …, q_j^t, …, q_n^t}, where n represents the number of generator units, t denotes the evolution generation, and q_j^t symbolises the binary coding of the generation volume of the jth generator unit. Its chromosome is

q_j^t = [ α_1^t  α_2^t  …  α_m^t ]
        [ β_1^t  β_2^t  …  β_m^t ]  (5)

(j = 1, 2, …, n), where m is the length of the quantum chromosome.

During the initialization of H(t), if all α_i^t, β_i^t (i = 1, 2, …, m) in every q_j^t are initialized to 1/√2, all the possible linear superposition states occur with equal likelihood. In the step of generating S(t) from H(t), a common solution set S(t) is created through observation of the state of H(t), where, in the tth generation, S(t) = {P_1^t, P_2^t, …, P_h^t, …, P_l^t} and P_h^t = {x_1^t, x_2^t, …, x_j^t, …, x_n^t}. Every x_j^t (j = 1, 2, …, n) is a binary series (x_1, x_2, …, x_i, …, x_m) of length m, obtained from the amplitudes |α_i^t|² or |β_i^t|² of the qubits (i = 1, 2, …, m). The relevant procedure in the binary scenario is to randomly draw a number in [0, 1]: take "1" if it is larger than |α_i^t|², and "0" otherwise.

2.1.3 Step 3: Make an Individual Measure for Every Item in S(t)

Employ a fitness assessment function to test each object in S(t) and keep the best object in the generation. If a satisfactory solution is obtained, the algorithm stops; if not, proceed to the fourth step. When dealing with non-binary optimization problems, the chromosome is usually represented by a set of real-valued parameters rather than a binary string. In such cases, the fitness function is often a continuous function that maps the parameter values to a scalar value that represents the fitness of the solution.
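As an illustration, the Step 2 observation rule (draw r in [0, 1] per qubit and emit "1" when r exceeds |α_i|²) can be sketched in Python; the array size and seed are illustrative assumptions, not part of the original method:

```python
import numpy as np

def observe(alpha):
    # Collapse each qubit: draw r ~ U[0, 1]; emit 1 when r > |alpha_i|^2, else 0
    r = np.random.default_rng(42).random(alpha.shape)
    return (r > np.abs(alpha) ** 2).astype(int)

# Equal superposition alpha_i = beta_i = 1/sqrt(2): each bit is 0 or 1 with probability 1/2
alpha = np.full(8, 1 / np.sqrt(2))
bits = observe(alpha)
```

Repeating this observation over every chromosome of the population yields the solution set S(t) that Step 3 then evaluates.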
Consider a non-binary optimization problem with a chromosome composed of three real-valued parameters x1, x2, and x3. The fitness function for this problem could be defined as

f(x1, x2, x3) = (x1 − 3)² + (x2 + 1)² + (x3 − 2)²  (6)

The objective in this case would be to minimize the fitness function. To accomplish this, the QGA searches for a set of parameter values that produce the minimum fitness value. The process is similar to that for a binary problem, with the genetic operators applied to the real-valued parameters rather than to binary strings.

2.1.4 Step 4: Apply Genetic Operators to Create New Individuals

The crossover operator is applied by swapping some of the qubits between two chromosomes. One of the most commonly used crossover operators in QGA is the uniform crossover, which selects each qubit from one of the two parent chromosomes with a certain probability. The crossover operator can be represented mathematically as

|ψ_child⟩ = α|ψ_parent1⟩ + β|ψ_parent2⟩  (7)

where |ψ_parent1⟩ and |ψ_parent2⟩ are the two parent chromosomes, |ψ_child⟩ is the resulting child chromosome, and α and β are complex coefficients determined by the crossover probability.

The mutation operator randomly flips some of the qubits in a chromosome. Mathematically, the mutation operator can be represented as

|ψ_mutated⟩ = U_m |ψ_original⟩  (8)

where U_m is a single-qubit unitary gate that applies a random rotation around a Bloch sphere axis to the qubit being mutated. The mutation rate determines the probability of applying the mutation operator to each qubit in a chromosome.

It is important to note that the application of genetic operators in QGA can be done in different ways, and the specific equations used can vary depending on the implementation and the problem being solved.

2.1.5 Step 5: Apply an Appropriate Quantum Rotation Gate U(t) to Update S(t)

The conventional genetic algorithm utilises mating and mutation operations, etc., to keep the population diverse. The quantum genetic algorithm applies a logic gate to the probability amplitude of the quantum state to preserve the diversity of the
population. Hence, the method of updating by a quantum gate is the essence of the quantum genetic algorithm. In the classical genetic algorithm, the binary system, adaptation values, and the probability amplitude comparison technique are utilised for updating via a quantum gate. This approach to updating via a quantum gate is adequate for solving combinatorial optimisation problems whose optimum is known in principle. Nevertheless, for real optimisation problems, especially optimisation problems of multivariable continuous functions, the best solutions are in principle not available beforehand. Hence, a quantum rotation gate is assumed here as the quantum logic gate for the new quantum genetic algorithm:

U = [ cos θ  −sin θ ]
    [ sin θ   cos θ ]  (9)

where θ is the quantum gate rotation angle. Its value is given by

θ = k · f(α_i, β_i)  (10)

k = π · exp(−t / iter_max)  (11)

We consider k as a variable linked to the evolution generation to adjust the mesh size in a self-adaptive way. Here t is the evolution generation, π is an angle, and iter_max is a constant that depends on the complexity of the optimization problem. The function f(α_i, β_i) serves to drive the algorithm in the best search direction. It is based on the idea of gradually bringing the actual search solution closer to the optimal solution, thus setting the direction of the quantum rotation gate.

Thus, the process of applying the quantum rotation gate to the entire probability amplitude of each individual in the population, namely applying the quantum rotation gate U(t) to update S(t), may be written as

S(t + 1) = U(t) × S(t)  (12)

where t is the evolution generation, U(t) represents the tth generation quantum rotation gate, S(t) is the tth generation probability amplitude of a given object, and S(t + 1) denotes the (t + 1)th generation probability amplitude of the relevant object.
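The rotation update of Eqs. (9)-(12) amounts to a 2 × 2 rotation of each (α_i, β_i) pair. In the sketch below, the direction function f(α_i, β_i) is simplified to "rotate toward the best individual's bit", which is our illustrative assumption rather than the paper's exact rule:

```python
import numpy as np

def rotation_update(alpha, beta, best_bits, t, iter_max):
    # Eq. (11)-style self-adaptive step size, shrinking with the generation t
    k = 0.01 * np.pi * np.exp(-t / iter_max)
    # Simplified direction: rotate toward |1> where the best bit is 1, else toward |0>
    theta = np.where(best_bits == 1, k, -k)
    # Eqs. (9) and (12): apply the rotation gate to each amplitude pair
    new_alpha = np.cos(theta) * alpha - np.sin(theta) * beta
    new_beta = np.sin(theta) * alpha + np.cos(theta) * beta
    return new_alpha, new_beta

a = np.full(4, 1 / np.sqrt(2))
b = np.full(4, 1 / np.sqrt(2))
a2, b2 = rotation_update(a, b, best_bits=np.array([1, 1, 0, 0]), t=0, iter_max=100)
```

Because the update is a rotation, the constraint |α_i|² + |β_i|² = 1 is preserved automatically.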
2.1.6 Step 6: Perturbation
Since QGA is prone to getting caught at a local extreme value, we perturb the population. QGA analysis has shown that if the best individual of the present generation is a local extreme value, the algorithm finds it very difficult to break free. Thus, the algorithm is considered stuck at the local extremum if the best individual remains unchanged in subsequent generations.
Finally, we show the pseudocode for the implementation of this method for the problem studied, and the flowchart (Fig. 1) with the steps to follow as described previously.
Pseudocode of Quantum Genetic Algorithm
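The pseudocode figure itself is not reproduced here. As a substitute, Steps 1-6 can be sketched end-to-end in Python; the OneMax fitness function, population size, and rotation schedule are illustrative assumptions, not the settings used in the paper:

```python
import numpy as np

def qga(fitness, n_bits=16, pop=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: every qubit starts in equal superposition (theta = pi/4, so |alpha|^2 = 1/2)
    theta = np.full((pop, n_bits), np.pi / 4)
    best_x, best_f = None, -np.inf
    for t in range(iters):
        alpha_sq = np.cos(theta) ** 2
        # Step 2: measure each chromosome into a binary solution
        X = (rng.random((pop, n_bits)) > alpha_sq).astype(int)
        # Step 3: evaluate fitness and keep the best individual
        F = np.array([fitness(x) for x in X])
        if F.max() > best_f:
            best_f, best_x = F.max(), X[F.argmax()].copy()
        # Step 5: rotate qubits toward the best individual's bits (Eqs. 9-12)
        k = 0.05 * np.pi * np.exp(-t / iters)
        theta = theta + np.where(best_x == 1, k, -k)
        # Keep angles away from 0 and pi/2 as a crude guard against collapse (Step 6)
        theta = np.clip(theta, 0.01, np.pi / 2 - 0.01)
    return best_x, best_f

# Maximize the number of ones (OneMax), a standard toy fitness function
best_x, best_f = qga(lambda x: int(x.sum()))
```

Step 4's crossover and mutation operators are omitted for brevity; in this minimal form the rotation update alone drives convergence.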
Fig. 1 Flowchart of quantum genetic algorithm
2.2 Deep Recurrent Convolutional Neural Network (DRCNN)
The RCNN model consists of a stack of recurrent convolutional layers (RCLs) and may include max pooling layers. To save computational resources, the first layer is a standard feed-forward convolutional layer with no recurrent connections, followed by a max pooling layer. Four RCLs are used with a max pooling layer in the middle, and there are only feed-forward links between adjacent RCLs. Both pooling operations have a stride of 2 and a size of 3. The output of the fourth RCL feeds a global max pooling layer that takes the maximum of each feature map, resulting in a feature vector that represents the image. This approach differs from the model of Krizhevsky et al. (2017), which uses fully connected layers, and from the models of Lin et al. (2013) and Szegedy et al. (2017), which use global average pooling. Finally, a softmax layer is used to classify the feature vectors into C categories, with the output consisting of

y_k = exp(W_k^T x) / Σ_{k′=1}^{C} exp(W_{k′}^T x)   (k = 1, 2, …, C)  (13)

where y_k is the predicted probability of belonging to the kth category and x is the feature vector generated by the global max pooling.

RNNs have been deployed with success in many time series forecasting fields owing to their enormous predictive power. The standard RNN framework is structured by the output, which depends on its past estimations (Wan et al., 2017). The standard RNN framework uses a hidden state to store information about past inputs, which is combined with the current input to make a prediction for the output at the current time step. The RCNN model incorporates the standard RNN framework by using RCLs to capture the temporal dependencies in sequential data. The output of each RCL is a sequence of hidden states that can be used to make predictions about future inputs. The DRCNN model extends the RCNN model by stacking RCLs to create a deep architecture, with each layer applying a convolutional operation to the hidden states generated by the previous layer. The output of the last layer is then fed into a supervised learning layer to produce a prediction for the output at the current time step. The output of this RNN can be written as

y_t = f(W_y s_t + b_y)  (14)

where y_t is the output at time step t, s_t is the hidden state at time step t, W_y is the weight matrix connecting the hidden state to the output, b_y is the bias term, and f is the activation function.

Given an input sequence vector x, the hidden states of a recurrent layer s and the output of a single hidden layer y can be obtained from formulas (15) and (16):

s_t = σ(W_xs x_t + W_ss s_{t−1} + b_s)  (15)

y_t = o(W_so s_t + b_y)  (16)

where W_xs, W_ss, and W_so are the weights from the input layer x to the hidden layer s, from the hidden layer to itself, and from the hidden layer to its output layer, respectively; b_s and b_y represent the biases of the hidden layer and the output layer, and σ and o denote the activation functions.
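As a concrete illustration of formulas (15) and (16), a single recurrent step can be written in NumPy; the dimensions are arbitrary, and tanh and the identity stand in for the unspecified activations σ and o:

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_hid, d_out = 3, 5, 2

# Weight matrices of formulas (15)-(16): input->hidden, hidden->hidden, hidden->output
W_xs = rng.normal(size=(d_hid, d_in))
W_ss = rng.normal(size=(d_hid, d_hid))
W_so = rng.normal(size=(d_out, d_hid))
b_s, b_y = np.zeros(d_hid), np.zeros(d_out)

def rnn_step(x_t, s_prev):
    s_t = np.tanh(W_xs @ x_t + W_ss @ s_prev + b_s)  # formula (15), sigma = tanh
    y_t = W_so @ s_t + b_y                           # formula (16), o = identity
    return s_t, y_t

# Run a length-4 input sequence through the recurrence
s = np.zeros(d_hid)
for x_t in rng.normal(size=(4, d_in)):
    s, y = rnn_step(x_t, s)
```

Each step folds the current input into the hidden state, which is all the recurrence needs to carry information forward in time.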
where z(t) denotes the vibration signals, ω(t) symbols the Gaussian window function
focused around 0. T(τ, ω) represent a complex function defining the vibration sig-
nals over time and frequency. To compute the hidden layers with the convolutional
operation formulas (17) and (18) are used.
being W the convolution kernels. Below we show the pseudocode of activation func-
tion of these RCNNs:
Pseudocode for Activation function
# Activation function for identifying and ranking values
# Input: Characteristics from convolution layer
# Output: Removal of negative values
activation_function = lambda y: 1.0/(1.0 + np.exp(-y))
input_func = np.random.random((2, 3))
K1, a1 = np.random.random((4, 2)), np.random.random(4)
K2, a2 = np.random.random((1, 4)), np.random.random(1)
K3, a3 = np.random.random((1, 1)), np.random.random(1)
layer1 = activation_function(np.dot(K1, input_func) + a1)
layer2 = activation_function(np.dot(K2, layer1) + a2)
output = np.dot(K3, layer2) + a3
To establish a deep architecture, the recurrent convolutional neural network (RCNN)
can be stacked to form the DRCNN (Huang & Narayanan, 2017). In this combined model,
the last part is a supervised learning layer, set by formula (20), where W_h is the
weight and b_h the bias, respectively. The error between predicted and actual
observations in the prediction training data may be estimated and fed back into
model training (Ma & Mao, 2019). Stochastic gradient descent is implemented to
optimise parameter learning. Assuming that the real data at time t is r, the loss
function is given in formula (21).
The number of filters is the number of neurons, since each neuron performs a dif-
ferent convolution on the input to the layer. It is commonly set in multiples of
32, within a range of 32–512. The size of the filter defines how many neighboring
data points feed into a convolutional layer; the sizes most used in this work have
been 3 × 3 and 5 × 5. Stride and padding are parameters of the neural network's filter
(17)  STFT{z(t)}(τ, ω) = ∫_{−∞}^{+∞} z(t) ω(t − τ) e^{−jωt} dt

(18)  S_t = σ(W_TS T_t + W_SS S_{t−1} + B_s)

(19)  Y_t = o(W_YS S_t + B_y)

(20)  r̂ = σ(W_h h + b_h)

(21)  L(r, r̂) = (1/2) ‖r − r̂‖_2^2
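A toy illustration of formulas (20) and (21) with repeated stochastic-gradient-descent updates of the output layer; the hidden vector h, the target value, the learning rate, and the iteration count are all illustrative assumptions, not the paper's training setup:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

h = np.linspace(-1.0, 1.0, 8)  # toy hidden representation from the stacked RCNN
W_h = np.zeros((1, 8))         # output-layer weight in formula (20)
b_h = np.zeros(1)              # output-layer bias in formula (20)
r = np.array([0.7])            # observed value at time t

lr = 0.5
for _ in range(200):
    r_hat = sigmoid(W_h @ h + b_h)          # formula (20): prediction
    loss = 0.5 * np.sum((r - r_hat) ** 2)   # formula (21): squared-error loss
    # Chain rule through the sigmoid gives the gradient for one SGD step
    delta = (r_hat - r) * r_hat * (1 - r_hat)
    W_h -= lr * np.outer(delta, h)
    b_h -= lr * delta
```

After a few hundred updates the prediction converges toward the observed value, which is the feedback loop the text describes.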
that modify the amount of movement over observations. In this work, as is usual, a
stride size no greater than 2 × 2 and a padding no greater than 1 × 1 have been used.
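The interaction of filter size, stride, and padding follows the standard output-size formula out = (in − filter + 2·padding)/stride + 1; a small helper (not from the paper) illustrates the settings discussed above:

```python
def conv_output_size(input_size, filter_size, stride=1, padding=0):
    """Output dimension of a convolution along one axis."""
    return (input_size - filter_size + 2 * padding) // stride + 1

# A 3x3 filter with stride 1 and padding 1 preserves the input size,
# while a 5x5 filter with stride 2 roughly halves it.
same = conv_output_size(64, 3, stride=1, padding=1)    # -> 64
halved = conv_output_size(64, 5, stride=2, padding=1)  # (64-5+2)//2+1 = 31
```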
Finally, we provide a flowchart of the steps to complete in order to run this DRCNN in
Fig. 2.
Fig. 2 Deep recurrent convolutional neural network flowchart
3 Sample andData
We employ bond prices for the one-year bond market over the period from February 15th,
2000 to April 12th, 2023. The sample consists of ten sovereign bonds in five advanced
economies (Germany, United States, Italy, Spain, and Japan) and five emerging coun-
tries (Turkey, Mexico, Indonesia, Nigeria, and Poland); eleven corporate bonds in five
advanced economies (Walmart, Johnson & Johnson, Verizon, Unilever PLC, Rio Tinto
PLC) and in six emerging economies (Air Liquide, Ambev, Cemex, Turkish Airlines,
KCE Electronics, Telekomunikacja Polska); and finally, ten high-yield bonds in five
developed countries (Caesars Resort Collection LLC, Asurion LLC, Intelsat Jackson
Holdings, Athenahealth Group, Great Outdoors Group LLC) and in five developing
markets (Petróleos Mexicanos, Petrobras, Sands China Ltd, Indonesia Asahan Alumini,
Longfor Properties). "Appendix 3" displays detailed information about the features
of every bond used in the sample. We obtained the data on bond prices from the Eikon
database from Refinitiv. The data on trades in Refinitiv comprises information on exe-
cuted trades such as price and volume, which are timestamped up to the microsecond
with tools like Refinitiv Tick History. On the other hand, the order book information
includes the limit price and order volume for both the bid and ask sides, covering
limits one to ten. This information has been used by recent studies from Clapham et al.
(2022), Hansen and Borch (2022), and Dodd etal. (2023). Table1 summarizes the
sample according to every category of the fixed-income market used.
We categorize all trades that occur in the continuous session across the day as
"continuous trades" and build "all trades" by adding the trades performed in the open
and close sessions to the "continuous trades". To avoid dealing with asynchronicity,
we sample our data at 10-, 30-, and 60-min intervals.
In addition, we measure the cost-effectiveness of our selected forecasting techniques
by the following ratios.
3.1 Sign Prediction Ratio (SPR)
Correctly predicted price direction changes are assigned 1, and 0 otherwise. This
ratio is defined in formula (22), where "matches" is given by formula (23) and "sign"
is the sign function that assigns +1 to positive arguments and −1 to negative
arguments.
With the purpose of correcting the possible deficiencies of the model in terms
of its precision regarding the direction of the trend of the movements in the prices
(22)  SPR = [ Σ_{j=1+M/2}^{M} matches(Y_j, Ŷ_j) ] / (M/2)

(23)  matches(Y_j, Ŷ_j) = 1 if sign(Y_j) = sign(Ŷ_j), and 0 otherwise
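Formulas (22) and (23) reduce to the fraction of out-of-sample observations whose predicted sign matches the realized one. A direct sketch, assuming the second half of the series is the evaluation window:

```python
import numpy as np

def sign_prediction_ratio(actual, predicted):
    """SPR over the out-of-sample half of the series (formulas 22-23)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    m2 = len(actual) // 2  # out-of-sample window starts at 1 + M/2
    matches = np.sign(actual[m2:]) == np.sign(predicted[m2:])
    return matches.mean()

actual = [0.2, -0.1, 0.3, -0.4, 0.1, -0.2, 0.5, -0.3]
predicted = [0.1, 0.2, 0.4, -0.1, 0.3, -0.1, -0.2, -0.4]
spr = sign_prediction_ratio(actual, predicted)  # 3 of 4 signs match -> 0.75
```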
of the securities, we have incorporated a modification of the previous equation
by adding the Moving Average Convergence Divergence (MACD) model. The
MACD model is commonly calculated using the following equation:
The MACD line represents the difference between the 12-day EMA and the
26-day EMA. The EMA is a type of moving average that gives more weight to
recent data points. By subtracting the longer-term EMA from the shorter-term EMA,
the MACD line aims to capture the momentum and trend direction of the underlying
asset (Chong & Ng, 2008; Ramlall, 2016; Sezer & Ozbayoglu, 2018). The approach
(24)  MACD Line = 12-day Exponential Moving Average (EMA) − 26-day EMA
Table 1 Sample of bonds used
Sovereign bonds Advanced economies Germany
United States
Italy
Spain
Japan
Emerging economies Turkey
Mexico
Indonesia
Nigeria
Poland
Corporate bonds Advanced economies Walmart
Johnson & Johnson
Verizon
Unilever PLC
Rio Tinto PLC
Emerging economies Air Liquide
Ambev
Cemex
Turkish Airlines
KCE Electronics
Telekomunikacja Polska
High-yield bonds Advanced economies Caesars Resort Collection LLC
Asurion LLC
Intelsat Jackson Holdings
Athenahealth Group
Great Outdoors Group LLC
Emerging economies Petróleos Mexicanos
Petrobras
Sands China Ltd
Indonesia Asahan Alumini
Longfor Properties
of using MACD as a correction factor in the SPR can be modeled using the follow-
ing equation:
where SPR is the original sign prediction ratio, MACD is the signal generated by the
MACD model, and 1-MACD is used as a correction factor. The term (1-SPR) repre-
sents the complement of the original SPR, reflecting the portion of the original SPR
that is not considered accurate. The term (1-MACD) represents the complement of
the MACD signal, reflecting the extent to which the MACD signal indicates a poten-
tial reversal or correction in the market. The intuition behind this equation is that
when the MACD signal is positive, it is likely that the market is trending upwards
and the original SPR is more accurate. However, when the MACD signal is nega-
tive, it suggests a market reversal or a correction, and the original SPR may not be
as accurate. In this case, the correction factor is used to adjust the SPR downward to
reflect the possibility of a trend reversal (de Almeida & Neves, 2022; Ramlall, 2016;
Slade, 2017).
By adding the correction factor to the original SPR, the adjusted ratio SPR_Adjusted takes
into account the possibility of trend reversals or corrections indicated by the MACD
signal. When the MACD signal is positive, the original SPR is considered more
accurate and is only slightly adjusted. However, when the MACD signal is negative,
indicating a potential trend reversal, the original SPR is adjusted more significantly
downward to reflect the increased likelihood of a reversal.
This approach aims to combine the predictive power of the original SPR with
the insights provided by the MACD signal, adjusting the SPR to account for poten-
tial trend changes. It recognizes that the MACD signal can act as a corrective fac-
tor when the market conditions indicate a higher likelihood of a trend reversal or
correction.
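The adjustment in formulas (24) and (25) can be sketched as follows; the EMA recursion uses the conventional smoothing constant 2/(n + 1), and squashing the MACD line into [0, 1] before applying formula (25) is our illustrative assumption, since the text does not specify how the MACD signal is scaled:

```python
import numpy as np

def ema(prices, n):
    """Exponential moving average with the conventional smoothing 2/(n+1)."""
    alpha = 2.0 / (n + 1)
    value = float(prices[0])
    for p in prices[1:]:
        value = alpha * float(p) + (1 - alpha) * value
    return value

def adjusted_spr(spr, prices):
    # Formula (24): MACD line = 12-period EMA minus 26-period EMA
    macd_line = ema(prices, 12) - ema(prices, 26)
    # Squash the MACD line into (0, 1) -- an illustrative choice so that
    # formula (25) operates on a signal comparable in scale to the SPR
    macd_signal = 1.0 / (1.0 + np.exp(-macd_line))
    # Formula (25): SPR_Adjusted = SPR + (1 - SPR)(1 - MACD)
    return spr + (1 - spr) * (1 - macd_signal)

prices = np.linspace(100.0, 110.0, 60)  # a steadily rising toy price series
adj = adjusted_spr(0.80, prices)
```

In this uptrending example the MACD signal exceeds 0.5, so the upward adjustment to the SPR is small, consistent with the discussion above.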
3.2 Ideal Profit Ratio (IPR)
This is the ratio between the total return and the maximum return.
The total return is computed in the following formula, where "sign" denotes the
sign function; the better the forecasting approach, the higher the total return
will be.
The maximum return is determined by summing all absolute expected figures and
reflects the maximum achievable return, considering a perfectly foreseeable forecast.
This ratio is defined as:
(25)  SPR_Adjusted = SPR + (1 − SPR)(1 − MACD)

(26)  IPR = Total Return / Maximum Return

(27)  Total Return = Σ_{j=1+M/2}^{M} sign(Ŷ_j) · Y_j
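Formulas (26)–(28) combine into a single function; as with the SPR, the second half of the series is assumed to be the out-of-sample window:

```python
import numpy as np

def ideal_profit_ratio(actual, predicted):
    """IPR = total return / maximum return (formulas 26-28)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    m2 = len(actual) // 2
    # Formula (27): trade in the predicted direction, earn the realized move
    total_return = np.sum(np.sign(predicted[m2:]) * actual[m2:])
    # Formula (28): a perfect sign forecast earns every absolute move
    maximum_return = np.sum(np.abs(actual[m2:]))
    return total_return / maximum_return

actual = [0.2, -0.1, 0.3, -0.4, 0.1, -0.2, 0.5, -0.3]
predicted = [0.1, 0.2, 0.4, -0.1, 0.3, -0.1, -0.2, -0.4]
ipr = ideal_profit_ratio(actual, predicted)
```

Here one wrong sign out of four costs the strategy most of its attainable return, which is exactly the penalty the ratio is designed to capture.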
The expected return is estimated using the Nelson-Siegel model, which was introduced by economists Nelson and Siegel in
1987. The model is based on the idea that the yield curve can be decomposed into
three factors: the level factor, the slope factor, and the curvature factor. These factors
capture the average level of interest rates, the steepness of the yield curve, and the
degree of curvature, respectively. The model can be expressed mathematically as
follows:
where r(t) represents the yield on a bond with time to maturity t, and β1, β2, β3, and
τ are parameters to be estimated. The parameter β1 represents the long-term mean
level of interest rates, β2 represents the slope of the yield curve at short maturities,
β3 represents the curvature of the yield curve, and τ represents the time scale over
which the yield curve adjusts to its long-term mean.
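The Nelson-Siegel curve in formula (29) can be implemented directly; the parameter values below are illustrative, not estimates from the paper:

```python
import numpy as np

def nelson_siegel(t, beta1, beta2, beta3, tau):
    """Nelson-Siegel yield for maturity t (formula 29)."""
    x = t / tau
    slope = (1 - np.exp(-x)) / x    # loading on beta2 (slope factor)
    curvature = slope - np.exp(-x)  # loading on beta3 (curvature factor)
    return beta1 + beta2 * slope + beta3 * curvature

maturities = np.array([0.25, 1.0, 5.0, 10.0, 30.0])
# Illustrative parameters: 4% long-run level, negative short-end slope, mild hump
yields = nelson_siegel(maturities, beta1=0.04, beta2=-0.02, beta3=0.01, tau=2.0)
```

As t grows the slope and curvature loadings vanish, so the yield converges to the long-term mean level β1, matching the interpretation given above.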
The rolling regression method involves estimating the relationship between the
excess returns of the bond portfolio and changes in the yield curve over a specified
rolling time period, such as one month or one quarter. The slope of the regression
line represents the expected excess return of the portfolio for a given change in the
yield curve (Grinold & Ronald, 1999; Ibbotson & Kaplan, 2000).
The equation for the rolling regression model can be written as follows:
where Excess Return is the excess return of the bond over the risk-free rate, typically
estimated using a 3-month U.S. Treasury bond as the benchmark (Campbell etal.,
2001). Yield Curve Change is the change in the yield curve over the rolling time
period, calculating the yields for each maturity point based on the Nelson-Siegel
model for both yield curves. The term α is the intercept of the regression line, which
represents the expected excess return of the bond when the yield curve change is
zero. The term β is the slope of the regression line, which represents the expected
excess return of the bond for a one-unit change in the yield curve. The term ε is the
residual error term, which represents the deviation of the actual excess return from
the predicted excess.
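A sketch of the rolling regression in formula (30) using ordinary least squares over a moving window; the window length and the synthetic data are illustrative assumptions:

```python
import numpy as np

def rolling_beta(excess_returns, yield_curve_changes, window=21):
    """Rolling OLS slope of excess returns on yield-curve changes (formula 30)."""
    betas = []
    for start in range(len(excess_returns) - window + 1):
        y = excess_returns[start:start + window]
        x = yield_curve_changes[start:start + window]
        beta, alpha = np.polyfit(x, y, 1)  # slope and intercept of the window
        betas.append(beta)
    return np.array(betas)

rng = np.random.default_rng(2)
dy = rng.standard_normal(100) * 0.001              # daily yield-curve changes
er = -3.0 * dy + rng.standard_normal(100) * 1e-4   # true beta of -3 plus noise
betas = rolling_beta(er, dy)
```

The residual of each window's fit corresponds to the error term ε, and the recovered slopes cluster around the true sensitivity of −3.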
For its part, a modification of the Ideal Profit Ratio equation has been made fol-
lowing what has been done in works such as Elton et al. (1995) and Grinold and Ron-
ald (1999). The ideal profit ratio is a measure of the performance of an investment
strategy relative to a benchmark. It is calculated as the difference between the total
returns of the strategy and the maximum returns of the benchmark, divided by the
maximum returns of the benchmark.
To incorporate the excess return based on the yield curve into the calculation of
the ideal profit ratio, you could modify the equation as follows:
(28)  Maximum Return = Σ_{j=1+M/2}^{M} |Y_j|

(29)  r(t) = β1 + β2 [1 − exp(−t/τ)] / (t/τ) + β3 [(1 − exp(−t/τ)) / (t/τ) − exp(−t/τ)]

(30)  Excess Return = α + β · Yield Curve Change + ε

(31)  Ideal Profit Ratio = (Total Return − Expected Return) / Maximum Return
This modified equation measures the performance of the investment strategy rela-
tive to the benchmark, while taking into account the impact of the yield curve on the
expected return of the portfolio. A higher ideal profit ratio indicates better perfor-
mance relative to the benchmark.
Finally, after calculating the aforementioned equation of the Ideal Profit Ratio,
its final result will be the net value after applying the transaction cost. In our case
we used the difference between the average customer buy and the average customer
sell price on each day to quantify transaction costs according to the specification of
Hong and Warga (2000) and Chakravarty and Sarkar (2003):
where P̄_t^buy and P̄_t^sell are the average prices of all customer buy and sell trades
on day t. We calculate TC_AvgBidAsk for each day on which there is at least one buy
and one sell trade and use the monthly mean as a monthly transaction cost measure,
following the specifications of previous works (Schestag et al., 2016).
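The daily measure in formula (32) can be sketched as follows, assuming same-day lists of customer buy and sell prices; the example figures are illustrative:

```python
import numpy as np

def tc_avg_bid_ask(buy_prices, sell_prices):
    """Daily transaction cost: (avg buy - avg sell) over the mid (formula 32).

    Returns None when the day lacks at least one buy and one sell trade,
    mirroring the filtering described in the text.
    """
    if len(buy_prices) == 0 or len(sell_prices) == 0:
        return None
    p_buy = float(np.mean(buy_prices))
    p_sell = float(np.mean(sell_prices))
    return (p_buy - p_sell) / (0.5 * (p_buy + p_sell))

tc = tc_avg_bid_ask([100.4, 100.6], [99.8, 100.0])  # spread of 0.6 around 100.2
```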
4 Results
From the data described in the previous section, we collect samples at 10-, 30- and
60-min intervals, and afterward, we implement ten different methods defined in
Sect.2. The size of the training sample for the whole daily forecasting time horizon
appears as 50% of the total sample size approximated to the nearest integer value,
while the other 50% is used as an out-of-sample data set.
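The 50/50 training/out-of-sample split described above can be expressed as a direct sketch:

```python
import numpy as np

def split_half(series):
    """First 50% (rounded to the nearest integer) for training, rest out-of-sample."""
    n_train = round(len(series) / 2)
    return series[:n_train], series[n_train:]

prices = np.arange(100)           # 100 toy observations
train, test = split_half(prices)  # 50 for training, 50 out-of-sample
```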
We introduce two key measures, defined in Sect. 3, of the performance of the
methodologies. The first is the sign prediction ratio, representing the proportion of
times that the corresponding methodology accurately estimates the direction of the
future price (up or down). Since correctly guessing the future price change alone
would not ensure better results, we contrast the performance of the different
prediction methodologies against a perfect prediction of price changes. Thus, the
ideal profit ratio is the relationship between the profitability generated by a
particular method and that of a perfect sign forecast.
We implement the process mentioned above for "continuous trades" in the sample
period and, in addition, we also apply it to "all trades" to test for robustness.
Tables 2, 3, 4, 5, 6, and 7 display the results achieved for each bond at different
time scales and for "continuous trades”. The results for the "all trades" case scenario
are presented in “Appendix 2” via Tables10, 11, 12, 13, 14, 15.
Table2 reports the sign prediction accuracy ratios of continuous trading for the
ten techniques on the considered bonds for 10min. We remark that QGA performs
the best with 31 bonds, with an accuracy rate of over 0.772 and a mean of 0.881.
DRCNN and DLNN-GA and are the second and third methods that correctly predict
the change in bond price direction, with an average of 0.850 and 0.847 respectively.
(32)  TC_AvgBidAsk = (P̄_t^buy − P̄_t^sell) / [0.5 · (P̄_t^buy + P̄_t^sell)]
Table 2 Sign Prediction Ratio (10min) for continuous trades
Qfuzzy AdaBoost-GA SVM-GA DLNN-GA QGA ANFIS-QGA DRCNN CNN-LSTM GRU-CNN QRNN
Germany 0.837 0.876 0.894 0.906 0.929 0.878 0.912 0.873 0.885 0.892
United States 0.828 0.875 0.888 0.903 0.925 0.876 0.909 0.867 0.878 0.885
Italy 0.820 0.871 0.882 0.902 0.923 0.869 0.904 0.861 0.878 0.882
Spain 0.817 0.863 0.877 0.894 0.917 0.863 0.903 0.858 0.870 0.876
Japan 0.814 0.862 0.874 0.887 0.913 0.859 0.899 0.849 0.870 0.870
Turkey 0.813 0.861 0.868 0.881 0.904 0.854 0.897 0.845 0.862 0.866
Mexico 0.809 0.856 0.863 0.881 0.900 0.850 0.895 0.837 0.860 0.861
Indonesia 0.806 0.850 0.862 0.875 0.893 0.842 0.889 0.833 0.855 0.854
Nigeria 0.805 0.843 0.854 0.870 0.888 0.838 0.888 0.829 0.853 0.847
Poland 0.804 0.838 0.847 0.863 0.883 0.835 0.881 0.829 0.851 0.846
Walmart 0.808 0.842 0.852 0.868 0.888 0.839 0.885 0.833 0.856 0.850
Johnson & Johnson 0.798 0.832 0.841 0.857 0.877 0.828 0.874 0.823 0.845 0.840
Verizon 0.790 0.828 0.852 0.865 0.892 0.835 0.881 0.834 0.832 0.836
Unilever PLC 0.802 0.830 0.864 0.872 0.900 0.848 0.883 0.836 0.835 0.842
Rio Tinto PLC 0.792 0.820 0.854 0.861 0.889 0.837 0.872 0.826 0.825 0.831
Air Liquide 0.819 0.844 0.889 0.888 0.932 0.864 0.901 0.846 0.850 0.863
Ambev 0.831 0.845 0.895 0.902 0.942 0.873 0.908 0.852 0.854 0.855
Cemex 0.824 0.827 0.872 0.887 0.935 0.871 0.902 0.830 0.826 0.848
Turkish Airlines 0.806 0.816 0.853 0.879 0.914 0.852 0.889 0.811 0.814 0.822
KCE Electronics 0.777 0.798 0.832 0.863 0.894 0.837 0.866 0.787 0.796 0.799
Telekomunikacja Polska 0.761 0.777 0.822 0.856 0.882 0.821 0.846 0.773 0.777 0.796
Caesars Resort Collection LLC 0.761 0.757 0.822 0.845 0.871 0.807 0.836 0.748 0.764 0.793
Asurion LLC 0.732 0.740 0.821 0.829 0.868 0.784 0.808 0.742 0.749 0.771
Intelsat Jackson Holdings 0.711 0.720 0.812 0.803 0.867 0.772 0.800 0.739 0.727 0.782
Athenahealth Group 0.706 0.710 0.798 0.798 0.863 0.744 0.773 0.739 0.698 0.781
Great Outdoors Group llc 0.681 0.689 0.784 0.781 0.847 0.742 0.769 0.722 0.694 0.778
Petróleos Mexicanos 0.677 0.684 0.782 0.777 0.821 0.737 0.765 0.711 0.692 0.755
Petrobras 0.669 0.660 0.760 0.765 0.805 0.722 0.744 0.691 0.675 0.741
Sands China Ltd 0.662 0.641 0.739 0.748 0.792 0.697 0.741 0.662 0.651 0.740
Indonesia Asahan Alumini 0.658 0.629 0.736 0.721 0.774 0.681 0.722 0.646 0.647 0.728
Longfor Properties 0.640 0.627 0.715 0.726 0.772 0.659 0.715 0.621 0.629 0.706
Table 3 Ideal profit ratio (10min) for continuous trades
Qfuzzy AdaBoost-GA SVM-GA DLNN-GA QGA ANFIS-QGA DRCNN CNN-LSTM GRU-CNN QRNN
Germany 0.0099 0.0127 0.0106 0.0133 0.0167 0.0138 0.0124 0.0095 0.0112 0.0138
United States 0.0119 0.0136 0.0112 0.0137 0.0178 0.0140 0.0125 0.0110 0.0122 0.0139
Italy 0.0140 0.0149 0.0116 0.0137 0.0180 0.0146 0.0127 0.0118 0.0133 0.0149
Spain 0.0150 0.0152 0.0121 0.0146 0.0194 0.0158 0.0138 0.0131 0.0138 0.0152
Japan 0.0165 0.0167 0.0128 0.0150 0.0205 0.0171 0.0150 0.0140 0.0146 0.0155
Turkey 0.0186 0.0186 0.0134 0.0158 0.0212 0.0173 0.0158 0.0154 0.0154 0.0168
Mexico 0.0168 0.0172 0.0123 0.0151 0.0200 0.0169 0.0143 0.0150 0.0153 0.0164
Indonesia 0.0159 0.0152 0.0116 0.0145 0.0200 0.0169 0.0141 0.0138 0.0147 0.0157
Nigeria 0.0157 0.0142 0.0109 0.0143 0.0189 0.0158 0.0135 0.0131 0.0145 0.0146
Poland 0.0146 0.0122 0.0095 0.0130 0.0181 0.0146 0.0128 0.0126 0.0139 0.0145
Walmart 0.0137 0.0119 0.0090 0.0116 0.0171 0.0138 0.0113 0.0114 0.0126 0.0145
Johnson & Johnson 0.0119 0.0104 0.0076 0.0114 0.0170 0.0126 0.0111 0.0102 0.0112 0.0133
Verizon 0.0110 0.0089 0.0062 0.0103 0.0158 0.0119 0.0111 0.0091 0.0104 0.0126
Unilever PLC 0.0089 0.0084 0.0054 0.0103 0.0154 0.0107 0.0103 0.0084 0.0101 0.0123
Rio Tinto PLC 0.0075 0.0082 0.0046 0.0089 0.0150 0.0097 0.0097 0.0073 0.0090 0.0119
Air Liquide 0.0081 0.0100 0.0048 0.0101 0.0156 0.0107 0.0101 0.0080 0.0093 0.0131
Ambev 0.0088 0.0111 0.0060 0.0108 0.0162 0.0117 0.0112 0.0083 0.0094 0.0138
Cemex 0.0107 0.0114 0.0068 0.0120 0.0165 0.0132 0.0118 0.0087 0.0101 0.0140
Turkish Airlines 0.0121 0.0116 0.0075 0.0133 0.0173 0.0133 0.0119 0.0101 0.0106 0.0150
KCE Electronics 0.0139 0.0120 0.0085 0.0145 0.0185 0.0137 0.0127 0.0109 0.0119 0.0155
Telekomunikacja Polska 0.0143 0.0130 0.0097 0.0158 0.0192 0.0148 0.0133 0.0113 0.0120 0.0158
Caesars Resort Collection LLC 0.0150 0.0139 0.0104 0.0160 0.0192 0.0162 0.0134 0.0122 0.0134 0.0168
Asurion LLC 0.0141 0.0121 0.0099 0.0151 0.0187 0.0147 0.0129 0.0113 0.0129 0.0165
Intelsat Jackson Holdings 0.0157 0.0100 0.0088 0.0142 0.0186 0.0143 0.0124 0.0100 0.0120 0.0155
Athenahealth Group 0.0145 0.0081 0.0082 0.0129 0.0186 0.0129 0.0123 0.0090 0.0111 0.0147
Great Outdoors Group llc 0.0144 0.0062 0.0073 0.0128 0.0179 0.0129 0.0117 0.0077 0.0109 0.0132
Petróleos Mexicanos 0.0136 0.0062 0.0072 0.0119 0.0166 0.0124 0.0114 0.0072 0.0101 0.0119
Petrobras 0.0122 0.0042 0.0063 0.0116 0.0163 0.0115 0.0102 0.0065 0.0098 0.0114
Sands China Ltd 0.0104 0.0035 0.0056 0.0115 0.0151 0.0105 0.0093 0.0053 0.0084 0.0103
Indonesia Asahan Alumini 0.0098 0.0026 0.0044 0.0106 0.0142 0.0094 0.0086 0.0041 0.0084 0.0101
Longfor Properties 0.0095 0.0005 0.0031 0.0094 0.0139 0.0088 0.0085 0.0028 0.0074 0.0096
Table 4 Sign Prediction Ratio (30min) for continuous trades
Qfuzzy AdaBoost-GA SVM-GA DLNN-GA QGA ANFIS-QGA DRCNN CNN-LSTM GRU-CNN QRNN
Germany 0.816 0.855 0.872 0.897 0.906 0.857 0.890 0.851 0.864 0.878
United States 0.808 0.853 0.866 0.895 0.902 0.854 0.887 0.845 0.857 0.876
Italy 0.800 0.850 0.860 0.893 0.900 0.848 0.882 0.840 0.856 0.873
Spain 0.797 0.842 0.855 0.886 0.894 0.841 0.881 0.837 0.849 0.867
Japan 0.794 0.840 0.853 0.878 0.890 0.838 0.877 0.828 0.848 0.860
Turkey 0.793 0.839 0.847 0.872 0.882 0.833 0.874 0.824 0.841 0.856
Mexico 0.789 0.834 0.841 0.872 0.878 0.829 0.873 0.817 0.838 0.852
Indonesia 0.786 0.829 0.841 0.867 0.871 0.822 0.867 0.812 0.834 0.844
Nigeria 0.785 0.822 0.833 0.862 0.866 0.817 0.866 0.809 0.832 0.838
Poland 0.784 0.817 0.826 0.855 0.862 0.814 0.859 0.808 0.830 0.837
Walmart 0.776 0.814 0.823 0.849 0.861 0.812 0.857 0.808 0.823 0.832
Johnson & Johnson 0.768 0.802 0.815 0.840 0.859 0.804 0.844 0.804 0.811 0.822
Verizon 0.771 0.807 0.830 0.856 0.870 0.814 0.859 0.813 0.811 0.826
Unilever PLC 0.782 0.810 0.843 0.864 0.878 0.827 0.861 0.816 0.814 0.832
Rio Tinto PLC 0.790 0.814 0.853 0.873 0.894 0.835 0.864 0.820 0.818 0.843
Air Liquide 0.798 0.823 0.867 0.880 0.909 0.842 0.878 0.825 0.829 0.854
Ambev 0.810 0.824 0.873 0.893 0.918 0.851 0.886 0.831 0.832 0.846
Cemex 0.804 0.807 0.850 0.878 0.912 0.849 0.880 0.810 0.805 0.838
Turkish Airlines 0.786 0.796 0.832 0.871 0.891 0.831 0.867 0.791 0.794 0.813
KCE Electronics 0.757 0.778 0.811 0.855 0.872 0.816 0.844 0.768 0.777 0.790
Telekomunikacja Polska 0.742 0.757 0.802 0.848 0.860 0.801 0.825 0.754 0.758 0.787
Caesars Resort Collection LLC 0.742 0.738 0.802 0.837 0.849 0.787 0.815 0.730 0.745 0.784
Asurion LLC 0.714 0.721 0.800 0.821 0.847 0.764 0.788 0.724 0.731 0.762
Intelsat Jackson Holdings 0.693 0.702 0.791 0.795 0.846 0.753 0.780 0.721 0.709 0.753
Athenahealth Group 0.689 0.692 0.778 0.790 0.842 0.725 0.754 0.721 0.681 0.753
Great Outdoors Group llc 0.664 0.672 0.764 0.774 0.826 0.724 0.750 0.704 0.676 0.750
Petróleos Mexicanos 0.661 0.667 0.762 0.769 0.801 0.718 0.746 0.693 0.675 0.728
Petrobras 0.653 0.644 0.741 0.757 0.785 0.704 0.726 0.674 0.658 0.714
Sands China Ltd 0.645 0.625 0.721 0.740 0.773 0.680 0.723 0.646 0.635 0.713
Indonesia Asahan Alumini 0.642 0.613 0.718 0.719 0.755 0.664 0.704 0.630 0.631 0.701
Longfor Properties 0.624 0.612 0.697 0.719 0.753 0.643 0.697 0.605 0.613 0.680
Table 5 Ideal profit ratio (30min) for continuous trades
Qfuzzy AdaBoost-GA SVM-GA DLNN-GA QGA ANFIS-QGA DRCNN CNN-LSTM GRU-CNN QRNN
Germany 0.0074 0.0091 0.0137 0.0158 0.0182 0.0146 0.0120 0.0104 0.0101 0.0136
United States 0.0080 0.0093 0.0149 0.0158 0.0207 0.0153 0.0138 0.0109 0.0115 0.0157
Italy 0.0085 0.0096 0.0156 0.0159 0.0216 0.0158 0.0143 0.0120 0.0120 0.0164
Spain 0.0097 0.0117 0.0174 0.0160 0.0221 0.0174 0.0163 0.0130 0.0137 0.0182
Japan 0.0110 0.0119 0.0185 0.0168 0.0238 0.0190 0.0184 0.0144 0.0138 0.0195
Turkey 0.0132 0.0124 0.0191 0.0189 0.0252 0.0200 0.0201 0.0165 0.0148 0.0196
Mexico 0.0119 0.0118 0.0175 0.0181 0.0235 0.0188 0.0185 0.0155 0.0133 0.0176
Indonesia 0.0100 0.0096 0.0169 0.0160 0.0229 0.0181 0.0166 0.0154 0.0117 0.0170
Nigeria 0.0097 0.0085 0.0164 0.0143 0.0217 0.0162 0.0152 0.0153 0.0097 0.0163
Poland 0.0080 0.0077 0.0143 0.0129 0.0222 0.0157 0.0135 0.0132 0.0079 0.0150
Walmart 0.0072 0.0075 0.0133 0.0127 0.0208 0.0144 0.0117 0.0111 0.0059 0.0137
Johnson & Johnson 0.0063 0.0069 0.0116 0.0112 0.0201 0.0135 0.0108 0.0105 0.0048 0.0122
Verizon 0.0047 0.0053 0.0112 0.0109 0.0181 0.0135 0.0092 0.0096 0.0037 0.0112
Unilever PLC 0.0030 0.0039 0.0094 0.0097 0.0172 0.0127 0.0075 0.0077 0.0019 0.0101
Rio Tinto PLC 0.0024 0.0035 0.0077 0.0094 0.0171 0.0123 0.0067 0.0058 0.0002 0.0098
Air Liquide 0.0036 0.0055 0.0082 0.0112 0.0173 0.0127 0.0079 0.0058 0.0015 0.0104
Ambev 0.0056 0.0066 0.0085 0.0131 0.0183 0.0130 0.0079 0.0068 0.0027 0.0119
Cemex 0.0078 0.0070 0.0102 0.0144 0.0190 0.0135 0.0080 0.0072 0.0042 0.0122
Turkish Airlines 0.0079 0.0078 0.0111 0.0146 0.0197 0.0144 0.0094 0.0085 0.0044 0.0128
KCE Electronics 0.0101 0.0093 0.0128 0.0148 0.0208 0.0158 0.0104 0.0093 0.0061 0.0131
Telekomunikacja Polska 0.0121 0.0108 0.0132 0.0159 0.0217 0.0178 0.0112 0.0100 0.0073 0.0137
Caesars Resort Collection LLC 0.0136 0.0123 0.0146 0.0176 0.0227 0.0186 0.0129 0.0108 0.0079 0.0137
Asurion LLC 0.0119 0.0111 0.0133 0.0171 0.0225 0.0170 0.0123 0.0103 0.0076 0.0116
Intelsat Jackson Holdings 0.0136 0.0098 0.0117 0.0153 0.0206 0.0152 0.0108 0.0101 0.0070 0.0112
Athenahealth Group 0.0129 0.0097 0.0114 0.0142 0.0190 0.0152 0.0104 0.0089 0.0050 0.0092
Great Outdoors Group llc 0.0121 0.0081 0.0106 0.0126 0.0170 0.0131 0.0086 0.0072 0.0031 0.0084
Petróleos Mexicanos 0.0109 0.0071 0.0092 0.0120 0.0158 0.0118 0.0078 0.0051 0.0025 0.0070
Petrobras 0.0102 0.0053 0.0077 0.0111 0.0138 0.0117 0.0072 0.0034 0.0014 0.0066
Sands China Ltd 0.0082 0.0036 0.0056 0.0104 0.0127 0.0101 0.0069 0.0016 0.0005 0.0057
Indonesia Asahan Alumini 0.0070 0.0025 0.0054 0.0098 0.0112 0.0084 0.0060 0.0015 0.0004 0.0056
Longfor Properties 0.0059 0.0023 0.0044 0.0078 0.0098 0.0069 0.0039 0.0006 0.0000 0.0048
Table 6 Sign Prediction Ratio (60min) for continuous trades
Qfuzzy AdaBoost-GA SVM-GA DLNN-GA QGA ANFIS-QGA DRCNN CNN-LSTM GRU-CNN QRNN
Germany 0.796 0.834 0.850 0.875 0.883 0.835 0.883 0.830 0.842 0.879
United States 0.788 0.832 0.845 0.872 0.880 0.833 0.880 0.824 0.836 0.877
Italy 0.780 0.829 0.839 0.871 0.878 0.827 0.875 0.819 0.835 0.874
Spain 0.777 0.821 0.834 0.864 0.872 0.821 0.874 0.816 0.828 0.868
Japan 0.775 0.820 0.831 0.857 0.868 0.817 0.870 0.808 0.827 0.862
Turkey 0.774 0.819 0.826 0.851 0.860 0.812 0.868 0.804 0.820 0.858
Mexico 0.770 0.814 0.821 0.851 0.856 0.809 0.866 0.797 0.818 0.853
Indonesia 0.766 0.809 0.820 0.845 0.850 0.801 0.860 0.792 0.813 0.846
Nigeria 0.766 0.802 0.813 0.841 0.844 0.797 0.860 0.789 0.811 0.839
Poland 0.764 0.797 0.806 0.834 0.840 0.794 0.852 0.788 0.810 0.838
Walmart 0.757 0.794 0.803 0.828 0.840 0.791 0.851 0.788 0.802 0.833
Johnson & Johnson 0.749 0.783 0.795 0.819 0.838 0.784 0.837 0.784 0.791 0.823
Verizon 0.751 0.787 0.810 0.835 0.849 0.794 0.853 0.793 0.791 0.828
Unilever PLC 0.763 0.790 0.822 0.842 0.856 0.806 0.855 0.796 0.794 0.834
Rio Tinto PLC 0.771 0.794 0.832 0.851 0.871 0.815 0.858 0.799 0.798 0.844
Air Liquide 0.779 0.802 0.845 0.858 0.886 0.822 0.872 0.804 0.808 0.856
Ambev 0.790 0.803 0.851 0.871 0.896 0.830 0.879 0.810 0.812 0.847
Cemex 0.784 0.787 0.829 0.856 0.889 0.828 0.873 0.790 0.785 0.840
Turkish Airlines 0.766 0.776 0.811 0.849 0.869 0.810 0.860 0.771 0.774 0.814
KCE Electronics 0.739 0.759 0.791 0.834 0.850 0.796 0.838 0.749 0.757 0.791
Telekomunikacja Polska 0.724 0.739 0.782 0.827 0.839 0.781 0.819 0.735 0.739 0.789
Caesars Resort Collection LLC 0.723 0.720 0.782 0.816 0.828 0.767 0.809 0.712 0.726 0.786
Asurion LLC 0.696 0.704 0.780 0.801 0.826 0.745 0.782 0.706 0.713 0.764
Intelsat Jackson Holdings 0.676 0.685 0.772 0.775 0.825 0.735 0.774 0.703 0.691 0.744
Athenahealth Group 0.672 0.675 0.759 0.770 0.821 0.707 0.748 0.703 0.664 0.743
Great Outdoors Group llc 0.648 0.656 0.746 0.755 0.806 0.706 0.744 0.687 0.660 0.740
Petróleos Mexicanos 0.644 0.651 0.743 0.750 0.781 0.701 0.741 0.676 0.658 0.718
Petrobras 0.637 0.628 0.723 0.738 0.765 0.687 0.720 0.657 0.642 0.705
Sands China Ltd 0.629 0.610 0.703 0.722 0.754 0.663 0.717 0.630 0.619 0.704
Indonesia Asahan Alumini 0.626 0.598 0.700 0.701 0.737 0.648 0.699 0.615 0.615 0.693
Longfor Properties 0.608 0.597 0.680 0.701 0.735 0.627 0.692 0.590 0.598 0.672
Table 7 Ideal profit ratio (60min) for continuous trades
Qfuzzy AdaBoost-GA SVM-GA DLNN-GA QGA ANFIS-QGA DRCNN CNN-LSTM GRU-CNN QRNN
Germany 0.0087 0.0111 0.0121 0.0174 0.0182 0.0153 0.0133 0.0077 0.0032 0.0095
United States 0.0101 0.0131 0.0139 0.0177 0.0187 0.0155 0.0137 0.0087 0.0044 0.0100
Italy 0.0112 0.0148 0.0147 0.0195 0.0202 0.0169 0.0150 0.0095 0.0048 0.0107
Spain 0.0121 0.0150 0.0153 0.0204 0.0211 0.0185 0.0162 0.0114 0.0059 0.0119
Japan 0.0125 0.0159 0.0153 0.0212 0.0213 0.0187 0.0174 0.0117 0.0070 0.0139
Turkey 0.0140 0.0172 0.0172 0.0227 0.0217 0.0206 0.0185 0.0125 0.0090 0.0150
Mexico 0.0138 0.0166 0.0169 0.0207 0.0214 0.0186 0.0164 0.0116 0.0083 0.0129
Indonesia 0.0137 0.0155 0.0164 0.0187 0.0193 0.0174 0.0155 0.0100 0.0064 0.0122
Nigeria 0.0118 0.0140 0.0162 0.0180 0.0188 0.0168 0.0150 0.0078 0.0057 0.0107
Poland 0.0111 0.0137 0.0154 0.0161 0.0174 0.0167 0.0149 0.0068 0.0035 0.0103
Walmart 0.0109 0.0129 0.0143 0.0156 0.0158 0.0161 0.0129 0.0065 0.0025 0.0089
Johnson & Johnson 0.0092 0.0127 0.0124 0.0145 0.0156 0.0160 0.0121 0.0051 0.0010 0.0085
Verizon 0.0075 0.0117 0.0121 0.0123 0.0154 0.0158 0.0105 0.0030 0.0007 0.0074
Unilever PLC 0.0071 0.0104 0.0113 0.0103 0.0150 0.0140 0.0085 0.0019 0.0010 0.0072
Rio Tinto PLC 0.0066 0.0086 0.0102 0.0087 0.0142 0.0118 0.0083 0.0001 0.0024 0.0067
Air Liquide 0.0079 0.0105 0.0119 0.0093 0.0147 0.0132 0.0094 0.0001 0.0011 0.0088
Ambev 0.0088 0.0105 0.0137 0.0098 0.0164 0.0136 0.0115 0.0002 0.0002 0.0090
Cemex 0.0104 0.0121 0.0138 0.0107 0.0174 0.0141 0.0117 0.0017 0.0005 0.0090
Turkish Airlines 0.0115 0.0133 0.0147 0.0107 0.0178 0.0150 0.0124 0.0024 0.0012 0.0102
KCE Electronics 0.0124 0.0148 0.0163 0.0108 0.0197 0.0165 0.0143 0.0043 0.0028 0.0114
Telekomunikacja Polska 0.0138 0.0161 0.0172 0.0116 0.0214 0.0181 0.0157 0.0046 0.0037 0.0121
Caesars Resort Collection LLC 0.0150 0.0168 0.0190 0.0131 0.0235 0.0190 0.0173 0.0052 0.0048 0.0124
Asurion LLC 0.0139 0.0148 0.0188 0.0122 0.0218 0.0178 0.0166 0.0034 0.0038 0.0124
Intelsat Jackson Holdings 0.0142 0.0144 0.0171 0.0117 0.0205 0.0157 0.0145 0.0022 0.0020 0.0108
Athenahealth Group 0.0123 0.0136 0.0149 0.0097 0.0204 0.0146 0.0131 0.0016 0.0018 0.0086
Great Outdoors Group llc 0.0107 0.0127 0.0143 0.0085 0.0202 0.0140 0.0130 0.0005 0.0003 0.0070
Petróleos Mexicanos 0.0091 0.0114 0.0124 0.0075 0.0187 0.0139 0.0111 0.0007 0.0004 0.0070
Petrobras 0.0074 0.0107 0.0112 0.0060 0.0166 0.0133 0.0101 0.0012 0.0014 0.0053
Sands China Ltd 0.0062 0.0091 0.0101 0.0052 0.0148 0.0130 0.0100 0.0016 0.0025 0.0038
Indonesia Asahan Alumini 0.0044 0.0085 0.0100 0.0036 0.0132 0.0117 0.0084 0.0022 0.0027 0.0025
Longfor Properties 0.0044 0.0074 0.0079 0.0018 0.0131 0.0105 0.0084 0.0037 0.0045 0.0011
SVM-GA may also be regarded as a reference model for the other machine learn-
ing algorithms, being the fourth best in the comparison. We notice that the fuzzy
approach is the worst-performing technique, with an overall mean of 0.770 for the
Qfuzzy method.
Table3 shows the results of the ideal profit ratios for the selected bonds and for a
time scale of 10min for every methodology. It is noted that, in line with the success
rates in Table2, QGA is again the best-performing method, as all bonds have a posi-
tive ideal profit ratio with a mean value of 0.0175. Nevertheless, in contrast to the
results of the accuracy rates, QRNN becomes the second-best performing method
this time, as all bonds also have a positive ideal profit ratio and a mean of 0.0140.
QRNN is followed by the ANFIS-QGA method with an average of 0.0134. In this
case, SVM-GA and CNN-LSTM are the worst-performing models regarding profit
generation for continuous 10-min trades. The maximum value of the ideal profit
ratio among sovereign bonds is 0.0212 and is reached by QGA in Turkey. Among
corporate bonds, Telekomunikacja Polska is the one that reaches the maximum value, being 0.0192, again with the QGA method. Finally, among high-yield bonds, Caesars Resort Collection LLC stands out with a value of 0.0192, also with the QGA method.
Table4 presents the results concerning the success ratios of continuous opera-
tions with a frequency of 30min. We note that, as in the case of the 10-min fre-
quency, the QGA method is once again the most performing in terms of mean bond
success ratio, with an average of 0.860. In the QGA method, among sovereign
bonds, Germany has the highest ratio at 0.906. Looking at corporate bonds, also
in the QGAs method, Ambev’s is the highest value at 0.918, and among high-yield
bonds, Caesars Resort Collection LLC ranks highest with a value of 0.849, also in
the QGA method. The next methods that present a correct forecast of the future
direction of bond prices are DLNN-GA and DRCNN, with a mean of 0.839 and
0.829 respectively. SVM-GA could also be accepted as a good model for the sign
prediction ratio. On the other hand, we notice that, as with the 10-min frequency, the
method Qfuzzy show low sign prediction capacity, with mean value of 0.750.
If we examine the results of the ideal profit ratio for continuous 30-min trades in Table 5, we observe that QGA emerges as the best model yielding the greatest profit ratio, with an average of 0.0193 and all bonds having a positive ratio. This result is in line with the success rates of the QGA method in Table 4. However, if we look at the techniques that worst predict the future direction of bond prices, in this case these are not the fuzzy one but GRU-CNN and AdaBoost-GA, with mean values of 0.0063 and 0.0080, respectively.
On considering a sampling frequency of 60 min, Table 6 reveals that, in line with the previous results, QGA is better than the other methods regarding the mean sign prediction ratio, with an average of 0.838. As in the case of the 10-min and 30-min time scales, the maximum success ratio for all bonds through this QGA method is achieved for the corporate bond "Ambev". Moreover, DRCNN and DLNN-GA correctly predict all bonds with a rate above 0.701. Furthermore, we remark that, as with the 10-min and 30-min frequencies, the Qfuzzy method displays a weak sign prediction ability, with a mean value of 0.732.
If we analyze Table7, it is evident that the genetic algorithms are the ones that
obtain the best results for the 60-min frequency in the ideal profit ratio. QGA,
Content courtesy of Springer Nature, terms of use apply. Rights reserved.
2295
High‑Frequency Trading inBond Returns: AComparison Across…
ANFIS-QGA, and SVM-GA are those that reflect, in this order, the highest ideal
profit ratio with all of the bonds having a positive ratio. DRCNN and DLNN-GA
come after with an average ratio of 0.0131 and 0.0128, respectively. The lowest val-
ues, in contrast to the previous table, are those achieved in the GRU-CNN and CNN-
LSTM methods, with an average value of 0.0032 and 0.0048 respectively.
When we examine continuous trading time series at 10, 30, and 60min, we can
reveal the impact of the sampling frequency on the prediction. We note in Fig.3
that all methods show better results concerning the Sign Prediction Ratio at lower
frequencies. Nevertheless, the case is otherwise, as illustrated in Fig.4, for the Ideal
Benefit Ratio, as AdaBoost-GA, SVM-GA, ANFIS-QGA and DRCNN perform the
best model for trading strategy setting at 60min of sampling, and QGAperforms the
best for 30min of sampling. Following our results, not only the bonds and meth-
odology but also the prediction intervals are important. As a consequence, we may
conclude that one method is not suitable for everything. While a method may be
suitable for raw data, it may not be appropriate for fine data.
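The two evaluation measures used throughout these tables can be sketched in code. As a rough illustration, we assume here that the sign prediction ratio is the share of correctly predicted price-change directions and that the ideal profit ratio relates the strategy's profit to that of a perfect-foresight trader; both definitions are our assumptions for demonstration, not the authors' exact formulas:

```python
# Illustrative sketch of the two evaluation metrics (assumed definitions).

def sign_prediction_ratio(actual, predicted):
    """Fraction of intervals where the predicted direction matches the realized one."""
    hits = sum(1 for a, p in zip(actual, predicted) if (a >= 0) == (p >= 0))
    return hits / len(actual)

def ideal_profit_ratio(actual, predicted):
    """Profit from trading on predicted signs, relative to the profit of an
    ideal trader who always trades in the realized direction."""
    strategy = sum(a if p >= 0 else -a for a, p in zip(actual, predicted))
    ideal = sum(abs(a) for a in actual)
    return strategy / ideal if ideal else 0.0

# Toy 10-minute returns (actual) and model forecasts (predicted):
actual = [0.002, -0.001, 0.003, -0.002, 0.001]
predicted = [0.001, -0.002, 0.002, 0.001, 0.002]

print(sign_prediction_ratio(actual, predicted))  # 0.8 (4 of 5 signs correct)
print(ideal_profit_ratio(actual, predicted))
```

Under these assumed definitions, a ratio of 1.0 would correspond to perfect direction forecasts and to capturing all of the attainable profit, respectively.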
Tables8 and 9 show the results of Sovereign bonds at 1 and 5min frequency
intervals. If we observe Table8, the methodology with the best results is QGA
Fig. 3 Sign Prediction Ratio for continuous trades at different sampling frequencies of each method
Content courtesy of Springer Nature, terms of use apply. Rights reserved.
2296
D.Alaminos et al.
for both ratios, sign prediction and ideal profit. In the case of the sign predic-
tion ratio for a one-minute frequency, German sovereign bonds are the ones that
reach the highest value (0.924), and in the 5-min frequency case, Italian sovereign bonds do so with a ratio of 0.946. With reference to the ideal profit ratio, the best result is obtained by Turkey for a frequency of 1 min (0.0229), and by Spain for a frequency of 5 min (0.0247). For all trades, Table 9 illustrates that the best sovereign bond performance in sign prediction ratio at the 1-min frequency is Japan (0.930) in the Qfuzzy method. However, at a frequency of 5 min, the DLNN-GA method performs best, with Germany having the highest ratio. Regarding the ideal profit ratio, QRNN is the best methodology, with Turkey obtaining the highest values at both frequencies, 0.0244 at a 1-min frequency and 0.0195 at a 5-min frequency.
In comparison with other works, Vukovic et al. (2020) obtain an accuracy of 82% on test cases for predicting future Sharpe ratio dynamics with neural networks. Nunes et al. (2019) achieve RMSE reductions, compared to the model without synthetic data, in the range of 11% to 70% (mean values, for forecast horizons of 15 and 20 days) for predicting the bond market yield curve, using the
Fig. 4 Ideal profit ratio for continuous trades at different sampling frequencies of each method
Table 8 Sign prediction and ideal profit ratios at small frequencies for continuous trades of sovereign bonds
Qfuzzy AdaBoost-GA SVM-GA DLNN-GA QGA ANFIS-QGA DRCNN CNN-LSTM GRU-CNN QRNN
Sign prediction ratio (1min)
Germany 0.833 0.872 0.890 0.916 0.924 0.874 0.908 0.869 0.881 0.897
United States 0.824 0.871 0.884 0.913 0.920 0.872 0.905 0.862 0.874 0.895
Italy 0.817 0.867 0.878 0.911 0.919 0.865 0.900 0.857 0.874 0.892
Spain 0.813 0.859 0.872 0.904 0.912 0.859 0.899 0.854 0.866 0.886
Japan 0.810 0.858 0.870 0.896 0.908 0.855 0.895 0.845 0.866 0.879
Turkey 0.809 0.857 0.864 0.890 0.900 0.850 0.892 0.841 0.858 0.875
Mexico 0.805 0.851 0.859 0.890 0.896 0.846 0.891 0.833 0.856 0.871
Indonesia 0.802 0.846 0.858 0.884 0.889 0.838 0.885 0.829 0.851 0.863
Nigeria 0.801 0.839 0.850 0.879 0.883 0.834 0.884 0.825 0.849 0.856
Poland 0.800 0.834 0.843 0.872 0.879 0.831 0.876 0.825 0.847 0.855
Sign prediction ratio (5min)
Germany 0.814 0.838 0.878 0.899 0.921 0.860 0.891 0.844 0.843 0.870
United States 0.822 0.848 0.893 0.906 0.936 0.868 0.905 0.850 0.854 0.881
Italy 0.835 0.849 0.899 0.920 0.946 0.877 0.912 0.856 0.858 0.873
Spain 0.828 0.831 0.876 0.905 0.939 0.875 0.907 0.834 0.829 0.865
Japan 0.809 0.820 0.857 0.897 0.918 0.856 0.893 0.815 0.818 0.839
Turkey 0.780 0.801 0.836 0.880 0.898 0.841 0.870 0.791 0.800 0.815
Mexico 0.765 0.780 0.826 0.873 0.886 0.825 0.850 0.777 0.781 0.813
Indonesia 0.764 0.761 0.826 0.862 0.875 0.810 0.840 0.752 0.767 0.809
Nigeria 0.736 0.743 0.824 0.846 0.872 0.787 0.811 0.746 0.753 0.786
Poland 0.714 0.723 0.815 0.819 0.871 0.776 0.804 0.743 0.730 0.766
Ideal profit ratio (1min)
Germany 0.0092 0.0115 0.0126 0.0180 0.0189 0.0159 0.0137 0.0080 0.0034 0.0099
Table 8 (continued)
Qfuzzy AdaBoost-GA SVM-GA DLNN-GA QGA ANFIS-QGA DRCNN CNN-LSTM GRU-CNN QRNN
United States 0.0107 0.0138 0.0147 0.0186 0.0197 0.0164 0.0144 0.0092 0.0046 0.0106
Italy 0.0118 0.0156 0.0155 0.0206 0.0213 0.0178 0.0158 0.0100 0.0051 0.0113
Spain 0.0128 0.0158 0.0161 0.0216 0.0222 0.0195 0.0171 0.0120 0.0062 0.0125
Japan 0.0132 0.0168 0.0162 0.0224 0.0225 0.0198 0.0183 0.0123 0.0074 0.0147
Turkey 0.0148 0.0182 0.0181 0.0239 0.0229 0.0218 0.0195 0.0132 0.0095 0.0158
Mexico 0.0145 0.0176 0.0178 0.0218 0.0226 0.0196 0.0173 0.0122 0.0088 0.0136
Indonesia 0.0145 0.0164 0.0173 0.0197 0.0204 0.0184 0.0164 0.0105 0.0068 0.0129
Nigeria 0.0124 0.0148 0.0171 0.0190 0.0199 0.0178 0.0158 0.0082 0.0060 0.0113
Poland 0.0107 0.0145 0.0162 0.0170 0.0183 0.0176 0.0157 0.0072 0.0037 0.0108
Ideal PROFIT RATIO (5min)
Germany 0.0076 0.0094 0.0142 0.0163 0.0209 0.0151 0.0124 0.0107 0.0105 0.0140
United States 0.0083 0.0096 0.0154 0.0163 0.0231 0.0158 0.0142 0.0112 0.0119 0.0162
Italy 0.0088 0.0099 0.0162 0.0165 0.0242 0.0163 0.0148 0.0124 0.0124 0.0169
Spain 0.0100 0.0120 0.0180 0.0165 0.0247 0.0180 0.0168 0.0134 0.0141 0.0188
Japan 0.0114 0.0123 0.0191 0.0173 0.0266 0.0196 0.0190 0.0148 0.0142 0.0202
Turkey 0.0136 0.0128 0.0197 0.0196 0.0281 0.0207 0.0207 0.0171 0.0153 0.0202
Mexico 0.0123 0.0121 0.0181 0.0187 0.0262 0.0194 0.0191 0.0160 0.0138 0.0182
Indonesia 0.0103 0.0099 0.0174 0.0166 0.0255 0.0187 0.0172 0.0159 0.0121 0.0175
Nigeria 0.0100 0.0088 0.0169 0.0148 0.0243 0.0167 0.0157 0.0158 0.0100 0.0168
Poland 0.0082 0.0080 0.0148 0.0133 0.0229 0.0162 0.0139 0.0136 0.0082 0.0155
Table 9 Sign prediction and ideal profit ratios at small frequencies for all trades of sovereign bonds
Qfuzzy AdaBoost-GA SVM-GA DLNN-GA QGA ANFIS-QGA DRCNN CNN-LSTM GRU-CNN QRNN
Sign prediction ratio (1min)
Germany 0.796 0.811 0.827 0.860 0.885 0.926 0.911 0.890 0.922 0.894
United States 0.830 0.831 0.821 0.858 0.878 0.922 0.905 0.884 0.915 0.888
Italy 0.867 0.849 0.817 0.856 0.872 0.916 0.897 0.880 0.908 0.885
Spain 0.903 0.850 0.810 0.855 0.869 0.908 0.891 0.876 0.906 0.881
Japan 0.930 0.881 0.807 0.853 0.864 0.904 0.886 0.868 0.901 0.873
Turkey 0.929 0.878 0.802 0.849 0.857 0.899 0.881 0.862 0.898 0.872
Mexico 0.927 0.861 0.800 0.847 0.856 0.893 0.878 0.857 0.892 0.871
Indonesia 0.913 0.847 0.792 0.844 0.852 0.889 0.869 0.856 0.887 0.870
Nigeria 0.912 0.834 0.788 0.843 0.847 0.888 0.867 0.855 0.884 0.867
Poland 0.912 0.833 0.782 0.835 0.841 0.882 0.863 0.848 0.882 0.861
Sign prediction ratio (5min)
Germany 0.815 0.847 0.872 0.913 0.898 0.877 0.909 0.866 0.902 0.913
United States 0.809 0.845 0.866 0.909 0.892 0.871 0.902 0.865 0.894 0.907
Italy 0.805 0.844 0.860 0.903 0.885 0.867 0.895 0.860 0.889 0.907
Spain 0.799 0.843 0.856 0.895 0.879 0.863 0.893 0.854 0.886 0.903
Japan 0.796 0.841 0.852 0.892 0.873 0.855 0.888 0.846 0.885 0.895
Turkey 0.791 0.837 0.845 0.886 0.869 0.849 0.885 0.844 0.884 0.890
Mexico 0.789 0.835 0.844 0.881 0.865 0.845 0.879 0.842 0.876 0.886
Indonesia 0.780 0.832 0.840 0.876 0.857 0.843 0.875 0.841 0.869 0.882
Nigeria 0.776 0.831 0.835 0.875 0.855 0.843 0.871 0.837 0.867 0.876
Poland 0.771 0.824 0.829 0.869 0.851 0.836 0.869 0.833 0.859 0.874
Ideal profit ratio (1min)
Germany 0.0094 0.0110 0.0137 0.0144 0.0157 0.0122 0.0074 0.0103 0.0135 0.0186
Table 9 (continued)
Qfuzzy AdaBoost-GA SVM-GA DLNN-GA QGA ANFIS-QGA DRCNN CNN-LSTM GRU-CNN QRNN
United States 0.0096 0.0128 0.0141 0.0161 0.0166 0.0140 0.0090 0.0108 0.0156 0.0208
Italy 0.0104 0.0130 0.0161 0.0171 0.0173 0.0158 0.0103 0.0125 0.0172 0.0217
Spain 0.0115 0.0136 0.0168 0.0173 0.0194 0.0175 0.0120 0.0142 0.0181 0.0229
Japan 0.0123 0.0152 0.0174 0.0175 0.0196 0.0172 0.0126 0.0144 0.0179 0.0237
Turkey 0.0141 0.0159 0.0196 0.0188 0.0220 0.0185 0.0132 0.0157 0.0204 0.0244
Mexico 0.0137 0.0145 0.0193 0.0186 0.0212 0.0172 0.0124 0.0152 0.0187 0.0224
Indonesia 0.0136 0.0133 0.0186 0.0185 0.0202 0.0163 0.0117 0.0150 0.0170 0.0211
Nigeria 0.0122 0.0124 0.0184 0.0165 0.0183 0.0161 0.0106 0.0145 0.0170 0.0202
Poland 0.0115 0.0119 0.0178 0.0157 0.0164 0.0143 0.0104 0.0137 0.0160 0.0191
Ideal profit ratio (5min)
Germany 0.0072 0.0093 0.0095 0.0111 0.0114 0.0101 0.0076 0.0057 0.0102 0.0122
United States 0.0091 0.0101 0.0104 0.0120 0.0118 0.0114 0.0082 0.0072 0.0113 0.0141
Italy 0.0101 0.0118 0.0125 0.0126 0.0130 0.0117 0.0097 0.0075 0.0121 0.0162
Spain 0.0103 0.0123 0.0131 0.0138 0.0144 0.0121 0.0101 0.0095 0.0129 0.0180
Japan 0.0101 0.0132 0.0137 0.0142 0.0154 0.0126 0.0116 0.0112 0.0146 0.0179
Turkey 0.0119 0.0145 0.0157 0.0165 0.0165 0.0139 0.0126 0.0135 0.0160 0.0195
Mexico 0.0107 0.0129 0.0141 0.0158 0.0156 0.0125 0.0121 0.0121 0.0143 0.0192
Indonesia 0.0095 0.0113 0.0141 0.0151 0.0156 0.0107 0.0106 0.0101 0.0143 0.0178
Nigeria 0.0083 0.0100 0.0133 0.0141 0.0152 0.0089 0.0105 0.0087 0.0134 0.0174
Poland 0.0077 0.0097 0.0127 0.0126 0.0149 0.0071 0.0098 0.0069 0.0128 0.0159
Multilayer Perceptrons method. In summary, our study has high precision and also exceeds the accuracy level of previous work, with the genetic algorithms, especially the QGA method, obtaining the best results. Moreover, previous literature dealing with fixed-income assets has not been concerned with the use of HFT. The results of our study show that bond market transactions through HFT are executed faster and trading volume increases considerably, enhancing the liquidity of the bond market.
Finally, we analyse the cumulative net profits for each bond market (sovereign, corporate and high-yield) and according to each price window (10-min, 30-min, 60-min and 1-min, 5-min). These results are presented in "Appendix 4" via Figs. 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15. Over the span of two decades under examination, all models encountered drawdowns of varying magnitudes, ranging from 5 to 15% at different points in time. On average, these drawdowns persisted for approximately 2.5 months. Instances of model underperformance became evident during periods of extreme market volatility, exemplified by the 2008 financial crisis, which witnessed model losses surpassing the 20% mark. Similarly, unexpected geopolitical events posed challenges, with losses reaching up to 18%. Our models demonstrated a propensity to falter when confronted with 'black swan' events of exceptional magnitude that surpassed historical data, as exemplified by the impact of the COVID-19 pandemic (Papadamou et al., 2021). Moreover, these models' performance declines somewhat when forecasting abrupt market shifts induced by unprecedented occurrences, such as major regulatory changes. Despite these limitations in predicting and achieving profits, our models achieve a higher and more consistent level of cumulative profits over time than previous work on algorithmic trading models, especially high-frequency trading models (Dixon et al., 2018; Rundo, 2019; Lahmiri & Bekiros, 2021; Goudarzi & Bazzana, 2023).
Sovereign Bonds exhibit a fluctuating pattern over the years, with a negative start
in 2001 but a significant shift towards positive gains in 2002. This positive trend
continued until 2006, followed by intermittent fluctuations. By 2023, net gains had
stabilized at a relatively positive level, demonstrating the resilience of these bonds.
Corporate Bonds also had a negative start in 2001 but saw notable improvements in
2002, with consistent gains until around 2006. There was volatility in the subsequent
years, with lucrative moments such as in 2010 but also difficulties. By 2023, corpo-
rate net gains appear to have regained a positive trajectory. High-Yield Bonds started
in the negative in 2001 and remained mostly so until 2003. They then experienced
a period of consistent gains until around 2011, followed by volatility. In 2023, they
maintain positive cumulative gains, albeit more moderate.
At the beginning of the 2000s, central banks, particularly the U.S. Federal
Reserve, had a more neutral monetary policy stance. Interest rates were relatively
higher compared to the 2010s (Jarrow, 2019). However, following the burst of the
dot-com bubble and the September 11 attacks in 2001, central banks, including the
Federal Reserve, lowered interest rates to stimulate economic growth. These rate
cuts resulted in lower yields on government bonds (Fabozzi & Fabozzi, 2021).
Bond yields, especially in the U.S., remained relatively low during the first half of
the decade but started to rise as the economy improved. The latter part of the 2000s
was marked by the U.S. housing bubble and the subsequent global financial crisis
of 2008. These events led to a flight to safety, with investors seeking refuge in government bonds, particularly U.S. Treasuries (Gilchrist et al., 2019). This increased demand for government bonds drove prices up and yields down.
Corporate bonds in the early 2000s offered higher yields compared to govern-
ment bonds, reflecting the risk premium associated with corporate debt. However,
during the financial crisis, corporate bond yields rose significantly as investors
became concerned about the creditworthiness of corporations (Jarrow, 2019). Bond
spreads, which measure the difference in yields between corporate bonds and gov-
ernment bonds, widened substantially during this period. Emerging market bonds
experienced mixed performance during the 2000s. Some emerging market econo-
mies attracted foreign investment, leading to lower yields on their bonds. However,
there were instances of bond market turmoil in emerging markets, driven by factors
such as currency devaluations and political instability (Beirne and Sugandi, 2023).
Regarding the 2010s decade, central banks, particularly in developed economies
like the United States, Europe, and Japan, implemented accommodative monetary
policies in response to the global financial crisis of 2008 (Albagli et al., 2018).
These policies included near-zero or negative interest rates and large-scale bond-
buying programs (quantitative easing) aimed at stimulating economic growth. As
a result, yields on government bonds, which serve as benchmarks for other fixed-
income securities, remained historically low (Blanchard, 2023). The low yield
environment prompted investors to seek higher-yielding assets, which sometimes
led to increased demand for riskier bonds, such as high yield or corporate bonds.
This increased demand pushed up bond prices and drove yields lower. The global
economy experienced a prolonged period of low inflation and, at times, deflation-
ary pressures during the 2010s. Low inflation expectations are often associated with
lower yields on fixed-income securities (Fabozzi & Fabozzi, 2021).
Regulatory changes in the financial industry, such as Basel III banking regula-
tions, encouraged financial institutions to hold more high-quality liquid assets,
including government bonds (Ranaldo et al., 2019). This increased demand for government bonds also contributed to lower yields.
feature of the 2010s bond market, it’s essential to note that not all bonds experienced
the same level of yield compression. The extent of yield compression varied among
different types of bonds, and some segments of the bond market, like high yield
or emerging market bonds, offered higher yields to compensate for increased risk
(Fabozzi & Fabozzi, 2021).
5 Conclusions
This study has developed a comparison of methodologies to predict bond price
movements based on past prices through high-frequency trading. We compare ten
machine learning methods applied to the fixed-income markets in sovereign, cor-
porate and high-yield debt, in both developed and emerging countries, in the one-
year bond market for the period from 15 February 2000 to 12 April 2023. Our
results indicate that QGA, DRCNN and DLNN-GA can correctly interpret the
expected bond future price direction and rate changes satisfactorily. Curiously,
QFuzzy is not adequate for forecasting high-frequency returns and dealers ought
to avoid these models in their trading decisions, for the sample bond market.
Our study shows that all methods show better results concerning the Sign Prediction Ratio at lower frequencies. Thus, considering a 10-min frequency, the QGA method is the best performer across all the bonds, with an accuracy rate higher than 0.772 and a mean of 0.881. DRCNN and DLNN-GA are the second and third methods that correctly predict the change in bond price direction, with averages of 0.850 and 0.847, respectively. However, for the Ideal Profit Ratio, not all methods show better results at the highest frequency. Some methods, such as SVM-GA, ANFIS-QGA and DRCNN, provide the best models for the trading strategy configuration at 60-min sampling, and QGA performs best for 30-min sampling. Therefore, it is important to consider that, depending on the sampling frequency and the objective of the approach, one method does not fit all, and a mixture of different alternative techniques must be examined.
In contrast to previous research, this study has achieved better accuracy results and has made a comparison of innovative ML methods with the use of HFT, which has not been applied in the bond market so far. ML algorithms have become widely available for fixed-income market analysis, especially since uncertainty in the financial markets has risen sharply. In addition, our study has made predictions of bond price movements globally; hence, it is not exclusively focused on industrialized countries. Finally, our study includes not only sovereign bonds but also corporate and high-yield debt, making it of interest to policymakers in any country.
Our study provides important benefits in the field of finance. From a trading perspective, it strengthens the implementation of reliable and fast forecasting systems for bond prices, including the pursuit of returns and volatility targeting, and it can analyse the information of indirect market-based monetary policy instruments and the macro environment. Adequate bond price predictability can
reduce medium- and long-term debt servicing costs through the development of
a deep and liquid market for government securities. At the microeconomic level,
the development of a robust bond price prediction model can increase overall
financial stability and enhance financial intermediation via increased competition
and the development of related financial infrastructure, products, and services.
In addition, more generally, financial crises tend to arise in credit markets. Our
model has the potential to provide financial institutions with information on the
effects of policy measures on the credit market’s fragility and to provide a better
understanding of how market trends influence liquidity provision, implementation
costs, and the impact on transaction prices.
In summary, our paper has a great prospective impact. It can facilitate the work of professionals from financial institutions dedicated to trading, as well as private investors and other stakeholders. This research makes an important contribution to high-frequency trading, as the conclusions have important implications for investors and market participants as they seek to derive economic and financial profits from the bond market.
Our work has limitations in data availability for 10- and 30-min price frequencies for corporate debt securities. For this type of research to have greater generalizability for fixed-income market practitioners, greater data availability would be necessary. We leave this issue as a reason to explore future research in which more complex trading strategies can be organized to test and demonstrate the effectiveness of the techniques presented in this work for trading in debt securities.
Besides, further research should broaden the scope of the comparative analysis of
methodologies to cover the field of crypto-assets, such as cryptocurrencies and fan
tokens, since, in recent years, financial institutions have increasingly incorporated
crypto-assets in their portfolios.
Appendix1: Other Methodologies
Quantum‑Fuzzy Approach (QFuzzy)
Singh et al. (2018) and Singh and Huang (2019) proposed recently the quantum
optimization algorithm (QOP) modeled on the ’’entanglement’’ concept of quan-
tum mechanics. In the present research, QOP is enhanced to resolve the multi-objec-
tive optimization problem (MOOP) and is called QFuzzy. In the case of MOOP,
the major goal of QFuzzy is to choose the optimal solution set. For the operation
of selecting a set of solutions, all the solutions are placed in a memory where they
could be utilized to acquire the Pareto-optimal front screening out all the non-
dominated optimal solutions. The concept of an archive is developed in this pro-
cess, which stores all the non-dominated Pareto-optimal solutions (AONDPS). Sub-
sequently, a selection criterion is assumed to choose the most prominent solution
regarding the position of the quantum of the file. We then formulate a generalized
MOOP to prove the application of QFuzzy toward seeking optimized solutions for
the MOOP.
A MOOP is formulated with two goals:

$$\text{Optimize (Max. or Min.)} \quad \Theta(x) = D_m(x), \quad x \in \mathbb{R}^n \tag{33}$$

$$\text{Optimize (Max. or Min.)} \quad \beta(x) = Y_n(x), \quad x \in \mathbb{R}^n \tag{34}$$

under the linear restrictions

$$D_m(x) \geq 0, \quad m = 1, 2, \ldots, M \tag{35}$$

$$Y_n(x) \geq 0, \quad n = 1, 2, \ldots, N \tag{36}$$

$$x_i^{(LB)} \leq x_i \leq x_i^{(UB)}, \quad i = 1, 2, \ldots, n \tag{37}$$

In Eq. (33) there are $M$ objective functions, $\Theta(x) = \left( D_1(x), D_2(x), \ldots, D_M(x) \right)^T$. In Eq. (34) there are $N$ objective functions, $\beta(x) = \left( Y_1(x), Y_2(x), \ldots, Y_N(x) \right)^T$. In each case, the objective function must be minimized or maximized. A solution $x$ is a vector of $n$ decision variables, where $x = (x_1, x_2, x_3, \ldots, x_n)^T$. The space spanned by the $x_i$ is called the quantum system $\mathbb{R}^n$, whereas the space created by the $\Theta(x)$ and $\beta(x)$ values is named the solution space. Finally, the last condition, Eq. (37), states the restriction that limits the value of each $x_i$. For these restrictions, a condition set for a decision variable $x_i$ is described as $G^{(LB)} \leq x_i \leq G^{(UB)}$, which constrains the value of each $x_i$ within the lower bound ($G^{(LB)}$) and the upper bound ($G^{(UB)}$). To optimize the above MOOP with QFuzzy, we take the following steps.
Step 1 Quantum initialization in the quantum system: Start every search agent according to the following equation, based on Schrödinger (1935):

$$Q_k(e) = \ell \, Q^1_k(e) + (1 - \ell) \, Q^2_k(e) \tag{38}$$

In Eq. (38), $Q_k(e)$ denotes the $k$-th quantum at epoch $e$, with $k = 1, 2, \ldots, q$, $q$ being the total number of quanta in $\mathbb{R}^n$. $Q^1_k(e)$ and $Q^2_k(e)$ are two wave functions for the $k$-th quantum; $\ell = a + ib$ indicates a complex number, where $a$ and $b$ denote real numbers in $[0,1]$ and $i$ is the imaginary unit, $i = \sqrt{-1}$. In the representation of a complex number, multiplication by $-1$ refers to a 180-degree rotation of the $k$-th quantum about the origin. Therefore, multiplication by $i$ refers to a 90-degree rotation of the $k$-th quantum in the "positive", counterclockwise direction (Berezin & Shubin, 2012). Because the complex number $\ell$ cannot be used straightforwardly to start the quantum in the search space, its absolute value is employed in the computational procedure, defined as $|\ell| = \sqrt{a^2 + b^2}$. $Q^1_k(e)$ and $Q^2_k(e)$ can be related as:

$$Q^1_k(e) = \left\{ G^{UB} + r_1 \left( G^{UB} - G^{LB} \right) \right\} \tag{39}$$

$$Q^2_k(e) = \left\{ G^{LB} + r_2 \left( G^{UB} - G^{LB} \right) \right\} \tag{40}$$

where $r_1 \in [0,1]$ and $r_2 \in [0,1]$ are two different random functions.

Step 2 Quantum localization: The localization acquired by $Q_k(e)$ is given by $L_k(e)$, and may be stated as:

$$L_k(e) = \frac{1}{Q_k(e)} \, e^{2 Q_k(e)} \tag{41}$$

Step 3 Motion of the quantum: The motion displayed by $Q_k(e)$ is indicated by $M_k(e)$, and may be described as:

$$M_k(e) = \left| Q_k(e) - \frac{L_k(e)}{2} \ln \left( \frac{1}{mf} \right) \right| \tag{42}$$

where $mf$ is the "quantum movement factor", which can take values in $[0,1]$.
Step 4 Quantum shift: The displacement of $Q_k(e)$ is indicated by $D_k(e)$, and may be defined as:

$$D_k(e) = 2 \left| L_k(e) - M_k(e) \right| \tag{43}$$

Step 5 Evaluation of the suitability of the shift: A fitness value is established for $D_k(e)$, and it is updated if a solution better than the preceding one is available.

Step 6 Extension of the quantum search range: The extension of the motion, that is, $M_k(e)$ for the next epoch $e + 1$, is expressed as $M_k(e+1)$ and can be stated as:

$$M_k(e+1) = M_1 + M_2 + M_3 \tag{44}$$

where

$$M_1 = \alpha \, M_k(e) \tag{45}$$

$$M_2 = \ln \left( \frac{1}{mf} \right) r_3 \left[ pBD_k(e) - D_k(e) \right] \tag{46}$$

$$M_3 = \ln \left( \frac{1}{mf} \right) r_4 \left[ gBD_k(e) - D_k(e) \right] \tag{47}$$

In this case, $\alpha$ is named the "quantum acceleration factor", expressed as:

$$\alpha = \alpha_{max} - itr \times \frac{\left| \alpha_{max} - \alpha_{min} \right|}{Itr} \tag{48}$$

with $itr = 1, 2, \ldots, Itr$, where $Itr$ indicates the maximum number of iterations set for the algorithm. Here, $\alpha_{min}$ and $\alpha_{max}$ can be taken in $[0.1, 0.9]$, with $\alpha_{max} > \alpha_{min}$. In Eq. (46), $pBD_k(e)$ is the personal best displacement that $D_k(e)$ has reached since the first epoch. In Eq. (47), $gBD_k(e)$ is the global best displacement achieved so far among the displacements. In Eqs. (46) and (47), $r_3 \in [0,1]$ and $r_4 \in [0,1]$ are two different random functions.

Step 7 Updating the quantum offset: The offset adjustment, that is, $D_k(e)$ for the next epoch, is denoted by $D_k(e+1)$ and may be stated as:

$$D_k(e+1) = D_k(e) + M_k(e+1) \tag{49}$$

Step 8 Put the AONDPSs into the file. Employing the AONDPSs, the algorithm starts by scanning the Pareto-optimal front. Two components are integrated with the file to examine the optimal solutions, as in Knowles and Corne (2000): the controller and the grid. The addition of a particular solution to the file is determined by the controller. The information in the file is regarded as up to date according to one of the conditions below:
Condition 1: If an optimal non-dominated Pareto solution is missing and the file is empty, then the present solution should be adopted.
Condition 2: If an optimal solution is dominated by any other element within the file, that particular solution must be rejected.
Condition 3: If a Pareto-optimal solution is not dominated by any element in the file, then that particular solution should be adopted and kept in the file.
Condition 4: If optimal solutions in the file are dominated by the new element, they are removed from the file.
Finally, the other component is the grid: when the AONDPSs are placed in the file, a solution space is generated for every objective. Recursively bisecting the solution space produces an individual placement called a grid location. A grid location helps identify how many non-dominated Pareto solutions are in a grid and where they are located.
Step 9 Stop the algorithm if the stop condition is satisfied; if not, go back to step 7.
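The archive logic of Step 8 (Conditions 1-4) can be sketched as a standard Pareto-dominance update. This is an illustrative reconstruction, assuming every objective is minimized; the helper names `dominates` and `update_archive` are hypothetical:

```python
# Sketch of the Step-8 archive update (Conditions 1-4), assuming
# minimization of all objectives. Each solution is its objective vector.

def dominates(u, v):
    """u Pareto-dominates v: no worse in every objective, strictly better in one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def update_archive(archive, candidate):
    # Condition 2: reject the candidate if any archived solution dominates it.
    if any(dominates(s, candidate) for s in archive):
        return archive
    # Condition 4: remove archived solutions dominated by the new element.
    archive = [s for s in archive if not dominates(candidate, s)]
    # Conditions 1 and 3: adopt the non-dominated candidate
    # (this also covers the empty-archive case).
    archive.append(candidate)
    return archive

archive = []
for point in [(2.0, 3.0), (1.0, 4.0), (1.5, 3.5), (0.5, 5.0), (2.0, 2.0)]:
    archive = update_archive(archive, point)

print(sorted(archive))  # (2.0, 3.0) is dominated by (2.0, 2.0) and removed
```

In the full algorithm, each archived solution would additionally carry its grid location for the selection criterion.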
Adaptive Boosting andGenetic Algorithm (AdaBoost‑GA)
According to the conventional AdaBoost algorithm, every base classifier's weight is fixed once it has been computed, and the adaptivity of each base classifier is not considered (Wang et al., 2011). Hence, in this research, the GA is utilised in the adaptive integration procedure of the base classifiers. The number of decision groups is the number of AdaBoost weak classifiers, and the weights of the weak classifiers form the starting population of the GA.
Both crossover probability and mutation probability significantly impact the algo-
rithm’s optimization effect (Cheng etal., 2019). To choose the suitable crossover
likelihood and mutation likelihood, based on the literature (Drezner and Misevicius,
2013), we describe the crossover likelihood and mutation likelihood as:
P_c = γ    (50)

P_m = 0.1(1 − γ)    (51)

being γ a regulatory factor.
The fitness function was described as:

fit = (1/N) Σ_{i=1}^{N} I(y(X_i) = y_i)    (52)
AdaBoost‑GA
The function of AdaBoost is to build various base classifiers by training on the data distribution and thereafter to allocate weights to these base classifiers by their error rates. AdaBoost uses the decision group as the base classifier to enlarge the diversity of the ensemble, and uses the GA to optimise the weight of each base classifier when combining all the base classifiers.
Given {ω_{m,i} | i = 1, 2, …, N; m = 1, …, M}, which are the weights of every sample in the base classifiers, ω_{m,i} denotes the weight of the ith sample in the mth base classifier. Let y_m(x) and Y(x) be a base classifier and the strong classifier, respectively. α_m denotes the weight of the mth classifier, ε_m represents the error function of the mth base classifier, and M constitutes the number of base classifiers. The suggested AdaBoost-GA algorithm can be explained below.
Input:
-Training sets
-Validation sets
-ω_{1,i}: the weight of each training sample
Output:
Y(·): the final strong classifier:

Y(x) = sign(Σ_{j=1}^{M} α_j y_j(x))    (53)

1. Initialize ω_{1,i} = 1/N.
2. For m = 1 to M do.
3. Compute the error of the mth base classifier:

ε_m = Σ_{i=1}^{N} ω_{m,i} I(y_m(X_i) ≠ y_i)    (54)

4. Compute the weight of the mth base classifier:

α_m = ln{(1 − ε_m)/ε_m}    (55)
if α_m ≥ 0, then α_m increases as ε_m decreases. End if
5. Update the sample weights according to (56), where the normalisation factor Z_m is given by (57).
6. End for
Return Y(x).
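As a concreteness check, the loop above can be sketched in Python with decision stumps and a toy GA-style refinement of the classifier weights; the data, the stump search, and the GA operators here are illustrative assumptions, not the authors' implementation.

```python
# Sketch of AdaBoost (eqs. 53-57) whose alpha weights are then refined by a
# simple elitist GA (eq. 58).  Toy 1-D data and decision stumps throughout.
import numpy as np

rng = np.random.default_rng(0)

def stump_predict(x, thr, sign):
    return sign * np.where(x > thr, 1, -1)

# Toy training set: y is the sign of x, plus a little noise
X = rng.uniform(-1, 1, 200)
y = np.where(X + 0.1 * rng.normal(size=200) > 0, 1, -1)

M, N = 5, len(X)
w = np.full(N, 1.0 / N)                  # step 1: omega_{1,i} = 1/N
stumps, alphas = [], []
for m in range(M):                        # steps 2-5
    # pick the stump minimising the weighted error eps_m (eq. 54)
    best = min(((thr, s) for thr in np.linspace(-1, 1, 21) for s in (1, -1)),
               key=lambda p: np.sum(w * (stump_predict(X, *p) != y)))
    pred = stump_predict(X, *best)
    eps = max(np.sum(w * (pred != y)), 1e-10)
    alpha = np.log((1 - eps) / eps)       # eq. 55
    w = w * np.exp(-alpha * y * pred)     # eq. 56 (unnormalised)
    w /= w.sum()                          # Z_m, eq. 57
    stumps.append(best); alphas.append(alpha)

def strong_acc(a):                        # fitness, eq. 52, on the strong vote (eq. 53)
    agg = sum(ai * stump_predict(X, *s) for ai, s in zip(a, stumps))
    return np.mean(np.sign(agg) == y)

# GA refinement of the alphas (eq. 58): mutate-and-select with elitism
pop = [np.array(alphas)] + [np.array(alphas) * (1 + 0.1 * rng.normal(size=M))
                            for _ in range(19)]
for _ in range(30):
    pop.sort(key=strong_acc, reverse=True)
    pop = pop[:10] + [p + 0.05 * rng.normal(size=M) for p in pop[:10]]
best_alphas = max(pop, key=strong_acc)
```

With elitism, the refined weights can never score below the plain AdaBoost weights on the training fitness.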
Support Vector Machine-Genetic Algorithm (SVM-GA)
The problem of SVM parameter setting has been addressed by several approaches ranging from brute force to more refined metaheuristics, one of the best known of which is genetic algorithms (GAs). The major benefit of GAs compared to simpler methods is their ability to deliver stochastic near-optimal solutions at modest cost, while simultaneously optimising multiple parameters with no prior knowledge (Goldberg, 1990). In a GA, every parameter in the search space is encoded as an allele or gene, and the complete configuration of a specific solution is termed a chromosome. The native formulation of GAs encodes every gene in binary form (binary genetic algorithms, BGAs) so that multiple evolutionary operators can be successfully implemented. Such encoding, however, leads to needless computational expense because it transforms real values into binary form and back; the storage expense is also raised, because operators such as mutation and crossover require only a single pair of bits to fulfil their function (Chih-Hung et al., 2009). Another method that improves storage and computational costs is called real-valued genetic algorithms (RGAs). The latter type of coding is employed in the present work.
However, GAs suffer from some disadvantages, such as early convergence to local optima owing to their genetic operators, and their robustness is only obtained when parameters such as the population size or the number of generations are adjusted. In
ω_{m+1,i} = (ω_{m,i} / Z_m) exp(−α_m y_i y_m(x_i))    (56)

Z_m = Σ_{i=1}^{N} ω_{m,i} exp(−α_m y_i y_m(x_i))    (57)

α_m = GA(α)    (58)

Y(x) = sign(Σ_{j=1}^{M} α_j y_j(x))    (59)
addition, GA outcomes become less reproducible and need a statistical process to assure convergence of the solution configuration. Moreover, by keeping the optimal solution in the population, a GA is capable of converging to the global optimum (Sivaraj & Ravichandran, 2011).
Thus, key features in the development of evolutionary algorithms are the genetic operators, such as selection and crossover, and even the random number generator (RNG) they employ. The selection operator sets the search neighbourhoods, whereas the recombination and mutation operators search a particular space. In this work, we suggest a new Boltzmann operator that follows a cooling scheme similar to the linear schedule suggested by Kirkpatrick et al. (1983). Besides, the convergence properties of a genetic algorithm may be improved by applying chaotic number generators (Caponetto et al., 2003). In this study, three chaotic sequences have also been employed by the SVRGBC genetic operators.
SVRGBC Approach
A major stage in developing an efficient support vector model for ranking or regres-
sion is the adjustment of the parameters. Typically, this procedure accounts for
the error compensation constant (C), the choice of a kernel, and its related kernel-
specific constants. The easiest way to conduct this fitting procedure is called grid
search (GS). This approach involves generating multiple SVM models starting from
the learning step with a set step for the parameter values. Best values are achieved
through a test of each model on a validation set and the choice of the best result.
However, this method has various disadvantages, the most important of which are the need for a priori knowledge, large computational expense, local optima, and the uncertain distribution of the parameter values. An alternative method for SVM parameter adjustment is GA techniques; these are capable of coping with the above drawbacks owing to their demonstrated efficiency in the contextual handling of the model constants. GAs perform a non-linear search of the solution space based on no knowledge of the model's characteristics.
For the SVR hyperparameter setting in the volatility prediction task, our novel approach considers a triplet of parameters composed of the kernel type and a set of kernel-specific constants. The process begins with an original population of chromosomes produced by a pseudo-random or chaotic distribution. Every chromosome is composed of an integer-valued part and a real-valued part. The method of choosing individuals for the mating pool in each generation can be elitism, roulette, or the suggested Boltzmann selection method. The rest of the mating pool is filled using an n-point crossover operator and a boundary mutation technique (Chih-Hung et al., 2009; Ping-Feng et al., 2006), employing pseudo-random and chaotic sequences.
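A minimal sketch of such a mixed integer/real chromosome with roulette selection and elitism follows; a stand-in surrogate replaces the actual SVR cross-validation fitness, and the kernel list, bounds, and fitness shape are assumptions for illustration only.

```python
# Mixed chromosome for SVR tuning: an integer gene for the kernel type (KT)
# and real genes for C and a kernel constant (P1).  The fitness is a
# hypothetical surrogate; a real run would train an SVR and return 1/MSE.
import numpy as np

rng = np.random.default_rng(1)
KERNELS = ["linear", "rbf", "poly"]               # integer-encoded kernel gene

def random_chromosome():
    return {"KT": int(rng.integers(len(KERNELS))),  # integer part
            "C": rng.uniform(0.1, 100.0),           # real part
            "P1": rng.uniform(1e-3, 1.0)}

def fitness(ch):
    # Surrogate: pretend the best model is an RBF kernel with moderate C
    # and a small kernel constant (purely illustrative landscape).
    return ((ch["KT"] == 1) + np.exp(-((np.log10(ch["C"]) - 1) ** 2))
            + np.exp(-ch["P1"]))

def roulette(pop, fits):
    p = np.array(fits) / np.sum(fits)
    return pop[rng.choice(len(pop), p=p)]

pop = [random_chromosome() for _ in range(30)]
init_best = max(fitness(c) for c in pop)
for _ in range(25):
    fits = [fitness(c) for c in pop]
    elite = max(pop, key=fitness)                 # elitism
    children = []
    for _ in range(len(pop) - 1):
        ma, pa = roulette(pop, fits), roulette(pop, fits)
        child = {k: (ma if rng.random() < 0.5 else pa)[k] for k in ma}
        if rng.random() < 0.1:                    # boundary-style mutation
            child["C"] = rng.uniform(0.1, 100.0)
        children.append(child)
    pop = [elite] + children
best = max(pop, key=fitness)
```

Because the elite chromosome survives every generation, the final best fitness can never fall below the initial best.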
Genetic Operators
SVRGBC
is a mix of an integer-valued and a real-valued genetic algorithm; it uses
various genetic operators such as selection, crossover, and mutation to generate off-
spring from the population of real solutions. Three methods of selection are included
in our process: elitism, the roulette wheel, and a new selection method named Boltzmann selection. Such schemes are employed to select the best offspring to proceed with the evolution cycle. The chosen individuals are then gathered into a breeding pool, and the crossover and mutation operators are performed on them. An inconvenience of the GA crossover operation in the SVR parameter-setting problem is chromosomal heterogeneity. A solution to this problem is the employment of a dominance scheme, as indicated by Lewis et al. (1998). Every gene's value is determined by its kernel together with an upper and lower limit of the allele. Mutation is the following stage, where several chromosomes are targeted for a mutation to yield changed clones that will be incorporated into the new population. The present work is based on a uniform mutation, represented as:
being c_old and c_new a chosen chromosome before and after mutation, respectively; n enumerates the number of genes in the chromosome structure; c_i^new represents the new value of allele i after mutation; r symbols a random number in the range [0,1], produced by one of the available probability distributions; and LB_i and UB_i are the lower and upper bounds of allele i. The various ranges and kinds of SVR parameters require an integer-valued mutation and a real-valued mutation (Chih-Hung et al., 2009). The former is employed to handle the kernel type KT owing to its integer encoding, whereas the latter changes the P_1 or P_2 values. For these ends, a minor alteration of the displayed mutation operator is needed: a rounding function is introduced right after the perturbation of allele i to guarantee correct values.
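The uniform mutation of eq. (60), with the rounding step for integer genes such as the kernel type, can be sketched as follows; the gene layout `[KT, C, P1]` and the bounds are illustrative assumptions.

```python
# Uniform mutation of one allele (eq. 60): c_i_new = LB_i + r * (UB_i - LB_i),
# with an extra rounding step so integer-encoded genes (e.g. KT) stay valid.
import random

def uniform_mutation(chromosome, i, lb, ub, integer=False, rnd=random.random):
    c = list(chromosome)
    value = lb[i] + rnd() * (ub[i] - lb[i])    # r drawn from [0, 1)
    c[i] = round(value) if integer else value  # rounding for integer genes
    return c

old = [2, 10.0, 0.5]                 # hypothetical genes: [KT, C, P1]
lb, ub = [0, 0.1, 1e-3], [2, 100.0, 1.0]
new = uniform_mutation(old, 1, lb, ub)          # mutate the real gene C
```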
Boltzmann Selection
As Kirkpatrick etal. (1983) found, the solution acceptance function constitutes an
essential feature of simulated annealing (SA). Once the SA procedure is looped for
a sufficient amount of time, the solution acceptance distribution function follows the
Boltzmann distribution. SA is comprised of three items: A probabilistic acceptance
criterion, a neighbourhood exploration approach, and a cooling function to reach
thermal balance.
Goldberg (1990) suggested that an SA-like mechanism would improve the capabilities of the GA, given the thermal equilibrium afforded by the cooling function and the effectiveness of the Boltzmann distribution as an exploration heuristic (Goldberg, 1990; Kirkpatrick et al., 1983). Guided by the work of Goldberg (1990), we suggest Boltzmann selection, used to choose the surviving mating set based on the temperature of the system given by a cooling schedule. Every solution is either accepted or refused following the Boltzmann distribution, defined by (61).
c_old = {c_1, c_2, …, c_i, …, c_n}
c_i^new = LB_i + r (UB_i − LB_i)    (60)
c_new = {c_1, c_2, …, c_i^new, …, c_n}
being k the Boltzmann constant, ΔE the energy difference between the best and the present solution, and T the actual system temperature. The latter parameter is obtained in SA by utilising a cooling function from the classical exponential or linear annealing schedules (Kirkpatrick et al., 1983). In this work, a cooling scheme linked to the linear cooling function for the GA is presented, which correlates the temperature with the present generation and the total number of generations of the GA.
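The acceptance rule of eq. (61) combined with a linear cooling schedule tied to the generation counter can be sketched as below; the constants `t0`, `t_end`, and `k` are illustrative assumptions.

```python
# Boltzmann acceptance (eq. 61) with a linear cooling schedule: improvements
# are always kept, while worse solutions survive with probability e^(-dE/(kT)).
import math, random

def temperature(gen, total_gens, t0=1.0, t_end=0.01):
    # linear cooling: interpolate from t0 down to t_end across the run
    return t0 + (t_end - t0) * gen / total_gens

def boltzmann_accept(delta_e, t, k=1.0, rnd=random.random):
    # delta_e <= 0 means the candidate is at least as good as the incumbent
    return delta_e <= 0 or rnd() < math.exp(-delta_e / (k * t))
```

Early generations (high T) accept almost anything, which keeps exploration broad; late generations (low T) become nearly greedy.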
Fitness Function
The optimisation procedure of the SVR parameters via the GA needs a fitness function to assess and choose the chromosomes for the mating set. In this paper, the MSE is employed to evaluate the quality of every solution so as to maintain good genetic material. This is determined by (62), being σ_t the observed volatility for period t, σ̂_t the forecasted volatility for period t, and n the total forecasted time frame.
In addition, the MSE is computed using a statistical estimate of the generalisation error named k-fold cross-validation (CV). This refers to a method of calculating the parameter values of a model based on a training sample (Kohavi, 1995; Refaeilzadeh et al., 2009). The sample, in the simplest case, is split into k independent subsets of equal size. Using k − 1 of these, a model is trained and the MSE is computed over the residual subset. This procedure is repeated for each of the k subsets, and then the mean is calculated. At last, the model that reduces the CV value to the minimum is the optimal one (Refaeilzadeh et al., 2009).
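The k-fold CV estimate of the MSE (eq. 62) can be sketched as follows; a plain mean predictor stands in for the trained SVR so the example stays self-contained.

```python
# k-fold cross-validated MSE: split the sample into k folds, "fit" on k-1 of
# them, score on the held-out fold, and average the k fold errors (eq. 62).
import numpy as np

def kfold_mse(y, k=5):
    folds = np.array_split(np.arange(len(y)), k)
    errors = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        y_hat = y[train].mean()          # stand-in for a trained SVR forecast
        errors.append(np.mean((y[test] - y_hat) ** 2))
    return float(np.mean(errors))

vol = np.array([0.9, 1.1, 1.0, 0.8, 1.2, 1.05, 0.95, 1.0, 1.1, 0.9])
cv_mse = kfold_mse(vol, k=5)
```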
Deep Learning Neural Network-Genetic Algorithm (DLNN-GA)
The procedure of multiple forward scattering is described by the following linear model linking the incident optical modes and the transmitted optical modes (Vellekoop & Mosk, 2008):
being E_n the nth complex incident mode with amplitude |E_n| and phase φ_n, while E_m represents the mth complex optical mode transmitted from the scattering media. t_mn symbols one element of the complex transmission matrix which constitutes light
P(x_i) = e^(−ΔE/(kT))    (61)

MSE = Σ_{i=1}^{n} (σ_t − σ̂_t)² / n    (62)

E_m = Σ_{n=1}^{N} t_mn E_n = Σ_{n=1}^{N} |t_mn| exp(i φ_mn) |E_n| exp(i φ_n)    (63)
scattering paths. The phase values fulfil φ_n = −φ_mn (Bossy & Gigan, 2016). The light will focus perfectly on the selected location when this condition is met.
The procedure of using the GA for wavefront modelling involves five steps: initialisation, classification, reproduction, mutation, and iteration. First, a given number G of phase patterns is generated, each phase value being selected from a uniform pseudo-random distribution. Next, these patterns are scored using a specially designed fitness function. Following formula (64), the fitness function is described as the intensity of the light at a given location (Conkey et al., 2012), being A_n the amplitude of E_n. Phase patterns are ranked according to the results of the fitness function assessment; high scores lead to higher rankings. The second step is breeding. Offspring is produced by offspring = T × ma + (1 − T) × pa, with T being a random binary template and ma and pa being the two parent patterns. Both parents are chosen under the rule that higher-ranking parents are more likely to be adopted. After reproduction, certain sections of the offspring are mutated and changed by chance. The mutation rate R declines with growing generation number n to prevent over-mutation, as follows:
R = (R_0 − R_end) × exp(−n/λ) + R_end, being R_0, R_end, and λ the initial mutation rate, the final mutation rate, and the decay factor, respectively (Conkey et al., 2012). The progeny shall also be assessed by the fitness function. In each generation, a certain number of offspring will be replicated to substitute already existing patterns with inferior scores (Conkey et al., 2012). Then, the G phase patterns are all reranked based on their scores. The previous reproduction and mutation procedures are repeated several times until the final condition is fulfilled. Usually, the iteration ceases when a prespecified number of generations is reached or the result of the fitness function evaluation meets a certain threshold level.
The advantages of the GA are important. The GA manages to identify a suitable solution quickly. In addition, the GA is robust to noise, as it updates a large number of pixels rather than adjusting pixels one by one. However, GA results are significantly affected by many factors, such as the mutation and reproduction rates, the fitness function, and particularly the number of phase patterns employed in every generation, namely the size of the population; matching the right parameters is not trivial and needs time and experience. Furthermore, several schemes introduce starting patterns at random, potentially in the neighbourhood of one or more local minima. The GA is susceptible to getting locked into a local minimum, as it is a staged optimisation process in which descendants are produced by replicating and mutating existing patterns. This involves the risk that likely better solutions cannot be probed. Consequently, the use of a good initialisation is crucial to reach global optima (Bodenhofer, 2003).
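The decaying mutation-rate schedule described above can be written out directly; the numeric constants below follow common choices in the wavefront-shaping literature and are assumptions here, not the authors' values.

```python
# Decaying mutation rate for the wavefront-shaping GA:
# R = (R0 - Rend) * exp(-n / lam) + Rend, so early generations mutate many
# pixels (exploration) and late generations mutate only a few (refinement).
import math

def mutation_rate(n, r0=0.1, r_end=0.013, lam=200):
    return (r0 - r_end) * math.exp(-n / lam) + r_end
```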
Deep learning, which is a data-driven process, employs a separate strategy to estimate the phase pattern for focusing the light. The ill-posedness and nonlinearity of inverse scattering problems mean that direct inversion is impractical, which makes
I_m = |E_m|² = (1/N) |Σ_{n=1}^{N} t_mn A_n exp(i φ_n)|²    (64)
iterative algorithms with regularisation (Wei & Chen, 2018) necessary, as given in (65), being H the forward scattering model, y the recorded speckle intensity, W a transformation, and p the transformation coefficients, so that x = Wp is the desired reconstruction.
Nearly every state-of-the-art iterative algorithm for inverse scattering problems is a cascade of linear convolutions and pointwise nonlinear operations (McCann et al., 2017), resembling the structure of convolutional neural networks (CNNs). Examples of a representative implementation are the well-known iterative shrinkage-thresholding algorithms (ISTAs), founded on model blocks as follows: being L the Lipschitz constant, the iterative optimisation governed by formula (66) can be handled as a convolutional procedure with kernel (I − (1/L) W*H*HW) and bias (1/L) W*H*y, followed by a nonlinear activation function A_θ. So, CNNs can be regarded as inherently suitable for solving the inverse scattering problem (Li et al., 2018; Wei & Chen, 2018).
Conventional iterative algorithms are successful and warrant the use of CNNs as a way to focus light through scattering media. Deep CNNs (DCNNs) have been demonstrated to be powerful in solving inverse problems (Li et al., 2018; Lucas et al., 2018). Therefore, DCNNs can model the inverse scattering process H⁻¹ through supervised learning, disclosing the connection between the transmitted speckles and the incident optical phase patterns x. The inputs of the DCNN are the measured intensity distributions of the transmitted speckles captured with a camera, and the outputs are their incident phase patterns modulated by a spatial light modulator (SLM). Upon training, the DCNN will accurately map the speckles to the incident phase patterns, and therefore the DCNN can forecast the phase pattern needed to focus the light through a particular scattering medium.
Using the DL approach is easier, as the relation of speckle to incident phase patterns is acquired directly via training. However, its output is influenced strongly by the training samples. Accordingly, these samples represent the complete scattering process only when the training sample size is large enough, and only then can the DCNN forecast the best phase pattern for the light focus after training. However, the supervised learning method experiences difficulty in achieving the global optimum, because it is impractical to incorporate every conceivable phase pattern and speckle for training, and the sample distribution is hard to estimate as well. Despite this, the DCNN results can be considered a good initialisation for the GA. Concerning the GA, global convergence is best obtained under the constraint that the initial figure is close to the global optimum (Bodenhofer, 2003).
The suggested GeneNN contains two parts. The first part consists of gathering samples to train a DCNN. Following the training, an initial focus spot with the phase pattern predicted by the DCNN can be obtained. The second part is the adoption of
arg min_p ‖y − HWp‖₂² + λ‖p‖₁    (65)

p^{m+1} = A_θ [(1/L) W*H*y + (I − (1/L) W*H*HW) p^m]    (66)
the GA for optimising the focusing process. We present two methods for building the initial phase patterns, each employing the DCNN results. The first method, called GeneNNv1, adopts the pattern predicted by the DCNN as one of the starting patterns, whereas the other patterns are generated according to a uniform pseudo-random distribution (Conkey et al., 2012). The DCNN pattern will undoubtedly have the strongest ranking, thus having the highest chance of being selected for breeding. With the other method, called GeneNNv2, all starting patterns are built according to the pattern predicted by the DCNN but adding several extra phase patterns to this general basis, whereas all additional patterns are generated with a uniform pseudo-random distribution as well.
Adaptive Neuro-Fuzzy Inference System-Quantum Genetic Algorithm (ANFIS-QGA)
For this method, the likelihood amplitudes of every qubit are considered to be two genes, each chromosome comprises two gene chains, and every gene chain stands for an optimisation solution. The number of genes is fixed by the number of optimisation parameters. With each qubit of the optimal chromosome as a target, individuals are updated by quantum rotation gates and mutated by quantum NOT gates so that population diversity is increased. The mutation procedure is performed by the quantum NOT gates, and the crossover and selection operations are performed by the quantum rotation gates. The varying tendency of the fitness function at the search point is transferred into the design of the rotation angle calculation function. If the change ratio of the fitness function at a certain search point is larger than that at other points, the rotation angle is decreased appropriately. With this method, each chromosome can be made to advance quickly over flat regions of the search space to accelerate convergence, and to slow down over steep regions of the search to prevent losing the globally optimal solution (Cao & Shang, 2010).
The key steps of the suggested model are outlined in Algorithm1.
Algorithm1: Suggested model
Step 1: Data Pre-processing (Phase one) Step 2: Generate random population
Step 3: Calculate the value of Radii Step 4: Initialize Anfis model Step 5: Perform
optimization algorithm using DCQGA (Phase two). Step 6: Finding the best accu-
racy and performance. Step 7: Stop.
The application of the suggested model comprises two major phases.
Phase one: Data pre-processing: This stage consists of a procedure that transforms the raw inputs and outputs into an acceptable form before the training process. It is mainly used to decrease the dimensionality of the input data and to improve the generalisation performance (Bishop, 1995).
The original data are mapped to [0,1] using min–max normalisation:
X(i) = (x(i) − m) / (M − m)    (67)
for the time series data x, m = min{x}, M = max{x}. Four types of time series, namely the opening price, the closing price, the highest price, and the lowest price, are separately standardised in the experiment.
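Phase one's min–max normalisation (eq. 67), applied separately per price series, can be sketched as below; the price values are toy numbers for illustration.

```python
# Min-max normalisation to [0, 1] (eq. 67), applied independently to each
# of the four price series (open, close, high, low) before training.
import numpy as np

def min_max(x):
    x = np.asarray(x, dtype=float)
    m, M = x.min(), x.max()
    return (x - m) / (M - m)

prices = {"open": [101.2, 100.8, 101.5], "close": [101.0, 101.3, 100.9]}
scaled = {k: min_max(v) for k, v in prices.items()}
```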
Phase two: Optimization algorithm: The algorithm used in this step is inspired
by the double-stranded quantum genetic algorithm (Cao & Shang, 2010) and is
designed as below:
1. Produce the original angles to create the double chains from them, being y_{i,n} a random number between 0 and 2π, P_{i,1} named the cosine solution, and P_{i,2} called the sine solution.
2. Execute the solution space transform, where i = 1:m, j = 1:n, m is the number of qubits, and n is the population size.
3. Compute the fitness value, which is equivalent to (72).
4. Update the best composition.
5. For every chromosome, execute quantum rotation gates to update the corresponding qubits on the actual chromosome, being Δθ₀ the initial value of the rotation angle. Therefore, the rotation angle Δθ can be fixed by rules such as the following: if A ≠ 0, then sgn(Δθ) = −sgn(A); else if A = 0, the direction of Δθ is arbitrary,
P_{i,1} = (cos(y_{i,1}), cos(y_{i,2}), …, cos(y_{i,n}))    (68)

P_{i,2} = (sin(y_{i,1}), sin(y_{i,2}), …, sin(y_{i,n}))    (69)

X_{(i,j)c} = 0.5 [b_i (1 + α_{i,j}) + a_i (1 − α_{i,j})]    (70)

X_{(i,j)s} = 0.5 [b_i (1 + β_{i,j}) + a_i (1 − β_{i,j})]    (71)

1 / (1 + MSE)    (72)

Δθ_{i,j} = −sgn(A) Δθ₀ exp(−(|∇f(X_{i,j})| − ∇f_{i min}) / (∇f_{i max} − ∇f_{i min}))    (73)

A = | α₀ α₁ |
    | β₀ β₁ |    (74)

∇f(X_{i,j}) = (f(X_{i,p}) − f(X_{i,c})) / ((X_{i,j})_p − (X_{i,j})_c)    (75)
being X_{i,p} and X_{i,c} the ith vectors in the solution space of the parent colony and the child colony, and (X_{i,j})_p and (X_{i,j})_c the jth variables of the vectors X_{i,p} and X_{i,c}, correspondingly.
6. Mutate qubits based on the likelihood of mutation (Pm = 0.1), equivalent to 1 over the population size (pop size = 10); these figures are derived experimentally for acceptable speed and efficiency of the model.
The above-listed methods have been used to determine the optimal values of the optimisation parameters utilising the double-chained quantum genetic algorithm, and to compute the fitness function by the ANFIS model, employing subtractive clustering to generate the initial FIS for the ANFIS system.
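The double-chain encoding and solution-space transform (eqs. 68–72) can be sketched with NumPy; the parameter bounds and problem size below are illustrative assumptions.

```python
# Double-chained quantum chromosome: each qubit angle y yields a cosine gene
# and a sine gene (eqs. 68-69), and both chains are mapped from [-1, 1] into
# the search interval [a_i, b_i] (eqs. 70-71).
import numpy as np

rng = np.random.default_rng(2)
n = 4                                      # number of optimisation parameters
a, b = np.full(n, -5.0), np.full(n, 5.0)   # illustrative parameter bounds

y = rng.uniform(0, 2 * np.pi, n)           # random qubit angles
p_cos, p_sin = np.cos(y), np.sin(y)        # the two gene chains

x_cos = 0.5 * (b * (1 + p_cos) + a * (1 - p_cos))   # eq. 70
x_sin = 0.5 * (b * (1 + p_sin) + a * (1 - p_sin))   # eq. 71

def fitness(mse):                          # eq. 72
    return 1.0 / (1.0 + mse)
```

One chromosome thus carries two candidate solutions at once, which is what doubles the search coverage per individual.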
Convolutional Neural Networks‑Long Short‑Term Memory (CNN‑LSTM)
CNN is characterised by attending to the most salient features within its field of view; therefore, it is extensively applied in engineering. LSTM has the characteristic of expanding along the time sequence, and it is widely used on time series. Following the characteristics of CNN and LSTM, a value forecasting model based on CNN-LSTM is constructed.
CNN was developed as a network model by Lecun et al. in 1998. CNN is a type of feed-forward neural network which performs well in both image and natural language processing (Kim & Kim, 2019). CNN can successfully be implemented in time-series forecasting. CNN local sensing and weight sharing can reduce the number of parameters to a large extent, thereby increasing the learning efficiency of the model (Qin et al., 2018). CNN consists of two main parts: the convolution layer and the pooling layer. Each convolution layer holds a multiplicity of convolution kernels, and its formula of calculation is given in Eq. (78). Following the convolution operation of the convolution layer, features are extracted from the data; however, the dimensionality of the extracted features becomes high. Therefore, to resolve this issue and decrease the training cost of the network, a pooling layer is inserted directly after the convolution layer to reduce the dimension of the features:
∇f_{i max} = max{ (f(X_{1,p}) − f(X_{1,c})) / ((X_{1,j})_p − (X_{1,j})_c), (f(X_{2,p}) − f(X_{2,c})) / ((X_{2,j})_p − (X_{2,j})_c), …, (f(X_{n,p}) − f(X_{n,c})) / ((X_{n,j})_p − (X_{n,j})_c) }    (76)

∇f_{i min} = min{ (f(X_{1,p}) − f(X_{1,c})) / ((X_{1,j})_p − (X_{1,j})_c), (f(X_{2,p}) − f(X_{2,c})) / ((X_{2,j})_p − (X_{2,j})_c), …, (f(X_{n,p}) − f(X_{n,c})) / ((X_{n,j})_p − (X_{n,j})_c) }    (77)

l_t = tanh(x_t ∗ k_t + b_t)    (78)
being l_t the output value after convolution, tanh the activation function, x_t the input vector, k_t the weight of the convolution kernel, and b_t the bias of the convolution kernel.
LSTM is aimed at overcoming the long-standing exploding and vanishing gradient problems in Recurrent Neural Networks (RNNs) (Ta et al., 2020). It has been largely employed in speech recognition, sentiment analysis, and text processing, since it has a unique memory and can make relatively precise predictions (Gupta & Jalal, 2020). The LSTM contains three parts: the forget gate, the input gate, and the output gate.
The computational procedure of the LSTM follows:
1. The output value at the last time and the input value at the current time are entered into the forget gate, and the output value of the forget gate is determined after calculation, according to the equation below, being the value range of f_t (0,1); W_f the weight of the forget gate, b_f the bias of the forget gate, x_t the input value at the current time, and h_{t−1} the output value at the last time.
2. The output value at the last time and the input value at the current time are entered into the input gate, and the output value and the candidate state of the input gate are derived after the calculation, illustrated by the equations below, being the value range of i_t (0,1); W_i the weight of the input gate, b_i the bias of the input gate, W_c the weight of the candidate input gate, and b_c the bias of the candidate input gate.
3. Update the current cell state as follows, where the value range of C_t is (0,1).
4. The output h_{t−1} and the input x_t at time t are taken as the input values of the output gate, and the output o_t of the gate is determined by:
being the value range of o_t (0,1), W_o the weight of the output gate, and b_o the bias of the output gate.
5. The LSTM output value is achieved by computing the output of the output gate
and the cell status, based on the formula below:
f_t = σ(W_f · [h_{t−1}, x_t] + b_f)    (79)

i_t = σ(W_i · [h_{t−1}, x_t] + b_i)    (80)

C̃_t = tanh(W_c · [h_{t−1}, x_t] + b_c)    (81)

C_t = f_t ⊙ C_{t−1} + i_t ⊙ C̃_t    (82)

o_t = σ(W_o · [h_{t−1}, x_t] + b_o)    (83)
The principal stages of the CNN-LSTM training and prediction procedure are
listed below:
1. Enter the necessary data for the training of the CNN-LSTM.
2. Since the range of the input data is large, for better training of the model the z-score standardisation approach is adopted to standardise the input data, as given in (85), being y_i the standardised value, x_i the input data, x̄ the mean of the input data, and s the standard deviation of the input data.
3. Initialise the weights and biases of each layer of the CNN-LSTM.
4. The input data pass successively through the convolution layer and the pooling layer in the CNN module; the features of the input data are extracted and the output value is obtained.
5. The CNN layer output data are computed via the LSTM layer, and the output value is given.
6. The LSTM layer output value is fed into the fully connected layer to obtain the output value.
7. The output value computed via the output layer is checked against the actual
value of this dataset, resulting in the respective error.
8. The completion conditions are that a specified number of cycles is reached, that the weight change is below a specified limit, or that the forecast error rate is below a specified limit. If one of the end conditions is achieved, the training is complete, the entire CNN-LSTM network is updated, and the procedure goes to step 10; if not, it proceeds to step 9.
9. Propagate the computed error in reverse, updating the weight and bias of every layer, and proceed to step 4 to further train the network.
10. Save the model trained for the forecast.
11. Enter input data needed for the forecast.
12. The input data are normalised by Eq. (85).
13. Input the calibrated data into the CNN-LSTM trained model, and subsequently
obtain an output value for it.
14. The output value provided by the CNN-LSTM model is a normalised value, which must be restored to the original scale. As given by formula (86), x_i is the restored value, y_i is the CNN-LSTM output value, s is the standard deviation of the input data, and x̄ is the mean value of the input data.
15. Issue the results restored for completing the prediction procedure.
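One LSTM step (eqs. 79–84) can be written out with NumPy to make the gate arithmetic concrete; the layer sizes and random weights below are illustrative, not a trained model.

```python
# A single LSTM cell step implementing eqs. 79-84.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    z = np.concatenate([h_prev, x_t])          # [h_{t-1}, x_t]
    f = sigmoid(W["f"] @ z + b["f"])           # forget gate, eq. 79
    i = sigmoid(W["i"] @ z + b["i"])           # input gate, eq. 80
    c_tilde = np.tanh(W["c"] @ z + b["c"])     # candidate state, eq. 81
    c = f * c_prev + i * c_tilde               # cell update, eq. 82
    o = sigmoid(W["o"] @ z + b["o"])           # output gate, eq. 83
    h = o * np.tanh(c)                         # hidden output, eq. 84
    return h, c

rng = np.random.default_rng(3)
d_in, d_h = 4, 3
W = {k: rng.normal(scale=0.1, size=(d_h, d_h + d_in)) for k in "fico"}
b = {k: np.zeros(d_h) for k in "fico"}
h, c = lstm_step(rng.normal(size=d_in), np.zeros(d_h), np.zeros(d_h), W, b)
```

Because o lies in (0,1) and tanh in (−1,1), the hidden output h is always bounded in magnitude by 1.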
h_t = o_t ⊙ tanh(C_t)    (84)

y_i = (x_i − x̄) / s    (85)

x_i = y_i · s + x̄    (86)
Gated Recurrent Unit-Convolutional Neural Networks (GRU-CNN)
RNN is a type of artificial neural network suitable for processing and analysing
temporal data sequences, in contrast to classical neural networks, which rely on
the weighting connection between the layers. The RNN implements the hidden
layers to retain the information of the prior time, and the output is affected by the
present states and the memories of the prior time. The structure of the unrolled
RNN is presented as follows:
being a_t the output of one single hidden layer at time t, and ω_aa, ω_ax, and ω_ay the hidden layer weight matrix, the input weight matrix, and the output weight matrix, respectively. b_a and b_y symbol the bias vectors of one single hidden layer and of the output, respectively, and g_1 and g_2 represent the nonlinear activation functions.
The RNN works properly if the output is near its related inputs; nevertheless, with a long time interval and a great number of weights, the input will not have much effect on the output owing to the vanishing gradient problem. To resolve the gradient vanishing problem and the overly simple structure of the RNN hidden layer, a particular kind of RNN called the GRU is adopted.
The GRU is a variation of the LSTM with a gated RNN structure; in comparison to the LSTM, there are two gates (update gate and reset gate) in the GRU and three gates (forget gate, input gate, and output gate) in the LSTM (Alaminos et al., 2022a; Gao et al., 2019).
The GRU equations are:
being ω_u, ω_r, and ω_c the training weight matrices of the update gate, the reset gate, and the candidate activation c̃_t, respectively, and b_u, b_r, and b_c the bias vectors.
The Establishment of the CNN Module
CNN is often employed in visual image and video recognition and in text categorisation. To keep the spatial information of the data registered by sensors and smart appliances in the power system, the spatio-temporal matrix was suggested. Its data are arranged by the sensors' location and the time sequence. The spatio-temporal matrix is given in formula (89).
(87) $a_t = g_1\left(\omega_{aa} a_{t-1} + \omega_{ax} x_t + b_a\right)$, $\quad y_t = g_2\left(\omega_{ay} a_t + b_y\right)$

(88) $\Gamma_u = \sigma\left(\omega_u \left[c_{t-1}, x_t\right] + b_u\right)$,
$\Gamma_r = \sigma\left(\omega_r \left[c_{t-1}, x_t\right] + b_r\right)$,
$\tilde{c}_t = \tanh\left(\omega_c \left[\Gamma_r \odot c_{t-1}, x_t\right] + b_c\right)$,
$c_t = \left(1 - \Gamma_u\right) \odot c_{t-1} + \Gamma_u \odot \tilde{c}_t$
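One GRU update following formula (88) can be sketched in NumPy (a toy illustration under our own naming assumptions, not the authors' implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(c_prev, x_t, w_u, w_r, w_c, b_u, b_r, b_c):
    """One GRU update per formula (88); each weight acts on [c_{t-1}, x_t]."""
    concat = np.concatenate([c_prev, x_t])
    gamma_u = sigmoid(w_u @ concat + b_u)    # update gate
    gamma_r = sigmoid(w_r @ concat + b_r)    # reset gate
    candidate = np.tanh(w_c @ np.concatenate([gamma_r * c_prev, x_t]) + b_c)
    # Convex mix of the previous state and the candidate activation.
    return (1 - gamma_u) * c_prev + gamma_u * candidate

rng = np.random.default_rng(0)
h, d = 3, 2                                  # hidden size, input size
c = np.zeros(h)                              # initial cell state
x = rng.normal(size=d)
w_u, w_r, w_c = (rng.normal(size=(h, h + d)) for _ in range(3))
b_u = b_r = b_c = np.zeros(h)
c_next = gru_step(c, x, w_u, w_r, w_c, b_u, b_r, b_c)
```

Since the new state is a gate-weighted mix of the old state and a tanh candidate, its entries stay inside (-1, 1) here.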
being $k$ the $k$-th smart sensor, $n$ the $n$-th time sequence, and $X_k(n)$ the data recorded by the $k$-th smart sensor at time $n$. To extract the charging characteristic from the spatio-temporal matrix, the CNN is used to process it.
First, numerous two-dimensional spatio-temporal matrices are stacked into three-dimensional blocks, to which a convolution operation is applied. The convolution operation aims to obtain highly abstract features; its results are then passed to a pooling operation. The pooling operation does not change the depth of the input matrix, though it reduces the size of the matrices as well as the number of nodes, so that the parameters of the complete neural network are reduced. After repeated convolution and pooling operations, the highly abstract features are extracted and flattened into a one-dimensional vector, so that they can be linked to the fully connected layer. Next, the weights and bias parameters within the fully connected layer are computed iteratively. Lastly, the forecasting outcomes are provided by the output of the activation function.
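The convolution, pooling, and flattening steps described above can be sketched with a minimal NumPy example (our own toy illustration, not the authors' implementation):

```python
import numpy as np

def conv2d_valid(mat, kernel):
    """'Valid' 2-D convolution (no padding) extracting local features."""
    kh, kw = kernel.shape
    rows, cols = mat.shape[0] - kh + 1, mat.shape[1] - kw + 1
    out = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            out[r, c] = np.sum(mat[r:r + kh, c:c + kw] * kernel)
    return out

def max_pool(mat, size=2):
    """Non-overlapping max pooling: shrinks the matrix, keeps its depth."""
    rows, cols = mat.shape[0] // size, mat.shape[1] // size
    return mat[:rows * size, :cols * size].reshape(
        rows, size, cols, size).max(axis=(1, 3))

x = np.arange(36, dtype=float).reshape(6, 6)        # toy spatio-temporal matrix
features = conv2d_valid(x, np.ones((3, 3)) / 9.0)   # 4x4 feature map
pooled = max_pool(features)                         # 2x2 after pooling
flat = pooled.ravel()                               # vector for the dense layer
```

Note how pooling halves each spatial dimension while the flattening step produces the one-dimensional vector handed to the fully connected layer.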
The GRU‑CNN Hybrid Neural Networks
Our proposed GRU-CNN hybrid neural network framework is composed of a GRU module and a CNN module. The inputs are the time-sequence data and the spatio-temporal matrices recorded from the power system; the outputs are forecasts of the future load value. The CNN module is well suited to processing two-dimensional data, such as spatio-temporal matrices and images. It employs local connections and weight sharing to extract local characteristics directly from the spatio-temporal matrices, obtaining an efficient representation through the convolution and pooling layers. The CNN module structure contains two convolution layers and one flattening operation, with each convolution layer comprising one convolution operation and one pooling operation. After the second pooling operation, the high-dimensional data is flattened into one-dimensional data, and the outputs of the CNN module are linked to the fully connected layer. The purpose of the GRU module, in turn, is to capture long-term dependencies: it gathers useful information in the historical data over a long period via the memory cell, while useless data is discarded by the forget gate. The GRU module's inputs are the temporal-sequence data; the module holds many gated recurrent units, and the outputs of all these units are linked to the fully connected layer. Finally, the load-forecasting outcome is obtained as the average value of all the neurons in the gated layers.
(89) $x = \begin{pmatrix} X_1(1) & \cdots & X_1(n) \\ \vdots & \ddots & \vdots \\ X_k(1) & \cdots & X_k(n) \end{pmatrix}$
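The fusion of the two modules can be illustrated with a toy sketch (hypothetical names and shapes of our own choosing; the paper gives no code): concatenate the flattened CNN features with the GRU outputs, pass them through one dense layer, and average the resulting neurons.

```python
import numpy as np

def gru_cnn_forecast(cnn_flat, gru_outputs, w, b):
    # Hypothetical fusion step: join both modules' outputs, apply one
    # dense tanh layer, and average the neurons to obtain the forecast.
    joint = np.concatenate([cnn_flat, gru_outputs])
    hidden = np.tanh(w @ joint + b)
    return hidden.mean()

rng = np.random.default_rng(1)
cnn_flat = rng.normal(size=4)     # flattened CNN-module output
gru_out = rng.normal(size=3)      # outputs of the gated recurrent units
w, b = rng.normal(size=(5, 7)), np.zeros(5)
forecast = gru_cnn_forecast(cnn_flat, gru_out, w, b)
```

Because the dense layer uses tanh, the averaged forecast here lies in (-1, 1); a real load forecast would rescale it back to physical units.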
Quantum Recurrent Neural Network (QRNN)
A quantum system on $n$ qubits lives in the $n$-fold tensor-product Hilbert space $\mathcal{H} = (\mathbb{C}^2)^{\otimes n}$, with resulting dimension $2^n$. A quantum state is a unit vector $\psi \in \mathcal{H}$, commonly written in quantum computing in bra-ket notation $|\psi\rangle \in \mathcal{H}$; its conjugate transpose is $\langle\psi| = |\psi\rangle^\dagger$; the inner product $\langle\psi|\psi\rangle = \|\psi\|_2^2$ then gives the square of the 2-norm of $\psi$, and $|\psi\rangle\langle\psi|$ denominates the outer product, yielding a tensor of rank 2. The computational basis states correspond to $|0\rangle = (1, 0)$ and $|1\rangle = (0, 1)$, and composite basis states are, for example, given by $|01\rangle = |0\rangle \otimes |1\rangle = (0, 1, 0, 0)$.
Thus a quantum gate becomes a unitary operation $\mathbb{U}$ on $\mathcal{H}$; where the operation acts nontrivially only on a subset $\mathbb{S} \subseteq [n]$ of qubits, $\mathbb{U} \in \mathrm{SU}(2^{|\mathbb{S}|})$; to operate on $\mathcal{H}$ we extend $\mathbb{U}$ to act as the identity on the remainder of the space, i.e. $\mathbb{U}_{\mathbb{S}} \otimes \mathbb{1}_{[n] \setminus \mathbb{S}}$. This extension is usually left implicit and is indicated by where the gate sits in a quantum circuit: the first gate $R(\theta)$ represents a single-qubit unitary that operates on the second qubit from below and depends on the parameter $\theta$. The dotted line extending from the gate designates a "controlled" operation; if the control acts on a single qubit, it denominates the block-diagonal unitary $|0\rangle\langle 0| \otimes \mathbb{1} + |1\rangle\langle 1| \otimes R(\theta) = cR(\theta)$, which stands for "if the control qubit is in state $|1\rangle$, apply $R(\theta)$". Gate sequences are computed as matrix products.
The projective measurement of a single qubit is given by a Hermitian $2 \times 2$ projector, such as $M = |1\rangle\langle 1| = \mathrm{diag}(0, 1)$; the complementary outcome is then $\bar{M} = \mathbb{1} - M$. Measurements are denoted by meters in the circuit. Given a quantum state $|\psi\rangle$, the post-measurement state is $M|\psi\rangle/\sqrt{p}$ with probability $p = \|M|\psi\rangle\|^2$. This is also the post-selection probability of ensuring a measurement outcome $M$; this probability may be amplified close to 1 using $\sim 1/\sqrt{p}$ rounds of amplitude amplification (Alaminos et al., 2022b; Grover, 2005).
The quantum recurrent neural networks in this proposal are all runnable on classical hardware, where the "hidden state" on $n$ qubits is expressed by an array of size $2^n$, and the set of parameters is given by the collection of all parameterised quantum gates in the process, leading to matrices with parameterised entries. To run a QRNN classically, we employ a series of matrix-vector multiplications for the gates, with subsequent renormalisation of the state to norm 1 for the measurement and post-selection operations. Running on quantum hardware, the matrix multiplications are "free", and the hidden state, which classically requires memory exponential in $n$, may be contained in $n$ qubits only.
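The classical simulation just described (gates as matrix-vector products, measurement as projection followed by renormalisation) can be sketched as follows (our own minimal two-qubit example):

```python
import numpy as np

def apply_gate(state, gate, qubit, n):
    """Apply a single-qubit gate to `qubit` of an n-qubit state vector."""
    op = np.eye(1)
    for q in range(n):                      # build gate (x) identity factors
        op = np.kron(op, gate if q == qubit else np.eye(2))
    return op @ state

def measure_and_postselect(state, qubit, n, outcome=0):
    """Project `qubit` onto |outcome> and renormalise to norm 1."""
    proj = np.diag([1.0, 0.0]) if outcome == 0 else np.diag([0.0, 1.0])
    projected = apply_gate(state, proj, qubit, n)
    p = np.vdot(projected, projected).real  # post-selection probability
    return projected / np.sqrt(p), p

n = 2
state = np.zeros(2 ** n); state[0] = 1.0             # |00>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)         # Hadamard on qubit 0
state = apply_gate(state, H, 0, n)                   # (|00> + |10>)/sqrt(2)
state, p = measure_and_postselect(state, 0, n, outcome=0)
```

Post-selecting outcome 0 here succeeds with probability 1/2 and collapses the register back to $|00\rangle$.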
Parametrized Quantum Gates
Quantum VQE circuits are very compact, meaning that they alternate single-qubit parameterised gates with entangling gates, such as controlled-NOT operations. Hence, they offer the advantage of packing many parameters into a rather dense circuit. Moreover, although these circuits are known to form a universal family, their high entangling-gate density, as well as the missing correlation between the parameters, results in heavily over-parameterised models which are difficult to train on sorting tasks for inputs of more than a few bits (Benedetti et al., 2019).
We build a highly structured parameterised quantum circuit in which a few parameters are reused again and again. It is mainly based on a new type of quantum neuron that rotates its target lane according to a non-linear activation function applied to polynomials of its binary inputs. The cell consists of an input stage that, at every step, writes the current input into the state of the cell; multiple work steps follow, which compute on the input and the cell state; plus a concluding output stage that generates a probability density over possible forecasts. These QRNN cells are applied iteratively over the input sequence in a recurrent model, much like traditional RNNs.
In training, we apply quantum amplitude amplification (Guerreschi, 2019) on the output lanes to make sure we measure the correct token of the training data at every step. Although measurements are usually non-unitary operations, the amplitude amplification step ensures that the measurements during training remain as close to unitary as we want them to be.
A Higher‑Degree Quantum Neuron
The power of classical neural networks arises from applying non-linear activation functions to the affine transformations performed in the layers of the network. In contrast, owing to the nature of quantum mechanics, any quantum circuit is necessarily a linear operation.
Nevertheless, nonlinear behaviour does occur in quantum mechanics: a simple example is the single-qubit gate $R(\theta) = \exp(-i\theta Y)$ for the Pauli matrix $Y$ (Nielsen & Chuang, 2001), acting as in formula (90), namely as a rotation within the two-dimensional space spanned by the computational basis vectors of a single qubit, $\{|0\rangle, |1\rangle\}$. While the rotation matrix itself remains linear, the state amplitudes, $\cos\theta$ and $\sin\theta$, depend non-linearly on the angle $\theta$. Lifting the rotation to a controlled operation $cR(i, \theta_i)$ conditioned on the $i$-th qubit of a state $|x\rangle$ for $x \in \{0, 1\}^n$, we obtain the map
Hence, this corresponds to a rotation by an affine transformation of the basis vector $|x\rangle$, with $x = \{x_1, \ldots, x_n\} \in \{0, 1\}^n$, parameterised by the vector $\theta = (\theta_0, \theta_1, \ldots, \theta_n)$. The process extends linearly to superpositions of basis and target states, and owing to the form of $R(\theta)$, all amplitude changes just introduced are real-valued.
(90) $R(\theta) = \exp\left(-i\theta \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}\right) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$
(91) $R\left(\theta_0\right) cR\left(1, \theta_1\right) \cdots cR\left(n, \theta_n\right) |x\rangle |0\rangle = |x\rangle \left(\cos(\eta) |0\rangle + \sin(\eta) |1\rangle\right)$ for $\eta = \theta_0 + \sum_{i=1}^{n} \theta_i x_i$
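For a basis-state input, the map of formula (91) can be checked numerically. The sketch below (our own simplification to the ancilla register) composes the rotations that fire for each set bit of $x$:

```python
import numpy as np

def ry(theta):
    """Rotation of formula (90): |0> -> cos(theta)|0> + sin(theta)|1>."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def neuron_ancilla(x_bits, thetas):
    """Ancilla after R(theta_0) and the controlled rotations cR(i, theta_i)
    act on the basis input |x>|0>, per formula (91). For a basis state,
    cR(i, theta_i) fires exactly when x_i = 1, so the rotations compose
    into a single rotation by eta = theta_0 + sum_i theta_i * x_i."""
    ancilla = ry(thetas[0]) @ np.array([1.0, 0.0])
    for bit, theta in zip(x_bits, thetas[1:]):
        if bit:
            ancilla = ry(theta) @ ancilla
    return ancilla

amps = neuron_ancilla([1, 0, 1], [0.1, 0.2, 0.3, 0.4])
eta = 0.1 + 0.2 + 0.4   # theta_0 + theta_1*x_1 + theta_3*x_3
```

Since planar rotations commute and add their angles, the ancilla amplitudes equal $(\cos\eta, \sin\eta)$ exactly.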
This transformation of the amplitudes through a controlled operation is already non-linear; however, a sine function is not especially steep, and it also lacks a sufficiently "flat" region where the activation stays constant, as is the case for a rectified linear unit. Cao et al. (2017) suggested an approach to implement a map on a set of qubits that produces amplitudes exhibiting these steeper slopes and plateaus, in a manner very similar to a sigmoidal activation function. The activation has an order parameter $ord \ge 1$ governing the steepness, with the circuit producing the activation amplitude. This quantum neuron rotates pure states by an angle $f(\theta) = \arctan\left(\tan(\theta)^{2^{ord}}\right)$, where $ord \ge 1$ is the order of the neuron. Assuming an affine transformation $\eta$ of the input bitstring $x_i$ as shown in formula (93), this rotation is translated into the amplitudes of formula (92), which arise by normalising the transform $|0\rangle \mapsto \cos(\theta)^{2^{ord}} |0\rangle + \sin(\theta)^{2^{ord}} |1\rangle$, as can be seen directly. For $ord = 1$, the circuit is shown on the left; for $ord = 2$, on the right. Higher orders can be built recursively.
This quantum neuron is a so-called repeat-until-success (RUS) circuit, meaning that a measured ancilla signals whether the circuit has been performed correctly. If the result is zero, the neuron has been applied. When the result is one, a correction circuit returns the state to its original configuration. Beginning with a pure state (e.g. $|x\rangle$ for $x \in \{0, 1\}^2$) and repeating every time a 1 is measured, an arbitrarily high probability of success is reached.
Alas, for a control in superposition, such as a state $(|x\rangle + |y\rangle)/\sqrt{2}$ for two distinct bit-strings $x \ne y$ of length $n$, this does not work: the amplitudes within the superposition would then depend on the measurement history. A technique called fixed-point oblivious amplitude amplification (Tacchino et al., 2019) essentially post-selects on the measurement outcome 0 while preserving the unitarity of the operation to arbitrary precision. There is the additional cost of multiple rounds of these quantum circuits, whose number depends on the probability of a zero being measured in the first place; this obviously depends on the parameters of the neuron, $\theta$, and on the given input state. We stress that, by selecting sufficiently large individual post-selection probabilities, there is no exponential reduction in the overall probability of success across the number of quantum neurons employed.
In this paper we extend this quantum neuron by increasing the number of control terms. More precisely, $\eta$ as provided in formula (91) is an affine transform of the boolean vector $x = \{x_1, \ldots, x_n\}$ for $x_i \in \{0, 1\}$. When we introduce multi-controlled gates, each with its own parameterised rotation labelled by a multi-index $\theta_I$ that varies depending on the qubits $i \in I$ on which the gate conditions, we gain the option of incorporating higher-degree polynomials, i.e.
(92) $\cos(f(\eta)) = \dfrac{1}{\sqrt{1 + \tan(\eta)^{2 \cdot 2^{ord}}}}$ and $\sin(f(\eta)) = \dfrac{\tan(\eta)^{2^{ord}}}{\sqrt{1 + \tan(\eta)^{2 \cdot 2^{ord}}}}$
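The closed forms of formula (92) can be verified numerically against the definition $f(\eta) = \arctan(\tan(\eta)^{2^{ord}})$ (a small sketch of our own):

```python
import numpy as np

def activation_angle(eta, order=1):
    """f(eta) = arctan(tan(eta)^(2^order)): the neuron's sigmoid-like
    activation; a larger `order` yields steeper slopes and flatter plateaus."""
    return np.arctan(np.tan(eta) ** (2 ** order))

# Check the closed-form amplitudes of formula (92) at a sample point.
eta, order = 0.6, 2
f = activation_angle(eta, order)
denom = np.sqrt(1.0 + np.tan(eta) ** (2 * 2 ** order))
cos_f = 1.0 / denom
sin_f = np.tan(eta) ** (2 ** order) / denom
```

The identity holds because $\tan f(\eta) = \tan(\eta)^{2^{ord}}$, so squaring it in $1 + \tan^2 f$ produces the exponent $2 \cdot 2^{ord}$.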
being $d$ the degree of the neuron; for $d = 2$ and $n = 4$, an example of a controlled rotation gives rise to this higher-order transformation $\eta$ on the bit string $x_i$. Thus, higher-degree boolean logic operations can be encoded directly within a single conditional rotation: an AND operation between two bits $x_1$ and $x_2$ is simply $x_1 x_2$.
QRNN Cell and Sequence-to-Sequence Model
The quantum neuron identified above becomes the central component in building our quantum recurrent neural network cell. As with conventional RNNs and LSTMs, the cell is applied successively to the input submitted to the network. In particular, the cell consists of input and output lanes that are reset after each step, plus a cell-internal state that is passed on to the next iteration of the network.
To implement the constructed QRNN cell, we apply it iteratively to a sequence of input words $in_1, in_2, \ldots, in_L$. The outgoing lanes $out_i$ describe a discrete probability distribution $p_i$ over the class labels. This distribution can be fed into an assigned loss function, such as cross-entropy or CTC loss.
Appendix2: Results forthe‘All Trades’ Scenario
This supplement reports the results of the main forecasting techniques implemented
to "all trades" rather than just those produced in the continuous trading situation.
See Tables10, 11, 12, 13, 14, 15.
(93) $\eta = \theta_0 + \sum_{i=1}^{n} \theta_i x_i + \sum_{i=1}^{n} \sum_{j=1}^{n} \theta_{ij} x_i x_j + \cdots = \sum_{\substack{I \subseteq [n] \\ |I| \le d}} \theta_I \prod_{i \in I} x_i$
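Formula (93) can be evaluated classically as a sum over index subsets (a minimal sketch with our own helper names):

```python
from itertools import combinations

def eta_poly(x_bits, theta, degree):
    """Formula (93): eta = sum over subsets I of [n] with |I| <= degree of
    theta_I * prod_{i in I} x_i. `theta` maps index tuples to parameters;
    the empty tuple () holds theta_0."""
    n = len(x_bits)
    total = 0.0
    for d in range(degree + 1):
        for subset in combinations(range(n), d):
            coeff = theta.get(subset, 0.0)
            prod = 1
            for i in subset:
                prod *= x_bits[i]       # product of the selected bits
            total += coeff * prod
    return total

theta = {(): 0.5, (0,): 0.2, (1,): 0.3, (0, 1): 0.7}   # degree-2 example
eta = eta_poly([1, 1, 0], theta, degree=2)             # 0.5 + 0.2 + 0.3 + 0.7
```

For boolean inputs each monomial $\prod_{i \in I} x_i$ is simply an AND over the bits in $I$, matching the remark on encoding boolean logic in a single conditional rotation.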
Table 10 Sign prediction ratio (10min) for all trades
Qfuzzy AdaBoost-GA SVM-GA DLNN-GA QGA ANFIS-QGA DRCNN CNN-LSTM GRU-CNN QRNN
Germany 0.810 0.842 0.866 0.907 0.893 0.872 0.903 0.860 0.896 0.885
United States 0.804 0.840 0.860 0.903 0.887 0.866 0.896 0.859 0.888 0.880
Italy 0.800 0.839 0.854 0.897 0.879 0.862 0.889 0.854 0.884 0.880
Spain 0.794 0.837 0.851 0.889 0.873 0.858 0.887 0.848 0.880 0.876
Japan 0.791 0.836 0.846 0.886 0.868 0.850 0.883 0.841 0.879 0.869
Turkey 0.786 0.832 0.839 0.880 0.863 0.844 0.879 0.839 0.878 0.864
Mexico 0.783 0.829 0.839 0.875 0.859 0.839 0.873 0.836 0.870 0.860
Indonesia 0.775 0.827 0.835 0.870 0.851 0.838 0.869 0.836 0.864 0.855
Nigeria 0.771 0.826 0.829 0.870 0.849 0.837 0.866 0.831 0.862 0.850
Poland 0.766 0.818 0.823 0.863 0.845 0.830 0.863 0.827 0.853 0.848
Walmart 0.764 0.818 0.817 0.862 0.838 0.826 0.863 0.827 0.846 0.841
Johnson & Johnson 0.763 0.803 0.808 0.861 0.823 0.824 0.846 0.821 0.844 0.830
Verizon 0.771 0.808 0.815 0.863 0.832 0.824 0.851 0.831 0.857 0.845
Unilever PLC 0.786 0.825 0.831 0.863 0.846 0.841 0.866 0.845 0.873 0.849
Rio Tinto PLC 0.793 0.836 0.841 0.871 0.852 0.853 0.867 0.846 0.888 0.851
Air Liquide 0.796 0.842 0.848 0.876 0.858 0.866 0.883 0.847 0.898 0.860
Ambev 0.804 0.859 0.856 0.885 0.865 0.873 0.890 0.861 0.901 0.847
Cemex 0.802 0.848 0.833 0.863 0.850 0.853 0.878 0.852 0.890 0.844
Turkish Airlines 0.783 0.838 0.825 0.852 0.830 0.839 0.855 0.849 0.875 0.840
KCE Electronics 0.768 0.820 0.806 0.850 0.806 0.828 0.840 0.842 0.868 0.823
Telekomunikacja Polska 0.761 0.793 0.802 0.839 0.781 0.817 0.836 0.823 0.850 0.808
Caesars Resort Collection LLC 0.760 0.790 0.795 0.835 0.755 0.812 0.826 0.822 0.821 0.804
Asurion LLC 0.760 0.785 0.771 0.809 0.735 0.789 0.813 0.817 0.812 0.793
Intelsat Jackson Holdings 0.746 0.764 0.769 0.801 0.718 0.769 0.800 0.801 0.804 0.794
Athenahealth Group 0.730 0.756 0.746 0.794 0.696 0.767 0.781 0.782 0.788 0.769
Great Outdoors Group LLC 0.704 0.744 0.737 0.774 0.681 0.752 0.753 0.762 0.785 0.767
Petróleos Mexicanos 0.687 0.734 0.729 0.765 0.659 0.731 0.741 0.734 0.776 0.752
Petrobras 0.680 0.728 0.717 0.760 0.635 0.729 0.735 0.728 0.758 0.748
Sands China Ltd 0.674 0.715 0.695 0.738 0.614 0.722 0.712 0.700 0.735 0.746
Indonesia Asahan Alumini 0.656 0.693 0.694 0.714 0.597 0.718 0.690 0.678 0.726 0.723
Longfor Properties 0.635 0.683 0.693 0.709 0.590 0.708 0.680 0.663 0.720 0.720
Table 11 Ideal profit ratio (10min) for all trades
Qfuzzy AdaBoost-GA SVM-GA DLNN-GA QGA ANFIS-QGA DRCNN CNN-LSTM GRU-CNN QRNN
Germany 0.0091 0.0106 0.0132 0.0139 0.0152 0.0118 0.0086 0.0097 0.0128 0.0170
United States 0.0091 0.0121 0.0134 0.0153 0.0157 0.0133 0.0102 0.0100 0.0145 0.0187
Italy 0.0099 0.0123 0.0152 0.0162 0.0164 0.0149 0.0116 0.0116 0.0159 0.0196
Spain 0.0109 0.0129 0.0159 0.0164 0.0184 0.0166 0.0136 0.0132 0.0168 0.0207
Japan 0.0116 0.0144 0.0165 0.0166 0.0186 0.0163 0.0143 0.0133 0.0166 0.0213
Turkey 0.0133 0.0151 0.0185 0.0178 0.0208 0.0175 0.0150 0.0146 0.0189 0.0220
Mexico 0.0129 0.0137 0.0183 0.0176 0.0201 0.0163 0.0141 0.0141 0.0174 0.0202
Indonesia 0.0129 0.0126 0.0176 0.0176 0.0191 0.0155 0.0132 0.0139 0.0158 0.0190
Nigeria 0.0115 0.0118 0.0175 0.0156 0.0174 0.0153 0.0120 0.0134 0.0157 0.0182
Poland 0.0109 0.0113 0.0169 0.0149 0.0156 0.0135 0.0118 0.0128 0.0149 0.0172
Walmart 0.0098 0.0093 0.0152 0.0134 0.0150 0.0122 0.0094 0.0121 0.0135 0.0157
Johnson & Johnson 0.0088 0.0078 0.0145 0.0115 0.0141 0.0117 0.0086 0.0102 0.0126 0.0153
Verizon 0.0082 0.0060 0.0134 0.0100 0.0130 0.0110 0.0075 0.0086 0.0110 0.0147
Unilever PLC 0.0080 0.0056 0.0130 0.0084 0.0112 0.0100 0.0053 0.0083 0.0108 0.0146
Rio Tinto PLC 0.0080 0.0043 0.0129 0.0078 0.0096 0.0091 0.0040 0.0081 0.0100 0.0137
Air Liquide 0.0100 0.0045 0.0147 0.0097 0.0101 0.0099 0.0048 0.0096 0.0107 0.0141
Ambev 0.0119 0.0063 0.0162 0.0107 0.0112 0.0111 0.0050 0.0112 0.0114 0.0147
Cemex 0.0131 0.0066 0.0174 0.0116 0.0122 0.0119 0.0053 0.0127 0.0130 0.0157
Turkish Airlines 0.0136 0.0081 0.0182 0.0134 0.0126 0.0125 0.0077 0.0141 0.0142 0.0175
KCE Electronics 0.0153 0.0084 0.0199 0.0147 0.0146 0.0140 0.0082 0.0148 0.0145 0.0182
Telekomunikacja Polska 0.0164 0.0097 0.0201 0.0158 0.0153 0.0160 0.0101 0.0162 0.0154 0.0201
Caesars Resort Collection LLC 0.0169 0.0098 0.0204 0.0161 0.0154 0.0163 0.0124 0.0166 0.0169 0.0216
Asurion LLC 0.0163 0.0080 0.0203 0.0160 0.0151 0.0161 0.0110 0.0158 0.0162 0.0210
Intelsat Jackson Holdings 0.0182 0.0075 0.0196 0.0160 0.0134 0.0152 0.0110 0.0150 0.0155 0.0194
Athenahealth Group 0.0172 0.0070 0.0183 0.0147 0.0133 0.0135 0.0106 0.0140 0.0148 0.0176
Great Outdoors Group LLC 0.0157 0.0056 0.0175 0.0136 0.0132 0.0127 0.0105 0.0125 0.0139 0.0165
Petróleos Mexicanos 0.0145 0.0037 0.0159 0.0133 0.0126 0.0119 0.0091 0.0112 0.0137 0.0149
Petrobras 0.0127 0.0031 0.0148 0.0122 0.0110 0.0103 0.0091 0.0101 0.0134 0.0145
Sands China Ltd 0.0108 0.0026 0.0145 0.0114 0.0104 0.0101 0.0075 0.0085 0.0132 0.0143
Indonesia Asahan Alumini 0.0102 0.0013 0.0140 0.0094 0.0096 0.0087 0.0052 0.0078 0.0123 0.0131
Longfor Properties 0.0094 0.0010 0.0138 0.0081 0.0079 0.0077 0.0050 0.0059 0.0113 0.0118
Table 12 Sign prediction ratio (30min) for all trades
Qfuzzy AdaBoost-GA SVM-GA DLNN-GA QGA ANFIS-QGA DRCNN CNN-LSTM GRU-CNN QRNN
Germany 0.793 0.824 0.848 0.888 0.874 0.853 0.884 0.842 0.877 0.892
United States 0.787 0.822 0.842 0.884 0.868 0.847 0.877 0.841 0.869 0.887
Italy 0.783 0.821 0.836 0.878 0.860 0.843 0.871 0.836 0.865 0.887
Spain 0.777 0.820 0.833 0.871 0.854 0.840 0.869 0.831 0.861 0.883
Japan 0.774 0.818 0.828 0.867 0.849 0.832 0.864 0.823 0.861 0.876
Turkey 0.769 0.814 0.822 0.861 0.845 0.826 0.861 0.821 0.859 0.871
Mexico 0.767 0.812 0.821 0.856 0.841 0.821 0.855 0.819 0.852 0.867
Indonesia 0.759 0.809 0.817 0.852 0.833 0.820 0.850 0.818 0.845 0.862
Nigeria 0.755 0.808 0.812 0.851 0.831 0.820 0.847 0.814 0.843 0.856
Poland 0.750 0.801 0.806 0.845 0.828 0.813 0.845 0.810 0.835 0.854
Walmart 0.748 0.801 0.799 0.844 0.820 0.809 0.844 0.809 0.828 0.848
Johnson & Johnson 0.747 0.786 0.791 0.843 0.805 0.806 0.828 0.803 0.827 0.836
Verizon 0.755 0.791 0.798 0.845 0.815 0.807 0.833 0.814 0.839 0.852
Unilever PLC 0.770 0.807 0.813 0.845 0.828 0.823 0.848 0.827 0.854 0.856
Rio Tinto PLC 0.776 0.818 0.823 0.853 0.834 0.835 0.848 0.828 0.869 0.858
Air Liquide 0.779 0.824 0.830 0.858 0.839 0.848 0.865 0.829 0.879 0.867
Ambev 0.787 0.841 0.838 0.867 0.847 0.855 0.871 0.842 0.882 0.853
Cemex 0.785 0.830 0.815 0.844 0.832 0.835 0.860 0.834 0.871 0.851
Turkish Airlines 0.767 0.821 0.808 0.834 0.813 0.821 0.837 0.831 0.857 0.846
KCE Electronics 0.752 0.802 0.789 0.833 0.789 0.811 0.822 0.824 0.849 0.829
Telekomunikacja Polska 0.745 0.777 0.785 0.821 0.764 0.800 0.818 0.806 0.832 0.815
Caesars Resort Collection LLC 0.744 0.773 0.779 0.817 0.739 0.795 0.809 0.805 0.804 0.810
Asurion LLC 0.744 0.768 0.755 0.792 0.719 0.772 0.796 0.800 0.795 0.799
Intelsat Jackson Holdings 0.730 0.748 0.753 0.784 0.703 0.753 0.783 0.784 0.788 0.779
Athenahealth Group 0.715 0.740 0.730 0.777 0.681 0.751 0.764 0.766 0.771 0.779
Great Outdoors Group LLC 0.689 0.728 0.721 0.757 0.667 0.736 0.737 0.746 0.768 0.777
Petróleos Mexicanos 0.672 0.719 0.714 0.749 0.645 0.716 0.725 0.718 0.760 0.762
Petrobras 0.666 0.713 0.702 0.743 0.622 0.714 0.719 0.712 0.742 0.758
Sands China Ltd 0.660 0.700 0.680 0.722 0.601 0.706 0.697 0.685 0.720 0.756
Indonesia Asahan Alumini 0.642 0.678 0.679 0.698 0.584 0.703 0.675 0.664 0.711 0.733
Longfor Properties 0.622 0.668 0.678 0.694 0.578 0.694 0.665 0.649 0.705 0.729
Table 13 Ideal profit ratio (30min) for all trades
Qfuzzy AdaBoost-GA SVM-GA DLNN-GA QGA ANFIS-QGA DRCNN CNN-LSTM GRU-CNN QRNN
Germany 0.0070 0.0090 0.0092 0.0108 0.0110 0.0098 0.0086 0.0055 0.0099 0.0118
United States 0.0088 0.0098 0.0101 0.0116 0.0114 0.0110 0.0094 0.0069 0.0110 0.0137
Italy 0.0098 0.0114 0.0121 0.0122 0.0126 0.0114 0.0110 0.0073 0.0117 0.0157
Spain 0.0100 0.0119 0.0127 0.0134 0.0139 0.0117 0.0115 0.0092 0.0125 0.0174
Japan 0.0098 0.0128 0.0132 0.0138 0.0149 0.0122 0.0132 0.0109 0.0141 0.0174
Turkey 0.0115 0.0141 0.0152 0.0160 0.0160 0.0135 0.0143 0.0130 0.0155 0.0189
Mexico 0.0104 0.0125 0.0137 0.0153 0.0151 0.0121 0.0137 0.0118 0.0139 0.0186
Indonesia 0.0092 0.0109 0.0136 0.0146 0.0151 0.0104 0.0120 0.0098 0.0138 0.0172
Nigeria 0.0081 0.0097 0.0129 0.0137 0.0148 0.0086 0.0120 0.0084 0.0130 0.0169
Poland 0.0074 0.0094 0.0123 0.0122 0.0144 0.0069 0.0111 0.0067 0.0124 0.0154
Walmart 0.0069 0.0084 0.0108 0.0114 0.0130 0.0061 0.0096 0.0059 0.0120 0.0136
Johnson & Johnson 0.0051 0.0071 0.0106 0.0101 0.0124 0.0053 0.0078 0.0056 0.0111 0.0133
Verizon 0.0071 0.0086 0.0111 0.0118 0.0134 0.0063 0.0099 0.0061 0.0124 0.0140
Unilever PLC 0.0053 0.0074 0.0109 0.0104 0.0127 0.0054 0.0080 0.0057 0.0114 0.0137
Rio Tinto PLC 0.0052 0.0071 0.0098 0.0098 0.0122 0.0042 0.0061 0.0037 0.0099 0.0119
Air Liquide 0.0052 0.0052 0.0079 0.0081 0.0116 0.0025 0.0057 0.0020 0.0089 0.0100
Ambev 0.0048 0.0042 0.0076 0.0060 0.0101 0.0023 0.0041 0.0010 0.0069 0.0083
Cemex 0.0052 0.0046 0.0089 0.0079 0.0105 0.0031 0.0055 0.0018 0.0071 0.0097
Turkish Airlines 0.0056 0.0050 0.0103 0.0086 0.0109 0.0050 0.0070 0.0035 0.0090 0.0116
KCE Electronics 0.0061 0.0055 0.0104 0.0098 0.0110 0.0056 0.0083 0.0037 0.0090 0.0136
Telekomunikacja Polska 0.0064 0.0061 0.0114 0.0105 0.0123 0.0067 0.0106 0.0052 0.0097 0.0143
Caesars Resort Collection LLC 0.0069 0.0070 0.0122 0.0121 0.0130 0.0068 0.0126 0.0072 0.0106 0.0158
Asurion LLC 0.0088 0.0071 0.0132 0.0125 0.0134 0.0079 0.0130 0.0081 0.0112 0.0171
Intelsat Jackson Holdings 0.0092 0.0084 0.0138 0.0135 0.0150 0.0090 0.0148 0.0094 0.0127 0.0178
Athenahealth Group 0.0076 0.0063 0.0138 0.0132 0.0136 0.0084 0.0147 0.0091 0.0110 0.0164
Great Outdoors Group LLC 0.0083 0.0054 0.0120 0.0130 0.0135 0.0067 0.0128 0.0086 0.0094 0.0147
Petróleos Mexicanos 0.0069 0.0044 0.0119 0.0117 0.0126 0.0059 0.0118 0.0069 0.0088 0.0142
Petrobras 0.0050 0.0040 0.0108 0.0111 0.0108 0.0040 0.0110 0.0055 0.0073 0.0139
Sands China Ltd 0.0040 0.0022 0.0100 0.0101 0.0092 0.0029 0.0104 0.0043 0.0072 0.0123
Indonesia Asahan Alumini 0.0021 0.0028 0.0087 0.0082 0.0080 0.0017 0.0097 0.0038 0.0070 0.0118
Longfor Properties 0.0016 0.0010 0.0071 0.0069 0.0061 0.0016 0.0077 0.0019 0.0069 0.0098
Table 14 Sign prediction ratio (60min) for all trades
Qfuzzy AdaBoost-GA SVM-GA DLNN-GA QGA ANFIS-QGA DRCNN CNN-LSTM GRU-CNN QRNN
Germany 0.773 0.804 0.827 0.866 0.908 0.832 0.892 0.865 0.884 0.909
United States 0.768 0.802 0.821 0.862 0.902 0.826 0.885 0.864 0.877 0.903
Italy 0.764 0.801 0.816 0.856 0.894 0.823 0.878 0.859 0.872 0.903
Spain 0.758 0.799 0.812 0.849 0.888 0.819 0.876 0.853 0.869 0.899
Japan 0.755 0.798 0.808 0.846 0.883 0.811 0.872 0.845 0.868 0.891
Turkey 0.750 0.794 0.801 0.840 0.878 0.806 0.869 0.843 0.867 0.886
Mexico 0.748 0.792 0.801 0.835 0.874 0.801 0.863 0.841 0.859 0.882
Indonesia 0.740 0.789 0.797 0.831 0.866 0.800 0.858 0.840 0.853 0.878
Nigeria 0.736 0.788 0.792 0.830 0.864 0.800 0.855 0.836 0.851 0.872
Poland 0.732 0.781 0.786 0.824 0.860 0.792 0.853 0.832 0.842 0.870
Walmart 0.729 0.781 0.780 0.823 0.852 0.789 0.852 0.831 0.835 0.863
Johnson & Johnson 0.728 0.767 0.772 0.822 0.837 0.786 0.836 0.825 0.834 0.851
Verizon 0.736 0.772 0.778 0.824 0.847 0.787 0.840 0.836 0.846 0.868
Unilever PLC 0.751 0.787 0.793 0.824 0.861 0.803 0.855 0.849 0.862 0.872
Rio Tinto PLC 0.757 0.798 0.803 0.832 0.867 0.815 0.856 0.850 0.877 0.873
Air Liquide 0.759 0.804 0.810 0.837 0.872 0.827 0.873 0.851 0.886 0.882
Ambev 0.768 0.820 0.817 0.845 0.880 0.833 0.879 0.865 0.890 0.869
Cemex 0.766 0.810 0.795 0.824 0.865 0.814 0.868 0.856 0.878 0.866
Turkish Airlines 0.748 0.800 0.788 0.814 0.845 0.801 0.845 0.854 0.864 0.862
KCE Electronics 0.734 0.783 0.770 0.812 0.820 0.791 0.830 0.846 0.857 0.844
Telekomunikacja Polska 0.727 0.757 0.766 0.801 0.794 0.780 0.825 0.828 0.839 0.829
Caesars Resort Collection LLC 0.726 0.754 0.759 0.797 0.768 0.775 0.816 0.826 0.811 0.825
Asurion LLC 0.725 0.749 0.736 0.773 0.797 0.753 0.803 0.822 0.801 0.814
Intelsat Jackson Holdings 0.712 0.730 0.734 0.764 0.778 0.735 0.790 0.806 0.794 0.793
Athenahealth Group 0.697 0.721 0.712 0.758 0.754 0.733 0.771 0.786 0.778 0.768
Great Outdoors Group LLC 0.672 0.710 0.703 0.739 0.738 0.718 0.743 0.766 0.775 0.766
Petróleos Mexicanos 0.655 0.701 0.696 0.730 0.714 0.698 0.731 0.738 0.766 0.752
Petrobras 0.650 0.695 0.685 0.725 0.734 0.696 0.726 0.732 0.748 0.747
Sands China Ltd 0.644 0.683 0.663 0.704 0.709 0.689 0.704 0.704 0.726 0.745
Indonesia Asahan Alumini 0.626 0.661 0.662 0.681 0.689 0.685 0.681 0.682 0.717 0.723
Longfor Properties 0.606 0.652 0.661 0.677 0.681 0.676 0.671 0.666 0.711 0.719
Table 15 Ideal profit ratio (60min) for all trades
Qfuzzy AdaBoost-GA SVM-GA DLNN-GA QGA ANFIS-QGA DRCNN CNN-LSTM GRU-CNN QRNN
Germany 0.0114 0.0066 0.0064 0.0085 0.0093 0.0050 0.0051 0.0037 0.0068 0.0082
United States 0.0129 0.0070 0.0068 0.0103 0.0111 0.0053 0.0051 0.0042 0.0061 0.0074
Italy 0.0135 0.0079 0.0072 0.0104 0.0111 0.0057 0.0055 0.0058 0.0065 0.0079
Spain 0.0143 0.0072 0.0075 0.0107 0.0123 0.0076 0.0067 0.0061 0.0074 0.0090
Japan 0.0157 0.0072 0.0080 0.0112 0.0128 0.0093 0.0083 0.0075 0.0083 0.0100
Turkey 0.0160 0.0082 0.0087 0.0117 0.0144 0.0111 0.0099 0.0076 0.0099 0.0120
Mexico 0.0150 0.0078 0.0084 0.0114 0.0125 0.0104 0.0097 0.0074 0.0077 0.0093
Indonesia 0.0130 0.0071 0.0080 0.0107 0.0123 0.0100 0.0086 0.0065 0.0074 0.0090
Nigeria 0.0127 0.0062 0.0073 0.0091 0.0123 0.0096 0.0072 0.0051 0.0071 0.0087
Poland 0.0109 0.0058 0.0070 0.0090 0.0121 0.0075 0.0052 0.0043 0.0066 0.0080
Walmart 0.0105 0.0051 0.0061 0.0073 0.0111 0.0061 0.0051 0.0042 0.0060 0.0073
Johnson & Johnson 0.0099 0.0047 0.0056 0.0055 0.0096 0.0058 0.0028 0.0034 0.0047 0.0057
Verizon 0.0091 0.0049 0.0056 0.0057 0.0090 0.0050 0.0022 0.0019 0.0041 0.0050
Unilever PLC 0.0069 0.0040 0.0046 0.0048 0.0083 0.0047 0.0017 0.0002 0.0028 0.0035
Rio Tinto PLC 0.0049 0.0036 0.0040 0.0048 0.0068 0.0045 0.0011 0.0008 0.0027 0.0033
Air Liquide 0.0050 0.0047 0.0036 0.0067 0.0090 0.0048 0.0036 0.0005 0.0034 0.0042
Ambev 0.0052 0.0050 0.0035 0.0070 0.0105 0.0058 0.0054 0.0009 0.0049 0.0059
Cemex 0.0057 0.0056 0.0024 0.0078 0.0109 0.0076 0.0069 0.0016 0.0070 0.0085
Turkish Airlines 0.0067 0.0060 0.0015 0.0086 0.0117 0.0093 0.0075 0.0029 0.0079 0.0096
KCE Electronics 0.0073 0.0071 0.0008 0.0089 0.0129 0.0106 0.0100 0.0033 0.0079 0.0096
Telekomunikacja Polska 0.0092 0.0079 0.0007 0.0100 0.0144 0.0110 0.0112 0.0049 0.0084 0.0102
Caesars Resort Collection LLC 0.0112 0.0090 0.0011 0.0116 0.0154 0.0114 0.0096 0.0033 0.0108 0.0131
Asurion LLC 0.0103 0.0084 0.0007 0.0116 0.0154 0.0104 0.0095 0.0033 0.0085 0.0104
Intelsat Jackson Holdings 0.0119 0.0076 0.0014 0.0115 0.0134 0.0090 0.0075 0.0032 0.0077 0.0094
Athenahealth Group 0.0103 0.0065 0.0022 0.0107 0.0116 0.0082 0.0050 0.0018 0.0056 0.0068
Great Outdoors Group LLC 0.0082 0.0064 0.0026 0.0104 0.0109 0.0080 0.0025 0.0015 0.0039 0.0047
Petróleos Mexicanos 0.0063 0.0058 0.0026 0.0098 0.0094 0.0068 0.0017 0.0054 0.0080 0.0086
Petrobras 0.0046 0.0056 0.0032 0.0077 0.0074 0.0065 0.0017 0.0061 0.0057 0.0060
Sands China Ltd 0.0043 0.0048 0.0035 0.0075 0.0052 0.0052 0.0003 0.0066 0.0057 0.0058
Indonesia Asahan Alumini 0.0031 0.0042 0.0037 0.0057 0.0049 0.0038 0.0019 0.0077 0.0056 0.0057
Longfor Properties 0.0026 0.0032 0.0039 0.0049 0.0037 0.0035 0.0020 0.0085 0.0075 0.0079
Appendix3: Features About Every Bonds Used intheSample
Bond Issuer ISIN Code Issue Date Maturity Coupon (%) Volume Issued (in USD) Volume Executed (in USD)
Germany DE0001141836 07/04/2010 07/04/2042 3.250 27,458,500,000 2,532,011,575
DE0001102317 05/15/2013 05/15/2023 1.500 19,770,120,000 1,695,369,335
DE0001102325 09/11/2013 08/15/2023 2.000 19,770,120,000 1,647,715,293
DE0001102333 01/29/2014 02/15/2024 1.750 19,770,120,000 1,813,871,000
DE0001104875 08/15/2017 08/15/2048 1.250 21,417,630,000 2,111,752,498
DE0001141794 01/23/2019 04/05/2024 0.000 23,065,140,000 2,131,391,960
DE0001102358 05/21/2014 05/15/2024 1.500 24,712,650,000 2,064,163,436
DE0001104883 05/17/2022 06/14/2024 0.200 18,671,780,000 1,520,070,072
DE0001102366 09/10/2014 08/15/2024 1.000 19,648,260,000 1,509,855,113
DE0001135366 07/23/2008 07/04/2040 4.750 17,573,440,000 1,447,789,640
United States
US9128283W81 2/15/2018 2/15/2028 2.750 70,572,104,500 5,499,761,156
US912828P469 02/15/2016 02/15/2026 1.625 64,940,659,900 4,700,899,049
US9128285W63 01/15/2019 01/15/2029 1.032 13,000,026,300 1,100,035,194
US912810SG40 02/15/2019 02/15/2049 1.181 15,385,015,600 1,449,539,188
US91282CDD02 10/31/2021 10/31/2023 0.375 66,099,908,400 4,592,193,508
US9128285C00 09/30/2018 09/30/2025 3.000 31,000,000,000 2,243,332,249
US912810QP66 02/15/2011 02/15/2041 2.884 23,984,657,100 1,704,139,189
US912810QF84 02/15/2010 02/15/2040 2.922 15,171,280,100 1,148,366,429
US912810RY64 08/15/2017 08/15/2047 2.750 43,512,330,700 3,776,592,369
US912810QX90 08/15/2012 08/15/2042 2.750 41,995,432,300 3,414,186,998
US912810QW18 05/15/2012 05/15/2042 3.000 43,918,685,600 3,472,679,035
US912810PX00 05/15/2008 05/15/2038 4.500 25,500,122,800 1,873,596,980
Italy IT0005436693 02/01/2021 08/01/2031 0.600 21,300,000,000 1,545,041,693
IT0005240350 09/01/2016 09/01/2033 2.450 16,808,228,000 1,379,642,560
IT0005210650 08/01/2016 12/01/2026 1.250 18,891,843,000 1,374,124,929
IT0005127086 08/01/2015 12/01/2025 2.000 19,427,596,000 1,262,818,368
IT0004953417 08/01/2013 03/01/2024 4.500 23,264,571,000 1,450,240,610
IT0005321325 09/01/2017 09/01/2038 2.950 14,963,750,000 1,141,990,363
IT0004644735 09/01/2010 03/01/2026 4.500 21,999,898,000 1,321,215,545
IT0005327306 03/15/2018 05/15/2025 1.450 15,419,125,000 965,491,378
IT0005090318 03/02/2015 06/01/2025 1.500 19,786,723,000 1,195,137,809
IT0005386245 10/01/2019 02/01/2025 0.350 19,468,306,000 1,233,633,129
Spain ES00000128H5 07/26/2016 10/31/2026 1.300 26,346,638,000 1,555,103,387
ES00000127G9 06/09/2015 10/31/2025 2.150 25,740,540,000 1,421,847,795
ES0000012B88 07/03/2018 07/30/2028 1.400 23,365,049,000 1,483,202,516
ES0000012B70 11/30/2017 11/30/2023 0.175 5,456,261,000 285,222,503
ES00000127Z9 01/19/2016 04/30/2026 1.950 22,952,139,000 1,252,560,660
ES00000124C5 07/16/2013 10/31/2028 5.150 18,769,067,000 1,126,497,346
ES00000126A4 11/30/2013 11/30/2024 2.140 13,170,578,000 630,704,030
ES00000120N0 06/20/2007 07/30/2040 4.900 20,669,793,000 1,448,690,669
ES00000127C8 11/30/2014 11/30/2030 1.186 16,836,799,000 1,106,295,472
Japan JP1103351E98 9/22/2014 9/20/2024 0.500 58,407,605,995 3,114,395,372
JP1103291D68 6/20/2013 6/20/2023 0.800 56,783,915,871 2,760,761,340
JP1103551K72 6/20/2019 6/20/2029 0.100 51,040,754,700.00 3,341,038,531.12
JP1201551FC0 12/20/2015 12/20/2035 1.000 33,483,028,176 2,594,309,168
JP1200881660 6/20/2006 6/20/2026 2.300 18,164,193,301 1,080,066,060
JP1200871653 3/20/2006 3/20/2026 2.200 8,161,464,243 450,517,545
JP1103451GC0 12/20/2016 12/20/2026 0.100 65,843,010,168 3,791,204,304
JP1103341E67 6/20/2014 6/20/2024 0.600 53,826,367,974 2,567,397,676
JP1103301D90 9/20/2013 9/20/2023 0.800 54,398,599,836 1,770,760,994
JP1200631388 6/20/2003 6/20/2023 1.800 6,943,752,926 183,727,606
JP1103391F65 6/20/2015 6/20/2025 0.400 58,932,408,728 2,240,969,006
Turkey XS1843443356 01/31/2019 03/31/2025 4.625 1,372,925,000 49,902,475
XS1909184753 11/14/2018 02/16/2026 5.200 1,649,992,500 63,873,111
US900123CF53 01/29/2014 03/22/2024 5.750 2,500,000,000 80,067,646
XS1629918415 06/17/2017 06/14/2025 3.250 1,101,885,000 36,815,007
US900123CP36 01/17/2018 02/17/2028 5.125 2,000,000,000 87,130,939
US900123CW86 11/14/2019 11/14/2024 5.600 2,500,000,000 56,788,080
US900123AW05 01/24/2005 05/02/2025 7.325 3,250,000,000 86,709,767
US900123AY60 01/17/2006 03/17/2036 6.875 2,750,000,000 130,954,080
US900123CB40 04/16/2013 04/16/2043 4.875 3,000,000,000 174,902,998
US900123CM05 05/11/2017 05/11/2047 5.750 3,500,000,000 206,898,779
Mexico US91087BAG59 7/31/2019 1/31/2050 4.500 2,903,527,000 201,705,721
US91087BAK61 4/27/2020 4/27/2032 4.750 2,500,000,000 133,551,452
MX0MGO000102 2/23/2017 11/07/2047 8.000 258,121,595 16,530,175
MX0MGO000078 12/30/2004 12/05/2024 10.000 240,194,562 10,264,518
MX0MGO0000P2 6/23/2011 5/29/2031 7.750 437,969,452 19,781,815
US91086QBE70 01/21/2014 01/21/2045 5.550 3,000,000,000 193,969,256
XS2135361686 09/18/2020 09/18/2027 1.350 750,000,000 41,896,431
US91087BAJ98 04/27/2020 04/27/2025 3.900 1,000,000,000 47,779,374
US91087BAD29 10/10/2017 02/10/2048 4.600 2,525,274,000 159,228,149
US91086QAS75 09/27/2004 09/27/2034 6.750 4,266,566,000 254,190,320
Indonesia
US455780CT15 04/15/2020 10/15/2050 4.200 1,650,000,000 123,440,663
USY20721BE87 04/15/2013 04/15/2043 4.625 1,500,000,000 90,206,635
US455780CR58 01/14/2020 02/14/2050 3.500 800,000,000 63,291,522
XS2012546714 06/13/2019 09/18/2026 1.450 817,590,000 40,398,068
USY20721BU20 07/18/2017 07/14/2047 4.750 1,000,000,000 77,217,563
US455780CD62 12/11/2017 01/11/2028 3.500 1,250,000,000 77,888,231
USY20721BT56 07/18/2017 07/18/2027 3.850 1,000,000,000 56,096,003
US455780CY00 07/28/2021 07/28/2031 2.150 600,000,000 39,203,276
USY20721BQ18 12/08/2016 01/08/2027 4.350 1,250,000,000 77,979,617
US455780CQ75 01/14/2020 02/14/2030 2.850 1,200,000,000 81,257,678
Nigeria XS1717013095 11/28/2017 11/28/2047 7.625 1,500,000,000 87,347,474
XS1777972941 02/23/2018 02/23/2038 7.696 1,250,000,000 66,002,475
XS1717011982 11/28/2017 11/28/2027 6.500 1,500,000,000 44,900,152
XS0944707222 07/12/2013 07/12/2023 6.375 500,000,000 10,540,924
XS2384698994 09/28/2021 09/28/2028 6.125 1,250,000,000 42,094,445
XS1777972511 02/23/2018 02/23/2030 7.143 1,250,000,000 45,435,632
XS1566179039 02/16/2017 02/16/2032 7.875 1,000,000,000 35,545,324
XS1910826996 11/21/2018 11/21/2025 7.625 1,118,352,000 28,394,300
XS2384704800 09/28/2021 09/28/2051 8.250 1,250,000,000 75,140,475
XS2384701020 09/28/2021 09/28/2033 7.375 1,250,000,000 62,662,368
Poland XS2114767457 02/10/2020 02/10/2025 0.000 1,647,510,000 34,734,468
XS0224427160 07/20/2005 07/20/2055 4.250 519,208,000 22,189,504
XS1288467605 09/09/2015 09/09/2025 1.500 1,091,570,000 25,934,254
XS1015428821 01/15/2014 01/15/2024 3.000 2,196,680,000 44,408,386
XS1508566392 10/25/2016 10/25/2028 1.000 818,677,500 21,003,581
XS1584894650 03/23/2017 10/22/2027 1.375 1,091,570,000 25,899,231
XS1766612672 02/07/2018 08/07/2028 1.125 1,090,420,000 25,933,439
US857524AC63 01/22/2014 01/22/2024 4.000 2,000,000,000 41,690,796
XS1346201889 01/18/2016 01/18/2036 2.375 2,183,140,000 82,782,859
XS1960361720 03/07/2019 03/08/2043 2.000 545,785,000 27,703,475
Walmart US931142CH46 04/05/2007 04/05/2027 5.875 750,000,000 99,404,779
US931142BF98 2/15/2000 2/15/2030 7.550 1,000,000,000 100,362,707
US931142CB75 8/31/2005 09/01/2035 5.250 2,500,000,000 277,573,653
US931142CK74 8/24/2007 8/15/2037 6.500 2,250,000,000 268,070,264
US931142CM31 4/15/2008 4/15/2038 6.200 1,500,000,000 178,712,981
US931142AU74 10/14/1993 10/15/2023 6.750 250,000,000 7,729,944
Johnson & Johnson
US478160CM48 11/10/2017 1/15/2048 3.500 750,000,000 94,421,361
US478160AL82 5/22/2003 5/15/2033 4.950 500,000,000 31,020,881
US478160AN49 8/16/2007 8/15/2037 5.950 1,000,000,000 82,177,051
US478160AT19 6/23/2008 7/15/2038 5.850 700,000,000 58,192,503
US478160AV64 8/17/2010 09/01/2040 4.500 550,000,000 50,015,272
US478160BA19 5/20/2011 5/15/2041 4.850 300,000,000 29,396,442
Verizon XS1405769727 11/02/2016 11/02/2035 3.125 604,125,000 60,766,843
US92343VFE92 3/20/2020 3/22/2030 3.150 1,500,000,000 127,097,596
US92343VFU35 11/20/2020 11/20/2050 2.875 2,750,000,000 269,487,826
AU3CB0246221 8/17/2017 2/17/2025 4.050 301,603,525 24,454,081
US92343VDU52 3/16/2017 3/16/2037 5.250 3,000,000,000 420,910,769
AU3CB0268142 11/06/2019 05/06/2026 2.100 304,182,753 28,351,338
Unilever PLC
XS2008925344 06/11/2019 06/11/2039 1.500 709,520,500 88,993,363
XS1684780031 9/15/2017 9/15/2024 1.375 335,625,000 23,428,648
XS2008921277 06/11/2019 7/22/2026 1.500 671,250,000 42,497,268
XS1684780205 9/15/2017 9/15/2029 1.875 335,625,000 26,178,357
XS2008925344 06/11/2019 06/11/2039 1.500 697,398,000 78,171,947
Rio Tinto PLC
US767201AT32 11/02/2021 11/02/2051 2.750 1,250,000,000 155,997,716
US767201AD89 6/27/2008 7/15/2028 7.125 750,000,000 60,751,202
US767201AL06 11/02/2010 11/02/2040 5.200 500,000,000 52,837,019
XS0863127279 12/11/2012 12/11/2024 2.875 457,678,278 22,966,598
XS0863076930 12/11/2012 12/11/2029 4.000 671,250,000 42,099,681
US013716AW59 31/05/2005 01/06/2035 5.750 300,000,000 19,457,646
Air Liquide
FR0011951771 06/05/2014 06/05/2024 1.875 545,785,000 37,902,233
FR0011439835 03/06/2013 09/06/2023 2.375 327,471,000 18,981,136
FR0014005HY8 9/20/2021 9/20/2033 0.375 538,457,000 48,757,525
FR0013182839 6/13/2016 6/13/2024 0.750 537,289,000 33,483,220
FR0013428067 6/20/2019 6/20/2030 0.625 654,942,000 57,805,408
FR0013182847 6/13/2016 6/13/2028 1.250 1,091,570,000 82,578,453
FR0013241346 03/08/2017 03/08/2027 1.000 661,824,000 46,333,309
FR0012766889 06/03/2015 06/03/2025 1.250 542,643,000 33,326,475
FR0013505559 04/02/2020 04/02/2025 1.000 539,271,000 24,767,777
Ambev LU0000870137 4/21/2023 3/31/2027 10.625 41,352,000 4,388,593
US20441XAB82 10/30/2002 12/15/2011 10.500 497,432,000 10,637,773
USP30580AA55 12/15/2011 12/15/2019 13.200 209,796,000 6,547,211
LU1234567890 2/15/2016 2/15/2021 9.750 59,793,000 2,661,324
Cemex US151290BZ57 01/11/2021 01/11/2031 3.875 1,107,769,000 123,496,287
US151290BX00 9/17/2020 9/17/2030 5.200 717,384,000 60,778,504
US151290BW27 06/05/2020 06/05/2027 7.375 935,879,000 65,878,868
US151290BV44 11/19/2019 11/19/2029 5.450 753,053,000 55,026,216
US766879AA85 04/01/2003 7/21/2025 7.700 149,897,000 7,485,139
Turkish Airlines
US10010YAA01 03/15/2015 03/15/2027 4.200 328,274,000 31,979,536
USU0567PAA40 03/04/2013 03/04/2025 7.500 750,000,000 57,689,650
KCE Electronics
US48245B1098 8/21/2017 8/21/2022 8.250 800,000,000 40,990,255
XS2180488111 06/04/2020 12/31/2041 9.683 304,556,000 34,738,793
XS1265917481 08/03/2015 12/31/2030 8.575 290,000,000 24,240,151
Telekomunikacja Polska
XS1585599667 01/08/2016 01/08/2024 1.875 754,117,000 69,232,737
XS1763026934 05/19/2017 05/19/2024 1.250 532,928,000 42,621,102
Caesars Resort Collection LLC
USU1230PAB77 02/06/2023 2/15/2030 7.000 2,000,000,000 206,375,311
USU1230PAA94 10/15/2022 10/15/2029 4.625 1,200,000,000 95,097,735
USU2829LAD74 07/01/2020 07/01/2027 8.125 1,800,000,000 119,510,985
US28470RAJ14 07/01/2018 07/01/2025 6.250 3,400,000,000 130,617,993
Asurion LLC
US04649VAX82 12/23/2018 12/23/2026 8.750 3,510,356 258,496
US04649VAW00 11/03/2016 11/03/2024 7.500 150,000,000 8,370,686
US04649VAQ32 07/08/2014 07/08/2020 6.975 100,000,000 3,347,416
US04649VAG59 05/04/2012 05/04/2019 7.250 120,000,000 2,795,915
Intelsat Jackson Holdings
US45824TAR68 03/29/2016 02/15/2024 8.000 1,349,678,000 119,432,338
US45824TAM71 09/15/2018 09/15/2022 6.625 1,275,000,000 41,601,937
US45824TAS42 09/30/2014 09/30/2022 9.500 490,000,000 20,561,418
US45824TAP03 06/05/2013 08/01/2023 5.500 2,000,000,000 89,294,027
USL5137XAT65 9/19/2018 10/15/2024 8.500 2,950,000,000 170,262,952
USL5137XAJ83 3/29/2016 2/15/2024 8.000 1,250,000,000 686,303,258
Athenahealth Group
US60337JAA43 02/15/2022 02/15/2030 6.500 2,350,000,000 196,740,827
US04686RAB96 02/15/2021 02/15/2029 6.500 5,900,000,000 481,150,076
US04686RAC79 09/15/2021 09/15/2029 6.500 1,000,000,000 69,922,650
Great Outdoors Group LLC
US07014QAN16 03/06/2018 03/06/2028 8.380 200,000,000 18,961,194
US07014QAM33 03/05/2022 03/05/2028 7.250 250,000,000 20,901,121
US07014QAJ04 03/25/2014 03/25/2019 7.500 150,000,000 7,220,059
US07014QAG64 06/05/2015 06/05/2020 6.950 300,000,000 17,237,389
US07014QAK76 09/25/2019 09/25/2024 8.450 200,000,000 16,713,977
Petróleos Mexicanos
US706451BD26 9/15/2004 9/15/2027 9.500 162,425,000 11,355,157
US706451BR12 06/04/2008 6/15/2038 6.625 491,175,000 40,759,059
US706451BG56 12/15/2005 6/15/2035 6.625 2,749,000,000 202,429,964
XS0213101073 2/24/2005 2/24/2025 5.500 1,098,340,000 70,566,416
US71654QCB68 02/04/2016 08/04/2026 6.875 2,969,774,000 179,974,227
US71654QDN97 02/08/2023 02/08/2033 10.000 2,000,000,000 177,155,911
US71643VAB18 12/16/2021 2/16/2032 6.700 6,779,842,000 502,568,278
USP78625DC49 9/26/2014 09/12/2024 7.190 2,612,203,780 133,945,121
Petrobras XS0718502007 12/12/2011 12/14/2026 6.250 658,493,565 49,036,883
US71645WAS08 1/27/2011 1/27/2041 6.750 2,250,000,000 241,054,565
US71645WAQ42 10/30/2009 1/20/2040 6.875 1,500,000,000 139,721,044
US71647NAM11 03/17/2014 03/17/2024 6.250 559,315,000 33,895,621
XS0982711714 01/14/2014 01/14/2025 4.750 3,096,714,713 17,648,749
US71647NAT63 09/27/2017 01/27/2025 5.299 633,300,000 34,370,495
US71647NAQ25 05/23/2016 05/23/2026 8.750 386,934,000 16,232,224
US71647NAS80 01/17/2017 01/17/2027 7.375 710,066,000 47,443,714
US71645WAQ42 10/30/2009 01/20/2040 6.875 728,963,000 62,530,497
Sands China Ltd
US80007RAE53 08/09/2018 08/08/2028 5.900 1,892,760,000 136,711,791
US80007RAL96 06/04/2020 6/18/2030 4.375 697,375,000 55,222,959
USG7801RAD10 06/04/2020 6/18/2030 4.375 700,000,000 52,121,268
US80007RAK14 06/04/2020 01/08/2026 4.300 796,938,000 34,972,824
US80007RAF29 08/09/2018 08/08/2025 5.625 1,786,475,000 76,954,319
USG7801RAB53 08/09/2018 08/08/2025 5.125 1,800,000,000 35,750,190
USG7801RAE92 01/08/2019 01/08/2026 3.800 800,000,000 23,243,316
US80007RAQ83 03/08/2021 03/08/2029 2.850 649,621,000 27,246,119
Indonesia Asahan Alumini
USY7140WAE85 05/15/2020 05/15/2025 4.750 1,000,000,000 60,601,528
USY7140WAA63 11/15/2018 11/15/2021 5.230 498,671,000 16,447,360
USY7140WAF50 05/15/2020 05/15/2030 5.450 1,000,000,000 61,079,590.44
USY7140WAB47 11/15/2019 11/15/2023 5.710 310,939,000 13,176,920
USY7140WAC20 11/15/2018 11/15/2028 6.530 598,460,000 36,264,242
Longfor Properties
XS1743535491 01/16/2018 01/16/2028 4.500 500,000,000 34,963,499
XS1633950453 07/13/2017 07/13/2022 3.875 450,000,000 13,282,281
XS2033262895 9/16/2019 9/16/2029 3.950 850,000,000 44,377,944
XS2098650414 1/13/2020 1/13/2032 3.850 400,000,000 31,701,054
XS2098539815 1/13/2020 4/13/2027 3.375 250,000,000 16,287,838
XS0877742105 1/29/2013 1/29/2023 6.750 500,000,000 15,988,419
XS1743535228 1/16/2018 4/16/2023 3.900 300,000,000 9,683,816
XS0844323930 10/18/2012 10/18/2019 6.875 400,000,000 12,740,378
The column "Volume Issued (in USD)" refers to the total volume issued for each bond, and the column "Volume Executed (in USD)" refers to the volume actually used in the observations extracted from the database for the sample.
Appendix4: Cumulative Net Prots forEach Bond Market (Sovereign,
Corporate andHigh‑Yield) andAccording toEach Price Window
(10‑min, 30‑min, 60‑min and1‑min, 5‑min)
See Figs.5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15.
[Line chart: cumulative net profit (×10^7) vs. time (years); one curve per method: QFuzzy, AdaBoost-GA, SVM-GA, DLNN-GA, QGA, ANFIS-QGA, DRCNN, CNN-LSTM, GRU-CNN, QRNN]
Fig. 5 Cumulative net profits for sovereign bonds (10-min)
[Line chart: cumulative net profit (×10^6) vs. time (years); one curve per method]
Fig. 6 Cumulative net profits for corporate bonds (10-min)
[Line chart: cumulative net profit (×10^6) vs. time (years); one curve per method]
Fig. 7 Cumulative net profits for high-yield bonds (10-min)
[Line chart: cumulative net profit (×10^7) vs. time (years); one curve per method]
Fig. 8 Cumulative net profits for sovereign bonds (30-min)
[Line chart: cumulative net profit (×10^6) vs. time (years); one curve per method]
Fig. 9 Cumulative net profits for corporate bonds (30-min)
[Line chart: cumulative net profit (×10^6) vs. time (years); one curve per method]
Fig. 10 Cumulative net profits for high-yield bonds (30-min)
[Line chart: cumulative net profit (×10^7) vs. time (years); one curve per method]
Fig. 11 Cumulative net profits for sovereign bonds (60-min)
[Line chart: cumulative net profit (×10^6) vs. time (years); one curve per method]
Fig. 12 Cumulative net profits for corporate bonds (60-min)
[Line chart: cumulative net profit (×10^6) vs. time (years); one curve per method]
Fig. 13 Cumulative net profits for high-yield bonds (60-min)
[Line chart: cumulative net profit (×10^7) vs. time (years); one curve per method]
Fig. 14 Cumulative net profits for sovereign bonds (1-min)
Acknowledgements Not applicable.
Author Contributions All the authors contributed an equal effort to complete the manuscript.
Funding Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
This research was funded by the Universitat de Barcelona, under the grant UB-AE-AS017634.
Availability of Data and Material The datasets used and/or analysed during the current study are available
from the corresponding author on reasonable request.
Declarations
Conflict of interest The authors declare that they have no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License,
which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as
you give appropriate credit to the original author(s) and the source, provide a link to the Creative Com-
mons licence, and indicate if changes were made. The images or other third party material in this article
are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the
material. If material is not included in the article’s Creative Commons licence and your intended use is
not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission
directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References
Albagli, E., Ceballos, L., Claro, S., & Romero, D. (2019). Channels of US monetary policy spillovers to
international bond markets. Journal of Financial Economics, 134(2), 447–473.
Akyildirim, E., Bariviera, A. F., Nguyen, D. K., & Sensoy, A. (2022). Forecasting high-frequency stock
returns: A comparison of alternative methods. Annals of Operations Research, 313(2), 639–690.
[Line chart: cumulative net profit (×10^7) vs. time (years); one curve per method]
Fig. 15 Cumulative net profits for sovereign bonds (5-min)
Alaminos, D., Salas, M.B., & Fernández-Gámez, M.A. (2022a). Deep Neural Networks Methods for Esti-
mating Market Microstructure and Speculative Attacks Models: The case of Government Bond
Market. The Singapore Economic Review, Online Ready.
Alaminos, D., Salas, M. B., & Fernández-Gámez, M. A. (2022b). Forecasting stock market crashes via
real-time recession probabilities: a quantum computing approach. Fractals-Complex Geometry,
Patterns, and Scaling in Nature and Society, 30(5), 1–16.
Aldana, I.B. (2017). Investing in Distressed Debt in Europe. Globe Law and Business Limited. London,
United Kingdom.
Aldrich, E. (2013). High-frequency trading: A practical guide to algorithmic strategies and trading sys-
tems. John Wiley & Sons.
Aloud, M. E., & Alkhamees, N. (2021). Intelligent algorithmic trading strategy using reinforcement
learning and directional change. IEEE Access, 9, 114659–114671.
Bai, J., Fleming, M.J., & Horan, C. (2013). The microstructure of China’s government bond market. FRB
of New York Staff Report, (622).
Beirne, J., & Sugandi, E. (2023). Central bank asset purchase programs in emerging market economies.
Finance Research Letters, 54(C), 103769.
Benedetti, M., Lloyd, E., Sack, S., & Fiorentini, M. (2019). Parameterized quantum circuits as machine
learning models. Quantum Science and Technology, 4(4), 043001.
Berezin, F. A., & Shubin, M. (2012). The Schrödinger Equation (Vol. 66). Springer Science & Business
Media.
Bessembinder, H., Spatt, C., & Venkataraman, K. (2020). A survey of the microstructure of fixed-income
markets. Journal of Financial and Quantitative Analysis, 55(1), 1–45.
Biais, B., & Green, R. (2019). The microstructure of the bond market in the 20th century. Review of Economic Dynamics, 33, 250–271. https://doi.org/10.1016/j.red.2019.01.003
Bishop, C. M. (1995). Neural networks for pattern recognition. Oxford University Press.
Blanchard, O. (2023). Fiscal policy under low interest rates. The MIT Press.
Bodenhofer, U. (2003). Genetic algorithms: theory and applications. Lecture notes, Fuzzy Logic Labora-
torium Linz-Hagenberg, Winter, 2004.
Bossy, E., & Gigan, S. (2016). Photoacoustics with coherent light. Photoacoustics, 4(1), 22–35.
Cao, M., & Shang, F. (2010, March). Double chains quantum genetic algorithm with application in train-
ing of process neural networks. In 2010 Second International Workshop on Education Technology
and Computer Science (Vol. 1, pp. 19–22). IEEE.
Cao, Y., Guerreschi, G. G., & Aspuru-Guzik, A. (2017). Quantum neuron: an elementary building block for machine learning on quantum computers. arXiv preprint arXiv:1711.11240.
Campbell, R., Huisman, R., & Koedijk, K. (2001). Optimal portfolio selection in a value-at-risk frame-
work. Journal of Banking & Finance, 25(9), 1789–1804.
Caponetto, R., Fortuna, L., Fazzino, S., & Xibilia, M. (2003). Chaotic sequences to improve the perfor-
mance of evolutionary algorithms. IEEE Transactions on Evolutionary Computation, 7, 289–304.
Chakravarty, S., & Sarkar, A. (2003). Trading costs in three U.S. bond markets. Journal of Fixed Income,
13, 39–48.
Cheng, Y., Zheng, Z., Wang, J., Yang, L., & Wan, S. (2019). Attribute reduction based on genetic algorithm for the coevolution of meteorological data in the industrial internet of things. Wireless Communications and Mobile Computing, 2019.
Chih-Hung, W., Gwo-Hshiung, T., & Rong-Ho, L. (2009). A Novel hybrid genetic algorithm for kernel
function and parameter optimization in support vector regression. Expert Systems with Applica-
tions, 36, 4725–4735.
Chong, T., & Ng, W.-K. (2008). Technical analysis and the London stock exchange: testing the MACD
and RSI rules using the FT30. Applied Economics Letters, 15(14), 1111–1114.
Christiansen, C., & Ranaldo, A. (2007). Realized bond—stock correlation: Macroeconomic announce-
ment effects. Journal of Futures Markets: Futures, Options, and Other Derivative Products, 27(5),
439–469.
Clapham, B., Haferkorn, M., & Zimmermann, K. (2023). The impact of high-frequency trading on mod-
ern securities markets. Business & Information Systems Engineering, 65, 7–24.
Conkey, D. B., Brown, A. N., Caravaca-Aguirre, A. M., & Piestun, R. (2012). Genetic algorithm optimi-
zation for focusing through turbid media in noisy environments. Optics Express, 20(5), 4840–4849.
Darbha, M., & Dufour, A. (2013). Microstructure of the Euro-area government bond market. In H. K.
Baker & H. Kiymaz (Eds.), Market microstructure in emerging and developed markets. Robert W.
Kolb series in finance. John Wiley.
De Almeida, R. L., & Neves, R. F. (2022). Stock market prediction and portfolio composition using a
hybrid approach combined with self-adaptive evolutionary algorithm. Expert Systems with Appli-
cations, 204, 117478.
Deng, S., Huang, X., Qin, Z., Fu, Z., & Yang, T. (2021). A novel hybrid method for direction forecasting
and trading of Apple Futures. Applied Soft Computing, 110, 107734.
Deng, S., Zhu, Y., Huang, X., Duan, S., & Fu, Z. (2022). High-Frequency Direction Forecasting of the
Futures Market Using a Machine-Learning-Based Method. Future Internet, 14(6), 180.
Dixon, M. F., Polson, N. G., & Sokolov, V. O. (2018). Deep learning for spatio-temporal modeling:
Dynamic traffic flows and high frequency trading. Applied Stochastic Models in Business and
Industry, 35(3), 788–807.
Dodd, O., Frijns, B., Idriawan, I., & Pascual, R. (2023). US cross-listing and domestic high-frequency
trading: Evidence from Canadian stocks. Journal of Empirical Finance, 72, 301–320.
Dougherty, J., Kohavi, R., & Sahami, M. (1995). Supervised and unsupervised discretization of continu-
ous features. In Machine learning proceedings 1995 (pp. 194–202). Morgan Kaufmann.
Drezner, Z., & Misevičius, A. (2013). Enhancing the performance of hybrid genetic algorithms by differ-
ential improvement. Computers & Operations Research, 40(4), 1038–1046.
Elton, E. J., Martin, J. G., & Christopher, R. B. (1995). Fundamental economic variables, expected
returns and bond fund performance. Journal of Finance, 40, 1229–1256.
Fabozzi, F. J., & Fabozzi, F. A. (2021). Bond markets, analysis, and strategies (10th ed.). The MIT Press.
Friewald, N., & Nagler, F. (2019). Over-the-counter market frictions and yield spread changes. Journal of Finance, 74(6), 3217–3257. https://doi.org/10.1111/jofi.12827
Frino, A., Garcia, M., & Zhou, Z. (2020). Impact of algorithmic trading on speed of adjustment to new
information: Evidence from interest rate derivatives. Journal of Futures Markets, 40(5), 749–760.
Frino, A., Mollica, V., & Webb, R. I. (2014). The impact of co-location of securities exchanges' and traders' computer servers on market liquidity. Journal of Futures Markets, 34, 20–33.
Gao, X., Li, X., Zhao, B., Ji, W., Jing, X., & He, Y. (2019). Short-term electricity load forecasting model
based on EMD-GRU with feature selection. Energies, 12(6), 1140.
Ghoddusi, H., Creamer, G. G., & Rafizadeh, N. (2019). Machine learning in energy economics and finance: A review. Energy Economics, 81, 709–727. https://doi.org/10.1016/j.eneco.2019.05.006
Gilchrist, S., Yue, V., & Zakrajšek, E. (2019). US monetary policy and international bond markets. Jour-
nal of Money, Credit and Banking, 51(51), 127–161.
Glode, V., & Opp, C. (2020). Over-the-counter vs. limit-order markets: The role of traders' expertise. Review of Financial Studies, 33(2), 866–915. https://doi.org/10.1093/rfs/hhz061
Goldberg, D. E. (1990). A note on Boltzmann tournament selection for genetic algorithms and population-oriented simulated annealing. Complex Systems, 4, 44.
Goldblum, M., Schwarzschild, A., Patel, A., & Goldstein, T. (2021). Adversarial attacks on machine
learning systems for high-frequency trading. In Proceedings of the Second ACM International
Conference on AI in Finance (pp. 1–9).
Gomber, P., & Haferkorn. (2015). High frequency trading. Encyclopedia of information science and tech-
nology (3rd ed.). IGI Global.
Goodell, J. W., Kumar, S., Lim, W. M., & Pattnaik, D. (2021). Artificial intelligence and machine learn-
ing in finance: Identifying foundations, themes, and research clusters from bibliometric analysis.
Journal of Behavioral and Experimental Finance, 32, 100577.
Goudarzi, M., & Bazzana, F. (2023). Identification of high-frequency trading: A machine learning
approach. Research in International Business and Finance, 66, 102078.
Grinold, R. C., & Ronald, N. K. (1999). Active portfolio management: A quantitative approach for pro-
ducing superior returns and controlling risk. McGraw-Hill.
Grover, L. K. (2005). Fixed-point quantum search. Physical Review Letters, 95(15), 150501.
Guerreschi, G. G. (2019). Repeat-until-success circuits with fixed-point oblivious amplitude amplifica-
tion. Physical Review A, 99(2), 022306.
Gündüz, Y., Pelizzon, L., Schneider, M., & Subrahmanyam, M. G. (2023). Lighting up the dark: Liquidity in the German corporate bond market. The Journal of Fixed Income, 32(4).
Gupta, N., & Jalal, A. S. (2020). Integration of textual cues for fine-grained image captioning using deep
CNN and LSTM. Neural Computing and Applications, 32(24), 17899–17908.
Hansen, K. B., & Borch, C. (2022). Alternative data and sentiment analysis: Prospecting non-standard
data in machine learning-driven finance. Big Data & Society, 9(1), 20539517211070700.
He, K., Chen, Y., & Tso, G. K. (2017). Price forecasting in the precious metal market: A multivariate
EMD denoising approach. Resources Policy, 54, 9–24.
Hendershott, T., Jones, C. M., & Menkveld, A. J. (2011). Does algorithmic trading improve liquidity?
The Journal of Finance, 66(1), 1–33.
Ho, T. K. (1998). The random subspace method for constructing decision forests. IEEE Transactions on
Pattern Analysis and Machine Intelligence, 20(8), 832–844.
Holden, C. W., Liu, X., Mao, Y., & Nam, J. (2018). Price Discovery in the Stock, OTC Corporate Bond, and NYSE Corporate Bond Markets (June 9, 2018). Kelley School of Business Research Paper No. 18–53.
Houg, G., & Warga, A. (2000). An empirical study of bond market transactions. Financial Analysts Journal, 56(2), 32–46.
Huang, C. W., & Narayanan, S. S. (2017, July). Deep convolutional recurrent neural network with atten-
tion mechanism for robust speech emotion recognition. In 2017 IEEE international conference on
multimedia and expo (ICME) (pp. 583–588). IEEE.
Ibbotson, R. G., & Kaplan, P. D. (2000). Does asset allocation policy explain 40%, 90% or 100% of per-
formance? Financial Analysts Journal, 56, 26–33.
International Monetary Fund. (2001). Developing Government Bond Markets. The World Bank. ISBN 9780821349557, ISSN 2663-3744. https://doi.org/10.5089/9780821349557.069
Jarrow, R. (2019). Modeling fixed income securities and interest rate options (3rd ed.). Stanford Univer-
sity Press.
Jeddi, S., & Sharifian, S. (2020). A hybrid wavelet decomposer and GMDH-ELM ensemble model for
Network function virtualization workload forecasting in cloud computing. Applied Soft Computing,
88, 105940.
Kara, Y., Boyacioglu, M. A., & Baykan, Ö. K. (2011). Predicting direction of stock price index move-
ment using artificial neural networks and support vector machines: The sample of the Istanbul
Stock Exchange. Expert Systems with Applications, 38(5), 5311–5319.
Keller, J. M., Gray, M. R., & Givens, J. A. (1985). A fuzzy K-nearest neighbor algorithm. IEEE Transactions on Systems, Man, and Cybernetics, 15(4), 580–585.
Kim, B. S., & Kim, T. G. (2019). Cooperation of simulation and data model for performance analysis of
complex systems. International Journal of Simulation Modelling, 18(4), 608–619.
Kirkpatrick, S., Gelatt, C. D., Jr., & Vecchi, M. P. (1983). Optimization by simulated annealing. Science,
220, 671–680.
Knowles, J. D., & Corne, D. W. (2000). Approximating the nondominated front using the Pareto archived
evolution strategy. Evolutionary Computation, 8(2), 149–172.
Kohavi, R. (1995). A study of cross-validation and bootstrap for accuracy estimation and model selection.
In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI’95), vol. 2,
pp. 1137–1143.
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2017). ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60, 84–90.
Lahmiri, S., & Bekiros, S. (2021). Deep Learning Forecasting in Cryptocurrency High-Frequency Trad-
ing. Cognitive Computation, 13, 485–487.
LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document
recognition. Proceedings of the IEEE, 86(11), 2278–2324.
Lewis, J., Hart, E., & Ritchie, G. (1998). A comparison of dominance mechanisms and simple mutation on non-stationary problems. Parallel Problem Solving from Nature (PPSN V), 1498, 139–148.
Li, S., Deng, M., Lee, J., Sinha, A., & Barbastathis, G. (2018). Imaging through glass diffusers using
densely connected convolutional networks. Optica, 5(7), 803–813.
Li, J., Hao, J., Sun, X., & Feng, Q. (2021). Forecasting China’s sovereign CDS with a decomposition
reconstruction strategy. Applied Soft Computing, 105, 107291.
Lin, Z., Chen, Y., Zhao, X., & Wang, G. (2013). Spectral-spatial classification of hyperspectral image
using autoencoders. In 2013 9th International Conference on Information, Communications & Sig-
nal Processing (pp. 1–5).
Lucas, A., Iliadis, M., Molina, R., & Katsaggelos, A. K. (2018). Using deep neural networks for inverse
problems in imaging: Beyond analytical methods. IEEE Signal Processing Magazine, 35(1),
20–36.
Ma, M., & Mao, Z. (2019, June). Deep recurrent convolutional neural network for remaining useful
life prediction. In 2019 IEEE international conference on prognostics and health management
(ICPHM) (pp. 1–4). IEEE.
High‑Frequency Trading inBond Returns: AComparison Across…
Mahanti, S., Nashikkar, A., Subrahmanyam, M. G., Chacko, G., & Mallik, G. (2008). Latent liquidity: A
new measure of liquidity with an application to corporate bonds. Journal of Financial Economics,
88, 272–298.
McCann, M. T., Jin, K. H., & Unser, M. (2017). Convolutional neural networks for inverse problems in
imaging: A review. IEEE Signal Processing Magazine, 34(6), 85–95.
Milana, C., & Ashta, A. (2021). Artificial intelligence techniques in finance and financial markets: A sur-
vey of the literature. Strategic Change, 30(3), 189–209.
Nauck, D., Klawonn, F., & Kruse, R. (1997). Foundations of neuro-fuzzy systems. Wiley.
Nauck, D., & Kruse, R. (1997). New learning strategies for NEFCLASS. In Proc. Seventh International Fuzzy Systems Association World Congress (IFSA'97), Vol. IV (pp. 50–55). Prague.
Neklyudov, A. (2019). Bid–ask spreads and the over-the-counter interdealer markets: Core and peripheral dealers. Review of Economic Dynamics, 33, 57–84. https://doi.org/10.1016/j.red.2019.04.005
Nelson, C., & Siegel, A. F. (1987). Parsimonious modeling of yield curves. Journal of Business, 60,
473–489.
Nielsen, M. A., & Chuang, I. L. (2001). Quantum computation and quantum information. Physics Today,
54(2), 60.
Norouzi, M., Collins, M., Johnson, M. A., Fleet, D. J., & Kohli, P. (2015). Efficient non-greedy optimiza-
tion of decision trees. Advances in Neural Information Processing Systems, 28.
Nowak, S., Andritzky, J. R., Jobst, A., & Tamirisa, N. T. (2009). Macroeconomic fundamentals, price discovery and volatility dynamics in emerging markets. International Monetary Fund Working Paper No. 09/147.
Nunes, M., Gerding, E., McGroarty, F., & Niranjan, M. (2018). Artificial neural networks in fixed income
markets for yield curve forecasting. Available at SSRN 3144622.
Nunes, M., Gerding, E., McGroarty, F., & Niranjan, M. (2019). A comparison of multitask and single
task learning with artificial neural networks for yield curve forecasting. Expert Systems with Appli-
cations, 119, 362–375.
Papadamou, S., Fassas, A. P., Kenourgios, D., & Dimitriou, D. (2021). Flight-to-quality between global
stock and bond markets in the COVID era. Finance Research Letters, 38(C), 101852.
Pai, P.-F., Lin, C.-S., Hong, W.-C., & Chen, C.-T. (2006). A hybrid support vector machine regression for exchange rate prediction. International Journal of Information and Management Sciences, 17, 19–32.
Qin, L., Yu, N., & Zhao, D. (2018). Applying the convolutional neural network deep learning technology
to behavioural recognition in intelligent video. Tehnički Vjesnik, 25(2), 528–535.
Quinlan, J. R. (1993). C4.5: Programs for machine learning. Morgan Kaufmann Publishers Inc.
Ramlall, I. (2016). Applied technical analysis for advanced learners and practitioners. Emerald
Publishing.
Ranaldo, A., Schaffner, P., & Vasios, M. (2019). Regulatory effects on short-term interest rates. Journal
of Financial Economics, 141(2), 750–770.
Refaeilzadeh, P., Tang, L., & Liu, H. (2009). Cross-validation. In Encyclopedia of Database Systems. https://doi.org/10.1007/978-0-387-39940-9_565
Rundo, F. (2019). Deep LSTM with reinforcement learning layer for financial trend prediction in FX high
frequency trading systems. Applied Sciences, 9(20), 4460.
Schestag, R., Schuster, P., & Uhrig-Homburg, M. (2016). Measuring liquidity in bond markets. Review of
Financial Studies, 29, 1170–1219.
Schilling, F. (2016). The effect of batch normalization on deep convolutional neural networks.
Schneider, M. (2018). Market Microstructure, Price Impact and Liquidity in Fixed Income Markets. PhD
Thesis, Scuola Normale Superiore Pisa, Italy.
Schrödinger, E. (1935). The present status of quantum mechanics. Die Naturwissenschaften, 23(48),
1–26.
Securities and Exchange Commission. (2014). Equity market structure literature review—Part II: High frequency trading. Retrieved from https://www.sec.gov/marketstructure/research/hft_lit_review_march_2014.pdf
Sezer, O. B., & Ozbayoglu, A. M. (2018). Algorithmic financial trading with deep convolutional neural
networks: Time series to image conversion approach. Applied Soft Computing, 70, 525–538.
Singh, P., Dhiman, G., & Kaur, A. (2018). A quantum approach for time series data based on graph and
Schrödinger equations methods. Modern Physics Letters A, 33(35), 1850208.
Singh, P., & Huang, Y. P. (2019). A new hybrid time series forecasting model based on the neutrosophic
set and quantum optimization algorithm. Computers in Industry, 111, 121–139.
Sivaraj, R., & Ravichandran, T. (2011). A review of selection methods in genetic algorithm. International
Journal of Engineering Science and Technology, 3, 3792–3797.
Slade, S. (2017). Artificial intelligence applications on wall street. Taylor & Francis.
Szegedy, C., Ioffe, S., Vanhoucke, V., & Alemi, A. (2017). Inception-v4, inception-resnet and the impact
of residual connections on learning. In Proceedings of the AAAI Conference on Artificial Intel-
ligence (Vol. 31, No. 1).
Ta, V. D., Liu, C. M., & Tadesse, D. A. (2020). Portfolio optimization-based stock prediction using long-
short term memory network in quantitative trading. Applied Sciences, 10(2), 437.
Tacchino, F., Macchiavello, C., Gerace, D., & Bajoni, D. (2019). An artificial neuron implemented on an
actual quantum processor. Npj Quantum Information, 5(1), 1–8.
Tang, Y. (2013). Deep learning using linear support vector machines. arXiv preprint arXiv:1306.0239.
Vellekoop, I. M., & Mosk, A. P. (2008). Phase control algorithms for focusing light through turbid media.
Optics Communications, 281(11), 3071–3080.
Vukovic, D., Vyklyuk, Y., Matsiuk, N., & Maiti, M. (2020). Neural network forecasting in prediction
Sharpe ratio: Evidence from EU debt market. Physica a: Statistical Mechanics and Its Applica-
tions, 542, 123331.
Wang, J., Gao, L., Zhang, H., & Xu, J. (2011, July). Adaboost with SVM-based classifier for the classifi-
cation of brain motor imagery tasks. In International Conference on Universal Access in Human-
Computer Interaction (pp. 629–634). Springer, Berlin, Heidelberg.
Wang, Y., Ma, F., Wei, Y., & Wu, C. (2016). Forecasting realized volatility in a changing world: A
dynamic model averaging approach. Journal of Banking & Finance, 64, 136–149.
Wan, K. H., Dahlsten, O., Kristjánsson, H., Gardner, R., & Kim, M. S. (2017). Quantum generalisation of
feedforward neural networks. Npj Quantum Information, 3(1), 1–8.
Wei, Z., & Chen, X. (2018). Deep-learning schemes for full-wave nonlinear inverse scattering problems.
IEEE Transactions on Geoscience and Remote Sensing, 57(4), 1849–1860.
Zhao, Y., Li, J., & Yu, L. (2017). A deep learning ensemble approach for crude oil price forecasting.
Energy Economics, 66, 9–16.
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps
and institutional affiliations.