Armageddon of Financial Markets
Is the U.S. equity market eventually going to collapse?
This draft: 23. January 2023
Klaus Grobys1
Abstract
Fitting Dow Jones 30 index data for the 1790−1999 period to a log-periodic power-law singularity (LPPLS) model, the seminal paper by Johansen and Sornette (2001) was the first to show that the U.S. equity growth rate is accelerating such that the market is growing as a power law towards a spontaneous singularity. Their model suggests that the U.S. equity market reaches the critical time in the year 2052±10 years, signaling an abrupt transition to a new regime. This study re-examines this important issue using (i) a novel approach to calibrate the LPPLS model and (ii) a different data set accounting for more than 20 years of additional data. The extended data account for the dot-com bubble burst (2000), the global financial crisis period (2008−2009), the COVID-19 crisis (2020−2022), and the ongoing Russian-Ukrainian war (starting in 2022), all events having severe consequences for the global economy. The calibrated LPPLS model suggests that the U.S. equity market reaches a singularity condition as early as June 2050, which may manifest itself in an apocalyptic collapse never before evidenced in the history of financial markets.
JEL Classification: C22, G12, G13, G14, O10.
Key Words: Armageddon; Financial markets; Log-periodic power laws; Power laws; S&P 500; Singularity.
1K. Grobys (corresponding author)
Finance Research Group, School of Accounting and Finance, University of Vaasa, Wolffintie 34, 65200 Vaasa, Finland;
Innovation and Entrepreneurship (InnoLab), University of Vaasa, Wolffintie 34, 65200 Vaasa, Finland
E-mail: klaus.grobys@uva.fi
“The world has gone mad and the system is broken.”
(Ray Dalio, hedge fund manager and founder of Bridgewater Associates)
1. Introduction
A well-established fact, documented in the physics literature, is that faster-than-exponential growth
is not sustainable in the long-term and will eventually result in what physicists typically term a “finite-
time singularity.”1 In a series of research articles, the well-known physicist Didier Sornette and his
research collaborators provided extensive evidence that the recurring build-up of stock market bubbles manifests itself as an overall super-exponential power-law acceleration in the price, decorated by log-periodic precursors, a concept which is related to fractals.2 Building the theory for his model
on complex systems, Sornette (2017) argues that
Financial markets are not only systems with extreme events. Financial markets
constitute one among many other systems exhibiting a complex organization and
dynamics with similar behavior. Systems with large number of mutually interacting
parts, often open to their environment, self-organize their internal structure and their
dynamics with novel and sometimes macroscopic (“emergent”) properties. The
complex system approach, which involves “seeing” interconnections and relationships,
that is, the whole picture as well as the component parts, is nowadays pervasive in
modern control of engineering devices and business management. (p. 15)
Viewed through the lens of complex self-organizing systems, stock market crashes are caused by the slow build-up of long-range correlations resulting in a global cooperative behavior of the market and eventually in a collapse (viz., finite-time singularity) within a short, critical
time interval. An empirical manifestation of this issue is that drawdowns are governed by power-law
processes (Filimonov and Sornette, 2015). In their 2001 paper, Johansen and Sornette propose a log-periodic power-law singularity (LPPLS) model to estimate finite-time singularities in the dynamics of
the world population and some financial indices. The authors argue that the growth rates of both the
world population and financial indices are compatible with a spontaneous singularity occurring at the
same critical time 2052±10 signaling an abrupt transition into a new regime. It is interesting to note
that Johansen and Sornette’s (2001) findings are in line with Godley’s (2012) prediction of a period
of consolidation due to the unsustainable growth of the U.S. economy in the last decade of the twentieth century, based on investigating fiscal policy, foreign trade, and private income expenditure
1 An interesting discussion on this subject is, for instance, provided in West (2017, chapter 10).
2 Sornette (2017) provides a detailed overview on the relevant literature.
and borrowing.3 The arrival of a finite-time singularity in the near future may have an enormous
impact on the global financial ecosystem.
Sornette (2017, p. 28) argues that the Dow Jones index’s faster-than-exponential growth is
manifested in an acceleration of the growth rates: Whereas the growth rate of the return of the Dow
Jones Index was on average about 3% per year in the 1780 to 1930 period, the growth rate of return
shifted to an average of about 7% per year in the 1930 to 2000 period.4 Since the arrival of a finite-time singularity is the result of an accelerated growth rate that is not sustainable, the question arises as to what has happened in the U.S. financial system since the publication of Johansen and Sornette’s (2001) study. Did the acceleration of the growth rate of the U.S. equity market slow down in the last 20 years, which could perhaps avert the risk of a finite-time singularity, that is, a severe collapse of the financial ecosystem? In the wake of the global financial crisis (2008−2009),
the Federal Reserve Bank in the U.S. made use of considerable interventions in order to avert a total
collapse of the financial ecosystem. Unfortunately, the monetary policy turned out to be a double-
edged sword.
In this regard, Ray Dalio, founder of the company Bridgewater Associates managing the world’s
largest hedge fund, termed these interventions of the central banks “money printing” which inevitably
resulted in dramatically “inflated stock prices.”5 In fact, in the financial crisis period, the Dow Jones
Index reached its local minimum in February 2009 with a quotation of 9,881.04. In December 2019, however, the index quotation reached 32,961.45; that is, in one decade the average return of the Dow Jones index was more than 20% per year, which strongly suggests an additional acceleration in growth rates. In this regard, it is noteworthy that central banks continued “intervening” by means of printing
money also to support the financially stricken global economy in the wake of the outbreak of the
COVID-19 pandemic (2020−2022). Indeed, the government of the United States−followed by many
other governments of western countries−has printed money in an attempt to support deficit spending
over many years. In this regard, Ray Dalio stated, “Large government deficits exist and will almost certainly increase substantially, which will require huge amounts of more debt to be sold by governments–amounts that cannot naturally be absorbed without driving up interest rates at a time when an interest rate rise would be devastating for markets and economies because the world is so leveraged long.” Unsurprisingly, in a CNBC interview that took place in November 2019, he summed up his view: “The world has gone mad and the [monetary] system is broken.”6
3 Similarly, Kapitza (1996) documented superexponential acceleration of human activity which is consistent with a power law singularity, as mentioned in Sornette (2017, p. 378).
4 See Figure 2.1 in Sornette (2017, p. 28).
5 In his lecture entitled “The Economic Machine”, he details the mechanism of how stock prices can be inflated due to monetary policies from the central bank (see https://www.youtube.com/watch?v=PHe0bXAIuk0). Specifically, he argues that the central bank is forced to print money when having reached a regime of zero interest rates. Unlike cutting spending, debt reduction, and wealth redistribution, printing money is in general inflationary and stimulative. However, the central bank uses the printed money to buy financial assets and government bonds, which drives up asset prices, making people more creditworthy.
Due to the recent developments in the financial ecosystem involving the global financial crisis period (2008−2009), the COVID-19 crisis (2020−2022), and the ongoing Russian-Ukrainian war (starting in 2022), all events having dramatic impacts on the global economy, it is important to re-assess the relevance of Johansen and Sornette’s (2001) “apocalyptic” view on the future of the financial sphere. Hence, the purpose of this study is to re-examine Johansen and Sornette’s (2001)
study by taking a coarse-grained perspective of the U.S. equity market to analyze whether the LPPLS
model provides evidence for an expected finite-time singularity in the year 2052. While Johansen and
Sornette (2001) calibrate the LPPLS model in their main analysis for monthly Dow Jones Index data
using a sample from 1790 to 2000, this current research uses monthly data on the S&P 500 for the
1870 to 2022 period obtained from Robert Shiller’s online library; in this way, the most recent challenging
economic periods such as the global financial crisis, which is perhaps the most severe financial crisis
evidenced in recent financial history, are accounted for in the sample. The LPPLS model is calibrated
using what we term a “worst-case-scenario-model” as first-order approximation for the first stage.
The model is tested by implementing a standard augmented Dickey-Fuller (ADF) test for the residuals
(Lin, Ren and Sornette, 2014). The accuracy of the proposed approach used for the first-order
approximation model is explored by calibrating it on daily data for predicting the well-known October
1987 crash and the October 1929 crash.
This study contributes to the literature in various important ways. First, as already mentioned
earlier, given the extraordinary economic challenges in the last two decades (e.g., dot.com bubble
burst (2000), global financial crisis (2008−2009), COVID-19 crisis (2020−2022), and ongoing
Russian-Ukrainian war (starting in 2022)), it is important to re-assess the relevance of a possible
regime switch in the financial ecosystems, as predicted first in Johansen and Sornette’s (2001) study.
As pointed out in Sornette (2017, p. 300), the prediction capability of the LPPLS model is more
accurate as we approach the estimated critical time. It is important to note that Johansen and
Sornette’s (2001) research used data ending in December 1999. Since the predicted critical time in their study is 2052, more than 40% of the waiting time (viz., the time between the end of the original sample and the predicted finite-time singularity) has already elapsed. Moreover, while the data used
in Johansen and Sornette’s (2001) research do not account for the extraordinarily challenging economic periods occurring in the most recent two decades, this is the first study examining the potential arrival of a finite-time singularity using S&P 500 data for an extended sample period comprising those periods of enormous economic stress.7
6 See https://www.cnbc.com/2019/11/08/bridgewaters-ray-dalio-on-economy-worlds-gone-mad-system-is-broken.html.
Next, re-assessing whether earlier documented results are still
relevant ex-post publication is an important issue per se. In this regard, Hou, Xue, and Zhang (2020), who attempted to replicate 452 cross-sectional asset pricing anomalies, concluded that at least 65% of
those anomalies fail to replicate. The authors argue that “The crux is that unlike natural sciences,
economics, finance, and accounting are mostly observational in nature. As such, it is critical to
evaluate the reliability of published results against ‘similar, but not identical,’ specifications.” (Hou,
Xue, and Zhang, 2020, p. 2022). Interestingly, the authors also found that even for replicated
anomalies, their economic magnitudes are much smaller than documented in the original studies. The
current research is the first that provides a sound replication of Johansen and Sornette’s (2001)
research which meets the requirements for a scientific replication in line with Hamermesh (2007)
because we employ (i) a different population sample period, and (ii) a similar but not identical
approach to estimate the LPPLS model parameters.
Moreover, Sornette (2017, p. 334) points out that his proposed approach to calibrate LPPLS models serves as “first-order approximations, and novel improved methods have been developed that are not published.” Indeed, LPPLS models are not standard techniques taught at business schools.
Perhaps the reason for this issue is that these models were originally developed in the physics
literature (viz., mathematical and statistical physics of bifurcations and phase transitions), and as a
consequence, could be difficult to grasp for most finance scholars lacking the required mathematical
background. Hence, a gap in the literature is to provide an intuitive LPPLS model set-up which can
be implemented using some standard software. The current research remedies this gap by proposing
a very simple yet powerful three-stage estimation procedure for estimating the LPPLS model, where
the first stage corresponds to what we term a “worst-case-scenario-model.” In providing a simple approach to calibrate LPPLS models, the current research also lives up to the argument raised by Mandelbrot (2008, p. 125): “Contrary to popular opinion, mathematics is about simplifying life, not
complicating it.” Finally, the current research adds to the literature on bubble diagnoses and
predictions using quantitative methodologies. Jiang, Zhou, Sornette, Woodard, Bastiaensen and
Cauwels (2010), and Sornette (2017) provide detailed literature reviews on this topic.
7 Note also that the S&P 500 index is a considerably broader index than the Dow Jones index and consists of firms often recognized as “industry leaders.” In this regard, Gnabo, Lyudmyla and Lahaye (2014) highlight that the S&P 500 index is widely considered to be an important gauge of the U.S. equity market and is prominently quoted in stock markets around the world.
Using monthly data on the S&P 500, the results of the current research show that the LPPLS model
predicts the arrival of a finite-time singularity in 2050. A 95% confidence interval for the critical time
is estimated between December 2043 and December 2050. This figure is close to the critical time
derived in Johansen and Sornette (2001) corresponding to 2052±10 years. However, it is noteworthy that the point estimate for the critical time derived in Johansen and Sornette’s (2001) original study is
outside the upper bound of the confidence interval derived in the current research. This means that
adding more recent data, the estimated arrival of a finite-time singularity is statistically significantly earlier than previously expected from Johansen and Sornette (2001). The reason for this issue could be that the extreme monetary policies undertaken by the central bank inflated stock prices in an unforeseen manner which, as a consequence, accelerated the arrival of the expected critical time.
Finally, robustness checks show that the proposed approach to estimate the LPPLS model, based on the proposed “worst-case-scenario-model”, is capable of successfully predicting both well-known crashes, the October 1987 crash and the October 1929 crash.
The paper is organized as follows. The next section provides an overview of the data. The third section presents the empirical analysis, and the last section concludes.
2. Data
Data for the main analysis of this study were downloaded from Robert Shiller’s data library which is
available for free at econ.yale.edu/~shiller/data.htm. Specifically, monthly data on the S&P 500 were
downloaded from January 1871 to November 2022. Moreover, for robustness checks additional daily
data on the S&P 500 were obtained for the period January 2, 1980 until December 31, 1986 from
finance.yahoo.com. Finally, daily data on the Dow Jones index for the period June 1, 1921 until
December 31, 1928 were retrieved from the website https://stooq.com/q/d/?s=%5Edji.
3. Methodology
3.1. Implementing the LPPLS model using a novel three-stage estimation approach.
According to Sornette (2017), a simple power-law model for financial log-prices is given by

ln[p(t)] = A + B(t_c − t)^β,    (1)

where ln[p(t)] is the logarithm of the S&P 500 at time t, t_c is the critical time, A is the expected value of the S&P 500 in logarithm, B measures the exposure to faster-than-exponential growth, and β is the power-law exponent controlling faster-than-exponential price growth. Note that for this model specification A > 0, B < 0, and 0.1 ≤ β ≤ 0.9 must hold. Sornette (2017) points out that the simple
power-law model of equation (1) needs to be expanded by accounting for periodic oscillations:

ln[p(t)] = A + B(t_c − t)^β [1 + C cos(ω ln(t_c − t) + φ)],    (2)

where C measures the exposure responsible for the periodic oscillations, ω is the angular frequency of the log-periodic oscillations during the bubble formation, φ is the phase parameter, and all other notation is as before. Whereas Sornette (2017) documents that φ cannot be meaningfully restricted, this study follows earlier research in requiring |C| < 1 and imposing the constraint 5 ≤ ω ≤ 15. The implementation of the log-periodic power-law singularity (LPPLS) model of equation (2) requires several model parameters Φ = (A, B, t_c, β, C, ω, φ) to be estimated using a highly non-linear model.
Using any non-linear solver to find optimal values for all parameters at once will inevitably generate spurious fits, where different initially chosen values for some parameters in the parameter vector Φ will result in different values for some other parameters. To address this issue, calibration of the model is typically based on some combination of finding suggested solutions for parameters, freezing some of the parameters, and using some non-linear solver to find solutions for the free parameters. Note that some search algorithms, like the often-used Levenberg–Marquardt algorithm, are sensitive to the selected initial values and may converge to local instead of global minima when the initial solutions are less than optimal.
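Since the two equations above anchor everything that follows, it may help to see them as code. The following minimal Python sketch (illustrative only; the paper itself works with Microsoft Excel's solver, and the function names are ours) implements equations (1) and (2) with the notation used above:

```python
import numpy as np

def power_law(t, A, B, tc, beta):
    # Equation (1): ln p(t) = A + B * (tc - t)**beta, valid for t < tc
    return A + B * (tc - t) ** beta

def lppls(t, A, B, tc, beta, C, omega, phi):
    # Equation (2): the power law of equation (1) decorated with
    # log-periodic oscillations of angular frequency omega and phase phi
    dt = tc - t
    return A + B * dt ** beta * (1.0 + C * np.cos(omega * np.log(dt) + phi))
```

Setting C = 0 reduces the LPPLS specification of equation (2) to the pure power law of equation (1), which is the property the staged calibration described below exploits.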
This study proposes an intuitively plausible approach to choosing initial values for the parameters when employing non-linear solvers such as Microsoft Excel’s solver.8 Since A is the expected value of the S&P 500 in logarithm, setting A = ln[p(T)] in association with t_c = T + 1 means that the last price quotation of the S&P 500 is the expected price for the critical time t_c and that the critical event is suggested to occur in the subsequent time period. Next, Sornette (2017, p. 335) points out that the exponent β needs to be between 0 and 1 for the price to accelerate and to remain finite. In practical applications, the more stringent criterion 0.2 ≤ β ≤ 0.8 has been found useful to avoid the pathologies associated with the endpoints 0 and 1 of the interval. Hence, treating β as “fixed”, where β ∈ {0.2, 0.4, 0.6, 0.8}, and setting B = −1, equation (1) simplifies to:9
ln[p(t)] = ln[p(T)] − (t_c − t)^β.    (3)

Panel A of Table 1 reports the corresponding initial model specifications. Note that ln[p(T)] = 8.23 corresponds to the natural logarithm of the last price quotation of the S&P 500 in the sample. Since we use monthly data, one unit of time corresponds to 1/12 and, because the sample ends at time T = 151.9167, it follows that T + 1 = 152. We see that using β = 0.4 generates the minimum sum of squared residuals (SSR). Note that the initial model specifications can be regarded as “worst-case scenarios” because these models suggest that the finite-time singularity occurs “immediately”, implying that there are no further precautionary actions possible. Next, when optimizing the first model, the following constraints are accounted for:

t_c ≥ T + 1,
0.1 ≤ β ≤ 0.9.

The first constraint allows for the possibility that the finite-time singularity arrives at some unknown time point in the future, whereas the second constraint is needed for the price to accelerate and to remain finite. Employing Microsoft Excel’s non-linear solver to compute optimal values for the parameter vector Φ* = (A*, B*, t_c*, β*), the optimized values for the first model’s (model 1) parameters are reported in Panel B of Table 1. We see that model specification 2 of model 1 provides parameter values generating the minimum SSR with Φ* = (33.24, −19.45, 152, 0.10). It is interesting to note that, regardless of the model specification, all optimized models suggest that the finite-time singularity occurs at the same time, that is, t_c* = T + 1. Figure 1 plots the optimized model 1 along with the natural logarithm of the S&P 500 denoted as ln[p(t)]. Visual inspection of Figure 1 shows that this model properly captures the faster-than-exponential growth property of the S&P 500.
Using Φ* as initial values for model 2 in equation (2) in association with C = φ = 0, model 2 is then optimized for varying values of ω, that is, ω ∈ {5, 6, …, 14, 15}.10 The rationale for this approach rests upon the argument in Sornette (2017, p. 336) documenting that in practice 5 < ω < 15 is often used. Note that Sornette (2017, p. 232) derives the relationship λ = exp(2π/ω) and argues “For the October 1987 crash, we find λ ≈ 1.5−1.7 (this value is remarkably universal and is found approximately the same for other crashes…).” Despite this empirical evidence, it is interesting to note that the optimal values for ω at times exhibit considerable variation, depending perhaps, for instance, on the distance between the end of the used data sample and the expected critical time t_c.11 Therefore, and because 1.5 < λ < 3.5 corresponds to 5 < ω < 15, it seems reasonable to examine various initial values for ω in the process of optimizing model 2. Furthermore, when optimizing model 2, the following constraints are accounted for:

t_c ≥ T + 1,
0.1 ≤ β ≤ 0.9,
5 ≤ ω ≤ 15.

8 It is important to note that the methodological discussion that follows here has the purpose of providing an intuitive “first-order” approximation which is applicable using some standard software.
9 Note that the parameter B is typically considerably smaller than A in its economic magnitude. As an example, Sornette (2017, p. 345) reports in Table 9.9 the results from attempting to predict the stock market crash occurring in August 1998 using S&P 500 data. Using 1991 as first date in association with last dates varying between 97.444 and 97.8904 in the model specifications, the corresponding parameters for A range between 6.06 and 6.51, whereas the corresponding parameters for B range between -0.338 and -0.584. Hence, using B = −1, as suggested here, is perhaps a somewhat arbitrary, yet not unreasonable choice for initial values.
10 Note that this approach is similar to the one suggested by Liberatore (2010), who proposes to fit a simple model with C = 0 and β = 1 in the first step and then to use the obtained optimized parameter values as seed in the second estimation step; that is, Liberatore (2010) operates with optimizing the function ln[p(t)] = A − (t_c − t) in the first step.
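The stage-1 logic can be sketched without Excel's solver by exploiting the fact that, for fixed (t_c, β), equation (1) is linear in (A, B) and can therefore be solved by ordinary least squares. The following is a hedged illustration of that idea, not the paper's procedure; the grids are illustrative, and the constraint t_c ≥ T + 1 is enforced simply by choosing the t_c grid accordingly:

```python
import numpy as np

def stage1_fit(t, lnp, tc_grid, beta_grid):
    """Stage-1 sketch: for each candidate (tc, beta), equation (1),
    ln p(t) = A + B*(tc - t)**beta, is linear in (A, B), so (A, B) is
    obtained by ordinary least squares; the (tc, beta) pair giving the
    minimum SSR is kept. Grids should respect tc >= T + 1 and
    0.1 <= beta <= 0.9."""
    best = None
    for tc in tc_grid:
        for beta in beta_grid:
            # design matrix for the linear regression lnp ~ A + B*(tc - t)**beta
            X = np.column_stack([np.ones_like(t), (tc - t) ** beta])
            coef, *_ = np.linalg.lstsq(X, lnp, rcond=None)
            ssr = np.sum((lnp - X @ coef) ** 2)
            if best is None or ssr < best[0]:
                best = (ssr, coef[0], coef[1], tc, beta)
    ssr, A, B, tc, beta = best
    return A, B, tc, beta, ssr
```

On data generated exactly from equation (1), the grid point matching the true (t_c, β) yields a zero SSR and exact recovery of (A, B), which makes the sketch easy to sanity-check.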
Note that the parameter φ remains unconstrained, which is in line with Sornette (2017, p. 336) who points out that the phase parameter φ cannot be meaningfully restricted. Again, Microsoft Excel’s non-linear solver is used to compute optimal values for the parameter vector Φ** = (A**, B**, t_c**, β**, C**, ω**, φ**). Given the input parameter vector Φ_0 = (Φ*, C, ω, φ) = (A*, B*, t_c*, β*, C, ω, φ), using A* = 33.24, B* = −19.45, t_c* = 152, and β* = 0.1 obtained from the optimization in the previous step, as well as choosing the initial values C = φ = 0, only ω is varied in this step due to the reasons discussed previously. Using successively ω ∈ {5, 6, …, 14, 15}, Panel A of Table 2 reports the input data vectors, whereas Panel B of Table 2 reports the optimized parameter vectors Φ** for each run. We observe from Panel A of Table 2 that the input parametrization given by Φ_0 = (A*, B*, t_c*, β*, C, ω, φ) = (33.24, −19.45, 152, 0.1, 0, 6, 0) results in the optimized parametrization for model 2 giving the least SSR (viz., SSR = 140.32), where A** = 33.08, B** = −17.65, t_c** = 179.53, β** = 0.13, C** = 0.01, ω** = 5.00, and φ** = 2.04.
Strikingly, the optimal model 2 suggests that the finite-time singularity occurs at t_c** = 179.53, which corresponds in the notation used here to 331 months in the future. Given that the data sample ends in November 2022, the critical time is reached in June 2050, which is very close to Johansen and Sornette’s (2001) prediction using different data and a different sample which already ends in December 1999. Figure 2 plots the optimized model 2 along with the natural logarithm of the S&P 500 denoted as ln[p(t)], whereas Figure 3 plots the residuals covering the in-sample time window (viz., January 1871–November 2022). Comparing Figure 2 in the current research with Figure 10.8 in Sornette (2017, p. 373) shows that the models are virtually indistinguishable from each other despite using (i) different data, (ii) different samples, and (iii) different estimation approaches. Next, visual inspection of Figure 3 shows that the residuals do not appear to be integrated despite being highly autocorrelated.
11 As an example, Sornette (2017, p. 331) reports in Tables 9.2 and 9.3 the results from attempting to predict the well-known stock market crash occurring in October 1987 using S&P 500 data. Using varying end-dates in the model specifications (86.88−87.65) gives varying values for the optimal ω, ranging from 4.1 (for 87.04 as end-date) to 12.3 (for 87.52 as end-date).
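The same linearization trick extends to stage 2: for fixed (t_c, β, ω, φ), equation (2) can be rewritten as ln p(t) = A + B·f(t) + D·g(t) with f = (t_c − t)^β, g = f·cos(ω ln(t_c − t) + φ), and D = B·C, which is linear in (A, B, D). The sketch below is again a hedged illustration (a grid search rather than the paper's Excel-solver runs; all grids are illustrative):

```python
import numpy as np
from itertools import product

def stage2_fit(t, lnp, tc_grid, beta_grid, omega_grid, phi_grid):
    """Stage-2 sketch: profile out the linear parameters (A, B, D) of the
    rewritten equation (2) by ordinary least squares for each candidate
    (tc, beta, omega, phi); keep the combination with the minimum SSR
    and recover C = D / B."""
    best = None
    for tc, beta, omega, phi in product(tc_grid, beta_grid, omega_grid, phi_grid):
        dt = tc - t
        f = dt ** beta
        g = f * np.cos(omega * np.log(dt) + phi)
        X = np.column_stack([np.ones_like(t), f, g])
        coef, *_ = np.linalg.lstsq(X, lnp, rcond=None)
        ssr = np.sum((lnp - X @ coef) ** 2)
        if best is None or ssr < best[0]:
            A, B, D = coef
            best = (ssr, (A, B, tc, beta, D / B, omega, phi))
    return best  # (SSR, (A, B, tc, beta, C, omega, phi))
```

As with stage 1, feeding the function data generated exactly from equation (2) with the true parameter combination included in the grids recovers that combination with a near-zero SSR.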
3.2. Residuals tests.
Analyzing the residuals of the LPPLS model is an important issue for evaluating the model fit to the data. Non-stationary residuals may indicate a spurious regression and make statistical inference unreliable. Lin, Ren and Sornette (2014) argue that the LPPLS model residuals should follow a stationary mean-reverting process. The stationarity of the residual series is tested using the Augmented Dickey-Fuller (ADF) test, where calibrations with the 99% confidence level of stationarity of the residuals are considered “successful”, that is, the log-periodic power-law signature is considered statistically significant (e.g., Jiang et al., 2010). Implementing the ADF test requires running the following test regression:

Δu_t = δ_0 + δ_1 t + δ_2 u_{t−1} + γ_1 Δu_{t−1} + … + γ_p Δu_{t−p} + ε_t,    (4)

where δ_0 = δ_1 = 0 for the model specification where no deterministic terms are accounted for, and δ_1 = 0 for the model specification accounting only for a constant term. Moreover, the parameters γ_1, …, γ_p are related to the autoregressive part of this model. Note that the model using δ_0 = δ_1 = 0 corresponds to modelling a random walk under the null hypothesis, whereas using δ_1 = 0 corresponds to modelling a random walk with a drift under the null hypothesis. As a consequence, there are three main versions of the ADF test. The ADF test is carried out under the null hypothesis δ_2 = 0 against the alternative hypothesis δ_2 < 0. The estimated test statistic λ̂ is then the estimated t-statistic associated with the point estimate δ̂_2. Table 3 reports the estimated test statistics for the various ADF tests, the critical values at the 5% and 1% levels, and the p-values. Note that all model specifications employ a lag order of p = 2 as suggested by the Schwarz criterion. From Table 3 it becomes evident that the null model is rejected at the 1% level for the model specification which does not account for any deterministic terms (model 1) and the model specification which accounts for a constant term only (model 2). Unreported results show that neither δ̂_0 for model 2 nor δ̂_0 and δ̂_1 for model 3 are statistically significant.12 Hence, it is inferred that model 1 correctly specifies the residuals in the ADF test regression. As a consequence, the LPPLS model residuals follow a stationary mean-reverting process, implying that the log-periodic power-law signature is indeed statistically significant.
12 The corresponding t-statistics for δ̂_0 (δ̂_0 and δ̂_1) for model 2 (model 3) are estimated at -0.17 (-0.15 and 0.07).
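The test regression of equation (4) can be reproduced in a few lines of numpy. The sketch below (the helper name is ours, and tabulated Dickey-Fuller critical values are omitted; in practice one would use a routine such as statsmodels' `adfuller`) returns only the t-statistic of δ̂_2:

```python
import numpy as np

def adf_stat(u, p=2, trend="n"):
    """ADF test regression of equation (4) fitted by OLS: regress
    Delta u_t on u_{t-1} and p lagged differences; trend='n' omits
    deterministic terms, 'c' adds a constant, 'ct' adds a constant
    and a linear trend. Returns the t-statistic of delta_2."""
    du = np.diff(u)
    y = du[p:]                                       # Delta u_t
    cols = [u[p:-1]]                                 # u_{t-1}
    cols += [du[p - i:-i] for i in range(1, p + 1)]  # Delta u_{t-i}
    n = y.size
    if trend in ("c", "ct"):
        cols.append(np.ones(n))
    if trend == "ct":
        cols.append(np.arange(n, dtype=float))
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    s2 = resid @ resid / (n - X.shape[1])            # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)
    return coef[0] / np.sqrt(cov[0, 0])              # t-statistic of delta_2
```

For a stationary mean-reverting series the statistic is strongly negative, whereas for a random walk it hovers near zero, which is exactly the contrast the residual test exploits.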
Can an LPPLS signature in financial market data occur purely as a matter of chance? Johansen, Ledoit and Sornette (2000) tested the null hypothesis that a GARCH(1,1) model accounting for fat-tailed t-distributed innovations could generate such LPPLS signatures. Of 1,000 artificial data sets of length 400 weeks, only 2 exhibited LPPLS signature-like properties, which corresponds to a confidence level of 99.8% for rejecting the hypothesis that a GARCH(1,1) model, which Sornette (2017) terms the “industry standard”, could generate meaningful log-periodicity in the data.13 Hence, we can rule out that a statistically significant log-periodic power-law signature is merely a matter of chance.
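The experiment described above rests on simulating GARCH(1,1) return paths with Student-t innovations and scanning the resulting artificial price series for LPPLS signatures. A minimal sketch of the simulation step (the parameter values and the function name are illustrative, not those of Johansen, Ledoit and Sornette, 2000):

```python
import numpy as np

def simulate_garch11_t(n, omega=1e-5, alpha=0.1, beta=0.85, nu=4.0, seed=0):
    """Simulate n returns from a GARCH(1,1) with Student-t innovations;
    h is the conditional variance, r the returns. Requires
    alpha + beta < 1 (covariance stationarity) and nu > 2."""
    rng = np.random.default_rng(seed)
    # standardize the t draws so the innovations have unit variance
    z = rng.standard_t(nu, size=n) / np.sqrt(nu / (nu - 2.0))
    r = np.empty(n)
    h = np.empty(n)
    h[0] = omega / (1.0 - alpha - beta)   # unconditional variance
    r[0] = np.sqrt(h[0]) * z[0]
    for i in range(1, n):
        h[i] = omega + alpha * r[i - 1] ** 2 + beta * h[i - 1]
        r[i] = np.sqrt(h[i]) * z[i]
    return r, h
```

An artificial log-price path is then obtained as the cumulative sum of the simulated returns, to which the LPPLS calibration of Section 3.1 can be applied.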
3.3. Estimating confidence intervals.
In general, the point estimates from different calibrations could be used; according to the central limit theorem, the sample average should be approximately normally distributed. In Table 4, descriptive statistics are reported for the parameter estimates across the calibrations reported in Panel B of Table 2. It becomes evident that the Jarque-Bera test cannot reject the null hypothesis of normally distributed parameter estimates. There is, however, one exception, which is the average point estimate for the phase parameter φ, which appears to be non-normally distributed, at least based on the sample of model calibrations. The reason for this issue rests perhaps in the “nature” of this parameter, which, according to Sornette (2017), cannot be meaningfully constrained.14 Given the descriptive statistics,
a 95% confidence interval for the critical time is then [173; 180] corresponding to December 2043 to
December 2050. The critical time and confidence interval are close to the figures derived in Johansen
and Sornette (2001) corresponding to 2052±10 years. However, there are two issues that are worth
mentioning. First, note that the intuitive first-order model approximation proposed here delivers a considerably tighter 95% confidence interval than the 70% confidence interval derived in Johansen and Sornette’s (2001) research. Second, the point estimate
for the critical time derived in Johansen and Sornette’s (2001) original study is outside the upper
bound of the 95% confidence interval derived in the current research. This means that adding more
13 A more comprehensive discussion on this issue is provided in Sornette (2017, chapter 9).
14 Note that the simplest model specification for the LPPLS model, which is discussed in Sornette (2017, chapter 7), does
not even account for such a phase parameter.
recent data, the estimated arrival of the finite-time singularity is statistically significantly earlier than previously expected.
3.4. Other robustness checks.
3.4.1. Predicting the stock market crash of October 19, 1987.
One may wonder how the first-order model approximation for estimating LPPLS parameters proposed in this study would have performed in predicting other singularities, or crashes. The most prominent stock market crash ever witnessed in the history of financial markets is perhaps that of October 19, 1987. Under traditional finance theory, the -22.6% return of the Dow Jones Index corresponded to a 20-sigma event; the odds that such an event could have occurred were, according to Mandelbrot (2008), less than 1 in 10^50. In this regard, Mandelbrot (2008) highlighted: "It is a number outside the scale of nature. You could span the powers of ten from the smallest subatomic particle to the breadth of the measurable universe−and still never meet such a number." (p. 4). Moreover, Sornette (2017) emphasizes:
A lot of work has been carried out to unravel the origin(s) of the crash, notably in the properties of trading and the structure of markets; however, no clear cause has been singled out. It is noteworthy that the strong market decline during October 1987 followed what for many countries has been an unprecedented market increase during the first nine months of the year and even before. In the U.S. market, for instance, stock prices advanced 31.4% over those nine months. Some commentators have suggested that the real cause of October's decline was that overinflated prices generated a speculative bubble during the earlier period. (p. 5).
Overall, the October 1987 crash remains an intellectual curiosity for at least three reasons: First, the economic magnitude, which wiped out more than 20% of equity market capitalization on just a single day of trading, was spectacular; second, the probabilistic occurrence of the event lay outside finance models, as it was "a number outside the scale of nature;" and third, the crash occurred without any preceding warning signals. Hence, to evaluate how the proposed first-order model approximation for estimating LPPLS parameters would have performed in predicting this extraordinary October 19, 1987 crash, the model of equation (3) is fit with initial parameter A = 5.49 (i.e., ln(p) with p = 242.17, the price quotation of the S&P 500 on December 31, 1986), t_c = T + 1, treating β as "fixed", where β ∈ {0.2, 0.4, 0.6, 0.8}, and setting B = −1.
Panel A of Table A.1 reports the corresponding initial model specifications. Note that we use here daily data covering the sample from January 2, 1980 until December 31, 1986, corresponding to 1,770 observations.15 Hence, in this model, one unit of time is equal to 0.004.16 The sample thus ends at time T = 7.08 and, as a consequence, the initial critical time is set one unit (one trading day) ahead, t_c = T + 1 unit = 7.084. We see that using β = 0.2 generates the minimum sum of squared residuals (SSR). Implementing the constraints t_c ≥ T + 1 and 0.1 ≤ β ≤ 0.9 and employing Microsoft Excel's non-linear solver to compute optimal values for the parameter vector Φ* = (A*, B*, t_c*, β*), the optimized values for the first model's (model 1) parameters are reported in Panel B of Table A.1. We see that specification 1 of model 1 provides parameter values generating the minimum SSR with Φ* = (7.16, −1.60, 8.02, 0.20). Using Φ* as initial values for model 2 of equation (2) in association with C = φ = 0, model 2 is then optimized for varying values of ω, that is, ω ∈ {5, 6, …, 14, 15}. Optimizing model 2 requires accounting for the constraints t_c ≥ T + 1, 5 ≤ ω ≤ 15, and 0.1 ≤ β ≤ 0.9, whereas the parameter φ remains unconstrained. Employing again Microsoft Excel's non-linear solver, the optimal parameter vector Φ** = (A**, B**, t_c**, β**, C**, ω**, φ**) is computed given the input parameter vector Φ = (Φ*, C, ω, φ) = (A*, B*, t_c*, β*, C, ω, φ), with A* = 7.16, B* = −1.60, t_c* = 8.02, and β* = 0.20 obtained from the optimization in the previous step and C = φ = 0; only ω is varied in this step. Using successively ω ∈ {5, 6, …, 14, 15}, Panel A of Table A.2 reports the input parameter vectors, whereas Panel B reports the optimized parameter vectors Φ** for each run. We observe from Panel A of Table A.2 that the input parametrization Φ = (A*, B*, t_c*, β*, C, ω, φ) = (7.16, −1.60, 8.02, 0.20, 0, 13, 0) results in the optimized parametrization for model 2 giving the least SSR (viz., SSR = 4.10), with A** = 6.07, B** = −0.50, t_c** = 8.26, β** = 0.50, C** = −0.10, ω** = 13.63, and φ** = 1.91. Hence, the optimal model 2 suggests that a finite-time singularity occurs on March 1, 1988, which corresponds in the notation here to 294 units of time (viz., days) in the future. The corresponding model is plotted in Figure A.1 in the appendix. That means the model deviates from the real crash occurring on October 19, 1987 by 92 days, which should not come as a surprise, as Sornette points out that "…we expect that fits will give values of t_c which are in general close to but systematically later than the real time of the crash: the critical time t_c is included in the log-periodic power law structure of the bubble, whereas the crash is randomly triggered with a biased probability increasing strongly close to t_c." (Sornette, 2017, p. 332).
15 Note that Sornette (2017, p. 231ff) uses the same sample (viz., January 1980 to December 1986) in some parts of his LPPLS analyses.
16 As we use 7 years of daily data and 1,770 daily observations, it follows that 0.004 × 1,770 = 7.08 ≈ 7.
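The first estimation stage described above can be sketched as a simple grid search. The snippet below is a minimal illustration under stated assumptions, not the paper's Excel workbook: it builds a synthetic daily sample that follows the simplified power law exactly with β = 0.4 and shows that evaluating the SSR over the grid β ∈ {0.2, 0.4, 0.6, 0.8} recovers the generating exponent.

```python
def ssr_power_law(times, log_p, A, tc, beta, B=-1.0):
    """SSR of the simplified model ln p(t) = A + B*(t_c - t)**beta."""
    return sum((A + B * (tc - t) ** beta - y) ** 2 for t, y in zip(times, log_p))

dt = 0.004                                   # one trading day, as in the text
times = [k * dt for k in range(1, 1771)]     # 1,770 daily observations, T = 7.08
T = times[-1]
tc0 = T + dt                                 # initial critical time one unit ahead
A0 = 5.49                                    # fixed intercept (ln of last price)

# Synthetic log-prices generated exactly from the simplified model with beta = 0.4
log_p = [A0 - (tc0 - t) ** 0.4 for t in times]

ssr = {b: ssr_power_law(times, log_p, A0, tc0, b) for b in (0.2, 0.4, 0.6, 0.8)}
best_beta = min(ssr, key=ssr.get)            # 0.4 wins on this synthetic sample
```

On real data the SSR ranking depends on the sample; the winning grid point merely supplies starting values for the constrained non-linear optimization (Excel's solver in the paper; a routine such as scipy.optimize.least_squares would be a natural alternative).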
Interestingly, using the same data sample, Sornette (2017) documented that in his model set-up, where the realized critical time is 87.80 (viz., October 19, 1987)17, his LPPLS model calibration produced two optima, 88.35 and 87.68; that is, the former expected critical time is 201 days later than October 19, 1987, whereas the latter expects the crash 44 days too early. On average, Sornette's (2017, p. 331) documented LPPLS model calibrations generated an error of 122.50 days, whereas the first-order approximation model proposed in the current research reduces the error by 25%.
3.4.2. Predicting the crash of October 23, 1929.
Next, another prominent stock market crash is that of October 1929. According to Sornette (2017), the crash occurred on October 23, 1929, which serves here as the critical time in order to make the models comparable. However, the October 23, 1929 crash consisted of a series of subsequent drawdowns: Starting with a return of -6.31% on October 23, 1929, the Dow Jones Index fell by 31.45% after only 14 trading days (viz., in the October 23 to November 12, 1929 period). It is interesting to note that Sornette (2017, p. 240) points out: "The similarity between the situations in 1929 and 1987 was in fact noticed at a qualitative level in an article in the Wall Street Journal on October 19, 1987, the very morning of the day of the stock market crash …" The LPPLS methodology, in contrast, is designed to capture such similarity in a quantitative way. Hence, to evaluate how the proposed first-order model approximation for estimating LPPLS parameters would have performed in predicting the October 23, 1929 crash, the model of equation (3) is fit with initial parameter A = 5.70 (i.e., ln(p) with p = 300.00, the price quotation of the Dow Jones Index on December 31, 1928), t_c = T + 1, treating β as "fixed", where β ∈ {0.2, 0.4, 0.6, 0.8}, and setting B = −1.
Panel A of Table A.3 reports the corresponding initial model specifications. Note that we use here daily data covering the sample from June 1, 1921 until December 31, 1928, corresponding to 1,942 observations.18 In this model, one unit of time is 0.0038. Hence, the sample ends at time T = 7.3062 and, as a consequence, the initial critical time is set one unit ahead, t_c = 7.3100. We see that using β = 0.2 generates the minimum sum of squared residuals (SSR). Implementing the constraints t_c ≥ T + 1 and 0.1 ≤ β ≤ 0.9 and employing again Microsoft Excel's non-linear solver to compute optimal values for the parameter vector Φ* = (A*, B*, t_c*, β*), the optimized values for the first model's (model 1) parameters are reported in Panel B of Table A.3. We see that model specification 3 of model 1 provides parameter values generating the minimum SSR with Φ* = (6.11, −0.59, 8.12, 0.53). Using Φ* as initial values for model 2 of equation (2) in association with C = φ = 0, model 2 is then optimized for varying values of ω, that is, ω ∈ {5, 6, …, 14, 15}. Optimizing model 2 requires accounting for the constraints t_c ≥ T + 1, 5 ≤ ω ≤ 15, and 0.1 ≤ β ≤ 0.9, whereas the parameter φ remains unconstrained. Employing again Microsoft Excel's non-linear solver, the optimal parameter vector Φ** = (A**, B**, t_c**, β**, C**, ω**, φ**) is computed given the input parameter vector Φ = (Φ*, C, ω, φ) = (A*, B*, t_c*, β*, C, ω, φ), with A* = 6.11, B* = −0.59, t_c* = 8.12, and β* = 0.53 obtained from the optimization in the previous step and C = φ = 0; only ω is varied in this second step. Using successively ω ∈ {5, 6, …, 14, 15}, Panel A of Table A.4 reports the input parameter vectors, whereas Panel B reports the optimized parameter vectors Φ** for each run. We observe from Panel A of Table A.4 that the input parametrization Φ = (A*, B*, t_c*, β*, C, ω, φ) = (6.11, −0.59, 8.12, 0.53, 0, 11, 0) results in the optimized parametrization for model 2 giving the least SSR (viz., SSR = 4.01), with A** = 6.04, B** = −0.35, t_c** = 8.97, β** = 0.75, C** = 0.09, ω** = 12.68, and φ** = −4.03. Hence, the optimal model 2 suggests that a finite-time singularity arrives on June 11, 1930, which corresponds in the notation here to 434 units of time (viz., days) in the future. The corresponding model is plotted along with the natural logarithm of the Dow Jones Index in Figure A.2 in the appendix.
17 See Table 9.5 in Sornette (2017, p. 331).
18 Note that Sornette (2017, p. 231ff) uses the same sample (viz., June 1921 to December 1928) in some parts of his LPPLS analyses.
The results indicate that the proposed model deviates from the real crash occurring on October 23, 1929 by 181 days. It is noteworthy that, using the same data sample, Sornette (2017) documented that in his model set-up, where the realized critical time corresponds in his notation to 29.81 (viz., October 23, 1929), his LPPLS model calibration produced two optima, 30.52 and 30.35;19 that is, the former expected critical time is 259 days later than October 23, 1929, whereas the latter expects the crash 197 days later. On average, Sornette's (2017, p. 333) documented LPPLS model calibrations thus generated an error of 228 days, whereas the proposed first-order model approximation reduces the error by 21%.
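The error-reduction figures quoted for both crashes follow from simple arithmetic over the calibration errors reported in Sornette (2017):

```python
# Average absolute calibration errors (in days) implied by Sornette's (2017)
# two reported optima per crash
sornette_1987 = (201 + 44) / 2     # 1987 crash -> 122.5 days
sornette_1929 = (259 + 197) / 2    # 1929 crash -> 228.0 days

# Relative improvement of the first-order approximation proposed in this study,
# whose errors are 92 days (1987) and 181 days (1929)
reduction_1987 = 1 - 92 / sornette_1987    # roughly 0.25, i.e., about 25%
reduction_1929 = 1 - 181 / sornette_1929   # roughly 0.21, i.e., about 21%
```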
It is interesting to note that even though a different approach is used here for setting up the LPPLS model, the observations are remarkably similar. Sornette (2017, p. 240) points out that the time windows used to estimate the finite-time singularities manifested in the October 1987 and October 1929 crashes (viz., January 2, 1980 until December 31, 1986 and June 1, 1921 until December 31, 1928) show similar acceleration and oscillatory structures, quantified by similar exponents and log-periodic angular frequencies. Whereas in his model these parameters varied between β = 0.33 and β = 0.45 and between ω = 7.4 and ω = 7.9 for the October 1987 and October 1929 crashes, the corresponding figures here are β = 0.50 and β = 0.75 and ω = 13.63 and ω = 12.68. Overall, the evidence supports Sornette (2017) by showing that the LPPLS model is capable of revealing quantitative similarities across changing regimes in financial market data, which may manifest themselves, for instance, in crashes.
19 See Table 9.5 in Sornette (2017, p. 333).
3.5. Other tests.
Note that faster-than-exponential growth in the underlying price series is the core assumption of the LPPLS model. Here we hypothesize that we should be able to detect such faster-than-exponential growth in the data using standard econometric tests. For instance, recall the general test regression of equation (4),
Δu_t = δ_0 + δ_1·t + δ_2·u_{t−1} + γ_1·Δu_{t−1} + … + γ_p·Δu_{t−p} + ε_t,
which is used for testing whether the residuals of the optimal LPPLS model are stationary. Applying this model directly to the logarithmic prices of the underlying data (viz., the stock index), p_t, we get accordingly
Δp_t = φ_0 + φ_1·t + φ_2·p_{t−1} + φ_3·Δp_{t−1} + … + φ_{p+2}·Δp_{t−p} + e_t, (5)
and hypothesize that faster-than-exponential growth should be manifested in a point estimate φ̂_2 > 0 in association with non-rejection of the null hypothesis of non-stationarity, corresponding to an explosively growing non-stationary process. In Table A.5 in the appendix, the corresponding results are reported for the test regressions using (i) monthly data on the S&P 500 for the period January 1871 to November 2022, (ii) daily data on the S&P 500 for the period January 2, 1980 until December 31, 1986, and (iii) daily data on the Dow Jones Index for the period June 1, 1921 until December 31, 1928.20 First, we see from Table A.5 that the null hypothesis of a non-stationary process cannot be rejected for any of those models because the p-values exceed 5%. Second, the point estimates φ̂_2 are statistically significant at the common 5% level for models (i) and (ii) and at the 10% level for model (iii). We infer that p_t indeed exhibited explosiveness in its random-walk behavior, conditional on the selected samples. It is noteworthy that while the underlying non-stationary process for a successfully implemented LPPLS model should exhibit explosiveness, the reverse argument cannot be made; that is, explosiveness in some non-stationary process does not necessarily imply that the process is subject to log-periodic power-law behavior. Overall, the standard econometric test applied here can only serve as an additional check of whether a data sample meets the conditions required for the implementation of an LPPLS model.
20 The lag order for each model (i)-(iii) is chosen with respect to the SIC, with a maximum lag length of 24.
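The logic of equation (5) can be illustrated with a deliberately stripped-down sketch: the code below simulates a mildly explosive log-price and estimates the coefficient on the lagged level only (no trend and no lagged differences, a simplification of the regressions actually reported in Table A.5). A positive coefficient on the lagged level corresponds to an explosive root.

```python
import random

random.seed(42)

# Simulate a mildly explosive process p_t = c + rho * p_{t-1} + noise with rho > 1
rho, c = 1.003, 0.001
p = [1.0]
for _ in range(2000):
    p.append(c + rho * p[-1] + random.gauss(0.0, 0.01))

# Stripped-down test regression: dp_t = phi_0 + phi_2 * p_{t-1} + e_t (OLS slope)
x = p[:-1]
y = [b - a for a, b in zip(p[:-1], p[1:])]
mx, my = sum(x) / len(x), sum(y) / len(y)
phi2 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        / sum((xi - mx) ** 2 for xi in x))
# phi2 estimates rho - 1; a positive value signals an explosive
# (faster-than-exponential) process, while phi2 = 0 is a pure random walk
```

Note that inference on such a coefficient requires non-standard (Dickey-Fuller-type) critical values, which is why dedicated unit-root routines are used for the reported tests rather than plain OLS t-statistics.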
3.6. Limitations
The current research uses LPPLS models to forecast finite-time singularities resulting from faster-than-exponential growth. As mentioned earlier, different initial values chosen for some parameters in the parameter vector Φ will result in different optimized values for the remaining parameters. To address this issue, calibration of the model is usually based on some combination of finding suggested solutions for parameters, freezing some of the parameters, and using a non-linear solver to find solutions for the free parameters. The approach to selecting initial values for the three-stage model estimation proposed in the current research is, in the parlance of Sornette (2017), only a "first-order approximation." More work is warranted to improve this methodology.
Next, we have shown that the proposed approach to calibrating the LPPLS model yields results similar to the models discussed in Johansen and Sornette (2001) and Sornette (2017). In fact, the approach proposed here resulted in a substantial decrease in the overestimation of the critical time, as evidenced by the robustness checks. The stock market crashes of October 1987 and October 1929 have been the subject of an enormous amount of research, and the fact that especially the October 1987 crash did not exhibit precursory patterns certainly demands special attention from scholars. However, over the last century, the U.S. economy faced a considerable amount of economic stress. For instance, in the November 9, 1903 to January 19, 1906 period, the Dow Jones Index increased by 144% (viz., from 30.53 to 74.60 index points), whereas in the January 19, 1906 to November 22, 1907 period, the index fell by 48% (viz., from 74.60 to 38.44 points). This stock market crash is often referred to as "The Panic of 1907" and perhaps already started on March 14, 1907, when the daily return of the Dow Jones Index was -8.29%, statistically corresponding to an 8-sigma event. Overall, there are some similarities between "The Panic of 1907" and the October 1987 crash: In the preceding period, stock prices increased by a substantial margin before the "impossible event", in the spirit of Mandelbrot (2008), suddenly happened. This is one example of stock market drops that remain undiscussed in the corresponding literature on predicting crashes. This comes perhaps not as a surprise because unreported results show that the calibration of the LPPLS model does not work out due to failed convergence in the final model optimization attempt. This means that the absence of log-periodic power-law signatures does not necessarily imply the absence of sudden crashes.21 Future research is warranted to elaborate more on this issue.
4. Conclusion
Using monthly S&P 500 data from 1871 to 2022, this study examined the potential arrival of a finite-time singularity for the U.S. equity market. To do so, this study uses (i) a novel approach to calibrate the LPPLS model and (ii) an extended data set accounting for more than 20 additional years of data compared to earlier relevant research. The extended data set accounts for the dot.com bubble burst (2000), the global financial crisis period (2008−2009), the COVID-19 crisis (2020−2022), and the ongoing Russian-Ukrainian war (starting in 2022), all of which had, and continue to have, enormous consequences not only for the U.S. but also for the global economy. The proposed calibrated LPPLS model suggests that the U.S. equity market reaches a singularity condition in June 2050, which is in line with earlier research forecasting a regime switch in 2052. What does a finite-time singularity, or regime switch, actually mean? Sornette (2017, p. 383-395) discusses different scenarios of how finite-time singularities could manifest themselves. The first and perhaps most obvious possible scenario is a "collapse", the second is a "transition to sustainability", and the third is "resuming accelerating growth" by overpassing fundamental barriers. Sornette (2017) argues that social scientists typically take an optimistic point of view and
expect that the innovative spirit of mankind will be able to solve the problems associated
with a continuing increase in the growth rate […] Specifically, […] they believe that
world economic development will continue as a successive unfolding of revolutions,
for example, the Internet, bio-technological, and other yet unknown innovations
replacing the prior agricultural, industrial, medical, and information revolutions of the
past. (p. 359)
In this regard, West (2017) points out that
21 It is interesting to note that Didier Sornette and the well-known scholar and bestselling author Nassim Taleb discussed the predictability of crashes using the LPPLS model at an ETH-sponsored meeting. It is perhaps for this reason that Taleb terms Sornette's concept of "Dragon Kings" "Grey Swans" in an attempt to highlight the lack of accuracy in predictability; see https://www.youtube.com/watch?v=vuvbghZuM8U.
[u]nfortunately, however, it's not quite as simple as that. There's yet another major catch, and it's a big one. The theory dictates that to sustain continuous growth the time between successive innovations has to get shorter and shorter. Thus paradigm-shifting discoveries, adaptations, and innovations must occur at an increasingly accelerated pace. Not only does the general pace of life inevitably quicken, but we must innovate at a faster and faster rate! (p. 417f)
It is noteworthy that the great Benoit Mandelbrot, who developed the theory of “roughness and self-
similarity”, showed that financial markets are subject to fractality, that is, statistical self-similarity. A
manifestation of statistical self-similarity in financial markets is that financial markets “behave at a
higher frequency in the same manner as at a lower frequency” which is a statistical manifestation of
power-law behavior.22 What does this mean for the expected finite-time singularity in 2050? The stock market crashes of October 1987 and October 1929, which were investigated in the current research as robustness checks, might serve as a guide to how such a collapse could evolve: For both events, market participants observed extreme reductions in market capitalization in a very short time. Still, these events were on a small scale because we considered relatively short time windows that showed evidence of bubble formation, as predicted by LPPLS models. For the main analysis in this study, however, we took a coarse-grained, or large-scale, perspective by exploring the evolution of the U.S. financial market over more than 150 years. Statistically, we are able to identify the same log-periodic power-law behavior on the larger scale as we identified on the smaller scales (viz., the crashes of October 1987 and October 1929). Fractality thus suggests that we should rather expect to observe a large-scale collapse of the U.S. equity market than one of the other possible scenarios.
It is interesting to note that even Ray Dalio, a practitioner who has worked in the finance industry for 50 years, expects a "regime switch", albeit using a different terminology to express this expectation. In his recent book entitled The Changing World Order, Dalio (2021) studies major empires and compares the successes and failures of the world's empires throughout history. In studying this issue, he identifies some critical factors that supposedly result in a "regime switch", or a "change of the world order." Specifically, Dalio observed the following three recent factors: First, the confluence of enormous debts and close-to-zero interest rates led to massive printing of the world's major currencies, especially the US dollar. Second, due to substantial increases in wealth, political, and value gaps within just a century, significant political and social conflicts arose within countries. Third, China as a rising new power challenges the US, which is as of today the existing world power setting the rules for the world order. Dalio argues that the leading country's (viz., the US's) financial picture begins to change when the regime shift approaches. Having the US dollar as reserve currency gave the US the extraordinary privilege of being able to borrow more money, which got it deeper into debt; this accelerated US spending power over the short term but weakened it over the longer run. Inevitably, the US began to borrow excessively to finance both domestic overconsumption and international military conflicts, which contributed to the enormous accumulation of US debt, for which, unsurprisingly, China served as the major lender. Finally, when those countries holding the reserve currency and debt of the US lose faith and sell them, this stage will mark the end of the US as the leading empire, constituting the final stage of the regime switch, where perhaps China, as the major lender, might attain the power and hence may set the forthcoming rules for a new world order. Overall, even though Dalio does not explicitly employ the terminology "finite-time singularity", he unequivocally expects the financial ecosystem of the US to collapse in the nearer future. In this study, a time frame for this regime switch is derived.
22 Mandelbrot (1963) was the first to show that cotton price changes are governed by a power-law process.
References
Dalio, R. (2021). Changing World Order: Why Nations Succeed and Fail. Simon & Schuster, New York, USA.
Filimonov, V., and D. Sornette (2015). Power law scaling and "Dragon-Kings" in distributions of intraday financial drawdowns. Chaos, Solitons & Fractals 74, 27-45.
Gnabo, J.-Y., Hvozdyk, L., and J. Lahaye (2014). System-wide tail comovements: A bootstrap test for cojump identification on the S&P 500, US bonds and currencies. Journal of International Money and Finance 48, 147-174.
Godley, W. (2012). Seven Unsustainable Processes: Medium-Term Prospects and Policies for the United States and the World. In: Lavoie, M., Zezza, G. (eds) The Stock-Flow Consistent Approach. Palgrave Macmillan, London.
Hamermesh, D. S. (2007). Viewpoint: Replication in Economics. Canadian Journal of Economics/Revue Canadienne D'économique 40(3), 715-733.
Hou, K., C. Xue, and L. Zhang (2020). Replicating Anomalies. Review of Financial Studies 33(5), 2019-2133.
Jiang, Z.-Q., Zhou, W.-X., Sornette, D., Woodard, R., Bastiaensen, K., and P. Cauwels (2010). Bubble diagnosis and prediction of the 2005-2007 and 2008-2009 Chinese stock market bubbles. Journal of Economic Behavior & Organization 74(3), 149-162.
Johansen, A., Ledoit, O., and D. Sornette (2000). Crashes as critical points. International Journal of Theoretical and Applied Finance 3, 219-255.
Johansen, A., and D. Sornette (2001). Finite-time singularity in the dynamics of the world population, economic and financial indices. Physica A: Statistical Mechanics and its Applications 294, 465-502.
Kapitza, S.P. (1996). The phenomenological theory of world population growth. Physics-Uspekhi 39(1), 57.
Liberatore, V. (2010). Computational LPPL fit to financial bubbles. arXiv preprint arXiv:1003.2920.
Lin, L., R.E. Ren, and D. Sornette (2014). The volatility-confined LPPL model: A consistent model of 'explosive' financial bubbles with mean-reverting residuals. International Review of Financial Analysis 33, 210-225.
Mandelbrot, B. (1963). The variation of certain speculative prices. Journal of Business 36, 394-419.
West, G. (2017). Scale: Universal Laws of Life, Growth, and Death in Organisms, Cities, and Companies. Penguin Books, New York.
Table 1. Calibrating the power-law model for the S&P 500 using monthly data from 1871-2022.
To model financial log-prices we use a simple power-law model, which is given by
ln[p(t)] = A + B(t_c − t)^β,
where ln[p(t)] is the logarithm of the S&P 500 at time t, t_c is the critical time, A is the expected value of the S&P 500 in logarithm, B measures the exposure to faster-than-exponential growth, and β is the power-law exponent controlling faster-than-exponential price growth. Note that for this model specification A > 0, B < 0, and 0.1 ≤ β ≤ 0.9 must hold. Treating β as "fixed", where β ∈ {0.2, 0.4, 0.6, 0.8}, and setting B = −1, equation (1) simplifies to
ln[p(t)] = ln[p(T)] − (t_c − t)^β.
Note that A is the expected value of the S&P 500 in logarithm, and A = ln[p(T)] in association with t_c = T + 1 means that the last price quotation of the S&P 500 is the expected price for the critical time t_c and the critical event is suggested to occur in the subsequent time period. Panel A of Table 1 reports the corresponding initial model specifications. Note that ln[p(T)] = 8.23 corresponds to the natural logarithm of the last price quotation of the S&P 500 in the sample. Next, the model is optimized allowing for the following constraints:
t_c ≥ T + 1,
0.1 ≤ β ≤ 0.9.
The first constraint allows for the possibility that the finite-time singularity arrives at some unknown time point in the future, whereas the second constraint is needed for the price to accelerate and to remain finite. Employing Microsoft Excel's non-linear solver to compute optimal values for the parameter vector Φ* = (A*, B*, t_c*, β*), the optimized values are reported in Panel B of Table 1.

Panel A. Initial parameter values for model 1.
Specification   A      B    β    t_c = T + 1   SSR
1               8.23   -1   0.2  152           13,960.61
2               8.23   -1   0.4  152           1,256.66
3               8.23   -1   0.6  152           139,262.30
4               8.23   -1   0.8  152           1,586,913.00

Panel B. Optimized parameter values for model 1.
Specification   A      B       β     t_c   SSR
1               16.05  -4.57   0.24  152   192.75
2               33.24  -19.45  0.10  152   164.81
3               9.73   -0.83   0.47  152   257.22
4               9.72   -0.80   0.48  152   258.68
Table 2. Calibrating the LPPLS model for the S&P 500 using monthly data from 1871-2022.
In line with Sornette (2017, p. 335), the log-periodic power-law singularity (LPPLS) model is given by
ln[p(t)] = A + B(t_c − t)^β [1 + C·cos(ω·ln(t_c − t) + φ)],
with A > 0, B < 0, and 0.1 ≤ β ≤ 0.9, where A is the expected value of the S&P 500 in logarithm, B measures the exposure to faster-than-exponential growth, β is the power-law exponent controlling faster-than-exponential price growth, C measures the exposure responsible for the periodic oscillations, ω is the angular frequency of the log-periodic oscillations during the bubble formation, and φ is the phase parameter. Whereas φ cannot be meaningfully restricted, here we require |C| < 1 and impose the constraint 5 ≤ ω ≤ 15. We choose the optimal values Φ* = (33.24, −19.45, 152, 0.10) from the first estimation step. Using in addition C = φ = 0, the LPPLS model (model 2) is then optimized for varying values of ω, that is, ω ∈ {5, 6, …, 14, 15}. Whereas Panel A of Table 2 reports the corresponding input parameters, Panel B of Table 2 reports the optimized parameters using the following constraints:
t_c ≥ T + 1,
0.1 ≤ β ≤ 0.9,
5 ≤ ω ≤ 15.
Note that the parameter φ remains unconstrained, which is in line with Sornette (2017, p. 336). The parameters are obtained using Microsoft Excel's non-linear solver.

Panel A. Initial parameter values for model 2.
For every specification, A = 33.24, B = −19.45, t_c = 152, β = 0.1, C = 0, φ = 0, and SSR = 1,722.65; only ω varies across specifications, with ω = 5, 6, …, 15 for specifications 1, 2, …, 11.

Panel B. Optimized parameter values for model 2.
Specification   A      B       t_c     β     C      ω      φ       SSR
1               37.31  -20.16  170.66  0.12  -0.01  5.00   -0.05   178.36
2               36.08  -17.65  179.53  0.13  0.01   5.00   2.04    140.32
3               49.53  -28.91  184.44  0.10  -0.01  6.16   -0.48   157.65
4               34.07  -16.55  175.40  0.13  -0.01  9.93   -12.31  185.60
5               37.27  -19.26  176.17  0.12  0.01   10.03  -3.42   185.16
6               30.04  -14.20  168.60  0.14  0.01   8.11   -0.03   205.75
7               30.61  -13.69  174.45  0.15  -0.01  9.80   0.96    186.22
8               39.42  -20.18  181.98  0.12  -0.01  11.18  -0.02   192.86
9               30.18  -14.66  167.66  0.13  0.00   15.00  -4.88   241.73
10              39.04  -19.79  182.38  0.13  -0.01  11.23  -0.33   193.85
11              33.80  -17.85  168.21  0.12  0.00   15.00  1.27    241.72
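The fitted LPPLS log-price path implied by any row of Panel B can be evaluated directly. The sketch below plugs in specification 2 (the minimum-SSR calibration) and computes the model's log-price at the end of the sample (t = 151, i.e., 2022 in units of years since 1871); this is an illustrative evaluation of the formula above, not a re-estimation:

```python
import math

def lppls_log_price(t, A, B, tc, beta, C, omega, phi):
    """LPPLS model: ln p(t) = A + B*(tc - t)**beta * (1 + C*cos(omega*ln(tc - t) + phi))."""
    d = tc - t                      # time remaining until the critical time
    return A + B * d ** beta * (1.0 + C * math.cos(omega * math.log(d) + phi))

# Optimized parameters of specification 2 in Panel B of Table 2
params = dict(A=36.08, B=-17.65, tc=179.53, beta=0.13, C=0.01, omega=5.00, phi=2.04)

ln_p_2022 = lppls_log_price(151.0, **params)   # model log-price at the sample end
```

Exponentiating gives an implied index level; note that the cosine term contributes only a small modulation here because |C| = 0.01, so the power-law trend dominates the fitted path.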
Table 3. Testing the residuals of the LPPLS model.
Using the residuals from the LPPLS model, we implement ADF tests using the following test regression,
Δu_t = δ_0 + δ_1·t + δ_2·u_{t−1} + γ_1·Δu_{t−1} + … + γ_p·Δu_{t−p} + ε_t,
where δ_0 = δ_1 = 0 for the model specification where no deterministic terms are accounted for, and δ_1 = 0 for the model specification accounting only for a constant term. The parameters γ_1, …, γ_p are related to the autoregressive part of the model. The model using δ_0 = δ_1 = 0 corresponds to modeling a random walk under the null hypothesis, whereas using δ_1 = 0 corresponds to modeling a random walk with a drift under the null hypothesis. As a consequence, there are three main versions of the ADF test. The ADF test is then carried out under the null hypothesis δ_2 = 0 against the alternative hypothesis δ_2 < 0. The test statistic λ̂ is the estimated t-statistic associated with the point estimate δ̂_2. Table 3 reports the estimated test statistics for the various ADF tests, the critical values at the 5% and 1% levels, and the p-values (in parentheses). Note that all model specifications employ a lag order of p = 2 as suggested by the Schwarz criterion.

Model specification       λ̂                    Critical value (5%)   Critical value (1%)
No deterministic terms    -3.89*** (0.0001)    -1.94                 -2.56
Constant                  -3.89*** (0.0021)    -2.86                 -3.43
Constant and trend        -3.89**  (0.0126)    -3.41                 -3.96
Table 4. Descriptive statistics of average parameter estimates for various model calibrations.
This table reports descriptive statistics of the average parameter estimates for the calibrations reported in Panel B of Table 2.
                      𝐴        𝐵        𝑡_c      𝛽      𝐶      𝜔       𝜙       𝑆𝑆𝑅
Mean                  36.36   -18.51   176.13   0.13    0.00    9.14   -1.85   186.75
Median                36.67   -18.45   175.79   0.13   -0.01    9.87   -0.19   185.91
Maximum               49.53   -13.69   184.44   0.15    0.01   15.00    2.04   241.73
Minimum               30.04   -28.91   167.66   0.10   -0.01    5.00  -12.31   140.32
Std. Dev.              5.85     4.43     5.92   0.01    0.01    3.14    4.19    26.98
Skewness               0.96    -1.18    -0.08  -0.49    0.47    0.19   -1.70     0.29
Kurtosis               3.63     4.10     1.69   3.07    1.38    2.38    4.95     3.41
Jarque-Bera (JB) test  1.71     2.83     0.72   0.40    1.47    0.22    6.42     0.21
p-value (JB test)     0.4253   0.2427   0.6966  0.8183  0.4799  0.8967  0.0403   0.9002
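The Jarque-Bera statistics in Table 4 follow directly from the reported skewness and kurtosis via JB = n/6 · (S² + (K − 3)²/4). The following sketch recomputes one entry; note that n = 11 calibrations is an assumption here, and small discrepancies against the table arise from the rounding of S and K to two decimals.

```python
from scipy.stats import chi2

def jarque_bera(n, skew, kurt):
    """Jarque-Bera statistic from sample skewness and (non-excess) kurtosis."""
    jb = n / 6.0 * (skew**2 + (kurt - 3.0)**2 / 4.0)
    pvalue = chi2.sf(jb, df=2)  # JB is asymptotically chi-squared with 2 df
    return jb, pvalue

# Rounded moments for beta taken from Table 4 (Skewness = -0.49, Kurtosis = 3.07)
jb, p = jarque_bera(11, -0.49, 3.07)
print(f"JB = {jb:.2f}, p-value = {p:.4f}")
```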
Figure 1. The S&P 500 in logarithms and optimized power-law model 1.
Figure 2. The S&P 500 in logarithms and optimized model 2 (LPPLS model) using monthly data
for the 1871-2022 period.
Figure 3. Residuals of the calibrated LPPLS model.
Appendix
Table A.1. Calibrating the power-law model for the S&P 500 using daily data from 1980-1986.
To model financial log-prices we use a simple power-law model, which is given by,
ln[p(t)] = A + B(t_c − t)^β,
where ln[p(t)] is the logarithm of the S&P 500 at time t, t_c is the critical time, A is the expected value of the S&P 500 in logarithms, B measures the exposure to faster-than-exponential growth, and β is the power-law exponent controlling the faster-than-exponential price growth. Note that for this model specification A > 0, B < 0, and 0.1 ≤ β ≤ 0.9 must hold. Treating β as "fixed", where β ∈ {0.2, 0.4, 0.6, 0.8}, and setting B = −1, equation (1) simplifies to:
ln[p(t)] = ln[p(T)] − (t_c − t)^β.
Note that A is the expected value of the S&P 500 in logarithms, and A = ln[p(T)] in association with t_c = T + 1 means that the last price notation of the S&P 500 is the expected price at the critical time t_c and the critical event is suggested to occur in the subsequent time period. Panel A of Table A.1 reports the corresponding initial model specifications. Note that ln[p(T)] = 5.49 corresponds to the natural logarithm of the last price quotation of the S&P 500 in the sample. Next, the model is optimized subject to the following constraints:
t_c ≥ T + 1,
0.1 ≤ β ≤ 0.9.
The first constraint allows for the possibility that the finite-time singularity arrives at some unknown time point in the future, whereas the second constraint is needed for the price to accelerate and yet remain finite. Employing Microsoft Excel's non-linear solver to compute optimal values for the parameter vector Φ̂ = (A, B, t_c, β), the optimized values are reported in Panel B of Table A.1.
Panel A. Initial parameter values for model 1.
Specification    𝐴      𝐵     𝛽     𝑡_c = 𝑇 + 1   𝑆𝑆𝑅
1                5.49   -1    0.2   7.084          1091.19
2                5.49   -1    0.4   7.084          2289.70
3                5.49   -1    0.6   7.084          4983.81
4                5.49   -1    0.8   7.084         10737.59
Panel B. Optimized parameter values for model 1.
Specification    𝐴      𝐵       𝑡_c    𝛽      𝑆𝑆𝑅
1                7.16   -1.60   8.02   0.20   11.62
2                6.08   -0.66   7.54   0.36   11.73
3                5.88   -0.50   7.37   0.43   11.81
4                5.93   -0.53   7.40   0.41   11.79
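The two-step procedure behind Table A.1 (fixing β and B to obtain starting values, then optimizing A, B, t_c, β subject to t_c ≥ T + 1 and 0.1 ≤ β ≤ 0.9) can equally be replicated outside Excel with a constrained least-squares solver. The sketch below uses SciPy on a synthetic accelerating series rather than the original S&P 500 data, so all numerical values are illustrative only.

```python
import numpy as np
from scipy.optimize import least_squares

def power_law(params, t):
    """Power-law log-price model: ln p(t) = A + B * (t_c - t)**beta."""
    A, B, tc, beta = params
    return A + B * (tc - t) ** beta

# Synthetic log-price series accelerating toward t_c = 8.0 (illustrative only)
t = np.linspace(0.0, 7.0, 500)
y = power_law([7.0, -1.5, 8.0, 0.3], t) + np.random.default_rng(0).normal(0, 0.02, t.size)

T = t[-1]
x0 = [y[-1], -1.0, T + 1.0, 0.2]      # initial values in the spirit of Panel A
lb = [0.0, -np.inf, T + 1.0, 0.1]     # A > 0, t_c >= T + 1, beta >= 0.1
ub = [np.inf, 0.0, np.inf, 0.9]       # B < 0, beta <= 0.9
fit = least_squares(lambda p: power_law(p, t) - y, x0, bounds=(lb, ub))
A, B, tc, beta = fit.x
print(f"A={A:.2f}, B={B:.2f}, t_c={tc:.2f}, beta={beta:.2f}, SSR={2*fit.cost:.4f}")
```

Since `least_squares` minimizes 0.5 times the sum of squared residuals, the SSR comparable to the tables is twice the reported cost.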
Table A.2. Calibrating the LPPLS model for the S&P 500 using daily data from 1980-1986.
In line with Sornette (2017, p. 335), the log-periodic power-law singularity (LPPLS) model is given by,
ln[p(t)] = A + B(t_c − t)^β [1 + C cos(ω ln(t_c − t) + φ)],
with A > 0, B < 0, and 0.1 ≤ β ≤ 0.9, where A is the expected value of the S&P 500 in logarithms, B measures the exposure to faster-than-exponential growth, β is the power-law exponent controlling the faster-than-exponential price growth, C measures the exposure responsible for the periodic oscillations, ω is the angular frequency of the log-periodic oscillations during the bubble formation, and φ is the phase parameter. Whereas φ cannot be meaningfully restricted, here we require |C| < 1 and impose the constraint 5 ≤ ω ≤ 15. We choose the optimal values Φ̂ = (A, B, t_c, β) = (7.16, −1.60, 8.02, 0.20) from the first estimation step. Setting in addition C = φ = 0, the LPPLS model (model 2) is then optimized for varying values of ω, that is, ω ∈ {5, 6, …, 14, 15}. Whereas Panel A of Table A.2 reports the corresponding input parameters, Panel B of Table A.2 reports the optimized parameters subject to the following constraints:
t_c ≥ T + 1,
0.1 ≤ β ≤ 0.9,
5 ≤ ω ≤ 15.
Note that the parameter φ remains unconstrained, in line with Sornette (2017, p. 336). The parameters are obtained using Microsoft Excel's non-linear solver.
Panel A. Initial parameter values for model 2.
Specification    1      2      3      4      5      6      7      8      9      10     11
𝐴                7.16   7.16   7.16   7.16   7.16   7.16   7.16   7.16   7.16   7.16   7.16
𝐵               -1.60  -1.60  -1.60  -1.60  -1.60  -1.60  -1.60  -1.60  -1.60  -1.60  -1.60
𝑡_c              8.02   8.02   8.02   8.02   8.02   8.02   8.02   8.02   8.02   8.02   8.02
𝛽                0.20   0.20   0.20   0.20   0.20   0.20   0.20   0.20   0.20   0.20   0.20
𝐶                0      0      0      0      0      0      0      0      0      0      0
𝜔                5      6      7      8      9      10     11     12     13     14     15
𝜙                0      0      0      0      0      0      0      0      0      0      0
𝑆𝑆𝑅             11.62  11.62  11.62  11.62  11.62  11.62  11.62  11.62  11.62  11.62  11.62
Panel B. Optimized parameter values for model 2.
Specification    1      2      3      4      5      6      7      8      9      10     11
𝐴                6.51   9.24   9.25   9.29   9.29   9.17   6.23   6.07   6.07   6.07   6.21
𝐵               -0.83  -2.98  -2.97  -3.03  -3.03  -2.89  -0.62  -0.49  -0.50  -0.49  -0.59
𝑡_c              8.62  10.41  10.49  10.44  10.44  10.53   8.34   8.25   8.26   8.25   8.39
𝛽                0.37   0.18   0.18   0.18   0.18   0.19   0.44   0.50   0.50   0.50   0.45
𝐶               -0.07   0.02   0.02   0.02   0.02  -0.02   0.08  -0.10  -0.10   0.10   0.08
𝜔               14.99  14.68  14.85  14.74  14.74  14.94  13.92  13.61  13.63  13.62  14.12
𝜙              -17.81 -17.77 -18.27 -17.95 -17.95 -15.39  -5.78  -1.87  -1.91   1.26   0.03
𝑆𝑆𝑅              4.21   7.53   7.49   7.52   7.48   7.48   4.11   4.10   4.10   4.10   4.11
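The grid-over-ω calibration strategy of Table A.2 can also be sketched with a constrained solver: for each starting value ω ∈ {5, …, 15}, the full LPPLS parameter vector is re-optimized and the fit with the lowest SSR is retained. The data below are synthetic and the parameter values hypothetical, so this is a sketch of the procedure rather than a reproduction of the reported estimates.

```python
import numpy as np
from scipy.optimize import least_squares

def lppls(p, t):
    """LPPLS model: ln p(t) = A + B*(tc-t)**beta * (1 + C*cos(omega*ln(tc-t) + phi))."""
    A, B, tc, beta, C, omega, phi = p
    dt = tc - t
    return A + B * dt**beta * (1.0 + C * np.cos(omega * np.log(dt) + phi))

rng = np.random.default_rng(1)
t = np.linspace(0.0, 7.0, 500)
true = [7.16, -1.60, 8.02, 0.20, 0.05, 9.0, 0.0]   # illustrative, not the paper's data
y = lppls(true, t) + rng.normal(0, 0.02, t.size)

T = t[-1]
best = None
for omega0 in range(5, 16):                         # omega grid {5, 6, ..., 15}
    x0 = [7.16, -1.60, 8.02, 0.20, 0.0, float(omega0), 0.0]
    lb = [0.0, -np.inf, T + 1.0, 0.1, -1.0, 5.0, -np.inf]   # stated constraints
    ub = [np.inf, 0.0, np.inf, 0.9, 1.0, 15.0, np.inf]      # phi unconstrained
    fit = least_squares(lambda p: lppls(p, t) - y, x0, bounds=(lb, ub))
    if best is None or fit.cost < best.cost:
        best = fit
print("SSR =", round(2 * best.cost, 4), "omega =", round(best.x[5], 2))
```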
Table A.3. Calibrating the power-law model for the Dow Jones using daily data from 1921-1928.
To model financial log-prices we use a simple power-law model, which is given by,
ln[p(t)] = A + B(t_c − t)^β,
where ln[p(t)] is the logarithm of the Dow Jones at time t, t_c is the critical time, A is the expected value of the Dow Jones in logarithms, B measures the exposure to faster-than-exponential growth, and β is the power-law exponent controlling the faster-than-exponential price growth. Note that for this model specification A > 0, B < 0, and 0.1 ≤ β ≤ 0.9 must hold. Treating β as "fixed", where β ∈ {0.2, 0.4, 0.6, 0.8}, and setting B = −1, equation (1) simplifies to:
ln[p(t)] = ln[p(T)] − (t_c − t)^β.
Note that A is the expected value of the Dow Jones in logarithms, and A = ln[p(T)] in association with t_c = T + 1 means that the last price notation of the Dow Jones is the expected price at the critical time t_c and the critical event is suggested to occur in the subsequent time period. Panel A of Table A.3 reports the corresponding initial model specifications. Note that ln[p(T)] = 5.70 corresponds to the natural logarithm of the last price quotation of the Dow Jones in the sample. Next, the model is optimized subject to the following constraints:
t_c ≥ T + 1,
0.1 ≤ β ≤ 0.9.
The first constraint allows for the possibility that the finite-time singularity arrives at some unknown time point in the future, whereas the second constraint is needed for the price to accelerate and yet remain finite. Employing Microsoft Excel's non-linear solver to compute optimal values for the parameter vector Φ̂ = (A, B, t_c, β), the optimized values are reported in Panel B of Table A.3.
Panel A. Initial parameter values for model 1.
Specification    𝐴      𝐵     𝛽     𝑡_c = 𝑇 + 1   𝑆𝑆𝑅
1                5.70   -1    0.2   7.3100          352.32
2                5.70   -1    0.4   7.3100         1090.74
3                5.70   -1    0.6   7.3100         3318.12
4                5.70   -1    0.8   7.3100         8814.82
Panel B. Optimized parameter values for model 1.
Specification    𝐴      𝐵       𝑡_c      𝛽      𝑆𝑆𝑅
1                8.72   -2.41   10.0960  0.26   12.52
2                6.79   -0.99    8.9479  0.42   12.49
3                6.11   -0.59    8.1195  0.53   12.44
4                6.11   -0.59    8.0707  0.53   12.45
Table A.4. Calibrating the LPPLS model for the Dow Jones using daily data from 1921-1928.
In line with Sornette (2017, p. 335), the log-periodic power-law singularity (LPPLS) model is given by,
ln[p(t)] = A + B(t_c − t)^β [1 + C cos(ω ln(t_c − t) + φ)],
with A > 0, B < 0, and 0.1 ≤ β ≤ 0.9, where A is the expected value of the Dow Jones in logarithms, B measures the exposure to faster-than-exponential growth, β is the power-law exponent controlling the faster-than-exponential price growth, C measures the exposure responsible for the periodic oscillations, ω is the angular frequency of the log-periodic oscillations during the bubble formation, and φ is the phase parameter. Whereas φ cannot be meaningfully restricted, here we require |C| < 1 and impose the constraint 5 ≤ ω ≤ 15. We choose the optimal values Φ̂ = (A, B, t_c, β) = (6.11, −0.59, 8.12, 0.53) from the first estimation step. Setting in addition C = φ = 0, the LPPLS model (model 2) is then optimized for varying values of ω, that is, ω ∈ {5, 6, …, 14, 15}. Whereas Panel A of Table A.4 reports the corresponding input parameters, Panel B of Table A.4 reports the optimized parameters subject to the following constraints:
t_c ≥ T + 1,
0.1 ≤ β ≤ 0.9,
5 ≤ ω ≤ 15.
Note that the parameter φ remains unconstrained, in line with Sornette (2017, p. 336). The parameters are obtained using Microsoft Excel's non-linear solver.
Panel A. Initial parameter values for model 2.
Specification    1      2      3      4      5      6      7      8      9      10     11
𝐴                6.11   6.11   6.11   6.11   6.11   6.11   6.11   6.11   6.11   6.11   6.11
𝐵               -0.59  -0.59  -0.59  -0.59  -0.59  -0.59  -0.59  -0.59  -0.59  -0.59  -0.59
𝑡_c              8.12   8.12   8.12   8.12   8.12   8.12   8.12   8.12   8.12   8.12   8.12
𝛽                0.53   0.53   0.53   0.53   0.53   0.53   0.53   0.53   0.53   0.53   0.53
𝐶                0      0      0      0      0      0      0      0      0      0      0
𝜔                5      6      7      8      9      10     11     12     13     14     15
𝜙                0      0      0      0      0      0      0      0      0      0      0
𝑆𝑆𝑅             12.44  12.44  12.44  12.44  12.44  12.44  12.44  12.44  12.44  12.44  12.44
Panel B. Optimized parameter values for model 2.
Specification    1      2      3      4      5      6      7      8      9      10     11
𝐴                5.54   5.58   5.56   5.56   6.04   5.67   6.04   6.04   6.05   6.07   6.19
𝐵               -0.22  -0.25  -0.24  -0.24  -0.35  -0.26  -0.35  -0.35  -0.36  -0.36  -0.41
𝑡_c              7.31   7.40   7.31   7.31   8.97   7.74   8.97   8.97   8.99   9.01   9.27
𝛽                0.90   0.85   0.86   0.86   0.75   0.84   0.75   0.75   0.74   0.74   0.70
𝐶               -0.14  -0.13   0.13   0.13  -0.09  -0.12   0.09   0.09  -0.08  -0.08   0.08
𝜔                8.90   9.25   8.97   8.96  12.68   9.98  12.68  12.68  12.72  12.79  13.61
𝜙               -3.58  -4.34  -0.55  -0.55  -7.16   0.08  -4.03  -4.02  -0.98  -1.14  -0.12
𝑆𝑆𝑅              4.73   4.65   4.69   4.69   4.01   4.74   4.01   4.01   4.01   4.01   4.07
Table A.5. Testing for explosiveness in random-walk behavior.
Explosive random-walk behavior corresponding to faster-than-exponential growth is tested using the Augmented Dickey-Fuller (ADF) test, requiring the implementation of the following test regression:
∆p_t = φ_0 + φ_1 t + φ_2 p_{t−1} + φ_3 ∆p_{t−1} + ⋯ + φ_{p+2} ∆p_{t−p} + e_t.
We hypothesize that faster-than-exponential growth should be manifested in φ_2 < 0 and φ_1 > 0, implying explosiveness in random-walk behavior. This table reports the parameter estimates for the test regression using (i) monthly data on the S&P 500 for the period January 1871 to November 2022, (ii) daily data on the S&P 500 for the period January 2, 1980 until December 31, 1986, and (iii) daily data on the Dow Jones index for the period June 1, 1921 until December 31, 1928. The lag order for each model (i)-(iii) is chosen with respect to the SIC, with a maximum lag length of 24. The critical values for the ADF tests are -3.96, -3.41, and -3.13 for the 1%, 5%, and 10% significance levels. The ADF test statistic is denoted as λ̂ and the corresponding p-values are reported as well.
Sample   φ_0                φ_1                 φ_2                φ_3                φ_4                 R²       λ̂       p-value   SSR
(i)      0.0008 (0.44)      1.28E-05** (2.27)   -0.0026* (-1.80)   0.2980*** (12.75)  -0.0803*** (-3.43)  0.0858   -1.80   0.7057    2.7476
(ii)     0.0228** (2.18)    2.26E-06** (2.09)   -0.0048** (-2.15)  0.0965*** (4.07)                       0.0115   -2.15   0.5174    0.1424
(iii)    0.0151 (1.50)      2.65E-06* (1.75)    -0.0035 (-1.48)                                           0.0020   -1.48   0.8363    0.1678
***Statistically significant on a 1% level, **Statistically significant on a 5% level, *Statistically significant on a 10% level.
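The test regression of Table A.5 is a plain OLS regression of ∆p_t on a constant, a linear trend, the lagged level, and lagged differences. As a sketch, it can be built by hand as follows; the function name and the random-walk placeholder series are illustrative, not the original data.

```python
import numpy as np

def adf_regression(p, lags):
    """OLS estimates for dp_t = phi0 + phi1*t + phi2*p_{t-1} + sum_j phi_{2+j}*dp_{t-j} + e_t."""
    dp = np.diff(p)                      # dp[s] = p[s+1] - p[s]
    rows = []
    for s in range(lags, len(dp)):
        # regressors: constant, trend, lagged level p[s], and lagged differences
        rows.append([1.0, float(s), p[s]] + [dp[s - j] for j in range(1, lags + 1)])
    X = np.asarray(rows)
    y = dp[lags:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, float(resid @ resid)    # point estimates and SSR

rng = np.random.default_rng(7)
logp = np.cumsum(rng.normal(0.0005, 0.01, 2000))   # placeholder log-price random walk
coef, ssr = adf_regression(logp, lags=2)
print("phi estimates:", np.round(coef, 4), "SSR:", round(ssr, 4))
```

In practice one would obtain λ̂ as the t-statistic on the lagged-level coefficient and compare it with the Dickey-Fuller critical values, as done in the table above.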
Figure A.1. The S&P 500 in logarithms and optimized model 2 (LPPLS model) using daily data for
the 1980-1986 period.
Figure A.2. The Dow Jones in logarithms and optimized model 2 (LPPLS model) using daily data
for the 1921-1928 period.