Links between Entropy, Complexity, and the
Technological Singularity
Theodore Modis
Address reprint requests to:
Theodore Modis
Growth Dynamics
Via Selva 8
6900 Massagno
Lugano, Switzerland.
Tel. 41-91-9212054, E-mail: tmodis@yahoo.com
Abstract
Entropy always increases monotonically in a closed system but complexity increases at
first and then decreases as equilibrium is approached. Commonsense information-related
definitions for entropy and complexity demonstrate that complexity behaves like the time
derivative of entropy, which is proposed here as a new definition for complexity. A 20-year-old study had attempted to quantify complexity (in arbitrary units) for the entire Universe in terms of 28 milestones, breaks in historical perspective, and had concluded that complexity will soon begin decreasing. That conclusion is now corroborated by other researchers. In addition, the exponential runaway technology trend advocated by supporters of the singularity hypothesis, which was in part based on the trend of the very 28 milestones mentioned above, would have anticipated five new such milestones by now, but none have been observed. The conclusions of the 20-year-old study remain valid: we are at the maximum of complexity and we should expect the next two milestones at around 2033 and 2078.
Keywords: entropy; complexity; singularity; logistic growth; S-curve
1. Introduction
This work was triggered by the author’s invitation to speak at the international symposium Social singularity in the 21st century: At the crossroads of history, in Prague, CZ, on September 18, 2021 (InstituteH21, 2021). The organizers asked him for an update of his 20-year-old work on the evolution of complexity and change in our lives (Modis, 2002; Modis, 2003) and its impact on the possibility of an approaching technological singularity. The author has previously published three related updates (Modis, 2006; Modis, 2012; Modis, 2020).
During the last ten years there has been much literature published on the subjects of
complexity and singularity. One notable example is the work of theoretical physicist Sean M.
Carroll whose bestselling book The Big Picture: On the Origins of Life, Meaning, and the Universe
Itself argues that complexity is related to entropy and that “complexity is about to begin declining” (Carroll, 2016). The idea that complexity first increases and then decreases as
entropy increases in closed systems had been previously suggested by several researchers
(Huberman et al., 1986; Grassberger, 1989; Li, 1991; Gell-Mann, 1994; Carroll, 2010; Carroll,
2016). In the same direction Kauffman had coined the term “complexity catastrophe” to explain how an overly connected network ends up with complexity as low as that of a sparsely connected network (Kauffman, 1995). But in a more recent publication, Carroll together
with Aaronson and Ouellette demonstrated quantitatively the phenomenon of decreasing
complexity when approaching equilibrium by calculating the complexity and the entropy in a
cup of coffee that is undergoing the mixing of coffee and cream (Aaronson et al., 2014).
These publications provided fertile ground for the work presented here. Two short videos by Sean Carroll popularize these ideas on YouTube for the layperson (Carroll, 2021).
Entropy and complexity are subjects that have enjoyed enormous attention in the
scientific literature. Their treatment in the next section is very brief and relates only to their
connection to the concept of a technological singularity. With information-related definitions
for entropy and complexity, a simple mathematical relationship between them is established
in light of which the author reinstates his 20-year-old conclusion, namely that we should
expect a decreasing complexity in the future instead of an approaching technological
singularity. This conclusion has been corroborated by Magee and Devezas, who studied shorter-timescale, technologically driven or simply human-driven profound societal changes (Magee et al., 2011).
2. Entropy and Complexity
2.1 Entropy
There are many definitions of entropy. The concept was first developed by the German physicist Rudolf Clausius in the mid-nineteenth century (Clausius, 1867). The
classical thermodynamic entropy is defined in terms of the energy (heat) and the
temperature of a system. Boltzmann’s definition involves the number of different ways
the atoms or molecules of a thermodynamic system can be arranged; his celebrated
formula for entropy has been carved on his gravestone (Allen et al., 2017). The
definition of Gibbs involves the energy and the probability that it occurs for all
microstates of the system (Klein, 1990). There is also the quantum-mechanical entropy
defined by von Neumann (Zyczkowski et al., 2006). All these definitions of entropy are
related to each other but they are not relevant here.
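For reference, the two most-cited closed forms are Boltzmann’s gravestone formula and the Gibbs form (standard textbook expressions, quoted here only to fix notation; nothing in this paper depends on their details):

$$S = k \log W, \qquad S = -k \sum_i p_i \ln p_i$$

where W is the number of microstates consistent with the macrostate, p_i the probability of microstate i, and k Boltzmann’s constant.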
In this paper we will concentrate on the fact that entropy is a measure of the number of different ways a set of objects can be arranged, or “a measure of disorder” (Martin et al., 2013), even though entropy isn’t always disorder (Styer, 2019).1 With disorder defined as the number of possible configurations, a messy or disordered room has higher entropy than a tidy room: the number of possible configurations of the items in a messy room is higher than in a tidy room, where the items “inhabit a small set of possible places – the books on the bookshelf, the clothes in the dresser, and so on” (Martin et al., 2013). “The concepts of entropy and disorder are inherently linked” (Martin et al., 2013).
When entropy is high, disorder is generally high, and vice versa. Entropy always increases in a closed system, in accordance with the 2nd law of thermodynamics: ΔS > 0. Entropy may locally decrease, but it will then increase elsewhere in the system by at least the same amount, so that in a closed system entropy (and also disorder) will generally increase.

1 In recent times there has been criticism of the long-standing association of disorder with entropy. The interested reader can go into more depth on this subject by consulting such publications as Floyd, 2007; Lambert, 2002; Lowe, 1988; Styer, 2000; and Wright, 1970.
There is a link between entropy and information. The higher the number of
possible configurations in a system, the more information is needed to describe the
system, i.e. the higher its information content will be. In information theory Shannon
has defined entropy as a measure of the information content in a message (Shannon,
1948). This is the amount of information an observer could expect to obtain from a
given message. A highly ordered, low-entropy state contains less information compared
to a highly disordered, high-entropy state. Let’s go back to the tidy-room example. If
they tell us a living-room is tidy (ordered), the information content of the message is
limited. Probably there is a sofa with pillows on it, there is an easy chair, a television
against the wall, chairs around a table, etc. But if they tell us that the living room is
utterly disordered, the information content of the message is much higher, because it
may include oddball situations like pillows on the floor, the television upside down, dirty
dishes on the table, chairs scattered around, etc. The more disordered the living room,
the greater the information content of the message we are given.
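As a toy illustration of this point, the sketch below computes Shannon’s entropy, H = −Σ p_i log2 p_i, for two hypothetical rooms; the item locations and their counts are invented for the example, not taken from the article:

```python
import math
from collections import Counter

def shannon_entropy(outcomes):
    # Shannon entropy (in bits) of the empirical distribution of outcomes.
    counts = Counter(outcomes)
    total = len(outcomes)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Tidy room: items almost always sit in the same few places.
tidy = ["shelf"] * 18 + ["dresser"] * 2
# Messy room: items equally likely to be in any of many places.
messy = ["shelf", "floor", "sofa", "table", "chair"] * 4

print(shannon_entropy(tidy))   # ~0.47 bits: a short message describes the room
print(shannon_entropy(messy))  # ~2.32 bits: more information needed per item
```

The disordered room needs roughly five times as many bits per item, which is the sense in which the message describing it carries more information.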
For the rest of this paper we will define entropy as information content.
On a larger scale entropy began increasing at the beginning of the Universe with the Big Bang, when the Universe is thought to have been a smooth, hot, rapidly expanding, and rather orderly plasma: a state with low entropy and low information
content. Entropy will reach a maximum at the end of the Universe, which in a prevailing
view will be a state of heat death, after black holes have evaporated and the acceleration
of the Universe has dispersed all energy and particles uniformly everywhere (Carroll,
2010). The information content of this final state of maximal disorder (everything being everywhere), namely the knowledge of the precise position and velocity of every particle in it, will also reach a maximum.
Entropy’s trajectory grew rapidly during the early Universe. As the expansion of the Universe accelerated, entropy’s growth accelerated; its trajectory followed a rapidly rising exponential-like growth pattern. At the other end, approaching heat death, entropy will grow slowly to asymptotically reach the ceiling of its final maximum (Patel, 2019), most likely along another exponential-like pattern. It follows that the overall trajectory of entropy will trace some kind of an S-shaped curve with an inflection point somewhere around the middle.
2.2 Complexity
There are also many definitions for complexity. In fact, John Horgan, in his June 1995 Scientific American essay entitled “From complexity to perplexity,” mentioned a list of 31 definitions of complexity (Horgan, 1995). Notable among them are Kolmogorov complexity, which measures the computational resources needed to specify an object (Kolmogorov, 1963; Kolmogorov, 1998), and effective complexity, defined by Murray Gell-Mann and Seth Lloyd as a measure of the amount of non-random information in a system (Gell-Mann et al., 1996).
But in this paper, and for the sake of consistency with the previous section, we will
use the following information-related definition for complexity: the capacity of a system
to incorporate information at a given time. Complexity is more like a snapshot while
entropy is more like a sum. Informally, complexity reflects the amount of information needed to describe everything “interesting” about the system at a given point in time (“interesting” information is non-random information). More intuitively, complexity reflects how difficult it is to describe the system: the higher the complexity, the more difficult the description.2

2 This echoes Rosen’s epistemological account of complexity: “To say that a system is complex … is to say that we can describe the same system in a variety of distinct ways …” (Rosen, 2000).
In a closed system, entropy and complexity increase together initially, in other
words the greater the disorder the more difficult it is to describe the system. But things
change later on. Toward the end, as entropy approaches its final maximum where there
is also maximal disorder, complexity diminishes. Maximal disorder is simple to describe.
By the time entropy reaches its final ceiling the information content has become
maximal but also not ―interesting‖ because it has become 100% random information.
The degradation of the information content into non-interesting random information
begins when entropy reaches the inflection point of its trajectory, i.e. when the rate of
growth becomes maximal. At that point complexity goes over a maximum and begins
decreasing. Aaronson et al. have likened complexity to “interestingness.” They have
demonstrated that it declines as entropy reaches a ceiling with the example of a cup of
coffee with cream (Aaronson et al., 2014). In the beginning when the cream rests calmly
on top of the coffee, the entropy of the system is small (there is also order) and the
complexity is also small because the situation is very easy to describe. At the end of the
stirring when coffee and cream are completely mixed together, entropy is maximal
(there is also maximum disorder because everything is everywhere) but the situation is
again easy to describe, so the complexity is low again. Around the middle of the mixing
process when entropy (and also disorder) is growing fastest the complexity of the
system is maximal.
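In the spirit of that experiment, here is a minimal, noisy toy sketch (not the actual automaton of Aaronson et al.; the lattice size, swap counts, window width, and the zlib compression proxy are all arbitrary choices made for illustration). The compressed size of the fine-grained state stands in for entropy, and the compressed size of a coarse-grained, smoothed description stands in for “apparent complexity”:

```python
import random
import zlib

def compressed_size(cells):
    # Compressed length as a crude proxy for information content.
    return len(zlib.compress(bytes(cells), 9))

def coarse_grain(cells, window=50):
    # Average over windows and quantize to 0-9: a smoothed, macroscopic view.
    return [min(9, sum(cells[i:i + window]) * 10 // window)
            for i in range(0, len(cells), window)]

n = 2000
state = [0] * (n // 2) + [1] * (n // 2)   # "cream" layer resting on "coffee"

for step in range(2_000_001):
    i = random.randrange(n - 1)           # diffusive mixing by local swaps
    state[i], state[i + 1] = state[i + 1], state[i]
    if step % 400_000 == 0:
        print(step,
              compressed_size(state),                # entropy proxy: keeps rising
              compressed_size(coarse_grain(state)))  # complexity proxy: low, up, low
```

Qualitatively, the coarse description starts trivial (a flat block of cream above a flat block of coffee), develops structure at many scales mid-mixing, and flattens out again once fully mixed; Aaronson et al. quantify this rise and fall properly in their paper.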
Another example is the Universe itself. The very early Universe near the Big Bang was a low-entropy and easy-to-describe state (low complexity). But the high-entropy state of the end will also be easy to describe, because everything will be uniformly distributed everywhere. Complexity was low at the beginning of the Universe and will be low again at the end. It becomes maximal (most difficult to describe) around the middle, the inflection point of entropy’s trajectory, when entropy’s rate of change is maximal (see milestones 27 and 28 in the next section). Complexity follows a bell-shaped curve similar to the time derivative of a logistic function.
2.3 A new relationship between entropy and complexity
With the above-mentioned information-related definitions for entropy and
complexity for a closed system, namely:
Entropy: the information content
(or a measure of the amount of disorder)
Complexity: the capacity to incorporate information at a given time
(or a measure of how difficult it is to describe at a given time)
we see that entropy results from the accumulation of complexity, or alternatively, that
complexity is the time derivative of entropy. Entropy traces out an S-shaped curve while
complexity traces a bell-shaped curve. The “interestingness” of entropy’s information content diminishes during the second half of the growth process and so does the complexity of the system. At the end there is purely random information everywhere and zero capacity to incorporate “interesting” information.

In this case, i.e. with the chosen definitions, a new relationship between entropy and complexity can be written as:

$$C = \frac{dS}{dt} \qquad (1)$$

or

$$S = \int C \, dt \qquad (2)$$
The patterns of the trajectories followed by entropy and complexity may turn out not to be
exactly the classical logistic patterns, which are symmetric around the midpoint. But in the
coffee-and-cream study mentioned earlier, and with the particular quantitative definitions the investigators used, they indeed found complexity to trace a symmetric bell-shaped curve while entropy approached a ceiling asymptotically; see Figure 2 in (Aaronson et al., 2014).
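To make the claimed relationship concrete, the sketch below evaluates a classical logistic S(t) for the entropy trajectory and its analytic derivative for complexity; the ceiling, rate, and midpoint parameters are arbitrary illustrative values, not fitted to anything in this paper:

```python
import math

def entropy_S(t, ceiling=1.0, rate=1.0, midpoint=0.0):
    # Logistic S-curve: S(t) = ceiling / (1 + exp(-rate * (t - midpoint)))
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def complexity_C(t, ceiling=1.0, rate=1.0, midpoint=0.0):
    # Equation (1): C = dS/dt = rate * S * (ceiling - S) / ceiling,
    # the bell-shaped logistic life cycle.
    s = entropy_S(t, ceiling, rate, midpoint)
    return rate * s * (ceiling - s) / ceiling

for t in range(-6, 7, 2):
    print(f"t={t:+d}  S={entropy_S(t):.3f}  C={complexity_C(t):.3f}")
```

C peaks at the midpoint (t = 0 here), exactly where S crosses half its ceiling, i.e. at the inflection point of the S-curve.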
3. Forecasting Complexity
In his 2002 article the author attempted to quantify the evolution of complexity in the
Universe in terms of 28 “canonical” milestones (events of maximum importance, breaks in historical perspective) based on data he collected from thirteen different sources (Modis, 2002; Modis, 2003). In his book The Singularity Is Near, Kurzweil presented the data behind these 28 milestones in different ways demonstrating the rapid rate of change in our lives; see the four figures on pp. 17-20 of his book. Together with other runaway trends, Kurzweil arrived
at the conclusion that there is an approaching technological singularity (Kurzweil, 2005).
These 28 “canonical” milestones generally consist of clusters of events. They are reproduced here in Appendix A. The importance of each milestone was assumed to be proportional to the amount of complexity it brought multiplied by the length of the following stasis until the next milestone. Consequently the increase in complexity ΔCi associated with milestone i of importance I is:

$$\Delta C_i = \frac{I}{\Delta T_i} \qquad (3)$$

where ΔTi is the time period between milestone i and milestone i+1.
Under the assumption that milestones of maximum importance were also milestones of comparable (read equal) importance, values for complexity were obtained for 27 milestones in relative terms (i.e. with arbitrary units) as being inversely proportional to the time difference from one milestone to the next.
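As a quick check of these mechanics, the snippet below applies Equation (3) to the dates of the last few milestones from Appendix A, taking the importance I equal to 1 in arbitrary units (an assumption made here purely for illustration; the article’s own normalization is not published in this text):

```python
# Years before 2000 for canonical milestones 25-28 (Appendix A).
dates = {25: 223, 26: 100, 27: 50, 28: 5}

I = 1.0  # hypothetical importance, equal for all milestones (arbitrary units)
for i in range(25, 28):
    stasis = dates[i] - dates[i + 1]   # Delta T_i: years until the next milestone
    print(f"milestone {i}: stasis {stasis:3d} yr, Delta C = {I / stasis:.4f}")
```

The shrinking stasis (123, 50, 45 years) yields rising complexity values whose order of magnitude matches the arbitrary units of Table 1 below.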
In view of the discussion in Section 2.3 the accumulation of this complexity (i.e. the integral) should be akin (if not equal) to the system’s entropy. The evolution of the world seen through these 28 milestones is a non-equilibrium open system, and for such systems Grandy has demonstrated that it is the time derivative of entropy, rather than entropy itself, that plays the major role governing the ongoing macroscopic processes (Grandy, 2004).
Below are reproduced some results from the author’s work of twenty years ago. Figure 1 shows the “primordial” S-curve, a logistic fit (thick gray line) to the cumulative complexity values, which should be akin (if not equal) to the entropy of the system. Figure 2 shows complexity per milestone; the fitted curve here (thick gray line) is the bell-shaped logistic life cycle, i.e. the derivative of the logistic function.
Figure 1. A logistic fit (thick gray line) and an exponential fit (thin black line) to the
cumulated complexity values of 27 milestones. The graph at the bottom has a logarithmic
vertical scale. The red line is on the 28th milestone and coincides with the center of the
logistic.
Figure 2. A logistic life-cycle fit (thick gray line) and an exponential fit (thin black line) to the
complexity values of 27 milestones. The error bars reflect the spread on the values of the
milestones in the particular cluster. The little open circles forecast the position of future
milestones according to a logistic and to an exponential extrapolation. The graph at the
bottom has a logarithmic vertical scale. The red line is on the 28th milestone and coincides
with the center of the logistic.
The red line indicates the 28th milestone, for which a complexity value cannot yet be assigned because the 29th milestone is not known. The penetration level of the fitted logistic curve at this time (1990) is 50.1%.
We also see in these two figures an exponential fit to the data (thin black line), which would be compatible with the hypothesis of an approaching singularity. The two fits seem to describe the data comparably well, with the exception of the most recent data point, which is overestimated by the exponential fit, something more obvious in Figure 2.
The little open circles in Figure 2 forecast complexity values for future milestones according to a logistic and to an exponential extrapolation. Since complexity was calculated as being inversely proportional to the time to the next milestone, the forecasted complexity of future milestones (be it with a logistic or an exponential fit) can be translated to dates using Equation (3). Table 1 gives time estimates for the next five milestones according to the two forecasting methods.
Table 1. Milestone Forecasts

                    Logistic fit            Exponential fit
Milestone No.    Complexity*   Year      Complexity*   Year
     29            0.0223      2033        0.1540      2009
     30            0.0146      2078        0.3247      2015
     31            0.0081      2146        0.6846      2018
     32            0.0041      2270        1.4435      2020
     33            0.0020      2515        3.0436      2021

* In arbitrary units
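For the mechanics of that translation, the sketch below inverts Equation (3): each forecast complexity implies a stasis I/C to add to the previous milestone’s date. The importance constant I is hypothetical here, and milestone 28 is placed at 1995 (5 years before 2000, per Appendix A); since the article’s actual normalization and its logistic time scale are not reproduced, the resulting years are illustrative only and differ from Table 1:

```python
I = 1.0                     # hypothetical importance, arbitrary units
year = 1995                 # milestone 28 (Internet, human genome), per Appendix A

for c in [0.0223, 0.0146]:  # logistic-fit complexities for milestones 29, 30 (Table 1)
    year += I / c           # Equation (3) inverted: Delta T = I / C
    print(round(year))      # prints 2040, then 2108 (Table 1 itself gives 2033, 2078)
```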
4. Discussion
Twenty years after the author’s original work, his conclusion that complexity and change in our lives will soon begin decreasing is corroborated. First, by the work of other scientists who not only claim that complexity in a closed system must eventually decrease, but have also demonstrated with quantitative calculations that it does so symmetrically (Aaronson et al., 2014; Carroll, 2016). And second, by the mere fact that no milestones of paramount importance (breaks in historical perspective) have been observed, while five of them had been expected during these twenty years according to the exponential rate of growth advocated by supporters of the singularity hypothesis.
The relationship between entropy and complexity as expressed by Equations (1) and (2) is a direct
consequence of the definitions used in Section 2.3, but its validity could be more general despite the
fact that the relationship between entropy and complexity is not always one-to-one, as Wentian Li has
demonstrated (Li, 1991). As we said earlier, the various definitions of entropy are related to each other
and so are most of the definitions of complexity. Seeing complexity as the derivative of entropy may
have widespread appeal and utility on an intuitive level. After all, complexity reaches a maximum value
when entropy grows the fastest. Grandy has amply demonstrated the importance of the role played by
the derivative of entropy (Grandy, 2004).
In any case complexity, as determined by the 28 milestones, has reached a maximum and now
begins on the declining slope of its bell-shaped pattern. It is a direct consequence of having described
the accumulation of entropy by a natural-growth (logistic) pattern, which so far seems to hold as there
haven’t been any “milestones” in the last 25 years. There have been many small ones but nothing like
the Internet, DNA, or nuclear energy. The idea that our world’s complexity will decrease in the future
may seem difficult to accept but such a unimodal pattern (namely low at the beginning and at the end
but high in between, not unlike the normal (Gaussian) distribution3) is commonplace in everyday life. It is associated with a reversal appearing at extremes. We say, for example, that too much of a good thing is not good. We saw that too much disorder is easy to describe in the examples of coffee and cream, and in the evolution of the entire Universe. Also, I mentioned how Kauffman points out that an
overly connected network is as dysfunctional as a sparsely connected network. John Casti in his book
X-Events defines complexity as the number of independent decisions a decision-maker can make at
any given time‖ (Casti, 2012). Thus, if a decision-maker has only few decisions in his or her set of
possibilities, he/she faces low complexity. The complexity will increase as the number of possibilities
increases. But I believeCasti does not say thisthat if the decision-maker faces millions of
possibilities, life in fact will become simpler rather than more complicated because the situation will
trigger alternative ways to make decisions (e.g. random choices). Life may not be as simple as having
only one choice, but it will be simpler than having to choose among 20 or 30 possibilities, each of
which requires individual attention.
Because the time frame considered by this analysis is vast and the crowding of milestones in recent times is extremely dense, functions such as logistics and exponentials cannot describe the growth process adequately. There are processes for which our Euclidean (linear) conception of time does not accommodate an appropriate description. That is why a better-suited time variable was chosen for this analysis: the sequential milestone number, which constitutes a logistic time scale.
We are obviously dealing with an anthropic Universe here since we are overlooking how
complexity has been evolving in other parts of the Universe. Still, the author believes that such an
analysis carries more weight than just the elegance and simplicity of its formulation. John Wheeler has argued that the very validity of the laws of physics depends on the existence of consciousness.4 In a way, the human point of view is all that counts! In astronomy/cosmology this is referred to as the Anthropic Principle (Bostrom, 2010), which in its weak form basically states that one sapient life form (humans) looks back to the past from its point of view (Penrose, 1989).

4 John Wheeler was a renowned American theoretical physicist best known for first using the term “black hole” in 1967.
One may object to including such cosmic events as the Big Bang and the formation of galaxies in
the same set of milestones as the invention of agriculture, or the internet. But if we dropped the first
two milestones and repeated our analysis beginning with the 3rd milestone cluster (the formation of our
solar system and the earth, oldest rocks, and origin of life on earth), then the fitted curves would
change only imperceptibly. But at the same time, there would now be rough corroboration of the
conclusion that complexity and entropy are presently around their midpoints: the sun is close to its midlife (it is thought to be 4.6 billion years old and is expected to go out some 5.5 billion years from now).
But we could further restrict our data set to those milestones that have to do only with humans.
The reader’s attention is drawn to the fact that the trends in Figures 1 and 2 remain purely exponential
(straight line on the lower graphs with the logarithmic vertical scales) with extremely low values for
most of the range. The trends begin deviating from exponential only very recently, namely after
milestone No. 23, i.e. after the fall of Rome and the invention of zero and decimals. So even if we dropped all
pre-human milestones, we wouldn’t obtain a significantly different fit.
One of the thirteen data sets used to distill the 28 “canonical” milestones of Figures 1 and 2 was provided by Nobel Laureate Paul D. Boyer. In his contribution he had anticipated two future milestones without specifying their timing. Boyer’s 1st future milestone was “Human activities devastate species and the environment,” and the 2nd was “Humans disappear; geological forces and evolution continue.” The logistic-fit time estimates for the next two milestones from Table 1 are 2033 and 2078
respectively. It is likely that there are bona fide scientists who would agree more with Boyer’s future
milestones and these time estimates rather than with an approaching technological singularity.
Alternatively, and on a more positive and realistic note, the next two milestones could well be along the following lines:
2033. A cluster of achievements in AI, robotics, nanotechnology, bioengineering, NASA’s
scheduled human mission to Mars, etc. could qualify as one milestone in the same way
modern physics, radio, electricity, automobile, and airplane had done at the turn of the
twentieth century (milestone No. 26).
2078. Teleportation or creation of life, two fields that have been attracting the attention of researchers for some time now.
In his publication of 2002 the author had concluded that “we are sitting on top of the world” from
the point of view that we are experiencing complexity and change at their maximum and that they will
begin decreasing soon. Twenty years later there is no reason to revise that conclusion.
Author statement
I would like to thank Alain Debecker and Athanasios G. Konstandopoulos for fruitful discussions.
Appendix A
The 28 “canonical” milestones generally represent an average of clustered events, not all of which are mentioned in this table. That is why some events, e.g. the asteroid collision, may appear somewhat off in their dating. Highlighted in bold is the most outstanding event in each cluster. The dates given are expressed in number of years before the year 2000.
No.  Milestone                                                                      Date
1.   Big Bang and associated processes                                              1.55 × 10^10
2.   Origin of Milky Way, first stars                                               1.0 × 10^10
3.   Origin of life on Earth, formation of the solar system and the Earth,          4.0 × 10^9
     oldest rocks
4.   First eukaryotes, invention of sex (by microorganisms), atmospheric oxygen,    2.1 × 10^9
     oldest photosynthetic plants, plate tectonics established
5.   First multicellular life (sponges, seaweeds, protozoans)                       1.0 × 10^9
6.   Cambrian explosion, invertebrates, vertebrates, plants colonize land,          4.3 × 10^8
     first trees, reptiles, insects, amphibians
7.   First mammals, first birds, first dinosaurs, first use of tools                2.1 × 10^8
8.   First flowering plants, oldest angiosperm fossil                               1.3 × 10^8
9.   Asteroid collision, first primates, mass extinction (including dinosaurs)      5.5 × 10^7
10.  First hominids, first humanoids                                                2.85 × 10^7
11.  First orangutans, origin of Proconsul                                          1.66 × 10^7
12.  Chimpanzees and humans diverge, earliest hominid bipedalism                    5.1 × 10^6
13.  First stone tools, first humans, Ice Age, Homo erectus,                        2.2 × 10^6
     origin of spoken language
14.  Emergence of Homo sapiens                                                      5.55 × 10^5
15.  Domestication of fire, Homo heidelbergensis                                    3.25 × 10^5
16.  Differentiation of human DNA types                                             2.0 × 10^5
17.  Emergence of “modern humans,” earliest burial of the dead                      1.06 × 10^5
18.  Rock art, protowriting                                                         3.58 × 10^4
19.  Invention of agriculture                                                       1.92 × 10^4
20.  Techniques for starting fire, first cities                                     1.1 × 10^4
21.  Development of the wheel, writing                                              4907
22.  Democracy, city-states, the Greeks, Buddha                                     2437
23.  Zero and decimals invented, Rome falls, Moslem conquest                        1440
24.  Renaissance (printing press), discovery of New World, the scientific method   539
25.  Industrial revolution (steam engine), political revolutions (France, USA)     223
26.  Modern physics, radio, electricity, automobile, airplane                      100
27.  DNA structure described, transistor invented, nuclear energy,                 50
     World War II, Cold War, Sputnik
28.  Internet, human genome sequenced                                              5
References
Aaronson, S., Carroll, S., Ouellette, L. 2014. Quantifying the Rise and Fall of Complexity in
Closed Systems: The Coffee Automaton.
https://www.researchgate.net/publication/262677209
Allen, R., and Lidstrom, S., 2017. Life, the Universe, and everything – 42 fundamental
questions. Physica Scripta, 92 (1):012501. doi: 10.1088/0031-8949/92/1/012501
Bostrom, N., 2010. Anthropic Bias: Observation Selection Effects in Science and
Philosophy. Routledge, New York.
Carroll, S., 2010. From Eternity to Here: The Quest for the Ultimate Theory of Time.
Dutton, New York.
Carroll, S., 2016. The Big Picture: On the Origins of Life, Meaning, and the Universe.
Dutton, New York.
Carroll, S., 2021. Cream & Coffee. https://www.youtube.com/watch?v=NgAtvbRqckQ.
The Universe Is Your Problem Solver. So Is Coffee.
https://www.youtube.com/watch?v=0MazeG_Gl5s.
Casti, J., 2012. X-Events: The Collapse of Everything. William Morrow, New York.
Clausius, R., 1867. The Mechanical Theory of Heat with its Applications to the Steam
Engine and to Physical Properties of Bodies. John van Voorst, London.
Floyd, J., 2007. Thermodynamics, entropy and disorder in futures studies. Futures, 39 (9),
pp.1029-1044.
Gell-Mann, M., 1994. The Quark and the Jaguar: Adventures in the Simple and the
Complex. Henry Holt and Company, New York.
Gell-Mann, M., Lloyd, S., 1996. Information measures, effective complexity, and total
information. Complexity 2 (1):44-52. https://philpapers.org/rec/GELIME
Grandy, W. T., 2004. Time Evolution in Macroscopic Systems. II. The Entropy. Foundations
of Physics. 34 (1): 21-57.
Grassberger, P., 1989. Problems in Quantifying Self-organized Complexity, Helvetica Physica
Acta. 62: 498-511.
Horgan, J., 1995. From Complexity to Perplexity. Scientific American, 272 (6), June 1995, 104-
109.
Huberman, B. A., Hogg, T., 1986. Complexity and Adaptation. Physica D. 22: 376-384.
InstituteH21, 2021. Social singularity in the 21st century: At the crossroads of history.
International Symposium. September 18, 2021. Prague, CZ.
Kauffman, S., 1995. At Home in the Universe: The Search for the Laws of Self-
Organization and Complexity. Oxford University Press, New York.
Klein, Martin J., 1990. The Physics of J. Willard Gibbs in his Time. Physics Today. 43 (9): 40-48. doi:10.1063/1.881258
Kolmogorov, A., 1963. On Tables of Random Numbers. Sankhyā Ser. A. 25: 369-375.
Kolmogorov, A., 1998. On Tables of Random Numbers. Theoretical Computer Science. 207 (2): 387-395. doi:10.1016/S0304-3975(98)00075-9.
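Kurzweil, R., 2005. The Singularity Is Near: When Humans Transcend Biology. Viking, New York.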
Lambert, F., 2002. Disorder – A Cracked Crutch for Supporting Entropy Discussions.
Journal of Chemical Education, 79, pp. 187-192.
Li, W., 1991. On the Relationship between Complexity and Entropy for Markov Chains and
Regular Languages. Complex Systems 5 (4), pp. 381-399.
Lowe, J.P., 1988. Entropy: conceptual disorder. Journal of Chemical Education, 65 (5), p.403.
Magee, C. L., Devezas, T. D., 2011. How many singularities are near and how will they disrupt human history? Technological Forecasting & Social Change, 78, pp. 1365-1378.
doi:10.1016/j.techfore.2011.07.013
Martin, J.S., Smith, N.A. and Francis, C.D., 2013. Removing the entropy from the definition
of entropy: clarifying the relationship between evolution, entropy, and the second law of
thermodynamics. Evolution: Education and Outreach, 6(1), pp.1-9.
Modis, T., 2002. Forecasting the growth of complexity and change. Technol. Forecast. Soc. Chang. 69: 377-404.
Modis, T., 2003. The Limits of Complexity and Change. The Futurist, May-June, 2003.
Modis, T., 2006. Book Review and Discussion. Technological Forecasting & Social Change, 73 (2):
104-112. doi: 10.1016/j.techfore.2005.12.004
Modis, T., 2006. The Normal, the Natural, and the Harmonic. Technological Forecasting & Social
Change, 74 (3): 391-404. doi.org/10.1016/j.techfore.2006.07.003
Modis, T., 2012. Why the Singularity Cannot Happen. Chapter in Eden, A. H. et al. (eds.),
Singularity Hypothesis. Springer-Verlag, Berlin Heidelberg: 311-339.
doi: 10.1007/978-3-642-32560-1_16
Modis, T., 2020. Forecasting the Growth of Complexity and Change – An Update. Chapter
in Korotayev, A.V. et al. (eds.), The 21st Century Singularity and Global Futures.
Springer Nature Switzerland, Cham, Switzerland: 101-104.
https://doi.org/10.1007/978-3-030-33730-8_4
Patel, V., and Lineweaver, C., 2019. Entropy Production and the Maximum Entropy of the
Universe. Proceedings, 46 (1), 11; doi.org/10.3390/ecea-5-06672
Penrose, R., 1989. The Emperor's New Mind: Concerning Computers, Minds and The Laws
of Physics. Oxford University Press, Oxford, United Kingdom.
Rosen, R., 2000. Essays on life itself. Columbia University Press, New York.
Shannon, Claude E., 1948. A Mathematical Theory of Communication. Bell System Technical Journal, 27 (3): pp. 379-423; 623-656.
Styer, D., 2019. Entropy as disorder: History of a misconception. The Physics Teacher, 57 (7),
pp. 454-458.
Styer, D.F., 2000. Insight into entropy. American Journal of Physics, 68 (12), pp.1090-1096.
Wright, P.G., 1970. Entropy and disorder. Contemporary Physics, 11 (6), pp.581-588.
Zyczkowski, K., and Bengtsson, I. 2006. An Introduction to Quantum Entanglement: a
Geometric Approach. arXiv:quant-ph/0606228v1
Biography
Theodore Modis is a physicist, strategist, futurist, and international consultant. He is
author/co-author to over one hundred articles in scientific and business journals and ten
books. He has on occasion taught at Columbia University, the University of Geneva, at
business schools INSEAD and IMD, and at the leadership school DUXX, in Monterrey,
Mexico. He is the founder of Growth Dynamics, an organization specializing in strategic
forecasting and management consulting: http://www.growth-dynamics.com
This article defines the concept of an information measure and shows how common information measures such as entropy, Shannon information, and algorithmic information content can be combined to solve problems of characterization, inference, and learning for complex systems. Particularly useful quantities are the effective complexity, which is roughly the length of a compact description of the identified regularities of an entity, and total information, which is effective complexity plus an entropy term that measures the information required to describe the random aspects of the entity. Mathematical definitions are given for both quantities and some applications are discussed. In particular, it is pointed out that if one compares different sets of identified regularities of an entity, the ‘best’ set minimizes the total information, and then, subject to that constraint, minimizes the effective complexity; the resulting effective complexity is then in many respects independent of the observer.