arXiv:1209.5881v1 [physics.soc-ph] 26 Sep 2012
Alessio Emanuele Biondo
Alessandro Pluchino
Andrea Rapisarda
The beneficial role of random strategies
in social and financial systems
Received: 26.09.2012 / Accepted: date
Abstract In this paper we focus on the beneficial role of random strategies in social sciences by means of simple mathematical and computational models. We briefly review recent results obtained by two of us in previous contributions for the case of the Peter principle and the efficiency of a Parliament. Then, we develop a new application of random strategies to the case of financial trading and discuss in detail our findings about forecasts of market dynamics.
Keywords Random strategies · Sociophysics · Efficiency · Numerical simulations · Peter Principle · Parliament · Financial Strategy · Financial Markets · Expectations · Momentum · RSI
1 Introduction
In science, and in physics in particular, noise and randomness are usually kept as low as possible in order to avoid any influence on the phenomena under examination. Often this is not actually possible and one has to live with noise; on the other hand, random noise is not always as annoying as one might intuitively think. In fact, there are many examples where randomness has proven to be extremely useful and beneficial. The use of random numbers in science is very well known, and Monte Carlo methods have been widely used for a long time [1].
A.E. Biondo
Dipartimento di Economia e Impresa - Università di Catania
E-mail: a.e.biondo@unict.it
A. Pluchino
Dipartimento di Fisica e Astronomia and INFN - Università di Catania
E-mail: alessandro.pluchino@ct.infn.it
A. Rapisarda
Dipartimento di Fisica e Astronomia and INFN - Università di Catania
E-mail: andrea.rapisarda@ct.infn.it
Moreover, in real physical experiments random noise has proven to be very useful and even crucial to explain or assist the dynamics: stochastic resonance is probably one of the most famous and well studied examples of this [2,3]. Of course there are many other cases which support this claim, like noise-induced stabilization [4], noise-improved efficiency in communication networks [5], noise-induced phase transitions [6], etc. On the other hand, in recent years there has been an increasing interest in social phenomena within the physics community [7,8,9,10,11]. Models to study collective behaviors in socio-economical systems, election mechanisms, consensus formation, management of organizations, spreading of ideas in social networks or herding effects in financial markets have been proposed and studied with success, providing rigorous and quantitative ways to investigate and help understand common dynamical laws behind social phenomena. In this respect, there also seems to be a growing feeling that the power of traditional optimization approaches for solving complex social or economical problems is overestimated, while the role of chance and fluctuations in these fields is usually underestimated [12,13,14,15,16].
It is just in this spirit that, a few years ago, we started to investigate the possible use of randomness in mathematical and computational models devised to analyze social phenomena. The first application we studied was related to the problems raised by the so-called Peter Principle [17,18,19]. By means of agent-based simulations, we demonstrated that random promotion strategies could stop the diffusion of incompetence in hierarchical groups, also obtaining an increase in the global efficiency of the organization under study. Encouraged by the success of these first studies, we have recently investigated a way to improve the efficiency of an institution like a Parliament by means of the random selection of part of its members [20]. We present a rapid overview of the main results obtained for these two applications in Section 2. But the central topic we want to address in this paper concerns financial market dynamics.
The very peculiar characteristic of economic and financial systems is that their dynamics depend on their past: economic decisions taken today rely on past expectations, whereas natural laws remain unchanged no matter what humans think. Thus, economic systems can be considered as feedback-influenced systems, since agents' expectations will influence the entire future dynamics. Such an argument inspired the contributions of many authors who tried to build a mechanism of belief formation. We can roughly say that two main reference models of expectations have been widely established within economic theory: the adaptive expectations model and the rational expectations model. We will not go into the formal description of such approaches, but we can usefully report the main difference between them. The adaptive expectations model (named after Arrow and Nerlove [21]), developed in contributions by Friedman [22,23], Phelps [24], and Cagan [25], assumes that the value of a variable is some weighted average of its past values. The rational expectations approach (whose birth dates back to contributions by Muth [26], Lucas [27], and Sargent-Wallace [28]), instead, assumes that agents know exactly the entire model describing the economic system and, since they are endowed with perfect information, their forecast for any variable coincides with the objective prediction provided by theory.
As one can easily understand, the possibility of making predictions is absolutely central in economic theory. In financial markets this problem is even more pressing, given the extremely high volatility and the strong instability that can be observed in those markets every day. This leads to elaborate theories of trading that try to forecast market dynamics in order to realize profits from intermediation. There is no unanimity of opinion, in the economic literature, about the actual ability of traders to predict financial values. The so-called Efficient Market Hypothesis, which refers to the rational expectations models, considers the rationality of agents as the main and most important ingredient of market dynamics, whereas the adaptive approach is oriented towards building forecasts from past dynamics.
Financial crises showed that trading mechanisms and strategies are not immune from failures. Their periodic success does not come free of charge: catastrophic events burn enormous amounts of value. Are we sure that elaborate strategies fit the unpredictable dynamics of markets? Are analysts aware that, without complete information and with imperfect markets, no fully rational mechanism can be invoked in financial transactions? In order to address these questions we perform a simple simulation: a comparative analysis of the performance of different trading strategies, namely two very famous and widely used technical strategies (momentum and RSI-divergence [29,30]), which traders adopt every day for their operations in real markets, versus a random strategy. Rational expectations theorists would immediately bet that the random strategy will easily lose the competition, but this is not the case, as we discuss in Section 3, where we present new detailed numerical results. Conclusions are drawn in Section 4.
2 Improving the efficiency of social groups or organizations by
means of random strategies
2.1 The case of the Peter Principle
The Peter principle was enunciated by the Canadian psychologist Lawrence J. Peter in a famous book of the 1960s [17], based on some sensible assumptions on the transfer of skills from one level of a hierarchical organization to the next one. The principle states that in a hierarchical organization "each member climbs the hierarchy, as a result of meritocratic promotions, until he/she reaches his/her minimum level of competence". Even if it sounds paradoxical, according to Peter such a perverse effect surely occurs whenever promoted people change their task in passing from one level to the next: in this way, incompetence will inevitably spread to the top of the organization, endangering its proper functioning. In refs. [18,19] it has been demonstrated by means of numerical simulations that the principle is true under certain conditions, and that one can overcome its effects by adopting random promotions. In the following we explain the details of the models used and the main results obtained.
The first model, studied in ref. [18], considered a schematic pyramidal organization with 160 positions divided into six levels. Each level had a different number of members, with a different responsibility according to the hierarchical position. The members of the organization were characterized by their age, in the interval 18-60 years, and by their degree of competence, in the range 1-10. As initial conditions we selected ages and competences following normal distributions. At each time step of one year, members who reach an age over the retirement threshold (fixed at 60 years) or have a competence lower than the dismissal threshold (fixed at 4) leave the organization, and someone from the level immediately below (or from outside, for the bottom level) has to be selected for promotion. Four different competing promotion strategies were taken into account: promotion of the best worker, promotion of the worst, promotion of a random worker, and promotion of the best and the worst in an alternating way. Two different mechanisms of competence transmission were also considered:
1) Common Sense (CS) - if the features required from one level to the next are stable enough, the new competence at the upper level is correlated with the previous one and the agent maintains his/her competence with a small error;
2) Peter Hypothesis (PH) - if the features required from one level to the next can change considerably, the new competence at the upper level is not correlated with the previous one, so the new competence is randomly assigned anew at each promotion.
The global efficiency E was defined by summing the competences of the members level by level, multiplied by the level-dependent factor of responsibility r_i, ranging from 0 to 1 and increasing linearly while climbing the hierarchy. If C_i is the total competence of the i-th level, the global efficiency can be written as
$$E(\%) = \frac{\sum_{i=1}^{6} C_i\, r_i}{Max(E)\cdot N}\cdot 100\,, \qquad \text{where} \quad Max(E) = \frac{\sum_{i=1}^{6} (10\cdot n_i)\cdot r_i}{N}\,,$$
and n_i is the number of agents of the i-th level.
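To make the above definitions concrete, the following Python sketch shows one possible implementation of the competence transmission under the CS and PH hypotheses and of the global efficiency E(%). It is our own illustrative reconstruction, not the original simulation code of refs. [18,19]: the level sizes, the responsibility factors and the distribution used in the PH case are plausible assumptions consistent with the text.

```python
import random

# Illustrative parameters for the six-level pyramidal model of ref. [18].
# The level sizes and responsibility factors are assumptions consistent with
# the text (160 positions, responsibility increasing while climbing the hierarchy).
LEVEL_SIZES    = [1, 5, 11, 21, 41, 81]            # level 1 (top) ... level 6 (bottom)
RESPONSIBILITY = [1.0, 0.8, 0.6, 0.4, 0.3, 0.2]    # r_i for each level
MAX_COMPETENCE = 10.0

def new_competence(old_competence, hypothesis, error=1.0):
    """Competence assigned to an agent after a promotion.
    CS: the agent keeps the old competence up to a small random error.
    PH: the new competence is drawn again at random, uncorrelated with the old one."""
    if hypothesis == "CS":
        c = old_competence + random.uniform(-error, error)
        return min(max(c, 1.0), MAX_COMPETENCE)
    if hypothesis == "PH":
        # assumed: same normal distribution used for newcomers, truncated to [1, 10]
        return min(max(random.gauss(7.0, 2.0), 1.0), MAX_COMPETENCE)
    raise ValueError("hypothesis must be 'CS' or 'PH'")

def global_efficiency(levels):
    """levels[i] is the list of competences of the agents at level i+1.
    Implements E(%) = 100 * sum_i C_i r_i / (Max(E) * N), with
    Max(E) = sum_i (10 * n_i) * r_i / N."""
    N = sum(len(level) for level in levels)
    total = sum(sum(level) * r for level, r in zip(levels, RESPONSIBILITY))
    max_e = sum(MAX_COMPETENCE * len(level) * r
                for level, r in zip(levels, RESPONSIBILITY)) / N
    return 100.0 * total / (max_e * N)
```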
The main results, found after averaging over many different realizations of the initial conditions, confirmed the risk of incompetence spreading when the Peter Hypothesis holds. In particular, it was found that promoting the best members is a winning strategy only if the CS hypothesis holds, otherwise it is a losing one. On the contrary, if the PH holds, the best strategy even becomes that of promoting the worst member. On the other hand, if one does not know which of the two hypotheses holds, then adopting a random promotion strategy, or alternating the promotion of the best and the worst candidates, always turns out to be a winning choice.
Although the paper was quite successful and appreciated also for its simplicity [31], its paradoxical results needed confirmation within a more realistic model. To this end, in a second model [19] a schematic modular organization was adopted, i.e. a hierarchical tree network with K = 5 levels, where each agent (node) at levels k = 1, 2, 3, 4 (excluding the bottom level, k = 5) has exactly L first subordinates (i.e. first neighbors at level k+1), who will fill that position when it becomes empty. This means that, at variance with the pyramidal schematic model of our first paper, in this case promoted agents follow the links to ascend through the levels. On the other hand, by neglecting the links and promoting agents from the entire level k to the next level k−1, one recovers the pyramidal model as a particular case. In the inset of Fig.1 we show an example of such a hierarchical tree network, with K = 5 levels and L = 5, for a total of N = 781 agents.
Fig. 1 The dynamical evolution of the model efficiency gain during the first 20 years. One can observe an immediate increase of the efficiency from the very beginning of the adoption of a random strategy with respect to a meritocratic one under the Peter hypothesis; see text for further details. The topology of the modular hierarchical organization considered is also shown in the top left part of the figure.
The responsibility value is 0.2 for the bottom level and increases linearly, like in the previous model, with a step of 0.2 for each level up to the top one, whose responsibility value is 1. Another improvement of the second model was the time unit adopted, which becomes one month instead of one year. Moreover, instead of studying the improvements with respect to an arbitrary initial state of the organization (with an arbitrary value of the initial global efficiency, as done in [17]), we evaluated a relative global efficiency Er(%), calculated with respect to a fixed transient during which a meritocratic strategy (i.e. the promotion of the best workers, coupled with the Peter hypothesis) was always applied. We also introduced a new "mixed" strategy, where an increasing percentage of random promotions is considered within an otherwise meritocratic strategy. Finally, we considered promotions of a member from one level to the next either following the links (neighbors mode) or without considering the links of the hierarchical tree (global mode), the latter in order to reduce the new model to the one considered in ref. [18].
In Fig.1 we report, as an example, the relative efficiency as a function of time for the global mode. A period of 240 months (20 years) was considered, and an average over 30 different realizations of the initial conditions was performed in order to diminish the effect of fluctuations. Our aim was to investigate the effects of the introduction of an increasing percentage of random promotions (from 25% to 100%) within an otherwise meritocratic strategy, under the Peter hypothesis of competence transmission. The plot clearly shows how even a moderate amount of randomness increases the efficiency of the organization in a rapid and substantial way. This second model thus confirms the results of the previous one, even for larger organizations and different topologies: random strategies provide a clear advantage, in terms of efficiency, with respect to a fully "naively" meritocratic system of promotions and, at the same time, diminish the risk of incompetence rising to the top of the organization. One can refer to ref. [19] for more details.
2.2 The case of the Parliament
Stimulated by the results about the Peter principle, we started to ask whether random strategies could also be useful in the selection of the members of political institutions. In this section we discuss a recent application of random strategies for improving the efficiency of a prototypical Parliament and present the main results obtained by two of us in ref. [20]. Inspired by the so-called 'Cipolla's diagram' [32], we realized a virtual model of one chamber of a Parliament by characterizing its members through their attitude to promote personal and general interest through legislative proposals. In this way we represented individual legislators as points in the two-dimensional Cipolla diagram, where the personal gain is reported on the x-axis and the social gain (considered as the final outcome of the trading relations produced by the laws) on the y-axis. Both the x and y coordinates of each legislator are real numbers in the interval [−1, 1]. In our simulations we considered a Parliament with N members and two parties or coalitions, P1 (the majority one) and P2 (the minority one), with different percentages of members. All the points representing the members of a party lie inside a circle, with a center whose position on the Cipolla diagram is fixed by the average collective behavior of all its members, and with a given radius r that fixes the extent to which the party tolerates dissent within it (the larger this radius, the greater the degree of tolerance within the party; for this reason this circle was called the circle of tolerance of the party).
In [20] we found that the efficiency of the Parliament, defined as the product of the percentage of accepted proposals times their overall social welfare, can be influenced by the introduction of a given number Nind of legislators who are not elected but randomly selected, called 'independent' since we assume that they remain free from the influence of any party. These independent legislators are represented as free points on the Cipolla diagram. The dynamics of our model is very simple. During a legislature L, each legislator, independent or belonging to a party, can perform only two actions:
(i) he/she can propose one or more acts of Parliament, with a given personal and social advantage depending on his/her position on the diagram;
(ii) he/she has to vote for or against a given proposal, depending on his/her acceptance window, i.e. a rectangular subset of the Cipolla diagram into which a proposed act has to fall in order to be accepted by the voter (whose position fixes the lower left corner of the window). The main point is that, while each free legislator has his/her own acceptance window, so that his/her vote is independent of the others' votes, all the legislators belonging to a party always vote using the same acceptance window, whose lower left corner corresponds to the center of the circle of tolerance of their party. Furthermore, following party discipline, any member of a party accepts all the proposals coming from any other member of the same party (see [20] for further details).
Fig. 2 Simulation results for a Parliament with N = 500 members, two parties P1 and P2, and Nind independent (randomly selected) legislators. Panel (a): average percentage of accepted proposals vs Nind; Panel (b): average overall social gain vs Nind; Panel (c): size of the three Parliament components (P1, P2 and Nind) vs Nind; Panel (d): global efficiency vs Nind. All the numerical points represent averages over 100 different legislatures. See text.
Once all the N members of Parliament have voted for or against a certain proposal, the latter is accepted only if it receives at least N/2 + 1 favorable votes. At this point we can calculate the efficiency Eff(L) of the Parliament during a legislature L by simply multiplying the percentage of accepted proposals Nacc(L) times the overall social gain Y(L) they ensure (notice that Eff(L) will therefore be a real number in the interval [−100, 100]). In this respect, we investigated how the three quantities Nacc(L), Y(L) and Eff(L) change as a function of the number Nind of independent legislators introduced in the Parliament.
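As an illustration of the model's mechanics, the sketch below simulates a single legislature in a deliberately simplified form: party members all vote with their party's acceptance window, proposals are drawn uniformly on the Cipolla diagram, and the rule by which party members automatically accept proposals coming from their own party is omitted for brevity. All function and parameter names are ours, not those of ref. [20].

```python
import random

def accepts(window_corner, proposal):
    """A voter accepts a proposal (x, y) if it falls inside his/her acceptance
    window, i.e. above and to the right of the window's lower-left corner."""
    px, py = proposal
    wx, wy = window_corner
    return px >= wx and py >= wy

def run_legislature(party_sizes, party_centers, free_windows, n_proposals=1000):
    """Simplified single-legislature dynamics.
    party_sizes[k] legislators of party k all vote with the common window
    party_centers[k]; free_windows lists the individual windows of the
    independent legislators. Returns (Nacc in %, average social gain Y, efficiency)."""
    N = sum(party_sizes) + len(free_windows)
    accepted, social_gain = 0, 0.0
    for _ in range(n_proposals):
        proposal = (random.uniform(-1, 1), random.uniform(-1, 1))
        favorable = sum(size for size, center in zip(party_sizes, party_centers)
                        if accepts(center, proposal))
        favorable += sum(1 for w in free_windows if accepts(w, proposal))
        if favorable >= N // 2 + 1:          # at least N/2 + 1 favorable votes
            accepted += 1
            social_gain += proposal[1]       # y coordinate = social gain of the act
    n_acc = 100.0 * accepted / n_proposals
    y_avg = social_gain / accepted if accepted else 0.0
    return n_acc, y_avg, n_acc * y_avg       # Eff(L) lies in [-100, 100]
```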
In Fig.2 we present some of the main results of [20], obtained by simulating a Parliament with N = 500 members and two parties, P1 and P2, with 60% and 40% of the legislators respectively. Notice that these values represent the percentages of seats assigned to the majority and minority parties in a given legislature after the Nind seats have been reserved to the independent legislators; they therefore decrease as Nind increases.
In panels (a) and (b) we plot, respectively, the percentage of accepted proposals and the corresponding social gain as a function of the number of independent legislators, averaged over a set of NL = 100 legislatures, each one with a total number of 1000 proposals but with a different random distribution of legislators and parties on the Cipolla diagram. We also repeated all the simulations for two different values of the radius r of both parties (0.1 and 0.4). It clearly appears that, on average, the introduction of an increasing number of independent legislators causes a decrease in the percentage of accepted acts (since it reduces the weight of party discipline in accepting each proposal) but, simultaneously, also produces an increase of the average social gain of the same accepted acts (since, in the presence of independent legislators, only the proposals ensuring a higher social gain succeed in being accepted by the majority of the Parliament). In both curves two different threshold values of Nind, corresponding to a change in the slope, can be recognized; they can be easily explained by looking at panel (c), where the sizes of the two parties P1 and P2 are reported as functions of Nind (this point was not discussed in [20], so it still needs clarification). It turns out that (for our choice of parameters) the party P1 loses the absolute majority in the Parliament for Nind > 84, therefore only above this first threshold do Nacc and Y(L) start to change significantly. The second threshold, on the other hand, occurs when the independent component becomes, in turn, the absolute majority of legislators, thus accelerating, respectively, the decreasing and increasing trends of Nacc and Y(L). In any case, despite these explanations, it remains absolutely non-trivial to predict the exact shape of these nonlinear curves, and this reflects on the difficulty of determining a priori the resulting efficiency.
Finally, in panel (d), we plot the product of the two previous quantities, thus obtaining (a posteriori) the global efficiency of the Parliament (averaged over the 100 legislatures). It is worthwhile to notice here that:
(i) for any value of Nind the global efficiency shows an increment with respect to the two extreme cases Nind = 0 (only parties) and Nind = N (only independent members and no parties); this means that, in analogy with the results shown in the previous subsection, even a small degree of randomness added to the system is able to increase its performance;
(ii) the combination of the two previous curves gives rise to a pronounced peak in efficiency at a critical value N*_ind = 140 of independent legislators, which does not change with the radius r but only depends on the relative size of the two parties. In [20] we discovered that it is possible to write down an analytical formula, called the efficiency golden rule, able to exactly predict the value N*_ind as a function of the total number of legislators N and the size p (in percentage) of the majority party P1. The formula is the following:
$$N^{*}_{ind} = \frac{2N - 4N\cdot(p/100) + 4}{1 - 4\cdot(p/100)}\,.$$
It allows us to imagine a new electoral system where, after ordinary elections determining the relative sizes of the majority and minority parties, one could use our 'golden rule' to find the optimal number of independent legislators, chosen at random among all the citizens willing to stand as candidates (outside the party system), able to maximize the Parliament efficiency.
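Since the golden rule is a closed-form expression, it can be evaluated directly; the short snippet below simply transcribes the formula and reproduces the value quoted above for the simulated Parliament.

```python
def optimal_independents(N, p):
    """Efficiency 'golden rule' of ref. [20]: optimal number of independent
    (randomly selected) legislators, given the total number of seats N and
    the percentage p of seats of the majority party."""
    return (2 * N - 4 * N * (p / 100.0) + 4) / (1 - 4 * (p / 100.0))

print(round(optimal_independents(500, 60)))   # -> 140, as in the text
```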
In conclusion, in this section we have shown a couple of applications of numerical simulations to management and politics, with the aim of convincing the reader that some degree of randomness could play a constructive role in improving the efficiency of our institutions. In the following section we will give further support to this hypothesis by presenting a new application of random strategies to financial markets.
Fig. 3 Panel (a): behavior of the FTSE UK index from January 1st, 1998 to August 3rd, 2012 (14 years, 3714 days). Panel (b): returns series for the FTSE UK index in the same period.
3 Financial Markets, Randomness and Trading Strategies: The
Case Study of FTSE UK index
In 2001 a well-known British psychologist, Richard Wiseman, performed an eccentric experiment in order to test the predictive power of trading strategies in financial markets [33]. He gave the same virtual amount of money (5000 pounds) to three very different people, a London financial trader, an astrologer, and a four-year-old child named Tia, asking them to invest the money in the UK stock market (the trader following his algorithms, the astrologer following the movements of the stars, and Tia completely at random). At the end of a very turbulent year for the world financial markets, the result of the competition was completely unexpected: while the trader had lost 46.2% and the astrologer 6.2%, Tia, with the help of her random strategy, had even earned +5.8%! In the following years other similar experiments, with similar results, were repeated by substituting the child with a chimpanzee [33] but, as far as we know, no one has yet tried to test the effectiveness of random strategies in finance through computer simulations. This is exactly what we will do in this section, using the FTSE UK all-share index series of the last 14 years as a case study.
In panel (a) of Fig.3 we plot the behavior of the FTSE UK index I(t) from January 1st, 1998 to August 3rd, 2012, for a total of T_UK = 3714 days, while in panel (b) we report the corresponding 'returns series', calculated as the ratio [I(t) − I(t−1)]/I(t). From the latter it immediately appears, if one imagines dividing the time series into three trading windows of equal size (of around 1200 days each), that the index behavior alternates a first intermittent period with a more regular one, ending again with a last intermittent interval. A finer resolution would reveal a further, self-similar, alternation of intermittent and regular behavior over smaller time scales, a well known feature (which also resembles turbulence phenomena) characterizing financial markets [12,13,14]. As previously anticipated, our goal is to test the performance of three different trading strategies in simply predicting, day by day, the upward ('bullish') or downward ('bearish') movement of the index I(t+1) at a given day with respect to the closing value I(t) one day before: if the prediction is correct the trader wins, otherwise he/she loses. In this respect we are only interested, here, in evaluating the percentage of wins or losses guaranteed by each strategy at different time scales, assuming that, at every time step, the traders know the past history of the FTSE UK index perfectly but do not possess any other information and can neither exert nor receive any influence on or from the market. These are, of course, naive approximations, but we are looking for a minimal model able to explore the role of randomness in financial trading and, as we will immediately show, this one seems able to do it.
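For concreteness, the returns series and the elementary win/loss rule described above can be written as follows; this is a minimal sketch in which `predict` stands for any of the three strategies defined next, and the FTSE UK data themselves are of course not reproduced here.

```python
import numpy as np

def returns(index):
    """Returns series r(t) = [I(t) - I(t-1)] / I(t), as defined in the text."""
    index = np.asarray(index, dtype=float)
    return (index[1:] - index[:-1]) / index[1:]

def win_fraction(index, predict):
    """Daily prediction game: at each day t the trader calls the sign of
    I(t+1) - I(t) through predict(t, index), which returns +1 ('bullish')
    or -1 ('bearish'). Returns the fraction of correct calls."""
    index = np.asarray(index, dtype=float)
    correct = sum(predict(t, index) == np.sign(index[t + 1] - index[t])
                  for t in range(len(index) - 1))
    return correct / (len(index) - 1)
```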
The three strategies we will adopt in the present study are the following:
1) Random (RND) Strategy. This strategy is the simplest one, since the corresponding trader makes his/her 'bullish' or 'bearish' prediction at time t completely at random (with uniform distribution), like Tia in Wiseman's experiment. The other two strategies, on the contrary, are based on two indicators that are very well known to financial traders.
2) Momentum-based (MOM) Strategy. This strategy is based on the so-called 'momentum' indicator M(t), i.e. the difference between the value I(t) and the value I(t − τ_M), τ_M being a given trading interval (expressed in days). If M(t) = I(t) − I(t − τ_M) > 0, the trader predicts an increment of the closing index for the next day (i.e. he/she predicts that I(t+1) − I(t) > 0), and vice versa. In the following simulations we will consider τ_M = 7 days, since this is one of the most commonly used time lags for the momentum indicator.
3) RSI-based Strategy. This latter strategy is based on a more complex indicator called 'RSI' (Relative Strength Index) [29]. It is considered a measure of the stock's recent trading strength and its definition is RSI(t) = 100 − 100/[1 + RS(t)], where RS(t, τ_RSI) is the ratio between the sum of the positive returns and the sum of the negative returns occurred during the last τ_RSI days before t. Once the RSI index has been calculated for all the days included in a given time window of length T_RSI immediately preceding time t, the trader who follows the RSI strategy makes his/her prediction on the basis of a possible reversal of the market trend, revealed by the so-called 'divergence' between the original FTSE UK series and the new RSI one (see [29] for more details). In our simplified model, the presence of such a divergence translates into a change in the prediction of the sign of I(t+1) − I(t), depending on the 'bullish' or 'bearish' trend of the previous T_RSI days. In the following simulations we will choose τ_RSI = T_RSI = 14 days since, again, this value is one of the most commonly used in actual RSI-based trading strategies [30].
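The three prediction rules can be sketched as follows. The random and momentum rules follow the definitions above directly; the RSI-divergence rule is instead a strong simplification of the real indicator-based strategy, written according to our reading of the text, so both its logic and the helper names should be taken as illustrative assumptions (τ_M = 7 and τ_RSI = T_RSI = 14 as in the simulations).

```python
import random
import numpy as np

TAU_M = 7      # momentum lag (days)
TAU_RSI = 14   # RSI look-back window (days)

def predict_rnd(t, index):
    """Random strategy: 'bullish' (+1) or 'bearish' (-1) with equal probability."""
    return random.choice([+1, -1])

def predict_mom(t, index):
    """Momentum strategy: M(t) = I(t) - I(t - tau_M); predict the same sign."""
    if t < TAU_M:
        return predict_rnd(t, index)          # not enough history yet
    return +1 if index[t] - index[t - TAU_M] > 0 else -1

def rsi(index, t, tau=TAU_RSI):
    """RSI(t) = 100 - 100/[1 + RS(t)], with RS(t) the ratio between the sum of
    the positive returns and the (absolute) sum of the negative returns over
    the last tau days before t."""
    segment = np.asarray(index[t - tau:t + 1], dtype=float)
    r = (segment[1:] - segment[:-1]) / segment[1:]
    gains, losses = r[r > 0].sum(), -r[r < 0].sum()
    return 100.0 if losses == 0 else 100.0 - 100.0 / (1.0 + gains / losses)

def predict_rsi(t, index):
    """Crude 'divergence' rule: if price and RSI moved in opposite directions
    over the last T_RSI days, bet on a trend reversal; otherwise follow the trend."""
    if t < 2 * TAU_RSI:
        return predict_rnd(t, index)
    price_trend = np.sign(index[t] - index[t - TAU_RSI])
    rsi_trend = np.sign(rsi(index, t) - rsi(index, t - TAU_RSI))
    if price_trend == 0 or rsi_trend == 0:
        return predict_rnd(t, index)
    return -int(price_trend) if price_trend != rsi_trend else int(price_trend)
```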
In Fig.4 we report a first comparison of the simulation results for our three strategies, applied to the FTSE UK series.
Fig. 4 Simulation results: the FTSE UK series is divided into an increasing number of trading windows of equal size, in order to simulate different time scales. In the first row the volatility of the index, calculated inside each window, is shown for comparison. In the other rows, the percentages of wins for the three strategies, averaged over 10 different runs inside each window, are reported. A 50% dashed line is also plotted as a reference. See text.
In particular, we test the performance of the strategies by dividing the whole series into a sequence of Nw trading windows of equal size Tw = T_UK/Nw (in days) and evaluating the number of wins for each strategy inside each window while the traders move along the series day by day, from t = 0 to t = T_UK. This procedure, varying Nw, allows us to explore the behavior of the various strategies at several time scales (ranging, approximately, from 6 months to 5 years). In the first row of Fig.4 we plot the volatility of the FTSE UK index, calculated for 4 increasing values of Nw: from left to right, we consider 3, 9, 18 and 30 windows with size Tw equal to, respectively, 1237, 412, 206 and 123 days. In the three rows below we plot, for the same 4 window configurations, the percentage of wins for the three strategies within each window, averaged over 10 different runs (in this first set of simulations such an average is meaningful only for the random strategy, since the other two strategies are completely deterministic once their characteristic parameters and the trading series are fixed).
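The window-based evaluation just described can be reproduced with a few lines of code, reusing the prediction functions sketched above; again this is an illustrative sketch, not the original simulation code.

```python
import numpy as np

def wins_per_window(index, predict, n_windows, n_runs=10):
    """Split the series into n_windows trading windows of equal size and return,
    for each window, the percentage of correct daily predictions, averaged over
    n_runs runs (the average only matters for strategies with a random component)."""
    index = np.asarray(index, dtype=float)
    size = len(index) // n_windows
    percentages = []
    for w in range(n_windows):
        start, stop = w * size, (w + 1) * size
        runs = []
        for _ in range(n_runs):
            correct = sum(predict(t, index) == np.sign(index[t + 1] - index[t])
                          for t in range(start, stop - 1))
            runs.append(100.0 * correct / (stop - 1 - start))
        percentages.append(sum(runs) / n_runs)
    return percentages
```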
Differences between the Random strategy (RND, second row) and the two standard trading strategies (MOM and RSI, third and fourth rows) are evident.
Fig. 5 Panel (a): percentages of wins for the three strategies (magnified on the y-axis in order to better appreciate the comparison), averaged over all the windows in each of 10 different subdivisions of the FTSE UK series with an increasing number of windows (reported on the x-axis). Panel (b): the corresponding total standard deviations for the same window configurations and for the three strategies. See text.
Actually, at any time scale (but in particular for large values of Nw), the RND strategy appears much less fluctuating (i.e. less risky) than the others. Furthermore, the MOM and RSI performances seem to be slightly worse than the RND one at the beginning and at the end of the whole FTSE UK series, i.e. when the market behavior is more intermittent, as shown by the correspondingly higher volatility.
In Fig.5 we can better appreciate this quite surprising result by observing, in panel (a), the percentage of wins for the three strategies averaged over all the windows in each of several configurations with different Nw (ranging from 3 to 30 in steps of 3) and, in panel (b), the corresponding standard deviations. From the first histogram it appears that the average gains of the three strategies are comparable and restricted to a narrow band just below 50% of wins, with a slight advantage of the RND one (which, however, could depend on the trading series chosen for the analysis). At the same time, the second histogram confirms the higher stability of the random strategy with respect to the other ones. The fact that none of the three strategies overcomes the 50% threshold could seem paradoxical, but we stress that this is true only when averaging over the whole FTSE UK series, whereas, of course, they can exceed that threshold within single trading windows, as also visible in Fig.4. These findings seem to suggest that, as also observed by Taleb [16], the success of a trading strategy on a small time scale probably depends much more on luck than on the real effectiveness of the adopted algorithm, since on a large time scale its performance is comparable with (or, as in this case, even worse than) a random one.
At this point, following the results surveyed in the previous section, one may suspect that the introduction of some randomness into the standard, otherwise deterministic, trading strategies (MOM and RSI) could play a beneficial role. In the next figures we show that this is indeed the case.
In Fig.6 we present plots similar to those in Fig.4, but with an increasing percentage PRND of random predictions mixed with the two standard trading strategies, i.e. 20% in panel (a), 50% in panel (b) and 70% in panel (c). It immediately appears that the introduction of even a relatively small quantity of 'noise' (i.e. of random choices) into the MOM and RSI strategies improves their performance, both enhancing the average number of wins per window and stabilizing its fluctuations (i.e. reducing the trading risk) in each configuration (different columns). In this case, of course, the average over 10 runs performed inside each window (as in Fig.6) makes sense for all three strategies (notice that we repeat here all the calculations also for the RND strategy, thus reinforcing the previous results). We summarize these results in Fig.7, where we report a synthesis of both the averages and the standard deviations calculated over all the trading windows for the same configurations shown in Fig.6. It is evident that already for PRND = 50% the average gain of the MOM and RSI strategies becomes comparable with that of the RND strategy (left column), as do the corresponding fluctuations (right column). This further supports the analogy with the results found for the social systems presented in the first section, where the beneficial role of randomness could be appreciated even in moderate doses: the same seems to happen also in financial trading, at least for the two standard strategies we considered in this paper.
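The mixed strategies of Figs. 6 and 7 can be obtained, in the spirit of the text, by simply replacing each deterministic call with a purely random one with probability PRND; the wrapper below is again our own sketch.

```python
import random

def mix_with_random(predict, p_rnd):
    """Return a new prediction rule that follows the given deterministic strategy
    with probability 1 - p_rnd and makes a purely random call with probability
    p_rnd (p_rnd = 0.2, 0.5, 0.7 in Fig. 6)."""
    def mixed(t, index):
        if random.random() < p_rnd:
            return random.choice([+1, -1])
        return predict(t, index)
    return mixed

# Example: momentum strategy with 50% of random predictions
# mom_50 = mix_with_random(predict_mom, 0.5)
```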
The rationale behind the advantages of adopting some kind of random strategy for trading in financial markets, as suggested some years ago by Wiseman's experiments and now corroborated by the results of our simulations, is twofold. On one hand, the intrinsically turbulent nature of financial markets makes any long-term prediction about their behavior very difficult with the instruments of standard financial analysis, whose mathematical models are often based on unrealistic assumptions [12]. Such assumptions usually lead traders to underestimate both the risks they face and the role of chance in the possible success of their strategies, at least until the next big market crash suddenly comes to reset their capital [16]. In this respect, the effectiveness of random strategies is probably related to their stronger agreement with the turbulent and erratic essence of the financial markets.
Fig. 6 Volatility and average percentage of wins (inside each window and over 10 runs) for the three traders, as in Fig.4, but with an increasing quantity of randomness mixed with the MOM and RSI strategies. Panel (a): PRND = 20%; Panel (b): PRND = 50%; Panel (c): PRND = 70%. See text.
Fig. 7 Average percentage of wins (left column) with the corresponding standard deviations (right column) calculated over all the windows for the same configurations shown in Fig.6 and for the same three values of PRND.
On the other hand, last but not least, random strategies are also very cheap to implement: by following them, anyone can invest in the stock market on his/her own, reducing the effort devoted to gathering expensive information and without resorting to costly financial consultants or complicated trading rules.
4 Conclusions
In this paper we have explored the beneficial role of random strategies in social and financial systems. We presented a short review of recent results obtained in the managerial and political fields, and then we focused our attention on financial markets. In particular, we numerically simulated the performance of three trading strategies (one completely random and two chosen among the most popular ones adopted by traders) applied to the FTSE UK index series, chosen here as a case study, in order to compare their predictive capability. Our results clearly indicate that (i) the standard strategies, with their algorithms based on the past history of the index, do not perform better than the purely random one, which, on the other hand, is also much less risky, and (ii) the introduction of some degree of randomness into those same strategies significantly improves their performance. This means that random strategies also offer, in the financial field, a better and costless alternative. Of course, one should investigate these findings on different stock price index series and consider a broader set of strategies in order to test the robustness and generality of these results. A study in this direction is in progress [34].
References
1. K. Binder, D.W. Heermann, Monte Carlo Simulation in Statistical Physics: An
Introduction, Springer-Verlag, Berlin (1988)
2. R. Benzi, G. Parisi, A. Sutera, A. Vulpiani, Tellus 34, 10 (1982)
3. L. Gammaitoni, P. Hanggi, P. Jung, F. Marchesoni, Reviews of Modern Physics,
70, 1 (1998)
4. Mantegna R. and Spagnolo B., Phys. Rev. Lett. 76 (1996) 563
5. Caruso F., Huelga S.F., Plenio M.B., Phys. Rev. Lett. 105 (2010) 190501
6. Van den Broeck C., Parrondo J. M. R. and Toral R., Physical Review Letters,
vol. 73 p. 3395 (1994)
7. Helbing D., Quantitative sociodynamics (Kluwer, Dordrecht, 1995); Helbing D.,
Social Self-Organization, Springer (2012)
8. Castellano C., Fortunato S., Loreto V., Reviews of Modern Physics, 81 (2009).
9. Pluchino A., Latora V., Rapisarda A., Int. Jour. Mod. Phys. C vol.16, no.4
(2005)
10. Pluchino A., Latora V., Rapisarda A., Eur. Phys. J. B 50 (2006) 169
11. Buchanan M., The social atom, (Bloomsbury, 2008)
12. Mandelbrot B.B., The variation of certain speculative prices. Journal of Business, Vol. 36, pp. 394-419 (1963).
13. Mandelbrot B.B., Fractals and Scaling in Finance. Springer, New York (1997).
14. Mantegna R.N., Stanley H.E., Introduction to Econophysics: Correlations and
Complexity in Finance. Cambridge University Press, Cambridge (1999).
15. J.B. Satinover and D. Sornette, Illusion of control in Time-Horizon Minority and Parrondo Games, Eur. Phys. J. B 60, 369-384 (2007); J.B. Satinover and D. Sornette, Illusory versus genuine control in agent-based games, Eur. Phys. J. B 67, 357-367 (2009)
16. Taleb N.N., Fooled by Randomness: The Hidden Role of Chance in the Markets
and in Life, Random House, NY (2005); Taleb N.N., The Black Swan: The Impact
of the Highly Improbable, Random House NY (2007)
17. L.J. Peter, R. Hull, The Peter Principle: Why Things Always Go Wrong,
William Morrow and Company, New York, 1969.
18. Pluchino A., Rapisarda A. and Garofalo C., Physica A, 389, 467 (2010). See
also http://oldweb.ct.infn.it/cactus/peter-links.html
19. Pluchino A., Rapisarda A. and Garofalo C., Physica A, 390 3496 (2011)
20. Pluchino A., Garofalo C., Rapisarda A., Spagano S., Caserta M., Physica A
390, 3944 (2011). See also http://www.pluchino.it/Parliament.html
21. Arrow K.J., and Nerlove M., Econometrica, Vol. 26, pp. 297-305 (1958).
22. Friedman M., A Theory of the Consumption Function. Princeton University
Press, Princeton, N.J. (1956).
23. Friedman M., The Role of Monetary Policy, The American Economic Review, pp. 1-17 (1968).
24. Phelps E., Phillips Curve Expectations of Inflation, and Output Unemployment
Over Time, Economica, Vol. 34, no. 135, pp. 254-281, (1967).
25. Cagan P. The Monetary Dynamics of Hyperinflation. In Friedman M., (ed.).
Studies in the Quantity Theory of Money. University of Chicago Press, Chicago
(1956).
17
26. Muth J.F., Rational Expectation and the Theory of Price Movements. Econo-
metrica, Vol. 29, pp. 315-335 (1961).
27. Lucas R.E., Expectations and the Neutrality of Money. Journal of Economic
Theory, Vol. 4, pp. 103-124 (1972).
28. Sargent T.J., and Wallace N. Rational Expectations, the Optimal Monetary
Instrument, and the Optimal Money Supply Rule. Journal of Political Economy,
Vol. 83, no. 2, pp. 241-254 (1975).
29. Wilder J.W., New concepts in technical trading systems. Trend Research,
Greensboro, N.C., USA (1978)
30. Murphy J.J., Technical Analysis of the Financial Markets: A Comprehensive
Guide to Trading Methods and Applications, New York Institute of Finance (1999)
31. The paper was quoted by several blogs and specialized newspapers, among which the MIT blog, the New York Times and the Financial Times, and it was also awarded the 2010 Ig Nobel prize for "Management". See also http://www.pluchino.it/ignobel.html
32. Cipolla C.M., The Basic Laws of Human Stupidity. The Mad Millers (1976)
33. Wiseman, R., Quirkology. Pan Macmillan, London, UK (2007)
34. Biondo A.E., Pluchino A., Rapisarda A., in progress.
Rank-size plots, also called Zipf plots, have a role to play in representing statistical data. The method is somewhat peculiar, but throws light on one aspect of the notions of concentration. This chapter’s first goals are to define those plots and show that they are of two kinds. Some are simply an analytic restatement of standard tail distributions but other cases stand by themselves. For example, in the context of word frequencies in natural discourse, rank-size plots provide the most natural and most direct way of expressing scaling. Of greatest interest are the rank-size plots that are rectilinear in log-log coordinates. In most cases, this rectilinearity is shown to simply rephrase an underlying scaling distribution, by exchanging its coordinate axes. This rephrasing would hardly seem to deserve attention, but continually proves its attractiveness. Unfortunately, it is all too often misinterpreted and viewed as significant beyond the scaling distribution drawn in the usual axes. These are negative but strong reasons why rank-size plots deserve to be discussed in some detail. They throw fresh light on the meaning and the pitfalls of infinite expectation, and occasionally help understand upper and lower cutoffs to scaling.