arXiv:1102.0604v1 [physics.bio-ph] 3 Feb 2011
The conundrum of functional brain networks: small-world efficiency or fractal modularity
Lazaros K. Gallos1, Hernán A. Makse1, Mariano Sigman2
1Levich Institute and Physics Department,
City College of New York, New York, New York 10031, USA
2Integrative Neuroscience Laboratory, Physics Department,
FCEyN, Universidad de Buenos Aires, Buenos Aires, Argentina
(Dated: February 4, 2011)
The human brain is organized in functional modules. Such an organization poses a conundrum:
modules ought to be sufficiently independent to guarantee functional specialization and sufficiently
connected to bind multiple processors for efficient information transfer. It is commonly accepted
that small-world architecture may solve this problem. However, there is intrinsic tension between
shortcuts generating small-worlds and the persistence of modules. Here we provide a solution to
this puzzle. We show that the functional brain network formed by percolation of strong links is
highly modular. Contrary to the common view, modules are self-similar and therefore are very far
from being small-world. Incorporating the weak ties to the network converts it into a small-world
preserving an underlying backbone of well-defined modules. Weak ties are organized precisely
as predicted by a theory that maximizes information transfer with minimal wiring cost. This trade-
off architecture is reminiscent of the “strength of weak ties”, a crucial concept in social networks,
and provides a natural solution to the puzzle of efficient information flow in the highly modular
structure of the brain.
One of the main findings in neuroscience is the modular organization of the brain, which
in turn implies the parallel nature of brain computations. For example, in the visual
modality, more than thirty visual areas simultaneously analyze distinct features of the vi-
sual scene: motion, color, orientation, space, form, luminance and contrast, among others
[2]. These features, as well as information from different sensory modalities, have to be
integrated, as one of the main aspects of perception is its unitary nature [1, 3].
This leads to a basic conundrum of brain networks: modular processors have to be
sufficiently isolated to achieve independent computations, but also globally connected to be
integrated into coherent functions. A widely accepted view is that small-world networks
confer a capability for both specialized processing and integrated processing over the entire
network since they combine high local clustering and short path length [5–7]. This view has
been fueled by the systematic finding of small-world topology in a wide range of human brain
networks derived from functional [8–10], structural [11] and diffusion tensor [12] MRI. Small-
world topology has also been identified at the cellular-network scale in functional cortical
neuronal circuits in mammals [13, 14] and even in the nervous system of the nematode
Caenorhabditis elegans, so far the only nervous system to be comprehensively mapped at a cellular
level. Moreover, the small-world property seems to be relevant for brain function, since it is
affected by disease [15], by normal aging, and by pharmacological blockade of dopamine.
Despite this common belief, and systematic experimental observations, traditional models
of small-world networks cannot fully capture the coexistence of highly modular structure
with broad global integration. Local clustering and modularity are independent network
features. The clustering coefficient, the typical measure of clustering, is a purely local
quantity which can be assessed by inspecting the immediate neighborhood of a node. On
the contrary, modularity is a global property of the network, determined by the existence of
strongly connected groups of nodes that are only loosely connected to the rest of the network.
In principle, modularity cannot be inferred from local clustering and vice versa. In fact, it
is easy to construct modular and unclustered networks or, reciprocally, clustered networks
without modules. More importantly, small-world topology is typically incompatible with
strong modularity. While a clustered network preserves its clustering when a small
fraction of shortcuts is added (converting it into a small-world network), the persistence of
modules is not equally robust, and the shrinking of the network diameter may quickly
destroy the modules.
Hence, the concept of a small-world network is not adequate by itself to explain the
modular and integrative features of brain networks. We propose that a solution
to modularity and broad integration can be achieved by a network comprised of two different
layers: a layer formed by strong links with a highly modular, non-small-world topology, and
an underlying network of weak ties which establish shortcuts between modules, converting
it into a non-structured (non-modular) small-world network. At low connectivity thresholds,
when the network is fully connected, weak connections confer small-world properties in
agreement with most previous observations.
This proposal is inspired by a fundamental notion of sociology termed “the strength
of weak ties” [17, 18]: according to this theory, strong ties (close friends) clump together
forming modules. An acquaintance (weak tie) becomes a crucial bridge (a shortcut) between
the two densely knit clumps (modules) of close friends.
Interestingly, this idea also emerges in neuronal circuits [13], where stronger connections
tend to be more clustered than weaker ones, a structure referred to as a skeleton of stronger
connections in a sea of weaker ones. This theme also emerges in theoretical models of
large-scale cognitive architecture. Integration of information across modules is referred to
as the binding problem in psychology [3]. Several theories have suggested mechanisms based
on dynamic binding [4, 19] or on a workspace system [1, 20]. For instance, the workspace
model [1, 20] proposes that a flexible routing system with dynamic and comparatively weaker
connections transiently connects modules with very strong connections carved by long-term learning.
Here we set out to investigate whether, in effect, brain networks conform to a two-layer
structure determined by the scale and strength of connections.
Network analysis.— We capitalize on a well-known dual-task paradigm, the psycho-
logical refractory period [21], in which stimuli from different sensory modalities (visual and
auditory) have to be routed to different motor effectors (in our experiment the motor ef-
fectors are the left and right hand, see Section I). The temporal gap between the auditory
and visual stimuli varied in four different conditions of 0, 300, 900 and 1200 ms. A total
of 16 subjects had to respond with the right hand to the visual stimulus and with the left
hand to the auditory stimulus. The sequence of activated regions which unfolds during the
execution of the task has been reported in a previous manuscript [22]. Here we investigate
how this broad activated region organizes into a network which may achieve both modularity and integration.
Our network analysis relies on time-resolved fMRI based on the phase signal. We
first compute the phase of the BOLD-fMRI response on each trial, for each subject, and each
voxel. We then determine the correlation matrix 0 ≤ cij ≤ 1 between the i-th and
j-th voxels, measuring the phase correlation between the corresponding pair of voxels for each
individual subject and condition (see Section II). Here we do not explore the differences
in networks between different conditions. Rather, we consider them as independent exper-
iments, generating a total of 64 different networks, one for each subject and each temporal-gap condition.
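The phase-based correlation measure is detailed in Section II; as a rough illustration, a phase-coupling matrix with entries in [0, 1] can be computed from BOLD time series via the Hilbert transform. This sketch uses the phase-locking value as the coupling measure, which is our assumption; the paper's exact estimator may differ.

```python
import numpy as np
from scipy.signal import hilbert

def phase_coupling_matrix(bold):
    """bold: array of shape (n_voxels, n_timepoints) of BOLD time series.
    Returns an (n_voxels, n_voxels) matrix with entries in [0, 1]."""
    # Instantaneous phase of each voxel's signal via the analytic signal.
    phases = np.angle(hilbert(bold - bold.mean(axis=1, keepdims=True), axis=1))
    # Phase-locking value: magnitude of the mean phase-difference phasor.
    z = np.exp(1j * phases)                     # unit phasors, shape (N, T)
    c = np.abs(z @ z.conj().T) / bold.shape[1]  # |mean_t exp(i(phi_i - phi_j))|
    np.fill_diagonal(c, 1.0)
    return c
```

Two voxels with a fixed phase lag score near 1; a voxel of independent noise scores near 0.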
The connectivity between voxels can be naturally mapped to a percolation problem de-
fined in the N × N space of interactions cij. We consider each voxel (comprising a
volume of 1 mm³) as a node in the network. A link or tie between nodes i and j exists if
cij > p for a given threshold p (see Section III).
In general, the size of the largest component of connected links in a percolation process
remains very small for small p and increases abruptly through a critical phase transition at
pc, in which a single largest connected component spans the whole system. A single
incipient connected component of nodes is expected to appear if the links in the network are
occupied at random without correlations, i.e. when the probability to find an active bond is
uncorrelated with the activity of all the other bonds in the network. When this percolation
analysis is applied to the functional brain network a more complex picture emerges revealing
non-trivial correlations in brain activity.
For each participant, we calculate the size of the largest connected component as we lower
the percolation threshold from p = 1 to 0. We find that, for all participants and stimuli in
this study, the size of the largest connected component increases progressively with a series
of sharp jumps (Fig. 1A). This is indicative of a multiplicity of percolating components which
subsequently merge as p decreases rather than a single spanning component emerging at a
single critical p, as expected for uncorrelated percolation. Each of these jumps defines a single
percolation transition focused on a group of highly correlated voxels, constituting
a well-defined module, as shown in Fig. 1B for a typical individual.
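The percolation curve of Fig. 1A can be reproduced from any correlation matrix in a few lines; a minimal sketch (function and variable names are ours, not the paper's):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def largest_component_fraction(c, thresholds):
    """For each threshold p, keep links with c_ij > p and return the
    fraction of nodes in the largest connected component."""
    n = c.shape[0]
    out = []
    for p in thresholds:
        # Upper triangle only: symmetric matrix, no self-links.
        adj = csr_matrix(np.triu(c > p, k=1))
        _, labels = connected_components(adj, directed=False)
        out.append(np.bincount(labels).max() / n)
    return np.array(out)
```

Sweeping thresholds from 1 down to 0 and plotting the output reproduces the staircase of jumps; each jump marks a module merging into the largest component.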
Therefore, to identify the modules, we locate the percolation threshold around the first
jump in the size of the largest connected component when p is lowered from p = 1 towards
p = 0 that yields three modules of at least 1,000 voxels each (pc = 0.979 in the example of
Fig. 1B). This process results in a total of 192 modules among all participants and stimuli
which are pooled together for the present analysis. An example of a module in the network
space is shown in the right panel of Fig. 1B, while the left panel in Fig. 1B shows the same
module in real space. The topography of these modules reflects coherent patterns across
different subjects as shown in Section V.
Scaling and modular organization.— To determine the structure of the modules we
investigate the scaling properties of the mass of each module (the total number of voxels, Nc) as
a function of three length scales: (i) the maximum network diameter, ℓmax, (ii) the average
network distance between two nodes, ⟨ℓ⟩, and (iii) the maximum Euclidean distance, rmax, between
two directly connected nodes of the module. The distance in network space,
or chemical distance, ℓ, is defined as the number of links along the shortest path between
two nodes in the module. The maximum network diameter is the largest shortest path in
the network representation.
Figure 2 (central panel) indicates power-law scaling for these quantities defining the
fractal dimension of the modules. For instance:

Nc(rmax) ∼ (rmax)^df ,    (1)

defines the Euclidean Hausdorff fractal dimension, df = 2.1 ± 0.2. The scaling with ℓmax
and ⟨ℓ⟩ is consistent with Eq. (1), as seen in Fig. 2. The fractal dimension df quantifies how
densely the area is covered by a specific module.
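The fractal dimension in Eq. (1) is the slope of Nc versus rmax on log-log axes. A minimal least-squares estimate of such an exponent (a generic sketch, not necessarily the fitting procedure used for Fig. 2):

```python
import numpy as np

def power_law_exponent(x, y):
    """Slope of log y versus log x, i.e. the exponent d in y ~ x^d."""
    slope, _intercept = np.polyfit(np.log(x), np.log(y), 1)
    return slope
```

Applied to the binned (rmax, Nc) pairs of the modules, this returns the estimate of df.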
Equation (1) indicates that all modules, taken globally, have a self-similar structure.
We next investigate whether the internal structure of each module is also scale-invariant.
This can be investigated applying renormalization group (RG) analysis for complex networks
[16, 24, 25]. This technique allows one to observe the network at different scales transforming
it into successively simpler copies of itself, which can be used to detect characteristics which
are difficult to identify at a specific scale of observation. Here we use this technique to
characterize the sub-modular structure within each module obtained from the percolation analysis.
Each module identified by percolation is first tiled with the minimum possible num-
ber of boxes or sub-modules, NB, of a given chemical distance ℓB. The requirement that
the number of boxes should be minimized poses an optimization problem which can be
solved using the box-covering algorithm explained in [24, 26] (see Section IV and Fig. 3A
explaining the Maximum Excluded Mass Burning algorithm, MEMB, available at
http://lev.ccny.cuny.edu/~hmakse/soft_data.html). The resulting boxes are char-
acterized by the proximity between all their nodes and the minimization of the links outside
the boxes. Thus, the box-covering algorithm detects boxes/submodules that also tend to maximize modularity.
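Finding the minimum number of boxes is NP-hard, which is why MEMB is used; as a self-contained illustration of the idea, here is a simple greedy box covering (our simplification, not the MEMB optimum) that tiles a network with boxes of chemical diameter smaller than ℓB:

```python
import networkx as nx

def greedy_box_covering(G, lB):
    """Tile G with boxes in which every pair of nodes is at chemical
    distance < lB. A simple greedy approximation, not the MEMB optimum."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    uncovered = set(G.nodes())
    boxes = []
    while uncovered:
        seed = min(uncovered)               # deterministic seed choice
        box = {seed}
        for v in sorted(uncovered - {seed}):
            # v joins the box only if it is close to every current member.
            if all(dist[v].get(u, lB) < lB for u in box):
                box.add(v)
        boxes.append(box)
        uncovered -= box
    return boxes
```

Counting the boxes for a range of ℓB values and fitting the slope on log-log axes yields the box dimension dB of Eq. (2).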
Different values of the box diameter ℓB yield different partitions of the percolation
modules into submodules of varying sizes (Fig. 3A). The right panel in Fig. 2 shows in
different colors the identified submodules of size ℓB = 4 in a typical percolation module.
We apply the box-covering algorithm to perform an RG analysis on each of the percolation
modules. Figure 3B shows the scaling of NB versus ℓB averaged over all the modules for all
individuals and stimuli. This property is quantified in the power-law relation:
NB(ℓB) ∼ ℓB^−dB ,    (2)
where dB is the box fractal dimension [16, 24, 25]. The exponent dB characterizes the self-
similarity between different parts of the module. Finite and small values of dB show that the
network has fractal features in topological space, where the covering boxes retain their
connectivity scheme at different scales, and smaller-scale boxes behave in a similar way
to the original network. The resulting dB averaged over all the modules is dB = 1.9 ± 0.1.
This relatively small value means that the modules are not very dense, resembling more
a tree-like structure enriched with small-scale features such as loops and dangling ends,
while at large scales it presents a more linear form. Combining both results, we find that
globally the ensemble of brain modules forms a self-similar structure characterized by Eq.
(1). Locally, each module is in turn hierarchically formed by constituting submodules with
a self-similar relation as indicated by Eq. (2). These submodules have a large modularity
as indicated by scaling analysis in Section VI.
A surprising consequence of Eqs. (1) and (2) is that the network at high p-values (i.e.
determined by strong links) lacks the small-world logarithmic scaling ⟨ℓ⟩ ∼ log Nc believed
to be necessary for efficient information transfer in the network [5, 7]. Indeed, a fractal
network imposes much larger distances than those appearing in small worlds: a distance
ℓmax ∼ 100 observed in Fig. 2 would require an enormous small-world network of the order of
Nc ∼ 10^100, rather than the Nc ∼ 10^4 observed for fractal networks. The structural differences
between a modular fractal network and a small-world network are starkly revealed when we
rewire a typical percolation module by randomly reconnecting links while keeping
the degree of each node intact. The topology of the rewired module is depicted in the left
panel of Fig. 2, which should be compared to the original module in the right panel. The
rewired network has become small-world, with an apparent loss of the modular structure.
Indeed, the rewired networks have very small average distances ⟨ℓ⟩ ≈ 3 with the concomitant
exponential behaviour, Nc = exp(⟨ℓ⟩/ℓ0), with a very small characteristic size ℓ0 = 1/7, as
shown in the central panel of Fig. 2. The crux of the matter is how functional modules in
the brain can be connected closely without collapsing into a cohesive small-world structure.
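The degree-preserving rewiring behind the comparison in Fig. 2 can be sketched with networkx's connectivity-preserving double-edge swaps (our choice of routine; the paper does not specify the implementation):

```python
import networkx as nx

def rewire_preserving_degree(G, nswap, seed=None):
    """Randomly rewire links while keeping each node's degree intact,
    using double-edge swaps that preserve connectivity."""
    R = G.copy()
    nx.connected_double_edge_swap(R, nswap=nswap, seed=seed)
    return R
```

On a large-diameter lattice the rewired graph's average distance collapses toward log N, mirroring the fractal-to-small-world contrast between the right and left panels of Fig. 2.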
Short-cut wiring is optimal for efficient flow.— When we extend the percolation
analysis by lowering the threshold p further, weaker ties are incorporated into the network, con-
necting the self-similar modules through short-cuts. A typical scenario is depicted in Fig.
4A, showing three percolation modules identified at the first jump in Fig. 1B at p = 0.98.
At this tie strength, the modules are separated and show the submodular fractal structure in-
dicated by the colored boxes. When we lower p to 0.975 (Fig. 4B), modules become connected
with each other and a global incipient component starts to appear, linking the whole brain.
A second, global percolation-like transition appears in the system, identified when the mass
of the largest component occupies half of the activated area (Fig. 4C). For different indi-
viduals, global percolation occurs in the interval pc = [0.945, 0.96], as indicated in the inset
of Fig. 1A.
Our next aim is to characterize this two-layer network formed by an underlying fractal
structure determined by strong links, shortcutted by weak ties. The spatial distribution
of shortcuts (the weak links) will determine topological properties of the network. When
the cumulative probability distribution to find a Euclidean distance between two connected
nodes, rij, larger than r follows a power-law:
P(rij > r) ∼ r^−(α−1),    (3)
statistical physics makes precise predictions about optimization schemes for global function
of the network as a function of the relation between the shortcut exponent α and the dimen-
sion of the network, df [16, 27, 28]. Specifically, there are three critical values of α, as shown
schematically in Fig. 4D. If α is too large, then shortcuts will not be sufficiently long and the
network will behave as a fractal, similarly to the underlying structure. Below a critical value
determined by the relation α < 2df, shortcuts are sufficient to convert the network into a
small world. Within this regime there are two significant optimization values:
(i) Wiring cost minimization with full routing information. This assumes a network of
dimension df, over which short-cuts are added to optimize communication, with a wiring
cost constraint proportional to the total shortcut length. It is also assumed that the coordinates
of the network are known, i.e., it is the shortest path that is being minimized. Under
these circumstances it is found that the optimal distribution of shortcuts corresponds to a
power law, Eq. (3), with α = df + 1 [28]. This precise scaling is found in the US airport
network, where a cost limitation applies to maximize profits.
(ii) Decentralized searches with only local information. This corresponds to the classic
Milgram “small-world experiment” of decentralized search in social networks [27], where
a person (a node) has knowledge of local links and of the final destination, but not of the
intermediate routes. Under these circumstances, which also apply to routing packets on the
Internet, the problem corresponds to a greedy search, rather than to optimization of the
minimal path. The optimal relation is obtained when α = df [16, 27].
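These regimes can be explored numerically by drawing shortcut lengths from the distribution of Eq. (3) via inverse-CDF sampling, a standard construction (the lower cutoff r0 is our assumption):

```python
import numpy as np

def sample_shortcut_lengths(n, alpha, r0=1.0, seed=None):
    """Draw n lengths with P(r > x) = (x / r0)^-(alpha - 1) for x >= r0,
    by inverting the cumulative distribution of Eq. (3)."""
    u = np.random.default_rng(seed).random(n)
    return r0 * u ** (-1.0 / (alpha - 1.0))
```

Setting u equal to the cumulative tail probability and solving for x gives the formula in the return statement; varying alpha around df and 2df then lets one probe the three regimes of Fig. 4D in simulation.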
Hence, the analysis of the distribution of shortcuts provides information both on the
topology of the resulting network and on which transport procedure is optimized. To inves-
tigate how short-cuts are distributed, we analyze the cumulative probability distribution of
the Euclidean length between two nodes i, j connected by weak ties. This distribution
reveals a well-defined power-law behavior, Eq. (3), with an exponent α = 3.1 ± 0.1 (see Fig.
4E). Given the value obtained in Eq. (1), df = 2.1, this implies that the network composed
of strong and weak links is small-world (α < 2df) and optimizes wiring cost assuming full
knowledge of routing information (α = df + 1).
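The exponent α − 1 of the cumulative distribution can be estimated directly from the measured shortcut lengths; one standard choice (our addition, not necessarily the fit used for Fig. 4E) is the Hill maximum-likelihood estimator:

```python
import numpy as np

def hill_exponent(r, r_min=1.0):
    """Maximum-likelihood (Hill) estimate of the exponent a in
    P(r > x) ~ x^-a, using only distances above the cutoff r_min."""
    r = np.asarray(r, dtype=float)
    tail = r[r >= r_min]
    return tail.size / np.sum(np.log(tail / r_min))
```

Unlike a least-squares fit to the binned histogram, the Hill estimator uses every data point in the tail and is less sensitive to the choice of binning.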
The existence of modular organization of strong ties in a sea of weak ties is reminiscent of
the structure found to bind dissimilar communities in social networks. Granovetter’s work
on social networks [17, 18] proposes that weak ties bind well-defined social
groups into a large-scale social network. Such a two-scale structure has a large impact on
the diffusion and influence of information across the entire social structure. Our observation
of this two-layer organization in brain networks suggests that it may be a ubiquitous natural
solution to the puzzle of information flow in highly modular structures.
Previous studies have found that wiring of neuronal networks at the cellular level is close
to optimal [13, 30]. Specifically, it is found that long-range connections do not minimize
wiring but achieve network benefits. In agreement with this observation, at the mesoscopic
scale explored here, we find an optimization which reduces wiring cost while maintaining
network proximity. An intriguing element of our observation is that this minimization as-
sumes that broadcasting and routing information are known to each node. How this may
be achieved, that is, what aspects of the neural code convey its own routing information, remains
an open question in neuroscience.
[1] S. Dehaene, L. Naccache, The cognitive neuroscience of consciousness (The MIT Press, 2001), pp. 1-37.
[2] D. J. Felleman, D. C. van Essen, Cereb. Cortex 1, 1 (1991).
[3] A. Treisman, Curr. Opin. Neurobiol. 6, 171 (1996).
[4] G. Tononi, O. Sporns, G. M. Edelman, Proc. Nat. Acad. Sci. USA 91, 5033 (1994).
[5] O. Sporns, D. R. Chialvo, M. Kaiser, C. C. Hilgetag, Trends Cognit. Sci. 8, 418 (2004).
[6] D. Watts, S. Strogatz, Nature 393, 440 (1998).
[7] D. S. Bassett, E. Bullmore, Neuroscientist 12, 512 (2006).
[8] S. Achard, R. Salvador, B. Whitcher, J. Suckling, E. Bullmore, J. Neurosci. 26, 63 (2006).
[9] S. Achard, E. Bullmore, PLoS Comput. Biol. 3, e17 (2007).
[10] V. M. Eguiluz, D. R. Chialvo, G. A. Cecchi, M. Baliki, A. V. Apkarian, Phys. Rev. Lett. 94, 018102 (2005).
[11] Y. He, Z. J. Chen, A. C. Evans, Cereb. Cortex 17, 2407 (2007).
[12] P. Hagmann, M. Kurant, X. Gigandet, P. Thiran, V. J. Wedeen, R. Meuli, J. P. Thiran, PLoS One 2, e597 (2007).
[13] S. Song, P. J. Sjostrom, M. Reigl, S. Nelson, D. D. Chklovskii, PLoS Biol. 3, e68 (2005).
[14] S. Yu, D. Huang, W. Singer, D. Nikolic, Cereb. Cortex 18, 2891 (2008).
[15] C. J. Stam, B. F. Jones, G. Nolte, M. Breakspear, P. Scheltens, Cereb. Cortex 17, 92 (2007).
[16] H. D. Rozenfeld, C. Song, H. A. Makse, Phys. Rev. Lett. 104, 025701 (2010).
[17] M. S. Granovetter, Am. J. Sociol. 78, 1360 (1973).
[18] J.-P. Onnela, J. Saramaki, J. Hyvonen, G. Szabo, D. Lazer, K. Kaski, J. Kertesz, A.-L. Barabasi, Proc. Nat. Acad. Sci. USA 104, 7332 (2007).
[19] G. Tononi, G. M. Edelman, Science 282, 1846 (1998).
[20] B. J. Baars, In the theater of consciousness: The workspace of the mind (Oxford University Press, USA, 1997).
[21] H. Pashler, Psychol. Bull. 116, 220 (1994).
[22] M. Sigman, S. Dehaene, J. Neurosci. 28, 7585 (2008).
[23] A. Bunde, S. Havlin, eds., Fractals and Disordered Systems, 2nd ed. (Springer-Verlag, 1996).
[24] C. Song, S. Havlin, H. A. Makse, Nature 433, 392 (2005).
[25] F. Radicchi, J. J. Ramasco, A. Barrat, S. Fortunato, Phys. Rev. Lett. 101, 148701 (2008).
[26] C. Song, L. K. Gallos, S. Havlin, H. A. Makse, J. Stat. Mech. P03006 (2007).
[27] J. Kleinberg, Nature 406, 845 (2000).
[28] G. Li, S. D. S. Reis, A. A. Moreira, S. Havlin, H. E. Stanley, J. S. Andrade Jr., Phys. Rev. Lett. 104, 018701 (2010).
[29] G. Bianconi, P. Pin, M. Marsili, Proc. Nat. Acad. Sci. USA 106, 11433 (2009).
[30] M. Kaiser, C. C. Hilgetag, PLoS Comput. Biol. 2, e95 (2006).
FIG. 1. Percolation Analysis. (A) Size of the largest connected component of nodes
(as measured by the fraction of the total system size) as a function of the percolation threshold
p. The main plot shows the size of the largest connected component for every one of the 16
subjects for a given set of the 4 conditions. The curves follow the general percolation shape
rising rapidly to 1 in a narrow range of p around 0.95, albeit with discrete jumps. The inset
presents a detail around p ≈ 0.95. (B) This panel shows a detail for a single individual. As
we lower p the size of the largest component increases in jumps when new modules emerge,
grow, and finally get absorbed by the largest component. We follow and plot the evolution
of the modules by plotting components with more than 1,000 voxels for a given p value. The
right lower panel shows a typical module in network representation. The same module is
shown embedded in real space in the left lower panel - this specific module projects to the
medial occipital cortex, see Section V for the spatial projection of all modules.
FIG. 2. Strong ties define fractal modules. The central panel shows the number
of voxels or mass of each module, Nc, as a function of different length scales in network
and real space. Each point represents a bin average over the modules for all individuals
and conditions. In this plot we use all the modules appearing at the first jump in Fig.
1A. The mass of the modules is plotted as a function of the maximum network diameter,
ℓmax, the average network path, ⟨ℓ⟩, and the Euclidean diameter of a module, rmax. The
last one yields the Hausdorff fractal dimension, df, Eq. (1). A typical percolation module
in network representation is shown in the right panel. The colors identify scale invariant
sub-modules in the network as found by the box-covering algorithm explained in Section IV.
The network has a very rich modular structure typical of fractal topologies. The fractal
module contains 4097 nodes. The large diameter of this network is visually apparent, with
an average chemical distance ⟨ℓ⟩ = 41.7, a large chemical diameter ℓmax = 139, and a Euclidean
diameter rmax = 136 mm. When the links of the network of each module are randomly
rewired, preserving the degree of each node, we find the exponential behavior Nc = exp[7⟨ℓ⟩]
characteristic of small-world networks, as shown in the central panel. The left panel
shows the topology of the rewired network where the modular structure disappears and
the network becomes a typical small-world structure characterized by very short average
distance between nodes.
FIG. 3. Fractal submodules in network space. (A) Detection of submodules and
fractal dimension inside the modules. We demonstrate the box-covering algorithm for a
schematic network, following the Maximum Excluded Mass Burning algorithm in [24, 26]
(see Section IV). We cover a network with boxes of size ℓB which are identified as sub-
modules to reveal the self-similar structure. (B) Scaling of the number of boxes NB needed
to cover the network of a module as a function of the box size ℓB (measured in
topological space), yielding the network fractal dimension dB.
FIG. 4. Weak ties are optimally distributed. (A) Three modules identified at the
first jump for the subject shown in Fig. 1B for p = 0.98. (B) When we lower the threshold
to p = 0.975, weak ties connect the modules. Blue lines represent the weak links with
distance longer than 10 mm and the light blue nodes are the nodes added from A. (C)
Real space representation of the modules connected by weak ties (blue lines) as the network
achieves the second global percolation where the largest component is half the total mass.
(D) Sketch of the different critical values of the shortcut exponent α in comparison with df.
(E) Cumulative probability distribution P(rij > r) of Euclidean distances rij between any
two voxels that are directly connected in the correlation network. The straight-line fit
yields an exponent α − 1 = 2.1 ± 0.1, indicating optimal information transfer with wiring-cost minimization.