A small world of weak ties provides optimal global integration of self-similar modules in functional brain networks.

Levich Institute and Physics Department, City College of New York, New York, NY 10031, USA.
Proceedings of the National Academy of Sciences, 02/2012; 109(8):2825-30. DOI: 10.1073/pnas.1106612109

ABSTRACT: The human brain is organized into functional modules. Such an organization presents a basic conundrum: modules ought to be sufficiently independent to guarantee functional specialization and sufficiently connected to bind multiple processors for efficient information transfer. It is commonly accepted that a small-world architecture of short paths and large local clustering may solve this problem. However, there is an intrinsic tension between the shortcuts that generate small worlds and the persistence of modularity, a global property unrelated to local clustering. Here, we present a possible solution to this puzzle. We first show that a modified percolation theory can define a set of hierarchically organized modules made of strong links in functional brain networks. These modules are "large-world" self-similar structures and are therefore far from being small-world. However, incorporating weaker ties into the network converts it into a small world while preserving an underlying backbone of well-defined modules. Remarkably, the weak ties are organized precisely as predicted by a theory that maximizes information transfer with minimal wiring cost. This trade-off architecture is reminiscent of the "strength of weak ties," a crucial concept in social networks. Such a design suggests a natural solution to the paradox of efficient information flow in the highly modular structure of the brain.
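The percolation step described in the abstract, retaining only the strongest functional links and reading off the resulting connected components as modules, can be sketched in a few lines. This is a minimal illustration on a toy weighted graph, not the authors' modified percolation theory; the edge list, weights, and threshold are invented for the example.

```python
from collections import defaultdict, deque

def strong_link_modules(weighted_edges, threshold):
    """Keep only links with weight >= threshold and return the
    connected components (modules) of the remaining strong-link graph."""
    adj = defaultdict(set)
    nodes = set()
    for u, v, w in weighted_edges:
        nodes.update((u, v))
        if w >= threshold:
            adj[u].add(v)
            adj[v].add(u)
    seen, modules = set(), []
    for start in nodes:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        seen.add(start)
        while queue:  # breadth-first search inside one module
            n = queue.popleft()
            comp.add(n)
            for m in adj[n]:
                if m not in seen:
                    seen.add(m)
                    queue.append(m)
        modules.append(comp)
    return modules

# Toy example: two tightly linked groups joined by a single weak tie.
edges = [
    ("a", "b", 0.9), ("b", "c", 0.8), ("a", "c", 0.85),
    ("d", "e", 0.9), ("e", "f", 0.8), ("d", "f", 0.85),
    ("c", "d", 0.2),  # weak tie bridging the two modules
]
print(sorted(len(m) for m in strong_link_modules(edges, 0.5)))  # → [3, 3]
print(len(strong_link_modules(edges, 0.1)))  # → 1 (the weak tie merges them)
```

Lowering the threshold admits the weak tie and merges the two modules into one component, which is the thresholding behavior the abstract exploits.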


  • Source
    ABSTRACT: Individual learning performance of cognitive function is related to functional connections within 'task-activated' regions where activity increases during the corresponding cognitive tasks. On the other hand, since every brain region is connected to other regions within brain-wide networks, learning is characterized by modulations in connectivity between networks with different functions. Therefore, we hypothesized that learning performance is determined by functional connections among intrinsic networks that include both task-activated and less-activated networks. Subjects underwent resting-state functional MRI and a short period of training (80-90 min) in a working memory task on separate days. We calculated functional connectivity patterns of whole-brain intrinsic networks and examined whether a sparse linear regression model predicts a performance plateau from the individual patterns. The model resulted in highly accurate predictions (R(2) = 0.73, p = 0.003). Positive connections within task-activated networks, including the left fronto-parietal network, accounted for nearly half (48%) of the contribution ratio to the prediction. Moreover, consistent with our hypothesis, connections of the task-activated networks with less-activated networks showed a comparable contribution (44%). Our findings suggest that learning performance is potentially constrained by system-level interactions within task-activated networks as well as those between task-activated and less-activated networks.
    Scientific Reports 01/2015; 5:7622. DOI: 10.1038/srep07622
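The prediction step, fitting a regression of individual performance on connectivity features, can be illustrated schematically. The sketch below uses ordinary least squares with a single hypothetical connectivity feature, not the authors' sparse linear regression over whole-brain patterns; all data values are invented.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b with a single predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

def r_squared(xs, ys, a, b):
    """Coefficient of determination of the fitted line."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Hypothetical data: one connectivity feature per subject vs. plateau score.
connectivity = [0.10, 0.25, 0.40, 0.55, 0.70]
plateau = [3.1, 4.0, 5.2, 5.9, 7.1]
a, b = fit_line(connectivity, plateau)
print(round(r_squared(connectivity, plateau, a, b), 2))
```

A real pipeline would use many connectivity features with a sparsity-inducing penalty and cross-validation; the point here is only the shape of the prediction problem.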
  • Source
    ABSTRACT: The fractality of complex networks is revealed through a renormalization procedure, which is implemented by the box-covering method. The open problem in box covering is finding the minimum number of boxes needed to cover the whole network. Here, we introduce a differential evolution box-covering algorithm based on a greedy graph-coloring approach. We apply our algorithm to benchmark networks with different structures: an E. coli metabolic network, which has a low clustering coefficient and high modularity; a clustered scale-free network, which has a high clustering coefficient and low modularity; and several community networks (the Politics books network, the Dolphins network, and the American football games network), which have high clustering coefficients. Experimental results show that our algorithm outperforms state-of-the-art algorithms in most cases, with particularly significant improvements on clustered community networks.
    Evolutionary Computation (CEC), 2014 IEEE Congress on; 07/2014
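A minimal version of box covering by greedy graph coloring can be written directly from the definition: two nodes may share a box (color) only if their shortest-path distance is below the box size l_B. This sketch uses a deterministic node order on a toy path graph; published algorithms randomize the order, and the paper above adds a differential-evolution search on top.

```python
from collections import deque

def bfs_distances(adj, source):
    """Shortest-path (hop) distances from source via breadth-first search."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def greedy_box_covering(adj, l_b):
    """Greedy graph-coloring box covering: nodes at distance >= l_b
    may not share a color; the number of colors is the number of boxes."""
    dist = {u: bfs_distances(adj, u) for u in adj}
    color = {}
    for u in sorted(adj):  # deterministic order; real variants randomize it
        used = {color[v] for v in color
                if dist[u].get(v, float("inf")) >= l_b}
        c = 0
        while c in used:
            c += 1
        color[u] = c
    return len(set(color.values()))

# Toy path graph 0-1-2-3-4-5.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(greedy_box_covering(adj, 2))  # → 3 boxes: {0, 1}, {2, 3}, {4, 5}
```

With l_B large enough to span the whole path, a single box suffices; with l_B = 1 every node needs its own box. The scaling of box count with l_B is what diagnoses fractality.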
  • Source
    ABSTRACT: In this paper we present a generative model for protein contact networks. The soundness of the proposed model is investigated by focusing primarily on mesoscopic properties elaborated from the spectra of the graph Laplacian. To complement the analysis, we also study classical topological descriptors, such as shortest-path statistics and the important feature of modularity. Our experiments show that the proposed model yields a considerable improvement over two suitably chosen generative mechanisms, approximating real protein contact networks more closely in terms of diffusion properties elaborated from the Laplacian spectra. However, like the other models considered, it does not reproduce the shortest-path structure with sufficient accuracy. To compensate for this drawback, we designed a second step involving a targeted edge-reconfiguration process. The ensemble of reconfigured networks exhibits improvements that are statistically significant. As a byproduct of our study, we demonstrate that modularity, a well-known property of proteins, does not entirely explain the actual network architecture characterizing protein contact networks. In fact, we conclude that modularity, intended as a quantification of an underlying community structure, should be considered an emergent property of the structural organization of proteins. Interestingly, such a property is optimized in protein contact networks together with the feature of path efficiency.
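The spectral analysis above rests on the combinatorial graph Laplacian L = D - A (degree matrix minus adjacency matrix). A minimal sketch of constructing it from an adjacency dictionary follows; the toy graph is invented for illustration and is not one of the paper's protein contact networks.

```python
def laplacian(adj):
    """Combinatorial graph Laplacian L = D - A as a dense matrix,
    for an undirected graph given as an adjacency dict."""
    nodes = sorted(adj)
    index = {u: i for i, u in enumerate(nodes)}
    n = len(nodes)
    L = [[0] * n for _ in range(n)]
    for u in nodes:
        i = index[u]
        L[i][i] = len(adj[u])      # node degree on the diagonal
        for v in adj[u]:
            L[i][index[v]] = -1    # -1 for each incident edge
    return L

# Toy contact graph: a triangle (0, 1, 2) plus a pendant node 3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
L = laplacian(adj)
# Every row of a Laplacian sums to zero: the constant vector lies in its
# kernel, which is why the smallest eigenvalue is always 0.
print(all(sum(row) == 0 for row in L))  # → True
```

The eigenvalues of this matrix (computed in practice with a numerical library) give the diffusion properties the abstract refers to; the construction itself is the only part sketched here.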