Pedro A. M. Mediano’s research while affiliated with University of Cambridge and other places
Adaptive cognition relies on cooperation across anatomically distributed brain circuits. However, specialised neural systems are also in constant competition for limited processing resources. How does the brain's network architecture enable it to balance these cooperative and competitive tendencies? Here we use computational whole-brain modelling to examine the dynamical and computational relevance of cooperative and competitive interactions in the mammalian connectome. Across human, macaque, and mouse, we show that the architecture of the models that most faithfully reproduce brain activity consistently combines modular cooperative interactions with diffuse, long-range competitive interactions. The model with competitive interactions consistently outperforms the cooperative-only model, with an excellent fit to both spatial and dynamical properties of the living brain, which were not explicitly optimised but rather emerge spontaneously. Competitive interactions in the effective connectivity produce greater levels of synergistic information and local-global hierarchy, and lead to superior computational capacity when used for neuromorphic computing. Altogether, this work provides a mechanistic link between network architecture, dynamical properties, and computation in the mammalian brain.
A key feature of information theory is its universality, as it can be applied to study a broad variety of complex systems. However, many information-theoretic measures can vary significantly even across systems with similar properties, making normalisation techniques essential for allowing meaningful comparisons across datasets. Inspired by the framework of Partial Information Decomposition (PID), here we introduce Null Models for Information Theory (NuMIT), a null model-based non-linear normalisation procedure which improves upon standard entropy-based normalisation approaches and overcomes their limitations. We provide practical implementations of the technique for systems with different statistics, and showcase the method on synthetic models and on human neuroimaging data. Our results demonstrate that NuMIT provides a robust and reliable tool to characterise complex systems of interest, allowing cross-dataset comparisons and providing a meaningful significance test for PID analyses.
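The general logic of null-model normalisation can be sketched briefly. The Python snippet below is a generic illustration only (the function names and the shuffle-based null are hypothetical, not the authors' NuMIT implementation): an observed statistic is ranked against the distribution of the same statistic under a chosen null model.

```python
import numpy as np

def null_normalise(statistic, data, null_sampler, n_null=1000, seed=None):
    """Rank an observed statistic against samples from a null model.

    statistic:    function mapping a dataset to a scalar (e.g. a PID atom)
    null_sampler: function producing one surrogate dataset under the null
    Returns the fraction of null samples below the observed value, which
    serves both as a normalised score in [0, 1] and a one-sided p-value.
    """
    rng = np.random.default_rng(seed)
    observed = statistic(data)
    nulls = np.array([statistic(null_sampler(data, rng)) for _ in range(n_null)])
    return (nulls < observed).mean()

# Toy usage: normalise a correlation against column-shuffled surrogates
data = np.random.default_rng(0).normal(size=(500, 2))
stat = lambda d: np.corrcoef(d[:, 0], d[:, 1])[0, 1]
shuffle = lambda d, rng: np.column_stack([d[:, 0], rng.permutation(d[:, 1])])
print(null_normalise(stat, data, shuffle, n_null=200, seed=1))
```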
High-order phenomena play crucial roles in many systems of interest, but their analysis is often highly nontrivial. There is a rich literature providing a number of alternative information-theoretic quantities capturing high-order phenomena, but their interpretation and relationship with each other is not well understood. The lack of principles unifying these quantities obscures the choice of tools for enabling specific types of analysis. Here we show how an entropic conjugation provides a theoretically grounded principle to investigate the space of possible high-order quantities, clarifying the nature of existing metrics while revealing gaps in the literature. This leads us to identify novel notions of symmetry and skew-symmetry as key properties for guaranteeing a balanced account of high-order interdependencies and enabling broadly applicable analyses across physical systems.
The partial information decomposition (PID) and its extension, integrated information decomposition (ΦID), are promising frameworks to investigate information phenomena involving multiple variables. An important limitation of these approaches is the high computational cost involved in their calculation. Here we leverage fundamental algebraic properties of these decompositions to enable a computationally efficient method to estimate them, which we call the fast Möbius transform. Our approach is based on a novel formula for estimating the Möbius function that circumvents important computational bottlenecks. We showcase the capabilities of this approach by presenting two analyses that would be unfeasible without this method: decomposing the information that neural activity at different frequency bands yields about the brain's macroscopic functional organisation, and identifying distinctive dynamical properties of the interactions between multiple voices in baroque music. Overall, our proposed approach illuminates the value of algebraic facets of information decomposition and opens the way to a wide range of future analyses.
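For context, the Möbius inversion underlying PID-style decompositions recovers each atom from the cumulative values on the lattice via I_partial(α) = I_cap(α) − Σ_{β≺α} I_partial(β). The sketch below implements the naive recursion, i.e. the baseline that the fast Möbius transform is designed to outperform, with hypothetical node labels:

```python
def partial_atoms(cumulative, below):
    """Recover lattice 'atoms' from cumulative values by Mobius inversion.

    cumulative: dict node -> cumulative value (e.g. a PID redundancy I_cap)
    below:      dict node -> set of nodes strictly below it in the lattice
    Satisfies: cumulative[a] == atoms[a] + sum of atoms strictly below a.
    """
    atoms = {}
    def atom(a):
        if a not in atoms:
            atoms[a] = cumulative[a] - sum(atom(b) for b in below[a])
        return atoms[a]
    for node in cumulative:
        atom(node)
    return atoms

# Two-source PID lattice (Williams-Beer ordering): {1}{2} < {1}, {2} < {12}
below = {
    "{1}{2}": set(),
    "{1}":    {"{1}{2}"},
    "{2}":    {"{1}{2}"},
    "{12}":   {"{1}{2}", "{1}", "{2}"},
}
# Cumulative (redundancy-function) values for an XOR gate:
cum = {"{1}{2}": 0.0, "{1}": 0.0, "{2}": 0.0, "{12}": 1.0}
print(partial_atoms(cum, below))  # the synergy atom {12} carries the full bit
```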
5-methoxy-N,N-dimethyltryptamine (5-MeO-DMT) is a psychedelic drug known for its uniquely profound effects on subjective experience, reliably eradicating the perception of time, space, and the self. However, little is known about how this drug alters large-scale brain activity. We collected naturalistic electroencephalography (EEG) data from 29 healthy individuals before and after inhaling a high dose (12 mg) of vaporised synthetic 5-MeO-DMT. We replicate work from rodents showing amplified low-frequency oscillations, and extend these findings with novel tools for characterising the organisation and dynamics of complex low-frequency spatiotemporal fields of neural activity. We find that 5-MeO-DMT radically reorganises low-frequency flows of neural activity, causing them to become incoherent, heterogeneous, viscous, fleeting, and nonrecurring, and to cease their typical travelling forwards and backwards across the cortex relative to resting state. Further, we find a consequence of this reorganisation in broadband activity, which exhibits slower, more stable, low-dimensional behaviour, with increased energy barriers to rapid global shifts. These findings provide the first detailed empirical account of how 5-MeO-DMT sculpts human brain dynamics, revealing a novel set of cortical slow-wave behaviours, with significant implications for extant neuroscientific models of serotonergic psychedelics.
Biological and artificial neural networks develop internal representations that enable them to perform complex tasks. In artificial networks, the effectiveness of these models relies on their ability to build task-specific representations, a process influenced by interactions among datasets, architectures, initialization strategies, and optimization algorithms. Prior studies highlight that different initializations can place networks in either a lazy regime, where representations remain static, or a rich/feature-learning regime, where representations evolve dynamically. Here, we examine how initialization influences learning dynamics in deep linear neural networks, deriving exact solutions for λ-balanced initializations, defined by the relative scale of weights across layers. These solutions capture the evolution of representations and the Neural Tangent Kernel across the spectrum from the rich to the lazy regime. Our findings deepen the theoretical understanding of the impact of weight initialization on learning regimes, with implications for continual learning, reversal learning, and transfer learning, relevant to both neuroscience and practical applications.
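In the two-layer linear case, a λ-balanced initialization (in the sense commonly used for deep linear networks) satisfies W2ᵀW2 − W1W1ᵀ = λI. A minimal numpy sketch, with hypothetical dimensions and task, illustrates that this balance matrix is approximately conserved under gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, lam, lr = 5, 5, 0.5, 1e-3   # layer widths, balance level, step size

# lambda-balanced start: W2^T W2 - W1 W1^T = lam * I  (hypothetical scales)
a = 1.0
W1 = a * np.eye(h, d)
W2 = np.sqrt(a**2 + lam) * np.eye(d, h)

# Random linear regression task y = T x
T = rng.normal(size=(d, d))
X = rng.normal(size=(d, 256))
Y = T @ X

for _ in range(2000):
    E = W2 @ W1 @ X - Y                   # residuals
    gW2 = E @ (W1 @ X).T / X.shape[1]     # gradient of 0.5 * mean squared error
    gW1 = W2.T @ E @ X.T / X.shape[1]
    W2 -= lr * gW2
    W1 -= lr * gW1

# The balance matrix is an (approximately) conserved quantity of training
B = W2.T @ W2 - W1 @ W1.T
print(np.abs(B - lam * np.eye(h)).max())  # small: balance is preserved
```

Under exact gradient flow the conservation is exact; discrete gradient descent preserves it up to O(lr) discretisation error, which is why the deviation printed above stays small.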
Psilocybin is a classic psychedelic and a novel treatment for mood disorders. Psilocybin induces dose-dependent, transient (4-6 hours), usually pleasant changes in perception, cognition, and emotion, by non-selectively agonizing 5-HT2A receptors and negatively regulating serotonin reuptake, as well as a long-term positive antidepressant effect on mood and well-being. The long-term effects are ascribed to the psychological quality of the acute experience, an increase in synaptodensity, and a temporary (1-week) down-regulation of 5-HT2A receptors. Electroencephalography, a non-invasive neuroimaging tool, can track the acute effects of psilocybin; these include the suppression of alpha activity, decreased global connectivity, and increased brain entropy (i.e. brain signal diversity) in the eyes-closed resting state. However, few studies have investigated how these measures evolve together over the course of the psychedelic experience. The current research aimed to evaluate the temporal EEG profile of psilocybin intoxication. 20 healthy individuals (10 women) underwent oral administration of psilocybin (0.26 mg/kg) as part of a placebo-controlled cross-over study; resting-state 5-minute eyes-closed EEG was obtained at baseline and 1, 1.5, 3, 6, and 24 hours after psilocybin administration. Absolute power, relative power spectral density (PSD), power-envelope global functional connectivity (GFC), Lempel-Ziv complexity (LZ), and Complexity via State-Space Entropy Rate (CSER) were obtained together with measures of subjective intensity of experience. Absolute power decreased in the alpha and beta bands but increased in the delta and gamma frequencies; a broadband decrease was observed 24 hours later. The PSD showed a decrease in alpha occipitally between 1 and 3 hours and a decrease in beta frontally at 3 hours, but the power spectral distribution was unchanged at 24 hours. The GFC showed an acute decrease at 1, 1.5, and 3 hours in the alpha band. LZ showed an increase at 1 and 1.5 hours. Decomposition of CSER into functional bands showed a decrease in the alpha band but an increase over higher frequencies. Further, complexity over source space showed opposing changes in the Default Mode Network (DMN) and the visual network between conditions, suggesting a relationship between signal complexity, stimulus integration, and perception of self. In an exploratory analysis, we found that a change in gamma GFC in the DMN correlates with oceanic boundlessness. Psychological effects of psilocybin may be wrapped in personal interpretations and history unrelated to underlying neurobiological changes, but changes to the perception of self may be bound to a perceived loss of boundary based on whole-brain synchrony with the DMN in higher frequency bands.
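Of the measures above, Lempel-Ziv complexity is perhaps the most widely used. A common recipe in the EEG literature (a sketch of the standard LZ76 approach, not necessarily this study's exact pipeline) binarises each signal around its mean and counts LZ76 phrases:

```python
import numpy as np

def lz76(s):
    """Number of phrases in the LZ76 parsing of a binary string
    (Kaspar & Schuster's classic algorithm)."""
    i, k, l, k_max, c, n = 0, 1, 1, 1, 1, len(s)
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

def lz_complexity(x):
    """Binarise a signal around its mean and return normalised LZ76
    complexity (close to 1 for white noise, lower for regular signals)."""
    s = ''.join('1' if v else '0' for v in x > x.mean())
    n = len(s)
    return lz76(s) * np.log2(n) / n

rng = np.random.default_rng(0)
print(lz_complexity(rng.normal(size=10000)))                     # near 1
print(lz_complexity(np.sin(np.linspace(0, 100 * np.pi, 10000)))) # much lower
```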
Many information-theoretic quantities have corresponding representations in terms of sets. The prevailing signed measure space for characterising entropy, the I-measure of Yeung, is occasionally unable to discern between qualitatively distinct systems. In previous work, we presented a refinement of this signed measure space and demonstrated its capability to represent many quantities, which we called logarithmically decomposable quantities. In the present work we demonstrate that this framework has natural algebraic behaviour which can be expressed in terms of ideals (characterised here as upper-sets), and we show that this behaviour allows us to make various counting arguments and characterise many fixed-parity information quantity expressions. As an application, we give an algebraic proof that the only completely synergistic system of three finite variables X, Y and Z is the XOR gate.
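The sense in which XOR is completely synergistic can be verified directly from its joint distribution: each input alone carries zero information about the output, while both together determine it fully. A small Python sanity check:

```python
import numpy as np
from itertools import product

# Joint distribution of (X, Y, Z) with X, Y fair coins and Z = X XOR Y
p = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}

def H(varidx):
    """Entropy (bits) of the marginal over the given variable indices."""
    marg = {}
    for s, ps in p.items():
        key = tuple(s[i] for i in varidx)
        marg[key] = marg.get(key, 0.0) + ps
    return -sum(q * np.log2(q) for q in marg.values() if q > 0)

I_xz = H([0]) + H([2]) - H([0, 2])         # I(X;Z)   = 0 bits
I_yz = H([1]) + H([2]) - H([1, 2])         # I(Y;Z)   = 0 bits
I_xyz = H([0, 1]) + H([2]) - H([0, 1, 2])  # I(X,Y;Z) = 1 bit
print(I_xz, I_yz, I_xyz)
```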
The Shannon entropy of a random variable X has much behaviour analogous to a signed measure. Previous work has explored this connection by defining a signed measure on abstract sets, which are taken to represent the information that different random variables contain. This construction is sufficient to derive many measure-theoretic counterparts to information quantities such as the mutual information I(X;Y), the joint entropy H(X,Y), and the conditional entropy H(X|Y). Here we provide concrete characterisations of these abstract sets and a corresponding signed measure, and in doing so we demonstrate that there exists a much finer decomposition with intuitive properties, which we call the logarithmic decomposition (LD). We show that this signed measure space has the useful property that its logarithmic atoms are easily characterised as having negative or positive entropy, while also being consistent with Yeung's I-measure. We demonstrate the usability of our approach by re-examining the Gács-Körner common information and the Wyner common information from this new geometric perspective and characterising them in terms of our logarithmic atoms, a property we call logarithmic decomposability. We present possible extensions of this construction to continuous probability distributions before discussing implications for quality-led information theory. Lastly, we apply our new decomposition to examine the dyadic and triadic systems of James and Crutchfield and show that, in contrast to the I-measure alone, our decomposition is able to qualitatively distinguish between them.
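For reference, the measure-theoretic counterparts mentioned above are, in Yeung's I-measure notation, with signed measure $\mu^*$ and set variables $\tilde{X}, \tilde{Y}$:

```latex
\mu^*(\tilde{X} \cap \tilde{Y}) = I(X;Y), \qquad
\mu^*(\tilde{X} \cup \tilde{Y}) = H(X,Y), \qquad
\mu^*(\tilde{X} \setminus \tilde{Y}) = H(X \mid Y).
```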
Different whole-brain computational models have been recently developed to investigate hypotheses related to brain mechanisms. Among these, the Dynamic Mean Field (DMF) model is particularly attractive, combining a biophysically realistic model that is scaled up via a mean-field approach with multimodal imaging data. However, an important barrier to widespread usage of the DMF model is that current implementations are computationally expensive, supporting only simulations on brain parcellations with fewer than 100 brain regions. Here, we introduce an efficient and accessible implementation of the DMF model: the FastDMF. By leveraging analytical and numerical advances – including a novel estimation of the feedback inhibition control parameter, and a Bayesian optimization algorithm – the FastDMF circumvents various computational bottlenecks of previous implementations, improving interpretability, performance, and memory use. Furthermore, these advances allow the FastDMF to increase the number of simulated regions by one order of magnitude, as confirmed by the good fit to fMRI data parcellated at 90 and 1000 regions. These advances open the way to the widespread use of biophysically grounded whole-brain models for investigating the interplay between anatomy, function, and brain dynamics, and to identifying mechanistic explanations of recent results obtained from fine-grained neuroimaging recordings.
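As background, DMF-type models build on the reduced Wong-Wang mean-field equations. The sketch below integrates a simplified excitatory-only variant with Euler-Maruyama, using standard parameter values from the reduced Wong-Wang literature; it omits the inhibitory populations and the feedback inhibition control that the full DMF (and FastDMF) include, so it illustrates the model family rather than FastDMF itself:

```python
import numpy as np

def simulate_dmf(C, G=0.5, tmax=10.0, dt=1e-3, seed=0):
    """Euler-Maruyama integration of a reduced Wong-Wang mean-field model
    (excitatory populations only). C: (N, N) structural connectivity."""
    rng = np.random.default_rng(seed)
    N = C.shape[0]
    # Standard parameter values from the reduced Wong-Wang literature
    a, b, d = 270.0, 108.0, 0.154      # f-I curve gain, threshold (Hz), shape (s)
    tau_s, gamma = 0.1, 0.641          # synaptic time constant (s), kinetic rate
    w, J, I0, sigma = 0.9, 0.2609, 0.382, 0.01  # coupling (nA) and noise scale
    S = 0.1 * np.ones(N)               # synaptic gating variables
    traj = []
    for _ in range(int(tmax / dt)):
        x = w * J * S + G * J * (C @ S) + I0              # input current (nA)
        r = (a * x - b) / (1 - np.exp(-d * (a * x - b)))  # firing rate (Hz)
        S = S + dt * (-S / tau_s + (1 - S) * gamma * r) \
              + sigma * np.sqrt(dt) * rng.normal(size=N)
        S = np.clip(S, 0.0, 1.0)
        traj.append(S.copy())
    return np.array(traj)

# Toy run on a random symmetric "connectome"
rng = np.random.default_rng(1)
C = np.abs(rng.normal(size=(10, 10)))
C = (C + C.T) / 2
np.fill_diagonal(C, 0)
print(simulate_dmf(C, G=0.2, tmax=5.0).shape)  # (5000, 10)
```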
Citations (59)
... Although there exist many different "flavors" of dynamic models, e.g. biophysically realistic dynamical models (with varying degrees of complexity) [66,67], here we study a spreading model grounded in network science. This model abstracts away biophysical realism, trading it for tractability and interpretability, thus allowing us to unambiguously track the evolution of signaling cascades through the fly brain. ...
... Although the brain networks analysed here were strongly redundancy-dominated (see Figure 6(B.1-2)) and redundancy was shown to increase across several localised EEG sources (Figures 3 and 5), the most significant changes from baseline were found in the direction of increased complementarity (Figure 4) and reductions in redundancy dominance (i.e. increased synergy) (Figure 6), together suggesting improvements in both local and distributed information processing [48]. This also suggests that the crucial balance between redundancy and synergy as functionally segregative and integrative forces respectively in dynamical systems like the brain was maintained following nootropic supplementation, thus ensuring adequate robustness (through compensatory increases in redundancy) to support the overall increase in computational capacity gained with increased synergy [48][49][50]. ...
... This value also has the benefit of being non-negative and has a reasonably straightforward interpretation: it is the information about the future of the whole that is not disclosed by the most informative part. In the scientific literature, the MMI-based integrated information decomposition is far and away the most popular, having been applied to evolutionary analysis of Boolean networks [7], in-silico models of spiking neural networks [8], macaque [9] and human [10] brain data, analysis of physical phase transitions [2], and clinical studies of loss of consciousness in anesthesia or brain injury [11,12]. Despite its increasingly widespread use, the MMI-based redundancy framework has gone relatively unstudied from a mathematical perspective. ...
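For reference, assuming Barrett's standard minimum mutual information (MMI) redundancy function, the quantity described in this excerpt, the information the whole carries about its future that is not disclosed by the most informative part, can be written as:

```latex
\mathrm{Red}_{\mathrm{MMI}}(X^1_t, X^2_t;\, X_{t+1}) = \min_i I(X^i_t;\, X_{t+1}),
\qquad
\mathrm{Syn} = I(X_t;\, X_{t+1}) - \max_i I(X^i_t;\, X_{t+1}).
```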
... Accordingly, the "strong" version of emergence, associated with downward causation [9-11], can then be linked to higher-order cognitive functions, which process synergistic information and interact according to laws and principles that cannot simply be reduced to the underlying neurophysiology [12,13]. As regards the causal role of consciousness, downward causation is implicitly involved in the free will problem and Cartesian dualism, in the form of a 'synergistic core' [14,15] that could emerge spontaneously in the brain and govern underlying neural activity through downward causation across hierarchical levels. ...
... Synergy has also been found to be ubiquitous in natural and artificial systems, having been observed in networks of cortical neurons 6,7 , whole-brain fMRI dynamics [8][9][10][11] , climatological systems 12 , interactions between social identities and life-outcomes (often called "intersectionality") 13 , and heart-rate dynamics 14 . Furthermore, synergy has been proposed to inform about clinically-meaningful processes such as ageing 15 , traumatic brain injury 16,17 , and the actions of anaesthetic 17 and psychedelic drugs 18 . This list is nonexhaustive. ...
... Then, we propose a second metric of order relevance to quantify the contribution of hyperlinks of large and small orders to the different considered network properties. Inspired by similar concepts in multivariate information theory 60 and recently also applied in the context of multilayer network analysis 61, the proposed measure allows us to determine whether orders contribute to the considered structural network property in a synergistic or in a redundant manner. Third, to derive meaningful insights from networks in which node labels are available, we propose the measure of group balance, which measures how hyperlinks of different orders connect nodes with either the same or different labels (i.e., intra- and inter-label connections). ...
... Information theory provides effective methods to assess important questions in the field of computational neuroscience, including the assessment of various aspects of cognition and consciousness. For instance, significant advances have been achieved in using complexity measures to characterise altered states induced by psychoactive substances [38][39][40]. Complementing these studies, here we explore the relationship between brain dynamics and conscious states by decomposing the information structure of such conditions with PID. One particular case is the altered states elicited by psychedelic drugs like the serotonergic agonists LSD [41] and psilocybin [42] and the NMDA antagonist ketamine [43]. ...
... Information theory has emerged as a powerful lens for the understanding of complex systems, offering tools to uncover structure in the variation of multiple interacting components. From broad explorations of the nature of complexity [1-4] to detailed investigations of specific systems such as the brain [5][6][7], collective behavior in nature [8,9], and toy models from condensed matter physics [10,11], information-theoretic approaches reveal hidden patterns and interdependencies. These methods bridge disciplines, providing a domain-agnostic framework for quantifying how components interact to give rise to system-wide phenomena. ...
... Another model, proposed by [27], predicts fitness levels by focusing on physiological features such as heart rate, step count data, and total oxygen consumption. Milan [28] emphasized the importance of dynamic models that capture physiological parameters, such as heart rate, in response to exercise, aiming to better understand the relationship between physiological responses and exercise. ...
... Generally, maximizing entropy is associated with loss of information; however, in the case of higher-order synergies, that relationship is more nuanced. The first major finding, from Orio et al., showed that adding stochastic noise to the dynamic function of an elementary cellular automaton can increase the synergy [23]. Subsequently, Varley and Bongard showed that random Boolean networks are inherently synergistic, and that there were deep dynamical similarities between random networks and synergistic ones that did not apply to redundant systems [7]. ...