# Joseph T Lizier

The University of Sydney · Centre for Complex Systems

Ph.D.

## About

- Publications: 148
- Reads: 21,372
- Citations: 4,792

Introduction

Complex systems scientist, studying the dynamics of information processing and computation in complex networks. Interests: complex networks, computational neuroscience, machine learning.

Additional affiliations

**The University of Sydney**

Position: Professor (Associate)

- January 2019 - present
- January 2015 - December 2018
- May 2012 - January 2015

## Publications

We use a standard discrete-time linear Gaussian model to analyze the information storage capability of individual nodes in complex networks, given the network structure and link weights. In particular, we investigate the role of two- and three-node motifs in contributing to local information storage. We show analytically that directed feedback and...
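
The storage idea in this abstract can be illustrated with a toy calculation (an illustrative sketch of my own, not the paper's network analysis; the function name `gaussian_ais` and the AR(1) parameters are hypothetical choices): for a stationary Gaussian process, the single-lag active information storage reduces to the Gaussian mutual information between consecutive samples, -0.5 * log2(1 - r^2), where r is the lag-1 autocorrelation.

```python
import math
import random

def gaussian_ais(x):
    """Single-lag active information storage (bits) for a stationary
    Gaussian process: the Gaussian mutual information between x_t and
    x_{t-1}, i.e. -0.5 * log2(1 - r^2) with r the lag-1 autocorrelation."""
    n = len(x) - 1
    prev, curr = x[:-1], x[1:]
    mp, mc = sum(prev) / n, sum(curr) / n
    cov = sum((u - mp) * (v - mc) for u, v in zip(prev, curr)) / n
    vp = sum((u - mp) ** 2 for u in prev) / n
    vc = sum((v - mc) ** 2 for v in curr) / n
    r = cov / math.sqrt(vp * vc)
    return -0.5 * math.log2(1 - r * r)

# Simulate a discrete-time linear Gaussian (AR(1)) process: x_t = a*x_{t-1} + noise.
random.seed(1)
a = 0.7
x = [0.0]
for _ in range(20000):
    x.append(a * x[-1] + random.gauss(0.0, 1.0))

est = gaussian_ais(x[1000:])            # estimate from the series (transient discarded)
exact = -0.5 * math.log2(1 - a * a)     # analytic value for this model
```

For this single self-loop the estimate matches the analytic value; the paper's question of how two- and three-node motifs contribute requires the full network covariances, which this sketch does not attempt.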

The human brain undertakes highly sophisticated information processing facilitated by the interaction between its sub-regions. We present a novel method for interregional connectivity analysis, using multivariate extensions to the mutual information and transfer entropy. The method allows us to identify the underlying directed information structure...

We present a measure of local information transfer, derived from an existing averaged information-theoretical measure, namely, transfer entropy. Local transfer entropy is used to produce profiles of the information transfer into each spatiotemporal point in a complex system. These spatiotemporal profiles are useful not only as an analytical tool, b...
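
A hedged sketch of the local measure (plug-in estimation for discrete series; the authors' reference implementations live in toolkits such as JIDT and IDTxl, and the function below is illustrative only): the local transfer entropy at each step is the log-ratio log2 p(x_t | history, y_{t-1}) / p(x_t | history), and averaging the local values recovers the usual transfer entropy.

```python
import math
import random
from collections import Counter

def local_transfer_entropy(source, target, k=1):
    """Plug-in local transfer entropy (bits) from source to target,
    with target history length k and source lag 1. Returns one local
    value per time step t = k .. len(target)-1."""
    joint, cond_src, cond_tgt, hist = Counter(), Counter(), Counter(), Counter()
    samples = []
    for t in range(k, len(target)):
        h = tuple(target[t - k:t])   # target history
        s = source[t - 1]            # source value one step back
        x = target[t]                # next target value
        samples.append((x, h, s))
        joint[(x, h, s)] += 1
        cond_src[(h, s)] += 1
        cond_tgt[(x, h)] += 1
        hist[h] += 1
    # local TE = log2 [ p(x | h, s) / p(x | h) ] at each observed step
    return [math.log2((joint[(x, h, s)] / cond_src[(h, s)]) /
                      (cond_tgt[(x, h)] / hist[h]))
            for (x, h, s) in samples]

random.seed(2)
src = [random.randint(0, 1) for _ in range(2000)]
tgt = [0] + src[:-1]                 # target copies the source with a one-step delay
lte = local_transfer_entropy(src, tgt, k=1)
avg = sum(lte) / len(lte)            # averages back to ~1 bit of transfer here
```

With a fully predictive source, each local value sits near 1 bit; in general the local values form a spatiotemporal profile and can be negative where the source is misinformative.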

The brains of many organisms are capable of complicated distributed computation underpinned by a highly advanced information processing capacity. Although substantial progress has been made towards characterising the information flow component of this capacity in mature brains, there is a distinct lack of work characterising its emergence during ne...

Antarctic krill swarms are one of the largest known animal aggregations, and yet, despite being the keystone species of the Southern Ocean, little is known about how swarms are formed and maintained. Understanding the local interactions between individuals that provide the basis for these swarms is fundamental to knowing how swarms arise in nature,...

Scientists have developed hundreds of techniques to measure the interactions between pairs of processes in complex systems. But these computational methods -- from correlation coefficients to causal inference -- rely on distinct quantitative theories that remain largely disconnected. Here we introduce a library of 249 statistics for pairwise intera...

Here, we combine network neuroscience and machine learning to reveal connections between the brain’s network structure and the emerging network structure of an artificial neural network. Specifically, we train a shallow, feedforward neural network to classify hand-written digits and then use a combination of systems neuroscience and information-th...

Neuromorphic systems comprised of self-assembled nanowires exhibit a range of neural-like dynamics arising from the interplay of their synapse-like electrical junctions and their complex network topology. Additionally, various information processing tasks have been demonstrated with neuromorphic nanowire networks. Here, we investigate the dynamics...

Transfer entropy (TE) is a widely used measure of directed information flows in a number of domains including neuroscience. Many real-world time series for which we are interested in information flows come in the form of (near) instantaneous events occurring over time. Examples include the spiking of biological neurons, trades on stock markets and...

Inferring linear dependence between time series is central to our understanding of natural and artificial systems. Unfortunately, the hypothesis tests that are used to determine statistically significant directed or multivariate relationships from time-series data often yield spurious associations (Type I errors) or omit causal relationships (Type...

Making fast and accurate group decisions under uncertain and risky conditions is a fundamental problem for groups. Currently, there is little empirical evidence of how natural selection (such as environmental predation risk) has shaped the mechanisms of group decision making. We repeatedly tested individually marked guppies, Poecilia reticulata, fr...

Antarctic krill swarms are one of the largest known animal aggregations. However, despite being the keystone species of the Southern Ocean, little is known about how swarms are formed and maintained, and we lack a detailed understanding of the local interactions between individuals that provide the basis for these swarms. Here we analyzed the traje...

The behaviour of animals is strongly influenced by the detection of cues relating to foraging opportunity or to risk, while the social environment plays a crucial role in mediating their behavioural responses. Despite this, the role of the social environment in the behaviour of non-grouping animals has received far less attention than in social spe...

Functional and effective networks inferred from time series are at the core of network neuroscience. Interpreting properties of these networks requires inferred network models to reflect key underlying structural features. However, even a few spurious links can severely distort network measures, posing a challenge for functional connectomes. We stu...

The algorithmic rules that define deep neural networks are clearly defined, however the principles that define their performance remain poorly understood. Here, we use systems neuroscience and information theoretic approaches to analyse a feedforward neural network as it is trained to classify handwritten digits. By tracking the topology of the net...

Functional and effective networks inferred from time series are at the core of network neuroscience. Since it is common practice to compare network properties between patients and controls, it is crucial for inferred network models to reflect key underlying structural properties. However, even a few spurious links severely distort the shortest-path...

Transfer entropy (TE) is an established method for quantifying directed statistical dependencies in neuroimaging and complex systems datasets. The pairwise (or bivariate) TE from a source to a target node in a network does not depend solely on the local source-target link weight, but on the wider network structure that the link is embedded in. This...

The ability to quantify complex relationships within multivariate time series is a key component of modelling many physical systems, from the climate to brains and other biophysical phenomena. Unfortunately, even testing the significance of simple dependence measures, such as Pearson correlation, is complicated by altered sampling properties when a...
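
One standard remedy can be sketched minimally (my own illustration, not necessarily this paper's method; `shift_surrogate_pvalue` and the AR(1) settings are hypothetical choices): assess significance against circular-shift surrogates, which preserve each series' autocorrelation while breaking the temporal alignment between the two series.

```python
import math
import random

def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((u - mx) ** 2 for u in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    return sum((u - mx) * (v - my) for u, v in zip(x, y)) / (sx * sy)

def shift_surrogate_pvalue(x, y, n_surr=200, seed=0):
    """p-value for |r| against circular-shift surrogates of y. Shifts
    avoid the ends so surrogates are decorrelated from the original."""
    rng = random.Random(seed)
    r_obs = abs(pearson(x, y))
    n = len(y)
    exceed = 0
    for _ in range(n_surr):
        s = rng.randrange(20, n - 20)
        y_s = y[s:] + y[:s]
        if abs(pearson(x, y_s)) >= r_obs:
            exceed += 1
    return (exceed + 1) / (n_surr + 1)

def ar1(n, a=0.8):
    """Autocorrelated AR(1) series: x_t = a*x_{t-1} + Gaussian noise."""
    x = [0.0]
    for _ in range(n):
        x.append(a * x[-1] + random.gauss(0.0, 1.0))
    return x[1:]

random.seed(3)
x = ar1(500)
y_indep = ar1(500)                                   # no real dependence on x
y_coupled = [u + random.gauss(0.0, 0.5) for u in x]  # strongly coupled to x
p_indep = shift_surrogate_pvalue(x, y_indep)
p_coupled = shift_surrogate_pvalue(x, y_coupled)
```

The surrogate null absorbs the inflated correlations that autocorrelation alone produces, so only the genuinely coupled pair comes out significant.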

The entropy of a pair of random variables is commonly depicted using a Venn diagram. This representation is potentially misleading, however, since the multivariate mutual information can be negative. This paper presents new measures of multivariate information content that can be accurately depicted using Venn diagrams for any number of random vari...
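
The negativity the abstract refers to can be checked in a few lines (an illustrative calculation of my own, using the inclusion-exclusion sign convention for the three-way term): for X and Y independent fair bits and Z = X XOR Y, the "central overlap" of the three-set Venn diagram evaluates to -1 bit.

```python
import math
from itertools import product

def entropy(p):
    """Shannon entropy (bits) of a distribution given as {outcome: prob}."""
    return -sum(v * math.log2(v) for v in p.values() if v > 0)

def marginal(joint, idx):
    """Marginalise a joint distribution onto the variable positions in idx."""
    out = {}
    for outcome, prob in joint.items():
        key = tuple(outcome[i] for i in idx)
        out[key] = out.get(key, 0.0) + prob
    return out

# X, Y independent fair bits; Z = X XOR Y.
joint = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}

H = lambda idx: entropy(marginal(joint, idx))
# Venn "overlap" via inclusion-exclusion:
# I(X;Y;Z) = H(X)+H(Y)+H(Z) - H(X,Y)-H(X,Z)-H(Y,Z) + H(X,Y,Z)
i_xyz = (H((0,)) + H((1,)) + H((2,))
         - H((0, 1)) - H((0, 2)) - H((1, 2))
         + H((0, 1, 2)))
# i_xyz == -1.0: the central Venn region is negative for XOR
```

A negative central region cannot be drawn as an ordinary area, which is exactly the misleading aspect of the naive Venn picture that the paper's new measures are designed to avoid.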

A key component of the flexibility and complexity of the brain is its ability to dynamically adapt its functional network structure between integrated and segregated brain states depending on the demands of different cognitive tasks. Integrated states are prevalent when performing tasks of high complexity, such as maintaining items in working memor...

Network inference algorithms are valuable tools for the study of large-scale neuroimaging datasets. Multivariate transfer entropy is well suited for this task, being a model-free measure that captures nonlinear and lagged dependencies between time series to infer a minimal directed network model. Greedy algorithms have been proposed to efficiently...

Animal groups are often composed of individuals that vary according to behavioral, morphological, and internal state parameters. Understanding the importance of such individual-level heterogeneity to the establishment and maintenance of coherent group responses is of fundamental interest in collective behavior. We examined the influence of hunger o...

Collectively moving animals often display a high degree of synchronization and cohesive group-level formations, such as elongated schools of fish. These global patterns emerge as the result of localized rules of interactions. However, the exact relationship between speed, polarization, neighbour positioning and group structure has produced conflict...

It is generally accepted that, when moving in groups, animals process information to coordinate their motion. Recent studies have begun to apply rigorous methods based on Information Theory to quantify such distributed computation. Following this perspective, we use transfer entropy to quantify dynamic information flows locally in space and time ac...

Despite the frequency with which mixed-species groups are observed in nature, studies of collective behaviour typically focus on single-species groups. Here, we quantify and compare the patterns of interactions between three fish species, threespine sticklebacks (Gasterosteus aculeatus), ninespine sticklebacks (Pungitius pungitius) and roach (Rutil...

Due to the interdisciplinary nature of complex systems as a field, students studying complex systems at University level have diverse disciplinary backgrounds. This brings challenges (e.g. wide range of computer programming skills) but also opportunities (e.g. facilitating interdisciplinary interactions and projects) for the classroom. However, the...

Complex infrastructural networks provide critical services to cities but can be vulnerable to external stresses, including climatic variability. This vulnerability has also challenged past urban settlements, but its role in cases of historic urban demise has not been precisely documented. We transform archeological data from the medieval Cambodian...

Information dynamics is an emerging description of information processing in complex systems that describes systems in terms of intrinsic computation, identifying computational primitives of information storage and transfer. In this paper we make a formal analogy between information dynamics and stochastic thermodynamics that describes the thermal...

The Information Dynamics Toolkit xl (IDTxl) is a comprehensive software package for efficient inference of networks and their node dynamics from multivariate time series data using information theory. IDTxl provides functionality to estimate the following measures: 1) For network inference: multivariate transfer entropy (TE)/Granger causality (GC),...

The characterization of information processing is an important task in complex systems science. Information dynamics is a quantitative methodology for modeling the intrinsic information processing conducted by a process represented as a time series, but to date has only been formulated in discrete time. Building on previous work which demonstrated...

The formulation of the Partial Information Decomposition (PID) framework by Williams and Beer in 2010 attracted a significant amount of attention to the problem of defining redundant (or shared), unique and synergistic (or complementary) components of mutual information that a set of source variables provides about a target. This attention resulted...

What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information deco...

The neurophysiological underpinnings of the nonsocial symptoms of autism spectrum disorder (ASD) which include sensory and perceptual atypicalities remain poorly understood. Well-known accounts of less dominant top-down influences and more dominant bottom-up processes compete to explain these characteristics. These accounts have been recently embed...

The pointwise mutual information quantifies the mutual information between events $x$ and $y$ from random variable $X$ and $Y$. This article considers the pointwise mutual information in a directed sense, examining precisely how an event $y$ provides information about $x$ via probability mass exclusions. Two distinct types of exclusions are identif...
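
A tiny worked example may help fix ideas (my own illustration; the distribution is arbitrary): the pointwise mutual information i(x;y) = log2 p(x,y)/(p(x)p(y)) = log2 p(x|y)/p(x) is positive when observing y raises the probability of x and negative when it lowers it, and its expectation over the joint distribution is the ordinary mutual information.

```python
import math

def pointwise_mi(p_xy, p_x, p_y):
    """Pointwise mutual information i(x;y) = log2 p(x,y) / (p(x) p(y))."""
    return math.log2(p_xy / (p_x * p_y))

# Arbitrary joint distribution over two binary variables.
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
px = {0: 0.5, 1: 0.5}
py = {0: 0.5, 1: 0.5}

i_00 = pointwise_mi(p[(0, 0)], px[0], py[0])  # log2(0.4/0.25) > 0: y=0 favours x=0
i_01 = pointwise_mi(p[(0, 1)], px[0], py[1])  # log2(0.1/0.25) < 0: y=1 disfavours x=0

# Averaging the pointwise values over the joint recovers I(X;Y).
mi = sum(p[(x, y)] * pointwise_mi(p[(x, y)], px[x], py[y])
         for (x, y) in p)
```

The paper's directed analysis goes further, decomposing how y achieves each of these shifts via probability mass exclusions; the sketch above only exhibits the positive and negative pointwise values themselves.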

We study self-organization of collective motion as a thermodynamic phenomenon in the context of the first law of thermodynamics. It is expected that the coherent ordered motion typically self-organises in the presence of changes in the (generalized) internal energy and of (generalized) work done on, or extracted from, the system. We aim to explicit...

Information dynamics is an emerging description of information processing in complex systems. In this paper we make a formal analogy between information dynamics and stochastic thermodynamics. As stochastic dynamics increasingly concerns itself with the processing of information we suggest such an analogy is instructive in providing hitherto unexpl...

In this paper we explore several fundamental relations between formal systems, algorithms, and dynamical systems, focussing on the roles of undecidability, universality, diagonalization, and self-reference in each of these computational frameworks. Some of these interconnections are well-known, while some are clarified in this study as a result of...

Information processing performed by any system can be conceptually decomposed into the transfer, storage and modification of information—an idea dating all the way back to the work of Alan Turing. However, formal information theoretic definitions until very recently were only available for information transfer and storage, not for modification. Thi...

We study self-organisation of collective motion as a thermodynamic phenomenon, in the context of the second law of thermodynamics. It is expected that the coherent/ordered motion can only self-organise in the presence of entropy flux from the system of moving particles to the environment. We aim to explicitly quantify the entropy flux from a system...

Predictive coding suggests that the brain infers the causes of its sensations by combining sensory evidence with internal predictions based on available prior knowledge. However, the neurophysiological correlates of (pre-)activated prior knowledge serving these predictions are still unknown. Based on the idea that such pre-activated prior knowledge...

Understanding epidemic dynamics has always been a challenge. As witnessed from the ongoing Zika or the seasonal Influenza epidemics, we still need to improve our analytical methods to better understand and control epidemics. While the emergence of the complexity sciences at the turn of the millennium has resulted in their implementation in modelling epid...

We develop and apply several novel methods quantifying dynamic multi-agent team interactions. These interactions are detected information-theoretically and captured in two ways: via (i) directed networks (interaction diagrams) representing significant coupled dynamics between pairs of agents, and (ii) state-space plots (coherence diagrams) showing...

Artificial computing systems are a pervasive phenomenon in today's life. While traditionally such systems were employed to support humans in tasks that required mere number-crunching, there is an increasing demand for systems that exhibit autonomous, intelligent behavior in complex environments. These complex environments often confront artificial...

In this chapter we get to the essential mathematics of the book: a detailed discussion of transfer entropy.

Having introduced the transfer entropy in Chap. 4, we devote the remainder of the book to reviewing what this measure can tell us about various complex systems, guiding the reader through relevant applications of the measure.