Figure 2 - uploaded by Kai Olav Ellefsen
The "Goldilocks principle"-model of evolved learning: Learning evolves in an intermediate range of environmental variability. Too slow or too rapid changes select against learning.

Context in source publication

Context 1
... view of how environmental change affects the learning ability of individuals (Kerr and Feldman, 2003; Dukas, 1998) suggests that the relationship between environmental variability and the utility of learning follows the "Goldilocks principle" (Figure 2): for learning to be beneficial, environmental variability needs to be "just right". Too frequent changes, and learning cannot track them; too infrequent, and evolution can track them alone. ...
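The trade-off described above can be sketched as a toy rule of thumb. This is an illustrative model only, not the authors' implementation: the function name, the comparison against lifetime, and the `generations_to_adapt` threshold are all assumptions chosen to encode the intuition that learning pays off only in an intermediate range of change periods.

```python
def learning_beneficial(change_period, lifetime, generations_to_adapt=10):
    """Toy 'Goldilocks' heuristic (illustrative only, not from the paper).

    Learning helps only when the environment changes slowly enough for an
    individual to track within its lifetime, yet too quickly for evolution,
    which needs many generations, to track on its own.
    """
    # Changes slower than a lifetime can be tracked by a learner.
    trackable_by_learning = change_period >= lifetime
    # Changes faster than the evolutionary timescale defeat evolution alone.
    too_fast_for_evolution = change_period < lifetime * generations_to_adapt
    return trackable_by_learning and too_fast_for_evolution

# Sweep change periods (in units of one lifetime): only the middle of the
# range favors learning.
for period in [0.5, 1, 5, 20, 100]:
    print(period, learning_beneficial(period, lifetime=1))
```

With the assumed threshold of ten generations, only periods between one lifetime and ten lifetimes report `True`, reproducing the inverted-U shape of Figure 2.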

Citations

... The application of environmental variability or noise is used to address a number of questions concerned with the effects of environmental uncertainty (Grove et al., 2022). The most common amongst these are whether changing environments affect: the emergence of phenotypic plasticity (Wilder & Stanley, 2015; Kouvaris et al., 2017) and evolvability (Steiner, 2012; Canino-Koning et al., 2016; Ofria & Lalejini, 2016); and the evolution of versatile adaptations such as learning (Nolfi & Parisi, 1996; Ellefsen, 2014) and social learning (Borg & Channon, 2012; Bullinaria, 2018). What is common amongst all these questions and domains is that they all seek to understand the interaction between changing environments, specialist-generalist evolutionary dynamics, and adaptability. ...
... Other implementations include the use of sine waves to determine environmental fluctuations over time, or variations on this theme (Borg & Channon, 2012; Grove, 2014, 2018; Khan et al., 2020; Stanton, 2018; Stanton & Channon, 2013). A final common approach is to predetermine a series of states which an environment could be in, and to move between these states at a certain frequency, this frequency determining the difficulty or harshness of environmental change (Asakura et al., 2015; Canino-Koning et al., 2016; Ellefsen, 2014; Nolfi & Parisi, 1996; Ofria & Lalejini, 2016; Wilder & Stanley, 2015). One thing is common across all of these approaches: they rarely consider pink noise, or ground the method of environmental uncertainty in empirical observations. ...
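The two implementation styles described in this excerpt can be sketched as follows. This is a minimal illustration assuming nothing beyond the excerpt itself; function names, the default period, and the switching interval are hypothetical parameters, not values from any of the cited studies.

```python
import math
import random

def sine_environment(t, period=50.0, amplitude=1.0):
    """Sine-wave environmental state at time step t (the first approach)."""
    return amplitude * math.sin(2 * math.pi * t / period)

def switching_environment(n_steps, states=("A", "B", "C"),
                          switch_every=25, seed=0):
    """Predetermined states, switched at a fixed frequency (the second
    approach); a smaller switch_every models harsher environmental change."""
    rng = random.Random(seed)
    current = rng.choice(states)
    series = []
    for t in range(n_steps):
        if t > 0 and t % switch_every == 0:
            # Always move to a *different* state at each switch point.
            current = rng.choice([s for s in states if s != current])
        series.append(current)
    return series

env = switching_environment(100, switch_every=25)
```

In both cases the single frequency parameter (`period` or `switch_every`) is what the cited studies vary to control the rate of environmental change.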
... The application of environmental variability or noise is used to address a number of questions concerned with the effects of environmental uncertainty. The most common amongst these are whether changing environments affect: the emergence of phenotypic plasticity (Kouvaris et al., 2017; Wilder & Stanley, 2015) and evolvability (Canino-Koning et al., 2016; Ofria & Lalejini, 2016; Steiner, 2012); the evolution of robust controllers in 3D virtual creatures (Stanton, 2018; Stanton & Channon, 2013) and robots, both physical and simulated (Asakura et al., 2015; Bongard & Pfeifer, 2003; Jakobi, 1997; Jakobi et al., 1995); and the evolution of versatile adaptations such as learning (Ellefsen, 2014; Nolfi & Parisi, 1996) and social learning (Borg & Channon, 2012; Bullinaria, 2018). What is common amongst all these questions and domains is that they all seek to understand the interaction between changing environments, specialist-generalist evolutionary dynamics, and adaptability. ...
Article
Simulations of evolutionary dynamics often employ white noise as a model of stochastic environmental variation. Whilst white noise has the advantages of being simply generated and analytically tractable, empirical analyses demonstrate that most real environmental time series have power spectral densities consistent with pink or red noise, in which lower frequencies contribute proportionally greater amplitudes than higher frequencies. Simulated white noise environments may therefore fail to capture key components of real environmental time series, leading to erroneous results. To explore the effects of different noise colours on evolving populations, a simple evolutionary model of the interaction between life-history and the specialism-generalism axis was developed. Simulations were conducted using a range of noise colours as the environments to which agents adapted. Results demonstrate complex interactions between noise colour, reproductive rate, and the degree of evolved generalism; importantly, contradictory conclusions arise from simulations using white as opposed to red noise, suggesting that noise colour plays a fundamental role in generating adaptive responses. These results are discussed in the context of previous research on evolutionary responses to fluctuating environments, and it is suggested that Artificial Life as a field should embrace a wider spectrum of coloured noise models to ensure that results are truly representative of environmental and evolutionary dynamics.
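The abstract's distinction between white, pink, and red noise rests on the shape of the power spectral density, which falls off as 1/f^beta with beta = 0, 1, and 2 respectively. A standard way to generate such series, sketched below, is spectral shaping: scale the Fourier amplitudes of white noise by f^(-beta/2). This is a generic illustration of the technique, not the model used in the article.

```python
import numpy as np

def colored_noise(n, beta, seed=0):
    """Generate a time series with power spectral density ~ 1/f**beta.

    beta = 0 -> white noise, beta = 1 -> pink, beta = 2 -> red/Brownian.
    Uses spectral shaping: filter white noise in the frequency domain.
    """
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]            # avoid division by zero at the DC bin
    spectrum *= freqs ** (-beta / 2.0)
    series = np.fft.irfft(spectrum, n)
    return (series - series.mean()) / series.std()  # zero mean, unit variance

pink = colored_noise(1024, beta=1.0)
red = colored_noise(1024, beta=2.0)
```

For red noise the low-frequency components dominate, so successive values are strongly correlated; this is the property the abstract argues white-noise simulations fail to capture.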
... The first matter, the variance of information, has been the object of little study [SSR18]. The frequency of environmental changes has been shown to impact the evolution of learning [Ell14], and changes at every generation have been shown to lead to phenotypic plasticity [OL16]. ...
Thesis
The biological brain is an ensemble of individual components which have evolved over millions of years. Neurons and other cells interact in a complex network from which intelligence emerges. Many of the neural designs found in the biological brain have been used in computational models to power artificial intelligence, with modern deep neural networks spurring a revolution in computer vision, machine translation, natural language processing, and many more domains. However, artificial neural networks are based on only a small subset of biological functionality of the brain, and often focus on global, homogeneous changes to a system that is complex and locally heterogeneous. In this work, we examine the biological brain, from single neurons to networks capable of learning. We examine individually the neural cell, the formation of connections between cells, and how a network learns over time. For each component, we use artificial evolution to find the principles of neural design that are optimized for artificial neural networks. We then propose a functional model of the brain which can be used to further study select components of the brain, with all functions designed for automatic optimization such as evolution. Our goal, ultimately, is to improve the performance of artificial neural networks through inspiration from modern neuroscience. However, through evaluating the biological brain in the context of an artificial agent, we hope to also provide models of the brain which can serve biologists.
... With timescales comparable to a lifetime, evolution may lead to phenotypic plasticity, which is the capacity for a genotype to express different phenotypes in response to different environmental conditions (Lalejini and Ofria, 2016). The frequency of environmental changes was observed experimentally in plastic neural networks to affect the evolution of learning (Ellefsen, 2014), revealing a complex relationship between environmental variability and evolved learning. A focus on the deceptiveness of evolving to learn is presented in Lehman and Miikkulainen (2014). ...
Article
Biological neural networks are systems of extraordinary computational capabilities shaped by evolution, development, and lifetime learning. The interplay of these elements leads to the emergence of adaptive behavior and intelligence, but the complexity of the whole system of interactions is an obstacle to the understanding of the key factors at play. Inspired by such intricate natural phenomena, Evolved Plastic Artificial Neural Networks (EPANNs) use simulated evolution in-silico to breed plastic neural networks, artificial systems composed of sensors, outputs, and plastic components that change in response to sensory-output experiences in an environment. These systems may reveal key algorithmic ingredients of adaptation, autonomously discover novel adaptive algorithms, and lead to hypotheses on the emergence of biological adaptation. EPANNs have seen considerable progress over the last two decades. Current scientific and technological advances in artificial neural networks are now setting the conditions for radically new approaches and results. In particular, the limitations of hand-designed structures and algorithms currently used in most deep neural networks could be overcome by more flexible and innovative solutions. This paper brings together a variety of inspiring ideas that define the field of EPANNs. The main computational methods and results are reviewed. Finally, new opportunities and developments are presented.