Predictive Regulation in Affective
and Adaptive Behaviour:
An Allostatic-Cybernetics Perspective
University of Skövde/University of Gothenburg, Sweden
University of Gothenburg, Sweden
University of Gothenburg, Sweden
In this chapter, different notions of allostasis (the process of achieving stability through change) as they
apply to adaptive behavior are presented. The authors discuss how notions of allostasis can be usefully
applied to Cybernetics-based homeostatic systems. Particular emphasis is placed upon affective states –
motivational and emotional – and, above all, the notion of ‘predictive’ regulation, as distinct from forms
of ‘reactive’ regulation, in homeostatic systems. The authors focus here on Ashby’s ultrastability concept, which entails behavior change for correcting homeostatic errors (deviations from the healthy range of essential (physiological) variables). The authors consider how the ultrastability concept can be broadened
to incorporate allostatic mechanisms and how they may enhance adaptive physiological and behavioral
activity. Finally, this chapter references different (Cybernetics-based) theoretical frameworks that
incorporate the notion of allostasis. The chapter then attempts to untangle how the given perspectives fit
into the ‘allostatic ultrastable systems’ framework postulated by the authors.
Keywords: Homeostasis, Emotions, Predictive Processing, Ashby, Goal-Directed Behaviour,
Equilibrium, Allostatic Ultrastable Systems.
Mid-twentieth-century (‘first-wave’) Cybernetics, led by Ashby (1952), had at its core the concept of adaptive behavior serving homeostatic control. Cybernetics has been considered a forerunner of modern
Cognitive Science (cf. Pickering 2010, Staddon 2014) and has also provided strong input to the domain of
Systems Biology (Froese & Stewart 2010). In this chapter, we will discuss the role of an emerging
perspective of (second order) homeostasis known as ‘allostasis’ and how it may fit within a Cybernetics
approach as it concerns adaptive behavior.
Ashby (1952), as a chief exponent of first-wave cybernetics, coined the term ultrastability, which refers to
the requirement of multiple (at least two) feedback loops – behavioural and internal (‘physiological’) – in
order to achieve equilibrium between an organism and its environment. This perspective has, ever since,
been a source of inspiration for many artificial-systems and theoretical conceptions of adaptive behavior.
Such work has focused on the role of essential variable (or homeostatic) errors that signal the need for
behavioural change in order to maintain an organism-environment equilibrium. Essential variables are
those variables most critical to the viable functioning of the (biological or artificial) organism.
Approaches that have emphasized the existence of double feedback loops have manifested in studies of
activity cycles of both behavioural and internal states (e.g. McFarland & Spier 1997; Di Paolo 2000,
2003; McFarland 2008). According to these approaches, homeostatic processes typically amount to
reactive (corrective) responses that are purely behavioural, including those mediated by a proximal action
selection process. In more recent years, Cybernetics- and Artificial Intelligence-based perspectives on
homeostasis (and ultrastability) have considered the role of prediction in regulating organisms’
behavioural and internal needs (Muntean & Wright 2007, Lowe & Ziemke 2011, Seth 2014).
While allostasis has many definitions (for example, McEwen and Wingfield 2003; Berridge 2004;
Sterling 2004, 2012; Schulkin 2011), a common thread among them is a focus on the predictive
regulatory nature of biological organisms (particularly humans). Homeostasis, by comparison, is more
typically conceived as a reactive process. However, a number of commentators have pointed out that this
perspective emanates from a misconception of Cannon’s (1929) original detailing of homeostasis (cf. Day
2005, Craig 2015). It is therefore contentious whether given notions of allostasis amount to i)
complementary versus substitutive positions with respect to homeostasis, ii) definitions that provide
illuminating versus confusing emphases on aspects of homeostatic regulation. Furthermore, in relation to
ii), it may be illuminating to consider the relation between notions of homeostasis-allostasis and those
(mis)applied to Cybernetics approaches. We will consider these aspects in the section “Allostasis and
Emotion in Cybernetics”.
At the heart of the predictive regulation emphasis of allostasis is the need for biological / artificial
organisms to be imbued with adaptive global states that are, nevertheless, not divorced from local
homeostatic needs (cf. Damasio 2003). In such a manner, organisms can overcome perceived
environmental challenges that place great demands on the organism’s physiological and behavioural
capabilities. Such demands may be transient (cf. Wingfield 2004), having no long-term effects on local homeostatic regulation, or long-lasting (cf. Sterling 2004, 2012), modulating long-term sensitivities to expected physiological demands in preparation for adaptive behaviour. In either case, these demands require
the mechanistic means to differentially suppress (or augment) the signaling influence of local homeostatic
variables in the service of the whole organism, its ability to survive, and its ability to reproduce (or
achieve a particular designer-specified task). This predictive regulation provides the essence of the
allostasis-cybernetics perspective on adaptive behavior that we will consider here.
A core consideration for biological or artificial organisms (and organism-environment systems) is how
local homeostatic (‘motivational’) requirements are embedded within higher order (including emotional)
allostatic processes. This chapter will attempt to explain how allostasis and (‘classical’) homeostasis
should be considered, and complemented, within an extended (i.e. allostatic) ultrastable framework. It
will also look at specific examples and theoretical frameworks that make use of the allostasis term and
discuss how they may be related to allostatic ultrastable systems.
The remainder of this chapter breaks down as follows. In the next section “Homeostasis and Cognition in
Cybernetics”, we will discuss the commonly used notion of homeostasis and its popular use in
Cybernetics- and robotics-based applications, specifically in relation to the work of Ashby (1952, 1960)
and his ultrastable system. Following this, in the section “Allostasis and Emotion in Cybernetics”, we
will look at different perspectives on allostasis, as they relate to homeostasis, from both the viewpoints of
biologists and cyberneticians (or roboticists). In the section thereafter “Allostasis in Cognitive-Affective
Cybernetics Theory”, we will discuss the use of the term allostasis in particular Cybernetics-relevant
theories of cognition and emotion. Finally, we will end with a “Conclusion” section summarizing the
content of the chapter.
HOMEOSTASIS AND COGNITION IN CYBERNETICS
The notion of homeostasis has gone hand in hand with Cybernetics since the pioneering work of Ashby
(1952). Ashby, considered by many as the leader of the Cybernetics movement outside the US (cf. Seth
2014), manifested a control theoretic understanding of homeostasis in terms of the ultrastability principle
as utilized in his homeostat artefact. The ultrastable system consists of an organism and an environment.
The organism’s homeostatic regulatory processes are governed by signals from monitors of one or more
essential variables (EVs). These EVs are so-called because their values, operating within rigid boundaries,
determine whether the organism-environment system is in equilibrium (inside boundaries) or not (outside
boundaries). If the system is in equilibrium, the organism is said to be exhibiting adaptive behavior; if not, its behavior is not considered adaptive. When the value of the essential variable(s) falls outside its
rigid limits, random changes to the organism’s parameter set that affect its mode of interaction with its
environment are enacted. These random changes are continued until such a point where a new parameter
set is found that enables an organism-environment interaction that satisfies the homeostasis of the
organism’s essential variable(s), i.e. the value of the essential variable(s) falls within the limits.
To summarize, the ultrastable system consists of:
1) An Environment,
2) An Organism that consists of:
a. Viability indicators: One or more Essential Variables
b. Re-parameterization to meet present demands: Random changes to the organism’s
behavioural parameter set
c. Sensor-motor morphology: Interactive interface mapped to the behavioural parameter set.
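The core loop summarized above can be sketched in code. The following is our own toy illustration, not Ashby’s homeostat itself: the essential-variable dynamics, rigid limits, and parameter ranges are all hypothetical, and the stimulus the organism senses is simply assumed to track the EV.

```python
import random

random.seed(1)

def ultrastable(steps=500, limit=1.0):
    """Toy ultrastable system: one essential variable (EV) and one
    sensor-motor parameter (gain). When the EV leaves its rigid limits,
    the parameter is randomly re-set until an equilibrium-restoring
    value is hit upon."""
    ev, gain = 0.0, random.uniform(-2.0, 2.0)
    reconfigs = 0
    for _ in range(steps):
        drift = random.gauss(0.1, 0.05)       # environmental perturbation of the EV
        action = gain * ev                    # sensor-motor mapping (1st feedback loop)
        ev += drift + action
        if abs(ev) > limit:                   # EV error: value outside rigid limits
            gain = random.uniform(-2.0, 2.0)  # 2nd loop: random re-parameterization
            ev = limit if ev > 0 else -limit  # re-enter the regime at the boundary
            reconfigs += 1
    return ev, reconfigs
```

Most randomly drawn gains either stabilize or destabilize the interaction; the system keeps re-drawing until a stabilizing one is found, which is exactly the undirected trial-and-error character criticized later in this chapter.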
Figure 1. Left. Control theoretic loop. Summed errors sensed as deviations from the ideal state(s) of the controlled
variable(s) induce effectors to promote error-reducing activity. Right. Ashby’s ultrastable system. The ultrastable
system consists of an external environment (Env), and an organism (Org) that is depicted inside the dashed polygon.
The organism’s effector consists of S (parameterizations of sensor-motor mappings) and R (physical actuators); together these comprise the sensorimotor (‘nervous’) system that, via Env, provides the first-order feedback loop of the
organism-environment system. The controlled variable for the ultrastable system is the organism’s essential
variable (EV) whose activity (thick line) when falling outside fixed limits (thin lines) generates an error signal. This
sensed signal then drives the effector by way of changing the sensor-motor mappings at S – a transfer function,
which modifies the organism-environment interaction.
The essential variable (EV) notion traces back to Cannon (1929) and his account of biological organisms whose viability depends on the maintenance of a few such variables within tight operational limits.
Examples of these EVs include glucose, water, and pH levels in the blood. Ashby’s analogy to the
perceived Cannonian view on homeostasis has been considered to draw from a control theoretic tradition
within Cybernetics (cf. Froese & Stewart 2010). The Ashbyan ultrastable system is schematized in figure
1 (right) and flanked, for comparison, by a diagram of a standard control theoretic loop. Both perform a
form of reactive homeostatic control insofar as outputs from controlled variables consist of errors from set
points and are minimized using an effector (sub-)system. In the case of the ultrastable system this effector
sub-system consists of the sensor-motor mappings (parameters) whose settings determine the mode of
organism-environment interaction affecting the controlled variable. The ultrastable system consists of a
double feedback system: The first concerning feedback from the organism-environment interaction as it
affects essential variable homeostasis; the second concerning the feedback given by the essential
(controlled) variable that modulates the organism-environment coupling.
From the perspective of artificial systems, ultrastability offers an approach to understanding adaptive
behavior without recourse to strong design. It does this by exploiting random processes that, given
enough time, permit (re-) establishment of organism-environment equilibrium. The simplicity of the
ultrastable system is a strength thereby, i.e. that behavior is adaptive if the organism-environment system
is in equilibrium, and not otherwise. Ultrastable systems, including Ashby’s exemplar, the homeostat,
though simple, are nevertheless not immune to the effects of design decisions. These design decisions
may constrain the adaptivity and flexibility of the organism, particularly in regard to changing or
incompletely known (by the designer) environments. For the homeostat, design concerns the setting of the
internal homeostatic ranges that provide the source of negative-feedback operation errors. For Ashby-
inspired artificial systems, e.g. robotic agents, imbued with ultrastability, the design typically concerns
the setting of ranges (set points) within which essential variables (e.g., battery level) are “comfortable”
(cf. Di Paolo 2003; Avila-Garcìa & Cañamero 2005; Pitonakova 2013).
Applying the ultrastability notion to adaptive artificial (‘cybernetic’) systems has often involved
compromising on some aspects of the canonical description so as to promote behavior adaptive to the
given task. Design decisions for ultrastable-like artificial systems may thus be categorized according to
their strictness of adherence to the notion of ultrastability (see Lowe 2013):
1. Random self-configurable approaches: Connections between units that constitute the parameter
set that determines the sensor-motor mappings are randomly modified as a result of essential
variable (the controlled variable) error. This is the standard Ashbyan ultrastable system.
2. Non-random self-configurable approaches: Parameters, including those of neural network
transfer functions in artificial systems, that affect the sensorimotor mappings, may be
directionally modulated (e.g. favouring goal-directed behavior) rather than being random.
3. Non self-configurable approaches: where internal parameters are not changed but nevertheless,
internal and behavioural homeostasis (the basic, double feedback systems) may be achieved
based on the use of essential variables.
Furthermore, as examples of artificial systems-based applications, ultrastable-like robotics approaches
may be classified into those that use one essential variable type, e.g., energy or neural (network) activity,
and those that use two or more essential variable types, e.g., fuel and temperature levels. The use of
multiple essential variables creates an action selection problem where sensor-motor mappings are
required to be differentiated in order to adaptively satisfy multiple homeostatic needs.
The above-mentioned relaxations of the original specification of Ashbyan ultrastability permit greater
scope for applications that maintain the core concept of a double feedback loop. One such important
modification concerns the notion of adaptive behavior and whether Ashbyan ultrastable systems are really
imbued with this property (cf. Pickering 2010). Of the ultrastable types listed above, we will in turn provide specific examples of each as they have been applied to robot (or simulated robot) scenarios, as examples of artificial organisms. We will then consider the extent to which they imbue the artificial organisms with adaptive behavior.
In the case of 1., random self-configurable approaches, Di Paolo (2003) provides a fitting example (also
see Di Paolo 2000, Pitonakova 2013). Di Paolo’s simulated phototactic robot, which uses simple light
sensors and motors to interact with its environment, is required to maintain its battery level (essential
variable) homeostatically while moving in its simple environment. When the robot’s essential variable is
out of bounds – battery level is too high or too low – random changes in the sensor-motor mapping
parameters (that provide transfer functions for light sensor activity onto motor activity) ensue. Only
when parameters are found that permit the re-establishment of organism-environment interactive
equilibrium (behavioural stability, cf. McFarland & Bösser 1993) is ultrastability achieved.
Whilst the above provides an apparently faithful instantiation of Ashbyan ultrastability, it also highlights
limitations with the application of this notion to artificial and biological organisms. The robot in Di
Paolo’s (2003) example arrives at an equilibrium state only through a drawn out trial-and-error interactive
process. It has been pointed out that such trial-and-error behaviours, even those that chance upon an equilibrium (re-)establishing behaviour, cannot be considered adaptive, as small changes in the environment
may render the particular behaviour insufficient to maintain equilibrium (cf. Pickering 2010). The
problem with trial-and-error processes is not just their inefficiency and non-adaptivity in the context of a
mobile biological or artificial organism (cf. Manicka & Di Paolo 2009, though also see Pitonakova 2013).
Rather, in a complex, dynamic and hazardous environment, purely trial-and-error driven behaviour is
non-viable – the organism risks encountering damage or even destruction if it repeatedly tries out the
“wrong” behaviour. Usefully incorporating the ultrastable system concept in designing for complex
organism-environment systems, thus, requires compromising on the purity of the Ashbyan vision whilst
acknowledging the need to minimize the extent of design of the equilibrium (re-)establishing process.
In the case of 2., non-random self-configurable approaches, Lowe et al. (2010) provide a representative
case study. Here, the robot’s task was to survive, as long as possible, via selecting, over a number of trials
between two resources that replenished different essential variables. In this case, an evolutionary robotics
approach was used in order to ‘ground’ artificial metabolic processes in an artificial neural network
controller for a simulated (e-puck) robot.
The robot-environment system could be compared to an ultrastable system:
a) Essential variables: Values were given by the level of “energy” and “water” within a simulated
microbial fuel cell (cf. Melhuish et al. 2006).
b) Fixed homeostatic limits: Thresholds, set by the genetic algorithm (GA), determined essential
variable monitor nodes’ homeostatic limits/regime.
c) Parameter modulation: Chemical node activation of the network adapted the gain of nodes’
electrical activity output, a function that directly altered the robot’s sensorimotor activity
interaction with its environment. This concerns the S component of the organism (see fig. 1).
This approach was non-random since the GA determined the direction of the modulation of the output
function slope. However, the directedness (affecting action selection mediation) of the ultrastable
behaviour was not explicitly designed. It emerged from evolutionary design for satisficing (via a
fitness function of ‘time of survival’). Nevertheless, the directed activity of the modulator nodes allowed
for motors to be activated in particular directions as a response to sensory (camera) inputs. This promoted
adaptive solutions to an action selection problem based on a particular type of ultrastable system by
eliminating the random nature of re-parameterizations (at S, figure 1 right).
Finally, in relation to 3., non self-configurable approaches, much work has been done using the
ultrastability-inspired approach of satisfying two feedback loops (for internal and behavioural
homeostasis) in order to provide stable dynamics. McFarland and Spier (1997), Avila-Garcìa and
Cañamero (2005), and also Kiryazov et al. (2013) have utilized variables that are provisioned with
homeostatic limits, where activity outside these limits comprises “physiological drive” errors; they are, in effect, essential variables. Adaptive behavior consists of achieving stable activity cycles where multiple essential
variables are homeostatically maintained according to error reducing behaviours (e.g. remaining
stationary at a recharger zone when battery level is low). In these works, there is a basic sense in which
essential variables non-randomly influence motivated decision-making. The example of Lowe et al.
(2010) above, in a sense, provides an evolutionarily grounded version of such non self-configurable
approaches where parameter values that affect organism-environment interactions are modulated as a
result of essential variable ‘errors’. In the case of the non self-configurable approaches, however, the
networks do not structurally reconfigure, i.e. the sensor-motor transfer functions do not change. In this
sense, sensorimotor re-parameterization to meet current demand is not included. Instead, the strength of
errors from specific variables modulates the tendency to choose one action over another.
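A minimal sketch of such a non self-configurable scheme follows (our own illustration; the set points, variable names, and behaviour labels are hypothetical): EV errors do not rewire any transfer function but simply bias action selection toward the most urgent need.

```python
# Hypothetical set points for two essential variables (cf. fuel and temperature).
SETPOINTS = {"energy": 1.0, "temperature": 0.5}

def drive_errors(state):
    """Physiological drive errors: deviation of each EV from its set point."""
    return {ev: abs(SETPOINTS[ev] - value) for ev, value in state.items()}

def select_action(state):
    """Winner-take-all: choose the corrective behaviour for the EV with the
    largest error. No sensor-motor transfer function is re-parameterized."""
    errors = drive_errors(state)
    most_urgent = max(errors, key=errors.get)
    return {"energy": "seek_recharger", "temperature": "seek_heat"}[most_urgent]
```

With state `{"energy": 0.2, "temperature": 0.45}`, the energy error (0.8) dominates and the robot heads for the recharger. In the cited works the mapping from error to behaviour is of course richer than a hard winner-take-all (e.g. graded motivational intensities), which is precisely where the opportunism and persistence problems discussed next arise.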
The non-random self-configurable approaches and non self-configurable approaches arguably require
more design decisions than the pure Ashbyan ultrastability interpretation (category 1). They can,
nevertheless, be considered ultrastable-like. Importantly, these approaches that use non-random
(sensorimotor) re-parameterizations to meet current demand, are equipped to deal with action selection
problems. Firstly, they arrive at the equilibrium states more quickly with a lower likelihood of incurring
highly maladaptive re-parameterizations (e.g. that happen to cause collisions). Furthermore, these design
decisions partially deal with two major problems of action selection – opportunism and persistence –
where organisms have multiple essential variables (EVs) and thus multiple needs to be satisfied. Simply put,
opportunism requires organisms to be flexible in their action selection when an opportunity to satisfy a
non-dominant need arises. For example, the organism may most urgently need to replenish a battery level
deficit, but senses a proximal heat source that can reduce a thermal deficit (of being too cold). Under
some circumstances, it is adaptive to be opportunistic in this way – though not always; e.g. it may lead to
the phenomenon of ‘dithering’, moving back and forth between motivated actions to the point of
exhaustion. Persistence, alternatively, entails the sustained performing of an action that best satisfies the
most urgent need. This may occur even in the presence of an opportunity to act upon an easier target
concerning a less urgent need.
Notwithstanding the provided solutions to the limitations in adaptive-behaviour of the Ashbyan
ultrastability notion, critical limitations still pertain. Fundamentally, the adaptive capability of organisms
within ultrastable systems is compromised by the fact they are heteronomous (influenced by forces
outside the organism) as opposed to being autonomous (Franchi 2013). Organisms, in order to have
autonomous, adaptive, control, are required to have regulative capabilities from within, e.g. of a
predictive nature. Such predictive regulatory control allows for organisms to persist, and yet be flexibly
opportunistic, in their goal-directed behaviour. They also allow organisms to meet demands in changing
environmental contexts. This type of control allows ‘ultrastable’ systems to address three important
shortcomings noted by Ashby (1954) of the ultrastability concept that are of biological relevance (cf.
Vernon 2013), namely: i) inability to adapt gradually1, i.e. re-parameterization is not directed in relation
to meeting (predicted) demands; ii) inability to conserve previous contextual adaptations, i.e. there is no
(predictive) prior knowledge; iii) the trial and error re-parameterizations require an arbitrary time length
to ‘hit upon’ the adaptive solution. Above all, ii) cannot be met by the ultrastable-like systems mentioned in this section. In the following section, we will further discuss the importance of non-reactive means of provisioning systems with opportunism and persistence capabilities, in relation to adaptive and predictive behavior, which also helps to deal with some of the shortcomings of the reactive ultrastable system.
1 The organism is either adaptive or not depending on whether the essential variables are within the critical bounds.
ALLOSTASIS AND EMOTION IN CYBERNETICS
What is Allostasis?
The notion of allostasis has, in recent years, been put forward variably as a substitute for, or as a
complement to, homeostasis in regards to adaptive behavior. For Sterling (2004, 2012), for example, the
‘classical’ homeostasis model is wrong. On this account allostasis is about prediction whereas
homeostasis is about reaction. Wingfield (2004), on the other hand, views allostasis as
imbuing organisms with emergency mechanisms that facilitate long-term homeostasis. Allostasis,
generally, concerns (physiological and sensorimotor) re-parameterizations for meeting demand not just
for short-term homeostasis (or equilibrium) but for predicted longer-term adaptive gain. Thus,
notwithstanding the different definitions and foci, what unifies the perspectives on allostasis, whether they concern short-term or longer-term adaptation, is the notion of predictive regulation.
The term allostasis has further been applied to the social domain (Schulkin 2011), as well as to the
workings of artificial (cybernetic) systems (Muntean and Wright 2007, Lowe and Kiryazov 2013,
Vernon et al. 2015). Moreover, allostasis has recently found specific application to Ashby-ultrastable
systems (Gu et al. 2014, Seth 2014, also see Lowe 2016).
Controversies exist in the use of the allostasis term, not least because of the different definitions used.
Day (2005) has suggested that allostasis is a redundant term and adds little (perhaps only confusion) to
the understanding of what Cannon (1929) meant by homeostasis. Allostasis, like Cannonian homeostasis,
treats prediction as a key means by which nervous systems can avoid potentially irrecoverable
deficits (cf. Day 2005). Allostasis has also been viewed (McEwen & Wingfield 2003, McEwen 2004) as
a means for achieving homeostasis of essential variables, where ‘essential’ denotes variables whose viable bounds are particularly rigid (e.g. blood glucose, water, and pH levels). Other variables, including stress levels and blood pressure, are considered somewhat less essential, though they too are required to be maintained within certain ‘healthy’ ranges. Irrespective of the controversies that exist,
we feel the notion of allostasis as it concerns responses to anticipated threats to ongoing viability
(maintenance of homeostasis of essential variables) has utility. Further, as Sterling (2004, 2012) describes
it: “[t]here are solid scientific reasons [for its use as a term]: the allostasis model connects easily with
modern concepts in sensory physiology, neural computation, and optimal design” (Sterling 2004, p.22).
Of particular interest here is the role allostasis can play in cybernetic, including neural computational, understandings of adaptive behavior.
The ultrastability notion has been criticized on the grounds of its painting a picture of life as passive-
contingent (Froese & Stewart 2010; Franchi 2013, 2015). Artificial (or hypothetical biological) organisms
imbued purely with ultrastability processes are externally driven (heteronomous) and in the absence of
such external perturbations (environmental changes), will do nothing. At least one obvious problem with
this notion is that environmental dynamics, particularly as they confront organisms with nervous systems
of the complexity of humans, are ever-changing and perturb organisms on a variety of time scales.
Nervous systems, themselves, exhibit spontaneous activity in the absence of external perturbations, and
are required to produce responses in the absence of, e.g. prior to, changes that may ‘break’ the organisms,
i.e. lead to irrecoverable deficits (cf. McFarland & Bösser 1993).
Similar to the previous section, we now wish to categorize different types of conceptions of allostasis in
an attempt to clarify the properties of the most popular perspectives on homeostatic – allostatic regulation
(see also Lowe 2016).
1. Reactive homeostatic regulation: Ashbyan homeostasis, essential variable errors (via first-order
feedback) produce signals that lead to behavioural corrective measures (second-order feedback).
The essential variables are “immediately affected by the environment only” (Ashby 1960, p.81).
Thus, re-parameterizations address only current, sensorimotor, demand.
2. Predictive transient regulation: Satisfaction of goals has the effect of facilitating long-term
homeostatic equilibrium. The reactive process (Position 1) is embedded within this allostatic
process. Second-order feedback involves behavioural corrective mechanisms. Third-order
feedback involves transient re-setting of local homeostatic bounds to meet predicted demand.
3. Predictive non-transient regulation: Survival and reproduction (as well as goals) require
neurophysiological states to deal with predicted demands. Second-order feedback involves
behaviour suited to meet predicted demands. Third-order feedback involves modulation of local
homeostatic activity. Demands are ever-changing throughout daily and seasonal cycles; thereby,
no ‘resetting’ exists.
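The difference between Positions 1 and 2 can be sketched as follows (a toy illustration of ours with made-up numbers, not a model from the cited literature): a predicted demand transiently shifts the local homeostatic bounds, so a value that would register as a reactive error is instead treated as meeting anticipated load.

```python
def error(ev_value, low, high):
    """Homeostatic error: zero inside the bounds, signed deviation outside."""
    if ev_value < low:
        return ev_value - low
    if ev_value > high:
        return ev_value - high
    return 0.0

def allostatic_bounds(low, high, predicted_demand, weight=0.5):
    """Position 2: third-order feedback transiently re-sets the local bounds
    to meet predicted demand (e.g. glucose mobilized ahead of exertion should
    not register as a surfeit error). weight is a hypothetical scaling."""
    shift = weight * predicted_demand
    return low + shift, high + shift
```

Here a reactive system (Position 1) would flag an EV value of 1.2 against bounds (0, 1) as an error, whereas with a predicted demand the shifted bounds absorb it. Under Position 3 the bounds would instead track daily or seasonal cycles continuously, with no resting values to ‘re-set’ to.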
Position 1 was covered in the previous section and will thus not be re-visited here. It suffices to say that
ultrastability provides somewhat adaptive behavior. This ‘adaptive’ behavior is limited by a lack of (goal)
directedness and prior knowledge requisite to flexible opportunistic and persistent sequential behavior.
Figure 2. Left. Predictive (allostatic) control loop. Unlike for the classical ‘homeostatic’ control loop, prior
knowledge of the controller provides a top-down predictive control, which entails re-setting of homeostatic bounds
or otherwise suppressing / increasing of the effects of homeostatic errors on the effector. Right. Allostatic
ultrastable system. Depicted is the classic ultrastable system with additional dashed arrows superimposed. In this
case, prior knowledge is given by the sensorimotor activity at R which constitutes the nervous system (cf. Froese &
Stewart 2010). This provides a third feedback loop, which modulates the homeostatic regime (set points) according
to predicted demand. In this way the artificial organism has two means by which to achieve homeostasis when 1st
feedback produces an error signal: i) behaviourally (via 2nd feedback), ii) autonomically (via 3rd feedback).
Autonomic changes also consider 2nd feedback, i.e. the predicted demand of producing particular behaviours.
In regard to case 2, it might be considered that the examples of Avila-Garcìa and Cañamero (2005) and
Lowe et al. (2010) conform to transient regulation, at least insofar as motivated behavior may persist in
the face of non-related essential variable needs. In these cases, corrective behaviours were instigated as a
function of homeostatic errors, i.e. adaptive behavior is reactively driven. These examples are very much
borderline cases, however. The sense in which allostasis is normally considered is in terms of predictive
regulation. Here, prior knowledge, based on sensed environmental contingencies, may bring to bear in a
top-down fashion on local homeostatic (essential) variables. In control theoretic terms, we can compare
the classic (‘homeostatic’) control loop from figure 1 (left), to that of figure 2 (left) – a predictive
‘allostatic’ controller (Sterling 2004). Prior knowledge allows for a prediction of the (peripheral)
physiological requirements (cf. Schulkin 2004) of the organism in relation to environmental
contingencies. This can manifest both in terms of the actions required, and the physiological cost of
persisting in these actions until the threat is averted (or goal state achieved). This means that errors
signaling certain homeostatic deficits / surfeits may be transiently suppressed, e.g. via shifting thresholds
or gain parameters on the output signals (Sterling 2004). Furthermore, S (in figure 1, right; figure 2, right)
should be seen as providing peripheral changes that constitute re-parameterizations both of sensor-motor
couplings and of peripheral physiological activations (stress levels, blood pressure) that meet the
predicted demand of the situation. The effector of the control loop (figure 2, left) must be considered to
consist of corrective responses to predicted challenges that concern behavioural activity (2nd feedback
loop) and autonomic (peripheral physiological) activity (the 3rd feedback loop that we suggest is
necessary). Naturally, further feedback loops may be necessary to engender biological organisms with the
necessary flexibility to deal with even transient challenges to homeostatically realized equilibrium states.
We can, for example, imagine a 4th (autonomic) feedback loop between ‘S’ and the EVs that signal
internal changes to EV values (glucose mobilization) according to perceived demand. We might further
expect ‘S’ to not consist of purely non-directional (i.e. random) changes to parameters but changes
directed according to deficits or surfeits in the level of the essential variable. This is also in accordance
with research done on the effects of neurons in the hypothalamus sensitive to surfeits and deficits in blood
glucose and oxygen levels (cf. Canabal et al. 2007).
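The shifting of thresholds or gains on homeostatic output signals described above can be illustrated with a minimal sketch. This is a hypothetical construction, not the authors' (or Sterling's) implementation; the function names, the healthy range and the 0.2 scaling factor are all assumptions for illustration:

```python
def homeostatic_error(ev_value, low=0.4, high=0.6):
    """Signed deviation of an essential variable (EV) from its healthy range."""
    if ev_value < low:
        return ev_value - low    # deficit (negative error)
    if ev_value > high:
        return ev_value - high   # surfeit (positive error)
    return 0.0

def allostatic_error(ev_value, predicted_demand, low=0.4, high=0.6):
    """Predicted demand transiently shifts the tolerated upper bound, so that
    under high demand a surfeit (e.g. raised blood glucose) is suppressed
    rather than corrected."""
    shift = 0.2 * predicted_demand           # demand assumed in [0, 1]
    return homeostatic_error(ev_value, low, high + shift)

# A raised EV value signals an error reactively, but is tolerated under
# predicted high demand:
print(homeostatic_error(0.7))                        # surfeit signaled
print(allostatic_error(0.7, predicted_demand=1.0))   # surfeit suppressed
```

The same mechanism could equally be expressed as a gain on the output signal rather than a shifted threshold; the point is only that prior knowledge modulates the error-sensing function rather than the essential variable itself.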
Figure 3. Allostatic ultrastable system with multiple essential variables (EVs). In this version of the allostatic
ultrastable system (depicted in figure 2, right), there are multiple essential variables (EV1, EV2). The influence of
these EVs on the S (effector) that modulates sensorimotor parameterization may be differentially weighted
according to prior experience of the demand required to carry out the action requisite to survival / goal-directed
needs. These EVs may also be differentially weighted according to the particular action needed.
In goal-directed behaviours, unlike in the non-random self-configurable and non self-configurable
ultrastable systems described in the previous section, an organism typically needs to produce sequences
of actions in order to arrive at the state satisfying the essential variables. This requires prior knowledge of
the length of the sequence and proximity to the goal, as well as an ability to deal with obstacles that may
predictably occur during the goal-directed behavior. The above-mentioned reactive artificial organisms
are only equipped to persist in relation to persistent (exteroceptive) sensory stimulation. More adaptive
organisms should be able to persist according to prior knowledge and predictive regulation. Figure 3
depicts an ultrastable system with multiple essential variables. In a predictive regulatory system, such as
that of figure 2 (right), the organism’s controller may differentially suppress / increase the effects of
homeostatic errors by modulating the gains / thresholds of the error-sensing function. In this way
predicted demand for ongoing goal-directed behavior can be adjusted so as to enhance the prospects of
achieving the goal, which may include an essential variable satisfying state.
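The differential weighting of multiple EV error signals (figure 3) can be sketched as per-action gains on each error before it reaches the effector S. All names and gain values below are assumptions for illustration, not part of the framework's specification:

```python
# Deficits on two essential variables (illustrative values).
ev_errors = {"glucose": -0.3, "water": -0.1}

# Per-action gains, assumed learned from prior experience of each action's
# demand: fleeing tolerates a water deficit, so its water-error gain is
# suppressed relative to foraging.
gains = {
    "forage": {"glucose": 1.0, "water": 1.0},
    "flee":   {"glucose": 0.5, "water": 0.1},
}

def drive_to_S(action):
    """Weighted homeostatic error reaching the effector S for a given action."""
    return sum(gains[action][ev] * err for ev, err in ev_errors.items())

print(drive_to_S("forage"))  # full error drives corrective behaviour
print(drive_to_S("flee"))    # errors suppressed during the emergency action
```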
The above predictive (allostatic) form of regulation for single or multiple essential variable based
ultrastable systems might be viewed according to either case 2 (predictive transient regulation) or case 3
(predictive non-transient regulation), as described above. The former ultrastable system, however, requires
re-setting of homeostatic bounds to ‘ideal’ values following the transient episode. These ‘ideal’ values
could be set by evolution (in biological organisms), by an evolutionary algorithm (in artificial organisms),
or by a designer (see previous section for discussion). Furthermore, particularly in case 3, variables such
as blood pressure and stress levels might also be considered (lesser) essential variables whose outputs
effect physiological re-parameterizations (preparedness for particular actions). A distinction between the
monitored and signaled levels of the (greater or lesser) essential variables and the effects they have on
preparing the organism for specific responses (in ‘S’) may be made in this respect. Note that this also applies
to the greater essential variables (e.g. blood glucose levels), whose levels both a) are monitored, and b)
effect action preparedness.
To reiterate, in predictive transient regulation, the predictive (3rd feedback) loop transiently modifies the
error sensing parameters of the essential variable(s) over the duration of the goal-directed behavioural
episode. This may happen, above all, in cases where motivational homeostatic needs are deprioritized in
favour of basic survival needs, e.g. life-threatening situations, mating opportunities, long-sequence goal-
directed behaviour (Sterling 2004). Such cases typically involve emotional states that are stimulated by
external sensory signals (rather than purely internal sensory signals). Predictive regulation (based on
nervous system activity) thereby modulates the homeostatic parameters either to suppress or augment
their effects on sensorimotor parameterization (via the effector) in the service of behavioural persistence.
As an example, blood glucose levels and blood pressure may rise beyond their nominal homeostatic upper
limit in order to sufficiently energize the organism over a goal-directed episode. The organism thus
engages in an emotional episode that culminates in the achievement (or not) of a goal state. Critically, from the
(predictive) transient regulation perspective, following the achievement of this state, the ‘ideal’
homeostatic regimen (non-emergency critical limits) is again adhered to. This predictive transient
regulation perspective is consistent with Wingfield (2004) who suggests allostasis, compared to reactive
homeostasis, provides greater flexibility of the organism-environment coupled system as a whole and
entails “emergency adjustments of physiology and behaviour to maintain homeostasis in the face of
challenges” (Wingfield 2004, p.312). It is also consistent with Gu and FitzGerald (2014), who assume that
allostasis is “the process of achieving homeostasis” (p.1). Emergency adjustments to cognitive and
physiological processes have been previously put forward as providing a major function of emotions (e.g.
Simon 1967; Oatley & Johnson-Laird 1987, 1996; Sloman 2001). Naturally, however, flexibility entails
non-rigidity in relation to behavioural persistence. The organism must still allow for opportunistic
behavior given sufficiently beneficial opportunities. Such opportunism, however, must be weighed
against, not simply proximity of stimulus (as for the reactive homeostatic approaches), but in relation to
the proximity of the alternative goal-directed behavior achievement, as well as time and energy invested
in that alternative behavior. Such prior knowledge (of the properties of the behavioural sequence) guards
against ‘dithering’ caused by an abundance of opportunities for satisfying one essential variable.
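Case 2 (predictive transient regulation) can be sketched as bounds that shift for the duration of a goal-directed episode and then reset to the ‘ideal’ (designed or evolved) range. The class, names and values below are our illustrative assumptions:

```python
class TransientAllostat:
    """Sketch of predictive transient regulation: homeostatic bounds shift
    for the duration of a goal-directed episode, then the 'ideal' regimen
    is re-adhered to (illustrative values throughout)."""
    IDEAL = (0.4, 0.6)

    def __init__(self):
        self.bounds = self.IDEAL

    def start_episode(self, predicted_demand):
        low, high = self.IDEAL
        # Tolerate a surfeit (e.g. raised blood glucose / blood pressure)
        # in proportion to the predicted demand of the episode.
        self.bounds = (low, high + 0.2 * predicted_demand)

    def end_episode(self):
        self.bounds = self.IDEAL   # reset to non-emergency critical limits

    def error(self, ev_value):
        low, high = self.bounds
        return max(0.0, ev_value - high) + min(0.0, ev_value - low)

a = TransientAllostat()
a.start_episode(predicted_demand=1.0)
during = a.error(0.75)   # elevated level tolerated mid-episode
a.end_episode()
after = a.error(0.75)    # same level signals an error once the goal is achieved
```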
Contrary to the above transient perspective of allostasis (predictive regulation), Sterling’s (2004, 2012)
position concerning allostasis is that it is best conceived not as a form of homeostasis. Sterling has
advocated that the reactive model centred on local homeostasis is wrong. Allostatic processes do not
merely sub-serve homeostatic local states; neither are allostatic processes merely “emergency systems
superimposed on the homeostatic model” (Sterling 2012, p.2). The standard model of homeostasis, that
typically concerns local homeostatic defending of set points is considered misguided. Allostasis, rather, is
viewed as a top-down interactive-constitutive process that recruits resources from many (local)
physiological systems to meet predicted current demand. But this demand should not be considered
transient or in the service of long-term realization of ideal homeostatic states. Sterling (2004, 2012)
suggests that there are no ideal states: mean values of essential variables “need not imply a setpoint but
rather the most frequent demand” (Sterling 2004, p.23). Perceived demand is set by external sources, e.g.,
social norms, in relation to the organism’s objectives. This exerts a pressure, both in terms of survival and
reproduction. Adaptive behavior, in this view, is constrained, rather than set, by homeostatic needs.
Moreover, the ideal physiological state for the organism – in terms of self-maintenance – is not static.
Sterling uses the example of blood pressure, which fluctuates throughout the day as a consequence of
daily rhythms and may be elevated for prolonged periods according to perceived external (e.g. work
pressure) demands – high blood pressure can be of adaptive advantage. For Sterling (2012), a healthy
system should be defined as one of “optimal predictive fluctuation” (Sterling 2012, p.9). The organism is
continuously in a non-equilibrium state (or in a precarious state, cf. Froese & Stewart 2010), something
which, in complex environments, Ashby would expect of a given ultrastable system. However, the
organism is not passive-contingent: it is predictive, in part creating its own demands to survive and
reproduce in the world, and its dynamic homeostatic regulation of essential variables reflects this.
In the next part of this article, we will discuss the aforementioned notions of predictive regulation
(allostasis) in relation to cybernetics-compatible views on cognition and adaptive behavior and attempt to
illuminate to what extent such views fit into the above-mentioned allostasis descriptions.
ALLOSTASIS IN COGNITIVE-AFFECTIVE CYBERNETICS THEORY
Allostats and Adaptive Behaviour
Muntean and Wright (2007) suggest that Artificial Intelligence approaches to the study of autonomy and
agency need to account for the phenomenon of allostasis. They put forward an example of how artificial
agents might have built-in allostats that override set points of homeostats under certain conditions. The
example homeostat given is a mobile ‘space rover’ that is designed to have homeostatic set-points
determine which inclines it attempts to negotiate based on perceived steepness. Through allostatic
mechanisms the space rover ‘allostat’ may override such set-points according to expectancies based on
prior knowledge. Effectively, the allostatically regulated space rover is able to modify pre-designed set-
points according to the experience of the demands of the situation; thus, the local homeostatic mechanism
becomes subsumed in the global allostatic process resembling the allostatic ultrastable systems depicted
in figures 2 (right) and 3. Through re-setting homeostatic limits back to the ideal state once a
particular demand has been met, the allostat provides an example of a predictive transient regulating organism. This
less conservative (than a reactive homeostat) artificial agent thereby has greater flexibility in its behaviour,
being less constrained by its initial implementation design. Such behavioral dynamics rooted in allostatic
regulation need not be inconsistent with the type of emergent homeostatic behavioral dynamics noted by
Di Paolo (2000, 2003) but rather, can be viewed as extending behavioral flexibility in order to meet the
demands of an organism in a changing and challenging environment. In such artificial allostats, the ‘S’
component of the organism (figures 2 and 3) need primarily concern sensor-motor re-parameterization rather
than physiological analogues. An organism’s battery level, for example, might be allowed to run down to
dangerously low levels in order for a sequential goal-directed behavior to be achieved. On the other hand,
by analogy, ‘physiological’ re-parameterizations might manifest in terms of speed of movement
(somewhat analogous to stress levels) that further run down the allostat’s battery but allow it to deal with
predicted situational demands.
Predictive Processing, Cybernetics and Emotions
Seth (2013, 2014) provides a predictive processing (PP) account of cognition, extending earlier PP
accounts that were focused on exteroceptive (sensory) and proprioceptive (motoric) inference (cf. Friston
2010, 2013). Seth posits that interoceptive processing, a term used also by Damasio (2010) and Craig
(2013, 2015) in relation to biological homeostasis and affective feelings, can also fit within an inferential
(predictive modeling) perspective. It is also suggested by Seth (2015) that his interoceptive inference
account fits naturally into a cybernetics, specifically Ashbyan ultrastability, framework. This is further
said to “lead to … a new view of emotion as active inference” (Seth 2015, p.1), where active inference is
defined as “selective sampling of sensory signals so as to improve perceptual predictions” (p.3).
Drawing from Gu et al. (2014), Seth (2014) refers to Ashby’s 2nd feedback loop as being allostatic,
and to allostasis as “[t]he process of achieving homeostasis” (p.2). This would put Seth’s (2014) and Gu
et al.’s (2014) predictive processing account of allostasis in category 2 of the previous section, i.e.
predictive transient regulation, where ‘ideal’ regulative states are realizable. Further, Seth (2014)
suggests: “On this theory of interoceptive inference … emotional states (i.e., subjective feeling states)
arise from top-down predictive inference of the causes of interoceptive sensory signals” (p.9). This
particular view has similarities to Lowe and Ziemke (2011), who have suggested that emotional feeling
states have a predictive regulatory (top-down) role on likely bodily changes as a result of perceived
external (exteroceptive) signals – a form of interoceptive inference.
Figure 4. Interoceptive inference in an Allostat. Left. Predicted probability distributions of allostatic loads (top);
response sensitivity to predicted loads (bottom) – from Sterling (2004). Right. Allostatic ultrastable system (from
figure 3) with functional description. In R, the organism receives exteroceptive (and proprioceptive) inputs that
signal expected demand (requirements for action), (3). EVs transduce a range of inputs (prior probability
distribution) into a range of outputs (e.g. via a sigmoid function), in (2.1). The range of activation of EVs is
differentiated according to a) rigidity (genetic), b) expected demand (learning). The most essential variables are the
most rigid in relation to expected range of inputs and output signaling sensitivity. The response is stronger or
weaker as a function of the input signal and the monitoring response function (sigmoid parameters). This, in turn,
parameterizes i) sensorimotor, ii) physiological states (stress hormones, blood pressure, etc.), according to the
expected demand for preparing the organism for action (2.2). This internal loop is continuous in relation to ever-
changing perceived environmental conditions and internal capabilities to cope with such conditions.
The Lowe and Ziemke (2011) position, however, is considered more in line with category 3 – predictive
non-transient regulation. Figure 4 depicts an interpretation of the theory conceived as compatible with a
predictive processing account. In figure 4 (left) is shown Sterling’s (2004) account of prior predicted
allostatic loads – expected demands – as a function of perceived context, e.g. in relation to the perception
of an appraised emotion-inducing stimulus. This load is transduced into a non-linear (i.e. sigmoid)
response function. In figure 4 (right), this is conceived as exteroceptive signals (at ‘R’) eliciting predicted
prior (allostatic) loads/demands. These are interoceptive predictions of the probable bodily states (EV
settings). Output responses are sensitive to the signaled level of EVs and parameterize the body (in 2.1.)
accordingly. This has the function of preparing the organism for an appropriate action response. If such
action preparation (2.2) is insufficient, for example, hormone or glucose mobilization in the circulatory
system is insufficient to prepare a viable action, an interoceptive error is signaled and the organism shifts
its predicted allostatic loads (figure 4, left). Thereby, responsivity to the EVs, e.g. blood pressure level,
blood glucose level, is suppressed or augmented (differentially over EVs), leading to a re-
parameterization at ‘S’. The organism is now physiologically, and sensor-motorically, modified to meet a
newly predicted demand, in turn preparing for a new, or modified, action. Essentially, the organism
predicts its bodily changes (that prepare action) and simultaneously induces them through this prediction.
The prediction is then confirmed/disconfirmed as a result of bodily feedback, which sets in motion a
modified prediction whose error the organism seeks to minimize within the ultrastable system. There are
no ideal states (set ranges); rather, the EV ranges are more or less in flux depending on expected
demand and on how rigid their genetically set critical limits are. Predictive regulation here is adaptive to the
extent that it promotes survival to reproduce or otherwise achieve certain short- and long-term
tasks/goals. Adaptive behavior is not specifically for homeostasis in this view.
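The sigmoid transduction of EV signals under shifting expected demand (figure 4) might be sketched as follows. The parameter values, and the idea of encoding expected demand in the sigmoid midpoint, are illustrative assumptions, not taken from Sterling's or Lowe and Ziemke's implementations:

```python
import math

def response(ev_signal, midpoint, slope=10.0):
    """Sigmoid transduction of an EV signal into an output response; the
    midpoint encodes the expected (predicted) demand range."""
    return 1.0 / (1.0 + math.exp(-slope * (ev_signal - midpoint)))

# Under low expected demand, a moderately raised blood pressure produces a
# strong response; after a shift in predicted allostatic load, the same
# signal barely responds, because higher values are now expected.
low_demand_midpoint, high_demand_midpoint = 0.5, 0.8
bp = 0.6
print(response(bp, low_demand_midpoint))    # strong response
print(response(bp, high_demand_midpoint))   # suppressed response
```

Shifting the midpoint (rather than comparing against a fixed set point) captures the claim that there are no ideal states, only response sensitivity ranges in flux with expected demand.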
Allostatic Ultrastability and the Bayesian Brain
In relation to Ashby’s ultrastable system, Seth (2014) identifies three means by which interoceptive
prediction errors can be minimized:
i. “updating predictive models (perception, corresponding to new emotional contents);
ii. changing interoceptive signals through engaging autonomic reflexes (autonomic control or active inference);
iii. performing behavior so as to alter external conditions that impact on internal homeostasis”.
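These three routes can be sketched as toy update rules on a scalar interoceptive state. This is an illustrative construction only; the variable names, gains and dynamics are our assumptions, not Seth's formalism:

```python
def prediction_error(predicted, sensed):
    """Interoceptive prediction error: sensed minus predicted bodily state."""
    return sensed - predicted

predicted, sensed, world = 0.5, 0.9, 0.9

# i. Update the predictive model (perception; new emotional content):
predicted += 0.5 * prediction_error(predicted, sensed)

# ii. Engage autonomic reflexes that change the interoceptive signal itself
#     (autonomic control / active inference):
sensed += -0.3 * prediction_error(predicted, sensed)

# iii. Act on the world (Ashby's 2nd feedback loop) so that the external
#      conditions impacting internal homeostasis change:
world -= 0.2
sensed = 0.5 * sensed + 0.5 * world   # body re-equilibrates with environment

# All three routes jointly shrink the interoceptive prediction error.
print(prediction_error(predicted, sensed))
```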
In relation to figure 3, we might view i. as entailing differential strengthening of the multiple 3rd feedback
connections to autonomic reflexes, the activity of which (ii.) may then lead to modifications in the
interoceptive (error) signals produced. In this way, the organism is differentially behaviourally prepared
(for enacting iii.) according to the particular (emotional) demands of the situation. This would provide a
type of embodied appraisal (Prinz 2004) comprising perceived correspondence between exteroceptive
signals and interoceptive signals as a function of experience. It could also be linked to Damasio’s (1994,
2010) perspective on somatic markers (and interoceptive processing) wherein perception / prediction of
different physiological ‘markers’ differentially prepares the organism for action and constrains action
selection to options compatible with the physiological state (see also Lowe & Ziemke 2011). Finally, iii.
standardly instantiates Ashby’s 2nd feedback loop. However, the reference to allostasis here is not entirely
representative of Sterling’s (2004, 2012) allostasis conception2. In fact, i. and ii. are also part of the
allostatic process for Sterling (see figure 2, left). Prior expectations of physiological requirements to meet
perceived physical demand entailing changes in autonomic processes (mobilization of metabolic
resources) is at the core of Sterling’s allostasis account. Nevertheless, Sterling’s position on allostasis
appears to fit quite well with the predictive processing standpoint. Sterling, for example, consistent with
predictive processing models (cf. Seth 2013, Hohwy 2014), refers to allostatic loads being Bayesian by
nature. Essential variable sensors are sensitive to signals as a function of a prior (Gaussian) distribution,
which constitutes ranges of probable essential variable (physiological) values relevant to meeting a given
(exteroceptively perceived) demand. The posterior is computed based on new sensory evidence such that
a shift in the sensitivity range may result. Following figure 4 (left), figure 5 visualizes this effect in
relation to physiological change (e.g. the output of an ‘essential’ variable such as blood pressure) whose
prior probabilistic range constitutes an expected demand (upon the system for sustaining adaptive
behavior). This distribution, in a healthy system (figure 5, A), is able to rapidly shift to a new (posterior)
distribution when demand changes. The unhealthy system (figure 5, B) lacks such optimal
predictive fluctuation as a result of prolonged exposure to high demand. According to Sterling (2004),
though unhealthy, this latter system shouldn’t be considered maladaptive, since it is adapted to expect, and
respond to, previous (prolonged) demand according to (prior) probabilities based on sensory evidence.
The allostatic organism is adaptive, in this view, less from the point of view of achieving homeostasis
(according to a notion of fixed set points) and more from the point of view of minimizing predictive error
based on experience of expected demand. In relation to figure 3, the sensitivity ranges of the sensors of
essential variable (EV) activity are continually in flux as a function of shifting predicted demand. The
sensory outputs of the EVs then re-parameterize (in ‘S’) the organism to best meet this predicted demand,
and the effects on the sensorimotor (nervous) system (R) set new posteriors (update differential
weighting effects on the EV sensors) according to new sensory evidence.

2 It can be noted that Gu et al. (2014) directly reference Sterling (2004) in relation to their notion of “allostasis is the
means for achieving homeostasis”, though it is not clear that this is consistent with Sterling’s position.
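The Bayesian shift of expected demand (figure 5) can be sketched as a conjugate Gaussian update, in which the prior over an EV's demanded range is combined with new sensory evidence. The precision values below are assumed purely for illustration of the healthy/unhealthy contrast:

```python
def posterior(prior_mean, prior_prec, obs, obs_prec):
    """Precision-weighted (conjugate Gaussian) update of expected demand."""
    post_prec = prior_prec + obs_prec
    post_mean = (prior_prec * prior_mean + obs_prec * obs) / post_prec
    return post_mean, post_prec

# Healthy system: evidence precision is high relative to the prior, so the
# expected demand shifts rapidly from low (0.3) toward the new demand (0.8).
healthy, _ = posterior(prior_mean=0.3, prior_prec=1.0, obs=0.8, obs_prec=4.0)

# 'Unhealthy' system: prolonged high demand has entrenched a rigid
# (high-precision) prior, so the same low-demand evidence (0.3) barely
# moves the expectation away from 0.8.
unhealthy, _ = posterior(prior_mean=0.8, prior_prec=20.0, obs=0.3, obs_prec=1.0)
```

On this sketch, the unhealthy system is not computing anything differently; it simply holds a prior so precise that brief counter-evidence cannot shift it, matching Sterling's point that it is adapted to previous prolonged demand rather than maladaptive.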
A key difference between the predictive transient and predictive non-transient allostatic regulation
accounts is that, in the case of the former, the sensorimotor (R) component of the organism suppresses (or
amplifies) essential variable signals, but in the service of long-term re-establishment of equilibrium. In the
case of the latter, the achievement of equilibrium states is less clear since essential variable signal
sensitivity ranges shift not just according to ever-changing environmental demands but also according to
daily, seasonal cycles as adapted by evolutionary pressures (Sterling 2004).
An Allostatic Cognitive-Affective Architectural Framework
Ziemke and Lowe (2009), Lowe and Kiryazov (2014) and Vernon et al. (2015) have offered perspectives
on cognitive-affective architectural development that adopt a Sterling (2004, 2012) inspired allostatic
viewpoint. Here the focus is on the ‘higher level of organization’ (Vernon et al. 2015, p.7), coordinating
local (homeostatic) processes to serve adaptive behavior. Based on Damasio’s (2003) nested hierarchy of
homeostatic regulation, the architectural approach, nevertheless, emphasizes the top-down regulatory role
of emotional feelings on its constitutive (e.g. reflexes, drives, motivations) processes. This could be seen
as a type of interoceptive (predictive) processing. The architecture also emphasizes the integrative role of
sensory (exteroceptive) and motoric (proprioceptive) states within this allostatic regulatory framework.
The schema could apply to both predictive transient regulation and predictive non-transient regulation.
In either case, adaptive behavior would require design of ‘desirable’ homeostatic bounds from which
(interoceptive prediction) errors can be signaled. As alluded to previously in this article, an evolutionary
robotics approach, in principle, could allow for context-sensitive or life-time relevant homeostatic bounds
to be established as a statistical measure of organismic success. Any such “optimal predictive fluctuation”
(Sterling 2012, p.12) would be sensitive to instability of the system as a whole. A trade-off, thus, must be
sought between the inflexibility of the ultrastable system that imbues reactive homeostatic regulation and
one that imbues predictive transient or predictive non-transient regulation.

Figure 5. A. ‘Healthy system’. Here the (prior) probability distribution based on statistical sampling of homeostatic
(‘essential variable’) states shifts according to a demand. This permits responses sensitive within the new expected
range of sensor inputs. It exhibits ‘optimal predictive fluctuation’ by virtue of rapidly shifting from low to high to
low demand based on updated predictions (posteriors). B. ‘Unhealthy system’. Here, as a result of prolonged high
demand, response sensitivity may be relatively resistant to shifting demand. The system expects high demand and
does not flexibly respond to sudden and brief low demand. The system thus sustains the same potentially high-cost
physiological response. From Sterling (2004).

CONCLUSION
In this article, we have discussed the notion of allostasis in relation to the classically conceived control
theoretic homeostasis perspective, and applied it to Ashby’s cybernetic vision of ultrastability and
variations thereof. We have attempted to evaluate allostasis versus homeostasis from the perspective of
adaptive behavior and how that manifests in terms of affective processes, motivational and emotional.
The article has focused on predictive regulation, and specifically allostatic accounts of predictive
regulation. We further looked at examples that fit different possible definitions along a homeostasis-
allostasis continuum, the extremes of which (reactive homeostatic regulation, predictive non-transient
regulation) seem incompatible. Finally, we provided examples of theoretical approaches that
enlist allostasis and attempted to identify where on the aforementioned continuum the allostasis
conceptions lie. In table 1, we provide a simple summary of the different aspects of allostasis as they
concern adaptive behavior and affective states.
Table 1. Properties of homeostatic-allostatic ultrastable systems

Reactive homeostatic regulation:
• Interoceptive signals (i.e. of homeostatic / essential variable states)
• Errors signaled until behavioural change re-establishes equilibrium
• No structural change (to homeostatic error sensing)
• Cognition for homeostasis
• Motivational / Drive-based states

Predictive transient regulation:
• Exteroceptive predictive signals (perceived threat to homeostasis)
• Interoceptive predictive signals (predicted homeostatic needs)
• Exteroceptive / Interoceptive errors signaled until behavioural change re-establishes equilibrium
• Local homeostatic errors suppressed / augmented
• No structural change (to ‘ideal’ homeostatic error sensing)
• Cognition for homeostasis
• Motivational / Drive-based states
• Emotions (for facilitating behavioural persistence)

Predictive non-transient regulation:
• Exteroceptive predictive signals (perceived threat to homeostasis)
• Interoceptive predictive signals (predicted homeostatic needs)
• Exteroceptive / Interoceptive errors signaled until behavioural change re-establishes equilibrium
• Local homeostatic activity suppressed / augmented
• Structural changes occur (to homeostatic activity sensing) that reflect life-time experience
• Cognition for reproduction / specified tasks (in artificial organisms)
• Motivational / Drive-based states
• Emotions (for facilitating behavioural persistence)
It should be noted that the predictive non-transient regulation view does not so obviously conform to the
ultrastability notion of utilizing multiple feedback loops to establish an organism-environment equilibrium,
since this perspective, instead, concerns a continual flux. In the transient case, sensorimotor predictive feedback
concerns: firstly, expected demand – shifted ranges of essential variable values are tolerated in the service
of meeting the demand (response); secondly, homeostatic needs – the transient shifted ranges are in the
service of long-term maintenance of ideal homeostatic ranges. Thereby, long-term equilibrium is
facilitated through the allostatic process, which is thereby adaptive. In the non-transient case, no such
equilibrium state obviously exists. Shifting demands are adaptive insofar as the organism is able to
sustain a viable coupling to the environment in the service, ultimately, of evolutionary exigencies (i.e.
reproduction). Nevertheless, as previously alluded to, some variables are more essential than others (Day
2005). Even in the non-transient case it is imperative to respect the limited phase space of viable states of
certain essential variables (e.g. pH levels). On this basis, the organism-environment coupling is still
constrained to a type of equilibrium that obeys the demands of the most essential variables. To some
extent, therefore, the difference between predictive transient and predictive non-transient regulatory
perspectives on allostasis may concern the emphasis that the former places on the role of the most
essential variables relative to the latter, in which case both types of allostasis may be considered
within an adapted ultrastability framework.
Ashby W. R. (1952). Design for a brain. First edition. John Wiley & Sons, New York.
Ashby W. R. (1954). Design for a brain. First edition, reprinted with corrections. John Wiley &
Sons, New York.
Ashby W. R. (1960). Design for a brain. Second edition. John Wiley & Sons, New York.
Avila-García, O. & Cañamero, L. (2005). Hormonal modulation of perception in motivation-
based action selection architectures. In Proceedings of the symposium Agents that Want and
Like: Motivational and Emotional roots of Cognition and Action (9–17), at the AISB’05
Convention. University of Hertfordshire, Hatfield.
Berridge, K. C. (2004). Motivation concepts in behavioral neuroscience. Physiol. Behav., 81(2),
Canabal, D. D., Song, Z., Potian, J. G., Beuve, A., McArdle, J. J. & Routh. V. H. (2007).
Glucose, insulin, and leptin signaling pathways modulate nitric oxide synthesis in glucose-
inhibited neurons in the ventromedial hypothalamus. Am. J. Physiol. Regul. Integr. Comp.
Physiol., 292, 1418-1428.
Cannon, W. B. (1929). Organization for physiological homeostasis. Physiol. Rev. 9, 399–431.
Craig, A. D. (2013). An interoceptive neuroanatomical perspective on feelings, energy, and
effort. Behavioral and Brain Sciences, 36(06), 685-686.
Craig, A. D. (2015). How do you feel?: an interoceptive moment with your neurobiological self.
Princeton University Press.
Damasio, A. R. (1994). Descartes’ Error: Emotion, Reason, and the Human Brain, New York:
GP Putnam’s Sons.
Damasio, A. R. (2003). Looking for Spinoza: Joy, Sorrow, and the Feeling Brain. Harcourt.
Damasio, A. R. (2010). Self Comes to Mind: Constructing the Conscious Brain. New York:
Day, T. A. (2005). Defining stress as a prelude to mapping its neurocircuitry: no help from
allostasis. Progress in Neuro-Psychopharmacology and Biological Psychiatry, 29(8), 1195-1200.
Di Paolo, E. A. (2000). Homeostatic adaptation to inversion of the visual field and other
sensorimotor disruptions. In J-A. Meyer, A. Berthoz, D. Floreano, H. Roitblat & S W. Wilson
(Eds.), From Animals to Animats, Proc. of the Sixth International Conference on the Simulation
of Adaptive Behavior. MIT Press.
Di Paolo, E. A. (2003). Organismically-inspired robotics: Homeostatic adaptation and natural
teleology beyond the closed sensorimotor loop. In K. Murase & T. Asakura (Eds.), Dynamical
Systems Approach to Embodiment and Sociality. Advanced Knowledge International, Adelaide,
Franchi, S. (2013). Homeostats for the 21st Century? Simulating Ashby Simulating the Brain.
Constructivist Foundations, 9(1), 93–101.
Franchi, S. (2015). Ashbian Homeostasis as non-Autonomous Adaptation. SASO 2015 Ninth
IEEE International Conference on Self-Adaptive and Self-Organizing Systems, At Cambridge,
Friston, K. J. (2010). The free-energy principle: A unified brain theory? Nature Reviews
Neuroscience, 11(2), 127-138.
Friston, K., Schwartenbeck, P., FitzGerald, T., Moutoussis, M., Behrens, T., & Dolan, R. J.
(2013). The anatomy of choice: active inference and agency. Frontiers in Human Neuroscience,
Froese, T. & Stewart, J. (2010). Life after Ashby: ultrastability and the autopoietic foundations
of biological individuality. Cybernetics & Human Knowing, 17(4): 83–106.
Gu, X., & FitzGerald, T. H. (2014). Interoceptive inference: homeostasis and decision-making.
Trends in Cognitive Science, 18(6), 269-70.
Hohwy, J. (2014). The neural organ explains the mind. In Open MIND. Open MIND. Frankfurt
am Main: MIND Group.
Kiryazov, K., Lowe, R., Becker-Asano, C., & Randazzo, M. (2013). The role of arousal in two-
resource problem tasks for humanoid service robots. In RO-MAN, 2013 IEEE (62-69). IEEE.
Lowe, R. (2013). Designing for Emergent Ultrastable Behaviour in Complex Artificial Systems
– The Quest for Minimizing Heteronomous Constraints. Constructivist Foundations, 9(1), 105–
Lowe, R. (2016). The Role of Allostasis in Sense-Making: A Better Fit for Interactivity than
Cybernetic-Enactivism? Constructivist Foundations, 11(2), 251–254.
Lowe, R., Montebelli, A., Ieropoulos, I., Greenman, J., Melhuish, C., & Ziemke, T. (2010).
Grounding motivation in energy autonomy: a study of artificial metabolism constrained robot
dynamics. In ALIFE (725–732), Odense: The MIT Press.
Lowe, R., & Kiryazov, K. (2014). Utilizing Emotions in Autonomous Robots: An Enactive
Approach. In Emotion Modeling (76-98). Springer International Publishing.
Lowe, R., & Ziemke, T. (2011). The feeling of action tendencies: on emotional regulation of
goal-directed behaviour. Frontiers in Psychology, 346(2), 1-24.
Manicka, S. & Di Paolo E. A. (2009). Local ultrastability in a real system based on
programmable springs. In Kampis, G., Karsai, I. & Szathmary, E. (Eds.), Advances in artificial
life. Proceedings of the tenth European Conference on Artificial Life (ECAL09). 87–94, Berlin:
Springer.
McEwen, B. S. (2004). Protective and Damaging Effects of the Mediators of Stress and
Adaptation: Allostasis and Allostatic Load. In J. Schulkin (Ed.), Allostasis, Homeostasis, and the
Costs of Adaptation, Cambridge University Press.
McEwen, B. S., & Wingfield, J. C. (2003). The concept of allostasis in biology and biomedicine.
Horm. Behav. 43(1), 2–15.
McFarland, D. (2008). Guilty Robots, Happy Dogs. New York: Oxford University Press.
McFarland, D., & Bösser, T. (1993). Intelligent Behavior in Animals and Robots. Cambridge, MA: The MIT Press.
McFarland, D., & Spier, E. (1997). Basic cycles, utility and opportunism in self-sufficient robots.
Robotics and Autonomous Systems, 20, 179–190.
Melhuish, C., Ieropoulos, I., Greenman, J. & Horsfield, I. (2006). Energetically autonomous
robots: food for thought. Autonomous Robots. 21, 187-198.
Muntean, I., & Wright, C. D. (2007). Autonomous agency, AI, and allostasis. Pragmatics and
Cognition, 15(3), 485-513.
Oatley, K., & Johnson-Laird, P. N. (1987). Towards a Cognitive Theory of Emotions. Cognition
& Emotion, 1(1), 29-50.
Oatley, K., & Johnson-Laird, P. N. (1996). The communicative theory of emotions: Empirical
tests, mental models, and implications for social interaction. In L.L. Martin & A. Tesser (Eds.),
Striving and feeling: Interactions among goals, affect, and self-regulation, Hillsdale, NJ:
Erlbaum.
Pickering, A. (2010). The cybernetic brain: Sketches of another future. University of Chicago
Press, Chicago IL.
Pitonakova, L. (2013). Ultrastable neuroendocrine robot controller. Adaptive Behavior, 21(1),
Prinz, J. J. (2004). Gut Reactions: A Perceptual Theory of Emotion. Oxford University Press.
Schulkin, J. (2004). Allostasis, homeostasis, and the costs of physiological adaptation.
Cambridge University Press.
Schulkin, J. (2011). Adaptation and well-being: Social allostasis. Cambridge University Press.
Seth, A. K. (2013). Interoceptive inference, emotion, and the embodied self. Trends in Cognitive
Sciences, 17(11), 565–573.
Seth, A. K. (2014). The Cybernetic Bayesian Brain. In T. Metzinger & J. M. Windt (Eds.), Open
MIND. Frankfurt am Main: MIND Group.
Simon, H. A. (1967). Motivational and emotional controls of cognition. Psychological Review, 74(1), 29–39.
Sloman, A. (2001). Beyond shallow models of emotion. Cognitive Processing, 2(1), 177–98.
Staddon, J. (2014). The new behaviorism. Psychology Press.
Sterling, P. (2004). Principles of allostasis: optimal design, predictive regulation,
pathophysiology and rational therapeutics. In J. Schulkin (Ed.), Allostasis, Homeostasis, and the
Costs of Adaptation, Cambridge University Press.
Sterling, P. (2012). Allostasis: a model of predictive regulation. Physiology & Behavior, 106(1), 5–15.
Vernon, D. (2013). Interpreting Ashby–But which One?. Constructivist Foundations, 9(1), 111-
Vernon, D., Lowe, R., Thill, S., & Ziemke, T. (2015). Embodied cognition and circular causality:
on the role of constitutive autonomy in the reciprocal coupling of perception and action.
Frontiers in Psychology, 6.
Wingfield, J. C. (2004). Allostatic Load and Life Cycles: Implications for Neuroendocrine
Control Mechanisms. In J. Schulkin (Ed.), Allostasis, Homeostasis, and the Costs of Adaptation,
Cambridge University Press.
Ziemke, T. and Lowe, R. (2009). On the Role of Emotion in Embodied Cognitive Architectures:
From Organisms to Robots. Cognitive Computation, 1, 104-117.
KEY TERMS AND DEFINITIONS*
Adaptive Behavior: Behaviour that promotes individual well-being, survival, reproductive
advantage, or task-specific/goal-directed achievement.
Affective States: Umbrella term for value-based states including feelings, moods, emotions,
motivations and drives.
Allostasis: Top-down predictive regulation of local homeostatic variables.
Allostatic Ultrastable Systems: A predictively regulating organism that, through multiple
feedback loops, strives for equilibrium with its environment in accordance with the maintenance
of its (most) essential variables.
Emotional States: Physiological states that prepare the organism for action.
Homeostasis: Maintenance of organismic essential variables within critical bounds.
Motivational States: Physiological states that reflect sub-optimal maintenance of essential
variables.
Predictive Regulation: A physiological and sensorimotor process that entails re-
parameterization of the organism to suit predicted action-based demand.
*These definitions are not considered all-encompassing. Rather, they concern the sense in which
the terms are referred to in the text of this chapter.
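The contrast drawn above between reactive homeostatic correction (Ashby's ultrastability) and predictive regulation (allostasis) can be illustrated with a minimal simulation. The following sketch is purely illustrative: the class names, the single-gain "behaviour", the random re-parameterization rule, and all numerical parameters are our assumptions for exposition, not a model specified in this chapter.

```python
import random


class UltrastableAgent:
    """Illustrative Ashby-style ultrastable agent (reactive regulation).

    One essential variable must stay within +/- bound. Behaviour is a single
    gain parameter; when the variable leaves its bounds, the agent randomly
    re-parameterizes (Ashby's step-function change) until stability returns.
    """

    def __init__(self, bound=1.0, seed=0):
        self.bound = bound
        self.rng = random.Random(seed)
        self.essential = 0.0                      # essential variable
        self.gain = self.rng.uniform(-1.0, 1.0)   # behavioural parameter

    def regulate(self, disturbance):
        """Advance one time step; return True if a homeostatic error occurred."""
        self.essential += disturbance + self.gain * self.essential
        violated = abs(self.essential) > self.bound
        if violated:
            # Reactive correction only after the bound is breached.
            self.gain = self.rng.uniform(-1.0, 1.0)
            self.essential = max(-self.bound, min(self.bound, self.essential))
        return violated


class AllostaticAgent(UltrastableAgent):
    """Adds a predictive (allostatic) layer: anticipate the external demand
    with a running average and pre-compensate before it perturbs the
    essential variable."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.expected = 0.0   # predicted demand

    def regulate(self, disturbance):
        # Predictive regulation: act on the anticipated demand first...
        violated = super().regulate(disturbance - self.expected)
        # ...then update the prediction (simple exponential average).
        self.expected += 0.5 * (disturbance - self.expected)
        return violated


# A sustained, predictable demand: the predictive agent should commit
# fewer homeostatic errors than the purely reactive one.
demand = [0.4] * 50
reactive = UltrastableAgent(seed=0)
allostatic = AllostaticAgent(seed=0)
r_err = sum(reactive.regulate(d) for d in demand)
a_err = sum(allostatic.regulate(d) for d in demand)
```

With these arbitrary parameters the reactive agent must breach its bounds several times before random re-parameterization finds a stabilizing (negative) gain, whereas the predictive agent absorbs most of the sustained demand in advance; the exact error counts depend on the seed and parameters chosen.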