Research

Cite this article: Dávid-Barrett T, Dunbar RIM. 2013 Processing power limits social group size: computational evidence for the cognitive costs of sociality. Proc R Soc B 280: 20131151. http://dx.doi.org/10.1098/rspb.2013.1151

Received: 7 May 2013
Accepted: 28 May 2013

Subject Areas:
behaviour, cognition, evolution

Keywords:
social brain hypothesis, behavioural synchrony, social network size

Author for correspondence:
T. Dávid-Barrett
e-mail: tamas.david-barrett@psy.ox.ac.uk

Electronic supplementary material is available at http://dx.doi.org/10.1098/rspb.2013.1151 or via http://rspb.royalsocietypublishing.org.
Processing power limits social group size: computational evidence for the cognitive costs of sociality

T. Dávid-Barrett and R. I. M. Dunbar

Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford OX1 3UD, UK
Abstract
Sociality is primarily a coordination problem. However, the social (or communication) complexity hypothesis suggests that the kinds of information that can be acquired and processed may limit the size and/or complexity of social groups that a species can maintain. We use an agent-based model to test the hypothesis that the complexity of information processed influences the computational demands involved. We show that successive increases in the kinds of information processed allow organisms to break through the glass ceilings that otherwise limit the size of social groups: larger groups can only be achieved at the cost of more sophisticated kinds of information processing that are disadvantageous when optimal group size is small. These results simultaneously support both the social brain and the social complexity hypotheses.
1. Introduction
Social living can take one of two forms: animals can form loose aggregations
(exemplified by insect swarms and antelope herds, which are typically based
on short-term advantages and whose persistence depends on immediate costs
and benefits) or they can form congregations (exemplified by the bonded
social groups of primates and some other mammals, including cetaceans, ele-
phants and equids among others) whose advantages derive from long-term
association, and whose persistence mainly depends on the trade-off between
short- and long-term benefits and costs [1]. It has been suggested that this
second kind of sociality is a kind of large-scale coordination problem that
depends on bonding processes underpinned by more sophisticated cognitive
mechanisms (associated with larger brains [2]). Indeed, the social brain hypoth-
esis [2–7] implies that the size of group that can coordinate its behaviour is
limited by the cognitive capacity (essentially, the neural processing capacity)
that organisms can bring to bear on the problem.
Although there is considerable neuroanatomical evidence (at the individual as
well as the species level) to support the social brain hypothesis in both primates [8]
and specifically humans [9–12], the role that cognition plays in this remains unspe-
cified. The recent resurgence of interest in the relationship between communication
complexity and social complexity (the social complexity hypothesis) [13,14] offers
a possible mechanism by postulating that (i) the complexity of information proces-
sing limits social group size and (ii) these communication competences depend on
computationally expensive cognitive capacities.
We here use an agent-based model to show that the cognitive demands of
coordination impose an upper limit on the size of social group that a species can
maintain, but that costly increases in information processing, while disadvanta-
geous when optimal group size is small, can allow the evolution of much larger
groups providing there is sufficient benefit from doing so. In our model, the objec-
tive function is the size of group that can achieve effective social coordination, and
we use a central processing unit’stime required to achieve this objective function as
an estimate of the cognitive demands of different information-processing strat-
egies. In this, we assume that the size of the brain affects the speed and volume,
and hence complexity, of decisions that can be made.
We use a novel coordination task for this. A pure coordi-
nation task differs from the conventional public good tasks in
that there is no optimal solution, the sole point being to con-
verge on some common behaviour, with the payoff simply
reflecting the extent to which the members of the group con-
verge. Conventional public good games involve what
amounts to a trading relationship, whereas a coordination
task of the kind we use simply requires agents to converge
on some common solution. In this respect, it mirrors the
common human case in which individuals adopt a set of
shared cultural values, where the cultural icon or marker
may be arbitrary and the value of the icon itself secondary
to the fact that individuals are bound together in a cultural
community, which in turn allows them to solve collective
action problems more effectively [15,16]. Coordination pro-
blems of this kind may have been especially important in
the context of the evolution of human sociality [16]. One
important consequence of this conception is that, since
agents do not pay or withhold some of their capital, there
is no payoff for free-riding.
While coordination tasks can, of course, have a direct
functional outcome (e.g. agreeing direction of travel to
some desirable resource [17]), many coordination tasks in
humans simply involve agreeing on cultural markers (or
‘tags’) to identify group members so as to enable collective
action in the future [15,16]. In many of these cases, the coordi-
nation task can be quite arbitrary (agreeing on a common
cultural icon or belief about the world [18–20], or even a
common dialect [21,22]). Cultural convergence of this kind is
not a functional end in itself, but rather provides the psycho-
logical basis for subsequently achieving a functional end
(in some cases—but not always—reinforced through costly sig-
nalling [23,24] or costly punishment [25]). Signing up to the
same cultural marker may signal acceptance of a set of cultural
or moral values that acts as a cue of trustworthiness and
willingness to reciprocate.
An important feature of our model is that agents are not
panmictic, but rather, as in the real world, are constrained in
the number of individuals with whom they can interact by
the structure of the social network in which they are embedded
[26–28]. The social networks of these species are characterized
by dyadic relationships that require expensive maintenance
(for instance, in grooming time) leading to long-term edge
stability. Although panmixia has often been assumed in
models of the evolution of behaviour [29,30], natural human
populations are invariably structured and network dynamics
are radically different in structured populations [31–33]. Our
models map agents onto a network, and so limit the range of
agents with whom they can interact.
2. Material and methods
The basic structure of our approach focuses on a group of n
agents that face a coordination problem which requires behav-
ioural synchrony, such that each agent has to do her part at the
right time and in the right way for the group to be able to act
as one (for a general overview of the model, see [31]). We use
synchronization on a dial to capture this problem. This is a
simple but widely used [31,34] device that stands for almost
any coordination problem, though it most obviously relates to
how a group decides on direction of travel. (We stress that agree-
ing a direction of travel is only one of many possible
coordination tasks, many of which—as in agreeing on an
arbitrary cultural icon as a marker of group identity—are in
themselves only indirectly functional. Agreeing the direction of
travel has the advantage of being particularly easy to model.)
Each agent is first assigned an initial value between 0° and 360°, and, for convenience, we refer to this as its information
value. One of these vectors is defined as the ‘true information’
and is the property of just one agent, while all the other agents
are assigned a randomly distributed value. The agents are
arranged in a random n-node network in which each agent is
linked to k others. The agents’ task is to synchronize their infor-
mation values, which they do by comparing their respective
values and then finding a consensus. Synchronization takes
place via a set of random dyadic meetings, in which the agents
exchange and update their information values. (For detailed
mathematical definitions and derivation, see appendices.) After
each meeting (and exchange of information), each agent calcu-
lates a weighted average of three bits of information: her
original information, the partner’s information and the infor-
mation the partner received in her previous meeting. These
meetings are repeated until, on average, each agent has taken part in τ meetings. At this point, the average distance between the agents' individual information and the true information is

\[
d(n, w, v) = \frac{1}{n} \sum_{i=1}^{n} \left| \phi_{T,i} - \phi_{TI} \right|,
\]

where w and v are the weight matrices for the weights the agent uses for the partners' and the third party information, respectively, while φ_{T,i} is the information held by agent i at the end of the synchronization episode, φ_{TI} is the true information and T is the total number of meetings in the group: T = nτ/2.
Agents are assumed to be trying to get as close to the true
information as possible, though they are not necessarily aware
of what this value actually is. To do this, each of them estimates
an ‘optimal’ set of weights using a memory of past information
exchanges and a simple least-squares optimization function
(which is similar to the way humans optimize [35]). Note that
requiring the agents to converge on a single value (the true infor-
mation) is not in itself a defining feature of the model: it is simply
a heuristic device to force coordination while at the same time
minimizing computational demand. Allowing the agents to find
their own equilibrium leads to convergence in just the same way
[31], but it invariably takes longer. Our concern is with the con-
straints that different communication and cognitive strategies
place on how fast convergence (synchronization) occurs, and the
limits that this imposes on the size of social groups.
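To make the synchronization procedure concrete, here is a minimal Python sketch of one episode. It is illustrative only: the network is a simple ring lattice standing in for the random k-regular network, the weights are fixed rather than estimated from memory, and third-party information is omitted (so it corresponds roughly to the simplest of the strategies described below); all function and variable names are ours.

```python
# Minimal, illustrative sketch of one synchronization episode (fixed weights,
# no weight optimization, no third-party information). Names are ours.
import numpy as np

def dial_average(a, b, x):
    """Weighted average of two dial values a, b (degrees) with weights 1 and x."""
    diff = (a - b + 180.0) % 360.0 - 180.0            # signed a - b mapped into [-180, 180)
    return (a - (x / (1.0 + x)) * diff) % 360.0

def synchronize(n=20, k=4, tau=20, w=1.0, seed=0):
    rng = np.random.default_rng(seed)
    offsets = [d for d in range(-(k // 2), k // 2 + 1) if d != 0]
    neighbours = {i: [(i + d) % n for d in offsets] for i in range(n)}   # ring lattice
    phi = rng.uniform(0.0, 360.0, size=n)             # information values on the dial
    ti = int(rng.integers(n))                         # agent seeded with the true information
    phi_true = phi[ti]
    for _ in range(int(tau * n / 2)):                 # T = tau * n / 2 dyadic meetings
        a = int(rng.integers(n))
        b = int(rng.choice(neighbours[a]))
        phi[a], phi[b] = (dial_average(phi[a], phi[b], w),
                          dial_average(phi[b], phi[a], w))
    err = np.abs((phi - phi_true + 180.0) % 360.0 - 180.0)
    return float(err.mean())                          # average distance from the true value

print(synchronize())
```

In the full model, the weights passed to the update are themselves estimated from each agent's memory, which is where the computational cost arises.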
In our analysis, agents have three possible ways of estimating
the optimal weights, which correspond to increasing levels of
cognitive demand. Model F = 1 is the simplest: agents ignore both the third party information and the differences among their partners. In model F = 2, the agents ignore third party information, but recognize that there are differences (e.g. in reliability) among their partners. In model F = 3, the agents
use both types of information. In calculating these weights, we
varied the size of memory sample that the agents could use in
their optimization. We used the processor time associated with
each optimization act as an index of the cognitive demand of a
strategy, and used this as a proxy for the amount of brain
tissue needed in managing a strategy. This allowed us to intro-
duce an implicit distance function

\[
\delta(F, n, \tilde{c}) \big|_{\tau, k} = d(n, \hat{w}, \hat{v}) \big|_{\tau, k},
\]

where c̃ is the measured processor time, F is the index of the method used by the agents, and ŵ and v̂ are the 'optimal' weight matrices as estimated by the agents.
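The way processor time enters the analysis can be sketched as follows: each optimization act is timed, and the elapsed time is used as the cost c̃ of the strategy. The weight-estimation routine below is a stand-in (a coarse grid search over a single weight against remembered (A, B, target) triples), not the published routine; only the timing pattern is the point, and all names are ours.

```python
# Illustrative sketch: time one weight-estimation (optimization) act and use the
# elapsed processor time as the cognitive-cost proxy c-tilde. The estimator here
# is a stand-in grid search, not the published routine.
import time
import numpy as np

def estimate_weight(samples, L=1000.0):
    """Pick the single weight that best maps (A, B) onto the remembered target."""
    A, B, target = (np.asarray(col, dtype=float) for col in zip(*samples))
    grid = np.linspace(-L, L, 20001)
    grid = grid[np.abs(1.0 + grid) > 1e-6]            # avoid the singular weight -1
    errors = [np.mean((target - (A + g * B) / (1.0 + g)) ** 2) for g in grid]
    return float(grid[int(np.argmin(errors))])

samples = [(10.0, 40.0, 30.0), (5.0, 20.0, 15.0), (0.0, 60.0, 45.0)]
start = time.perf_counter()
w_hat = estimate_weight(samples)
c_tilde = time.perf_counter() - start                 # measured processor time
print(w_hat, c_tilde)
```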
We assume that there is some threshold on the synchronization error, above which the group is deemed unable to perform the communal action, and below which the group is in sufficient behavioural synchrony to be able to act as one. Using this threshold, we can define the largest group that can be in synchrony as follows:

\[
n^{*}(F, \tilde{c}) \big|_{\tau, k, \lambda} = \max n \quad \text{s.t.} \quad \delta(F, n, \tilde{c}) \big|_{\tau, k} \le \lambda,
\]

where λ is the synchrony threshold.
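Operationally, the glass ceiling n* can be read off by growing the group until the post-synchronization error exceeds the threshold λ. The sketch below assumes, for simplicity, that the error grows monotonically with n; run_model is a hypothetical stand-in for the full simulation.

```python
# Sketch of reading off n*: increase n until the average error delta exceeds lambda.
# run_model(F, n, c_budget) is a hypothetical stand-in returning that average error.
def largest_group(run_model, F, c_budget, lam, n_min=3, n_max=200):
    n_star = None
    for n in range(n_min, n_max + 1):
        if run_model(F, n, c_budget) <= lam:
            n_star = n               # still within the synchrony threshold
        else:
            break                    # assumes the error grows monotonically with n
    return n_star

# toy stand-in: error grows with group size, shrinks with computation and model complexity
toy = lambda F, n, c: n / (F * (1.0 + c))
print(largest_group(toy, F=2, c_budget=4.0, lam=11.0))
```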
3. Results
As might be expected, our simulations show that the maxi-
mum group size increases as calculation capacity increases
for all the three models (figure 1). More importantly, how-
ever, figure 1 shows that for both models F ¼ 2 and F ¼ 3
there is a c*(F) such that n*(F 2 1,c) . n*(F,c), for c , c* and
n*(F 2 1,c) , n*(F,c), for c . c*. In other words, the simplest
model with the lowest computational demand allows
groups to form, but these are constrained to relatively small
sizes. There is some possibility of increasing group size by
increasing computational capacity, but this option is capped.
To achieve a significant further increase in group size,
the agents must switch to a more complex information-
processing strategy (model F = 2) that allows them to differ-
entiate among their partners. Note that while this method is
uneconomical for small groups, it yields an increase in group
size if the calculation capacity is large enough. Just as in the
simple model, a further increase in calculation capacity
allows a further increase in group size, but it too hits an
upper limit (albeit at a higher level). To move beyond this
limit, the agents not only need larger computational
capacities, but also have to add an additional information
stream (model F = 3). This is the least economical method
for small or middle-sized groups, but, by permitting third
party information to be exchanged, it allows the group to
cut through the glass ceiling imposed by the other two
models and significantly raises the limit on group size.
4. Discussion
It is important to note that the more complex strategies are
highly disadvantageous when group size is small: indeed,
the more complex the strategy, the more disadvantageous it
is. Thus, the evolution of communicative and cognitive com-
plexity is explicitly dependent on an ecological demand for
large social groups: it is only when there is a need for large
groups that the selection pressure will be sufficient to warrant
the costs involved. This is driven by the demands of social
coordination. If there is no requirement for coordination (in
terms of the present model, optimal group size n* ≈ 1 because
organisms do not need to cooperate), then there is insufficient
selection pressure to motivate either complex communication
or investment in the large brains required to support the
requisite cognitive abilities.
This suggests that complex communicative abilities,
large social groups that involve social coordination and large
brains will all be equally rare, as indeed seems to be the case
[13]. Although we have used a very simple (and computation-
ally easy to implement) objective function in the model
(achieving synchrony in compass direction), it is important to
be clear that our model is not limited to this particular context.
Rather, our device stands as an abstraction for any behaviour
that requires synchrony or coordination in order to maximize
biological fitness, whether the fitness payoff is a direct or an
indirect consequence of coordination along stable, expensive
dyadic network edges. Direct fitness payoffs may arise from
coordinated foraging or hunting (as in some social carnivores
[36]), cooperative defence against predators or rival conspeci-
fics (e.g. group territorial defence; as in many primates) or
any other ecologically relevant behaviour that requires group
members to synchronize or coordinate their behaviour in
some way. However, in species characterized by multi-level
social systems (e.g. elephants, some cetaceans, most primates
[37] and humans [27,28]) where the higher level of organization
functions to solve a collective action problem, the payoff
may be indirect and mechanisms are needed to facilitate
cooperation at group level. In these cases, solving a collective
action problem is a two-step process: willingness to collaborate
is established before the need to collaborate [38], and is often
(but not necessarily always) signalled by some marker (or
‘tag’) of group membership.
Our model not only shows how cognition (processing
power) could limit group size, but also sheds light on why
some other species (for instance, herding mammals with pan-
mictic group structure) use entirely different—and much
simpler—coordination strategies. This, of course, does not
preclude the possibility that bonded species such as
humans might use simple heuristics to solve coordination
tasks when the task demands are simple [39,40]. However,
our analysis does raise the question as to why some species
choose simple coordination methods associated with panmic-
tic structure, while others resort to using more complex and
costly cognition.
Figure 1. Limiting values for social group size n* (y-axis) as a function of the processor time costs c̃ (x-axis, log scale) required to achieve synchrony in an ecological objective, for three different cognitive strategies in an agent-based model with a structured network. F = 1 (dotted line): agents rely only on current information about the agents they interact with. F = 2 (dashed line): agents take note of individual differences in the quality of information other agents offer. F = 3 (solid line): agents take note of individual differences between other agents and rely on third party information about each other received from other agents. Increasing calculation capacity allows larger groups under each of the three strategies for evaluating the quality of potential collaborators. However, each strategy has an upper limit (glass ceiling), and if larger group sizes are needed, groups have to switch to more sophisticated methods of information acquisition, which necessitates an increase in computational costs (i.e. brain size). (Parameters used: k = 4, τ = 20, λ = 11°. The results are robust to these parameters.)

In sum, our analysis provides a formal mechanism for both the social brain hypothesis [3] and the social (or communicative) complexity hypothesis [13] by demonstrating that greater information-processing demands are reflected in greater cognitive (computational) costs, but that bringing
these on stream can allow organisms to break through glass
ceilings to significantly increase social group size. If size mat-
ters (large groups offer greater protection against predators
[41], are more efficient for foraging [42] or win more territor-
ial fights [43,44]), then there will be selection pressure to pay
these costs. But the significant finding is that these costs are
considerable and, when optimal group size is small, make
the costs prohibitive. In these circumstances, simpler cogni-
tive strategies are more profitable. In the limiting case,
when there is no requirement for bonded relationships to
ensure group stability through time, it is not worth paying
the costs of complex cognition. This suggests that these
kinds of more complex sociality will be relatively rare.
Broadly speaking, this is what we see in the natural world [2].
Funding statement.
This research is supported by a European Research
Council Advanced grant to R.I.M.D.
Appendix A. The formal model
Let us define a group as a set of n agents that are interlinked
in a connected network with each agent having a degree of k.
Let G(n) denote the set of all possible such networks:
\[
G(n) = \bigl\{\, g(n) \;:\; \#e_i = k \;\; \forall i, \;\; \rho(i,j) < \infty \;\; \forall i, j \,\bigr\},
\]

where e_i denotes the set of agents that agent i is connected to (and hence #e_i denotes the size of this set), and ρ(i,j) denotes the network distance between nodes i and j.
Let f denote the basic information-updating function,
defined in the following way:
\[
f(A, B, x) =
\begin{cases}
f_1 & \text{if } 0^\circ \le f_1 \le 360^\circ \\
f_1 - 360^\circ & \text{if } 360^\circ < f_1 \\
f_1 + 360^\circ & \text{if } f_1 < 0^\circ
\end{cases}
\]

\[
f_1(A, B, x) =
\begin{cases}
A - \dfrac{x}{1 + x}\,(A - B) & \text{if } 0^\circ \le |A - B| \le 180^\circ \\[4pt]
A - \dfrac{x}{1 + x}\,(A - B - 360^\circ) & \text{if } 180^\circ < A - B \\[4pt]
A - \dfrac{x}{1 + x}\,(A - B + 360^\circ) & \text{if } A - B < -180^\circ.
\end{cases}
\]
Thus, the function f calculates a weighted average of two
values on a dial, A and B, with the weights being, respect-
ively, 1 and x. Thus, this function is a simple weighted
average, and the only reason for the complication above is
that it is on a dial. (It is possible to define f using trigono-
metric functions; however, the form is less intuitive that way.)
Let h denote the information-updating function with two
information sources, defined as a nested f function:
\[
h(A, B, C, x, y) = f\bigl( f(A, B, x),\; f(A, C, y),\; y/x \bigr).
\]

That is, the function h works by first using f to calculate the weighted average of dial values A and B using the weight x, and of dial values A and C using the weight y; the two resulting values are then combined using the weight y/x. (Note that, owing to the construction of the y/x weight, the parameters x and y do not have symmetric effects. This reflects the fact that B and C play different roles in the model, with B being the information the agent's meeting partner holds, and C being the third party information.)
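A direct Python transcription of f and h may help; the modular arithmetic below is equivalent to the case-by-case wrap-around above (up to the boundary convention), and the names are ours.

```python
# Transcription of the dial-averaging functions f and h defined above.
# Angles are in degrees; modular arithmetic replaces the case-by-case wrap-around.
def f(A, B, x):
    """Weighted average of dial values A and B with weights 1 and x."""
    diff = (A - B + 180.0) % 360.0 - 180.0   # signed A - B mapped into [-180, 180)
    return (A - (x / (1.0 + x)) * diff) % 360.0

def h(A, B, C, x, y):
    """Nested update: combine A with B (weight x) and with C (weight y), then merge."""
    return f(f(A, B, x), f(A, C, y), y / x)

print(f(350.0, 20.0, 1.0))   # -> 5.0, the equal-weight midpoint across the 0/360 boundary
print(h(350.0, 20.0, 40.0, 1.0, 0.5))
```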
Let φ_{t,i} denote the value of the information variable held by agent i at time t. Then, let TI ∼ U{1, 2, ..., n} denote a randomly selected agent that receives the true information, φ_{TI} ∼ U(0°, 360°). The initial information values are defined in the following way:

\[
\phi_{0,i} =
\begin{cases}
\phi_{TI} & \text{if } i = TI \\
U(0^\circ, 360^\circ) & \text{if } i \ne TI.
\end{cases}
\]

That is, all agents receive a random initial value on the dial, except for the agent that was chosen as TI, which receives the true information, φ_{TI}.
Let π_{t,i} denote the third party information, and let p_{t,i} denote the index of the agent from which the third party information originates, both held by agent i at time t. The initial values for these two variables are set in the following way:

\[
\pi_{0,i} = \phi_{0,i} \;\; \forall i
\qquad \text{and} \qquad
p_{0,i} = i \;\; \forall i.
\]

That is, the first third party information the agent receives is from herself.
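The corresponding initial state can be set up as follows (a sketch; the array names are ours):

```python
# Sketch of the initial state: information values phi, third-party information pi_info,
# and third-party source indices p, following the definitions above.
import numpy as np

def initial_state(n, rng):
    ti = int(rng.integers(n))                     # the agent seeded with the true information
    phi_true = float(rng.uniform(0.0, 360.0))
    phi = rng.uniform(0.0, 360.0, size=n)
    phi[ti] = phi_true
    pi_info = phi.copy()                          # pi_{0,i} = phi_{0,i}
    p = np.arange(n)                              # p_{0,i} = i
    return phi, pi_info, p, ti, phi_true

phi, pi_info, p, ti, phi_true = initial_state(10, np.random.default_rng(1))
print(ti, round(phi_true, 1))
```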
Let us define a synchronization event on a network g(n) ∈ G(n) as a series of T meetings among the agents, where a 'meeting' between two agents, a and b, is defined in the following way:

\[
\{a, b\} \sim U\{1, 2, \ldots, n\}^{2} \quad \text{s.t.} \quad a \ne b, \;\; a \in e_b, \;\; b \in e_a.
\]

That is, first, the agents a and b are chosen such that they are linked to each other. Second, the agents exchange and update their respective information, third party information, and third party agent index in the following way:

\[
\begin{aligned}
\phi_{t+1,a} &= h\bigl(\phi_{t,a}, \phi_{t,b}, \pi_{t,b}, w_{a,b}, v_{a,p_{t,b}}\bigr),\\
\phi_{t+1,b} &= h\bigl(\phi_{t,b}, \phi_{t,a}, \pi_{t,a}, w_{b,a}, v_{b,p_{t,a}}\bigr),\\
p_{t+1,a} &= b, \qquad p_{t+1,b} = a,\\
\pi_{t+1,a} &= \phi_{t,b} \quad \text{and} \quad \pi_{t+1,b} = \phi_{t,a},
\end{aligned}
\]

where w_{a,b} and v_{a,p_{t,b}} are the weights associated, respectively, with the information agent a receives from agent b and the third party information she receives from agent b.

These random meetings among the agents are repeated until t = T, where T = τn/2. That is, on average each agent takes part in τ meetings.
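A sketch of a single meeting, transcribing the update equations above (f and h are repeated so that the fragment is self-contained; the weight matrices are passed in as dense arrays, and all names are ours):

```python
# One dyadic meeting, transcribing the update equations above. The agent state
# (phi, pi_info, p) and the weights w, v follow the earlier definitions.
import numpy as np

def f(A, B, x):
    diff = (A - B + 180.0) % 360.0 - 180.0
    return (A - (x / (1.0 + x)) * diff) % 360.0

def h(A, B, C, x, y):
    return f(f(A, B, x), f(A, C, y), y / x)

def meeting(a, b, phi, pi_info, p, w, v):
    new_a = h(phi[a], phi[b], pi_info[b], w[a, b], v[a, p[b]])
    new_b = h(phi[b], phi[a], pi_info[a], w[b, a], v[b, p[a]])
    # third-party information passed on is the partner's *pre-meeting* value
    pi_info[a], pi_info[b] = phi[b], phi[a]
    p[a], p[b] = b, a
    phi[a], phi[b] = new_a, new_b

n = 6
rng = np.random.default_rng(0)
phi = rng.uniform(0, 360, n); pi_info = phi.copy(); p = np.arange(n)
w = np.ones((n, n)); v = np.zeros((n, n))
meeting(0, 1, phi, pi_info, p, w, v)
print(phi[:2])
```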
Given the definition of synchronization as a series of information exchanges on a graph, we can define a function that measures the average distance of the agents from the true information at the end of the synchronization process. Let d denote this measure, defined in the following way:

\[
d(n, w, v) = \frac{1}{n} \sum_{i=1}^{n} \left| \phi_{T,i} - \phi_{TI} \right|,
\]

where w = {w_{i,j}} and v = {v_{i,j}} are the respective weight matrices.

(Note that d(n, w, v) is independent of the arbitrary parameter φ_{TI}; this is why we employ the cumbersome dial, rather than a one-dimensional range, for the information variable. At the same time, if the weight parameters of the agents and the network structure are well behaved, the average distance d converges to zero as τ goes to infinity.)
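In code, the measure is a one-liner. The sketch below uses the wrap-around (dial) distance, which coincides with the plain absolute difference in the formula once the values have converged; the function name is ours.

```python
import numpy as np

def average_distance(phi, phi_true):
    """d(n, w, v): mean distance of the agents' final values from the true information."""
    return float(np.mean(np.abs((phi - phi_true + 180.0) % 360.0 - 180.0)))

print(average_distance(np.array([10.0, 350.0, 20.0]), 10.0))   # -> 10.0
```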
Let us assume that a set of n agents, linked up in the network g(n) ∈ G(n), goes through a series of S synchronization events in such a way that both the agent selected to be TI and the true information stay the same throughout, while the information variables of all non-TI agents acquire a new random initial value in each of the synchronization events. (All elements of the synchronization event are as defined earlier.) Then, let r_{s,t,i} denote a memory record that agent i collects at meeting number t in synchronization event s, defined in the following way:

\[
r_{s,t,i} =
\begin{cases}
\{\, a_{s,t},\; b_{s,t},\; p_{s,t,b},\; \phi_{s,t,a},\; \phi_{s,t,b},\; \pi_{s,t,p_{s,t,b}} \,\} & \text{if } i = a \\
\{\, b_{s,t},\; a_{s,t},\; p_{s,t,a},\; \phi_{s,t,b},\; \phi_{s,t,a},\; \pi_{s,t,p_{s,t,a}} \,\} & \text{if } i = b \\
\emptyset & \text{otherwise.}
\end{cases}
\]

That is, if an agent is party to a meeting, she records all the relevant information in her memory.

Then let R_i denote the entire memory of agent i, containing all her records for all T meetings in all S synchronization events:

\[
R_i = \Bigl\{ \{ r_{s,t,i} \}_{t=1}^{T} \Bigr\}_{s=1}^{S}.
\]

(For the initial values of w_{i,j} and v_{i,j} used in the memory build-up, see below.)
The agents will select a sample from their memory, and choose a weight that will minimize their distance from the true information. Before we describe this process, however, we introduce a generic weight-optimization function. Let z(Q) denote the optimal weight given the sample Q, defined in the following way:

\[
z(Q) = \arg\min_{\beta} \; E\!\left[ \left( \phi_{TI} - \frac{A + \beta B}{1 + \beta} \right)^{2} \;\middle|\; \{A, B\} \in Q \right], \tag{A 1}
\]

where E is the expectations operator. That is, z is the coefficient that minimizes the distance of the weighted value from the true information. (We used optimization on a linear range rather than on a dial, as the latter results in impractically long calculation times. Robustness checks showed, however, that there is no significant difference between the two measures in practice.)
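One possible implementation of (A 1), assuming the sample Q is a list of (A, B) pairs and the target φ_{TI} is available to the optimizer, is a bounded one-dimensional minimization over β ∈ [−L, L]; the guard value near β = −1 and the use of SciPy are our choices, not the published routine.

```python
# A possible implementation of the weight-optimization function z(Q) in (A 1).
import numpy as np
from scipy.optimize import minimize_scalar

def z(Q, phi_TI, L=1000.0):
    A = np.array([a for a, _ in Q], dtype=float)
    B = np.array([b for _, b in Q], dtype=float)

    def mse(beta):
        if abs(1.0 + beta) < 1e-9:            # guard the singular weight beta = -1
            return 1e18
        return float(np.mean((phi_TI - (A + beta * B) / (1.0 + beta)) ** 2))

    res = minimize_scalar(mse, bounds=(-L, L), method="bounded")
    return res.x

Q = [(10.0, 40.0), (5.0, 20.0), (0.0, 60.0)]
print(round(z(Q, phi_TI=30.0), 2))
```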
Given the definition of optimization in (A 1), let us consider three different models of how the agents sample their memory and calculate the optimal weights. Let F serve as an index of these models.

In the first model, F = 1, the agents ignore all third party information and do not differentiate among the other agents they encounter. Formally,

\[
Q_i \sim U\bigl\{ \{A, B\} \;:\; \{a, b, c, d, A, B, C, D\} \in R_i, \; a = i, \; b \ne i \bigr\}^{q},
\]

and

\[
\hat{w}_{i,j} =
\begin{cases}
z(Q_i) & \text{if } j \in e_i \\
0 & \text{if } j \notin e_i
\end{cases}
\qquad \text{and} \qquad
\hat{v}_{i,k} = 0 \;\; \forall k = 1, 2, \ldots, n,
\]

where q denotes the size of the sample. That is, the agents only focus on primary information, and only differentiate self from not-self.
In the second model, F = 2, the agents still ignore the third party information, but they do differentiate among their partners. Formally,

\[
Q_{i,j} \sim U\bigl\{ \{A, B\} \;:\; \{a, b, c, d, A, B, C, D\} \in R_i, \; a = i, \; b = j \bigr\}^{q},
\]

and

\[
\hat{w}_{i,j} =
\begin{cases}
z(Q_{i,j}) & \text{if } j \in e_i \\
0 & \text{if } j \notin e_i
\end{cases}
\qquad \text{and (as before)} \qquad
\hat{v}_{i,k} = 0 \;\; \forall k = 1, 2, \ldots, n.
\]

That is, the agents still focus only on the primary information, but they also exploit the differentiation between their partners: by recognizing that their partners are different agents, they allow the weights they assign to their partners to reflect differences in the quality of the information.
In the third model, F = 3, the agents use the third party information and differentiate among the other agents. Formally,

\[
\begin{aligned}
Q_{i,j} &\sim U\bigl\{ \{A, B\} \;:\; \{a, b, c, d, A, B, C, D\} \in R_i, \; a = i, \; b = j \bigr\}^{q},\\
Q_{i,k} &\sim U\bigl\{ \{A, B\} \;:\; \{a, b, c, d, A, B, C, D\} \in R_i, \; a = i, \; c = k \bigr\}^{q},
\end{aligned}
\]

and (as before)

\[
\hat{w}_{i,j} =
\begin{cases}
z(Q_{i,j}) & \text{if } j \in e_i \\
0 & \text{if } j \notin e_i
\end{cases}
\qquad \text{and} \qquad
\hat{v}_{i,k} =
\begin{cases}
z(Q_{i,k}) & \text{if } Q_{i,k} \ne \emptyset \\
0 & \text{if } Q_{i,k} = \emptyset.
\end{cases}
\]
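The three sampling rules can be sketched as follows. The record layout here is simplified to six fields (the two partner indices, the third-party source index, own value, partner value, third-party value), and the choice of which two values enter the third-party sample is our reading of the definition above; all names are ours.

```python
# Sketch of how the three models sample the memory R_i. Records are simplified
# to tuples (a, b, c, A, B, C); for F = 3 the primary-information sample is built
# exactly as for F = 2, so only the third-party sample is shown for that case.
import random

def sample_pairs(R_i, i, F, j=None, k=None, q=10, rng=random):
    """Return up to q (value, value) pairs for the weight estimate, per model F."""
    if F == 1:                                    # pool all partners together
        pool = [(A, B) for (a, b, c, A, B, C) in R_i if a == i and b != i]
    elif F == 2:                                  # keep only meetings with partner j
        pool = [(A, B) for (a, b, c, A, B, C) in R_i if a == i and b == j]
    elif F == 3 and k is not None:                # third-party information from source k
        pool = [(A, C) for (a, b, c, A, B, C) in R_i if a == i and c == k]
    else:
        raise ValueError("unknown model")
    return rng.sample(pool, min(q, len(pool)))

R_i = [(0, 1, 2, 10.0, 40.0, 25.0), (0, 2, 1, 12.0, 30.0, 28.0), (0, 1, 3, 11.0, 38.0, 26.0)]
print(sample_pairs(R_i, i=0, F=2, j=1, q=2))
```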
Let c(F, q, n, g(n)) denote the processor time, which is the time it takes for the simulation computer to perform the optimization defined in (A 1). The processor time varies with the model index F, the sample size q, the group size n and the graph (network) that connects the agents.

Using this concept of processor time, let us define an implicit distance function δ in the following way:

\[
\delta(F, n, \tilde{c}) \big|_{\tau, k} = d(n, \hat{w}, \hat{v}) \big|_{\tau, k}, \tag{A 2}
\]

where c̃ is the average measured processor time over all g(n) ∈ G(n). (That is, c̃ is a parameter that is measured during the simulation process.) Note that it is the constraints #e_i = k ∀i and g(n) ∈ G(n) that make the measure comparable across runs, and thus allow the definition of the implicit distance function in (A 2). Also note that, owing to the construction of (A 2), the processor time c scales with the sample size q (although not linearly), allowing for a wider interpretation of computational complexity.
Using the δ function, we can define a maximum group size, denoted by n*, as follows:

\[
n^{*}(F, \tilde{c}) \big|_{\tau, k, \lambda} = \max n \quad \text{s.t.} \quad \delta(F, n, \tilde{c}) \big|_{\tau, k} \le \lambda,
\]

where λ is an arbitrary synchrony threshold. That is, n* is the largest number of agents that can perform a group synchronization (on an average k-degree network) in such a way that the group reaches the threshold of synchrony, given the calculation capacity and method of optimization of the agents.
Proposition A.1. ∂n*(F, c̃)/∂c̃ > 0 for all F = 1, 2, 3.

Proposition A.2. For both F = 2, 3 there is a c*(F) such that n*(F − 1, c) > n*(F, c) for c < c* and n*(F − 1, c) < n*(F, c) for c > c*.

There is no algebraic solution to this problem, to our knowledge. However, a simulation result is obtained with τ = 20, k = 4 and λ = 11°. (Further computational parameters used were the estimation limits in (A 2): −L ≤ β ≤ L, where we used L = 1000, and the number of synchronization events in the calculated memory, S = 100. The initial values for the simulations' memory build-up were w_{i,j} = 1 and v_{i,k} = 0 for F = 1, 2, and w_{i,j} = 1 and v_{i,k} = 1 for F = 3, for all i, j and k. For robustness checks, see the electronic supplementary material.)

For the calculated evidence for both propositions A.1 and A.2, see figure 1. (Note that, owing to the nature of numerical simulation, the inequalities of propositions A.1 and A.2 hold only weakly for some parameter ranges.)
Appendix B. A note on evolutionary dynamics
The model we present in this paper shows the group size
limit of any network using the particular mode of infor-
mation updating. One of the important questions
concerning our model is whether there is a plausible evol-
utionary mechanism that would favour higher processing
capacity on the individual level. Although our model is pri-
marily concerned with constraints on the group size that
would serve as limiting factors of any evolutionary process,
we recognize that there might be a collective action problem
(or public good problem) underlying the phenomenon we are
modelling [45]. If the increased processing power is costly for
the individual while aiding the group-level synchronization
efficiency only marginally, then there might be an incentive
for the individual to ‘cheat’ on the others by reducing its pro-
cessing power, and thus free-ride on the others’ higher
capacities. If that was evolutionarily beneficial, there would
be no group-level action at all [46].
Let us consider the set of graphs G(10) with k = 5, a group
of 10 agents linked up in a random graph in such a way
that each of them is connected to five others. Let us, for
simplicity, assume that they are going through a series of
synchronization events without any optimization, without
differentiating among their partners and without third
party information (i.e. w = 1 and v = 0).
However, let us assume that the information reception of the agents contains noise in the following way:

\[
\phi_{t+1,i} = f\bigl(\phi_{t,i}, \hat{\phi}_{t,j}\bigr),
\qquad \text{where} \qquad
\hat{\phi}_{t,j} \sim U\bigl(\phi_{t,j} - \varepsilon_i,\; \phi_{t,j} + \varepsilon_i\bigr)
\]

and where

\[
\varepsilon_i =
\begin{cases}
\varepsilon_1 & \text{if } i = 1 \\
30 & \text{if } i \ne 1.
\end{cases}
\]

That is, all agents apart from agent 1 receive the information from others within a bandwidth of ±30. For agent 1, the error level can vary in the range ε₁ ∈ (0, 180).
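A sketch of this noisy reception step; since the weights are fixed at w = 1 in this variant, the dial-average f is called with weight 1, and the names are ours.

```python
# Sketch of the noisy-reception variant: agent i reads partner j's value with
# uniform noise of half-width eps_i (30 for everyone except the focal agent 1).
import numpy as np

def f(A, B, x):
    diff = (A - B + 180.0) % 360.0 - 180.0
    return (A - (x / (1.0 + x)) * diff) % 360.0

def noisy_update(phi_i, phi_j, eps_i, rng, w=1.0):
    phi_hat = (phi_j + rng.uniform(-eps_i, eps_i)) % 360.0   # noisy perception of partner's value
    return f(phi_i, phi_hat, w)

rng = np.random.default_rng(3)
eps = lambda i, eps1: eps1 if i == 1 else 30.0
print(noisy_update(100.0, 140.0, eps(1, eps1=5.0), rng))
```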
Let us assume that, just as in the main model of the paper,
there is true information received by one randomly selected
agent. After τ = 30 information exchanges per agent, we measure the average distance of the true information from agent 1, as well as from all the other agents. We found that both agent 1's distance and the other agents' distance from the true information increase with the error level of agent 1; however, the former relationship is steeper (figure 2a).
Now let us consider a payoff function for agent 1 that has the following form:

\[
\pi_1 = a \;-\; \frac{1}{n} \sum_{i=1}^{n} \bigl| \phi_{T,i} - \phi_{TI} \bigr| \;-\; \beta\,(180 - \varepsilon_1) \;-\; \bigl| \phi_{T,1} - \phi_{TI} \bigr|, \tag{B 1}
\]

where π₁ is the total payoff, a is the payoff that agent 1 receives as a result of the collective action, β is a cost parameter and n = 10. Thus, the payoff function is of the following form: (payoff to focal individual) = (benefit of collective action) − (average group deviance) − (cost of cognitive investment) − (focal individual's deviation).

That is, the total payoff that the agent receives depends on the group being able to perform the collective action efficiently, while the agent faces a cost for reducing her information's noise level, and she pays a penalty for being far from the true information. (Note that this is a particular example of a payoff function, in which all the functional forms are linear, and the cost associated with group and individual efficiency is 1 in both cases. Trivially, the payoff from the collective action, a, can be any value, but unless a is big enough, π₁ will not be positive and hence no action takes place.)
This formulation of the payoff function allows us to search for the optimal error level given the cost parameter:

\[
\varepsilon_1^{*}(\beta) = \arg\max_{\varepsilon_1 \in (0, 180)} \; \pi_1.
\]
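A sketch of that search, under the additive reading of (B 1) adopted above. simulate() is a toy stand-in that returns the two deviance terms; in the model proper these would come from averaging many noisy synchronization runs, and all names and functional forms here are ours.

```python
# Grid-search the focal agent's optimal error level eps1 for a given cost beta,
# using a toy stand-in for the simulation and the additive reading of (B 1).
import numpy as np

def simulate(eps1, rng):
    # toy stand-in: both deviances grow with the focal agent's error level
    group_dev = 20.0 + 0.1 * eps1 + rng.normal(0, 0.5)
    focal_dev = 5.0 + 0.002 * eps1 ** 2 + rng.normal(0, 0.5)
    return group_dev, focal_dev

def payoff(eps1, a, beta, rng):
    group_dev, focal_dev = simulate(eps1, rng)
    return a - group_dev - beta * (180.0 - eps1) - focal_dev

def optimal_eps1(a, beta, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.linspace(1.0, 179.0, 179)
    values = [np.mean([payoff(e, a, beta, rng) for _ in range(50)]) for e in grid]
    return float(grid[int(np.argmax(values))])

print(optimal_eps1(a=300.0, beta=0.6))   # larger beta pushes the optimum towards higher error
```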
Figure 2. The public good problem can be bypassed in the behavioural synchrony framework. (a) How the error level of agent 1 (ε₁) affects agent 1's and the other agents' respective distances from the true information (|φ − φ_{TI}|). Solid line, agent 1; dashed line, mean for all the other agents. (b) A positive relationship between the cost parameter (β) and agent 1's optimal error level (ε₁*). Solid line, optimal error level of agent 1; dashed line, mean error level of all other agents. Note that if the cost is sufficiently low, then agent 1 would choose to have an error level lower than that of the rest of the group.

Simulation results show that—as expected—the distance from the true information increases with the error level of
the focus agent for both the focus agent and the rest of the
group, albeit with a different slope (figure 2a). As a result, agent 1's optimal error level increases as the cost of noise reduction increases (figure 2b). Importantly,
in some parameter ranges, the optimal error is smaller than
that of the rest of the group. This suggests that if the focus
agent’s error level was the same as the rest of the group initially,
reducing the error level would be beneficial for this individual,
providing the route to evolutionary dynamics.
Note that there are two reasons why the public good pro-
blem does not arise in the context of a coordination game, in
line with the observation that public good problems are
essentially two-trait in nature [47]. First, the public good in
a coordination game is non-rival, which extends the range
of the cost parameter (
b
), under which the public goods
problem does not emerge. Second, and most importantly,
investment in the public good and exploitation of the public
good are one and the same thing. This means that the focus
agent must invest in the public good (increase coordination
efficiency) in order to gain from it at all (increase the agent’s
alignment). Hence, there is no possibility of acting as a free-
rider: there is no benefit to withdrawing from the public
good, and the value of the public good is diminished as a
direct consequence.
Although the functional form of the payoff function
(equation (B 1)) is entirely arbitrary, this simple example illus-
trates an interesting property of the behavioural synchrony
model: it can bypass (or perhaps even resolve) the public
good problem, adding to the existing literature on the
relationship between cognition and cooperation [48,49].
References
1. Dunbar RIM, Shultz S. 2010 Bondedness and sociality. Behaviour 147, 775–803. (doi:10.1163/000579510X501151)
2. Shultz S, Dunbar R. 2010 Encephalization is not a universal macroevolutionary phenomenon in mammals but is associated with sociality. Proc. Natl Acad. Sci. USA 107, 21 582–21 586. (doi:10.1073/pnas.1005246107)
3. Dunbar RIM. 1992 Neocortex size as a constraint on group-size in primates. J. Hum. Evol. 22, 469–493. (doi:10.1016/0047-2484(92)90081-J)
4. Dunbar RIM. 1998 The social brain hypothesis. Evol. Anthropol. 6, 178–190. (doi:10.1002/(SICI)1520-6505(1998)6:5<178::AID-EVAN5>3.0.CO;2-8)
5. Barton RA. 1996 Neocortex size and behavioural ecology in primates. Proc. R. Soc. Lond. B 263, 173–177. (doi:10.1098/rspb.1996.0028)
6. Byrne RW, Whiten A (eds). 1988 Machiavellian intelligence: social complexity and the evolution of intellect in monkeys, apes and humans. Oxford, UK: Oxford University Press.
7. Barton RA, Dunbar RIM. 1997 Evolution of the social brain. In Machiavellian intelligence II: extensions and evaluations (eds A Whiten, RW Byrne), pp. 240–263. Cambridge, UK: Cambridge University Press.
8. Sallet J et al. 2011 Social network size affects neural circuits in macaques. Science 334, 697–700. (doi:10.1126/science.1210027)
9. Lewis PA, Rezaie R, Brown R, Roberts N, Dunbar RIM. 2011 Ventromedial prefrontal volume predicts understanding of others and social network size. Neuroimage 57, 1624–1629. (doi:10.1016/j.neuroimage.2011.05.030)
10. Powell J, Lewis PA, Roberts N, Garcia-Finana M, Dunbar RIM. 2012 Orbital prefrontal cortex volume predicts social network size: an imaging study of individual differences in humans. Proc. R. Soc. B 279, 2157–2162. (doi:10.1098/rspb.2011.2574)
11. Bickart KC, Wright CI, Dautoff RJ, Dickerson BC, Barrett LF. 2011 Amygdala volume and social network size in humans. Nat. Neurosci. 14, 163–164. (doi:10.1038/nn.2724)
12. Kanai R, Bahrami B, Roylance R, Rees G. 2012 Online social network size is reflected in human brain structure. Proc. R. Soc. B 279, 1327–1334. (doi:10.1098/rspb.2011.1959)
13. Freeberg TM, Dunbar RIM, Ord TJ. 2012 Social complexity as a proximate and ultimate factor in communicative complexity. Phil. Trans. R. Soc. B 367, 1785–1801. (doi:10.1098/rstb.2011.0213)
14. Dunbar RIM. 2012 Bridging the bonding gap: the transition from primates to humans. Phil. Trans. R. Soc. B 367, 1837–1846. (doi:10.1098/rstb.2011.0217)
15. Dunbar RIM. 2008 Mind the gap: or why humans aren't just great apes. Proc. Br. Acad. 154, 403–423.
16. Ihara Y. 2011 Evolution of culture-dependent discriminate sociality: a gene-culture coevolutionary model. Phil. Trans. R. Soc. B 366, 889–900. (doi:10.1098/rstb.2010.0247)
17. Sigg H, Stolba A. 1981 Home range and daily march in a hamadryas baboon troop. Folia Primatol. (Basel) 36, 40–75. (doi:10.1159/000156008)
18. Tajfel H, Billig MG, Bundy RP, Flament C. 1971 Social categorization and intergroup behavior. Eur. J. Soc. Psychol. 1, 149–177. (doi:10.1002/ejsp.2420010202)
19. Riolo RL, Cohen MD, Axelrod R. 2001 Evolution of cooperation without reciprocity. Nature 414, 441–443. (doi:10.1038/35106555)
20. Antal T, Ohtsuki H, Wakeley J, Taylor PD, Nowak MA. 2009 Evolution of cooperation by phenotypic similarity. Proc. Natl Acad. Sci. USA 106, 8597–8600. (doi:10.1073/pnas.0902528106)
21. Nettle D, Dunbar RIM. 1997 Social markers and the evolution of reciprocal exchange. Curr. Anthropol. 38, 93–99. (doi:10.1086/204588)
22. Cohen E. 2012 The evolution of tag-based cooperation in humans: the case for accent. Curr. Anthropol. 53, 588–616. (doi:10.1086/667654)
23. Sosis R, Alcorta C. 2003 Signaling, solidarity, and the sacred: the evolution of religious behavior. Evol. Anthropol. 12, 264–274. (doi:10.1002/evan.10120)
24. Sosis R, Kress HC, Boster JS. 2007 Scars for war: evaluating alternative signaling explanations for cross-cultural variance in ritual costs. Evol. Hum. Behav. 28, 234–247. (doi:10.1016/j.evolhumbehav.2007.02.007)
25. Boyd R, Gintis H, Bowles S, Richerson PJ. 2003 The evolution of altruistic punishment. Proc. Natl Acad. Sci. USA 100, 3531–3535. (doi:10.1073/pnas.0630443100)
26. Granovetter M. 1973 The strength of weak ties. Am. J. Sociol. 78, 1360–1380. (doi:10.1086/225469)
27. Hamilton MJ, Milne BT, Walker RS, Burger O, Brown JH. 2007 The complex structure of hunter-gatherer social networks. Proc. R. Soc. B 274, 2195–2202. (doi:10.1098/rspb.2007.0564)
28. Zhou WX, Sornette D, Hill RA, Dunbar RIM. 2005 Discrete hierarchical organization of social group sizes. Proc. R. Soc. B 272, 439–444. (doi:10.1098/rspb.2004.2970)
29. Boyd R, Richerson PJ. 1985 Culture and the evolutionary process. Chicago, IL: University of Chicago Press.
30. Sachs JL, Mueller UG, Wilcox TP, Bull JJ. 2004 The evolution of cooperation. Q. Rev. Biol. 79, 135–160. (doi:10.1086/383541)
31. David-Barrett T, Dunbar RIM. 2012 Cooperation, behavioural synchrony and status in social networks. J. Theor. Biol. 308, 88–95. (doi:10.1016/j.jtbi.2012.05.007)
32. Dunbar RIM. 2011 Constraints on the evolution of social institutions and their implications for information flow. J. Institutional Econ. 7, 345–371. (doi:10.1017/S1744137410000366)
33. Gomez-Gardenes J, Reinares I, Arenas A, Floria LM. 2012 Evolution of cooperation in multiplex networks. Sci. Rep. 2, 620. (doi:10.1038/srep00620)
34. Couzin ID, Krause J, Franks NR, Levin SA. 2005 Effective leadership and decision-making in animal groups on the move. Nature 433, 513–516. (doi:10.1038/nature03236)
35. Griffiths TL, Tenenbaum JB. 2006 Optimal predictions in everyday cognition. Psychol. Sci. 17, 767–773. (doi:10.1111/j.1467-9280.2006.01780.x)
36. Kruuk H. 1972 The spotted hyena: a study of predation and social behavior. Chicago, IL: University of Chicago Press.
37. Hill RA, Bentley RA, Dunbar RIM. 2008 Network scaling reveals consistent fractal pattern in hierarchical mammalian societies. Biol. Lett. 4, 748–751. (doi:10.1098/rsbl.2008.0393)
38. Harcourt AH. 1992 Coalitions and alliances: are primates more complex than non-primates? In Coalitions and alliances in humans and other animals (eds AH Harcourt, FBM de Waal), pp. 445–472. Oxford, UK: Oxford University Press.
39. Dyer JR, Ioannou CC, Morrell LJ, Croft DP, Couzin ID, Waters DA, Krause J. 2008 Consensus decision making in human crowds. Anim. Behav. 75, 461–470. (doi:10.1016/j.anbehav.2007.05.010)
40. Kearns M, Suri S, Montfort N. 2006 An experimental study of the coloring problem on human subject networks. Science 313, 824–827. (doi:10.1126/science.1127207)
41. Shultz S, Noe R, McGraw WS, Dunbar RIM. 2004 A community-level evaluation of the impact of prey behavioural and ecological characteristics on predator diet composition. Proc. R. Soc. Lond. B 271, 725–732. (doi:10.1098/rspb.2003.2626)
42. Reader SM, Hager Y, Laland KN. 2011 The evolution of primate general and cultural intelligence. Phil. Trans. R. Soc. B 366, 1017–1027. (doi:10.1098/rstb.2010.0342)
43. Adams ES. 1990 Boundary disputes in the territorial ant Azteca trigona: effects of asymmetries in colony size. Anim. Behav. 39, 321–328. (doi:10.1016/S0003-3472(05)80877-2)
44. Wilson ML, Wrangham RW. 2003 Intergroup relations in chimpanzees. Annu. Rev. Anthropol. 32, 363–392. (doi:10.1146/annurev.anthro.32.061002.120046)
45. Torney CJ, Levin SA, Couzin ID. 2010 Specialization and evolutionary branching within migratory populations. Proc. Natl Acad. Sci. USA 107, 20 394–20 399. (doi:10.1073/pnas.1014316107)
46. West SA, Griffin AS, Gardner A. 2007 Evolutionary explanations for cooperation. Curr. Biol. 17, R661–R672. (doi:10.1016/j.cub.2007.06.004)
47. Brown SP, Taylor PD. 2010 Joint evolution of multiple social traits: a kin selection analysis. Proc. R. Soc. B 277, 415–422. (doi:10.1098/rspb.2009.1480)
48. Brosnan SF, Salwiczek L, Bshary R. 2010 The interplay of cognition and cooperation. Phil. Trans. R. Soc. B 365, 2699–2710. (doi:10.1098/rstb.2010.0154)
49. McNally L, Brown SP, Jackson AL. 2012 Cooperation and the evolution of intelligence. Proc. R. Soc. B 279, 3027–3034. (doi:10.1098/rspb.2012.0206)