Setting Group Priorities: Swarms vs. Votes
Louis Rosenberg and David Baltaxe
Unanimous A.I.
2443 Fillmore Street, #116
San Francisco, CA, USA
david@unanimousai.com
Abstract: As established by the Condorcet Jury Theorem, the
statistical average of a group-wise vote will generally outperform
the accuracy of the individual participants. Because of this, many
organizations use polls and surveys for critical decisions, such as
setting group priorities. Unfortunately, the conditions required by
the Condorcet Jury Theorem are very strict, demanding (a) that
participants are fully independent when casting votes, with no
cross-team influences or social biasing, (b) that all members of the
team are skilled performers who render correct decisions more
than 50% of the time, and (c) that the questions are binary, with
members selecting between only two options. A major problem,
therefore, is that real-world teams engaged in authentic decisions,
judgements, and estimations rarely satisfy the ideal conditions for
statistical accuracy amplification. The present study explores the
use of “human swarming” as an alternative to polls and surveys
for real-world tasks such as the setting of group priorities. More
specifically, this study tasked a group of 43 voting-age Americans
with prioritizing a set of political objectives by vote and by swarm,
and then asked the members to rate their satisfaction with the
resulting prioritizations. It was found that 68% of the participants
rated the swarm-based result as a more accurate reflection of their
personal priorities than the vote-based result. In addition, 74% of
participants rated the swarm-based result as a more accurate
reflection of the group’s priorities than the vote-based result.
With satisfaction being a core success measure for a prioritization
task, it appears that real-time swarming may offer groups a
significant benefit as compared to traditional polls and surveys.
Keywords: Swarm Intelligence, Artificial Intelligence, Human
Swarming, Wisdom of Crowds, Collective Intelligence
I. INTRODUCTION
From business teams to political parties, organizations often
find it extremely challenging to prioritize their top objectives.
As a consequence, priority-setting can easily become a high
conflict endeavor within teams, especially when the group is
diverse, including participants of varied background, discipline,
or expertise. To make matters worse, conflict in priority-setting
is not just unpleasant, it can be counterproductive, reducing the
buy-in among participants in the final outcome. To mitigate such
conflicts, many organizations have turned away from purely
deliberative priority-setting methods in favor of more objective
statistical means, using votes, polls, and surveys to derive
average results that inform group-wise prioritization. This
approach is often justified by historical research showing that the statistical average of group decisions, forecasts, and judgements outperforms the accuracy of individual responses [1].
Much of the rationale for treating groups as statistical rather
than deliberative entities goes back to the Marquis de Condorcet,
who worked to justify the shift from dictatorial monarchy to
representative democracy during the turmoil of the French
Revolution. His intent was to validate the “will of the people” as
an intelligent and effective way to reach societal decisions,
render judgements, and set political priorities. Memorialized as
the Condorcet Jury Theorem, his work shows that so long as each member of a group provides a correct judgement more than 50% of the time, the statistical average of the group will outperform the individuals; the larger the group, the greater the accuracy advantage. The theorem requires, however, that all individuals provide their input independently, with no influence from other members. In other words, no deliberation, cross-pollination, or social biasing: a purely statistical result that averages individuals in perfect isolation [2].
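For reference, the binary, odd-sized-group case of the theorem can be stated compactly. The formulation below is a standard textbook statement supplied for clarity; it is not an equation reproduced from Condorcet or from this paper:

P_{\text{maj}}(n,p) \;=\; \sum_{k=(n+1)/2}^{n} \binom{n}{k}\, p^{k} (1-p)^{n-k},
\qquad
\lim_{n\to\infty} P_{\text{maj}}(n,p) \;=\;
\begin{cases}
1, & p > 1/2 \\
0, & p < 1/2
\end{cases}

Here n is the (odd) number of independent voters, p is the probability that any one voter answers a binary question correctly, and P_maj is the probability that the simple majority is correct.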
But what if the individuals are not correct more than 50% of
the time as required by the Condorcet Jury Theorem? In such
cases, the statistical average of participants will underperform
the accuracy of individuals, with the collective insights getting
less accurate as the group size increases. This makes polling a
risky endeavor for group decision-making as it can amplify poor
judgement. Furthermore, is it realistic to model participants in
real-world decision tasks as purely independent actors, as is
formally required by the Jury Theorem? Probably not, for most
members of a working team share similar biases and impose
cross-team influences, not to mention the impact of a shared
organizational culture. Clearly, the strict idealization of the Condorcet Jury Theorem collides with real-world practicalities. In addition, while the use of statistical averages via vote, poll, or survey has been shown to give improved results in idealized cases, there is no reason to believe that such methods yield the very best results. This inspires the research question: is there a better way for groups to decide upon their common priorities?
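As a rough, hypothetical illustration of this risk (not part of the study), the short Python sketch below estimates majority-vote accuracy by Monte Carlo simulation. The function name, the individual accuracies of 0.6 and 0.4, and the group sizes are arbitrary choices made only for this example:

import random

def majority_accuracy(p, n, trials=20000):
    """Estimate how often a simple majority of n independent voters,
    each correct with probability p, picks the correct binary option."""
    wins = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < p for _ in range(n))
        if correct_votes > n / 2:
            wins += 1
    return wins / trials

# Skilled individuals (p = 0.6): majority accuracy climbs as the group grows.
# Unskilled individuals (p = 0.4): majority accuracy falls as the group grows.
for p in (0.6, 0.4):
    for n in (1, 11, 101):
        print(f"p={p}  n={n:>3}  majority accuracy = {majority_accuracy(p, n):.3f}")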
To find a more effective method for group prioritization, the
present researchers looked to Mother Nature for guidance.
That’s because many natural species make collective decisions
that greatly outperform the intellectual capacity of the individual
organisms in the group. Referred to as Swarm Intelligence (SI), this amplification is generally achieved in nature through closed-loop systems in which participants explore the decision-space in real-time synchrony and converge on optimal
outcomes. One of the most studied examples of amplified
Swarm Intelligence is among honeybee swarms, which have
been shown to prioritize potential home sites and select the
optimal destination 80% of the time [3]-[6]. But can humans use
similar real-time swarming methods to reach optimized group
decisions? Prior research into human swarming has shown that
by enabling groups of online users to combine their knowledge,
wisdom, insights, and opinions in real-time swarms, enhanced
predictions and forecasts can be made [7]-[11].
Prior research, however, does not address priority setting, which inspires the question: can real-time swarming help groups converge on sets of priorities they prefer over those produced by traditional polls, votes, and surveys? To answer this question,
researchers used the UNU swarm intelligence platform to
compare priority-setting among diverse groups by vote and by
swarm. More specifically, researchers assembled a group of 43
voting-age Americans of mixed party affiliation and tasked them
with evaluating and prioritizing a set of political objectives that
the government should focus on. The group was required to
order the set of priorities, from most important to least
important, in two ways: (i) by ranking individual preferences on
a traditional online survey, which would then be mathematically
combined to set priorities, and (ii) by working together as an online swarm, setting the priorities in real-time synchrony.
II. ENABLING HUMAN SWARMS
To enable real-time decisions among groups of networked
users, the UNU online platform was employed. UNU allows users to log in simultaneously from around the world and
participate in closed-loop swarms. As shown in Figure 1, users
answer questions by collectively moving a graphical puck to
select among a set of alternatives. The puck is modeled as a
physical system with a defined mass, damping and friction.
Users provide input by manipulating a graphical magnet with a
mouse or touchscreen. By positioning their magnet, users
impart their personal intent as a force vector on the puck. The
input from each user is not a discrete vote, but a stream of
vectors that varies freely over time. Because the full population
of users can adjust their intent at every time-step, the puck
moves in response to the dynamics of the full system. This
enables a real-time negotiation among the members of the
swarm, the group collectively exploring the decision-space and
converging on the most agreeable answer [7].
Fig 1. A human swarm comprised of user-controlled magnets.
It’s important to note that users vary not only the direction of their input but also its magnitude, by adjusting the distance between the magnet and the puck. This enables users to convey
not only which choice they prefer most at a given time-step, but
also their level of conviction in that choice. In addition, real-
time predictive algorithms infer variations in user conviction
based on the frequency of choice changes over time.
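The paper does not publish UNU’s actual equations of motion, but the behavior described above (a puck with mass, damping, and friction, driven by the sum of user force vectors whose strength depends on magnet distance) can be sketched as a simple discrete-time simulation. Everything below, including the constants, the distance weighting, and the function name, is an illustrative assumption rather than the platform’s real implementation:

import math

# Illustrative constants only; not UNU's actual parameters.
MASS, DAMPING, DT = 1.0, 0.8, 0.05

def step(puck_pos, puck_vel, magnets):
    """Advance the puck one time-step.

    puck_pos, puck_vel : (x, y) tuples for the puck's position and velocity.
    magnets            : list of (x, y) magnet positions, one per user.
    Each magnet pulls the puck toward itself; here the pull is assumed to
    decay with distance, so a magnet kept close conveys stronger conviction.
    """
    fx = fy = 0.0
    for mx, my in magnets:
        dx, dy = mx - puck_pos[0], my - puck_pos[1]
        dist = math.hypot(dx, dy) or 1e-9
        strength = 1.0 / (1.0 + dist)        # assumed distance weighting
        fx += strength * dx / dist
        fy += strength * dy / dist
    # Damped Newtonian update: acceleration = force / mass.
    vx = DAMPING * (puck_vel[0] + fx / MASS * DT)
    vy = DAMPING * (puck_vel[1] + fy / MASS * DT)
    return (puck_pos[0] + vx * DT, puck_pos[1] + vy * DT), (vx, vy)

# Example: three users pull toward an option at (1, 0), one toward (-1, 0).
pos, vel = (0.0, 0.0), (0.0, 0.0)
for _ in range(200):
    pos, vel = step(pos, vel, [(1, 0), (1, 0), (1, 0), (-1, 0)])
print(pos)  # the puck drifts toward the option with the stronger combined pull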
III. SWARMS VS VOTES
To compare the effectiveness of swarming and voting in the
setting of group priorities, 43 voting-age Americans reviewed a
list of 24 popular political objectives that have been debated
during the 2016 Presidential and Congressional campaigns.
From that full list, participants were asked to identify and rank
which of the objectives they believed should be the top five
priorities for the new President and Congress in 2017. This is a
challenging task for any group, but to ensure high conflict in the prioritization process, the pool of participants was selected as a mix of Republican-, Democrat-, and Independent-leaning voters.
In the first phase of the study, each participant completed an
online survey to identify and rank their top five priorities. The
surveys were performed independently and participants had no
opportunity to communicate with one another about their
selections. In the second phase of the experiment, the
participants worked together as a unified real-time swarm (using
the UNU swarming platform) to collectively rank their top five
priorities. In this way, the 43 participants produced two different sets of priorities: one set generated individually by ranked survey and combined statistically, and one set generated by the group working collectively as a real-time swarm.
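The paper does not state the exact rule used to combine the individual rankings; a Borda-style point count is one common way to aggregate top-five ballots and is sketched below, with made-up ballots, purely as a hypothetical illustration of the "combined statistically" step:

from collections import Counter

def aggregate_rankings(ballots, top_k=5):
    """Combine individual top-k ballots into a single group ordering.

    ballots : list of lists, each an ordered top-k selection of issues.
    Scoring: rank 1 earns top_k points, rank 2 earns top_k - 1, and so on
    (a Borda-style rule; the study's actual aggregation may differ).
    """
    scores = Counter()
    for ballot in ballots:
        for rank, issue in enumerate(ballot[:top_k]):
            scores[issue] += top_k - rank
    return [issue for issue, _ in scores.most_common(top_k)]

# Hypothetical ballots for illustration; not data collected in the study.
ballots = [
    ["Create Jobs", "Provide Universal Healthcare", "Defeat ISIS"],
    ["Provide Universal Healthcare", "Create Jobs", "Eliminate poverty"],
    ["Provide Universal Healthcare", "Eliminate poverty", "Create Jobs"],
]
print(aggregate_rankings(ballots))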
In the final phase of the study, participants were surveyed
again and asked to individually reflect upon the two sets of
priorities that were generated by the group, indicating (a) which
set better reflected their personal views, and (b) which set better
reflected the views of the full population. Participants were also
asked to reflect on the process itself and indicate which
methodology was more enjoyable.
IV. RESULTS
As described above, a group of 43 voting-age Americans,
with mixed party affiliation, collectively produced two ordered
sets of political priorities from a master list of 24 options. As
provided in Figure 2 below, List A shows the top five priorities
produced by the group working together as a unified swarm,
while List B shows the top five priorities produced by
aggregating the rankings provided on the individual surveys.
Fig 2. Ranked priorities produced by (A) swarm and (B) vote.
As shown in Figure 2, the sets of top-five priorities from the
swarm and the survey had significant similarities and important
differences. The key similarity is that the first and second priorities on the lists ("Provide Universal Healthcare" and "Create Jobs") were the same for both approaches. The next three priorities, however, were completely different for the two methodologies. It is interesting to observe that priorities 3, 4, and 5 in List A (from the swarm), "Repair crumbling infrastructure," "Ensure fair elections," and "Reduce college costs and student debt," reflect concrete issues that could have an immediate and direct impact on respondents’ lives. In contrast, priorities 3, 4, and 5 in List B (from the survey), "Eliminate poverty," "Defeat ISIS," and "Reduce wealth inequality between rich and poor," address longer-term issues and are more removed from the day-to-day lives of participants. In fact, respondents commented that issues such as "eliminating poverty" were not realistic goals for any President and Congress to tackle, and yet that issue was highly ranked in the survey results. This suggests that when filling out the survey (an abstract individual exercise), respondents may have felt a personal need to express abstract altruistic goals, while in the collaborative swarm, where every ranking was a real-time exercise in group negotiation and compromise, they provided responses that were more grounded and realistic.
The findings raise the question of whether there is a bias
towards “altruism” associated with surveys, as the individuals
may feel they are being personally judged and therefore may be
more inclined to answer the way "they think they’re supposed to" as opposed to how they truly feel. Referred to generally as the Hawthorne Effect, this conforms with prior research suggesting that altruistic bias can distort participants’ true feelings when they provide individual responses [12]. This raises an important question: does swarming mitigate this problem by having participants respond together as a synchronous group? To explore this, Part III of the research asked the 43 participants to reflect on each set of priorities.
In Part III, participants were asked to review both sets of
priorities and independently complete an online questionnaire.
Participants were asked to indicate which set of issues best
represented their personal political priorities. As shown in
Figure 3 below, 66% of the respondents favored the list that
resulted from the swarm, compared with 34% that favored the
results of the survey.
Fig 3. "Which list best represents your priorities", among those that
expressed a preference (n=36)
Participants were also asked to reflect on which process
(swarm or survey) they found to better represent their view of
the group’s overall priorities. As shown in Figure 4, 74% of
the respondents believed the swarm better represented the
priorities of the group, while 26% believed the results of the survey were more representative. This result suggests improved buy-in among the participants, as roughly three out of four believed the swarming process yielded a more accurate reflection of the group’s collective will.
Fig 4. “Which process best represents the group’s opinions?” among those
that expressed a preference (n=34)
Lastly, the participants were asked to reflect on the process
itself and indicate which method they found to be more enjoyable: prioritizing by survey or prioritizing by swarm. As
shown in Figure 5, 65% of the respondents found the swarming
process to be more enjoyable, while 35% preferred the survey.
These results echo other research that indicates that swarming is
a more pleasant process than taking surveys. This is an
important result, for one of the primary logistical barriers to
collecting data by survey is user aversion to the process.
Fig 5. “Which was a more enjoyable experience?” among those that
expressed a preference (n=34)
V. DISCUSSION AND CONCLUSIONS
As reflected by the results above, this study suggests that
human swarming may be a more effective methodology for
setting priorities among diverse groups than traditional polling.
When participants compared the output of their swarm with the
aggregate results of their survey responses, a significant
majority reported that the swarm better represented both their
personal priorities and their perceived opinions of the broader
group. Two-thirds of the subjects also found that participating in
the unified swarm was more enjoyable than taking the survey.
With surveys and other forms of polling widely used by
business organizations, market researchers, and news outlets to
gauge the sentiments of the public, the benefits of swarming may
have many applications. Surveys aggregate individual opinions
as isolated snapshots, highlighting differences within the group
rather than explicitly eliciting common ground. Surveys may also encourage participants to mask their true feelings in favor of what they believe they "should say." In contrast, the swarming
process immerses respondents in a group decision dynamic that
is specifically aimed at converging on common ground and
results in clearer representations of overall group intent.
Swarming may also mitigate the Hawthorne Effect by enabling
respondents to feel part of a synchronous group rather than an
exposed individual who risks being personally judged. And
finally, swarming is perceived to be more enjoyable than
surveys and is therefore more likely to get repeat engagements.
ACKNOWLEDGMENT
This work was directly supported by Unanimous A.I., the
maker of the UNU platform for real-time human swarming. For
more information about UNU, visit http://UNU.ai.
REFERENCES
[1] Armstrong, J.S. (2001), Principles of forecasting: a handbook for researchers and practitioners, Kluwer Academic Publishers, pages 417-439.
[2] Bottom, W.P., Ladha, K., and Miller, G.J. (2002), “Propagation of Individual Bias through Group Judgement,” Journal of Risk and Uncertainty, 25: 152-154.
[3] Seeley, Thomas D., Visscher, P. Kirk (2003). “Choosing a home: How the scouts in a honey bee swarm perceive the completion of their group decision making.” Behavioral Ecology and Sociobiology 54 (5): 511-520.
[4] I.D. Couzin, “Collective Cognition in Animal Groups,” Trends Cogn. Sci. 13, 36 (2008).
[5] J.A.R. Marshall, R. Bogacz, A. Dornhaus, R. Planqué, T. Kovacs, and N.R. Franks, “On optimal decision making in brains and social insect colonies,” J. R. Soc. Interface 6, 1065 (2009).
[6] Seeley, Thomas D. Honeybee Democracy. Princeton University Press,
2010.
[7] Rosenberg, L.B., “Human Swarms, a real-time method for collective
intelligence.” Proceedings of the European Conference on Artificial Life
2015, pp. 658-659
[8] Rosenberg, Louis. "Artificial Swarm Intelligence vs Human Experts",
Neural Networks (IJCNN), 2016 International Joint Conference on. IEEE.
[9] Palmer, Daniel W., et al. "Emergent Diagnoses from a Collective of
Radiologists: Algorithmic versus Social Consensus Strategies." Swarm
Intelligence. Springer International Publishing, 2014. 222-229.
[10] Eberhart, Russell, Daniel Palmer, and Marc Kirschenbaum. "Beyond
computational intelligence: blended intelligence." Swarm/Human
Blended Intelligence Workshop (SHBI), 2015. IEEE, 2015.
[11] L. B. Rosenberg, "Human swarming, a real-time method for parallel
distributed intelligence," Swarm/Human Blended Intelligence Workshop
(SHBI), 2015, Cleveland, OH, 2015, pp. 1-7.
[12] Bardsley, N. (2008). Dictator game giving: altruism or
artefact?. Experimental Economics, 11(2), 122-133.