INSIGHTS | PERSPECTIVES

HUMAN COMPUTATION

The power of crowds

Combining humans and machines can help tackle increasingly hard problems
By Pietro Michelucci1 and Janis L. Dickinson2
Human computation, a term introduced by Luis von Ahn (1), refers to distributed systems that combine the strengths of humans and computers to accomplish tasks that neither can do alone (2). The seminal example is reCAPTCHA, a Web widget used by 100 million people a day when they transcribe distorted text into a box to prove they are human. This free cognitive labor provides users with access to Web content and keeps websites safe from spam attacks, while feeding into a massive, crowd-powered transcription engine that has digitized 13 million articles from The New York Times archives (3). But perhaps the best known example of human computation is Wikipedia. Despite initial concerns about accuracy (4), it has become the key resource for all kinds of basic information. Information science has begun to build on these early successes, demonstrating the potential to evolve human computation systems that can model and address wicked problems (those that defy traditional problem-solving methods) at the intersection of economic, environmental, and sociopolitical systems.
Like reCAPTCHA, many human computation systems harness the combined efforts of individuals to integrate fast, repetitive work into a single answer. In doing so, they often take advantage of human visual perception, which remains unmatched by machines. Often, small tasks are distributed to many individuals—a method termed microtasking (see the figure, panel A). In one example, 165,000 citizen scientists in 145 countries are using the EyeWire platform to map the three-dimensional structure of retinal neurons. This mapping provides the first glimpse of how the structure and organization of neurons in a mammalian retina function to detect motion (5). Visual microtasking is also being used to speed up medical analysis in projects such as MalariaSpot, in which 22 casual gamers count parasites as accurately as a single trained pathologist (6).
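The pattern underlying such systems is worth making concrete: each microtask is assigned redundantly to several contributors, and their independent answers are aggregated, typically by majority vote. A minimal sketch in Python (the task identifiers and labels are illustrative, not the data model of any platform named above):

```python
from collections import Counter

def aggregate_by_majority(labels):
    """Return the most common label and the fraction of workers who chose it."""
    counts = Counter(labels)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(labels)

# Each image (microtask) was independently classified by several contributors.
responses = {
    "image_001": ["parasite", "parasite", "artifact", "parasite"],
    "image_002": ["artifact", "artifact", "artifact"],
}

for task_id, labels in responses.items():
    label, agreement = aggregate_by_majority(labels)
    print(f"{task_id}: {label} (agreement {agreement:.0%})")
```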
Microtasking is well suited to problems that can be addressed by repeatedly applying the same simple process to each part of a larger data set (see the figure, panel A), such as stitching together photographs contributed by residents to decide where to drop water during a forest fire. Microtasking alone, however, is inadequate for addressing wicked problems, such as climate change, disease, and geopolitical conflict, which are dynamic, involve multiple, interacting systems, and have nonobvious secondary effects, such as political exploitation of a pandemic crisis. These problems require world knowledge, multistep reasoning, and creative abstraction to illuminate new mitigation strategies. The human computation ecosystems of the future (see the figure, panel C) have huge potential to help address wicked problems, but are currently being explored in less wicked contexts.
Evolution of human computation systems. (A) Crowdsourcing breaks large tasks down into microtasks, which can be things at which humans excel, like classifying images. The microtasks are delivered to a large crowd via a user-friendly interface, and the data are aggregated for further processing. (B) Complex workflows funnel crowdworkers into roles such that workers at each step use and augment the information provided by previous workers. (C) In creating problem-solving ecosystems, researchers are beginning to explore how to combine the cognitive processing of many human contributors with machine-based computing to build faithful models of the complex, interdependent systems that underlie the world's most challenging problems. Prototypes of these systems provide online workspaces that allow participants to engage in open-ended activities where they contribute, combine, revise, connect, evaluate, and integrate data and concepts within a common analytic framework and, in some cases, prescribe or take actions in the real world. These ecosystems hold promise for responding more effectively to disasters as well as chronic challenges like climate change and geopolitical conflict.
[Figure schematic: (A) Microtasking: a large task is broken into microtasks that are assigned, completed, and aggregated into a result. (B) Tasks are routed by decision branching (easy versus difficult), with results analyzed and assessed to generate new tasks. (C) Problem-solving ecosystem: a shared workspace for ideation, revision, integration, and evaluation connects crowd contributors and experts with models of the economy, environment, and social systems.]
Prototypical examples include the Polymath Project (7), which helped prove an 80-year-old mathematical theorem, and ePluribus Problem Solver (8), which produced a factually accurate and well-constructed journalistic article based on just a handful of photographs distributed to naïve public participants. In both cases, diverse participants worked in tandem to generate new insights with real-time collective intelligence, rather than first gathering the human-based inputs and then processing them afterward. This is made possible by enabling participants to work independently, feeding them novel viewpoints from the community, and allowing for multiple, visible solutions. In the case of the Polymath Project, human contributors adhered to a set of established ground rules, whereas in ePluribus, a computer program and its user interface enforced the ground rules.
Designing increasingly complex systems also requires increased understanding of the human-machine feedback loop. For example, systems can be designed to enhance individual contributions. This is the case for CrowdCrit, which elicits professional-quality critiques of graphic design from the general public. It does this by building domain knowledge into the workflow, so that nonexperts are guided to provide expert-like feedback (9).
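One way to read this design is that expert knowledge is encoded as structured prompts that the interface walks each novice through. A hypothetical sketch of such scaffolding in Python; the design principles and wording here are illustrative, not CrowdCrit's actual rubric:

```python
# Each scaffold item pairs a design principle with a concrete question,
# so a novice reviewer produces feedback organized around expert concepts.
SCAFFOLD = [
    ("visual hierarchy", "Which element draws your eye first? Is it the most important one?"),
    ("alignment",        "Do related elements line up along shared edges or a grid?"),
    ("contrast",         "Is the text easy to read against its background?"),
]

def collect_critique(ask):
    """Walk a reviewer through the scaffold; `ask` supplies each answer."""
    return {principle: ask(question) for principle, question in SCAFFOLD}

# Simulated reviewer for demonstration; in a real system `ask` would be a UI form.
critique = collect_critique(lambda q: f"(reviewer's answer to: {q})")
for principle, answer in critique.items():
    print(f"[{principle}] {answer}")
```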
In CrowdCrit, a computer augments human performance, but the reverse is also possible: Human inputs can be used to guide computers to be more effective, as seen in interactive genetic algorithms, which apply the process of natural selection to evolve new solutions. These algorithms mutate, recombine, and prune candidate solutions to ensure that each successive generation of ideas improves upon the previous one. However, implementing such algorithms for real-world applications often requires contextual knowledge and cognitive capabilities that elude machines. By having humans do the steps that computers cannot do, such as producing, combining, and evaluating ideas, an interactive evolutionary algorithm produced expert-quality solutions for the Gulf of Mexico oil spill (10).
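This division of labor can be sketched directly: the machine manages the population and the selection loop, while humans supply the fitness judgments and recombinations. A toy Python version, with stand-ins where a real system would post tasks to a crowd (the scoring and synthesis functions are simulated, not the implementation used in (10)):

```python
import random

def evolve(initial_ideas, human_score, human_combine, generations=5, keep=3):
    """Interactive evolutionary loop: the machine manages the population;
    humans evaluate (and here also recombine) candidate solutions."""
    population = list(initial_ideas)
    for _ in range(generations):
        # Humans score each candidate; the machine keeps the best ones.
        ranked = sorted(population, key=human_score, reverse=True)
        survivors = ranked[:keep]
        # Humans combine surviving ideas into new candidates ("recombination").
        children = [human_combine(random.sample(survivors, 2))
                    for _ in range(len(population) - keep)]
        population = survivors + children
    return max(population, key=human_score)

# Stand-ins for crowd input, so the sketch runs end to end.
ideas = ["boom barriers", "skimmer fleet", "dispersant", "containment dome"]
best = evolve(ideas,
              human_score=lambda idea: random.random(),     # crowd rating
              human_combine=lambda pair: " + ".join(pair))  # crowd synthesis
print("Best candidate:", best)
```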
Until recently, systems that transcend simple microtasking had to be created from scratch due to a lack of supportive infrastructure. Today, workflow tools have emerged to accelerate development and improve the reliability of human computation systems. The TurKit toolkit, for example, has enabled iterative task delegation (see the figure, panel B) in the generalized crowdsourcing platform Amazon Mechanical Turk, so that contributors can build on each other's work (11). Furthermore, QuikTurkit enables real-time crowd responses by prefetching answers before they are needed and queuing multiple jobs at the same time (12). These architectural building blocks simplify the development of human computation ecosystems (see the figure, panel C).
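The core pattern TurKit popularized, iterative improvement, chains workers so that each sees and can edit the previous worker's output, with other workers voting on whether to keep each edit. A schematic Python sketch of that workflow (TurKit itself is a JavaScript toolkit; all function names here are illustrative):

```python
def iterative_improvement(seed, improve, vote, iterations=3):
    """Each round, one worker improves the current draft and
    other workers vote on whether the edit should be kept."""
    current = seed
    for _ in range(iterations):
        candidate = improve(current)   # one crowdworker edits the draft
        if vote(current, candidate):   # other workers judge: keep the edit?
            current = candidate
    return current

# Simulated workers so the sketch runs; a real system would post these
# steps as tasks on a platform such as Amazon Mechanical Turk.
draft = iterative_improvement(
    seed="a blurry photo of a street sign",
    improve=lambda text: text + " [refined]",
    vote=lambda old, new: len(new) > len(old),
)
print(draft)
```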
In the majority of human computation systems, a small number of participants do most of the work. Given this highly skewed distribution of effort, we need to improve our understanding of how to maximize recruitment and retention of participants, enhance skill development, and maximize the total effort that participants contribute to projects (13). It may be even more challenging to maximize the efficiency with which human inputs, information sharing, and machine-based processing work together. Machines tend to give predictable outputs, such that errors can always be traced to faulty code or design, but humans are less predictable in terms of their availability and the quality of their work.
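One common response to uneven human reliability is to calibrate each contributor against tasks with known answers and weight their input accordingly. A minimal sketch, assuming a simple accuracy-based weighting (production systems typically use more sophisticated statistical models):

```python
def accuracy_on_gold(answers, gold):
    """Fraction of gold-standard tasks a worker answered correctly."""
    correct = sum(answers.get(task) == truth for task, truth in gold.items())
    return correct / len(gold)

def weighted_vote(worker_labels, weights):
    """Sum each worker's weight behind their label; return the winner."""
    totals = {}
    for worker, label in worker_labels.items():
        totals[label] = totals.get(label, 0.0) + weights[worker]
    return max(totals, key=totals.get)

gold = {"t1": "yes", "t2": "no"}                # tasks with known answers
history = {"ann": {"t1": "yes", "t2": "no"},    # reliable worker
           "bob": {"t1": "no",  "t2": "no"}}    # less reliable worker
weights = {w: accuracy_on_gold(a, gold) for w, a in history.items()}
print(weighted_vote({"ann": "yes", "bob": "no"}, weights))  # ann outweighs bob
```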
Human computation thus requires a departure from traditional computer science methods and can benefit from design approaches based on integrated understandings of human cognition, motivation, error rates, and decision theory. Research relating task performance to workflow design and participant experience is sparse; new A/B tests that examine how manipulating such factors can increase performance would increase the predictability of future systems.
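Such an A/B test can be as simple as randomly assigning incoming contributors to two workflow variants and comparing task accuracy with a two-proportion z-test. A sketch using only the Python standard library; the counts are invented for illustration:

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic and two-sided p-value for the difference of two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# Variant A: baseline workflow; variant B: adds worked examples to the task page.
z, p = two_proportion_z(success_a=412, n_a=500, success_b=451, n_b=500)
print(f"z = {z:.2f}, p = {p:.4f}")
```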
Crowdsourcing arose with citizen science as a form of volunteerism, but now spans the gamut from work (paid microtasking) to play (games for good). As more work is done in crowdsourcing environments, we need to consider what this means for the labor force, unemployment rates, and the economy. Crowdsourcing provides on-demand labor for companies, but it is currently outside the purview of labor laws. This must be addressed so that crowdworkers are protected from exploitation.
Some believe that faster computer processing speeds will eventually bridge the gap between machine-based intelligence and human intelligence. However, human computation already affords a tremendous opportunity to combine the respective strengths of humans and machines toward unprecedented capabilities in the short term. It is important that nefarious uses, such as disinformation engineering, in which human computation systems are designed to incite panic, steal information, or manipulate behavior (14), are not overlooked. Community-driven guidance concerning transparency, informed consent, and meaningful choice is emerging to address the ethical and social implications of increasingly pervasive and diverse forms of online participation (15). Ethical standards can help to ensure that human computation remains humanistic. ■
REFERENCES AND NOTES
1. L. von Ahn, thesis, Carnegie Mellon University, Pittsburgh, PA (2005).
2. A. J. Quinn, B. B. Bederson, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Association for Computing Machinery, New York, 2011; http://doi.acm.org/10.1145/1978942.1979148), pp. 1403–1412.
3. L. von Ahn, B. Maurer, C. McMillen, D. Abraham, M. Blum, Science 321, 1465 (2008).
4. J. Giles, Nature 438, 900 (2005).
5. J. S. Kim et al., Nature 509, 331 (2014).
6. M. A. Luengo-Oroz, A. Arranz, J. Frean, J. Med. Internet Res. 14, e167 (2012).
7. T. Gowers, M. Nielsen, Nature 461, 879 (2009).
8. K. Greene, D. Thomsen, P. Michelucci, Secur. Inform. 1, 12 (2012).
9. K. Luther et al., in Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (Association for Computing Machinery, New York, 2015; http://doi.acm.org/10.1145/2675133.2675283), CSCW '15, pp. 473–485.
10. J. V. Nickerson, Y. Sakamoto, L. Yu, in CHI 2011 Workshop on Crowdsourcing and Human Computation, M. Bernstein et al., Eds. (Association for Computing Machinery, New York, 2011), pp. 1–4.
11. G. Little, L. B. Chilton, M. Goldman, R. C. Miller, in Proceedings of the ACM SIGKDD Workshop on Human Computation (Association for Computing Machinery, New York, 2009; http://dl.acm.org/citation.cfm?id=1600159), pp. 29–30.
12. W. S. Lasecki, C. Homan, J. P. Bigham, Hum. Comput. 10.15346/hc.v1i1.29 (2014).
13. D. Easley, A. Ghosh, in Proceedings of the Sixteenth ACM Conference on Economics and Computation (Association for Computing Machinery, New York, 2015; http://doi.acm.org/10.1145/2764468.2764513), pp. 679–696.
14. D. W. McDonald et al., Interactions 21, 72 (2014).
15. A. Bowser, A. Wiggins, Hum. Comput. 2, 19 (2015).
ACKNOWLEDGMENTS
We are grateful to J. P. Bigham, R. J. Crouser, J. Hendler, A. Kittur, W. Lasecki, and T. Malone for insightful comments, and to M. Grady for help with the artwork.
10.1126/science.aad6499
1Human Computation Institute, Fairfax, VA 22032, USA. 2Cornell Lab of Ornithology, Cornell University, Ithaca, NY 14850, USA. E-mail: pem@humancomputation.org