Note: This is a pre-publication draft submitted to the 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
Human computation requires and enables a new
approach to ethical review
Libuše H. Vepřek
Human Computation Institute
Ithaca, NY 14850
Ludwig-Maximilians-Universität München
l.veprek@lmu.de
Patricia Seymour
Human Computation Institute
Ithaca, NY 14850
pannseymour@yahoo.com
Pietro Michelucci
Human Computation Institute
Ithaca, NY 14850
pem@humancomputation.org
Abstract
With humans increasingly serving as computational elements in distributed in-
formation processing systems and in consideration of the profit-driven motives
and potential inequities that might accompany the emerging thinking economy[1],
we recognize the need for establishing a set of related ethics to ensure the fair
treatment and wellbeing of online cognitive laborers and the conscientious use
of the capabilities to which they contribute. Toward this end, we first describe
human-in-the-loop computing in the context of the new concerns it raises that are not
addressed by traditional ethical research standards. We then describe shortcomings
in the traditional approach to ethical review and introduce a dynamic approach
for sustaining an ethical framework that can continue to evolve within the rapidly
shifting context of disruptive new technologies.
1 Introduction
A new branch of artificial intelligence called “human computation” has emerged over the last 15
years that combines the respective strengths of humans and computers to tackle problems that cannot
be solved in other ways[2]. Information processing systems based on this approach often employ
online crowdsourcing to delegate to humans cognitive “microtasks” that elude the capabilities of
machine-based methods. Real-world human computation systems are already advancing cancer[3],
HIV[4], and Alzheimer’s[5] research, diagnosing malaria[6] in sub-Saharan Africa, reducing female
genital mutilation in Tanzania[7], predicting flood effects in Togo[8], endowing the blind with
real-time scene understanding[9], expediting disaster relief despite language barriers and failing
infrastructure[10], rewriting our understanding of cosmology[11][12], and improving predictions in
conservation science[13].
Four main paradigms exist for engaging people in human computation tasks. One of the first online
human computation systems, reCAPTCHA, operates on a quid-pro-quo basis, requiring a person
to digitize distorted text to demonstrate being human, which provides access to a website while
simultaneously contributing to a massive text digitization project[14]. Modern versions of this, such
as hCaptcha, require people to select all images from a provided set that contain some class of objects,
such as traffic lights. This generates revenue by providing a data-labelling service that is used to help
train customers’ machine learning models[15]. Another paradigm, citizen science, entices public
volunteers to participate in the scientific process[16] through which they collect and analyze research
data in exchange for hands-on opportunities to learn about various research topics. Citizen science has
been steadily gaining popularity via community connectors like SciStarter.org and curation platforms
like Zooniverse. In a third engagement paradigm, “clickworkers” are typically paid via crowdsourcing
marketplaces, such as Amazon Mechanical Turk and ClickWorker.com, to participate in various online
tasks that might be used for science, marketing, or product development applications[17]. Mooqita
represents a version of paid crowdsourcing that uses massive open online courses (MOOCs) as a way
to onboard new employees for salaried jobs involving online cognitive labor for a single organization
rather than a marketplace. In the final and most recent paradigm, popular online games such as
EVE Online and Borderlands Science have begun embedding microtasks within existing gameplay
to engage potentially millions of online gamers who opt-in to participate in exchange for various
advancement opportunities in the games[18].
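In many of these systems the underlying computational pattern is similar: a microtask is replicated across several contributors and the redundant responses are aggregated into a single answer. The following minimal sketch is not drawn from any particular platform; the Response record and the simple majority-vote rule are illustrative assumptions showing how crowd image-labelling microtasks of the kind used by CAPTCHA-style services might be collapsed into consensus training labels.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Response:
    """One contributor's answer to a single microtask (hypothetical record)."""
    task_id: str        # e.g., the image being labelled
    contributor_id: str
    label: str          # e.g., "traffic_light" or "no_traffic_light"

def aggregate_by_majority(responses: list[Response]) -> dict[str, str]:
    """Collapse redundant crowd answers into one consensus label per task."""
    by_task: dict[str, list[str]] = {}
    for r in responses:
        by_task.setdefault(r.task_id, []).append(r.label)
    # Simple majority vote; real systems typically weight votes by each
    # contributor's estimated accuracy rather than counting them equally.
    return {task: Counter(labels).most_common(1)[0][0]
            for task, labels in by_task.items()}

if __name__ == "__main__":
    votes = [
        Response("img_001", "alice", "traffic_light"),
        Response("img_001", "bob", "traffic_light"),
        Response("img_001", "carol", "no_traffic_light"),
    ]
    print(aggregate_by_majority(votes))  # {'img_001': 'traffic_light'}
```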
Humans are squarely involved in human computation systems, and although most configurations do
not constitute human subjects research, they often pose new ethical dilemmas. Thus, traditional re-
search values and review practices typically do not apply to human computation and seem inadequate
to address the many new human-centered contexts produced by this growing field of study.
This becomes especially apparent in ethical review processes. Currently, online citizen science
research is reviewed by the same standards as clinical trials, although the review board's understanding
of “human subject” or “research participant” does not fit the role of participants in online citizen
science. Citizen scientists are not mere “human subjects” in a research study, but can be researchers,
human subjects, or sometimes both at the same time. This implies that their role has to be communi-
cated and elucidated to the traditional review boards. These new role allocations also come along with
different ideas and expectations for the different stakeholders, especially those performing tasks. For
example, David Resnik shows that when the role of citizen scientists exceeds that of a mere human subject,
they may feel more connected to the research and claim greater ownership over the data they collect[19]. Moreover,
in the arena of citizen science and human computation, traditional IRB is not clearly mandated and,
when used, it rigidly enforces ethical standards most often associated with biomedical research.
Ironically, human computation provides a potential solution to these ethical challenges. Herein we
argue for a new approach to ethical oversight that addresses the needs of online citizen science and
the new forms of human-computer collaboration in a digital age. First, we explain how a distinction
between morals and ethics can be useful for this endeavor. Based on this distinction, we explore
how we might appeal to the participatory methods of human computation to design a technosocial
platform that enables the curation of a “living” set of ethics and crowdsources the application of those
ethics to a suitable review process.
2 Morality vs. Ethics
In everyday life “morality” and “ethics” are used synonymously to describe the “good” in contrast to
the “bad” or to distinguish between “right” and “wrong”. Formally, however, Ethics is considered a
branch of philosophy that studies morality[20], or in the words of the German philosopher Dietmar
Hübner: “Morality is the object, ethics is science."[21] This brings us to the question: what
is morality? According to Hübner, morality is “a system of norms, whose subject is human behavior and
which claims unconditional validity" (ibid.: 13, translation by the authors). This means that different “morals" exist
in different contexts such as different cultures or political currents, and there even exist different
morals for specific groups of people like doctors, journalists and scientists. To decide whether a given
moral is right or wrong therefore requires reflecting on the normative context in which it occurs. Thus,
the aim of differentiating between “morality" and “ethics" is not to enforce correct usage but to
orient our perspective as we consider ethical review in the new kinds of digital collaborations that
manifest in human computation systems such as online citizen science.
We can now distinguish two different layers that need to be considered: The first layer consists of
the moral framework in the field. We need to understand what the moral values of the different
stakeholders in human computation and citizen science are: For example, what is important to citizen
science participants? What do they expect from their participation and how do they want to be
acknowledged? This understanding of “moral framework” could be related to what Lisa Rasmussen
calls the “citizen science ethos”[23]. To identify moral values in online citizen science, we introduced
an online discussion forum and invited both citizen scientists and citizen science practitioners to
contribute[22]. The analysis of the different discussion threads shows that mutual respect, inclusion,
and transparency, among other values, belong to the “moral framework" of citizen science. This could then inform the
second layer, which consists of reflecting this moral framework in a set of ethical guidelines. These guidelines
could inform both the design and execution of citizen science projects and their evaluation by ethical
oversight committees, who can then review citizen science research in accordance with community
values. Of course, these ethical guidelines would need to remain flexible as we may discover that
values that apply to most citizen science projects may still be ill-suited to certain specific studies.
3 Traditional IRB
The difference in our attitudes toward people who participate in research can perhaps be seen best
in the debate over whether to use the term “participant" or “subject" to refer to someone who
volunteers their body, mind, and time to research. In 2014, Public Responsibility in Medicine
and Research (PRIM&R) put forth the following statement on this debate: “subject” is the most
appropriate title for those involved in research studies (recognizing, however, that in some instances
“participant” may be appropriate; for example, in community-based participatory research). In the
world of citizen science, however, participant activities may be more closely aligned with the work
that scientists do. This means that currently there is no existing model of review that appropriately
assigns autonomy to those engaged.
Moreover, the absence of ethical guidelines for citizen science creates a dilemma for independent/institutional
review boards (IRBs) or ethics review boards (ERBs) because they have to choose between
either applying ethical guidelines that do not fit the application or making their own decisions about
what’s right or wrong. The lack of consistent interpretation of research studies has prompted the
National Institutes of Health in the United States to mandate a single IRB review for some federally
funded research. The researcher, at the time of the grant application, identifies an IRB that will serve
as the only IRB for the project. This removes the multitude of review decisions, consent variation and
timing delays that routinely plague even the most prestigious and well-funded research institutions.
Throwing technology at the issue of IRB review was thought to be an answer to better organize and
streamline what had, for many years, been a paper-based process. The result of the development and use of
these electronic, online platforms has not been particularly helpful, because it applies technology to an inefficient
process. To make matters worse, the technology is customized to the already-broken system, perpetuating the
pre-existing problems in an online context. Moreover, the systems available in the U.S. do not communicate
between institutions and do not solve the problem of inconsistent decisions and documents.
Lastly, we build on our own experiences with IRB processes in the field of human computation-based
citizen science: Working with a traditional IRB highlighted the inconsistencies and infeasibility of
applying the biomedically oriented approach of most US IRBs. Fortunately, the experience was
mitigated by a knowledgeable and flexible liaison who helped inform the members of the IRB
about how human computation differs from biomedical research. For example, the concepts of
risk and benefit, as defined by U.S. regulations and guidance, did not apply in a straightforward
manner to human computation. The protectionism of the US regulations may not be applicable to
human computation, nor does it account for participants' autonomy when performing tasks in this
arena. Each difference in approach and regulatory requirement had to be addressed, causing delays
and redundancies that were not conducive to efficient review.
Why should citizen science, and indeed the broader space of human computation, as a relatively new
approach to research, enter the fray of poorly organized and inefficient traditional review processes
just because review has always been done this way? Instead, we envision a triage-based approach
that determines whether traditional research ethics review applies to a research project based on the
project goals and participant task requirements and flips four key aspects of traditional IRB. It also
involves input from members of the human computation community at large.
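As a rough illustration of what such triage could look like, the sketch below encodes one possible decision rule. The specific criteria used here (whether participants are themselves being studied, whether identifiable data are collected, whether tasks pose more than minimal risk) are our illustrative assumptions rather than a finalized policy.

```python
# A purely illustrative triage sketch. The paper proposes triage based on
# project goals and participant task requirements; the rule set below is an
# assumption made only for illustration.

def triage_review(participants_are_studied: bool,
                  identifiable_data_collected: bool,
                  more_than_minimal_risk: bool) -> str:
    """Suggest which kind of ethical review a human computation project needs."""
    if participants_are_studied and (identifiable_data_collected or more_than_minimal_risk):
        return "traditional human subjects review"
    if participants_are_studied:
        return "lightweight community-based review"
    return "no formal review; follow citizen science ethical guidelines"

# A volunteer task that only contributes analyses (volunteers are not studied):
print(triage_review(participants_are_studied=False,
                    identifiable_data_collected=False,
                    more_than_minimal_risk=False))
# -> "no formal review; follow citizen science ethical guidelines"
```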
4 Flipping IRB
Traditionally, IRB has often been compulsory and seemed like an adversarial process, where reviewers
were there to poke holes in the researcher’s ethical approach and eventually return a judgment. This
process could be recast as a collaborative one in which the role of the IRB expert is to help researchers
align their methods with established ethical guidelines toward achieving their research goals.
Traditionally, an application is filled out cautiously and word choices are carefully made to elicit an
approval outcome. The application then disappears into a black box called the IRB process, where
mystical and sometimes random-seeming things happen until a determination comes out the other
end. We believe full transparency is critical for building trust and working toward a shared goal. Once
the researcher is aligned with an IRB expert who can direct the ethical review of a project as a collaborator,
there is a clear path to approval.
Traditionally, applications are isolated documents that are filed away and never seen again (until
it’s time to amend or renew). But that means that even if two projects by different researchers are
very similar, they each have to go through the same time-intensive, expensive, and laborious process.
We think that if your project is like someone else's, then only the parts that differ need to be reviewed.
We suggest that by building a protocol repository, we can help researchers reuse designs and approaches
that have already been approved, thus decreasing the number of issues that have to be reviewed time and
time again.
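As a minimal sketch of this idea, the following hypothetical comparison surfaces only the sections of a proposed protocol that differ from an already-approved one; the field names and repository format are assumptions for illustration, not part of any existing IRB system.

```python
# Illustrative sketch: compare a new protocol against an approved template so
# reviewers only see what changed. All field names are hypothetical.

APPROVED = {
    "recruitment": "open online enrollment of adult volunteers",
    "consent": "click-through consent with plain-language summary",
    "data_collected": "task responses and anonymous usage metadata",
    "compensation": "none (volunteer citizen science)",
}

NEW_PROTOCOL = {
    "recruitment": "open online enrollment of adult volunteers",
    "consent": "click-through consent with plain-language summary",
    "data_collected": "task responses, anonymous usage metadata, and in-task surveys",
    "compensation": "none (volunteer citizen science)",
}

def parts_needing_review(approved: dict, proposed: dict) -> dict:
    """Return only the protocol sections that differ from the approved version."""
    keys = approved.keys() | proposed.keys()
    return {k: (approved.get(k), proposed.get(k))
            for k in keys
            if approved.get(k) != proposed.get(k)}

print(parts_needing_review(APPROVED, NEW_PROTOCOL))
# -> {'data_collected': ('task responses and anonymous usage metadata',
#                        'task responses, anonymous usage metadata, and in-task surveys')}
```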
Traditionally, there was a ready roster of mostly the same people who were on tap to review applica-
tions as needed. We think the only time a panel is needed is when something isn't clear-cut, and in
that case, let's enlist our community of peers, including human computation community
volunteers who have agreed to be on call.
5 A human computation approach to ethical review
Today, we have an opportunity to build a technosocial platform that turns this vision into reality.
We apply our own human computation approaches to building consensus and determining the right
division of labor between humans and machines. We also draw on our experience related to creating a
human computation platform that continues to engage over 30,000 volunteers to motivate community
participation in the review process. Indeed, we have begun to build this platform, called Civium, whose
main purpose is to make human computation research and applications more transparent, trustworthy,
and sustainable. (An effort based on a similar logic is OpenReview, a web interface with an underlying
database API that aims at advancing openness in the scientific peer review process; we thank the
anonymous reviewer for drawing our attention to this analogy.)
For starters, we created an experimentation toolkit that makes it possible to clone a citizen science
project with a single click to create a sandbox version for running experiments without affecting the
live platform or data quality. This sandbox environment can be used to run an experiment that studies
the behaviors of the citizen science volunteers. In this case, volunteers are not working alongside
scientists analyzing data; rather, they are being studied by scientists to help improve our understanding of
how to design effective citizen science platforms. For example, in one case study we investigated
human/AI partnerships in the online game Stall Catchers. By including surveys at different stages of
the experiment and analyzing the collected (meta)data, we could gain insights into when and why
users trusted the AI assistant and when they questioned its skill. This configuration, however, goes
beyond traditional citizen science and might suggest the need for ethical review.
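To make the sandbox workflow concrete, here is a schematic sketch of the clone-and-experiment step. Civium's actual interface is not specified in this paper, so every class and method name below (Project, clone_to_sandbox, the survey stages) is hypothetical and serves only to illustrate the intended flow.

```python
# Schematic sketch of the "clone into a sandbox" workflow described above.
# All names are hypothetical; they do not describe Civium's real API.

import copy
from dataclasses import dataclass, field

@dataclass
class Project:
    name: str
    tasks: list = field(default_factory=list)
    surveys: dict = field(default_factory=dict)   # stage -> survey questions
    is_sandbox: bool = False

def clone_to_sandbox(live_project: Project) -> Project:
    """Create an isolated copy for experimentation; the live project and its
    data quality are untouched."""
    sandbox = copy.deepcopy(live_project)
    sandbox.name = f"{live_project.name} (sandbox)"
    sandbox.is_sandbox = True
    return sandbox

live = Project(name="Stall Catchers", tasks=["annotate vessel clip"])
experiment = clone_to_sandbox(live)
# Attach surveys at different stages of the experiment, as in the case study.
experiment.surveys["pre"] = ["How much do you trust the AI assistant?"]
experiment.surveys["post"] = ["Did the AI assistant's suggestions seem skilled?"]
```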
To make that process quick, informative, and painless, we are integrating a new IRB process (see
Figure 1) into the Civium environment. That means that instead of packaging up a description of
an experiment for the IRB expert, the expert can enter the sandbox with the researcher and
examine the experiment to understand its design (see Figures 2 and 3 in the appendix) and see from the
standpoint of a participant how it will actually work (see Figure 4 in the appendix). The IRB expert
can make comments and suggest edits directly on the interface in support of aligning the research
goals with the community's current ethical standards. This new procedure should minimize the time a
reviewer has to spend on a proposal, since the submission could include links to the sandbox
experiment, allowing the reviewer to gain a better understanding of the design logic and the interface.
Traditionally, the reviewer would have to think about how the experiment would probably work based
on written material and some screenshots. The explicit role of the IRB expert is to shepherd the
review process and activate various assistive mechanisms as needed in service of that goal.
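The review interaction described above could be represented by a record along the following lines. This is a hypothetical data shape, not the platform's actual schema; the status values and field names are our assumptions.

```python
# Hypothetical shape of a review record in the envisioned platform: the
# proposal points at the live sandbox, and reviewer comments are anchored to
# specific parts of the experiment rather than to a static PDF.
from dataclasses import dataclass, field
from enum import Enum

class ReviewStatus(Enum):
    IN_DIALOGUE = "in dialogue with IRB expert"
    PANEL_REQUESTED = "escalated to community panel"
    ALIGNED = "aligned with current ethical guidelines"

@dataclass
class ReviewComment:
    reviewer: str
    anchor: str          # e.g., a consent screen or survey item in the sandbox
    text: str
    resolved: bool = False

@dataclass
class ReviewRecord:
    proposal_id: str
    sandbox_url: str     # link the reviewer follows to experience the study
    status: ReviewStatus = ReviewStatus.IN_DIALOGUE
    comments: list[ReviewComment] = field(default_factory=list)

record = ReviewRecord("exp-042", "https://example.org/sandbox/exp-042")
record.comments.append(ReviewComment(
    reviewer="irb-expert", anchor="consent screen",
    text="Clarify that survey responses are stored separately from game data."))
```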
Figure 1: Activity diagram of technosocial platform for ethical review
Meanwhile, the system logs the issues that arise and the ensuing dialog around those issues, as well as
any implemented solutions. These are made transparent and accessible in a repository that connects
these to a snapshot of the sandbox itself. This introduces something completely new and powerful for
ethical review: reproducibility, something we demand of our legal system and aim for in scientific
research, but that is missing from IRB. And when questions arise that are not
addressed or that are ambiguous under the current set of ethical guidelines, the IRB expert invokes
members of the community to review the issues. This is analogous to a section editor for an academic
journal finding reviewers for a manuscript. And the platform can help manage the recruitment of
these community contributors.
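One way to obtain the reproducibility described above is to tie every logged issue to a content hash of the sandbox snapshot it was raised against, so that a later reviewer can verify exactly which configuration a decision refers to. The sketch below is illustrative only; the logging format and fingerprinting scheme are assumptions, not the system's implementation.

```python
# Sketch of the reproducibility idea: each logged issue carries a hash of the
# sandbox snapshot it was raised against, so anyone can later confirm that a
# decision refers to exactly this configuration. Purely illustrative.
import hashlib
import json

def snapshot_fingerprint(sandbox_config: dict) -> str:
    """Content hash of the sandbox state at review time."""
    canonical = json.dumps(sandbox_config, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

review_log = []

def log_issue(sandbox_config: dict, issue: str, dialogue: list[str], resolution: str):
    review_log.append({
        "snapshot": snapshot_fingerprint(sandbox_config),
        "issue": issue,
        "dialogue": dialogue,
        "resolution": resolution,
    })

config = {"project": "exp-042", "surveys": ["pre", "post"], "ai_assistant": True}
log_issue(config,
          issue="Participants are studied, not just contributing analyses",
          dialogue=["IRB expert flags need for an explicit consent step",
                    "Researcher adds consent screen before the first survey"],
          resolution="Consent screen added; guideline amendment proposed")
```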
Finally, and critically, the outcome of this process can not only resolve the ethical dilemma for the
researchers but also inform amendments to the ethical guidelines themselves, so that they can continue
to grow with our understanding of human-in-the-loop computing and fit the needs and circumstances
of this growing community.
Broader Impact
Herein we address the need for a general-purpose mechanism for AI governance that can evolve with
our understanding of the field. Not only does AI raise issues of autonomy, labor, and equitability, but
with increased reliance on systems that employ human cognition, the ethical waters become even
murkier. Our approach seeks to crowdsource the evaluation of the risks and rewards of situated AI
systems as well as human-computer collaborations, and aims at seeding and curating a set of related
ethics to ensure the fair treatment and wellbeing of humans in the loop. To ensure that the needs of
all stakeholders are taken into account we include diverse perspectives in a maximally transparent
process. For example, all reports will be stored in a searchable public repository. However, despite
our best intentions and planning, there are always the “unknown unknowns” that can arise when a
platform like this goes live. To address these risks, we will remain vigilant to the system’s behavior
and leverage community monitoring and feedback loops intrinsic to our approach.
Acknowledgments and Disclosure of Funding
We would like to thank Eglė Marija Ramanauskaitė for her great assistance and preparation of the
activity diagram and Percy Mamedy for his work on the implementation of the platform. We also
wish to show our appreciation to all participants of our discussion on reinventing IRB as well as the
contributors in the citizen science forum for their helpful insights and the fruitful discussions.

Libuše Hannah Vepřek, Patricia Seymour and Pietro Michelucci declare that they have no conflict of
interest.
References
[1] P. Michelucci, “How do we create a sustainable thinking economy?," Toward Data Science, Nov. 04, 2019.
https://towardsdatascience.com/how-do-we-create-a-sustainable-thinking-economy-4d77839b031e (accessed
Feb. 04, 2020).
[2] P. Michelucci and J. L. Dickinson, “The power of crowds," Science, vol. 351, no. 6268, pp. 32–33, Jan.
2016, doi: 10.1126/science.aad6499.
[3] F. J. Candido Dos Reis et al., “Crowdsourcing the General Public for Large Scale Molecular Pathology
Studies in Cancer," EBioMedicine, vol. 2, no. 7, pp. 681–689, Jul. 2015, doi: 10.1016/j.ebiom.2015.05.009.
[4] F. Khatib et al., “Crystal structure of a monomeric retroviral protease solved by protein folding game players,"
Nat. Struct. Mol. Biol., vol. 18, no. 10, pp. 1175–1177, Sep. 2011, doi: 10.1038/nsmb.2119.
[5] O. Bracko et al., “High fat diet worsens pathology and impairment in an Alzheimer’s mouse model,
but not by synergistically decreasing cerebral blood flow," bioRxiv, p. 2019.12.16.878397, Dec. 2019, doi:
10.1101/2019.12.16.878397.
[6] M. A. Luengo-Oroz, A. Arranz, and J. Frean, “Crowdsourcing Malaria Parasite Quantification: An Online
Game for Analyzing Images of Infected Thick Blood Smears," J. Med. Internet Res., vol. 14, no. 6, Nov. 2012,
doi: 10.2196/jmir.2338.
[7] M. O. Leaders, “Put Rural Tanzania on the Map," Read, Write, Participate, Apr. 19, 2018.
https://medium.com/read-write-participate/put-rural-tanzania-on-the-map-79d0888df210 (accessed May 02,
2019).
[8] P. Suarez, “Rethinking Engagement: Innovations in How Humanitarians Explore Geoinformation," ISPRS
Int. J. Geo-Inf., vol. 4, no. 3, pp. 1729–1749, Sep. 2015, doi: 10.3390/ijgi4031729.
[9] J. P. Bigham et al., “VizWiz: Nearly Real-time Answers to Visual Questions," in Proceedings of the 23Nd
Annual ACM Symposium on User Interface Software and Technology, New York, NY, USA, 2010, pp. 333–342,
doi: 10.1145/1866029.1866080.
[10] P. Meier, “Human Computation for Disaster Response," in Handbook of Human Computation, P. Michelucci,
Ed. New York, NY: Springer, 2013, pp. 95–104.
[11] A. J. Westphal et al., “Evidence for interstellar origin of seven dust particles collected by the Stardust
spacecraft," Science, vol. 345, no. 6198, p. 786, Aug. 2014, doi: 10.1126/science.1252496.
[12] C. N. Cardamone et al., “Galaxy Zoo Green Peas: Discovery of A Class of Compact Extremely Star-Forming
Galaxies," Mon. Not. R. Astron. Soc., vol. 399, no. 3, pp. 1191–1205, Nov. 2009, doi: 10.1111/j.1365-
2966.2009.15383.x.
[13] S. Kelling et al., “eBird: A Human/Computer Learning Network for Biodiversity Conservation and
Research," presented at the Twenty-Fourth IAAI Conference, Jul. 2012, Accessed: May 02, 2019. [Online].
Available: https://www.aaai.org/ocs/index.php/IAAI/IAAI-12/paper/view/4880.
[14] L. von Ahn, B. Maurer, C. McMillen, D. Abraham, and M. Blum, “reCAPTCHA: Human-Based Character
Recognition via Web Security Measures," Science, vol. 321, no. 5895, pp. 1465–1468, Sep. 2008, doi:
10.1126/science.1160379.
[15] “Moving from reCAPTCHA to hCaptcha," The Cloudflare Blog, Apr. 08, 2020.
https://blog.cloudflare.com/moving-from-recaptcha-to-hcaptcha/ (accessed Oct. 09, 2020).
[16] F. Heigl, B. Kieslinger, K. T. Paul, J. Uhlik, and D. Dörler, “Opinion: Toward an international definition of
citizen science," Proc. Natl. Acad. Sci., vol. 116, no. 17, p. 8089, Apr. 2019, doi: 10.1073/pnas.1903393116.
[17] J. Chandler, G. Paolacci, and P. Mueller, “Risks and Rewards of Crowdsourcing Marketplaces," in Handbook
of Human Computation, P. Michelucci, Ed. New York, NY: Springer, 2013, pp. 377–392.
[18] J. Waldispühl, A. Szantner, R. Knight, S. Caisse, and R. Pitchford, “Leveling up citizen science," Nat.
Biotechnol., vol. 38, no. 10, pp. 1124–1126, Oct. 2020, doi: 10.1038/s41587-020-0694-x.
[19] D. B. Resnik, “Citizen Scientists as Human Subjects: Ethical Issues," Citiz. Sci. Theory Pract., vol. 4, no.
1, Art. no. 1, Mar. 2019, doi: 10.5334/cstp.150.
[20] J. Deigh, An Introduction to Ethics. Cambridge: Cambridge University Press, 2010.
[21] D. Hübner, Einführung in die philosophische Ethik, 2nd ed. Göttingen: UTB GmbH, 2018.
[22] “Citizen Science Ethics," Human Computation Institute forum, 2020. https://forum.hcinst.org/c/citsci-ethics/13 (accessed Nov. 18, 2020).
[23] L. M. Rasmussen, “Confronting Research Misconduct in Citizen Science," Citiz. Sci. Theory Pract., vol. 4, no. 1, Art. no. 1, Mar. 2019, doi: 10.5334/cstp.207.
Appendix
Figure 2: Screenshot of nova backend: example experimental design (prototype)
Figure 3: Screenshot of nova backend: example experimental design (prototype)
Figure 4: Screenshot of example user interface for experiments (prototype)
ResearchGate has not been able to resolve any citations for this publication.
Preprint
Full-text available
Obesity is linked to increased risk for and severity of Alzheimer’s disease (AD). Cerebral blood flow (CBF) reductions are an early feature of AD and are also linked to obesity. We showed that non-flowing capillaries, caused by adhered neutrophils, underlie the CBF reduction in mouse models of AD. Because obesity could exacerbate the vascular inflammation likely underlying this neutrophil adhesion, we tested links between obesity and AD by feeding APP/PS1 mice a high fat diet (Hfd) and evaluating behavioral, physiological, and pathological changes. We found trends toward poorer memory performance in APP/PS1 mice fed a Hfd, impaired social interactions with either APP/PS1 genotype or a Hfd, and synergistic impairment of sensory-motor function in APP/PS1 mice fed a Hfd. The Hfd led to increases in amyloid-beta monomers and plaques in APP/PS1 mice, as well as increased brain inflammation. These results agree with previous reports showing obesity exacerbates AD-related pathology and symptoms in mice. We used a crowd-sourced, citizen science approach to analyze imaging data to determine the impact of the APP/PS1 genotype and a Hfd capillary stalling and CBF. Surprisingly, we did not see an increase in the number of non-flowing capillaries or a worsening of the CBF deficit in APP/PS1 mice fed a Hfd as compared to controls, suggesting capillary stalling is not a mechanistic link between a Hfd and increased severity of AD in mice. Reducing capillary stalling by blocking neutrophil adhesion improved CBF and short-term memory function in APP/PS1 mice, even when fed a Hfd. Significance statement Obesity, especially in mid-life, has been linked to increased risk for and severity of Alzheimer’s disease. Here, we show that blocking adhesion of white blood cells leads to increases in brain blood flow that improve cognitive function, regardless of whether mice are obese or not.
Article
Full-text available
So, you suspect that someone in a citizen science project committed research misconduct. What do you do now? As citizen science methods become increasingly popular, it seems inevitable that at some point, someone identifying themselves as a citizen scientist will be accused of committing research misconduct. Yet the growth of the field also takes research increasingly outside of traditional regulatory mechanisms of identifying, investigating, and delivering consequences for research misconduct. How could we prevent or handle an allegation of scientific misconduct in citizen science that falls outside of our familiar regulatory remedies? And more broadly, what does this imply for ensuring scientific integrity in citizen science? I argue that the increasing use of new research methods in citizen science poses a challenge to traditional approaches to research misconduct, and that we should consider how to confront issues of research misconduct in citizen science. I briefly describe existing approaches to research misconduct and some aspects of citizen science giving rise to the problem, then consider alternative mechanisms, ranging from tort law to professional responsibility to a proposed “research integrity insurance,” that might be deployed to address and prevent such cases.
Article
Full-text available
An increasing number of human studies are asking participants to have substantial involvement in research. Citizens in human studies may contribute to various research activities, including study design, recruitment, data interpretation, and data and sample collection. Citizen involvement in research raises novel ethical issues for human studies, because individuals have traditionally occupied the role of researcher or subject, but not both at the same time. The confluence of these two different roles in the same person poses challenges for investigators and oversight committees because legal rules and ethical guidelines focus on protecting the rights and welfare of human subjects and do not address issues that fall outside this domain, such as study design, data quality and integrity, reporting misconduct, authorship, or publication. This article examines some of these issues and makes recommendations for investigators and oversight committees.
Article
Full-text available
Crowdsourcing has become an increasingly popular means of flexibly deploying large amounts of human computational power. The present chapter investigates the role of microtask labor marketplaces in managing human and hybrid human machine computing. Labor marketplaces offer many advantages that in combination allow human intelligence to be allocated across projects rapidly and efficiently and information to be transmitted effectively between market participants. Human computation comes with a set of challenges that are distinct from machine computation, including increased unsystematic error (e.g. mistakes) and systematic error (e.g. cognitive biases), both of which can be exacerbated when motivation is low, incentives are misaligned, and task requirements are poorly communicated. We provide specific guidance about how to ameliorate these issues through task design, workforce selection, data cleaning and aggregation.
Article
Full-text available
When humanitarian workers embark on learning and dialogue for linking geoinformation to disaster management, the activities they confront are usually more difficult than interesting. How to accelerate the acquisition and deployment of skills and tools for spatial data collection and analysis, given the increasingly unmanageable workload of humanitarians? How to engage practitioners in experiencing the value and limitations of newly available tools? This paper offers an innovative approach to immerse disaster managers in geoinformation: participatory games that enable stakeholders to experience playable system dynamic models linking geoinformation, decisions and consequences in a way that is both serious and fun. A conceptual framework outlines the foundations of experiential learning through gameplay, with clear connections to a well-established risk management framework. Two case studies illustrate this approach: one involving flood management in the Zambezi river in southern Africa through the game UpRiver (in both physical and digital versions), and another pertaining to World Bank training on open data for resilience that combines applied improvisation activities with the need to understand and deploy software tools like Open Street Map and InaSAFE to manage school investments and schoolchildren evacuation in a simulated flood scenario for the city of La Plata, Argentina.
Article
Full-text available
Citizen science, scientific research conducted by non-specialists, has the potential to facilitate biomedical research using available large-scale data, however validating the results is challenging. The Cell Slider is a citizen science project that intends to share images from tumors with the general public, enabling them to score tumor markers independently through an internet-based interface. From October 2012 to June 2014, 98,293 Citizen Scientists accessed the Cell Slider web page and scored 180,172 sub-images derived from images of 12,326 tissue microarray cores labeled for estrogen receptor (ER). We evaluated the accuracy of Citizen Scientist's ER classification, and the association between ER status and prognosis by comparing their test performance against trained pathologists. The area under ROC curve was 0.95 (95% CI 0.94 to 0.96) for cancer cell identification and 0.97 (95% CI 0.96 to 0.97) for ER status. ER positive tumors scored by Citizen Scientists were associated with survival in a similar way to that scored by trained pathologists. Survival probability at 15 years were 0.78 (95% CI 0.76 to 0.80) for ER-positive and 0.72 (95% CI 0.68 to 0.77) for ER-negative tumors based on Citizen Scientists classification. Based on pathologist classification, survival probability was 0.79 (95% CI 0.77 to 0.81) for ER-positive and 0.71 (95% CI 0.67 to 0.74) for ER-negative tumors. The hazard ratio for death was 0.26 (95% CI 0.18 to 0.37) at diagnosis and became greater than one after 6.5 years of follow-up for ER scored by Citizen Scientists, and 0.24 (95% CI 0.18 to 0.33) at diagnosis increasing thereafter to one after 6.7 (95% CI 4.1 to 10.9) years of follow-up for ER scored by pathologists. Crowdsourcing of the general public to classify cancer pathology data for research is viable, engages the public and provides accurate ER data. Crowdsourced classification of research data may offer a valid solution to problems of throughput requiring human input.
Book
This book examines the central questions of ethics through a study of theories of right and wrong that are found in the great ethical works of Western philosophy. It focuses on theories that continue to have a significant presence in the field. The core chapters cover egoism, the eudaimonism of Plato and Aristotle, act and rule utilitarianism, modern natural law theory, Kant's moral theory, and existentialist ethics. Readers will be introduced not only to the main ideas of each theory but to contemporary developments and defenses of those ideas. A final chapter takes up topics in meta-ethics and moral psychology. The discussions throughout draw the reader into philosophical inquiry through argument and criticism that illuminate the profundity of the questions under examination. Students will find this book to be a very helpful guide to how philosophical inquiry is undertaken as well as to what the major theories in ethics hold.
Article
Human computation methods involving the use of real-time social media data have been used successfully to support humanitarian efforts for disaster-affected communities. Through an examination of various case studies, this chapter describes specific crowdsourcing methodologies applied to disaster relief, with attention to the challenges, benefits, and outcomes. Furthermore, consideration is given to potential methods that might combine more effectively the roles of machines and humans, such as adaptive systems, gamification, and high volume analytic techniques. A “call to action” concludes the chapter, endorsing a policy by which existing volume limitations on social media data access are suspended temporarily for humanitarian aid organizations during emergent crises.