Recently, there has been intense interest in crowdsourcing, wherein an organization calls upon the
general public to carry out specific tasks in support of organizational objectives (see Doan et al., 2011).
Applications of crowdsourcing are growing and include corporate product development, marketing
research, public policy, and scientific research. Specially built crowdsourcing platforms, such as Amazon
Mechanical Turk or CrowdFlower, provide pools of 'crowdworkers' for hire.
Among other uses, crowdsourcing promises to dramatically expand organizational “sensor” networks,
making it possible to collect large amounts of data from diverse audiences. As organizations increasingly
employ crowdsourcing to collect information to support internal decision making, questions about the
quality of information created by members of the crowd become critical (Sheppard et al., 2014,
forthcoming; Antelio et al., 2012; Kremen et al., 2011; Arazy et al., 2011; Alabri and Hunter, 2010). Several
approaches to information quality in crowdsourcing have been proposed (Lukyanenko et al., 2011;
Wiggins et al., 2011). These include collaborative or peer review, leveraging redundancy in the crowds,
and user training. Collaboration and peer review, for example, form the basis for iSpot (www.ispot.org.uk), a
project that relies on social networking for collaborative identification of species of plants and animals
(Silvertown, 2010). Crowd data can also be reviewed by experts (Sheppard et al., 2014, forthcoming;
Hochachka et al., 2012). Whenever possible, organizations leverage redundancy in the crowds, for
example by asking multiple observers to report independently on the same phenomena (Franklin et al.,
2011; Liu et al., 2012). Training is a common approach, especially when there are established standards to which
contributions should adhere (Dickinson et al., 2010; Foster-Smith and Evans, 2003).
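To make the redundancy approach concrete, the sketch below shows one simple way such independent reports could be reconciled by majority vote. It is an illustrative assumption rather than the method used by any of the cited projects; the function name, data layout, and example labels are hypothetical.

    from collections import Counter

    def aggregate_by_majority(reports):
        # `reports` is assumed to be a list of (item_id, label) pairs collected
        # from independent contributors.
        labels_per_item = {}
        for item_id, label in reports:
            labels_per_item.setdefault(item_id, []).append(label)
        # Keep the label reported most often for each observed item;
        # ties go to the label encountered first.
        return {item_id: Counter(labels).most_common(1)[0][0]
                for item_id, labels in labels_per_item.items()}

    # Hypothetical example: three contributors classify the same sighting.
    reports = [("sighting-1", "American Robin"),
               ("sighting-1", "American Robin"),
               ("sighting-1", "European Starling"),
               ("sighting-2", "Mallard"),
               ("sighting-2", "Mallard")]
    print(aggregate_by_majority(reports))
    # {'sighting-1': 'American Robin', 'sighting-2': 'Mallard'}

In practice, projects may weight contributors by past accuracy or use probabilistic models rather than a simple majority, but the underlying idea of exploiting agreement among independent observers is the same.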