Procedia Computer Science 7 (2011) 327–329
The European Future Technologies Conference and Exhibition 2011
Understanding Science 2.0: Crowdsourcing and Open Innovation in
the Scientific Method
Thierry Bücheler a,∗, Jan Henrik Sieg b
a Artificial Intelligence Lab, University of Zurich, Andreasstrasse 15, 8050 Zurich, Switzerland
b Chair of Strategic Management and Innovation, ETH Zurich, Kreuzplatz 5, 8032 Zurich, Switzerland
∗ Corresponding author.
Abstract
The innovation process is currently undergoing significant change in many industries. The World Wide Web has created a virtual
world of collective intelligence and helped large groups of people connect and collaborate in the innovation process [1]. Von Hippel
[2], for instance, states that a large number of users of a given technology will come up with innovative ideas. This process, originating
in business, is now also being observed in science. Discussions around “Citizen Science” [3] and “Science 2.0” [4] suggest the same
effects are relevant for fundamental research practices. “Crowdsourcing” [5] and “Open Innovation” [6] as well as other names for
those paradigms, like Peer Production, Wikinomics, Swarm Intelligence etc., have become buzzwords in recent years. However,
serious academic research efforts have also been started in many disciplines. In essence, these buzzwords all describe a form of
collective intelligence that is enabled by new technologies, particularly internet connectivity. The focus of most current research
on this topic is in the for-profit domain, i.e. organizations willing (and able) to pay large sums to source innovation externally, for
instance through innovation contests. Our research is testing the applicability of Crowdsourcing and some techniques from Open
Innovation to the scientific method and basic science in a non-profit environment (e.g., a traditional research university). If the tools
are found to be useful, this may significantly change how some research tasks are conducted: large, a priori unknown crowds of "irrational agents" (i.e., humans) support scientists (and teams thereof) in several research tasks via the internet, while the usefulness and robustness of these interactions, as well as scientifically important factors such as the quality and validity of research results, are tested in a systematic manner. The research is highly interdisciplinary and is conducted in collaboration with scientists from
sociology, psychology, management science, economics, computer science, and artificial intelligence. After a pre-study, extensive
data collection has been conducted and the data is currently being analyzed. The paper presents ideas and hypotheses and opens the
discussion for further input.
© Selection and peer-review under responsibility of FET11 conference organizers and published by Elsevier B.V.
Keywords: Crowdsourcing; Open Innovation; Simulation; Agent-Based Modeling; Science 2.0; Citizen Science
1. Introduction and open questions
Fundamental research is still driven by many thinkers and doers cracking a problem alone, based on their own
knowledge and skills. However, extensive exchange, often among participants from different backgrounds and settings, takes place in modern research, and contemporary research projects are characterized by intense interactions between groups and individuals, e.g., during idea generation, formulation of hypotheses, evaluation, and data analysis, among
many other research tasks. Large project conglomerates (e.g., EU-funded research or projects funded through the
Advanced Technology Program in the U.S.) actively foster such interactions. In many cases, the scientist groups self-
organize according to their individual strengths and skills (and other reasons) to reach a common goal, without a strong centralized body of control (see, e.g., [7,8]). Interactions between these individuals and groups can be seen as instances of collective intelligence, including consensus decision making, mass communication, and other phenomena (see [9] for further details).

Fig. 1. Simplified Research Process, containing "tasks": the Research Value Chain (from [9]).
If basic science has become a collective intelligence effort, can it use the ideas and technologies from Crowdsourcing
and Open Innovation to become more efficient and effective relative to the money spent while maintaining the necessary
quality and validity levels? Which are the right incentives to include large groups in basic science and to foster sharing
of ideas and data? Based on empirical data, our research seeks to provide answers to these research questions.
2. Crowdsourcing and Open Innovation
Crowdsourcing and Open Innovation are two terms coined in the last seven years, influencing several research fields.
We use the following two working definitions:
“Crowdsourcing is the act of taking a job traditionally performed by a designated agent (usually an employee) and
outsourcing it to an undefined, generally large group of people in the form of an open call.” [5]
“’Open Innovation’ is a paradigm that assumes that firms can and should use external ideas as well as internal ideas,
and internal and external paths to market, as the firms look to advance their technology. Open Innovation combines
internal and external ideas into architectures and systems whose requirements are defined by a business model.” [6]
3. How to analyze the Scientific Method?
In order to investigate “basic science” in a structured manner, we have simplified the tasks that are conducted in
most scientific inquiries (see Fig. 1) and used MIT’s “Collective Intelligence Gene” framework to analyze the tasks in
combination with the “Three Constituents Principle” from AI [10]. See [9] for details regarding the Simplified Research
Process and the framework used. Based on this categorization and taxonomy, we hypothesize that the following scientific
tasks are especially suited for Crowdsourcing: develop and choose methodology, identify team of co-workers, gather information and resources (prior work and implications), analyze data, and retest.
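As a purely illustrative sketch (not the instrument actually used in [9]), such a categorization can be represented as a small data structure in which each task of the research value chain carries the four "genes" of the MIT Collective Intelligence framework [1] (who, why, what, how) together with a toy heuristic for crowdsourceability; all field values and the heuristic below are assumptions for demonstration only.

```python
from dataclasses import dataclass

@dataclass
class ResearchTask:
    """One task of the simplified research process (Fig. 1), annotated with
    the four 'genes' of the Collective Intelligence framework [1]."""
    name: str   # task in the research value chain
    who: str    # who performs it: "crowd" or "hierarchy"
    why: str    # incentive: e.g. "money", "love", "glory"
    what: str   # nature of the task: "create" or "decide"
    how: str    # coordination mode: "collection", "collaboration", "voting", ...

    def crowdsourceable(self) -> bool:
        # Toy heuristic (an assumption, not the paper's criterion): tasks that a
        # loosely coordinated crowd can perform in parallel suit an open call.
        return self.who == "crowd" and self.how in {"collection", "collaboration"}

tasks = [
    ResearchTask("gather information and resources", "crowd", "glory", "create", "collection"),
    ResearchTask("analyze data", "crowd", "love", "create", "collaboration"),
    ResearchTask("define the question", "hierarchy", "glory", "decide", "voting"),
]

for t in tasks:
    verdict = "candidate for Crowdsourcing" if t.crowdsourceable() else "keep in-house"
    print(f"{t.name}: {verdict}")
```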
4. Where we currently stand
During the ShanghAI Lectures 2009, a global lecture series on Artificial Intelligence involving 48 universities from 5 continents, we collected data to test these hypotheses and to test the other tasks for "crowdsourceability" (as described in [9]): "The participants supported one of four current scientific projects by contributing a paper stating their ideas on open questions. Some of the solutions were rated 'excellent', 'well-elaborated' and 'useful for the advancement of the project' by the scientists who headed the projects. We sent questionnaires to 372 participating students after the lectures and received 84 valid replies (23%). Although only 16.0% of the respondents stated that they had prior theoretical or technical knowledge regarding the chosen subject, 22.6% of all participants perceived that their participation in the contest had a significant impact on current research. However, initial data collection during this pre-test was insufficient to analyze all variables in our framework."

In the fall semester of 2010, we expanded data collection and rigorously applied
the framework. We also used existing scales from other contexts (e.g., Crowdsourcing in economic environments) to
compare basic science with the corporate R&D domain. This time, we received replies from 195 participants representing 51 teams (a response rate of >70%). 57% of the participants had at least a Bachelor's degree, and 44.9% consider themselves specialists in a field (by education) rather than generalists with broad scientific knowledge. The data is currently being
analyzed, but a preliminary analysis shows that each of the 12 available science projects, which together represent an almost complete research process from "define the question" to "interpret data" and "draw conclusions", was chosen by at least one team (the most popular project, "develop proposal", was chosen by 10 teams). 41.5% of the participants were at least "satisfied" with this experience (a score of 5 or better on a 7-point Likert scale), and 57% are positive about working on such a Crowdsourcing project in science again. This last dimension (not very surprisingly) correlates strongly with the level of fun each participant perceived. The participants had no financial incentives; the only thing they could earn by delivering a solution to the projects was 3 credit points (out of the 61 necessary to pass the lecture). Only 22% indicated that they would have put more effort into their project if money had been awarded to the best solution (a score of 5 or higher out of 7). This group is almost disjoint from the group that was satisfied with the project (a score of 5 or more out of 7). Again, the researchers supervising the projects were positively surprised by the quality, accuracy, and usefulness of the results.
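For concreteness, the figures above are simple proportions over questionnaire replies. The sketch below shows how such numbers are obtained, using the reported counts (84 valid replies out of 372 questionnaires) and the Likert cut-off of 5 out of 7 used for "satisfied"; the per-respondent records are invented placeholders, not the study's actual dataset.

```python
# Illustrative only: how the reported proportions are computed. The reply
# records below are invented placeholders, not the study's data.

def share_at_or_above(responses, key, threshold=5):
    """Fraction of respondents scoring `key` at or above `threshold` on a 1-7 Likert scale."""
    return sum(1 for r in responses if r[key] >= threshold) / len(responses)

# Pre-study response rate: 84 valid replies out of 372 questionnaires sent.
print(f"pre-study response rate: {84 / 372:.0%}")   # -> 23%

replies = [  # hypothetical per-respondent Likert scores (1-7)
    {"satisfaction": 6, "again": 7},
    {"satisfaction": 4, "again": 5},
    {"satisfaction": 5, "again": 6},
]

print(f"at least 'satisfied' (>=5/7): {share_at_or_above(replies, 'satisfaction'):.1%}")
print(f"would participate again (>=5/7): {share_at_or_above(replies, 'again'):.1%}")
```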
5. What we want to do next
The research team will now thoroughly analyze the data gathered in this second round of data collection. In
parallel, the team has started to implement a simulator for testing the identified local rules of interaction in such a
Crowdsourcing/Open Innovation context and other findings, comparing them with empirical data from other disciplines
(e.g., management science). In addition, this simulator allows us to better understand the sensitivities of parameters that researchers can set or influence, and it may therefore have some predictive power.
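As a rough illustration of what such an agent-based simulator could look like (our own sketch under strong simplifying assumptions, not the project's actual implementation), the following model lets heterogeneous agents decide whether to answer an open call based on intrinsic motivation and skill, and varies the crowd size to probe how outcome quality responds to a parameter the researcher controls.

```python
import random

# Minimal agent-based sketch (illustrative only): agents decide to contribute
# to an open research task based on intrinsic motivation ("fun") and how well
# their skills match the task; the loop below varies the crowd size to probe
# the sensitivity of outcome quality to this parameter.

random.seed(42)

class Agent:
    def __init__(self):
        self.skill = random.random()   # expertise relevant to the task
        self.fun = random.random()     # intrinsic motivation

    def contributes(self, task_difficulty):
        # Local rule of interaction (assumed): contribute if motivated enough
        # relative to how hard the task looks.
        return self.fun > 0.3 and self.skill > task_difficulty * 0.5

def run(crowd_size, task_difficulty=0.6):
    crowd = [Agent() for _ in range(crowd_size)]
    solutions = [a.skill for a in crowd if a.contributes(task_difficulty)]
    best = max(solutions, default=0.0)   # quality of the best contribution
    return len(solutions), best

for n in (10, 100, 1000):
    contributors, best_quality = run(n)
    print(f"crowd={n:5d}  contributors={contributors:4d}  best quality={best_quality:.2f}")
```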
6. How you can be part of this
The team is reaching out to partners from other scientific domains (e.g., psychology for teamwork and brainstorming,
biology for consensus decision making, swarm behavior, etc.). If you believe you have a connection or an interesting idea that fits this topic, or if you would like to challenge the ideas presented here, please get in touch with the authors.
References
[1] T.W. Malone, R. Laubacher, C. Dellarocas, Harnessing Crowds: Mapping the Genome of Collective Intelligence, in: MIT Sloan Research
Paper, 4732-09, 2009.
[2] E. von Hippel, Democratizing innovation, MIT Press, Cambridge, Mass, 2005.
[3] A. Irwin, Citizen Science: A Study of People, Expertise and Sustainable Development, Routledge, London, 1995.
[4] B. Shneiderman, Science 2.0, Science 319 (2008) 1349–1350.
[5] J. Howe, Crowdsourcing: Why the Power of the Crowd is Driving the Future of Business, http://www.crowdsourcing.com/, 2010 (accessed 20 February 2011).
[6] H.W. Chesbrough, Open innovation. The new imperative for creating and profiting from technology, Harvard Business School Press, Boston,
Mass, 2003.
[7] G. Melin, Pragmatism and self-organization: research collaboration on the individual level, Research Policy 29 (1) (2000) 31–40.
[8] K. Stoehr, WHO, A multicentre collaboration to investigate the cause of severe acute respiratory syndrome, The Lancet 361 (9370) (2003) 1730–1733.
[9] T. Buecheler, J.H. Sieg, R.M. Füchslin, R. Pfeifer, Crowdsourcing Open Innovation and Collective Intelligence in the Scientific Method: A
Research Agenda and Operational Framework, in: H. Fellermann, et al. (Eds.), Artificial Life XII. Proceedings of the Twelfth International
Conference on the Synthesis and Simulation of Living Systems, MIT Press, Cambridge, Mass, 2010, pp. 679–686.
[10] R. Pfeifer, J. Bongard, How the Body Shapes the Way We Think: A New View of Intelligence, A Bradford Book, MIT Press, Cambridge, Mass., 2007.