Procedia Computer Science 7 (2011) 327–329
The European Future Technologies Conference and Exhibition 2011
Understanding Science 2.0: Crowdsourcing and Open Innovation in
the Scientiﬁc Method
Thierry Bücheler a,∗, Jan Henrik Sieg b
aArtiﬁcial Intelligence Lab, University of Zurich, Andreasstrasse 15, 8050 Zurich, Switzerland
bChair of Strategic Mgmt and Innovation, ETH Zurich, Kreuzplatz 5, 8032 Zurich, Switzerland
The innovation process is currently undergoing significant change in many industries. The World Wide Web has created a virtual world of collective intelligence and helped large groups of people connect and collaborate in the innovation process. Von Hippel, for instance, states that a large number of users of a given technology will come up with innovative ideas. This process, originating in business, is now also being observed in science. Discussions around "Citizen Science" and "Science 2.0" suggest the same effects are relevant for fundamental research practices. "Crowdsourcing" and "Open Innovation", as well as other names for these paradigms, such as Peer Production, Wikinomics, and Swarm Intelligence, have become buzzwords in recent years. However, serious academic research efforts have also been started in many disciplines. In essence, these buzzwords all describe a form of
collective intelligence that is enabled by new technologies, particularly internet connectivity. The focus of most current research
on this topic is in the for-proﬁt domain, i.e. organizations willing (and able) to pay large sums to source innovation externally, for
instance through innovation contests. Our research is testing the applicability of Crowdsourcing and some techniques from Open
Innovation to the scientiﬁc method and basic science in a non-proﬁt environment (e.g., a traditional research university). If the tools
are found to be useful, this may significantly change how some research tasks are conducted: large, a priori unknown crowds of "irrational agents" (i.e., humans) support scientists (and teams thereof) in several research tasks via the internet, while the usefulness and robustness of these interactions, as well as scientifically important factors such as the quality and validity of the research results, are tested in a systematic manner. The research is highly interdisciplinary and is done in collaboration with scientists from
sociology, psychology, management science, economics, computer science, and artiﬁcial intelligence. After a pre-study, extensive
data collection has been conducted and the data is currently being analyzed. The paper presents ideas and hypotheses and opens the
discussion for further input.
© Selection and peer-review under responsibility of FET11 conference organizers and published by Elsevier B.V.
Keywords: Crowdsourcing; Open Innovation; Simulation; Agent-Based Modeling; Science 2.0; Citizen Science
1. Introduction and open questions
Fundamental research is still driven by many thinkers and doers cracking a problem alone, based on their own
knowledge and skills. However, vast exchange, often with participants from different backgrounds in different settings,
takes place in modern research, and contemporary research projects are characterized by intense interactions between
groups and individuals, e.g., during idea generation, formulation of hypotheses, evaluation, and data analysis, among
many other research tasks. Large project conglomerates (e.g., EU-funded research or projects funded through the
Advanced Technology Program in the U.S.) actively foster such interactions. In many cases, the scientist groups self-organize according to their individual strengths and skills (and other reasons) to reach a common goal, without a strong centralized body of control. Interactions between these individuals and groups can be seen as instances of collective intelligence, including consensus decision making, mass communication, and other phenomena.
Fig. 1. Simplified Research Process, containing "tasks": the Research Value Chain.
If basic science has become a collective intelligence effort, can it use the ideas and technologies from Crowdsourcing
and Open Innovation to become more efﬁcient and effective relative to the money spent while maintaining the necessary
quality and validity levels? What are the right incentives to include large groups in basic science and to foster the sharing of ideas and data? Based on empirical data, our research seeks to provide answers to these questions.
2. Crowdsourcing and Open Innovation
Crowdsourcing and Open Innovation are two terms coined in the last seven years, inﬂuencing several research ﬁelds.
We use the following two working deﬁnitions:
“Crowdsourcing is the act of taking a job traditionally performed by a designated agent (usually an employee) and
outsourcing it to an undeﬁned, generally large group of people in the form of an open call.” 
“’Open Innovation’ is a paradigm that assumes that ﬁrms can and should use external ideas as well as internal ideas,
and internal and external paths to market, as the ﬁrms look to advance their technology. Open Innovation combines
internal and external ideas into architectures and systems whose requirements are deﬁned by a business model.” 
3. How to analyze the Scientiﬁc Method?
In order to investigate “basic science” in a structured manner, we have simpliﬁed the tasks that are conducted in
most scientiﬁc inquiries (see Fig. 1) and used MIT’s “Collective Intelligence Gene” framework to analyze the tasks in
combination with the "Three Constituents Principle" from AI; details regarding the Simplified Research Process and the framework used are given in our earlier work. Based on this categorization and taxonomy, we hypothesize that the following scientific tasks are especially suited for Crowdsourcing: develop and choose methodology, identify a team of co-workers, gather information and resources (prior work and implications), analyze data, and retest.
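This hypothesized categorization can be sketched as a simple lookup. The task list below is an illustrative reconstruction of the simplified research process (the exact task set is given in our earlier work), and the flags encode only the hypotheses stated above, not established results:

```python
# Hypothesized mapping from tasks in the simplified research process to
# whether we expect them to be especially suited for Crowdsourcing.
# (Illustrative sketch; the task list is abridged and the flags are
# hypotheses, not findings.)
CROWDSOURCEABLE = {
    "define the question": False,
    "gather information and resources": True,
    "formulate hypotheses": False,
    "develop and choose methodology": True,
    "identify team of co-workers": True,
    "collect data": False,
    "analyze data": True,
    "interpret data and draw conclusions": False,
    "retest": True,
}

def crowdsourceable_tasks():
    """Return the tasks hypothesized to suit an open call to a crowd."""
    return [task for task, flag in CROWDSOURCEABLE.items() if flag]
```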
4. Where we currently stand
During the ShanghAI Lectures 2009, a global lecture series on Artificial Intelligence involving 48 universities on 5 continents, we collected data to test these hypotheses and to probe the other tasks for "crowdsourceability": the participants supported one of four current scientific projects by contributing a paper stating their ideas on open questions. Some of the solutions were rated "excellent", "well-elaborated" and "useful for the advancement of the project" by the scientists heading the projects. We sent questionnaires to 372 participating students after the lectures and received 84 valid replies (23%). Although only 16.0% of these stated that they had prior theoretical or technical knowledge regarding the chosen subject, 22.6% of all participants perceived a significant impact on current research through their participation in the contest. However, the data collected during this pre-test was insufficient to analyze all variables in our framework.
In the fall semester 2010, we expanded data collection and rigorously applied
the framework. We also used existing scales from other contexts (e.g., Crowdsourcing in economic environments) to
compare basic science with the corporate R&D domain. This time, we received replies from 195 participants representing 51 teams (a response rate above 70%). 57% of the participants had at least a Bachelor's degree, and 44.9% consider themselves specialists in a field (by education) rather than generalists with broad scientific knowledge. The data is currently being analyzed, but a preliminary analysis shows that of the 12 science projects available, representing an almost
complete research process from “deﬁne the question” to “interpret data” and “draw conclusions”, all have been chosen
by at least one team (10 teams chose the most popular project, “develop proposal”). 41.5% of the participants were at
least "satisfied" (score 5 or better on a 7-point Likert scale) with this experience, and 57% are positive about working on such a Crowdsourcing project in science again. This last dimension (not very surprisingly) correlates strongly with the fun level that each participant perceived. The participants had no financial incentives; the only reward for delivering a solution to the projects was 3 credit points (out of the 61 necessary to pass a lecture). Only 22% indicated that they would have put more effort into their project if money had been awarded to the best solution (score 5 or higher out of 7). This group is almost disjoint from the group that was satisfied with the project (score 5 or more out of 7). Again, the researchers supervising the projects were positively surprised by the quality, accuracy, and usefulness of the submitted solutions.
5. What we want to do next
The research team will now thoroughly analyze the data gathered in this second round of data collection. In
parallel, the team has started to implement a simulator for testing the identiﬁed local rules of interaction in such a
Crowdsourcing/Open Innovation context and other ﬁndings, comparing them with empirical data from other disciplines
(e.g., management science). In addition, this simulator allows us to better understand the sensitivity of the parameters that researchers can set or influence, and it might therefore have some predictive power.
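To illustrate the kind of local interaction rule such a simulator might encode, the following toy agent-based sketch (our own simplifying assumption, not the project's actual model) lets each agent answering an open call submit a contribution of random quality while the requester keeps only the best one. It shows one parameter sensitivity of interest: larger crowds raise the expected quality of the best submission.

```python
import random

def simulate_open_call(n_agents, n_trials=1000, seed=0):
    """Toy open-call rule: every agent independently submits a contribution
    whose quality is drawn at random; the requester keeps only the best
    submission. Returns the mean best quality over repeated trials."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        # Hypothetical local rule: contribution quality ~ Uniform(0, 1).
        best = max(rng.random() for _ in range(n_agents))
        total += best
    return total / n_trials

# A crowd of 50 beats a crowd of 5 in expected best-submission quality.
print(simulate_open_call(5), simulate_open_call(50))
```

Varying the quality distribution, crowd size, or selection rule in such a sketch is one way to study which parameters a research team could usefully influence.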
6. How you can be part of this
The team is reaching out to partners from other scientiﬁc domains (e.g., psychology for teamwork and brainstorming,
biology for consensus decision making, swarm behavior, etc.). If you have a connection or an interesting idea that fits this topic, or if you would like to challenge the ideas presented here, please get in contact with the authors.
References
T.W. Malone, R. Laubacher, C. Dellarocas, Harnessing Crowds: Mapping the Genome of Collective Intelligence, MIT Sloan Research Paper 4732-09, 2009.
E. von Hippel, Democratizing Innovation, MIT Press, Cambridge, Mass, 2005.
A. Irwin, Citizen Science: A Study of People, Expertise and Sustainable Development, Routledge, London, 1995.
B. Shneiderman, Science 2.0, Science 319 (2008) 1349–1350.
J. Howe, Crowdsourcing: Why the Power of the Crowd is Driving the Future of Business, 2010, http://www.crowdsourcing.com/. Accessed 20
H.W. Chesbrough, Open Innovation: The New Imperative for Creating and Profiting from Technology, Harvard Business School Press, Boston.
G. Melin, Pragmatism and self-organization: Research collaboration on the individual level, Research Policy 29 (1) (2000) 31–40.
K. Stoehr, WHO, A multicentre collaboration to investigate the cause of severe acute respiratory syndrome, The Lancet 361 (9370) (2003).
T. Buecheler, J.H. Sieg, R.M. Füchslin, R. Pfeifer, Crowdsourcing, Open Innovation and Collective Intelligence in the Scientific Method: A Research Agenda and Operational Framework, in: H. Fellermann, et al. (Eds.), Artificial Life XII: Proceedings of the Twelfth International Conference on the Synthesis and Simulation of Living Systems, MIT Press, Cambridge, Mass, 2010, pp. 679–686.
R. Pfeifer, J. Bongard, How the Body Shapes the Way We Think: A New View of Intelligence, A Bradford Book, MIT Press, Cambridge, Mass.