Conference Paper

Coordination in Collective Intelligence: The Role of Team Structure and Task Interdependence


Abstract

The success of Wikipedia has demonstrated the power of peer production in knowledge building. However, unlike many other examples of collective intelligence, tasks in Wikipedia can be deeply interdependent and may incur high coordination costs among editors. Increasing the number of editors increases the resources available to the system, but it also raises the costs of coordination. This suggests that the dependencies of tasks in Wikipedia may determine whether they benefit from increasing the number of editors involved. Specifically, we hypothesize that adding editors may benefit low-coordination tasks but have negative consequences for tasks requiring a high degree of coordination. Furthermore, concentrating the work to reduce coordination dependencies should enable more efficient work by many editors. Analyses of both article ratings and article review comments provide support for both hypotheses. These results suggest ways to better harness the efforts of many editors in social collaborative systems involving high coordination tasks.


... This can lead to results that are much weaker than expected. According to Brooks's Law, adding manpower to a late software project makes it later (Brooks 1975, after Kittur, Lee, and Kraut, 2009). If a task is performed independently of others, an increase in the number of project members does not raise coordination costs, but those costs grow as such dependencies increase. ...
... In the case of high-coordination tasks, the benefits occur only for small teams (Kittur, Lee, and Kraut, 2009). ...
... When several actions were organized simultaneously, only a few people participated in each of them. This resulted in lower efficiency, as Kittur, Lee, and Kraut (2009) showed that simple tasks work better when more people are involved. ...
Article
Full-text available
The first decade of the 21st century spawned the intense development of online communities of practice. The largest knowledge-sharing communities were formed in several dozen language versions of Wikipedia. Defining rules for cooperation was necessary to ensure the desired content quality created by Wikipedians. It was essential to develop the appropriate initiatives, tools, and space for effective activity coordination within the service. Previous research in this area pointed to the role of leadership, group size, and tools facilitating work automation in creating actionable strategies and in the self-organization of work. This paper aims to characterize the variability in creating new concepts of cooperation in selected language versions of Wikipedia and identify the factors of participating in various forms of cooperation. The author assumes that the greater number of initiatives a user enters contributes to an increase in their overall activity. The research conducted was both qualitative and quantitative. A netnographic approach was used, as well as a statistical analysis of user activity records. Thanks to the netnographic research, the stages of Wikipedia’s evolution were identified. Quantitative research has shown a correlation between the number of activity areas (a user’s affiliation to WikiProjects) and their overall activity (the number of edits made). A change in Wikipedians’ activity style was also observed depending on their seniority on the website. The study’s conclusions may be helpful for organizations using crowdsourcing to achieve their own goals.
... It is a free source of knowledge created by volunteers working collaboratively over an online platform. Over the years, collaboration on Wikipedia has received much research interest, as it harvests the 'wisdom of crowds' online to produce valuable content (Kittur et al., 2007; Kittur et al., 2009; Niederer and Van Dijck, 2010). Although calling upon online volunteers offers unlimited promise, the quality of the work is a concern. ...
... Research also found that adding more editors with more diverse backgrounds can raise quality (Arazy et al., 2011; Robert and Romero, 2015). Previous studies also researched what type of collaboration would yield better quality (Kittur et al., 2008; Stvilia et al., 2008; Kittur et al., 2009; Ren and Yan, 2017). Some suggest more editors with diverse backgrounds might not yield better quality (Kittur et al., 2008, 2009; Ren and Yan, 2017) and that implicit coordination, i.e., working directly on the articles, is more important (Kittur et al., 2008). ...
... Previous studies also researched what type of collaboration would yield better quality (Kittur et al., 2008; Stvilia et al., 2008; Kittur et al., 2009; Ren and Yan, 2017). Some suggest more editors with diverse backgrounds might not yield better quality (Kittur et al., 2008, 2009; Ren and Yan, 2017) and that implicit coordination, i.e., working directly on the articles, is more important (Kittur et al., 2008). Some studies report patterns of editor combinations for different quality levels (Liu and Ren, 2011; Ren and Yan, 2017; Lin and Wang, 2020). ...
Article
Full-text available
This research aims at understanding the open collaboration involved in producing Wikipedia Good Articles (GA). To achieve this goal, it is necessary to analyse who contributes to the collaborative creation of GA and how they are involved in the collaboration process. We propose an approach that first employs factor analysis to identify editing abilities and then uses these editing-ability scores to distinguish editors. We then generate sequences of the editors participating in the work process to analyse the patterns of collaboration. Without loss of generality, we use GA from three Wikipedia categories, covering two general topics and a science topic, to demonstrate our approach. The results show that we can successfully derive editor abilities and identify different types of editors, and then observe the sequence of editors involved in the creation process. For the three GA categories examined, we found that GA exhibited a characteristic pattern in which editors with high content-shaping ability scores become involved in the later stages of the collaboration process. This demonstrates that our approach provides a clearer understanding of how Wikipedia GA are created through open collaboration.
... In contrast to classifying data for citizen science, writing a scientific paper includes more varied dependencies that potentially pose challenges to the writers. Figure 4 shows the structure of dependencies involved, based on published work on coordination in writing (Erkens et al. 2005), Wikipedia in particular (e.g., Kittur et al. 2009), and a detailed coordination-theory analysis of a comparable process, writing software (Crowston and Scozzi 2002; Crowston and Scozzi 2008). There are also differences in the diversity and difficulty of the tasks included, but these are not considered in analyzing the task dependencies. ...
... A first difference between Figures 3 and 4 is the presence of dependencies among the parts of the paper, the outputs of the paper-writing tasks. Only a few tasks in writing, such as proofreading, are like galaxy classification in that they can be done without affecting other tasks (Kittur et al. 2009), e.g., by crowdsourcing (Bernstein et al. 2015). For the most part, different parts of a paper cannot be written independently. ...
... Furthermore, the voice and writing style of the different sections need to match. These dependencies among paper parts impose constraints on how those parts are written (Kittur et al. 2009), posing coordination challenges to the people working on them. ...
Article
Full-text available
This paper presents a case study of an online citizen science project that attempted to involve volunteers in tasks with multiple dependencies, including analyzing bulk data as well as interpreting data and writing a paper for publication. Tasks with more dependencies call for more elaborate coordination mechanisms. However, the relationship between the project and its volunteers limits how work can be coordinated. Contrariwise, a mismatch between dependencies and available coordination mechanisms can lead to performance problems, as was seen in the case. The results of the study suggest recommendations for the design of online citizen science projects for advanced tasks.
... The second promise of computational research, after its ability to detect signals in noise, is its ability to handle population-size data. In communication, we can harvest and analyze the entire corpus of edits made on Wikipedia (Matei & Britt, 2017) or on vast collections of wikis of many kinds (Kittur, Lee, & Kraut, 2009;Shaw & Hill, 2014). We can analyze entire storms of interactions created by viral tweets on Twitter. ...
... More recently, using a uses and gratifications framework, Shao, Ross, and Grace (2015) suggested that social media rests on self-expressive and interactional needs. Furthermore, looking at knowledge production sites, Kittur et al. (2009) and Matei, Jabal, and Bertino (2017) advanced a set of sociostructural factors that may structure and motivate social media production. Kittur et al. (2009) focused on coordination and conflict, while Matei and Britt (2017) proposed an evolutionary approach, supported by functional roles and "adhocratic" order. ...
... Furthermore, looking at knowledge production sites, Kittur et al. (2009) and Matei, Jabal, and Bertino (2017) advanced a set of sociostructural factors that may structure and motivate social media production. Kittur et al. (2009) focused on coordination and conflict, while Matei and Britt (2017) proposed an evolutionary approach, supported by functional roles and "adhocratic" order. The latter refers to a flexible, user-defined, practical order run by those who do the most on a social media site. ...
Article
Full-text available
The article advances a new way to think about computational research, in general, and computational communication research, in particular. Contrary to current definitions of “computational science,” which emphasizes its inductive nature, we define computational research as an incomplete inductive process, blending both theoretical and data‐driven methods of discovery. Communication theory needs to be driven by a clear concept of human needs and abilities, recovering and extending known theoretical insights from mass and interpersonal communication research. The definition we propose for computational communication research has a practical implication. Relying on theory, the definition demands to identify specific processes and domains within the field of computational communication research. The processes include communication production, behavior, and effects. The domains include collaboration, trust, and data storytelling and journalism, while the methods include content and network analyses. The article starts with a broad definition of the “computational” approach, using the Johari window. We continue with a typology of computational communication research, which blends reviews of foundational texts with summaries of leading research. In the conclusions, we discuss the strengths and identify new opportunities in the field of computational communication research. This article is categorized under: Fundamental Concepts of Data and Knowledge > Human Centricity and User Interaction Commercial, Legal, and Ethical Issues > Social Considerations Fundamental Concepts of Data and Knowledge > Big Data Mining
... The past two decades have seen an explosion of large peer production and social computing projects, such as Wikipedia, Zooniverse, and Linux. These projects are developed by increasingly diverse communities of contributors who employ varying collaborative processes, such as those involved in consensus building, task delegation, and conflict management [3,13,15,17,4,7]. Previous work has (1) examined coordination dynamics within English Wikipedia [15,11,4,18], (2) compared coordination practices in small samples across Wikipedia language editions [8,6], and (3) analyzed content asymmetries across the Wikipedia platform [9,2,21,5]. ...
... These projects are developed by increasingly diverse communities of contributors who employ varying collaborative processes, such as those involved in consensus building, task delegation, and conflict management [3,13,15,17,4,7]. Previous work has (1) examined coordination dynamics within English Wikipedia [15,11,4,18], (2) compared coordination practices in small samples across Wikipedia language editions [8,6], and (3) analyzed content asymmetries across the Wikipedia platform [9,2,21,5]. However, while these studies suggest that editors in different Wikipedia language editions may favor different coordination processes, existing research does not comprehensively examine coordination dynamics across many language editions. ...
... Following previous work, we operationalize coordination through post counts to Wikipedia talk pages, which provide a measure of coordination between editors [15,14]. In order to test the association between language edition and coordination, we build two models. ...
Conference Paper
Social computing systems and online communities develop varying strategies for managing collaborative processes such as consensus building, task delegation, and conflict management. Although these factors impact both the ways in which communities produce content and the content they produce, little prior work has undertaken a large comparative analysis of coordination dynamics across linguistically diverse communities engaged in the same activity. We describe and model the coordination processes of Wikipedia editors across the 24 largest language editions. Our results indicate that language edition is associated with a difference in quantity of coordination activity, as measured by talk page posts, with increases as high as 60% when compared against pages in English.
... This approach is employed in workflows such as 'find-fix-verify' by Bernstein et al. (2010), where the output of one worker becomes the input for the next. Multilevel review can occur synchronously, in a collaborative fashion similar to output agreement, as described by Kittur et al. (2009), where workers collaboratively and incrementally refined the output of a translation task. ...
... One challenge in workflow design is facilitating coordination within the distributed workforce. This has been studied by a number of researchers, such as Kittur et al. (2008, 2009), some of whom have applied traditional techniques from computing (van der Aalst et al., 2003) and from the organisational literature (Stohr and Zhao, 2001). Other techniques that can constitute a form of workflow design include crowdsourcing contests (Cavallo and Jain, 2012; Dechenaux et al., 2014) or adopting some form of collaboration (Kittur, 2010), although these are addressed in detail in later chapters. ...
Thesis
Crowdsourcing has the potential to revolutionise the way organisations carry out tasks that need to scale out quickly – and indeed this revolution has begun. However, crowdsourcing today, and especially paid microtasks, faces several technical and socio-economic challenges that can hamper the realisation of this vision. This work addresses four such challenges: workflow design; real-time crowd work; motivation and rewards; and synchronous collaboration. The thesis describes the use of a bespoke gamified crowdsourcing platform, Wordsmith, and studies the use of furtherance incentives to tackle issues at the heart of microtasks that feature monetary payments as the primary source of incentivisation. Furtherance incentives represent a timely and appropriate reward, presented when a worker is about to quit a task, to improve task continuance. As such, the keys to effectively deploying furtherance incentives lie in the timely ability to detect waning worker interest in a task, and knowledge of the appropriate incentive to offer the particular worker at that stage of the task. In understanding how to improve crowdsourcing workflow designs, the thesis presents an approach that leverages insights into task features and worker interaction preferences. The findings illustrate how workers interact with tasks in the presence of choice – thus offering us an idea of the types of furtherance incentive to offer workers. In the study on real-time crowd work, microtask contests are introduced as a medium to engage workers to complete tasks featuring tight time constraints. The results give us a rich model that we use to predict when workers are likely to exit a task at different stages. The research into motivation and rewards combines the two components of furtherance incentives by using gamification elements as an additional source of incentives. This leads to more tasks carried out, and at a higher quality, when compared with baseline paid microtasks.
Finally our study on synchronous collaboration offers an additional case study on the effectiveness of furtherance incentives. Here we use sociality-based features of social pressure and social flow between interacting workers as furtherance incentives resulting in improved qualitative and quantitative results.
... Thus, our understanding of collaborative work attends to sociomaterial relations and configurations, the give and take of assistance, who is performing 'invisible' labor, and what power dynamics and hierarchies are at play. Although others have put forth the concept of interdependence for other areas of cooperative work, this has largely been instrumental and focused on task attributes (e.g., [7, 45-47]). Our work takes a broader view of interdependence that considers task-level coordination details alongside social and structural aspects of collaboration among ability-diverse teams. ...
... To extend current theorizing of accessibility in group work, we revisit our findings with respect to the concept of interdependence. Within CSCW, the notion of interdependence has largely focused on properties of a task and the extent to which group members need to coordinate and collaborate to achieve task goals [7, 45-47]. Accessibility scholarship and disability studies emphasize other aspects of interdependence, including broader sociomaterial relations, the labor of people with and without disabilities, and power dynamics and hierarchies [11, 95]. ...
Article
Collaborative writing tools have become ubiquitous in today's world and are used widely in many professional organizations and academic settings. Yet, we know little about how ability-diverse teams, such as those involving people with and without vision impairments, make use of collaborative writing tools. We report on interviews with 20 academics and professionals who are blind or visually impaired and perform collaborative writing with sighted colleagues. Our findings reveal that people with vision impairments perform collaborative writing activities through four interconnected processes, which include learning an ecosystem of (in)accessible tools, adapting to complexities of collaborative features, balancing the cost and benefit of accessibility, and navigating power dynamics within organizations. We discuss how our analysis contributes to theories of accessibility in collaboration and offers practical insights for future collaborative system design.
... In this new model, innovation is initiated and driven by end users rather than introduced top-down by large firms, corporations, and enterprises such as technology giants [5]. From Wikipedia, digital volunteerism, open-source software development, and citizen science to crowdsourcing platforms such as Amazon Mechanical Turk, a body of information science and social computing research (e.g., [6, 7, 9-11]) has tackled important problems on innovation in an information society, including how social computing tools and platforms support the collaborative construction of knowledge (e.g., [12, 14]) and team coordination within online creative communities (e.g., [7]). Yet, how exactly technological innovation can happen from the bottom up and what mechanisms support its operation remain understudied. ...
Chapter
Full-text available
In this paper, we explore a network of distributed individuals’ collective efforts to establish an innovation ecology allowing them to engage in bottom up creative technological practices in today’s information society. Specifically, we present an empirical study of the technological practices in an emerging creative technology community – independent [indie] game developers in the United States. Based on indie game developers’ own accounts, we identified four themes that constitute an innovation ecology from the bottom up, including problem solving; collaborative information seeking, sharing, and reproducing; community support; and policy and politics. We argue that these findings inform our understanding of bottom up technological innovation and shed light on the design of sociotechnical systems to mediate and support such innovation beyond the gaming context.
... It is not at all indifferent for high-quality knowledge production whether a group is loosely or tightly organized, or whether the composition of the most productive group is stable or highly variable. Previous research has shown that the evolution of social structures (Kittur et al 2009) and stable contribution elites (i.e., high-volume contributors), both on Wikipedia and Stack Overflow (Liu et al 2005; Jurczyk and Agichtein 2007; Pal et al 2012), may impact the outcome of the production itself. The high involvement of a given set of individuals who specialized in certain knowledge domains was identified as one of the most important factors in the success of wiki groups (Kane 2011). ...
... Kittur and Kraut (2008) studied the impact of coordination methods between contributors on content quality, also highlighting the importance of interactional stability in wiki spaces. Also, Kittur et al (2009) analyzed the role of uneven distribution of effort on productivity across thousands of articles on wiki spaces. This work drew attention to the core issue of coordination via concentration of effort among a few selected editors. ...
Article
Full-text available
Online knowledge production sites, such as Wikipedia and Stack Overflow, are dominated by small groups of contributors. How does this affect knowledge quality and production? Does the persistent presence of some key contributors among the most productive members improve the quality of the knowledge, considered in the aggregate? The paper addresses these issues by correlating week-by-week value changes in contribution unevenness, elite resilience (stickiness), and content quality. The goal is to detect if and how changes in social structural variables may influence the quality of the knowledge produced by two representative online knowledge production sites: Wikipedia and Stack Overflow. Regression analysis shows that on Stack Overflow both unevenness and elite stickiness have a curvilinear effect on quality: quality is optimized at specific levels of elite stickiness and unevenness. On Wikipedia, by contrast, quality increases linearly with a decline in entropy overall, and with an increase in stickiness in the maturation phase, after a peak in entropy, elite stickiness, and content quality is reached.
... A first difference between Figures 3 and 4 is the presence of dependencies between the parts of the paper, the outputs of the paper writing tasks. Only a few tasks in writing, such as proofreading, are like galaxy classification in that they can be done without affecting other tasks [15], i.e., by crowdsourcing [1]. For the most part, different parts of a paper cannot be written independently. ...
... Furthermore, the voice and writing style of the different sections need to match. These dependencies among parts of a paper impose constraints on how the paper parts are written [15]. Managing these dependencies requires additional work, as authors must either plan the writing process in advance [38,11], e.g., by developing a shared vision for the paper [39] (collectively or led by one person [14]), or write and revise their parts to fit with the other parts. ...
... Kittur A, Pendleton B, Kraut RE (2009b) Herding the cats: the influence of groups in coordinating peer production. ...
... In general, the early literature simply avoided confronting the issue of inequality on its own territory as a constitutive and generative phenomenon. Although methods for characterizing inequality and diversity in online environments were present in earlier research, they were used only episodically and as "pass-through" mechanisms to address other, unrelated issues; the existence of inequality itself was typically viewed as a nonessential by-product (Kittur et al. 2009). ...
Chapter
Full-text available
This chapter explores the area of organizational configurations and links it with the evolutionary and revolutionary changes that we may observe in those configurations. We start by assessing the model laid out by Mintzberg (1979), significantly expanding on it by developing a comprehensive protocol to align any given organization with Mintzberg’s theoretical archetypes. This is then connected, on both a conceptual level and a measurement level, with the evolutionary and revolutionary changes that we may observe in those configurations, thereby yielding a new approach for researchers to directly observe and comprehend the development of organizational configurations over time. This is a vital piece of our larger theoretical model that specifies the mechanisms through which social structuration occurs in online collaborative groups.
... Kittur A, Pendleton B, Kraut RE (2009b) Herding the cats: the influence of groups in coordinating peer production. ...
... In general, the early literature simply avoided confronting the issue of inequality on its own territory as a constitutive and generative phenomenon. Although methods for characterizing inequality and diversity in online environments were present in earlier research, they were used only episodically and as "pass-through" mechanisms to address other, unrelated issues; the existence of inequality itself was typically viewed as a nonessential by-product (Kittur et al. 2009). ...
Chapter
One of the most important goals of the present volume is to define and relate group structuration to other online organizational and interactional phenomena. Although structuration is a high-level concept that may hold different meanings for different people, within this research, the concept is quite simple and clear. In brief, structuration is equated with the concept of “signal” in information systems, as defined by Shannon and Weaver (1948). Structuration is meaningful order, so by Shannon’s logic, structure is the opposite of entropy. Since structure is measured using entropy, we may say that structure increases as the observed value of entropy decreases. Conceptually, this means that structure is captured in the negative by observing the degree to which the system is not random (noisy or disordered).
... For instance, previous research on group development suggests that groups can fundamentally change their focus, work structure, and processes before an evaluation [22,23]. In that sense, the factors identified as key predictors of crowd performance, such as centralization and conflict [5,6,36,37,53,55], may not represent typical crowd behavior, but rather behavior that is distinct to pre-evaluation stages. Incorporating a group development perspective into the analysis of crowds would help identify other patterns in the behavior of collaborative crowds. ...
... We have observed that crowds increase their centralization as they approach evaluation and that centralization is linked to better future performance. This is consistent with the assertion that centralization reduces coordination costs, and thus increases performance, which has been observed in other studies [36,37]. However, because we also observed that centralization increases before entering an evaluation period, questions remain regarding whether centralization causes performance or whether the evaluation process causes centralization in Wikipedia crowds. ...
Article
Full-text available
Collaborative crowdsourcing is an increasingly common way of accomplishing work in our economy. Yet, we know very little about how the behavior of these crowds changes over time and how these dynamics impact their performance. In this paper, we take a group development approach that considers how the behavior of crowds change over time in anticipation and as a result of their evaluation and recognition. Towards this goal, this paper studies the collaborative behavior of groups comprised of editors of articles that have been recognized for their outstanding quality and given the Good Articles (GA) status and those that eventually become Featured Articles (FA) on Wikipedia. The results show that the collaborative behavior of GA groups radically changes just prior to their nomination. In particular, the GA groups experience increases in the level of activity, centralization of workload, and level of GA experience and decreases in conflict (i.e., reverts) among editors. After being promoted to GA, they converge back to their typical behavior and composition. This indicates that crowd behavior prior to their evaluation period is dramatically different than behavior before or after. In addition, the collaborative behaviors of crowds during their promotion to GA are predictive of whether they are eventually promoted to FA. Our findings shed new light on the importance of time in understanding the relationship between crowd performance and collaborative measures such as centralization, conflict and experience.
... Collective intelligence is easier to apply when the amount of coordination between participants required to solve a problem is minimal (Kittur, 2008;Kittur et al., 2009). In some applications of collective intelligence, each individual only needs to supply their best answer to a problem, with the collective answer being determined by the average of all the responses. ...
... Collective intelligence is more difficult to apply when new contributions only make sense in relation to what has gone before. A famous example of such a 'high coordination' project was the publishing house Penguin's attempt to write a book using an online collaboration platform, which largely failed (Kittur et al., 2009;Pulinger, 2007). ...
Chapter
The development of modern information and communication technologies (ICTs) has led to a renewed interest in the phenomenon of ‘collective intelligence’ (also described as the ‘wisdom of the crowds’, Surowiecki, 2005). Collective intelligence refers to the capacity to mobilise and coordinate the expertise and creativity possessed by large groups of individuals in order to solve problems and create new knowledge. Although this can be done offline, ICTs make it far easier for large groups of individuals to work collectively on common tasks, for example by removing the need for physical proximity, allowing for asynchronous communication and making it possible for single individuals to transmit information to very large groups (Wellman, 1997). These advantages have allowed online networks to solve iconic mathematics problems (Polymath, 2009; Gowers and Nielsen, 2009), create the world’s largest reference work, Wikipedia (Almeida, 2007), and even challenge grandmaster Garry Kasparov to a game of chess (Nielsen, 2011).
... Previous research highlights the systematic benefits these communities offer compared to markets and hierarchical management structures in digitally networked environments, especially in information or cultural production [4]. The success of such communities depends on effective coordination, governance, trust, and the scalability of collaboration [26]. While not always explicitly labeled as DAOs (Decentralized Autonomous Organizations), a wealth of literature in areas like the theory of the firm [67], public choice theory [57], and platform cooperatives [45,51] addresses themes such as decentralized governance, incentive alignment, self-management, and group dynamics and paradoxes in decision making [44], themes central to the structure of DAOs. ...
Preprint
Decentralized Autonomous Organizations (DAOs) resemble early online communities, particularly those centered around open-source projects, and present a potential empirical framework for complex social-computing systems by encoding governance rules within "smart contracts" on the blockchain. A key function of a DAO is collective decision-making, typically carried out through a series of proposals where members vote on organizational events using governance tokens, signifying relative influence within the DAO. In just a few years, the deployment of DAOs surged, with a total treasury of $24.5 billion and 11.1M governance token holders collectively managing decisions across over 13,000 DAOs as of 2024. In this study, we examine the operational dynamics of 100 DAOs, such as pleasrdao, lexdao, lootdao, optimism collective, and uniswap. With large-scale empirical analysis of a diverse set of DAO categories and smart contracts, and by leveraging on-chain (e.g., voting results) and off-chain data, we examine factors such as voting power, participation, and DAO characteristics dictating the level of decentralization and, thus, the efficiency of management structures. As such, our study highlights that increased grassroots participation correlates with higher decentralization in a DAO, and lower variance in voting power within a DAO correlates with a higher level of decentralization, as consistently measured by Gini metrics. These insights closely align with key topics in political science, such as the allocation of power in decision-making and the effects of various governance models. We conclude by discussing the implications for researchers and practitioners, emphasizing how these factors can inform the design of democratic governance systems in emerging applications that require active engagement from stakeholders in decision-making.
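The Gini metric the abstract uses to quantify voting-power concentration can be sketched in a few lines (a minimal illustration, not the study's actual pipeline; the sample token balances are hypothetical):

```python
def gini(values):
    """Gini coefficient of a non-negative distribution (0 = perfect equality)."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Sorted-rank identity for the mean absolute difference.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

# A DAO where one wallet dominates is far less decentralized
# than one where voting power is evenly spread.
equal = [25, 25, 25, 25]   # hypothetical token balances
skewed = [85, 5, 5, 5]
print(gini(equal))   # 0.0
print(gini(skewed))  # 0.6
```

A Gini of 0 means voting power is perfectly equal across holders; values approaching 1 indicate concentration in a few wallets, which the study associates with lower decentralization.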
... Empirical evidence shows that peer-produced content can be of comparable quality to content that is produced by traditional editorial processes, yet studies also show that various problems emerge in open content production (Giles 2005, Chesney 2006, Brown 2011). To explain the capability of peer-production systems to generate and maintain high-quality content, prior research has examined their emerging organizational structures (Kittur et al. 2009, Baldwin and von Hippel 2011, Arazy et al. 2020), participation dynamics (Ransbotham and Kane 2011, Kane and Ransbotham 2016, Zheng et al. 2023), content production (Faraj et al. 2011, Kane et al. 2014, Levine and Prietula 2014), and governance (Shah 2006, Markus 2007, Forte et al. 2009, Aaltonen and Lanzara 2015, Mindel et al. 2018). ...
Article
Online peer-production systems create value by enabling people to participate in the production of a common good such as an open encyclopedia by building freely on each other’s work. Fixing quality problems in peer production in a timely manner is critical because millions of people rely on peer-produced content for learning and decision making. The longer low-quality content remains in place, the more it can harm the reputation of a peer-production system and diminish the capability of the system to maintain its contributor base. We study the different mechanisms affecting the timeliness of quality-problem resolution in Wikipedia and find that the speedy resolution of quality problems depends on the successful integration of software robots (bots) and the careful calibration of policy citations to the different levels of experience among contributors. Most control mechanisms found in firm-based production do not apply to peer production; instead, quality control in peer production must leverage the strengths of different contributors and harness the benefits of technological support and adaptive policy frameworks to improve productivity and achieve high-quality outcomes.
... • Team Size, Team Formation and Individual Skills influence Collective Skills [64][65][66][67], which is a latent factor. • Team Size, Team Formation and Collective Skills influence Social Cohesion [68][69][70][71], which is a latent factor. ...
Article
Full-text available
Cyber competitions are usually team activities, where team performance depends not only on the members’ abilities but also on team collaboration. This seems intuitive, especially given that team formation is a well-studied discipline in competitive sports and project management, but unfortunately, team performance and team formation strategies are rarely studied in the context of cybersecurity and cyber competitions. Since cyber competitions are becoming more prevalent and organized, this gap becomes an opportunity to formalize the study of team performance in the context of cyber competitions. This work follows a two-approach, cross-validating methodology. The first is the computational modeling of cyber competitions using Agent-Based Modeling. Team members are modeled, in NetLogo, as collaborating agents competing over a network in a red team/blue team match. Members’ abilities, team interaction, and network properties are parametrized (inputs), and the match score is reported as output. The second approach is grounded in the literature on team performance (not in the context of cyber competitions), where a theoretical framework is built in accordance with the literature. The results of the first approach are used to build a causal inference model using Structural Equation Modeling. Upon comparing the causal inference model to the theoretical model, the two showed high resemblance, which cross-validated both approaches. Two main findings are deduced: first, the body of literature studying teams remains valid and applicable in the context of cyber competitions. Second, coaches and researchers can test new team strategies computationally and achieve precise performance predictions. Both the methodology and the findings are novel to the study of cyber competitions.
... Coordination involves four different activities: communication, perception of common objects, group decision-making, and (pure) coordination (Malone and Crowston 1990). Group tasks typically require both production and coordination, although some tasks are particularly high in coordination, while others are primarily production-oriented (Kittur et al. 2009). ...
Article
Full-text available
Group work is a commonly used method of working, and the performance of a group can vary depending on the type and structure of the task at hand. Research suggests that groups can exhibit "collective intelligence"—the ability to perform well across tasks—under certain conditions, making group performance somewhat predictable. However, predictability of task performance becomes difficult when a task relies heavily on coordination among group members or is ill-defined. To address this issue, we propose a technical solution in the form of a chatbot providing advice to facilitate group work for more predictable performance. Specifically, we target well-defined, high-coordination tasks. Through experiments with 64 virtual groups performing various tasks and communicating via text-based chat, we found a relationship between the average intelligence of group members and their group performance in such tasks, making performance more predictable. The practical implications of this research are significant, as the assembly of consistently performing groups is an important organizational activity.
... Having a program flow in this way blends the organization's space with the participant's space. This blending of spaces by researchers is usually considered in terms of asynchronous interactions, such as collaborative work [46] or synchronous activities, such as family meals [4]. We propose developing this link between spaces over time to help form a lasting bond between individual staff and participants. ...
... Bonabeau (2009) also posited that collective intelligence enhances the decision-making process owing to the diverse viewpoints that emerge from group discussions. According to Kittur et al. (2009), this is particularly the case if the participants are consciously engaged in the group activity. Importantly, this process is bidirectional. ...
Article
Full-text available
The importance of social networks has increased in recent decades, yet the use of social learning in higher education is nascent. Little is known about how to foster high levels of social learning discourse among students in higher education classrooms. To address this gap, the present study analyses the use of a mobile application (Soqqle) for sharing student-generated content and peer-to-peer communication. Students from Hong Kong, Malaysia, and Indonesia uploaded videos linked to assessments and received feedback from their instructors and peers through social engagement features (e.g., comments, likes). The majority of students reported that the social learning experience promoted idea generation, increased creativity, and improved attention. These results indicate that integrating online platforms and mobile applications can promote social learning. The findings have important implications for educational practice because many educational institutions have adopted online learning due to the COVID-19 pandemic.
... Yet, group creativity is difficult: individual creative contributions are fundamentally complex and co-dependent; their combination requires more intelligence than simple summation or independent voting. How to disaggregate/aggregate disparate contributions in complex tasks such as composing music has been an open question in human-computer interaction [45] and collective intelligence [46]. ...
Article
Full-text available
Information lattice learning (ILL) is a novel framework for knowledge discovery based on group-theoretic and information-theoretic foundations, which can rediscover the rules of music as known in the canon of music theory and also discover new rules that have remained unexamined. Such probabilistic rules are further demonstrated to be human-interpretable. ILL itself is a rediscovery and generalization of Shannon's lattice theory of information, where probability measures are not given but are learned from training data. This article explains the basics of the ILL framework, including both how to construct a lattice-structured abstraction universe that specifies the structural possibilities of rules, and how to find the most informative rules by performing statistical learning through an iterative student-teacher algorithmic architecture that optimizes information functionals. The ILL framework is finally shown to support both pedagogy and novel patterns of music co-creativity.
... These extra coordination costs render the SOPs approach more expensive than top-down approaches. This phenomenon seems to be inherent to self-organizing social collaborative systems, such as Wikipedia (Kittur et al., 2009). Adding even more worker agency, for example, full agency across all task workflow stages, could result in even higher costs. ...
Article
As the volume and complexity of distributed online work increases, collaboration among people who have never worked together in the past is becoming increasingly necessary. Recent research has proposed algorithms to maximize the performance of online collaborations by grouping workers in a top-down fashion and according to a set of predefined decision criteria. This approach often means that workers have little say in the collaboration formation process. Depriving users of control over whom they will work with can stifle creativity and initiative-taking, increase psychological discomfort, and, overall, result in less-than-optimal collaboration results—especially when the task concerned is open-ended, creative, and complex. In this work, we propose an alternative model, called Self-Organizing Pairs (SOPs), which relies on the crowd of online workers themselves to organize into effective work dyads. Supported but not guided by an algorithm, SOPs are a new human-centered computational structure, which enables participants to control, correct, and guide the output of their collaboration as a collective. Experimental results, comparing SOPs to two benchmarks that do not allow user agency, and on an iterative task of fictional story writing, reveal that participants in the SOPs condition produce creative outcomes of higher quality, and report higher satisfaction with their collaboration. Finally, we find that similarly to machine learning-based self-organization, human SOPs exhibit emergent collective properties, including the presence of an objective function and the tendency to form more distinct clusters of compatible collaborators.
... Researchers of collective intelligence pay special attention to the Wikipedia project. For example, American scientists from Carnegie Mellon University have identified the relationship between the complexity of Wikipedia content and the competence of the editors of this project [17]. Horost [18, p. 251], who generally views all network resources as a global brain with memory, nodes and synapses, wrote about Wikipedia as a collective knowledge base: "Wikipedia is distinguished by its "intelligence," which it develops through collective consciousness and content editing. ...
Article
Full-text available
With the digitalization of the economy, the creative component of an organization’s activities increases. Standard business process management methods stop working due to the rising uncertainty of task solution times. Currently, there are no effective technologies for managing intellectual activity processes in organizations. The role of collective intelligence technologies for knowledge management in organizations has long been discussed in the literature, but there are still no concrete proposals on implementation. This work aims to show how collective intelligence technologies can solve the problems of managing business processes of intellectual activity. The potential of collective intelligence technologies for increasing labor productivity is demonstrated. Models for distributing tasks by competencies and for synergy from collaboration are proposed for this demonstration. The paper shows that competencies are the primary metric that can be used to measure work with knowledge in an organization, but they should also be considered when organizing group activities. A simple model example shows that the correct distribution of tasks by competencies can increase severalfold the speed at which a group solves tasks. In real cases, calculations using computing resources are necessary. A model is also proposed that demonstrates improving the joint activity of a creative employee and an analyst. It is shown that business process management should be supplemented by mapping the competence model and group work options to the stages of business processes. This will make it possible to manage the business processes of intellectual activity.
... Collective intelligence has been conceptualized and measured in a variety of different contexts, including organizations (Knott, 2008;Mayo & Woolley, 2021), networks (Kabo, 2018;Pescetelli et al., 2021), crowds (Kittur et al., 2009), and small groups (Riedl et al., 2021;Woolley et al., 2010). The design problem of collective intelligence in all of these settings is to put together a set of heterogeneous members who can coordinate their distributed cognitive resources (i.e., memory and attention) and diverse goals to formulate a series of joint actions that fulfill its efficiency and maintenance functions over time. ...
Article
Full-text available
Human society faces increasingly complex problems that require coordinated collective action. Artificial intelligence (AI) holds the potential to bring together the knowledge and associated action needed to find solutions at scale. In order to unleash the potential of human and AI systems, we need to understand the core functions of collective intelligence. To this end, we describe a socio-cognitive architecture that conceptualizes how boundedly rational individuals coordinate their cognitive resources and diverse goals to accomplish joint action. Our transactive systems framework articulates the inter-member processes underlying the emergence of collective memory, attention, and reasoning, which are fundamental to intelligence in any system. Much like the cognitive architectures that have guided the development of artificial intelligence, our transactive systems framework holds the potential to be formalized in computational terms to deepen our understanding of collective intelligence and pinpoint roles that AI can play in enhancing it.
... Less is known about transient teams, which are short-lived encounters in which members do not know each other in a meaningful way and have to rely on convention and stigmergy for coordination. While these features would seem to limit effective team performance, these anonymous temporary teams have proven to be quite capable at specific tasks, such as collaboratively creating content, the most famous example being Wikipedia [54,87], building complex open-source software [87], competing in challenging team-based games [53,59], as well as solving a range of other problems in both real-world and virtual settings [86,99,111]. ...
Conference Paper
Full-text available
Although player performance in online games has been widely studied, few studies have considered the behavioral preferences of players and how that impacts performance. In a competitive setting where players must cooperate with temporary teammates, it is even more crucial to understand how differences in playing style contribute to teamwork. Drawing on theories of individual behavior in teams, we describe a methodology to empirically profile players based on the diversity and conformity of their gameplay styles. Applying this approach to a League of Legends dataset, we find three distinct types of players that align with our theoretical framework: generalists, specialists, and mavericks. Importantly, the behavior of each player type remains stable despite players becoming more experienced. Additionally, we extensively investigate the benefits and drawbacks of each type of player by evaluating their individual performance, contribution to the team, and adaptation to changes in the game environment. We find that, overall, specialists tend to outperform others, while mavericks bear high risk but also potentially reap great rewards. Generalists are the most resilient to instability in the environment (game patches). We discuss the implications of these findings in terms of game design and community management, as well as team building in environments with varying levels of stability.
... As mentioned in [17], norms are key contextual variables that drive feedback mechanisms, reputation building and perceived implicit hierarchies in online forums. Norms and community culture may impact the extent of expression [20,21], nature of trust, intimacy and attachment [10], collaboration and distribution of tasks [22], all of which may affect the processes of peer driven knowledge production and discussion or the nature of epistemic culture [23] and values [24] promoted in these collectives. ...
Article
Software programming is increasingly becoming a collaborative and community driven effort, with online discussions becoming vital resources for learning and knowledge sharing. This study explores differences in the discourse patterns in two popular online programming communities to provide preliminary insights for the question of how virtual learning communities should be designed and structured. A content analysis of a random sample of 15 discussion threads from each of r/Askprogramming (236 contributions) and Stack Overflow (SO; 224 contributions) was used to explore the observed interaction patterns. Differences between sites emerge in the scope of topics and the nature of responses the community provides. While Stack Overflow is more task‐specific, r/Askprogramming supports a greater sense of bonding and camaraderie among community members in addition to task‐specific discussions. These findings suggest key normative structures that regulate the nature of discourse in these communities which may in turn have design implications for such online learning initiatives.
... By drawing inspiration from peers' ideas [64,73,74] or external examples [17,41,56], a community can converge on novel, high quality solutions more quickly than individuals can alone [15]. However, large groups tend to develop large amounts of shallow and redundant ideas and have trouble deciding on and selecting a good subset [13,48,70]. In particular, with enough people and enough time, groups can achieve abundant levels of divergent thinking (or the generation of ideas in diverse directions) [34], but struggle with convergent thinking (or narrowing ideas and identifying the optimal solutions) [27]-both of which are considered distinct but critical pathways to creativity [58]. ...
Preprint
In many instances of online collaboration, ideation and deliberation about what to write happen separately from the synthesis of the deliberation into a cohesive document. However, this may result in a final document that has little connection to the discussion that came before. In this work, we present interleaved discussion and summarization, a process where discussion and summarization are woven together in a single space, and collaborators can switch back and forth between discussing ideas and summarizing discussion until it results in a final document that incorporates and references all discussion points. We implement this process in a tool called Wikum+ that allows groups working together on a project to create living summaries: artifacts that can grow as new collaborators, ideas, and feedback arise and shrink as collaborators come to consensus. We conducted studies where groups of six people each collaboratively wrote a proposal using Wikum+ and a proposal using a messaging platform along with Google Docs. We found that Wikum+'s integration of discussion and summarization helped users be more organized, allowing for light-weight coordination and iterative improvements throughout the collaboration process. A second study demonstrated that in larger groups, Wikum+ is more inclusive of all participants and more comprehensive in the final document compared to traditional tools.
... Collective intelligence has been found in groups of heterogeneous individuals for predictions and other menial tasks, such as forecasting the results of a political election or counting the number of beans in a container (Sunstein 2006). Online crowd-based systems have also been shown to be effective at coordinating very large groups of people, focusing on complex tasks that may require some form of self-organization or leadership (Kittur, Lee, and Kraut 2009). ...
Article
Full-text available
This paper examines the opportunities and the economic benefits of exploiting publicly-sourced datasets of road surface quality. Crowdsourcing and crowdsensing initiatives channel the participation of engaged citizens into communities that contribute towards a shared goal. In providing people with the tools needed to positively impact society, crowd-based initiatives can be seen as purposeful drivers of social innovation from the bottom. Mobile crowdsensing (MCS), in particular, takes advantage of the ubiquitous nature of mobile devices with on-board sensors to allow large-scale inexpensive data collection campaigns. This paper illustrates MCS in the context of road surface quality monitoring, presenting results from several pilots adopting a public crowdsensing mobile application for systematic data collection. Evaluation of collected information, its quality, and its relevance to road sustainability and maintenance are discussed, in comparison to authoritative data from a variety of other sources.
... The article "The Capitalist's Dilemma," conceptualized and written by two professors and 150 of their MBA students, is one example (Christensen & van Bever, 2014). As with other forms of collaborative writing online, such as Wikipedia, channeling the contributions of many collaborators into a quality finished article requires a few group leaders who complete a disproportionate amount of the work and organize and edit the written material of others (Kittur & Kraut, 2008;Kittur, Lee, & Kraut, 2009). Our personal experience with articles with many authors is that a large number of contributors commenting publicly on the draft greatly facilitates working out a solid framework and set of arguments, identifying relevant articles and literatures to cite (especially unpublished work), ferreting out quantitative and grammatical errors, and tempering claims appropriately. ...
Article
Full-text available
Most scientific research is conducted by small teams of investigators who together formulate hypotheses, collect data, conduct analyses, and report novel findings. These teams operate independently as vertically integrated silos. Here we argue that scientific research that is horizontally distributed can provide substantial complementary value, aiming to maximize available resources, promote inclusiveness and transparency, and increase rigor and reliability. This alternative approach enables researchers to tackle ambitious projects that would not be possible under the standard model. Crowdsourced scientific initiatives vary in the degree of communication between project members from largely independent work curated by a coordination team to crowd collaboration on shared activities. The potential benefits and challenges of large-scale collaboration span the entire research process: ideation, study design, data collection, data analysis, reporting, and peer review. Complementing traditional small science with crowdsourced approaches can accelerate the progress of science and improve the quality of scientific research.
... A WikiProject is "a group of contributors who want to work together as a team to improve Wikipedia". Prior work [21,41,78] suggests that WikiProjects provide three valuable support mechanisms for their members: (1) WikiProjects enable members to find help and expert collaborators; (2) WikiProjects can guide members' efforts and explicitly structure members' participation by organizing to-do lists, events like "Collaborations of the Week", and task forces; and (3) WikiProjects can offer new editors "protection" for their work, shielding them from unwarranted reverts and edit wars. As we mentioned in the introduction, despite all the benefits WikiProjects provide, WikiProjects have been in decline. ...
Article
Most commonly used approaches to developing automated or artificially intelligent algorithmic systems are Big Data-driven and machine learning-based. However, these approaches can fail, for two notable reasons: (1) they may lack critical engagement with users and other stakeholders; (2) they rely largely on historical human judgments, which do not capture and incorporate human insights into how the world can be improved in the future. We propose and describe a novel method for the design of such algorithms, which we call Value Sensitive Algorithm Design. Value Sensitive Algorithm Design incorporates stakeholders' tacit knowledge and explicit feedback in the early stages of algorithm creation. This increases the chance of avoiding design choices that embed bias or compromise key stakeholder values. Generally, we believe that algorithms should be designed to balance multiple stakeholders' needs, motivations, and interests, and to help achieve important collective goals. We also describe a specific project, "Designing Intelligent Socialization Algorithms for WikiProjects in Wikipedia," to illustrate our method. We intend this paper to contribute to the rich ongoing conversation concerning the use of algorithms in supporting critical decision-making in society.
... The stream of research focusing on content relies heavily on article attributes (e.g., word count) as proxies for information quality (e.g., Stvilia et al. 2008). The research stream focused on understanding the collaborative process investigates how human and bot contributors interact (e.g., Kittur et al. 2009). Takeaways from the latter stream include that humans adopt specific roles such as gatekeepers (Shaw 2012), content changers, content retainers, and deliberation facilitators (Kane et al. 2014). ...
Conference Paper
Full-text available
This research proffers a typology of human and bot co-production processes and conceptualizes co-production as having three iterative phases (i.e., content generation, content positioning, and content protection), each comprised of unique processes. We theorize that specific information quality threats (i.e., content bias, influence disparities, and selection bias and source bias) are enabled or constrained during distinct phases of digital co-production. Specifically, content bias is shaped during the content generation and protection phases, influence disparities are shaped during the content positioning phase, and selection and source bias are shaped during the content protection phase. Notably, increases in source and selection bias during the content protection phase are associated with decreases in content bias. Bots, as active contributors in each process, are an important influence on information quality. Bots have a paradoxical effect on information quality, i.e., bots reduce source, selection, and content bias, but bots increase influence disparities in digitally co-produced information.
... As with other forms of collaborative writing online, such as Wikipedia, channeling the contributions of many collaborators into a quality finished paper requires a few group leaders who complete a disproportionate amount of the work, and also organize and edit the written material of others (Kittur & Kraut, 2008; Kittur, Lee, & Kraut, 2009). Our personal experience with many-authored papers is that a large number of contributors commenting publicly on the draft greatly facilitates working out a solid framework and set of arguments, identifying relevant articles and literatures to cite (especially unpublished work), ferreting out quantitative and grammatical errors, and tempering claims appropriately. ...
Preprint
Most scientific research is conducted by small teams of researchers, who together formulate hypotheses, collect data, conduct analyses, and report novel findings. These teams are rather closed and operate as vertically integrated silos. Here we argue that scientific research that is horizontally distributed provides substantial complementary value by maximizing available resources, increasing inclusiveness and transparency, and facilitating error detection. This alternative approach enables researchers to tackle ambitious projects by diversifying contributions and leveraging specialized expertise. The benefits of large scale collaboration span the entire research process: ideation, study design, data collection, data analysis, reporting, and peer review. Crowdsourcing can accelerate the progress of science and improve the quality of scientific research.
... Some attempts have also been made to measure intelligence at the group level (Devine & Philips, 2001). Research has emphasized the superiority of the collective power of problem-solving over individual power (Atlee & Por, 2000; Heylighen, 1999; Kittur & Kraut, 2008; Kittur, Lee, & Kraut, 2009). ...
Article
In today’s growing competition, organizations face shrinking innovation cycles, swelling customer expectations, and distributed talent, which impels organizations to apply the knowledge, skills, and experience of employees most effectively. Applying Collective Intelligence, i.e., the combined knowledge and expertise of a diverse group, has become the order of the day. Therefore, the collective intelligence level of an individual is of immense importance for high performance and the achievement of goals. In the present research, an attempt is made to operationalize the components of organizational collective intelligence among working professionals. Specifically, a scale was developed to measure collective intelligence among 600 working professionals. The results were subjected to robust measurement tools such as Exploratory Factor Analysis and Structural Equation Modeling to confirm the factor structure. The instrument resulted in four factors and a 17-item scale. It can be used by policymakers and human resource managers for selecting, harnessing, and retaining appropriate talent in the organization.
... Kittur and Kraut [30] studied the impact of coordination methods between contributors on content quality. Also, Kittur et al. [12] analyzed the role of uneven distribution of effort on productivity across thousands of articles on wiki spaces. This work drew attention to the core issue of coordination via concentration of effort among a few selected editors. ...
Conference Paper
Full-text available
Online knowledge production sites, such as Wikipedia or Stack Overflow, are dominated by small groups of contributors. How does this affect knowledge production and its quality? Does the persistent presence of some key contributors among the most productive members improve the quality of the knowledge, considered in the aggregate? The present paper considers these issues by correlating week-by-week value changes in contribution unevenness, elite resilience (stickiness), and content quality. The goal is to detect if and how changes in social structural variables may influence the quality of the knowledge produced by online knowledge production sites. The paper addresses this question through an extensive data analysis carried out on the datasets of two representative sites: Wikipedia and Stack Overflow. Results from the analysis show that on Stack Overflow both unevenness and elite stickiness have a curvilinear effect on quality: quality is optimized at specific levels of elite stickiness and unevenness. On Wikipedia, by contrast, quality increases linearly with a decline in entropy overall, and with an increase in stickiness in the maturation phase, after an entropy peak is reached.
... They also found that knowledge gaps occurred when users of social software platforms perceived the platform as a broadcast medium rather than a discourse enabler. Other examples of self-promotion have been demonstrated, with participants developing coalitions where they overwhelm the crowd with votes and posts to ensure that they win and others lose (Dellarocas, 2010; Kittur et al., 2009). ...
Article
Full-text available
Arvind Malhotra is the H. Allen Andrew Professor of Entrepreneurial Education and Professor of Strategy & Entrepreneurship at the Kenan–Flagler Business School, University of North Carolina at Chapel Hill. His research projects include studying how successful brands leverage social media for creating a loyal customer base, successful open-innovation organizational and extra-organizational structures; adoption of innovative technology-based services, such as wireless, by consumers and organizations; and management of knowledge in extra-organizational contexts. He has received research grants from the Society for Information Managers Advanced Practices Council, Dell, Carnegie Bosch Institute, National Science Foundation, RosettaNet consortium, UNC-Small Grants Program and the Marketing Sciences Institute.
Article
Full-text available
Intelligence is a concept that occurs in multiple contexts and has various meanings. It refers to the ability of human beings and other entities to think and understand the world around us. It represents a set of skills directed at problem-solving and targeted at producing effective results. Thus, intelligence and governance are an odd couple. We expect governments and other governing institutions to operate in an intelligent manner, but too frequently we criticize their understanding of serious public problems, their decisions, behaviors, managerial skills, ability to solve urgent problems, and overall wisdom of governance. This manuscript deals with such questions using interdisciplinary insights (psychological, social, institutional, biological, technological) on intelligence and integrating them with knowledge of governance, administration, and management in the public and non-profit sectors. We propose the IntelliGov framework, which may extend our theoretical, methodological, analytical, and applied understanding of intelligent governance in the digital age.
Article
Full-text available
Artificial intelligence (AI) is often used to predict human behavior, thus potentially posing limitations to individuals’ and collectives’ freedom to act. AI's most controversial and contested applications range from targeted advertisements to crime prevention, including the suppression of civil disorder. Scholars and civil society watchdogs are discussing the oppressive dangers of AI being used by centralized institutions, like governments or private corporations. Some suggest that AI gives asymmetrical power to governments, compared to their citizens. On the other hand, civil protests often rely on distributed networks of activists without centralized leadership or planning. Civil protests create an adversarial tension between centralized and decentralized intelligence, opening the question of how distributed human networks can collectively adapt and outperform a hostile centralized AI trying to anticipate and control their activities. This paper leverages multi‐agent reinforcement learning to simulate dynamics within a human–machine hybrid society. We ask how decentralized intelligent agents can collectively adapt when competing with a centralized predictive algorithm, wherein prediction involves suppressing coordination. In particular, we investigate an adversarial game between a collective of individual learners and a central predictive algorithm, each trained through deep Q‐learning. We compare different predictive architectures and showcase conditions in which the adversarial nature of this dynamic pushes each intelligence to increase its behavioral complexity to outperform its counterpart. We further show that a shared predictive algorithm drives decentralized agents to align their behavior. This work sheds light on the totalitarian danger posed by AI and provides evidence that decentrally organized humans can overcome its risks by developing increasingly complex coordination strategies.
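The adversarial dynamic described above can be illustrated with a much smaller sketch than the paper's deep Q-learning setup: stateless tabular learners stand in for both the decentralized collective and the central predictor. The payoff structure, action set, and all parameter values below are invented for illustration and are not taken from the study.

```python
import random

random.seed(0)

N_AGENTS = 5
ACTIONS = (0, 1)   # e.g., two candidate gathering points
ALPHA, EPS = 0.1, 0.2

# Decentralized agents: one Q-value per action (stateless bandit learners).
agent_q = [{a: 0.0 for a in ACTIONS} for _ in range(N_AGENTS)]
# Central predictor: tries to anticipate the majority action,
# learning one Q-value per guess in the same incremental way.
pred_q = {a: 0.0 for a in ACTIONS}

def choose(q):
    """Epsilon-greedy action selection over a Q-table."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[a])

for step in range(2000):
    acts = [choose(q) for q in agent_q]
    majority = max(ACTIONS, key=acts.count)
    guess = choose(pred_q)
    # Zero-sum payoff: the predictor wins when it anticipates the majority.
    pred_r = 1.0 if guess == majority else -1.0
    pred_q[guess] += ALPHA * (pred_r - pred_q[guess])
    for q, a in zip(agent_q, acts):
        # An agent is rewarded for joining an unanticipated majority.
        r = 1.0 if (a == majority and guess != majority) else -1.0
        q[a] += ALPHA * (r - q[a])
```

Because the predictor punishes predictable coordination, each side's best response keeps shifting, which is the pressure toward behavioral complexity the abstract describes.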
Article
Full-text available
Many models of learning in teams assume that team members can share solutions or learn concurrently. However, these assumptions break down in multidisciplinary teams where team members often complete distinct, interrelated pieces of larger tasks. Such contexts make it difficult for individuals to separate the performance effects of their own actions from the actions of interacting neighbors. In this work, we show that individuals can overcome this challenge by learning from network neighbors through mediating artifacts (like collective performance assessments). When neighbors' actions influence collective outcomes, teams with different networks perform relatively similarly to one another. However, varying a team's network can affect performance on tasks that weight individuals' contributions by network properties. Consequently, when individuals innovate (through "exploring" searches), dense networks hurt performance slightly by increasing uncertainty. In contrast, dense networks moderately help performance when individuals refine their work (through "exploiting" searches) by efficiently finding local optima. We also find that decentralization improves team performance across a battery of 34 tasks. Our results offer design principles for multidisciplinary teams within which other forms of learning prove more difficult.
Article
Full-text available
In 2010, a new research stream began on collective intelligence (CI), defined as a group's general ability to perform consistently well across a wide variety of tasks. Subsequent empirical evidence presents a mixed picture. Some studies have found groups to exhibit CI while others have not. To resolve these disparate results, we compare 21 experimental studies to understand what influences whether groups exhibit CI. We find that task structure is a boundary condition for CI in that groups exhibit CI across well‐structured tasks but not across ill‐structured tasks. For ill‐structured tasks, CI has a more nuanced set of multiple factors that may be interpreted as different facets of CI. This research extends our understanding of CI by suggesting that the original definition of CI was too all‐encompassing. CI should be reconceptualized as a multi‐dimensional phenomenon, similar to research on individual intelligence. We highlight avenues for future research to continue to move CI research forward, particularly regarding ill‐structured tasks.
Article
The article examines the conceptual and categorical apparatus of "intelligence", explores various characteristics of its essence and content, and analyses its sources, the formation of its structure, and the features of its components. The key role in the formation of intelligence belongs to the education model chosen in the country and the learning model formed at the enterprise. An overview of research in the field of collective intelligence is presented, and the demand for collective intelligence technologies in the transition to the knowledge society is argued. The results inform the principles of an organizational and economic mechanism for making effective use of industrial enterprises' intellectual potential in the implementation of the European vector of development of the domestic economy.
Article
In many instances of online collaboration, ideation and deliberation about what to write happen separately from the synthesis of the deliberation into a cohesive document. However, this may result in a final document that has little connection to the discussion that came before. In this work, we present interleaved discussion and summarization, a process where discussion and summarization are woven together in a single space, and collaborators can switch back and forth between discussing ideas and summarizing discussion until it results in a final document that incorporates and references all discussion points. We implement this process in a tool called Wikum+ that allows groups working together on a project to create living summaries: artifacts that can grow as new collaborators, ideas, and feedback arise and shrink as collaborators come to consensus. We conducted studies where groups of six people each collaboratively wrote a proposal using Wikum+ and a proposal using a messaging platform along with Google Docs. We found that Wikum+'s integration of discussion and summarization helped users be more organized, allowing for light-weight coordination and iterative improvements throughout the collaboration process. A second study demonstrated that in larger groups, Wikum+ is more inclusive of all participants and more comprehensive in the final document compared to traditional tools.
Chapter
As algorithmic systems grow in importance in societal contexts, the debate about how they should be designed intensifies. The Wikipedia community has already gathered rich experience with the design and deployment of algorithmic systems. Over the past 15 years, a sociotechnical assemblage of human and algorithmic actors has emerged that handles extensive tasks collectively and in coordination with one another. This article describes different mechanisms of this algorithmic governance, which form the basis for design recommendations for algorithmic systems. These include the unambiguous identifiability of non-human systems, a collectively defined scope of action, the diversity of algorithmic realizations, an open infrastructure, an orientation toward values together with impact assessment, and the safeguarding of human agency. This contribution introduces and discusses these design recommendations in order to promote a value-oriented approach to algorithmic systems.
Conference Paper
Full-text available
Increasingly, information generated by open collaboration communities is being trusted and used by individuals to make decisions and carry out work tasks. Little is known about the quality of this information or the bias it may contain. In this study we address the question: How is gender bias embedded in information about organizational leaders in an open collaboration community? To answer this question, we use the bias framework developed by Miranda and colleagues (2016) to study bias stemming from structural constraints and content restrictions in the open collaboration community Wikipedia. Comparison of Wikipedia profiles of Fortune 1000 CEOs reveals that selection, source, and influence bias stemming from structural constraints on Wikipedia advantage women and disadvantage men. This finding suggests that information developed by open collaboration communities may contain unexpected forms of bias.
Conference Paper
This study investigates the behaviour of Ukrainian, Russian and English Wikipedia contributors in terms of their attention management, which Pierre Lévy casts as the initial stage of personal knowledge management. We analyse the salience of the Ukrainian crisis of 2013-14 as a topic of public discussion on the national, regional and international level, as well as the changing intensity of discussions between Ukrainian-speaking, Russian-speaking and English-speaking communities of Wikipedia contributors. We propose a meta-driven methodology to identify and track multi-faceted topics of public discussion rather than individual articles, which is common practice in Wikipedia scholarship. We develop a ‘discussion intensity’ metric to trace the salience of topics related to the Ukrainian crisis among Wikipedia contributors over time and to detect which aspects of this topic fuel discussions and direct attention. This method allows for a comparison across different language versions of Wikipedia and enables the identification of major differences in the attention management of different communities of Wikipedia creators and the role of the encyclopaedia in the development of collective knowledge. We observe three distinct patterns of collective attention management, which we characterize as intense attention, dispersed attention, and focused attention.
Conference Paper
Taking a picture has been traditionally a one-person task. In this paper we present a novel system that allows multiple mobile devices to work collaboratively in a synchronized fashion to capture a panorama of a highly dynamic scene, creating an entirely new photography experience that encourages social interactions and teamwork. Our system contains two components: a client app that runs on all participating devices, and a server program that monitors and communicates with each device. In a capturing session, the server collects the viewfinder images of all devices in real time and stitches them on the fly to create a panorama preview, which is then streamed to all devices as visual guidance. The system also allows one camera to be the host and send direct visual instructions to others to guide camera adjustment. When ready, all devices take pictures at the same time for panorama stitching. Our preliminary study suggests that the proposed system can help users capture high quality panoramas with an enjoyable teamwork experience.
Conference Paper
Prior work on creativity support tools demonstrates how a computational semantic model of a solution space can enable interventions that substantially improve the number, quality and diversity of ideas. However, automated semantic modeling often falls short when people contribute short text snippets or sketches. Innovation platforms can employ humans to provide semantic judgments to construct a semantic model, but this relies on external workers completing a large number of tedious micro tasks. This requirement threatens both accuracy (external workers may lack expertise and context to make accurate semantic judgments) and scalability (external workers are costly). In this paper, we introduce IdeaHound, an ideation system that seamlessly integrates the task of defining semantic relationships among ideas into the primary task of idea generation. The system combines implicit human actions with machine learning to create a computational semantic model of the emerging solution space. The integrated nature of these judgments allows IdeaHound to leverage the expertise and efforts of participants who are already motivated to contribute to idea generation, overcoming the issues of scalability inherent to existing approaches. Our results show that participants were equally willing to use (and just as productive using) IdeaHound compared to a conventional platform that did not require organizing ideas. Our integrated crowdsourcing approach also creates a more accurate semantic model than an existing crowdsourced approach (performed by external crowds). We demonstrate how this model enables helpful creative interventions: providing diverse inspirational examples, providing similar ideas for a given idea and providing a visual overview of the solution space.
Article
Full-text available
This paper classifies alternative mechanisms for coordinating work activities within organizations into impersonal, personal and group modes. It investigates how variations and interactions in the use of these coordination mechanisms and modes are explained by task uncertainty, interdependence and unit size. Nine hypotheses that relate these three determining factors to the use of the three coordination modes are developed in order to test some key propositions of Thompson (1967) and others on coordination at the work unit or departmental level of organization analysis. Research results from 197 work units within a large employment security agency largely support the hypotheses. The findings suggest that there are differences in degree and kind of influence of each determining factor on the mix of alternative coordination mechanisms used within organizational units.
Article
Full-text available
This study investigated 3 broad classes of individual-differences variables (job-search motives, competencies, and constraints) as predictors of job-search intensity among 292 unemployed job seekers. Also assessed was the relationship between job-search intensity and reemployment success in a longitudinal context. Results show significant relationships between the predictors employment commitment, financial hardship, job-search self-efficacy, and motivation control and the outcome job-search intensity. Support was not found for a relationship between perceived job-search constraints and job-search intensity. Motivation control was highlighted as the only lagged predictor of job-search intensity over time for those who were continuously unemployed. Job-search intensity predicted Time 2 reemployment status for the sample as a whole, but not reemployment quality for those who found jobs over the study's duration.
Conference Paper
Full-text available
Member-maintained communities ask their users to perform tasks the community needs. From Slashdot, to IMDb, to Wikipedia, groups with diverse interests create community-maintained artifacts of lasting value (CALV) that support the group's main purpose and provide value to others. Said communities don't help members find work to do, or do so without regard to individual preferences, such as Slashdot assigning meta-moderation randomly. Yet social science theory suggests that reducing the cost and increasing the personal value of contribution would motivate members to participate more. We present SuggestBot, software that performs intelligent task routing (matching people with tasks) in Wikipedia. SuggestBot uses broadly applicable strategies of text analysis, collaborative filtering, and hyperlink following to recommend tasks. SuggestBot's intelligent task routing increases the number of edits by roughly four times compared to suggesting random articles. Our contributions are: 1) demonstrating the value of intelligent task routing in a real deployment; 2) showing how to do intelligent task routing; and 3) sharing our experience of deploying a tool in Wikipedia, which offered both challenges and opportunities for research.
Conference Paper
Full-text available
User studies are important for many aspects of the design process and involve techniques ranging from informal surveys to rigorous laboratory studies. However, the costs involved in engaging users often require practitioners to trade off between sample size, time requirements, and monetary costs. Micro-task markets, such as Amazon's Mechanical Turk, offer a potential paradigm for engaging a large number of users for low time and monetary costs. Here we investigate the utility of a micro-task market for collecting user measurements, and discuss design considerations for developing remote micro user evaluation tasks. Although micro-task markets have great potential for rapidly collecting user measurements at low costs, we found that special care is needed in formulating tasks in order to harness the capabilities of the approach.
Conference Paper
Full-text available
Wikipedia's success is often attributed to the large numbers of contributors who improve the accuracy, completeness and clarity of articles while reducing bias. However, because of the coordination needed to write an article collaboratively, adding contributors is costly. We examined how the number of editors in Wikipedia and the coordination methods they use affect article quality. We distinguish between explicit coordination, in which editors plan the article through communication, and implicit coordination, in which a subset of editors structure the work by doing the majority of it. Adding more editors to an article improved article quality only when they used appropriate coordination techniques and was harmful when they did not. Implicit coordination through concentrating the work was more helpful when many editors contributed, but explicit coordination through communication was not. Both types of coordination improved quality more when an article was in a formative stage. These results demonstrate the critical importance of coordination in effectively harnessing the "wisdom of the crowd" in online production environments.
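One way to make "implicit coordination through concentrating the work" measurable is a Gini coefficient over per-editor edit counts. This is an illustrative operationalization added here, not necessarily the concentration measure the paper itself uses.

```python
def gini(counts):
    """Gini coefficient of an edit-count distribution:
    0 = work spread perfectly evenly; values near 1 = one editor
    does nearly all the work (high implicit concentration)."""
    xs = sorted(counts)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula: sum over sorted values of (2i - n - 1) * x_i,
    # with 1-based index i, normalized by n * total.
    cum = sum((2 * i - n - 1) * x for i, x in enumerate(xs, start=1))
    return cum / (n * total)

even = gini([10, 10, 10, 10])       # four editors share the work evenly
concentrated = gini([97, 1, 1, 1])  # one editor dominates the article
assert even == 0.0
assert concentrated > 0.7
```

Tracking such a coefficient per article would let one test whether quality gains from adding editors depend on how concentrated the work remains.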
Conference Paper
Full-text available
Task dependencies drive the need to coordinate work activities. We describe a technique for using automatically generated archival data to compute coordination requirements, i.e., who must coordinate with whom to get the work done. Analysis of data from a large software development project revealed that coordination requirements were highly volatile, and frequently extended beyond team boundaries. Congruence between coordination requirements and coordination activities shortened development time. Developers, particularly the most productive ones, changed their use of electronic communication media over time, achieving higher congruence. We discuss practical implications of our technique for the design of collaborative and awareness tools.
Article
Full-text available
Analyzed experimental comparisons of groups and individuals on 4 dimensions: task, process, individual differences, and methodology. A standardized terminology based on a study by I. Lorge et al is developed to preserve operational definitions in the comparisons of (a) group vs individual, (b) group vs the most competent individual in an aggregate, (c) group vs pooled responses of an aggregate, and (d) group vs mathematical models of performance. Research supported I. D. Steiner's (1972) theory of process loss but also suggested evidence for process gain. To avoid confounding group conditions and subject variables, this review focused on the results of random assignment of subjects to conditions.
Conference Paper
Full-text available
Wikipedia, a wiki-based encyclopedia, has become one of the most successful experiments in collaborative knowledge building on the Internet. As Wikipedia continues to grow, the potential for conflict and the need for coordination increase as well. This article examines the growth of such non-direct work and describes the development of tools to characterize conflict and coordination costs in Wikipedia. The results may inform the design of new collaborative knowledge systems.
Article
Full-text available
Wikipedia has been a resounding success story as a collaborative system with a low cost of online participation. However, it is an open question whether the success of Wikipedia results from a “wisdom of crowds” type of effect in which a large number of people each make a small number of edits, or whether it is driven by a core group of “elite” users who do the lion’s share of the work. In this study we examined how the influence of “elite” vs. “common” users changed over time in Wikipedia. The results suggest that although Wikipedia was driven by the influence of “elite” users early on, more recently there has been a dramatic shift in workload to the “common” user. We also show the same shift in del.icio.us, a very different type of social collaborative knowledge system. We discuss how these results mirror the dynamics found in more traditional social collectives, and how they can influence the design of new collaborative knowledge systems.
Conference Paper
Full-text available
Effective information quality analysis needs powerful yet easy ways to obtain metrics. The English version of Wikipedia provides an extremely interesting yet challenging case for the study of Information Quality dynamics at both macro and micro levels. We propose seven IQ metrics which can be evaluated automatically and test the set on a representative sample of Wikipedia content. The methodology of the metrics construction and the results of tests, along with a number of statistical characterizations of Wikipedia articles, their content construction, process metadata and social context are reported.
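Metrics of this kind can be computed automatically from raw wikitext. The proxies below (word count, reference density, internal-link density) are illustrative stand-ins chosen for this sketch; they are not the seven metrics proposed in the paper.

```python
import re

def iq_proxies(wikitext):
    """Toy, automatically computable quality proxies for a wiki article.
    Illustrative only: real IQ metric sets are richer than this."""
    words = re.findall(r"[A-Za-z']+", wikitext)
    refs = wikitext.count("<ref")    # opening <ref> tags as citation proxy
    links = wikitext.count("[[")     # internal wiki links
    n = max(len(words), 1)           # avoid division by zero on empty input
    return {
        "word_count": len(words),
        "refs_per_100_words": 100 * refs / n,
        "internal_links_per_100_words": 100 * links / n,
    }

sample = "Wikipedia is a [[wiki]]-based encyclopedia.<ref>Giles 2005</ref>"
m = iq_proxies(sample)
```

Evaluating such proxies over a large article sample is what makes macro-level quality comparisons tractable.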
Article
Full-text available
This survey characterizes an emerging research area, sometimes called coordination theory, that focuses on the interdisciplinary study of coordination. Research in this area uses and extends ideas about coordination from disciplines such as computer science, organization theory, operations research, economics, linguistics, and psychology. A key insight of the framework presented here is that coordination can be seen as the process of managing dependencies among activities. Further progress, therefore, should be possible by characterizing different kinds of dependencies and identifying the coordination processes that can be used to manage them. A variety of processes are analyzed from this perspective, and commonalities across disciplines are identified. Processes analyzed include those for managing shared resources, producer/consumer relationships, simultaneity constraints, and task/subtask dependencies. Section 3 summarizes ways of applying a coordination perspective in three different domains: (1) understanding the effects of information technology on human organizations and markets, (2) designing cooperative work tools, and (3) designing distributed and parallel computer systems. In the final section, elements of a research agenda in this new area are briefly outlined.
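The dependency-management view of coordination lends itself to a concrete scheduling sketch: model task/subtask dependencies as a directed acyclic graph, then compute which tasks are independent (cheap to parallelize across more contributors) and which are serialized behind others. The task names below are hypothetical.

```python
from graphlib import TopologicalSorter

# Hypothetical task/subtask dependencies for one article rewrite.
deps = {
    "outline": set(),
    "draft_history_section": {"outline"},
    "draft_reception_section": {"outline"},
    "merge_sections": {"draft_history_section", "draft_reception_section"},
    "copyedit": {"merge_sections"},
}

ts = TopologicalSorter(deps)
ts.prepare()
waves = []
while ts.is_active():
    ready = list(ts.get_ready())   # tasks whose dependencies are all done
    waves.append(sorted(ready))    # these can proceed in parallel
    ts.done(*ready)
```

Here the two drafting tasks form one parallel wave, while merging and copyediting are serialized: adding editors helps the middle wave but not the serialized tail, mirroring the coordination-cost argument of the paper under discussion.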
Article
Full-text available
This paper presents two case studies of the development and maintenance of major OSS projects, i.e., the Apache server and Mozilla. We address key questions about their development processes, and about the software that is the result of those processes. We first studied the Apache project, and based on our results, framed a number of hypotheses that we conjectured would be true generally of open source developments. In our second study, which we began after the analyses and hypothesis formation were completed, we examine comparable data from the Mozilla project. The data provide support for several of our original hypotheses
Article
Full-text available
Since its inception six years ago, the online encyclopedia Wikipedia has accumulated 6.40 million articles and 250 million edits, contributed in a predominantly undirected and haphazard fashion by 5.77 million unvetted volunteers. Despite the apparent lack of order, the 50 million edits by 4.8 million contributors to the 1.5 million articles in the English-language Wikipedia follow certain strong overall regularities. We show that the accretion of edits to an article is described by a simple stochastic mechanism, resulting in a heavy tail of highly visible articles with a large number of edits. We also demonstrate a crucial correlation between article quality and number of edits, which validates Wikipedia as a successful collaborative effort.
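The claim that a simple stochastic accretion mechanism produces a heavy tail of highly edited articles can be illustrated with a minimal rich-get-richer simulation. This is a sketch under assumed parameters, not the authors' fitted model:

```python
import random

def simulate_edit_accretion(n_articles=200, n_edits=5000, seed=42):
    """Each new edit lands on an article with probability proportional
    to its current edit count, a rich-get-richer dynamic that yields a
    heavy-tailed distribution of edits per article."""
    rng = random.Random(seed)
    counts = [1] * n_articles  # seed every article with one edit
    for _ in range(n_edits):
        # weighted choice proportional to current edit counts
        i = rng.choices(range(n_articles), weights=counts, k=1)[0]
        counts[i] += 1
    return counts

counts = sorted(simulate_edit_accretion(), reverse=True)
# Heavy tail: the top 10% of articles hold a disproportionate share of edits.
top_decile_share = sum(counts[:20]) / sum(counts)
```

Under a uniform process the top 10% would hold about 10% of edits; here they hold considerably more, which is the qualitative signature the abstract describes.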
Article
I anatomize a successful open-source project, fetchmail, that was run as a deliberate test of some surprising theories about software engineering suggested by the history of Linux. I discuss these theories in terms of two fundamentally different development styles, the "cathedral" model of most of the commercial world versus the "bazaar" model of the Linux world. I show that these models derive from opposing assumptions about the nature of the software-debugging task. I then make a sustained argument from the Linux experience for the proposition that "Given enough eyeballs, all bugs are shallow", suggest productive analogies with other self-correcting systems of selfish agents, and conclude with some exploration of the implications of this insight for the future of software.
Article
The book, The Mythical Man-Month, Addison-Wesley, 1975 (excerpted in Datamation, December 1974), gathers some of the published data about software engineering and mixes it with the assertion of a lot of personal opinions. In this presentation, the author will list some of the assertions and invite dispute or support from the audience. This is intended as a public discussion of the published book, not a regular paper.
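Brooks's argument, cited in the surrounding discussion as Brooks's Law, is commonly summarized by the quadratic growth of pairwise communication channels on a team; a minimal sketch of that count:

```python
def communication_channels(n):
    """Number of pairwise communication links among n team members.
    The quadratic growth of this count is one common reading of why
    adding people to a highly interdependent (late) project can slow
    it down rather than speed it up."""
    return n * (n - 1) // 2

# Doubling a team from 5 to 10 more than quadruples the channels: 10 -> 45.
```

A team of 5 has 10 channels; a team of 10 has 45, so the coordination burden grows much faster than the added labor.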
Article
This paper contributes to the research on the relationship of subunit work characteristics to subunit structure and performance. Information-processing ideas are used to develop a set of hypotheses to test a contingency approach to subunit structure directly; whether high-performing subunits with different information-processing requirements have systematically different degrees of communication structure. Results indicate that task characteristics, environment, and interdependence each have an important impact on subunit communication structure, and that these effects are accentuated for high-performing subunits. This research supports the idea that there is no one best way of structuring subunit communication. Rather, for high-performing subunits, communication structure is contingent on the subunit's work.
Article
This article presents a quantitative review of 93 studies examining relationships between team design features and team performance. Aggregated measures of individual ability and disposition correlate positively with team performance. Team member heterogeneity and performance correlate near zero, but the effect varies somewhat by type of team. Project and management teams have slightly higher performance when they include more members. Team-level task meaningfulness exhibits a modest but inconsistent relationship with performance. Increased autonomy and intrateam coordination correspond with higher performance, but the effect varies depending on task type. Leadership, particularly transformational and empowering leadership, improves team performance.
Article
Wikipedia's brilliance and curse is that any user can edit any of the encyclopedia entries. We introduce the notion of the impact of an edit, measured by the number of times the edited version is viewed. Using several datasets, including recent logs of all article views, we show that an overwhelming majority of the viewed words were written by frequent editors and that this majority is increasing. Similarly, using the same impact measure, we show that the probability of a typical article view being damaged is small but increasing, and we present empirically grounded classes of damage. Finally, we make policy recommendations for Wikipedia and other wikis in light of these findings.
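The impact measure described here, views attributed to the editor whose version was viewed, can be sketched as a small aggregation over a view log; the data layout and names below are hypothetical:

```python
from collections import Counter

def views_by_editor(version_views, version_editor):
    """Attribute each viewed version's view count to the editor who
    produced that version; per-editor totals approximate the 'impact'
    notion in the abstract (field names are hypothetical)."""
    impact = Counter()
    for version_id, views in version_views.items():
        impact[version_editor[version_id]] += views
    return impact

# Hypothetical log: views per article version, and who authored each version.
log = {"v1": 120, "v2": 30, "v3": 850}
editors = {"v1": "alice", "v2": "bob", "v3": "alice"}
impact = views_by_editor(log, editors)  # alice: 970, bob: 30
```

Summing such totals across editors grouped by edit frequency is how one could check the abstract's claim that frequent editors account for most viewed words.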
Article
This paper proposes a model of how coordinating mechanisms work, and tests it in the context of patient care. Consistent with organization design theory, the performance effects of boundary spanners and team meetings were mediated by relational coordination, a communication- and relationship-intensive form of coordination. Contrary to organization design theory, however, the performance effects of routines were also mediated by relational coordination. Rather than serving as a replacement for interactions, as anticipated by organization design theory, routines work by enhancing interactions among participants. Likewise, all three coordinating mechanisms, including routines, were found to be increasingly effective under conditions of uncertainty.
Conference Paper
This paper presents the user-centred iterative design of software that supports collaborative writing. The design grew out of a study of how people write together that included a survey of writers and a laboratory study of writing teams linked by a variety of communications media. The resulting taxonomy of collaborative writing is summarized in the paper, followed by a list of design requirements for collaborative writing software suggested by the work. The paper describes two designs of the software. The first prototype supports synchronous writing and editing from workstations linked over local area and wide area networks. The second prototype also supports brainstorming, outlining, and document review, as well as asynchronous work. Lessons learned from the user testing and actual usage of the two systems are also presented.
Article
This paper was the first initiative to try to define Web 2.0 and understand its implications for the next generation of software, looking at both design patterns and business models. Web 2.0 is the network as platform, spanning all connected devices; Web 2.0 applications are those that make the most of the intrinsic advantages of that platform: delivering software as a continually-updated service that gets better the more people use it, consuming and remixing data from multiple sources, including individual users, while providing their own data and services in a form that allows remixing by others, creating network effects through an "architecture of participation," and going beyond the page metaphor of Web 1.0 to deliver rich user experiences.
Article
The relationships among input uncertainty, means of coordination, and criteria of the organizational effectiveness of hospital emergency units were explored using data from 30 emergency units in six midwestern states. Input uncertainty generally was not associated with the use of various means of coordination. However, input uncertainty affected relationships between the means of coordination and the effectiveness criteria. Specifically, programmed means of coordination made a greater contribution to organizational effectiveness under conditions of low uncertainty than under conditions of high uncertainty. Conversely, nonprogrammed means of coordination made a greater contribution to organizational effectiveness when uncertainty was high than when it was low. Findings were interpreted and suggestions were advanced as to how emergency units might best solve their coordination problems under varying conditions of uncertainty.
Article
Jimmy Wales' Wikipedia comes close to Britannica in terms of the accuracy of its science entries, a Nature investigation finds.
Article
The authors focus on team performance in complex systems. Representative empirical literature is reviewed and models of team performance are discussed. The role of mental models in team performance is considered, and several propositions are developed that focus on mental models as mechanisms for forming expectations and explanations of team behaviors. The implications of these propositions for team performance and training are elaborated, particularly in terms of likely performance problems if mechanisms for forming expectations and explanations are deficient. The results of two initial studies that support the plausibility of the propositions are reported.
Article
The paper explains why open source software is an instance of a potentially broader phenomenon. Specifically, I suggest that nonproprietary peer-production of information and cultural materials will likely be a ubiquitous phenomenon in a pervasively networked society. I describe a number of such enterprises, at various stages of the information production value chain. These enterprises suggest that incentives to engage in nonproprietary peer production are trivial as long as enough contributors can be organized to contribute. This implies that the limit on the reach of peer production efforts is the modularity, granularity, and cost of integration of a good produced, not its total cost. I also suggest reasons to think that peer-production can have systematic advantages over both property-based markets and corporate managerial hierarchies as a method of organizing information and cultural production in a networked environment, because it is a better mechanism for clearing information about human capital available to work on existing information inputs to produce new outputs, and because it permits larger sets of agents to use larger sets of resources where there are increasing returns to the scale of both the set of agents and the set of resources available for work on projects. As capital costs and communications costs decrease in importance as factors of information production, the relative advantage of peer production in clearing human capital becomes more salient.
Kozlow, M., and Bellamy, P. Experimental study on the impact of the 6+1 trait writing model on student achievement in writing. Association for Supervision and Curriculum Development Conference.
Surowiecki, J. The wisdom of crowds.
Raymond, E. The cathedral and the bazaar.