Article

Experimental Study of Inequality and Unpredictability in an Artificial Cultural Market


Abstract

Hit songs, books, and movies are many times more successful than average, suggesting that “the best” alternatives are qualitatively different from “the rest”; yet experts routinely fail to predict which products will succeed. We investigated this paradox experimentally, by creating an artificial “music market” in which 14,341 participants downloaded previously unknown songs either with or without knowledge of previous participants' choices. Increasing the strength of social influence increased both inequality and unpredictability of success. Success was also only partly determined by quality: The best songs rarely did poorly, and the worst rarely did well, but any other result was possible.
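The abstract's two headline findings, that social influence raises both inequality and unpredictability of success, can be illustrated with a minimal cumulative-advantage simulation. This is our own sketch, not the authors' implementation: the number of songs matches the experiment (48), but the listener count, the `influence` parameter, and the pseudo-count initialization are illustrative assumptions.

```python
import random

def simulate_market(n_songs=48, n_listeners=2000, influence=0.0, seed=0):
    """Toy cumulative-advantage market: each listener downloads a song
    chosen independently at random, or, with probability `influence`,
    in proportion to the download counts accumulated so far."""
    rng = random.Random(seed)
    counts = [1] * n_songs  # start at 1 so no song is stuck at zero weight
    for _ in range(n_listeners):
        if rng.random() < influence:
            pick = rng.choices(range(n_songs), weights=counts)[0]
        else:
            pick = rng.randrange(n_songs)
        counts[pick] += 1
    return counts

def gini(xs):
    """Gini coefficient of non-negative counts (0 = equal shares)."""
    xs = sorted(xs)
    n, total = len(xs), sum(xs)
    return sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs)) / (n * total)

# inequality: download shares are far more unequal with social influence
independent = gini(simulate_market(influence=0.0))
social = gini(simulate_market(influence=0.9))

# unpredictability: with identical songs, different runs crown different winners
winners = {
    max(range(48), key=simulate_market(influence=0.9, seed=s).__getitem__)
    for s in range(10)
}
```

Even though every song is identical in this toy model, the high-influence runs concentrate downloads on a few songs and disagree across realizations about which songs those are, which is the qualitative pattern the experiment found.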


... The Matthew effect also applies to the success of products, which is often accurately predicted by various dimensions of the products' early success, e.g., for online content [48], books [1], scientific papers and patents [49][50][51][52]. In addition to the empirical support for the link between early and subsequent success in observational studies, researchers have devised laboratory and field experiments that have demonstrated the causal nature of this link [53][54][55]. ...
... Overall, these findings substantially inform the ensuing debates about whether great success truly reflects great potential [2,22,37,38,53,55,66,76,77]. Indeed, one of the most consistent findings in social and behavioral sciences is that past success is a strong predictor of future success (See Section Success breeds success). ...
... On the other hand, definitions based on the outcomes of an individual's actions depend on evaluations of success and failure, which do not always reflect societal values and can potentially perpetuate inequalities [206]. Some studies have already started identifying the conditions under which we see a decoupling of merit and success (e.g., through agent-based models and experiments [53,66]), suggesting ways to prevent those conditions. These studies, however, rely on simplified scenarios in which a single dimension of merit is clearly identifiable, which is rarely the case in real social systems. ...
Preprint
Full-text available
Understanding the collective dynamics behind the success of ideas, products, behaviors, and social actors is critical for decision-making across diverse contexts, including hiring, funding, career choices, and the design of interventions for social change. Methodological advances and the increasing availability of big data now allow for a broader and deeper understanding of the key facets of success. Recent studies unveil regularities beneath the collective dynamics of success, pinpoint underlying mechanisms, and even enable predictions of success across diverse domains, including science, technology, business, and the arts. However, this research also uncovers troubling biases that challenge meritocratic views of success. This review synthesizes the growing, cross-disciplinary literature on the collective dynamics behind success and calls for further research on cultural influences, the origins of inequalities, the role of algorithms in perpetuating them, and experimental methods to further probe causal mechanisms behind success. Ultimately, these efforts may help to better align success with desired societal values.
... Once infused with social dynamics, these simulations reveal large variations both within and across realizations (De Vany, 2004; Richard, 1998; Watts, 2002), even when the underlying products are identical in intrinsic quality (Adler, 1985). As noted by Salganik et al. (2006), however, stochastic models have limitations of their own. Empirical tests of their predictions demand comparisons across multiple realizations of a stochastic process. ...
... To fill this gap, Salganik et al. (2006) launched the largest experiment to date on the psychology of cultural products adoption. After assembling 48 unknown songs from unknown bands, the authors randomly assigned >14,000 subjects to one of two conditions. ...
... Three lessons may be derived from Salganik et al.'s (2006) study. First, computing Gini coefficients, the authors found that social influence increases inequality (i.e., popular songs become more popular and unpopular songs more unpopular). ...
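The Gini coefficient mentioned in the snippet above has a simple closed form for sorted data; the sketch below uses the mean-absolute-difference formulation, with made-up download counts as input.

```python
def gini(downloads):
    """Gini coefficient of a list of non-negative counts:
    0 means perfectly equal shares, values near 1 mean winner-take-all."""
    xs = sorted(downloads)
    n, total = len(xs), sum(xs)
    # closed form for sorted data: G = sum_i (2i - n - 1) * x_i / (n * total)
    return sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs)) / (n * total)

print(gini([10, 10, 10, 10]))  # 0.0 -- every song downloaded equally
print(gini([1, 1, 1, 37]))     # 0.675 -- one hit dominates the market
```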
Article
Full-text available
What makes cultural products such as edutainment (i.e., online talks) successful versus not? Asked differently, which characteristics make certain addresses more (vs. less) appealing? Across 12 field and lab studies, we explore when, why, and for whom the information load carried in TED talks causes them to gain (vs. lose) popularity. First and foremost, we uncover a negative effect whereby increases in the number of topics broached in a talk (i.e., information load) hurt viewer adoption. The cause? Processing disfluency. As information load soars, content becomes more difficult to process, which in turn reduces interest. Probing process further, we show this effect fades among audience members with greater need for cognition, a personality trait marking a penchant for deep and broad information processing. Similarly, the effect fades among edutainment viewers favoring education goals (i.e., cognitive enrichment) whereas it amplifies among those favoring entertainment (i.e., hedonic pleasure). Our investigation also documents the counterintuitiveness of our findings (i.e., how individuals mispredict which talks they would actually [dis]like). From these results, we derive theoretical insights for processing fluency research and the psychology of cultural products adoption (i.e., we weigh in on when, why, and for whom fluency has favorable vs. unfavorable downstream effects). We also derive prescriptive insights for (a) players of the edutainment industry whose very business hinges on curating appealing content (e.g., TED, Talks@Google, The Moth, Big Think, Spotify) and (b) communicators of all creeds wishing to broaden their reach and appeal (e.g., professors, scientists, politicians, journalists, bloggers, podcasters, content editors, online community managers).
... The usefulness of such rankings is predicated on the wisdom of the crowd [33]: high-quality choices will gain early popularity, and in turn become more likely to be selected because they are more visible. Furthermore, knowledge of what is popular can be construed as a form of social influence; an individual's behavior may be guided by choices of peers or neighbors [23,19,20,18,29,2,6]. These mechanisms imply that, in a system where users have access to popularity or engagement cues (like ratings, number of views, likes, and so on), high-quality content will "bubble up" and allow for a more cost-efficient exploration of the space of choices. ...
... Cultural markets, such as social media, the entertainment industry, and the world of fashion, are known for their continuous rate of innovation, and inherent trade-offs are common in social learning environments [28]. Salganik et al. created a music-sharing platform to determine under which conditions one can predict popular musical tracks [29]. The experiments showed that in the absence of popularity cues, a reliable proxy for quality could be determined from aggregate consumption patterns. ...
... The premature convergence to sub-optimal ranking caused by excessive popularity bias is also reflected in the increased variance of the average quality across runs of the model for larger values of both α and β. This is consistent with the increase in variance of outcomes observed in other studies [29,16]. ...
Preprint
Algorithms that favor popular items are used to help us select among many choices, from engaging articles on a social media news feed to songs and books that others have purchased, and from top-ranked search engine results to highly-cited scientific papers. The goal of these algorithms is to identify high-quality items such as reliable news, beautiful movies, prestigious information sources, and important discoveries; in short, high-quality content should rank at the top. Prior work has shown that choosing what is popular may amplify random fluctuations and ultimately lead to sub-optimal rankings. Nonetheless, it is often assumed that recommending what is popular will help high-quality content "bubble up" in practice. Here we identify the conditions in which popularity may be a viable proxy for quality content by studying a simple model of cultural market endowed with an intrinsic notion of quality. A parameter representing the cognitive cost of exploration controls the critical trade-off between quality and popularity. We find a regime of intermediate exploration cost where an optimal balance exists, such that choosing what is popular actually promotes high-quality items to the top. Outside of these limits, however, popularity bias is more likely to hinder quality. These findings clarify the effects of algorithmic popularity bias on quality outcomes, and may inform the design of more principled mechanisms for techno-social cultural markets.
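The quality-versus-popularity trade-off this abstract describes can be sketched with a toy model. This is our own construction, not the paper's exact model: the `beta` (popularity-bias) and `k` (exploration budget) parameters are illustrative stand-ins for the quantities the paper studies.

```python
import random

def toy_market(qualities, n_agents=3000, beta=0.4, k=3, seed=1):
    """With probability `beta` an agent copies an item in proportion to its
    current popularity; otherwise it "explores": it samples k random items
    (a crude stand-in for a bounded exploration budget) and adopts the
    best-quality one. Returns final popularity counts."""
    rng = random.Random(seed)
    n = len(qualities)
    counts = [1] * n
    for _ in range(n_agents):
        if rng.random() < beta:
            pick = rng.choices(range(n), weights=counts)[0]
        else:
            pick = max(rng.sample(range(n), k), key=lambda i: qualities[i])
        counts[pick] += 1
    return counts

qualities = [i / 19 for i in range(20)]   # item 19 has the highest quality
counts = toy_market(qualities)
top = max(range(20), key=counts.__getitem__)
# with a moderate popularity bias, the most-adopted item is high quality
```

Raising `beta` toward 1 makes the dynamics approach the pure cumulative-advantage regime, where early random fluctuations, rather than quality, decide the winner.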
... Recent evidence, however, has shown that the collective decision of the crowd is not foolproof. One known limitation, for example, is social influence, which biases individual judgments and degrades crowd performance [22], obscuring the underlying quality of choices [29]. We try to answer whether crowd wisdom limitations affect a common crowdsourcing application, question answering boards. ...
... Qualitatively, voters and askers in unaccepted questions had similar behavior to those in accepted questions, but quantitatively, we found variations on regression coefficients, potentially suggesting voters behave differently in this hold-out set. We also consider an answer's rank in the list of answers (what we refer to as web page order) and score, because these variables affect how much attention the answer receives [29,21,35]. The other attributes were also examined as additional factors that could affect how answers are voted or accepted. ...
... Psychologists and behavioral scientists have identified a wide array of cognitive heuristics, which introduce predictable biases into human behavior. Social influence, aka the "bandwagon effect," is one such heuristic: people pay attention to the choices of others [29]. We find, however, that this effect is not very significant on Stack Exchange. ...
Preprint
Crowds can often make better decisions than individuals or small groups of experts by leveraging their ability to aggregate diverse information. Question answering sites, such as Stack Exchange, rely on the "wisdom of crowds" effect to identify the best answers to questions asked by users. We analyze data from 250 communities on the Stack Exchange network to pinpoint factors affecting which answers are chosen as the best answers. Our results suggest that, rather than evaluate all available answers to a question, users rely on simple cognitive heuristics to choose an answer to vote for or accept. These cognitive heuristics are linked to an answer's salience, such as the order in which it is listed and how much screen space it occupies. While askers appear to depend more on heuristics, compared to voting users, when choosing an answer to accept as the most helpful one, voters use acceptance itself as a heuristic: they are more likely to choose the answer after it is accepted than before that very same answer was accepted. These heuristics become more important in explaining and predicting behavior as the number of available answers increases. Our findings suggest that crowd judgments may become less reliable as the number of answers grows.
... 23,24 Some research has understood social influence as a social preference trend or social pressure formed by the choices of other individuals, which is known as early social influence. A series of studies have explained how individuals can be strongly influenced by others' choices, [25][26][27] pointing out that this external early social influence can lead to the optimal option. This study mainly focuses on the early social influence formed by the unconscious social pressure generated collectively by other individuals. ...
... Research on early social influence mainly focuses on the self-reinforcing dynamic that forms the cumulative advantage of dominant behaviors, which leads to the "Matthew effect." [25][26][27]36,44,45 Under the impact of early social influence, players may not only learn the behaviors of opinion leaders, but also imitate the masses. There is a strong connection between collective action and early social influence in the classic theories, and most existing studies, inclining toward the viewpoint of irrationality, suggest that the number of participants plays an important role in forming a collective action. ...
... When early social influence is considered, the basic explanation is that early social influence brings inequality and path dependence, and thus forms collective aberration, as can be seen from the Music Lab experiments. 25 Van de Rijt found that the social system can be self-correcting, under which the influenced world tends to evolve toward the intrinsic status of the independent world, while early social influence acts as a structural force that impedes this self-correction. 36 He et al. 3 have also shown that a higher level of influence leads to a higher structural force. ...
Article
Full-text available
The logic of collective action has laid a foundation for research on public choice, and the success of collective action has been a long-standing topic of discussion when the free-riding mechanism is considered in the dynamics. This study proposes a framework that provides a novel dimension for explaining the logic of collective action. Under the framework, the accumulation of early social influence, conformity, and the pressure of relationship updating in small groups is discussed. The experimental results show that the accumulation of early social influence indirectly promotes participation in collective action; conformity is conducive to stimulating collective action, but relies on the accumulation of early social influence; and the pressure of relationship updating plays the small-group role, which promotes participation in collective actions. All these effects help form a cascade of cooperators and prevent the coexistence of participants and non-participants in collective action.
... Together, inadequate problem specification and inconsistent evaluation criteria have obscured a question of fundamental importance: To the extent that predictions are less accurate than desired, is it simply that the existing combination of methods and data is insufficient; or is it that the phenomenon itself is to some extent inherently unpredictable [46]? Although both explanations may produce the same result in a given context, their implications are qualitatively different: the former implies that with sufficient ingenuity and/or effort, failures of prediction can in principle always be corrected, whereas the latter implies a performance limit beyond which even theoretically perfect predictions cannot progress [57]. ...
... Although both explanations may produce the same result in a given context, their implications are qualitatively different: the former implies that with sufficient ingenuity and/or effort, failures of prediction can in principle always be corrected, whereas the latter implies a performance limit beyond which even theoretically perfect predictions cannot progress [57]. In other words, if socioeconomic predictions are more like predicting a die roll than the return time of a comet, then even a "perfect" prediction would yield only an expected probability of success, leaving a potentially large residual error with respect to individual outcomes that could not be reduced with any amount of additional information or improved modeling [46,27]. ...
... Bakshy et al. reached three main conclusions that are germane to the current discussion: first, that by far the most informative feature was the past success of the seed user; second, that after factoring in past success neither additional seed features nor content features added appreciably to model performance; and third, that even the best performing model was able to explain only about a third of the observed variance (R² ≈ 0.34). This last result is notable because the model itself was extremely well calibrated, i.e., on average predicted and actual cascade sizes were highly correlated (R² ≈ 0.98), implying that errors in predicting individual outcomes derive at least in part from intrinsic stochasticity in the underlying generative process [46,27], and hence are to some extent ineradicable. ...
Preprint
How predictable is success in complex social systems? In spite of a recent profusion of prediction studies that exploit online social and information network data, this question remains unanswered, in part because it has not been adequately specified. In this paper we attempt to clarify the question by presenting a simple stylized model of success that attributes prediction error to one of two generic sources: insufficiency of available data and/or models on the one hand; and inherent unpredictability of complex social systems on the other. We then use this model to motivate an illustrative empirical study of information cascade size prediction on Twitter. Despite an unprecedented volume of information about users, content, and past performance, our best performing models can explain less than half of the variance in cascade sizes. In turn, this result suggests that even with unlimited data predictive performance would be bounded well below deterministic accuracy. Finally, we explore this potential bound theoretically using simulations of a diffusion process on a random scale free network similar to Twitter. We show that although higher predictive power is possible in theory, such performance requires a homogeneous system and perfect ex-ante knowledge of it: even a small degree of uncertainty in estimating product quality or slight variation in quality across products leads to substantially more restrictive bounds on predictability. We conclude that realistic bounds on predictive accuracy are not dissimilar from those we have obtained empirically, and that such bounds for other complex social systems for which data is more difficult to obtain are likely even lower.
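The distinction in the Bakshy et al. snippet, a model that is almost perfectly calibrated on averages yet explains only a fraction of individual-outcome variance, is easy to reproduce in simulation. The numbers below (noise scale, sample size, bin count) are our own illustrative choices, not values from the paper.

```python
import random
import statistics

def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = statistics.fmean(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

rng = random.Random(42)
# outcome = systematic "quality" part + irreducible noise
quality = [rng.gauss(0, 1) for _ in range(20000)]
outcome = [q + rng.gauss(0, 1.5) for q in quality]

# a "perfect" model that knows the systematic part exactly...
individual = r_squared(outcome, quality)   # ≈ 1 / (1 + 1.5**2) ≈ 0.31

# ...is nonetheless almost perfectly calibrated on averages: bin cases
# by the prediction and compare bin-average predictions to outcomes
pairs = sorted(zip(quality, outcome))
bins = [pairs[i * 1000:(i + 1) * 1000] for i in range(20)]
bin_pred = [statistics.fmean(q for q, _ in b) for b in bins]
bin_act = [statistics.fmean(o for _, o in b) for b in bins]
calibration = r_squared(bin_act, bin_pred)  # close to 1
```

No amount of extra data improves `individual` here, because the residual error comes from intrinsic stochasticity rather than from an inadequate model, which is exactly the performance bound the preprint argues for.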
... How and how much will the habit of the typical reader change? A large body of work has investigated questions of this kind at the individual, user level [8,25,26,35]. For instance, a nice experiment described in [25] suggests that even a simple type of feedback, such as providing the ranking based on the number of downloads, may significantly boost market shares. ...
... A large body of work has investigated questions of this kind at the individual, user level [8,25,26,35]. For instance, a nice experiment described in [25] suggests that even a simple type of feedback, such as providing the ranking based on the number of downloads, may significantly boost market shares. Studies such as these provide precious insights, but it is difficult to derive from them quantitatively accurate predictions about markets in the long run. ...
... Several studies indicate that popularity feedback (e.g. number of downloads, user ratings, number of views, etc) can be a powerful determinant of online behaviour [25,5,35]. The second ingredient is, for lack of a better terminology, the "web of kinship" among users. ...
Preprint
In this paper we introduce a mathematical model that captures some of the salient features of recommender systems that are based on popularity and that try to exploit social ties among the users. We show that, under very general conditions, the market always converges to a steady state, for which we are able to give an explicit form. Thanks to this we can tell rather precisely how much a market is altered by a recommendation system, and determine the power of users to influence others. Our theoretical results are complemented by experiments with real world social networks showing that social graphs prevent large market distortions in spite of the presence of highly influential users.
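The idea of a popularity-feedback market converging to an explicit steady state can be sketched with a toy fixed-point iteration. This is our own construction, not the paper's model: the `gamma` feedback exponent and the two-item preference vector are illustrative assumptions.

```python
def steady_state(prefs, gamma=0.8, iters=500):
    """Toy popularity-feedback market: each round, the share of item i is
    proportional to prefs[i] * shares[i]**gamma. For gamma < 1 the update
    is a contraction in log-space and converges to the unique fixed point
    shares[i] ∝ prefs[i] ** (1 / (1 - gamma))."""
    shares = [1 / len(prefs)] * len(prefs)
    for _ in range(iters):
        raw = [p * s ** gamma for p, s in zip(prefs, shares)]
        total = sum(raw)
        shares = [r / total for r in raw]
    return shares

shares = steady_state([0.6, 0.4])
# popularity feedback amplifies an intrinsic 60/40 split to roughly 88/12
```

The closed-form fixed point shows how a recommendation system can distort a market: with `gamma = 0.8`, preference ratios are raised to the fifth power at the steady state.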
... We expect popularity and majority opinion to be a contingency factor in the above-alluded relationships. Previous research shows that consumers infer the correctness of opinions from their popularity (Salganik et al., 2006). According to the affective aspect of EASI theory (Van Kleef, 2017), people tend to conform to normative behavior, aligning with others' positive expectations. ...
... A high eWOM volume indicates popularity and greater interest (Salganik et al., 2006; Duan et al., 2008), enhancing the social influence of positive emotional content. This notion is based on the following reasons. ...
Article
Movies evoke strong human emotions that resonate in the content of online reviews, known as electronic word of mouth (eWOM). Existing literature suggests that positive content enhances, and negative content reduces, movie sales, with negative content having a stronger effect. We demonstrate that there is a limit to the negative emotional content's impact on sales, beyond which it boosts sales. We extract positive and negative emotional content from reviews of 80 Hollywood movies (N = 23,046) and empirically examine their influence on movie sales. Results indicate that for high eWOM volume, the effect of positive content increases progressively, while negative content has diminishing returns. Grounded in 'Emotions as Social Information' (EASI) theory, this study provides marketers and researchers with a new understanding of how emotional content in eWOM influences moviegoers' choice of movie and sales. Our typology of Flop, Hype, Star, and Hit helps predict movie sales and design targeted strategies.
... To show how impactful algorithmic ranking can be, Joachims et al. (2017) examined users' clickthrough behavior on Google's result page and found that participants' trust in Google's retrieval/ranking function led them to click on highly ranked links regardless of their quality or relevance to the query. Additionally, Salganik, Dodds, and Watts (2006) found that ranking songs in music markets by total downloads produced more unpredictability and inequality compared to groups who had songs ranked randomly. We extend this line of work by examining the influence of algorithmic ranking on social media trending feeds. ...
... Lastly, because our findings were based on observational data, future work will have to examine more user-centered effects of algorithmic ranking, similar to Salganik, Dodds, and Watts (2006) and Joachims et al. (2017), but on social media. Specifically, future work can structure controlled experiments that test the impacts of algorithmic rank on individual users and their perceptions of highly ranked content. ...
Preprint
Platforms are increasingly relying on algorithms to curate the content within users' social media feeds. However, the growing prominence of proprietary, algorithmically curated feeds has concealed what factors influence the presentation of content on social media feeds and how that presentation affects user behavior. This lack of transparency can be detrimental to users, from reducing users' agency over their content consumption to the propagation of misinformation and toxic content. To uncover details about how these feeds operate and influence user behavior, we conduct an empirical audit of Reddit's algorithmically curated trending feed called r/popular. Using 10K r/popular posts collected by taking snapshots of the feed over 11 months, we find that the total number of comments and recent activity (commenting and voting) helped posts remain on r/popular longer and climb the feed. Using over 1.5M snapshots, we examine how differing ranks on r/popular correlated with engagement. More specifically, we find that posts below rank 80 showed a sharp decline in activity compared to posts above, and that posts at the top of r/popular had a higher proportion of undesired comments than those lower down. Our findings highlight that the order in which content is ranked can influence the levels and types of user engagement within algorithmically curated feeds. This relationship between algorithmic rank and engagement highlights the extent to which algorithms employed by social media platforms essentially determine which content is prioritized and which is not. We conclude by discussing how content creators, consumers, and moderators on social media platforms can benefit from empirical audits aimed at improving transparency in algorithmically curated feeds.
... The Matthew effect also applies to the success of products, which is often accurately predicted by various dimensions of the products' early success, e.g., for online content 48, books 1, scientific papers and patents [49][50][51][52]. In addition to the empirical support for the link between early and subsequent success in observational studies, researchers have devised laboratory and field experiments that have demonstrated the causal nature of this link [53][54][55]. ...
... Overall, these findings substantially inform the ensuing debates about whether great success truly reflects great potential 2,22,37,38,53,55,63,73,74. Indeed, one of the most consistent findings in social and behavioral sciences is that past success is a strong predictor of future success (see above Section: "Success breeds success"). ...
Article
Full-text available
Understanding the collective dynamics behind the success of ideas, products, behaviors, and social actors is critical for decision-making across diverse contexts, including hiring, funding, career choices, and the design of interventions for social change. Methodological advances and the increasing availability of big data now allow for a broader and deeper understanding of the key facets of success. Recent studies unveil regularities beneath the collective dynamics of success, pinpoint underlying mechanisms, and even enable predictions of success across diverse domains, including science, technology, business, and the arts. However, this research also uncovers troubling biases that challenge meritocratic views of success. This review synthesizes the growing, cross-disciplinary literature on the collective dynamics behind success and calls for further research on cultural influences, the origins of inequalities, the role of algorithms in perpetuating them, and experimental methods to further probe causal mechanisms behind success. Ultimately, these efforts may help to better align success with desired societal values.
... However, a setting in which social influence occurs may bring a high degree of unpredictability. Salganik et al. [37] claim that with social influence, the popularity of an item is not an aggregate of individual preferences and therefore cannot be predicted even with perfect information: "there are inherent limits on the predictability of outcomes, irrespective of how much skill or information one has" [37]. These methods offer useful insights, but are not directly relevant to the problem focus of this study. ...
Preprint
Predicting the popularity of online content has attracted much attention in the past few years. In news rooms, for instance, journalists and editors are keen to know, as soon as possible, the articles that will bring the most traffic into their website. The relevant literature includes a number of approaches and algorithms to perform this forecasting. Most of the proposed methods require monitoring the popularity of content during some time after it is posted, before making any longer-term prediction. In this paper, we propose a new approach for predicting the popularity of news articles before they go online. Our approach complements existing content-based methods, and is based on a number of observations regarding article similarity and topicality. First, the popularity of a new article is correlated with the popularity of similar articles of recent publication. Second, the popularity of the new article is related to the recent historical popularity of its main topic. Based on these observations, we use time series forecasting to predict the number of visits an article will receive. Our experiments, conducted on a real data collection of articles in an international news website, demonstrate the effectiveness and efficiency of the proposed method.
... An individual's behavior often depends on the actions of others [8,9,17,29,30,33]. This phenomenon is manifested daily in the decisions people make to adopt a new technology [28,31] or idea [7,33], listen to music [29], engage in risky behavior [4], or join a social movement [17,30]. ...
... An individual's behavior often depends on the actions of others [8,9,17,29,30,33]. This phenomenon is manifested daily in the decisions people make to adopt a new technology [28,31] or idea [7,33], listen to music [29], engage in risky behavior [4], or join a social movement [17,30]. As a result, a variety of behaviors are said to be 'contagious', because they spread through the population as people observe others adopting the behavior and then adopt it themselves. ...
Preprint
Social behaviors are often contagious, spreading through a population as individuals imitate the decisions and choices of others. A variety of global phenomena, from innovation adoption to the emergence of social norms and political movements, arise as a result of people following a simple local rule, such as copy what others are doing. However, individuals often lack global knowledge of the behaviors of others and must estimate them from the observations of their friends' behaviors. In some cases, the structure of the underlying social network can dramatically skew an individual's local observations, making a behavior appear far more common locally than it is globally. We trace the origins of this phenomenon, which we call "the majority illusion," to the friendship paradox in social networks. As a result of this paradox, a behavior that is globally rare may be systematically overrepresented in the local neighborhoods of many people, i.e., among their friends. Thus, the "majority illusion" may facilitate the spread of social contagions in networks and also explain why systematic biases in social perceptions, for example, of risky behavior, arise. Using synthetic and real-world networks, we explore how the "majority illusion" depends on network structure and develop a statistical model to calculate its magnitude in a network.
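The "majority illusion" the abstract describes is easy to see in the extreme case of a star network; the graph and the active set below are our own minimal example, not data from the paper.

```python
# Star graph: the hub (node 0) is the only "active" node, so the trait
# is globally rare, yet every leaf's entire neighborhood is active.
edges = [(0, leaf) for leaf in range(1, 11)]
active = {0}

# build an adjacency map from the edge list
neighbors = {}
for u, v in edges:
    neighbors.setdefault(u, set()).add(v)
    neighbors.setdefault(v, set()).add(u)

global_share = len(active) / len(neighbors)   # 1/11, under 10% of nodes
under_illusion = sum(
    1 for node, nbrs in neighbors.items()
    if len(nbrs & active) / len(nbrs) > 0.5   # local majority of active friends
)
# 10 of the 11 nodes observe a local majority for a globally rare trait
```

Because the hub appears in every leaf's (one-node) neighborhood, its behavior is overrepresented in local observations, which is the friendship-paradox mechanism the preprint traces.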
... Some studies considered information other than acoustic features. In fact, social factors sometimes play an important role in determining whether a song would be popular or not [25]. In [26], it was shown that seasonal music preference has significant impact on music popularity (e.g., Christmas carols in December). ...
... For Debut, Length, and Kurtosis, the prediction performance of each feature group is overall inferior to that for the other popularity metrics. In fact, the debut ranking of a song may be more influenced by awareness and popularity of the artist, promotion, or social trend, as in other cultural products [25], [36], [37]. The group of Complexity is the most effective among the three groups in predicting Max, Std, and Sum, for which relatively good performance was observed in Fig. 7. MFCC shows the best prediction performance for Length, Mean, Skewness, and Kurtosis, although the accuracies are not high for Length and Kurtosis. ...
Preprint
Understanding music popularity is important not only for the artists who create and perform music but also for the music-related industry. However, how music popularity can be defined, what its characteristics are, and whether it can be predicted have not been well studied; this paper addresses these questions. We first define eight popularity metrics to cover multiple aspects of popularity. Then, we analyze each popularity metric using long-term real-world chart data to deeply understand the characteristics of music popularity in the real world. We also build classification models for predicting the popularity metrics using acoustic data. In particular, we focus on evaluating features describing music complexity together with other conventional acoustic features, including MPEG-7 and Mel-frequency cepstral coefficient (MFCC) features. The results show that, although room for improvement remains, it is feasible to predict the popularity metrics of a song significantly better than random chance based on its audio signal, particularly when both the complexity and MFCC features are used.
... However, understanding and ultimately predicting human preferences and behaviors is also important from a fundamental point of view. Indeed, the digital traces that we leave with all sorts of everyday activities (shopping, communicating with others, traveling) are ushering in a new kind of computational social science [5,6], which aims to shed light on human mobility [7,8], activity patterns [9], decision-making processes [10], social influence [11][12][13], and the impact of all these in collective human behavior [14,15]. ...
... Putting together Eqs. (2), (9) and (10), and under the assumption of no prior knowledge about the models (p(M) ∝ const.), we have ...
Preprint
Full-text available
With ever-increasing available data, predicting individuals' preferences and helping them locate the most relevant information has become a pressing need. Understanding and predicting preferences is also important from a fundamental point of view, as part of what has been called a "new" computational social science. Here, we propose a novel approach based on stochastic block models, which have been developed by sociologists as plausible models of complex networks of social interactions. Our model is in the spirit of predicting individuals' preferences based on the preferences of others but, rather than fitting a particular model, we rely on a Bayesian approach that samples over the ensemble of all possible models. We show that our approach is considerably more accurate than leading recommender algorithms, with major relative improvements between 38% and 99% over industry-level algorithms. Besides, our approach sheds light on decision-making processes by identifying groups of individuals that have consistently similar preferences, and enabling the analysis of the characteristics of those groups.
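The group-based intuition behind block-model recommenders can be sketched very simply. This is not the paper's method, which samples over an ensemble of possible partitions in a Bayesian fashion; here the user and item group labels, and all ratings, are illustrative assumptions with the groups taken as given, and a rating is predicted as the average observed rating between the corresponding user group and item group.

```python
from collections import defaultdict

# Hypothetical group assignments and ratings, for illustration only.
user_group = {"u1": "A", "u2": "A", "u3": "B"}
item_group = {"film1": "X", "film2": "Y"}

ratings = [  # (user, item, rating)
    ("u1", "film1", 5), ("u2", "film1", 4),
    ("u1", "film2", 1), ("u2", "film2", 2),
    ("u3", "film1", 1), ("u3", "film2", 5),
]

# Average rating for each (user-group, item-group) block.
block_sum = defaultdict(float)
block_cnt = defaultdict(int)
for user, item, r in ratings:
    key = (user_group[user], item_group[item])
    block_sum[key] += r
    block_cnt[key] += 1

def predict(user, item):
    """Predict a rating from the block average of the two groups."""
    key = (user_group[user], item_group[item])
    return block_sum[key] / block_cnt[key]

print(predict("u2", "film1"))  # block (A, X): (5 + 4) / 2 = 4.5
```

In the full approach, many plausible partitions are sampled and their block-average predictions combined, which is what distinguishes it from fitting a single fixed grouping as done here.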
... Predicting social media engagement is inherently challenging due to the multivariate nature of online content and the feedback loops in user reactions [9,26,30,45,67]. Controlled experiments help isolate the impact of individual factors [59,66]. For instance, Lakkaraju et al. [40] analyzed images posted on various subreddits with different titles and found that while effective titles boost engagement, timing effects often overshadow their influence. ...
Preprint
In today's cross-platform social media landscape, understanding factors that drive engagement for multimodal content, especially text paired with visuals, remains complex. This study investigates how rewriting Reddit post titles adapted from YouTube video titles affects user engagement. First, we build and analyze a large dataset of Reddit posts sharing YouTube videos, revealing that 21% of post titles are minimally modified. Statistical analysis demonstrates that title rewrites measurably improve engagement. Second, we design a controlled, multi-phase experiment to rigorously isolate the effects of textual variations by neutralizing confounding factors like video popularity, timing, and community norms. Comprehensive statistical tests reveal that effective title rewrites tend to feature emotional resonance, lexical richness, and alignment with community-specific norms. Lastly, pairwise ranking prediction experiments using a fine-tuned BERT classifier achieve 74% accuracy, significantly outperforming near-random baselines, including GPT-4o. These results validate that our controlled dataset effectively minimizes confounding effects, allowing advanced models to both learn and demonstrate the impact of textual features on engagement. By bridging quantitative rigor with qualitative insights, this study uncovers engagement dynamics and offers a robust framework for future cross-platform, multimodal content strategies.
... The underlying rationale of herding is that consumers have a desire for social approval, causing them to conform to a group's choice to fit in (Cialdini & Goldstein, 2004). When facing an adoption decision, consumers thus tend to converge on the choices of others (Salganik et al., 2006). Furthermore, consumers may believe that the observed group possesses superior information to judge a product's quality, which lets consumers overrule their own initial information to benefit from the social group's hard-won knowledge (Bikhchandani et al., 1998). ...
Article
Full-text available
Many launch strategies for new products now aim at building pre-release consumer buzz (PRCB), defined as consumers' collective expressions of anticipation for an upcoming product. While a positive association of PRCB with innovation success has been established, little is known about how, under what conditions, and to what extent PRCB influences consumers' adoption decisions. This research sheds light on these issues by investigating PRCB's contagious nature as one of the concept's defining characteristics. Drawing on herding theory, the authors develop a conceptual framework and provide comprehensive experimental evidence that consumers' exposure to PRCB for a new product triggers distinct psychological mechanisms that influence their own adoption decisions: PRCB-observing consumers exhibit both greater social attraction to the "buzz movement" (group-related evaluation) as well as more curiosity and higher quality expectations about the new product (product-related evaluation). Furthermore, these effects are particularly strong for consumers who are highly susceptible to social influence and for products with low popular appeal. The authors complement their consumer-level analysis with an illustrative market-level what-if analysis that approximates the financial consequences of PRCB's contagious effects. Results suggest that the financial impact of PRCB can be substantial but differs significantly across scenarios, depending on product type and consumer segment. These findings have important implications for the management of innovations before launch.
... Many studies have found that higher-quality options (whether products, ideas, candidates, etc.) are not necessarily the most successful. For example, Salganik et al. (2006) found through an experiment in an artificial music market that the success of cultural products is only partly related to quality: generally speaking, high-quality products do not perform poorly and low-quality products do not perform very well, but any other outcome is possible. Social influence is a key factor. ...
Article
Full-text available
This study explores how limited consumer attention influences market concentration in e-commerce. Consumer attention, a scarce resource amidst abundant product information, plays a crucial role in shaping market dynamics. Despite its importance, the effect of limited consumer attention on e-commerce market concentration has not been extensively studied. Using agent-based modeling, we examine the interplay of consumer behaviors, bounded rationality, and social interactions in complex markets. Our results reveal that e-commerce market concentration persists even without product differentiation among sellers. Notably, a negative correlation emerges between consumer attention and market concentration, consistent across different market sizes and social connection densities. These findings provide theoretical insights into market concentration patterns in the Internet economy and contribute to the broader understanding of market structure dynamics.
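A minimal sketch of the kind of agent-based setting the abstract describes, under assumed parameters of my own choosing (seller count, consumer count, attention size): sellers are identical, each consumer can attend to only a few randomly sampled sellers, and buys from the currently most popular one among those sampled. Concentration is summarized with the Herfindahl-Hirschman index (HHI, the sum of squared market shares); varying `attention` lets one probe its relationship to concentration.

```python
import random

random.seed(42)

def simulate_market(n_sellers=50, n_consumers=5000, attention=5):
    """Identical sellers; each consumer buys from the most popular of a
    random sample of `attention` sellers. Returns the market HHI."""
    sales = [0] * n_sellers
    for _ in range(n_consumers):
        observed = random.sample(range(n_sellers), attention)
        # Break ties randomly among the most popular observed sellers.
        best = max(sales[s] for s in observed)
        choice = random.choice([s for s in observed if sales[s] == best])
        sales[choice] += 1
    shares = [s / n_consumers for s in sales]
    return sum(share ** 2 for share in shares)  # HHI

hhi = simulate_market()
print(round(hhi, 3))
```

Even with no quality differences between sellers, the popularity feedback alone produces unequal market shares, which is the mechanism behind persistent concentration in the study's model.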
... It may be expected that in bringing the treatment group embeddings closer to the mean, the treatment biases content exposure among the treated towards more popular posts. This is because the average user's embeddings are likely to be closer to the preferences of the largest number of users on the platform, potentially making them "viral" (Salganik et al., 2006). However, Table A.1 shows that the treatment group was exposed to less popular posts than the control groups because the random numbers picked to generate preference weights for the treatment group were not representative of actual user preferences on the platform. ...
Preprint
Full-text available
Social media algorithms are thought to amplify variation in user beliefs, thus contributing to radicalization. However, quantitative evidence on how algorithms and user preferences jointly shape harmful online engagement is limited. I conduct an individually randomized experiment with 8 million users of an Indian TikTok-like platform, replacing algorithmic ranking with random content delivery. Exposure to "toxic" posts decreases by 27%, mainly due to reduced platform usage by users with higher interest in such content. Strikingly, these users increase engagement with toxic posts they find. Survey evidence indicates shifts to other platforms. Model-based counterfactuals highlight the limitations of blanket algorithmic regulation.
... Unfortunately, it is generally difficult to characterize the causes of virality based on the content of media, as Salganik et al. (2006) demonstrates, so we cannot say what causes some videos to experience this lagged explosion in popularity. ...
Article
Full-text available
We present a large-scale quantitative analysis of anglophone politics channels on YouTube, with three distinct units of analysis: channels, comments, and videos. We demonstrate that although channels have been entering the YouTube system at a roughly constant rate since 2008, there is serious inequality in the attention received by different channels and videos. Furthermore, prolific commenters are responsible for an astonishing amount of activity: 50% of total comments are written by just over 2% of all commenters. The toxicity for which YouTube comments are famous tends to be more pronounced among these super-users than among infrequent commenters. Our findings have important implications for the way in which YouTube viewers interpret what they see as representative of public opinion.
... Theoretical work on sequential (or simultaneous) choice has shown that social influence can have dramatic consequences, as people might be led astray by the already established popularity of an option and eventually settle on options of inferior quality [1,2,4,6,12]. Large scale empirical studies on ranking and upvoting/downvoting interfaces [16,17] have also demonstrated that social influence can substantially alter the social dynamics and the long-term outcomes of online systems, influencing the evaluations and/or popularity of the curated content. Surprisingly, there is comparatively little work assessing the implications of social influence on online rating interfaces, where people evaluate items on a specific rating scale (e.g., 1-10 stars)-potentially the most common sequential interaction systems on the web [7,15,22,24]. ...
Preprint
Full-text available
Theoretical work on sequential choice and large-scale experiments in online ranking and voting systems has demonstrated that social influence can have a drastic impact on social and technological systems. Yet, the effect of social influence on online rating systems remains understudied and the few existing contributions suggest that online ratings would self-correct given enough users. Here, we propose a new framework for studying the effect of social influence on online ratings. We start from the assumption that people are influenced linearly by the observed average rating, but postulate that their propensity to be influenced varies. When the weight people assign to the observed average depends only on their own latent rating, the resulting system is linear, but the long-term rating may substantially deviate from the true mean rating. When the weight people put on the observed average depends on both their own latent rating and the observed average rating, the resulting system is non-linear, and may support multiple equilibria, suggesting that ratings might be path-dependent and deviations dramatic. Our results highlight potential limitations in crowdsourced information aggregation and can inform the design of more robust online rating systems.
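The linear case described above can be sketched directly: each rater reports a convex combination of their own latent rating and the running average of previously displayed ratings, with a conformity weight that depends on the latent rating (here, an illustrative assumption that dissatisfied raters conform more). The specific rating values and weights are not from the paper.

```python
import random

random.seed(0)

def conformity_weight(latent):
    # Assumption for illustration: low raters lean on the crowd more.
    return 0.8 if latent < 5 else 0.2

def simulate(n_raters=200_000):
    total, count, latent_total = 0.0, 0, 0.0
    for _ in range(n_raters):
        latent = random.choice([2, 8])  # true mean rating is 5
        latent_total += latent
        if count == 0:
            reported = latent
        else:
            w = conformity_weight(latent)
            reported = (1 - w) * latent + w * (total / count)
        total += reported
        count += 1
    return latent_total / count, total / count

true_mean, displayed_mean = simulate()
print(round(true_mean, 2), round(displayed_mean, 2))
```

The displayed average settles near the fixed point E[(1-w)q] / (1 - E[w]) = 6.8 rather than the true mean of 5, illustrating the abstract's point that a linear influence rule with rating-dependent weights need not self-correct.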
... It must be noted that the mere heuristic of choosing popular items is not necessarily negative in itself, as high popularity is generally correlated with high quality (as noted earlier, this social learning heuristic is favored by the human species). However, this is not always the case [96]. Because of human popularity bias, people often ignore other aspects that can influence the popularity and success of items and that are not directly related to quality [97] and, more importantly, attribute disproportionately high quality to popular items. ...
Article
Full-text available
The importance of recommender systems has grown in recent years, as these systems are becoming one of the primary ways in which we access content on the Internet. Along with their use, concerns about the fairness of the recommendations they propose have rightfully risen. Recommender systems are known to be affected by popularity bias, the disproportionate preference towards popular items. While this bias stems from human tendencies, algorithms used in recommender systems can amplify it, resulting in unfair treatment of end-users and/or content creators. This article proposes a narrative review of the relevant literature to characterize and understand this phenomenon, both in human and algorithmic terms. The analysis of the literature highlighted the main themes and underscored the need for a multi-disciplinary approach that examines the interplay between human cognition, algorithms, and socio-economic factors. In particular, the article discusses how the overall fairness of recommender systems is impacted by popularity bias. We then describe the approaches that have been used to mitigate the harmful effects of this bias and discuss their effectiveness in addressing the issue, finding that some of the current approaches fail to face the problem in its entirety. Finally, we identify some open problems and research opportunities to help the advancement of research in the fairness of recommender systems.
... For example, a new dance move may gain popularity when a famous person promotes it on social media, but this popularity may fade away if the dance move proves too difficult to learn or if it is too awkward to imitate in public. Understanding the mechanisms that shape the global properties of complex cultural artifacts has been central to numerous studies in anthropology (Henrich, 2016), psychology (Tomasello, 2009), biology (Whiten, 2019), sociology (Salganik, Dodds, & Watts, 2006), linguistics (Gray & Atkinson, 2003;Kirby et al., 2008), and cultural evolution research (Mesoudi, 2011). ...
Preprint
Full-text available
Understanding how cognitive and social mechanisms shape the evolution of complex artifacts such as songs is central to cultural evolution research. Social network topology (what artifacts are available?), selection (which are chosen?), and reproduction (how are they copied?) have all been proposed as key influencing factors. However, prior research has rarely studied them together due to methodological challenges. We address this gap through a controlled naturalistic paradigm whereby participants (N=2,404) are placed in networks and are asked to iteratively choose and sing back melodies from their neighbors. We show that this setting yields melodies that are more complex and more pleasant than those found in the more-studied linear transmission setting, and exhibits robust differences across topologies. Crucially, these differences are diminished when selection or reproduction bias are eliminated, suggesting an interaction between mechanisms. These findings shed light on the interplay of mechanisms underlying the evolution of cultural artifacts.
... Such platforms allow for collecting experimental data at a scale and pace unavailable in physical laboratories. For example, Salganik et al. (2006) designed an artificial music market with 14,341 participants to study the effects of social influence on individual and collective decision-making in cultural markets. ...
Article
Full-text available
With the growing complexity of knowledge production, social science must accelerate and open up to maintain explanatory power and responsiveness. This goal requires redesigning the front end of the research to build an open and expandable knowledge infrastructure that stimulates broad collaborations, enables breaking down inertia and path dependencies of conventional approaches, and boosts discovery and innovation. This article discusses the coordinated open-source model as a promising organizational scheme that can supplement conventional research infrastructure in certain areas. The model offers flexibility, decentralization, and community-based development and aligns with open science ideas, such as reproducibility and transparency. Similar solutions have been successfully applied in natural science, but social science needs to catch up. I present the model’s design and consider its potential and limitations (e.g., regarding development, sustainability, and coordination). I also discuss open-source applications in various areas, including a case study of an open-source survey harmonization project Comparative Panel File.
... Further, EE evolution is not fully predetermined but instead influenced by social processes, some of which are genuinely unknowable ex ante (Salganik et al., 2006). For example, few would have been able to foresee that a surging demand for vaccines driven by the COVID-19 ...
Article
Full-text available
The term 'external enabler' (EE) is a collective label for non-trivial changes to emerging and established organizations' (macro) environment, such as new technologies, regulatory changes, demographic and sociocultural trends, and changes to the natural environment. Two fundamental assumptions of EE scholarship are that all such changes benefit some organizations, and that different types of change can offer similar benefits and strategic potentials for strategic action. The external enabler framework (EEF) develops structure and vocabulary to guide cumulative knowledge development across different types of such environmental changes to inspire and guide research and practice regarding strategic and fortuitous leveraging of EEs in entrepreneurial pursuits. Citation statistics and a growing number of applications and extensions suggest that the EEF has been well received, and its application area has expanded well beyond its original domain of independent business startups to also include mission-oriented ventures, new initiatives and growth of established organizations of all sizes and ages, and the creation or rejuvenation of industries, ecosystems, and local economies. In this article, we update and elaborate on the EEF and review the scholarship that has evolved around it to take stock of past developments and guide and inspire future research and practice in this important domain.
... A homepage is intended to present a collection of articles as a cohesive bundle; individual articles do not exist in isolation (Tufte, 1990). Predicting the placement of a single article without considering the context of other articles would be overly noisy and potentially ineffective (Salganik et al., 2006). Conversely, attempting to predict the placement of all articles simultaneously poses a combinatorial challenge that is computationally infeasible. ...
Preprint
Full-text available
Information prioritization plays an important role in how humans perceive and understand the world. Homepage layouts serve as a tangible proxy for this prioritization. In this work, we present NewsHomepages, a large dataset of over 3,000 news website homepages (including local, national and topic-specific outlets) captured twice daily over a three-year period. We develop models to perform pairwise comparisons between news items to infer their relative significance. To illustrate that modeling organizational hierarchies has broader implications, we applied our models to rank-order a collection of local city council policies passed over a ten-year period in San Francisco, assessing their "newsworthiness". Our findings lay the groundwork for leveraging implicit organizational cues to deepen our understanding of information prioritization.
... Second, most cultural products are borne out of collaborations between artists and practitioners that act as a conduit for ideas and inspirations. Accordingly there have been notable scientific studies of culture and cultural phenomena from the network perspective focusing on the relationships among cultural products, creators, consumers, etc. [14][15][16]. In this paper we study the network of the creators and practitioners of culture to understand the patterns of collaborations and associations, and what they tell us about the nature of cultural prominence and diversity. ...
Preprint
Propelled by the increasing availability of large-scale, high-quality data, advanced data modeling and analysis techniques are enabling novel and significant scientific understanding of a wide range of complex social, natural, and technological systems. These developments also provide opportunities for studying cultural systems and phenomena -- which can be said to encompass all products of human creativity and ways of life. An important characteristic of a cultural product is that it does not exist in isolation from others, but forms an intricate web of connections on many levels. In the creation and dissemination of cultural products and artworks in particular, collaboration and communication of ideas play an essential role, and these can be captured in the heterogeneous network of the creators and practitioners of art. In this paper we propose novel methods to analyze and uncover meaningful patterns in such a network, using the network of western classical musicians constructed from a large-scale, comprehensive data set of compact disc recordings. We characterize the complex patterns in the network landscape of collaboration between musicians across multiple scales, ranging from the macroscopic to the mesoscopic and microscopic, that represent the diversity of cultural styles and the individuality of the artists.
... Social influence signals are widely used in such settings and help promote popular products to maximize market efficiency. However, it has been argued that social influence makes these markets unpredictable (14). As a result, social influence has been presented in a negative light. ...
Preprint
Can live music events generate complex contagion in music streaming? This paper finds evidence in the affirmative, but only for the most popular artists. We generate a novel dataset from Last.fm, a music tracking website, to analyse the listenership history of 1.3 million users over a two-month time horizon. We use daily play counts along with event attendance data to run a regression discontinuity analysis in order to show the causal impact of concert attendance on music listenership among attendees and their friends' networks. First, we show that attending a music artist's live concert increases that artist's listenership among the attendees of the concert by approximately 1 song per day per attendee (p-value<0.001). Moreover, we show that this effect is contagious and can spread to users who did not attend the event. However, the extent of contagion depends on the type of artist. We only observe contagious increases in listenership for well-established, popular artists (0.06 more daily plays per friend of an attendee [p<0.001]), while the effect is absent for emerging stars. We also show that the contagion effect size increases monotonically with the number of friends who have attended the live event.
... Much of the most credible evidence about peer effects in humans and primates comes from small experiments in artificial social environments (Asch, 1956;Sherif, 1936;Whiten et al., 2005;Herbst and Mas, 2015). In some cases, field experiments modulating tie formation and group membership (Sacerdote, 2001;Zimmerman, 2003;Lyle, 2007;Carrell et al., 2009;Centola, 2010;Firth et al., 2016), shocks to group or peer behavior (Aplin et al., 2015;Banerjee et al., 2013;Bond et al., 2012;Cai et al., 2015;Eckles et al., 2016;van de Waal et al., 2013), or subsequent exposure to peer behaviors (Aral and Walker, 2011;Bakshy et al., 2012a;Salganik et al., 2006) have been possible, but in many cases these experimental designs are infeasible. Thus, much recent work on peer effects uses observational data from new large-scale measurement of behavior (Aral et al., 2009;Bakshy et al., 2011;Friggeri et al., 2014;Ugander et al., 2012;Allen et al., 2013) or longitudinal surveys (Christakis and Fowler, 2007;Iyengar et al., 2011;Banerjee et al., 2013;Card and Giuliano, 2013;Christakis and Fowler, 2013;Fortin and Yazbeck, 2015). ...
Preprint
Peer effects, in which the behavior of an individual is affected by the behavior of their peers, are posited by multiple theories in the social sciences. Other processes can also produce behaviors that are correlated in networks and groups, thereby generating debate about the credibility of observational (i.e. nonexperimental) studies of peer effects. Randomized field experiments that identify peer effects, however, are often expensive or infeasible. Thus, many studies of peer effects use observational data, and prior evaluations of causal inference methods for adjusting observational data to estimate peer effects have lacked an experimental "gold standard" for comparison. Here we show, in the context of information and media diffusion on Facebook, that high-dimensional adjustment of a nonexperimental control group (677 million observations) using propensity score models produces estimates of peer effects statistically indistinguishable from those from using a large randomized experiment (220 million observations). Naive observational estimators overstate peer effects by 320% and commonly used variables (e.g., demographics) offer little bias reduction, but adjusting for a measure of prior behaviors closely related to the focal behavior reduces bias by 91%. High-dimensional models adjusting for over 3,700 past behaviors provide additional bias reduction, such that the full model reduces bias by over 97%. This experimental evaluation demonstrates that detailed records of individuals' past behavior can improve studies of social influence, information diffusion, and imitation; these results are encouraging for the credibility of some studies but also cautionary for studies of rare or new behaviors. More generally, these results show how large, high-dimensional data sets and statistical learning techniques can be used to improve causal inference in the behavioral sciences.
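The confounding problem the abstract addresses can be illustrated with a toy simulation (all parameters, including the functional forms, are illustrative assumptions, not values from the study): a user's prior interest drives both peer exposure and adoption, so the naive exposed-vs-unexposed contrast overstates the peer effect, while stratifying on prior behavior, a crude stand-in for propensity-score adjustment, largely removes the bias.

```python
import random

random.seed(1)
TRUE_EFFECT = 0.10

# Simulate users: prior interest confounds exposure and adoption.
users = []
for _ in range(200_000):
    prior = random.random()                       # prior interest (confounder)
    exposed = random.random() < 0.2 + 0.6 * prior # interested users see more peers
    adopt = random.random() < 0.05 + 0.4 * prior + (TRUE_EFFECT if exposed else 0)
    users.append((prior, exposed, adopt))

def naive_estimate(users):
    """Unadjusted exposed-minus-unexposed difference in adoption rates."""
    t = [a for _, e, a in users if e]
    c = [a for _, e, a in users if not e]
    return sum(t) / len(t) - sum(c) / len(c)

def stratified_estimate(users, n_bins=20):
    """Weighted average of within-stratum differences, stratifying on prior."""
    diffs, weights = [], []
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        t = [a for p, e, a in users if e and lo <= p < hi]
        c = [a for p, e, a in users if not e and lo <= p < hi]
        if t and c:
            diffs.append(sum(t) / len(t) - sum(c) / len(c))
            weights.append(len(t) + len(c))
    return sum(d * w for d, w in zip(diffs, weights)) / sum(weights)

naive = naive_estimate(users)
adjusted = stratified_estimate(users)
print(round(naive, 3), round(adjusted, 3))
```

In this setup the naive estimate lands near 0.18 against a true effect of 0.10, echoing the abstract's finding that adjusting for behavior closely related to the focal behavior removes most of the bias.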
... In his book "Everything Is Obvious: Once You Know the Answer" [39], the sociologist and network science pioneer Duncan J. Watts suggests that both the narrative fallacy and hindsight bias operate with particular force when people observe unusually successful outcomes and regard them as the necessary product of hard work and talent, whereas such outcomes mainly emerge from a complex, interwoven sequence of steps, each depending on the preceding ones: if any of them had been different, an entire career or life trajectory would almost surely differ too. This argument also builds on the results of a seminal experimental study, performed some years earlier by Watts himself in collaboration with other authors [40], in which the success of previously unknown songs in an artificial music market was shown to be only weakly correlated with the quality of the songs themselves. This clearly makes any kind of prediction very difficult, as also shown in another, more recent study [41]. ...
Preprint
The largely dominant meritocratic paradigm of highly competitive Western cultures is rooted in the belief that success is due mainly, if not exclusively, to personal qualities such as talent, intelligence, skills, effort or risk taking. Sometimes, we are willing to admit that a certain degree of luck could also play a role in achieving significant material success. But, as a matter of fact, it is rather common to underestimate the importance of external forces in individual success stories. It is very well known that intelligence or talent exhibits a Gaussian distribution among the population, whereas the distribution of wealth - considered a proxy of success - typically follows a power law (Pareto law). Such a discrepancy between a Normal distribution of inputs, with a typical scale, and the scale-invariant distribution of outputs suggests that some hidden ingredient is at work behind the scenes. In this paper, with the help of a very simple agent-based model, we suggest that such an ingredient is just randomness. In particular, we show that, while some degree of talent is necessary to be successful in life, the most talented people almost never reach the highest peaks of success, being overtaken by mediocre but considerably luckier individuals. To the best of our knowledge, this counterintuitive result - although implicitly suggested between the lines in a vast literature - is quantified here for the first time. It sheds new light on the effectiveness of assessing merit on the basis of the level of success reached and underlines the risks of distributing excessive honors or resources to people who, at the end of the day, could simply have been luckier than others. With the help of this model, several policy hypotheses are also addressed and compared to identify the most efficient strategies for public funding of research in order to improve meritocracy, diversity and innovation.
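The talent-versus-luck mechanism the abstract describes can be sketched loosely as follows (parameter values, event probabilities, and the doubling/halving rule are illustrative assumptions in the spirit of the model, not its exact specification): talents are normally distributed, everyone starts with equal capital, and repeated random events multiply or divide capital, with talent only raising the chance of exploiting a lucky event.

```python
import random

random.seed(7)

def simulate(n_agents=1000, n_steps=80):
    """Gaussian talent in [0, 1], equal starting capital, random events.
    A lucky event doubles capital with probability equal to talent;
    an unlucky event halves it unconditionally."""
    talents = [min(max(random.gauss(0.6, 0.1), 0.0), 1.0)
               for _ in range(n_agents)]
    capital = [10.0] * n_agents
    for _ in range(n_steps):
        for i in range(n_agents):
            event = random.random()
            if event < 0.25:                        # lucky event
                if random.random() < talents[i]:    # exploited only if talented
                    capital[i] *= 2
            elif event < 0.5:                       # unlucky event
                capital[i] /= 2
    return talents, capital

talents, capital = simulate()
richest = max(range(len(capital)), key=capital.__getitem__)
most_talented = max(range(len(talents)), key=talents.__getitem__)
print(talents[richest], richest == most_talented)
```

Gaussian inputs combined with multiplicative random shocks produce a heavy-tailed capital distribution, and the richest agent is typically not the most talented one, which is the abstract's central claim.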
... As a result of this cognitive heuristic, known as "position bias" [33], people pay more attention to items at the top of a list than those in lower positions. Social influence bias, communicated through social signals, helps direct attention to online content that has been liked, shared or approved by many others [34,35]. Cognitive heuristics interact with how a web site displays information to users to alter the dynamics of social contagion. ...
Preprint
The many decisions people make about what to pay attention to online shape the spread of information in online social networks. Due to the constraints of available time and cognitive resources, the ease of discovery strongly impacts how people allocate their attention to social media content. As a consequence, the position of information in an individual's social feed, as well as explicit social signals about its popularity, determine whether it will be seen, and the likelihood that it will be shared with followers. Accounting for these cognitive limits simplifies the mechanics of information diffusion in online social networks and explains puzzling empirical observations: (i) information generally fails to spread in social media and (ii) highly connected people are less likely to re-share information. Studies of information diffusion on different social media platforms reviewed here suggest that the interplay between human cognitive limits and network structure differentiates the spread of information from other social contagions, such as the spread of a virus through a population.
... Online social networks act as information conduits for real-world news and events [7], largely driven by collective attention from multiple social media actors [50]. Collective human attention drives various social, economic and technological phenomena, such as herding behavior in financial markets [44], formation of trends [2], popularity of news [50], web pages [39], and music [41], propagation of memes [24], ideas, opinions and topics [40], person-to-person word-of-mouth advertising and viral marketing [24], and diffusion of products and innovations [4]. Moreover, it is the key phenomenon underlying social media reporting of emerging topics and breaking news [31]. ...
Preprint
Full-text available
Today, social media provide the means by which billions of people experience news and events happening around the world. However, the absence of traditional journalistic gatekeeping allows information to flow unencumbered through these platforms, often raising questions of veracity and credibility of the reported information. Here we ask: How do the dynamics of collective attention directed toward an event reported on social media vary with its perceived credibility? By examining the first large-scale, systematically tracked credibility database of public Twitter messages (47M messages corresponding to 1,138 real-world events over a span of three months), we established a relationship between the temporal dynamics of events reported on social media and their associated level of credibility judgments. Representing collective attention by the aggregate temporal signatures of an event reportage, we found that the amount of continued attention focused on an event provides information about its associated levels of perceived credibility. Events exhibiting sustained, intermittent bursts of attention were found to be associated with lower levels of perceived credibility. In other words, as more people showed interest during moments of transient collective attention, the associated uncertainty surrounding these events also increased.
... To ensure that the effects we observed were not pathdependent (i.e., if a discussion breaks down by chance because of a single user), we created eight separate "universes" for each condition [76], for a total of 32 universes. Each universe was seeded with the same comments, but were otherwise entirely independent. ...
Preprint
In online communities, antisocial behavior such as trolling disrupts constructive discussion. While prior work suggests that trolling behavior is confined to a vocal and antisocial minority, we demonstrate that ordinary people can engage in such behavior as well. We propose two primary trigger mechanisms: the individual's mood, and the surrounding context of a discussion (e.g., exposure to prior trolling behavior). Through an experiment simulating an online discussion, we find that both negative mood and seeing troll posts by others significantly increases the probability of a user trolling, and together double this probability. To support and extend these results, we study how these same mechanisms play out in the wild via a data-driven, longitudinal analysis of a large online news discussion community. This analysis reveals temporal mood effects, and explores long range patterns of repeated exposure to trolling. A predictive model of trolling behavior shows that mood and discussion context together can explain trolling behavior better than an individual's history of trolling. These results combine to suggest that ordinary people can, under the right circumstances, behave like trolls.
... Identifying what proceeds in predictable directions, as opposed to drifting upon the tides of fashion, would be of great utility in understanding the evolution of knowledge. It is wasted effort to try to predict the future of randomly drifting fashionable buzzwords [2,15], but one might hope to predict selected elements, such as valid new scientific terms. The kind of evolutionary analysis used here is generally applicable to any case study where popularity can be presented in the form of frequencies and ranked lists over time. ...
Preprint
The evolution of vocabulary in academic publishing is characterized via keyword frequencies recorded in the ISI Web of Science citation database. In four distinct case studies, evolutionary analysis of keyword frequency change through time is compared to a model of random copying used as the null hypothesis, such that selection may be identified against it. The case studies from the physical sciences indicate greater selection in keyword choice than those from the social sciences. Similar evolutionary analyses can be applied to a wide range of phenomena, wherever the popularity of multiple items through time has been recorded, as with web searches or sales of popular music and books, for example.
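The "random copying" null model mentioned above can be sketched in a few lines. This is a generic neutral-drift simulation under assumed parameters (population size, innovation rate), not the paper's exact procedure:

```python
import random
from collections import Counter

def random_copying(n_keywords=50, pop_size=200, n_gens=100, mu=0.01, seed=0):
    # Each generation, every keyword use copies a use from the previous
    # generation, except with small probability mu a new keyword is coined.
    rng = random.Random(seed)
    pop = [rng.randrange(n_keywords) for _ in range(pop_size)]
    next_label = n_keywords
    for _ in range(n_gens):
        new_pop = []
        for _ in range(pop_size):
            if rng.random() < mu:
                new_pop.append(next_label)   # innovation: brand-new keyword
                next_label += 1
            else:
                new_pop.append(rng.choice(pop))  # copy a random earlier use
        pop = new_pop
    return pop

pop = random_copying()
ranked = Counter(pop).most_common()
# Under neutrality, turnover in the top ranks arises from drift alone;
# keyword series that deviate from this pattern suggest selection at work.
```

Comparing observed rank turnover against many runs of such a neutral model is one way to identify selection, as the abstract describes.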
Article
Why would a law-abiding occupational community support members engaged in legally prohibited actions? We propose that lawbreaking can elicit informal support when it is construed as a disinterested action—intended to serve the community rather than the perpetrator. We study how illegal remixing (“bootlegging”) affects an artist’s ability to secure opening act and other performance opportunities in the electronic dance music (EDM) community, whose members endorse the substance of copyright law but whose norms about bootlegging are ambiguous. Data on 38,784 disc jockeys (DJs) across 97 countries over 10 years reveal that producing bootlegs is associated with more opportunities to perform, compared to producing official remixes or original music. This effect disappears when community members view bootlegging as a self-serving action—primarily designed to benefit the perpetrator. An online experiment and an expert survey rule out the possibility that bootlegs are considered more creative, of higher quality, or better able to attract attention. We shed additional light on our proposed mechanism by analyzing data from 34 interviews with EDM professionals. This helps us to explain how a lawbreaker can paradoxically be perceived as serving the community, thereby eliciting active community support for their action.
Article
Full-text available
To elucidate the evolutionary dynamics of culture, we must address fundamental questions such as whether we can interpolate and extrapolate cultural evolution, whether the time series of cultural evolution is distinguishable from its reverse, what factors determine the direction of change, and how the cultural influence of a creative work from the viewpoint of an instant is correlated with that from the viewpoint of a later instant. To answer these questions, the evolution of classical Japanese poetry, waka, specifically tanka, was investigated. Phylogenetic networks were constructed on the basis of the vector representation obtained using a neural language model. The parent–child relationship in the phylogenetic networks exhibited significant agreement with a previously established honkadori (allusive variation) phrase-borrowing relationship. The real phylogenetic networks were distinguishable from the time-reversed and shuffled ones. Two anthologies could be interpolated but not extrapolated. The number of children of a poem in the phylogenetic networks, the proxy variable of its cultural influence, evaluated at an instant, was positively correlated with that evaluated later. A poem selected for an authoritative anthology tended to have 10–50% more children than a similar but nonselected poem, implying the existence of the Matthew effect. A model with mean-reverting self-excitation replicated these results.
Article
There is increasing evidence that social networks matter not only for long-distance moves but also for short-distance residential mobility. And the emerging structural sorting perspective is integrating networks into understandings of segregation processes. We add to this literature by considering how former school peers influence residential choices. We use Swedish register data describing the residential histories of cohorts of students who attended the same primary or secondary schools in Sweden. We trace their residential choices in young adulthood and estimate the effect of distance to peers on these choices. To account for selection, we use the spatial configuration of older cohorts who attended the same schools to adjust for peer similarity on unobserved preferences and attitudes. Using conditional logistic regression models of residential destinations, we find that individuals are more likely to choose a neighbourhood close to former school peers. Drawing on a linked lives perspective, we also consider how the peer effects change over the early adult life-course. The models imply that other networks can displace the social influence of primary and secondary school peers. While our analysis does not consider segregation as an outcome, our results suggest that schools may play a role in reproducing patterns of segregation within and between generations.
Chapter
Collective intelligence is a novel construct which posits that a large number of people can work together to come up with better answers to various problems in less time than would be spent individually or in groups. Based on this premise, the University Institute for Biocomputing and Complex Systems Physics Research (BIFI) of the University of Zaragoza and the company Kampal Data Solutions created the collective intelligence platform named ‘Collective Learning.’ This chapter presents the various experiments that have been carried out in the educational field with this platform, and it highlights how the results obtained have enhanced our knowledge about the generation of collective intelligence and its impact on the possible learning of digital skills and online risk management competencies by the participating students.
Article
Full-text available
Outcomes in the cultural arena are due to many factors, but are there general rules that can suggest what makes some cultural traits successful and others not? Research in cultural evolution theory distinguishes factors related to social influence (such as copying from the majority, or from certain individuals) from factors related to individual, non-socially influenced propensities, such as evolved cognitive predispositions or physical, biological, and environmental constraints. Here we show, using analytical and individual-based models, that individual preferences, even when weak, determine the equilibrium point of cultural dynamics when acting together with nondirectional social influence in three of the four cases we study. The results have implications for the importance of taking individual-level, nonsocial factors into account when studying cultural evolution, as well as for the interpretation of cross-cultural regularities, which are to be expected but can be the product of weak directional forces intensified by social influence.
Book
Full-text available
A measurement such as the one performed with the Inertia-Uncertainty Bifactorial Model (Modelo Bifactorial Inercia-Incertidumbre), which determines the potential state of electoral support for parties, has multiple uses, among them diagnosing the field of uncertainty, evaluating the internal dynamics of electorates, and forming part of a procedure for intervention analysis. This usefulness derives from being part of a process of scientific knowledge, a very different situation from that of secret electoral predictions as disseminated in Spain, which really form part of a process of political communication. This book is structured in two parts. The first part offers a theoretical and methodological presentation of the main elements involved in the challenge of measuring the possible effect of election campaigns and intervening events. To that end, it reviews the most significant literature both on measurement methods and on the possible intervening elements that may act as trend-modifying mechanisms. In any impact-measurement process, the indicators or indices used occupy a central place. In operational practice, the scenarios configured with the Inertia-Uncertainty Bifactorial Model define a system of indicators corresponding to different factors; methodologically, the published estimate is an index composed of the set of indicators measured through the scenarios. Special attention is also paid to the debate on the effect that publishing polls has on public opinion. The second part presents two main methodologies in intervention analysis: those based on longitudinal data and those that use a single measurement as a reference against a target. For each of the two methodologies, the basic elements that shape it are presented, followed by an application example.
In longitudinal terms, the possible campaign effects in the Spanish general elections of July 2023 are analyzed. For the analysis of synchronous data from a single survey, the Inertia-Uncertainty Bifactorial Model is introduced as a tool for intervention analysis, including a case study of the 2024 European elections. The conclusion in both case studies is equivalent: stationary dynamics (Nash equilibria) bring electoral results close to prior measurements, whereas evolutionary or strongly dynamic processes substantially modify the electorates' final decision. In the case of the European Parliament elections, this again highlights the fundamental role of mobilization; mobilization and demobilization dynamics were key to understanding electoral processes in Spain up to 2015.
Article
Increasingly, crowdfunding is transforming financing for many people worldwide. Yet we know relatively little about how, why, and when funding outcomes are impacted by signaling between funders. We conduct two studies of N=500 and N=750 participants involved in crowdfunding to investigate the effect of “crowd signals,” i.e., certain characteristics deduced from the amounts and timing of contributions, on the decision to fund. In our first study, we find that, under a variety of conditions, contributions of heterogeneous amounts arriving at varying time intervals are significantly more likely to be selected than homogeneous contribution amounts and times. The impact of signaling is strongest among participants who are susceptible to social influence. The effect is remarkably general across different project types, fundraising goals, participant interest in the projects, and participants’ altruistic attitudes. Our second study using less strict controls indicates that the role of crowd signals in decision-making is typically unrecognized by participants. Our results underscore the fundamental nature of social signaling in crowdfunding. They highlight the importance of designing around these crowd signals and inform user strategies both on the project creator and funder side.
Article
Social interactions among consumers, especially in the consumption of digital goods, have become commonplace in an increasingly interconnected marketplace. As such, understanding the impact of such interactions on consumer behavior carries major academic and practitioner significance. This research focuses on an important but under-researched aspect of social influence and the underlying mechanisms that may be driving peer effects. Using the context of a massive online peer-to-peer game, we estimate peer effects arising from anonymous peers. The findings suggest a robust and significant influence of anonymous peers on one’s own purchase behavior. Further, we find that peer effects arising from anonymous peers are largely driven by informational reasons as opposed to competitive concern even in a competitive online gaming context. That is, players rely on others’ purchase decisions despite anonymity; moreover, such influence is more pronounced for inexperienced players with little contextual knowledge. To demonstrate the effectiveness of leveraging the nature of peer influence in online game settings, we provide an illustrative example using an intervention simulation. Finally, we discuss the implications of our findings for marketing academia and practitioners in the video game industry.
Article
We propose a simple yet comprehensive conceptual framework for the identification of different sources of error in research with digital behavioural data. We use our framework to map potential sources of error in 25 years of research on reputation effects in peer-to-peer online market platforms. Using a meta-dataset comprising 346 effect sizes extracted from 109 articles, we apply meta-dominance analysis to quantify the relative importance of different error components. Our results indicate that 85% of explained effect size heterogeneity can be attributed to the measurement process, which comprises the choice of platform, data collection mode, construct operationalisation and variable transformation. Error components attributable to the sampling process or publication bias capture relatively small parts of the explained effect size heterogeneity. This approach reveals at which stages of the research process researcher decisions may affect data quality most. This approach can be used to identify potential sources of error in established strands of research beyond the literature of behavioural data from online platforms.
Article
Problem definition: Short video format platforms like NetEase and TikTok are attention economies that host user-generated content, in the form of combined video and audio elements, and operate on principles of network virality. This study explores user content creation strategies by focusing on decisions around content release frequency and the incorporation of adopted content in newly generated creations. Methodology/results: We theoretically explore the role of the following three mechanisms influencing these decisions: (a) social hierarchies in the platform’s network structure, (b) virality of content on the platform, and (c) algorithmic interventions through the ranking of content recommendations. To empirically study these relationships, we use a detailed log of user activity on NetEase Cloud Village registered during the month of November 2019 and carry out regression-based analyses. We find that high-status users, measured through their follower count, adapt their content release frequency strategically, slowing down after successful content to avoid dilution of potential virality. We also observe that high-status users generally deviate from prevailing viral trends in their content creation. However, following exceptionally successful releases, they tend to conform more to viral content, suggesting a risk-averse approach. Finally, contrary to algorithm aversion, users prefer content recommended by the platform. However, high-status users disregard algorithms to retain agency in decision making. Managerial implications: Our findings provide a holistic understanding of the content creation process and suggest that platforms could strategically adjust algorithmic ranking policies to foster content creation diversity while catering to the preferences of users with different status levels. Supplemental Material: The online appendices are available at https://doi.org/10.1287/msom.2021.0332 .
Article
Taste is central to the sociology of culture and a frequently invoked explanans in the discipline at large. Yet, it remains a semantically ambiguous polyseme that has been understood and operationalized in often divergent ways by sociologists. In this essay, we survey contemporary empirical research on cultural tastes and use retroductive reasoning from measurements of taste to clarify the semantic ambiguity surrounding taste. We argue that taste should be conceptualized as a person’s thick subjectivity in a cultural field, that is to say a fundamentally multidimensional orientation that describes how we feel, consume, and praise in cultural fields. Recognizing the inherent multidimensionality to taste allows us to refine our understanding of complex taste phenomena. We outline a family of complex tastes using characteristic antinomies among their constituent modalities of action, and use a case study to show how each variety corresponds to extant folk concepts about taste.
Article
Full-text available
Drawing upon institutionalist theory this article analyzes how the introduction of new cultural objects produced for a mass audience is managed through an organized discourse. Data come from announcements of prime-time television series in development for the 1991-92 season by the four U.S. television networks. Maximum-likelihood logit analyses support the conclusion that network programmers working in a highly institutionalized context use reputation, imitation, and genre as rhetorical strategies to rationalize and legitimize their actions. This study contributes to institutionalist theory and the sociology of culture by explaining the content and consequences of business discourse in a culture industry.
Article
Full-text available
The preselection of goods for potential consumption is a feature common to all industries. In order for new products or ideas to reach consumers, they must first be processed favorably through a system of organizations whose units filter out a large proportion of candidates before they arrive at the consumption stage (Barnett 1953). Much theory and research on complex organizations is concerned with isolated aspects of this process by which innovations flow through organization systems-such as the relation of research and development units to the industrial firm (Burns and Stalker 1961; Wilensky 1968); or problems encountered by public agencies attempting to implement new policy decisions (Selznick 1949; Bailey and Mosher 1968; Moynihan 1969).
Article
Full-text available
A meta-analysis of conformity studies using an Asch-type line judgment task (1952, 1956) was conducted to investigate whether the level of conformity has changed over time and whether it is related cross-culturally to individualism–collectivism. The literature search produced 133 studies drawn from 17 countries. An analysis of US studies found that conformity has declined since the 1950s. Results from 3 surveys were used to assess a country's individualism–collectivism, and for each survey the measures were found to be significantly related to conformity. Collectivist countries tended to show higher levels of conformity than individualist countries. Conformity research must attend more to cultural variables and to their role in the processes involved in social influence. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
An informational cascade occurs when it is optimal for an individual, having observed the actions of those ahead of him, to follow the behavior of the preceding individual with regard to his own information. The authors argue that localized conformity of behavior and the fragility of mass behaviors can be explained by informational cascades. Copyright 1992 by University of Chicago Press.
Article
Full-text available
This study employs a stochastic model developed by G. Udny Yule and Herbert A. Simon as the probability mechanism underlying the consumer's choice of artistic products and predicts that artistic outputs will be concentrated among a few lucky individuals. We find that the probability distribution implied by the stochastic model provides an excellent description of the empirical data in the popular music industry, suggesting that the stochastic model may represent the process generating the superstar phenomenon. Because the stochastic model does not require differential talents among individuals, our empirical results support the notion that the superstar phenomenon could exist among individuals with equal talent. Copyright 1994 by MIT Press.
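The Yule-Simon "success breeds success" mechanism invoked above can be illustrated with a minimal simulation; the parameter `alpha` (probability of picking a brand-new artist) and all other details are illustrative assumptions, not the paper's calibration:

```python
import random

def yule_simon(n_choices=10000, alpha=0.05, seed=2):
    # With probability alpha a consumer picks a brand-new artist; otherwise
    # an artist is chosen in proportion to past picks (success breeds success).
    rng = random.Random(seed)
    picks = []       # sequence of chosen artist ids
    counts = {}      # artist id -> number of picks so far
    for _ in range(n_choices):
        if not picks or rng.random() < alpha:
            artist = len(counts)            # new artist enters the market
        else:
            artist = rng.choice(picks)      # proportional to past popularity
        picks.append(artist)
        counts[artist] = counts.get(artist, 0) + 1
    return counts

counts = yule_simon()
top = sorted(counts.values(), reverse=True)
share_top10 = sum(top[:10]) / sum(top)
# A handful of "superstars" capture a large share of all picks, even though
# no artist differs from any other in underlying talent.
```

Note that the model contains no quality differences at all; the concentration of success arises purely from the reinforcement dynamics, which is exactly the point the abstract makes about superstars with equal talent.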
Article
Full-text available
This review covers recent developments in the social influence literature, focusing primarily on compliance and conformity research published between 1997 and 2002. The principles and processes underlying a target's susceptibility to outside influences are considered in light of three goals fundamental to rewarding human functioning. Specifically, targets are motivated to form accurate perceptions of reality and react accordingly, to develop and preserve meaningful social relationships, and to maintain a favorable self-concept. Consistent with the current movement in compliance and conformity research, this review emphasizes the ways in which these goals interact with external forces to engender social influence processes that are subtle, indirect, and outside of awareness.
Article
We examine the robustness of information cascades in laboratory experiments. Apart from the situation in which each player can obtain a signal for free (as in the experiment by Anderson and Holt (1997), American Economic Review, 87 (5), 847-862), the case of costly signals is studied where players decide whether or not to obtain private information, at a small but positive cost. In the equilibrium of this game, only the first player buys a signal and makes a decision based on this information whereas all following players do not buy a signal and herd behind the first player. The experimental results show that too many signals are bought and the equilibrium prediction performs poorly. To explain these observations, the depth of the subjects' reasoning process is estimated, using a statistical error-rate model. Allowing for different error rates on different levels of reasoning, we find that the subjects' inferences become significantly more noisy on higher levels of the thought process, and that only short chains of reasoning are applied by the subjects.
Article
When a series of individuals with private information announce public predictions initial conformity can create an "information cascade" in which later predictions match the early announcements. This paper reports an experiment in which private signals are draws from an unobserved urn. Subjects make predictions in sequence and are paid if they correctly guess which of two urns was used for the draws. If initial decisions coincide, then it is rational for subsequent decision makers to follow the established pattern, regardless of their private information. Rational cascades formed in most periods in which such an imbalance occurred.
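The rational-cascade logic described above can be made concrete. The sketch below assumes the standard symmetric-urn setup with informative private signals and tie-breaking by one's own signal: before a cascade, each public guess reveals the guesser's private signal, and once the public lead for one urn reaches two inferred signals it becomes rational to ignore one's own draw and follow the crowd.

```python
def cascade_guesses(private_signals):
    # lead = (# inferred 'a' signals) - (# inferred 'b' signals) so far.
    lead = 0
    guesses = []
    for s in private_signals:
        if lead >= 2:
            g = "A"                         # cascade on urn A: signal ignored
        elif lead <= -2:
            g = "B"                         # cascade on urn B: signal ignored
        else:
            g = "A" if s == "a" else "B"    # follow own signal (ties included)
            lead += 1 if s == "a" else -1   # this guess reveals the signal
        guesses.append(g)
    return guesses

# Two early 'a' signals lock in an A-cascade against later 'b' signals:
print(cascade_guesses(["a", "a", "b", "b", "b"]))
# → ['A', 'A', 'A', 'A', 'A']
```

This makes the paper's observation precise: once initial decisions coincide, subsequent private information never reaches the public record, so the cascade can persist even when most later signals contradict it.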
Article
This paper identifies the leadership style, entrepreneurship, as a strategy employed by large organizations to cope with turbulent market environments. First, evidence from the popular music recording industry shows the several means by which the potentially disruptive consequences of entrepreneurship are reduced. Changes in the level of market turbulence since World War II are then explored to show that the scope of entrepreneurship is directly associated with the degree of turbulence. Finally, other organizational settings in which entrepreneurship and turbulence seem to be linked are identified, in order to suggest the generality of the relationship.
Article
Although measures of inequality are increasingly used to compare nations, cities, and other social units, the properties of alternative measures have received little attention in the sociological literature. This paper considers both theoretical and methodological implications of several common measures of inequality. The Gini index is found to satisfy the basic criteria of scale invariance and the principle of transfers, but two other measures--the coefficient of variation and Theil's measure--are usually preferable. While none of these measures is strictly appropriate for interval-level data, valid comparisons can be made in special circumstances. The social welfare function is considered as an alternative approach for developing measures of inequality, and methods of estimation, testing, and decomposition are presented.
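The three measures compared above are easy to state concretely. A minimal sketch using the standard textbook formulas (nothing specific to this paper); all three are zero under perfect equality and invariant to rescaling all values:

```python
import math

def gini(xs):
    # Gini index via the rank-weighted formula on sorted values.
    xs = sorted(xs)
    n, total = len(xs), sum(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

def coeff_variation(xs):
    # Standard deviation divided by the mean (population variance).
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return math.sqrt(var) / mean

def theil(xs):
    # Theil's T index: mean of (x/mean) * log(x/mean) over positive values.
    mean = sum(xs) / len(xs)
    return sum((x / mean) * math.log(x / mean) for x in xs if x > 0) / len(xs)

equal, unequal = [10, 10, 10, 10], [1, 1, 1, 37]
# gini(equal) == 0; gini(unequal), coeff_variation(unequal), theil(unequal) > 0
```

Theil's measure is often preferred for the decomposition the abstract mentions, since total inequality splits exactly into within-group and between-group components.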
Book
The advancement of social theory requires an analytical approach that systematically seeks to explicate the social mechanisms that generate and explain observed associations between events. These essays, written by prominent social scientists, advance criticisms of current trends in social theory and suggest alternative approaches. The mechanism approach calls attention to an intermediary level of analysis in between pure description and story-telling, on the one hand, and grand theorizing and universal social laws, on the other. For social theory to be of use for the working social scientist, it must attain a high level of precision and provide a toolbox from which middle range theories can be constructed.
Article
Beginning in 1997, the price of concert tickets took off and ticket sales declined. From 1996 to 2003, for example, the average concert price increased by 82%, while the CPI increased by 17%. Explanations for price growth include (1) the possible crowding out of the secondary ticket market, (2) rising superstar effects, (3) Baumol and Bowen's cost disease, (4) increased concentration of promoters, and (5) the erosion of complementarities between concerts and album sales because of file sharing and CD copying. The article tentatively concludes that the decline in complementarities is the main cause of the recent surge in concert prices.
Article
The origin of large but rare cascades that are triggered by small initial shocks is a phenomenon that manifests itself as diversely as cultural fads, collective action, the diffusion of norms and innovations, and cascading failures in infrastructure and organizational networks. This paper presents a possible explanation of this phenomenon in terms of a sparse, random network of interacting agents whose decisions are determined by the actions of their neighbors according to a simple threshold rule. Two regimes are identified in which the network is susceptible to very large cascades-herein called global cascades-that occur very rarely. When cascade propagation is limited by the connectivity of the network, a power law distribution of cascade sizes is observed, analogous to the cluster size distribution in standard percolation theory and avalanches in self-organized criticality. But when the network is highly connected, cascade propagation is limited instead by the local stability of the nodes themselves, and the size distribution of cascades is bimodal, implying a more extreme kind of instability that is correspondingly harder to anticipate. In the first regime, where the distribution of network neighbors is highly skewed, it is found that the most connected nodes are far more likely than average nodes to trigger cascades, but not in the second regime. Finally, it is shown that heterogeneity plays an ambiguous role in determining a system's stability: increasingly heterogeneous thresholds make the system more vulnerable to global cascades; but an increasingly heterogeneous degree distribution makes it less vulnerable.
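The threshold model described above can be sketched as a simple simulation on a sparse random graph. The mean degree `k`, threshold `phi`, and the Erdos-Renyi-style graph construction below are illustrative assumptions, not the paper's exact setup:

```python
import random

def threshold_cascade(n=1000, k=4, phi=0.18, seed=3):
    # Build a random graph with mean degree ~k.
    rng = random.Random(seed)
    p = k / (n - 1)
    nbrs = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].append(j)
                nbrs[j].append(i)
    # Activate one random seed node, then apply the threshold rule to a
    # fixed point: a node activates once the fraction of its active
    # neighbors reaches its threshold phi.
    active = [False] * n
    active[rng.randrange(n)] = True
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if not active[i] and nbrs[i]:
                frac = sum(active[j] for j in nbrs[i]) / len(nbrs[i])
                if frac >= phi:
                    active[i] = True
                    changed = True
    return sum(active) / n   # final cascade size as a fraction of nodes

frac_active = threshold_cascade()
# With a low threshold and moderate connectivity a single seed can trigger
# a global cascade; raising phi or the mean degree usually prevents this.
```

Sweeping `phi` and `k` over many random seeds reproduces the two regimes the abstract describes: most shocks die out, but rare initial conditions trigger system-wide cascades.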
Article
The Economics of Superstars sets out to explain the relationship between talent and success in the arts, but there is no agreement about what this relationship is. But whatever its other features may be, superstardom means that market output is concentrated on just a few artists. Concentration always raises the question of efficiency. Superstardom may be inefficient not only because it raises prices for consumers but also because it deprives other artists of the opportunity to practice art. Artists who do not practice art lose psychic income. Because psychic income cannot be transferred from one person to another, the loss of this income may be inefficient. This chapter reviews theories of superstardom and theories about the emergence of stars. The efficiency of superstardom is discussed in terms of effects on consumers and the use of publicity rights by the star. The chapter goes on to deal with the loss of opportunities to practice art that are caused by superstardom and suggests ways to alleviate the problem. Finally the empirical literature that tests the different theories of superstardom is reviewed.
A. B. Krueger, J. Labor Econ. 23, 1 (2005).
K. H. Chung, R. A. K. Cox, Rev. Econ. Stat. 76, 771 (1994).
P. M. Hirsch, Am. J. Sociology 77, 639 (1972).
W. T. Bielby, D. D. Bielby, Am. J. Sociology 99, 1287 (1994).
R. B. Cialdini, N. J. Goldstein, Annual Rev. Psych. 55, 591 (2004).
D. J. Watts, Proc. Natl. Acad. Sci. U.S.A. 99, 5766 (2002).
S. Bikhchandani, D. Hirshleifer, I. Welch, J. Pol. Econ. 100, 992 (1992).
D. Kübler, G. Weizsäcker, Rev. Econ. Stud. 71, 425 (2004).
We thank P. Hausel for developing the MusicLab Web site; J. Booher-Jennings for design work; S. Hasker for helpful conversations; and A. Cohen, B. Thomas, and D. Arnold at Bolt Media for their assistance in recruiting participants. Supported in part by an NSF Graduate Research Fellowship (to M.J.S.), NSF grants SES-0094162 and SES-0339023, the McDonnell Foundation, and Legg Mason Funds.
A. De Vany, Hollywood Economics.