Two Faces of Decomposability in Search:
Evidence from the Recorded Music Industry 1995-2015
Sungyong Chang
London Business School
schang@london.edu
June 6, 2020
Abstract
We propose that decomposability may generate a trade-off across different stages of search.
We compare (1) decomposed search, the process of searching by producing a decomposed mod-
ule, and (2) integrated search, the process of searching by producing a full-scale product. In the
variation generation stage, decomposability can allow firms to experiment with more alterna-
tives at the same time than an integrated search. However, in the selection and retention stages,
a decomposed search may be more vulnerable to imperfect evaluation than an integrated search.
It may increase the chance of missing out on promising alternatives after the first evaluation
because the low cost of a decomposed search makes firms less committed to each alternative.
We test our theory with a unique empirical setting, the recorded music industry, where sin-
gles (i.e., decomposed products) and albums (i.e., integrated products) have coexisted since the
early twentieth century. In the variation generation stage, single-producing firms experiment
with 35.22% more new artists than album-only-producing firms. In the selection and retention
stage, single-producing firms are 69.57% more likely to neglect top-tier artists who failed in
their first releases because single-producing firms have a higher performance target (i.e., lower
commitment) than album-only-producing firms.
Keywords: Decomposability, Evolutionary Perspective on Search, Behavioral Theory of the
Firm, Alternative Evaluation
Acknowledgement: I am grateful to Bruce Kogut, Stephan Meier, Evan Rawley, Vanessa Burbano, Bo Cowill,
Sendil Ethiraj, Donal Crilly, Olenka Kacperczyk, Christina Fang, Richard R. Nelson, Damon Phillips, Kylie Jiwon
Hwang, Jenna Song, Daniel Keum, and seminar participants at Boston University, Columbia Business School, Harvard
Business School, HEC Paris, HKUST, INSEAD, London Business School, New York University, Ohio State University,
University of Connecticut, University of Minnesota, University of North Carolina Chapel Hill, University of Wisconsin-
Madison, University of Washington for helpful comments and suggestions. I gratefully acknowledge funding support
from the Jerome A. Chazen Institute for Global Business, Center for Japanese Economy and Business, and Sanford
Bernstein & Co. Center for Leadership and Ethics at Columbia Business School.
1 Introduction
Simon’s (1962) work on the architecture of complexity provides building blocks for analyzing
how the properties of complex systems affect the discovery of promising alternatives (e.g., resources,
technologies, or products). He emphasizes that one of the fundamental features of complex systems
is decomposability, the fact that patterns of interactions among elements of a complex system are
not diffuse but tend to be tightly clustered into nearly isolated subsets of interactions (i.e., modules).
Subsequent theoretical studies have shown that decomposability helps firms discover a promising
option by facilitating module-level experimentations (i.e., producing more variations) (Kogut and
Bowman, 1995, Baldwin and Clark, 2000, Marengo, Dosi, Legrenzi, and Pasquali, 2000, Ethiraj
and Levinthal, 2004, Fang and Kim, 2018). However, as Knudsen and Levinthal (2007) note, a
critical facet has been largely underexplored in this tradition, namely, how firms select and retain the promising options among those they have experimented with. To advance our understanding of this topic, we
explore the roles of decomposability in the discovery of new promising alternatives in the different
stages of the search. We argue that decomposability may facilitate experimentation in the variation
generation stage but can decrease the efficacy of selection and retention.
The evolutionary perspective on search (e.g., Zollo and Winter, 2002, Knudsen and Levinthal,
2007, Posen and Levinthal, 2012) provides useful insights into how decomposability can create a
trade-off across different stages of the discovery of new alternatives. First, some possible alternatives are experimented with in the variation generation stage. The realizations from these draws are then evaluated, and some of the highly evaluated alternatives will be selected and retained. Across
the different stages, we compare the two modes of search: (1) the case in which firms experiment
with and evaluate an alternative by producing a decomposed module (i.e., decomposed search) and
(2) the case in which firms experiment with and evaluate an alternative by producing a full-scale
product (i.e., integrated search). For example, in the music industry, firms can experiment with a
new artist and evaluate the talent of the new artist either by producing a single (i.e., a release of
a single song – a decomposed module) or producing an album (i.e., a release of multiple songs – a
full-scale product).
First, we argue that a decomposed search may be beneficial during the variation generation
stage. March and Simon (1958) note that as choice sets are not available ex-ante to firms but must
be constructed, experimenting with new alternatives is the first step of a search. We emphasize that
a decomposed search not only lowers the cost of generating an alternative but also enables parallel
experimentations (Marengo et al., 2000, Ethiraj and Levinthal, 2004) because a decomposed search
helps managers focus on a subset (i.e., a smaller number of attributes) of the whole search space
(i.e., all possible attributes). If firms experiment with more new alternatives, they will benefit because doing so increases the chance of discovering promising alternatives with higher upside potential.
However, the gains from a decomposed search may hurt the discovery of promising alternatives
in the selection and retention stage. Knudsen and Levinthal (2007) note that the evaluation of
alternatives is likely to be imperfect. A promising option that faces an unlucky failure in the first
evaluation may be mistakenly considered unpromising by firms and lose future chances with these firms. Under the condition of noise in evaluation (e.g., Caves, 2000, Knudsen and Levinthal, 2007, Fang, Kim, and Milliken, 2014), it is challenging for firms to infer an alternative's true quality from a
one-shot experimentation result. We argue that a decomposed search will be more likely to make
such errors than an integrated search because a decomposed search makes firms less committed to
each alternative. More precisely, as Simon (1955) notes, if the evaluated performance of an alter-
native satisfies a performance target (i.e., a minimum performance criterion), the firm will select
and retain the option for future use. Otherwise, firms will search for other alternatives (Cyert and
March, 1963, Greve, 2003). We argue that a decomposed search may increase the performance
target because the low cost of a decomposed search decreases commitment to each alternative, resulting in early termination of investment in each alternative (e.g., Staw, 1976, Guler, 2007, Wong and Kwong, 2018).
We demonstrate the dual roles of decomposability in search with the recorded music industry
where (1) artists’ talent (i.e., each artist is an alternative in our setting) is the most important source
of creativity and profit and (2) decomposed products (i.e., singles) and integrated products (i.e.,
albums) have coexisted since the early 20th century. We collect and match multiple databases: Mu-
sicBrainz, AcousticBrainz, Spotify APIs, and Discogs. The sample covers 114,488 artists, 1,026,309
songs, and 9,667 music firms in 29 countries from 1995 to 2015. The results from OLS models,
instrumental variable estimators, and matching estimators support our prediction on the two faces
of decomposability in the discovery of new talent. First, single-producing firms facilitate parallel
experimentation in the variation generation stage. Single-producing firms experiment with more
new artists, some of whom may turn out to be talented artists. However, decomposability has a
weakness in the later stage of the search process: selection and retention. Single-producing firms
are more likely to miss out on talented artists who experience failure with their first releases due
to an increased performance target for a decomposed search.
This study contributes to the core tenets of the behavioral theory of the firm. Our theory and
findings have useful implications regarding the roles of decomposability in search by bridging the
two distinct strands of the behavioral theory of the firm: the literature on the architecture of com-
plexity (e.g., Simon, 1962, Baldwin and Clark, 2000, Ethiraj et al., 2008) and the literature on
adaptive search (e.g., Cyert and March, 1963, Knudsen and Levinthal, 2007, Fang, Kim, and Milliken,
2014). We offer a nuanced theory on the trade-offs that decomposability-enabled experimentations
may generate. First, starting with the work of Nelson (1961), strategy scholars have examined the
role of parallel experimentations in the discovery of new solutions (e.g., Eggers, 2012, Eggers and
Green, 2012, Posen, Martignoni, and Levinthal, 2012). Our theory highlights that decomposability has
not been explored as a characterization of parallel experimentation with new alternatives such as
new resources, technologies, assets, or workers.
Second, this study advances our understanding of the role of noise (i.e., imperfect evaluation) in
complex problem-solving. As Zollo and Winter (2002) note, a search is primarily carried out through
efforts aimed at generating the necessary range of new options as well as selecting the most appro-
priate ones. In selecting appropriate options, while prior studies have examined heterogeneity in
forecasting ability and its origin (e.g., Makadok and Walker, 2000, Adner and Helfat, 2003, Denrell
and Fang, 2010), our study views a decomposed search as a heuristic to complement forecasting
abilities. Theoretically, we pinpoint a hidden drawback of decomposed search (i.e., the omission
error in giving a second chance to the existing option). Additionally, the experimentation-oriented
management practices (e.g., design thinking or lean startup) have recently become popular. As
many such practices result from technological changes and innovations (e.g., unbundling, exter-
nally available software development kits, cloud computing) enabling decomposed search, it is the
appropriate timing to examine how and when this type of experimentation is effective.
2 Theory and Hypotheses
Studies on the behavioral theory of the firm have long recognized the multi-phased nature of
the innovation process (e.g., Zollo and Winter, 2002, Knudsen and Levinthal, 2007). This process
starts with a search for new options (i.e., variation generation), followed by an evaluation of those
new options, then concludes with selection and retention (e.g., Keum and See, 2019). This perspec-
tive on search provides important insights into how decomposability can create trade-offs across
different phases of the innovation process.
2.1 Decomposability in the variation generation stage
Since Simon (1955) characterized much of the discovery process as a sequential search process,
management scholars have explored the problem that arises with the discovery of new alternatives.
The optimal solution to the discovery problem draws on the “bandit” literature (e.g., Kogut and
Kulatilaka, 1994, Denrell and March, 2001, Posen and Levinthal, 2012, Lee and Puranam, 2016).
This tradition describes experimentation as a trial-and-error process (e.g., Kulkarni and Simon,
1990, Thomke, von Hippel, and Franke, 1998). Through experimentation, firms can reveal informa-
tion about new alternatives; if one or more new options outperform existing options, the new ones
will replace the old. As Leiponen and Helfat (2010) note, the likelihood of obtaining a favorable draw
from a distribution of payoffs increases as the number of draws increases. Therefore, the benefits
of experimentation are derived from information on whether new choices have upside potential.
We argue that a decomposed search can be beneficial in the variation generation stage. First, a
decomposed search greatly reduces the costs of experimenting with new alternatives because exper-
imenting with a partial product (e.g., a module) generates useful information on the potential of
an alternative (Baldwin and Clark, 2000, Ethiraj and Levinthal, 2004). With a decomposed search, it is possible to test an alternative without producing the whole system. The low cost of experimentation allows firms to experiment with more new options: a decomposed search slashes costs, freeing up experimentation capacity and making possible what-if experiments that, in the past, were either prohibitively expensive or nearly impossible to carry out. In contrast, the
high cost of experimentation has long put a damper on companies’ attempts to test a new alter-
native (Thomke, 2003). Empirical research has demonstrated that the low cost of experimentation
facilitates testing with more new products, technologies, or startups. For example, Ewens, Nanda,
and Rhodes-Kropf (2018) provide evidence that the low cost of experimentation encourages venture
capitalists to invest in more new startups, in an investment strategy called “spray and pray.”
Second, a decomposed search offers the advantage of the parallelism of experimentations (Bald-
win and Clark, 2000, Marengo et al., 2000, Loch et al., 2001). As Nelson (1961, p. 353) emphasizes,
with parallel experimentations, firms can benefit from information acquired by engaging in multiple
alternatives simultaneously, rather than sequentially. As alternatives constitute different approaches
to solving the same problem, the number of alternatives that a firm would experiment with simul-
taneously is an important factor in search. Loch, Terwiesch, and Thomke (2001) cite computer simulation as an exemplary technology that facilitates less costly and faster experimentation, yielding more promising solution concepts that could not have been tested with costlier and slower technologies.
In sum, decomposability reduces the cost of experimenting with new alternatives and facilitates
the parallelism of experimentations. Therefore, we hypothesize the following:
Hypothesis 1. Firms that implement a decomposed search will experiment with more new
alternatives at the same time than firms that implement an integrated search.
2.2 Decomposability in the selection and retention stages: Missing out on promis-
ing options after evaluation
We argue that under imperfect evaluation, the gains from a decomposed search in the variation
generation stage may be offset by a disadvantage in the selection and retention stages.
Nelson (1961) notes that the problem of choosing among alternatives is a difficult one, and it is
easy to make choices which, ex-post, turn out to be the wrong ones in the presence of noise. The
behavioral models of selection can be distinguished from the conventional economic models of selec-
tion, in part, because the evaluation of searched alternatives is likely to be imperfect (e.g., Knudsen
and Levinthal, 2007, Fang, Kim, and Milliken, 2014). In the presence of noise, firms may erroneously reject a superior alternative. For example, in our research setting, talented artists who face
an unlucky failure from their first releases may mistakenly be considered untalented by firms and
lose future production opportunities.
As Simon (1955) noted, if the evaluated performance of an option satisfies a minimum performance criterion, the firm will select and retain the option for future use. Otherwise, firms will search for other alternatives. Subsequent literature has termed such a minimum criterion the performance target or aspiration level (Cyert and March, 1963, Greve, 2003, Lant and Shapira, 2008). Since then, a large body of behavioral research has explored the consequences of falling short of or exceeding the performance target. For example, Keum and Eggers
(2018) emphasize that the performance target has an allocative role in influencing the acquisition
of new resources and alternatives.
Following this tradition, we take a step forward to examine how a performance target is de-
termined. We argue that decomposability is an important determinant of the performance-target-
setting process. Related to the performance target, we focus on a type of omission error: missing
out on promising alternatives after the first evaluation of those alternatives. This type of error
occurs when the alternative faces an unlucky draw in its first evaluation, and the evaluated perfor-
mance does not satisfy the performance target. In the presence of noise, no matter whether firms
experiment with a decomposed module or an integrated product, this type of omission error is inevitable. We argue that a decomposed search may increase the performance target for
selection and retention, resulting in a higher chance of missing out on promising alternatives that
were evaluated as unpromising ones in their first evaluations (i.e., not giving a second evaluation
chance to a promising alternative).
In the presence of noise in performance evaluation, if the performance target becomes higher,
firms will be more likely to miss out on promising alternatives after the first evaluation. Then
why does the decomposed search lead to a higher performance target? Our explanation is based
on a behavioral bias of the sequential search process. One primary way of explaining behavioral
biases is to point to limitations in information processing (e.g., Tversky and Kahneman, 1974). A
type of limitation in information processing that has garnered attention from scholars of behav-
ioral decision-making is the escalation of commitment in sequential investments (e.g., Staw, 1976,
Brockner 1992, Wong and Kwong, 2018). The escalation of commitment is a behavioral bias in
which an individual or a firm facing negative outcomes from an investment nevertheless continues
the behavior instead of terminating investment.
From the escalation of commitment perspective, strategy scholars have explored various topics,
including commercial lending, corporate risk-taking, and R&D policy (e.g., McNamara, Moon, and
Bromiley, 2002, Guler, 2007, Eggers, 2012). These scholars commonly note that the advantage of
sequential investments critically relies on investors’ effectiveness in terminating unsuccessful in-
vestments based on updated information. These scholars, however, have documented an opposite
pattern wherein decision-makers often continue their commitment to the prior decision despite the
presence of negative feedback. For example, Guler (2007) documents the escalation of commitment
in the venture capital firms’ staged investment decision, in which venture capital firms would be
better off investing fewer rounds in each venture.
We apply the escalation of commitment logic in our theoretical setup. We propose that an in-
tegrated search, which requires a higher resource commitment, may result in a lower performance
target for subsequent investment (i.e., selection and retention) compared to a decomposed search.
In contrast, as a decomposed search entails a lower resource commitment, it may lead to a higher
performance target than an integrated search. Therefore, even though two alternatives show the
same evaluated performance in their first evaluation, if one alternative was tried out with a decom-
posed search, it would be less likely to be given a second evaluation chance than the case of an
integrated search.
While prior work has focused on the negative effects of the escalation of commitment in sequential decision making, we highlight that escalation of commitment may have a positive influence on discovering new promising alternatives: committed firms are less likely to withhold a second evaluation chance from a promising alternative. Figure 1 visualizes this mechanism by comparing a
decomposed search and an integrated search. In Figure 1, the true quality of an alternative is 5,
and the evaluated performance is a random draw from the normal distribution, N(5,1). Let us
assume that 5 is high enough to be a promising alternative. A decomposed search has a higher
performance target (i.e., lower commitment); for example, in Panel A, the performance target of the decomposed search is 4, and the chance of missing out on a promising option with quality 5 is 15.87%. However, in Panel B, an integrated search has a lower performance target (i.e., higher commitment). As the performance target is 3, the chance of missing out on a promising option with quality 5 is 2.28%, which is lower than that of a decomposed search.
-Insert Figure 1 about here-
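The probabilities in Figure 1 follow directly from the assumed evaluation noise. The short sketch below is a minimal illustration, assuming the normal noise N(5, 1) described above; the function and variable names are ours, not the paper's.

from scipy.stats import norm

TRUE_QUALITY = 5.0  # true quality of the promising alternative
NOISE_SD = 1.0      # evaluated performance ~ N(true quality, 1)

def miss_probability(performance_target, quality=TRUE_QUALITY, sd=NOISE_SD):
    """Chance that one noisy evaluation falls below the performance target."""
    return norm.cdf(performance_target, loc=quality, scale=sd)

# Panel A: decomposed search, higher target (lower commitment)
print(f"decomposed search, target 4: {miss_probability(4):.4f}")  # ~0.1587
# Panel B: integrated search, lower target (higher commitment)
print(f"integrated search, target 3: {miss_probability(3):.4f}")  # ~0.0228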
In sum, we predict that under imperfect evaluation, a decomposed search may increase the
chance of missing out on promising alternatives after their first evaluation. Thus, we hypothesize
the following:
Hypothesis 2. Firms that implement a decomposed search will be more likely to miss
out on promising alternatives after their first evaluation than firms that implement an
integrated search.
3 Empirical Context: The Recorded Music Industry
3.1 The discovery of new artists in the recorded music industry
Music firms are called record labels or, more simply, labels. In this study, we focus on music
production firms that conduct the talent scouting and development of new artists (i.e., a type of
alternative), called “artists and repertoire (A&R),” and that maintain contracts with recording
artists or bands. The International Federation of the Phonographic Industry (IFPI, 2015, p. 9)
describes music as an investment-intensive business, as the first major activity that music firms
have traditionally undertaken is the discovery of new artists. Indeed, music firms’ investment in
A&R and marketing in 2014 totaled more than $4.3 billion, which accounts for more than 10% of
global music sales (IFPI, 2015).
According to the IFPI, it takes at least $500,000 to experiment with a new artist. Common
features of contracts signed with emerging artists include the payment of advances, recording costs,
tour support, video production, and marketing and promotion costs, as shown in Table 1. In com-
parison with music firms, online music providers such as Spotify, iTunes, YouTube, and SoundCloud spend no money on upfront investment in talent discovery. This indicates that music firms remain the largest upfront investors in artists’
careers.
-Insert Table 1 about here-
Many would-be artists seek to make their music available to consumers, for example, by sub-
mitting “demo tapes” to music firms. These potential music products differ substantially both in
their ex-ante promise (how broadly appealing the artist would be if their work were produced)
and in their ex-post success (how successful they become) (Benner and Waldfogel, 2016). Even
artists cannot accurately assess their own talent (Caves, 2001, Tervio, 2009). For example, Elvis
Presley is one of the most significant cultural icons of the 20th century and is referred to as “the
King of Rock and Roll.” Guralnick (2012) notes that Elvis did not understand his talent, and he
chose Sun Records, a small independent music firm in the 1950s, in the hope of being discovered.
After he finished his audition with Sun Records, the CEO, Sam Phillips, and his secretary wrote
Elvis’ name, and the secretary added her own commentary: “Good ballad singer. Hold.” Today,
Sun Records is known as the music firm where a music genre, rock and roll, was born because it
discovered Elvis Presley. This history clearly shows that it is challenging for firms to evaluate
artists’ talent ex-ante.
In addition, even after having a chance to produce and release music, artists may face unlucky
commercial results with their first release. The talent of an artist partially determines the odds of
the success of their music. After failures with their first releases, many artists have their contracts terminated and are forced to seek new opportunities with other music firms. Our
data analysis indicates that, from 1960-2015, only 29% of artists had a second production chance
with the same firm. The other 71% had to leave their first music firm. Some of the artists who
left the first firm became top-notch, talented artists. One of our key informants, from Sony Music
Entertainment, introduced Mumford and Sons, a music group that experienced failure with their
first release, as an exemplary case in which Sony missed out on talented artists after an initial
failure. The group initially signed with Chess Records (a subsidiary of Sony Music Entertainment)
and debuted in 2009 with the single “Mumford & Sons.” As executives at Chess Records were dis-
appointed by the commercial outcome of the group’s first single, the company did not extend the
contract with Mumford and Sons. The group had to leave, but, luckily, they were given a second
chance with another company, Island Records, a subsidiary of Universal Music Group. At Island
Records, Mumford and Sons made popular songs like “I Will Wait,” and they won a Grammy Award
in 2014. This case is considered one of Sony Music’s biggest mistakes in the 2000s.
3.2 Singles vs. albums and the advent of iTunes
A release is a broad term that covers two different forms: singles or albums. Music firms com-
monly classify a release with a small number of songs as a single and a release with a large number
of songs as an album (which has, on average, twelve songs in our sample). If a music firm produces
some of its work as a single (i.e., a decomposed product) instead of an album (i.e., an integrated
product), the production and marketing cost is smaller. In addition, another key informant from
Sony Music Entertainment states that producing albums requires a greater cost and commitment
than producing singles: “Albums are much more expensive. More studio time, more writing, more
production hours, more involvement of producers, more people’s input [are] needed on an album
vs. a single. [Albums] also require far more hours in the recording and mixing studios, more hours
of mastering. It comes out to [albums being] at least ten times more expensive than singles in terms
of music production costs.”
Many new artists debuted with singles. Elvis Presley made five singles with Sun Records. An-
other good example is the discography of Aqua, a Danish-Norwegian dance-pop band. This band
debuted with a single and made singles until they had an international breakthrough with their
single, “Barbie Girl,” in 1997. More recently, “Cheerleader,” a song recorded by Jamaican singer
OMI, was released as a single by OUFAH, an independent music firm. The song first appeared on
the Billboard Hot 100 in the United States in early May 2015 and ranked first on the music chart
in 26 countries.
The artistic and commercial importance of the single (as compared to the album) has varied
over time and across countries. Britt (1989) notes that “the single enjoyed its peak in the 1960s
[during] the rise of musical phenomena like the British Invasion, Motown, and R&B.” However,
starting from the mid-sixties, the album became a greater focus as artists created albums of coher-
ent, themed songs. Bob Dylan’s first album, Bob Dylan, is an exemplary case. As a result, singles
generally received increasingly less attention in the United States, Japan, and South Korea com-
pared to albums. However, in other countries like the UK, the Netherlands, and Australia, singles
survived as a distinct release format; singles continued to be produced and sold, and they maintained their popularity in these countries.
Firms’ decision making between two different forms (singles vs. albums) was greatly affected by
digitization. In particular, digitization shook the music industry in the late 1990s with the MP3 for-
mat, introduced in 1993. Many people illegally downloaded songs through file-sharing websites like
Napster. Apple opened the iTunes Music Store (hereafter “iTunes”), the first online music store, in
2003 as a way to solve the piracy problem. Steve Jobs succeeded in convincing the five major labels
to offer their content through iTunes, which provided a market for songs as well as albums. The
impact of iTunes was great: it accounted for an 88% share of the US online music market in the late 2000s. The popularity of singles increased after the introduction of
iTunes. This change brought about by iTunes increased music firms’ incentive to produce singles.
In 2004, only 19% of US music firms produced at least one single, but this proportion increased to
33% in 2011.
Beginning in 2004, iTunes became available in many countries other than the US. Appendix B
summarizes the history of iTunes’ market entry into foreign countries. The appendix shows that
the timing of iTunes’ introduction varies across different countries. Apple attained great success in
the world music market. On the 10th of October 2012, the iTunes Store was reported to have a
64% share of the global online music market. For example, the entry of Apple’s iTunes into Japan
in August 2005 received a great deal of attention and was successful. The popularity of singles in-
creased significantly after the introduction of iTunes in Japan. The proportion of single-producing
firms in Japan increased from 35% in 2004 to 74% in 2011. We use iTunes’ staggered entries into
29 countries as an instrumental variable to mitigate a potential concern regarding endogeneity between single production and the talent level of artists. The endogeneity concern regarding single production, and how we utilize digitization as a shock, are explained in the following methodology section.
4 Empirical Strategy
4.1 Sample
The sample consists of all music production firms reported on the MusicBrainz database from
1995 to 2015. The sample includes only music production firms because other types of music firms
lack A&R executives or teams, who search for and recruit new artists. We choose 29 countries
that have more than 200 unique music production firms in the MusicBrainz database. We exclude
firm-years in which firm i does not release any song in year t. The final sample consists of 29,317
firm-years associated with 9,667 firms; the panel is unbalanced.
4.2 Variables
4.2.1 Independent variables
When we test the first and third hypotheses, the unit of analysis is the firm-year observation.
The independent variable is a dummy, Dummy_Firm_Single_it, that is equal to 1 when at least one release of firm i is released as a single in year t and is otherwise 0. An alternative measure of this variable, the proportion of singles to all releases of firm i in year t, is also used to check the robustness of the results in the additional analyses. When we test the second hypothesis, the unit of analysis is the artist. We use a dummy variable, Dummy_Artist_Single_i, that is equal to one if the first release of artist i is produced as a single and is otherwise 0.
4.2.2 Dependent variables
The first dependent variable measures the number of new artists (Number_Artists_it) in music firm i in year t. Second, we measure the case of missing out on talented artists after the failure of their first releases with the dummy variable, Miss_Talented_Artist_i, which is equal to 1 if the
artist is part of the top 20% in lifetime popularity but experienced failure with their first release
(which was not included in the top 10% of popular songs) and then moved to another firm to
produce their subsequent release. We use lifetime popularity scores from the Spotify Echo Nest
API and song popularity scores from the Spotify Web API. We use unique international standard
recording code (ISRC) IDs, song names, and artists’ names to match songs between the Spotify Web API
and the MusicBrainz data. As shown in Figure 2, both lifetime popularity and song popularity have
skewed distributions. The cutoffs (i.e., top 20% lifetime popularity and top 10% song popularity)
are based on our interview with a former EMI producer, who notes that only 10% of releases break
even, and these songs are made mostly by top 20% artists. We further test the robustness of the
results to different cutoffs and find that the results are qualitatively identical.
-Insert Figure 2 about here-
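As an illustration of how this dependent variable can be constructed, the sketch below assumes an artist-level data frame with hypothetical column names (the actual schema of the matched MusicBrainz/Spotify data may differ); the top 20% and top 10% cutoffs are the ones described above, with the song cutoff approximated within this table.

import pandas as pd

artists = pd.read_csv("artists.csv")  # hypothetical artist-level file from the matched databases

def flag_missed_talent(df: pd.DataFrame) -> pd.Series:
    # Top 20% of artists by Spotify lifetime popularity.
    talent_cutoff = df["lifetime_popularity"].quantile(0.80)
    # Top 10% of songs by Spotify song popularity (approximated here over first-release songs).
    hit_cutoff = df["first_release_best_song_pop"].quantile(0.90)

    talented = df["lifetime_popularity"] >= talent_cutoff
    first_release_failed = df["first_release_best_song_pop"] < hit_cutoff
    moved_to_another_firm = df["second_firm_id"].notna() & (
        df["second_firm_id"] != df["first_firm_id"]
    )
    return (talented & first_release_failed & moved_to_another_firm).astype(int)

artists["miss_talented_artist"] = flag_missed_talent(artists)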
4.2.3 Control variables
We control for (1) the number of songs (lag 1 year) as a firm size proxy and (2) the mean
number of artists’ prior releases (lag 1 year) as a firm status proxy. We also include (3) a dummy
that takes the value of one if firm i produced at least one top-5% song in the previous year, (4) the Herfindahl index for music genres in firm i, (5) a dummy for entrepreneurial firms (firm age ≤ 5), (6) year dummies, (7) genre complexity (following Piazzai and Wijnberg, 2019; the details
are in Appendix C), (8) genre dummies, and (9) country dummies. We also use these variables to
match similar firms before running the main stage regressions.
4.3 Empirical specification
4.3.1 Baseline OLS models and endogeneity issues
We use OLS regressions as the baseline tests. We add a vector of control variables that might
influence music firms’ decisions on producing singles. Thus, our initial specification is
$$Number\_Artists_{it} = \beta_0 + \beta_1 Dummy\_Firm\_Single_{it} + \beta_2 X_{it} + C_i + G_{it} + T_t + F_i + e_{it}, \quad (1)$$
$$Miss\_Talented\_Artist_{i} = \beta_0 + \beta_1 Dummy\_Artist\_Single_{i} + \beta_2 X_{it} + C_i + G_{it} + T_t + F_i + e_{it}, \quad (2)$$
where $i$ indexes firms, $t$ indexes calendar years, $X_{it}$ is a set of observable characteristics of the firm described above as control variables, $C_i$ is the country fixed effect, $G_{it}$ is the genre fixed effect, $T_t$ is the year fixed effect, and $F_i$ is the firm fixed effect. Standard errors are clustered at the firm level.
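For concreteness, the baseline specification could be estimated as below. This is a minimal sketch, not the paper's actual code: the data frame, column names, and the choice to enter all fixed effects as dummies are illustrative assumptions; standard errors are clustered at the firm level as in the text.

import pandas as pd
import statsmodels.formula.api as smf

firm_years = pd.read_csv("firm_years.csv")  # hypothetical firm-year panel

ols_fit = smf.ols(
    "number_new_artists ~ dummy_firm_single + n_songs_lag + mean_prior_releases_lag"
    " + top5_song_lag + genre_hhi + entrepreneurial + genre_complexity"
    " + C(year) + C(genre) + C(country) + C(firm_id)",
    data=firm_years,
).fit(cov_type="cluster", cov_kwds={"groups": firm_years["firm_id"]})

print(ols_fit.params["dummy_firm_single"])  # estimate of beta_1 in equation (1)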
Whereas equations (1) and (2) control for the correlation between producing singles and the
control variables, concerns may still arise about selection based on omitted variables (Hamilton
and Nickerson, 2003, Rawley and Simcoe, 2013, Semadeni, Withers, and Certo, 2014). In an ideal
experimental design, we would randomly assign single production status and measure the ex-post
difference in the number of new artists. In practice, we observe changes in both the production of
singles and the number of new artists. One important potential omitted variable is a firm’s intention
to discover new talent. If a music firm’s desire to discover new talent is greater than that of other
music firms, the firm may be more likely to produce singles. At the same time, it may experiment
with and subsequently discover more new talent.
4.3.2 Instrumental variable estimators (2SLS)
We attempt to address this endogeneity and omitted variable issue by utilizing two instrumental
variables: the staggered introduction of iTunes and the country-level proportion of albums to all
releases in the previous year. The first instrumental variable is a dummy variable that takes a value
of one if iTunes was introduced in year t in country c and 0 otherwise. The staggered market entry
of iTunes into 29 countries offers an exogenous variation. The introduction of iTunes increases the
commercial importance of the single significantly because music is sold in the form of individual
songs that were previously only or primarily sold as parts of albums. As the introduction timing
of iTunes and other digital services is mainly determined by the difference in intellectual property
regimes between the US and local countries, rather than due to differences in countries’ talent
discoveries, these variables are less correlated with factors in the error term that influence music
firms’ decisions on experimenting with new artists.
The second instrumental variable is the country-level proportion of albums to all releases in
the previous year. As we noted earlier, in some countries, singles survived as a different format of
music release, even in the 1990s. For example, in 1995, 47% of UK music releases were produced as
singles. In contrast, at that time, only 22% of US releases were singles. In addition, South Korea
is an extreme example; no music in South Korea was produced as a single in 1995. We use this
variation to complement the first instrumental variable. This instrument captures the differences
in the popularity of albums (compared to singles) across countries, complementing the dummy
variable for iTunes’ staggered entry.
In the first stage regression, we estimate the following equation: $Dummy\_Firm\_Single_{it} = \beta_{IV} Z_{it} + \mu_{it}$, where $Z_{it}$ is a set of firm characteristics, the other fixed effects, and the instrumental variables, and $\mu_{it}$ is an error term. Then, we estimate the second stage OLS regression models:
$$Number\_Artists_{it} = \beta_0 + \beta_1 \widehat{Dummy\_Firm\_Single}_{it} + \beta_2 X_{it} + C_i + G_{it} + T_t + F_i + [\eta_{it} + \beta_1(Dummy\_Firm\_Single_{it} - \widehat{Dummy\_Firm\_Single}_{it})],$$
$$Miss\_Talented\_Artist_{i} = \beta_0 + \beta_1 \widehat{Dummy\_Artist\_Single}_{i} + \beta_2 X_{it} + C_i + G_{it} + T_t + F_i + [\eta_{it} + \beta_1(Dummy\_Artist\_Single_{i} - \widehat{Dummy\_Artist\_Single}_{i})].$$
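A sketch of the corresponding 2SLS estimation is shown below, using the linearmodels package; the formula marks dummy_firm_single as endogenous and uses the two country-level instruments. The data frame, variable names, and the exact set of exogenous controls are assumptions for illustration only.

import pandas as pd
from linearmodels.iv import IV2SLS

firm_years = pd.read_csv("firm_years.csv")  # hypothetical firm-year panel

iv_fit = IV2SLS.from_formula(
    "number_new_artists ~ 1 + n_songs_lag + mean_prior_releases_lag + top5_song_lag"
    " + genre_hhi + entrepreneurial + C(year) + C(country)"
    " + [dummy_firm_single ~ itunes_entry + country_album_share_lag]",
    data=firm_years,
).fit(cov_type="clustered", clusters=firm_years["firm_id"])

print(iv_fit.params["dummy_firm_single"])  # second-stage estimate of beta_1
print(iv_fit.sargan)                       # overidentification test (two instruments, one endogenous regressor)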
An ideal instrumental variable would generate firm-level variation in the incentives to produce
singles, thereby allowing us to control for market-specific trends in the discovery of new talent.
Unfortunately, we could not identify any firm-level instruments; therefore, our identification strategy
is vulnerable to omitted variables that are correlated with both our country-level instruments and
firm-level change in the discovery of new talent. However, we expect any resulting bias to be small
because our specification controls for a number of time-varying observables at the firm level.
4.3.3 Propensity score matching
To complement the 2SLS models, we use a matching estimator: propensity score matching.
Matching estimators control for selection bias by creating a matched sample of treatment and
control observations that are similar on observable characteristics (Rosenbaum and Rubin,
1983). To implement propensity score matching, we estimate a probit model of the firm’s decision
to produce singles and use fitted values from that model as estimates of the propensity score. We
then trim extreme values and firm-year observations off the common support of the propensity
score distribution to obtain our matched sample.
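A minimal sketch of this step, assuming the same hypothetical firm-year panel and covariate names used above: a probit of the single-production decision yields the propensity score, and observations outside the common support are dropped.

import pandas as pd
import statsmodels.api as sm

firm_years = pd.read_csv("firm_years.csv")  # hypothetical firm-year panel
covariates = ["n_songs_lag", "mean_prior_releases_lag", "top5_song_lag",
              "genre_hhi", "entrepreneurial", "genre_complexity"]

X = sm.add_constant(firm_years[covariates])
probit_fit = sm.Probit(firm_years["dummy_firm_single"], X).fit(disp=0)
firm_years["pscore"] = probit_fit.predict(X)

# Keep only observations on the common support of the propensity score.
treated = firm_years["dummy_firm_single"] == 1
lower = max(firm_years.loc[treated, "pscore"].min(), firm_years.loc[~treated, "pscore"].min())
upper = min(firm_years.loc[treated, "pscore"].max(), firm_years.loc[~treated, "pscore"].max())
matched = firm_years[firm_years["pscore"].between(lower, upper)]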
4.4 Sample Statistics
Table 2 reports the descriptive statistics on all the variables at the firm-year level. First, the de-
scriptive statistics for the independent variables show that the proportion of firm-year observations
that produce at least one single is 39.3%, and the proportion of artists whose first release is a single
is 71.7%. Second, we have the dependent variables: the number of new artists and a dummy variable that takes the value of one if firm i misses out on a top 20% artist whose first
release was a single. The average number of new artists is 1.178, and the variance has a large value
(2.001). In addition, we report the descriptive statistics on the two instrumental variables. First,
the proportion of firm-year observations for which iTunes had been introduced in country c is 0.587: 58.7% of firm-year observations fall after the introduction of iTunes, and the other 41.3% fall before it. Second, the average of the second instrumental variable, the
country-level proportion of albums to all releases in the previous year, is 0.736. Finally, correlations
among the variables are provided in Appendix D.
-Insert Table 2 about here-
5 Results
5.1 Does producing singles increase the number of new artists in the firm?
We test whether producing singles increases the number of new artists in the music firm. We
compare single-producing firms and album-only-producing firms. Table 3 shows the results from
the tests on the impact of producing singles on the number of new artists in the firm. We estimate
five different versions of the same equation: OLS with and without the control variables, propensity
score weighted regression with and without the control variables, and the instrumental variables
analysis (2SLS).
Column 1 reports the estimates from a simple OLS specification without the control variables.
We find a strong correlation between producing singles and the number of new artists. Specifically,
the number of new artists in the firm increases by 0.62 in firms that produce some of their work
as a single(s) compared to those that produce their work only as albums. The T statistic of the coefficient is 13.07, which means the p-value is close to 0. Next, Column 2 shows the results from the same model after adding the control variables and other effects: the coefficient (0.4147) is smaller than the coefficient in Column 1 but remains positive and significant; the T statistic of the coefficient is 13.25, meaning the p-value is small.
Columns 3 and 4 present estimates from the same model after matching to control for observ-
able differences between single-producing firms and non-single-producing firms. The coefficients
from these matching models are 0.4914 and 0.4761, and their T statistics are 12.52 and 12.58,
respectively. In Column 5, we present estimates from the 2SLS model, which controls for the po-
tential endogeneity of producing singles by using the two instrumental variables. The first-stage
relationship between the introduction of iTunes and producing singles by the firm is positive (T
statistic: 2.05), and the first-stage relationship between country-level firms’ proportion of albums to
all releases in the previous year and the production of singles by the firm is strongly negative: the
T statistic on this instrumental variable is -11.97. Overall, the first-stage T statistics indicate pow-
erful instruments, alleviating the concern regarding weak instruments (see, for example, Semadeni,
Withers, and Certo, 2014 for a summary of this issue). Because we have more instrumental variables than endogenous variables, we conduct a test of overidentifying restrictions using Sargan’s J statistic. The overidentification test does not reject the null hypothesis that all of our instruments are valid (χ2(1) = 1.332; p = 0.2484), alleviating concerns about an over-identification problem. In the second stage, the estimated effect on the number of new artists is positive. The coefficient is 1.4955 (Z statistic: 3.12, p-value: 0.002). Although our 2SLS estimate in Column
5 is noisier than the matching estimate in Column 4, the difference in coefficients is statistically
significant at 5%; the Z statistic of this difference is 2.13. Collectively, the findings in Table 3 suggest that, when firms produce singles, they experiment with more new artists than when they produce artists’ work only as albums.
-Insert Table 3 about here-
5.2 Does producing singles increase the chance of missing out on talented new
artists after their first releases?
We turn now to the second hypothesis, which tests the impact of producing singles on missing
out on talented new artists after the failure of their first releases. Table 4 shows the results from
the tests on the effect of producing singles on omission errors in giving a second chance to new
artists. We estimate the effect with different models: logistic regression, matched sample logistic
regression, and instrumental variables analysis (2SLS). Column 1 reports the estimates from a
simple logistic regression model without control variables. We find a positive relationship between
producing singles and omission errors. Specifically, the estimate, 0.5763, in Column 1 suggests that
omission errors are associated with a 64.02% increase when firms experiment with a new artist by producing a single.
Columns 2-4 present estimates from the matched sample logistic regressions. Regardless of the
presence of the control variables and other effects, the odds ratio estimates have similar values:
1.009 and 0.8569. We calculate the marginal effects of the estimates in Column 3 and report them
in Column 4. The coefficient, 0.0285, suggests that producing singles increases the chance of missing out on a top 20% talented new artist by 2.85 percentage points. Finally, we present
the estimates from the 2SLS model with the same two instrumental variables. The first-stage T
statistics on the relationship between instrumental variables and producing singles are 4.68 and -
15.07. These T statistics indicate powerful instruments. In the second stage, the estimated change in the omission error is positive (0.0642) with a p-value of 0.006.
-Insert Table 4 about here-
5.3 Does producing singles increase the performance target for selection and
retention?
We further test the mechanism for the second hypothesis. We propose that the increased chance
of missing out on talented artists may come from an increased performance target of the single pro-
duction. To study the performance target, first, we use another dependent variable, the popularity
of the most popular song in the first release (single or album) of the new artists who did not receive
a second chance with the same firm. As the dependent variable has a continuous value, we use
linear regressions, including OLS, matched sample OLS, and 2SLS.
Table 5 demonstrates a strong relationship between producing singles and the performance
target for selection and retention. Columns 1 to 5 report estimates from the models where the dependent variable is the popularity of the most popular song in the first release of new artists who did not receive a second chance with the same firm. The coefficients
(2.3862, 2.6879, 3.2609, 2.3828, and 2.2736) are positive and significant: the p-values for these five estimates are 0.000, 0.000, 0.000, 0.000, and 0.012, respectively. The reported T statistics for instrumental variables in Column 5 are 2.91 and -32.68, demonstrating that the instruments are powerful enough, and Sargan’s J statistic is 0.127 (p = 0.7213), which alleviates concerns about an over-
identification problem. As 5.0337 is the average popularity score of the most popular song in the
first release produced by the new artists who did not receive a second chance with the same firm,
single production is associated with at least a 47.33% (= 2.3828/5.0337) increase in performance
target for selection and retention.
-Insert Table 5 about here-
5.4 Robustness checks
Difference in the quality of experimented alternatives: One potential concern that has not been
addressed is the possibility that when firms produce singles, they may experiment with different
types of talent whose potential may be higher than when they produce albums. Because of the high
uncertainty in predicting the talent of an artist before production and commercialization in the
music industry, the term “nobody knows principle” has been coined by industry experts (Caves,
2000). Therefore, it is unlikely that single production itself would bring more talented artists than
album production. One way to rule out this concern is by comparing the talent levels of new artists
between single-producing firms and album-only-producing firms. We analyze two different samples:
firm-level sample and artist-level sample. We estimate the three different regression models (i.e.,
OLS, propensity score matching estimators, and 2SLS) described in the prior subsections with the two samples.
We do not find a strong relationship between producing singles and the average talent level
of new artists. Table 6 Columns 1 to 3 report estimates from the firm-level analysis. Although
the coefficients (0.0654, 0.0905, and 2.5758) are positive, they are not strongly significant: the p-
values for these three estimates are 0.792, 0.715, and 0.142, respectively. Columns 4 to 6 report
estimates from the artist-level analysis. The results are similar. The first two coefficients (-0.0058
and -0.0133) are negative and are not strongly significant: the p-values for these two estimates are
0.925 and 0.831, respectively. Although the coefficient in Column 6 (2SLS) is positive (1.4064), it
is not strongly significant; the p-value of the coefficient is 0.670. In sum, we find no evidence of a difference in the average quality of new artists between single-producing and album-only-producing firms.
-Insert Table 6 about here-
Noisier quality signal from single production: An alternative story may explain the positive asso-
ciation between single production and the chance of missing out on talented artists after the first
release; in particular, the increase in the omission errors may come from a nosier quality signal
of the single production. In reality, even though music firms allow an artist to produce an album,
the firms focus on one title song in the commercialization process. Thus, albums also produce a
similarly noisy signal on the talent of new artists. One way to rule out this concern is by considering
commission errors in giving second chances to untalented artists. If single production generates a
noisier signal than album production, we would see that single-producing firms make more com-
mission errors in giving a second chance to new artists (as well as omission errors in giving second
chances). This is because, in the presence of a noisier signal (from single production), new artists
will be more likely to face lucky draws as well as unlucky draws. If our mechanism works (i.e.,
single production is associated with a higher performance target), in contrast, we will see fewer
commission errors because of the increased performance target for selection and retention.
Table 7 shows that the patterns regarding commission errors are more consistent with our sug-
gested theoretical mechanism than the alternative story (i.e., the noisier-signal story). The results
suggest that single production decreases the chance of making commission errors because single
production increases the performance target for selection and retention. In Table 7, Columns 1 to
5 report estimates from the models where the dependent variable is a dummy that takes the value of one if the same firm gives a second chance to a bottom 80% artist. The coefficients (-0.0420, -0.7274, -0.7102, -0.1225, and -0.0669) are negative and significant: the p-values for these five estimates are 0.000, 0.000, 0.000, 0.000, and 0.006, respectively.
-Insert Table 7 about here-
Other robustness checks: Our findings are robust to the use of an alternative measure for the
independent variable: the proportion of singles to all releases in firm i. The results of this robustness
check are in Table 8 Panel A. We report the results from OLS and 2SLS models. The results are
qualitatively identical to the results for our baseline models, alleviating concerns about the measure
of our independent variable. Also, our findings are robust to the use of the different cutoff levels
for measuring the chance of missing out on talented artists. In Table 8 Panel B, we compare the
results from models with the different cutoff levels for measuring top talented artists: top 20%, top
15%, top 10%, and top 5%. Across the different cutoffs, we find a qualitatively identical pattern.
-Insert Table 8 about here-
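As an illustration of the cutoff robustness check, the sketch below rebuilds the miss dummy under each talent cutoff; it reuses the hypothetical artist-level columns introduced earlier and is not the paper's actual code.

import pandas as pd

artists = pd.read_csv("artists.csv")  # hypothetical artist-level data

hit_cutoff = artists["first_release_best_song_pop"].quantile(0.90)
for top_share in (0.20, 0.15, 0.10, 0.05):
    talent_cutoff = artists["lifetime_popularity"].quantile(1 - top_share)
    artists[f"miss_top{int(top_share * 100)}"] = (
        (artists["lifetime_popularity"] >= talent_cutoff)
        & (artists["first_release_best_song_pop"] < hit_cutoff)
        & (artists["second_firm_id"].notna())
        & (artists["second_firm_id"] != artists["first_firm_id"])
    ).astype(int)
# Each miss_top* column is then used as the dependent variable in the Table 8 Panel B models.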
6 Discussion and Conclusions
Management practices for experimentation, such as the lean startup or design thinking, have
recently become popular (e.g., Ries, 2011) because a growing number of technological innovations
lower the cost of experimentation through decomposition in the search process. For example, the
increasing popularity of software development kits (SDKs) has decomposed the software develop-
ment process into (1) developing SDKs and (2) utilizing SDKs. An exemplary case of an SDK is the video game engine in the video game industry. Because the introduction of such SDKs slashes the cost of experimentation for video game companies, many video game companies produce more video games without the hassle of developing their own SDKs. Our theory and findings suggest that
under imperfect evaluation, such an increase in the number of new video games may turn out to be
unprofitable because video game companies need to be more committed to each video game idea
(e.g., experiment with the same video game idea multiple times to refine it). Likewise, we highlight
that these experimentation-oriented management practices may have a hidden drawback, missing
out on promising ideas after experimentation.
To rigorously examine these two opposite effects of a decomposed search, we divide the dis-
covery process into the variation generation stage and the selection and retention stage. First, we
predict that the decomposed search generates more variations, some of which may turn out to be
promising options. However, the decomposed search may lead to worse selection and retention due
to the increase in performance target. This phenomenon occurs because, under imperfect evalua-
tion, experimentation can reveal partial information only, and a decomposed search makes firms
less committed to each alternative. Our findings demonstrate that music firms that produce some
of their products as singles tend to experiment with more new artists. However, they are more likely
to miss out on talented artists who experienced failures in their first releases. In sum, a decomposed
search generates a trade-off between variation (i.e., experimenting with more options) and selection
and retention (i.e., missing out on promising options after decomposed experimentations).
One of the general approaches for search effectiveness is to decompose the overall problem into
subproblems. Prior theoretical work has explored the performance implications of partitioning a
system into subsystems by focusing on whether a decomposed or integrated search is superior (e.g.,
Levinthal and Warglien, 1999), how to solve coordination problems between modules under high
complexity (e.g., Rivkin and Siggelkow, 2003), and how granular each module should be to solve
a complex problem (e.g., Ethiraj and Levinthal, 2004). This stream of work has relied on simulation methodology and has been based on two common assumptions: (1) a decomposed search facilitates experimentation, and (2) managers would not change their performance target regardless of whether their search mode is decomposed or integrated. In this study, we explore
whether these two assumptions represent search behavior in the real world. By exploring these
assumptions, we attempt to enhance our understanding of this topic and help scholars have a more
realistic assumption set in future research.
Regarding the first assumption, we provide evidence that is consistent with the view
that a decomposed search facilitates the creation of more variations through parallel experimenta-
tions. The finding helps us to advance our understanding of parallel experimentations. In particu-
lar, the decomposability perspective provides a nuanced theory of parallel experimentation by
bridging the literature on the architecture of complexity with the literature on parallel innovation.
Starting with the work of Nelson (1961), strategy scholars have examined the role of parallel experi-
mentation in the discovery of new solutions (e.g., Marengo et al., 2000, Ethiraj and Levinthal, 2004,
Eggers, 2012, Posen, Martignoni, and Levinthal, 2012). Our theory highlights that decomposability has
been missing from the characterization of heterogeneous parallel experimentation and the discovery
of new alternatives such as new technologies, assets, or workers. Indeed, our findings demonstrate
that decomposability would be one of the main factors facilitating parallel experimentations.
More importantly, regarding the second assumption, we provide theory and evidence on how the search mode affects the performance target that managers set for selection and retention. Our findings suggest that a decomposed search raises the performance target, implying that the benefit of a decomposed search may be more limited than prior theoretical work predicts. This finding highlights that the performance target plays an essential role in search, especially under imperfect evaluation. In this sense, this study contributes to a core tenet of the behavioral theory of the firm (e.g., Cyert and March, 1963, Adner and Helfat, 2003, Denrell, Fang, and Liu, 2014). Prior studies on imperfect evaluation have examined the bias and performance implications of heterogeneity in forecasting ability (e.g., Makadok and Walker, 2000, Denrell and Fang, 2010) and organizational structure (Sah and Stiglitz, 1986, Knudsen and Levinthal, 2007, Csaszar, 2012). Our study bridges the literature on imperfect evaluation with the literature on the architecture of complex systems, another important stream of the behavioral theory of the firm. We view a decomposed search as a heuristic that complements forecasting ability and organizational structure, and we highlight its hidden drawback: not giving a second chance to a promising option.
Finally, this study speaks to the burgeoning empirical literature on decomposability and complexity (e.g., Zhou, 2011, Ganco, 2013, Piazzai and Wijnberg, 2019, Ethiraj and Zhou, 2019). Since scholars started to conceptualize organizations as complex adaptive systems (e.g., March, 1991, Levinthal, 1997), they have adopted various models from complexity science (Holland and Miller, 1991, Kauffman, 1996, Watts and Strogatz, 1998) and advanced our understanding of the adaptive search process. However, as Baumann, Schmidt, and Stieglitz (2019) note, this theoretical work has to date been only incidentally complemented by empirical research, and the theoretical and empirical studies remain rather disconnected. We attempt to tighten the link between theoretical and empirical work by analyzing an unusual setting in which decomposability and its role in discovering promising new alternatives can be measured.
References
Adner, R., & Helfat, C. E. (2003). Corporate effects and dynamic managerial capabilities. Strategic Management Journal, 24(10), pp.1011-1025.
Baldwin, C. Y., & Clark, K. B. (2000). Design Rules: The Power of Modularity. MIT Press, Cambridge, MA.
Baumann, O., Schmidt, J., & Stieglitz, N. (2019). Effective search in rugged performance land-
scapes: A review and outlook. Journal of Management, 45(1), pp.285-318.
Benner, M. J., & Waldfogel, J. (2016). The song remains the same? Technological change and po-
sitioning in the recorded music industry. Strategy Science, 1(3), pp.129-147.
Britt, B. (1989). The 45-rpm single will soon be history. Los Angeles Daily News, p.4.
Brockner, J. (1992). The escalation of commitment to a failing course of action: Toward theoretical progress. Academy of Management Review, 17(1), 39-61.
Caves, R. E. (2000). Creative Industries: Contracts between Art and Commerce. Harvard University
Press, Cambridge, MA.
Csaszar, F. A. (2012). Organizational structure as a determinant of performance: Evidence from
mutual funds. Strategic Management Journal, 33(6), pp.611-632.
Cyert, R. M., & March, J. G. (1963). A Behavioral Theory of the Firm. Prentice-Hall, Englewood
Cliffs, NJ.
Denrell, J., & March, J. G. (2001). Adaptation as information restriction: The hot stove effect.
Organization Science, 12(5), pp.523-538.
Denrell, J., & Fang, C. (2010). Predicting the next big thing: Success as a signal of poor judgment.
Management Science, 56(10), pp.1653-1667.
Denrell, J., Fang, C., & Liu, C. (2014). Perspective—Chance explanations in the management sci-
ences. Organization Science, 26(3), pp.923-940.
Eggers, J. P. (2012). Falling flat: Failed technologies and investment under uncertainty. Administrative Science Quarterly, 57(1), pp.47-80.
Eggers, J. P., & Green, E. (2012). Choosing not to choose: A behavioral perspective on parallel
search. DRUID 2012 Conference Proceeding, June (Vol. 19).
Ethiraj, S. K., & Levinthal, D. (2004). Modularity and innovation in complex systems. Management
Science, 50(2), pp.159-173.
Ethiraj, S. K., Levinthal, D., & Roy, R. R. (2008). The dual role of modularity: Innovation and
imitation. Management Science, 54(5), 939-955.
Ethiraj, S. K., & Zhou, Y. M. (2019). Fight or flight? Market positions, submarket interdependen-
cies, and strategic responses to entry threats. Strategic Management Journal.
Ewens, M., Nanda, R., & Rhodes-Kropf, M. (2018). Cost of experimentation and the evolution of
venture capital. Journal of Financial Economics, 128(3), pp.422-442.
Fang, C., & Kim, J. J. (2018). The power and limits of modularity: A replication and reconciliation.
Strategic Management Journal, 39(9), pp.2547-2565.
Fang, C., Kim, J. J., & Milliken, F. J. (2014). When bad news is sugarcoated: Information distor-
tion, organizational search and the behavioral theory of the firm. Strategic Management Journal,
35(8), pp.1186-1201.
Ganco, M. (2013). Cutting the Gordian knot: The effect of knowledge complexity on employee
mobility and entrepreneurship. Strategic Management Journal, 34(6), 666-686.
Guler, I. (2007). Throwing good money after bad? Political and institutional influences on sequential
decision making in the venture capital industry. Administrative Science Quarterly, 52(2), 248-285.
Guralnick, P. (2012). Last Train to Memphis: the Rise of Elvis Presley. Little, Brown.
Greve, H. R. (2003). A behavioral theory of R&D expenditures and innovations: Evidence from
shipbuilding. Academy of Management Journal, 46(6), 685-702.
Hamilton, B. H., & Nickerson, J. A. (2003). Correcting for endogeneity in strategic management research. Strategic Organization, 1(1), pp.51-78.
Holland, J. H., & Miller, J. H. (1991). Artificial adaptive agents in economic theory. The American
Economic Review, 81(2), 365-370.
International Federation of Phonographic Industry. (2015). IFPI releases definitive statistics on
global market for recorded music. http://www.ifpi.org/sitecontent/publications/rin_order.html.
[(August 2) London, UK].
Kauffman, S. (1996). At Home in the Universe: The Search for the Laws of Self-organization and
Complexity. Oxford University Press. UK.
Keum, D. D., & Eggers, J. P. (2018). Setting the bar: The evaluative and allocative roles of orga-
nizational aspirations. Organization Science, 29(6), 1170-1186.
Keum, D. D., & See, K. E. (2017). The influence of hierarchy on idea generation and selection in
the innovation process. Organization Science, 28(4), pp.653-669.
Knudsen, T., & Levinthal, D. A. (2007). Two faces of search: Alternative generation and alternative
evaluation. Organization Science, 18(1), pp.39-54.
Kogut, B., & Bowman, E. H. (1995). Modularity and permeability as principles of design (pp. 243-
260). Redesigning the Firm, New York: Oxford Univ. Press.
Kogut, B., & Kulatilaka, N. (1994). Operating flexibility, global manufacturing, and the option
value of a multinational network. Management Science, 40(1), pp.123-139.
Kulkarni, D., & Simon, H. A. (1990). Experimentation in machine discovery. Computational Models
of Scientific Discovery and Theory Formation (edited by Jeff Shrager and Pat Langley).
Lant, T., & Shapira, Z. (2008). Managerial reasoning about aspirations and expectations. Journal
of Economic Behavior & Organization, 66(1), 60-73.
Leiponen, A., & Helfat, C. E. (2010). Innovation objectives, knowledge sources, and the benefits of
breadth. Strategic Management Journal, 31(2), pp.224-236.
Lee, E., & Puranam, P. (2016). The implementation imperative: Why one should implement even
imperfect strategies perfectly. Strategic Management Journal, 37(8), pp.1529-1546.
Levinthal, D. A. (1997). Adaptation on rugged landscapes. Management Science, 43(7), pp.934-950.
Levinthal, D. A., & Warglien, M. (1999). Landscape design: Designing for local action in complex
worlds. Organization Science, 10(3), 342-357.
Loch, C. H., Terwiesch, C., & Thomke, S. (2001). Parallel and sequential testing of design alterna-
tives. Management Science, 47(5), pp.663-678.
McNamara, G., Moon, H., & Bromiley, P. (2002). Banking on commitment: Intended and unin-
tended consequences of an organization’s attempt to attenuate escalation of commitment. Academy
of Management Journal, 45(2), 443-452.
Makadok, R., & Walker, G. (2000). Identifying a distinctive competence: Forecasting ability in the money fund industry. Strategic Management Journal, 21(8), pp.853-864.
March, J. G. (1991). Exploration and exploitation in organizational learning. Organization Science,
2(1), pp.71-87.
March, J. G., & Simon, H. A. (1958). Organizations. John Wiley & Sons, New York.
Marengo, L., Dosi, G., Legrenzi, P., & Pasquali, C. (2000). The structure of problem-solving knowl-
edge and the structure of organizations. Industrial and Corporate Change, 9(4), pp.757-788.
Nelson, R. R. (1961). Uncertainty, learning, and the economics of parallel research and development
efforts. The Review of Economics and Statistics, pp.351-364.
Piazzai, M., & Wijnberg, N. M. (2019). Product proliferation, complexity, and deterrence to imi-
tation in differentiated-product oligopolies. Strategic Management Journal, 40(6), pp.945-958.
Posen, H. E., & Levinthal, D. A. (2012). Chasing a moving target: Exploitation and exploration in dynamic environments. Management Science, 58(3), pp.587-601.
Posen, H. E., Martignoni, D., & Levinthal, D. A. (2013). E Pluribus Unum: Organizational Size and the Efficacy of Learning. Available at SSRN 2210513.
Rawley, E., & Simcoe, T. S. (2013). Information technology, productivity, and asset ownership:
Evidence from taxicab fleets. Organization Science, 24(3), pp.831-845.
Ries, E. (2011). The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to
Create Radically Successful Businesses. Currency.
Rivkin, J. W., & Siggelkow, N. (2003). Balancing search and stability: Interdependencies among
elements of organizational design. Management Science, 49(3), 290-311.
Sah, R. K., & Stiglitz, J. E. (1986). The architecture of economic systems: Hierarchies and pol-
yarchies. The American Economic Review, 716-727.
Semadeni, M., Withers, M. C., & Trevis Certo, S. (2014). The perils of endogeneity and instrumental variables in strategy research: Understanding through simulations. Strategic Management Journal, 35(7), 1070-1079.
Simon, H. A. (1955). A behavioral model of rational choice. Quarterly Journal of Economics, 69:
99-118.
Simon, H. A. (1962). The architecture of complexity. Proceedings of American Philosophical Soci-
ety, 106, 467–482.
Staw, B. M. (1976). Knee-deep in the big muddy: A study of escalating commitment to a chosen
course of action. Organizational Behavior and Human Performance, 16(1), 27-44.
Thomke, S. H. (2003). Experimentation matters: unlocking the potential of new technologies for
innovation. Harvard Business Press, Cambridge, MA.
Thomke, S., von Hippel, E., & Franke, R. (1998). Modes of experimentation: An innovation process and competitive variable. Research Policy, 27(3), pp.315-332.
Tervio, M. (2009). Superstars and mediocrities: Market failure in the discovery of talent. The Re-
view of Economic Studies, 76(2), pp.829-850.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science,
185(4157), 1124-1131.
Watts, D. J., & Strogatz, S. H. (1998). Collective dynamics of 'small-world' networks. Nature, 393(6684), 440.
Wong, K. F. E., & Kwong, J. Y. (2018). Resolving the judgment and decision-making paradox
between adaptive learning and escalation of commitment. Management Science, 64(4), 1911-1925.
Zhou, Y. M. (2011). Synergy, coordination costs, and diversification choices. Strategic Management
Journal, 32(6), pp.624-639.
Zollo, M., & Winter, S. G. (2002). Deliberate learning and the evolution of dynamic capabilities.
Organization Science, 13(3), pp.339-351.
Figure 1: Decomposed search vs. integrated search in performance target and omission
errors after first evaluation
Panel A. Decomposed search and omission errors after first evaluation
Panel B. Integrated search and omission errors after first evaluation
Note: The true quality of alternative i is 5, and the evaluated performance of an alternative is a random draw from a normal distribution (N(5, 1)). The dark (red) area marks the cases in which an omission error occurs, i.e., the alternative is not given a second evaluation chance.
Figure 2: Popularity distributions
Panel A. Artist lifetime popularity distribution
Panel B. Song popularity distribution
Note: We exclude observations with popularity 0 from both distributions. Figure 2 shows that both popularity distributions are highly skewed and that the song popularity distribution in Panel B is more skewed than the artist lifetime popularity distribution in Panel A.
Table 1: Cost of producing a release in the music industry
Item Cost
Cash advance US $50,000 - 350,000
Recording US $150,000 - 500,000
Video production US $50,000 - 300,000
Tour support US $50,000 - 150,000
Marketing and promotion US $200,000 - 700,000
Total US $500,000 - 2,000,000
Source: International Federation of the Phonographic Industry report, 2014
Table 2: Summary statistics
Variable name    Level of variation    Mean    Std. dev.    Min.    Max.
Independent variables:
(Dummy) one if firm produces at least one single Firm 0.393 0.488 0 1
(Dummy) one if the first release of artist i is produced by a single Artist 0.717 0.152 0 1
Dependent variables:
Number of new artists Firm 1.178 2.001 0 52
(Dummy) one if firm misses out on top 20% talented artists Firm 0.019 0.137 0 1
after their first release
Other variables:
(Dummy) one if iTunes was introduced in country cCountry 0.587 0.492 0 1
County-level proportion of albums to all releases (year t-1) Country 0.736 0.141 0.357 1
Complexity of firm main genre Genre 33.740 1.143 26.277 39.585
log (Number of songs) Firm 2.534 1.498 0 7.517
log (Mean number of artists’ prior releases) Firm 1.059 1.044 0 6.174
(Dummy) one if firm produced at least one top 5% song (year t-1) Firm 0.039 0.194 0 1
Herfindahl index for genre in firm Firm 0.893 0.206 0.174 1
(Dummy) one if firm is an entrepreneurial firm Firm 0.493 0.500 0 1
Other statistics:
Number of songs Firm 35.010 59.582 1 1838
Number of releases (albums + singles) Firm 3.905 6.099 1 237
Number of singles Firm 1.184 3.151 0 142
Proportion of singles to releases Firm 0.264 0.382 0 1
Year Firm 2005.102 5.570 1995 2015
Number of firm-year observations 29,217
Number of unique firms 9,667
Number of unique artists 42,422
Number of unique releases 114,488
Number of unique songs 1,026,390
Table 3: (Variation generation stage) Does producing singles facilitate experimenting
with more new artists?
DV: Number of new artists
(1) (2) (3) (4) (5)
Propensity Propensity
OLS OLS Matching Matching 2SLS
(Dummy) one if firm produces at least one single 0.6200 0.4147 0.4914 0.4761 1.4955
(0.0474) (0.0315) (0.0392) (0.0378) (0.4788)
(p= 0.000) (p= 0.000) (p= 0.000) (p= 0.000) (p= 0.002)
Complexity of firm main genre -0.0082 -0.0048 -0.0086
(0.0100) (0.0184) (0.0096)
log (No. of songs) (1 year lag) 0.3625 0.1407 0.2163
(0.0233) (0.0236) (0.0145)
log (Mean no. of artists’ prior releases) (1 year lag) -0.1932 -0.0010 -0.1162
(0.0180) (0.0247) (0.0162)
(Dummy) one if firm produced at least one top 5% song 0.7149 0.2528 0.1745
(0.1881) (0.0880) (0.1248)
Herfindahl index for genre in label -1.1286 -0.8580 -0.6841
(0.1112) (0.0945) (0.1022)
(Dummy) one if firm is an entrepreneurial firm 0.1410 0.1017 0.0718
(0.0277) (0.0684) (0.0237)
Constant 0.9904 1.5270 1.1486 1.5563 1.6503
(0.2380) (0.3993) (0.3498) (0.7177) (0.3908)
Year effect no yes no yes yes
Genre effect no yes no yes yes
Country effect no yes no omitted omitted
Firm effect no no no yes yes
2SLS first-stage summary statistics
T-statistic: (Dummy) one if iTunes was introduced 2.05
in country c
T-statistic: Country-level proportion of Albums -11.97
to all releases in the previous year
Adjusted R² 0.0945 0.1975 0.0297 0.0507 n.a.
N 29,317 29,317 16,980 16,980 29,317
Note: Standard errors are clustered at the firm level. The Sargan J-statistic (χ²(1)) is 1.332 (p = 0.2484), alleviating concerns about an over-identification problem.
Table 4: (Selection and retention stages) Does producing singles increase the chance
of missing out on talented artists after their first releases?
DV: (Dummy) one if firm misses out on
top 20% talented artists after their first release
(1) (2) (3) (4) (5)
Matched Matched Matched
Sample Sample Sample
Logit Logit Logit Logit
(Odds (Odds (Odds (Marginal
ratios) ratios) ratios) effects) 2SLS
(Dummy) one if the first release of artist i 0.5763 1.0090 0.8569 0.0285 0.0642
is produced by a single (0.0711) (0.2001) (0.2065) (0.0069) (0.0232)
(p= 0.000) (p= 0.000) (p= 0.000) (p= 0.000) (p= 0.006)
Complexity of firm main genre -0.0207 -0.0007 -0.0011
(0.0979) (0.0033) (0.0005)
log (No. of songs) (1 year lag) -0.0217 -0.0007 0.0005
(0.0458) (0.0015) (0.0009)
log (Mean no. of artists’ prior releases) (1 year lag) -0.3406 -0.0113 -0.0022
(0.0812) (0.0027) (0.0011)
(Dummy) one if firm produced at least a top 5% song -0.0790 -0.0026 -0.0099
(0.2505) (0.0083) (0.0034)
Herfindahl index for artists’ genre in firm -0.2898 -0.0096 -0.0111
(0.2748) (0.0091) (0.0035)
(Dummy) one if firm is an entrepreneurial firm -0.5794 -0.0193 -0.0044
(0.1407) (0.0047) (0.0023)
Constant -4.7532 -5.6186 -4.5419 4.3985
(0.1110) (0.3902) (3.2700) (0.3124)
Year effect no no yes yes yes
Genre effect no no yes yes yes
Country effect no no yes yes omitted
Firm effect no no no no yes
2SLS first-stage summary statistics
T-statistic: (Dummy) one if iTunes was introduced 4.68
in country c
T-statistic: Country-level proportion of albums -15.07
to all releases in the previous year
Adjusted R² / Pseudo R² 0.0080 0.0087 0.0773 0.0773 n.a.
N 42,422 17,373 11,751 11,751 42,422
Note: Standard errors are clustered at the firm level. The Sargan J-statistic (χ²(1)) is 0.0333 (p = 0.8551), alleviating concerns about an over-identification problem.
Table 5: (Selection and retention stages) Does producing singles increase the perfor-
mance target of new artists’ first releases for selection and retention?
Sample: Firm Level
DV: Popularity of the most popular song
among the releases of new artists
who did not have second chances in firm
(1) (2) (3) (4) (5)
Propensity Propensity
OLS OLS Matching Matching 2SLS
(Dummy) one if the first release of artist i 2.3862 2.6879 3.2609 2.3828 2.2736
is produced by a single (0.2751) (0.2250) (0.4749) (0.3531) (0.9026)
(p= 0.000) (p= 0.000) (p= 0.000) (p= 0.000) (p= 0.012)
Complexity of firm main genre -0.0941 -0.5076 -0.0145
(0.0704) (0.2801) (0.0605)
log (No. of songs) (1 year lag) 1.0147 1.4079 0.9057
(0.0707) (0.1313) (0.0865)
log (Mean no. of artists’ prior releases) (1 year lag) 1.6738 1.8363 0.9822
(0.1171) (0.2195) (0.1138)
(Dummy) one if firm produced at least a top 5% song 26.1820 24.4423 8.7423
(0.5521) (0.8240) (0.3691)
Herfindahl index for artists’ genre in firm -12.6696 -13.5437 -7.4939
(0.4617) (0.8111) (0.3934)
(Dummy) one if firm is an entrepreneurial firm -0.4769 -0.4346 -0.5537
(0.1971) (0.3616) (0.2357)
Constant 4.8777 -24.3093 -2.4799 14.1524 -305.9536
(0.3621) (33.4547) (2.5125) (9.8857) (33.0588)
Year effect no yes no yes yes
Genre effect no yes no yes yes
Country effect no yes no omitted omitted
Firm effect no no no yes yes
2SLS first-stage summary statistics
T-statistic: (Dummy) one if iTunes was introduced 2.91
in country c
T-statistic: Country-level proportion of albums -32.68
to all releases in the previous year
Adjusted R² / Pseudo R² 0.0042 0.4645 0.1389 0.5118 n.a.
N 20,434 20,434 8,641 8,641 20,434
Note: Standard errors are clustered at the firm level. The Sargan J-statistic (χ²(1)) is 0.127 (p = 0.7213), alleviating concerns about an over-identification problem.
Table 6: Difference in average talent level of artists between singles and albums
Sample: Firm-level Sample: Artist-level
DV: Average Talent level of DV: Talent level of
new artists artist i
(1) (2) (3) (4) (5) (6)
Propensity Propensity Propensity Propensity
Matching Matching Matching Matching
OLS OLS 2SLS Logit Logit 2SLS
(Dummy) one if firm produces at least one single 0.0654 0.0905 2.5758
(0.2478) (0.2480) (1.7558)
(p= 0.792) (p= 0.715) (p= 0.142)
(Dummy) one if the first release of artist i -0.0058 -0.0133 1.4064
is produced by a single (0.0616) (0.0619) (3.3042)
(p= 0.925) (p= 0.831) (p= 0.670)
Complexity of firm main genre -0.1997 -0.1306 -0.0259 -0.3071
(0.1337) (0.0593) (0.0322) (0.0716)
log (No. of songs) (1 year lag) -0.2695 0.1607 0.0368 -0.1776
(0.1139) (0.0467) (0.0177) (0.1021)
log (Mean no. of artists’ prior releases) (1 year lag) 0.2028 -0.3727 -0.0853 -0.1450
(0.1190) (0.0787) (0.0317) (0.1082)
(Dummy) one if firm produced at least a top 5% song 0.8245 0.4996 0.2166 0.3283
(0.4289) (0.6035) (0.0828) (0.3355)
Herfindahl index for artists’ genre in firm -1.2511 -1.2085 -0.0904 -1.4681
(0.4811) (0.5029) (0.1029) (0.3457)
(Dummy) one if firm is an entrepreneurial firm -0.0049 0.2372 0.0376 0.7410
(0.3391) (0.1324) (0.0568) (0.2521)
Constant 12.9682 21.2401 11.1143 -0.8624 0.0239 1220.1770
(3.8657) (6.0759) (2.6811) (0.1899) (1.1022) (62.8332)
Year effect yes yes yes yes yes yes
Genre effect yes yes yes yes yes yes
Country effect yes yes omitted yes yes omitted
Firm effect no no yes no no yes
N 16,980 16,980 29,317 11,751 11,751 42,422
Note: Standard errors are clustered at the firm level.
Table 7: Does producing singles increase commission errors in selection and retention
stage? (giving second chances to unpromising options)
DV: Commission Error: (Dummy) one if firm gives a second
chance to bottom 80% artists
(1) (2) (3) (4) (5)
Matched Matched Matched
Sample Sample Sample
Logit Logit Logit
(Odds (Odds (Marginal
OLS ratios) ratios) effects) 2SLS
(Dummy) one if the first release of artist i -0.0420 -0.7274 -0.7102 -0.1225 -0.0669
is produced by a single (0.0045) (0.0610) (0.0612) (0.0104) (0.0243)
(p= 0.000) (p= 0.000) (p= 0.000) (p= 0.000) (p= 0.006)
Complexity of firm main genre -0.0068 -0.0012 -0.0001
(0.0314) (0.0054) (0.0022)
log (No. of songs) (1 year lag) 0.0904 0.0156 0.0073
(0.0165) (0.0028) (0.0022)
log (Mean no. of artists’ prior releases) (1 year lag) -0.0933 -0.0161 -0.0050
(0.0274) (0.0047) (0.0032)
(Dummy) one if firm produced at least a top 5% song 0.0275 0.0047 -0.0003
(0.0738) (0.0127) (0.0100)
Herfindahl index for artists’ genre in firm 0.2133 0.0368 0.0250
(0.0932) (0.0161) (0.0109)
(Dummy) one if firm is an entrepreneurial firm -0.1407 -0.0243 -0.0197
(0.0510) (0.0088) (0.0064)
Constant 0.8130 1.6611 1.5798 -29.4653
(0.0062) (0.1789) (1.0768) (1.9617)
Year effect no no yes yes yes
Genre effect no no yes yes yes
Country effect no no yes yes omitted
Firm effect no no no no yes
2SLS first-stage summary statistics
T-statistic: (Dummy) one if iTunes was introduced 4.68
in country c
T-statistic: Country-level proportion of albums -15.07
to all releases in the previous year
Adjusted R² / Pseudo R² 0.0022 0.0814 0.0842 0.0842 n.a.
N 42,422 33,601 11,751 11,751 42,422
Note: Standard errors are clustered at the firm level. The Sargan J-statistic (χ²(1)) is 0.0333 (p = 0.8551), alleviating concerns about an over-identification problem.
Table 8: Other robustness checks
Panel A. (Hypothesis 1) An alternative measure for single production:
proportion of singles to all releases in firm
DV: Number of new artists DV: Talent level of the most
talented new artist in firm
(1) (2) (3) (4)
OLS 2SLS OLS 2SLS
Proportion of singles to all releases 0.4641 1.7207 1.1941 5.2083
(0.0441) (0.3966) (0.2162) (2.3139)
(p= 0.000) (p=0.000) (p= 0.000) (p= 0.024)
Control variables yes yes yes yes
Constant yes yes yes yes
Year effect yes yes yes yes
Genre effect yes yes yes yes
Country effect yes omitted yes omitted
Firm effect no yes no yes
Adjusted R² 0.2030 n.a. 0.1676 n.a.
N 29,317 29,317 29,317 29,317
Panel B. (Hypothesis 2) Different cutoffs for talented artists
Top 20% Top 15% Top 10% Top 5%
(1) (2) (3) (4)
Matched Matched Matched Matched
Sample Sample Sample Sample
Logit Logit Logit Logit
(Odds ratio) (Odds ratio) (Odds ratio) (Odds ratio)
(Dummy) one if the first release of 0.8569 0.9095 0.8091 0.7465
artist i is produced by a single (0.2065) (0.2609) (0.3125) (0.4444)
(p= 0.000) (p=0.000) (p= 0.010) (p= 0.093)
Control variables yes yes yes yes
Constant yes yes yes yes
Year effect yes yes yes yes
Genre effect yes yes yes yes
Country effect yes yes yes yes
Adjusted R² 0.0773 0.0714 0.0860 0.1057
N 11,751 11,751 11,751 11,751
Note: Standard errors are clustered at the firm level.
Appendix A. Description on Databases
1. Musicbrainz Database
MusicBrainz is a project that aims to create an open-content music database. It was founded as a database that software applications could use to look up audio CD (compact disc) information on the Internet, and it has since expanded beyond a compact disc metadata storehouse to become a structured, open online database for music. The MusicBrainz database covers information about artists, release groups, releases, recordings, works, and labels, as well as the many relationships between them. The database also contains a history of all the changes that the MusicBrainz community has made to the data (Musicbrainz, 2016). The first strength of the MusicBrainz database is its large coverage. The second strength is that it contains label information; by contrast, the Spotify APIs do not offer label information (Highfield, 2007).
2. Spotify Echo Nest API
The Echo Nest was a music intelligence and data platform for developers and media companies. Its creators intended it to perform music identification, recommendation, playlist creation, audio fingerprinting, and analysis for consumers and developers. On March 6, 2014, Spotify announced that it had acquired The Echo Nest. Spotify shut down the original Echo Nest API on May 31, 2016, and developers were encouraged to use the Spotify API, which integrates the original Echo Nest API, instead. The Echo Nest offered a database of about 30 million songs aggregated through web crawling, data mining, and digital signal processing techniques. Its strength is that it measures the popularity of artists by combining many sources, including play counts, mentions on the web and in music blogs, music reviews, Twitter, Facebook, and the catalogues of streaming applications.
The Echo Nest offered two popularity measures: familiarity and hotness. We choose familiarity as our popularity measure because it captures the lifetime popularity of artists. Specifically, familiarity measures how well known an artist is; one can understand it as the likelihood that a randomly selected person will have heard of the artist. The Beatles have a familiarity close to 1, while a band like 'Hot Rod Shopping Cart' has a familiarity close to zero. Hotness, on the other hand, corresponds to how much buzz the artist is getting right now. Figure A1 shows the relationship between the two measures: the x-axis is familiarity, and the y-axis is hotness. There is a clear correlation between hotness and familiarity, and familiar artists tend to be hotter than non-familiar artists. At the top right are Billboard chart toppers such as Kanye West and Taylor Swift, while at the bottom left are artists that few have heard of, such as Mystery Fluid. The plot shows rising artists as well as popular artists that are cooling off; outliers to the left of and above the main diagonal are the rising stars (Music Machinery, 2009).
The last strength of The Echo Nest is that it offers an ID-matching scheme for many different databases. Through Project Rosetta Stone, The Echo Nest eliminates much of the trouble of mapping IDs: Rosetta Stone allows a researcher to use a music ID from any music API with The Echo Nest web services. We use this matching scheme to merge the multiple databases.
Figure A1. Two popularity measures and their relation
Source: Music Machinery (https://musicmachinery.com/2009/12/09/a-rising-star-or/).
3. Spotify Web API
In June 2014, Spotify released a new Web API that allows third-party developers to integrate Spotify content into their own applications (Spotify, 2014). The Spotify Web API is a web service accessed over the Hypertext Transfer Protocol; it returns data about albums, artists, tracks, playlists, and other Spotify resources in JSON format. We use its song popularity data. The popularity of a track is based on (1) the total number of plays compared to other tracks and (2) how recent those plays are. We extract song popularity data using the International Standard Recording Code (ISRC) and match it with the MusicBrainz and Echo Nest data.
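As an illustration of this matching step, the sketch below (Python) retrieves a track's popularity score by ISRC through the Spotify Web API's public search endpoint and merges it into a MusicBrainz-based table. The access token, file name, and column names are hypothetical placeholders; the paper's actual data-collection code may have differed.

```python
from typing import Optional

import pandas as pd
import requests

ACCESS_TOKEN = "YOUR_SPOTIFY_TOKEN"  # hypothetical OAuth token

def track_popularity_by_isrc(isrc: str) -> Optional[int]:
    """Look up a track by ISRC via the Spotify Web API search endpoint
    and return its 0-100 popularity score (None if no match is found)."""
    resp = requests.get(
        "https://api.spotify.com/v1/search",
        params={"q": f"isrc:{isrc}", "type": "track", "limit": 1},
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    )
    resp.raise_for_status()
    items = resp.json()["tracks"]["items"]
    return items[0]["popularity"] if items else None

# Merge popularity into a MusicBrainz-derived table keyed by ISRC (illustrative).
songs = pd.read_csv("musicbrainz_recordings.csv")  # assumed to contain an 'isrc' column
songs["spotify_popularity"] = songs["isrc"].map(track_popularity_by_isrc)
```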
References
The Echo Nest Lab, 2009, The Map of Music Styles (http://static.echonest.com/playlist/moms/). Retrieved on 2016-09-18.
Highfield, Ashley. 2007. Keynote speech given at IEA Future Of Broadcasting Conference
(http://www.bbc.co.uk/pressoffice/speeches/stories/highfield_iea.shtml), BBC Press Office.
Retrieved on 2016-09-18.
Musicbrainz, 2016, Musicbrainz Database (https://musicbrainz.org/doc/MusicBrainz_Database#Core_data). Retrieved on 2016-09-18.
Music Machinery, 2009, Hottt or Nottt? (https://musicmachinery.com/2009/12/09/a-rising-star-or/). Retrieved on 2016-09-18.
Spotify, 2014, Say Hello to Our New Web API. Spotify Developer Website (https://developer.spotify.com/news-stories/2014/06/17/say-hello-new-web-api/). Retrieved on 2016-09-18.
Appendix B. History of staggered entries of iTunes into 29 sample countries
Country    Entry time    Ranking by no. of firms in Musicbrainz    No. of firms in Musicbrainz
1 United States 28-Apr-03 1 16,916
2 United Kingdom 15-Jun-04 2 8,914
3 France 15-Jun-04 5 2,974
4 Germany 15-Jun-04 3 5,398
5 Austria 26-Oct-04 23 355
6 Belgium 26-Oct-04 14 729
7 Finland 26-Oct-04 10 1,329
8 Greece 26-Oct-04 21 366
9 Italy 26-Oct-04 6 1,919
10 Netherlands 26-Oct-04 9 1,490
11 Portugal 26-Oct-04 26 258
12 Spain 26-Oct-04 11 1,249
13 Canada 3-Dec-04 8 1,638
14 Ireland 6-Jan-05 28 234
15 Sweden 10-May-05 7 1,660
16 Norway 10-May-05 16 546
17 Switzerland 10-May-05 17 528
18 Denmark 10-May-05 19 443
19 Japan 4-Aug-05 4 3,676
20 Australia 25-Oct-05 12 1,152
21 New Zealand 6-Dec-05 24 307
22 Mexico 4-Aug-09 29 217
23 Czech Republic 29-Sep-11 25 288
24 Estonia 29-Sep-11 22 359
25 Poland 29-Sep-11 15 573
26 Argentina 13-Dec-11 27 235
27 Brazil 13-Dec-11 18 467
28 Russia 4-Dec-12 13 817
29 Turkey 4-Dec-12 20 375
Total - - 55,412
Note: We exclude countries which have fewer than 200 unique music production firms in the Musicbrainz database.
The final sample consists of 29,317 firm-years associated with 9,667 firms.
Appendix C. Measure of Genre Complexity
A rugged landscape illustrates the basic challenge posed by complex problems (Levinthal, 1997, Baumann and Siggelkow, 2013). Each location on the landscape represents a combination of activities, and the height of the location represents the performance of that combination. An important source of complexity is interdependence among activities, which results in a rugged landscape with many local peaks and valleys. Boundedly rational firms will have difficulty experimenting with new options and will tend to stick to local peaks. The landscape metaphor implies that, as complexity increases, firms are more likely to be scattered across different local peaks. Prior work measures the heterogeneity of a product subspace as a proxy for product space complexity (e.g., Barroso and Giarratana, 2013, Piazzai and Wijnberg, 2019).
Specifically, we follow Piazzai and Wijnberg's (2019) approach to measure the complexity of each music genre during each year of the sample period, using the Discogs and AcousticBrainz databases. First, genre information on each release comes from the Discogs database, which distinguishes 14 music genres: Blues, Brass and Military, Children, Classical, Electronic, Folk, Funk and Soul, Hip Hop, Jazz, Latin, Pop, Reggae, Rock, and Stage and Screen. We exclude two genres—Brass and Military, and Stage and Screen—because they are not considered commercial music. Second, we use musical attribute data from AcousticBrainz, which provides musical attributes and fingerprints of songs, including danceability, acousticness, energy, valence (happiness), speechiness, mode (major or minor), track length, primary key, scale and frequency of the primary key, scale of the most frequent chord-progression key, average number of beats per minute, and beat count. Using these attributes, we calculate the centroid of each genre during each year of observation and compute the Mahalanobis distance of each product in the genre-year from this centroid. We use the mean distance as our complexity measure because it increases with the degree of heterogeneity in product attributes.
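A minimal sketch of this distance computation is shown below (Python). It assumes a song-level data frame with a genre column, a year column, and a set of attribute columns; the column names and the attribute subset are hypothetical, and the handling of small genre-years is a simplification rather than the paper's exact procedure.

```python
import numpy as np
import pandas as pd

ATTRS = ["danceability", "acousticness", "energy", "valence", "bpm"]  # illustrative subset

def genre_year_complexity(df: pd.DataFrame) -> pd.DataFrame:
    """Mean Mahalanobis distance of songs from their genre-year centroid,
    a sketch of the Piazzai and Wijnberg (2019)-style complexity measure."""
    rows = []
    for (genre, year), grp in df.groupby(["genre", "year"]):
        X = grp[ATTRS].to_numpy(dtype=float)
        centroid = X.mean(axis=0)
        # Pseudo-inverse guards against singular covariance in small genre-years.
        cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
        diffs = X - centroid
        # Squared Mahalanobis distance for each song, then take the square root.
        d = np.sqrt(np.einsum("ij,jk,ik->i", diffs, cov_inv, diffs))
        rows.append({"genre": genre, "year": year, "complexity": d.mean()})
    return pd.DataFrame(rows)
```

The genre-year means produced by such a routine would then be merged onto the firm-year panel as the "Complexity of firm main genre" variable used in the tables.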
Appendix D. Correlations
1 2 3 4 5 6 7 8 9
1 (Dummy) one if firm produces at least one single 1.0000
2 Complexity of firm main genre -0.0593 1.0000
3 log (No. of songs) (1 year lag) 0.0143 -0.0533 1.0000
4 log (Mean no. of artists' prior releases) (1 year lag) 0.0821 -0.0396 0.5268 1.0000
5 (Dummy) one if firm produced at least a top 5% song 0.1546 -0.0351 0.1882 0.1834 1.0000
6 Herfindahl index for artists’ genre in firm -0.1601 -0.0069 -0.3081 -0.2184 -0.2225 1.0000
7 (Dummy) one if firm is an entrepreneurial firm 0.0192 0.0258 -0.5264 -0.4396 -0.1139 0.1984 1.0000
8 (Dummy) one if iTunes was introduced in country c -0.0492 0.0779 0.0803 0.1048 0.0333 -0.0410 -0.1437 1.000
9 Country-level proportion of albums to all releases in the previous year -0.2846 0.0431 0.0526 -0.0463 -0.0173 -0.0011 -0.0279 0.1477 1.000