When Success Is Rare and Competitive: Learning from Others' Success and My Failure at the Speed of Formula One
Michael A. Lapré a,*, Candace Cravey b
a Owen Graduate School of Management, Vanderbilt University, Nashville, Tennessee 37203; b School of Law, University of Virginia, Charlottesville, Virginia 22903
*Corresponding author
Contact: m.lapre@vanderbilt.edu, https://orcid.org/0000-0003-2259-8739 (MAL); keu5dc@virginia.edu (CC)
Received: March 16, 2020
Revised: May 17, 2021; August 23, 2021
Accepted: August 27, 2021
Published Online in Articles in Advance: February 14, 2022
https://doi.org/10.1287/mnsc.2022.4324
Copyright: © 2022 The Author(s)
Abstract. Organizations can learn from prior successes and failures to improve organizational performance. Few learning-curve studies have investigated this phenomenon at the individual level. A notable exception found that surgeons learn from their own success and others' failure. Success in surgery is common and individually independent from other surgeries. We study learning from success and failure in a context where success is rare and competitive: Formula One (F1) racing. Only one driver will win a race, preventing the other competitors from winning. Even severe failures causing drivers to abandon the race are common. We investigate two types of abandonments: car failures and driver failures. Our data set covers F1 from the start of F1 in 1950 through 2017, yielding 21,487 driver-race observations. We find that win probability follows an inverted U-shaped function of racing experience. We also find that drivers learn from their own success, teammates' success, as well as own car failures. However, drivers do not learn from their own driver failures. A teammate's win increases the probability of winning the next race by 1.8%. An own car failure increases the probability of winning the next race by 1.9%. We use two characteristics of success, frequency and competitiveness, to define a spectrum of organizational settings. Placement of our F1 findings and the surgery findings on this spectrum reveals when managers can expect benefits from their own versus others' success and failure.
History: Accepted by Charles Corbett, operations management.
Open Access Statement: This work is licensed under a Creative Commons Attribution-NonCommercial-
NoDerivatives 4.0 International License. You are free to download this work and share with others,
but cannot change in any way or use commercially without permission, and you must attribute this
work as Management Science. Copyright © 2022 The Author(s). https://doi.org/10.1287/mnsc.2022.
4324, used under a Creative Commons Attribution License: https://creativecommons.org/licenses/
by-nc-nd/4.0/.
Supplemental Material: The online appendix and data are available at https://doi.org/10.1287/mnsc.2022.
4324.
Keywords: learning curve • quality • failure learning • Formula One racing • industry study
1. Introduction
I had matured into an analytical professor of informa-
tion relating to the conditions of the track, my physi-
cal well-being and the car, to a point where I could
clearly define issues out of a blur into crystal clear
focus in milliseconds.
Jackie Stewart, Three-time Formula One World
Champion
I think crashes are necessary for the career of any rac-
ing driver. What matters is whether you learn your
lesson from them or not.
Niki Lauda, Three-time Formula One World Champion
Organizations can improve performance as a function
of experience. This learning-curve phenomenon has
been observed in many contexts for performance meas-
ures such as cost, quality, and survival (Lapré and Nembhard 2010, Argote 2013). Interestingly, organizations, teams, and individuals show tremendous variation in the rate with which they learn. Understanding what contributes to the variation in learning rates remains an active field of inquiry. Recently, scholars have distinguished successful experience from failure experience. For example, do defective outputs and good outputs contribute equally to learning by doing (Lapré and Nembhard 2010, Dahlin et al. 2018)? Most of the studies
on learning from success and failure are at the organi-
zational level. However, to better understand organiza-
tional learning, a better understanding of individual
learning is required.
A notable exception at the individual level is KC
et al. (2013). The authors study patient mortality in
cardiac surgery, where patient survival determines
the distinction between success and failure. KC et al.
(2013) find that surgeons learn from their own prior
success, but own prior failures worsen surgeon
performance. In contrast, surgeons learn from other
surgeons' failures (in the same hospital), but not from other surgeons' success. The authors explain that surgeons attribute success to themselves but failure to external factors. The authors pose the question of how their findings will hold up in contexts where success
is rare as opposed to common. Research on failure has
focused on settings where success is common (Haunschild
and Sullivan 2002). In cardiac surgery, fortunately, suc-
cess is common, whereas in contexts such as sports,
auctions, and bidding for contracts, success is both rare
and competitive. Winning prevents opponents from
winning. In contrast, surgeons do not attempt to
increase the mortality rate for other surgeons.
In this paper, we ask what role learning from success and failure, from own experience and others' experience, plays when success is rare and competitive. We study individual learning from success and failure in the context of Formula One (F1) racing. Only one driver will win a Grand Prix race, thereby preventing all other drivers from winning that race, essentially a zero-sum game. So, in F1, success is both rare and competitive. In
F1, even severe failures forcing drivers to abandon the
race occur with high frequency. Eichenberger and
Stadelmann (2009) classify abandonments in F1 as
equipment dropouts or human dropouts. Accidents can
have different causes, and therefore learning can be
greater from some accidents than others (Baum and
Dahlin 2007). Hence, it is worthwhile to investigate
learning from different types of failures (Haunschild
and Sullivan 2002, Baum and Dahlin 2007). In this
paper, we study learning from both types of severe fail-
ures in F1: car failures and driver failures.
2. Learning in Formula One
2.1. The Context
Formula One (F1) is a series of Grand Prix races on
unique tracks throughout the world. F1 has been held
every year since the inaugural season in 1950 (Smith
2016). Some races are held on occasional tracks, such
as the Monaco Grand Prix, which uses regular streets,
whereas most races are held on permanent race tracks,
such as Monza in Italy or the Circuit of the Americas
in Austin, TX. A few races are held on semipermanent
tracks combining regular streets with a permanent
track, such as Sochi in Russia. A typical race weekend
has 20 or more drivers. First, drivers use prerace practice sessions to "set up" the car. This is a complicated process unique to each track, because tracks differ tremendously in terms of straights and corners (Lauda 1977). A car setup involves many trade-offs; for example, a change in a wing setting could reduce straight-line speed but improve cornering speed. Next, prerace qualifying session(s) determine the order of the cars on the starting grid for the race. In the actual race, the drivers with the top finishing positions score points. At the conclusion of the race, the top three finishers receive trophies at a podium ceremony. The driver with the most points at the end of the season is crowned the drivers' world champion. In F1, a team (formally, entrant) is the entity that registers its car(s) and driver(s) for a race. Typically, teams enter two cars. Teams prepare and maintain the cars during the race weekend. Cars are designed and built by constructors. Cars differ significantly across constructors. In the early decades, there were both "works teams," who constructed their own cars, and "privateer teams," who bought cars from constructors. Since 1981, privateers have no longer been allowed to enter a race. So, nowadays, "team" and "constructor" are synonymous. In 1958, F1 added a constructors' world championship (Smith 2016). Cars with the top finishing positions score points counting toward the constructors' world championship. So, a driver win also counts as a team win. Both the drivers' world championship and the constructors' world championship are prestigious and financially important for F1 teams (Stewart 2009). F1 has become a big business with global appeal. In 2016, U.S. firm Liberty Media
bought F1 for $4.4 billion. In 2017, F1 enjoyed 352
million unique viewers worldwide (formula1.com).
As Gino and Pisano (2011, p. 70) note, "Racing may seem a long way from the world of business, but in fact it provides a perfect laboratory for research on learning." Race results are objective measures of performance, so drivers always know how their longitudinal performance stacks up against the competition. Since only one driver can win a race, racing is a great setting to study zero-sum competition in tournaments (Bothner et al. 2007). Recently, F1 has been used for empirical research (Aversa et al. 2015). Castellucci et al. (2011) studied the impact of aging on points scored by F1 drivers. The authors found an inverted U-shaped relationship between age and productivity. Eichenberger and Stadelmann (2009) and Bell et al. (2016) built empirical models to find rankings of the best F1 drivers. Castellucci and Ertug (2010) studied the role of status in exchange relationships. The authors found that high-status F1 teams can secure more engine modifications and redesigns from their engine suppliers than low-status teams can. Studying escalation of competition, Piezunka et al. (2018) found that F1 drivers of similar status (in terms of characteristics such as age and points scored) are more likely to collide. None of these papers study learning curves in F1. One notable exception is the work of Mourão (2017), who found that own prior success increases the probability of winning a race. However, Mourão (2017) did not study learning from teammates' success, own failures, or teammates' failures. Mourão (2018) found that podium positions prolong F1 driver
careers, whereas failures to finish races shorten F1 driver careers. These findings further substantiate the importance of studying learning from success and failure in F1.
In F1, learning is imperative. Each year, teams introduce new cars. During the year, teams and drivers develop their cars. For each race, they need to learn the best car setup for the driver-car-track combination. Performance in a race depends critically on this driver-car-track specific setup of the car. For example, "[no] two Grand Prix tracks call for the same optimum transmission gearing" (Lauda 1977, p. 66). In the early decades of F1, two-time world champion Graham Hill "used to keep a small black book, in which he recorded every race and lap time, every mechanical detail of the cars he drove: spring ratings, valve settings on the dampers, every roll stiffness. Nothing was omitted" (Stewart 2009, p. 143). Nowadays, F1 cars use 140 sensors yielding 15 gigabytes of data each race weekend. Regardless of the era in F1, every race weekend, each team is figuring out the optimal car setup to maximize performance of the driver-car-track combination (Stewart 2009).
2.2. Hypotheses
Our F1 performance measure is the probability of
winning a Grand Prix race. Winning races is the ulti-
mate goal for F1 drivers (Prost 1990, Aversa et al.
2015). Three-time world champion Jackie Stewart
describes the importance of winning:
But for most sporting champions even second place is
regarded as just another form of losing. If anybody
offered me a million pounds today to tell them how
many times I finished second in a Grand Prix, I
wouldn't know the answer because it doesn't matter. On the other hand, I remember each of my wins. (Stewart 2009, pp. 52–53)
Winning a race is a measure of performance relative
to all the other drivers in a race. The U-shaped
learning-curve literature has studied relative measures
of performance at the organizational level (Ingram and
Baum 1997, Baum and Ingram 1998, Ingram and
Simons 2002, Lapré and Tsikriktsis 2006). This literature expects absolute measures of performance such as efficiency to continue to improve as a function of experience. Yet, building on the notion of competency traps (Levitt and March 1988), this literature argues that relative measures of performance, such as organizational survival, profitability, and customer dissatisfaction, will follow a U-shaped function of experience. Initially, relative performance improves as a function of operating experience as organizations learn to perfect their routines. However, in the long run, routines can become obsolete as the organizational environment changes. Baum and Ingram (1998), for example, documented U-shaped learning curves for organizational failure in the hotel industry. Similarly, Lapré and Tsikriktsis (2006) observed U-shaped learning curves for customer dissatisfaction in the airline industry.
As an organization consists of individuals, we
expect a U-shaped relationship between experience
and relative measures of performance for individuals
as well. Haltiwanger et al. (1999), for example, docu-
mented lower levels of individual sales per employee
in firms with higher proportions of older workers (above the age of 55). So, at higher ages, individual performance can decline. In F1, Castellucci et al. (2011) found that points scored by F1 drivers followed an inverted U-shaped function of age. Even though age had a positive effect on points scored, age squared had a negative effect on points scored. Since learning curves capture performance as a function of experience, as opposed to age (Argote 2013), we expect win probability to follow an inverted U-shaped function of racing experience.
Several reasons contribute to the eventual decline in
win probability in F1. Graham Hill won world cham-
pionship titles in 1962 and 1968. He won 14 races
between 1962 and 1969. In the U.S. Grand Prix of
1969, he had a bad accident and suffered serious leg
injuries (Smith 2016). Although he would recover and
continue to race until retiring from F1 in 1975, he
never won another race again. Many successful, expe-
rienced drivers end up with less competitive teams at
the end of their careers for various reasons. Emerson
Fittipaldi, for example, won world championship
titles with Lotus in 1972 and McLaren in 1974, but, by
1975, "his commitment and input weren't always at his usual level" (Ménard et al. 2006, p. 33). In 1976, Fittipaldi chose to race for a less-competitive team that he had founded himself with his brother. Whereas Fittipaldi had won 14 races from 1970 through 1975, he would not win a single race from 1976 until the end of his career in 1980. Damon Hill peaked in 1996, the year he won the world championship driving for Williams. In his 67th race, he notched his 21st career win in the last race of 1996. At the end of 1996, Williams informed Hill that he was not being retained (Ménard et al. 2006). In his final three years, he drove for less competitive teams Arrows and Jordan, winning only one race. He retired from F1 after his 115th race at the end of 1999. Because of factors including declining driver skills, injuries, and moving to less competitive teams at the end of a career, voluntary or involuntary, we hypothesize the following.
Hypothesis 1. For F1 drivers, win probability follows an
inverted U-shaped function of racing experience.
KC et al. (2013) found that surgeons do not learn
from other surgeons' success in the same hospital. We
argue the opposite in F1, where success is rare. When
success is rare, and consequently failure is common, success by others becomes beneficial to learn from (Baum and Dahlin 2007). Any learning-by-doing effect builds on repetition. Setting up a car requires experimentation with many variables (gear ratios, camber and toe for tires, suspension settings, etc.). Variability in car settings can create noise, which makes it hard to learn from experiments (Bohn 1995). The simplest way to overcome this problem is to increase the sample size of useful observations (Bohn and Lapré 2011). Success by others can augment own success to accumulate useful experience more quickly. So, when success is rare, success from others increases the number of successes that an individual can learn from.
Learning from others, however, is not straightfor-
ward. Lapré et al. (2000) found that transfer of a successful practice within the same organization requires both a proven practice (in F1, the setup of a winning car) and a causal understanding of the principles behind the practice. Cars are very different from team to team. However, a teammate's car is identical, meaning it has the same chassis (with its many components), the same engine, and the same tires. Moreover, all components originate from the same suppliers. Hence, the engineering principles used in the setup (Lauda 1977) of a teammate's winning car have high validity because the teammate's success was achieved with the same car.
Will teammates actually share winning car setups?
In F1, teams have a major incentive to share such
knowledge because points scored by both drivers
count toward the constructors' world championship. Yet, each driver wants to win races themselves. So, are drivers on a team cooperative or competitive? We provide both driver and team anecdotes of cooperation on F1 teams. Jackie Stewart described knowledge sharing with his teammate François Cevert:

I always made a conscious effort to help [François Cevert] develop, and from the start we discussed everything. The relationship worked on both sides. . . . We drove essentially the same car in 1971, 1972 and 1973, and, in testing, practice, qualifying and before the race, I was happy to share everything with him, from my overall strategy to the gear ratios I would use, to the gears I was planning to take at each and every corner, and all my braking distances. There were no secrets. (Stewart 2009, pp. 269–270)
The Austrian Grand Prix of 1986 illustrates the benefits of working together. Keke Rosberg and Alain Prost were the two drivers for McLaren. In a divide-and-conquer strategy, during practice Rosberg tried out the traditional settings, whereas Prost experimented with some new settings. During the warm-up before the race, Prost was unhappy with the new setup of his car. The team changed the setup for Prost's car in a classic manner, approximating the settings used by Rosberg (Prost 1990). The car behaved perfectly, and Prost won the race.
Teams want drivers to cooperate because of the constructors' world championship. Drivers can clearly benefit from cooperation, as the Rosberg-Prost example shows. However, each driver also wants to win. So, for teams, keeping a spirit of knowledge sharing while still allowing drivers to race for individual success is key. Across four decades, Ross Brawn worked for several F1 teams in roles ranging from engineer to technical director to team principal. In these roles, he contributed to 10 drivers' world championships (seven with Michael Schumacher) and 10 constructors' world championships for Williams, Benetton, Ferrari, and Brawn (his own team). When asked about how to handle the psychological battle between two teammates, Brawn replied:

You can't avoid it completely, but the blatant and ugly stuff I think I generally managed to avoid. It can be destructive to the team, because it can seep into the mechanics, into the engineers. I always wanted a competitive spirit between all the crew, but it was a balancing act of then pulling them back together and saying we are all in this together. This is one team. So if you do something to benefit your driver at the expense of the other driver, that's unacceptable. (Brawn and Parr 2017, p. 36)

Based on the sample-size argument, the engineering principles behind a teammate's winning car setup, and the motivation to share knowledge within the team, we hypothesize the following.
Hypothesis 2. In F1, drivers learn from teammates' prior successes.
KC et al. (2013) argued that surgeons do not learn
from their own failures due to attribution theory: individuals attribute failures to external factors. Fortu-
nately, in healthcare, success is common and failure is
rare. In F1, on the other hand, failure is common.
Moreover, F1 allows us to distinguish between two
different types of reasons causing drivers to abandon
a race: car failures, such as blown engines and broken
suspensions, and driver failures, such as accidents
and collisions. Consequently, F1 provides a context to
address the call for research in which we can differen-
tiate failures (Baum and Dahlin 2007). Dahlin et al.
(2018) provide an excellent review of the literature on
failure learning. The authors identify three mecha-
nisms to learn from failure: opportunity, motivation,
and ability. Failure learning is difficult, as it requires
all three mechanisms (Dahlin et al. 2018).
The opportunity to learn from failure comes from
the information about previous failures. Failures that
are larger in magnitude, more frequent, and salient
have greater information content and thus provide
learning opportunities (Chuang and Baum 2003, Baum
and Dahlin 2007, Madsen and Desai 2010, Dahlin et al. 2018). Car failures are significant, as they are costly both in terms of money and sporting losses. Car failures are frequent. From 1950 to 2017, only 4.5% of all driver-race observations represent wins, whereas 30% of all driver-race observations represent car failures. Car failures are salient, as they prohibit a driver from competing. As the adage in F1 goes, "to finish first, first you've got to finish." Clearly, in F1, car failures are large, frequent, and salient and provide learning opportunities. Complex problems provide a greater opportunity to learn from the information about prior failures (Stan and Vermeulen 2013). An F1 car is very complex. Car failures have many different causes: engine, gearbox, transmission, suspension, tires, brakes, and so on. Heterogeneous causes benefit learning about complex systems, resulting in more in-depth analysis of underlying causes as opposed to blaming an operator (Haunschild and Sullivan 2002). Thus, own car failures allow for in-depth root-cause analysis. Car failures can often be traced back to engineering, manufacturing, machining, assembly, or suppliers (Stewart 2009). For example, Stewart had to retire from the Monaco Grand Prix in 1969 because the drive shaft failed. An investigation subsequently revealed the problem was "directly caused by a bad batch of universal couplings that had slipped through the outside supplier's inspection" (Stewart 2009, p. 202).
Motivation to learn from failure is the desire to
invest in reducing failure frequency (Dahlin et al.
2018). A failure is the result of performance falling
short of an aspirational level. A shortfall in perfor-
mance triggers a search for solutions (Baum and
Dahlin 2007). When success is rare and competitive,
many failures can intensify the search for solutions to
become competitive because competitors aspire to
win. Car failures are frustrating, but they happen. "You have to accept it, resolve it, regroup and move on" (Stewart 2009, p. 263). When own success experi-
ence is rare, the only substantial own experience left
to analyze is failure experience. Attributing failures to
external factors in order to avoid blame can hinder
learning from own failure (KC et al. 2013, Desai 2015).
However, as we have mentioned, own car failures can
be investigated with objective root-cause analysis, pre-
venting the driver from being blamed. Attributing
failure to the car as opposed to the driver enhances
the willingness to learn from car failures.
Ability to learn from failure refers to identifying
failure, understanding failure, and implementing sol-
utions to prevent future failures (Dahlin et al. 2018). In
F1, identifying failure is trivial. F1 teams continuously
monitor their cars and will immediately notice if a
driver abandons a race due to a car failure. After-
event reviews enhance failure understanding (Ellis
et al. 2006). In F1, there are both comprehensive
reviews an hour or so after the race as well as race
debriefs on the day after the race (Brawn and Parr
2017). Next, we illustrate how F1 drivers are able to
learn from car failures in the closing stages of a race.
Three-time world champion Niki Lauda explains:
You should only drive as fast as is necessary to win.
If I am lying first and [my pit crew] show me +10
[seconds lead over the driver in second place] and I
still have ten laps to go, then I will drop back about
one second per lap . . . and that is what the others
also do if they are in front, particularly the really cool
and intelligent drivers like Fittipaldi. What looks like
the "drama" of the finish is often nothing more than the leader's deliberately slackening off, to spare his own car. (Lauda 1977, p. 197)

Toward the end of a race, the driver lying first should focus on bringing the car home for the win. The leader should not unnecessarily push a complex F1 car with all its components to the limit. Any component is susceptible to failure. In the closing stages of the race, the leader can effectively take the time to reflect on past car failures, listen for any unusual sounds indicative of any potential car problems, and nurse the car home (Lauda 1977). Since own car failures provide opportunity, motivation, and ability to learn from prior failures, we hypothesize the following.
Hypothesis 3. In F1, drivers learn from their own prior
car failures.
Next, we compare learning from abandonments
due to driver failures versus car failures. Driver fail-
ures are less frequent than car failures. Of all driver-
race observations, 12% are driver failures, whereas
30% are car failures. So, lower frequency of driver fail-
ures reduces learning opportunities.
If failures are less concentrated and more broadly
dispersed, then there will be a search for more thor-
ough knowledge regarding causal and contributing
factors (Desai 2015). The distribution of car failures is
less concentrated, as there are many possible reasons
for car failures (engine, gearbox, transmission, suspen-
sion, brakes, tires, etc.). On the other hand, the distri-
bution of driver failures causing abandonment is
more concentrated. Single-driver accidents and multi-
driver collisions are typically caused by driver error
or driving style. Hence, the difference in distribution
of failures reduces learning from driver failures.
Heterogeneous causes in car failures can shift focus
away from blaming the driver toward in-depth root-
cause analysis (Haunschild and Sullivan 2002).
Conversely, driver failures consist of accidents and
collisions. In multidriver collisions, a driver can place
blame with another driver. In single-driver accidents,
drivers may at least partially attribute accidents to
external factors. "In motor racing . . . , you often hear people complaining about their bad luck" (Stewart
2009, p. 202). Consequently, as drivers attribute driver
failures to external factors, drivers reduce their will-
ingness to learn from their own driver failures. Dahlin
et al. (2018, p. 270) note that using external attributions to avoid altering one's method of working is an individual-level defense mechanism demonstrating both lack of motivation and inability to learn.
Piezunka et al. (2018) found that F1 drivers of simi-
lar status are more likely to collide. The authors quote
Damon Hill: "If I am pushed, I will push back, that is the way I am. I am very British. We don't like to be pushed around. When the chips are down we might have to step into grey areas" (Piezunka et al. 2018, p. E3362). The escalation of competition into conflict among F1 drivers suggests that F1 drivers have less motivation to learn from collisions. Given the lower
levels of opportunity and motivation to learn from
driver failures compared with car failures, we hypoth-
esize the following.
Hypothesis 4. In F1, drivers learn less from their own
prior driver failures than from their own prior car failures.
3. Data and Method
We collected data on all F1 Grand Prix races from the
start of F1 in 1950 through 2017. Our main data sources
were formula1.com, Ménard et al. (2006), and Wikipedia: WikiProject Formula One. All three sources document, for each race, the starting position for each driver on the grid and the race result, which is the finishing position or a "did not finish" (DNF) if the driver did not finish the race. Eichenberger and Stadelmann (2009) classify DNFs as human dropouts or technical dropouts. Similarly, we use the DNF reasons and race reports documented on statsf1.com, race-database.com, grandprix.com, Wikipedia: WikiProject Formula One, and Ménard et al. (2006) to classify each DNF as Driver DNF or Car DNF. (We describe the classification procedure in
the online appendix.) A Driver DNF is typically caused
by an accident or a collision: 95.7% of Driver DNF
observations are single-driver accidents or multidriver
collisions. The few remaining Driver DNF observations
concern disqualifications (e.g., for ignoring a flag) or drivers being physically unfit to continue a race (e.g., due to exhaustion). A Car DNF is the result of a failure due to a broken engine, gearbox, transmission, suspension, tire, brake, and so on. Unlike Driver DNF, there are many possible causes for a Car DNF. After a car failure, cars are fixed before the next race, either by repairing/replacing the broken part or by rebuilding the car.
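The classification procedure itself is documented in the paper's online appendix; purely as an illustration of the idea, a keyword-based first pass over recorded DNF reasons could look like the sketch below (the keyword lists, labels, and helper name are hypothetical).

```python
# Illustrative sketch only: map a free-text DNF reason to the two failure types.
# Ambiguous cases would still require race reports and manual review.
CAR_KEYWORDS = ("engine", "gearbox", "transmission", "suspension",
                "brakes", "tyre", "tire", "electrical", "hydraulics")
DRIVER_KEYWORDS = ("accident", "collision", "spun off", "disqualified")

def classify_dnf(reason: str) -> str:
    r = reason.lower()
    if any(keyword in r for keyword in DRIVER_KEYWORDS):
        return "Driver DNF"
    if any(keyword in r for keyword in CAR_KEYWORDS):
        return "Car DNF"
    return "Unclassified"
```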
From 1950 through 1960, the Indy 500 race counted
toward F1, even though the Indy 500 was the only
race not held under FIA rules (the FIA is the govern-
ing body of F1). Regular F1 drivers ignored the Indy
500, and Indy 500 drivers ignored the regular F1 races.
The Indy 500 results were essentially irrelevant for the
F1 championship (Smith 2016). Excluding the eleven
Indy 500 races, our data set includes 965 races and 656
drivers for a total of 21,616 driver-race observations.
In the early years of F1, drivers would sometimes
"share a drive." At some point during a race, a driver would come into the pits and turn the car over to another driver. This drive-share practice typically happened during long, hot races when drivers would get tired. From 1950 through 1957, drivers who shared a drive and finished in the points shared the points. In 1958, drivers could no longer score points with drive shares. Teams quickly stopped sharing drives. Drive shares account for 0.7% of our observations. For 129 of these observations, drivers drove multiple cars. If a driver crashed, then the driver could take over a teammate's car and get another chance in the same race. Limiting our data set to drivers who only drove one car in a race, we end up with 965 races and 655 drivers for a total of 21,487 driver-race observations. In robustness tests excluding all drive shares, none of our findings changed.
Out of all 21,487 observations, 29.8% are Car DNF
observations and 12.1% are Driver DNF observations.
In contrast, only 4.5% of the observations represent
wins. Clearly, in F1, success is rare and severe failure
is common.
3.1. Variables
Our main dependent variable is Win_dr = 1 if driver d won race r, and 0 otherwise. We also use an alternative dependent variable Podium_dr = 1 if driver d finished first, second, or third in race r, and 0 otherwise. Our main independent variables measure different types of experience. Let Cumulative Races_dr be the number of races started by driver d prior to race r. We include (Cumulative Races_dr)^2 to test for an inverted U-shaped learning-curve effect. To capture own success experience, we define Cumulative Wins_dr as the number of races won by driver d prior to race r. Similarly, we calculate Cumulative Car DNF_dr and Cumulative Driver DNF_dr to capture own failure experiences. We measure teammates' success experience with Cumulative Teammate Wins_dr, which sums the wins by drivers who at the time of the win were on the same team as driver d, prior to race r. Likewise, we calculate Cumulative Teammate Car DNF_dr and Cumulative Teammate Driver DNF_dr to measure teammates' failure experiences. At the end of a season, several drivers might change teams. As an example of driver churn, the appendix shows the teams and teammates for Alain Prost's career. The example illustrates which teammates contributed to Prost's cumulative teammate experiences. We define cumulative podium variables analogous to the cumulative win variables.
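A minimal pandas sketch of how such prior-to-race-r counters could be built from a driver-race table; the toy DataFrame and its column names are hypothetical, and the teammate counters, which also require the team roster at each race, are omitted.

```python
import pandas as pd

# Toy driver-race panel (hypothetical names and values): one row per driver-race
# with 0/1 indicators for a win, a car failure (Car DNF), and a driver failure.
df = pd.DataFrame({
    "driver":     ["A", "A", "A", "B", "B", "B"],
    "race_no":    [1, 2, 3, 1, 2, 3],
    "win":        [0, 1, 0, 1, 0, 0],
    "car_dnf":    [1, 0, 0, 0, 0, 1],
    "driver_dnf": [0, 0, 1, 0, 1, 0],
})

df = df.sort_values(["driver", "race_no"])
grouped = df.groupby("driver")

# Experience accumulated strictly prior to race r: the current race is excluded.
df["cum_races"] = grouped.cumcount()
for outcome in ("win", "car_dnf", "driver_dnf"):
    df[f"cum_{outcome}"] = grouped[outcome].cumsum() - df[outcome]

print(df)
```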
We use several control variables. Let Grid Position_dr be the position for driver d on the grid at the start of race r (1 for the first driver, 2 for the second, etc.). The control variable Grid Position is determined by prerace
qualifying session(s). The faster the driver-car combi-
nation on the track, the better the Grid Position. There
is variation in driver ability and car performance both
within a race and across races. Note that Grid Position
is a succinct control variable for how fast the driver-
car-track combination is (Piezunka et al. 2018). The
combination of these three elements (driver, car, and
track) varies from observation to observation, and this
variation can be very different from merely three fixed effects held constant for the entire data set. First, consider a driver moving from a great team to a mediocre team. In 1996, Damon Hill drove for Williams, the best team in 1996. Hill won the world championship, and his teammate Jacques Villeneuve was the runner-up. In 16 races, Hill's average grid position was 1.44. In 12 classified finishes (covering 90% of the race distance), his average finishing position was 1.75 (including eight wins). In 1997, in stark contrast, Hill drove for Arrows, a team which had never won a race. His average grid position was 11.8. He had six DNFs. In 10 classified finishes, his average finishing position was 9.3 (no wins). Grid position controls for the car-driver differences between Hill driving for Williams in 1996 versus Hill driving for Arrows in 1997. Moreover, teams' competitive performance varies significantly as well. Since 2005, Williams has only won a single race (in 2012). Second, consider variations in car-track combinations. Some cars perform comparatively better on certain tracks. In 1980, Renault was the only team with turbo engines. Three races were held at altitude (Brazil, South Africa, and Austria), where Renault had an advantage (Ménard et al. 2006). The average starting position for the two Renault drivers, Jabouille and Arnoux, at the three altitude races was 2.2 compared with 8.5 at all other tracks. The two Renault drivers combined for three wins in 1980, all obtained at the three altitude tracks. At the other tracks, Renault had more DNFs than classified finishes. Grid position controls for the differential car-track performance of the 1980 Renault at altitude versus sea level. In subsequent years, other teams switched to turbo engines. By 1985, every team used turbo engines and Renault's altitude advantage had vanished.
Using dates of birth and race dates, we determine Age_dr as the age of driver d at the time of race r (Castellucci et al. 2011). We define Home Edge_dr = 1 if race r was held in the home country of driver d, and 0 otherwise (Castellucci et al. 2011). Finally, we use race controls: X_ir is the value for the ith control variable for race r. The race controls are wet weather conditions, occasional track, permanent track, and number of drivers starting (Castellucci et al. 2011). Both Smith (2016) and Wikipedia document the data for all of these race controls.
For robustness tests, we introduce several variables. We create individual driver dummy variables for each of the drivers who won three or more world championship titles. These triple world champions are considered the greatest in the sport: Juan Manuel Fangio, Jack Brabham, Jackie Stewart, Niki Lauda, Nelson Piquet, Ayrton Senna, Alain Prost, Michael Schumacher, Sebastian Vettel, and Lewis Hamilton. To measure failures by competitors, we calculate Cumulative Competitor Driver DNF_dr as the number of driver failures by other teams' drivers in races started by driver d prior to race r. In the online appendix, we explain how we code the Driver DNF observations as single-driver accidents and multidriver collisions. Subsequently, we create Cumulative Single-driver DNF and Cumulative Multidriver DNF variables for the focal driver, teammates, and competitors. In the online appendix, we define additional variables for robustness tests. Table 1 shows the summary statistics.
3.2. Methodology
Following KC et al. (2013) and Clark et al. (2018), we
estimate learning-curve effects in a logistic regression
framework. As these authors note, this log-linear form
is identical to the theoretically derived learning-curve model by Lapré et al. (2000) and Lapré and Tsikriktsis (2006):

\[
\ln\frac{\Pr(\text{Win}_{dr}=1)}{1-\Pr(\text{Win}_{dr}=1)} = \alpha_0 + \alpha_1\,\text{Grid Position}_{dr} + \alpha_2\,\text{Age}_{dr} + \alpha_3\,\text{Home Edge}_{dr} + \sum_i \alpha_{ir} X_{ir} + \beta\,\text{Cumulative Races}_{dr} + \gamma\,(\text{Cumulative Races}_{dr})^2 + e_{dr}.
\]

A positive value for β and a negative value for γ would support Hypothesis 1. In the full model, we replace Cumulative Races with own and teammates' success and failure experience:

\[
\ln\frac{\Pr(\text{Win}_{dr}=1)}{1-\Pr(\text{Win}_{dr}=1)} = \alpha_0 + \alpha_1\,\text{Grid Position}_{dr} + \alpha_2\,\text{Age}_{dr} + \alpha_3\,\text{Home Edge}_{dr} + \sum_i \alpha_{ir} X_{ir} + \beta_1\,\text{Cumulative Wins}_{dr} + \beta_2\,\text{Cumulative Teammate Wins}_{dr} + \beta_3\,\text{Cumulative Car DNF}_{dr} + \beta_4\,\text{Cumulative Driver DNF}_{dr} + \beta_5\,\text{Cumulative Teammate Car DNF}_{dr} + \beta_6\,\text{Cumulative Teammate Driver DNF}_{dr} + \gamma\,(\text{Cumulative Races}_{dr})^2 + e_{dr}.
\]

A positive value for β_2 would support Hypothesis 2; a positive value for β_3 would support Hypothesis 3; β_3 > β_4 would support Hypothesis 4.
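For concreteness, the following is a minimal sketch of how a specification of this form could be estimated with statsmodels. The DataFrame, column names, and synthetic data are hypothetical stand-ins for the driver-race panel of Section 3.1, several controls and the teammate DNF terms are omitted for brevity, and the race-clustered standard errors are only a rough substitute for the cluster-specific and population-average estimators discussed in Section 3.2. The last line runs the kind of Wald test used for the Hypothesis 4 comparison.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_races, drivers_per_race = 40, 20
n = n_races * drivers_per_race

# Synthetic stand-in for the driver-race panel (all column names hypothetical).
df = pd.DataFrame({
    "race_id": np.repeat(np.arange(n_races), drivers_per_race),
    "grid_position": np.tile(np.arange(1, drivers_per_race + 1), n_races),
    "age": rng.uniform(20, 40, n),
    "cum_wins": rng.integers(0, 30, n),
    "cum_teammate_wins": rng.integers(0, 20, n),
    "cum_car_dnf": rng.integers(0, 40, n),
    "cum_driver_dnf": rng.integers(0, 15, n),
    "cum_races": rng.integers(0, 150, n),
})
df["win"] = (rng.random(n) < 0.05).astype(int)

# Logit of win probability on own and teammate experience, with standard errors
# clustered by race as a simple way to acknowledge within-race dependence.
formula = (
    "win ~ grid_position + age"
    " + cum_wins + cum_teammate_wins"
    " + cum_car_dnf + cum_driver_dnf"
    " + I(cum_races ** 2)"
)
res = smf.logit(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["race_id"]}
)
print(res.summary())

# Wald test of the Hypothesis 4 comparison: equality of the own car-failure
# and own driver-failure coefficients.
print(res.wald_test("cum_car_dnf = cum_driver_dnf"))
```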
In logistic regression, Hosmer et al. (2013) recommend at least 10 events per parameter to avoid overfitting. With 965 wins in our data set, Hosmer's recommendation means that we should not estimate more than 96 parameters. Thus, we cannot jointly include dummy variables for drivers, teams, and races. Fortunately, we do not have to do so. As we have explained, Grid Position succinctly controls for the driver-car-track performance across all observations. Furthermore, in robustness tests, we separately include dummy variables to control for several driver, team, track, and year fixed effects.
Within a single race, observations are not independent. If one driver wins the race, then the other drivers lose. There are two approaches commonly used to model correlated binary data: a random effects model (also called a cluster-specific model) and a population average model (Hosmer et al. 2013). The cluster-specific model "is most useful when the goal is to provide inferences for covariates that can change within cluster, whereas the population average model is likely to be more useful for covariates that are constant within cluster" (Hosmer et al. 2013, p. 317). Our observations are clustered by race. Most of the independent variables of interest, such as the experience variables, vary within a race, whereas some of the control variables such as the race controls are constant within a race. Therefore, we report all of our estimations obtained with the cluster-specific model. We also estimated all of our models with the population average model. Both methods yield the same findings and the same support for our hypotheses.
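The population-average estimation the authors describe could be implemented, for example, as a binomial GEE with observations grouped by race. The sketch below uses hypothetical column names and synthetic data; the paper does not specify which software or exact working-correlation structure was used.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_races, drivers_per_race = 30, 20
n = n_races * drivers_per_race

# Synthetic stand-in for the driver-race panel (column names hypothetical).
df = pd.DataFrame({
    "race_id": np.repeat(np.arange(n_races), drivers_per_race),
    "grid_position": np.tile(np.arange(1, drivers_per_race + 1), n_races),
    "cum_wins": rng.integers(0, 30, n),
    "cum_races": rng.integers(0, 150, n),
})
df["win"] = (rng.random(n) < 0.05).astype(int)

# Population-average (marginal) model: binomial GEE with exchangeable
# correlation within each race cluster.
gee_res = smf.gee(
    "win ~ grid_position + cum_wins + I(cum_races ** 2)",
    groups="race_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
).fit()
print(gee_res.summary())
```

An exchangeable working correlation treats any two drivers in the same race symmetrically; other working-correlation structures could be swapped in.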
4. Empirical Results
Table 2 shows the logistic regression results for Win. The first model provides evidence for an inverted U-shaped function of Cumulative Races. The positive and statistically significant coefficient for Cumulative Races combined with the negative and statistically significant coefficient for (Cumulative Races)² supports Hypothesis 1. In the second model, the positive and statistically significant coefficient for Cumulative Wins is evidence that drivers learn from their own prior successes. The positive and statistically significant coefficient for Cumulative Teammate Wins supports Hypothesis 2. The positive and statistically significant coefficient for Cumulative Car DNF supports Hypothesis 3. The coefficient for Cumulative Driver DNF is smaller than the coefficient for Cumulative Car DNF. We use a Wald test to determine whether the two coefficients are statistically significantly different from each other. The Wald test rejects that the two coefficients are equal (p < 0.05). So, Hypothesis 4 is supported. The coefficient for the teammates' driver failures is positive and statistically significant, whereas the coefficient for teammates' car failures is not significant.
Our research focus is individual driver learning. In F1, learning can also take place at the team level. Team fixed effects can control for team factors that can contribute to learning. We cannot include a team dummy for every team, because we would substantially exceed the recommended maximum of 96 parameters that we can estimate with just 965 wins (Hosmer et al. 2013). However, a small portion of teams wins a disproportionate share of races. There are 20 teams with five wins or more. In fact, these 20 teams have at least eight wins. With this cutoff, we have at least eight wins per team dummy variable. This is close to 10 wins per dummy variable, which is the preferred minimum (Hosmer et al. 2013). These 20 teams account for 97.6% of all wins. See the appendix for more information on the teams. Models (3) and (4) in Table 2 show the results when we control for these 20 team fixed effects. The results are robust: all four hypotheses continue to be supported.
Table 1. Summary Statistics

Variable | Mean | Standard deviation | Min | Max
Win | 0.0448 | 0.2069 | 0 | 1
Podium | 0.1342 | 0.3406 | 0 | 1
Cumulative Races | 59.85 | 59.86 | 0 | 321
Cumulative Wins | 3.81 | 9.72 | 0 | 91
Cumulative Teammate Wins | 3.27 | 6.73 | 0 | 55
Cumulative Car DNF | 16.69 | 17.03 | 0 | 113
Cumulative Driver DNF | 7.26 | 7.52 | 0 | 39
Cumulative Teammate Car DNF | 15.69 | 16.29 | 0 | 92
Cumulative Teammate Driver DNF | 7.30 | 8.27 | 0 | 45
Grid Position | 11.96 | 6.84 | 1 | 34
Age | 30.13 | 5.14 | 17.46 | 55.80
Home Edge | 0.0828 | 0.2756 | 0 | 1
Wet Weather Conditions | 0.1575 | 0.3643 | 0 | 1
Permanent Track | 0.7301 | 0.4439 | 0 | 1
Occasional Track | 0.1534 | 0.3604 | 0 | 1
Number of Drivers Starting | 22.83 | 3.03 | 6 | 34

Note. N = 21,487.
For the average F1 driver, the estimates imply that the decline starts at the 122nd race, which is after the mean cumulative number of races plus one standard deviation. The inverted U-shape for experience is observed in the sample, as there are 51 drivers with more than 122 races. The average Pr(Win) starting from an average of 12th place is 0.0072. Learning from a teammate's win increases Pr(Win) by 0.00013, or 1.8%, on average. Learning from an own car failure increases Pr(Win) by 0.00014 (1.9%) on average. Note that Grid Position effectively controls for the performance of the driver-car-track combination. The negative and statistically significant coefficient for Grid Position means that a higher position number (i.e., farther away from the front) implies a lower probability of winning the race. Figure 1 illustrates that learning effects depend heavily on Grid Position. Starting from first place, an additional teammate's win (own car failure) can increase Pr(Win) by as much as 0.0044 (0.0047).
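As a worked check on these magnitudes, applying the Cumulative Teammate Wins coefficient from model (4) of Table 2 (0.0175) to the average baseline probability of 0.0072 on the odds scale gives

\[
p_1 = \frac{p_0\, e^{\beta_2}}{1 - p_0 + p_0\, e^{\beta_2}}
    = \frac{0.0072 \times e^{0.0175}}{0.9928 + 0.0072 \times e^{0.0175}}
    \approx 0.00733,
\qquad
p_1 - p_0 \approx 0.00013 \approx 1.8\% \times p_0 .
\]

The same computation with the Cumulative Car DNF coefficient of 0.0189 yields an increase of roughly 0.00014, or 1.9%.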
We conducted several additional robustness tests.
Including individual dummies for all 655 drivers
would substantially exceed the recommended maxi-
mum of 96 independent variables. Moreover, for 559
winless drivers representing 45% of the observations,
driver dummies would perfectly predict the depen-
dent variable. Software packages deal with perfect
prediction in one of two ways: (i) drop perfectly
predicted observations, or (ii) retain all variables and
produce numerically unstable estimates (Hosmer et al.
2013). Either approach biases the estimates. Instead, to
investigate the potential impact of driver effects, we
introduce several driver-related control variables.
First, we include driver dummy variables for each of
the drivers who won three or more world champion-
ship titles. These exceptional drivers account for 42%
of all race wins. The first model in Table 3 shows that our results are robust when we include triple world champion fixed effects. In the second model in Table 3, we include both triple world champion fixed effects as well as the aforementioned team fixed effects. Again, our results are robust. All four hypotheses continue to be supported. Second, Online Appendix Tables 9–10 show the robustness of our results when we include additional driver-related variables (driver priority on the team, quality differences between teammates, number of cars entered by the driver's team, grid penalties, and protecting the lead in the drivers' championship standings toward the end of the season).
Sporting regulations in F1 have evolved over time.
For many seasons, the FIA has modified point scoring systems, qualifying procedures, and technical specifications such as engine size. To control for changes in
sporting regulations, we introduce year dummies.
Our results (reported in Online Appendix Table 12)
Table 2. Logistic Regression Models for Win: Base Models

Variable | (1) | (2) | (3) | (4)
Cumulative Races | 0.0118*** (0.0018) | – | 0.0123*** (0.0020) | –
Cumulative Wins | – | 0.0416*** (0.0040) | – | 0.0426*** (0.0043)
Cumulative Teammate Wins | – | 0.0280*** (0.0073) | – | 0.0175* (0.0078)
Cumulative Car DNF | – | 0.0162** (0.0054) | – | 0.0189** (0.0062)
Cumulative Driver DNF | – | −0.0223 (0.0116) | – | −0.0135 (0.0125)
Cumulative Teammate Car DNF | – | 0.0059 (0.0049) | – | 0.0066 (0.0052)
Cumulative Teammate Driver DNF | – | 0.0219* (0.0089) | – | 0.0269** (0.0095)
(Cumulative Races)² | −0.00004*** (0.00001) | −0.00004*** (0.00001) | −0.00005*** (0.00001) | −0.00006*** (0.00001)
Team fixed effects | No | No | Yes | Yes
Grid Position | −0.5209*** (0.0243) | −0.4798*** (0.0236) | −0.4740*** (0.0251) | −0.4388*** (0.0242)
Age | Yes | Yes | Yes | Yes
Home edge | Yes | Yes | Yes | Yes
Wet weather conditions | Yes | Yes | Yes | Yes
Permanent track | Yes | Yes | Yes | Yes
Occasional track | Yes | Yes | Yes | Yes
Number of drivers starting | Yes | Yes | Yes | Yes
Constant | Yes | Yes | Yes | Yes
Wald χ² | 642.30*** | 863.91*** | 736.97*** | 915.82***

Notes. N = 21,487. Standard errors adjusted for clustering on observations by race in parentheses.
*Significant at 0.05; ** at 0.01; and *** at 0.001.
are robust when we include year dummies. Similarly,
our results are robust when we include track dum-
mies (Online Appendix Table 13).
Models (1) and (2) in Table 3 show that drivers do not learn from their own driver failures or from teammates' driver failures. When failures are attributed to external factors, ideas from outside the organization might be necessary to change behavior (Baum and Dahlin 2007, Dahlin et al. 2018). In model (3) in Table 3, we introduce competitors' driver failures. The coefficient for Cumulative Competitor Driver DNF is not significant. However, we obtain additional insight in model (4), where we replace all Driver DNF variables with Single-driver DNF and Multidriver DNF measures. Consistent with model (3), drivers do not learn from either their own driver failures or their teammates' driver failures. Drivers do not learn from competitors' single-driver accidents either. Conversely, the coefficient for Cumulative Competitor Multidriver DNF is positive and significant, indicating that drivers do learn from competitors' multidriver collisions. Our hypotheses continue to be supported, even when we control for competitor failure experience and type of driver failure (single-driver accident vs. multidriver collision).
To examine whether endogeneity is a concern, we
use three approaches. First, we examine whether sam-
ple selection bias is a concern as it relates to high-
performing teams recruiting the highest-performing
drivers to drive for their teams. The clearest example
is Michael Schumacher driving for Ferrari and win-
ning five consecutive world championship titles.
There are only six pairings of the highest-performing
drivers driving for the same high-performing team
resulting in three or more world championships. See
Table A.3. These six collaborations represent 663 out
of 21,487 observations, which is just 3%. Yet, these six
collaborations account for 25% of all the wins in the
data set. So, could these six collaborations between
the highest-performing drivers driving for the same
high-performing team lead to any sample selection
bias? In robustness tests, we omit these six collabora-
tions representing 3% of our sample yet 25% of all the
wins. The results reported in Online Appendix Table
14 show that our findings are robust. All of our
hypotheses continue to be supported.
For the second approach to examine possible endo-
geneity, we follow the procedure used by Clark et al.
(2018) to assess reverse causality for experience and
the dependent variable. We estimate a regression
model with Cumulative Races as the dependent vari-
able and lagged Win as one of the independent varia-
bles. We also include the top team dummies and Age.
The results reported in Online Appendix Table 18
show that lagged Win is not significant. Hence, racing
experience does not seem to be endogenously deter-
mined by prior wins.
For the third way to examine endogeneity, we fol-
low the approach by Muthulingam and Agrawal
(2016). To break the potential mechanical relationship
between the experience variables and the dependent
variable, we estimate our learning-curve models with
increased lags (two, three, and four races) for our
experience variables. The models including both
team fixed effects and triple world champion fixed effects are robust. The only experience variables that are positive and significant are the same as in Table 3: own wins, teammate wins, own car failures, and competitors' multidriver collisions. Moreover, the results
reported in Online Appendix Tables 19 and 20 show
that all of our hypotheses continue to be supported.
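A small sketch of the increased-lag construction, assuming cumulative experience columns that already measure experience prior to the current race (the panel, column names, and values below are hypothetical):

```python
import pandas as pd

# Tiny stand-in panel: cumulative wins prior to each race, per driver.
df = pd.DataFrame({
    "driver":   ["A"] * 5 + ["B"] * 5,
    "race_no":  list(range(1, 6)) * 2,
    "cum_wins": [0, 0, 1, 1, 2, 0, 1, 1, 2, 3],
})

# Increased lags: explain race r with experience accumulated up to race r-2,
# r-3, or r-4 instead of r-1, breaking any mechanical link with the outcome.
for extra in (1, 2, 3):               # lags of two, three, and four races
    df[f"cum_wins_lag{extra + 1}"] = df.groupby("driver")["cum_wins"].shift(extra)

print(df)
```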
Figure 1. (Color online) Single Learning-Curve Effects

[Figure: nine curves of Pr(Win) (vertical axis, 0 to 1) against experience (horizontal axis, 0 to 90): Own Wins, Teammate Wins, and Own Car DNF curves for starts from first, fifth, and ninth place.]

Notes. Rewriting the learning-curve model with just Grid Position, a single experience variable, and the estimated parameters, we get Pr(Win) = e^{α̂ Grid Position + β̂ Experience} / (e^{α̂ Grid Position + β̂ Experience} + 1). The "1 Own Wins" curve shows this estimated Pr(Win) as a function of Own Wins for a novice driver starting from first, holding all other experience variables at zero. The top (middle, bottom) three curves show such single learning-curve effects starting from first (fifth, ninth) for the Own Wins, Teammate Wins, and Own Car DNF experience variables. Actual increases in Pr(Win) will be less because (Cumulative Races)² has a negative coefficient.
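The single-effect curves described in the notes can be sketched directly from that expression; the coefficient values below are illustrative stand-ins in the neighborhood of the Table 2, model (4) estimates (roughly −0.44 for Grid Position and 0.018 for a teammate win), not the exact fitted parameters.

```python
import numpy as np

def pr_win(grid_position, experience, alpha=-0.44, beta=0.018):
    """Single learning-curve effect from the Figure 1 notes:
    Pr(Win) = exp(z) / (1 + exp(z)), with z = alpha*grid + beta*experience."""
    z = alpha * grid_position + beta * experience
    return np.exp(z) / (1.0 + np.exp(z))

experience = np.arange(0, 91)
curves = {start: pr_win(start, experience) for start in (1, 5, 9)}  # 1st, 5th, 9th
```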
Starting a race, the ultimate goal for a driver is to win the race. Finishing on the podium is a measure of success that is less rare (13.4% vs. 4.5%). Table 4 shows the logistic regression results for Podium. Compared with the win results in Table 2, the podium results in models (1) and (2) provide support for Hypotheses 1, 3, and 4. Once we control for triple world champion fixed effects in addition to the 20 team fixed effects in model (3), all four hypotheses are supported. Comparing model (3) in Table 4 with model (2) in Table 3, the only different insight is that the negative coefficient for Cumulative Driver DNF is statistically significant. So, in the podium analysis, drivers persist in making the same failures, just like in the surgery study (KC et al. 2013). In model (4), we include competitor failures, and we replace all Driver DNF variables with Single-driver DNF and Multidriver DNF measures. All four hypotheses continue to be supported. Furthermore, consistent with Table 3, drivers learn from competitors' multidriver collisions. Lastly, model (4) shows that drivers' persistence in making the same failures stems from repeating single-driver accidents. In Section 5, we discuss the significance of the success and failure coefficients across Tables 3 and 4.
5. Discussion and Conclusion
5.1. Discussion of Results
The U-shaped learning-curve literature has argued that relative performance measures such as organizational survival, profitability, and customer dissatisfaction follow a U-shaped function of experience (Ingram and Baum 1997, Baum and Ingram 1998, Ingram and
Table 3. Logistic Regression Models for Win: Driver Effects and Competitor Failures

Variable | (1) | (2) | (3) | (4)
Cumulative Wins | 0.0301*** (0.0053) | 0.0320*** (0.0060) | 0.0296*** (0.0062) | 0.0286*** (0.0062)
Cumulative Teammate Wins | 0.0305*** (0.0075) | 0.0200* (0.0079) | 0.0174* (0.0082) | 0.0202* (0.0084)
Cumulative Car DNF | 0.0201*** (0.0060) | 0.0223*** (0.0068) | 0.0195** (0.0072) | 0.0208** (0.0071)
Cumulative Driver DNF | 0.0240 (0.0125) | 0.0122 (0.0138) | 0.0234 (0.0169) | –
Cumulative Single-driver DNF | – | – | – | 0.0433 (0.0271)
Cumulative Multidriver DNF | – | – | – | 0.0267 (0.0266)
Cumulative Teammate Car DNF | 0.0074 (0.0056) | 0.0114 (0.0063) | 0.0117 (0.0063) | 0.0107 (0.0063)
Cumulative Teammate Driver DNF | 0.0180 (0.0103) | 0.0180 (0.0111) | 0.0098 (0.0130) | –
Cumulative Teammate Single-driver DNF | – | – | – | 0.0294 (0.0243)
Cumulative Teammate Multidriver DNF | – | – | – | 0.0010 (0.0204)
Cumulative Competitor Driver DNF | – | – | 0.0013 (0.0009) | –
Cumulative Competitor Single-driver DNF | – | – | – | 0.0012 (0.0020)
Cumulative Competitor Multidriver DNF | – | – | – | 0.0041* (0.0019)
(Cumulative Races)² | −0.00004*** (0.00001) | −0.00005*** (0.00001) | −0.00005*** (0.00001) | −0.00005*** (0.00001)
Triple world champion fixed effects | Yes | Yes | Yes | Yes
Team fixed effects | No | Yes | Yes | Yes
Grid Position | −0.4707*** (0.0239) | −0.4316*** (0.0244) | −0.4320*** (0.0244) | −0.4303*** (0.0244)
Age | Yes | Yes | Yes | Yes
Home edge | Yes | Yes | Yes | Yes
Wet weather conditions | Yes | Yes | Yes | Yes
Permanent track | Yes | Yes | Yes | Yes
Occasional track | Yes | Yes | Yes | Yes
Number of drivers starting | Yes | Yes | Yes | Yes
Constant | Yes | Yes | Yes | Yes
Wald χ² | 913.52*** | 972.18*** | 989.44*** | 1097.76***

Notes. N = 21,487. Standard errors adjusted for clustering on observations by race in parentheses.
*Significant at 0.05; ** at 0.01; and *** at 0.001.
Our study extends this body of literature by studying win probability. Whereas survival, profitability, and dissatisfaction are all indicators of competitiveness over time, winning a race is an immediate outcome of a competitive contest. So, winning versus losing a race immediately captures how well all competitors fared relative to each other. In F1, we find that win probability follows an inverted U-shaped function of racing experience.
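The inverted U can be made explicit with the simplest specification in Table 4, model (1), which is linear and quadratic in cumulative races. Writing the log-odds with generic symbols (controls collected in the term with X; these symbols are ours, not the paper's notation), the peak of the curve follows from the usual first-order condition:

\[
\log\frac{\Pr(\text{Podium})}{1-\Pr(\text{Podium})}
= \beta_1\,\text{Cumulative Races} + \beta_2\,(\text{Cumulative Races})^2 + \gamma^{\top}X ,
\qquad
\text{Cumulative Races}^{*} = -\frac{\beta_1}{2\beta_2}.
\]

With the rounded coefficients reported for model (1) of Table 4 (β1 = 0.0087, β2 = -0.00004), the peak falls at roughly 0.0087 / (2 × 0.00004) ≈ 109 races, although the rounding of the squared term makes this only a coarse figure; the win models imply the same qualitative shape.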
Mourão (2017) found that a win in the previous F1 race, as well as prior podiums on the same track, increased win probability. We find that F1 drivers learn from all prior wins (on all tracks) to increase win probability. For example, while leading in the closing stages of a race, drivers can draw from all prior win experiences, such as the race tactics described by Lauda (1977). Furthermore, Mourão (2017) found that the percent of team podiums per start did not affect win probability. Percent of team podiums incorporates all drivers who have driven for the team, including all of the drivers in the past who did not overlap with the focal driver. Focusing only on overlapping teammates can help explain why we do find learning from teammates' wins, whereas the percent of team podiums did not increase win probability for Mourão (2017).
Haunschild and Sullivan (2002) note that one issue with research on failure is that failures are generally rare events. Consequently, studying frequent failures should be promising, because such research could "profit from investigating whether all failures affect learning, possibly comparing across different types of failure events" (Haunschild and Sullivan 2002, p. 382).
Table 4. Logistic Regression Models for Podium

 | (1) | (2) | (3) | (4)
Cumulative Races | 0.0087*** (0.0012) | | |
Cumulative Podiums | | 0.0156*** (0.0022) | 0.0111*** (0.0029) | 0.0095** (0.0030)
Cumulative Teammate Podiums | | 0.0055 (0.0030) | 0.0071* (0.0032) | 0.0067* (0.0033)
Cumulative Car DNF | | 0.0099** (0.0037) | 0.0131*** (0.0040) | 0.0102* (0.0042)
Cumulative Driver DNF | | -0.0176* (0.0075) | -0.0210** (0.079) |
Cumulative Single-driver DNF | | | | -0.0442** (0.0146)
Cumulative Multidriver DNF | | | | 0.0273 (0.0153)
Cumulative Teammate Car DNF | | 0.0039 (0.0035) | 0.0005 (0.0039) | 0.0012 (0.0039)
Cumulative Teammate Driver DNF | | 0.0110 (0.0062) | 0.0126 (0.0068) |
Cumulative Teammate Single-driver DNF | | | | 0.0066 (0.0145)
Cumulative Teammate Multidriver DNF | | | | 0.0124 (0.0120)
Cumulative Competitor Single-driver DNF | | | | 0.0004 (0.0011)
Cumulative Competitor Multidriver DNF | | | | 0.0024* (0.0012)
(Cumulative Races)² | -0.00004*** (0.00000) | -0.00004*** (0.00000) | -0.00003*** (0.00000) | -0.00004*** (0.00001)
Triple world champion fixed effects | No | No | Yes | Yes
Team fixed effects | Yes | Yes | Yes | Yes
Grid Position | -0.2833*** (0.0091) | -0.2734*** (0.0090) | -0.2712*** (0.0091) | -0.2709*** (0.0091)
Age | Yes | Yes | Yes | Yes
Home edge | Yes | Yes | Yes | Yes
Wet weather conditions | Yes | Yes | Yes | Yes
Permanent track | Yes | Yes | Yes | Yes
Occasional track | Yes | Yes | Yes | Yes
Number of drivers starting | Yes | Yes | Yes | Yes
Constant | Yes | Yes | Yes | Yes
Wald χ² | 2,042.86*** | 2,305.49*** | 2,412.99*** | 2,457.58***

Notes. N = 21,487. Standard errors adjusted for clustering on observations by race in parentheses.
*Significant at 0.05; ** at 0.01; and *** at 0.001.
Similarly, according to Baum and Dahlin (2007), accidents vary greatly in cause and consequence. The authors suggest that it seems likely that more will be learned from some accidents than from others. Formula One racing provides an opportunity to address these calls for research on learning from frequent failures of different types. Studying different types of failures, we find different learning effects for car failures, single-driver accidents, and multidriver collisions. First, F1 drivers learn from own car failures. Heterogeneous causes make car failures amenable to objective root-cause analysis without concerns about blaming the driver. As own car failures are frequent, there is less of a need to learn from others' car failures.
Second, own single-driver accidents reduce the probability to finish on the podium. Driver failures with homogeneous causes are amenable to assigning blame. Just like in KC et al. (2013), attribution theory comes into play. Drivers might attribute single-driver accidents to factors outside of their control (Stewart 2009). As KC et al. (2013) suggest, individuals might not make an effort to analyze what went wrong. Consequently, drivers might not change their driving behavior, and future performance suffers as a result. Staw (1981) calls this persistence escalation of commitment to flawed behavior. However, own single-driver accidents do not affect win probability. These results suggest that, in order to win, F1 drivers need to unlearn their disruptive escalation of commitment observed in the podium analysis. While the best drivers manage to do so, for many drivers this is hard. As Lauda (1977, p. 18) notes, "I think these crashes are necessary for the career of any racing driver. What matters is whether you learn your lesson from them or not. There are people who were driving five years ago in Formula III [a lower-level racing division] and half killing themselves and they're still driving there and they're still going off the road." Although F1 drivers do not learn from single-driver accidents to improve win probability, they have to at least learn to eliminate their disruptive escalation of commitment observed in the podium analysis.
Third, F1 drivers learn from competitors' multidriver collisions. F1 teams do not provide competitors with access to crashed cars for analysis, so we do not expect any learning from competitors' car failures. It could be hard to learn from competitors' single-driver accidents, which might depend on car setups, and drivers do not have access to competitors' car setups. It is, however, feasible to review film and study competitors' multidriver collisions. Not having been involved in these collisions, drivers can objectively analyze these collisions between other drivers. Avoiding a situation where a driver or a teammate could be blamed, drivers can objectively assess competitors and answer questions such as: Where should a driver be on the track compared with other cars? When trying to overtake, what gaps should you go for? How long can you stay in another driver's blind spot at very high speeds?
5.2. Success Characteristics and Learning Effects
We use two characteristics of success to define a spectrum of organizational settings: frequency of success and competitiveness. Table 5 shows the spectrum. Frequency of success ranges from rare to common. Competitiveness during events ranges from a zero-sum game to individually independent events. We illustrate the spectrum with our F1 win study, our F1 podium study, and KC et al.'s (2013) surgery study. The F1 win study falls on one end of the spectrum, where success is rare and essentially a zero-sum game. The surgery study falls on the other end of the spectrum, where success is common and individually independent of other events. In the long term, surgeons' performance should impact their ability to get subsequent case referrals. So, we do not imply that there is no competition between surgeons over an extended period of time. However, during a single surgery, other surgeons are not trying to prevent the focal surgeon from having a success, which is exactly what F1 drivers are trying to do in the F1 win study.
Table 5. Success Characteristics and Learning Effects

Frequency of success | Rare | Less rare | Common
Competitiveness during event | Zero-sum game | Less competitive | Individually independent
Study | F1 win (Table 3) | F1 podium (Table 4) | Cardiac surgery (KC et al. 2013)
Own success | +Own Wins | ≈ +Own Podiums | ≈ +Own Patient Success
Others' success (in the same organization) | +Teammates' Wins | ≈ +Teammates' Podiums | > n.s. Others' Patient Success
Own failure | +Own Car DNF | ≈ +Own Car DNF | > -Own Patient Failure
 | n.s. Own Driver DNF | > -Own Driver DNF | ≈ -Own Patient Failure
Others' failure (in the same organization) | n.s. Teammates' Car DNF | n.s. Teammates' Car DNF | < +Others' Patient Failure
 | n.s. Teammates' Driver DNF | n.s. Teammates' Driver DNF | < +Others' Patient Failure

Notes. + (-) represents a positive (negative) and statistically significant coefficient estimate. n.s. means not significant. We flip the signs for estimates from KC et al. (2013), as the surgery study used failure (as opposed to success) as the dependent variable.
The F1 podium study falls in between the F1 win study and the surgery study. Placing each study on the spectrum yields insights about the learning effects we observe.
Table 5 shows that learning from own success happens in all three studies. However, Table 5 indicates that learning from others' success (in the same organization) occurs for F1 win and F1 podium, but not in the surgery study. So, only when success is rare do individuals turn to others in the same organization to increase the number of successes they can learn from.
Table 5 shows that learning from own car failures happens for F1 win and F1 podium. Furthermore, own driver failures do not affect F1 win, whereas such failures are disruptive for F1 podium. Similarly, own patient failures are disruptive in the surgery study. So, learning from own failures depends on the type of failure. Car failures with heterogeneous causes are amenable to root-cause analysis, which can lead to learning from own car failures when such failures are frequent and eliminating failure is required to be competitive ("to finish first, first you've got to finish"). In contrast, driver failures with homogeneous causes are amenable to assigning blame. Escalation of commitment to flawed behavior (Staw 1981) occurs toward the right of the spectrum in Table 5, where success is more common and less competitive.
F1 drivers do not learn from teammates' car failures. We suggest two possible explanations. First, in the development of Hypothesis 3 (drivers learning from own prior car failures), we mention that the leader can effectively take time to reflect on past car reliability problems, listen for any unusual sounds indicative of any potential reliability issues, and nurse the car home (Lauda 1977). It could be difficult to describe to a teammate exactly what unusual sounds a driver has learned from. Second, the frequency of own prior car failures is large, providing ample learning opportunities. Hence, drivers might not feel the need to learn from teammates' car failures.
5.3. Avenues for Future Research
It should be fruitful for future research to study learning from own and others' success and failure experience in other contexts and place such findings in Table 5. Is learning in pharmaceuticals, auctions, and bidding for construction contracts similar to F1 win? How does learning in a local restaurant market compare with F1 podium? Do learning effects in nonprofit organizations follow the pattern observed in KC et al. (2013)? Adding other contexts to Table 5 should help managers understand what types of experience might help or hurt organizational performance in different settings.
In Table 5, we identify two dimensions that impact learning from own and others' success and failure: frequency of success and competitiveness. Future research is needed to identify other dimensions, for example, direct observation of others' experience. In F1, drivers spend most of the race on a different part of the track than their teammates. Consequently, drivers have limited opportunity to observe teammates during a race. Conversely, in sports such as baseball or curling, competitors can directly observe others. How does direct observation change learning effects? What other dimensions impact learning effects across settings?
Our study demonstrates that differentiating failure is important to identify learning-from-failure effects. Win probability increases as a function of prior car failures, whereas podium probability decreases as a function of prior driver failures. One limitation of our study is that, for driver failures, we cannot identify who caused multidriver collisions. Another limitation is that we do not know root-cause resolutions for car failures. Was the car failure caused by assembly, by processes at a supplier, or by quality control? Future research should study the impact of root-cause analysis on learning effects. Do different types of root causes lead to different learning effects? How is learning from different root causes affected by supplier relationships, engineering, manufacturing, assembly, machining, and quality control?
Another promising avenue for future research would focus on differentiating success. In F1, each win is equally valuable. However, in industries such as movies, gaming, and pharmaceuticals, not every successful product is equally profitable. Blockbusters are much more valuable than titles or drugs that make only small profits. How do learning effects change as a function of the magnitude of success?
Acknowledgments
The authors gratefully acknowledge Anna Lapré and Jessica Noble for research assistance during the data collection, cross-checking, and coding process. The authors thank department editor Charles Corbett, the associate editor, and three anonymous reviewers for the thoughtful and constructive feedback during the review process. The authors also thank Yasin Alan, Peter Haslag, Mümin Kurtuluş, Megan Lawrence, Tim Vogus, participants at the Carnegie School of Organizational Learning 2018 Conference, INFORMS 2018 Annual Meeting, POMS 2018 and 2021 Annual Conferences, as well as seminar participants at Georgetown University and Vanderbilt University for helpful comments.
Appendix
There is significant driver churn across teams from season to season. As an example, Table A.1 shows the teams and teammates for Alain Prost. Prost drove for four different teams in his career: McLaren (two stints), Renault, Ferrari, and Williams. During his career, he had 10 different teammates. For the cumulative teammate experience variables, we sum the experiences by teammates when the driver was on the same team as that teammate. For example, to calculate Cumulative Teammate Wins_dr for Prost for race 10 in 1984, we sum the wins by Watson (on McLaren) in 1980, the wins by Arnoux (on Renault) in 1981 and 1982, the wins by Cheever (on Renault) in 1983, and the wins by Lauda (on McLaren) in 1984 in races 1 through 9.
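A minimal sketch of this bookkeeping, under assumed data: results is a hypothetical long table with one row per driver-race and columns race_id (chronologically ordered), driver, team, and win (0/1); the function name and column names are ours, not the authors'.

```python
import pandas as pd

def cumulative_teammate_wins(results: pd.DataFrame, driver: str, race_id) -> int:
    """Sum prior wins by teammates, counted only in races where the focal driver
    and that teammate were on the same team (as in the Prost example above)."""
    prior = results[results["race_id"] < race_id]
    # The focal driver's own prior (race, team) pairs.
    own = prior.loc[prior["driver"] == driver, ["race_id", "team"]]
    # Everyone who shared a race and a team with the focal driver in those races.
    shared = prior.merge(own, on=["race_id", "team"])
    teammates = shared[shared["driver"] != driver]
    return int(teammates["win"].sum())

# For example, the value for Prost before race 10 of 1984 would sum the 1980-1984
# wins of Watson, Arnoux, Cheever, and Lauda in the races they shared with Prost.
```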
Table A.2 lists the 20 teams with at least five wins. These 20 teams include all 17 teams who have won the constructors' championship or would have won it in the 1950–1957 period, when the constructors' championship did not yet exist. The other three teams are Walker, the most successful privateer team; Renault (1977–1985), who pioneered the turbo engine and thereby revolutionized the sport; and Ligier (1976–1996), a runner-up for the constructors' championship.
References
Argote L (2013) Organizational Learning: Creating, Retaining and Transferring Knowledge, 2nd ed. (Springer, New York).
Aversa P, Furnari S, Haefliger S (2015) Business model configurations and performance: A qualitative comparative analysis in Formula One racing, 2005–2013. Indust. Corporate Change 24(3):655–676.
Baum JAC, Dahlin KB (2007) Aspiration performance and railroads' patterns of learning from train wrecks and crashes. Organ. Sci. 18(3):368–385.
Baum JAC, Ingram P (1998) Survival-enhancing learning in the Manhattan hotel industry, 1898–1980. Management Sci. 44(7):996–1016.
Bell A, Smith J, Sabel CE, Jones K (2016) Formula for success: Multilevel modelling of Formula One driver and constructor performance, 1950–2014. J. Quant. Anal. Sports 12(2):99–112.
Bohn RE (1995) Noise and learning in semiconductor manufacturing. Management Sci. 41(1):31–42.
Bohn RE, Lapré MA (2011) Accelerated learning by experimentation. Jaber MY, ed. Learning Curves: Theory, Models, and Applications (CRC Press, Boca Raton, FL), 191–209.
Bothner MS, Kang J-H, Stuart TE (2007) Competitive crowding and risk taking in a tournament: Evidence from NASCAR racing. Admin. Sci. Quart. 52(2):208–247.
Brawn R, Parr A (2017) Total Competition: Lessons in Strategy from Formula One (Simon & Schuster, London).
Castellucci F, Ertug G (2010) What's in it for them? Advantages of higher-status partners in exchange relationships. Acad. Management J. 53(1):149–166.
Castellucci F, Padula M, Pica G (2011) The age-productivity gradient: Evidence from a sample of F1 drivers. Labour Econom. 18(4):464–473.
Chuang Y-T, Baum JAC (2003) It's all in the name: Failure-induced learning by multiunit chains. Admin. Sci. Quart. 48(1):33–59.
Clark JR, Kuppuswamy V, Staats BR (2018) Goal relatedness and learning: Evidence from hospitals. Organ. Sci. 29(1):100–117.
Dahlin KB, Chuang Y-T, Roulet TJ (2018) Opportunity, motivation, and ability to learn from failures and errors: Review, synthesis, and ways to move forward. Acad. Management Ann. 12(1):252–277.
Desai V (2015) Learning through the distribution of failures within an organization: Evidence from heart bypass surgery performance. Acad. Management J. 58(4):1032–1050.
Table A.1. Teams and Teammates for Alain Prost

Year | Team | Teammate | Teammate
1980 | McLaren | John Watson |
1981 | Renault | René Arnoux |
1982 | Renault | René Arnoux |
1983 | Renault | Eddie Cheever |
1984 | McLaren | Niki Lauda |
1985 | McLaren | Niki Lauda (races 1–13, 15–16) | John Watson (race 14)
1986 | McLaren | Keke Rosberg |
1987 | McLaren | Stefan Johansson |
1988 | McLaren | Ayrton Senna |
1989 | McLaren | Ayrton Senna |
1990 | Ferrari | Nigel Mansell |
1991 | Ferrari | Jean Alesi |
1992 | | |
1993 | Williams | Damon Hill |

Notes. In 1985, in qualifying for race 13, Lauda injured his wrist. In race 14, John Watson drove for the injured Lauda. It was Watson's only F1 race after retiring from F1 at the end of 1983.
Table A.2. Teams with at Least Five Wins: 1950–2017

Team | Seasons | Wins
Alfa Romeo* | 1950–1951 | 10
Benetton* | 1986–2001 | 27
Brabham* | 1962–1992 | 35
Brawn* | 2009 | 8
BRM* | 1951–1977 | 17
Cooper* | 1953–1968 | 12
Ferrari* | 1950–2017 | 228
Ligier* | 1976–1996 | 9
Lotus* | 1958–1994 | 74
Maserati* | 1950–1957 | 9
McLaren* | 1966–2017 | 182
Mercedes* (1950s) | 1954–1955 | 9
Mercedes* (2010s) | 2010–2017 | 67
Renault* (1970s–1980s) | 1977–1985 | 15
Renault* (2000s) | 2002–2011 | 20
Red Bull* | 2005–2017 | 55
Tyrrell** | 1968–1998 | 33
Vanwall* | 1954–1960 | 9
Walker*** | 1953–1970 | 9
Williams* | 1977–2017 | 114

Note. The 20 teams listed combine for 942 wins in 965 races, i.e., 97.6% of all wins; 15 other teams combine for the remaining 23 wins. *denotes a team who built their own chassis; **Tyrrell had nine wins with a Matra (1968–1969), one win with a March (1970), and 23 wins as a works team with a Tyrrell (1971–1983); ***Privateer team Walker had four wins with a Cooper (1958–1959) and five wins with a Lotus (1960–1968).
Table A.3. Highest-Performing Drivers Driving for High-Performing Teams

Driver | Team | Stint | World champion
Jackie Stewart | Tyrrell | 1968–1973 | 1969, 1971, 1973
Alain Prost | McLaren | 1984–1989 | 1985, 1986, 1989
Ayrton Senna | McLaren | 1988–1993 | 1988, 1990, 1991
Michael Schumacher | Ferrari | 1996–2006 | 2000, 2001, 2002, 2003, 2004
Sebastian Vettel | Red Bull | 2009–2014 | 2010, 2011, 2012, 2013
Lewis Hamilton | Mercedes | 2013–2017* | 2014, 2015, 2017*

*Our data set ends in 2017. Lewis Hamilton also became world champion with Mercedes in 2018, 2019, and 2020.
Eichenberger R, Stadelmann D (2009) Who is the best Formula 1 driver? An economic approach to evaluating talent. Econom. Anal. Policy 39(3):389–406.
Ellis S, Mendel R, Nir M (2006) Learning from successful and failed experience: The moderating role of kind of after-event review. J. Appl. Psych. 91(3):669–680.
Gino F, Pisano GP (2011) Why leaders don't learn from success. Harvard Bus. Rev. 89(4):68–74.
Haltiwanger JC, Lane JI, Spletzer JR (1999) Productivity differences across employers: The roles of employer size, age, and human capital. Amer. Econom. Rev. 89(2):94–98.
Haunschild PR, Sullivan BN (2002) Learning from complexity: Effects of prior accidents and incidents on airlines' learning. Admin. Sci. Quart. 47(4):609–643.
Hosmer DW, Lemeshow S, Sturdivant RX (2013) Applied Logistic Regression, 3rd ed. (Wiley, Hoboken, NJ).
Ingram P, Baum JAC (1997) Opportunity and constraint: Organizations' learning from the operating and competitive experience of industries. Strategic Management J. 18(Summer Special Issue):75–98.
Ingram P, Simons T (2002) The transfer of experience in groups of organizations: Implications for performance and competition. Management Sci. 48(12):1517–1533.
KC D, Staats BR, Gino F (2013) Learning from my success and others' failure: Evidence from minimally invasive cardiac surgery. Management Sci. 59(11):2435–2449.
Lapré MA, Nembhard IM (2010) Inside the organizational learning curve: Understanding the organizational learning process. Foundations Trends Tech. Inform. Oper. Management 4(1):1–103.
Lapré MA, Tsikriktsis N (2006) Organizational learning curves for customer dissatisfaction: Heterogeneity across airlines. Management Sci. 52(3):352–366.
Lapré MA, Mukherjee AS, Van Wassenhove LN (2000) Behind the learning curve: Linking learning activities to waste reduction. Management Sci. 46(5):597–611.
Lauda N (1977) The Art and Science of Grand Prix Driving (Motorbooks, Osceola, WI).
Levitt B, March JG (1988) Organizational learning. Annual Rev. Sociol. 14:319–340.
Madsen PM, Desai V (2010) Failing to learn? The effects of failure and success on organizational learning in the global orbital launch vehicle industry. Acad. Management J. 53(3):451–476.
Ménard P, Cahier B, Roebuck N (2006) The Great Encyclopedia of Formula 1, 3rd ed. (Chronosports, St-Sulpice, Switzerland).
Mourão P (2017) The Economics of Motorsports: The Case of Formula One (Palgrave Macmillan, London).
Mourão PR (2018) Surviving in the shadows: An economic and empirical discussion about the survival of the non-winning F1 drivers. Econom. Anal. Policy 59:54–68.
Muthulingam S, Agrawal A (2016) Does quality knowledge spillover at shared suppliers? An empirical investigation. Manufacturing Service Oper. Management 18(4):525–544.
Piezunka H, Lee W, Haynes R, Bothner MS (2018) Escalation of competition into conflict in competitive networks of Formula One drivers. Proc. Natl. Acad. Sci. USA 115(15):E3361–E3367.
Prost A (1990) Life in the Fast Lane (Stanley Paul, London).
Smith R (2016) Formula 1: All the Races: The World Championship Story Race-by-Race 1950–2015 (Evro, Dorset, UK).
Stan M, Vermeulen F (2013) Selection at the gate: Difficult cases, spillovers, and organizational learning. Organ. Sci. 24(3):796–812.
Staw BM (1981) The escalation of commitment to a course of action. Acad. Management Rev. 6(4):577–587.
Stewart J (2009) Winning Is Not Enough (Headline, London).