Development and Deployment at Facebook
Dror G. Feitelson
Hebrew University
Eitan Frachtenberg
Facebook
Kent L. Beck
Facebook
Abstract
More than one billion users log in to Facebook at least once a month to connect and share
content with each other. Among other activities, these users upload over 2.5 billion content
items every day. In this article we describe the development and deployment of the software
that supports all this activity, focusing on the site’s primary codebase for the Web front-end.
Information on Facebook’s architecture and other software components is available elsewhere.
Keywords D.2.10.i Rapid prototyping; D.2.18 Software Engineering Process; D.2.19 Software
Quality/SQA; D.2.2.c Distributed/Internet based software engineering tools and techniques; D.2.5.r
Testing tools; D.2.7.e Evolving Internet applications.
Facebook’s main development characteristics are speed and growth. The front-end is under
continuous development by hundreds of software engineers. These engineers commit code to the
version control system up to 500 times a day, recording changes in some 3,000 files. Naturally,
the rate of development activity has grown tremendously over the years, and so has the codebase itself (Fig. 1). The binary executable file run by Facebook servers to serve incoming requests is now about 1.5 GB in size.
Figure 1: Different aspects of Facebook growth: growth of the number of engineers working on the code (active developers per week), growth in the total activity of these engineers (commits per month, in thousands), and growth of the codebase itself (lines of code, in millions), from 2005 to 2012. Dips in the number of engineers correspond to the winter holidays; peaks are caused by summer interns. In the codebase data we removed around 800,000 lines for internal use that existed from 2009 to 2011. The data was extracted from the Web front-end git repository, which has more than 360,000 commits since June 2005.
Web companies like Facebook differ from conventional software companies in that the software
they develop runs on their own servers, and is not installed at customer locations. This enables
rapid updates to the software, and allows fine-grained control over versions and configurations. At
Facebook, this deployment has led to a practice of daily and weekly “push” of new code to the
servers. Before being pushed, code is subject to peer review, internal use, and extensive automated
testing. After the code push, engineers carefully monitor the site's behavior to identify any sign of
trouble. But such technical facilities are not enough. Facebook also relies on a culture of personal
responsibility, where every engineer is responsible for code they write and, when necessary, code
they did not write that is affecting users or colleagues. This culture treats failures as an opportunity
for improvement rather than as an occasion for assigning blame.
1 Perpetual Development
Facebook, like practically all other Internet-based companies, operates in perpetual development
mode, in which engineers continuously develop new features and make them available to users.
Consequently, the system also grows continuously, possibly at a super-linear rate. These two
attributes, growth and rapid deployment, are the chief challenges that engineers need to overcome.
Software engineering textbooks typically assume a scenario where software is built for hire. In
such a situation engineers first need to learn about the application domain and understand the goals
for the new software.
At Facebook, the engineers are also users, so they have first-hand knowledge of what the system
does and what services it provides. Moreover, internal use of Facebook tends to be more intensive
than most use, so there is continuous tension between first-hand knowledge and knowledge derived
from examining widespread use. Out of this tension, programmers generate ideas to improve the product.
But the fact that engineers have first-hand knowledge of the application is just one aspect
of the departure from traditional software development. Even more important is the mind-set
of perpetual development. Traditional software products are finite by definition, with delimited
scope and a predefined completion date. This is the basis for drawing up the contract to produce the software and for defining acceptance tests; it also underlies the problems that arise when projects fall behind schedule or overspend their budget.
Sites like Facebook will never be completed. The mindset is that the system will continue to
be developed indefinitely.
Software that continues to evolve over long time periods actually exists in many domains. For
example, the Linux operating system has evolved continuously since its first official release in
1994, growing 80-fold in the process [3]. However, new Linux versions are released two to three
months apart. Internet-based companies like Facebook evolve at a much faster pace (Fig. 2).
The development rate is also reflected in the terminology used to describe it. In the context of
the waterfall model, the ultimate goal is delivering the software product. In the context of agile
development or evolutionary systems such as Linux, we would speak of periodic releases. But
the practices used by Internet companies have come to be known as continuous deployment. This
reflects the habit of deploying new code as a series of small changes as soon as they are ready [5].
In such companies the software that provides the service resides on the company’s web servers,
thus deploying new software to the servers immediately makes it available to all users, without any
need for downloads and local installation.
Figure 2: Timescales of making new developments available, ranging from a one-time delivery (waterfall), through months, weeks, and days (unified process, agile, and evolutionary development), to less than an hour (continuous deployment). Facebook typically deploys new code every day, balancing rapid development with foresight and monitoring.
A direct result of perpetual development is that the software grows and grows. The codebase
for Facebook’s front end now stands at more than 10.5 million lines of actual code (without com-
ment lines and blank lines), of which nearly 8.5 million are written in PHP. Moreover, the rate
of growth is superlinear with time (Fig. 1). This contradicts Lehman's seminal work on software evolution, which predicts that progress slows down as size (and complexity) increase [7]. The contradiction may be explained by differing assumptions: Lehman assumed
an essentially constant workforce, whereas Facebook enjoys a growing engineer base. The ability
to rapidly grow the workforce indicates that the need for communication and coordination between
engineers is probably not as restrictive as predicted by Brooks’s Law. Similar superlinear growth
trends have also been observed in some open-source projects, notably the Linux operating system
kernel [4]. Specifically, our data on the Facebook codebase fits a quadratic growth model very well (LoC = 317177 − 1148·d + 1.966·d², where d is the number of days since the first data point; R² = 0.996, with the fit done on the cleaned data of Fig. 1), similarly to many open-source projects.
A/B Testing
One important attribute of continuous deployment is that it facilitates live experimentation using
A/B testing. The innovations that engineers implement are deployed immediately for real users
to experience. This lets engineers carefully compare the new features with the base case (that is,
the current site) in terms of how those features affect user behavior [6]. Although this typically
involves only a small subset of users, at Facebook’s volume of activity, even a very small subset
quickly generates enough data to assess the tested features’ impact. Thus, they can immediately
identify what works in practice and what doesn’t. A/B testing is an experimental approach to find-
ing what users want, rather than trying to elicit requirements in advance and writing specifications.
Moreover, it allows for situations where users use new features in unexpected ways. Among other
things, this enables engineers to learn about the diversity of users, and appreciate their different
approaches and views of Facebook. To improve the data obtained from tests, Facebook employs
in-house usability tests with user focus groups in addition to testing the deployed product on a
large scale [2].
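As a rough illustration of the mechanics, the PHP sketch below (hypothetical names and made-up numbers, not Facebook's actual experimentation framework) assigns each user deterministically to a control or treatment group and tallies a simulated outcome metric for comparison:

```php
<?php
// Hypothetical sketch of deterministic A/B bucketing. Hashing the experiment
// name together with the user id keeps each user in the same group on every request.
function experimentGroup(string $experiment, int $userId, float $treatmentFraction): string
{
    $bucket = (crc32($experiment . ':' . $userId) & 0x7fffffff) / 0x7fffffff;
    return $bucket < $treatmentFraction ? 'treatment' : 'control';
}

// Simulate exposing 1% of users and tallying a made-up outcome metric per group.
$users    = ['control' => 0, 'treatment' => 0];
$outcomes = ['control' => 0, 'treatment' => 0];
for ($userId = 1; $userId <= 100000; $userId++) {
    $group = experimentGroup('new_composer_v2', $userId, 0.01);
    $users[$group]++;
    // Placeholder for a real behavioral signal (e.g., did the user post something?).
    $outcomes[$group] += (mt_rand(0, 99) < ($group === 'treatment' ? 12 : 10)) ? 1 : 0;
}
foreach ($users as $group => $n) {
    printf("%-9s users=%6d outcome rate=%.3f\n", $group, $n, $outcomes[$group] / max($n, 1));
}
```

Deterministic hashing ensures a given user always sees the same variant across requests, which keeps the experience consistent and the two groups comparable.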
Continuous Deployment
Continuous deployment also has important benefits from a software production viewpoint. Fre-
quent deployments imply that each deployment introduces only a limited amount of new code.
This reduces (but doesn’t eliminate) the risk that something will go wrong. Frequent deployment
approximates serial rollout, which is easier to debug; moreover, all commits are individually tested
for regressions. All new Facebook employees undergo a six-week bootcamp in which they’re
encouraged to commit new code as soon as possible (see Figure 3), partly to overcome the fear
of releasing new code. The ability to deploy code quickly in small increments and without fear
enables rapid innovation. Another benefit of small and rapid deployments is that we can easily
identify the source of and solutions to emerging problems: they’re most likely the most recently
deployed changes in the code, and still fresh in engineers' minds.
Figure 3: Distribution of the time from start of employment to first commit of Facebook bootcampers (weeks 1-6, or more). Some employees in the 'more' category do not start with bootcamp right away, e.g., when transferring to engineering from a different department.
Ostensibly, rapid deployment is at odds with feature development that requires large changes
to the codebase. The solution is to break down such changes into a sequence of smaller and safer
ones, hidden behind an abstraction
(a practice aptly called “branch by abstraction” [5]). For example, consider the delicate issue
of migrating data from an existing store to a new one. This can be broken down as follows:
1. Encapsulate access to the data in an appropriate data type.
2. Modify the implementation to store data in both the old and the new stores.
3. Bulk migrate existing data from the old store to the new store. This is done in the background
in parallel to writing new data to both stores.
4. Modify the implementation to read from both stores and compare the obtained data.
5. When convinced that the new store is operating as intended, switch to using the new store
exclusively (the old store may be maintained for some time to safeguard against unforeseen
problems).
Facebook has used this process to transparently migrate database tables containing hundreds of
billions of rows to new storage formats.
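A minimal sketch of steps 1, 2, and 4 of this pattern might look as follows, assuming hypothetical store interfaces; this is an illustration of "branch by abstraction", not Facebook's storage code:

```php
<?php
// Hypothetical sketch: all access goes through one data type that can write to
// both stores and compare reads, so each migration step ships as a small change.
interface KeyValueStore
{
    public function get(string $key): ?string;
    public function put(string $key, string $value): void;
}

final class MigratingStore implements KeyValueStore
{
    public function __construct(
        private KeyValueStore $oldStore,
        private KeyValueStore $newStore,
        private bool $readFromNew = false   // flipped once the bulk migration has caught up
    ) {}

    public function put(string $key, string $value): void
    {
        // Step 2: write to both stores so the new store stays in sync.
        $this->oldStore->put($key, $value);
        $this->newStore->put($key, $value);
    }

    public function get(string $key): ?string
    {
        $oldValue = $this->oldStore->get($key);
        if (!$this->readFromNew) {
            return $oldValue;
        }
        // Step 4: read both stores and log any mismatch before trusting the new one.
        $newValue = $this->newStore->get($key);
        if ($newValue !== $oldValue) {
            error_log("store mismatch for key $key");
        }
        return $oldValue;   // the old store remains authoritative until step 5
    }
}
```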
In addition, deploying new code does not necessarily imply that it is immediately available to
users. Facebook uses a tool called “Gatekeeper” to control which users see which features of the
code. Thus it is possible for engineers to incrementally deploy and test partial implementations of
a new service without exposing them to end users.
All front-end engineers at Facebook work on a single stable branch of the code, which also
promotes rapid development, since no effort is spent on merging long-lived branches into the trunk.
But there is still a distinction between code in development and code that is ready to be deployed.
Developers use the git version control system locally for their daily work, until the code is ready
to push. The stable version for deployment is maintained using subversion (for historical reasons).
When ready to be pushed, new code must first be merged with the stable version in the centralized
repository, after which engineers can commit their changes into subversion.
Given the rapid rate of development, it is not surprising that engineers typically commit new
code several times each week (Fig. 4). Moreover, the typical intervals between successive commits
by the same engineer are a few hours, with a median of 10 hours. However, the distribution of
intervals is multi-modal, and intervals of a day or even multiple days also occur.
Determining the optimal deployment cycle in general is outside the scope of this paper. Some
of the factors going into the decision are: the cost of each deployment, the probability and cost of
errors, the probability and value of incremental benefits, the skill of the engineers involved, and the
culture of the organization. Adding to the complexity of the decision is that many of these factors
can be optimized, so the optimal cycle can change.
Some Internet companies allow all engineers to deploy their code immediately when they con-
sider it ready, with no need for authorization by anyone else. This may lead to a rate of many
new deployments per day. But for a company that handles large amounts of personal data like
Facebook, the risk of privacy breaches warrants more oversight. Facebook therefore employs a
combination of daily and weekly deployments, as described below.
Figure 4: Distribution of the commit rate of engineers with at least 10 commits (measured as number of commits divided by range of active weeks), and distribution of the intervals between commits by the same engineer. The inset shows the tail of the first distribution, indicating that only about 1% of the engineers average more than 10 commits per week.
2 Pushing New Features
The push process balances the rate of innovation with risk control. Development culture helps
control risk just as much as do automated tools. The risks involved in introducing new software
grow with scale, which has three main dimensions: more engineers, more lines of code, and more
users. With more engineers, more gets done per unit of time, so more new code is generated for
each push and must undergo testing. When the system is larger, more interactions occur between
different components, and more things can go wrong. More users can employ the system in more
ways and increase the volume of data that it must handle. Reducing the risks to zero is impossible,
so Web companies must allocate oversight resources judiciously. For example, code concerned
with privacy is held to a higher standard than code that deals with less sensitive issues.
Part of the allocation of oversight is the distinction between a daily push and the weekly push.
The weekly push is the default, and involves thousands of changes. On Sunday afternoon the
code to be pushed is placed in the subversion repository operated by the release engineers. It
then undergoes extensive automatic testing, including tens of thousands of regression tests for
correctness and performance. It also becomes part of the “latest” build, meaning it is the default
version being used by Facebook employees. The push itself then occurs on Tuesday afternoon.
The release engineers responsible for the push process assign engineers "push karma"
based on past performance (namely how often their code caused problems). If an engineer has bad
karma, his or her code contributions undergo more oversight before being accepted to the push.
Importantly, the goal is to manage risk, not to rank performance, and push karma is not made
public. Additional inputs affecting the amount of oversight exercised over new code are the size of
the change and the amount of discussion about it during code reviews; higher levels for either of
these indicate higher risk.
Figure 5: Distribution of reasons for using a daily push (major fix, production fix, visible fix, product launch, internal only, user test/metrics, other).
Release engineers perform a smaller push twice daily on other workdays, for several possible
reasons (see Figure 5). In extreme cases, additional pushes might occur during the week or even
over the weekend.
When code is accepted to the weekly or daily push, it should have already passed personal
unit tests and a code review. At Facebook, code review occupies a central position. Every line of
code that's written is reviewed by a different engineer than the original author. This serves multiple
purposes: the original engineer is motivated to ensure that the code is of high quality, the reviewer
comes with a fresh mind and might find defects or suggest alternatives, and, in general, knowledge
about coding practices and the code itself spreads throughout the company. The Phabricator code
review tool (http://phabricator.org) facilitates many common engineering operations
on a large codebase. It enables engineers to:
• Browse current and historical versions of the source code.
• View suggested code changes and discuss them in-line.
• Track bugs and tasks.
• Maintain wiki-based documentation.
All these features are integrated with each other and with the source control system to reduce
friction incidental to writing and committing code changes.
Engineers and release engineers conduct the code tests and administer a battery of regression
tests, including on the user interface using Watir (http://watir.com) and WebDriver (http://code.google.com/p/selenium). In addition, Facebook employees effectively test the
latest code while using it internally. This exercises the code under realistic conditions, and all
employees can report any defects they encounter. A helpful property of having all employees
double as testers is that as the number of code changes grows with the company, the number of
testers follows suit automatically. The outcome of all this testing is increased confidence that the
pushed code won’t break the system.
Another important testing tool, Perflab, can accurately assess how the new code affects performance before it's installed on production servers. Problems that Perflab or other tests uncover that
engineers can’t resolve within a short time might call for removing a specific code revision from
the push and delaying it to a subsequent push, after engineers resolve the problems. Engineers
must monitor and correct even small performance issues continuously, because if such problems
are left to accumulate, they can quickly lead to capacity and performance problems. Perflab charts
let the team visually compare the variance a code change introduces to the variance that’s inherent
in the existing product and identify emerging problems.
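The core of such a comparison can be illustrated with a toy calculation (made-up latency numbers, not Perflab's actual implementation): a change is flagged only when the shift it introduces clearly exceeds the run-to-run noise of the existing code.

```php
<?php
// Toy sketch of a Perflab-style check: compare the mean request time of a
// candidate build against the current one, relative to the inherent variance
// measured from repeated runs of the current binary.
function mean(array $xs): float { return array_sum($xs) / count($xs); }
function stddev(array $xs): float
{
    $m = mean($xs);
    $var = array_sum(array_map(fn($x) => ($x - $m) ** 2, $xs)) / (count($xs) - 1);
    return sqrt($var);
}

$controlMs   = [212.0, 208.5, 214.2, 210.9, 209.7];   // repeated runs, current code (fabricated values)
$candidateMs = [221.3, 219.8, 223.0, 220.4, 222.1];   // repeated runs, new code (fabricated values)

$shift = mean($candidateMs) - mean($controlMs);
$noise = stddev($controlMs);

if ($shift > 2 * $noise) {
    echo "possible regression: +" . round($shift, 1) . " ms (noise ±" . round($noise, 1) . " ms)\n";
} else {
    echo "within normal variance\n";
}
```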
The weekly push itself occurs in stages. The first stage is deployment to H1, a set of internal
servers accessible only to Facebook engineers. These servers are used for a final round of testing by the engineers who contributed code to the push.
The second stage is deployment to H2, a few thousand machines that serve a small fraction
of real-world users. If the new code doesn’t raise any alerts at H2, it’s pushed to H3, which is
full deployment on all servers. If problems arise, engineers will fix them, and the cycle repeats.
Alternatively, the code might be rolled back to the previous version. Two kinds of rollback exist:
The typical rollback reverts a single commit and any dependencies (which are few or nonexistent
owing to the practice of small and independent commits, as well as the high frequency of commits
and pushes). A much rarer rollback occurs when the entire binary must revert to the previous
working version.
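In outline, the staged rollout resembles the following sketch, where deployTo, healthy, and rollback are hypothetical stand-ins for the real release tooling and monitoring:

```php
<?php
// Hypothetical outline of the H1 -> H2 -> H3 staged push with a rollback path.
function deployTo(string $tier, string $build): void { echo "deploying $build to $tier\n"; }
function healthy(string $tier): bool                 { return true; }   // e.g., alerts and error rates stay flat
function rollback(string $build): void               { echo "rolling back $build\n"; }

function pushBuild(string $build): bool
{
    foreach (['H1 (employees only)', 'H2 (small user fraction)', 'H3 (all servers)'] as $tier) {
        deployTo($tier, $build);
        if (!healthy($tier)) {
            rollback($build);   // revert the offending commit or, rarely, the whole binary
            return false;
        }
    }
    return true;
}

pushBuild('www-2012-08-14');
```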
Facebook operates numerous servers in dozens of clusters spread across four geographical lo-
cations. Pushing a new version of the code to all these servers isn’t trivial. The deployed executable
size is around 1.5 Gbytes, including the Web server and compiled Facebook application. The code
and data propagate to all servers via BitTorrent, which is configured to minimize global traffic
by exploiting cluster and rack affinity. The time needed to propagate to all the servers is roughly
20 minutes. The Facebook site's responsiveness isn't affected when code is updated; rather, each
server in its turn switches to the new version. A small amount of excess capacity helps facilitate
the staggered transition.
As a matter of policy, all engineers who contributed code must be available online during the
push. The release system verifies this by contacting them automatically using a system of IRC
bots; if an engineer is unavailable (at least for daily pushes), his or her commit will be reverted.
This means that the number of people on call is proportional to the number of code changes being
pushed — again, ensuring that the process is scalable.
Note that for a large and complex application such as Facebook, it isn’t always obvious whether
a problem has occurred. For example, a small bug in the ranking function that wrongly prioritizes
some newsfeed stories over others would be easy to miss. Facebook thus continuously monitors the
system's health with a combination of internal tools such as Claspin (http://www.facebook.com/notes/facebook-engineering/monitoring-cache-with-claspin/10151076705703920) and external sources such as tweet analysis.
As noted earlier, an important component of testing new features is testing them under real use
— first, internal use by Facebook employees, and later use by subsets of real users worldwide. It’s
impractical to perform such testing by deploying code on all the servers and then removing it to
stop the test, especially considering that hundreds of such tests could be occurring simultaneously.
Instead, the deployed code includes all that’s been developed, both in production and under test,
using Gatekeeper to control what code paths are actually active. Thus, engineers can turn tests on
and off at will, and also apply them to only select user groups based on criteria such as country
or age group. Gatekeeper can also be used to turn off new code that’s causing problems, thereby
reducing the need to immediately deploy a correction.
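A gating check of this kind could look roughly like the sketch below; the class, project name, and criteria are invented for illustration, since Gatekeeper's actual interface is internal:

```php
<?php
// Hypothetical Gatekeeper-style check: the deployed code contains both paths,
// and a per-project rule decides which users actually see the new one.
final class Gate
{
    // Rules could be edited at runtime; flipping 'enabled' off disables the
    // new code path without deploying anything.
    private array $rules = [
        'new_timeline_header' => ['enabled' => true, 'countries' => ['NZ', 'IE'], 'maxAge' => 25],
    ];

    public function allows(string $project, array $user): bool
    {
        $rule = $this->rules[$project] ?? null;
        if ($rule === null || !$rule['enabled']) {
            return false;
        }
        return in_array($user['country'], $rule['countries'], true)
            && ($rule['maxAge'] === null || $user['age'] <= $rule['maxAge']);
    }
}

$gate = new Gate();
$user = ['id' => 42, 'country' => 'NZ', 'age' => 23];
echo $gate->allows('new_timeline_header', $user) ? "show new header\n" : "show old header\n";
```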
Gatekeeper also lets engineers conduct a dark launch, in which code is launched and installed
on all the servers, but users don’t see it because its user interface components are switched off. Such
a launch can be used to test scalability and performance. For example, when Facebook introduced
its chat server, it was initially deployed in a version that sent dummy chat messages without any
user involvement. This stress-tested the chat servers under a realistic workload at scale, without
users knowing about it. When the system was stable enough to support a real workload, the dummy
messages were turned off and the user interface turned on.
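The chat example might be sketched along these lines, again with hypothetical flags and helpers: the new backend path runs at full volume on real page loads while the user-visible part stays switched off.

```php
<?php
// Hypothetical sketch of a dark launch: exercise the new chat backend on real
// page loads without showing any UI, then later flip the flags.
const SEND_DUMMY_CHAT_TRAFFIC = true;    // stress the chat servers with synthetic messages
const SHOW_CHAT_UI            = false;   // users see nothing until this is turned on

function sendChatMessage(int $fromUser, int $toUser, string $text): void
{
    // Placeholder for the real chat-server call being load-tested.
    echo "chat: $fromUser -> $toUser: $text\n";
}

function onPageLoad(int $userId): void
{
    if (SEND_DUMMY_CHAT_TRAFFIC) {
        sendChatMessage($userId, $userId, 'dark-launch ping');   // dummy traffic, invisible to users
    }
    if (SHOW_CHAT_UI) {
        echo "render chat box\n";   // switched off during the dark launch
    }
}

onPageLoad(42);
```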
3 Personal Responsibility
Facebook has roughly 1,000 development engineers and three release engineers who orchestrate
the daily and weekly pushes. However, it doesn’t have a separate quality assurance (QA) team or
any other designated testers. In response to specific complaints, engineers can explore source code
completely unrelated to their regular work, submitting fixes or at least detailed defect reports.
The absence of a separate QA team starkly contrasts with most traditional software companies,
where engineers develop code and might also write and perform some basic unit tests, but then
throw their code over the wall to the QA team. Such teams are composed of professional testers
who write, maintain, and administer a whole battery of tests. This separation leads to various
problems, including the need for testers to learn the code, and a perception of hierarchy in which development is regarded more highly than testing.
At Facebook, engineers conduct any unit tests for their newly developed code. In addition,
the code must pass all the accumulated regression tests, which are administered automatically as
part of the commit and push process. As mentioned earlier, all new code must be supported by
engineers attending the push on IRC in case problems occur with their code.
Developers must also support the operational use of their software — a combination that's become known as "devops." This further motivates writing good code and testing it thoroughly.
Developers’ personal stake in keeping the system running smoothly complements the engineering
procedures and lets the system maintain quality at scale. Methodologies and tools aren’t enough
by themselves because they can always be misused. Thus, a culture of personal responsibility is
critical.
Consequently, most source files are modified by only a few engineers (see Figure 6). Although
at least one other engineer reviews all changes before they’re committed, a third of the source files
have only been edited by one engineer, and another quarter by two. Only 10 percent of the files
are handled by more than seven engineers. On the other hand, the distribution of engineers per file
has a heavy tail, with the most widely shared file handled by no fewer than 870 distinct engineers.
These widely shared files are predominantly library files and also include major configuration and
top-level PHP files.
Figure 6: Most files are handled by only a few engineers. However, the distribution has a heavy tail: the probability that more than x engineers will handle a file (the "survival probability") drops off slowly with x.
Responsibility for personally developed code is just one aspect in a culture of mutual respon-
sibility. Another comes from experimentation with alternative solutions to large-scale challenges.
For example, when Facebook identified PHP’s performance as a major factor in infrastructure cost,
engineers proposed three different solutions with different risks and gains. Initially, all three were
developed in parallel, but as more of a collaboration than a competition. In particular, the heads
of the different teams identified when their projects were no longer worthwhile because another
team’s solution was proving to be better.
Eventually, the most ambitious alternative prevailed (producing the HipHop compiler; http://github.com/facebook/hiphop-php), but the other two weren't a waste: they provided
important backup capability while needed and were terminated soon after it was evident that a
better option was viable.
In another stark break from traditional practices, even work assignment at Facebook is per-
sonally driven by the engineers. All new engineers first undergo bootcamp, where they become
acquainted with Facebook’s codebase, culture, and processes; then they choose to join the team
where they feel they can play to their strengths and enjoy the work, while aligning with the com-
pany’s priorities, not unlike open-source projects [8]. Naturally, it is also possible to move between
teams. One mechanism supporting team mobility is the hackamonth, whereby engineers join an-
other team for several weeks of work on new ideas in that team’s domain. Subsequently, they can
officially join the team.
On a smaller scale, innovations are encouraged by breaking the routine with frequent, day-long
hackathons. Such break-out time occurs in other companies as well — for example, Google lets
engineers spend 20 percent of their time on projects of their choice. Facebook hackathons are
focused and intensive, and foster interactions among all parts of the company — not just engineers
but also finance, legal, and other departments. Many prominent Facebook features began during
hackathons, including Timeline, chat, video, and HipHop.
The flip side of personal responsibility is responsibility toward the engineers themselves. Due
to the perpetual development mindset, Facebook culture upholds the notion of sustainable work
rates. The hacker culture doesn’t imply working impossible hours. Rather, engineers work normal
hours, take lunch breaks, take weekends off, go on vacation during the winter holidays, and so on
(see Figure 7). In particular, daily code pushes aren't scheduled for weekends.
Figure 7: Sustainable work practices as reflected by the distribution of committing new code on different days of the week and different hours of the day.
4 Summary
Software development at Facebook runs contrary to many of the common practices of the industry.
The main points we have covered include:
• There is no detailed plan to achieve a final, well-specified product.
• Engineers work directly on a common codebase with no branches and merging.
• There is no separate QA team responsible for testing.
• New code is released at a high rate, currently twice every working day.
• Engineers self-select what to work on.
• There is no assignment of blame for failures.
But this does not reflect a lack of regard for established procedure. Rather, it is a willful adjustment
and optimization of the software development process to the unique circumstances at Facebook:
• The product cannot be specified in advance, and it must evolve continuously at a rapid pace.
• Engineers have first-hand experience in the domain, but also need to test innovations on real users to see what works.
• Personal responsibility by the engineers who wrote the code can replace quality assurance obtained by a separate testing organization.
• Testing on real users at scale is possible, and provides the most precise and immediate feedback.
• Learning from experience is more important and beneficial than chastising those responsible for a failure.
Importantly, all these practices aren’t just a disjoint set, but rather gel into a coherent engi-
neering culture that combines with a process to provide considerable oversight on new code (see
Figure 8). Together, these practices balance the need for quick turnaround with that for oversight,
robustness, and correctness. Although some practices are unique to Web-based companies such as
Facebook, others are applicable in general. Indeed, the practices Facebook follows have much in
common with agile software development.
Figure 8: Facebook's version of the deployment pipeline, showing the multiple controls over new code.
Perhaps the biggest surprise is how far individual responsibility can substitute for specialization, methodologies, and formalized procedures. Practices built around assigning blame and protecting oneself have no place in a team of engineers willing to take responsibility for the entire system.
The time and energy liberated by taking a positive, responsible approach to software development
has touched the lives of more than a seventh of the planet.
Acknowledgments
We would like to thank Chuck Rossi, Boris Dimitrov, and Facebook’s communication team for
their insightful comments.
To Read More
On-line sources about Facebook’s software development practices include the following:
• Jolie O'Dell, Move fast, break things: Four stories for hackers from Facebook (interview with Jay Parikh), 26 Jun 2012. http://venturebeat.com/2012/06/26/facebook-hacker-stories/
• Andrew Bosworth, Facebook Engineering Bootcamp, 19 Nov 2009. http://www.facebook.com/note.php?note_id=177577963919
• Steven Grimm, Facebook Engineering: What kind of automated testing does Facebook do?, 29 Jun 2010. http://www.quora.com/Facebook-Engineering/What-kind-of-automated-testing-does-Facebook-do
• Mike Schroepfer, Culture of Innovation, Nov 2010. http://www.youtube.com/watch?v=DfN1YaYdgRg
• Release engineering and push karma, interview with release engineer Chuck Rossi, 5 Apr 2012. https://www.facebook.com/notes/facebook-engineering/release-engineering-10150660826788920
References
[1] B. Atikoglu, Y. Xu, E. Frachtenberg, S. Jiang, and M. Paleczny. Workload analysis of a large-
scale key-value store. In SIGMETRICS Conf. Measurement & Modeling of Comput. Syst.,
pages 53–64, Jun 2012.
[2] P. Chilana, C. Holsberry, F. Oliveira, and A. Ko. Designing for a billion users: A case study
of Facebook. In SIGCHI Conf. Human Factors in Comput. Syst., pages 419–432, May 2012.
[3] D. G. Feitelson. Perpetual development: A model for the Linux kernel life cycle. J. Syst. &
Softw., 85(4):859–875, Apr 2012.
[4] M. W. Godfrey and Q. Tu. Evolution in open source software: A case study. In 16th Intl.
Conf. Softw. Maintenance, pages 131–142, Oct 2000.
[5] J. Humble and D. Farley. Continuous Delivery. Addison-Wesley, 2010.
[6] R. Kohavi, R. Longbotham, D. Sommerfield, and R. M. Henne. Controlled experiments on
the web: Survey and practical guide. Data Mining & Knowledge Discovery, 18(1):140–181,
Feb 2009.
[7] M. M. Lehman, D. E. Perry, and J. F. Ramil. Implications of evolution metrics on software
maintenance. In 14th Intl. Conf. Softw. Maintenance, pages 208–217, Nov 1998.
[8] E. S. Raymond. The cathedral and the bazaar. URL www.catb.org/~esr/writings/cathedral-bazaar/cathedral-bazaar, 2000.
[9] Royal Pingdom Blog. Exploring the software behind Facebook, the world’s largest site. URL
http://royal.pingdom.com/2010/06/18/the-software-behind-facebook/, 18 Jun 2010. (Visited
27 Sep 2010).
[10] A. Thusoo, Z. Shao, S. Anthony, D. Borthakur, N. Jain, J. Sen Sarma, R. Murthy, and
H. Liu. Data warehousing and analytics infrastructure at Facebook. In SIGMOD Intl. Conf.
Management of Data, pages 1013–1020, Jun 2010.