DORA Platform: DevOps Assessment and Benchmarking
Nicole Forsgren1,3, Monica Chiarini Tremblay1, Debra VanderMeer1, Jez Humble2,3
1College of Business, Florida International University
nicolefv@gmail.com, {tremblay, vanderd}@fiu.edu
2DORA
Berkeley
humble@berkeley.edu
1 Introduction
In today’s business environment, organizations are challenged with changing customer demands and expectations, competitor pressures, regulatory environments, and increasingly sophisticated outside threats at a faster rate than in years past. In order for organizations to manage these challenges, they need the ability to deliver software with both speed and stability. Yearly or even quarterly software releases are no longer the norm. Organizations from technology (e.g., Etsy and Amazon) to retail (e.g., Nordstrom and Target) and others are using integrated software development and delivery practices to deliver value to their users, beat their competitors to market, and pivot when the market demands.
Given the complexity of the modern software and infrastructure landscape, technology transformation is a non-trivial task. Software development and delivery include several key capabilities: strong technical practices, decoupled architectures, lean management practices, and a trusting organizational culture. Technical teams and organizations are left with an increasingly long list of potential capabilities to develop to improve their ability to deliver software. Technology transformation is a portfolio management problem, in which technology leaders must allocate limited resources across a broad spectrum of potential areas for capability improvement to deliver the greatest benefit.
Having a view of an organization’s current performance is key to any improvement initiative. Formal assessments are one way to achieve this visibility. Assessments and scorecards are not a new idea (e.g., CMMI [1]), but the industry has yet to find ways to holistically measure and assess technology capabilities in ways that are grounded in research and are repeatable, scalable, and offer industry benchmarks. Currently, most commercial assessments consist of interviews conducted by a team of consultants [2]. These are heavyweight, expensive, not scalable, and subject to bias from the facilitators (for example, a covert goal may be to sell software or continued consulting services). Furthermore, by their nature, these assessments are myopic and only offer a comparison within the firm. Most commercial assessments do not have external data, even though research shows that comparison benchmarks can drive performance improvements.
The DORA (DevOps Research and Assessment) platform presented in this paper seeks to address these limitations. We do this in three stages. First, we build our assessment on prior research that investigates the capabilities that drive improvements in the ability to develop and deliver software. Second, we refine our assessment using psychometric methods that are statistically valid and reliable, and therefore consistent and repeatable. Third, we build our platform on a SaaS model that is scalable and provides industry-wide benchmarks.
2 Foundation
The traditional waterfall methodology treats development as a highly structured process that inhibits rapid software development. As a methodology, it does not allow developers to respond quickly to market needs, nor to incorporate feedback from discoveries made during the delivery process. The Agile manifesto and related agile methodologies emerged as an attempt to address the limitations of traditional waterfall methods by leveraging feedback and embracing change through short, incremental, iterative delivery cycles. While agile methods address aspects of the challenges seen in traditional methodologies and help to speed up the development process, they are often subject to limitations in the planning (i.e., upstream) and deployment (i.e., downstream) stages.
In response to these developments, the DevOps movement started in the late 2000s. One notable difference between DevOps and agile is its extension of agile processes beyond the development role, downstream into IT operations. Additional differences include the application of lean manufacturing concepts, such as WIP limits and visualization of work. Finally, and most importantly, DevOps highlights the importance of communication across organizational boundaries and a high-trust culture [3]. To maintain competitive advantage in the market, enterprise technology leaders are undertaking technology transformations to move from traditional and agile methodologies to DevOps methodologies that are continually improved.
Understanding how an organization currently performs (i.e., measuring and baselining performance) is important to any continuous improvement initiative (e.g., [4]). Many organizations lack the instrumentation or expertise to measure their capabilities in a holistic way, and therefore seek external assessment options.
The research to support this type of assessment exists, both in industry reports (e.g., [5,6,7]) and in academic papers (e.g., [3,8]). However, for several reasons, there currently are no direct ways for technology leaders to apply these findings to their organizations with easy, scalable methods. First, current methods focus on qualitative approaches, which are not scalable and are not appropriate for comparison across time periods and organizations. Second, few in industry understand how to apply behavioral psychometric models and methods; nor do they sufficiently understand analysis, research design, and implementation requirements. Third, team members may not feel safe reporting system and environment performance to internal leaders, but do feel safer reporting to an external anonymized SaaS system [9]. Finally, the unavailability of external benchmarking data to drive performance comparisons, and the inability to measure improvement quantitatively over time in relation to changes in the rest of the industry, prevent teams from understanding the dynamic wider context in which they operate. This can lead to teams failing to take sufficiently strong action, and falling further behind the industry over time.
3 The DORA assessment tool
To address the aforementioned challenges, the DORA assessment tool was designed
to target business leaders, either directly or indirectly (i.e., through channel partners
such as consultancies or system integrators, who can offer the assessment as part of a
larger engagement). The DORA assessment tool process is illustrated in Figure 1.
The DORA Assessment Platform is built using PHP and is a SaaS solution hosted in AWS, with the data stored and processed in AWS East Region 1 with two independent availability zones for disaster recovery. The tool collects no personally identifiable information (PII) and stores no IP addresses, making the assessment and analysis appropriate for use in all geographies (for example, the assessment meets UK privacy law guidelines).
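To make this privacy posture concrete, the following is a minimal illustrative sketch (in Python, not the platform’s actual PHP code) of how a survey response might be sanitized before storage: identifying fields and the IP address are dropped, and the record is keyed by a random token. All field names here are hypothetical and are not taken from the DORA schema.

import uuid

# Hypothetical identifying fields; the real schema is not shown in this paper.
PII_FIELDS = {"respondent_name", "respondent_email", "ip_address"}

def sanitize_response(raw: dict) -> dict:
    """Drop identifying fields and key the record by a random survey token."""
    record = {k: v for k, v in raw.items() if k not in PII_FIELDS}
    record["response_id"] = str(uuid.uuid4())  # random, not derived from identity
    return record

raw = {"respondent_email": "a@example.com", "ip_address": "203.0.113.7",
       "org_unit": "payments", "deploy_frequency": "daily"}
print(sanitize_response(raw))  # only non-identifying answers remain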
Figure 1. Assessment Tool Process
3.1 Measurement Components
IT performance comprises four measurements: lead time for changes, deploy frequency, mean time to restore (MTTR), and change fail rate. Lead time is how long it takes an organization to go from code commit to code successfully running in production or in a releasable state. Deploy frequency is how often code is deployed. MTTR is how long it generally takes to restore service when a service incident occurs (e.g., unplanned outage, service impairment). Change fail rate is the percentage of changes that result in degraded service or subsequently require remediation (e.g., lead to service impairment or outage, or require a hotfix, fix forward, or patch).
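As an illustration of how these four measures can be operationalized, the sketch below (Python, invented data; not DORA’s implementation) computes lead time, deploy frequency, MTTR, and change fail rate from hypothetical deployment and incident records.

from datetime import datetime, timedelta

# Hypothetical records for a 30-day measurement window.
deployments = [
    {"committed": datetime(2017, 1, 2, 9),  "deployed": datetime(2017, 1, 2, 15), "failed": False},
    {"committed": datetime(2017, 1, 3, 10), "deployed": datetime(2017, 1, 4, 11), "failed": True},
    {"committed": datetime(2017, 1, 5, 8),  "deployed": datetime(2017, 1, 5, 9),  "failed": False},
]
incidents = [{"start": datetime(2017, 1, 4, 11), "restored": datetime(2017, 1, 4, 13)}]
window_days = 30

lead_times = [d["deployed"] - d["committed"] for d in deployments]
lead_time = sum(lead_times, timedelta()) / len(lead_times)           # commit-to-production time
deploy_frequency = len(deployments) / window_days                    # deploys per day
restore_times = [i["restored"] - i["start"] for i in incidents]
mttr = sum(restore_times, timedelta()) / len(restore_times)          # mean time to restore
change_fail_rate = sum(d["failed"] for d in deployments) / len(deployments)

print(lead_time, round(deploy_frequency, 2), mttr, f"{change_fail_rate:.0%}")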
Key capabilities are measured along four main dimensions. The technical dimension includes practices that are important components of the continuous delivery paradigm, such as the use of version control, test automation, deployment automation, trunk-based development, and shifting left on security. The process dimension includes several ideas from lean manufacturing, such as visualization of work (e.g., dashboards), decomposition of work (allowing for single-piece flow), and work in process limits. The measurement dimension includes the use of metrics to make business decisions and the use of monitoring tools. Finally, the cultural dimension includes measures of culture that are indicative of high trust and information flow, the value of learning, and job satisfaction.
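For a compact summary, the capability dimensions described above can be represented as a simple mapping; the grouping below mirrors the text, while the data structure itself is only an illustration, not part of the platform.

# Illustrative only: the four dimensions and example capabilities named above.
CAPABILITY_DIMENSIONS = {
    "technical": ["version control", "test automation", "deployment automation",
                  "trunk-based development", "shifting left on security"],
    "process": ["visualization of work", "decomposition of work",
                "work in process limits"],
    "measurement": ["metrics for business decisions", "monitoring tools"],
    "cultural": ["trust and information flow", "value of learning", "job satisfaction"],
}

for dimension, capabilities in CAPABILITY_DIMENSIONS.items():
    print(dimension, "->", ", ".join(capabilities))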
3.2 Survey Deployment
The DORA assessment surveys technologists along the full software product delivery value stream (i.e., those in development, test, QA, IT operations, information security, and product management). This is different from other assessments in that all technologists are polled and not just a handful, and practitioners on the ground are assessed, not just leadership. The surveys include psychometric measures that capture system and team behaviors along four key dimensions: technical, lean management, monitoring, and cultural practices. Completing the survey takes approximately 20 minutes, and draws on prior work (see [2, 4, 5, 6, 7] for the latent constructs that are referenced in the assessment).
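The following sketch illustrates, with invented Likert-scale data, the kind of psychometric scoring such a survey relies on: several items measure one latent construct, each respondent’s items are averaged into a construct score, and internal consistency is checked with Cronbach’s alpha. None of the items, values, or thresholds shown here are taken from the DORA instrument itself.

from statistics import mean, pvariance

# Rows = respondents, columns = Likert items (1-7) for one hypothetical construct.
items = [
    [6, 5, 6, 7],
    [4, 4, 5, 4],
    [7, 6, 6, 6],
    [3, 4, 3, 4],
]

def construct_scores(rows):
    """Average each respondent's items into a single construct score."""
    return [mean(r) for r in rows]

def cronbach_alpha(rows):
    """Internal consistency of the item set; ~0.7+ is a common acceptability bar."""
    k = len(rows[0])
    item_variances = [pvariance(col) for col in zip(*rows)]
    total_variance = pvariance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - sum(item_variances) / total_variance)

print(construct_scores(items), round(cronbach_alpha(items), 2))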
The engagement model for the DORA assessment is one in which technology leaders can act on the analysis of results provided. This may mean looking to internal champions and technical expertise, or it may mean engaging consultants to build out roadmaps and act on the guidance provided. When running an assessment, a survey manager meets with a client to determine the right sampling strategy for optimum data collection. The survey manager then partners with the client to send survey invitations to the client teams, and the platform collects responses.
At the end of data collection, the responses are analyzed and the reports are generated. These reports are sent to the client management team and, optionally, to the client teams. The DORA assessment tool delivers the following: (1) measurement of the key capabilities described above; (2) benchmarking of these key capabilities against their own organization, the industry, and their aspirational peers in the industry; and (3) identification of priorities for high-impact investments in capability development.
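A minimal sketch of the benchmarking step, assuming a made-up industry sample: a client team’s metric is placed against an industry distribution as a percentile rank. The real benchmark draws on DORA’s industry-wide data rather than the values shown here.

from bisect import bisect_right

# Hypothetical industry distribution of deploys per week (sorted ascending).
industry_deploys_per_week = sorted([0.1, 0.25, 0.5, 1, 1, 2, 3, 5, 7, 14, 30, 70])

def percentile_rank(value, sample):
    """Share of the industry sample at or below the client's value."""
    return 100 * bisect_right(sample, value) / len(sample)

client_deploys_per_week = 5  # hypothetical client measurement
print(f"{percentile_rank(client_deploys_per_week, industry_deploys_per_week):.0f}th percentile")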
4 Case Study
We present a case study demonstrating the utility of the DORA platform. Fin500 is a Fortune 500 company and one of the ten largest banks in the United States. The company focuses on an innovative approach to customized services and offerings; this innovative approach requires an ability to develop and deliver quality software rapidly. The Fin500 team was interested in the DORA assessment platform because the various measurement and assessment tools they had been using were either too narrow or too complicated, didn’t offer actionable insights, or didn’t show them how they compared against the industry. Crucially, these other tools didn’t identify which capabilities were the most important for them to focus on first. Only the DORA platform provided all three things: holistic measurement, an industry benchmark, and identification of the most important capabilities.
Following assessments across 17 teams and seven business units, DORA’s analysis identified two key areas for capability development: automating change control processes and trunk-based development. Trunk-based development is a coding practice characterized by developers working off a single mainline in a code repository; branches that have very short lifetimes before being merged into master; and application teams that rarely or never have “code lock” periods when no one can check in code or do pull requests due to merge conflicts, code freezes, or stabilization phases.
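One way to make this practice measurable is to look at branch lifetimes; the sketch below (invented branch records, not the DORA tooling) computes the share of branches merged back into the mainline within a day, a plausible proxy for short-lived branches.

from datetime import datetime, timedelta

# Hypothetical branch records: (created, merged into the mainline).
branches = [
    (datetime(2017, 3, 1, 9),  datetime(2017, 3, 1, 16)),  # ~7 hours
    (datetime(2017, 3, 2, 10), datetime(2017, 3, 2, 13)),  # ~3 hours
    (datetime(2017, 3, 3, 9),  datetime(2017, 3, 9, 17)),  # almost a week
]

def short_lived_share(records, limit=timedelta(days=1)):
    """Fraction of branches whose lifetime before merge is within the limit."""
    short = sum((merged - created) <= limit for created, merged in records)
    return short / len(records)

print(f"{short_lived_share(branches):.0%} of branches merged within a day")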
While the team was aware that their change approval processes were a likely candidate for improvement, the analysis provided an evidence-based second opinion, giving them the leverage needed to prioritize the work. Trunk-based development proved to be a bigger challenge: some were skeptical that this would be a key driver of IT performance improvement. But the analysis was clear: these capabilities were key. Fin500 created organization-wide working groups and workshops on branching strategies and worked to reduce the number of manual approvals in their change approval processes. In just two months, the team was able to increase the number of releases to production from 40 to over 800. Furthermore, this improvement occurred with no increase in production incidents or outages.
The teams and their leadership also commented on the value of participating in the assessment, since the survey itself highlights and reinforces behaviors and best practices across the dimensions described above. The DORA assessment becomes both a measurement and a learning opportunity, creating a shared understanding of how to drive improvement across the organization. A screencast of the DORA platform can be seen here: http://bit.ly/2k5SYJW.
5 References
1. CMMI Product Team (2010). CMMI® for Development, Version 1.3: Improving processes for developing better products and services (CMU/SEI-2010-TR-033). Software Engineering Institute.
2. Shetty, Y. K. (1993). Aiming high: Competitive benchmarking for superior performance. Long Range Planning, 26(1), 39-44.
3. Forsgren, N., & Humble, J. (2016). The role of continuous delivery in IT and organizational performance. In Proceedings of the Western Decision Sciences Institute (WDSI) 2016, Las Vegas, NV.
4. Shetty, Y. K. (1993). Aiming high: Competitive benchmarking for superior performance. Long Range Planning, 26(1), 39-44.
5. Forsgren Velasquez, N., Kim, G., Kersten, N., & Humble, J. (2014). 2014 State of DevOps Report.
6. Puppet Labs and IT Revolution (2015). 2015 State of DevOps Report.
7. Brown, A., Forsgren, N., Humble, J., Kersten, N., & Kim, G. (2016). 2016 State of DevOps Report.
8. Forsgren, N., & Humble, J. (2016). DevOps: Profiles in ITSM performance and contributing factors. In Proceedings of the Western Decision Sciences Institute (WDSI) 2016, Las Vegas, NV.
9. Lawler, E. E., Nadler, D., & Cammann, C. (1980). Organizational assessment: Perspectives on the measurement of organizational behavior and the quality of work life. John Wiley & Sons.