Figure 3 - uploaded by Maurice Dawson
IBM System Science Institute Relative Cost of Fixing Defects 

Source publication
Article
Full-text available
This article examines the integration of secure coding practices into the overall Software Development Life Cycle (SDLC). Also detailed is a proposed methodology for integrating software assurance into the Department of Defense Information Assurance Certification & Accreditation Process (DIACAP). This method for integrating software assurance helps...

Citations

... Implementing evenly distributed security measures in every phase of software development is crucial and significantly reduces the vulnerability of the system, as well as the cost and time consumed to develop it. This is because installing a software patch is much more expensive than resolving the issue in real time during the SDLC phases [13]. Indirectly, this also improves software quality as well as development productivity and efficiency. ...
Preprint
Full-text available
The advancement of technology has made the development of software applications unstoppable. The wide use of software applications has increased the threat to cyber security. The recent pandemic required organizations to adapt to and manage new threats and cyberattacks due to the rising number of cybercrime activities across the digital ecosystem. This situation has highlighted the importance of ensuring that software is safe to use. Therefore, software development that emphasizes security aspects in every phase of the software development life cycle (SDLC) should be prioritized and practised to minimize cybersecurity problems. In this study, a document survey was conducted to gain an understanding of secure software development processes and activities. The information was retrieved from reliable scientific research databases such as IEEE, Science Direct and Google Scholar. Moreover, trusted web resources were also referenced to support the arguments in the literature study. Findings show that there are several security measures for every phase of the SDLC that should be carried out to improve the security performance of the developed software. The authors also suggest solutions for dealing with current issues in secure software development, which include educating and training the development team on secure coding practices, utilizing automated tools for software testing, and implementing continuous automated scanning for threats and vulnerabilities in the system environment.
... The previously mentioned costs of implementing fairness metrics and auditing practices are smaller than the costs of addressing system limitations later in the development process. Implementing solutions at the design phase, compared to the testing phase or deployment stage, can ultimately cut costs by a significant amount (Dawson et al. 2010). ...
Article
Full-text available
Artificial intelligence (AI) is increasingly relied upon by clinicians for making diagnostic and treatment decisions, playing an important role in imaging, diagnosis, risk analysis, lifestyle monitoring, and health information management. While research has identified biases in healthcare AI systems and proposed technical solutions to address these, we argue that effective solutions require human engagement. Furthermore, there is a lack of research on how to motivate the adoption of these solutions and promote investment in designing AI systems that align with values such as transparency and fairness from the outset. Drawing on insights from psychological theories, we assert the need to understand the values that underlie decisions made by individuals involved in creating and deploying AI systems. We describe how this understanding can be leveraged to increase engagement with de-biasing and fairness-enhancing practices within the AI healthcare industry, ultimately leading to sustained behavioral change via autonomy-supportive communication strategies rooted in motivational and social psychology theories. In developing these pathways to engagement, we consider the norms and needs that govern the AI healthcare domain, and we evaluate incentives for maintaining the status quo against economic, legal, and social incentives for behavior change in line with transparency and fairness values.
... Dawson et al. [6] examined the integration of secure coding practice into a secure SDLC, compliant with the standard adopted by the Department of Defense Information Assurance Certification and Accreditation Process (DIACAP). They show the importance of integrating software security assurance as a means of protecting the application layer, where, according to them, more than half of the vulnerabilities are found in a system. ...
... On the other hand, a pipeline implementing a continuous deployment scenario would also include the Release & Monitoring activities described above. (Footnote: https://owasp.org/www-project-top-ten/) The first pipeline (Developing) takes the application code as input from a git repository and addresses most of the Developing macro-phase activities, starting with the Quality Code Analysis. ...
... Some sample numbers from Google are that it takes $5 to fix a bug during unit testing and $5,000 to do so during system testing [13]. The IBM System Science Institute reported that bugs were 15 times more costly to fix during testing than during the design phase [11]. In addition, they found that patching a bug in the maintenance phase was 100 times more costly than having fixed it in the design phase. ...
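The multipliers quoted above lend themselves to a quick back-of-the-envelope calculation. The Python sketch below applies the relative-cost factors attributed to the IBM figure (design = 1x, testing = 15x, maintenance = 100x) to an assumed baseline cost of a design-phase fix; the $100 baseline is an illustrative assumption, not a value from the cited sources.

    # Relative cost of fixing a defect by SDLC phase, per the IBM System
    # Science Institute figure cited above: testing ~15x and maintenance
    # ~100x the cost of a fix made during design.
    RELATIVE_COST = {"design": 1, "testing": 15, "maintenance": 100}

    def fix_cost(phase: str, design_cost: float = 100.0) -> float:
        """Estimated cost of a fix, given an assumed design-phase cost."""
        return RELATIVE_COST[phase] * design_cost

    if __name__ == "__main__":
        for phase, factor in RELATIVE_COST.items():
            print(f"{phase:12s} {factor:>4}x  ${fix_cost(phase):,.0f}")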
Article
Fuzzing is the process of finding security vulnerabilities in code by creating inputs that will trigger them. Grammar-based fuzzing uses a grammar, which represents the syntax of all inputs a target program will accept, allowing the fuzzer to create well-formed complex inputs. This thesis conducts an in-depth study of two blackbox grammar-based fuzzing methods, GLADE and Learn&Fuzz, examining their performance and their usability for the average user. The blackbox fuzzer Radamsa was also used to compare fuzzing effectiveness. In our results for fuzzing PDF objects, GLADE beats both Radamsa and Learn&Fuzz in terms of coverage and pass rate. XML inputs were also tested, but the results only show that XML is a relatively simple input format, as the coverage results were mostly the same. For the XML pass rate, GLADE beats all of them except the SampleSpace generation method of Learn&Fuzz. In addition, this thesis discusses interesting problems that occur when using machine learning for fuzzing. Drawing on experience from the study, this thesis proposes an improvement to GLADE’s user-friendliness through the use of a configuration file. This thesis also proposes a theoretical improvement to machine learning fuzzing through supplementary examples created by GLADE.
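As a rough illustration of what grammar-based input generation involves (this is not the GLADE or Learn&Fuzz algorithm), the Python sketch below expands a toy context-free grammar into random well-formed strings; the grammar, names, and depth limit are invented for the example.

    import random

    # Toy context-free grammar: nonterminals (in angle brackets) map to
    # lists of alternative expansions.
    GRAMMAR = {
        "<expr>": [["<expr>", "+", "<term>"], ["<term>"]],
        "<term>": [["(", "<expr>", ")"], ["<digit>"]],
        "<digit>": [["0"], ["1"], ["7"]],
    }

    def generate(symbol: str = "<expr>", depth: int = 0, max_depth: int = 8) -> str:
        """Randomly expand a nonterminal into a well-formed string."""
        if symbol not in GRAMMAR:
            return symbol  # terminal symbol, emit as-is
        # Past the depth limit, always pick the last (simplest) alternative
        # so the expansion terminates.
        choices = GRAMMAR[symbol]
        expansion = choices[-1] if depth >= max_depth else random.choice(choices)
        return "".join(generate(s, depth + 1, max_depth) for s in expansion)

    if __name__ == "__main__":
        for _ in range(5):
            print(generate())  # candidate inputs to feed to the target program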
... After all, one of the main reasons why technology providers engage with auditors is that it is cheaper and easier to address system vulnerabilities early in the development process. For example, it can cost up to 15 times more to fix a software bug found during the testing phase than to fix the same bug found in the design phase [140]. This suggests that, despite the associated costs, businesses have clear incentives to design and implement effective EBA procedures. ...
Article
Full-text available
Ethics-based auditing (EBA) is a structured process whereby an entity’s past or present behaviour is assessed for consistency with moral principles or norms. Recently, EBA has attracted much attention as a governance mechanism that may help to bridge the gap between principles and practice in AI ethics. However, important aspects of EBA—such as the feasibility and effectiveness of different auditing procedures—have yet to be substantiated by empirical research. In this article, we address this knowledge gap by providing insights from a longitudinal industry case study. Over 12 months, we observed and analysed the internal activities of AstraZeneca, a biopharmaceutical company, as it prepared for and underwent an ethics-based AI audit. While previous literature concerning EBA has focussed on proposing or analysing evaluation metrics or visualisation techniques, our findings suggest that the main difficulties large multinational organisations face when conducting EBA mirror classical governance challenges. These include ensuring harmonised standards across decentralised organisations, demarcating the scope of the audit, driving internal communication and change management, and measuring actual outcomes. The case study presented in this article contributes to the existing literature by providing a detailed description of the organisational context in which EBA procedures must be integrated to be feasible and effective.
... In this work, we propose to apply the surprisal measure to software engineering artefacts, motivated by many researchers arguing that software developers need to be aware of unusual or surprising events in their repositories, e.g., when summarizing project activity [19], notifying developers about unusual commits [7,9], and for the identification of malicious content [26]. The basic intuition is that catching bad surprises early will save effort, cost, and time, since bugs cost significantly more to fix during implementation or testing than in earlier phases [17], and by extension, bugs cost more the longer they exist in a product after being reported and before being addressed. ...
Preprint
Full-text available
Background. From information theory, surprisal is a measurement of how unexpected an event is. Statistical language models provide a probabilistic approximation of natural languages, and because surprisal is constructed from the probability of an event occurring, it is possible to determine the surprisal associated with English sentences. The issues and pull requests of software repository issue trackers give insight into the development process and likely contain the surprising events of this process. Objective. Prior works have identified that unusual events in software repositories are of interest to developers, and use simple code metrics-based methods for detecting them. In this study we will propose a new method for unusual event detection in software repositories using surprisal. With the ability to find surprising issues and pull requests, we intend to further analyse them to determine if they actually hold importance in a repository, or if they pose a significant challenge to address. If it is possible to find bad surprises early, or before they cause additional trouble, it is plausible that effort, cost and time will be saved as a result. Method. After extracting the issues and pull requests from 5000 of the most popular software repositories on GitHub, we will train a language model to represent these issues. We will measure their perceived importance in the repository, measure their resolution difficulty using several analogues, measure the surprisal of each, and finally generate inferential statistics to describe any correlations.
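For reference, surprisal as described here is simply the negative log-probability of an event. A minimal Python sketch follows; the probabilities are invented placeholders, not values from the study, where a language model would supply them.

    import math

    def surprisal(probability: float) -> float:
        """Information-theoretic surprisal in bits: -log2 P(event)."""
        return -math.log2(probability)

    # A trained language model would provide P(sentence); these are made up.
    print(surprisal(0.5))    # 1 bit: an unremarkable event
    print(surprisal(0.001))  # ~10 bits: a surprising event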
... This shows that security is one of the serious issues of the current era that needs to be addressed carefully during the SDLC. Further, the relative cost of addressing bugs and failures increases as the project progresses, as noted in the IBM System Science Institute report [26]. Therefore, handling security from the beginning of the project is necessary to protect the software from future security breaches. ...
... However, the relative cost of fixing defects grows significantly through the software development life-cycle. [1] found that resolving defects during maintenance can cost 100 times more than detecting and fixing them early. Software defect prediction (SDP) plays an important role in reducing this cost by recognizing defect-prone modules of a software system prior to testing [2]. ...
... It is time consuming, disrupts schedules, and hurts the reputation of software products. Moreover, it is generally accepted that fixing bugs costs more the later they are found and that maintenance is costlier than initial development (Boehm, 1981;Boehm & Basili, 2001;Boehm & Papaccio, 1988;Dawson et al., 2010;Hackbarth et al., 2016). The effort invested in bug fixing therefore reflects on the health of the development process. ...
Article
Full-text available
The effort invested in software development should ideally be devoted to the implementation of new features. But some of the effort is invariably also invested in corrective maintenance, that is in fixing bugs. Not much is known about what fraction of software development work is devoted to bug fixing, and what factors affect this fraction. We suggest the Corrective Commit Probability (CCP), which measures the probability that a commit reflects corrective maintenance, as an estimate of the relative effort invested in fixing bugs. We identify corrective commits by applying a linguistic model to the commit messages, achieving an accuracy of 93%, higher than any previously reported model. We compute the CCP of all large active GitHub projects (7,557 projects with 200+ commits in 2019). This leads to the creation of an investment scale, suggesting that the bottom 10% of projects spend less than 6% of their total effort on bug fixing, while the top 10% of projects spend at least 39% of their effort on bug fixing — more than 6 times more. Being a process metric, CCP is conditionally independent of source code metrics, enabling their evaluation and investigation. Analysis of project attributes shows that lower CCP (that is, lower relative investment in bug fixing) is associated with smaller files, lower coupling, use of languages like JavaScript and C# as opposed to PHP and C++, fewer code smells, lower project age, better perceived quality, fewer developers, lower developer churn, better onboarding, and better productivity.
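The CCP metric itself is straightforward to compute once commits have been labelled. The sketch below uses a naive keyword heuristic in place of the paper's trained linguistic model; the keyword list, function names, and example messages are illustrative assumptions, not the authors' implementation.

    # Corrective Commit Probability (CCP): fraction of commits classified as
    # bug-fixing. The cited paper uses a linguistic model with ~93% accuracy;
    # the keyword heuristic here is only an illustrative stand-in.
    CORRECTIVE_HINTS = ("fix", "bug", "defect", "fault", "repair", "patch")

    def is_corrective(commit_message: str) -> bool:
        """Crude label: does the message contain a bug-fixing keyword?"""
        msg = commit_message.lower()
        return any(hint in msg for hint in CORRECTIVE_HINTS)

    def corrective_commit_probability(commit_messages: list[str]) -> float:
        """Share of commits labelled corrective; 0.0 for an empty history."""
        if not commit_messages:
            return 0.0
        corrective = sum(is_corrective(m) for m in commit_messages)
        return corrective / len(commit_messages)

    # Example with made-up commit messages:
    msgs = ["Fix null pointer in parser", "Add CSV export", "Update docs"]
    print(corrective_commit_probability(msgs))  # -> 0.333...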