Article

Integrating Software Assurance into the Software Development Life Cycle (SDLC)


Abstract

This article examines the integration of secure coding practices into the overall Software Development Life Cycle (SDLC). It also details a proposed methodology for integrating software assurance into the Department of Defense Information Assurance Certification & Accreditation Process (DIACAP). This method for integrating software assurance helps properly secure the application layer, where more than half of a system's vulnerabilities lie.


... Dawson et al. [6] examined the integration of secure coding practice into a secure SDLC, compliant with the standard adopted by the Department of Defense Information Assurance Certification and Accreditation Process (DIACAP). They show the importance of integrating software security assurance as a means of protecting the application layer, where, according to them, more than half of the vulnerabilities are found in a system. ...
... On the other hand, a pipeline implementing a continuous deployment scenario would also include the Release & Monitoring activities described above. The first pipeline (Developing) takes the application code as input from a git repository and addresses most of the Developing macro-phase activities, starting with the Quality Code Analysis. ...
... Security aspects should be integrated into all phases of the development cycle to avoid repair costs, which can otherwise increase exponentially. Relative defect repair costs are shown in Fig. 2 [16]. Discovering a vulnerability late may force reconstruction and re-implementation of the software code, which requires considerable time and money. ...
... The figure above shows that defects identified in the testing phase are 15 times more expensive to repair than those found in the design phase, and twice as expensive as those found in the implementation phase. Defects found in maintenance are the most costly, at 7 times the cost of the testing phase [16]. ...
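The cost ratios quoted above can be turned into a tiny worked model. The sketch below derives per-phase repair costs from those ratios, taking the design phase as a 1x baseline (the function name and baseline choice are illustrative, not from the cited study):

```python
def relative_defect_costs(design_cost: float = 1.0) -> dict[str, float]:
    """Derive per-phase repair costs from the quoted ratios."""
    testing = 15 * design_cost      # testing is 15x the design phase
    implementation = testing / 2    # testing is 2x the implementation phase
    maintenance = 7 * testing       # maintenance is 7x the testing phase
    return {
        "design": design_cost,
        "implementation": implementation,
        "testing": testing,
        "maintenance": maintenance,
    }

print(relative_defect_costs())
# {'design': 1.0, 'implementation': 7.5, 'testing': 15.0, 'maintenance': 105.0}
```

Composing the ratios shows why early detection matters: under these multipliers, a defect that slips through to maintenance costs two orders of magnitude more than one caught in design.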
Preprint
Full-text available
Information protection is becoming a focal point for designing, creating and implementing software applications within highly integrated technology environments. The use of safe coding techniques in the software development process is required by many industrial IT security standards and policies. Despite current cyber protection measures and best practices, vulnerabilities remain widespread and pose a serious threat to every developed software product. It is crucial to understand the role of secure software development in security management, which is affected by causes such as human security-related factors. Although developers are often held accountable for security vulnerabilities, in reality, many problems grow from a lack of organizational support for handling security during development tasks. While abstract safe coding guidelines are generally recognized, there are limited low-level secure coding guidelines for the various programming languages. A good technique is required to standardize these guidelines for software developers. The goal of this paper is to address this gap by providing software designers and developers with direction through a set of secure software development guidelines. Additionally, an overview of criteria for the selection of safe coding guidelines is given, along with an investigation of appropriate awareness methods for secure coding.
... Security flaws found early in the process are generally easier to solve and significantly cheaper compared to later SDLC stages. This is similar and in line, to any other bug fixes in software development as some time ago stated by Boehm [20] and as further supported by Dawson [21], recommending Secure SDLC where an IT product is one with security built in rather than security retrofitted. ...
... The fact that security is not a main element in the Agile or DevOps approach is not strange. Given the importance though and impact of security in software nowadays, the recommendation is made that security cannot be seen as a feature but must be an inherent part of the software development approach in line with [20], [21]. Secondly, we conclude that not only the hard controls (Content and Process) should be taken into account. ...
... One strategy to mitigate software-based vulnerabilities is to integrate security measures throughout the software development lifecycle, from initial development to postdeployment phases. Detecting and addressing vulnerabilities early in the development process significantly reduces total development time and cost (Dawson et al., 2010;Hackbarth et al., 2016;NIST, 2002). With increasing cyberattacks due to software vulnerabilities, integration of not only the development and operations functions (DevOps) but also the security function (DevSecOps) has been suggested (IBM, 2020). ...
Article
Full-text available
The availability of powerful head-mounted displays (HMDs) has made virtual reality (VR) a mainstream technology and spearheaded the idea of immersive virtual experiences within the Metaverse-a shared and persistent virtual world. Companies are eagerly investing in various VR products and services, aiming to be early adopters and create new revenue streams by taking advantage of the hype surrounding VR and the Metaverse. However, unique privacy and security issues associated with VR arise from the data collected by both VR applications and peripherals. Given that VR HMDs equipped with intrusive sensors designed to track eye movements, facial expressions, and other biometric data are already available in the market, it is essential to integrate security and privacy into the VR application development lifecycle. This study presents a hypothetical case that revolves around a team of programmers and cybersecurity experts tasked to develop new VR applications for a technology conglomerate that recently shifted its attention towards the Metaverse. Building on development, security, and operations (DevSecOps) practice, the case study tasks participants to consider secure software development, threat modeling, and adoption of security and privacy frameworks in the context of VR application development. This study contributes to IS education by emphasizing potential privacy and security issues associated with this rapidly evolving technology. Additionally, it demonstrates how the implementation of DevSecOps practices can effectively address potential security challenges throughout the software development process.
... After all, it is often cheaper to address vulnerabilities early in software development processes. Dawson et al. (2010) estimated that it can cost up to 15 times more to fix a bug in an AI system when it is found during the testing phase rather than the design phase. ...
Preprint
Full-text available
AI auditing is a rapidly growing field of research and practice. This review article, which doubles as an editorial to Digital Society's topical collection on Auditing of AI, provides an overview of previous work in the field. Three key points emerge from the review. First, contemporary attempts to audit AI systems have much to learn from how audits have historically been structured and conducted in areas like financial accounting, safety engineering and the social sciences. Second, both policymakers and technology providers have an interest in promoting auditing as an AI governance mechanism. Academic researchers can thus fill an important role by studying the feasibility and effectiveness of different AI auditing procedures. Third, AI auditing is an inherently multidisciplinary undertaking, to which substantial contributions have been made by computer scientists and engineers as well as social scientists, philosophers, legal scholars and industry practitioners. Reflecting this diversity of perspectives, different approaches to AI auditing have different affordances and constraints. Specifically, a distinction can be made between technology-oriented audits, which focus on the properties and capabilities of AI systems, and process-oriented audits, which focus on technology providers' governance structures and quality management systems. The next step in the evolution of auditing as an AI governance mechanism, this article concludes, should be the interlinking of these available (and complementary) approaches into structured and holistic procedures to audit not only how AI systems are designed and used but also how they impact users, societies and the natural environment in applied settings over time.
... Software engineering is an iterative process: tests are often failed, which leads to changes in the software. The earlier issues are detected and the corresponding changes made, the less expensive those changes are [15]. Therefore, it is crucial to validate and verify the requirements and the corresponding specifications in the requirements phase. ...
... SDLC is a widely used methodology employed by the software industry for designing, developing, and testing software of superior quality. The primary objective of the Software Development Life Cycle is to deliver software of superior quality that meets or exceeds customer expectations, while also ensuring timely completion and adherence to budgeted costs [18], [19]. ...
Article
Full-text available
Floods are a prevalent environmental concern that frequently impacts urban regions. In conjunction with the growth of urbanization, the expansion of urban areas frequently gives rise to issues related to flooding. Samarinda, the administrative center of East Kalimantan Province, is intrinsically linked to the problems of flooding and waterlogging. The occurrence of floods in this urban area is frequently observed during periods of heavy rainfall and elevated water discharge from the Mahakam River and its associated tributaries. The utilization of a Web Geographic Information System to present maps depicting areas susceptible to flooding serves as a digital instrument and informational resource for urban communities. This research aims to design and implement WebGIS software for the classification of flood-prone regions in Samarinda City, employing the Object-Oriented Design System (OODS) approach. The process of flood classification modeling involves the utilization of various factors, including rainfall data, road types, slope gradients, and land use types. The design of this WebGIS system incorporates OODS principles, utilizing the Unified Modeling Language (UML) methodology. This approach encompasses several UML techniques, including Use Case Diagrams, Class Diagrams, and Entity Relationship Diagrams. The outcome of this study is the design and implementation of the WebGIS program, which serves the purpose of classifying flood-prone areas in Samarinda City. This result encompasses object designs and interfaces that visually represent flood zones on a map. The system validation test was conducted by comparing the classification results and historical flood data, with a result of 90%.
... Implementing equally distributed security measures in every phase of software development is crucial: it significantly reduces the vulnerability of the system as well as the cost and time consumed to develop it. This is because installing patching software is much more expensive than solving the issues in real time during the SDLC phases [13]. Indirectly, this improves software quality as well as development productivity and efficiency. ...
Preprint
Full-text available
The advancement of technology has made the development of software applications unstoppable. The wide use of software applications has increased the threat to cyber security. The recent pandemic required organizations to adapt to and manage new threats and cyberattacks due to the rising number of cybercrime activities across the digital ecosystem. This situation has led to the importance of ensuring that software is safe to use. Therefore, software development that emphasizes security aspects in every phase of the software development life cycle (SDLC) should be prioritized and practised to minimize cybersecurity problems. In this study, a document survey was conducted to achieve an understanding of secure software development processes and activities. The information was retrieved from reliable scientific research databases such as IEEE, Science Direct and Google Scholar. Moreover, trusted web resources were also referenced to support the arguments in the literature study. Findings show that there are several security measures for every phase of the SDLC that should be conducted to improve the security performance of the developed software. The author also suggests solutions for dealing with current issues in secure software development, which include educating and training the development team on secure coding practices, utilizing automated tools for software testing, and implementing continuous automated scanning for threats and vulnerabilities in the system environment.
... The previously mentioned costs of implementing fairness metrics and auditing practices are smaller than the costs of addressing system limitations later in the development process. Implementing solutions at the design phase, compared to the testing phase or deployment stage, can ultimately cut costs by a significant amount (Dawson et al. 2010). ...
Article
Full-text available
Artificial intelligence (AI) is increasingly relied upon by clinicians for making diagnostic and treatment decisions, playing an important role in imaging, diagnosis, risk analysis, lifestyle monitoring, and health information management. While research has identified biases in healthcare AI systems and proposed technical solutions to address these, we argue that effective solutions require human engagement. Furthermore, there is a lack of research on how to motivate the adoption of these solutions and promote investment in designing AI systems that align with values such as transparency and fairness from the outset. Drawing on insights from psychological theories, we assert the need to understand the values that underlie decisions made by individuals involved in creating and deploying AI systems. We describe how this understanding can be leveraged to increase engagement with de-biasing and fairness-enhancing practices within the AI healthcare industry, ultimately leading to sustained behavioral change via autonomy-supportive communication strategies rooted in motivational and social psychology theories. In developing these pathways to engagement, we consider the norms and needs that govern the AI healthcare domain, and we evaluate incentives for maintaining the status quo against economic, legal, and social incentives for behavior change in line with transparency and fairness values.
... Some sample numbers from Google are that it takes $5 to fix a bug during unit testing and $5,000 to do so during system testing [13]. The IBM System Science Institute reported that bugs were 15 times more costly to fix during testing than during the design phase [11]. In addition, they found that patching a bug in the maintenance phase was 100 times more costly than having fixed it in the design phase. ...
Article
Fuzzing is the process of finding security vulnerabilities in code by creating inputs that will activate the exploits. Grammar-based fuzzing uses a grammar, which represents the syntax of all inputs a target program will accept, allowing the fuzzer to create well-formed complex inputs. This thesis conducts an in-depth study on two blackbox grammar-based fuzzing methods, GLADE and Learn&Fuzz, on their performance and usability to the average user. The blackbox fuzzer Radamsa was also used to compare fuzzing effectiveness. From our results in fuzzing PDF objects, GLADE beats both Radamsa and Learn&Fuzz in terms of coverage and pass rate. XML inputs were also tested, but the results only show that XML is a relatively simple input as the coverage results were mostly the same. For the XML pass rate, GLADE beats all of them except for the SampleSpace generation method of Learn&Fuzz. In addition, this thesis discusses interesting problems that occur when using machine learning for fuzzing. With experience from the study, this thesis proposes an improvement to GLADE’s user-friendliness through the use of a configuration file. This thesis also proposes a theoretical improvement to machine learning fuzzing through supplementary examples created by GLADE.
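The core idea of grammar-based fuzzing, generating structurally valid inputs from a syntax description rather than from random bytes, can be sketched in a few lines. The toy generator below for XML-like elements only illustrates the principle; it is not how GLADE or Learn&Fuzz work internally (GLADE, for instance, learns its grammar from examples), and the tag names are hypothetical:

```python
import random

def gen_elem(depth: int = 0, max_depth: int = 3) -> str:
    """Recursively generate a random, well-formed XML-like element."""
    name = random.choice(["a", "b", "item"])  # hypothetical tag names
    if depth >= max_depth:
        body = "text"  # cap recursion so generation always terminates
    else:
        # A body is either text, empty, or a nested element.
        body = random.choice(["text", "", gen_elem(depth + 1, max_depth)])
    return f"<{name}>{body}</{name}>"

random.seed(1)
for _ in range(3):
    print(gen_elem())  # e.g. <item>text</item>
```

A fuzzer would feed many such inputs to the target parser and watch for crashes or coverage gains; because every input is well-formed by construction, it exercises code paths beyond the parser's early rejection logic, which is why grammar-based methods tend to achieve higher pass rates than purely random mutation.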
... After all, one of the main reasons why technology providers engage with auditors is that it is cheaper and easier to address system vulnerabilities early in the development process. For example, it can cost up to 15 times more to fix a software bug found during the testing phase than to fix the same bug found in the design phase [140]. This suggests that, despite the associated costs, businesses have clear incentives to design and implement effective EBA procedures. ...
Article
Full-text available
Ethics-based auditing (EBA) is a structured process whereby an entity’s past or present behaviour is assessed for consistency with moral principles or norms. Recently, EBA has attracted much attention as a governance mechanism that may help to bridge the gap between principles and practice in AI ethics. However, important aspects of EBA—such as the feasibility and effectiveness of different auditing procedures—have yet to be substantiated by empirical research. In this article, we address this knowledge gap by providing insights from a longitudinal industry case study. Over 12 months, we observed and analysed the internal activities of AstraZeneca, a biopharmaceutical company, as it prepared for and underwent an ethics-based AI audit. While previous literature concerning EBA has focussed on proposing or analysing evaluation metrics or visualisation techniques, our findings suggest that the main difficulties large multinational organisations face when conducting EBA mirror classical governance challenges. These include ensuring harmonised standards across decentralised organisations, demarcating the scope of the audit, driving internal communication and change management, and measuring actual outcomes. The case study presented in this article contributes to the existing literature by providing a detailed description of the organisational context in which EBA procedures must be integrated to be feasible and effective.
... In this work, we propose to apply the surprisal measure to software engineering artefacts, motivated by many researchers arguing that software developers need to be aware of unusual or surprising events in their repositories, e.g., when summarizing project activity [19], notifying developers about unusual commits [7,9], and for the identification of malicious content [26]. The basic intuition is that catching bad surprises early will save effort, cost, and time, since bugs cost significantly more to fix during implementation or testing than in earlier phases [17], and by extension, bugs cost more the longer they exist in a product after being reported and before being addressed. ...
Preprint
Full-text available
Background. From information theory, surprisal is a measurement of how unexpected an event is. Statistical language models provide a probabilistic approximation of natural languages, and because surprisal is constructed from the probability of an event occurring, it is therefore possible to determine the surprisal associated with English sentences. The issues and pull requests of software repository issue trackers give insight into the development process and likely contain the surprising events of this process. Objective. Prior works have identified that unusual events in software repositories are of interest to developers, and use simple code metrics-based methods for detecting them. In this study we will propose a new method for unusual event detection in software repositories using surprisal. With the ability to find surprising issues and pull requests, we intend to further analyse them to determine if they actually hold importance in a repository, or if they pose a significant challenge to address. If it is possible to find bad surprises early, or before they cause additional troubles, it is plausible that effort, cost and time will be saved as a result. Method. After extracting the issues and pull requests from 5000 of the most popular software repositories on GitHub, we will train a language model to represent these issues. We will measure their perceived importance in the repository, measure their resolution difficulty using several analogues, measure the surprisal of each, and finally generate inferential statistics to describe any correlations.
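Surprisal itself is simple to compute: for an event with probability p, the surprisal is -log2(p) bits, so rare events score high. The sketch below illustrates the measure with a hypothetical unigram model over commit-message words (the study proposes a trained language model; this toy model and its probabilities are assumptions for illustration only):

```python
import math

def surprisal(p: float) -> float:
    """Surprisal of an event with probability p, in bits."""
    return -math.log2(p)

# Hypothetical word probabilities standing in for a learned unigram model.
model = {"fix": 0.05, "typo": 0.02, "segfault": 0.001}

def message_surprisal(words: list[str], model: dict[str, float],
                      oov_p: float = 1e-6) -> float:
    """Total surprisal of a word sequence; unseen words get a small floor probability."""
    return sum(surprisal(model.get(w, oov_p)) for w in words)

print(surprisal(0.5))              # 1.0 bit: a fair coin flip
print(round(surprisal(0.001), 2))  # 9.97 bits: a rare, surprising word
```

Ranking issues or pull requests by a score like `message_surprisal` is the intuition behind the proposed detection method: texts a language model finds improbable are flagged as potentially surprising events.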
... This shows that security is one of the serious issues in the current era that needs to be addressed carefully during the SDLC. Further, the relative cost of addressing bugs and failures increases as the project progresses, as mentioned in the IBM System Science Institute report [26]. Therefore, handling security from the beginning of the project is necessary to save the software from future security breaches. ...
Article
Full-text available
Security is critical to the success of software, particularly in today's fast-paced, technology-driven environment. It ensures that data, code, and services maintain their CIA (Confidentiality, Integrity, and Availability). This is only possible if security is taken into account at all stages of the SDLC (Software Development Life Cycle). Various approaches to software quality have been developed, such as CMMI (Capability Maturity Model Integration). However, there exists no explicit solution for incorporating security into all phases of the SDLC. One of the major causes of pervasive vulnerabilities is a failure to prioritize security. Even the most proactive companies use the "patch and penetrate" strategy, in which security is assessed once the job is completed. Increased cost, time overrun, failure to integrate testing and feedback into the SDLC, usage of third-party tools and components, and lack of knowledge are all reasons for not paying attention to the security angle during the SDLC, despite the fact that secure software development is essential for business continuity and survival in today's ICT world. There is a need to implement best practices in the SDLC to address security at all levels. To fill this gap, we have provided a detailed overview of secure software development practices while taking care of project costs and deadlines. We proposed a secure SDLC framework based on the identified practices, which integrates the best security practices in various SDLC phases. A mathematical model is used to validate the proposed framework. A case study and findings show that the proposed system aids in the integration of security best practices into the overall SDLC, resulting in more secure applications.
... If we take the time difference between the first and the last event, this will likely be close to the total development time. This can be done for all users of both the control and [112,113]. ...
... However, the relative cost of fixing defects grows significantly through the software development life-cycle. [1] found that resolving defects in maintenance can cost 100 times more compared to early detection and repair. Software defect prediction (SDP) plays an important role in reducing this cost by recognizing defect-prone modules of a software system prior to testing [2]. ...
... It is time consuming, disrupts schedules, and hurts the reputation of software products. Moreover, it is generally accepted that fixing bugs costs more the later they are found and that maintenance is costlier than initial development (Boehm, 1981;Boehm & Basili, 2001;Boehm & Papaccio, 1988;Dawson et al., 2010;Hackbarth et al., 2016). The effort invested in bug fixing therefore reflects on the health of the development process. ...
Article
Full-text available
The effort invested in software development should ideally be devoted to the implementation of new features. But some of the effort is invariably also invested in corrective maintenance, that is in fixing bugs. Not much is known about what fraction of software development work is devoted to bug fixing, and what factors affect this fraction. We suggest the Corrective Commit Probability (CCP), which measures the probability that a commit reflects corrective maintenance, as an estimate of the relative effort invested in fixing bugs. We identify corrective commits by applying a linguistic model to the commit messages, achieving an accuracy of 93%, higher than any previously reported model. We compute the CCP of all large active GitHub projects (7,557 projects with 200+ commits in 2019). This leads to the creation of an investment scale, suggesting that the bottom 10% of projects spend less than 6% of their total effort on bug fixing, while the top 10% of projects spend at least 39% of their effort on bug fixing — more than 6 times more. Being a process metric, CCP is conditionally independent of source code metrics, enabling their evaluation and investigation. Analysis of project attributes shows that lower CCP (that is, lower relative investment in bug fixing) is associated with smaller files, lower coupling, use of languages like JavaScript and C# as opposed to PHP and C++, fewer code smells, lower project age, better perceived quality, fewer developers, lower developer churn, better onboarding, and better productivity.
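The CCP computation can be sketched as follows. The paper identifies corrective commits with a trained linguistic model; the keyword heuristic below is a hypothetical stand-in, used only to make the metric concrete (the keyword list and function names are assumptions, not from the paper):

```python
# Hypothetical bug-fix vocabulary; the paper trains a linguistic model instead.
CORRECTIVE_HINTS = ("fix", "bug", "error", "fail", "crash", "repair")

def is_corrective(message: str) -> bool:
    """Crude stand-in classifier: does the commit message mention a bug-fix term?"""
    msg = message.lower()
    return any(hint in msg for hint in CORRECTIVE_HINTS)

def ccp(commit_messages: list[str]) -> float:
    """Corrective Commit Probability: fraction of commits classified as corrective."""
    if not commit_messages:
        return 0.0
    return sum(is_corrective(m) for m in commit_messages) / len(commit_messages)

msgs = ["Fix null pointer crash", "Add login page",
        "Refactor parser", "Bug in date parsing"]
print(ccp(msgs))  # 0.5
```

On a real repository one would run the classifier over the full commit log; the paper's investment scale then places a project's CCP within the distribution observed across 7,557 active GitHub projects.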
... Finding and fixing bugs accounts for a significant portion of maintenance cost for software (Britton et al., 2013), and the cost of bug fixing increases exponentially with time (Dawson et al., 2010). Hence, automatic program repair has long been a focus of software engineering, with the goal of lowering down the cost and reducing the introduction of new bugs during bug fixing. ...
Preprint
Full-text available
The advance in machine learning (ML)-driven natural language processing (NLP) points to a promising direction for automatic bug fixing in software programs, as fixing a buggy program can be transformed into a translation task. While software programs contain much richer information than one-dimensional natural language documents, pioneering work on using ML-driven NLP techniques for automatic program repair considered only a limited set of such information. We hypothesize that more comprehensive information about software programs, if appropriately utilized, can improve the effectiveness of ML-driven NLP approaches in repairing software programs. As the first step towards proving this hypothesis, we propose a unified representation to capture the syntax, data flow, and control flow aspects of software programs, and devise a method to use such a representation to guide the transformer model from NLP in better understanding and fixing buggy programs. Our preliminary experiment confirms that the more comprehensive the information about software programs used, the better ML-driven NLP techniques can perform in fixing bugs in these programs.
... Similarly, dependency analysis tools [treatment of E2.2, E2.3] are designed to be run together with code analysis tools. Hence, the code security analysis is often postponed to the prerelease stage (or even skipped altogether), which exponentially increases the cost of fixing discovered vulnerabilities [44]. ...
Preprint
Full-text available
Pushed by market forces, software development has become fast-paced. As a consequence, modern development projects are assembled from 3rd-party components. Security & privacy assurance techniques once designed for large, controlled updates over months or years must now cope with small, continuous changes taking place within a week, and happening in sub-components that are controlled by third-party developers one might not even know exist. In this paper, we aim to provide an overview of the current software security approaches and evaluate their appropriateness in the face of the changed nature of software development. Software security assurance could benefit by switching from a process-based to an artefact-based approach. Further, security evaluation might need to be more incremental, automated and decentralized. We believe this can be achieved by supporting mechanisms for lightweight and scalable screenings that are applicable to the entire population of software components, albeit there might be a price to pay.
... It was adopted in order to hastily deliver the system within the given timeframe of 6 months, at a very low cost. Moreover, RAD provides the frequent developer-to-customer communication during the system development phases, hence it maintains the customer's satisfaction (Naz and Khan, 2015;Hirschberg, 2015;Dawson et al., 2010). Figure 1 illustrates the RAD model used in this research study. ...
Article
Full-text available
This study aims to develop an integrated e-health platform for enhancing delivery of HIV/AIDS healthcare information in Tanzania, which consists of a mobile application and a web-based system. The study is based on the system's functional and non-functional requirements for an e-health system for delivery of HIV/AIDS healthcare information. The Rapid Application Development (RAD) model was adopted during the system development. The system requirements were modelled into a Data Flow Diagram (DFD) in order to obtain a clear flow of HIV/AIDS healthcare information between the clients and HIV/AIDS healthcare practitioners. With the use of different software development tools and environments such as Android Studio and the Symfony framework, both the Android application and the web-based system were developed. Finally, the developed system was tested for individual module functioning as well as the functioning of the fully integrated system. The user acceptance survey gave a mean score above 4 on a scale of 5 for each tested aspect of the system. These scores show that the developed system was positively accepted by the users, and the study recommended that the Ministry of Health deploy the system for enhanced delivery of HIV/AIDS healthcare information.
... And since bugs are time consuming, disrupt schedules, and hurt general credibility, lowering the bug rate has value regardless of other implications, thereby lending value to having a low CCP. Moreover, it is generally accepted that fixing bugs costs more the later they are found, and that maintenance is costlier than initial development [16], [17], [19], [29]. Therefore, the cost of low quality is even higher than implied by the bug ratio difference. ...
Preprint
We present a code quality metric, Corrective Commit Probability (CCP), measuring the probability that a commit reflects corrective maintenance. We show that this metric agrees with developers' concept of quality, is informative, and is stable. Corrective commits are identified by applying a linguistic model to the commit messages. We compute the CCP of all large active GitHub projects (7,557 projects with at least 200 commits in 2019). This leads to the creation of a quality scale, suggesting that the bottom 10% of quality projects spend at least 6 times more effort on fixing bugs than the top 10%. Analysis of project attributes shows that lower CCP (higher quality) is associated with smaller files, lower coupling, use of languages like JavaScript and C# as opposed to PHP and C++, fewer developers, lower developer churn, better onboarding, and better productivity. Among other things, these results support the "Quality is Free" claim, and suggest that achieving higher quality need not require higher expenses.
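A minimal sketch of the CCP computation described above, substituting a simple keyword heuristic for the paper's linguistic model (the pattern and example messages are illustrative assumptions, not the study's actual classifier):

```python
import re

# Hypothetical keyword heuristic standing in for the paper's linguistic model:
# a commit is flagged "corrective" if its message mentions bug-fixing terms.
CORRECTIVE_PATTERN = re.compile(r"\b(fix(es|ed)?|bug|defect|fault|patch)\b",
                                re.IGNORECASE)

def is_corrective(message: str) -> bool:
    """Classify a commit message as corrective maintenance."""
    return bool(CORRECTIVE_PATTERN.search(message))

def ccp(messages: list[str]) -> float:
    """Corrective Commit Probability: fraction of commits classified corrective."""
    if not messages:
        return 0.0
    return sum(is_corrective(m) for m in messages) / len(messages)

history = [
    "Add OAuth login flow",
    "Fix null pointer bug in parser",
    "Refactor config loader",
    "fixed off-by-one defect in pagination",
]
print(f"CCP = {ccp(history):.2f}")  # prints "CCP = 0.50"
```

A real classifier would need to handle messages like "fix typo in README" (non-functional) that a bare keyword match miscounts, which is why the paper trains a linguistic model rather than matching keywords.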
... Moreover, it supports deployment into the cloud as well. [10], [11], [12]. To gather the literature, the study used various methods to collect relevant data, drawing on Web of Science, Scopus, Elsevier, Emerald, and ScienceDirect to obtain meaningful information in the selected topic area. ...
Article
Full-text available
Abstract The paper developed and tested a human resource management system in the university. To achieve the aim and objectives, the study designed a methodology and gathered the opinions of users. Percentage methods were adopted to reach the results, and data were collected with a structured questionnaire. The study found that the program executed well and that employees use the human resource management system in the human resource department of universities. The tools were useful for developers and researchers. Keywords: Human resource management system, universities, Visual Basic, Oracle, employees. Classification: Technical paper.
... 3.2.4 A common taxonomy for computer system errors is the software development lifecycle stage (see Table 5); it is often asserted that the cost of fixing an error at each stage is ten times the cost of fixing it in the previous stage (Dawson, Burrell, Rahim, & Brewster, 2010). We add in the less commonly included stages of concept (was it a good idea to do this in the first place?) at the beginning, and decommissioning (what problems are caused by getting rid of the product?) at the end. ...
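Taken literally, the ten-times-per-stage rule compounds quickly across the lifecycle; a toy calculation under that assumption (the stage list, including the concept and decommissioning stages the authors add, and the fixed multiplier are illustrative, not the authors' measured figures):

```python
# Illustrative arithmetic only: compounding the ten-times-per-stage
# rule of thumb over the lifecycle stages mentioned above.
STAGES = ["concept", "requirements", "design", "implementation",
          "testing", "deployment", "maintenance", "decommissioning"]

def relative_fix_cost(stage: str, escalation: int = 10) -> int:
    """Relative cost of fixing a defect at a given stage, normalized so
    that a fix at the 'concept' stage costs 1x."""
    return escalation ** STAGES.index(stage)

for stage in STAGES:
    print(f"{stage:16s} {relative_fix_cost(stage):>12,}x")
```

The ten-million-fold spread at the tail is exactly why the rule is treated as a heuristic rather than a measurement; the defect-repair figures cited elsewhere in this article (15x design-to-testing, 7x testing-to-maintenance) use smaller, empirically estimated multipliers.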
Preprint
Full-text available
In this paper we examine historical failures of artificial intelligence (AI) and propose a classification scheme for categorizing future failures. By doing so we hope that (a) the responses to future failures can be improved through applying a systematic classification that can be used to simplify the choice of response and (b) future failures can be reduced through augmenting development lifecycles with targeted risk assessments.
... The cost of fixing a bug during the maintenance phase of a product is estimated to be a hundred times larger than during implementation [17,26]. Unit testing should be done right from the start of a project. ...
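The "test from the start" habit the thesis argues for can be as small as pairing each new function with a test as it is written; a minimal sketch using Python's standard unittest module (the function and its cases are hypothetical):

```python
import unittest

def parse_port(value: str) -> int:
    """Parse a TCP port string, rejecting non-numeric and out-of-range values."""
    port = int(value)  # raises ValueError on non-numeric input
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

class ParsePortTest(unittest.TestCase):
    """Written alongside parse_port, not months later during maintenance."""

    def test_valid_port(self):
        self.assertEqual(parse_port("8080"), 8080)

    def test_out_of_range(self):
        with self.assertRaises(ValueError):
            parse_port("70000")

    def test_not_a_number(self):
        with self.assertRaises(ValueError):
            parse_port("http")

# run with: python -m unittest <this_module>
```

Catching the missing range check here, at implementation time, is what keeps the hundred-fold maintenance-phase repair cost from ever being paid.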
Thesis
Full-text available
How can we improve the unit testing skills of recent graduates to improve the connection with the software industry?
... Indeed, given the lack of an underlying model, it was impossible to identify most of these errors before runtime, since the ROS graph is not available when the system is not running. This makes these errors quite costly in terms of the resources spent to fix them later in the development cycle [8]. In addition, in the generated architecture, topics and connections are defined at the modeling stage. ...
Conference Paper
Full-text available
Designing a robotic application is a challenging task. It requires vertical expertise spanning various fields, from hardware and low-level communication to high-level architectural solutions for distributed applications. Today a single expert cannot undertake the entire effort of creating a robust and reliable robotic application. The current landscape of robotics middlewares, ROS in primis, does not yet offer a solution for this problem; developers are expected to be both architectural designers and domain experts. In our previous works we used the Architecture Analysis and Design Language to define a model-based approach for robot development, in an effort to separate the competences of software engineers and robotics experts, and to simplify the merging of software artifacts created by the two categories of developers. In this work we present a practical use-case, i.e., an autonomous wheelchair, and show how we used a combination of model-based development and automatic code generation to completely re-design and re-implement an existing architecture originally written by hand.
Conference Paper
We conducted a large-scale fine-grained empirical study in which we quantitatively analyzed the commit histories of 200 Open-Source (OS) Python software systems, whose software repositories were publicly available on GitHub, for a total of 164,980 commits analyzed. We focused on commits—this is why our study is considered fine-grained—to investigate the spread and evolution of security concerns. To detect security concerns at a commit level, we used SonarQube, a popular Static Application Security Testing (SAST) tool. We found, among other things, that: security concerns are spread in OS Python software systems (on average, about 11 security concerns per commit) and tend to survive more than a couple of weeks and a dozen commits; and critical security concerns, despite their high severity level, are the most spread and tend to survive the most. Also, we found that 47 different kinds of security concerns were introduced into the source code of the studied software systems, and the top eight (per number of introductions) are severe and account for 87% of all introduced security concerns. Python developers should pay more attention to security concerns, especially those critical, and use secure coding practices, automated tools, or even DevSecOps to avoid the introduction of security concerns into their source code or fix them as soon as possible.
Article
Full-text available
AI auditing is a rapidly growing field of research and practice. This review article, which doubles as an editorial to Digital Society’s topical collection on ‘Auditing of AI’, provides an overview of previous work in the field. Three key points emerge from the review. First, contemporary attempts to audit AI systems have much to learn from how audits have historically been structured and conducted in areas like financial accounting, safety engineering and the social sciences. Second, both policymakers and technology providers have an interest in promoting auditing as an AI governance mechanism. Academic researchers can thus fill an important role by studying the feasibility and effectiveness of different AI auditing procedures. Third, AI auditing is an inherently multidisciplinary undertaking, to which substantial contributions have been made by computer scientists and engineers as well as social scientists, philosophers, legal scholars and industry practitioners. Reflecting this diversity of perspectives, different approaches to AI auditing have different affordances and constraints. Specifically, a distinction can be made between technology-oriented audits, which focus on the properties and capabilities of AI systems, and process-oriented audits, which focus on technology providers’ governance structures and quality management systems. The next step in the evolution of auditing as an AI governance mechanism, this article concludes, should be the interlinking of these available—and complementary—approaches into structured and holistic procedures to audit not only how AI systems are designed and used but also how they impact users, societies and the natural environment in applied settings over time.
Article
The promise of Model Based Systems Engineering (MBSE) includes the ability to detect potential errors earlier and more accurately. This paper examines whether modern Digital Engineering (DE) techniques could have averted engineering disasters of the past had they been employed at the time. Three case studies are presented: Apollo XIII, Therac-25, and a modern surface naval system. For each, the nature of the system and the error are discussed, and an abbreviated architectural model is presented, using a style designed for a semi-automated model syntax analysis technique called validation. A validation suite was executed against the example model to determine if the defect which caused the failure was detected. Practitioners of systems engineering will benefit from these technical examples in leveraging MBSE for early development defect reduction. Conclusions about which types of defects are detectable using modern MBSE techniques are presented, with recommendations for future research.
Article
When a website is not accessible for users with disabilities, there is a cost imposed on that user. This study quantifies the time disparity experienced by blind users when they encounter web accessibility barriers. The resulting data illustrates a significant reason why web accessibility is imperative, by highlighting and measuring the cost of inaccessibility through the lens of the inequity that accessibility barriers place on the time of users with disabilities in both personal and business settings. This data is needed for policymakers who are creating new regulations or statutes, as well as for informing future technical standards for web accessibility.
Article
Full-text available
A maximalist, interconnected set of experiences straight out of sci-fi, based on 3D virtual environments accessed through personal computing and augmented reality headsets, a world known as the Metaverse: this is the futuristic vision of the internet that technology giants are investing in. There has been some research on data privacy risks in the metaverse; however, detailed research on the cybersecurity risks of virtual reality platforms like the metaverse has not been performed. This research paper addresses this gap in understanding the various possible cybersecurity risks on metaverse platforms. The study examines the risks associated with the metaverse by describing the technologies supporting the metaverse platform and the inherent cybersecurity threats in each of these technologies. Further, the paper proposes a cybersecurity risk governance regulatory framework to mitigate these risks.
Chapter
SecTutor is a tutoring system that uses adaptive testing to select instructional modules that allow users to pursue secure programming knowledge at their own pace. This project aims to combat one of the most significant cybersecurity challenges we have today: individuals’ failure to practice defensive, secure, and robust programming. To alleviate this, we introduce SecTutor, an adaptive online tutoring system, to help developers understand the foundational concepts behind secure programming. SecTutor allows learners to pursue knowledge at their own pace and according to their own interests, based on assessments that identify and structure educational modules based on their current level of understanding.
Chapter
Firmware verification for small and medium industries is a challenging task; as a matter of fact, they generally do not have personnel dedicated to such activity. In this context, verification is executed very late in the design flow, and it is usually carried out by the same engineers involved in coding and testing. The specifications initially discussed with the customers are generally not formalised, leading to ambiguity in the expected functionalities. The adoption of a more formal design flow would require recruiting people with expertise in formal and semi-formal verification, which is often not compatible with the budget resources of small and medium industries. The alternative is helping the existing engineers with tools and methodologies they can easily adopt without being experts in formal methods.
Article
Full-text available
This new era brings new promises of technology that will deliver economic and societal benefits. Artificial Intelligence is to be the disruptor for work and even military technological applications. However, developers and end-users will play key roles in how this technology is developed and ultimately used. Among these two groups, there are cybersecurity concerns that need to be considered. In this paper, the researchers address the process of secure development and testing. Also, for the end-user, appropriate methods, procedures, and recommendations are defined that can mitigate risks in the overall use of this technology within an enterprise.
Article
Trust negotiation is a type of trust management model for establishing trust between entities by a mutual exchange of credentials. This approach was designed for online environments, where the attributes of users, such as skills, habits, behaviour and experience are unknown. Required criteria of trust negotiation must be supported by a trust negotiation model in order to provide a functional, adequately robust and efficient application. Such criteria were identified previously. In this paper we are presenting a model specification using a UML-based notation for the design of trust negotiation. This specification will become a part of the Software Development Life Cycle, which will provide developers a strong tool for incorporating trust and trust-related issues into the software they create. The specification defines components and their layout for the provision of the essential functionality of trust negotiation on one side as well as optional, additional features on the other side. The extra features make trust negotiation more robust, applicable for more scenarios and may provide a privacy protection functionality.
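The mutual, policy-gated credential exchange described above can be sketched as a toy negotiation loop (all names and the policy encoding are illustrative assumptions, not the paper's UML-based specification):

```python
from dataclasses import dataclass, field

@dataclass
class Party:
    name: str
    credentials: set[str]   # credentials this party holds
    requires: set[str]      # credentials it must see to trust the peer
    # release_policy maps a credential to the peer credentials that must
    # already have been disclosed before it may be released.
    release_policy: dict[str, set[str]] = field(default_factory=dict)

    def releasable(self, seen_from_peer: set[str]) -> set[str]:
        """Credentials whose release policy is satisfied by what the peer showed."""
        return {c for c in self.credentials
                if self.release_policy.get(c, set()) <= seen_from_peer}

def negotiate(a: Party, b: Party, max_rounds: int = 10) -> bool:
    """Alternate disclosures until both parties' requirements are met,
    or no further disclosure is possible (negotiation fails)."""
    disclosed = {a.name: set(), b.name: set()}
    for _ in range(max_rounds):
        progress = False
        for us, them in ((a, b), (b, a)):
            new = us.releasable(disclosed[them.name]) - disclosed[us.name]
            if new:
                disclosed[us.name] |= new
                progress = True
        if a.requires <= disclosed[b.name] and b.requires <= disclosed[a.name]:
            return True
        if not progress:
            return False
    return False

client = Party("client", {"student_id"}, {"univ_cert"})
server = Party("server", {"univ_cert"}, {"student_id"},
               release_policy={"univ_cert": {"student_id"}})
print(negotiate(client, server))  # prints True: server's policy is satisfied
```

When both sides gate every credential on the other moving first, the loop detects the lack of progress and fails cleanly, which is one of the robustness criteria a full trust negotiation model must handle.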
Chapter
Industry 4.0 will impact the systems engineering landscape and cybersecurity in the future. The education needs of system engineers working in these environments will change as the system landscape adapts to Industry 4.0. This research aims to explore the impact of Industry 4.0 on systems engineering and the security requirements which must be catered for in this changing landscape. Although it is not yet certain how the landscape will change, this research starts to explore what the potential education needs could be for system engineers to understand all future cybersecurity requirements. The results of this research indicate that security requirements engineering will be needed in the first requirements stage of the systems development life cycle. Secondly, a new set of expert engineering skills will be required to identify future threats and vulnerabilities which could impact the system landscape. These results can be used as a guideline to start thinking about how system engineers should be educated for the future.
Article
This paper considers capabilities and benefits of aircraft-sized radio/radar frequency anechoic chambers for Test and Evaluation (T&E) of Electronic Warfare (EW), radar and other electromagnetics aspects of air and ground platforms. There are few such chambers worldwide. Initially developed to reduce costs, timescales and risks associated with open-air range flight testing of EW systems, their utility has expanded to most areas of platforms' electromagnetics T&E. A key feature is the ability to conduct T&E of nationally sensitive equipment and systems, fully installed on platforms, in absolute privacy. Chambers' capabilities and uses are described, with emphasis on key infrastructure and instrumentation. Non-EW uses are identified and selected topics elaborated. Operation and maintenance are discussed, based on experiential knowledge from international use and the authors' 30 years' involvement with BAE Systems' EW Test Facility. A view is provided of trends and challenges whose resolution could further increase chamber utility. National affordability challenges also suggest expanding chamber utility to support the continuing move from expensive and difficult-to-repeat flight test and operational evaluation trials towards an affordability-driven optimal balance between modelling and simulation and real-world testing of platforms.
Article
IEEE Secure Development Conference (SecDev) is a venue for presenting ideas, research, and experience about how to develop secure systems. Please visit secdev.ieee.org for more information about the upcoming conference in September.
Thesis
This thesis examined three core themes: the role of education in cyber security, the role of technology in cyber security, and the role of policy in cyber security, across the papers published. The associated works are published in refereed journals, peer-reviewed book chapters, and conference proceedings. Research can be found in the following outlets: 1. Security Solutions for Hyperconnectivity and the Internet of Things, 2. Developing Next-Generation Countermeasures for Homeland Security Threat Prevention, 3. New Threats and Countermeasures in Digital Crime and Cyber Terrorism, 4. International Journal of Business Continuity and Risk Management, 5. Handbook of Research on 3-D Virtual Environments and Hypermedia for Ubiquitous Learning, 6. Information Security in Diverse Computing Environments, 7. Technology, Innovation, and Enterprise Transformation, 8. Journal of Information Systems Technology and Planning, 9. Encyclopedia of Information Science and Technology. The shortcomings and gaps in cyber security research lie in the research focus on hyperconnectivity of people and technology, including the policies that provide the standards for security-hardened systems. Prior research on cyber and homeland security reviewed the three core themes separately rather than jointly. This study examined the research gaps within cyber security as they relate to the core themes, in an effort to develop stronger policies, education programs, and hardened technologies for cyber security use. This work illustrates how cyber security can be broken into these three core areas and how they can be used together to address issues such as developing training environments for teaching real cyber security events. It further shows the correlations between technologies and policies for system Certification & Accreditation (C&A). Finally, it offers insights on how cyber security can be used to maintain international and national security.
The overall results of my study provide guidance on how to create a ubiquitous learning (u-learning) environment to teach cyber security concepts and how to craft policies that affect secure computing, with effects on national and international security. The overall research has been improving the role of cyber security in education, technology, and policy.
Secure software development: The role of IT audit
  • O Aras
  • C Barbara
  • L Jeffrey
Aras, O., Barbara, C., & Jeffrey, L. (2008). Secure software development: The role of IT audit. Information Systems Control Journal, 4.
Software assurance best practices for air force weapon and information technology systems – are we bleeding?
  • R Maxon
Maxon, R. (2008). Software assurance best practices for air force weapon and information technology systems – are we bleeding?. Published manuscript, Department of Systems and Engineering Management, Air Force Institute of Technology, Wright-Patterson Air Force Base, OH. Retrieved from http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA480286&Location=U2&doc=GetTRDoc.pdf
The art of software security assessment
  • M Dowd
  • J Mcdonald
  • J Schuh
Dowd, M., McDonald, J., & Schuh, J. (2007). The art of software security assessment. Boston, MA: Pearson Education, Inc.
Maxon, R. (2008). Software assurance best practices for air force weapon and information technology systems-are we bleeding?. Published manuscript, Department of Systems and Engineering Management, Air Force Institute of Technology, Wright-Patterson Air Force Base, OH. Retrieved from http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA480286&Location=U2&doc=GetTRDoc.pdf