Figure 4 - uploaded by Mike Ryan
Source publication
While the concepts of verification and validation are commonplace, in practice the terms are often used interchangeably and the context of their use is not made clear, resulting in the meaning of the concepts frequently being misunderstood. This paper addresses the use of the terms verification and validation and identifies their various meanings i...
Contexts in source publication
Context 1
... Engineering is an iterative and recursive process. Requirements development and design occur top-down, on the left side of the SE "V" shown in Figure 4. Systems engineering starts with the concept stage, where stakeholder needs, expectations, and requirements are elicited, documented, and baselined. ...
Similar publications
With the advent of W3C standards such as DID, VCs, and DPKI beyond 2020, the industry has reached a new level where a technological infrastructure overhaul is possible. By employing blockchain and other Decentralized Ledger Technologies, it is believed that we can eliminate the requirement for paper-based verification. Researchers are aware of the...
Citations
... Unfortunately, although the concepts of V&V are commonplace, the two terms are often treated as interchangeable. A qualifier embedded at the beginning of each term distinctly indicates its context, based on V&V and the systems engineering "V" model [3]. Agent-based simulations are cost-effective methods for V&V assessments, although they present challenges as well. ...
Actual challenges with data in physical infrastructure include: 1) the adverse effect of its velocity on access and retrieval, and thus on integration; 2) its value as its intrinsic quality; 3) its extensive volume with a limited variety in terms of systems; and finally, 4) its veracity, as data can be modified to obtain an economic advantage. Physical infrastructure design based on agile project management and minimum viable products provides benefits over the traditional waterfall method. Agile supports an early return on investment that promotes circular re-investment while making the product more adaptable to changing socio-economic environments. However, Agile also presents inherent issues due to its iterative approach. Furthermore, project information requires an efficient record of the aims, requirements, and governance, not only for the investors, owners, or users, but also to retain evidence for future health & safety and other statutory compliance. In order to address these issues, this article presents a Validation and Verification (V&V) model for Data Marketplaces with a hierarchical process: each data V&V stage provides a layer of data abstraction, value-added services, and authenticity based on Artificial Intelligence (AI). In addition, the proposed solution applies Distributed Ledger Technology (DLT) for a decentralised approach in which each user keeps and maintains the data within a ledger. The presented model is validated in real Data Marketplace applications: 1) live data for the Newcastle urban observatory smart city project, where data is collected via APIs from sensors embedded within the smart city; and 2) static data for the University College London (UCL) – Real Estate – PEARL project, where different project users and stakeholders introduce data into a Project Information Model (PIM).
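The ledger-backed, hierarchical V&V process described in this abstract can be pictured as an append-only hash chain in which each V&V stage records its result and references the previous entry. The following Python sketch is purely illustrative, assuming a simple SHA-256 hash chain; the Ledger class, stage names, and sensor payload are invented for the example and are not the cited paper's implementation.

```python
import hashlib
import json
import time

def entry_hash(entry: dict) -> str:
    """Deterministic SHA-256 over a canonical JSON encoding."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class Ledger:
    """Append-only, hash-chained record of V&V stage results (hypothetical)."""
    def __init__(self):
        self.entries = []

    def append(self, stage: str, payload: dict) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"stage": stage, "payload": payload,
                 "timestamp": time.time(), "prev_hash": prev}
        entry["hash"] = entry_hash(entry)
        self.entries.append(entry)
        return entry

    def verify_chain(self) -> bool:
        """Recompute hashes to detect tampering with earlier stages."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev or entry_hash(body) != e["hash"]:
                return False
            prev = e["hash"]
        return True

# Hierarchical V&V: each stage abstracts the data further and records its result.
ledger = Ledger()
raw = {"sensor": "air_quality_01", "pm25": 12.4}
ledger.append("ingest_verification", {"schema_ok": True, "source": raw["sensor"]})
ledger.append("quality_validation", {"range_ok": 0 <= raw["pm25"] <= 500})
ledger.append("value_added", {"aqi_band": "good"})
assert ledger.verify_chain()
```

Because each entry's hash covers the previous entry's hash, altering any earlier stage invalidates every later entry, which is the authenticity property a decentralised ledger contributes here.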
... The models used have to be validated and verified (Ryan & Wheatcraft, 2017). Hence, the Translator has to provide strategies to show that the model is fit for purpose (verification) and produces reliable results (validation). ...
The EMMC Translators Guide provides industrial users (Clients) with a vision of how to benefit from a systematic materials modelling translation process, which covers translating an industrial need or challenge into a solution by means of materials modelling and simulation tools. The experts who perform this process of providing a Translation service are called Translators in Materials Modelling. They often act as a team and offer assistance and consulting to companies. Translators can be academics, software owners, independent consultants, modellers, or code developers with the relevant expertise, and may even be employees of the Client company.
The EMMC Translation concept for materials modelling was collaboratively developed by engaged European stakeholders from industry and academia in a bottom-up approach facilitated by the European Union and the EMMC within the EMMC-CSA project. The aim of the Translators Guide is to provide Translators with an orientation basis that they may follow in an agile and personalised way, to facilitate and safeguard a successful, efficient, and mutually agreed workflow (course of action) in an industrially oriented modelling project.
In the current contribution, we aim to further contextualise the Translators Guide with Translation scenarios that have evolved since the EMMC-CSA project ended in 2019. The interdisciplinary team of authors gives an outlook focussing on tools under development, opportunities upon maturing (learning by doing), and challenges from diversification that we expect to manifest in the 2020s.
... The generic system life cycle may be illustrated as shown in Figure 4 (SEBoK, 2016). According to Forsberg & Mooz (1991), "the technical aspect of the project cycle is envisioned as a "Vee", starting with User needs on the upper left and ending with a User-validated system on the upper right." Ryan & Wheatcraft (2017) schematize the Vee-model as illustrated in Figure 5. ...
... "Design-it-now-fix-it-later" can be very expensive , while more advanced is the progress of the project, the cost of design modification or alteration increases; it is calculated that 85% of the rework costs are originated by mistakes in the definition of system requirements Szejka et al., 2014;De Weck & Willcox, 2010) Figure 5 Engineering Vee-model (Ryan & Wheatcraft, 2017) In Figure 6 it can be seen that at early stages of the system life cycle is less expensive and easier to change the requirements, compared to later stages, where the cost is higher as well the difficulty of change. Figure 6 Life Cycle commitment, system-specific knowledge, and cost. ...
... Localization of the Stakeholder Needs and Requirements Definition process (adapted from Ryan & Wheatcraft, 2017). According to several reference documents, such as Faisandier (2012), Ryan et al. (2015), or ISO 15288 (2015), to name a few, the objective of this process is to elicit clearly and concisely a set of stakeholder needs and expectations, and to transform them into stakeholder requirements whose realization is verifiable in operation. Figure 38 shows how real needs are perceived and expressed by the stakeholders, to finally become stakeholder requirements. ...
Companies are competing to put their products on the market. In this race, knowledge of the quality characteristics that end users require of the product is sometimes presupposed or misunderstood. The result is often a product that does not achieve the purpose for which it was designed and manufactured. In this context, is it possible to guide the development process methodologically in order to ensure the quality of a product? In Systems Engineering, it is at the Concept stage of the system life cycle that stakeholder needs are collected, translated first into stakeholder requirements and then into system requirements. This thesis therefore addresses these steps as a priority. It proposes a methodology to ensure that stakeholder needs are well understood and properly translated into system requirements. The proposal complies with the ISO 15288 (2015) quality standard and incorporates Lean principles. The thesis also proposes a tool that supports the methodology. The results obtained from several case studies developed at the Tecnológico Nacional de México, Instituto Tecnológico de Toluca (ITTol), Mexico, demonstrate the effectiveness of the proposed methodology. Its use increases the likelihood that the delivered product will meet stakeholder expectations, reduces requirement changes due to misidentification of needs and, therefore, the costs incurred by these changes, and ensures faster delivery of the product to the market.
... Tightly related to and complemented by the software verification process, the two together address "all software life cycle processes including acquisition, supply, development, operation and maintenance", as defined in the IEEE Standard for Software Verification and Validation (V&V) [12]. V&V are commonplace concepts in the software engineering literature, but the terms are often used interchangeably in practice [13]. Indeed, the two processes serve different purposes: verification is linked to the early stages of the software development life cycle, focusing on building the software correctly, while validation is commonly placed at the end of the development process, providing "evidence that the software and its associated products satisfy system requirements allocated to software at the end of each life cycle, solve the right problem, and satisfy intended use and user needs". ...
Supporting e-Science in the EGI e-Infrastructure requires extensive and reliable software for advanced computing, deployed across approximately 300 European and worldwide data centers. The Unified Middleware Distribution (UMD) and Cloud Middleware Distribution (CMD) are the channels through which software is delivered for consumption by the EGI e-Infrastructure. The software is compiled, validated, and distributed following the Software Provisioning Process (SWPP), in which the Quality Criteria (QC) definition sets the minimum quality requirements for EGI acceptance. The growing number of software components within the UMD and CMD distributions hinders the application of traditional, manual validation mechanisms, thus driving the adoption of automated solutions. This paper presents umd-verification, an open-source tool that enforces the fulfillment of the QC requirements in an automated way for the continuous validation of software products for scientific use. The umd-verification tool has been successfully integrated within the SWPP pipeline and is progressively supporting the full validation of the products in the UMD and CMD repositories. While the cost of supporting new products depends on the availability of Infrastructure-as-Code solutions to take over the deployment and on high test coverage, the results obtained for the already integrated products are promising, as the time invested in the validation of products has been drastically reduced. Furthermore, the adoption of automation has brought benefits for the reliability of the process, such as the removal of human-associated errors and of the risk of regression of previously tested functionalities.
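The verification/validation distinction quoted above maps naturally onto two kinds of automated checks: verification tests that the software matches its specification (built right), and validation tests that its behaviour satisfies the intended use (the right product). The minimal Python sketch below illustrates the split with a hypothetical percentile function; it is not drawn from umd-verification or the SWPP.

```python
import math

def percentile(values, p):
    """Hypothetical product function: p-th percentile by the nearest-rank rule."""
    ranked = sorted(values)
    rank = max(1, math.ceil(p / 100 * len(ranked)))
    return ranked[rank - 1]

def test_verification_nearest_rank_spec():
    # Verification: did we build the product right (matches its specification)?
    # Spec: nearest-rank percentile of [15, 20, 35, 40, 50] at p=30 is 20.
    assert percentile([15, 20, 35, 40, 50], 30) == 20

def test_validation_user_need():
    # Validation: did we build the right product (satisfies the intended use)?
    # User need: the reported 95th-percentile latency must not fall below
    # all but the single worst observation.
    latencies = [12, 14, 15, 16, 120]
    assert percentile(latencies, 95) >= sorted(latencies)[-2]

if __name__ == "__main__":
    test_verification_nearest_rank_spec()
    test_validation_user_need()
    print("verification and validation checks passed")
```

A tool like the one described can then gate acceptance on both families of checks passing, which is what moving from manual to automated validation buys in practice.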
Application-specific reasoning mechanisms (ASRMs) development is a rapidly growing domain of systems engineering. A demonstrative implementation of an active recommender system (ARS) was realized to support the design of ASRMs and to circumvent procedural obstacles by providing context-sensitive recommendations. The specific problem for the research presented in this paper was the development of a synthetic validation agent (SVA) to simulate the decisional behaviour of designers and to generate data about the usefulness of the recommendations. The need for the SVA was raised by the pandemic, which prevented involving groups of human designers in the recommendation-testing process. The reported research had three practical goals: (i) development of the logical fundamentals of the SVA, (ii) computational implementation of the SVA, and (iii) application of the SVA in data generation for the evaluation of the usefulness of recommendations. The SVA is based on a probabilistic decisional model that quantifies decisional options according to the assumed decisional tendencies. The three key concepts underlying the SVA are (i) decisional logic, (ii) decisional knowledge, and (iii) decisional probability. Together, these enable the generation of reliable data about the decisional behaviours of human designers concerning the obtained recommendations. The completed tests proved the above assumption.
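The probabilistic decisional model underlying the SVA, which quantifies decisional options according to assumed decisional tendencies, can be sketched as sampling from a tendency-weighted categorical distribution. The Python below is a hypothetical illustration of that idea; the option names and weights are invented, not taken from the paper.

```python
import random

# Assumed decisional tendencies of a simulated designer (weights are invented).
TENDENCIES = {
    "accept_recommendation": 0.6,
    "modify_recommendation": 0.3,
    "reject_recommendation": 0.1,
}

def decide(rng: random.Random) -> str:
    """Sample one decisional option in proportion to the assumed tendencies."""
    options = list(TENDENCIES)
    return rng.choices(options, weights=[TENDENCIES[o] for o in options], k=1)[0]

def generate_decisions(n: int, seed: int = 42) -> dict:
    """Generate synthetic decision data, standing in for human designers."""
    rng = random.Random(seed)
    counts = {option: 0 for option in TENDENCIES}
    for _ in range(n):
        counts[decide(rng)] += 1
    return counts

# Over many simulated sessions the counts approximate the 60/30/10 tendencies.
print(generate_decisions(1000))
```

Varying the tendency weights per simulated designer profile is one straightforward way such an agent could generate the spread of decisional behaviours that recommendation-usefulness evaluation needs.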
Tech-leading organizations are embracing the forthcoming artificial intelligence revolution. Intelligent systems are replacing and cooperating with traditional software components, so artificial intelligence systems ought to comply with the same development processes and standards as software engineering. This study aims to understand the processes by which artificial intelligence-based systems are developed and how state-of-the-art lifecycle models fit the current needs of the industry. We conducted an exploratory case study at ING, a global bank with a strong European base. We interviewed 17 people with different roles and from different departments within the organization. We found that the following stages have been overlooked by previous lifecycle models: data collection, feasibility study, documentation, model monitoring, and model risk assessment. Our work shows that the real challenges of applying Machine Learning go far beyond sophisticated learning algorithms: more focus is needed on the entire lifecycle. In particular, we observe that existing development tools for Machine Learning still do not meet the particularities of this field.
Requirements are the language we use to communicate stakeholder needs for a system of interest to developers, designers, builders, coders, testers, and other stakeholders. Increasingly, there is debate about which means (form and media) of communication is best for communicating requirements and sets of requirements. As part of this debate, one means of communication (such as diagrams, use cases, or text-based requirements) is often advocated, frequently with great passion, over the others. Which side is taken in the debate depends on the specific idea or concept being communicated, together with the domain, culture, people, and processes used within the specific organization. Consequently, there is no single “best” means of communication that applies to all the various types of requirements. Each means of communication has its strengths and weaknesses depending on which message is being communicated, who the sender is, who the receiver is, the needs of the receiver, and the filters used by both the sender to encode the message and the receiver who must decode the message. This paper goes back to the basics of communication theory to address this debate and concludes that, while each means of communication has value, no single means is sufficient to clearly, completely, and consistently communicate all of the various types and categories of requirements. Each of the means advocated represents a visualization (graphic or text) of the system from a perspective based on the intent of the message being communicated. With this viewpoint, rather than focusing on a single type of visualization, a set of visualizations is needed to effectively communicate requirements. This set of visualizations needs to be based on an underlying, integrated data and information model of the system of interest.
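The closing point, that the various requirement visualizations should be generated from one underlying, integrated data and information model, can be illustrated with a short sketch: a single requirement record rendered both as "shall" text for a specification and as a node in a traceability diagram. The data model below is hypothetical, invented for illustration rather than drawn from the paper.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Requirement:
    """One record in an integrated data/information model of the system."""
    req_id: str
    actor: str
    action: str
    constraint: str
    parent_id: Optional[str] = None

def as_shall_text(r: Requirement) -> str:
    """Text visualization: a 'shall' statement for a specification document."""
    return f"[{r.req_id}] The {r.actor} shall {r.action} {r.constraint}."

def as_dot_fragment(r: Requirement) -> str:
    """Diagram visualization: a Graphviz DOT fragment for a traceability graph."""
    node = f'"{r.req_id}" [label="{r.req_id}"];'
    edge = f' "{r.parent_id}" -> "{r.req_id}";' if r.parent_id else ""
    return node + edge

# The same underlying record drives both visualizations.
req = Requirement("SYS-042", "system", "report its measured position", "within 100 ms", "SYS-001")
print(as_shall_text(req))
print(as_dot_fragment(req))
```

Because both renderers read the same record, the text and the diagram cannot drift apart, which is the consistency argument the paper makes for a single integrated model behind the set of visualizations.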