Chapter

Software Testing Approach for Digital Twin Verification and Validation


Abstract

The increasing use of Digital Twin (DT) solutions in different domains demands the development of Verification and Validation (V&V) frameworks to guarantee the effectiveness of the implemented DTs. However, a considerable research gap has been identified in this field. The current state of research concentrates mainly on the V&V of models in DTs and excludes important aspects such as data interoperability and the functionality of DT services. To extend the scope of V&V, it is crucial to include these aspects. This paper presents a novel framework for the V&V of DTs that considers all the mentioned aspects. The framework combines formal methods with software testing methods: it applies formal methods in a top-down manner and then software testing methods in a bottom-up manner.


Article
Full-text available
Digital twins are digital representations of real-world entities constantly fed by dynamic, bidirectional communication and updates throughout the lifecycle of these sophisticated paired systems. Developing digital twins in production engineering creates new avenues for engineering design in the context of digital transformation in manufacturing and sociotechnical systems. This paper reviews the foundational concepts, methodologies, and applications of digital twins in engineering design, covering both their architecture and development (engineering of digital twins), and their utilisation to enhance design activities (engineering with digital twins). An overview of the current state-of-the-art is presented, challenges are highlighted, and future research directions are addressed.
Article
Full-text available
As digitalization is permeating all sectors of society toward the concept of “smart everything,” and virtual technologies and data are gaining a dominant place in the engineering and control of intelligent systems, the Digital Twin (DT) concept has surfaced as one of the top technologies to adopt. This paper discusses the DT concept from the viewpoint of Modeling and Simulation (M&S) experts. It both provides literature review elements and adopts a commentary-driven approach. We first examine the DT from a historical perspective, tracing the historical development of M&S from its roots in computational experiments to its applications in various fields and the birth of DT-related and allied concepts. We then approach DTs as an evolution of M&S, acknowledging the overlap in these different concepts. We also look at the M&S workflow and its evolution toward a DT workflow from a software engineering perspective, highlighting significant changes. Finally, we look at new challenges and requirements DTs entail, potentially leading to a revolutionary shift in M&S practices. In this way, we hope to foster the discussion on DTs and provide the M&S expert with innovative perspectives.
Article
Full-text available
Digital twins are considered among the most important technologies to optimize production systems and support decision making. To benefit from their functionalities, it is essential to guarantee a correct alignment between the physical system and the associated digital model, as well as to assess the validity of the digital model online. This operation should be conducted rapidly and with a small data set, especially in highly dynamic contexts. Further, the whole behaviour of a system may be of interest rather than the sole average performance. Traditional validation techniques are limited because of the restrictive assumptions and the need for large amounts of data. This work defines the problem of checking the validity of digital twins for production planning and control while the physical system is operating. A methodology describing the data and the types of validation is proposed including a set of techniques to be used at different levels of detail. The congruence between the physical system and the corresponding digital model is measured by treating data as sequences and measuring their similarity level with digitally-produced data by exploiting a proper comparison technique. The numerical experiments show the potential of the proposed approach and its applicability in realistic settings.
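The sequence-comparison idea above can be sketched in a few lines; this is an illustrative check only, not the authors' actual comparison technique, and the mean-absolute-error metric and `tolerance` threshold are assumptions:

```python
def sequence_error(physical, digital):
    """Mean absolute error between two equally long output sequences
    (e.g. throughput samples from the plant and from its digital model)."""
    if len(physical) != len(digital):
        raise ValueError("sequences must have equal length")
    return sum(abs(p - d) for p, d in zip(physical, digital)) / len(physical)

def is_valid(physical, digital, tolerance=0.05):
    """Declare the digital model valid while the plant is operating if the
    real-virtual discrepancy stays below a chosen tolerance."""
    return sequence_error(physical, digital) <= tolerance
```

In this spirit, the check can run online on a small sliding window of recent observations rather than waiting for a large offline data set.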
Article
Full-text available
Digital Twins (DTs) have been gaining popularity in various applications, such as smart manufacturing, smart energy, smart mobility, and smart healthcare. In simple terms, DT is described as a virtual replica of a given physical product, system, or process. It consists of three major segments: the physical entity, its virtual counterpart, and the connections between them. While the data is collected from a physical entity, processed at the virtual layer, and accessed in the form of a DT at the application layer, it is exposed to several security risks. To ensure the applicability of a DT system, it is imperative to understand these security risks and their implications. However, there is a lack of a framework that can be used to assess the security of a DT. This paper presents a framework in which the security of a DT can be analyzed with the help of a formal verification technique. The framework captures the defense of the system at different layers and considers various attacks at each layer. The security of the DT system is represented as a state-transition system and the security properties are captured in temporal logic. Probabilistic model checking (PMC) is used to verify the systems against these properties. In particular, the framework is used to analyze the probability of success and the cost of various potential attacks that can occur at each layer in a DT system. The applicability of the proposed framework is demonstrated with the help of a detailed case study in the healthcare domain.
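What a probabilistic model checker computes for a bounded-reachability property such as "probability that an attack compromises the DT within k steps" can be illustrated with a toy discrete-time Markov chain. Real analyses use PMC tools such as PRISM; the states and probabilities below are invented for illustration:

```python
def bounded_reach_prob(transitions, start, target, k):
    """Probability of reaching `target` from `start` within k steps in a
    discrete-time Markov chain given as {state: {next_state: prob}}.
    The target is treated as absorbing while the distribution is stepped."""
    dist = {start: 1.0}
    for _ in range(k):
        nxt = {}
        for state, mass in dist.items():
            if state == target:  # absorbing: mass stays once the target is hit
                nxt[target] = nxt.get(target, 0.0) + mass
                continue
            for succ, p in transitions[state].items():
                nxt[succ] = nxt.get(succ, 0.0) + mass * p
        dist = nxt
    return dist.get(target, 0.0)

# Toy attack model: the attacker probes the connection layer, sometimes
# breaches it, and from there may compromise the virtual twin.
chain = {
    "secure":   {"secure": 0.7, "breached": 0.3},
    "breached": {"secure": 0.5, "compromised": 0.5},
}
print(bounded_reach_prob(chain, "secure", "compromised", 2))  # prints 0.15
```

A model checker additionally handles cost/reward annotations (for attack cost) and arbitrary temporal-logic properties, which this sketch does not attempt.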
Article
Full-text available
Aiming at the difficulties of modelling, simulation and verification in the digital twin workshop, a modelling and online training method for the digital twin workshop is proposed. This paper describes a multi-level digital twin aggregate modelling method, covering the status attributes, the static performance attributes and the fluctuation performance attributes, and designs a digital twin organisation system, namely the digital twin graph. According to the data demand of digital twin aggregates, a spatio-temporal data model is constructed. A digital twin model training method using the truncated normal distribution is presented. Furthermore, a verification method based on the real-virtual error of a digital twin model is proposed. The effectiveness of real-time status monitoring, online model training and simulation for production is verified by a case study.
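Two ingredients mentioned above can be sketched briefly: drawing model parameters from a truncated normal distribution, and scoring the real-virtual error. The rejection-sampling routine and the relative-error metric are illustrative assumptions, not the paper's exact method:

```python
import random

def truncated_normal(mu, sigma, low, high, rng=random):
    """Draw one sample from normal(mu, sigma) truncated to [low, high] by
    rejection sampling (adequate when [low, high] is not far into a tail)."""
    while True:
        x = rng.gauss(mu, sigma)
        if low <= x <= high:
            return x

def real_virtual_error(real, virtual):
    """Relative error between a measured value and the twin's prediction."""
    return abs(real - virtual) / abs(real)
```

During online training, a fluctuation attribute could be resampled until the resulting real-virtual error falls below a tolerance, which is the kind of loop the verification method would close.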
Conference Paper
Full-text available
This paper presents the conceptualisation of a framework that combines digital twins with runtime verification and applies the techniques in the context of security monitoring and verification for satellites. We focus on special considerations needed for space missions and satellites, and we discuss how digital twins in such applications can be developed and how the states of the twins should be synchronised. In particular, we present state synchronisation methods to ensure secure and efficient long-distance communication between the satellite and its digital twin on the ground. Building on top of this, we develop a runtime verification engine for the digital twin that can verify properties in multiple temporal logic languages. We end the paper with our proposal to develop a fully verified satellite digital twin system as future work.
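A runtime verification engine for a bounded-response property ("every command is acknowledged within n steps") can be sketched as a simple trace monitor; the event names and the finite-trace semantics used here are illustrative assumptions, not the paper's engine:

```python
def monitor_bounded_response(trace, trigger, response, bound):
    """Check G(trigger -> F<=bound response) over a finite event trace.
    Returns (verdict, index of the first violating trigger or None).
    A trigger too close to the end of the trace counts as a violation."""
    for i, event in enumerate(trace):
        if event == trigger:
            window = trace[i + 1 : i + 1 + bound]
            if response not in window:
                return False, i
    return True, None
```

For a satellite link, `trace` would be the synchronised event stream arriving at the ground-side twin, and the monitor would run incrementally rather than over a completed trace.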
Article
Full-text available
Digital twin technology can be considered as an innovation accelerator. By providing a live copy of physical systems, digital twins bring to the table numerous advantages such as accelerated business processes, enhanced productivity, and faster innovation with reduced costs. For these numerous advantages digital twin is an ideal solution for several problems in domains such as Industry 4.0, education, healthcare and smart cities. However, to make sure the digital twin contributes effectively to these domains by representing a synchronized real-time copy of the physical system, the network connecting the physical and digital twins should fulfill a set of requirements such as low latency of real-time communication, data security and quality. This paper provides an overview on the technology of digital twin and its application domains with a detailed discussion on its networking requirements and proposed enabling technologies to fulfill them.
Article
Full-text available
Many countries and governments consider smart cities as solutions to global warming, population growth and resource depletion. Numerous challenges arise on the way to a smart city. Digital twins, along with the Internet of Things (IoT), fifth-generation wireless systems (5G), blockchain, collaborative computing, simulation and artificial intelligence (AI) technologies, offer great potential in transforming the current urban governance paradigm toward the smart city. In this paper, the concept of the digital twin city (DTC) is proposed. The characteristics, key technologies and application scenarios of a digital twin city are elaborated. Further, we discuss the theories, research directions and framework toward the digital twin city.
Conference Paper
Full-text available
The Environmental Control System (ECS) of an aircraft is responsible for regulating and conditioning the airflow into the cockpit, cabin and avionics bay. The ECS is composed of several complex sub-systems and components that are reported as key unscheduled maintenance drivers for legacy aircraft by aircraft operators. Furthermore, the incorporated temperature and flow control valves in these sub-systems have the capability to mask potential faults at the component level, making the diagnostic process very challenging. To overcome this challenge, the aviation industry is currently proactively exploring the predictive maintenance approach that allows real-time monitoring of the key systems, sub-systems and components. In the context of the ECS, this necessitates the requirement to equip the system with appropriate condition monitoring capabilities. To do this, the performance characteristics of the ECS at sub-system and component level need to be well understood under a wide range of aircraft operating scenarios. Existing literature provides component-level and system-level analyses of the ECS. However, it lacks an experimentally verified and validated ECS sub-system and component-level simulation tool (ECS Digital Twin), capable of simulating the thermodynamic performance and component health state parameters under wide-ranging aircraft operational scenarios. The ECS Digital Twin (DT) developed by the Cranfield University IVHM Centre offers the capability to simulate healthy and faulty cases of the Passenger Air Conditioner (PACK). This paper proposes a methodology for full-scale experimental Verification & Validation (V&V) of the developed ECS DT, to enable component-level simulation, and accurate diagnostics, of the civil aircraft ECS. The paper reports on progress to date in this project.
Article
Full-text available
When, in 1956, Artificial Intelligence (AI) was officially declared a research field, no one would have ever predicted the huge influence and impact its description, prediction, and prescription capabilities were going to have on our daily lives. In parallel to continuous advances in AI, the past decade has seen the spread of broadband and ubiquitous connectivity, (embedded) sensors collecting descriptive high-dimensional data, and improvements in big data processing techniques and cloud computing. The joint usage of such technologies has led to the creation of digital twins, artificially intelligent virtual replicas of physical systems. Digital Twin (DT) technology is nowadays being developed and commercialized to optimize several manufacturing and aviation processes, while in the healthcare and medicine fields this technology is still at its early development stage. This paper presents the results of a study focused on the analysis of the state-of-the-art definitions of DT, the investigation of the main characteristics that a DT should possess, and the exploration of the domains in which DT applications are currently being developed. The design implications derived from the study are then presented: they focus on socio-technical design aspects and the DT lifecycle. Open issues and challenges that need to be addressed in the future are finally discussed.
Book
Full-text available
Theory of Modeling and Simulation: Discrete Event & Iterative System Computational Foundations, Third Edition, continues the legacy of this authoritative and complete theoretical work. It is ideal for graduate and PhD students and working engineers interested in posing and solving problems using the tools of logico-mathematical modeling and computer simulation. Continuing its emphasis on the integration of discrete event and continuous modeling approaches, the work shines light on DEVS and its potential to support the co-existence and interoperation of multiple formalisms in model components. New sections in this updated edition include discussions of important new extensions to the theory, including chapter-length coverage of iterative system specification and DEVS and their fundamental importance, closure under coupling for iteratively specified systems, existence, uniqueness, non-deterministic conditions, and temporal progressiveness (legitimacy).
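The atomic-model interface at the heart of DEVS can be illustrated with a minimal sketch: a fixed-service-time server exposing the classic time-advance, external-transition, output, and internal-transition functions. The class layout is an assumption for illustration, not the book's notation:

```python
class Processor:
    """Minimal atomic-DEVS-style model: a server that stays busy for a
    fixed service time after a job arrives, then emits the finished job."""

    def __init__(self, service_time=2.0):
        self.service_time = service_time
        self.phase, self.job = "idle", None

    def ta(self):
        # Time advance: how long to remain in the current state.
        return self.service_time if self.phase == "busy" else float("inf")

    def delta_ext(self, job):
        # External transition: react to an input event (a job arrival).
        if self.phase == "idle":
            self.phase, self.job = "busy", job

    def output(self):
        # Output function, invoked just before the internal transition.
        return self.job

    def delta_int(self):
        # Internal transition: service completed, return to idle.
        self.phase, self.job = "idle", None
```

A DEVS simulator drives such models by scheduling each one's next internal event at `ta()` and routing outputs between coupled components.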
Thesis
Full-text available
This thesis proposes a methodology which integrates formal methods in the specification, design, verification and validation processes of complex, concurrent and distributed systems with a discrete event perspective. The methodology is based on the graphical language HiLLS (High Level Language for System Specification) that we defined. HiLLS integrates software engineering and system-theoretic views for the specification of systems. Precisely, HiLLS integrates concepts and notations from DEVS (Discrete Event System Specification), UML (Unified Modeling Language) and Object-Z. The objectives of HiLLS include the definition of a highly communicable graphical concrete syntax and multiple semantic domains for simulation, prototyping, enactment and accessibility to formal analysis. Enactment refers to the process of creating an instance of a system executing in real-clock time. HiLLS allows the hierarchical and modular construction of discrete event system models while facilitating the modeling process through a simple and rigorous description of the static, dynamic, structural and functional aspects of the models. Simulation semantics is defined for HiLLS by establishing a semantic mapping between HiLLS and DEVS; in this way each HiLLS model can be simulated by a DEVS simulator. This approach allows DEVS users to use HiLLS as a modeling language in the modeling phase and then use their own stand-alone or distributed DEVS implementation package to simulate the models. An enactment of HiLLS models is defined by adapting the observer design pattern to their implementation. The formal verification of HiLLS models is performed by establishing morphisms between each abstraction level of HiLLS and a formal method suited to verifying the properties at that level. The formal models on which the verification is performed are obtained from HiLLS specifications by using the mapping functions.
The three levels of abstraction of HiLLS are the Composite level, the Unitary level and the Traces level. These levels correspond respectively to the following levels of the system specification hierarchy proposed by Zeigler: CN (Coupled Network), IOS (Input Output System) and IORO (Input Output Relation Observation). We have established morphisms between the Composite level and CSP (Communicating Sequential Processes) and between the Unitary level and Z, and we expect to use temporal logics like LTL, CTL and TCTL to express trace-level properties. HiLLS allows the specification of both static- and dynamic-structure systems. In the case of dynamic-structure systems, the Composite level integrates both state-based and process-based properties. To handle state-based and process-based properties at the same time, a morphism is established between the dynamic Composite level and CSPZ (a combination of CSP and Z). The verification and validation process combines simulation, model checking and theorem proving techniques in a common framework. The model checking and theorem proving of HiLLS models are based on an integrated tooling framework composed of tools supporting the notations of the formal methods selected in the established morphisms.
Article
The multi-analysis modeling of a complex system is the act of building a family of models which covers a large spectrum of analysis methods (such as simulation, formal methods and enactment) that can be performed to derive various properties of this system. The High-Level Language for Systems Specification (HiLLS) has recently been introduced as a graphical language for discrete event simulation, with potential for other types of analysis, like enactment for rapid system prototyping. HiLLS defines an automata language that also opens the way to formal verification. This paper provides the building blocks for such a feature. That way, a unique model can be used not only to perform both simulation and enactment experiments but also to allow the logical analysis of properties without running any experiment. It therefore saves the effort of building three different analysis-specific models and the need to align them semantically.
Conference Paper
A Digital Twin (DT) refers to a digital replica of physical assets, processes and systems. DTs integrate artificial intelligence, machine learning and data analytics to create dynamic digital models that are able to learn and update the status of the physical counterpart from multiple sources. A DT, if equipped with appropriate algorithms, will represent and predict the future condition and performance of its physical counterpart. Current developments related to DTs are still at an early stage with respect to buildings and other infrastructure assets. Most of these developments focus on the architectural and engineering/construction point of view. Less attention has been paid to the operation and maintenance (O&M) phase, where the value potential is immense. A systematic and clear architecture, verified with practical use cases, for constructing a DT is the foremost step for effective operation and maintenance of assets. This paper presents a system architecture for developing dynamic DTs at the building level that integrate heterogeneous data sources, support intelligent data queries, and provide smarter decision-making processes. This will further bridge the gap between humans and buildings/regions via more intelligent, visual and sustainable channels. This architecture is brought to life through the development of a dynamic DT demonstrator of the West Cambridge site of the University of Cambridge. Specifically, this demonstrator integrates an as-is multi-layered IFC Building Information Model (BIM), building management system data, space management data, real-time Internet of Things (IoT)-based sensor data, asset registry data, and an asset tagging platform. The demonstrator also includes two applications: (1) improving asset maintenance and asset tracking using Augmented Reality (AR); and (2) equipment failure prediction. The long-term goals of this demonstrator are also discussed in this paper.
Conference Paper
In this paper, an approach to incorporate a digital twin for legacy production systems is presented. Hardware-in-the-loop setups are routinely used by manufacturing companies to carry out virtual commissioning. However, manufacturing companies having online legacy production systems are still struggling to incorporate a digital twin due to the absence of verified and validated simulation models. Companies that use virtual commissioning as a part of their engineering tool chain, usually perform offline verification of the simulation model. This approach is based solely on visual inspection and is a tedious task as each aspect of the model has to be visually validated. For legacy systems, only assessing the behavior visually in the absence of updated documents can result in an incorrect simulation model, i.e. simulating incorrect behavior with respect to the specification. Due to this, such simulation models cannot be incorporated in the engineering tool chain, as the simulated results can lead to improper decisions and can even cause equipment damage. This paper presents a platform and an approach, based on model-based testing, that is a first step for manufacturing companies to incorporate a validated simulation model for existing online production systems that will serve as a digital twin.
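The model-based-testing idea, checking a simulation model against a specification by driving both with the same input sequence and comparing their outputs, can be sketched as follows. Both toy state machines are invented for illustration and are not the paper's platform:

```python
def spec_model(state, event):
    """Specification automaton for a toy conveyor: (next_state, output)."""
    table = {
        ("stopped", "start"): ("running", "motor_on"),
        ("running", "stop"):  ("stopped", "motor_off"),
    }
    return table.get((state, event), (state, None))

def conformance_test(spec, sim, inputs, init="stopped"):
    """Drive both models with the same inputs; report the index of the
    first output mismatch, or (True, None) if the simulation conforms."""
    s_spec = s_sim = init
    for i, ev in enumerate(inputs):
        s_spec, out_spec = spec(s_spec, ev)
        s_sim, out_sim = sim(s_sim, ev)
        if out_spec != out_sim:
            return False, i
    return True, None
```

In the legacy-system setting described above, the specification side would come from requirements and the simulation side from the virtual-commissioning model, with input sequences generated systematically rather than written by hand.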
Thesis
Model-based systems engineering methodologies such as simulation, formal methods (FM) and enactment have been used extensively in recent decades to study, analyze, and forecast the properties and behaviors of complex systems. The results of these analyses often reveal subtle knowledge that can deepen understanding of an existing system or provide timely feedback into a design process to avert costly (and catastrophic) errors that may arise in the system. Questions about different aspects of a system are usually best answered using specific analysis methodologies; for instance, a system's performance and behavior in some specified experimental frames can be efficiently studied using appropriate simulation methodologies. Similarly, verification of properties such as liveness, safety and fairness is better studied with appropriate formal methods, while enactment methodologies may be used to verify assumptions about some time-based and human-in-the-loop activities and behaviors. Therefore, an exhaustive study of a complex (or even seemingly simple) system often requires the use of different analysis methodologies to produce complementary answers to likely questions. There is no denying that a combination of multiple analysis methodologies offers more powerful capabilities and rigor to test system designs than can be accomplished with any of the methodologies applied alone. While this exercise can provide (near) complete knowledge of complex systems and helps analysts make reliable assumptions and forecasts about their properties, its practical adoption is not commensurate with the theoretical advancements, and the evolving formalisms and algorithms, resulting from decades of research by practitioners of the different methodologies.
This shortfall has been linked to the prerequisite mathematical skills for dealing with most formalisms, compounded by the poor portability of models between tools of different methodologies, which makes it necessary to bear the herculean task of creating and managing several models of the same system in different formalisms. Another contributing factor is that most existing computational analysis environments are dedicated to specific analysis methodologies (i.e., simulation, FM or enactment) and are usually difficult to extend to accommodate other approaches. Thus, one must learn all the formalisms underlying the various methods to create models, and then update all of them whenever certain system variables change. The contribution of this thesis to alleviating the burden on users of multiple analysis methodologies for the exhaustive study of complex systems is a framework that uses Model-Driven Engineering (MDE) technologies to federate simulation, FM and enactment analysis methodologies behind a unified high-level specification language, with support for automated synthesis of the artifacts required by the disparate methodologies. This framework envelops four contributions: i) a methodology that emulates the Model-Driven Architecture (MDA) to propose an independent formalism to integrate the different analysis methodologies; ii) integration of concepts from the three methodologies to provide a common metamodel uniting selected formalisms for simulation, FM and enactment; iii) mapping rules for the automated synthesis of artifacts for simulation, FM and enactment from a common reference model of a system and its requirements; iv) a framework for the enactment of discrete event systems. We use the beverage vending system as a running example throughout the thesis. (...)
Article
We present HiLLS (High Level Language for System Specification), a graphical formalism that allows the specification of Discrete Event System (DES) models for analysis using methodologies like simulation, formal methods and enactment. HiLLS’ syntax is built from the integration of concepts from system theory and software engineering, aided by simple concrete notations to describe the structural and behavioral aspects of DESs. This paper provides the syntax of HiLLS and its simulation semantics, which is based on the Discrete Event System Specification (DEVS) formalism. From a DEVS-based Modeling and Simulation (M&S) perspective, HiLLS is a platform-independent visual language with generic expressions that can serve as a front-end for most existing DEVS-based simulation environments with the aid of Model-Driven Engineering (MDE) techniques. It also suggests ways to fill some gaps in existing DEVS-based visual formalisms that inhibit the complete specification of the behavior of complex DESs. We provide a case study to illustrate the core features of the language.
Article
A new way of portraying the technical aspect of the project cycle clarifies the role and responsibility of system engineering to a project. This new three dimensional graphic illustrates the end-to-end involvement of system engineering in the project cycle, clarifies the relationship of system engineering and design engineering, and encourages the implementation of concurrent engineering.
A Simulation-based approach to digital twin’s interoperability verification & validation
  • M K Traore
  • S Gorecki
  • Y Ducq