Working Paper

Origins of the Digital Twin Concept

Digital Twin Institute
Excerpted based on: Trans-Disciplinary Perspectives on System Complexity All rights reserved
Digital Twin:
Mitigating Unpredictable, Undesirable Emergent
Behavior in Complex Systems (Excerpt)
Dr. Michael Grieves and John Vickers
While the terminology has changed over time, the basic concept of the
Digital Twin model has remained fairly stable from its inception in 2002. It is
based on the idea that a digital informational construct about a physical system
could be created as an entity on its own. This digital information would be a “twin”
of the information that was embedded within the physical system itself and be
linked with that physical system through the entire lifecycle of the system.
The concept of the Digital Twin dates back to a University of Michigan
presentation to industry in 2002 for the formation of a Product Lifecycle
Management (PLM) center. The presentation slide, as shown in Figure 3 and
originated by Dr. Grieves, was simply called “Conceptual Ideal for PLM.”
However, it did have all the elements of the Digital Twin: real space, virtual
space, the link for data flow from real space to virtual space, the link for
information flow from virtual space to real space and virtual sub-spaces.
The premise driving the model was that each system consisted of two
systems, the physical system that has always existed and a new virtual system
that contained all of the information about the physical system. This meant that
there was a mirroring or twinning of systems between what existed in real space
to what existed in virtual space and vice versa.
The PLM or Product Lifecycle Management in the title meant that this was
not a static representation, but that the two systems would be linked throughout
the entire lifecycle of the system. The virtual and real systems would be
connected as the system went through the four phases of creation, production
(manufacture), operation (sustainment/support), and disposal.
This conceptual model was used in the first executive PLM courses at the
University of Michigan in early 2002, where it was referred to as the Mirrored
Spaces Model. It was referenced that way in a 2005 journal article (Grieves
2005). In the seminal PLM book, Product Lifecycle Management: Driving the
Next Generation of Lean Thinking, the conceptual model was referred to as the
Information Mirroring Model (Grieves 2006).
The concept was greatly expanded in Virtually Perfect: Driving Innovative
and Lean Products through Product Lifecycle Management (Grieves 2011),
where the concept was still referred to as the Information Mirroring Model.
However, it is here that the term, Digital Twin, was attached to this concept by
reference to the co-author’s way of describing this model. Given the
descriptiveness of the phrase, Digital Twin, we have used this term for the
conceptual model from that point on.
The Digital Twin has been adopted as a conceptual basis in the
astronautics and aerospace area in recent years. NASA has used it in their
technology roadmaps (Piascik, Vickers et al. 2010) and proposals for sustainable
space exploration (Caruso, Dumbacher et al. 2010). The concept has been
proposed for next generation fighter aircraft and NASA vehicles (Tuegel,
Ingraffea et al. 2011, Glaessgen and Stargel 2012)1, along with a description of
the challenges (Tuegel, Ingraffea et al. 2011) and implementation of as-builts
(Cerrone, Hochhalter et al. 2014).
What would be helpful are some definitions to rely on when referring to the Digital Twin and its different manifestations. We would propose the following, as visualized in Figure 4:
Digital Twin (DT) - the Digital Twin is a set of virtual information constructs that fully describes a potential or actual physical manufactured product from the micro atomic level to the macro geometrical level. At its optimum, any information that could be obtained from inspecting a physical manufactured product can be obtained from its Digital Twin. Digital Twins are of two types: the Digital Twin Prototype (DTP) and the Digital Twin Instance (DTI). DTs are operated on in a Digital Twin Environment (DTE).
1 In a comment, the Glaessgen paper attributes the origin of “Digital Twin” to DARPA without any citation. We cannot find any actual support for this claim.
Digital Twin Prototype (DTP) - this type of Digital Twin describes the
prototypical physical artifact. It contains the informational sets necessary to
describe and produce a physical version that duplicates or twins the virtual
version. These informational sets include, but are not limited to, Requirements,
Fully annotated 3D model, Bill of Materials (with material specifications), Bill of
Processes, Bill of Services, and Bill of Disposal.
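The DTP's informational sets lend themselves to a simple record structure. The sketch below is a hypothetical schema of our own; the field names and example values are illustrative, not part of the definition:

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwinPrototype:
    """Informational sets needed to produce a physical twin (hypothetical schema)."""
    requirements: list[str] = field(default_factory=list)
    annotated_3d_model: str = ""  # e.g. a path/URI to a fully annotated 3D model
    bill_of_materials: dict[str, str] = field(default_factory=dict)  # part -> material spec
    bill_of_processes: list[str] = field(default_factory=list)
    bill_of_services: list[str] = field(default_factory=list)
    bill_of_disposal: list[str] = field(default_factory=list)

# A DTP describes the design itself, not any one physical unit:
dtp = DigitalTwinPrototype(
    requirements=["max mass 1200 kg"],
    bill_of_materials={"frame": "Al 6061-T6"},
)
```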
Digital Twin Instance (DTI) - this type of Digital Twin describes a specific
corresponding physical product that an individual Digital Twin remains linked to
throughout the life of that physical product. Depending on the use cases required
for it, this type of Digital Twin may contain, but again is not limited to, the
following information sets: a fully annotated 3D model with Geometric
Dimensioning and Tolerancing (GD&T) that describes the geometry of the
physical instance and its components; a Bill of Materials that lists current
components and all past components; a Bill of Process that lists the operations
that were performed in creating this physical instance, along with the results of
any measurements and tests on the instance; a Service Record that describes
past services performed and components replaced; and Operational States
captured from actual sensor data (current, past actual, and future predicted).
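A minimal DTI sketch, under the same caveat that the schema and method names are our own illustrative choices: the twin stays linked to one serial-numbered physical unit and mirrors repairs into the service record and past-components list as they happen.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwinInstance:
    """One virtual twin linked to one physical unit (hypothetical schema)."""
    serial_number: str
    current_components: dict[str, str] = field(default_factory=dict)  # slot -> part id
    past_components: list[tuple[str, str]] = field(default_factory=list)  # (slot, retired part)
    service_record: list[str] = field(default_factory=list)
    sensor_states: list[dict] = field(default_factory=list)  # time-stamped operational states

    def replace_component(self, slot: str, new_part: str, note: str) -> None:
        """Mirror a physical repair: retire the old part and record the service."""
        if slot in self.current_components:
            self.past_components.append((slot, self.current_components[slot]))
        self.current_components[slot] = new_part
        self.service_record.append(note)

dti = DigitalTwinInstance("SN-0042", current_components={"pump": "P-100"})
dti.replace_component("pump", "P-101", "pump swapped at 4,000 h")
```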
Digital Twin Aggregate (DTA) – this type of Digital Twin is the aggregation
of all the DTIs. Unlike the DTI, the DTA may not be an independent data
structure. It may be a computing construct that has access to all DTIs and
queries them either ad-hoc or proactively. On an ad hoc basis, the computing
construct might ask, “What is the Mean Time Between Failure (MTBF) of
component X.” Proactively, the DTA might continually examine sensor readings
and correlate those sensor readings with failures to enable prognostics.
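The ad hoc MTBF query could be sketched as a computing construct that only reads from the DTIs rather than owning their data; the fleet figures below are hypothetical:

```python
# A DTA need not be an independent data structure; here it simply queries a
# collection of DTIs. Each DTI reports operating hours and failure counts
# per named component.
def mtbf(dtis: list[dict], component: str) -> float:
    """Mean Time Between Failures of `component` across the whole fleet."""
    hours = sum(d["hours"][component] for d in dtis)
    failures = sum(d["failures"][component] for d in dtis)
    if failures == 0:
        return float("inf")  # no observed failures yet
    return hours / failures

fleet = [
    {"hours": {"X": 1000.0}, "failures": {"X": 1}},
    {"hours": {"X": 3000.0}, "failures": {"X": 3}},
]
# MTBF of component X over the fleet: 4000 h / 4 failures = 1000 h
```

Dividing total operating hours by total failures is the simplest MTBF estimator; a production DTA would query live instance data rather than an in-memory list.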
Digital Twin Environment (DTE) - this is an integrated, multi-domain
physics application space for operating on Digital Twins for a variety of purposes.
These purposes would include:
Predictive - the Digital Twin would be used for predicting future behavior
and performance of the physical product. At the Prototype stage, the prediction
would be of the behavior of the designed product with components that vary
between its high and low tolerances in order to ascertain that the as-designed
product met the proposed requirements. In the Instance stage, the prediction
would be for a specific instance of a specific physical product that incorporated
actual components and component history. The prediction would start from the
current point in the product's lifecycle, at its current state, and move forward.
Multiple instances of the product could be aggregated to provide a range of
possible future states.
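A minimal sketch of the Prototype-stage prediction, with a toy mass model standing in for a real behavioral simulation; the tolerance values and the requirement are hypothetical:

```python
import itertools

# Hypothetical stand-in for a behavioral simulation: total mass of an assembly.
def simulated_mass(parts: dict[str, float]) -> float:
    return sum(parts.values())

# Each component varies between its low and high tolerance (illustrative values).
tolerances = {"frame": (90.0, 110.0), "motor": (45.0, 55.0)}
requirement_max_mass = 170.0

# Check every corner of the tolerance box against the requirement.
corners = itertools.product(*tolerances.values())
meets_requirement = all(
    simulated_mass(dict(zip(tolerances, corner))) <= requirement_max_mass
    for corner in corners
)
# Worst case here is 110 + 55 = 165 <= 170, so the as-designed product
# passes at every tolerance corner.
```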
Interrogative - this would apply to DTIs as the realization of the DTA.
Digital Twin Instances could be interrogated for their current and past histories.
Irrespective of where their physical counterpart resided in the world, individual
instances could be interrogated for their current system state: fuel amount,
throttle settings, geographical location, structure stress, or any other
characteristic that was instrumented. Multiple instances of products would
provide data that would be correlated for predicting future states. For example,
correlating component sensor readings with subsequent failures of that
component would result in an alert of possible component failure being
generated when that sensor pattern was reported. The aggregate of actual
failures could provide Bayesian probabilities for predictive uses.
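The Bayesian step can be made concrete with hypothetical fleet counts; the sensor pattern and the failure event here are purely illustrative:

```python
# Fleet-wide counts (hypothetical): how often a suspect sensor pattern
# appeared in an observation window, and how often a failure followed.
n_total = 1000               # observation windows across all instances
n_failure = 40               # windows that ended in component failure
n_pattern = 100              # windows where the sensor pattern appeared
n_pattern_and_failure = 30   # windows with both the pattern and a failure

p_failure = n_failure / n_total                              # P(F) = 0.04
p_pattern_given_failure = n_pattern_and_failure / n_failure  # P(S|F) = 0.75
p_pattern = n_pattern / n_total                              # P(S) = 0.10

# Bayes' rule: P(F|S) = P(S|F) * P(F) / P(S)
p_failure_given_pattern = p_pattern_given_failure * p_failure / p_pattern
# 0.75 * 0.04 / 0.10 = 0.30: seeing the pattern raises the failure
# probability from 4% to 30%, enough to justify generating an alert.
```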
As indicated by the 2002 slide in Figure 3, the reference to PLM indicated
that this conceptual model was and is intended to be a dynamic model that
changes over the lifecycle of the system. The system emerges virtually at the
beginning of its lifecycle, takes physical form in the production phase, continues
through its operational life, and is eventually retired and disposed of.
In the create phase, the physical system does not yet exist. The system
starts to take shape in virtual space as a Digital Twin Prototype (DTP). This is not
a new phenomenon. For most of human history, the virtual space where this
system was created existed only in people’s minds. It is only in the last quarter of
the 20th century that this virtual space could exist within the digital space of
computers.
This opened up an entirely new way of system creation. Prior to this leap in
technology, the system would have to have been implemented in physical form,
initially in sketches and blueprints but shortly thereafter made into costly
prototypes, because simply existing in people’s minds meant very limited group
sharing and understanding of both form and behavior.
In addition, while human minds are a marvel, they have severe limitations
for tasks like these. The fidelity and permanence of our human memory leaves a
great deal to be desired. Our ability to create and maintain detailed information in
our memories over a long period of time is not very good. Even for simple
objects, asking us to accurately visualize its shape is a task that most of us would
be hard-pressed to do with any precision. Ask most of us to spatially manipulate
complex shapes, and the results would be hopelessly inadequate.
However, the exponential advances in digital technologies mean that the
form of the system can be fully and richly modeled in three dimensions. In the
past, emergent form in complex and even complicated systems was a problem
because it was very difficult to ensure that all the 2D diagrams fit together when
translated into 3D objects.
In addition, where parts of the system move, understanding conflicts and
clashes ranged from difficult to impossible. There was substantial wasted time
and costs in translating 2D blueprints to 3D physical models, uncovering form
problems, and going back to the 2D blueprints to resolve the problems and
beginning the cycle anew.
With 3D models, the entire system can be brought together in virtual
space, and the conflicts and clashes discovered cheaply and quickly. It is only
once these issues of form have been resolved that the translation to physical
models needs to occur.
While uncovering emergent form issues is a tremendous improvement
over the iterative and costly two-dimensional blueprints to physical models, the
ability to simulate behavior of the system in digital form is a quantum leap in
discovering and understanding emergent behavior. System creators can now test
and understand how their systems will behave under a wide variety of
environments, using virtual space and simulation.
Also as shown in Figure 3, the ability to have multiple virtual spaces as
indicated by the blocks labeled VS1…VSn meant that the system could be
put through destructive tests inexpensively. When physical prototypes were the
only means of testing, a destructive test meant the end of that costly prototype
and potentially its environment. A physical rocket that blows up on the launch
pad destroys the rocket and launch pad, the cost of which is enormous. The
virtual rocket only blows up the virtual rocket and virtual launch pad, which can
be recreated in a new virtual space at close to zero cost.
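A toy illustration of disposable virtual sub-spaces, with a plain dictionary standing in for the virtual space:

```python
import copy

# A virtual space is just data; destroying it costs nothing but a copy.
virtual_space = {"rocket": {"intact": True}, "launch_pad": {"intact": True}}

def destructive_test(space: dict) -> dict:
    """Blow up the rocket (and the pad) inside one virtual sub-space."""
    space["rocket"]["intact"] = False
    space["launch_pad"]["intact"] = False
    return space

# Run the test in a throwaway clone (VS1); the master space is untouched
# and can spawn VS2, VS3, ... at close to zero cost.
vs1 = destructive_test(copy.deepcopy(virtual_space))
```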
The create phase is the phase in which we do the bulk of the work in filling
in the system’s four emergent areas: PD, PU, UD, and UU. While the traditional
emphasis has been on verifying and validating the requirements or predicted
desirable (PD) and eliminating the problems and failures or the predicted
undesirable (PU), the DTP model is also an opportunity to identify and eliminate
the unpredicted undesirable (UU). By varying simulation parameters across the
possible range they can take, we can investigate the non-linear behavior in
complex systems that may have combinations or discontinuities that lead to
catastrophic problems.
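One sketch of such a parameter sweep, under stated assumptions: a toy response function with a deliberately hidden discontinuity, probed by random sampling across the full parameter ranges rather than a coarse grid.

```python
import random

# Hypothetical stand-in for a behavioral simulation: the load response is
# smooth almost everywhere, but two parameters interact catastrophically in
# a narrow band that a coarse grid could easily miss.
def simulated_load(a: float, b: float) -> float:
    if 0.48 < a * b < 0.52:  # hidden discontinuity
        return 1e6           # catastrophic spike: an unpredicted undesirable
    return a + b

SAFE_LIMIT = 100.0
random.seed(0)

# Sweep the parameter ranges with random sampling.
uus = []
for _ in range(10_000):
    a, b = random.uniform(0.0, 1.0), random.uniform(0.0, 1.0)
    if simulated_load(a, b) > SAFE_LIMIT:
        uus.append((a, b))
# Every hit in `uus` is a parameter combination that would otherwise have
# surfaced as a UU in the field.
```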
Once the virtual system is completed and validated, the information is
used in real space to create a physical twin. If we have done our modeling and
simulation correctly, meaning we have accurately modeled and simulated the
real world in virtual space over a range of possibilities, we should have
dramatically reduced the number of UUs.
This is not to say we can model and simulate all possibilities. Because of
all the possible permutations and combinations in a complex system, exploring
all possibilities may not be feasible in the time allowed. However, the exponential
advances in computing capability mean that we can keep expanding the
possibilities that we can examine.
It is in this create phase that we can attempt to mitigate or eradicate the
major source of UUs – ones caused by human interaction. We can test the virtual
system under a wide variety of conditions with a wide variety of human actors.
System designers often do not allow for conditions that they cannot conceive of
occurring. No one would think of interacting with a system in such a way, until
people actually do just that in moments of panic during a crisis.
Before this ability to simulate our systems, we often tested systems using
the most competent and experienced personnel because we could not afford
expensive failures of physical prototypes. But most systems are operated by a
relatively wide range of personnel. There is an old joke that goes, “What do they
call the medical student who graduates at the bottom of his or her class?”
Answer, “Doctor.” We can now afford to virtually test systems with a diversity of
personnel, including the least qualified personnel, because virtual failures are not
only inexpensive, but they point out UUs that we have not considered.
We then move into the next phase of the lifecycle, the production phase.
Here we start to build physical systems with specific and potentially unique
configurations. We need to reflect these configurations, the as-builts, as a DTI in
virtual space so that we can have knowledge of the exact specifications and
makeup of these systems without having to be in possession of the physical
systems themselves.
So in terms of the Digital Twin, the flow goes in the opposite direction from
the create phase. The physical system is built. The data about that physical build
is sent to virtual space. A virtual representation of that exact physical system is
created in digital space.
In the support/sustain phase, we find out whether our predictions about
the system behavior were accurate. The real and virtual systems maintain their
linkage. Changes to the real system occur in both form, i.e., replacement parts,
and behavior, i.e., state changes. It is during this phase that we find out whether
our predicted desirable performance actually occurs and whether we eliminated
the predicted undesirable behaviors.
This is the phase when we see those nasty unpredicted undesirable
behaviors. If we have done a good job in ferreting out UUs in the create phase
with modeling and simulation, then these UUs will be annoyances but will cause
only minor problems. However, as has often been the case in complex systems
in the past, these UUs can be major and costly problems to resolve. In the
extreme cases, these UUs can be catastrophic failures with loss of life and
property.
In this phase the linkage between the real system and virtual system goes
both ways. As the physical system undergoes changes we capture those
changes in the virtual system so that we know the exact configuration of each
system in use. On the other side, we can use the information from our virtual
systems to predict performance and failures of the physical systems. We can
aggregate information over a range of systems to correlate specific state
changes with the high probability of future failures.
As mentioned before, the final phase, disposal/decommissioning, is often
ignored as an actual phase. There are two reasons in the context of this topic
why the disposal phase should receive closer attention. The first is that
knowledge about a system’s behavior is often lost when the system is retired.
The next generation of the system often has similar problems that could have
been avoided by using knowledge about the predecessor system. While the
physical system may need to be retired, the information about it can be retained
at little cost.
Second, while the topic at hand is emergent behavior of the system as it is
in use, there is the issue of emergent impact of the system on the environment
upon disposal. Without maintaining the design information about what material is
in the system and how it is to be disposed of properly, the system may be
disposed of in a haphazard and improper way.
Caruso, P., D. Dumbacher and M. Grieves (2010). Product Lifecycle Management and the Quest for
Sustainable Space Explorations. AIAA SPACE 2010 Conference & Exposition. Anaheim, CA.
Cerrone, A., J. Hochhalter, G. Heber and A. Ingraffea (2014). "On the Effects of Modeling As-
Manufactured Geometry: Toward Digital Twin." International Journal of Aerospace Engineering 2014.
Glaessgen, E. H. and D. Stargel (2012). The Digital Twin Paradigm for Future NASA and US Air Force Vehicles.
AIAA 53rd Structures, Structural Dynamics, and Materials Conference, Honolulu, Hawaii.
Grieves, M. (2005). "Product Lifecycle Management: the new paradigm for enterprises." Int. J. Product
Development 2(Nos. 1/2): 71-84.
Grieves, M. (2006). Product Lifecycle Management: Driving the Next Generation of Lean Thinking. New
York, McGraw-Hill.
Grieves, M. (2011). Virtually Perfect: Driving Innovative and Lean Products through Product Lifecycle
Management. Cocoa Beach, FL, Space Coast Press.
Piascik, R., J. Vickers, D. Lowry, S. Scotti, J. Stewart and A. Calomino (2010). Technology Area 12:
Materials, Structures, Mechanical Systems, and Manufacturing Road Map, NASA Office of Chief Technologist.
Tuegel, E. J., A. R. Ingraffea, T. G. Eason and S. M. Spottswood (2011). "Reengineering Aircraft
Structural Life Prediction Using a Digital Twin." International Journal of Aerospace Engineering 2011.
... The underlying concept of DTs has already been described extensively in the literature. Conceptual knowledge stems from a variety of studies, including those on autonomous vehicles [2], product development [3], manufacturing [4], intelligent production [5], and model-based development [6]. ...
... According to [6], the concept of DTs was first mentioned in 2000 by Michael Grieves in the context of manufacturing. He dates the concept back to a presentation on the development of a Product Lifecycle Management (PLM) center at the University of Michigan, [3] while [4] dates this work to 2003. The concept is based on the idea that a digital information construct can be created using a physical system as an independent object. ...
... The concept is based on the idea that a digital information construct can be created using a physical system as an independent object. This digital information is called a "twin"; it forms the virtual representation of the physical object and is linked to this physical object throughout its lifecycle by mutual data synchronization [3]. Hence, the underlying research area deals with PLM [4]. ...
Full-text available
Digital Twins (DT) as digital representations are increasingly becoming operational design tools in a variety of contexts. Although a common understanding of the concept and the underlying development procedure would facilitate DT applications, only limited information has been published on the essential stages of development and fundamental development activities. This paper examines the extent to which an abstract, and thus generally applicable model for the development of Digital Twins can be identified. In order to come up with such a reference procedure , a structured analysis of published development experiences has been performed. Three major application domains, namely product lifecycle management, manufacturing, and predictive maintenance , could be detailed and cross-checked. For each of these domains, a contextual development model could be derived from empirically valid design and engineering practices. The data also allowed for the determination of which way each model corresponds to existing Digital Twin concepts. The use of a standard modeling notation enabled the integration of the domain-specific models into a single Digital Twin development model. As a result, developers are guided by domain-independent and-dependent development steps. Due to its generic structure, the model can serve for further domain explorations.
... The concept was then re-proposed in the Winter Simulation Conference 1992 (New York), where DTs were generally denoted as 'simulation models' to assist with problem-solving and decision making, although their validity was questionable [11]. The first use of the contemporary term "Digital Twin" came in 2002 when Grieves [12] defined a DT as a 'Conceptual Ideal for Product Lifecycle Management' [13,14]. In the last decade, DTs have been refined (e.g. ...
... In the last decade, DTs have been refined (e.g. [13,15]), and their application has flourished in the manufacturing industry, where they allowed faster production time, cost reduction, and prediction of system malfunctions [14,16]. The concept has also been successfully implemented for aircraft and NASA spaceships [17,18], Formula 1 vehicles [19], and offshore oil and gas facilities [20]. ...
Full-text available
Digital Twins (DTs) are forecasted to be used in two-thirds of large industrial companies in the next decade. In the Architecture, Engineering and Construction (AEC) sector, their actual application is still largely at the prototype stage. Industry and academia are currently reconciling many competing definitions and unclear processes for developing DTs. There is a compelling need to establish DTs as practice in AEC by developing common procedures and standards tailored to the sector's procedures and use cases. This paper proposes a step-by-step workflow process for developing a DT for an existing asset in the built environment, providing a proof-of-concept case study based on the Clifton Suspension Bridge in Bristol (UK). To achieve its aim, this paper (i) reviews the state-of-the-art of DTs in Civil Engineering, (ii) proposes a working DT-based workflow framework for the built environment applicable to existing assets, (iii) applies the framework and develops of the physical-virtual architecture to a case study of bridge management, and finally (iv) discusses insights from the application. The main novelty lies in the development of a versatile methodological framework that can be applied to the broad context of civil infrastructure. This paper's importance resides in the knowledge challenge, value proposition and operation dictated by developing a DT workflow for the built environment, which ultimately represents a relevant use case for the digital transformation of national infrastructure.
... The technologies above enable most use cases of 5G networks, and the primary use cases are enhanced mobile broadband capabilities compute many simulated equipment failure scenarios or tested communication failures [8]. Nowadays, it is possible to define a DT as an advanced system that can provide highfidelity models [9]. ...
Full-text available
5G networks require dynamic network monitoring and advanced security solutions. This work performs the essential steps to implement a basic 5G digital twin (DT) in a warehouse scenario. This study provides a paradigm of end-to-end connection and encryption to internet of things (IoT) devices. Network function virtualization (NFV) technologies are crucial to connecting and encrypting IoT devices. Innovative logistical scenarios are undergoing constant changes in logistics, and higher deployment of IoT devices in logistic scenarios, such as warehouses, demands better communication capabilities. The simulation tools enable digital twin network implementation in planning. Altair Feko (WinProp) simulates the radio behavior of a typical warehouse framework. The radio behavior can be exported as a radio simulation dataset file. This dataset file represents the virtual network’s payload. GNS3, an open-source network simulator, performs data payload transmission among clients to servers using custom NFV components. By transmitting data from client to server, we achieved end-to-end communication. Additionally, custom NFV components enable advanced encryption standard (AES) adoption. In summary, this work analyzes the round-trip time (RTT) and throughput of the payload data packages, in which two data packages, encrypted and non-encrypted, are observed.
... Therefore, the researchers explained the definition of DT in more detail based on the characteristics of the research objects in the industry. In the manufacturing industry, Grieves proposed "the DT is a set of virtual information constructs that fully describes a potential or actual physical manufactured product from the micro atomic level to the macro geometrical level" [31]. Urbina Coronado et al. [32] proposed that DT is a digital model that can be used for offline simulation and analysis and then can be used to control the entire manufacturing process. ...
Full-text available
With the development of artificial intelligence, big data, Internet of Things, and other technologies, digital twin has gained great attention and become a current research topic. Using digital twin technology, the digital twin model can be constructed in the cyber space that is fully equivalent to the physical entity. It is always consistent with the physical entity in the operation process, which greatly improves the dynamic perception and prediction ability of the real world. After the development in recent years, digital twin has gradually changed from the initial concept discussion to the study of model framework and implementation method. However, because the research objects in different industries have great differences in their own composition, service conditions, and application scenarios, they have personalized characteristics in modeling strategies and usage methods. Therefore, based on different industries, this paper reviews the current articles on digital twins and distinguishes the focus of digital twin modeling research; subsequently, the relevant supporting techniques and methods are summarized according to their different importance for digital twin modeling. Based on the review in this paper, future researchers can conduct targeted research on digital twin technology in term of the characteristics of the objects in their industry.
... However, to reach the full potential of process understanding and (individualized) process control in which frying conditions are optimally tuned with respect to the raw-materials' and products' characteristics and their frying behavior (including "product-process interactions") so that potato chips with optimum quality are produced while resource and process efficiency are optimized the development of a fullyfledged Digital Twin is key. According to Grieves (2016), in the development of digital twins three general stages can be classified: 1) digital models of varying degrees of accuracy, which are virtual representations of a procudt or physical system, they can physics based or data driven and in some cases a combination of the two (hybrid models); however, there is no automated data exchange between the physical and the digital sphere; 2) digital shadows which usually are elaborate digital models which incorpotate an automated upload of information from the real world object to the virtual one; the digital shadow is primarily an instrument to transfer the real world into the digital one and could be used for e.g. decision support; 3) digital twins enable bidirectional information flow in real-time between the physical and digital sphere; they aim to use simulations and (process) models to generate an image that is as accurate as possible and can e.g. ...
Full-text available
Potato chips production is a traditional food process. To achieve uniform product quality, raw materials are usually rigorously sorted. Traditionally, the process is conducted in a single stage approach leading to high quality losses. Recently, dynamically optimized frying processes have been found to result in higher product quality. Consequently, industrial continuous deep-fat fryers convey potato disks through several zones pre-set at different temperatures. However, these improved systems still do not take the variabilities in frying kinetics among potatoes into consideration. To address this issue and decrease uncertainties in end-product quality, frying conditions of each zone must be optimized, physiochemical properties of the various raw tubers and their frying kinetics taking into account. This paper, therefore, presents a novel approach for an intelligent frying process with embedded computer vision systems providing continuous monitoring of product quality and, therefore, facilitate dynamic control of frying conditions in order to meet desired quality attributes in the final product. An extensive literature review of the key physiochemical attributes of raw potato tubers is presented, followed by an introduction to novel pre-treatment technologies, and the importance of optimal frying conditions. An overview of the potentials for using computer vision systems for the assessment of said quality criteria is given, followed by a detailed description of the envisioned frying process. The paper concludes that the realization of intelligent frying processes necessitates the development of fully fledged digital twins of the process and the products, combining physics based and data driven modelling with real time sensing and control. Terminology: Chips refer to thin slices of potato while French fries refers to wedges/stripes
... In 2002, Michael Grieves, who introduced the concept of DT, defined it as "A set of virtual information constructs that fully describes a potential or actual physical manufactured product from the micro atomic level to the macro geometrical level. At its optimum, any information that could be obtained from inspecting a physically manufactured product can be obtained from its Digital Twin" [1]. When defined as such, a Digital Twin is comprised of three components ( Figure 1): ...
Full-text available
One of the most promising technologies that is driving digitalization in several industries is Digital Twin (DT). DT refers to the digital replica or model of any physical object (physical twin). What differentiates DT from simulation and other digital or CAD models is the automatic bidirectional exchange of data between digital and physical twins in real-time. The benefits of implementing DT in any sector include reduced operational costs and time, increased productivity, better decision making, improved predictive/preventive maintenance, etc. As a result, its implementation is expected to grow exponentially in the coming decades as, with the advent of Industry 4.0, products and systems have become more intelligent, relaying on collection and storing incremental amounts of data. Connecting that data effectively to DTs can open up many new opportunities and this paper explores different industrial sectors where the implementation of DT is taking advantage of these opportunities and how these opportunities are taking the industry forward. The paper covers the applications of DT in 13 different industries including the manufacturing, agriculture, education, construction, medicine, and retail, along with the industrial use case in these industries.
The electric power sector is one of the later sectors to adopt digital twins and models-in-the-loop for its operations. This article first reviews the history, the fundamental properties, and the variants of such digital twins and how they relate to the power system. Second, initial applications of the digital twin concept in the power and energy business are explained. It is shown that trans-disciplinarity, the different time scales, and the heterogeneity of the required models are the main challenges in this process, and that co-simulation and co-modeling can help. This article will help power system professionals enter the field of digital twins and learn how they can be used in their business.
The erratic modern world introduces challenges to all sectors of society and potentially introduces additional inequality. One possibility for decreasing educational inequality is to provide remote access to facilities that enable learning and training. A similar approach of remote resource usage can be employed in resource-poor situations where the required equipment is available at other premises. The concept of Industry 5.0 (i5.0) focuses on a human-centric approach, enabling technologies to concentrate on human–machine interaction and emphasizing the importance of societal values. This paper introduces a novel robotics teleoperation platform supported by i5.0. The platform reduces inequality and allows the use and learning of robotics remotely, independent of time and location. The platform is based on digital twins with bi-directional data transmission between the physical and digital counterparts. The proposed system allows teleoperation, remote programming, and near-real-time monitoring of controlled robots, robot time scheduling, and social interaction between users. The system design and implementation are described in detail, followed by experimental results.
Due to the implementation of new technologies, the healthcare sector now produces more data than ever before. This data is of high importance to patients but in many cases it is inaccessible. To counteract this effect, many mobile apps have been developed to aid patients in the management of their personal health data. In this article we will present an analysis and comparison of several apps of this sort, selected from those available within the Portuguese market. The goal of this analysis is to create a design framework for a new personal health management app to be developed. It was concluded that despite an ample offer, there is still opportunity to produce a differentiated application for this market, by including innovative features and methods of displaying information, such as 3D models.
A simple, nonstandardized material test specimen, which fails along one of two different likely crack paths, is considered herein. The result of deviations in geometry on the order of tenths of a millimeter, this ambiguity in crack path motivates the consideration of as-manufactured component geometry in the design, assessment, and certification of structural systems. Herein, finite element models of as-manufactured specimens are generated and subsequently analyzed to resolve the crack-path ambiguity. The consequence and benefit of such a “personalized” methodology is the prediction of a crack path for each specimen based on its as-manufactured geometry, rather than a distribution of possible specimen geometries or nominal geometry. The consideration of as-manufactured characteristics is central to the Digital Twin concept. Therefore, this work is also intended to motivate its development.
Reengineering of the aircraft structural life prediction process to fully exploit advances in very high performance digital computing is proposed. The proposed process utilizes an ultrahigh fidelity model of individual aircraft by tail number, a Digital Twin, to integrate computation of structural deflections and temperatures in response to flight conditions, with resulting local damage and material state evolution. A conceptual model of how the Digital Twin can be used for predicting the life of aircraft structure and assuring its structural integrity is presented. The technical challenges to developing and deploying a Digital Twin are discussed in detail.
Virtually Perfect is the key to products being both innovative and lean in the 21st century. Virtual products, which are the digital information about the physical product, create value for both product producers and their customers throughout the entire product lifecycle of create, build, sustain, and dispose. Both product producers and users will need to change their perspective of products being only physical to a perspective of products being dual in nature: both physical and virtual.
Future generations of NASA and U.S. Air Force vehicles will require lighter mass while being subjected to higher loads and more extreme service conditions over longer time periods than the present generation. Current approaches for certification, fleet management and sustainment are largely based on statistical distributions of material properties, heuristic design philosophies, physical testing and assumed similitude between testing and operational conditions and will likely be unable to address these extreme requirements. To address the shortcomings of conventional approaches, a fundamental paradigm shift is needed. This paradigm shift, the Digital Twin, integrates ultra-high fidelity simulation with the vehicle's on-board integrated vehicle health management system, maintenance history and all available historical and fleet data to mirror the life of its flying twin and enable unprecedented levels of safety and reliability.
Product Lifecycle Management (PLM) is a developing paradigm. One way to develop an understanding of PLM's characteristics and boundaries is to propose models that help us conceptualise both holistic and component views in compact packages. Models can both give us a rich way of thinking about overall concepts and identify areas where we need to explore issues that such models raise. In this paper, the author proposes and discusses two such related models, the Product Lifecycle Management Model (PLM Model) and the Mirrored Spaces Model (MSM), and investigates the conceptual and technical issues raised by these models.
Piascik, R., J. Vickers, D. Lowry, S. Scotti, J. Stewart, and A. Calomino (2010). Technology Area 12: Materials, Structures, Mechanical Systems, and Manufacturing Road Map. NASA Office of Chief Technologist.