Working Paper

Origins of the Digital Twin Concept

Excerpted based on: Trans-Disciplinary Perspectives on System Complexity All rights reserved
Digital Twin:
Mitigating Unpredictable, Undesirable Emergent
Behavior in Complex Systems (Excerpt)
Dr. Michael Grieves and John Vickers
III. The Digital Twin Concept
While the terminology has changed over time, the basic concept of the
Digital Twin model has remained fairly stable from its inception in 2002. It is
based on the idea that a digital informational construct about a physical system
could be created as an entity on its own. This digital information would be a “twin”
of the information that was embedded within the physical system itself and be
linked with that physical system through the entire lifecycle of the system.
Origins of the Digital Twin Concept
The concept of the Digital Twin dates back to a University of Michigan
presentation to industry in 2002 for the formation of a Product Lifecycle
Management (PLM) center. The presentation slide, as shown in Figure 3 and
originated by Dr. Grieves, was simply called “Conceptual Ideal for PLM.”
However, it did have all the elements of the Digital Twin: real space, virtual
space, the link for data flow from real space to virtual space, the link for
information flow from virtual space to real space and virtual sub-spaces.
The premise driving the model was that each system consisted of two
systems, the physical system that has always existed and a new virtual system
that contained all of the information about the physical system. This meant that
there was a mirroring or twinning of systems between what existed in real space
to what existed in virtual space and vice versa.
The PLM or Product Lifecycle Management in the title meant that this was
not a static representation, but that the two systems would be linked throughout
the entire lifecycle of the system. The virtual and real systems would be
connected as the system went through the four phases of creation, production
(manufacture), operation (sustainment/support), and disposal.
This conceptual model was used in the first executive PLM courses at the
University of Michigan in early 2002, where it was referred to as the Mirrored
Spaces Model. It was referenced that way in a 2005 journal article (Grieves
2005). In the seminal PLM book, Product Lifecycle Management: Driving the
Next Generation of Lean Thinking, the conceptual model was referred to as the
Information Mirroring Model (Grieves 2006).
The concept was greatly expanded in Virtually Perfect: Driving Innovative
and Lean Products through Product Lifecycle Management (Grieves 2011),
where the concept was still referred to as the Information Mirroring Model.
However, it is here that the term, Digital Twin, was attached to this concept by
reference to the co-author’s way of describing this model. Given the
descriptiveness of the phrase, Digital Twin, we have used this term for the
conceptual model from that point on.
The Digital Twin has been adopted as a conceptual basis in the
astronautics and aerospace area in recent years. NASA has used it in their
technology roadmaps (Piascik, Vickers et al. 2010) and proposals for sustainable
space exploration (Caruso, Dumbacher et al. 2010). The concept has been
proposed for next generation fighter aircraft and NASA vehicles (Tuegel,
Ingraffea et al. 2011, Glaessgen and Stargel 2012)1, along with a description of
the challenges (Tuegel, Ingraffea et al. 2011) and implementation of as-builts
(Cerrone, Hochhalter et al. 2014).
Defining the Digital Twin
What would be helpful are some definitions to rely on when referring to the Digital Twin and its different manifestations. We would propose the following, as visualized in Figure 4:
Digital Twin (DT) - the Digital Twin is a set of virtual information constructs that fully describes a potential or actual physical manufactured product from the micro atomic level to the macro geometrical level. At its optimum, any information that could be obtained from inspecting a physical manufactured product can be obtained from its Digital Twin. Digital Twins are of two types: the Digital Twin Prototype (DTP) and the Digital Twin Instance (DTI). DTs are operated on in a Digital Twin Environment (DTE).

1 In a comment, the Glaessgen paper attributes the origin of “Digital Twin” to DARPA without any citation. We cannot find any actual support for this claim.
Digital Twin Prototype (DTP) - this type of Digital Twin describes the
prototypical physical artifact. It contains the informational sets necessary to
describe and produce a physical version that duplicates or twins the virtual
version. These informational sets include, but are not limited to, Requirements,
Fully annotated 3D model, Bill of Materials (with material specifications), Bill of
Processes, Bill of Services, and Bill of Disposal.
Digital Twin Instance (DTI) - this type of Digital Twin describes a specific
corresponding physical product that an individual Digital Twin remains linked to
throughout the life of that physical product. Depending on the use cases required
for it, this type of Digital Twin may contain, but again is not limited to, the
following information sets: a fully annotated 3D model with Geometric Dimensioning and Tolerancing (GD&T) that describes the geometry of the physical instance and its components; a Bill of Materials that lists current components and all past components; a Bill of Process that lists the operations that were performed in creating this physical instance, along with the results of any measurements and tests on the instance; a Service Record that describes past services performed and components replaced; and Operational States captured from actual sensor data (current, past actual, and future predicted).
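The information sets above suggest a natural data structure. The following is a minimal, hypothetical sketch of how a DTI's information sets might be organized in code; all class and field names are illustrative assumptions, not taken from the source.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceEvent:
    # One entry in the Service Record: what was done and what was replaced.
    date: str
    description: str
    components_replaced: list = field(default_factory=list)

@dataclass
class DigitalTwinInstance:
    # Hypothetical container for the DTI information sets described above.
    serial_number: str
    model_3d_ref: str                  # reference to annotated 3D model with GD&T
    bill_of_materials: dict            # current (and past) components
    bill_of_process: list              # operations performed, with test results
    service_record: list = field(default_factory=list)      # ServiceEvent entries
    operational_states: list = field(default_factory=list)  # sensor snapshots

dti = DigitalTwinInstance(
    serial_number="SN-0042",
    model_3d_ref="models/sn-0042.step",
    bill_of_materials={"pump": "P-7 rev B"},
    bill_of_process=[{"op": "weld seam 3", "result": "pass"}],
)
dti.service_record.append(ServiceEvent("2023-05-01", "pump replaced", ["P-7 rev B"]))
print(len(dti.service_record))  # 1
```

The key design point is that the instance remains linked to one specific physical product for its whole life, so its records only ever accumulate.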
Digital Twin Aggregate (DTA) – this type of Digital Twin is the aggregation
of all the DTIs. Unlike the DTI, the DTA may not be an independent data
structure. It may be a computing construct that has access to all DTIs and
queries them either ad hoc or proactively. On an ad hoc basis, the computing
construct might ask, “What is the Mean Time Between Failures (MTBF) of
component X?” Proactively, the DTA might continually examine sensor readings
and correlate those sensor readings with failures to enable prognostics.
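The ad hoc MTBF query mentioned above can be sketched as a computing construct that pools data across DTIs. This is an illustrative assumption about how such a query might look; the function name and record layout are invented.

```python
def mtbf_of_component(dtis, component):
    """Ad hoc DTA query: pool operating hours and failure counts across DTIs."""
    total_hours = sum(d["hours"] for d in dtis)
    failures = sum(
        1 for d in dtis for event in d["failures"] if event["component"] == component
    )
    # MTBF = total fleet operating hours / number of failures of that component.
    return total_hours / failures if failures else float("inf")

# Hypothetical fleet of three DTIs with their recorded failure events.
fleet = [
    {"hours": 1000, "failures": [{"component": "X"}]},
    {"hours": 1500, "failures": []},
    {"hours": 500,  "failures": [{"component": "X"}]},
]
print(mtbf_of_component(fleet, "X"))  # 1500.0 hours per failure
```

Note that, as the text says, the DTA need not be an independent data structure: here it is just a function with access to the DTIs, queried on demand.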
Digital Twin Environment (DTE) - this is an integrated, multi-domain
physics application space for operating on Digital Twins for a variety of purposes.
These purposes would include:
Predictive - the Digital Twin would be used for predicting future behavior
and performance of the physical product. At the Prototype stage, the prediction
would be of the behavior of the designed product with components that vary
between its high and low tolerances in order to ascertain that the as-designed
product met the proposed requirements. In the Instance stage, the prediction
would be for a specific instance of a specific physical product that incorporated
actual components and component history. The predictive performance would be
based on the current point in the product's lifecycle and its current state, moving
forward. Multiple instances of the product could be aggregated to provide a range
of possible future states.
Interrogative - this would apply to DTIs as the realization of the DTA.
Digital Twin Instances could be interrogated for their current and past histories.
Irrespective of where their physical counterpart resided in the world, individual
instances could be interrogated for their current system state: fuel amount,
throttle settings, geographical location, structure stress, or any other
characteristic that was instrumented. Multiple instances of products would
provide data that would be correlated for predicting future states. For example,
correlating component sensor readings with subsequent failures of that
component would result in an alert of possible component failure being
generated when that sensor pattern was reported. The aggregate of actual
failures could provide Bayesian probabilities for predictive uses.
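The Bayesian use just described can be made concrete with Bayes' rule: the aggregate of fleet histories supplies the counts from which P(failure | sensor pattern) is estimated. The counts below are hypothetical, purely for illustration.

```python
def p_failure_given_pattern(n_total, n_failures, n_pattern, n_pattern_and_failure):
    """Bayes' rule from fleet-wide counts:
    P(F | P) = P(P | F) * P(F) / P(P)."""
    p_f = n_failures / n_total                       # prior probability of failure
    p_pattern = n_pattern / n_total                  # probability of seeing the pattern
    p_pattern_given_f = n_pattern_and_failure / n_failures  # likelihood
    return p_pattern_given_f * p_f / p_pattern

# Hypothetical fleet history: of 1000 instances, 50 failed; the sensor
# pattern appeared in 80 instances, 40 of which subsequently failed.
print(p_failure_given_pattern(1000, 50, 80, 40))  # 0.5
```

A 50% posterior failure probability, versus a 5% base rate, is exactly the kind of signal that would trigger the alert described above when the pattern is reported.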
The Digital Twin Model throughout the Lifecycle
As indicated by the 2002 slide in Figure 3, the reference to PLM indicated
that this conceptual model was and is intended to be a dynamic model that
changes over the lifecycle of the system. The system emerges virtually at the
beginning of its lifecycle, takes physical form in the production phase, continues
through its operational life, and is eventually retired and disposed of.
In the create phase, the physical system does not yet exist. The system
starts to take shape in virtual space as a Digital Twin Prototype (DTP). This is not
a new phenomenon. For most of human history, the virtual space where this
system was created existed only in people’s minds. It is only in the last quarter of
the 20th century that this virtual space could exist within the digital space of
computers.
This opened up an entire new way of system creation. Prior to this leap in
technology, the system would have to have been implemented in physical form,
initially in sketches and blueprints but shortly thereafter made into costly
prototypes, because simply existing in people’s minds meant very limited group
sharing and understanding of both form and behavior.
In addition, while human minds are a marvel, they have severe limitations
for tasks like these. The fidelity and permanence of our human memory leaves a
great deal to be desired. Our ability to create and maintain detailed information in
our memories over a long period of time is not very good. Even for a simple
object, accurately visualizing its shape is a task that most of us would be
hard-pressed to do with any precision. Ask most of us to spatially manipulate
complex shapes, and the results would be hopelessly inadequate.
However, the exponential advances in digital technologies mean that the
form of the system can be fully and richly modeled in three dimensions. In the
past, emergent form in complex and even complicated systems was a problem
because it was very difficult to ensure that all the 2D diagrams fit together when
translated into 3D objects.
In addition, where parts of the system move, understanding conflicts and
clashes ranged from difficult to impossible. There was substantial wasted time
and costs in translating 2D blueprints to 3D physical models, uncovering form
problems, and going back to the 2D blueprints to resolve the problems and
beginning the cycle anew.
With 3D models, the entire system can be brought together in virtual
space, and the conflicts and clashes discovered cheaply and quickly. Only once
these issues of form have been resolved does the translation to physical
models need to occur.
While uncovering emergent form issues is a tremendous improvement
over the iterative and costly cycle from two-dimensional blueprints to physical
models, the ability to simulate behavior of the system in digital form is a quantum leap in
discovering and understanding emergent behavior. System creators can now test
and understand how their systems will behave under a wide variety of
environments, using virtual space and simulation.
Also as shown in Figure 3, the ability to have multiple virtual spaces, as
indicated by the blocks labeled VS1…VSn, meant that the system could be
put through destructive tests inexpensively. When physical prototypes were the
only means of testing, a destructive test meant the end of that costly prototype
and potentially its environment. A physical rocket that blows up on the launch
pad destroys the rocket and launch pad, the cost of which is enormous. The
virtual rocket only blows up the virtual rocket and virtual launch pad, which can
be recreated in a new virtual space at close to zero cost.
The create phase is the phase in which we do the bulk of the work in filling
in the system’s four emergent areas: PD, PU, UD, and UU. While the traditional
emphasis has been on verifying and validating the requirements or predicted
desirable (PD) and eliminating the problems and failures or the predicted
undesirable (PU), the DTP model is also an opportunity to identify and eliminate
the unpredicted undesirable (UU). By varying simulation parameters across the
possible range they can take, we can investigate the non-linear behavior in
complex systems that may have combinations or discontinuities that lead to
catastrophic problems.
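The parameter-variation strategy described above can be sketched as a simple sweep over the allowed ranges, flagging combinations whose response exceeds a safety threshold. The `simulate()` response surface below is invented for illustration (its discontinuity stands in for the kind of non-linear behavior the text describes); the names and thresholds are assumptions.

```python
import itertools

def simulate(pressure, temperature):
    # Toy nonlinear model with a hidden discontinuity: a resonance-like
    # blow-up when both parameters sit in a narrow band.
    if 90 <= pressure <= 100 and 240 <= temperature <= 260:
        return 1e6  # catastrophic response
    return pressure * 0.1 + temperature * 0.01  # benign nominal response

def find_uu_candidates(pressures, temperatures, threshold=100.0):
    """Sweep the full parameter grid and flag combinations whose
    simulated response exceeds the safety threshold."""
    return [
        (p, t)
        for p, t in itertools.product(pressures, temperatures)
        if simulate(p, t) > threshold
    ]

bad = find_uu_candidates(range(0, 201, 10), range(0, 401, 20))
print(bad)  # [(90, 240), (90, 260), (100, 240), (100, 260)]
```

The point of the sketch is that the dangerous band would never appear if only nominal values were simulated; it only surfaces because the sweep covers the whole range each parameter can take.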
Once the virtual system is completed and validated, the information is
used in real space to create a physical twin. If we have done our modeling and
simulation correctly, meaning we have accurately modeled and simulated the
real world in virtual space over a range of possibilities, we should have
dramatically reduced the number of UUs.
This is not to say we can model and simulate all possibilities. Because of
all the possible permutations and combinations in a complex system, exploring
all possibilities may not be feasible in the time allowed. However, the exponential
advances in computing capability mean that we can keep expanding the
possibilities that we can examine.
It is in this create phase that we can attempt to mitigate or eradicate the
major source of UUs – ones caused by human interaction. We can test the virtual
system under a wide variety of conditions with a wide variety of human actors.
System designers often do not allow for conditions that they cannot conceive of
occurring. No one would think of interacting with a system in such a way – until
people actually do just that in moments of panic during a crisis.
Before this ability to simulate our systems, we often tested systems using
the most competent and experienced personnel because we could not afford
expensive failures of physical prototypes. But most systems are operated by a
relatively wide range of personnel. There is an old joke that goes, “What do they
call the medical student who graduates at the bottom of his or her class?”
Answer, “Doctor.” We can now afford to virtually test systems with a diversity of
personnel, including the least qualified personnel, because virtual failures are not
only inexpensive, but they point out UUs that we have not considered.
We then move into the next phase of the lifecycle, the production phase.
Here we start to build physical systems with specific and potentially unique
configurations. We need to reflect these configurations, the as-builts, as a DTI in
virtual space so that we can have knowledge of the exact specifications and
makeup of these systems without having to be in possession of the physical
systems.
So in terms of the Digital Twin, the flow goes in the opposite direction from
the create phase. The physical system is built. The data about that physical build
is sent to virtual space. A virtual representation of that exact physical system is
created in digital space.
In the support/sustain phase, we find out whether our predictions about
the system behavior were accurate. The real and virtual systems maintain their
linkage. Changes to the real system occur in both form, i.e., replacement parts,
and behavior, i.e., state changes. It is during this phase that we find out whether
our predicted desirable performance actually occurs and whether we eliminated
the predicted undesirable behaviors.
This is the phase when we see those nasty unpredicted undesirable
behaviors. If we have done a good job in ferreting out UUs in the create phase
with modeling and simulation, then these UUs will be annoyances but will cause
only minor problems. However, as has often been the case in complex systems
in the past, these UUs can be major and costly problems to resolve. In the
extreme cases, these UUs can be catastrophic failures with loss of life and
property.
In this phase the linkage between the real system and virtual system goes
both ways. As the physical system undergoes changes we capture those
changes in the virtual system so that we know the exact configuration of each
system in use. On the other side, we can use the information from our virtual
systems to predict performance and failures of the physical systems. We can
aggregate information over a range of systems to correlate specific state
changes with the high probability of future failures.
As mentioned before, the final phase, disposal / decommissioning, is often
ignored as an actual phase. There are two reasons in the context of this topic
why the disposal phase should receive closer attention. The first is that
knowledge about a system’s behavior is often lost when the system is retired.
The next generation of the system often has similar problems that could have
been avoided by using knowledge about the predecessor system. While the
physical system may need to be retired, the information about it can be retained
at little cost.
Second, while the topic at hand is emergent behavior of the system as it is
in use, there is the issue of emergent impact of the system on the environment
upon disposal. Without maintaining the design information about what material is
in the system and how it is to be disposed of properly, the system may be
disposed of in a haphazard and improper way.
References:
Caruso, P., D. Dumbacher and M. Grieves (2010). Product Lifecycle Management and the Quest for
Sustainable Space Explorations. AIAA SPACE 2010 Conference & Exposition. Anaheim, CA.
Cerrone, A., J. Hochhalter, G. Heber and A. Ingraffea (2014). "On the Effects of Modeling As-
Manufactured Geometry: Toward Digital Twin." International Journal of Aerospace Engineering 2014.
Glaessgen, E. H. and D. Stargel (2012). The Digital Twin Paradigm for Future NASA and U.S. Air Force Vehicles.
AIAA 53rd Structures, Structural Dynamics, and Materials Conference, Honolulu, Hawaii.
Grieves, M. (2005). "Product Lifecycle Management: the new paradigm for enterprises." Int. J. Product
Development 2(Nos. 1/2): 71-84.
Grieves, M. (2006). Product Lifecycle Management: Driving the Next Generation of Lean Thinking. New
York, McGraw-Hill.
Grieves, M. (2011). Virtually Perfect: Driving Innovative and Lean Products through Product Lifecycle
Management. Cocoa Beach, FL, Space Coast Press.
Piascik, R., J. Vickers, D. Lowry, S. Scotti, J. Stewart and A. Calomino (2010). Technology Area 12:
Materials, Structures, Mechanical Systems, and Manufacturing Road Map, NASA Office of Chief
Technologist.
Tuegel, E. J., A. R. Ingraffea, T. G. Eason and S. M. Spottswood (2011). "Reengineering Aircraft
Structural Life Prediction Using a Digital Twin." International Journal of Aerospace Engineering 2011.
... Over two-thirds of those businesses are expected to own a minimum of one digital twin aspect in their outputs by 2022 (Gartner, 2019), which can address by digitizing their operations, service mentality, and marketing acceleration. The digital twins were defined initially in 2002 (Grieves and Vickers, 2016) as "Digital infomedia construction of a physical system as a single entity." began from a basic conceptual model of the product (i.e., drawer) life cycle management (PLM) to the term 'twin,' which refers to the fact that will link these digital data to the physical while incomplete existence, this think may enhance the closed stakeholders' chain cycle. ...
... Moreover, the study determines the degree of fidelity (completed) and the indicator of temporal integration in real-time or offline as discussed and reviewed (Grieves M. et al., 2016) ...
Article
Full-text available
Late order loss for difficulty handling (loading and unloading) activities left an alarm message to make traditional transportation handling for distribution e-commerce more accessible through mitigating it to semi-auto actions. This article discusses the idea of Vehicle Containers made up of Permutational Drawers, i.e., VCPD, that ensure ergonomic handling. The proposed Ergonomic Digital Twin (EDT) manages the VCPD by the Internet of things, i.e., IoT. The VCPD object has two dimensions: drawers' size and motion mechanism. These targets are implemented via establishing the digital twins' model, i.e., DT, for these drawers to test its qualifying for implementation, mainly if supported by IoT. There is still much confusion regarding the DT and how it will apply to the VCPD in medium-sized schemes for transportation enterprises. This work activates the IoT to bolster and simplify transportation activities through designing VCPD and control via a unified framework having several standard steps to reduce execution time, effort, and transportation costs.
... However, to reach the full potential of process understanding and (individualized) process control in which frying conditions are optimally tuned with respect to the raw-materials' and products' characteristics and their frying behavior (including "product-process interactions") so that potato chips with optimum quality are produced while resource and process efficiency are optimized the development of a fullyfledged Digital Twin is key. According to Grieves (2016), in the development of digital twins three general stages can be classified: 1) digital models of varying degrees of accuracy, which are virtual representations of a procudt or physical system, they can physics based or data driven and in some cases a combination of the two (hybrid models); however, there is no automated data exchange between the physical and the digital sphere; 2) digital shadows which usually are elaborate digital models which incorpotate an automated upload of information from the real world object to the virtual one; the digital shadow is primarily an instrument to transfer the real world into the digital one and could be used for e.g. decision support; 3) digital twins enable bidirectional information flow in real-time between the physical and digital sphere; they aim to use simulations and (process) models to generate an image that is as accurate as possible and can e.g. ...
Article
Potato chips production is a traditional food process. To achieve uniform product quality, raw materials are usually rigorously sorted. Traditionally, the process is conducted in a single stage approach leading to high quality losses. Recently, dynamically optimized frying processes have been found to result in higher product quality. Consequently, industrial continuous deep-fat fryers convey potato disks through several zones pre-set at different temperatures. However, these improved systems still do not take the variabilities in frying kinetics among potatoes into consideration. To address this issue and decrease uncertainties in end-product quality, frying conditions of each zone must be optimized, physiochemical properties of the various raw tubers and their frying kinetics taking into account. This paper, therefore, presents a novel approach for an intelligent frying process with embedded computer vision systems providing continuous monitoring of product quality and, therefore, facilitate dynamic control of frying conditions in order to meet desired quality attributes in the final product. An extensive literature review of the key physiochemical attributes of raw potato tubers is presented, followed by an introduction to novel pre-treatment technologies, and the importance of optimal frying conditions. An overview of the potentials for using computer vision systems for the assessment of said quality criteria is given, followed by a detailed description of the envisioned frying process. The paper concludes that the realization of intelligent frying processes necessitates the development of fully fledged digital twins of the process and the products, combining physics based and data driven modelling with real time sensing and control. Terminology: Chips refer to thin slices of potato while French fries refers to wedges/stripes
... In 2002, Michael Grieves, who introduced the concept of DT, defined it as "A set of virtual information constructs that fully describes a potential or actual physical manufactured product from the micro atomic level to the macro geometrical level. At its optimum, any information that could be obtained from inspecting a physically manufactured product can be obtained from its Digital Twin" [1]. When defined as such, a Digital Twin is comprised of three components ( Figure 1): ...
Article
Full-text available
One of the most promising technologies that is driving digitalization in several industries is Digital Twin (DT). DT refers to the digital replica or model of any physical object (physical twin). What differentiates DT from simulation and other digital or CAD models is the automatic bidirectional exchange of data between digital and physical twins in real-time. The benefits of implementing DT in any sector include reduced operational costs and time, increased productivity, better decision making, improved predictive/preventive maintenance, etc. As a result, its implementation is expected to grow exponentially in the coming decades as, with the advent of Industry 4.0, products and systems have become more intelligent, relaying on collection and storing incremental amounts of data. Connecting that data effectively to DTs can open up many new opportunities and this paper explores different industrial sectors where the implementation of DT is taking advantage of these opportunities and how these opportunities are taking the industry forward. The paper covers the applications of DT in 13 different industries including the manufacturing, agriculture, education, construction, medicine, and retail, along with the industrial use case in these industries.
... The digital twin (DT) is identified as a Gartner strategic technology for 2019 [22]. They combine some of the Industry 4.0 (I4.0) technologies to generate a virtual representation of a product, process, or system from inception, through production, to operation (MoL) and finally disposal [23]. DTs have also been proposed to virtually represent operators in human-interfacing CPS [24]. ...
Article
Full-text available
A digital twin is a “live” virtual replica of a sensorised component, product, process, human, or system. It accurately copies the entity being modelled by capturing information in real time, or near real time, from the entity, through embedded sensors and the Internet-of-Things. Many applications of digital twins in the manufacturing industry have been investigated. This article focuses on, and contributes to, the development of product digital twins to reduce the impact of quantity, quality, and demand uncertainties in remanufacturing. Starting from issues specific to remanufacturing, the article derives the functional requirements for a product digital twin for remanufacturing and proposes a Unified Modelling Language (UML) model of a generic asset to be remanufactured. The model is used in an example which highlights the need to translate existing knowledge and data into an integrated system to realise a product digital twin, capable of supporting remanufacturing process planning.
... The concept of the digital twin was first described during a lecture on "Product Lifecycle Management (PLM)" by Michael Grieves (2016) at the University of Michigan. NASA adopted the concepts for its next-generation aircrafts and published the first definition of a digital twin that is still widely used today: "A digital twin is an integrated multi-physics, multi-scale, probabilistic simulation of an as-built vehicle or system that uses the best available physical models, sensor updates, fleet history, etc., to represent the life of the corresponding flying twin." ...
Article
Full-text available
Modularized construction with precast concrete elements has many advantages, such as shorter construction times, higher quality, flexibility, and lower costs. These advantages are mainly due to its potential for prefabrication and series production. However, the production processes are still craftsmanship, and automation rarely occurs. Fundamental to the automation of production is digitization. In recent years, the manufacturing industry made significant progress through the intelligent networking of components, machines, and processes in the introduction of Industry 4.0. A key concept of Industry 4.0 is the digital twin, which represents both components and machines, thus creating a dynamic network in which the participants can communicate with each other. So far, BIM and digital twins in construction have focused mainly on the structure as a whole and do not consider feedback loops from production at the component level. This paper proposes a framework for a digital twin for the industrialized production of precast concrete elements in series production based on the asset administration shell (AAS) from the context of Industry 4.0. For this purpose, relevant production processes are identified, and their information requirements are derived. Data models and corresponding AAS for precast concrete parts will be created for the identified processes. The functionalities of the presented digital twin are demonstrated using the use case of quality control for a precast concrete wall element. The result shows how data can be exchanged with the digital twin and used for decision-making.
Chapter
Due to the implementation of new technologies, the healthcare sector now produces more data than ever before. This data is of high importance to patients but in many cases it is inaccessible. To counteract this effect, many mobile apps have been developed to aid patients in the management of their personal health data. In this article we will present an analysis and comparison of several apps of this sort, selected from those available within the Portuguese market. The goal of this analysis is to create a design framework for a new personal health management app to be developed. It was concluded that despite an ample offer, there is still opportunity to produce a differentiated application for this market, by including innovative features and methods of displaying information, such as 3D models.
Article
Digital twin for the automated production of precast concrete elements. Using precast concrete elements in construction has many advantages, such as cost and planning reliability, precision, and efficiency. In most cases, the production of precast concrete elements is not fully automated. There are many manual work steps that prevent a further increase in productivity. The basis for full automation is the use of information and communication technology. A digital twin as an integrated data model is necessary for this. This paper examines how the combination of Building Information Modeling with methods from the context of Industry 4.0, which enable largely self-organized and decentralized production, can provide the basis for the complete automation and networking of all production steps. Based on the administration shell, which implements the digital twin in Industry 4.0, a suitable description for digital twins of precast concrete modules is being developed by developing data and interaction models for industrialized production. , Ernst und Sohn. All rights reserved.
Article
Full-text available
Through the transformation that the electrical sector has been passing by, improvements in asset management and the guarantee of sustainable and quality services have become essential aspects for power companies. Thus, the digitalization of energy utilities presents itself as an important and crucial process. A concept that involves a variety of innovative trends is the digital twin. It consists of a 3D virtual replica of existing physical objects and real-time monitoring of certain measures. By developing a digital twin in the electrical power grid, a virtual replica of the network is obtained providing network virtual maps, 3D asset models, dynamic and real-time data of grid assets, and IoT sensing. All these data can feed a platform where AI-based models and advanced field operation technologies and solutions will be applied. With a Network Digital Twin©development, applications involving on-field activities can be improved through augmented reality (AR) and virtual reality (VR) to enhance workforce operations. This paper discusses the best practices for the development of a digital twin for the electrical power sector. These practices were found during the development of a project carried out by Enel Distribuição São Paulo, applying a living lab concept in the densest region of Brazil. The results of this paper present 3D images captured with specialized tools, and how they influence the workforce activities of human interface operation. Furthermore, financial and operational returns are presented through a cost–benefit analysis for each relevant aspect.
Article
We can observe self-organization properties in various systems. However, modern networked dynamical sociotechnical systems have some features that allow the benefits of self-organization to be realized in a wide range of systems in economic and social areas. The review examines the general principles of self-organized systems, as well as the features of the implementation of self-organization in sociotechnical systems. We also delve into production systems, in which the technical component is decisive, and social networks, in which the social component dominates; we analyze models used for modeling self-organizing networked dynamical systems. It is shown that discrete models prevail at the micro level. Furthermore, the review deals with the features of using continuous models for modeling at the macro level.
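As a hypothetical illustration of the discrete, micro-level models the review refers to (this is a generic sketch, not a model taken from the review), consider a minimal synchronous majority-rule dynamic on a ring of agents, where each agent adopts the majority state in its local neighborhood:

```python
def step(states, k=1):
    """Synchronous majority-rule update on a ring: each agent adopts the
    majority state among itself and its k nearest neighbors on each side."""
    n = len(states)
    new = []
    for i in range(n):
        neighborhood = [states[(i + d) % n] for d in range(-k, k + 1)]
        new.append(1 if 2 * sum(neighborhood) > len(neighborhood) else 0)
    return new

def run_to_fixed_point(states, max_steps=100):
    """Iterate the local rule until the global configuration stops changing."""
    for _ in range(max_steps):
        nxt = step(states)
        if nxt == states:
            return states
        states = nxt
    return states

# A lone dissenting agent is absorbed by its neighborhood's majority,
# showing global order emerging from purely local interactions:
print(run_to_fixed_point([1, 1, 0, 1, 1, 1, 1, 1]))  # -> [1, 1, 1, 1, 1, 1, 1, 1]
```

The point of the sketch is the one emphasized in the abstract: at the micro level the dynamics are discrete and local, yet a macro-level pattern (here, consensus) emerges without any central coordinator.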
Article
A simple, nonstandardized material test specimen, which fails along one of two different likely crack paths, is considered herein. Arising from deviations in geometry on the order of tenths of a millimeter, this crack-path ambiguity motivates the consideration of as-manufactured component geometry in the design, assessment, and certification of structural systems. Herein, finite element models of as-manufactured specimens are generated and subsequently analyzed to resolve the crack-path ambiguity. The consequence and benefit of such a "personalized" methodology is the prediction of a crack path for each specimen based on its as-manufactured geometry, rather than on a distribution of possible specimen geometries or on nominal geometry. The consideration of as-manufactured characteristics is central to the Digital Twin concept. Therefore, this work is also intended to motivate its development.
Article
Reengineering of the aircraft structural life prediction process to fully exploit advances in very high performance digital computing is proposed. The proposed process utilizes an ultrahigh fidelity model of individual aircraft by tail number, a Digital Twin, to integrate computation of structural deflections and temperatures in response to flight conditions, with resulting local damage and material state evolution. A conceptual model of how the Digital Twin can be used for predicting the life of aircraft structure and assuring its structural integrity is presented. The technical challenges to developing and deploying a Digital Twin are discussed in detail.
Book
Virtually Perfect is the key to products being both innovative and lean in the 21st century. Virtual products, which are the digital information about the physical product, create value for both product producers and their customers throughout the entire product lifecycle of create, build, sustain, and dispose. Both product producers and users will need to change their perspective of products being only physical to a perspective of products being dual in nature: both physical and virtual.
Conference Paper
Future generations of NASA and U.S. Air Force vehicles will require lighter mass while being subjected to higher loads and more extreme service conditions over longer time periods than the present generation. Current approaches for certification, fleet management and sustainment are largely based on statistical distributions of material properties, heuristic design philosophies, physical testing and assumed similitude between testing and operational conditions and will likely be unable to address these extreme requirements. To address the shortcomings of conventional approaches, a fundamental paradigm shift is needed. This paradigm shift, the Digital Twin, integrates ultra-high fidelity simulation with the vehicle's on-board integrated vehicle health management system, maintenance history and all available historical and fleet data to mirror the life of its flying twin and enable unprecedented levels of safety and reliability.
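The mirroring that this paradigm describes can be caricatured in code. The following is a minimal, hypothetical sketch (class and field names are illustrative, not any NASA or U.S. Air Force system) of a virtual twin keyed by tail number that accumulates sensor data and maintenance history, so that its state tracks the life of its physical counterpart:

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Hypothetical virtual mirror of one physical vehicle, keyed by tail number."""
    tail_number: str
    sensor_log: list = field(default_factory=list)       # data flow: real -> virtual
    maintenance_log: list = field(default_factory=list)  # sustainment history

    def ingest(self, reading: dict) -> None:
        """Record a telemetry reading received from the physical twin."""
        self.sensor_log.append(reading)

    def record_maintenance(self, event: str) -> None:
        """Append a maintenance event to the twin's history."""
        self.maintenance_log.append(event)

    def peak_load(self, channel: str) -> float:
        """A toy analysis step: information flowing virtual -> real,
        e.g. the worst load seen on a given sensor channel."""
        return max(r[channel] for r in self.sensor_log if channel in r)

# Usage: the twin mirrors one specific airframe over its life.
twin = DigitalTwin("AF-1001")
twin.ingest({"wing_strain": 0.8})
twin.ingest({"wing_strain": 1.4})
twin.record_maintenance("inspection: no crack growth observed")
print(twin.peak_load("wing_strain"))  # -> 1.4
```

Real implementations would replace the toy `peak_load` query with the ultra-high fidelity simulation the abstract describes; the sketch only shows the structural idea of a per-vehicle virtual record fed by its flying twin.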
Article
Product Lifecycle Management (PLM) is a developing paradigm. One way to develop an understanding of PLM's characteristics and boundaries is to propose models that help us conceptualise both holistic and component views in compact packages. Models can both give us a rich way of thinking about overall concepts and identify areas where we need to explore the issues that such models raise. In this paper, the author proposes and discusses two such related models, the Product Lifecycle Management Model (PLM Model) and the Mirrored Spaces Model (MSM), and investigates the conceptual and technical issues raised by these models.
Piascik, R., J. Vickers, D. Lowry, S. Scotti, J. Stewart, and A. Calomino (2010). Technology Area 12: Materials, Structures, Mechanical Systems, and Manufacturing Road Map, NASA Office of Chief Technologist.