I have written an extensive Digital Twin literature review as part of my PhD program. The thing is that it is quite long (200+ references and about 70 pages including all of them), and one journal has already rejected it... I think that nearly all the information is relevant, and I suppose that, at most, I could cut about 10 pages by summarizing. I also need the journal in which I publish the article to be Q1. Does anybody know a journal that does not impose strict restrictions on article length?
Thank you so much in advance,
I have prepared an article that performs a literature review on the concept of the Digital Twin. Since my doctoral program is based on a compendium of publications, all the articles I produce must be published in journals that appear in the latest list published by the Journal Citation Reports (SCI and/or SSCI) or SCOPUS. In addition, at least one of them must be in the first or second quartile of its category. Therefore, I would like to try to get this article published in a journal in the first quartile (Q1), so that I can take this burden off my shoulders. For now I have found the following:
- IEEE Access: Telecommunications – SCIE, Computer Science, Information Systems – SCIE (Q1); Engineering, Electrical & Electronic – SCIE (Q2).
- ACM Computing Surveys: Computer Science, Theory & Methods – SCIE (Q1).
- Expert Systems with Applications: Operations Research & Management Science – SCIE (Q1); Computer Science, Artificial Intelligence – SCIE (Q1); Engineering, Electrical & Electronic – SCIE (Q1).
- Future Generation Computer Systems: Computer Science, Theory & Methods – SCIE (Q1).
- Engineering Applications of Artificial Intelligence: Computer Science, Artificial Intelligence – SCIE (Q1); Engineering, Electrical & Electronic – SCIE (Q1); Engineering, Multidisciplinary – SCIE (Q1); Automation & Control Systems – SCIE (Q1).
- Artificial Intelligence: Computer Science, Artificial Intelligence – SCIE (Q1).
- Journal of Computational Science: Computer Science, Interdisciplinary Applications – SCIE (Q2); Computer Science, Theory & Methods – SCIE (Q1).
- Advances in Engineering Software: Computer Science, Interdisciplinary Applications – SCIE (Q2); Computer Science, Software Engineering – SCIE (Q1); Engineering, Multidisciplinary – SCIE (Q1).
- Decision Support Systems: Operations Research & Management Science – SCIE (Q1); Computer Science, Artificial Intelligence – SCIE (Q1); Computer Science, Information Systems – SCIE (Q1).
- Information and Software Technology: Computer Science, Software Engineering – SCIE (Q1); Computer Science, Information Systems – SCIE (Q2).
- Journal of Industrial Information Integration: Computer Science, Interdisciplinary Applications – SCIE (Q1); Engineering, Industrial – SCIE (Q1).
- Journal of Network and Computer Applications: Computer Science, Interdisciplinary Applications – SCIE (Q1); Computer Science, Hardware & Architecture – SCIE (Q1); Computer Science, Software Engineering – SCIE (Q1).
- Applied Sciences: Chemistry, Multidisciplinary – SCIE (Q3); Materials Science, Multidisciplinary – SCIE (Q3); Physics, Applied – SCIE (Q2); Engineering, Multidisciplinary – SCIE (Q2).
- Future Internet: Computer Science, Information Systems (Q2).
- Applied System Innovation: Telecommunications – ESCI (Q3); Computer Science, Information Systems – ESCI (Q3); Engineering, Electrical & Electronic – ESCI (Q3).
Does anyone have a recommendation for any other? Is it advisable to avoid generic journals? And is it better if a journal is indexed in more categories?
Thank you very much in advance.
I have been asked to develop the state of the art of the Digital Twin, and I have tried to read many of the articles in the literature dealing with the topic, paying special attention to conceptual papers and reviews. However, the more I read, far from converging towards a unified vision, the stronger my impression that each author develops his or her own ideas and theories about the Digital Twin, largely depending on the application sector and the objectives to be achieved. So much so that this concept is turning into a nightmare for me; I could not have imagined such heterogeneity of perspectives (just my current perception). Here is a list of some topics that I find difficult to understand:
- Is it possible to represent an intangible entity with a Digital Twin? Many authors argue so and admit both tangible and intangible entities when developing Digital Twins. However, even today, many publications continue to give Digital Twin definitions or descriptions in which only physical entities are considered. Moreover, when talking about the flow of communications between the real and virtual worlds, the real-world entity is commonly assumed to be physical. When tackling other aspects such as the lifecycle, the real entity is usually assumed to be physical and related to a product in the manufacturing domain. There is a catch to this question: if the Digital Twin is supposed to track the entire lifecycle of the represented real entity, but a Digital Twin only comes into existence once the real entity has been physically built (as-built), what about early lifecycle phases such as design?
- Generally, from what I have read, aggregation and composition of Digital Twins are allowed. Just as a real-world entity may itself be composed of several elements (each of which may have its own Digital Twin), a Digital Twin may itself be composed of several Digital Twins. Is a 1:1 (bijective) relationship between the real-world entity and its Digital Twin always assumed? In my opinion, it should be...
- Does the development of a Digital Twin imply the need for bidirectional communication between the represented entity and the Digital Twin itself? Normally, one-way communication from the real entity to the virtual one is assumed. In my opinion, this link alone is enough to enable the convergence between both worlds and the synchronization of the Digital Twin, since it allows the virtual entity to reflect the real one (in real time or not). However, the existence of a link in the opposite direction (from the virtual entity to the real one) is not always considered. It does bring great value, since it enables the Digital Twin to control or act on the represented real entity, increasing its usability and the number of possible applications. But is it an intrinsic characteristic of the Digital Twin? In some cases it may be difficult to achieve and may lead to different interpretations. Just consider intangible entities: establishing an automatic bidirectional flow of communications seems complicated. Take production processes as an example. They are real but intangible concepts, although, if we think about the composition/aggregation characteristic of Digital Twins, a production process, despite being intangible, can in turn be composed of digital sub-twins of real physical entities such as production cells, in which case the aforementioned bidirectional flow would be feasible... We are almost getting into very abstract and philosophical issues. Another example where I consider the bidirectional flow to be complicated is e-health, a field where there is already research on the Digital Twin. For example, it would be feasible to develop a Digital Twin of a diabetic person to monitor his or her blood glucose level.
Based on the data collected, the Digital Twin could provide, for example, nutritional recommendations through an application to improve the person's condition, leaving it to the person to read and implement them. Would this be considered bidirectional communication? It affects the represented entity, but only indirectly... For now, at least in this sector, I do not see it as very viable for the Digital Twin to control organs through actuators or other devices implanted in the human body, where true bidirectional communication would take place.
- Is lifecycle tracking an essential characteristic of the Digital Twin? In my opinion, there are domains where Digital Twins have actually been developed in which this view does not fit. Additionally, some authors do not consider it necessary for the Digital Twin to track the whole lifecycle, only the relevant subset of it.
- Many definitions or descriptions pose the Digital Twin as a whole through which it is possible to have a real-time representation of a real-world entity and its traceability throughout its lifecycle; only the term Digital Twin is used. However, there are other approaches in which, depending on the lifecycle stage and the level of realization of the real entity to be represented, other concepts such as the Digital Model or Digital Thread are introduced. To cite a few examples, Grieves proposes the concepts of Digital Twin Prototype and Digital Twin Instance (together with Digital Twin Aggregate and Digital Twin Environment). Madni proposes Pre-Digital Twin and Digital Twin (together with Adaptive Digital Twin and Intelligent Digital Twin). Hribernik introduces the concepts of Product Avatar and Parent Avatar. In a similar vein, Eigner presents and distinguishes the concepts of Digital Model, Digital Thread and Digital Twin. Stark also talks about the Digital Prototype / Digital Master, Digital Shadow and Digital Twin... Among all these, is one commonly accepted as the "right" approach? Or are they all just different visions from different authors?
- So far, based on what I have read, I have been understanding the Digital Thread as the Digital Model proposed by Eigner or the Digital Master / Prototype proposed by Stark. I get the impression that the same happens in other publications, even more so when only the concepts of Digital Twin and Digital Thread are elaborated. In the case of Eigner, I think he focuses more on the links themselves when referring to the Digital Thread. Is this possible? Do you have the same impression? It may also be that I have misunderstood everything up to now (I hope that is not the case)…
- In the following, I present a use case related to a van manufacturing company. I am really interested in seeing how you would understand and name each of the aspects I mention below. The paragraphs that present the situation are written in italics, and my thoughts in plain text without any formatting:
- The introduction of a new van model usually involves, among other things, arduous exterior, interior and component design tasks. For this purpose, 3D modeling tools are commonly used, and the resulting models prove to be very useful for subsequent prototyping, testing and design refinement steps. All these resulting assets launch the Digital Thread of the new van to be manufactured, framed within the design stage. From this moment on, in addition to the vehicle manufacturing company, it is common for component or service suppliers to participate, also gaining access to the pertinent models. Would you refer to this as the Digital Model/Master of the generic van model to be produced? As you can see, I am saying that at this point the Digital Thread is being launched, thus understanding it as the Digital Thread… I suppose that at this initial design stage, in addition to the different designs and general product specifications that give rise to the generic van and all its possible services (variant-free at this point), it is also possible to run different simulations to check the correctness of the product and of the design decisions made. Since there is not yet a physical van, nor a physical prototype of one, should these types of product instances be referred to as Digital Twins? Would it make sense to build a physical prototype at this stage? I suppose it also depends on the domain and the use cases…
- With the final general design checked and validated, the production phase of the van begins, where many of the previously created and properly updated models are used to drive different manufacturing processes. It is at this point where, depending on the final customer and unexpected orders, some reengineering of the general models of the new van might be necessary. In fact, a standard van, a camper van, an armored cash-in-transit van and a van for transporting people with reduced mobility are not the same. To make the changes, the general vehicle models are inherited and the appropriate redesigns and validations are carried out, with the active participation of all the involved suppliers. In case such vehicles are to be mass-produced, a digital sub-thread, or the inclusion of vehicle variants in the original Digital Thread, could even be considered. Here, different variants of vans are presented that come from the same generic van model… They share common attributes, but an initial customization stage starts, and more customization can be expected later… Say, for example, that from the generic van model a passenger van and a box/goods van can be derived, both of which admit much more customization based on the final customer's requirements and orders. What should these sub-general van models be called? Digital Sub-Model/Master? Digital Sub-Thread? You wouldn't call them Digital Twins yet, would you? Nevertheless, at this phase I find it more feasible to build physical prototypes in case they are needed... Again, different design decisions and simulations could be carried out here with the developed models (and in conjunction with the stakeholders) to check the vans to be subsequently produced.
- In any case, whenever the changes or information to be introduced into the general models are specific to a particular unit to be produced, the Digital Twin associated with that particular van arises. In its models, in addition to specific information on its design, the data and peculiarities associated with its production are also recorded. During the operation phase, these Digital Twin models are updated in real time based on the data coming from the physical van, so that they constantly reflect its status. New models for diagnostic and prognostic purposes could also be generated using such information. I think most of you will more or less agree with this. However, I believe that with this I am violating the many definitions of the Digital Twin which indicate that it enables traceability throughout the entire product lifecycle, as it seems to intervene only from the manufacturing phase onwards and not in the design phase. Maybe, once a specific van (a particular product without the variance inherent to the general design) is to be produced, further final simulations, checks and subtle redesigns could be performed with the derived general models, and these would still be considered part of the design. Another question about the information stored in the Digital Twin comes to mind. Should the design aspects (or configuration items) that can be shared by more than one van, and that are not particular to the specific van to be produced, be part of the Digital Twin itself? Or might they be stored in their respective Digital Model/Master or Sub-Model/Master, knowing that, thanks to the Digital Thread, that information would be reachable from the Digital Twin (or that the Digital Twin could be expanded or enriched with it)? Note that here I am adopting the view of Eigner and Stark, as I am considering the existence of a Digital Model / Master and interpreting the Digital Thread as the links themselves...
Remember that in the first question I was assuming the Digital Thread to play the role of the Digital Model / Master...
I am sorry for the length of all these questions, but I have tried to explain them well and clearly. I would be very grateful if you could give me your views on any of these points.
Thank you very much in advance.
I am currently working on a PhD project for a car manufacturing company, which basically consists of creating a predictive maintenance application for the machines currently used to fill the air conditioning circuits of vehicles. In essence, each cycle consists of two phases designed to perform checks on the circuit, followed by a last one in which the corresponding refrigerant gas is charged. Specifically, in the first phase the circuit is pressurized in order to detect leaks from the inside to the outside of the circuit; in the second phase a vacuum is applied to the circuit in order to detect leaks from the outside to the inside; and finally, if no leak is detected, the circuit is filled. Regarding the data collected, several readings of the pressure reached inside the circuit are taken in each phase, except for the gas charging phase:
- First phase (pressurization): A total of three pressure readings are taken at different times (pressurization, stabilization and control).
- Second phase (vacuum): A total of four readings are taken at different time instants (release of the circuit pressure to atmospheric pressure, vacuum, vacuum stabilization and control).
- Third phase (charge): Grams of gas charged in the circuit.
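To fix ideas about the data layout, this is how I am currently thinking of representing each cycle as a fixed-length record (a minimal sketch; all field names are my own invention, not the real column names from the machines):

```python
from dataclasses import dataclass

@dataclass
class FillingCycle:
    """One filling cycle: 3 + 4 + 1 = 8 values (illustrative field names)."""
    # Phase 1 (pressurization): three pressure readings
    p_pressurization: float
    p_stabilization: float
    p_control: float
    # Phase 2 (vacuum): four pressure readings
    v_release: float
    v_vacuum: float
    v_stabilization: float
    v_control: float
    # Phase 3 (charge): grams of refrigerant charged
    charge_g: float

    def as_vector(self):
        """Return the cycle as an 8-dimensional feature vector."""
        return [self.p_pressurization, self.p_stabilization, self.p_control,
                self.v_release, self.v_vacuum, self.v_stabilization,
                self.v_control, self.charge_g]

# Hypothetical example values, just to show the shape of one record
cycle = FillingCycle(12.0, 11.8, 11.7, 1.0, 0.05, 0.06, 0.07, 850.0)
print(len(cycle.as_vector()))
```

Viewed this way, each cycle is a point in an 8-dimensional space rather than a raw time-stamped signal, which is how I am framing the monitoring problem below.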
The attached FillingCurve.png file shows the typical theoretical curve of a filling cycle with the three aforementioned phases. As for the data, the attached SampleDataTable.png table presents a small sample of them.
The objective proposed to me is to model and monitor these variables so that, following a predictive maintenance strategy, it is possible to predict their trends and detect possible anomalies in real time, making it possible to anticipate failures in the pressurization system of the machines or in the pump in charge of the vacuum. With regard to the results of the cycles, it is worth mentioning that only those NOKs associated with the filling console have been taken into account, discarding the cycles that were NOK because the vehicle circuit itself had defects (leaks, bad connections, etc.). In any case, it should be noted that the factory does not fully trust the assigned NOK labels... so maybe it would be better to consider only the OK samples...
As far as I understand, these data constitute time series, a completely new field for me. I have some experience in supervised and unsupervised classification problems using classical machine learning algorithms, as well as in computer vision using deep learning, but none with time series. One of the problems I have encountered is that the classical techniques for dealing with this type of data, such as ARIMA and its variants, are only valid for equispaced time series. However, this does not hold in my case because of the industrial context it comes from: the machine is not filling continuously; there are line stops, breaks, vacations, maintenance stops, etc.
Can anyone point me in the right direction? Does anyone know of techniques that can be applied to this type of time series? I would appreciate any kind of help, idea or suggestion, because although I thought it would not be so complicated and that time series modeling was in a very mature state, the truth is that I am quite lost.
I believe that, in order to apply the classical techniques, one option would be to summarize the data into new time intervals (hourly, for example), although this is not an alternative I feel very comfortable with.
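If I did go down the aggregation route, the binning itself is straightforward (a stdlib-only sketch with made-up timestamps), and it also makes visible why I am uncomfortable with it: every line stop reappears as an empty hourly bucket, i.e. a missing value for the downstream model:

```python
from datetime import datetime
from collections import defaultdict

def hourly_means(samples):
    """Average irregular (timestamp, value) samples into hourly buckets."""
    buckets = defaultdict(list)
    for ts, value in samples:
        # truncate each timestamp to the start of its hour
        buckets[ts.replace(minute=0, second=0, microsecond=0)].append(value)
    return {hour: sum(vs) / len(vs) for hour, vs in sorted(buckets.items())}

# Hypothetical pressure samples; 09:00 has no cycles (line stop)
samples = [
    (datetime(2021, 3, 1, 8, 5), 11.8),
    (datetime(2021, 3, 1, 8, 40), 11.6),
    (datetime(2021, 3, 1, 10, 15), 11.7),
]
print(hourly_means(samples))
```

The 08:00 bucket averages to 11.7 and the 09:00 bucket simply does not exist, so an equispaced method would still need some imputation or gap-handling strategy on top.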
Thank you so much in advance.
I am currently working on the state of the art of the digital twin concept, and after seeing the categorization into Digital Model, Digital Shadow and Digital Twin made by Kritzinger, W. et al. (2018), a few questions have come to my mind:
- According to the definition, we can speak of a "true" digital twin as long as the data flows between an existing physical object and a digital object are fully integrated and automated in both directions. However, I have found articles where the developed digital twins, instead of directly acting on or controlling their respective physical twins, send warnings or recommendations to a person so that he or she performs the corresponding actions. Hence, to my understanding, there is no automated flow of information from the digital twin back to the physical object. In these cases, should we consider the developed systems as true digital twins, or as digital shadows instead?
- Regarding the composition of digital twins, is it possible for a digital twin to be internally composed of several digital twins? As an example, we could think of a factory that builds a certain product, which is composed of several components that already come with their own digital twins implemented by the supplier. Therefore, we could consider that the digital twin of the final product is, among other things, the composition of all the digital twins of its components. I ask this because I have not seen this property widely developed in the literature. Instead, it is common to see that the digital twin is composed of physical and/or data-driven computational models and data that describe the behavior of its physical counterpart in real time.
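To make the composition idea concrete, here is a toy composite-pattern sketch of the structure I have in mind (all class and attribute names are hypothetical, purely for illustration):

```python
class DigitalTwin:
    """A twin that may itself contain component twins (composite pattern)."""

    def __init__(self, name, components=None):
        self.name = name
        self.components = list(components or [])
        self.state = {}

    def update_from_physical(self, data):
        # toy stand-in for the real physical-to-virtual data flow
        self.state.update(data)

    def flatten(self):
        """Return all twins in the hierarchy, including this one."""
        twins = [self]
        for component in self.components:
            twins.extend(component.flatten())
        return twins

# Supplier-provided component twins composed into the product twin
battery = DigitalTwin("battery")
compressor = DigitalTwin("compressor")
product = DigitalTwin("final_product", [battery, compressor])
print([t.name for t in product.flatten()])
```

Under this structure, the product twin would aggregate its components' states while each component twin keeps its own synchronization with its physical counterpart, which is exactly the property I have not seen developed in the literature.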
- Does it make sense to speak about the lifecycle stage in which a digital twin is framed (Design – Development/Manufacturing – Operation/Service – Dismissal/Retirement) if it does not represent an object/product? As examples, we could think about the digital twin of an industrial process, of a human being, of disaster management in a Smart City, etc. In my opinion, it does not make sense...
Thank you so much in advance