Article

Collaborative Resilience: Taxonomy-Informed Neural Networks for Smart Assets' Maintenance in Hostile Industry 4.0 Environments


Abstract

This paper explores knowledge-informed machine learning, and particularly taxonomy-informed neural networks (TINN), to enhance data-driven smart assets' maintenance with contextual knowledge. Focusing on assets within the same class that may exhibit subtle variations, we introduce a weighted Lehmer mean as a dynamic mechanism for knowledge integration. The method considers semantic distances between the asset-in-question and others in the class, enabling adaptive weighting based on behavioural characteristics. This preserves the specificity of individual models, accommodating heterogeneity arising from manufacturing and operational factors. In the adversarial learning context, the suggested method ensures robustness and resilience against adversarial influences. Our approach assumes a kind of federated learning from neighbouring assets while maintaining a detailed understanding of individual asset behaviours within a class. By combining smart assets with digital twins, federated learning, and adversarial knowledge-informed machine learning, this study underscores the importance of collaborative intelligence for efficient and adaptive maintenance strategies in Industry 4.0 and beyond.
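As a rough illustration of the aggregation mechanism sketched in the abstract, the following Python/NumPy snippet computes a weighted Lehmer mean over an asset's own prediction and the predictions of neighbouring assets of the same class, with weights derived from semantic distances. The inverse-distance weighting, the parameter p and all function names are illustrative assumptions; the paper's exact formulation is not reproduced here.

import numpy as np

def weighted_lehmer_mean(values, weights, p=2.0):
    # L_p(x; w) = sum(w_i * x_i^p) / sum(w_i * x_i^(p-1));
    # p = 1 recovers the weighted arithmetic mean, p = 2 the contraharmonic mean.
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return np.sum(weights * values**p) / np.sum(weights * values**(p - 1))

def aggregate_neighbour_predictions(own_pred, neighbour_preds, semantic_distances, p=2.0):
    # Blend the asset's own prediction with neighbours' predictions.
    # Semantically closer neighbours get larger weights (a simple
    # inverse-distance kernel, assumed here for illustration).
    preds = np.concatenate(([own_pred], neighbour_preds))
    dists = np.concatenate(([0.0], semantic_distances))   # distance 0 to the asset itself
    weights = 1.0 / (1.0 + dists)                          # closer asset -> larger weight
    return weighted_lehmer_mean(preds, weights, p=p)

# Toy usage: one asset and three neighbours of the same class.
print(aggregate_neighbour_predictions(0.82, [0.75, 0.9, 0.4], [0.1, 0.3, 2.0]))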

References
Chapter
Full-text available
Human-Robot Collaboration combines the reliability of robots with human adaptability. It is a prime candidate to respond to the trend of Mass Customization which requires frequent reconfiguration with variable lot sizes. But the close contact between humans and robots creates new safety risks, and ergonomic factors like robot-induced stress need to be considered. Therefore we propose a human-centric Digital Twin framework, where information about the human is stored and processed in a dedicated Digital Twin and can be transmitted to the robot’s Digital Twin for human-aware adaptations. We envision and briefly discuss three possible applications. Our framework has the potential to advance collaborative robotics but inherits technical challenges that come with Digital Twin based approaches and human modelling.
Article
Full-text available
Due to the global uncertainty caused by social problems such as COVID-19 and the war in Ukraine, companies have opted for the use of emerging technologies, to produce more with fewer resources and thus maintain their productivity; that is why the market for wearable artificial intelligence (AI) and wireless sensor networks (WSNs) has grown exponentially. In the last decade, maintenance 4.0 has achieved best practices due to the appearance of emerging technologies that improve productivity. However, some social trends seek to explore the interaction of AI with human beings to solve these problems, such as Society 5.0 and Industry 5.0. The research question is: could a human-in-the-loop-based maintenance framework improve the resilience of physical assets? This work helps to answer this question through the following contributions: first, a search for research gaps in maintenance; second, a scoping literature review of the research question; third, the definition, characteristics, and the control cycle of Maintenance 5.0 framework; fourth, the maintenance worker 5.0 definition and characteristics; fifth, two proposals for the calculation of resilient maintenance; and finally, Maintenance 5.0 is validated through a simulation in which the use of the worker in the loop improves the resilience of an Industrial Wireless Sensor Network (IWSN).
Article
Full-text available
There is a recognized need for mass personalization for sustainability at scale. Mass personalization is becoming a leading research trend in the latest Industrial Revolution, whereas substantial research has been undertaken on the role of Industry 4.0 enabling technologies. The world is moving beyond mass customization, while manufacturing has led to mass personalization ahead of other industries. However, most studies have not treated human capabilities, machines, and technologies as sustainable collaboration. This research investigates mass personalization as a common goal under the latest Industrial revolutions. Also, it proposes a Reference Architecture Model for achieving mass personalization that contributes to understanding how Industry 5.0 enhances Industry 4.0 for higher resilience and sustainability through a human-centric approach. The study implies that Human Capital 5.0 leads collaboration with machines and technologies, bringing more value-added and sustainable products.
Article
Full-text available
Physics-Informed Neural Networks (PINN) are neural networks (NNs) that encode model equations, like Partial Differential Equations (PDE), as a component of the neural network itself. PINNs are nowadays used to solve PDEs, fractional equations, integral-differential equations, and stochastic PDEs. This novel methodology has arisen as a multi-task learning framework in which a NN must fit observed data while reducing a PDE residual. This article provides a comprehensive review of the literature on PINNs. While the primary goal of the study was to characterize these networks and their related advantages and disadvantages, the review also attempts to incorporate publications on a broader range of collocation-based physics-informed neural networks, which start from the vanilla PINN and include many other variants, such as physics-constrained neural networks (PCNN), variational hp-VPINN, and conservative PINN (CPINN). The study indicates that most research has focused on customizing the PINN through different activation functions, gradient optimization techniques, neural network structures, and loss function structures. Despite the wide range of applications for which PINNs have been used, and despite demonstrations that they can be more feasible in some contexts than classical numerical techniques like the Finite Element Method (FEM), advancements are still possible, most notably on theoretical issues that remain unresolved.
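To make the multi-task formulation above concrete, here is a minimal PyTorch sketch of a vanilla PINN: a small network is fit to observed data while the residual of a governing equation is penalised at collocation points. The 1D heat equation u_t = alpha * u_xx, the synthetic data and all hyperparameters are illustrative assumptions, not taken from any specific variant discussed in the review.

import math
import torch

alpha = 0.1
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(t, x):
    # Residual of u_t - alpha * u_xx at the collocation points (t, x).
    t.requires_grad_(True)
    x.requires_grad_(True)
    u = net(torch.cat([t, x], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t - alpha * u_xx

# Synthetic "observations" from a known analytic solution, plus collocation points.
t_obs, x_obs = torch.rand(64, 1), torch.rand(64, 1)
u_obs = torch.sin(math.pi * x_obs) * torch.exp(-alpha * math.pi**2 * t_obs)
t_col, x_col = torch.rand(256, 1), torch.rand(256, 1)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    data_loss = torch.mean((net(torch.cat([t_obs, x_obs], dim=1)) - u_obs) ** 2)
    pde_loss = torch.mean(pde_residual(t_col, x_col) ** 2)
    (data_loss + pde_loss).backward()   # multi-task loss: fit data + satisfy the PDE
    opt.step()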
Article
Full-text available
Demands for more accurate machine learning models have given rise to rethinking current modeling approaches that were deemed unsuitable, primarily due to their computational complexity and the lack of availability and accessibility of representative data. In Industry 4.0, rapid advancements in Digital Twin (DT) technologies and the pervasiveness of cost-effective sensor technologies have pushed the incorporation of artificial intelligence, particularly data-driven machine learning models, for use in smart manufacturing. However, the persistent issue with such models is their high sensitivity to the training data and the lack of interpretability in the outcomes, at times generating unrealistic results. The incorporation of knowledge into the machine learning pipeline has been earmarked as the most promising approach to address such issues. This paper aims to answer this call through a Knowledge-embedded Machine Learning (KML) framework for smart manufacturing, which embeds knowledge from experience and/or physics information into the machine learning pipeline, thus making the outcomes from these models more representative of real applications. The merits of KML were then presented through comparative studies showing its capability to outperform knowledge-based and data-driven models. This promising outcome led to the development of frameworks that can potentially incorporate KML for smart manufacturing applications such as Prognostics and Health Management (PHM) and DT, further supporting the usefulness of the proposed KML framework.
Article
Full-text available
This study uses a design science research methodology to develop and evaluate the Pi-Mind agent, an information technology artefact that acts as a responsible, resilient, ubiquitous cognitive clone – or a digital copy – and an autonomous representative of a human decision-maker. Pi-Mind agents can learn the decision-making capabilities of their “donors” in a specific training environment based on generative adversarial networks. A trained clone can be used by a decision-maker as an additional resource for one’s own cognitive enhancement, as an autonomous representative, or even as a replacement when appropriate. The assumption regarding this approach is as follows: when someone was forced to leave a critical process because of, for example, sickness, or wanted to take care of several simultaneously running processes, then they would be more confident knowing that their autonomous digital representatives were as capable and predictable as their exact personal “copy”. The Pi-Mind agent was evaluated in a Ukrainian higher education environment and a military logistics laboratory. In this paper, in addition to describing the artefact, its expected utility, and its design process within different contexts, we include the corresponding proof of concept, proof of value, and proof of use.
Article
Full-text available
Billions of IoT devices will be deployed in the near future, taking advantage of faster Internet speeds and the orders-of-magnitude increase in endpoints brought by 5G/6G. With the growth of IoT devices, vast quantities of data that may contain users' private information will be generated. The high communication and storage costs, combined with privacy concerns, will increasingly challenge the traditional ecosystem of centralized over-the-cloud learning and processing for IoT platforms. Federated learning (FL) has emerged as the most promising alternative approach to this problem. In FL, training data-driven machine learning models is an act of collaboration between multiple clients without requiring the data to be brought to a central point, hence alleviating communication and storage costs and providing a great degree of user-level privacy. However, several challenges remain in implementing real FL systems on IoT networks. In this article, we discuss the opportunities and challenges of FL in IoT platforms, as well as how it can enable diverse IoT applications. In particular, we identify and discuss seven critical challenges of FL in IoT platforms and highlight some recent promising approaches toward addressing them.
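A minimal sketch of the federated idea referred to above, in plain NumPy: each client trains a model on its private data and only model weights travel to the server, where they are averaged weighted by local dataset size (FedAvg-style). The linear model, the local update rule and all parameters are illustrative assumptions rather than any of the FL protocols surveyed in the article.

import numpy as np

def local_update(weights, client_data, lr=0.1, epochs=1):
    # Local training on private data (here: gradient steps on a least-squares
    # objective for a linear model, a stand-in for a device-side model).
    w = weights.copy()
    X, y = client_data
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_averaging(global_w, clients, rounds=50):
    # FedAvg: broadcast the global model, train locally, average the returned
    # weights (weighted by local dataset size). Raw data never leaves a client.
    for _ in range(rounds):
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        local_ws = [local_update(global_w, c) for c in clients]
        global_w = np.average(local_ws, axis=0, weights=sizes)
    return global_w

# Toy usage: three "devices" holding private samples of the same linear task.
rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0])
clients = []
for n in (20, 50, 30):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + 0.05 * rng.normal(size=n)))
print(federated_averaging(np.zeros(2), clients))   # approaches true_w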
Article
Full-text available
The digital twin for metal additive manufacturing is under development by unifying all simulation models integrated with sensory observations for bi-directional data flow powered by machine learning. This paper presents physics-based simulation modelling for predicting the thermal field solution and molten pool geometry in the laser powder bed fusion (L-PBF) process for Ti6Al4V, SS316L, and IN625 metal powders. Two-dimensional (2D) thermal fields on the powder bed surface along the laser path and the hatch direction are computed for a moving laser heat source using an implicit numerical formulation to understand the temperature rise during the L-PBF process. The temperature field within the powder is computed using the temperature solution obtained along the laser path. The results are compared against the literature to corroborate the computed molten pool geometry. This numerical solution approach is found to be highly practical for computing thermal fields and suitable for digital twin development in metal additive manufacturing. Reference to this paper should be made as follows: Yang, L. and Özel, T. (2021) 'Physics-based simulation models for digital twin development in laser powder bed fusion', Int.
Article
Full-text available
This paper introduces a dynamic knowledge-graph approach for digital twins and illustrates how this approach is by design naturally suited to realizing the vision of a Universal Digital Twin. The dynamic knowledge graph is implemented using technologies from the Semantic Web. It is composed of concepts and instances that are defined using ontologies, and of computational agents that operate on both the concepts and instances to update the dynamic knowledge graph. By construction, it is distributed, supports cross-domain interoperability, and ensures that data are connected, portable, discoverable, and queryable via a uniform interface. The knowledge graph includes the notions of a “base world” that describes the real world and that is maintained by agents that incorporate real-time data, and of “parallel worlds” that support the intelligent exploration of alternative designs without affecting the base world. Use cases are presented that demonstrate the ability of the dynamic knowledge graph to host geospatial and chemical data, control chemistry experiments, perform cross-domain simulations, and perform scenario analysis. The questions of how to make intelligent suggestions for alternative scenarios and how to ensure alignment between the scenarios considered by the knowledge graph and the goals of society are considered. Work to extend the dynamic knowledge graph to develop a digital twin of the UK to support the decarbonization of the energy system is discussed. Important directions for future research are highlighted.
Article
Full-text available
The concepts brought by Industry 4.0 have been explored and gradually applied. The cybersecurity impacts on the progress of Industry 4.0 implementations and their interactions with other technologies require constant surveillance, and it is important to forecast cybersecurity-related challenges and trends to prevent and mitigate these impacts. The contributions of this paper are as follows: (1) it presents the results of a systematic review of Industry 4.0 regarding attacks, vulnerabilities and defense strategies, (2) it details and classifies the attacks, vulnerabilities and defense mechanisms, and (3) it presents a discussion of recent challenges and trends regarding cybersecurity-related areas for Industry 4.0. From the systematic review, regarding the attacks, the results show that most attacks are carried out on the network layer, where DoS-related and man-in-the-middle (MitM) attacks are the most prevalent. Regarding vulnerabilities, security flaws in services and source code, and incorrect validations in authentication procedures, are highlighted; these are vulnerabilities that can be exploited by DoS attacks and buffer overflows in industrial devices and networks. Regarding defense strategies, Blockchain is presented as one of the most relevant technologies under study, thanks to its ability to be used in a variety of solutions, from Intrusion Detection Systems to the prevention of distributed DoS attacks. Most defense strategies are presented as after-attack solutions, in the sense that the defense mechanisms are only put in place, or even considered, after the harm has been done, rather than as mitigation strategies to prevent the cyberattack. Concerning challenges and trends, the review shows that digital sovereignty, cyber sovereignty, and data sovereignty are recent topics being explored by researchers within the Industry 4.0 scope, and GAIA-X and International Data Spaces are recent initiatives regarding data sovereignty. A discussion of trends is provided, and future challenges are pointed out.
Article
Full-text available
The Internet of Things (IoT) in industrial settings is now leading to the development of a new generation of systems designed to improve the operational efficiency of the new paradigm of smart manufacturing plants. The current article therefore introduces in detail the definitions, concepts, standards, and other important aspects related to smart manufacturing, cooperative robotics, and Machine Learning (ML) techniques. The paper highlights the opportunities presented by the new paradigm and the challenges faced in effectively implementing it in the industrial context. In particular, the focus is on the challenges associated with the architectures, communication technologies, and protocols that enable the integration and deployment of machine learning algorithms to improve the execution of tasks performed daily through the collaboration of human operators, machines, and robots. Finally, it also provides a systematic review of state-of-the-art research efforts in the aforementioned fields, and a platform for integrating collaborative robotics and machine learning, built on the six layers and four hierarchy levels of RAMI 4.0 (Reference Architectural Model Industry 4.0), is presented.
Article
Full-text available
This paper presents a summary of mechanisms for the evolution of artificial intelligence in 'internet of things' networks. Firstly, the paper investigates how the use of new technologies in industrial systems improves organisational resilience at both the technical and human levels. Secondly, the paper reports empirical results that correlate academic literature with Industry 4.0 interdependencies between edge components and both external and internal services and systems. The novelty of the paper is a new approach for creating a virtual representation operating as a real-time digital counterpart of a physical object or process (i.e., a digital twin), outlined in a conceptual diagram. The methodology applied in this paper resembled a grounded theory analysis of complex interconnected and coupled systems. By connecting the human–computer interactions in different information knowledge management systems, this paper presents a summary of mechanisms for the evolution of artificial intelligence in internet of things networks.
Article
Full-text available
Despite its great success, machine learning can have its limits when dealing with insufficient training data. A potential solution is the additional integration of prior knowledge into the training process which leads to the notion of informed machine learning. In this paper, we present a structured overview of various approaches in this field. We provide a definition and propose a concept for informed machine learning which illustrates its building blocks and distinguishes it from conventional machine learning. We introduce a taxonomy that serves as a classification framework for informed machine learning approaches. It considers the source of knowledge, its representation, and its integration into the machine learning pipeline. Based on this taxonomy, we survey related research and describe how different knowledge representations such as algebraic equations, logic rules, or simulation results can be used in learning systems. This evaluation of numerous papers on the basis of our taxonomy uncovers key methods in the field of informed machine learning.
Article
Full-text available
Industry standards pertaining to Human-Robot Collaboration (HRC) impose strict safety requirements to protect human operators from danger. When a robot is equipped with dangerous tools, moves at a high speed or carries heavy loads, the current safety legislation requires the continuous on-line monitoring of the robot's speed and a suitable separation distance from human workers. The present paper proposes to make a virtue out of necessity by extending the scope of on-line monitoring to predicting failures and safe stops. This has been done by implementing a platform, based on open access tools and technologies, to monitor the parameters of a robot during the execution of collaborative tasks. An automatic machine learning (ML) tool on the edge of the network can help to perform on-line predictions of possible outages of collaborative robots, especially as a consequence of human-robot interactions. By exploiting the on-line monitoring system, it is possible to increase the reliability of collaborative work by eliminating any unplanned downtimes during execution of the tasks, maximising trust in safe interactions and increasing the robot's lifetime. The proposed framework demonstrates a data management technique in industrial robots considered as cyber-physical systems. Using an assembly case study, the parameters of a robot have been collected and fed to an automatic ML model in order to identify the most significant reliability factors and to predict the necessity of safe stops of the robot. Moreover, the data acquired from the case study have been used to monitor the manipulator's joints, to predict cobot autonomy and to provide predictive maintenance notifications and alerts to the end-users and vendors.
Article
Full-text available
Ontologies have long been employed in the life sciences to formally represent and reason over domain knowledge and they are employed in almost every major biological database. Recently, ontologies are increasingly being used to provide background knowledge in similarity-based analysis and machine learning models. The methods employed to combine ontologies and machine learning are still novel and actively being developed. We provide an overview over the methods that use ontologies to compute similarity and incorporate them in machine learning methods; in particular, we outline how semantic similarity measures and ontology embeddings can exploit the background knowledge in ontologies and how ontologies can provide constraints that improve machine learning models. The methods and experiments we describe are available as a set of executable notebooks, and we also provide a set of slides and additional resources at https://github.com/bio-ontology-research-group/machine-learning-with-ontologies.
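As a toy illustration of how ontology structure can supply background knowledge to similarity-based analysis, the snippet below computes a Wu-Palmer-style similarity over a small, made-up class hierarchy; the hierarchy, the class names and the depth convention (root at depth 1) are assumptions for illustration, not taken from the publication or its notebooks.

# child -> parent; a deliberately tiny, hypothetical taxonomy of asset classes.
toy_taxonomy = {
    "asset": None,
    "pump": "asset", "motor": "asset",
    "centrifugal_pump": "pump", "gear_pump": "pump",
    "ac_motor": "motor",
}

def path_to_root(cls, taxonomy):
    path = [cls]
    while taxonomy[cls] is not None:
        cls = taxonomy[cls]
        path.append(cls)
    return path

def wu_palmer(a, b, taxonomy):
    # sim(a, b) = 2 * depth(LCS) / (depth(a) + depth(b)), depths counted from the root.
    pa, pb = path_to_root(a, taxonomy), path_to_root(b, taxonomy)
    lcs = next(c for c in pa if c in pb)              # lowest common subsumer
    depth = lambda c: len(path_to_root(c, taxonomy))  # root has depth 1
    return 2 * depth(lcs) / (depth(a) + depth(b))

print(wu_palmer("centrifugal_pump", "gear_pump", toy_taxonomy))  # ~0.67 (sibling classes)
print(wu_palmer("centrifugal_pump", "ac_motor", toy_taxonomy))   # ~0.33 (distant classes)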
Article
Full-text available
There is no doubt that artificial and human intelligence enhance and complement each other. They are stronger together, as a team of Collective (Collaborative) Intelligence. Both require training for personal development and high performance. However, the approaches to training (human vs. machine learning) are traditionally very different. If one needs an efficient hybrid collective intelligence team, e.g. for managing processes within Industry 4.0, then all the team members have to learn together. In this paper we point out the need for bridging the gap between human and machine learning, so that some approaches used in machine learning will be useful for humans and, vice versa, some knowledge from human pedagogy can be useful for training artificial intelligence. When this happens, we will all come closer to the ultimate goal of creating a University for Everything capable of educating human and digital "workers" for Industry 4.0. The paper also considers several thoughts on training digital assistants of humans together in a team.
Article
Full-text available
The choice of a distance metric is key to success in many machine learning and data processing tasks. The distance between two data samples traditionally depends on the values of their attributes (coordinates) in a data space. Some metrics also take into account the distribution of samples within the space (e.g. local densities), aiming to improve potential classification or clustering performance. In this paper, we suggest the Social Distance metric that can be used on top of any traditional metric. For a pair of samples x and y, it averages two numbers: the place (rank) which sample y holds in the list of ordered nearest neighbors of x and, vice versa, the rank of x in the list of the nearest neighbors of y. The average is the contraharmonic Lehmer mean, which penalizes the difference between the numbers by giving values greater than the arithmetic mean for unequal arguments. We consider the normalized average as a distance function and we prove it to be a metric. We present several modifications of such a metric and show that their properties are useful for a variety of classification and clustering tasks in data spaces or graphs, in a Geographic Information Systems context and beyond.
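A minimal sketch of the Social Distance idea as described above: for a pair of samples, take the rank each holds in the other's ordered nearest-neighbour list under a base metric, combine the two ranks with the contraharmonic (p = 2) Lehmer mean, and normalise. The normalisation by n - 1 is an assumption for this sketch; the paper discusses several variants.

import numpy as np

def social_distance(i, j, D, p=2.0):
    # D is a precomputed base distance matrix. rank_ij is the position of j
    # (1 = nearest) in the ordered neighbour list of i, excluding i itself.
    n = D.shape[0]
    order_i = np.argsort(D[i])
    order_j = np.argsort(D[j])
    rank_ij = int(np.where(order_i[order_i != i] == j)[0][0]) + 1
    rank_ji = int(np.where(order_j[order_j != j] == i)[0][0]) + 1
    # Contraharmonic Lehmer mean of the two ranks, penalising unequal ranks.
    lehmer = (rank_ij**p + rank_ji**p) / (rank_ij**(p - 1) + rank_ji**(p - 1))
    return lehmer / (n - 1)   # normalised to roughly [0, 1]

# Toy usage on Euclidean base distances between four points.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
print(social_distance(0, 3, D))   # the outlier is "socially" far from point 0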
Chapter
Artificial intelligence is an unavoidable asset of Industry 4.0. Artificial actors participate in real-time decision-making and problem solving in various industrial processes, including planning, production, and management. Their efficiency, as well as their intelligent and autonomous behaviour, is highly dependent on the ability to learn from examples, which creates new vulnerabilities exploited by security threats. Today's disruptive attacks go beyond system infrastructure, targeting not only hard-coded software or hardware but, above all, data and trained decision models, in order to reach the system's intelligence and compromise its work. This paper intends to reveal security threats which are new in the industrial context by observing the latest discoveries in the AI domain. Our focus is data poisoning attacks caused by adversarial training samples and the subsequent corruption of the machine learning process.
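As a toy illustration of the data-poisoning threat described above, the snippet below (scikit-learn, synthetic data) flips a fraction of the training labels and compares the resulting classifier with one trained on clean data; the dataset, the model and the 30% poisoning rate are arbitrary choices for demonstration, not taken from the chapter.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# The "attacker" silently flips 30% of the training labels.
rng = np.random.default_rng(0)
poisoned = y_tr.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

clean_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
poisoned_acc = LogisticRegression(max_iter=1000).fit(X_tr, poisoned).score(X_te, y_te)
print(f"clean: {clean_acc:.2f}  poisoned: {poisoned_acc:.2f}")   # the poisoned model typically scores lower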
Article
More and more, we may expect to see robots working side by side with humans as technology advances. Collaborative robotics (cobotics) investigates the cognitive and physical interaction between humans and robots as they work together to achieve a common goal. In the cobot literature, one typically constructs a cognitive model that, among other things, takes in data about the users and the world and transforms them into data that the robot can use. Only relatively recently has machine learning begun to be used to provide the cognitive framework and behavioural building blocks for high-quality human-robot collaboration. Accordingly, this study presents a comprehensive literature analysis of machine learning for human-robot collaboration. A large number of publications were chosen for this analysis and grouped by type. The study proposes a paradigm of cognitive characteristics, evaluation measures, and collaborative tasks. A comprehensive evaluation then examines the features of several classes of machine learning and deep learning algorithms and the sensing techniques employed. In addition, the significance of machine learning techniques that account for temporal factors is emphasised. When the key characteristics of these works are compared, patterns emerge around the cobot that provide directions for future efforts, contrasting them with aspects of the cobot that have not yet been reviewed.
Article
Federated learning is one of the emerging areas of research in computer science. It has shown great potential in some application areas, and we are witnessing evidence of new approaches where millions or even billions of IoT devices can contribute collectively to achieve a common goal of machine learning through federation. However, existing approaches are primarily suitable for single-task learning with a single objective and a single task owner, where it is assumed that the majority of devices contributing to federated learning have a similar design or device type and restrictions. We argue that the true potential of federated learning can only be realised if we have a dynamic and open ecosystem where devices, industrial units, machine manufacturers, non-governmental agencies, and governmental entities can contribute toward learning for multiple tasks and objectives in a crowdsourced manner. In this article, we propose a multi-level framework that shows how federated learning, IoT, and crowdsourcing can go hand in hand with each other to make a robust ecosystem of multi-level federated learning for Industry 4.0. This helps build future intelligent applications for Industry 4.0 such as predictive maintenance and fault detection for systems in smart manufacturing units. In addition, we also highlight several use-cases of multi-level federated learning where this approach can be implemented in Industry 4.0. Moreover, if the approach is implemented successfully, besides enhancements in performance it will also help towards a greater common goal, e.g. UN Sustainable Development Goal 13, i.e. a reduction in carbon footprint.
Chapter
Gears are among the most critical and commonly used machine elements. They are used for a variety of applications throughout various industries like energy, aerospace, transportation, mining, agriculture, manufacturing, etc. The failure of a gear may result in cataclysmic shutdowns causing significant production and economic losses and even human casualties. Fault diagnosis is an important component of condition-based maintenance and it has gained much attention for the safe operation of gearboxes. But in many applications, fault diagnosis is not accurate. Sensor-based condition monitoring systems help in enhancing the quality of measurement data. A lot of research has been done in developing machine learning algorithms and models to extract information about fault status from the sensor data. Acquisition of meaningful sensor data from the gearbox is an open challenge. This makes the estimation of the remaining useful life of the gearbox more challenging. This paper presents a literature review of the different IoT-based techniques used to acquire health status data from the gearbox. Keywords: Condition monitoring; Gearbox.
Article
The Convolutional Neural Network is one of the best-known members of the deep learning family of neural network architectures, used for many purposes including image classification. In spite of their wide adoption, such networks are known to be highly tuned to the training data (samples representing a particular problem), and they are poorly reusable for addressing new problems. One way to change this would be, in addition to trainable weights, to apply trainable parameters to the mathematical functions which simulate various neural computations within such networks. In this way, we may distinguish between the narrowly focused task-specific parameters (weights) and more generic capability-specific parameters. In this paper, we suggest a couple of flexible mathematical functions (the Generalized Lehmer Mean and the Generalized Power Mean) with trainable parameters to replace some fixed operations (such as the ordinary arithmetic mean or simple weighted aggregation) which are traditionally used within various components of a convolutional neural network architecture. We name the overall architecture with such an update a hyper-flexible convolutional neural network. We provide mathematical justification of various components of such an architecture and experimentally show that it performs better than the traditional one, including better robustness to adversarial perturbations of the testing data.
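A minimal sketch of the idea of trainable aggregation parameters: a PyTorch pooling layer whose Lehmer-mean exponent p is learned along with the network weights (p = 1 recovers average pooling, large p approaches max pooling). This is a simplified stand-in, assuming non-negative activations; the paper's Generalized Lehmer Mean and Generalized Power Mean formulations are not reproduced here.

import torch
import torch.nn as nn

class LehmerPool2d(nn.Module):
    # Spatial pooling with a Lehmer mean whose exponent p is a trainable parameter.
    def __init__(self, kernel_size, p_init=2.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(float(p_init)))
        self.kernel_size = kernel_size
        self.eps = eps

    def forward(self, x):
        x = x.clamp(min=0) + self.eps   # keep the mean (and its gradient) well-defined
        num = nn.functional.avg_pool2d(x.pow(self.p), self.kernel_size)
        den = nn.functional.avg_pool2d(x.pow(self.p - 1), self.kernel_size)
        return num / den

# Toy usage inside a small convolutional block.
block = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), LehmerPool2d(2))
print(block(torch.rand(1, 3, 32, 32)).shape)   # torch.Size([1, 8, 16, 16])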
Article
Unexpected downtime remains a costly, preventable burden in manufacturing. To mitigate and eliminate unexpected downtime, manufacturers have incorporated machine learning to diagnose equipment faults and determine the equipment's remaining useful life. However, such models suffer from a lack of failure data and of contextual knowledge surrounding data gathered from production (i.e. rich labeled sets). The purpose of this paper is to conduct an algorithm comparison study on a previously collected contrived anomaly data set from an industrial robot. The goal of the study is to measure the effectiveness of algorithms in eliminating bias and variance from the classification results. The tested system is a 6-DOF collaborative robot from Universal Robots, on which an anomalous condition is artificially induced to simulate robot overload. The different algorithms are assessed based on their accuracy in determining the overloading case on the robot. From the analysis of data-driven machine learning and deep learning models, a deep learning regression was determined to be the best model from the assessment of the data both qualitatively (low overfitting and bias) and quantitatively (98% overall accuracy). As part of the future research direction, different failure cases should be created based on wear applied to specific robot joint components, and an adaptability assessment of the model to other robot paths should be carried out.
Article
Don’t just predict problems – prescribe a solution: that’s the premise behind prescriptive maintenance (PsM). Better and more data, coupled with Artificial Intelligence, Simulation, and Optimization, are the keys to unlocking the benefits of PsM. However, PsM and Production Planning and Control (PPC) functions in a Cyber-Physical Production Environment still need to be integrated from an application-oriented perspective. The paper proposes a framework that establishes the missing link between PsM and PPC. A novel decision support prototype system for smart planning, leveraging the shop-floor digital twin, has been developed to explore the feasibility, challenges, and benefits of an integrated determination of production schedules and maintenance plans in a real Make-to-Order/Engineer-to-Order manufacturing environment.
Article
Nowadays, the industrial environment is characterised by growing competitiveness, short response times, cost reduction and the reliability of production needed to meet customer needs. Thus, the new industrial paradigm of Industry 4.0 has gained interest worldwide, leading many manufacturers to a significant digital transformation. Digital technologies have enabled a novel approach to decision-making processes based on data-driven strategies, where knowledge extraction relies on the analysis of large amounts of data from sensor-equipped factories. In this context, Predictive Maintenance (PdM) based on Machine Learning (ML) is one of the most prominent data-driven analytical approaches for monitoring industrial systems, aiming to maximise reliability and efficiency. In fact, PdM aims not only to reduce equipment failure rates but also to minimise operating costs by maximising equipment life. When considering industrial applications, industries deal with different issues and constraints relating to process digitalisation. The main purpose of this study is to develop a new decision support system based on decision trees (DTs) that guides the decision-making process of PdM implementation, considering context-aware information, the quality and maturity of the collected data, the severity, occurrence and detectability of potential failures (identified through FMECA analysis), and direct and indirect maintenance costs. The decision trees allow the study of different scenarios to identify the conditions under which a PdM policy, based on an ML algorithm, is economically profitable compared to corrective maintenance, considered to be the current scenario. The results show that the proposed methodology is a simple and easy-to-implement tool to support the decision process by assessing the different levels of occurrence and severity of failures. For each level, savings and potential costs have been evaluated at the leaf nodes of the trees, with the aim of defining the most suitable maintenance strategy implementation. Finally, the proposed DTs are applied to a real industrial case to illustrate their applicability and robustness.
Article
Industry 4.0 is revolutionizing manufacturing, increasing flexibility, mass customization, quality and productivity. In today's competitive manufacturing scenario, maintenance is one of the most critical issues, and companies are approaching its digital transformation from technological and management perspectives. This article carries out a systematic literature review aimed at investigating how maintenance tasks and maintenance management strategies are changing in the Industry 4.0 context, analyzing the state of the art of Industry 4.0 technologies currently employed in maintenance and the resulting potential innovations in maintenance policies and manufacturing management. In addition, the most relevant trends in current maintenance policies have been investigated, such as "remote maintenance" and the attractive possibility of "self-maintenance". The importance of the human factor has also been considered. The results are summarized in a comprehensive database to provide, through concepts and empirical evidence present in the literature, examples and strategies for the implementation of maintenance in Industry 4.0.
Article
Purpose: In recent years, Industry 4.0 has received immense attention from the academic community, practitioners and governments across nations, resulting in explosive growth in the publication of articles, thereby making it imperative to reveal and discern the core research areas and research themes of the extant Industry 4.0 literature. The purpose of this paper is to discuss research dynamics and to propose a taxonomy of the Industry 4.0 research landscape along with future research directions. Design/methodology/approach: A data-driven text mining approach, Latent Semantic Analysis (LSA), is used to review and extract knowledge from a large corpus of 503 abstracts of academic papers published in various journals and conference proceedings. The adopted technique extracts several latent factors that characterise the emerging pattern of research. A cross-loading analysis of high-loaded papers is performed to identify the semantic links between research areas and themes. Findings: The LSA results uncover 13 principal research areas and 100 research themes. The study discovers "smart factory" and "new business model" as dominant research areas. A taxonomy is developed which contains five topical areas of the Industry 4.0 field. Research limitations/implications: The data set is based on a systematic article refining process, which includes keyword searches in selected electronic databases and articles limited to the English language only. So, there is a possibility that related work published in other databases or in non-English languages may not be captured in the data set. Originality/value: To the best of the authors' knowledge, this study is the first of its kind to use the LSA technique to reveal research trends in the Industry 4.0 domain. This review will be beneficial to scholars and practitioners in understanding the diversity of, and drawing a roadmap for, Industry 4.0 research. The taxonomy and outlined future research agenda could help practitioners and academicians position their research work.
Baratta, A., Cimino, A., Gnoni, M. G., and Longo, F. (2023) 'Human robot collaboration in Industry 4.0: A literature review', Procedia Computer Science, Vol. 217, pp. 1887-1895.
Kaikova, O., Terziyan, V., Tiihonen, T., Golovianko, M., Gryshko, S., and Titova, L. (2022) 'Hybrid threats against Industry 4.0: Adversarial training of resilience', E3S Web of Conferences, Vol. 353, 03004.