Conference Paper

Data-Driven Reliability Modeling of Smart Manufacturing Systems Using Process Mining


Abstract

Accurate reliability modeling and assessment of manufacturing systems leads to lower maintenance costs and higher profits. However, the complexity of modern Smart Manufacturing Systems poses a challenge to traditional expert-driven reliability modeling techniques. The growing research field of data-driven reliability modeling seeks to harness the abundance of data from such systems to improve and automate the reliability modeling process. In this paper, we propose the use of Process Mining techniques to support the extraction of reliability models from event data generated in Smart Manufacturing Systems. More specifically, we extract a stochastic Petri net which can be used to analyze the overall system reliability as well as to test new system configurations. We demonstrate our approach with an illustrative case study of a flow shop manufacturing system with parallel operations. The results indicate that using Process Mining techniques to extract accurate reliability models is feasible.
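As a rough sketch of the kind of extraction pipeline described, not the authors' implementation (which is linked in the references), a control-flow Petri net can be discovered from an event log with PM4Py and the Inductive Miner, both cited below. The file name and column names are assumptions, and the stochastic timing and failure behavior discussed in the abstract would still have to be annotated onto the discovered net:

# Illustrative sketch only: discover a control-flow Petri net from an event log
# with PM4Py; the log schema below is an assumption, not the authors' data model.
import pandas as pd
import pm4py

log_df = pd.read_csv("event_log.csv")  # assumed columns: case_id, activity, timestamp
log_df["timestamp"] = pd.to_datetime(log_df["timestamp"])
log_df = pm4py.format_dataframe(
    log_df, case_id="case_id", activity_key="activity", timestamp_key="timestamp"
)

# The Inductive Miner (Leemans et al., cited below) yields a sound, block-structured net
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log_df)
pm4py.view_petri_net(net, initial_marking, final_marking)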


Preprint
Full-text available
In recent years, there has been a significant increase in the deployment of Cyber-physical Production Systems (CPPS) across various industries. CPPS consist of interconnected devices and systems that combine physical and digital elements to enhance the efficiency, productivity, and reliability of manufacturing processes. Due to the continuous and fast-paced evolution of the behavior of CPPS, there is an increasing interest in generating data-driven Discrete-event Simulation (DES) models of such systems. The validation of these models, however, remains a challenge, and traditional approaches may be insufficient to ensure their accuracy. To address this challenge, we propose a framework for validating data-driven DES models of CPPS. We emphasize the importance of continuously monitoring the validity of data-driven DES models and updating them when necessary to ensure their accuracy over time. We, furthermore, demonstrate our proposed approach through a case study in reliability assessment and discuss challenges and limitations of our framework.
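One simple way to operationalize the continuous validity monitoring mentioned above, sketched under the assumption that the same KPI (e.g., cycle time) can be sampled from both the running system and the DES model; this is an illustration, not the framework proposed in the preprint:

# Sketch: flag a data-driven DES model as potentially invalid when its simulated
# KPI distribution drifts away from freshly observed system data.
from scipy.stats import ks_2samp

def model_still_valid(observed_kpis, simulated_kpis, alpha=0.05):
    # Two-sample Kolmogorov-Smirnov test on a KPI such as cycle times.
    statistic, p_value = ks_2samp(observed_kpis, simulated_kpis)
    return p_value >= alpha  # False -> trigger a model update / re-extraction

# Hypothetical usage with cycle times in minutes
observed = [12.1, 11.8, 13.0, 12.4, 15.2, 12.9]
simulated = [12.0, 12.2, 12.7, 12.5, 12.9, 13.1]
print(model_still_valid(observed, simulated))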
Conference Paper
Complex manufacturing systems produce highly engineered products with long product cycle times and are characterized by complex production process behaviors. Ensuring the reliability of these systems is critical to meet customer demands, improve product quality and minimize production losses. The collection and storage of data by sensors and information systems respectively enable the automatic generation and analysis of reliability models of complex manufacturing systems, reducing the need for expert knowledge of the processes. In this article, we propose a novel approach to generate data-driven reliability models of complex manufacturing systems. Our method extracts the models from event logs that capture relevant events related to material flow in a system, and state logs, which capture operational state changes in a system's production resources. We, furthermore, simulate the derived data-driven reliability models using discrete-event simulation and validate the models to ensure their robustness. We demonstrate the successful application of our method using a case study from the wafer fabrication domain. The results of our case study indicate that data-driven reliability assessment of complex manufacturing systems is feasible and can provide rapid insights into such systems. In addition, the extracted models can be used to support decisions related to maintenance planning, parts procurement and system configuration.
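A minimal sketch of how such an extracted resource-state model might be exercised in discrete-event simulation, assuming SimPy and exponential failure/repair times whose parameters would in practice be fitted from the state logs (the values below are invented):

# Sketch: a single production resource alternating between up and down states,
# with exponential time-to-failure and time-to-repair fitted from state logs.
import random
import simpy

MTTF_H, MTTR_H = 120.0, 4.0  # assumed fitted values, in hours

def resource_lifecycle(env, stats):
    while True:
        uptime = random.expovariate(1.0 / MTTF_H)
        yield env.timeout(uptime)      # resource operating
        stats["uptime"] += uptime
        downtime = random.expovariate(1.0 / MTTR_H)
        yield env.timeout(downtime)    # resource under repair
        stats["downtime"] += downtime

stats = {"uptime": 0.0, "downtime": 0.0}
env = simpy.Environment()
env.process(resource_lifecycle(env, stats))
env.run(until=10_000)  # simulate roughly 10,000 hours
print("availability ~", stats["uptime"] / (stats["uptime"] + stats["downtime"]))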
Article
Full-text available
Modern manufacturing systems can benefit from the use of digital tools to support both short- and long-term decisions. Meanwhile, such systems reached a high level of complexity and are frequently subject to modifications that can quickly make the digital tools obsolete. In this context, the ability to dynamically generate models of production systems is essential to guarantee their exploitation on the shop-floors as decision-support systems. The literature offers approaches for generating digital models based on real-time data streams. These models can represent a system more precisely at any point in time, as they are continuously updated based on the data. However, most approaches consider only isolated aspects of systems (e.g., reliability models) and focus on a specific modeling purpose (e.g., material flow identification). The research challenge is therefore to develop a novel framework that systematically enables the combination of models extracted through different process mining algorithms. To tackle this challenge, it is critical to define the requirements that enable the emergence of automated modeling and simulation tasks. In this paper, we therefore derive and define data requirements for the models that need to be extracted. We include aspects such as the structure of the manufacturing system and the behavior of its machines. The paper aims at guiding practitioners in designing coherent data structures to enable the coupling of model generation techniques within the digital support system of manufacturing companies.
Article
Full-text available
The adoption of digital twins in smart factories, which model the real status of manufacturing systems through simulation with real-time actualization, is manifested in increased productivity as well as reduced costs and energy consumption. The sharp increase in changing customer demands has resulted in factories transitioning rapidly and yielding shorter product life cycles. Traditional modeling and simulation approaches are not suited to handle such scenarios. As a possible solution, we propose a generic data-driven framework for the automated generation of simulation models as a basis for digital twins for smart factories. The novelty of our proposed framework lies in the data-driven approach that exploits advancements in machine learning and process mining techniques, as well as continuous model improvement and validation. The goal of the framework is to minimize and fully define, or even eliminate, the need for expert knowledge in the extraction of the corresponding simulation models. We illustrate our framework through a case study.
Conference Paper
Full-text available
Reliability is one of the most important performance indicators in contemporary production facilities. Increasing the reliability of manufacturing systems results in prolonged lifetimes and reduced maintenance and repair costs. Reliability modeling is a common technique for deriving reliability measurements and illustrating relevant fault dependencies. There is a significant body of research focusing on hardware and software reliability models, such as Fault Trees, Petri Nets and Markov Chains. Up until now, the development of reliability models has been a labor-intensive and expert-knowledge-driven process. To remedy that, and given the prevalence of data stemming from new and technologically advanced manufacturing systems, we propose that data generated in modern manufacturing lines could be used to automate, or at least support, the development of reliability models. In this paper, we elaborate on the details of our proposed framework for data-driven reliability assessment of cyber-physical production systems. We, furthermore, introduce a case study that will aid the development and testing of the proposed novel data-driven approach.
Article
Full-text available
The latest developments in industry have involved the deployment of digital twins for both long- and short-term decision making, such as supply chain management, production planning and control. Modern production environments are frequently subject to disruptions and consequent modifications. As a result, the development of digital twins of manufacturing systems cannot rely solely on manual operations. Recent contributions have proposed approaches to exploit data for the automated generation of the models. However, the resulting representations can be excessively accurate and may also describe activities that are not significant for estimating the system performance. Generating models with an appropriate level of detail can avoid useless effort and long computation times, while allowing for easier understanding and re-usability. This paper proposes a method to automatically discover manufacturing systems and generate adequate digital twins. The relevant characteristics of a production system are automatically retrieved from data logs. The proposed method has been applied to two test cases and a real manufacturing line. The experimental results prove its effectiveness in generating digital models that can correctly estimate the system performance.
Article
Full-text available
Reliability is the measure of the likelihood that a product, system or service will perform its intended function adequately for a specified period of time. Low reliability of a manufacturing system, besides the costly repairs and replacements, also implies reduced production and, consequently, significantly reduced profits. Therefore, it is very important to have a way to assess reliability as a key performance metric for manufacturing systems, and cyber-physical systems in general. The newly developed information and communication technologies that are increasingly becoming part of current and future manufacturing systems both allow and invite more sophisticated approaches to assessing the reliability of manufacturing systems, as opposed to the traditional expert-knowledge-based approaches. In this paper, we describe the significance of evaluating reliability for the progress and acceptance of Industry 4.0 technologies, as well as the new directions and possibilities for enhanced reliability analysis that these new technologies can provide. Finally, we provide an overview of the implications of these novel ways of analyzing reliability in the context of Industry 4.0.
Article
Full-text available
With the development and application of advanced technologies such as Cyber-Physical Systems, the Internet of Things, the Industrial Internet of Things, Artificial Intelligence, Big Data, Cloud Computing and Blockchain, more manufacturing enterprises are transforming into intelligent enterprises. Smart manufacturing systems (SMSs) have become a focus of attention for several countries and manufacturing enterprises. At present, there are applications of SMSs in different industrial fields. However, there is still no unified definition of SMSs and no unified analysis of their requirements. In order to provide a comprehensive understanding of SMSs, this paper summarizes the evolution, definition, objectives, functional requirements, business requirements, technical requirements, and components of SMSs, and points out the current development status and level. Based on the above, an autonomous SMS model driven by dynamic demand and key performance indicators is proposed. This review can serve as a reference for more manufacturing enterprises transforming from traditional to intelligent operations.
Article
Full-text available
Information and communication technology is undergoing rapid development, and many disruptive technologies, such as cloud computing, Internet of Things, big data, and artificial intelligence, have emerged. These technologies are permeating the manufacturing industry and enable the fusion of physical and virtual worlds through cyber-physical systems (CPS), which mark the advent of the fourth stage of industrial production (i.e., Industry 4.0). The widespread application of CPS in manufacturing environments renders manufacturing systems increasingly smart. To advance research on the implementation of Industry 4.0, this study examines smart manufacturing systems for Industry 4.0. First, a conceptual framework of smart manufacturing systems for Industry 4.0 is presented. Second, demonstrative scenarios that pertain to smart design, smart machining, smart control, smart monitoring, and smart scheduling, are presented. Key technologies and their possible applications to Industry 4.0 smart manufacturing systems are reviewed based on these demonstrative scenarios. Finally, challenges and future perspectives are identified and discussed.
Article
Full-text available
In this paper, the minimal paths-and-cuts technique is developed to handle fault tree analysis (FTA) of the critical components of industrial robots. This analysis is integrated with the reliability block diagram (RBD) approach in order to investigate the robot system reliability. The model is implemented in a complex advanced manufacturing system with autonomous guided vehicles (AGVs) as material handling devices. FTA contributes cause-and-effect and hierarchical properties to the model, while RBD simplifies the complex system of AGVs for reliability evaluation. The results show that, by filtering the paths in a manufacturing system for AGVs, the reliability depends strongly on the paths most occupied by AGVs. The failure probability of the AGVs is assumed to follow an exponential distribution, and the overall system reliability obtained using the minimal paths-and-cuts method is 0.8741.
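For context, a generic sketch of the reliability-block-diagram arithmetic used in such studies, assuming exponentially distributed component failures; the structure and rates below are illustrative, not the paper's AGV model:

# Sketch: series/parallel reliability block diagram arithmetic with
# exponential components, R_i(t) = exp(-lambda_i * t).
import math

def r_exp(lam, t):
    return math.exp(-lam * t)

def r_series(rs):
    out = 1.0
    for r in rs:
        out *= r
    return out

def r_parallel(rs):
    out = 1.0
    for r in rs:
        out *= (1.0 - r)
    return 1.0 - out

t = 100.0  # operating hours (assumed)
path_a = r_series([r_exp(1e-4, t), r_exp(2e-4, t)])  # two components in series
path_b = r_exp(3e-4, t)
print("system reliability:", r_parallel([path_a, path_b]))  # two redundant paths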
Article
Full-text available
Automated guided vehicles (AGVs) are being extensively used for intelligent transportation and distribution of materials in warehouses and automated production lines due to their high efficiency and low costs. Such vehicles travel along a predefined route to deliver desired tasks without the supervision of an operator. Much effort in this area has focused primarily on route optimisation and traffic management of these AGVs. However, the health management of these vehicles and their optimal mission configuration have received little attention. To assure their added value, taking a typical AGV transport system as an example, the capability to evaluate reliability issues in AGVs is investigated in this paper. Following a failure modes, effects and criticality analysis (FMECA), the reliability of the AGV system is analysed via fault tree analysis (FTA) and the vehicles' mission reliability is evaluated using the Petri net (PN) method. By performing the analysis, the acceptability of mission failure can be assessed; hence the service capability and potential profit of the AGV system can be reviewed and the mission altered where performance is unacceptable. The PN method could easily be extended to deal with fleet-level AGV mission reliability assessment.
Article
Full-text available
The paper discusses the problem of reliable performance of production processes. Reliability analyses of production systems require the consideration of many different factors and requirements. Accordingly, basic definitions connected with production systems and processes are first established. A literature review in reliability engineering then makes it possible to present a comparative analysis of known reliability models for production system performance. Based on this literature summary, a multidimensional definition of production process reliability is developed, together with a short example of its possible implementation. The work ends with a summary and directions for further research.
Article
Full-text available
The degree to which a system is operational over a given time horizon is the key indicator of the service quality perceived by business users. The availability of critical systems is a function of system reliability, maintainability and the accessibility of support resources. Based on a comparative analysis, selecting an adequate method of availability assessment requires identifying typical structures, characterizing reliability indices and evaluating the impact of component unreliability on system availability. The frequently used binomial method leads to optimistic results due to the bivalent states of components (operating/fault). For serial or parallel reliability structures, a realistic evaluation of system availability is obtained by applying polynomial or direct analysis based on the reliability block diagram. A complex structure requires a successive assessment of structural components, which Monte-Carlo simulation allows, providing accurate index values. The paper presents a method of availability evaluation using Monte-Carlo simulation techniques which allows a comparative analysis of the impact of components on system availability.
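A minimal sketch of the Monte-Carlo idea for a structure that is neither purely serial nor purely parallel, assuming known component availabilities and an illustrative structure function (not the paper's case study):

# Sketch: Monte-Carlo estimate of system availability by sampling component
# states and evaluating a structure function; all values are assumptions.
import random

COMPONENT_AVAILABILITY = {"c1": 0.99, "c2": 0.97, "c3": 0.95, "c4": 0.98, "c5": 0.96}

def system_up(state):
    # Example bridge-like structure defined through its path sets
    return ((state["c1"] and state["c2"]) or (state["c4"] and state["c5"])
            or (state["c1"] and state["c3"] and state["c5"])
            or (state["c4"] and state["c3"] and state["c2"]))

def mc_system_availability(samples=100_000):
    up = 0
    for _ in range(samples):
        state = {c: random.random() < a for c, a in COMPONENT_AVAILABILITY.items()}
        up += system_up(state)
    return up / samples

print(mc_system_availability())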
Conference Paper
Full-text available
Process discovery is the problem of, given a log of observed behaviour, finding a process model that ‘best’ describes this behaviour. A large variety of process discovery algorithms has been proposed. However, no existing algorithm guarantees to return a fitting model (i.e., able to reproduce all observed behaviour) that is sound (free of deadlocks and other anomalies) in finite time. We present an extensible framework to discover from any given log a set of block-structured process models that are sound and fit the observed behaviour. In addition we characterise the minimal information required in the log to rediscover a particular process model. We then provide a polynomial-time algorithm for discovering a sound, fitting, block-structured model from any given log; we give sufficient conditions on the log for which our algorithm returns a model that is language-equivalent to the process model underlying the log, including unseen behaviour. The technique is implemented in a prototypical tool.
Article
Full-text available
Contemporary workflow management systems are driven by explicit process models, i.e., a completely specified workflow design is required in order to enact a given workflow process. Creating a workflow design is a complicated time-consuming process and, typically, there are discrepancies between the actual workflow processes and the processes as perceived by the management. Therefore, we have developed techniques for discovering workflow models. The starting point for such techniques is a so-called "workflow log" containing information about the workflow process as it is actually being executed. We present a new algorithm to extract a process model from such a log and represent it in terms of a Petri net. However, we also demonstrate that it is not possible to discover arbitrary workflow processes. We explore a class of workflow processes that can be discovered. We show that the α-algorithm can successfully mine any workflow represented by a so-called SWF-net.
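For illustration, the α-algorithm is available off the shelf, e.g. in the PM4Py library cited below; the log file name is a placeholder:

# Sketch: mine a Petri net from a workflow log with the alpha algorithm in PM4Py.
import pm4py

log = pm4py.read_xes("workflow_log.xes")  # hypothetical XES export of the workflow log
net, initial_marking, final_marking = pm4py.discover_petri_net_alpha(log)
pm4py.view_petri_net(net, initial_marking, final_marking)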
Chapter
In recent years, data science emerged as a new and important discipline. It can be viewed as an amalgamation of classical disciplines like statistics, data mining, databases, and distributed systems. Existing approaches need to be combined to turn abundantly available data into value for individuals, organizations, and society. Moreover, new challenges have emerged, not just in terms of size (“Big Data”) but also in terms of the questions to be answered. This book focuses on the analysis of behavior based on event data. Process mining techniques use event data to discover processes, check compliance, analyze bottlenecks, compare process variants, and suggest improvements. In later chapters, we will show that process mining provides powerful tools for today’s data scientist. However, before introducing the main topic of the book, we provide an overview of the data science discipline.
Book
CONTEXT OF RELIABILITY ANALYSIS. An Overview. Illustrative Cases and Data Sets. BASIC RELIABILITY METHODOLOGY. Collection and Preliminary Analysis of Failure Data. Probability Distributions for Modeling Time to Failure. Basic Statistical Methods for Data Analysis. RELIABILITY MODELING, ESTIMATION, AND PREDICTION. Modeling Failures at the Component Level. Modeling and Analysis of Multicomponent Systems. Advanced Statistical Methods for Data Analysis. Software Reliability. Design of Experiments and Analysis of Variance. Model Selection and Validation. RELIABILITY MANAGEMENT, IMPROVEMENT, AND OPTIMIZATION. Reliability Management. Reliability Engineering. Reliability Prediction and Assessment. Reliability Improvement. Maintenance of Unreliable Systems. Warranties and Service Contracts. Reliability Optimization. EPILOGUE. Case Studies. Resource Materials. Appendices. References. Indexes.
Article
A mission reliability model is important for evaluating the production capability of a manufacturing system. The mission reliability of a manufacturing system refers to its ability to complete the production mission under specified conditions and within a specified time. The production mission includes two factors: product quality and productivity. The calculation of traditional evaluation parameters like Cpk and Ppk excludes abnormal interruptions of production and inspection errors. For some manufacturing systems with a low degree of automation in the domestic weapons field, both production disruptions and misjudgments in inspection processes have an impact on the mission reliability of the manufacturing system. A method for mission reliability modeling of discrete manufacturing systems is proposed in this paper. The manufacturing system is composed of several processes. Abnormal interruptions of production, inspection errors and substandard quality parameters of the products are involved in the modeling of the process mission reliability. The sequential and concurrent relationships between the processes are also taken into account in the modeling process.
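As a generic illustration (one simplified textbook-style decomposition, not the model proposed in the article; all numbers are invented), per-process mission reliabilities can be combined along a sequence of processes as follows:

# Sketch: combine interruption, inspection-error and quality factors into a
# per-process mission reliability, then chain sequential processes.
def process_reliability(p_no_interruption, p_correct_inspection, p_quality_ok):
    # Simplified assumption: the three factors act independently within a process
    return p_no_interruption * p_correct_inspection * p_quality_ok

processes = [
    process_reliability(0.99, 0.995, 0.98),   # process 1 (assumed values)
    process_reliability(0.985, 0.99, 0.975),  # process 2 (assumed values)
]
mission_reliability = 1.0
for r in processes:
    mission_reliability *= r  # sequential processes must all succeed
print(mission_reliability)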
Article
Petri nets are a powerful technique widely used in the modeling and analysis of complex manufacturing systems and processes. Due to their capability in modeling the dynamics of the systems, Petri nets have been combined with fault tree analysis techniques to determine the average rate of occurrence of system failures. Current methods in combining Petri nets with fault trees for system failure analysis compute the average rate of occurrence of system failures by tracking the markings of the Petri net models. The limitations of these methods are that tracking the markings of a Petri net represented by a reachability tree can be very complicated as the size of the system grows. Therefore, these methods offer less flexibility in analyzing sequential failures in the system. To overcome the limitations of the current methods in applying Petri nets for system failure assessment, this paper expands and extends the concept of counters used in Petri net simulation to perform the failure and reliability analysis of complex systems. The presented method allows the system failures to be modeled using general Petri nets with inhibitor arcs and loops, which employs fewer variables than existing marking-based methods and substantially accelerates the computations. It can be applied to real system failure analysis where basic events can have different failure rates. Copyright © 2003 John Wiley & Sons, Ltd.
Article
In this paper, I provide a tutorial exposition on maximum likelihood estimation (MLE). The intended audience of this tutorial are researchers who practice mathematical modeling of cognition but are unfamiliar with the estimation method. Unlike least-squares estimation which is primarily a descriptive tool, MLE is a preferred method of parameter estimation in statistics and is an indispensable tool for many statistical modeling techniques, in particular in non-linear modeling with non-normal data. The purpose of this paper is to provide a good conceptual explanation of the method with illustrative examples so the reader can have a grasp of some of the basic principles.
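A compact worked example in the reliability setting, fitting the rate of an exponential time-to-failure model by MLE with illustrative data; the closed-form estimate and the numeric optimization should agree:

# Sketch: maximum likelihood estimation of an exponential failure rate.
# For exponential data the MLE has a closed form, lambda_hat = n / sum(t);
# the numeric optimisation below just illustrates the general recipe.
import numpy as np
from scipy.optimize import minimize_scalar

failure_times = np.array([12.0, 35.5, 8.2, 50.1, 22.7, 17.3])  # hypothetical hours

def neg_log_likelihood(lam):
    if lam <= 0:
        return np.inf
    return -(len(failure_times) * np.log(lam) - lam * failure_times.sum())

closed_form = len(failure_times) / failure_times.sum()
numeric = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1.0), method="bounded").x
print(closed_form, numeric)  # the two estimates should match closely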
Article
Fault-tree analysis (FTA) is a powerful technique used to identify the root causes of an undesired event in a system failure by constructing a tree of sub-events that spreads into bottom events, propagates the fault and finally leads to the top event. By integrating experts' knowledge and experience in providing the failure possibilities of bottom events, an intuitionistic fuzzy fault-tree analysis algorithm is proposed in this paper to calculate the fault intervals of system components and to find the most critical system component for managerial decision-making, based on some basic definitions. The proposed method is applied to the failure analysis of printed circuit board assembly (PCBA) to generate the PCBA fault tree and its nodes, and then to directly compute the intuitionistic fuzzy fault-tree interval, the traditional reliability, and the intuitionistic fuzzy reliability interval. The results of the proposed method are compared with existing fault-tree approaches.
Berti, A., S. J. van Zelst, and W. van der Aalst. 2019. "Process Mining for Python (PM4Py): Bridging the Gap Between Process- and Data Science". In Proceedings of the ICPM Demo Track 2019, 13-16. Aachen, Germany.
Friederich, Jonas. 2022. "Implementation of the Presented Approach for Data-Driven Reliability Modeling". https://github.com/jo-chr/data-driven-reliability-modeling, accessed 14th September 2022.
Leemans, S. J. J., D. Fahland, and W. M. P. van der Aalst. 2013. "Discovering Block-Structured Process Models from Event Logs - A Constructive Approach". In Application and Theory of Petri Nets and Concurrency, edited by J.-M. Colom and J. Desel, Lecture Notes in Computer Science, 311-329. Berlin, Heidelberg: Springer.
Cokelaer, Thomas. "Fitter: Fit Data to Many Distributions".