## No full-text available

To read the full-text of this research, you can request a copy directly from the authors.


... Multiple AI applications [2,18] in the context of CPPS proved that a factored representation in the form of a feature vector can easily be derived from CPPS time-series data. We hence create a feature-vector-based state-space representation that allows for solving CPPS planning problems with SMT and for training ML models. ...

... Nonetheless, these approaches require lists of state transitions or action sequences created by a human operator, random exploration, or another domain-specific process [21], and these are not readily available for CPPS. Other AI approaches, like [2,18], use a factored representation in the form of a vector of attributes that can easily be derived from CPPS time-series data. Each state of the system is partitioned into a fixed set of variables or attributes, each having a value that is a Boolean, a real number, or a symbol selected from a fixed set [25]. ...
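As a minimal sketch of the factored representation described above, a CPPS state can be modeled as a fixed set of typed attributes; all attribute names and values below are hypothetical, not taken from the cited systems:

```python
# Minimal sketch of a factored, feature-vector-based CPPS state: each state is
# a fixed set of typed attributes (Boolean, real-valued, or symbolic).
# All attribute names and values are hypothetical examples.

SCHEMA = {
    "conveyor_running": bool,            # Boolean attribute
    "tank_level_l": float,               # real-valued attribute
    "valve_state": {"open", "closed"},   # value from a fixed symbol set
}

def make_state(conveyor_running, tank_level_l, valve_state):
    """Validate attribute values against the schema and build a state vector."""
    assert isinstance(conveyor_running, bool)
    assert isinstance(tank_level_l, (int, float))
    assert valve_state in SCHEMA["valve_state"]
    return (conveyor_running, float(tank_level_l), valve_state)

state = make_state(True, 12.5, "open")
```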

Cyber-Physical Production Systems (CPPS) are highly complex systems, making the application of AI planning approaches for production planning challenging. Most AI planning approaches require comprehensive domain descriptions, which model the functional dependencies within the CPPS. However, due to their high complexity, creating such domain descriptions manually is considered difficult, tedious, and error-prone. Therefore, we propose a novel generic planning approach, which can integrate mathematical formulas or Machine Learning models into a symbolic SMT-based planning algorithm, thus shedding the need for complex, manually created models. Our approach uses a feature-vector-based state-space representation as an interface between symbolic and sub-symbolic AI, and can identify a solution to CPPS planning problems by determining the required production steps, their sequence, and their parametrization. We evaluate our approach on twelve planning problems from a real CPPS, demonstrating its ability to express complex dependencies within production steps as mathematical formulas or to integrate ML models.
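The core planning task described above, finding a sequence of production steps that transforms an initial feature-vector state into a goal state, can be sketched without an SMT solver using a plain bounded-horizon search. The production steps and goal below are purely hypothetical stand-ins:

```python
from collections import deque

# Hypothetical production steps as transition functions over a feature-vector
# state (here a tuple: (position, drilled, painted)). Returning None means the
# step is not applicable in the given state.
def move(state):  return ("at_station", state[1], state[2])
def drill(state): return (state[0], True, state[2]) if state[0] == "at_station" else None
def paint(state): return (state[0], state[1], True) if state[1] else None

STEPS = {"move": move, "drill": drill, "paint": paint}

def plan(initial, goal, horizon=5):
    """Breadth-first search for a step sequence reaching the goal.

    This is a stand-in for the SMT-based search described in the abstract."""
    queue = deque([(initial, [])])
    while queue:
        state, seq = queue.popleft()
        if goal(state):
            return seq
        if len(seq) < horizon:
            for name, step in STEPS.items():
                nxt = step(state)
                if nxt is not None:
                    queue.append((nxt, seq + [name]))
    return None

result = plan(("at_store", False, False), lambda s: s[1] and s[2])
# → ["move", "drill", "paint"]
```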

... Reconfiguration aims at recovering a system from a fault by automatically adapting the system configuration, such that production, which was interrupted due to the fault, can be maintained, possibly by an adapted control [5]. In other words, the goal of reconfiguration is to transfer the system to a valid configuration, i.e., a configuration that allows normal system operation according to [6]. Hence, the effects of faults are minimized and production outages are reduced, accepting a degradation of system performance, e.g., a reduction of speed, if necessary [5]. ...

... An algorithm solving the reconfiguration problem of CPPS should (R3.1) take restrictions on the solution space coming from the CPPS into account since enumerating all possibilities to adapt to a fault is not possible due to combinatorial explosion [5], (R3.2) be compatible with static models containing qualitative system information, i.e., information about causal dependencies in the system, and a binary validity of configurations [6], and (R3.3) enable a direct integration of expert knowledge and intuitive modeling. Propositional logic is used widely for diagnosis [12] and planning [9] since it mimics human reasoning [15]. ...

The increasing size and complexity of Cyber-Physical Production Systems (CPPS) lead to an increasing number of faults such as broken components or interrupted connections. Nowadays, faults are handled manually, which is time-consuming because, for most operators, mapping from symptoms (i.e., warnings) to repair instructions is rather difficult. To enable CPPS to adapt to faults autonomously, reconfiguration, i.e., the identification of a new configuration that allows either reestablishing production or a safe shutdown, is necessary. This article addresses the reconfiguration problem of CPPS and presents a novel algorithm called AutoConf.
AutoConf operates on a hybrid automaton that models the CPPS and a specification of the controller to construct a qualitative system model. This qualitative system model is based on propositional logic and represents the CPPS in the reconfiguration context.
Evaluations on an industrial use case and simulations from process engineering illustrate the effectiveness and examine the scalability of AutoConf.
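The qualitative, propositional-logic view of configurations used by approaches like the one above can be illustrated with a toy validity predicate over Boolean configuration variables; the components and clauses below are invented for illustration only:

```python
from itertools import product

# Toy qualitative model: a configuration assigns True/False to components.
# Validity is a conjunction of propositional clauses (hypothetical example):
#   (pump_a or pump_b)        -- at least one pump feeds the line
#   (not pump_b or valve_2)   -- pump B requires valve 2 to be enabled
VARS = ["pump_a", "pump_b", "valve_2"]

def valid(cfg):
    return (cfg["pump_a"] or cfg["pump_b"]) and ((not cfg["pump_b"]) or cfg["valve_2"])

def valid_configurations():
    """Enumerate all assignments and keep the valid ones (brute-force SAT check)."""
    result = []
    for values in product([False, True], repeat=len(VARS)):
        cfg = dict(zip(VARS, values))
        if valid(cfg):
            result.append(cfg)
    return result
```

For realistically sized systems this enumeration explodes combinatorially, which is exactly why the cited approaches hand the clauses to a SAT solver instead.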

... The RL supports parameter adjustment through the learning process. The authors in [32] present a reconfiguration algorithm based on first-order logic, which can be integrated directly into the automation software. The goal of reconfiguration is to identify the necessary changes to a system in the presence of faults. ...

... [Flattened comparison table: Galaske et al. [28], Zhang et al. [29], Park et al. [30], Balzereit et al. [32], and Bicocchi et al. [4] are marked "No" on all three compared criteria; "Our Approach" is marked "Yes" on all three.] ... as well as an initial repository of recovery services. In a nutshell, the context model is organised over the three perspectives of the product that is being produced, of the production process and of the structure of smart machines in the CPPS. ...

Cyber-physical systems are hybrid networked cyber and engineered physical elements that record data (e.g. using sensors), analyse them using connected services, influence physical processes and interact with human actors using multi-channel interfaces. Examples of CPS interacting with humans in industrial production environments are the so-called cyber-physical production systems (CPPS), where operators supervise the industrial machines, according to the human-in-the-loop paradigm. In this scenario, research challenges for implementing CPPS resilience, promptly reacting to faults, concern: (i) the complex structure of CPPS, which cannot be addressed as a monolithic system, but as a dynamic ecosystem of single CPS interacting and influencing each other; (ii) the volume, velocity and variety of data (Big Data) on which resilience is based, which call for novel methods and techniques to ensure recovery procedures; (iii) the involvement of human factors in these systems. In this paper, we address the design of resilient cyber-physical production systems (R-CPPS) in digital factories by facing these challenges. Specifically, each component of the R-CPPS is modelled as a smart machine, that is, a cyber-physical system equipped with a set of recovery services, a Sensor Data API used to collect sensor data acquired from the physical side for monitoring the component behaviour, and an operator interface for displaying detected anomalous conditions and notifying necessary recovery actions to on-field operators. A context-based mediator, at shop floor level, is in charge of ensuring resilience by gathering data from the CPPS, selecting the proper recovery actions and invoking corresponding recovery services on the target CPS. Finally, data summarisation and relevance evaluation techniques are used for supporting the identification of anomalous conditions in the presence of high volume and velocity of data collected through the Sensor Data API. 
The approach is validated in a real case study from the food industry.

... There exist multiple approaches in the literature that exploit digital technologies to fully automate these tasks [23]. This includes, for example, the use of optimization algorithms for configuration planning [10,24] and production scheduling [25], production control by reinforcement learning agents [26,27] and the extensive use of simulation in all planning phases [6,28]. Although automation of individual tasks is available, a continuous process for production reconfigurations that allows the integration and combination of individual approaches is missing. ...

Caused by the trend of shorter product lifecycles, higher numbers of product variants and volatile markets, production systems face increasingly short periods with unchanged requirements. Therefore, the capability of manufacturing systems to reconfigure fast and cost-efficiently to changed requirements becomes a crucial factor for companies to maintain their competitiveness. Currently, reconfigurations of manufacturing systems are, on the one hand, limited due to technical constraints of the used hardware and software. On the other hand, reconfigurations require a lot of time due to manual engineering processes, planning procedures and inefficient deployment of changed production system configurations. Well-known response mechanisms for reducing reconfiguration efforts are the concepts of flexibility and changeability. This paper shows how the challenges of applying these concepts, such as managing complex modular systems or handling high reconfiguration frequencies, can be addressed with the introduction of a new approach. With the paradigm shift towards software-defined manufacturing, the full potential of flexibility and changeability can be accessed. Software-defined manufacturing allows the production task to be largely decoupled from the operating production hardware and the configuration of the production system to be managed via a continuous and highly digitized adaptation process. By exploiting technologies like data mining and digital twins, the digital planning process determines new configurations of the production that fulfill changed requirements. Subsequently, the new configuration can be validated and procedures for the deployment to the production system can be determined.

... For the experimental results, we use the Three-Tank System and a Two-Tank System which has already been used for reconfiguration purposes [28]. The system consists of two tanks, T1 and T2. ...

Reconfiguration aims at recovering a system from a fault by automatically adapting the system configuration, such that the system goal can be reached again. Classical approaches typically use a set of pre-defined faults for which corresponding recovery actions are defined manually. This is not possible for modern hybrid systems, which are characterized by frequent changes. Instead, AI-based approaches are needed which leverage a model of the non-faulty system and search for a set of reconfiguration operations that will establish valid behavior again. This work presents a novel algorithm which solves three main challenges: (i) Only a model of the non-faulty system is needed, i.e., the faulty behavior does not need to be modeled. (ii) It discretizes and reduces the search space, which originally is too large -- mainly due to the high number of continuous system variables and control signals. (iii) It uses a SAT solver for propositional logic for two purposes: first, to define the binary concept of validity; second, to implement the search itself -- sacrificing the optimal solution for the quick identification of an arbitrary solution. It is shown that the approach is able to reconfigure faults on simulated process engineering systems.
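The "arbitrary solution instead of the optimal one" idea above can be sketched as a first-found search over discretized control signals. The plant, the fault, and the validity predicate here are all hypothetical:

```python
from itertools import product

# Discretized candidate values for two continuous pump commands
# (hypothetical plant: two pumps feeding a tank that must meet a demand).
PUMP_LEVELS = [0.0, 0.5, 1.0]

def is_valid(u1, u2):
    """Hypothetical validity: combined inflow must match a demand of 1.0 +- 0.1.
    Fault assumption: pump 1 is broken and contributes nothing."""
    inflow = 0.0 * u1 + 1.0 * u2
    return abs(inflow - 1.0) <= 0.1

def reconfigure():
    """Return the FIRST valid configuration found, not an optimal one,
    mirroring the search trade-off described in the abstract."""
    for u1, u2 in product(PUMP_LEVELS, repeat=2):
        if is_valid(u1, u2):
            return (u1, u2)
    return None
```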

Driven by shorter innovation and product life cycles as well as economic volatility, the demand for reconfiguration of production systems is increasing. Thus, a systematic literature review on reconfiguration management in manufacturing is conducted within this work in order to determine to what degree this is addressed by the literature. To approach this, a definition of reconfiguration management is provided, and key aspects of reconfigurable manufacturing systems as well as shortcomings of today's manufacturing system reconfiguration are depicted. These provide the basis to derive the requirements for answering the formulated research question. Consequently, the methodical procedure of the literature review is outlined, which is based on the assessment of the derived requirements. Finally, the obtained results are provided and noteworthy insights are given.

The paper describes a novel use of planning in Reconfigurable Manufacturing. The authors consider the nodes of a manufacturing plant as individual AI-based agents able to reason on a continuously updated representation of their domain model, plan their own actions, and execute them. The paper aims at clarifying the role of planning and its connection with both a goal selection mechanism and the agent's knowledge. It describes in detail how a planning system has been customized for the task of planning and execution, and shows results of a realistic simulation of a manufacturing plant.

Today, Cyber-Physical Production Systems (CPPS) are controlled by manually written software; therefore, the software is not able to adapt to unforeseen faults or external system changes. So even if a fault is diagnosed correctly, the system normally needs to be repaired manually by a human operator. To implement the vision of an autonomous system, besides self-diagnosis, a self-reconfiguration or self-repair step is needed. Here, reconfiguration is the task of restoring valid system behavior after an invalid system behavior occurred. For complex CPPS, finding such a new valid configuration always requires a system model covering all potential new configurations -- only for rather simple systems can the possible reconfigurations for a fault be modeled explicitly. Unfortunately, such models are hardly available for such systems. To solve this challenge, in this paper, a novel approach for the automated reconfiguration of CPPS is presented. It is based on Satisfiability Modulo Theories and operates on observed system data as well as on information about the system topology. By doing this, the modeling efforts are reduced. To evaluate this new approach, a simulation of such CPPS is used.

In smart factories, maintenance is still an important aspect of safeguarding the performance of production. Especially in the case of failures of machine components, diagnosis is a time-consuming task. This paper presents an approach for a cyber-physical failure management system, which uses information from machines, such as programmable logic controller or sensor data, and from IT systems to support the diagnosis and repair process. The key element is a model combining the different information sources to detect deviations and to determine a probably failed component. Furthermore, the approach is prototypically implemented for leakage detection in compressed air networks.

One of the most significant directions in the development of computer science and information and communication technologies is represented by Cyber-Physical Systems (CPSs): systems of collaborating computational entities which are in intensive connection with the surrounding physical world and its ongoing processes, providing and using, at the same time, data-accessing and data-processing services available on the Internet. Cyber-Physical Production Systems (CPPSs), relying on the newest and foreseeable further developments of computer science, information and communication technologies on the one hand, and of manufacturing science and technology on the other, may lead to the 4th Industrial Revolution, frequently noted as Industry 4.0. The keynote will underline that there are significant roots generally -- and particularly in the CIRP community -- which point towards CPPSs. Expectations and the related new R&D challenges will be outlined.

This paper presents one perspective on recent developments related to software engineering in the industrial automation sector that spans from manufacturing factory automation to process control systems and energy automation systems. The survey's methodology is based on the classic SWEBOK reference document that comprehensively defines the taxonomy of software engineering domain. This is mixed with classic automation artefacts, such as the set of the most influential international standards and dominating industrial practices. The survey focuses mainly on research publications which are believed to be representative of advanced industrial practices as well.

A recent trend in intelligent machines and manufacturing has been toward reconfigurable manufacturing systems. Such systems move away from a fixed factory line executing an unchanging set of operations and toward the goal of an adaptable factory structure. The logical next challenge in this area is that of online reconfigurability. With this capability, machines can reconfigure while running, enable or disable capabilities in real time, and respond quickly to changes in the system or the environment (including faults). We propose an approach to achieving online reconfigurability based on a high level of system modularity supported by integrated, model-based planning and control software. Our software capitalizes on many advanced techniques from the artificial intelligence research community, particularly in model-based domain-independent planning and scheduling, heuristic search, and temporal resource reasoning. We describe the implementation of this design in a prototype highly modular, parallel printing system.

A tailored model of a system is the prerequisite for various analysis tasks, such as anomaly detection, fault identification, or quality assurance. This paper deals with the algorithmic learning of a system's behavior model given a sample of observations. In particular, we consider real-world production plants where the learned model must capture timing behavior, dependencies between system variables, as well as mode switches -- in short: hybrid system characteristics. Usually, such model formation tasks are solved by human engineers, entailing the well-known set of problems including knowledge acquisition, development cost, or lack of experience. Our contributions to the outlined field are as follows. (1) We present a taxonomy of learning problems related to model formation tasks. As a result, an important open learning problem for the domain of production systems is identified: the learning of hybrid timed automata. (2) For this class of models, the learning algorithm HyBUTLA is presented. This algorithm is the first of its kind to solve the underlying model formation problem at scalable precision. (3) We present two case studies that illustrate the usability of this approach in realistic settings. (4) We give a proof of the learning and runtime properties of HyBUTLA.
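As a loose illustration of learning an automaton from observations, the sketch below builds only a prefix-tree acceptor with per-transition timing values, which is the typical starting point of state-merging learners such as HyBUTLA (the subsequent merging and compatibility tests are omitted); all event names and timings are invented:

```python
# Build a prefix-tree acceptor from observed event sequences, recording the
# timing values seen on each transition. This sketches only the first step of
# state-merging automaton learners; event names and timings are invented.

def build_prefix_tree(observations):
    """observations: list of runs, each a list of (event, duration) pairs."""
    tree = {0: {}}        # state id -> {event: (next_state, [durations])}
    next_id = 1
    for run in observations:
        state = 0
        for event, duration in run:
            if event not in tree[state]:
                tree[state][event] = (next_id, [])
                tree[next_id] = {}
                next_id += 1
            nxt, durations = tree[state][event]
            durations.append(duration)
            state = nxt
    return tree

tree = build_prefix_tree([
    [("fill", 2.1), ("heat", 5.0)],
    [("fill", 1.9), ("drain", 3.2)],
])
```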

The control of dynamic systems, which aims to minimize the deviation of state variables from reference values in a continuous state space, is a central domain of cybernetics and control theory. The objective of action planning is to find feasible state trajectories in a discrete state space from an initial state to a state satisfying the goal conditions, which in principle addresses the same issue on a more abstract level. We combine these approaches to switch between dynamic system characteristics on the fly, and to generate control input sequences that affect both discrete and continuous state variables. Our approach (called Domain Predictive Control) is applicable to hybrid systems with linear dynamics and discretizable inputs.
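The combination described above, planning over a system with linear dynamics and discretized inputs, can be sketched as a search over input sequences. The one-dimensional dynamics, input set, and goal interval below are hypothetical examples, not the cited method:

```python
from itertools import product

# One-dimensional linear dynamics x_{k+1} = a*x_k + b*u_k, discretized inputs.
A, B = 0.9, 1.0
INPUTS = (-1.0, 0.0, 1.0)

def simulate(x0, inputs):
    """Roll the linear dynamics forward under the given input sequence."""
    x = x0
    for u in inputs:
        x = A * x + B * u
    return x

def find_input_sequence(x0, goal_low, goal_high, horizon=6):
    """Try discretized input sequences of increasing length until the
    simulated final state lies in the goal interval."""
    for n in range(1, horizon + 1):
        for seq in product(INPUTS, repeat=n):
            if goal_low <= simulate(x0, seq) <= goal_high:
                return list(seq)
    return None

seq = find_input_sequence(0.0, 2.5, 3.5)
```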

Nonlinear hybrid dynamical systems are the main focus of this paper. A modeling framework is proposed; feedback control strategies and numerical solution methods for optimal control problems in this setting are introduced; and their implementation in various illustrative applications is presented. Hybrid dynamical systems are characterized by discrete-event and continuous dynamics which have an interconnected structure and can thus represent an extremely wide range of systems of practical interest. Consequently, many modeling and control methods have surfaced for these problems. This work is particularly focused on systems for which the degree of discrete/continuous interconnection is comparatively strong and the continuous portion of the dynamics may be highly nonlinear and of high dimension. The hybrid optimal control problem is defined, and two techniques for obtaining suboptimal solutions are presented (both based on numerical direct collocation for continuous dynamic optimization): one fixes interior-point constraints on a grid, the other uses branch-and-bound. These are applied to a robotic multi-arm transport task, an underactuated robot arm, and a benchmark motorized traveling salesman problem.

The Satisfiability Modulo Theories (SMT) problem is a decision problem for logical first-order formulas with respect to combinations of background theories such as arithmetic, bit-vectors, arrays, and uninterpreted functions. Z3 is a new and efficient SMT solver freely available from Microsoft Research. It is used in various software verification and analysis applications.

... isolating the cause for this behavior. This report is concerned with the detection and isolation of abrupt faults through analysis of the transients that occur after the fault. We have developed a comprehensive framework for monitoring and diagnosis of dynamical systems that attempts to overcome the difficulties associated with quantitative techniques. A key step in our monitoring and diagnosis framework is the transformation of measurements into symbols that encode the trends in the measurements. That is, the symbols are the signs of the first and second time derivatives of the measurements. This report outlines how these symbols are used in diagnosis, considers several methods for estimating the symbols from measured data, and shows the behavior of the method we have chosen to use on measurements taken with our cooling system diagnosis testbed.
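The symbol extraction described above, taking the signs of the first and second time derivatives, can be sketched with central finite differences; the measurement series and tolerance are invented:

```python
def sign(v, eps=1e-9):
    """Map a derivative estimate to a trend symbol: '+', '0', or '-'."""
    return "+" if v > eps else "-" if v < -eps else "0"

def trend_symbols(measurements, dt=1.0):
    """Encode a measurement series as (sign of 1st derivative, sign of 2nd
    derivative) per interior sample, using central finite differences."""
    symbols = []
    for i in range(1, len(measurements) - 1):
        d1 = (measurements[i + 1] - measurements[i - 1]) / (2 * dt)
        d2 = (measurements[i + 1] - 2 * measurements[i] + measurements[i - 1]) / dt**2
        symbols.append((sign(d1), sign(d2)))
    return symbols

# e.g. a rising-but-flattening signal: increasing, with decreasing slope
syms = trend_symbols([0.0, 1.0, 1.8, 2.4, 2.8])
# → [("+", "-"), ("+", "-"), ("+", "-")]
```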

A qualitative physics predicts and explains the behavior of mechanisms in qualitative terms. The goals for the qualitative physics are (1) to be far simpler than the classical physics and yet retain all the important distinctions (e.g., state, oscillation, gain, momentum) without invoking the mathematics of continuously varying quantities and differential equations, (2) to produce causal accounts of physical mechanisms that are easy to understand, and (3) to provide the foundations for commonsense models for the next generation of expert systems. This paper presents a fairly encompassing account of qualitative physics. First, we discuss the general subject of naive physics and some of its methodological considerations. Second, we present a framework for modeling the generic behavior of individual components of a device based on the notions of qualitative differential equations (confluences) and qualitative state. This requires developing a qualitative version of the calculus. The modeling primitives induce two kinds of behavior, intrastate and interstate, which are governed by different laws. Third, we present algorithms for determining the behavior of a composite device from the generic behavior of its components. Fourth, we examine a theory of explanation for these predictions based on logical proof. Fifth, we introduce causality as an ontological commitment for explaining how devices behave.

Fault-tolerant control aims at a graceful degradation of the behaviour of automated systems in case of faults. It satisfies the industrial demand for enhanced availability and safety, in contrast to traditional reactions to faults that bring about sudden shutdowns and loss of availability. The book presents effective model-based analysis and design methods for fault diagnosis and fault-tolerant control. Architectural and structural models are used to analyse the propagation of the fault throughout the process, to test the fault detectability and to find the redundancies in the process that can be used to ensure fault tolerance. Design methods for diagnostic systems and fault-tolerant controllers are presented for processes that are described by analytical models, by discrete-event models or that can be dealt with as quantised systems. Five case studies on pilot processes show the applicability of the presented methods. The theoretical results are illustrated by two running examples used throughout the book. The book addresses engineering students, engineers in industry and researchers who wish to get a survey over the variety of approaches to process diagnosis and fault-tolerant control. The authors have extensive teaching experience with graduates and PhD students as well as industrial experts. Parts of this book have been used in courses for this audience. The authors give a thorough introduction to the main ideas of diagnosis and fault-tolerant control and present some of their most recent research achievements that they have obtained together with their research groups in a close cooperation within European research projects. The second edition includes new material about reconfigurable control, diagnosis of nonlinear systems, and remote diagnosis. The application examples are extended by a steering-by-wire system and the air path of a diesel engine, both of which include experimental results. 
The bibliographical notes at the end of all chapters have been updated. The chapters end with exercises to be used in lectures.

Qualitative simulation is a key inference process in qualitative causal reasoning. In this paper, we present the QSIM algorithm, a new algorithm for qualitative simulation that generalizes the best features of existing algorithms and allows direct comparisons among alternate approaches. QSIM is an efficient constraint-satisfaction algorithm that can follow either its standard semantics, allowing the creation of new landmarks, or the {+, 0, -} semantics, where 0 is the only landmark value, by changing a table of legal state transitions. We argue that the QSIM semantics make more appropriate qualitative distinctions, since the {+, 0, -} semantics can collapse the distinction among increasing, stable, or decreasing oscillation. We also show that (a) qualitative simulation algorithms can be proved to produce every actual behavior of the mechanism being modeled, but (b) existing qualitative simulation algorithms, because of their local points of view, can predict spurious behaviors not produced by any mechanism satisfying the structural description. These observations suggest specific types of care that must be taken in designing applications of qualitative causal reasoning systems, and in constructing and validating a knowledge base of mechanism descriptions.

This paper examines fundamental problems underlying difficulties encountered by pattern recognition algorithms, neural networks, and rule systems. These problems are manifested as combinatorial complexity of algorithms, of their computational or training requirements. The paper relates particular types of complexity problems to the roles of a priori knowledge and adaptive learning. Paradigms based on adaptive learning lead to the complexity of training procedures, while nonadaptive rule-based paradigms lead to complexity of rule systems. Model-based approaches to combining adaptivity with a priori knowledge lead to computational complexity. Arguments are presented for Aristotelian logic being culpable for the difficulty of combining adaptivity and apriority. The potential role of fuzzy logic in overcoming current difficulties is discussed. Current mathematical difficulties are related to philosophical debates of the past.


Quadcopters are susceptible to internal and external influences, many of which may lead to faults. To ensure a safe and reliable flight, the quadcopter needs to recover autonomously from faults. However, existing approaches mainly rely on parametric faults or require a predefinition of possible faults, which is not realistic for a complex real-world scenario. The recovery from unforeseen faults and structural faults, like a failing engine, is still an open research gap. Hence, in this paper, a concept for automated reconfiguration, i.e. the automated recovery from a fault, is presented which only uses information about non-faulty system behavior and is able to handle structural changes. From the information about non-faulty behavior, a non-faulty system model is created using established machine learning methods. Thus, faults are detected by the learned model and no pre-definition of faults is needed. The system structure is modeled using a logical calculus which allows for modeling available system parts and the causal coherences between them. The approach is applied to a simulation of a quadcopter which is subject to a structural fault. It is shown that the approach extends the capabilities of a quadcopter to handle faults autonomously and to ensure stability and reliability.

Today, Cyber-Physical Production Systems (CPPS) are controlled by manually written software; therefore, the software is not able to adapt to unforeseen events and faults. So even if a fault is diagnosed automatically, the system normally needs to be repaired manually by a human operator. To implement the vision of an autonomous system, besides self-diagnosis, a self-reconfiguration or self-repair step is also needed. Here, reconfiguration is the task of restoring valid system behavior after an invalid system behavior occurred. For complex CPPS, finding such a new valid configuration always requires a system model covering all potential new configurations -- only for rather simple systems can the possible reconfigurations for a fault be modeled explicitly. Unfortunately, such models are hardly available for complex systems. This paper presents a novel approach for the automated reconfiguration of CPPS to solve this challenge. It is based on the combination of residual-based fault detection and logical calculi to draw causal coherences. The approach operates on observed system data and information about the system topology. By doing this, the modeling efforts are reduced. To evaluate the new approach, a simulation of such CPPS is used.
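Residual-based fault detection, as mentioned above, compares observations against a model of the non-faulty behavior and flags large deviations. A minimal sketch, with an invented plant model and threshold:

```python
# Minimal residual-based fault detection: compare observed values against a
# (hypothetical) model of non-faulty behavior and flag large residuals.
THRESHOLD = 0.5

def predicted_level(pump_on, t):
    """Hypothetical non-faulty model: tank level rises by 1.0 per second
    while the pump is on, and stays at zero otherwise."""
    return 1.0 * t if pump_on else 0.0

def detect_fault(observations):
    """observations: list of (t, pump_on, measured_level).
    Returns the timestamps at which the residual exceeds the threshold."""
    fault_times = []
    for t, pump_on, measured in observations:
        residual = abs(measured - predicted_level(pump_on, t))
        if residual > THRESHOLD:
            fault_times.append(t)
    return fault_times

faults = detect_fault([(1, True, 1.1), (2, True, 2.0), (3, True, 1.2)])
# → fault flagged at t=3, where the tank level lags far behind the model
```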

Model-based diagnosis (MBD) is difficult to use in practice because it requires a model of the diagnosed system, which is often very hard to obtain. We explore theoretically how observing the system when it is in a normal state can provide information about the system that is sufficient to learn a partial system model that allows automated diagnosis. We analyze the number of observations needed to learn a model capable of finding faulty components in most cases. Then, we explore how knowing the system topology can help us to learn a useful model from the normal observations for settings in which many of the internal system variables cannot be observed. Unlike other data-driven methods, our learned model is safe, in the sense that subsystems identified as faulty are guaranteed to truly be faulty.
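The idea of learning a partial model from normal observations can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's algorithm: for each component we record the input/output behaviors seen during healthy operation, and any component exhibiting a never-seen behavior becomes a diagnosis candidate. The component names and observations are hypothetical.

```python
# Hedged sketch: learn a "normal" behavior model per component, then flag
# components whose observed behavior never occurred in the normal data.

def learn_normal_model(normal_observations):
    """normal_observations: list of dicts mapping component -> (inputs, output)."""
    model = {}
    for obs in normal_observations:
        for component, behavior in obs.items():
            model.setdefault(component, set()).add(behavior)
    return model

def diagnose(model, observation):
    """Return components whose observed behavior was never seen when healthy."""
    return {c for c, behavior in observation.items()
            if behavior not in model.get(c, set())}

normal = [
    {"valve": ((1,), 1), "pump": ((1,), 1)},
    {"valve": ((0,), 0), "pump": ((0,), 0)},
]
model = learn_normal_model(normal)
# The valve outputs 0 although its input is 1 -- never observed when healthy
faulty = diagnose(model, {"valve": ((1,), 0), "pump": ((1,), 1)})
```

Note that this sketch is only "safe" in the paper's sense under a closed-world assumption on the recorded behaviors; the paper analyzes how many observations are needed for such guarantees.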

Proton exchange membrane fuel cells are considered one of the most promising power sources for transportation in the near future. Proper and robust thermal management is a key issue in fuel cell applications. This paper studies sensor fault detection and isolation as well as fault-tolerant control for the thermal management of proton exchange membrane fuel cell systems. The thermal model of the fuel cell is established and analyzed by structural analysis, and residual generators for sensor faults are designed via the Dulmage–Mendelsohn decomposition, which rearranges the bi-adjacency matrix of the thermal model. A sliding-mode-based active fault-tolerant control strategy is proposed for the thermal management of fuel cell systems. The effectiveness of the proposed fault detection and isolation method and of the active fault-tolerant control strategy is verified on a fuel cell test bench. The experimental results show that the temperature of the proton exchange membrane fuel cell stack can be maintained at the reference value with high accuracy even when a sensor fails.
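The residual-based sensor supervision described above can be illustrated with a deliberately simplified sketch. The first-order thermal model, thresholds, and values below are invented for illustration and are not taken from the paper; the point is only the pattern: compare sensor against model, and fall back to the model estimate when the residual indicates a sensor fault.

```python
# Hedged sketch of residual-based sensor fault detection with a
# fault-tolerant fallback, using a toy lumped thermal model.

def thermal_model_step(T, heat_in, cooling, dt=1.0, c=0.1):
    # dT/dt = c * (heat_in - cooling): illustrative first-order dynamics
    return T + dt * c * (heat_in - cooling)

def monitor(T_model, T_sensor, threshold=2.0):
    residual = T_sensor - T_model
    faulty = abs(residual) > threshold
    # On a detected sensor fault, fall back to the model estimate
    T_used = T_model if faulty else T_sensor
    return faulty, T_used

T_model = thermal_model_step(T=60.0, heat_in=50.0, cooling=50.0)  # stays 60.0
ok_fault, T1 = monitor(T_model, T_sensor=60.5)   # small residual: trust sensor
bad_fault, T2 = monitor(T_model, T_sensor=75.0)  # large residual: use model
```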

This paper is focused on the observer-based adaptive fuzzy control problem for nonlinear stochastic systems with the nonstrict-feedback form, in which some complicated and challenging issues including unmeasurable states, input quantization and actuator faults are addressed. The fuzzy logic systems are introduced to approximate the nonlinear functions existing in the control system. A fuzzy observer is designed to observe the unavailable state variables. In order to handle the negative effects resulting from input quantization and actuator faults, a damping term with the estimation of unknown bounds as well as a positive time-varying integral function are constructed, respectively. Furthermore, an observer-based adaptive fuzzy control scheme is proposed for the considered systems to compensate for the effects of input quantization and actuator fault based on adaptive backstepping approach. The proposed control strategy can guarantee that all the signals in the closed-loop system are bounded. Finally, simulation results are provided to illustrate the effectiveness of the proposed adaptive control scheme.

Production systems are a typical example of cyber-physical systems (CPS) in which a variety of machines, actuators, sensors and control systems are interwoven to produce products as efficiently as possible. Even though sophisticated condition monitoring systems are deployed, stoppages, breakdowns, and other types of failures still happen. To avoid catastrophic operational disruptions, the production system itself should ideally respond to failures resiliently and autonomously. This paper reports a design method for a resilient architecture of a cyber-physical production system that can deal with disturbances and failures in a discrete-event process. A physical demonstrator was built to demonstrate its reconfiguration capabilities.

Fixed-time cooperative control is currently a hot research topic in multi-agent systems since it can provide a guaranteed settling time, which does not depend on initial conditions. Compared with asymptotic cooperative control algorithms, fixed-time cooperative control algorithms can achieve better closed-loop performance and disturbance rejection properties. Different from finite-time control, fixed-time cooperative control produces a faster rate of convergence and provides an explicit estimate of the settling time independent of initial conditions, which is desirable for multi-agent systems. This paper aims at presenting an overview of recent advances in fixed-time cooperative control of multi-agent systems. Some fundamental concepts about finite- and fixed-time stability and stabilization are first recalled, with insights into their interpretation. Then recent results in finite- and fixed-time cooperative control are reviewed in detail and categorized according to different agent dynamics. Finally, this paper raises several challenging issues that need to be addressed in the near future.
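The defining property, a settling-time bound independent of the initial condition, can be seen in a tiny single-agent simulation. The dynamics below use a standard fixed-time stabilizing law of the form dx/dt = -a·sig(x)^p - b·sig(x)^q with p < 1 < q (gains and exponents are illustrative, not taken from the survey); initial conditions three orders of magnitude apart converge within the same horizon.

```python
import math

def sig(x, p):
    # signed power: sign(x) * |x|^p
    return math.copysign(abs(x) ** p, x)

def simulate(x0, a=2.0, b=2.0, p=0.5, q=1.5, dt=1e-3, steps=20000):
    # Forward-Euler integration of dx/dt = -a*sig(x)^p - b*sig(x)^q
    x = x0
    for _ in range(steps):
        x += dt * (-a * sig(x, p) - b * sig(x, q))
    return x

# Widely separated initial conditions converge within the same horizon,
# consistent with a settling-time bound that depends only on (a, b, p, q).
near = simulate(0.5)
far = simulate(500.0)
```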

The introduction of Industry 4.0 and the rapid development of Manufacturing Cyber-Physical Systems (MCPS), as well as the increasing demand for multi-variety, small-batch and personalized customization, pose a huge challenge to traditional manufacturing systems. In order to meet the production requirements of fast iteration and realize agile and efficient manufacturing resource allocation, this paper proposes an ontology-based resource reconfiguration method from the perspective of resource utilization. First, an intelligent device ontology that describes the intelligent manufacturing resources is established using the Web Ontology Language (OWL). On this basis, the relational database is associated with the ontology of the manufacturing system, which allows the manufacturing resources to be mapped to model instances. Finally, we analyze the equipment reconfiguration of an intelligent manipulator as an application case, which illustrates the proposed ontology-based resource reconfiguration method and verifies its feasibility in manufacturing. This study thus provides a new method for reconfigurability research of manufacturing resources.

This work considers the problem of learning cooperative policies in complex, partially observable domains without explicit communication. We extend three classes of single-agent deep reinforcement learning algorithms based on policy gradient, temporal-difference error, and actor-critic methods to cooperative multi-agent systems. To effectively scale these algorithms beyond a trivial number of agents, we combine them with a multi-agent variant of curriculum learning. The algorithms are benchmarked on a suite of cooperative control tasks, including tasks with discrete and continuous actions, as well as tasks with dozens of cooperating agents. We report the performance of the algorithms using different neural architectures, training procedures, and reward structures. We show that policy gradient methods tend to outperform both temporal-difference and actor-critic methods and that curriculum learning is vital to scaling reinforcement learning algorithms in complex multi-agent domains.

This paper addresses fault-tolerant control (FTC) issues for linear systems with model uncertainty and multiplicative faults. The left and right coprime factorization techniques are first adopted for system modeling. Then, the fault detection (FD) approaches are investigated in the coprime factorization context. Based on the information provided by the FD systems, the corresponding FTC architectures and design schemes are presented. Moreover, the gap metric techniques are applied to fault detectability analysis, including the fault detectability indicators to quantify the detection performance in the presence of model uncertainty. The effectiveness of the developed methods for industrial application is illustrated by a case study on a dc motor.

The paper describes a novel use of planning in Reconfigurable Manufacturing. The authors consider the nodes of a manufacturing plant as individual AI-based agents able to reason on a continuously updated representation of their domain model, plan their own actions, and execute them. The paper aims at clarifying the role of planning and its connection with both a goal selection mechanism and the agent's knowledge. It describes in detail how a planning system has been customized for the task of planning and execution, and shows results of a realistic simulation on a manufacturing plant.

In this chapter, we discuss the problem of fault diagnosis for complex systems in two different contexts: static and dynamic probabilistic graphical models of systems. The fault diagnosis problem is represented using a tripartite probabilistic graphical model. The first layer of this tripartite graph is composed of components of the system, which are the potential sources of failures. The condition of each component is represented by a binary state variable which is zero if the component is healthy and one otherwise. The second layer is composed of tests with binary outcomes (pass or fail) and the third layer is the noisy observations associated with the test outcomes. The cause–effect relations between the states of components and the observed test outcomes can be compactly modeled in terms of detection and false alarm probabilities. For a failure source and an observed test outcome, the probability of fault detection is defined as the probability that the observed test outcome is a fail given that the component is faulty, and the probability of false alarm is defined as the probability that the observed test outcome is a fail given that the component is healthy. When the probability of fault detection is one and the probability of false alarm is zero, the test is termed perfect; otherwise, it is deemed imperfect. In static models, the diagnosis problem is formulated as one of maximizing the posterior probability of component states given the observed fail or pass outcomes of tests. Since the solution to this problem is known to be NP-hard, to find near-optimal diagnostic solutions, we use a Lagrangian (dual) relaxation technique, which has the desirable property of providing a measure of suboptimality in terms of the approximate duality gap. Indeed, the solution would be optimal if the approximate duality gap is zero. 
The static problem is discussed in detail and some interesting properties, such as the reduction of the problem to a set covering problem in the case of perfect tests, are discussed. We also visualize the dual function graphically and introduce some insights into the static fault diagnosis problem. In the context of dynamic probabilistic graphical models, it is assumed that the states of components evolve as independent Markov chains and that, at each time epoch, we have access to some of the observed test outcomes. Given the observed test outcomes at different time epochs, the goal is to determine the most likely evolution of the states of components over time. The application of dual relaxation techniques results in significant reduction in the computational burden as it transforms the original coupled problem into separable subproblems, one for each component, which are solved using a Viterbi decoding algorithm. The problems, as stated above, can be regarded as passive monitoring, which relies on synchronous or asynchronous availability of sensor results to infer the most likely state evolution of component states. When information is sequentially acquired to isolate the faults in minimum time, cost, or other economic factors, the problem of fault diagnosis can be viewed as active probing (also termed sequential testing or troubleshooting). We discuss the solution of active probing problems using the information heuristic and rollout strategies of dynamic programming. The practical applications of passive monitoring and active probing to fault diagnosis problems in automotive, aerospace, power, and medical systems are briefly mentioned.
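The tripartite model above can be made concrete with a brute-force MAP sketch. This is illustrative only (the chapter uses Lagrangian relaxation precisely because enumeration does not scale): the priors, detection probabilities pd, and false-alarm probabilities pfa are invented, and a test is modeled as failing if any covered component is faulty, which is a simplification of the chapter's cause-effect model.

```python
from itertools import product

# Illustrative brute-force MAP diagnosis over a tiny tripartite model.
components = ["c1", "c2"]
prior_fault = {"c1": 0.1, "c2": 0.05}
# Each test covers some components; pd = P(fail | a covered component faulty),
# pfa = P(fail | all covered components healthy).
tests = [
    {"covers": {"c1"}, "pd": 0.95, "pfa": 0.02, "outcome": "fail"},
    {"covers": {"c2"}, "pd": 0.95, "pfa": 0.02, "outcome": "pass"},
]

def posterior_score(state):
    """Unnormalized posterior of a component-state assignment given outcomes."""
    score = 1.0
    for c in components:
        p = prior_fault[c]
        score *= p if state[c] else 1 - p
    for t in tests:
        any_faulty = any(state[c] for c in t["covers"])
        p_fail = t["pd"] if any_faulty else t["pfa"]
        score *= p_fail if t["outcome"] == "fail" else 1 - p_fail
    return score

best = max(product([0, 1], repeat=len(components)),
           key=lambda bits: posterior_score(dict(zip(components, bits))))
diagnosis = dict(zip(components, best))
```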

Qualitative simulation is a key inference process in qualitative causal reasoning. However, the precise meaning of the different proposals and their relation with differential equations is often unclear. In this paper, we present a precise definition of qualitative structure and behavior descriptions as abstractions of differential equations and continuously differentiable functions. We present a new algorithm for qualitative simulation that generalizes the best features of existing algorithms, and allows direct comparisons among alternate approaches. Starting with a set of constraints abstracted from a differential equation, we prove that the QSIM algorithm is guaranteed to produce a qualitative behavior corresponding to any solution to the original equation. We also show that any qualitative simulation algorithm will sometimes produce spurious qualitative behaviors: ones which do not correspond to any mechanism satisfying the given constraints. These observations suggest specific types of care that must be taken in designing applications of qualitative causal reasoning systems, and in constructing and validating a knowledge base of mechanism descriptions.
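The source of the spurious behaviors mentioned above can be shown in a few lines: qualitative arithmetic is underdetermined, so a simulator must branch over every consistent successor. A toy example over the sign domain (an assumption-free illustration of the general point, not the QSIM algorithm itself):

```python
# Qualitative addition over signs '-', '0', '+'. When the operands have
# opposite signs the result is ambiguous, which is exactly what forces a
# qualitative simulator to branch (and sometimes produce spurious behaviors).

def q_add(a, b):
    """Return the set of possible signs of a + b."""
    if a == "0":
        return {b}
    if b == "0":
        return {a}
    if a == b:
        return {a}
    return {"-", "0", "+"}  # opposite signs: underdetermined

same_sign = q_add("+", "+")   # unambiguous
opposite = q_add("+", "-")    # three consistent successors
```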

Product configuration, a widely used technology in product family design, is one of the most effective technologies of the mass customization strategies that have been deployed by many companies for years. Nevertheless, mass customization needs to cover the management of the whole customizable product cycle. In order to assist the development of mass customization, it is essential to extend configuration technology to product family process planning, which is the technological essence of process configuration. In this article, the process configuration task is defined based on an analysis of the characteristics of process planning. Compared with the solving scheme of product configuration, process configuration is then mapped into a generative constraint satisfaction problem (GCSP), and the variables and constraints of the process configuration GCSP model are identified. A backtracking-based algorithm is introduced to complete the process configuration. Finally, an experiment on machining process configuration for a satellite plate panel verifies the validity of the algorithm.
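The mapping of process configuration to a CSP solved by backtracking can be sketched compactly. The process steps, machines, and constraints below are invented for illustration; the article's GCSP additionally generates variables dynamically, which this minimal sketch omits.

```python
# Hedged sketch: process steps as CSP variables, candidate machines as domains,
# and planning rules as constraints, solved by chronological backtracking.

variables = ["cutting", "drilling", "polishing"]          # process steps
domains = {v: ["machine_A", "machine_B"] for v in variables}

def constraints_ok(assignment):
    # Illustrative constraints: drilling must reuse the cutting machine
    # (to avoid refixturing), and polishing requires machine_B.
    if "cutting" in assignment and "drilling" in assignment:
        if assignment["cutting"] != assignment["drilling"]:
            return False
    if assignment.get("polishing") == "machine_A":
        return False
    return True

def backtrack(assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if constraints_ok(assignment):
            result = backtrack(assignment)
            if result:
                return result
        del assignment[var]
    return None

plan = backtrack()
```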

Resilience often refers to a property of social and ecological systems. Recently, resilience has been applied to engineered systems, referring to their capability to recover their functions after partial damage and thereby turn failures into successes. In this paper, the concept of engineering resilience is revisited and clarified. A new definition of the general production system is proposed, upon which the concept of the resilient manufacturing system (RMS) is built. Furthermore, four guidelines for the design and management of the RMS are proposed. Examples are discussed to illustrate the application of these guidelines toward the RMS.

One-sided specification intervals are frequent in industry, but the process capability analysis is not well developed theoretically for this case. Most of the published articles about process capability focus on the case when the specification interval is two-sided. Furthermore, usually the assumption of normality is necessary. However, a common practical situation is process capability analysis when the studied characteristic has a skewed distribution with a long tail towards large values and an upper specification limit only exists. In such situations it is not uncommon that the smallest possible value of the characteristic is 0 and that this also is the best value to obtain. We propose a new class of indices for such a situation with an upper specification limit, a target value zero, and where the studied characteristic has a skewed, zero-bound distribution with a long tail towards large values. A confidence interval for an index in the proposed class, as well as a decision procedure for deeming a process as capable or not, is discussed. These results are based on large sample properties of the distribution of a suggested estimator of the index. A simulation study is performed, assuming the quality characteristic is Weibull distributed, to investigate the properties of the suggested decision procedure. Copyright © 2007 John Wiley & Sons, Ltd.
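The setting can be illustrated with a small simulation in the spirit of the paper's Weibull study. The index used below, USL divided by an extreme sample percentile, is a generic stand-in chosen for illustration and is not the class of indices the paper proposes; the distribution parameters are likewise invented.

```python
import random

# Illustrative simulation: skewed, zero-bound data with an upper specification
# limit and target value zero. The percentile-based index here is a stand-in,
# NOT the paper's proposed index class.

random.seed(0)
# random.weibullvariate(alpha, beta): alpha = scale, beta = shape
data = sorted(random.weibullvariate(1.0, 1.5) for _ in range(10000))

def quantile(xs, p):
    """Simple empirical quantile of a sorted sample."""
    return xs[min(int(p * len(xs)), len(xs) - 1)]

USL = 4.0
index = USL / quantile(data, 0.9973)   # index > 1 suggests a capable process
```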

This paper explains the rationale for the development of reconfigurable manufacturing systems, which possess the advantages both of dedicated lines and of flexible systems. The paper defines the core characteristics and design principles of reconfigurable manufacturing systems (RMS) and describes the structure recommended for practical RMS with RMS core characteristics. After that, a rigorous mathematical method is introduced for designing RMS with this recommended structure. An example is provided to demonstrate how this RMS design method is used. The paper concludes with a discussion of reconfigurable assembly systems.

Model-based diagnosis (MBD) tackles the problem of troubleshooting systems starting from a description of their structure and function (or behavior). Time is a fundamental dimension in MBD: the behavior of most systems is time-dependent in one way or another. Temporal MBD, however, is a difficult task and indeed many simplifying assumptions have been adopted in the various approaches in the literature. These assumptions concern different aspects such as the type and granularity of the temporal phenomena being modeled, the definition of diagnosis, the ontology for time being adopted. Unlike the atemporal case, moreover, there is no general “theory” of temporal MBD which can be used as a knowledge-level characterization of the problem. In this paper we present a general characterization of temporal model-based diagnosis. We distinguish between different temporal phenomena that can be taken into account in diagnosis and we introduce a modeling language which can capture all such phenomena. Given a suitable logical semantics for such a modeling language, we introduce a general characterization of the notions of diagnostic problem and explanation, showing that in the temporal case these definitions involve different parameters. Different choices for the parameters lead to different approaches to temporal diagnosis. We define a framework in which different dimensions for temporal model-based diagnosis can be analyzed at the knowledge level, pointing out which are the alternatives along each dimension and showing in which cases each one of these alternatives is adequate. In the final part of the paper we show how various approaches in the literature can be classified within our framework. In this way, we propose some guidelines to choose which approach best fits a given application problem.

Satisfiability Modulo Theories (SMT) is about checking the satisfiability of logical formulas over one or more theories. The problem draws on a combination of some of the most fundamental areas in computer science. It combines the problem of Boolean satisfiability with domains, such as, those studied in convex optimization and term-manipulating symbolic systems. It also draws on the most prolific problems in the past century of symbolic logic: the decision problem, completeness and incompleteness of logical theories, and finally complexity theory. The problem of modularly combining special purpose algorithms for each domain is as deep and intriguing as finding new algorithms that work particularly well in the context of a combination. SMT also enjoys a very useful role in software engineering. Modern software, hardware analysis and model-based tools are increasingly complex and multi-faceted software systems. However, at their core is invariably a component using symbolic logic for describing states and transformations between them. A well tuned SMT solver that takes into account the state-of-the-art breakthroughs usually scales orders of magnitude beyond custom ad-hoc solvers.
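The SMT problem itself, deciding a Boolean combination of theory atoms, can be illustrated with a deliberately naive check over a bounded integer domain. Real SMT solvers such as Z3 combine a SAT engine with dedicated theory solvers instead of enumerating; the formula below is made up for illustration.

```python
from itertools import product

# Naive satisfiability check of the SMT-style formula
#   (x + y == 5) AND (x > y OR y >= 4)
# over a small bounded integer domain. Production solvers (e.g. Z3)
# decide such formulas without brute-force enumeration.

def formula(x, y):
    return (x + y == 5) and (x > y or y >= 4)

model = next(((x, y) for x, y in product(range(-10, 11), repeat=2)
              if formula(x, y)), None)
```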

This paper presents a model-based diagnostic method designed in the context of process supervision. It has been inspired by both artificial intelligence and control theory. AI contributes tools for qualitative modeling, including causal modeling, whose aim is to split a complex process into elementary submodels. Control theory, within the framework of fault detection and isolation (FDI), provides numerical models for generating and testing residuals, and for taking into account inaccuracies in the model, unknown disturbances and noise. Consistency-based reasoning provides a logical foundation for diagnostic reasoning and clarifies fundamental assumptions, such as single fault and exoneration. The diagnostic method presented in the paper benefits from the advantages of all these approaches. Causal modeling enables the method to focus on sufficient relations for fault isolation, which avoids combinatorial explosion. Moreover, it allows the model to be modified easily without changing any aspect of the diagnostic algorithm. The numerical submodels that are used to detect inconsistency benefit from the precise quantitative analysis of the FDI approach. The FDI models are studied in order to link this method with DX component-oriented reasoning. The recursive on-line use of this algorithm is explained and the concept of local exoneration is introduced.

Consistency-based diagnosis is one of the most widely used approaches to model-based diagnosis within the artificial intelligence community. It is usually carried out through an iterative cycle of behavior prediction, conflict detection, candidate generation, and candidate refinement. In that process conflict detection has proven to be a nontrivial step from the theoretical point of view. For this reason, many approaches to consistency-based diagnosis have relied upon some kind of dependency-recording. These techniques have had different problems, especially when they were applied to diagnose dynamic systems. Recently, offline dependency compilation has established itself as a suitable alternative approach to online dependency-recording. In this paper we propose the possible conflict concept as a compilation technique for consistency-based diagnosis. Each possible conflict represents a subsystem within the system description containing minimal analytical redundancy and capable of becoming a conflict. Moreover, the whole set of possible conflicts can be computed offline with no model evaluation. Once we have formalized the possible conflict concept, we explain how possible conflicts can be used in the consistency-based diagnosis framework, and how this concept can be easily extended to diagnose dynamic systems. Finally, we analyze its relation to conflicts in the general diagnosis engine (GDE) framework and compare possible conflicts with other compilation techniques, especially with analytical redundancy relations (ARRs) obtained through structural analysis. Based on results from these comparisons we provide additional insights in the work carried out within the BRIDGE community to provide a common framework for model-based diagnosis for both artificial intelligence and control engineering approaches.
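The link between a possible conflict and an analytical redundancy relation can be sketched with a single-tank mass balance. The relation, sensor names, and numeric values below are invented for illustration: when the redundancy relation is violated, the set of components supporting it becomes a conflict set.

```python
# Hedged sketch: one compiled "possible conflict" as an ARR residual.
# The mass-balance relation A * dh/dt - (q_in - q_out) ~ 0 is minimally
# overdetermined given the three sensors; its violation implicates exactly
# the components whose models it uses.

def residual_flow(q_in, q_out, level_rate, tank_area=2.0):
    return tank_area * level_rate - (q_in - q_out)

def check_possible_conflict(obs, tol=0.1):
    r = residual_flow(obs["q_in"], obs["q_out"], obs["level_rate"])
    if abs(r) > tol:
        # The support of the violated ARR becomes a conflict set
        return {"inflow_sensor", "outflow_sensor", "level_sensor"}
    return set()

healthy = check_possible_conflict({"q_in": 3.0, "q_out": 1.0, "level_rate": 1.0})
conflict = check_possible_conflict({"q_in": 3.0, "q_out": 1.0, "level_rate": 0.2})
```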

In this work we propose several algorithms to solve the reconfiguration problem for linear and hybrid systems. In particular, we consider the decision about the usage of redundant hardware in order to compensate for faults. While this problem can be translated into a constrained model predictive control framework, the computational complexity grows very fast as the number of possible decisions increases. We therefore propose schemes that require low computational effort. We discuss the applicability of the methods considering the reconfiguration of the three-tank benchmark system.

The bond-graph method is a graphical approach to modeling in which component energy ports are connected by bonds that specify the transfer of energy between system components. Power, the rate of energy transport between components, is the universal currency of physical systems. Bond graphs are inherently energy based and thus related to other energy-based methods, including dissipative systems and port-Hamiltonians. This article has presented an introduction to bond graphs for control engineers. Although the notation can initially appear daunting, the bond graph method is firmly grounded in the familiar concepts of energy and power. The essential element to be grasped is that bonds represent power transactions between components
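The "universal currency" remark above can be made tangible: a bond carries an effort-flow pair whose product is power in any physical domain. The numeric values below are invented and chosen only so that each bond carries the same power.

```python
# Each bond pairs an effort with a flow; their product is power [W],
# regardless of the physical domain (values are illustrative).

bonds = {
    "electrical": {"effort": 12.0, "flow": 2.0},     # voltage [V] * current [A]
    "mechanical": {"effort": 50.0, "flow": 0.48},    # force [N] * velocity [m/s]
    "hydraulic":  {"effort": 2e5,  "flow": 1.2e-4},  # pressure [Pa] * flow [m^3/s]
}

power = {domain: b["effort"] * b["flow"] for domain, b in bonds.items()}
```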

A standardized language for reuse and exchange of models is needed. An international design group has designed such a language called Modelica. Modelica is a modern language built on non-causal modeling with mathematical equations and object-oriented constructs to facilitate reuse of modeling knowledge.

I. Matei, J. de Kleer, A. Feldman, R. Rai, and S. Chowdhury, "Hybrid modeling: Applications in real-time diagnosis," arXiv preprint arXiv:2003.02671, 2020.

IEC 61360: Standard data element types with associated classification scheme, IEC, 2017.
