Article

Performance-driven closed-loop optimization and control for smart manufacturing processes in the cloud-edge-device collaborative architecture: A review and new perspectives

Article
Full-text available
The ongoing paradigm transition from Industry 4.0 to Industry 5.0 is driving toward a new industrial vision rooted in addressing human and planetary needs rather than solely focusing on innovation for profit. One of the most significant shifts that defines Industry 5.0 is the change in focus from technology-driven progress to a genuinely human-centric approach. This means that the industrial sector should prioritize human needs and interests at the core of the production process. Instead of replacing workers on the shop floor, technologies should enhance their capabilities, leading to a safer and more fulfilling work environment. Consequently, the role of industrial operators is undergoing a substantial transformation. This subject has garnered increasing interest from both researchers and industries. However, there is a lack of comprehensive literature covering the concept of Operator 4.0. To address this gap, this paper presents a systematic literature review of the role of Operator 4.0 within the manufacturing context. Out of the 1333 papers retrieved from scientific literature databases, 130 scientific papers met the inclusion criteria and underwent detailed analysis. The study aims to provide an extensive overview of Operator 4.0, analyzing the occupational risks faced by workers and the proposed solutions to support them by leveraging the key enabling technologies of Industry 4.0. The paper places particular emphasis on human aspects, which are often overlooked although the successful implementation of technologies heavily relies on who uses them and how they are utilized. Finally, the paper discusses open issues and challenges and puts forth suggestions for future research directions.
Article
Full-text available
The advent of distributed computing and mobile clouds has made it possible to transfer the heavy processing of complex workflows to the cloud. With the help of mobile clouds, virtualization, and shared resources, managing and executing large-scale workflows becomes feasible. Since virtual resources are used, scheduling and resource allocation are key research topics in public cloud environments and mobile clouds. The cloud model that combines mobile devices with a public or virtual cloud, consisting of temporarily available mobile devices, is an interesting topic to investigate. Workflow scheduling and resource availability in mobile clouds, under high mobility and energy-efficiency requirements, is one of the main challenges in this research area. The purpose of scheduling algorithms is to improve service-quality criteria while observing the constraints. The main goal of this article is to extensively review scheduling algorithms for complex scientific workflows in public and mobile clouds. Furthermore, it reviews the use of heuristic and meta-heuristic techniques, in dynamic and static settings and under various constraints, for task scheduling in cloud and mobile computing. The article also reviews existing techniques for scheduling tasks in mobile clouds and presents a comprehensive analysis and systematic comparison of these scheduling algorithms.
Article
Full-text available
The Fourth Industrial Revolution, also named Industry 4.0, is leveraging several modern computing fields. Industry 4.0 comprises automated tasks in manufacturing facilities, which generate massive quantities of data through sensors. These data contribute to the interpretation of industrial operations in favor of managerial and technical decision-making. Data science supports this interpretation due to extensive technological artifacts, particularly data processing methods and software tools. In this regard, the present article proposes a systematic literature review of these methods and tools employed in distinct industrial segments, considering an investigation of different time series levels and data quality. The systematic methodology initially approached the filtering of 10,456 articles from five academic databases, 103 being selected for the corpus. Thereby, the study answered three general, two focused, and two statistical research questions to shape the findings. As a result, this research found 16 industrial segments, 168 data science methods, and 95 software tools explored by studies from the literature. Furthermore, the research highlighted the employment of diverse neural network subvariations and missing details in the data composition. Finally, this article organized these results in a taxonomic approach to synthesize a state-of-the-art representation and visualization, favoring future research studies in the field.
Article
Full-text available
Condition monitoring (CM) of industrial processes is essential for reducing downtime and increasing productivity through accurate Condition-Based Maintenance (CBM) scheduling. Indeed, advanced intelligent learning systems for Fault Diagnosis (FD) make it possible to effectively isolate and identify the origins of faults. Proven smart industrial infrastructure technology enables FD to be a fully decentralized distributed computing task. However, such distribution among different regions/institutions is often subject to so-called data islanding, constrained by privacy concerns, security risks, and industry competition arising from legal regulations or conflicts of interest. Therefore, Federated Learning (FL) is considered an efficient process that keeps the data of multiple participants separate while collaboratively training an intelligent and reliable FD model. As, to the best of our knowledge, no comprehensive study has been introduced on this subject to date, such a review-based study is urgently needed. Within this scope, our work is devoted to reviewing recent advances in FL applications for process diagnostics, with special attention given to FD methods, challenges, and future prospects.
Article
Full-text available
The Internet of Things (IoT) is made up of a growing number of facilities, which are digitalized to have sensing, networking, and computing capabilities. Traditionally, the large volume of data generated by IoT devices is processed in a centralized cloud computing model. However, this model is no longer able to meet the computational demands of large-scale, geographically distributed IoT devices for executing tasks with high performance, low latency, and low energy consumption. Therefore, edge computing has emerged as a complement to cloud computing. To improve system performance, it is necessary to partition and offload some tasks generated by local devices to the remote cloud or to edge nodes. However, most current research focuses on designing efficient offloading strategies and service orchestration; little attention has been paid to the problem of jointly optimizing task partitioning and offloading for different application types. In this paper, we give a comprehensive overview of existing task partitioning and offloading frameworks, focusing on the input and the core of a framework's decision engine. We also propose comprehensive taxonomy metrics for comparing task partitioning and offloading approaches in the IoT cloud-edge collaborative computing framework. Finally, we discuss the problems and challenges that may be encountered in the future.
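The partition-and-offload decision described above can be made concrete with a toy cost model. The three-tier options, the parameter values, and the `offload_cost` helper below are illustrative assumptions, not taken from any of the reviewed frameworks:

```python
# Toy cost model for deciding where a task runs: locally, at an edge
# node, or in the cloud. All parameter values are illustrative.

def offload_cost(cycles, data_mb, cpu_hz, link_mbps, energy_per_cycle=0.0):
    """Latency (s) plus a simple energy term for one execution option."""
    compute_s = cycles / cpu_hz
    transfer_s = (data_mb * 8) / link_mbps if link_mbps else 0.0
    return compute_s + transfer_s + cycles * energy_per_cycle

def choose_placement(cycles, data_mb):
    options = {
        # Local execution pays an energy cost per cycle; remote options
        # pay the transfer delay instead but run on faster CPUs.
        "local": offload_cost(cycles, 0, cpu_hz=1e9, link_mbps=0,
                              energy_per_cycle=1e-10),
        "edge":  offload_cost(cycles, data_mb, cpu_hz=5e9, link_mbps=100),
        "cloud": offload_cost(cycles, data_mb, cpu_hz=20e9, link_mbps=20),
    }
    return min(options, key=options.get), options

placement, costs = choose_placement(cycles=2e9, data_mb=10)
print(placement, {k: round(v, 3) for k, v in costs.items()})
```

For this workload the edge wins: the cloud's faster CPU cannot compensate for its slow uplink, and local execution pays both slow compute and energy. Real decision engines add queueing, multi-task dependencies, and dynamic link states on top of this skeleton.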
Article
Full-text available
The increasing concerns with adverse environmental issues have led to the proliferation of renewable energy resources (RESs), which have been expanded more recently to multi-energy systems (MESs) in various parts of the world. MES can improve energy efficiency and reduce carbon emission by co-optimizing multiple forms of energy, including electricity, natural gas, heating, cooling, etc., which provide a promising approach to carbon neutrality. Energy hub (EH) is an efficient framework for MES modeling and management, where various energy carriers are optimally converted, utilized, and stored for satisfying certain sociopolitical and socioeconomic mandates. This paper presents a comprehensive review of available EH optimization and control studies. First, we introduce basic concepts and EH modeling methods. Then, we conduct a systematic review of optimization methods, as well as state-of-the-art solution algorithms for EH planning, operation, and trading. Furthermore, we analyze an internet of things (IoT) based EH control structure and review the corresponding state estimation, communication, and control methods for managing large EH data sets. Finally, we present and discuss several research topics for future research.
Article
Full-text available
The arrival of the intelligent manufacturing and industrial internet era brings ever more opportunities and challenges to modern industry. Specifically, the production mode of traditional manufacturing is undergoing a revolution thanks to techniques including, but not limited to, digitalization, networking, intelligence, and industrial automation. As the core link between intelligent manufacturing and the industrial internet platform, industrial Big Data analytics has received growing attention from academia and industry. The efficient mining of the high-value information hidden in industrial Big Data and its utilization in real-life industrial processes are among the hottest topics at present. Meanwhile, with the advance of industrial automation toward knowledge automation, the learning paradigm of industrial Big Data analytics is evolving accordingly. Therefore, starting from the perspective of industrial Big Data analytics and aiming at the corresponding industrial scenarios, this article actively explores the revolution of the learning paradigm against the background of industrial Big Data: 1) the evolution of the industrial Big Data analytics paradigm is analyzed, namely from isolated learning to lifelong learning, and the relationships between the two are summarized; 2) mainstream directions of lifelong learning are listed, and their applications in industrial scenarios are discussed in detail; 3) prospects and future directions are given.
Article
Full-text available
The study of big data analytics (BDA) methods for data-driven industries is gaining research attention and implementation in today's industrial activities and business intelligence, and is rapidly changing the perception of industrial revolutions. The uniqueness of big data and BDA has created unprecedented research calls to solve data generation, storage, visualization, and processing challenges. There are significant gaps in knowledge for researchers and practitioners about the right information and BDA tools to extract knowledge from large industrial datasets. Notwithstanding the various research efforts and scholarly studies proposed recently on big data analytic processes for industrial performance improvement, comprehensive reviews and systematic data-driven analysis, comparison, and rigorous evaluation of methods, data sources, applications, major challenges, and appropriate solutions are still lacking. To fill this gap, this paper presents an all-inclusive survey of current trends in BDA tools and methods, including their strengths and weaknesses; identifies and discusses data sources and real-life applications where BDA has potential impact; and identifies BDA challenges, solutions, and future research prospects that require further attention. This study provides insightful recommendations that could assist researchers, industrial practitioners, big data providers, and governments in addressing the challenges of current BDA methods and the solutions that would alleviate them.
Article
Full-text available
In this paper, we propose an approximate dynamic programming approach for an energy-efficient unrelated parallel machine scheduling problem. In this problem, jobs arrive at the system randomly, and each job's ready and processing times become available only when an order is placed; we therefore consider the online version of the problem. Our objective is to minimize a combination of the makespan and the total energy costs, where the energy costs comprise the energy consumed by machines for switching on, processing, and idling. We propose a binary program to solve the optimization problem at each stage of the approximate dynamic program. We compare the results of the approximate dynamic programming approach against an integer linear programming formulation of the offline version of the scheduling problem and an existing heuristic method suitable for scheduling problems with ready times. The results show that the approximate dynamic programming algorithm outperforms the two offline methods in terms of solution quality and computational time.
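A crude stand-in for the dispatch logic such an approach produces is a one-step-lookahead rule that charges each candidate machine the weighted increase in makespan plus an energy term. The weights, job data, and the `dispatch` helper below are invented for illustration; they are not the paper's stage-wise binary program:

```python
# One-step-lookahead dispatch rule for online parallel-machine
# scheduling with a combined makespan/energy objective. Weights and
# job data are illustrative.

W_MAKESPAN, W_ENERGY = 1.0, 0.5

def dispatch(jobs, n_machines, energy_rate=2.0):
    """Assign each arriving (ready, proc_time) job to the machine that
    minimizes the weighted increase in makespan plus energy cost."""
    free_at = [0.0] * n_machines          # when each machine becomes idle
    schedule = []
    for ready, proc in jobs:              # jobs arrive in ready-time order
        def cost(m):
            start = max(ready, free_at[m])
            finish = start + proc
            new_makespan = max(max(free_at), finish)
            idle = max(0.0, ready - free_at[m])   # idle energy if we wait
            return (W_MAKESPAN * new_makespan
                    + W_ENERGY * (proc + idle) * energy_rate)
        best = min(range(n_machines), key=cost)
        start = max(ready, free_at[best])
        free_at[best] = start + proc
        schedule.append((best, start))
    return schedule, max(free_at)

sched, makespan = dispatch([(0, 4), (0, 3), (1, 2), (2, 5)], n_machines=2)
print(sched, makespan)
```

An approximate dynamic program would replace the myopic `cost` with a value-function estimate of future stages, which is where the paper's binary program enters.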
Article
Full-text available
Forecasting has always been at the forefront of decision making and planning. The uncertainty that surrounds the future is both exciting and challenging, with individuals and organisations seeking to minimise risks and maximise utilities. The large number of forecasting applications calls for a diverse set of forecasting methods to tackle real-life challenges. This article provides a non-systematic review of the theory and the practice of forecasting. We provide an overview of a wide range of theoretical, state-of-the-art models, methods, principles, and approaches to prepare, produce, organise, and evaluate forecasts. We then demonstrate how such theoretical concepts are applied in a variety of real-life contexts. We do not claim that this review is an exhaustive list of methods and applications. However, we hope that our encyclopedic presentation will offer a point of reference for the rich work undertaken over the last decades, with some key insights for the future of forecasting theory and practice. Given its encyclopedic nature, the intended mode of reading is non-linear. We offer cross-references to allow the readers to navigate through the various topics. We complement the theoretical concepts and applications covered with large lists of free or open-source software implementations and publicly available databases.
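As a minimal concrete instance of the classical methods such a review covers, simple exponential smoothing fits in a few lines; the series and the smoothing weight below are made up:

```python
# Minimal simple-exponential-smoothing forecaster. alpha in (0, 1] is
# the smoothing weight: higher alpha reacts faster to new observations.

def ses_forecast(series, alpha=0.3):
    """Return the one-step-ahead forecast after smoothing the series."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level   # move level toward new obs
    return level

history = [12.0, 13.0, 12.5, 14.0, 13.5]
print(round(ses_forecast(history), 3))
```

More elaborate methods in the exponential-smoothing family add trend and seasonal components to the same recursive update.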
Article
Full-text available
Polynomial Regression Surface (PRS) is a commonly used surrogate model for its simplicity, good interpretability, and computational efficiency. The performance of PRS is largely dependent on its basis functions. With limited samples, how to correctly select basis functions remains a challenging problem. To improve prediction accuracy, a PRS modeling approach based on multitask optimization and ensemble modeling (PRS-MOEM) is proposed for rational basis function selection with robustness. First, the training set is partitioned into multiple subsets by the cross validation method, and for each subset a sub-model is independently constructed by optimization. To effectively solve these multiple optimization tasks, an improved evolutionary algorithm with transfer migration is developed, which can enhance the optimization efficiency and robustness by useful information exchange between these similar optimization tasks. Second, a novel ensemble method is proposed to integrate the multiple sub-models into the final model. The significance of each basis function is scored according to the error estimation of the sub-models and the occurrence frequency of the basis functions in all the sub-models. Then the basis functions are ranked and selected based on the bias-corrected Akaike’s information criterion. PRS-MOEM can effectively mitigate the negative influence from the sub-models with large prediction error, and alleviate the uncertain impact resulting from the randomness of training subsets. Thus the basis function selection accuracy and robustness can be enhanced. Seven numerical examples and an engineering problem are utilized to test and verify the effectiveness of PRS-MOEM.
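The ensemble scoring idea above, weighting each basis function by how often it appears across CV sub-models and by how accurate those sub-models were, can be caricatured as follows. The sub-model sets, their errors, and the inverse-error weighting are illustrative stand-ins for the paper's AICc-based ranking:

```python
# Toy version of the ensemble scoring idea: each CV sub-model proposes
# a set of basis functions and reports a validation error; a basis
# function's score combines occurrence frequency with the accuracy of
# the sub-models that chose it. All values are made up.

sub_models = [                      # (selected basis names, validation error)
    ({"1", "x1", "x1*x2"}, 0.10),
    ({"1", "x1", "x2^2"},  0.25),
    ({"1", "x1", "x1*x2"}, 0.12),
    ({"1", "x2", "x1*x2"}, 0.30),
]

def score_bases(sub_models):
    scores = {}
    for bases, err in sub_models:
        for b in bases:
            # Accurate sub-models (small err) contribute larger weight.
            scores[b] = scores.get(b, 0.0) + 1.0 / err
    return scores

scores = score_bases(sub_models)
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked[:3])
```

The paper's actual criterion then re-fits the top-ranked basis set and checks it with bias-corrected AIC; this sketch only shows why frequent, low-error basis functions float to the top.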
Article
Full-text available
Continuous and emerging advances in Information and Communication Technology (ICT) have enabled Internet-of-Things (IoT)-to-Cloud applications to be induced by data pipelines and Edge Intelligence-based architectures. Advanced vehicular networks greatly benefit from these architectures due to the implicit functionalities that are focused on realizing the Internet of Vehicle (IoV) vision. However, IoV is susceptible to attacks, where adversaries can easily exploit existing vulnerabilities. Several attacks may succeed due to inadequate or ineffective authentication techniques. Hence, there is a timely need for hardening the authentication process through cutting-edge access control mechanisms. This paper proposes a Blockchain-based Multi-Factor authentication model that uses an embedded Digital Signature (MFBC_eDS) for vehicular clouds and Cloud-enabled IoV. Our proposed MFBC_eDS model consists of a scheme that integrates the Security Assertion Mark-up Language (SAML) to the Single Sign-On (SSO) capabilities for a connected edge to cloud ecosystem. MFBC_eDS draws an essential comparison with the baseline authentication scheme suggested by Karla and Sood. Based on the foundations of Karla and Sood’s scheme, an embedded Probabilistic Polynomial-Time Algorithm (ePPTA) and an additional Hash function for the Pi generated during Karla and Sood’s authentication were proposed and discussed. The preliminary analysis of the proposition shows that the approach is more suitable to counter major adversarial attacks in an IoV-centered environment based on the Dolev–Yao adversarial model while satisfying aspects of the Confidentiality, Integrity, and Availability (CIA) triad.
Article
Full-text available
Model-based predictive control (MPC) describes a set of advanced control methods that use a process model to predict the future behavior of the controlled system. By solving a (potentially constrained) optimization problem, MPC determines the control law implicitly. This shifts the effort for the design of a controller toward modeling of the to-be-controlled process. Since such models are available in many fields of engineering, the initial hurdle for applying control is decreased with MPC. Its implicit formulation maintains the physical understanding of the system parameters, facilitating the tuning of the controller. MPC can even control systems that cannot be controlled by conventional feedback controllers. With most of the theory laid out, it is time for a concise summary and an application-driven survey, which this review article aims to provide. While in the early days of MPC several widely noticed review papers were published, a comprehensive overview of the latest developments and of applications is missing today. This article reviews the current state of the art, including theory, historic evolution, and practical considerations, to create intuitive understanding. We pay special attention to applications in order to demonstrate what is already possible today. Furthermore, we provide a detailed discussion of implementation details in general and of strategies to cope with the computational burden, still a major factor in the design of MPC. Besides key methods in the development of MPC, this review points to future trends, emphasizing why they are the next logical steps in MPC.
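The receding-horizon principle is easy to sketch: predict over a horizon with the model, pick the best control sequence, apply only its first move, and repeat. The scalar plant, the grid-search optimizer, and the cost weights below are toy assumptions; practical MPC solves a constrained QP at each step instead:

```python
# Bare-bones receding-horizon MPC for a scalar linear plant
# x[k+1] = a*x[k] + b*u[k]. The optimizer is a brute-force search over
# a small control alphabet, just to make the receding-horizon idea
# concrete.
import itertools

def mpc_step(x, target, a=0.9, b=0.5, horizon=3, u_grid=(-1.0, 0.0, 1.0)):
    """Return the first move of the best control sequence over the horizon."""
    best_u, best_cost = 0.0, float("inf")
    for seq in itertools.product(u_grid, repeat=horizon):
        xi, cost = x, 0.0
        for u in seq:
            xi = a * xi + b * u                  # predict with the model
            cost += (xi - target) ** 2 + 0.01 * u ** 2
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]     # keep only the first move
    return best_u

x = 0.0
for _ in range(20):                              # closed loop toward target 2.0
    x = 0.9 * x + 0.5 * mpc_step(x, target=2.0)
print(round(x, 3))
```

With only three admissible control levels the loop settles into a small limit cycle around the setpoint; a continuous-valued optimizer would remove that chatter.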
Article
Full-text available
With the innovation and development of detection technology, various types of sensors are installed to monitor the operating status of equipment in modern industry. Compared with monitoring using sensors of the same type, heterogeneous sensors can collect more comprehensive, complementary fault information. However, the large distribution differences and serious noise pollution of heterogeneous sensor data collected at industrial sites pose certain challenges to the development of heterogeneous data fusion strategies. In view of the large distribution differences in the feature spaces of heterogeneous data and the difficulty of effectively fusing fault information, this paper presents a multi-scale deep coupling convolutional neural network (MDCN), which maps heterogeneous fault information from different feature spaces to a common space for full fusion. Specifically, a multi-scale convolution module (MSC) with multiple filters of different sizes is adopted to extract multi-scale fault features from heterogeneous sensor data. Then, the maximum mean discrepancy (MMD) is applied to measure the distance between different spatial features in the coupling layer, and the common fault information in the heterogeneous data is mined by minimizing MMD so that it can be fused effectively to identify the failure state of the equipment. The validity of this method is verified with data collected on a first-level parallel gearbox mixed-fault experimental platform.
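A minimal sketch of the MMD distance that such a coupling layer minimizes, assuming a Gaussian kernel and 1-D samples for brevity:

```python
# Biased (V-statistic) estimate of squared Maximum Mean Discrepancy
# with a Gaussian kernel: MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)].
import math

def gaussian_kernel(a, b, sigma=1.0):
    return math.exp(-((a - b) ** 2) / (2 * sigma ** 2))

def mmd(xs, ys, sigma=1.0):
    k = lambda u, v: gaussian_kernel(u, v, sigma)
    kxx = sum(k(a, b) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(k(a, b) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(k(a, b) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy

same = mmd([0.0, 0.1, -0.1], [0.05, -0.05, 0.0])   # overlapping samples
far  = mmd([0.0, 0.1, -0.1], [3.0, 3.1, 2.9])      # shifted samples
print(round(same, 4), round(far, 4))
```

Overlapping samples give an MMD near zero while shifted samples give a large one, which is exactly why driving MMD down pulls heterogeneous features into a common space.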
Article
Full-text available
Federated learning is a deep learning optimization method that can address user privacy leakage, and it is of positive significance for industrial equipment fault diagnosis applications. However, edge nodes in industrial scenarios are resource-constrained, and it is challenging to meet the computational and communication resource consumption of federated training. The heterogeneity and autonomy of edge nodes also reduce the efficiency of synchronous optimization. This paper proposes an efficient asynchronous federated learning method to solve this problem. The method allows edge nodes to select part of the model from the cloud for asynchronous updates based on the local data distribution, thereby reducing the amount of computation and communication and improving the efficiency of federated learning. Compared with the original federated learning, this method can reduce resource requirements at the edge, reduce communication, and improve training speed in heterogeneous edge environments. This paper uses a heterogeneous edge computing environment composed of multiple computing platforms to verify the effectiveness of the proposed method.
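The asynchronous partial-update idea can be sketched as a server that merges a sub-model trained at an edge node using a staleness-discounted weight. The parameter names, the mixing rule, and the toy gradient step below are all assumptions for illustration, not the paper's protocol:

```python
# Toy asynchronous, partial-model federated update: an edge node pulls
# only the sub-model it needs, trains it locally, and the server merges
# the update with a staleness-discounted weight.

server = {"feature_extractor": 1.0, "classifier": 0.0, "version": 0}

def local_update(params, local_grad, lr=0.1):
    """One local SGD step on the pulled sub-model."""
    return {k: v - lr * local_grad.get(k, 0.0) for k, v in params.items()}

def async_merge(server, node_params, node_version, base_mix=0.5):
    staleness = server["version"] - node_version
    mix = base_mix / (1 + staleness)            # older updates count less
    for k, v in node_params.items():
        server[k] = (1 - mix) * server[k] + mix * v
    server["version"] += 1

# The node trains only the classifier sub-model against its local data.
pulled = {"classifier": server["classifier"]}
trained = local_update(pulled, {"classifier": -2.0})   # grad pushes weight up
async_merge(server, trained, node_version=0)
print(server)
```

Because only the pulled sub-model travels in each direction, both compute and communication shrink at the edge, and the staleness discount keeps slow nodes from dragging the global model backward.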
Article
A production workshop with mobile robots can be considered as a hybrid system consisting of a production system and a transportation one. Mobile robots are responsible for transferring production tasks among the machines of a production system and constitute a multi-robot transport system. It is highly coupled with a production system because of the interdependency that exists between production scheduling and mobile robot assignment. In this work, we study their integrated optimization problem for a mobile robot-based job shop with blocking properties. Its aim is to minimize total completion time as an objective function to improve overall operational efficiency. We consider the speed of a robot that varies according to whether it is loaded or not. We formulate this new problem into a mixed integer linear program to provide an algebraic description. Then, we propose a constraint programming method to solve it with high efficiency. The superiority of constraint programming over mixed integer linear programming in terms of the number of variables and constraints is analyzed. Numerous experiments on benchmark examples show that constraint programming can well handle the concerned problem. Under a one-hour time limit, it can exactly solve its instances while mixed integer linear programming cannot. Under a one-minute time limit, it obtains much better solutions than mixed integer linear programming and heuristic strategies, thus implying its high potential to be put into industrial applications. Note to Practitioners —The integration of mobile robots into production workshops has emerged as a pivotal strategy to enhance the operational efficiency of an advanced manufacturing system. This integration transforms the traditional job shop into a hybrid system, where mobile robots play a crucial role in transferring production tasks among machines. 
This brings a unique challenge to practitioners due to the intricate interdependencies between production scheduling and mobile robot assignment. The focus of our study is on the optimization of a mobile robot-based job shop to improve its overall operational efficiency. Our approach involves formulating this complex problem as a mixed-integer linear program, thereby providing a concise mathematical representation. We propose a constraint programming method to solve the problem efficiently. Through numerous experiments on benchmark examples, our findings indicate that the proposed constraint programming method can well solve the concerned problem given long or short solution time. This underscores its high potential for practical implementation in industrial scenarios.
Article
Rational allocation of resources can improve the profit margin of a steel enterprise. This paper deals with a multi-product multi-stage multi-period resource allocation problem. In it, product manufacturing involves multiple continuous production stages, each of which has parallel machines. According to process requirements, the tasks assigned to a machine need to be produced in batches. The process route of a product is a sequential combination of machines each of which is to be selected from a stage. The process route for each product and the batching rules of each machine are known in advance. Multi-period production means that the tasks released before a planning period can be processed in any of its periods. The demand for each product type in each period and the capacity of each machine are predetermined. Considering a customer’s demand, we optimally allocate machines for products in each planning period to achieve their efficient utilization. The objective is to minimize the sum of various costs related to transportation, resources, unmet demand, and product inventory. A mixed integer linear program is developed for the concerned problem. A fix-and-optimize heuristic with variable neighborhood size is newly designed to obtain high-quality solutions. Its solutions are compared with those of CPLEX (a commercial software) given a fixed solution time. Experimental results show that it can accurately solve small-scale instances and find better solutions than CPLEX for most large-scale instances. Comparison experiments are conducted and the results show that the proposed algorithm has excellent accuracy, speed, and stability in addressing the concerned problem. Note to Practitioners —As demand for steel products gradually shows a trend towards multiple varieties, small batches, and personalized customization, it increases the difficulty for practitioners to rationally allocate resources for their production in a steel enterprise. 
It is hard to achieve rational material and machine resource allocation subject to complex constraints for processing multiple products in multiple production stages and periods. To deal with a multi-product multi-stage multi-period resource allocation problem, it is essential to design efficient and stable algorithms. A fix-and-optimize heuristic with variable neighborhood size is thus proposed for addressing it. The method can decompose the problem into a series of subproblems according to a decomposition scheme. They are iteratively solved. In this work, our goal is to help practitioners to deal with the challenging resource allocation problem in a short time. The effectiveness of the proposed algorithm is validated and tested by comparing its results with those of a commercially available exact solver called CPLEX on various problem instances. Extensive experimental results demonstrate its effectiveness. It can quickly solve small-scale instances with no statistically significant difference from the optimal solutions obtained by CPLEX. When addressing large-scale instances, the proposed algorithm shows better solution performance than CPLEX in a given running time. The algorithm is flexible, accurate, and fast, which implies its great application potential for resource allocation in steel enterprises.
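The fix-and-optimize loop described above can be caricatured on a toy allocation problem: fix all assignments, free one product at a time (the neighborhood), and re-optimize it exactly while the rest stay fixed. The cost matrix, the congestion penalty, and the single-product neighborhood below are illustrative assumptions, far simpler than the paper's MILP subproblems:

```python
# Toy fix-and-optimize loop for a product-to-machine allocation.

cost = {                          # cost[product][machine], illustrative
    "A": [4, 2, 7],
    "B": [3, 6, 1],
    "C": [5, 5, 2],
}

def total_cost(assign, w=3):
    """Assignment cost plus a congestion penalty that couples products."""
    base = sum(cost[p][m] for p, m in assign.items())
    loads = [list(assign.values()).count(m) for m in range(3)]
    return base + w * sum(max(0, load - 1) for load in loads)

def fix_and_optimize(assign):
    improved = True
    while improved:
        improved = False
        for p in assign:                      # neighborhood: free one product
            current = total_cost(assign)
            for m in range(3):
                trial = dict(assign)
                trial[p] = m
                if total_cost(trial) < current:
                    assign[p] = m
                    current = total_cost(trial)
                    improved = True
    return assign

start = {"A": 0, "B": 1, "C": 2}              # initial feasible plan, cost 12
final = fix_and_optimize(dict(start))
print(final, total_cost(final))
```

The real algorithm solves each freed subproblem with an exact solver and varies the neighborhood size, but the contract is the same: every iteration re-optimizes a small piece against a fixed remainder, so each step is cheap and never worsens the plan.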
Article
Batch scheduling problems are NP-hard and often coupled with logistics optimization problems in industrial manufacturing scenarios, further increasing the challenge of decision-making. This work focuses on a stainless steel hot-rolling production process, where the temperature of products from the previous process must be kept above a certain threshold. Our work diverges from typical hot-rolling production processes by necessitating the use of alternative thermal devices, thereby intricately linking batch scheduling with logistics optimization and resulting in a novel lexicographic dual-objective optimization problem. To address this complex problem, we first introduce a mathematical model formulated as a mixed integer program, which provides exact solutions to the concerned problem but requires substantial computation time. To provide effective and efficient solutions, we then propose an enhanced simulated annealing algorithm that integrates destruction and construction methods inspired by iterated greedy algorithms. This algorithm is tailored to the specific characteristics of the problem, incorporating specialized encoding-decoding mechanisms, neighborhood search operators, and a Metropolis acceptance criterion. Our experimental results highlight the effectiveness of the proposed approaches, demonstrating their superiority over competitive peers. Thus, this research contributes valuable insights and innovative solutions to the scheduling and optimization challenges inherent in batch manufacturing processes with temperature constraints and thermal devices.
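The destruction-construction move inside such an annealing loop can be sketched on a one-machine total-completion-time objective, a deliberately simplified stand-in for the paper's dual-objective hot-rolling model; all data and parameters below are invented:

```python
# Simulated annealing with a destruction-construction neighborhood:
# remove a few jobs from the sequence, greedily reinsert each at its
# best position, and accept or reject via the Metropolis criterion.
import math, random

def total_completion(seq, p):
    t, total = 0, 0
    for j in seq:
        t += p[j]
        total += t
    return total

def destroy_construct(seq, p, k=2, rng=random):
    removed = rng.sample(seq, k)                  # destruction
    partial = [j for j in seq if j not in removed]
    for j in removed:                             # greedy construction
        best = min(range(len(partial) + 1),
                   key=lambda i: total_completion(
                       partial[:i] + [j] + partial[i:], p))
        partial.insert(best, j)
    return partial

def anneal(p, iters=300, t0=10.0, cool=0.98, seed=1):
    rng = random.Random(seed)
    best = cur = list(range(len(p)))
    temp = t0
    for _ in range(iters):
        cand = destroy_construct(cur, p, rng=rng)
        delta = total_completion(cand, p) - total_completion(cur, p)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            cur = cand                            # Metropolis acceptance
            if total_completion(cur, p) < total_completion(best, p):
                best = cur
        temp *= cool
    return best

p = [5, 2, 8, 1, 4]                               # processing times
best = anneal(p)
print(best, total_completion(best, p))
```

For this objective the optimum is the shortest-processing-time order (total 43); the paper's version layers a second, lexicographically subordinate objective and problem-specific encode-decode operators on the same skeleton.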
Article
Computer-aided process planning is the bridge between computer-aided design and computer-aided manufacturing. With the advent of the intelligent manufacturing era, process knowledge is important for process planning. Knowledge graph is a semantic representation method of knowledge that has attracted extensive attention from the industry and academia. Process planning using the process knowledge graph has become an important development direction for computer-aided process planning. From the analysis of the published reviews, there have been many computer-aided process planning reviews with different focuses. We focus on the techniques and applications of knowledge graph in manufacturing process planning. Therefore, this paper comprehensively reviews knowledge graphs in manufacturing process planning. We analyze the key technologies of process knowledge graph, including process knowledge representation, process knowledge extraction, process knowledge graph construction, process knowledge graph refinement, process knowledge graph validation, and process generation. We also explore the combination of process knowledge graphs and large language models. Finally, potential future research directions are proposed.
Article
Billet heating temperature directly affects the quality of the billet, but existing technology cannot measure the billet surface temperature. Therefore, we predict the furnace temperature with a soft sensor to approximate the billet temperature. Limited by the complexity of the heating process and the lack of computing resources in factories, existing research pays little attention to, or even cannot meet, the temperature prediction requirements of multiple heating zones in multiple heating furnaces. To address these problems, we propose a temperature prediction method for multiple heating furnaces based on transfer learning and knowledge distillation. The method establishes a multisource domain model of a source-domain furnace on a cloud platform. The multisource knowledge is then transferred to the target-domain heating furnace, and a target-domain teacher model is established by fine-tuning the source-domain model with target-domain data. Next, to predict the furnace temperature efficiently, a shallow multi-task student model is established at the edge server to predict multiple heating zones in the target furnace. Furthermore, a knowledge distillation method for regression prediction is proposed so that the student model can improve its prediction accuracy under the guidance of the cloud teacher model. The effectiveness of the method is verified by experiments on 20 heating-zone datasets from two heating furnaces and on two wind turbine datasets. The consistency of multiple experiments shows that this method can not only improve accuracy by transferring source-domain knowledge but also reduce the model parameter size through knowledge distillation while meeting prediction-error requirements.
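The regression-distillation idea, a student penalized both for missing the measured label and for straying from the teacher's prediction, can be sketched with a scalar linear student; alpha, the learning rate, and the data below are illustrative assumptions:

```python
# Distillation-style regression training: the student fits the ground
# truth and the (cloud) teacher's prediction jointly; alpha balances
# the two terms.

def distill_loss(student_pred, target, teacher_pred, alpha=0.7):
    hard = (student_pred - target) ** 2          # fit the measured label
    soft = (student_pred - teacher_pred) ** 2    # imitate the teacher
    return alpha * hard + (1 - alpha) * soft

def sgd_step(w, x, y, y_teacher, lr=0.05, alpha=0.7):
    pred = w * x
    grad = 2 * x * (alpha * (pred - y) + (1 - alpha) * (pred - y_teacher))
    return w - lr * grad

w = 0.0
for _ in range(200):        # one repeated sample; w converges to the blend
    w = sgd_step(w, x=1.0, y=2.0, y_teacher=1.8)
print(round(w, 3))
```

The fixed point is the alpha-weighted blend of label and teacher targets (here 0.7*2.0 + 0.3*1.8 = 1.94), which is how a small edge student absorbs the cloud teacher's smoother predictions while staying anchored to measured data.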
Article
Representing and reasoning about uncertain causalities has diverse applications in fault diagnosis for industrial systems. Owing to the complicated dynamics and the multitude of uncertain factors in such systems, it is hard to implement efficient diagnostic inference when a process fault occurs. The cubic dynamic uncertain causal graph was proposed for graphically modeling and reasoning about fault-spreading behaviors in the form of causal dependencies across multivariate time series. However, in certain large-scale scenarios with multi-connected and time-varying causalities, the existing inference algorithm cannot carry out the logical reasoning process efficiently. We therefore explore solutions for enhanced computational efficiency. Causality graph decomposition, simplification, and graphical transformation are proposed to reduce model complexity and form a minimal causality graph. An algorithm, event-oriented early logical absorption, is also presented for logical reasoning; it is mainly intended to minimize the computational cost of compulsory absorption operations in the early stage of the reasoning process. The effectiveness of the proposed algorithm is verified on the secondary-loop model of a nuclear power plant, using fault data derived from a nuclear power plant simulator. The results show the anticipated capability of the proposed algorithm for efficient fault diagnosis of large-scale dynamic systems.
Article
The process industry is a branch of manufacturing that uses petroleum, natural gas, coal, ore, biomass, and other resources as its main raw materials. Through physical and chemical reactions, gas-liquid-solid multiphase coexistence, and continuous, complex material conversion processes, it provides bulk raw materials for national economic construction as well as high-end electronic chemicals, polymers, and alloy steels for strategic emerging industries such as integrated circuits, 5G, aerospace, and high-end equipment. In this comment, the future of smart process manufacturing is discussed.
Article
Based on the theory of cloud control systems, an intelligent power plant cloud control system (IPPCCS) is designed to overcome the problems of complex objects, multi-source heterogeneous data, "information islands," and the poor overall optimization and scheduling capability of modern electric power enterprises. To address the strong fluctuation and poor disturbance resistance of green power generation, a machine learning method is used to obtain short-term predictions of wind and solar power from their historical data. Then, in the cloud, an economic model predictive control (EMPC) algorithm provides the predictive power scheduling strategy for the water turbines through real-time rolling optimization, ensuring the robustness of complementary green-energy generation, fully consuming wind and solar power, and reducing the frequency with which the turbines start, stop, and cross their vibration zones. This both provides clear and stable energy support for users and protects the devices. Simulations on an example regional cloud data center show the effectiveness of the proposed method.
Article
As a key component of the Industrial Internet, identification and resolution has been considered a promising technology to realize interconnections among physical and virtual entities and to promote the interoperability of digital objects, which improves industrial efficiency and the transparency of the supply chain. In this article, we focus on identification and resolution design in the Industrial Internet. A general identification and resolution architecture is proposed to guide how to develop and improve technologies and schemes in terms of the joint requirements of service, role, function, implementation, and security in the Industrial Internet. Based on the proposed architecture, three key technologies are studied. Specifically, an identifiable digital object (IDO) model is constructed to enable different Industrial Internet platforms to manage and exchange data uniformly. Then, a hybrid-structure-based identification and resolution system is designed, based on which identification registration and resolution procedures are proposed. Furthermore, a trustable system based on blockchain is deployed to guarantee data credibility. Finally, extensive practical experiments validate the effectiveness of the proposed technologies. The proposed architecture and technologies have been used in the practical construction and deployment of national top-level nodes and secondary-level nodes to sustain identification and resolution services for the Industrial Internet in China.
Article
Temporal data contain a wealth of valuable information, playing an essential role in various machine-learning tasks. Slow feature analysis (SFA), one of the most classic temporal feature extraction models, has been deeply explored over two decades of development. SFA extracts slowly varying features as high-level representations of temporal data. Its core idea of "slowness" has been proven to be consistent with the nature of biological vision and beneficial in capturing significant temporal information for various tasks. So far, SFA has evolved into numerous improved versions and is widely applied in many fields such as computer vision, industrial control, remote sensing, signal processing, and computational biology. However, there is currently no insightful review of SFA. In this article, a comprehensive overview of SFA and its extensions is provided for the first time. The formulation and optimization of SFA are introduced. Two mainstream solutions, the geometric interpretation, and a gradient-based training method of SFA are presented and discussed. Following that, a taxonomy of the current progress of SFA is proposed. We classify improved versions of SFA into six categories, including dual-input SFA (DISFA), online slow feature analysis (OSFA), probabilistic SFA (PSFA), multimode SFA, nonlinear SFA, and discrete labeled SFA. For each category, we illustrate its main ideas, mathematical principles, and applicable scenarios. In addition, the practical applications of SFA are summarized and presented. Finally, we bring new insights into SFA according to its research status and provide potential research directions, which may serve as a good reference for promoting future work.
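The core computation of linear SFA described above can be stated compactly: center and whiten the signal, then find the whitened directions whose temporal derivatives have the smallest variance. A minimal sketch (the two-source test signal is illustrative):

```python
import numpy as np

def sfa(X):
    """Linear slow feature analysis: returns features ordered slowest-first.

    X: (T, d) multivariate time series.
    """
    Xc = X - X.mean(axis=0)
    # Whiten the centered signal.
    cov = Xc.T @ Xc / len(Xc)
    evals, U = np.linalg.eigh(cov)
    W = U / np.sqrt(evals)            # whitening matrix (each column scaled)
    Z = Xc @ W
    # Minimize the variance of the temporal derivative in the whitened space.
    dZ = np.diff(Z, axis=0)
    dcov = dZ.T @ dZ / len(dZ)
    s, V = np.linalg.eigh(dcov)       # ascending eigenvalues = slowest first
    return Z @ V

t = np.linspace(0, 2 * np.pi, 3000)
slow, fast = np.sin(t), np.sin(50 * t)
X = np.column_stack([slow + 0.3 * fast, 0.4 * slow - fast])   # linear mixture
features = sfa(X)
```

On this mixture, the first extracted feature recovers the slowly varying source (up to sign and scale), which is exactly the "slowness" principle the survey discusses.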
Article
With the development of modern industrial processes towards integration and complexity, industrial process operation monitoring is of great significance for ensuring plant safety, product quality, and operating efficiency. However, the inherent nonlinear, dynamic, and plant-wide characteristics make it difficult to evaluate the operating performance accurately. To handle this issue, a key-performance-indicator-related operating performance assessment method based on distributed improved minimal redundancy maximal relevance and kernel output-relevant common trend analysis (ImRMR-KOCTA) is proposed in this paper. First, by replacing mutual information with the maximal information coefficient, minimal redundancy maximal relevance is improved to describe the interdependencies between process variables and key performance indicators, and the correlated variables are retained in each subsystem. Second, based on kernel functions and output-relevant common trend analysis, an assessment model is developed to describe the nonlinearity and dynamics in each subsystem. Then, the operating performance level is determined by Bayesian inference and predefined rules. Finally, a validation on a hot strip mill process is given to verify the effectiveness of the proposed method.
Article
In the context of the Industrial Internet of things (IIoT), large-scale IIoT data is generated, which can be effectively mined to provide valuable information for condition monitoring (CM). However, traditional CM methods cannot meet unprecedented challenges concerning large-scale IIoT data transmission, storage and analysis. Therefore, manufacturers have begun to shift from the traditional manufacturing paradigm to smart manufacturing, which integrates the encapsulated manufacturing services and the enabling cloud-edge computing technology to handle large-scale IIoT data. To enhance the agility, scalability and portability of traditional manufacturing services, a microservices-based cloud-edge collaborative CM platform for smart manufacturing systems is proposed. First, leveraging the microservices management system, the lightweight edge and cloud services are constructed from the microservices level, which enables flexible deployment and upgrade of services. Next, the proposed platform architecture effectively integrates the computing and storage capabilities of the cloud layer and the real-time nature of the edge layer, where the cloud-edge collaborative mechanism is introduced to achieve real-time diagnosis and enhance prognosis accuracy. Finally, based on the proposed system, the diagnosis and prognosis tasks are implemented on a manufacturing line, and the results show that the diagnostic accuracy is 90% and the prediction error is 50%.
Article
Industrial process monitoring and operating performance assessment techniques are of great significance for ensuring the safety and efficiency of production and for improving the comprehensive economic benefits of modern enterprises. In this paper, a new key performance indicator (KPI) oriented nonlinear process monitoring and operating performance assessment method is proposed based on improved Hessian locally linear embedding (HLLE), in view of the strong nonlinearity, high dimensionality, and information redundancy of actual industrial process data. Firstly, in order to characterise the similarities of samples in both temporal and spatial dimensions, a new measurement based on finite Markov theory is defined to replace the Euclidean distance in traditional HLLE. Secondly, by mining the relationships between process variables and the key performance indicator, a KPI-oriented feature extraction method is developed. On this basis, the monitoring statistic is constructed and the corresponding control limit is determined for real-time fault detection. After that, a new operating performance assessment approach based on a sliding-window Kullback–Leibler divergence is put forward to facilitate maintenance or adjustments. Finally, the proposed method is applied to the hot strip mill process, and the results show its effectiveness.
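The sliding-window Kullback–Leibler divergence idea used here for performance assessment can be sketched with univariate Gaussian window models, for which the KL divergence has a closed form. This is a generic illustration, not the paper's exact statistic:

```python
import numpy as np

def kl_gauss(mu0, var0, mu1, var1):
    """Closed-form KL(N0 || N1) for univariate Gaussians."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

def window_kl(signal, ref, win=100):
    """KL of each sliding window of `signal` against a reference (benchmark) window."""
    mu1, var1 = ref.mean(), ref.var()
    out = []
    for i in range(len(signal) - win + 1):
        w = signal[i:i + win]
        out.append(kl_gauss(w.mean(), w.var(), mu1, var1))
    return np.array(out)

rng = np.random.default_rng(1)
ref = rng.normal(0.0, 1.0, 500)            # in-control benchmark data
normal_part = rng.normal(0.0, 1.0, 300)
degraded = rng.normal(1.5, 1.0, 300)       # a mean drift mimics performance degradation
kl = window_kl(np.concatenate([normal_part, degraded]), ref, win=100)
```

While the process stays near the benchmark the divergence hovers near zero; once the window slides into degraded data the statistic rises, which is the trigger for maintenance or adjustment in the assessment scheme.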
Article
Compared with a single fault, the occurrence, evolution, and composition of coupling faults involve more uncertainties and diversities, which makes coupling fault classification a challenging topic in academic research and industrial applications. This brief addresses the classification problems of coupling faults from a new perspective. Specifically, the main innovations are: 1) a classification framework for coupling faults is first proposed, which integrates multiple kernel learning and multilabel dimensionality reduction; 2) label correlations and nonlinear characteristics among coupling faults are fully explored to improve classification performance; and 3) a trace-ratio form of the l1-norm-based objective function is designed to improve the robustness of the multilabel classifier. Extensive experiments on the hot rolling process (HRP) are finally given to validate the effectiveness of the proposed scheme.
Article
In cloud manufacturing systems, fault diagnosis is essential for ensuring stable manufacturing processes. The most crucial performance indicators of fault diagnosis models are generalization and accuracy, and an urgent problem is the lack and imbalance of fault data. Most existing approaches demand fault labels as a priori knowledge and require extensive target fault data; they may also ignore the heterogeneity of various equipment. To address these issues, we propose a cloud-edge collaborative method for adaptive fault diagnosis with label sampling space enlarging, named the label-split multiple-input convolutional neural network, for cloud manufacturing. First, a multiattribute cooperative representation-based fault label sampling space enlarging approach is proposed to extend the variety of diagnosable faults. Besides, a multi-input multi-output data augmentation method with label-coupling weighted sampling is developed. In addition, a cloud-edge collaborative adaptation approach for fault diagnosis of scene-specific equipment in cloud manufacturing systems is proposed. Experiments demonstrate the effectiveness and accuracy of our method.
Article
Quality-related fault detection is crucial for improving system reliability, reducing production costs, and ensuring product quality, and it has been an emerging area of practical interest. An industrial plant-wide process contains several interactive subprocesses and a large number of variables, which makes traditional centralized monitoring methods face severe challenges. In this context, a new distributed detection framework for quality-related faults is designed, which provides a reasonable solution for increasing the monitoring reliability and economic efficiency of industrial plant-wide processes. Specifically, the main innovations are: 1) a performance-based process decomposition method is proposed that combines mechanism knowledge with the affinity propagation (AP) clustering algorithm, which helps find the common variables between subprocesses; 2) a new dynamic mixed kernel partial least squares (DMKPLS) model with information interaction is built for local monitoring; 3) Bayesian inference is implemented to establish statistical indicators for monitoring the plant-wide process. Furthermore, the Tennessee Eastman (TE) process is adopted to verify the fault detection performance of the proposed framework.
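The Bayesian-inference step that fuses block-level statistics into one plant-wide indicator is commonly implemented as follows in the distributed-monitoring literature; the likelihood definitions and the numbers below are a generic sketch rather than this paper's exact formulation:

```python
import numpy as np

def bayesian_fusion(stats, limits, alpha=0.01):
    """Fuse per-block monitoring statistics into one probability-like index.

    stats:  local T^2/SPE-style statistics, one per sub-block.
    limits: the corresponding control limits.
    alpha:  significance level, reused as the prior fault probability.
    """
    stats, limits = np.asarray(stats, float), np.asarray(limits, float)
    p_normal = np.exp(-stats / limits)      # likelihood under normal operation
    p_fault = np.exp(-limits / stats)       # likelihood under fault
    prior_f, prior_n = alpha, 1.0 - alpha
    post_f = p_fault * prior_f / (p_normal * prior_n + p_fault * prior_f)
    # Weight each block's posterior by its fault likelihood and normalize.
    return np.sum(p_fault * post_f) / np.sum(p_fault)

# Illustrative statistics/limits: all blocks in control vs. block 2 far above its limit.
normal_index = bayesian_fusion([3.0, 2.5, 1.0], [9.0, 8.0, 6.0])
faulty_index = bayesian_fusion([3.0, 40.0, 1.0], [9.0, 8.0, 6.0])
```

The fused index is compared against the significance level: it stays below alpha when every block is within its limit and jumps well above it when any block's statistic exceeds its limit.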
Article
The development of cloud manufacturing enables data-driven process monitoring methods to reflect real industrial process states accurately and in a timely manner. However, traditional process monitoring methods cannot update learned models once they are deployed to edge devices, which leads to model mismatch when confronted with time-varying data. In addition, limited resources on the edge prevent it from deploying complex models. Therefore, this article proposes a novel cloud-edge collaborative process monitoring method. First, historical data of industrial processes are collected to establish a dictionary learning model, and the dictionary and classifier are trained in the cloud. Then, the model is simplified and deployed to the edge. The edge layer monitors the process states, including fault detection and working-condition recognition, and determines whether a model mismatch has occurred based on an error-triggered strategy. Both numerical simulations and results on an industrial roasting process verify the superiority of the proposed method.
Article
Semi-supervised learning acts as an effective way to leverage massive unlabeled data. In this paper, we propose a novel training strategy, termed Semi-supervised Contrastive Learning (SsCL), which combines the well-known contrastive loss in self-supervised learning with the cross-entropy loss in semi-supervised learning, and jointly optimizes the two objectives in an end-to-end way. The highlight is that, different from self-training based semi-supervised learning that conducts prediction and retraining over the same model weights, SsCL interchanges the predictions over the unlabeled data between the two branches, and thus formulates a co-calibration procedure, which we find is beneficial for better prediction and avoids being trapped in local minima. Towards this goal, the contrastive loss branch models pairwise similarities among samples using the pseudo labels generated from the cross-entropy branch, and in turn calibrates the prediction distribution of the cross-entropy branch with the contrastive similarity. We show that SsCL produces more discriminative representations and is beneficial to semi-supervised learning. Notably, on ImageNet with ResNet50 as the backbone, SsCL achieves 60.2% and 72.1% top-1 accuracy with 1% and 10% labeled samples respectively, which significantly outperforms the baseline and is better than previous semi-supervised and self-supervised methods.
Article
The Industrial Internet of Things (IIoT), which consists of massive numbers of IoT devices and industrial infrastructures such as wireless access points for acquiring intelligent services, has been regarded as a critical physical information platform for realizing the fourth industrial revolution, otherwise known as Industry 4.0. Currently, a new paradigm called mobile edge computing (MEC) has brought an opportunity to accelerate the development of the IIoT. It has powerful computing capabilities and can be used to provide low-latency services for executing the computation applications generated by IIoT devices. However, existing methods may not be directly applicable to IIoT scenarios due to the large number of IIoT devices, the characteristics of the applications, and the limited and heterogeneous resources of edge servers. In view of this, computation offloading and resource allocation are formulated as a multi-objective optimization problem, and an end-edge-cloud collaborative intelligent optimization method is devised in this paper. Comprehensive experiments and evaluations are carried out to prove the effectiveness and efficiency of the proposed method with regard to the energy and time consumption of IIoT devices, as well as the resource utilization and load balancing of edge servers.
Article
With the rapid advancement of in-process measurements and sensor technology driven by zero-defect manufacturing applications, high-dimensional heterogeneous processes that continuously collect distinct physical characteristics frequently appear in modern industries. Such large-volume high-dimensional data place a heavy demand on data collection, transmission, and analysis in practice. Thus, practitioners often need to decide which informative data streams to observe given the resource constraints at each data acquisition time, which poses significant challenges for multivariate statistical process control and quality improvement. In this article, we propose a generic online nonparametric monitoring and sampling scheme to quickly detect mean shifts occurring in heterogeneous processes when only partial observations are available at each acquisition time. Our innovative idea is to seamlessly integrate the Thompson sampling (TS) algorithm with a quantile-based nonparametric cumulative sum (CUSUM) procedure to construct local statistics of all data streams based on the partially observed data. Furthermore, we develop a global monitoring scheme using the sum of the top-r local statistics, which can quickly detect a wide range of possible mean shifts. Tailored to monitoring heterogeneous data streams, the proposed method balances between exploration, which searches unobserved data streams for possible mean shifts, and exploitation, which focuses on highly suspicious data streams for quick shift detection. Both simulations and a case study are comprehensively conducted to evaluate the performance and demonstrate the superiority of the proposed method. Note to Practitioners — This paper is motivated by the critical challenge of online process monitoring by considering the cost-effectiveness and resource constraints in practice (e.g., limited number of sensors, limited transmission bandwidth or energy constraints, and limited processing time).
Unlike the existing methodologies which rely on the restrictive assumptions (e.g., normally distributed, exchangeable data streams) or require historical full observations of all data streams to be available offline for training, this paper proposes a novel monitoring and sampling strategy that allows the practitioners to cost-effectively monitor high-dimensional heterogeneous data streams that contain distinct physical characteristics and follow different distributions. To implement the proposed methodology, it is necessary: (i) to identify sample quantiles for each data stream based on historical in-control data offline; (ii) to determine which data streams to observe at each acquisition time based on the resource constraints; and (iii) to automatically screen out the suspicious data streams to form the global monitoring statistic. Experimental results through simulations and a case study have shown that the proposed method has much better performance than the existing methods in reducing detection delay and effectively dealing with heterogeneous data streams.
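The interplay of local CUSUM statistics, randomized exploration, and a top-r global statistic can be illustrated on synthetic streams. The selection rule below is a simplified, Thompson-sampling-flavoured stand-in, not the paper's exact TS procedure, and the stream parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
p, r, k, h = 10, 3, 0.5, 12.0    # streams, observed per step, allowance, alarm limit
W = np.zeros(p)                   # one-sided CUSUM statistic per stream
alarm_at = None

for t in range(600):
    x = rng.normal(0.0, 1.0, p)   # the full vector exists, but only r entries are seen
    if t >= 200:
        x[4] += 3.0               # mean shift in stream 4 from t = 200 onward
    # Randomized exploration around the local statistics decides which r
    # streams to observe this step (exploration vs. exploitation trade-off).
    scores = W + rng.exponential(0.5, p)
    observed = np.argsort(scores)[-r:]
    W[observed] = np.maximum(0.0, W[observed] + x[observed] - k)
    if alarm_at is None and np.sort(W)[-r:].sum() > h:
        alarm_at = t
        break
```

Only the observed streams update their statistics, yet the exploration noise guarantees the shifted stream is eventually sampled; once it is, its CUSUM grows quickly, dominates the top-r sum, and raises the alarm shortly after the change point.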
Article
Many core technologies of Industry 4.0 have gained substantial advancement in recent years. Digital Twin (DT) has become the key technology and tool for manufacturing industries to realize intelligent cyber-physical integration and digital transformation by leveraging these technologies. Although there have been many DT-related works, there is no standard definition, unified framework, and implementation approach of DT until now. Widely developing DTs for the manufacturing industry is still challenging. Thus, this paper proposes a novel implementation framework of digital twins for intelligent manufacturing, denoted as IF-DTiM, which possesses several distinct merits to distinguish itself from previous works. First, IF-DTiM fully utilizes new-generation container technology so that DT-related applications and services can be packaged in a self-contained way, rapidly deployed, and robustly operated with the capabilities of failover, autoscaling, and load balancing. Second, it leverages existing intelligent cloud manufacturing services to realize the intelligence for DT externally in a scalable and plug-and-play manner instead of using traditional approaches to embed intelligence in DT. Third, IF-DTiM contains Product DT for products, Equipment DT (i.e., EQ DT) for equipment, and Process DT for production lines, which can generically fulfill the demands and scenarios to achieve intelligent manufacturing for various manufacturing industries. Testing results show that IF-DTiM can achieve remarkable performance in rapid deployment and real-time data exchanges of DT-related applications. Finally, we develop an example DTiM system for CNC machining based on IF-DTiM to demonstrate its efficacy and applicability in facilitating the manufacturing industry to build their DT systems. Note to Practitioners —Developing Digital Twin (DT) systems to realize intelligent manufacturing is challenging. 
The proposed IF-DTiM (Implementation Framework of Digital Twins for Intelligent Manufacturing) provides a novel container-technology and cloud-manufacturing-service-based systematic methodology for building DTiM. In this paper, we present the system architecture and several operational scenarios (e.g., how to create and use DTs) of IF-DTiM, together with the design of its core functional mechanisms (e.g., rapid deployment scheme for DT, real-time data exchange for DT, DT interface pattern, and general workflow architecture for DT). Also, an example DTiM system for CNC machining based on IF-DTiM is presented to facilitate the practitioners to adopt the designs and niches in IF-DTiM to build their desired DTiM systems.
Article
The utilization rate of raw materials during the hydrometallurgical leaching process has a great influence on the overall economic benefits of a hydrometallurgy plant, so it is necessary to improve the utilization of materials in the leaching process using optimization and control methods. In this paper, a dynamic model of the hydrometallurgical leaching process of a gold hydrometallurgy plant is first built based on the reaction mechanism of the process. Then, the model parameters are identified using least-squares fitting. Thereafter, with the maximum economic benefit as the objective function, a steady-state economic optimization model of the leaching process is established, and an improved particle swarm optimization algorithm is used to solve it. Taking the optimization results as the control objective, a model predictive control method based on an improved differential evolution algorithm is proposed to control the leaching process, so as to improve the tracking and disturbance-rejection performance of the controller. The simulation results show that the proposed optimization and control methods achieve satisfactory effects.
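The MPC layer here relies on an improved differential evolution algorithm. A baseline DE/rand/1/bin minimizer (without the paper's improvements, and with a toy quadratic standing in for the economic objective) looks like this:

```python
import numpy as np

def differential_evolution(f, bounds, pop=30, F=0.6, CR=0.9, iters=200, seed=0):
    """Basic DE/rand/1/bin minimizer (a generic sketch, not the paper's variant)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    d = len(lo)
    X = rng.uniform(lo, hi, size=(pop, d))
    fx = np.array([f(x) for x in X])
    for _ in range(iters):
        for i in range(pop):
            # Mutation: base vector plus scaled difference of two others.
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover with at least one mutated gene.
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft <= fx[i]:       # greedy selection
                X[i], fx[i] = trial, ft
    best = np.argmin(fx)
    return X[best], fx[best]

# Toy stand-in for the economic objective: minimize a shifted quadratic.
x_best, f_best = differential_evolution(lambda x: np.sum((x - 1.2) ** 2),
                                        bounds=[(-5, 5)] * 3)
```

In the paper's setting the decision variables would be the MPC control moves and f the leaching-process tracking cost; the mutation/crossover/selection loop is the common core.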
Article
With the continuous improvement of hardware computing power, edge computing of industrial data has gradually been applied. Over the past decade, the promotion of edge computing has also greatly improved the efficiency of industrial production. Compared with conventional cloud computing, it not only saves the bandwidth consumed by data transmission but also ensures terminal data security to a certain extent. However, the continuous emergence of new attack types places new requirements on the privacy protection of industrial edge computing, so the risk of industrial data leakage during deep-model training on edge terminals must be fundamentally addressed. In this paper, we propose a new federated edge learning framework based on hybrid differential privacy and adaptive compression for industrial data processing. Specifically, it first completes the adaptive gradient-compression preparation, then constructs the industrial federated learning model, and finally optimizes it with an adaptive differential privacy model, so as to protect the gradient parameters transmitted in industrial environments. By combining hybrid differential privacy with adaptive compression, terminal data privacy is better protected against inference attacks. The experimental results show that this method is very effective in industrial edge computing scenarios, and it also opens up a new direction for the use of differential privacy in federated learning.
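The gradient-protection step at the heart of such frameworks is the Gaussian mechanism: clip each gradient to a fixed L2 norm, then add calibrated noise before transmission. The sketch below shows this standard step only; the paper's hybrid/adaptive scheme would additionally adapt the clipping bound and noise scale per round:

```python
import numpy as np

def privatize_gradient(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a gradient to L2 norm clip_norm, then add Gaussian noise.

    This is the standard Gaussian-mechanism step used in differentially
    private (federated) learning; noise_multiplier controls the privacy level.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad / max(1.0, norm / clip_norm)     # no-op if already small enough
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

g = np.array([3.0, 4.0])          # norm 5 -> scaled down to norm 1 before noising
private_g = privatize_gradient(g, rng=np.random.default_rng(0))
```

Each client would apply this to its local update before uploading, so the server (or an eavesdropper) only ever sees the noised, norm-bounded gradient.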
Article
The conventional LQG-based economic performance design has proven difficult to apply in industry, and so far there is still no systematic and effective way to improve economic performance. As learned from the LQG benchmark performance assessment method, economic performance in MPC systems can be improved by adjusting controller parameters in addition to the well-known setpoint-change approach. We therefore take advantage of LQG and iterative learning control (ILC) to propose a new two-layer periodical economic performance improvement strategy applicable to industrial MPC systems. By dividing the whole time horizon into multiple intervals called periods and optimizing the performance periodically, the economic performance finally reaches its optimum. Within each period the performance is improved twice. The first improvement comes from the fixed variance obtained from the lower MPC layer, which transforms the nonlinear economic performance function (EPF) of LQG into a linear one. The ILC-based weight-coefficient adjustment algorithm then provides the parameters to the MPC controller for the next period, with an updating principle based on minimizing the tracking error between the current controller's economic performance and the optimal one, which realizes the second improvement. The room for the second improvement is analyzed and the convergence of the algorithm is proved. Finally, the effectiveness and applicability of the strategy are verified on a typical industrial separation process.
Article
To satisfy the safety requirements of modern plant-wide processes, multi-block distributed monitoring strategies are often used to obtain higher monitoring performance. Their two critical issues are a suitable multi-block partition for reducing uncertainties and local-global fault interpretation for practical physical meaning. To handle these problems, a novel multi-level knowledge graph (MLKG) combining domain-expert knowledge and monitoring data is constructed to describe the characteristics of plant-wide processes. The monitoring variables of each node (block) are then used to calculate the node status, which realizes fault detection when the corresponding thresholds are exceeded. Notably, the node statuses across multiple levels are aggregated into a top-level node status that globally characterizes system health for fault detection. Finally, methods such as variable contribution rates can be adopted to locate the fault locally, which can be regarded as an attempt to interpret the fault detection results. Results on a benchmark and a practical case application demonstrate the effectiveness and applicability of the proposed method.
Article
Swarm intelligence algorithms are a subset of the artificial intelligence (AI) field that is growing in popularity for resolving different optimization problems and has been widely utilized in various applications. In the past decades, numerous swarm intelligence algorithms have been developed, including ant colony optimization (ACO), particle swarm optimization (PSO), artificial fish swarm (AFS), bacterial foraging optimization (BFO), and artificial bee colony (ABC). This review surveys the most representative swarm intelligence algorithms in chronological order, highlighting their functions and strengths across 127 research publications. It provides an overview of the various swarm intelligence algorithms and their advanced developments, and briefly describes their successful applications to optimization problems in engineering fields. Finally, opinions and perspectives on the trends and prospects of this relatively new research domain are presented to support future developments.
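As a concrete reference point for the algorithms surveyed, here is a canonical global-best PSO with inertia weight minimizing a sphere function. The parameter values are commonly used defaults, not taken from any specific paper in the review:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical global-best PSO with inertia weight (a minimal reference sketch)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (n_particles, dim))        # positions
    V = np.zeros((n_particles, dim))                  # velocities
    pbest, pbest_f = X.copy(), np.array([f(x) for x in X])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + cognitive pull + social pull.
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = X + V
        fx = np.array([f(x) for x in X])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = X[better], fx[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

best_x, best_f = pso(lambda x: np.sum(x ** 2), dim=4)
```

ACO, ABC, and the other surveyed algorithms differ in how candidate solutions are generated and shared, but they follow the same population-based evaluate-and-update loop.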
Article
Edge services provide an effective and superior means of real-time transmission and rapid processing of information in the Industrial Internet of Things (IIoT). However, the continuous increase in the number of smart devices results in privacy leakage and insufficient model accuracy for edge services. To tackle these challenges, we propose a blockchain-based machine learning framework for edge services (BML-ES) in the IIoT. Specifically, we construct novel smart contracts that encourage multi-party participation in edge services to improve the efficiency of data processing. Moreover, we propose an aggregation strategy to verify and aggregate model parameters, ensuring the accuracy of the decision-tree learning model. Finally, based on the SM2 public-key cryptosystem, we protect data security and prevent data privacy leakage in edge services. Theoretical analysis and simulation experiments indicate that the BML-ES framework is effective, efficient, and well suited to improving the accuracy of edge services in the IIoT.
Article
To properly monitor dynamic large-scale processes, a new distributed dynamic process monitoring strategy named multi-block dynamic weighted principal component regression (DWPCR) is developed in this paper. Because complex plant-wide processes have multiple operation units and complex correlations among variables, traditional global process monitoring models may suppress local fault information and fail to identify incipient and local faults in large-scale processes. Besides, product quality determines the economic benefits of the enterprise. Motivated by these problems, this work studies a distributed quality monitoring strategy. First, the idea of community partition in complex networks is used to divide the large number of process variables into multiple sub-blocks in this new monitoring framework. Then, a monitoring model for each sub-block is established by the proposed DWPCR approach. Moreover, a novel strategy of weighting key components based on fault information is proposed to monitor the process. Finally, the comprehensive monitoring result is fused by Bayesian inference. The superiority of the proposed distributed DWPCR strategy is verified in the case study.
Article
The biggest challenge of task scheduling in Fog computing is satisfying users' dynamic requirements in real time with the limited resource capacities of Fog nodes. The heterogeneity of Fog nodes and the obligation to complete tasks by their deadlines while minimizing cost and energy consumption make scheduling even more challenging. This article facilitates a deeper understanding of the research issues through a detailed taxonomy and distinguishes significant challenges in existing work. Furthermore, it investigates existing solutions to these challenges and presents a meta-analysis of the quality-of-service parameters and tools used to implement Fog task scheduling algorithms. This systematic review will help researchers identify specific research problems and future directions to enhance scheduling efficiency.
Article
In this paper, a cloud-edge collaboration based control framework is proposed for voltage regulation and economic operation in incremental distribution networks (IDNs). Voltage regulation and economic operation, usually treated separately, are integrated in a hierarchical control method that coordinates the active and reactive power of distributed generators (DGs) and distributed storages (DSs) in an 'active' mode. While guaranteeing the voltage security of the IDN, the upper-level multi-objective optimization maximizes the consumption of the DGs, and the lower-level model predictive control (MPC) regulates the dynamics of the DGs and DSs based on the established state-space model. Because of the open environment of the proposed control framework, time delay in the downstream channel is considered; it is compensated by a predictive mechanism derived from the MPC that accounts for model uncertainty. Finally, simulation results demonstrate the validity and robustness of the proposed method.
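The predictive compensation idea can be sketched on a hypothetical scalar system: when the downstream delay of d steps is known, the controller rolls the model forward over the controls already in flight and acts on the predicted state instead of the stale measurement. The dynamics, gain, and names below are illustrative assumptions, not the paper's model:

```python
# Minimal sketch of predictive delay compensation (assumed scalar
# discrete-time system x[k+1] = A*x[k] + B*u[k]; values illustrative).
A, B = 0.9, 0.5
K = 0.8  # an assumed stabilizing state-feedback gain

def compensate_delay(x_meas, u_history, d):
    """Predict the state d steps ahead using the model and the last d
    controls already sent down the delayed channel, then compute the
    feedback law on the predicted state rather than the measurement."""
    x_pred = x_meas
    in_flight = u_history[-d:] if d > 0 else []  # controls sent, not yet applied
    for u in in_flight:
        x_pred = A * x_pred + B * u
    return -K * x_pred
```

With d = 0 this reduces to ordinary state feedback; with d > 0 the prediction removes the destabilizing phase lag that the delay would otherwise introduce, assuming the model is accurate.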
Article
With the development of new technologies, customers can place or cancel an order at any time, disrupting the established production schedule. This reality has forced many companies to respond through rescheduling. While efficiency criteria assess the performance of a scheduling system, in dynamic environments stability criteria measure the impact of job deviation. Unlike previous works, this paper investigates a new performance measure that simultaneously assesses schedule efficiency, via the total weighted waiting time, and schedule stability, via the weighted completion-time deviation. This combined criterion can be highly relevant in industrial and health-care environments. The studied problem is an identical parallel machine rescheduling problem with jobs arriving over time. Based on a predictive-reactive strategy, a Mixed Integer Linear Programming (MILP) model is developed, together with an iterative methodology for the online part. Finally, numerical results are presented, discussing the impact of the efficiency-stability coefficient on system performance as well as the computing time needed to solve the problem.
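The combined efficiency-stability objective can be written as a short function; the trade-off coefficient `lam` and all argument names are assumptions standing in for the paper's exact MILP formulation:

```python
def combined_criterion(waits, wait_weights,
                      new_completion, old_completion, dev_weights,
                      lam=0.5):
    """Hypothetical mixed objective: lam weights schedule efficiency
    (total weighted waiting time) against schedule stability (weighted
    absolute deviation of completion times from the initial schedule)."""
    efficiency = sum(w * t for w, t in zip(wait_weights, waits))
    stability = sum(v * abs(c_new - c_old)
                    for v, c_new, c_old in zip(dev_weights,
                                               new_completion,
                                               old_completion))
    return lam * efficiency + (1.0 - lam) * stability

# Usage: two jobs, one of which finishes one period later after rescheduling
score = combined_criterion(waits=[2.0, 3.0], wait_weights=[1.0, 1.0],
                           new_completion=[5.0, 7.0], old_completion=[5.0, 6.0],
                           dev_weights=[1.0, 1.0], lam=0.5)
```

Sweeping `lam` from 0 to 1 reproduces the spectrum the abstract mentions, from a purely stability-driven reschedule to a purely efficiency-driven one.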
Article
Since the Third Industrial Revolution, technology and the global economy have developed rapidly. Driven by market demand and advances in science and technology, the organisational model of the production system has evolved, which has in turn changed the methods of production scheduling. In the context of the newest industrial revolution (Industry 4.0), this review examines the evolution of production scheduling in terms of economics and technology. First, the literature on production scheduling is summarised and analysed from the perspectives of centralised/decentralised scheduling, distributed scheduling, and cloud manufacturing scheduling. Second, future challenges and trends in production scheduling are discussed in view of the globalisation of manufacturing and the changes in production modes enabled by new technologies. Finally, based on the findings of this review, we predict future expansions of the customer-centric value chain as well as changes in product design and production methods brought about by product personalisation.
Article
In recent years, the manufacturing industry has faced various global challenges. One of them is the increasingly frequent adjustment and reconfiguration of production lines, necessitated by the diversification of customer demands. To improve processing efficiency after production line reconfiguration, this paper puts forward a group learning architecture for intelligent equipment, through which swarm intelligence can be achieved via group learning. From the perspective of edge intelligence, the paper addresses key technology issues in data acquisition and preprocessing, cyber-physical fusion, knowledge extraction and sharing, and equipment performance self-optimization. The proposed approaches improve the processing efficiency of the reconfigured production line.