Figure 5
Prototype Setup with Placement Area (blue), Raspberry Pi (yellow), Camera (location indicated by red arrow), User Input Devices (violet), User Output Device (green)
Source publication
In this paper, a novel lightweight incremental class learning algorithm for live image recognition is presented. It features a dual memory architecture and is capable of learning formerly unknown classes as well as conducting its learning across multiple instances at multiple locations without storing any images. In addition to tests on the ImageNe...
Similar publications
This paper designs a wheelchair with multi-function nursing ability. With a Raspberry Pi as the main control board, it provides functions such as assisting in getting in and out of bed, terminal monitoring, and intelligent risk avoidance. It also offers rehabilitation training for the elderly with stroke or similar conditions, to prevent muscle...
Citations
... However, deep learning models usually assume that the training and testing data are drawn from the same distribution, i.e., supervised learning (SL), making the models rely on a large number of annotated training samples. Collecting and annotating training data for assembly processes is a time-consuming and labor-intensive task that requires substantial manual effort (Maschler et al. 2020). ...
... To address this issue, researchers have developed various techniques to overcome data scarcity. For instance, Li, Zhang, Ding, and Sun (2020) employed data augmentation to increase the training data for intelligent rotating machinery fault inspection, while Krüger, Lehr, Schlueter, and Bischoff (2019) focused on inherent features and Maschler, Kamm, Jazdi, and Weyrich (2020) used incremental learning for industry part recognition. Synthetic data generated from CAD models have also been used to expand training datasets for deep learning in various industrial applications, as described in Cohen et al. (2020); Dekhtiar et al. (2018); Wong et al. (2019); Horváth et al. (2022). ...
In the manufacturing industry, automatic quality inspections can lead to improved product quality and productivity. Deep learning-based computer vision technologies, with their superior performance in many applications, can be a possible solution for automatic quality inspections. However, collecting a large amount of annotated training data for deep learning is expensive and time-consuming, especially for processes involving various products and human activities such as assembly. To address this challenge, we propose a method for automated assembly quality inspection using synthetic data generated from computer-aided design (CAD) models. The method involves two steps: automatic data generation and model implementation. In the first step, we generate synthetic data in two formats: two-dimensional (2D) images and three-dimensional (3D) point clouds. In the second step, we apply different state-of-the-art deep learning approaches to the data for quality inspection, including unsupervised domain adaptation, i.e., a method of adapting models across different data distributions, and transfer learning, which transfers knowledge between related tasks. We evaluate the methods in a case study of pedal car front-wheel assembly quality inspection to identify the possible optimal approach for assembly quality inspection. Our results show that the method using transfer learning on 2D synthetic images achieves superior performance compared with the others. Specifically, it attained 95% accuracy through fine-tuning with only five annotated real images per class. Given these promising results, our method may be suitable for other similar quality inspection use cases. By utilizing synthetic CAD data, our method reduces the need for manual data collection and annotation. Furthermore, our method performs well on test data with different backgrounds, making it suitable for different manufacturing environments.
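As an illustration of the fine-tuning step described in this abstract, the following is a minimal sketch of adapting a pretrained 2D CNN with only a handful of annotated real images per class. The backbone (ResNet-18), image size, optimizer, and directory layout are assumptions for illustration, not the setup reported in the paper.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets

# Minimal fine-tuning sketch (assumptions: ResNet-18 backbone, ImageFolder layout
# with roughly five real images per class; the paper's actual settings may differ).
def finetune(data_dir: str, num_classes: int, epochs: int = 20) -> nn.Module:
    tf = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    loader = torch.utils.data.DataLoader(
        datasets.ImageFolder(data_dir, transform=tf), batch_size=4, shuffle=True)

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # replace classifier head
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```

With only a few images per class, a small batch size and a low learning rate help limit overfitting; in practice the earlier layers could also be frozen so that only the classifier head is trained.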
... The heterogeneity of the data brings special challenges for machine learning algorithms, which are nowadays mostly used for the analysis of data in industrial applications to gain new insights. The algorithms are typically trained for an application on one kind of data, such as object detection in images [6] or failure classification in time series [7]. However, such models are tied to one specific data-source setup. ...
Machine learning implementations in an industrial setting pose various challenges due to the heterogeneous nature of the data sources. A classical machine learning algorithm cannot adapt to dynamic changes in the environment, such as the addition, removal, or failure of a data source. However, handling heterogeneous data and the challenges that come with it is a mandatory capability for building robust and adaptive machine learning models for industrial applications. In this work, a novel architecture for robust and adaptive machine learning is proposed to address these challenges. For this, an architecture consisting of different modular layers is developed, into which different models can easily be plugged. The architecture can handle heterogeneous data with different fusion techniques, which are discussed and evaluated in this paper. The proposed architecture is then evaluated on two public datasets for condition monitoring of automation systems to prove its robustness and adaptiveness. The architecture is compared with baseline models and shows more robust performance in the case of failing or removed data sources. In addition, new data sources can easily be added without the need to retrain the whole model. Furthermore, the architecture can detect and locate faulty data sources.
... This needs to be considered when developing and evaluating analysis algorithms for heterogeneous data. There are specialized and very successful machine learning models for specific data modalities and tasks, such as object detection in images (Maschler et al., 2020), anomaly detection (Lindemann et al., 2020), or failure classification based on time-series data (Kamm et al., 2022a). However, these classical machine learning algorithms do not exploit the variety of data that is often available (Wilcke et al., 2017; Damoulas and Girolami, 2009). ...
In many application domains, data from different sources are increasingly available to thoroughly monitor and describe a system or device. Especially within the industrial automation domain, heterogeneous data and its analysis gain a lot of attention from research and industry, since it has the potential to improve or enable tasks like diagnostics, predictive maintenance, and condition monitoring. For data analysis, machine learning based approaches are mostly used in recent literature, as these algorithms allow us to learn complex correlations within the data. To analyze even heterogeneous data and gain benefits from it in an application, data from different sources need to be integrated, stored, and managed before machine learning algorithms can be applied. In a setting with heterogeneous data sources, the analysis algorithms should also be able to handle data source failures or newly added data sources. In addition, existing knowledge should be used to improve the machine learning based analysis or its training process. To find existing approaches for the machine learning based analysis of heterogeneous data in the industrial automation domain, this paper presents the result of a systematic literature review. The publications were reviewed, evaluated, and discussed concerning five requirements that are derived in this paper. We identified promising solutions and approaches and outlined open research challenges, which are not yet covered sufficiently in the literature.
... Examples of the application of time-series based machine learning models are the failure analysis of electronic devices using CNNs [9], anomaly detection in discrete manufacturing with LSTM networks [10], or indoor localization based on 5G signals [11]. Further, machine learning models are often applied to image-based applications, such as object recognition [12] or solar cell defect detection [13]. With more available data from different sources, multimodal machine learning is gaining increasing interest. ...
... The reason is that deep learning-based object detection methods usually assume the training data and testing data are drawn from the same distribution, i.e., supervised learning, making the model rely on a large number of annotated training samples [2]. Collecting annotated training samples requires plenty of time and manual labor in assembly due to the diversity and complexity of assembly approaches and environments [3]. ...
... Since deep learning has been progressively implemented in industrial quality inspection, various studies have aimed to address the challenge of limited annotated training data. There is research focused on data augmentation [6], inherent features [7], and deep transfer learning [3]. To the best of the authors' knowledge, no method has yet used domain adaptation to solve this problem. ...
A challenge in applying deep learning-based computer vision technologies to assembly quality inspection lies in the diverse assembly approaches and the limited annotated training data. This paper describes a method for overcoming this challenge by training an unsupervised domain-adaptive object detection model on annotated synthetic images generated from CAD models and unannotated images captured from cameras. In a case study of pedal car front-wheel assembly, the model achieves promising results compared to other state-of-the-art object detection methods. Moreover, the method is efficient to implement in production as it does not require manually annotated data.
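A common building block in unsupervised domain adaptation of this kind (though not necessarily the exact mechanism used in this paper) is a gradient reversal layer that pushes features extracted from synthetic and real images to become indistinguishable. The sketch below is a generic, assumed illustration of that idea; the function names and the small discriminator are hypothetical.

```python
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd: float):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def domain_adversarial_loss(features_syn, features_real, discriminator: nn.Module,
                            lambd: float = 1.0) -> torch.Tensor:
    """Binary domain classification loss; the reversed gradient steers the feature
    extractor toward domain-invariant representations."""
    feats = torch.cat([features_syn, features_real], dim=0)
    labels = torch.cat([torch.zeros(len(features_syn)), torch.ones(len(features_real))])
    logits = discriminator(GradientReversal.apply(feats, lambd)).squeeze(-1)
    return nn.functional.binary_cross_entropy_with_logits(logits, labels)

# Hypothetical usage: add this term to the detector's supervised loss on synthetic images.
disc = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))
loss = domain_adversarial_loss(torch.randn(8, 256), torch.randn(8, 256), disc)
```

In the cited work the adaptation is built into an object detector; the sketch only isolates the domain-alignment term.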
... However, practical experience shows that there are some differences in the characteristics of the echo spectrum between a multi-aircraft formation and a single target, and experienced operators can effectively distinguish these differences. Deep learning has been applied successfully to visual tasks [19], so applying it to multi-aircraft recognition should also achieve good results. In deep learning-based recognition there is no need to calculate statistics or other parameters; only a trained classifier network is needed to automatically recognize the subtle differences in spectral features in the range-Doppler image. ...
Over-the-horizon radar (OTHR) is important equipment for ultralong-range early warning in the military, but the use of the constant false-alarm rate (CFAR) method, a traditional detection approach, makes multi-aircraft formation recognition difficult. To solve this problem, a multi-aircraft formation recognition method based on deep transfer learning in OTHR is proposed. First, the range-Doppler images of aircraft formations in OTHR are simulated, composed of four categories of samples. Second, a recognition model based on a Convolutional Neural Network (CNN) and CFAR detection technology is constructed, whose training method is designed as a two-step transfer. Finally, the trained model can distinguish the spectral characteristics of aircraft formations well and then recognize the number of aircraft in a formation. Experiments show that the proposed method outperforms the traditional CFAR detection method and can determine the number of aircraft in a formation more accurately at the same false-alarm rate.
... Machine vision is an important branch of artificial intelligence (AI). A convolutional neural network (CNN) is a typical deep-learning technology applied in machine vision [2]. CNNs have been widely used for image recognition and object detection [3], [4]. ...
Interdisciplinary integration of theory and practice is imperative as a course requirement in emerging engineering education, including in the public elective course "Machine Vision Algorithm Training". Considering the entire teaching process, including pre-training, in-training, and post-training, this paper discusses the course construction and content in detail in terms of project-based learning (PBL). The PBL teaching approach and evaluation methods are described in detail through a comprehensive face recognition training case based on a convolutional neural network (CNN) and Raspberry Pi. Through project design training that progresses from simple to complex, interdisciplinary integration of theory and practice is cultivated, stimulating students' interest in the course. The results demonstrate that PBL teaching improves the engineering application and innovative abilities of students.
... Various research projects have aimed at solving the problem of limited labeled training data for deep learning. There is research focused on data augmentation [9], inherent features [4], and deep transfer learning [10]. However, the majority of research focuses on 2D image data. ...
This paper proposes an end-to-end method for automatic assembly quality inspection based on a point cloud domain adaptation model. The method involves automatically generating labeled point clouds from various CAD models and training a model on those point clouds together with a limited number of unlabeled point clouds acquired by 3D cameras. The model can then classify newly captured point clouds from 3D cameras to execute assembly quality inspection with promising performance. The method has been evaluated in an industry case study of pedal car front-wheel assembly. By utilizing CAD data, the method is less time-consuming for implementation in production.
... Artificial Intelligence (AI) is considered a key technology for data processing and shows a great deal of potential for benefits [5]. Throughout the domain of AI, Machine Learning (ML) methods and techniques are one of the main drivers; especially Deep Learning-based approaches show outstanding results for specific tasks, e.g. in image recognition [6]. These approaches need a huge amount of well-structured and homogeneous data for training a ...
With the rise of the Internet of Things and Industry 4.0, the number of digital devices and the data they produce increase tremendously. Due to the heterogeneity of devices, the generated data is mostly heterogeneous and unstructured. This challenges established approaches for knowledge discovery, which typically consume structured data from one source. The paper first describes aspects of data heterogeneity and their relevance for Industry 4.0 systems. Next, the upcoming challenges for different steps inside the knowledge discovery process for Industry 4.0 systems, such as data integration and data mining, are discussed. Additionally, it mentions approaches to tackle them.
... These algorithms consume training data to model the observed input-output behavior in a data-driven approach and have shown impressive results in various domains (e.g. Natural Language Processing [3] or Image Recognition [4]). Due to data-driven learning, these models are considered "black boxes" whose outputs are not comprehensible. ...
... After the detection and localization of an anomaly, the anomalous measurement is fed into a data-driven classifier for failure classification, which maps the input to a categorical output, as given in (4). The possible categories in our scenario are obtained from Table 1, with seven possible failure categories indexed 0 to 6 (two hard failures and five soft failures). ...
... The pooling size is 4 and the dropout rate is 0.25. The 1D-convolutional layers have filter sizes of 64, 32, 16, 8, 4 and kernel sizes of 5, 5, 3, 3, and 3. For all hidden layers (five convolutional and one fully connected), a rectified linear unit (ReLU) is used as the activation function. ...
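As a concrete reading of the layer description above, here is a minimal PyTorch sketch of such a 1D CNN. The input length, number of input channels, size of the fully connected hidden layer, and the placement of pooling and dropout after every convolution are assumptions for illustration; the cited paper's exact architecture may differ.

```python
import torch
import torch.nn as nn

class TdrFailureClassifier(nn.Module):
    """Sketch of the described 1D CNN: five convolutional layers with filter sizes
    64, 32, 16, 8, 4 and kernel sizes 5, 5, 3, 3, 3, pooling size 4, dropout 0.25,
    ReLU activations, and one fully connected hidden layer (size assumed)."""

    def __init__(self, in_channels: int = 1, num_classes: int = 7):
        super().__init__()
        filters = [64, 32, 16, 8, 4]
        kernels = [5, 5, 3, 3, 3]
        layers, prev = [], in_channels
        for f, k in zip(filters, kernels):
            layers += [
                nn.Conv1d(prev, f, kernel_size=k, padding=k // 2),
                nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Dropout(0.25),
            ]
            prev = f
        self.features = nn.Sequential(*layers)
        self.global_pool = nn.AdaptiveAvgPool1d(1)  # keeps the sketch length-independent
        self.hidden = nn.Sequential(nn.Linear(prev, 32), nn.ReLU(), nn.Dropout(0.25))
        self.out = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                  # (batch, 4, reduced_length)
        x = self.global_pool(x).squeeze(-1)   # (batch, 4)
        return self.out(self.hidden(x))       # logits over the seven failure categories

# Example: a batch of 8 single-channel TDR traces of length 1024 (length assumed).
logits = TdrFailureClassifier()(torch.randn(8, 1, 1024))
```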
Electronic devices are one of the key factors for recent advances in smart production systems or automotive. Reliability and robustness are key issues. To further increase this reliability, occurring failures in an electronic device has to be investigated in post-production failure analysis processes. One recent technique to detect and locate failures in electronic components is Time-Domain Reflectometry. This method offers the chance to detect several kinds of failures (e.g. a hard or soft failure) and localize the failure nondestructively. In theory, this can be determined following defined physical formulas. Nevertheless, the received signals are not perfect and mixed with noise from the measurement device or disturbed by nonoptimal material properties. In addition, complex architectures of devices are hard to model based on analytical models. Thus, these models solely are not sufficient for the failure analysis process. For this reason, a hybrid modeling approach is proposed, using a Machine Learning model in combination with physical models to detect and characterize the failure and its exact position. The Machine Learning model will be trained with simulated Time-Domain Reflectometry data.