... Um et al. also tested a solution of transmitting captured images from smart glasses to a server for processing and re-transmitting the results, thus reducing the strain on the batteries. The servers were placed in the architecture in the form of edge computing. ...
... This solution would also have the added advantage of being less dependent on specific smart glass interface designs. The results did not show an improvement in time with current wireless technology. Szajna et al. also propose a setup using edge computing, but use it to monitor the production line rather than for visualization. ...
This article aims to provide a better understanding of Augmented Reality Smart Glasses (ARSG) for assembly operators from two perspectives, namely, manufacturing engineering and technological maturity. A literature survey considers both these perspectives of ARSG. The article's contribution is an investigation of the current status as well as challenges for future development of ARSG regarding usage in the manufacturing industry in relation to the two perspectives. This survey thereby facilitates a better future integration of ARSG in manufacturing. Findings include that commercially available ARSG differ considerably in their hardware specifications. The Technology Readiness Level (TRL) of some of the components of ARSG is still low, with displays having a TRL of 7 and tracking a TRL of 5. A mapping of tracking technologies and their suitability for industrial ARSG identified Bluetooth, micro-electro-mechanical sensors (MEMS) and infrared sensors as potentially suitable technologies to improve tracking. Identified future work is to also explore the operator perspective of ARSG in manufacturing.
... More specifically, we provide the requirements of the tasks, the ML methods used to fulfil the tasks, and the advantages and disadvantages of those methods. Table 2 provides a collection of most of these works; its flattened rows are reconstructed below as application | ML task | ML method:

… | Object detection | CNN
Real-time moving object detection for AR | Object detection | Background-foreground nonparametric-based
Deep-learning-based smart task assistance for wearable AR (HoloLens) | Object detection and instance segmentation | Mask R-CNN
Interacting with IoT devices in AR environment | Real-time hand gesture recognition (2D) | CNN
Edge-assisted distributed DNN for mobile WebAR | Object recognition | DNN
Object detection and tracking for face and eyes AR | Object detection, object recognition | Histogram of Oriented Gradients, Haar-like features
AR surgical scene understanding improved with ML | Object identification | Random forest
Dynamic image recognition methods for AR | Feature extraction, object recognition | CNN, XGBoost
AR platform for interactive aerodynamic design and analysis | Object recognition | Manifold learning
AR retail product identification | Object detection | SSD
Improving retail shopping experience with AR | Object recognition | ResNet50 (CNN)
AR instructional system for mechanical assembly | Object detection | Faster R-CNN
Deep learning for AR | Object tracking, light estimation | CNN
AR for radiology | Image segmentation | CNN
AR design personalisation for facial accessory products | Facial tracking | AdaBoost
Low cost AR for automotive industry | Feature extraction, object classification | Linear SVM, CNN
AR training framework for neonatal endotracheal intubation | Assessing task performance | CNN
AR video calling with WebRTC API | Semantic segmentation | CNN
AR gustatory manipulation | Food-to-food translation | GAN

Edge:
Edge-based inference for ML at Facebook | Image classification | DNN
Supporting vehicle-to-edge for vehicle AR | Object detection | Deep CNN (YOLO)
AR platform for operators in production environments | Object detection | SSD
Spatial AR with single IR camera | 3D pose estimation, image classification | Hough Forests, Random Ferns
Federated learning for low-latency object detection and classification | Object classification modelling | Federated learning ...
... Examples of server-based deployment include usage of CNNs for object detection to support AR tracking, Mask R-CNN for object detection and instance segmentation to support smart task assistance for HoloLens-deployed AR, and CNN for object recognition in improving retail AR shopping experiences. Additionally, some works explicitly state their usage of edge servers, citing the processing acceleration gained by the object detection and recognition tasks when a GPU is used to execute the functions, for example, using YOLO to accomplish object detection to support vehicle-to-edge AR, and SSD for supporting an edge-based AR platform for operators in production environments. Comparatively, there are several research works which deploy object detection and recognition in an enclosed client system, i.e., without needing to offload computation components to an external server or device. ...
Mobile Augmented Reality (MAR) integrates computer-generated virtual objects with physical environments for mobile devices. MAR systems enable users to interact with MAR devices, such as smartphones and head-worn wearables, and perform seamless transitions from the physical world to a mixed world with digital entities. These MAR systems support user experiences by using MAR devices to provide universal accessibility to digital contents. Over the past 20 years, a number of MAR systems have been developed; however, the studies and design of MAR frameworks have not yet been systematically reviewed from the perspective of user-centric design. This article presents the first effort to survey existing MAR frameworks (count: 37) and further discusses the latest studies on MAR through a top-down approach: 1) MAR applications; 2) MAR visualisation techniques adaptive to user mobility and contexts; 3) systematic evaluation of MAR frameworks, including supported platforms and corresponding features such as tracking, feature extraction, and sensing capabilities; and 4) underlying machine learning approaches supporting intelligent operations within MAR systems. Finally, we summarise the development of emerging research fields and the current state of the art, and discuss important open challenges and possible theoretical and technical directions. This survey aims to benefit both researchers and MAR system developers alike.
... Digital Twins provide means to monitor and control Cyber-Physical Systems (CPSs) in various domains, such as smart manufacturing, biology, or autonomous driving. They serve different purposes, such as analysis, control, or behavior prediction. ...
Digital Twins in smart manufacturing must be highly adaptable for different challenges, environments, and system states. In practice, there is a need for enabling the configuration of Digital Twins by domain experts. Low-code approaches seem to be a meaningful solution for configuration purposes but often lack extension options. We propose a model-driven low-code approach for the configuration and reconfiguration of Digital Twins using language plugins. This approach uses model-driven software engineering and software language engineering methods to derive a configurable digital twin implementation. Moreover, we discuss some remaining challenges such as interoperability, language modularity, evolution, integration of assistive services, collaborative development, and web-based debugging.
... In order to improve the yield of assembly operations, providing support to human workers is necessary. Augmented Reality (AR) could be used, reducing the number of engineering/production management resources needed to provide assembly operators with cognitive support to perform their tasks [45,46], as well as cognitive/handling skills transfer systems, self-adapting automatic quality control, or cognitive automation strategies. Automation needs to ensure human safety, which led to research on Human-Robot Collaboration (HRC) plan recognition and trajectory prediction, and the concept of "safety bubble". ...
In a demand context of mass customization, shifting towards the mass personalization of products, assembly operations face the trade-off between highly productive automated systems and flexible manual operators. Novel digital technologies—conceptualized as Industry 4.0—suggest the possibility of simultaneously achieving superior productivity and flexibility. This article aims to address how Industry 4.0 technologies could improve the productivity, flexibility and quality of assembly operations. A systematic literature review was carried out, including 234 peer-reviewed articles from 2010–2020. As a result, the analysis was structured addressing four sets of research questions regarding (1) assembly for mass customization; (2) Industry 4.0 and performance evaluation; (3) Lean production as a starting point for smart factories, and (4) the implications of Industry 4.0 for people in assembly operations. It was found that mass customization brings great complexity that needs to be addressed at different levels from a holistic point of view; that Industry 4.0 offers powerful tools to achieve superior productivity and flexibility in assembly; that Lean is a great starting point for implementing such changes; and that people need to be considered central to Assembly 4.0. Developing methodologies for implementing Industry 4.0 to achieve specific business goals remains an open research topic.
... Finally, Volkan et al. proposed an edge computing system as a solution for implementing a Cyber-Physical System. Um et al. also show the usage of auxiliary devices for manual operators supported by an object detection service running on the edge device. These studies will be the technological basis for creating a CPPS that can utilize AI services on the shop floor. ...
Flexibility in mass-customized manufacturing can be supported significantly by the introduction of Cyber-Physical Production Systems and the connection of production modules to AI (artificial intelligence) Cloud services. Even though standardized protocols exist from device to IT system, there are still challenges in the synchronization between cyber-model and physical object, and in the application of decision making in the cyber-model. Although high-performance machine learning services make the Cloud a preferred computation node, possibly unstable connections with manufacturing resources enforce new service distribution approaches in the network. This paper proposes an Edge Computing architecture which acts as the mediator between machines, by providing local Cloud services with fast response times and preprocessing resources for vast amounts of data. As an illustrative example, the selected Edge service pre-processes data from an augmented reality device in order to communicate with the cyber-model in real time. The Edge platform controls the computing resources and prioritizes all processes of Edge Services for a dynamic update of production lines and human-machine interaction.
Modern technologies and recently developed digital solutions make their way into all aspects of the lives of individuals and businesses, and the manufacturing industry is no exception. In the era of the digital revolution of industry, manufacturing processes can benefit from digitalization technologies immensely. Digital twin (DT) is a technology concept that aims to create a digital mirror of a physical system with a constant data flow between the two components. This idea can be used for monitoring and optimization of the present system as well as forecasting and estimating its future states. There have been theoretical and practical studies conducted on DT in the manufacturing area. This systematic literature review (SLR) aims to summarize the current state of the literature and shine a light on open areas for future research. Using a rigorous SLR method, 247 relevant studies from 2015 to 2020 are examined to answer a set of research questions. The current state of DT in the manufacturing literature is analyzed and explained with an emphasis on where future studies may go in this area.
As factories move towards Industry 4.0, the replacement of control communications based on Ethernet and other wired technologies with wireless becomes an imperative to fulfill the envisioned flexibility and easy reconfigurability of production facilities. While wired connections are often unwanted for their limited flexibility and high maintenance costs, common wireless technologies in manufacturing such as Bluetooth, Wi-Fi, ZigBee, or 4G cannot fulfil the requirements of timeliness, reliability, data rates, scalability and availability of cyber-physical production systems. Today, the fifth-generation (5G) broadband cellular network technology holds the promise of an enhanced Quality of Service (QoS) and Quality of Experience (QoE) that unlock a huge amount of value and opportunity. However, at present, the application of 5G wireless communication technology in the manufacturing industry is still in its infancy. In this paper, 5G is presented as a game changer for human-machine and human-robot interaction in three manufacturing use cases (i.e. uRLLC-based human-robot collaboration, eMBB-based AR-assisted operations and mMTC-based interaction with the digital twin). A QoS/QoE model is presented to drive the implementation of 5G-aided solutions for the Operator 4.0, thus maximizing the network quality and acceptance rate of digital technologies on the shop floor (which have never been widely accepted and used in practice by the industrial workforce). This paper explores how 5G could finally provide the necessary network infrastructure for human-machine symbiosis in the factory of the future.
The purpose of this work is to analyze trends in the use of augmented reality technologies in Russian organizations. The authors consider the problem of implementing AR technologies in the economic processes of enterprises in the conditions of Russian economic, legislative, social and technical barriers. The relevance of this problem is confirmed by the high demand of Russian enterprises for AR projects, while the Russian consumer market for augmented reality technologies is lagging behind the world market. This article analyzes the materials of analytical and consulting companies, as well as current data and indicators that characterize consumers of augmented reality projects and the market of AR technologies. As a result, opportunities to overcome barriers to the introduction of augmented reality technologies and prospects for the development of the Russian consumer market for these technologies were identified.
In the Industry 4.0 era, Digital Twins (DTs), virtual copies of a system that are able to interact with their physical counterparts in a bi-directional way, seem to be promising enablers to replicate production systems in real time and analyse them. A DT should be capable of guaranteeing well-defined services to support various activities such as monitoring, maintenance, management, optimization and safety. Through an analysis of the current picture of manufacturing and a literature review of already existing DT environments, this paper identifies what is still missing in implemented DTs to be compliant with their description in the literature. Particular focuses of this paper are the degree of integration of the proposed DTs with the control of the physical system, in particular with Manufacturing Execution Systems (MES) when the production system is based on the Automation Pyramid, and the services offered by these environments, comparing them to the reference ones.
This paper also proposes a practical implementation of a DT in a MES-equipped assembly laboratory line of the School of Management of the Politecnico di Milano. The application has been created to lay the basis for overcoming the missing implementation aspects found in the literature. In this way, the developed DT paves the way for future research to close the loop between the MES and the DT, taking into consideration the number of services that a DT could offer in a single environment.
Maintenance processes are generally subdivided into tasks with specified goals for the concerned practitioners. Collaboration is achieved through coordination, cooperation and communication. Smart devices and the Internet of Things (IoT) improve communication between humans and machines alike, so the potential gain in cooperation and coordination in an IoT-enabled environment is examined. In this approach, a concept for a framework is proposed to create Augmented Reality based collaboration assistant systems. AR is not only a tool for the visualization of maintenance data, but also for team-wide communication and the display of warnings or other coordination-related indications. The framework is validated with the presented use case in a laboratory environment.
In this paper, we study the trade-off between accuracy and speed when building an object detection system based on convolutional neural networks. We consider three main families of detectors --- Faster R-CNN, R-FCN and SSD --- which we view as "meta-architectures". Each of these can be combined with different kinds of feature extractors, such as VGG, Inception or ResNet. In addition, we can vary other parameters, such as the image resolution, and the number of box proposals. We develop a unified framework (in Tensorflow) that enables us to perform a fair comparison between all of these variants. We analyze the performance of many different previously published model combinations, as well as some novel ones, and thus identify a set of models which achieve different points on the speed-accuracy tradeoff curve, ranging from fast models, suitable for use on a mobile phone, to a much slower model that achieves a new state of the art on the COCO detection challenge.
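Although the meta-architectures compared above differ in how they propose and score boxes, all three detector families end with the same post-processing step: greedy non-maximum suppression over scored detections. As a minimal illustrative sketch (not the paper's Tensorflow framework), with boxes given as (x1, y1, x2, y2):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box, drop boxes overlapping it, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```

Varying `iou_threshold` is, like the image resolution and number of box proposals discussed in the paper, one more knob on the speed/accuracy curve: a lower threshold suppresses more duplicates but risks dropping nearby true objects.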
In today's business environment, the efficiency of warehouses can be critical for the efficiency of the overall supply chains they belong to. As a result, new technologies are being tested and adopted in industry to improve the performance of warehouse operations. An example technology that has recently gained interest from both academia and industry is augmented reality. In this paper, we investigate the opportunities arising from the usage of augmented reality in warehouses as well as the barriers to its industrial adoption. This is done via a series of practitioner interviews and via an experiment designed using Google Glass. Our results indicate that even though the technology is not mature enough at the moment, the potential benefits it can offer make it promising for the near future.
One of the main problems encountered in manual assembly workstations is human error in performing the operations. Several approaches are currently used to face this problem, such as intensive training of personnel, poka-yoke devices, or invasive sensing systems (e.g. sensing gloves) used for monitoring the process and detecting wrong procedures or errors in joining the parts. This paper proposes an innovative system based on the interaction between a force sensor and augmented reality (AR) equipment, used to give the worker the necessary information about the correct assembly sequence and to alert him in case of errors. The force sensor is placed under the workbench and is used to monitor the assembly process by collecting force and torque data with respect to an XYZ reference system; a pattern recognition technique allows error identification and the selection of the appropriate recovery procedure. Two AR devices have been tested in this application: a video-mixing spatial display and an optical see-through apparatus, comparing the pros and cons of these two solutions. The first device includes a CCD camera positioned over the workstation and an LCD display used by the worker as a support for the correct execution of assembly operations and for receiving instructions about recovery procedures. The latter consists of a head-mounted display (HMD) having the capability of reflecting projected images in front of the worker's eyes, allowing a real-world view with the superimposition of virtual objects. The CCD camera is also used for identifying errors that are not detectable by the force sensor. Finally, a case study concerning a typical assembly procedure is presented and discussed.
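The paper does not specify the pattern recognition technique applied to the force/torque data. As an illustrative sketch only, a nearest-centroid matcher over reference signatures could serve that role; the feature vectors and labels below are invented for the example, not taken from the paper:

```python
import math

# Hypothetical reference signatures (e.g. mean force, mean torque, peak force
# per assembly step). In a real system these would be learned from recorded
# sensor data of correct and faulty executions.
REFERENCE = {
    "correct_insertion": [12.0, 0.5, 3.0],
    "misaligned_part":   [25.0, 6.0, 9.0],
}

def classify_step(sample, references=REFERENCE):
    """Label a force/torque feature vector by its nearest reference centroid."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(references, key=lambda label: dist(sample, references[label]))
```

The returned label would then index the appropriate recovery procedure shown to the worker through the AR display.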
Manual spot welding compares unfavourably with automated spot welding, not because of a higher execution time, but due to an inferior quality of welded points, mostly low repeatability. This is not a human fault: the human welder is compelled to operate without having at his disposal the knowledge of significant process features that are known to the robot: the exact position of the welding spot, the electric parameters to be adopted for every specific point, the quality of the welded spot and, based on it, the possible need to repeat a defective weld.
We are on the verge of ubiquitously adopting Augmented Reality (AR) technologies to enhance our perception and help us see, hear, and feel our environments in new and enriched ways. AR will support us in fields such as education, maintenance, design and reconnaissance, to name but a few. This paper describes the field of AR, including a brief definition and development history, the enabling technologies and their characteristics. It surveys the state of the art by reviewing some recent applications of AR technology as well as some known limitations regarding human factors in the use of AR systems that developers will need to overcome.
Caching at mobile edge servers can smooth temporal traffic variability and reduce the service load of base stations in mobile video delivery. However, the assignment of multiple video representations to distributed servers is still a challenging question in the context of adaptive streaming, since any two representations from different videos or even from the same video will compete for the limited caching storage. Therefore it is important, yet challenging, to optimally select the cached representations for each edge server in order to effectively reduce the service load of base station while maintaining a high quality of experience (QoE) for users. To address this, we study a QoE-driven mobile edge caching placement optimization problem for dynamic adaptive video streaming that properly takes into account the different rate-distortion (R-D) characteristics of videos and the coordination among distributed edge servers. Then, by the optimal caching placement of representations for multiple videos, we maximize the aggregate average video distortion reduction of all users while minimizing the additional cost of representation downloading from the base station, subject not only to the storage capacity constraints of the edge servers, but also to the transmission and initial startup delay constraints of the users. We formulate the proposed optimization problem as an integer linear program (ILP) to provide the performance upper bound, and as a submodular maximization problem with a set of knapsack constraints to develop a practically feasible cost benefit greedy algorithm. The proposed algorithm has polynomial computational complexity and a theoretical lower bound on its performance. Simulation results further show that the proposed algorithm is able to achieve a near-optimal performance with very low time complexity. 
Therefore, the proposed optimization framework reveals the caching performance upper bound for general adaptive video streaming systems, while the proposed algorithm provides some design guidelines for the edge servers to select the cached representations in practice based on both the video popularity and content information.
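The cost-benefit greedy algorithm described above can be sketched in miniature. The version below assumes a single edge server with one storage constraint and a precomputed scalar utility per representation (e.g. expected distortion reduction weighted by popularity); the paper's actual formulation additionally coordinates multiple servers and enforces transmission and startup-delay constraints:

```python
def greedy_cache(representations, capacity):
    """Select cached video representations by utility per unit of storage.

    representations: list of (name, size, utility) tuples.
    Returns the names chosen before the storage capacity is exhausted.
    """
    chosen, used = [], 0
    for name, size, utility in sorted(
        representations, key=lambda r: r[2] / r[1], reverse=True
    ):
        if used + size <= capacity:
            chosen.append(name)
            used += size
    return chosen
```

Ranking by utility-to-size ratio is the standard greedy step for submodular maximization under a knapsack constraint, which is what gives the paper's algorithm its polynomial complexity and performance lower bound.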
Augmented Reality (AR) systems have in recent years shown great potential in the manufacturing context: recent pilot projects were developed for supporting quicker product and process design, as well as control and maintenance activities. The high technological complexity together with the wide variety of AR devices requires high technological skill; on the other hand, evaluating their actual impacts on the manufacturing process is still an open question. Few recent studies have analysed this topic, using qualitative approaches based on an “ex post” analysis, i.e. after the design and/or the adoption of the AR system, for evaluating the effectiveness of a developed AR application. The paper proposes an expert-based tool for supporting production managers and researchers in carrying out a preliminary ex-ante feasibility analysis, quantitatively assessing the most efficient single AR devices (or combinations thereof) to be applied in specific manufacturing processes. A multi-criteria model based on the Analytic Hierarchy Process (AHP) method is proposed to provide decision makers with quantitative knowledge for more efficiently designing AR applications in manufacturing. The model makes it possible to integrate, in the same decision support tool, technical knowledge regarding AR devices with critical process features characterizing manufacturing processes, thus allowing the contribution of the AR device to be assessed in a wider perspective compared to current technological analyses. A test case study on the evaluation of an AR system in on-site maintenance service is also discussed, aiming to validate the model and to outline its global applicability and potential. The obtained results highlight the efficacy of the proposed model in supporting ex-ante feasibility studies.
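For background, the AHP method at the core of such a model derives criterion weights from a pairwise-comparison matrix; a common approximation of the principal eigenvector averages the normalized columns. A minimal sketch with an invented two-criterion judgment (the paper's actual criteria and comparison values are not reproduced here):

```python
def ahp_priorities(pairwise):
    """Approximate AHP priority vector: normalize each column of the
    pairwise-comparison matrix, then average across each row."""
    n = len(pairwise)
    col_sums = [sum(pairwise[i][j] for i in range(n)) for j in range(n)]
    return [
        sum(pairwise[i][j] / col_sums[j] for j in range(n)) / n
        for i in range(n)
    ]

# Hypothetical judgment: "display quality" rated 3x as important as "weight".
weights = ahp_priorities([[1.0, 3.0], [1.0 / 3.0, 1.0]])
```

The resulting weights sum to one and can then score each candidate AR device against the process features, which is the quantitative comparison the proposed decision-support tool automates.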
We present an augmented reality system that supports human workers in a rapidly changing production environment. By providing spatially registered information on the task directly in the user's field of view, the system can guide the user through unfamiliar tasks (e.g. assembly of new products) and visualize information directly in the spatial context where it is relevant. In the first version we present the user with picking and assembly instructions in an assembly application. In this paper we present the initial experience with this system, which has already been used successfully by several hundred users who had no previous experience in the assembly task.
In 1991, Mark Weiser described the vision of a future world under the name of Ubiquitous Computing. Since then, many details of the described vision have become reality: Our mobile phones are powerful multimedia systems, our cars computer systems on wheels, and our homes are turning into smart living environments. All these advances must be turned into products for very cost-sensitive world markets in shorter cycles than ever before. Today, the resulting requirements for design, setup, and operation of our factories become crucial for success. In the past, we often increased the complexity in structures and control systems, resulting in inflexible monolithic production systems. But the future must become lean, not only in organization, but also in planning and technology! We must develop technologies which allow us to speed up planning and setup, to adapt to rapid product changes during operation, and to reduce the planning effort. To meet these challenges we should also make use of the smart technologies of our daily lives. But for industrial use, there are many open questions to be answered. The existing technologies may be acceptable for consumer use but not yet for industrial applications with high safety and security requirements. Therefore, the SmartFactoryKL initiative was founded by industrial and academic partners to create and operate a demonstration and research test bed for future factory technologies. Many projects develop, test, and evaluate new solutions. This presentation describes changes and challenges, and it summarizes the experience gained to date in the SmartFactoryKL.
We discuss industrial augmented reality. Each industrial process imposes its own peculiar requirements. This creates the need for specialized technical solutions, which in turn poses new sets of challenges. Because most industries must concern themselves with at least some of these industrial procedures, we consider design, commissioning, manufacturing, quality control, training, monitoring and control, and service and maintenance. AR lets users reconstruct virtual models of their area of interest and visualize models within their static views of a real scene.
Summary form only given. The author is working on two research projects in Boeing Computer Services that have to do with virtual reality technology. The first involves importing aircraft CAD data into a VR environment. Applications include a wide range of engineering and design activities, all of which involve being able to view and interact with the CAD geometry as if one were inside an actual physical mockup of the aircraft. He refers to the technology being explored in the second project as “Augmented Reality”. This entails the use of a see-through head-mounted display with an optical focal length of about 20 inches, along with a VR-style position/orientation sensing system. The intended application area is touch labor manufacturing: superimposing diagrams or text onto the surface of a workpiece and stabilizing them there on specific coordinates, so that the appropriate information needed by a factory worker for each step of a manufacturing or assembly operation appears on the surface of the workpiece as if it were painted there. The hardest technical problem for augmented reality is position tracking. Long-range head position/orientation sensing systems that can operate in factory environments are needed. This requirement and others give rise to some interesting computational problems, including wearer registration and position sensing using image processing.