In this work, we introduce a novel setup for an augmented workplace, which allows for defining and interacting with user-defined tangibles. State-of-the-art tangible user interface systems equip both the underlying surface and the tangible control with sensors or markers. At the workplace, having unique tangibles for each available action results in confusion. Furthermore, tangible controls mix with regular objects and induce a messy desk. Therefore, we introduce the concept of user-defined tangibles, which enable a spontaneous binding between physical objects and digital functions. With user-defined tangibles, the need for specially designed tangible controls disappears and each physical object on the augmented workplace can be turned into a tangible control. We introduce our prototypical system and outline our proposed interaction concept.
Due to the increasing complexity of products and the demographic change at manual assembly workplaces, interactive and context-aware instructions for assembling products are becoming more and more important. Over the last years, many systems using head-mounted displays (HMDs) and in-situ projection have been proposed. We are observing a trend toward assistive systems that use in-situ projection to support workers during work tasks. Recent advances in technology enable robust detection of almost every work step performed at such workplaces. With this improvement in robustness, continuous use of assistive systems at the workplace becomes possible. In this work, we provide results of a long-term study at an industrial workplace with an overall runtime of 11 full workdays. In our study, each participant assembled for at least three full workdays using in-situ projected instructions. We separately considered two different user groups comprising expert and untrained workers. Our results show a decrease in performance for expert workers and a learning success for untrained workers.
These days, customers can choose and configure nearly every detail of a product they want to buy, making each manufactured product a unique piece rather than just another part in a large-scale production. The availability of such complex built-to-order systems is introducing a more complex manufacturing process, requiring more cognitive effort compared to the traditional built-to-stock approach. The manual-assembly workplace is thus becoming a challenging environment. To help the workers better manage such environments, the authors have built an assistive system that can be integrated directly into the workplace using in-situ projection for augmented reality. This article is part of a special issue on digitally enhanced reality.
Today's working society tries to integrate more and more impaired workers into everyday working processes. One major scenario for integrating impaired workers is the assembly of products. However, the tasks being assigned to cognitively impaired workers are easy tasks consisting of only a small number of assembly steps. For tasks with a higher number of steps, cognitively impaired workers need instructions to help them with assembly. Although supervisors provide general support and assist new workers in learning new assembly steps, sheltered work organizations often provide additional printed pictorial instructions that actively guide the workers. To further improve continuous instructions, we built a system that uses in-situ projection and a depth camera to provide context-sensitive instructions. To explore the effects of in-situ instructions, we compared them to state-of-the-art pictorial instructions in a user study with 15 cognitively impaired workers at a sheltered work organization. The results show that using in-situ instructions, cognitively impaired workers can assemble more complex products up to 3 times faster and with up to 50% fewer errors. Further, the workers liked the in-situ instructions provided by our assistive system and would use it for everyday assembly.
With the demographic change and a generally increasing product complexity, there is a growing demand for assistance technology to cognitively support workers during industrial production processes. Many approaches, including head-mounted displays, smart gloves, and in-situ projection, have been suggested to provide cognitive support for the workers. Recently, research has focused on improving the cognitive feedback by using activity recognition to make it context-aware. Thereby, an assistance technology is able to detect work steps and provide additional feedback in case the worker makes mistakes. However, when designing feedback for a rather monotonous task, such as product assembly, it should be designed in a way that neither over-challenges nor under-challenges the worker. In this paper, we sketch out requirements for providing cognitive assistance at the workplace that can adapt to the worker's needs in real-time. Further, we discuss challenges and provide design suggestions.
The continuous digitalization will have a lasting effect on the working world. It results in changing work conditions and cognitive requirements with respect to the human worker. Here, the manual assembly workplace is an outstanding example of the growing complexity and variety of digital work processes in manufacturing industries. This thesis presents a novel approach to support cognitive work processes and to automate corresponding information flows at the manual assembly workplace. It combines two disciplines, computer science and psychology, in order to systematically derive and develop the technological components and procedures of cognitive information assistance, based on psychological models of human understanding and acting. The proposed approach of cognitive information assistance aims at supporting human thinking, learning, memorizing, and remembering as sub-processes of conscious decision making. It addresses the situational dependency on changing assembly requirements and follows a methodological framework which combines the psychological models of team cognition and cognitive apprenticeship. Both are used for the technological engineering of collaboration between the human worker and the cognitive information assistance system. In this thesis, cognitive requirements at the manual assembly workplace as well as technological conditions are used to develop a generalized architecture for cognitive information assistance and to detail the corresponding components and procedures. This architecture was realized with the Plant@Hand assembly assistance system based on the cognitive architecture Soar. The assistance system automatically guides workers through the execution of assembly tasks and generates work instructions based on the current work situation and on data provided by manufacturing information systems.
The conceptual approach of cognitive information assistance was evaluated in comparison to traditional assembly manuals in order to analyze the effects on the quality and efficiency of the manual assembly. The evaluation showed a significant decrease in assembly errors and a significant increase in efficiency with the usage of cognitive information assistance. The scientific and technical contributions of this thesis are the transdisciplinary concepts, methods, and procedures for supporting cognitive work processes and for automating corresponding information flows.
With projectors and depth cameras getting cheaper, assistive systems in industrial manufacturing are becoming increasingly ubiquitous. As these systems are able to continuously provide feedback using in-situ projection, they are perfectly suited for supporting impaired workers in assembling products. However, so far little research has been conducted to understand the effects of projected instructions on impaired workers. In this paper, we identify common visualizations used by assistive systems for impaired workers and introduce a simple contour visualization. Through a user study with 64 impaired participants, we compare the different visualizations to a control group using no visual feedback in a real-world assembly scenario, i.e., assembling a clamp. Furthermore, we introduce a simplified version of the NASA-TLX questionnaire designed for impaired participants. The results reveal that the contour visualization is significantly better in terms of perceived mental load and perceived performance of the participants. Further, participants made fewer errors and were able to assemble the clamp faster using the contour visualization compared to a video visualization, a pictorial visualization, and a control group using no visual feedback.
With the advent of computer-supported systems in industrial production processes (Industrie 4.0), more and more applications become possible that can help workers during complex tasks. This contribution addresses manual order picking and presents an overview of computer-supported assistive systems for this activity. The focus lies on the human-computer interface, which is gaining ever greater importance in the course of Industrie 4.0. First, an overview is given of the human-computer interfaces that have been proposed in industry and research. Then, projection-based order picking assistance systems designed within the research project motionEAP are presented.
Research on how to take advantage of Augmented Reality and Virtual Reality applications and technologies in the domain of manufacturing has brought forward a great number of concepts, prototypes, and working systems. Although comprehensive surveys have taken account of the state of the art, the design space of industrial augmented and virtual reality keeps diversifying. We propose a visual approach towards assessing this space and present an interactive, community-driven tool which supports interested researchers and practitioners in gaining an overview of the aforementioned design space. Using such a framework, we collected and classified relevant publications in terms of application areas and technology platforms. This tool shall facilitate initial research activities as well as the identification of research opportunities. Thus, we lay the groundwork; forthcoming workshops and discussions shall address its refinement.
Order picking is not only one of the most important but also one of the most mentally demanding and error-prone tasks in industry. Both stationary and wearable systems have been introduced to facilitate this task. Existing stationary systems are not scalable because of their high cost, and wearable systems have issues being accepted by the workers. In this paper, we introduce a mobile camera-projector cart called OrderPickAR, which combines the benefits of both stationary and mobile systems to support order picking through Augmented Reality. Our system dynamically projects in-situ picking information into the storage system and automatically detects when a picking task is done. In a lab study, we compare our system to existing approaches, i.e., Pick-by-Paper, Pick-by-Voice, and Pick-by-Vision. The results show that using the proposed system, order picking is almost twice as fast as with the other approaches, the error rate is reduced by up to 9 times, and mental demands are reduced by up to 50%.
We are observing a trend that more and more manual assembly workplaces are equipped with sensor technology to assist workers during complex work tasks. These assistive systems mostly use visual feedback for providing assembly instructions or hinting at errors. However, a red light indicating an error might not always be the best solution for communicating that an error was made, or might be overlooked in stressful situations. Therefore, we extended an assistive system to compare haptic, auditory, and visual error feedback at the manual assembly workplace. Through two user studies, we first determine suitable variants for each error feedback modality and second compare the error feedback modalities against each other. The results show that haptic feedback is appropriate for retaining the worker's privacy, while auditory feedback is perceived as most distracting. The subjective feedback reveals interesting insights for future research on communicating errors, as a combination of haptic and visual feedback might lead to noticing an error faster.
Order picking is one of the most complex and error-prone tasks found in industry. To support the workers, many order picking instruction systems have been proposed. A large number of systems focus on equipping the user with head-mounted displays or equipping the environment with projectors to support the workers. However, combining the user-worn design dimension with in-situ projection has not yet been investigated in the area of order picking. With this paper, we aim to close this gap by introducing HelmetPickAR: a body-worn helmet using in-situ projection for supporting order picking. Through a user study with 16 participants, we compare HelmetPickAR against a state-of-the-art Pick-by-Paper approach. The results reveal that HelmetPickAR leads to significantly less cognitive effort for the worker during order picking tasks. While no difference was found in errors and picking time, the placing time increases.
This paper introduces the Social Augmented Learning (SAL) application, with which Augmented Reality (AR) can be applied in vocational training and on-the-job training situations. In this way, complex interdependencies of modern industrial machines can be visualized immediately, which facilitates the transmission and training of knowledge-intensive work tasks and leads to an increased training quality. We show the current state of research on AR use in vocational training and, based on the identified research gaps, formulate the requirements on which the SAL application is based. We then describe the development, implementation, and evaluation of this application, with a focus on the application design. In the context of SAL, we then answer the previously formulated research questions and show further potential for research in the field of vocational training with AR.