Diagram of a single-chip DLP projector

Source publication
Conference Paper
Full-text available
Teaching complex assembly and maintenance skills to human operators usually requires extensive reading and the help of tutors. In order to reduce the training period and avoid the need for human supervision, an immersive teaching system using spatial augmented reality was developed for guiding inexperienced operators. The system provides textual an...

Contexts in source publication

Context 1
... the wide range of technologies and hardware configurations (an example for DLP is shown in Figure 1), the output of a video projector can be seen as an inverse pinhole camera (diagram shown in Figure 4), given the grid disposition of the projected image and the very low distortion of modern projectors. As such, rendering of 3D virtual environments for spatial augmented reality can be performed efficiently using an extended version of the OpenGL projection matrix (presented in Equation (8)). ...
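Equation (8) itself is not reproduced in this snippet, but the construction it refers to is well known: the calibrated pinhole intrinsics are folded into an OpenGL-style clip-space matrix so the projector can be rendered as an inverse camera. Below is a minimal sketch of one common form of that mapping; the function name, the near/far planes and the signs of the principal-point terms (which depend on the image-origin convention) are assumptions, not the paper's exact Equation (8).

```python
import numpy as np

def projection_from_intrinsics(fx, fy, cx, cy, width, height,
                               near=0.1, far=100.0):
    """Build a 4x4 OpenGL-style projection matrix from pinhole intrinsics
    (camera looking down -Z; image origin assumed at the top-left corner)."""
    return np.array([
        [2.0 * fx / width, 0.0, (width - 2.0 * cx) / width, 0.0],
        [0.0, 2.0 * fy / height, -(height - 2.0 * cy) / height, 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2.0 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ])

# Illustrative values only; real intrinsics come from calibration
P = projection_from_intrinsics(fx=1500.0, fy=1500.0, cx=960.0, cy=540.0,
                               width=1920, height=1080)
```

Note that OpenGL expects column-major storage, so the matrix would be transposed before being uploaded to the fixed-function pipeline or a shader uniform.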
Context 2
... intrinsic parameters of a DLP projector can be computed through image analysis of complementary gray code patterns (example in Figure 7) projected onto a chessboard. The calibration system proposed in [6] was used to retrieve the 5 intrinsic parameters (Fx, Fy, Cx, Cy, S) and 5 distortion coefficients of the projector, along with the 3D position and rotation of the projector in relation to the camera (which remains firmly attached to the projector support for fast recalibration of the extrinsic parameters, as seen in Figure 18). Five sets of 42 gray code image patterns were used, captured with the chessboard in different positions and orientations in relation to the projector, which was pointing at the table workspace at a distance of 0.81 meters. ...
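As a concrete illustration of what "complementary gray code patterns" means here, the sketch below generates column-coded stripe pairs (each bit plane plus its inverse) with NumPy; the resolution and function name are assumptions, not the exact patterns of [6]. Decoding works per camera pixel by thresholding each pair: the pixel gets bit 1 when the pattern is brighter than its complement, and the decoded gray code identifies the projector column that illuminated it.

```python
import numpy as np

def gray_code_patterns(width, height):
    """Generate complementary column-coded gray code stripe patterns.
    Returns a list of (pattern, inverse) uint8 image pairs."""
    n_bits = int(np.ceil(np.log2(width)))
    columns = np.arange(width, dtype=np.uint32)
    gray = columns ^ (columns >> 1)            # binary-reflected gray code
    patterns = []
    for bit in range(n_bits - 1, -1, -1):      # most significant bit first
        stripe = ((gray >> bit) & 1).astype(np.uint8) * 255
        image = np.tile(stripe, (height, 1))   # vertical stripes
        patterns.append((image, 255 - image))  # pattern + complement
    return patterns

# Example: 11 bit-plane pairs for a 1920x1080 projector image
pairs = gray_code_patterns(1920, 1080)
```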
Context 3
... for lightweight rendering it can also start in server mode without a GUI. The immersive HMI that was developed (shown in Figures 9 and 10) projects detailed textual information about the current assembly task into the workspace, along with a video showing the operation being performed by an expert operator. Given the high variability of assembly/maintenance operations, the system was designed to decompose the assembly process into a set of small and concise operations. ...
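To make the decomposition concrete, here is a hypothetical sketch of the kind of step structure such a design implies: each concise operation carries the projected text and the expert video, and the navigation buttons simply move an index over the list. All names, fields and paths below are illustrative assumptions, not the paper's data model.

```python
from dataclasses import dataclass

@dataclass
class AssemblyStep:
    """One concise operation of the decomposed assembly process."""
    title: str
    instructions: str   # textual information projected into the workspace
    video_path: str     # expert demonstration video for this step

steps = [
    AssemblyStep("Brush holder", "Insert the brushes into the brush holder.",
                 "videos/step_01.mp4"),
    AssemblyStep("Armature", "Mount the armature, keeping brushes perpendicular.",
                 "videos/step_02.mp4"),
]
current = 0  # first / previous / next / last buttons map to index updates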
Context 4
... user interaction with the projected HMI is done by analyzing the 3D point cloud sensor data that falls within a set of Regions of Interest (ROIs), which are shown in Figure 11 as 4 green cubes for navigating through the assembly steps (first, previous, next and last), 1 dark blue box for pausing/playing the video and 1 yellow box for the video seek functionality (examples of a user interacting with the HMI are shown in Figures 12 to 14). ...
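A minimal sketch of this ROI test, assuming axis-aligned boxes in the sensor frame: keep the points inside the box and trigger the button action when enough of them accumulate. The coordinates, threshold and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def points_in_roi(cloud, roi_min, roi_max):
    """Return the subset of an Nx3 point cloud inside an axis-aligned box."""
    mask = np.all((cloud >= roi_min) & (cloud <= roi_max), axis=1)
    return cloud[mask]

# Hypothetical 'next step' button ROI (coordinates in meters, sensor frame)
next_roi_min = np.array([0.10, 0.00, 0.60])
next_roi_max = np.array([0.15, 0.05, 0.70])

cloud = np.random.rand(10000, 3)   # stand-in for live 3D sensor data
inside = points_in_roi(cloud, next_roi_min, next_roi_max)
if len(inside) > 50:               # debounce threshold (assumed)
    print("ROI activated: advance to next assembly step")
```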
Context 5
... a ROI state machine activates its action, the centroid of the 3D sensor data (shown as spheres in Figure 11) is computed to provide visual debugging feedback of the HMI state and to support higher-level perception, namely in the seek bar ROI (the vertical yellow box in Figure 11), in which the seek time is computed from the relative position of the finger within the ROI (the bottom of the ROI corresponds to the start of the current video, while the top corresponds to its end). To perform proper 3D rendering and also be able to estimate the 6 DoF pose of an object within the workspace, a 3D CAD or mesh model of the final product is required. ...
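The seek-bar mapping reduces to normalizing the centroid height inside the ROI. A small sketch, under the assumption that the box's vertical direction is the y axis:

```python
import numpy as np

def seek_time_from_points(points, roi_min, roi_max, video_duration):
    """Map the centroid height of the points inside the seek-bar ROI to a
    video timestamp (ROI bottom -> 0 s, ROI top -> video_duration)."""
    centroid = points.mean(axis=0)   # also usable as visual debug feedback
    # Assumes axis 1 (y) is the vertical axis of the yellow seek-bar box.
    fraction = (centroid[1] - roi_min[1]) / (roi_max[1] - roi_min[1])
    return float(np.clip(fraction, 0.0, 1.0)) * video_duration
```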
Context 6
... performing proper 3D rendering and estimating the 6 DoF pose of an object within the workspace require a 3D CAD or mesh model of the final product. Given the lack of publicly available CAD models for the Mitsubishi M000T20873 starter motor (shown in Figure 15), it was necessary to perform 3D reconstruction of the object. The 3D mesh model shown in Figure 16 was generated with the David Laser 3D structured light system and built by surface matching algorithms using sensor data retrieved from 38 scans in which the starter motor was moved and rotated several times. ...
Context 7
... achieve this goal, the 3D point cloud registration system (drl) described in [17] was fine-tuned for our table-top use case. Namely, the reference point cloud preprocessing pipeline was configured to randomly select 3000 vertices of the reconstructed starter motor mesh (small green circles shown in Figure 17) and to compute the Scale Invariant Feature Transform (SIFT) keypoints [18] (large yellow circles) and their associated Fast Point Feature Histogram (FPFH) descriptors [19]. Later on, the registration pipeline for the sensor point clouds was set up. ...
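The drl pipeline itself is not reproduced here; as a rough, hedged analogue of this kind of coarse-to-fine registration, the sketch below uses Open3D (0.12+ API) with FPFH features, RANSAC feature matching and ICP refinement. Open3D offers no 3D SIFT keypoint detector, so a voxel-downsampled cloud stands in for the keypoint selection; the voxel size and thresholds are assumptions.

```python
import open3d as o3d

def register(source, target, voxel=0.01):
    """Coarse-to-fine alignment: FPFH feature matching with RANSAC,
    refined with point-to-plane ICP (a stand-in for the drl pipeline)."""
    def preprocess(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
        return down, fpfh

    src_down, src_fpfh = preprocess(source)
    tgt_down, tgt_fpfh = preprocess(target)
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src_down, tgt_down, src_fpfh, tgt_fpfh, True, 1.5 * voxel,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * voxel)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    fine = o3d.pipelines.registration.registration_icp(
        src_down, tgt_down, voxel, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return fine.transformation  # 4x4 pose of source in the target frame

# Usage sketch (hypothetical file names):
# mesh_cloud = o3d.io.read_point_cloud("starter_motor.ply")
# sensor_cloud = o3d.io.read_point_cloud("sensor_frame.ply")
# T = register(mesh_cloud, sensor_cloud)
```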
Context 8
... Figure 17 shows an example of the estimation of the 6 DoF pose of the starter motor. The left image displays the overlay of the reference point cloud on top of the 3D sensor data at the previously estimated pose (before the operator occluded the part with his hand and moved it to a new place), while the right image shows the updated pose after alignment, which correctly detected that the operator had moved the part to the right and rotated it 90° clockwise. ...
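Once the alignment returns a 4x4 rigid transform, the kind of motion reported above (a translation to the right plus a 90° rotation) can be read directly off the matrix. A small illustrative sketch:

```python
import numpy as np

def describe_motion(T):
    """Summarize a 4x4 rigid transform: translation vector and rotation angle
    (via the trace of the rotation block). Illustrative only."""
    t = T[:3, 3]
    angle = np.degrees(np.arccos(np.clip((np.trace(T[:3, :3]) - 1) / 2, -1, 1)))
    return t, angle

T = np.eye(4)
T[:3, :3] = [[0, 1, 0], [-1, 0, 0], [0, 0, 1]]  # 90-degree rotation about z
T[0, 3] = 0.2                                   # 0.2 m shift along x
t, angle = describe_motion(T)
print(f"moved {t} m, rotated {angle:.0f} deg")  # -> rotated 90 deg
```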
Context 9
... was tested with a BenQ W1070 DLP projector for overlaying the teaching information onto the environment, an Asus Xtion Pro Live structured light 3D sensor for 6 DoF pose estimation of objects, and a Kinect 2 Time of Flight (ToF) 3D sensor for the user interaction analysis. Figure 18 shows the work area and the hardware disposition (in the right image the projector is at the top right, the Kinect 2 is on the left, the Asus Xtion is below the projector, and the David Laser 3D structured light system camera is at the top). ...
Context 10
... training session started by gathering all the assembly parts and the tools required for performing the starter motor assembly (shown in Figure 19). Then, using the proposed immersive teaching system, the operator read the instructions, watched the videos and navigated through the assembly steps using the projected interaction buttons (displayed in Figures 12 to 14) until he completed the assembly process. Namely, the operator started by assembling the brushes into the brush holder (seen in Figure 20) and then bent the braided cables to ensure that the brushes were perpendicular to the armature, which was assembled later on (shown in Figure 21). These kinds of operations, which involve flexible parts with cables and rubbers, are very hard to automate with robotic manipulators and, as such, are ideal candidates for being assigned to human operators. ...

Citations

Article
Human–Robot Collaboration is a critical component of Industry 4.0, contributing to a transition towards more flexible production systems that are quickly adjustable to changing production requirements. This paper aims to increase the natural collaboration level of a robotic engine assembly station by proposing a cognitive system powered by computer vision and deep learning to interpret implicit communication cues of the operator. The proposed system, which is based on a residual convolutional neural network with 34 layers and a long short-term memory recurrent neural network (ResNet-34 + LSTM), obtains assembly context through action recognition of the tasks performed by the operator. The assembly context was then integrated into a collaborative assembly plan capable of autonomously commanding the robot tasks. The proposed model showed strong performance, achieving an accuracy of 96.65% and a temporal mean intersection over union (mIoU) of 94.11% for the action recognition of the considered assembly. Moreover, a task-oriented evaluation showed that the proposed cognitive system was able to leverage the recognized human actions to command the adequate robot actions with near-perfect accuracy. As such, the proposed system was considered successful at increasing the natural collaboration level of the considered assembly station.
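As a hedged illustration of the ResNet-34 + LSTM pattern this abstract describes (not the authors' implementation), a per-frame CNN feature extractor can feed a recurrent layer that classifies the action over a clip; the hidden size, clip length and all other hyperparameters below are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

class ActionRecognizer(nn.Module):
    """Per-frame ResNet-34 features fed to an LSTM over the clip; a sketch
    of the ResNet-34 + LSTM pattern with assumed sizes (hidden=512)."""
    def __init__(self, num_actions, hidden=512):
        super().__init__()
        backbone = resnet34(weights=None)
        backbone.fc = nn.Identity()          # keep the 512-d pooled features
        self.backbone = backbone
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_actions)

    def forward(self, clips):                # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])         # logits for the clip's action

# Smoke test on random frames (2 clips of 8 frames each)
logits = ActionRecognizer(num_actions=10)(torch.randn(2, 8, 3, 224, 224))
```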
Article
Smart manufacturing supported by emerging Industry 4.0 technologies is a key driver to realize mass product customizations. Augmented reality (AR) has been commonly applied to facilitate manual operations with ambient intelligence by overlaying virtual information on physical scenes. In most modern factories, maintenance remains an indispensable process that is difficult or yet to be fully automated. Several studies have previously reviewed AR-based maintenance across all industrial sectors, whereas those specific to manufacturing did not necessarily involve maintenance. Hence, this paper presents a systematic literature review on AR-assisted maintenance in manufacturing with a focus on the operator’s needs. A generic process has been proposed to classify the maintenance operations examined in the past studies into four sequential steps and to analyze the classification results based on the geographical location, maintenance type, AR technical elements, and integrated external sensors. The findings thus derived are expected to provide design guidelines for implementing AR applications with practical values to aid manual maintenance in future smart manufacturing environments.
Article
Full-text available
Augmented Reality (AR) has gradually become a mainstream technology enabling Industry 4.0, and its maturity has also grown over time. AR has been applied to support different processes on the shop-floor level, such as assembly and maintenance. As various processes in manufacturing require high quality and near-zero error rates to ensure the demands and safety of end-users, AR can also equip operators with immersive interfaces to enhance productivity, accuracy and autonomy in the quality sector. However, there is currently no systematic review paper about AR technology enhancing the quality sector. The purpose of this paper is to conduct a systematic literature review (SLR) to assess the emerging interest in using AR as an assisting technology for the quality sector in an Industry 4.0 context. Five research questions (RQs), with a set of selection criteria, are predefined to support the objectives of this SLR. In addition, different research databases are used for the paper identification phase, following the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) methodology, to find the answers to the predefined RQs. It is found that, in spite of lagging behind the assembly and maintenance sectors in terms of AR-based solutions, there is growing interest in developing and implementing AR-assisted quality applications. Current AR-based solutions for the quality sector fall into three main categories: AR-based apps as a virtual Lean tool, AR-assisted metrology, and AR-based solutions for in-line quality control. In this SLR, an AR architecture layer framework has been improved to classify articles into different layers, which are finally integrated into a systematic design and development methodology for long-term AR-based solutions for the quality sector in the future.
Article
Maintenance of technical equipment in manufacturing is essential for sustained productivity with minimal downtime. Eliminating unscheduled interruptions and monitoring equipment health in real time can potentially benefit from adopting augmented reality (AR) technology. Employing this technology effectively in maintenance demands a fundamental comprehension of user requirements by production planners. Although augmented reality applications have been developed to assist various manufacturing operations, no previous study has examined how these user requirements in maintenance have been fulfilled and what opportunities exist for further development. Existing reviews on maintenance have covered all industrial fields rather than focusing on a specific industry. In this regard, a systematic literature review was performed on studies of augmented reality applications in the maintenance of manufacturing entities from 2017 to 2021. Specifically, the review examines how user requirements have been addressed by these studies and identifies gaps for future research. The user requirements are drawn from the challenges encountered during AR-based maintenance in manufacturing, following an approach similar to usability engineering methodologies. The needs are identified as ergonomics, communication, situational awareness, intelligence sources, feedback, safety, motivation, and performance assessment. Contributing factors to those needs are cross-tabulated with the requirements, and the results are presented as trends, prior to drawing insights and providing suggestions for future work based on the observations made. Keywords: Augmented reality; Maintenance; Usability; User requirements
Article
This study explored whether instructions for manual industrial installation delivered through a head-worn Augmented Reality display (AR-HWD) are better than instructions on a stationary monitor. A prototype consisting of virtual instruction screens was designed for two example assembly tasks. In a comparative analysis, participants carried out the tasks with instructions through an AR-HWD and on a stationary screen. Task performance and user experience were measured through questionnaires, interviews, and observation notes. The study showed that participants enjoyed exploring the technology and were enthusiastic about it. The perceived utility in the current situation varied, but the users saw a tremendous opportunity for the future with AR-HWDs. Task accuracy with the AR-HWD instructions was as high as with the screen. AR-HWDs are a promising alternative to a stationary screen, but technical limitations must be addressed and employees need to be trained in the new technology in order to make its application effective.