Projection-Based Augmented Reality Assistance for
Manual Electronic Component Assembly Processes
Marco Ojer 1,*, Hugo Alvarez 1, Ismael Serrano 1, Fátima A. Saiz 1, Iñigo Barandiaran 1,
Daniel Aguinaga 2, Leire Querejeta 2 and David Alejandro 3
1Vicomtech Foundation, Basque Research and Technology Alliance (BRTA), 20009 Donostia, Spain;
firstname.lastname@example.org (H.A.); email@example.com (I.S.); firstname.lastname@example.org (F.A.S.);
2Ikor Technology Centre, 20018 Donostia, Spain; email@example.com (D.A.); firstname.lastname@example.org (L.Q.)
3IKOR Sistemas Electrónicos, 20018 Donostia, Spain; email@example.com
Received: 19 December 2019; Accepted: 16 January 2020; Published: 22 January 2020
Abstract: Personalized production is moving the progress of industrial automation forward and demanding new tools to improve the decision-making of operators. This paper presents a
new, projection-based augmented reality system for assisting operators during electronic component
assembly processes. The paper describes both the hardware and software solutions, and depicts the
results obtained during a usability test with the new system.
Keywords: computer vision; augmented reality; projection mapping
1. Introduction
It is evident that initiatives such as the German paradigm of "Industry 4.0" or some other similar
ones all around the world are having a deep impact on the manufacturing sector, and thus are reshaping
the industry. The development of such paradigms is accelerating the development and deployment of
advanced ICT-related technologies [ ], transforming many aspects, such as the industrial workforce
and the way they develop their tasks. Even though customer-centric and demand-driven production
is moving forward through the progress of industrial automation, the need for a better and more
empowered human workforce is more pressing than ever. The next human workforce should have
new and more powerful tools that allow them to improve their decision-making processes, to more easily
adapt to changing production conditions and to adopt strategies for continuous training. Along with
the development of the Industry 4.0 paradigm appears the concept of Operator 4.0 [ ]. This concept is
driven by several objectives, such as to simplify the day-to-day work, while improving efficiency and
autonomy by focusing on added value tasks, all in a comfortable and healthy working environment.
This paper proposes a new system based on augmented reality (AR) for assisting operators during
manual assembly of electronic components. As mentioned before, a customer-centric oriented and
personalized production requires continuous changes in production lines. The electronics sector is
not an exception in this regard. This industry has many automated processes for the assembly of
electronic components for electronic boards, also known as printed circuit boards (PCB), but there are
also many manual assembly stages along the production lines. Operators perform the monotonous task
of board assembly over considerable periods of time; therefore, they are likely to experience fatigue
and distractions. Furthermore, the low skill profile required for this task favors rotation of personnel, which is undesirable because new employees take a certain amount of time to adapt. As a consequence,
manual processes have the highest error ratio of the production process; electronic manufacturers have
identified the necessity of improving these processes as a key point. Therefore, this paper proposes a
Appl. Sci. 2020,10, 796; doi:10.3390/app10030796 www.mdpi.com/journal/applsci
system which aims to reduce assembly errors and adaptation times for new employees while increasing
operator comfort, confidence and assembling speed by means of AR.
This paper is structured as follows: Section 2 describes the state of the art regarding the application of augmented reality to the manufacturing sector. In Section 3, we show our
approach to assist the operators during the manual assembly of electronic components. Section 4
outlines the results of a usability test we carried out with several operators using the proposed approach.
Section 5 discusses the proposed approach and shows how a significant and positive impact has been achieved in the production line evaluated. Finally, Section 6 gives some conclusive remarks and also
mentions some future research directions for improving the next generation of the system.
2. Related Work
Visual Computing technologies (including augmented reality) will be key enabling technologies for
the smart factories of the future [ ]. These technologies have demonstrated good capacities for
empowering human operators when performing industrial tasks by providing tools that assist them
and improve their comfort and performance [ ]. Consequently, the research community has focused
on these technologies and several related approaches have been proposed [ ]. Next, we mention a few
AR works applied to the manufacturing sector.
Augmented reality has been extensively used in many industrial processes, such as maintenance [ ]. Some of these solutions [ ] are oriented toward assembly tasks, in which
an AR technology provides virtual instructions in order to guide the operators. In those solutions,
the virtual content is shown on a screen, forcing the operators to constantly switch their attention between
the physical workspace and the screen. As stated by [ ], switching attention between two sources
during a maintenance task (for example, between the documentation and the workspace when using
traditional paper-based instructions, or between a screen and the workspace) might cause a high
cognitive load, which translates into greater probability of errors and an increase of the task completion
time. On the contrary, projection-based augmented reality (also cited as spatial augmented reality (SAR) [ ] in a broader sense, or just projection mapping) projects the virtual data directly in the
physical space. This approach allows the operator to have their hands free and is considered an enabling
technology to face the challenge of supporting operators in performing their tasks [ ]. Attracted by these
advantages, several SAR works have been developed for industrial environments [ ]. Most of these works are focused on providing guidance to the operators, without verifying whether the task is
correct or not. To address that, [ ] proposes an AR system that also verifies the operator's task by comparing
the status of every step along the maintenance procedure, represented by a captured image, with a
reference virtual 3D representation of the expected status, which is converted to an image as well by
rendering the virtual 3D data using the tracked real camera location.
Moreover, as more and more visual computing solutions are integrated into industrial shop
ﬂoors, the complexity of communication and interaction across different peripherals and industrial
devices increases. Nonetheless, [ ] has recently proposed a middleware architecture that enables communication and interaction across different technologies without manual configuration.
From the works cited above, only [ ] deal with PCBs and are focused on a similar domain
to our work. However, they only address the part of offering augmented instructions on the screen
(without projection). Additionally, compared to all the works cited, our work combines the best
characteristics of each of them. Thus, our work has the following strong points:
• The proposed system verifies whether the operator has performed the operation correctly.
• Instructions are simple, so there is no need to create the multimedia content that is projected. The authoring effort is reduced to setting the position of each component on the reference board.
• The projection is done on a flat surface, so the calibration step has been simplified to be easy, fast and automatic (the user only has to put the calibration pattern in the workspace).
• The proposed system uses advanced visualization techniques (flickering) to deal with reflections when projecting on PCBs.
• The proposed system supports dynamic projection; i.e., the projection is updated in real time when the PCB is moved.
•A normal RGB camera is used; no depth information is required.
3. Proposed Method
This paper proposes a SAR system to guide and assist operators during the process of assembling
electronic components. This system performs real-time checking of the state of a PCB, i.e., it checks the presence or absence of electronic components by means of computer vision techniques. It offers visual
information about which component should be assembled and whether previous assemblies have
been correctly done. This work is based on [ ], but with the improvement that the virtual content
is directly projected on the PCB using projection mapping techniques. In the following sections we
provide a brief description of the SAR system that we rely on and give a detailed explanation of
the components newly added to the aforementioned system. The system has two work modes: one
consists of the model generation (during an offline phase) and the other consists of the real-time board
inspection and operator guiding (during an online phase); see Figure 1. We explain each component in
the following subsections.
Figure 1. Pipeline of the system.
3.1. Hardware Setup
The proposed system consists of four different parts: an illumination system, a 2D high-resolution
image acquisition setup, a screen and a projector (see Figure 2). The illumination system, the camera
and the projector must be located at sufﬁcient height in order to not disturb the operator during
manual operation. Based on user experience and comments, the minimum ergonomic height was set at 600 mm. A 12-megapixel camera is at the center of the illumination system, at a height of 700 mm.
This positioning, combined with the optical lens, offers a field of view of 500 × 420 mm. A PCB's maximum size was established as 320 × 400 mm, which is covered by the proposed setup.
The projector model used is conventional, more speciﬁcally, an Optoma ML750e, which uses
LED technology and has a light output of 700 lumens. It is not a very powerful projector, but it has
proven to be sufﬁcient (Section 3.6.2), and, in return, thanks to its small dimensions, it has allowed us
to achieve a fairly compact setup. It is positioned next to the camera, covering the camera's entire field of view.
Figure 2. Hardware setup of the proposed system. The camera and projector are highlighted with light-blue and dark-red rectangles, respectively.
The screen is in front of the operator, placed at the most ergonomic position possible. The screen shows
the outputs and feedback of the proposed system. It is a complementary visualization, since this
output is also shown directly on the board using the projector.
3.2. Authoring Tool
The main goal of this tool is to generate models which are able to distinguish between the presence
and absence of electronic components on the board. This tool is intended to be used before board
inspection in case there are any components unknown to the system. In this case, an operator with
correct access rights will use this tool to generate the model for this speciﬁc component.
The component catalog is immense, on the order of 10,000 different components, and it is constantly updated. Furthermore, these components present huge variations in their
characteristics, such as size, shape, texture and color. In order to tackle this problem, [ ] proposed a
one-classiﬁer-per-component approach and the deﬁnition of a training phase that only needs a single
image of a minimum number of components to generate a model. This training phase can be divided
into different stages: segmentation, image generation and training.
Segmentation: In this stage the operator takes an image of the new component reference, using a foamy material that provides chromatic contrast as the background. The operator has to place a set of components with the same reference covering almost all of the camera field of view. Experiments show that five well-distributed components are enough to capture the perspective distortion of
the camera. When the capture is ready, the segmentation process starts. The first step applies a rough, approximate segmentation. After this process, a more accurate segmentation is carried out using the GrabCut algorithm to improve the component segmentation result.
Image generation: To get a high-performance classifier, a substantial number of image samples that include as much component variability as possible is necessary. In [ ], the authors
propose generating synthetic images of the components and different backgrounds by applying
geometric and photometric transformations. This step ensures the robustness of the trained classifiers.
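A minimal sketch of such an augmentation step, assuming a simple set of transforms (the paper does not enumerate which geometric and photometric transformations are used, so rotations, flips, and brightness/contrast jitter are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def augment(component, background):
    """One synthetic training sample: paste the component crop onto a
    background patch after random geometric and photometric changes."""
    # Geometric: random 90-degree rotation and horizontal flip.
    out = np.rot90(component, k=int(rng.integers(0, 4)))
    if rng.random() < 0.5:
        out = out[:, ::-1]
    # Photometric: random brightness/contrast jitter.
    gain = rng.uniform(0.7, 1.3)
    bias = rng.uniform(-20, 20)
    out = np.clip(out.astype(np.float32) * gain + bias, 0, 255).astype(np.uint8)
    # Paste at a random position on the background.
    h, w = out.shape[:2]
    H, W = background.shape[:2]
    y = int(rng.integers(0, H - h + 1))
    x = int(rng.integers(0, W - w + 1))
    sample = background.copy()
    sample[y:y + h, x:x + w] = out
    return sample

component = np.full((16, 16, 3), 200, np.uint8)
background = rng.integers(0, 255, (64, 64, 3), dtype=np.uint8)
dataset = [augment(component, background) for _ in range(100)]
```

Varying both the component appearance and the background in this way is what lets a classifier trained on a single capture generalize to real boards.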
Training: In order to generate the classiﬁcation model from the generated set of images, the ﬁrst
part is to extract the relevant features from these images. The images of this dataset have a huge
variety in terms of background; some of them are totally uniform, while others have numerous
pinholes and tracks. For this reason, global features obtained from the whole image should be
used instead of focusing on local keypoints. Once features are extracted, a classifier is trained with them in order to discriminate between components and background, and it is saved in a model database.
In [ ], a study is conducted which compares the accuracy of different combinations of features
and classiﬁers. Training and validation were performed with artiﬁcially generated images, whereas
testing was performed with real images taken with the proposed setup, ensuring performance in real
environments. This study was conducted using 21 different components chosen to cover a broad spectrum of components, ranging from multi-colored big components to uniform small components.
In conclusion, a combination of color histograms, histograms of oriented gradients (HOG) and local binary patterns (LBPs) was chosen as features. Along with a radial-basis function support vector machine
(RBF-SVM) as the classiﬁer, this combination achieved more than 90% accuracy in validation and
testing. Furthermore, this combination was verified to have a computation time low enough for a real-time application.
3.3. Board Tracking
As the proposed system uses the image captured by the camera to recognize components, it is
essential to avoid distortions in the image due to the camera lens. It is therefore necessary to calibrate
the camera, i.e., to know the intrinsic camera parameters, prior to using the system. In our system, we propose to use the well-known camera calibration algorithm of Zhang [ ]. This calibration
process only needs to be done once, and it allows us to calculate the compensation that has to be
applied to each image captured by the camera to avoid distortions.
During the component assembly phase, the boards have a non-fixed position, with one degree of freedom for horizontal displacement. They also have different sizes, shapes and colors due to the mounting boards and the materials used. Since a component's position is referenced to the bottom-left corner of the board, the use of markers is proposed with the final purpose of tracking the board position. In this system, ArUco markers are used [ ].
Two ArUco markers are placed to locate the vertical board position, and another two ArUco markers are placed to locate the horizontal board position. During the assembly, the operator might occlude the horizontal markers; if that happens, the system assumes the previously captured horizontal marker positions as the current ones (temporal coherence assumption). The corner of the board is
calculated by intersecting the vertical line and the horizontal line referenced to the markers; see Figure
3. This corner is necessary to obtain the reference system of the PCB, and therefore, to locate component
positions. If the vertical line calculation is not possible, the component inspection stops. Thus, visible vertical markers are necessary to correctly track the board.
Figure 3. Images where the printed circuit board (PCB) is in different positions. The purple lines are located thanks to the ArUco markers; the corner (purple circle) is the intersection between them and denotes the PCB reference system.
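The line-intersection and temporal-coherence logic described above can be sketched in a few lines (marker detection itself is omitted; the hypothetical `board_corner` helper assumes each line is given by two marker centers):

```python
import numpy as np

def line_through(p, q):
    """Homogeneous 2D line through two points (cross-product trick)."""
    return np.cross([*p, 1.0], [*q, 1.0])

def board_corner(vert_markers, horiz_markers, last_horiz):
    """Intersect the vertical and horizontal marker lines to get the
    board's reference corner. If the horizontal markers are occluded,
    fall back to their last observed positions (temporal coherence);
    without the vertical markers, tracking must stop."""
    if vert_markers is None:
        return None, last_horiz            # inspection stops
    if horiz_markers is None:
        horiz_markers = last_horiz         # assume previous positions
    v = line_through(*vert_markers)
    h = line_through(*horiz_markers)
    c = np.cross(v, h)                     # homogeneous intersection
    return c[:2] / c[2], horiz_markers

vert = (np.array([100.0, 50.0]), np.array([100.0, 400.0]))    # x = 100
horiz = (np.array([150.0, 420.0]), np.array([600.0, 420.0]))  # y = 420
corner, last = board_corner(vert, horiz, None)
occluded, last = board_corner(vert, None, last)  # horizontal markers hidden
```

Both calls yield the same corner (100, 420): the second reuses the remembered horizontal line, mirroring the occlusion fallback.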
3.4. Board Inspection

In this step, the main goal is to verify the presence of the components on the board.
First, the assembly region of each component should be located. A list of component coordinates relative to the board corner is fed to the system, and because the board corner is already located, the assembly regions can be situated in the image. This coordinate list is created by
the quality engineer during the design process of the board using the manufacturing execution system
(MES) of the company.
A further step is to calculate the detection probability of each component, using the cropped
image of the assembly region. The classiﬁcation models of the board components are loaded from the
model database. Then, for each cropped image, the selected combination of features is extracted and fed to the classification model, an RBF-SVM in this case.
The output of the model is a probability for the analyzed image crop of the component. A high value of this probability represents component presence, whereas a low value means absence.
Note that a larger region usually provides a stronger response than a smaller one because it has more
borders, texture, colors, etc. To adjust this response, a threshold proportional to the region size is applied. This operation minimizes false positives.
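The crop-score-threshold loop could be sketched as follows. The threshold constants and the brightness-based stub classifier are illustrative assumptions: the paper only states that the threshold is proportional to the region size and that the real scorer is an RBF-SVM probability.

```python
import numpy as np

def verify_board(image, components, models, thr_base=0.45, thr_gain=1e-5):
    """For every component, crop its assembly region (coordinates are
    relative to the tracked board corner), score it with that
    reference's classifier, and compare against a threshold that grows
    with the region area (thr_base/thr_gain are illustrative values)."""
    results = {}
    for name, (x, y, w, h) in components.items():
        crop = image[y:y + h, x:x + w]
        prob = models[name](crop)          # RBF-SVM probability in the real system
        threshold = thr_base + thr_gain * (w * h)
        results[name] = bool(prob >= threshold)
    return results

# Stub classifier: mean brightness as a stand-in probability.
def bright_means_present(crop):
    return crop.mean() / 255.0

image = np.zeros((480, 640), np.uint8)
image[100:140, 200:260] = 220              # a "mounted" component
components = {"R12": (200, 100, 60, 40), "C3": (400, 300, 60, 40)}
models = {"R12": bright_means_present, "C3": bright_means_present}
status = verify_board(image, components, models)
```

Here `status` reports "R12" as mounted and "C3" as absent; the component names and regions are hypothetical.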
When these values are obtained, the output is visualized on the screen and on the board.
The visualization strategy is explained in the next section.
3.5. Screen Visualization
With the verification output, each region location is highlighted on the screen by a rectangle: if the component is mounted, the rectangle is green, whereas if it is not mounted, it is red. The current
group of components to be mounted is highlighted with a blinking orange solid rectangle in the
visualization. On the right side of the screen, the reference and image of the component to be mounted
are shown; see Figure 4.
Figure 4. Screen visualization of the current state of the PCB.
3.6. Projection Mapping
The main problem of screen based visualization is that the operator has to constantly check
the state of the assembly on the screen, switching attention between the board and screen. A more
ergonomic solution is obtained when the projector is used to visualize this output directly onto the
PCB. This improves the posture of the worker and increases the assembly speed and quality, since the
operator does not have to look up to receive work-instructions.
Apart from offering assistance in a conventional screen, the proposed system also provides
guidance by projecting relevant virtual content directly onto the PCB. However, to project content
in the desired place and with an adequate degree of immersion, it is first necessary to calibrate the camera–projector pair.
3.6.1. Camera–Projector Calibration
To project virtual content adequately in a physical space, we must calibrate the setup; i.e.,
ﬁnd a geometric transformation that adapts the virtual data to the shape of the projection surface.
This transformation can be ﬁxed manually by modifying the position or shape of the virtual
content until the projection gives the desired results, which is a laborious and expensive process
that requires technical skills. However, in those cases where there is also a camera in the setup,
the camera–projector calibration, i.e., ﬁnding the correct geometric transformation, can be calculated
automatically. The projector can emit a pattern that is captured and recognized by the camera and
which can be used to estimate the transformation that moves content from the camera’s coordinate
system to the projector’s coordinate system. Additionally, when an object is recognized in the camera
image and the camera pose is known, i.e., the position and orientation with respect to the object are known
(Section 3.3), we have the transformation that relates the object and camera coordinate systems. Thus,
since the virtual content is deﬁned in the same coordinate system as the object, its projection can be
calculated using the chain rule. In this work, we have followed this methodology to calibrate the
camera–projector pair. We propose to place a planar checkerboard in the physical space, and the
projector projects a complete gray code sequence. This structured-light sequence can be decoded, so
that each pixel of the camera is associated with a projector row and column. Therefore, since the 3D
coordinates of the checkerboard corners and their 2D positions (pixels) in the camera and projectors
images are known, a traditional stereo calibration method can be applied to solve the three-dimensional
camera–projector relationship (see [ ]). Nonetheless, in our setup, the projection surface is a plane (a
PCB), and it is always parallel to the camera image plane, so we have simpliﬁed the camera–projector
relationship to 2D. We have modified the implementation of [ ] to estimate a 2D homography that
represents the camera–projector relationship. Although this simpliﬁcation can be inaccurate for more
complex projection surfaces, it offers good results for planar surfaces and simpliﬁes the calibration
process. In the original calibration version [ ], a structured-light sequence must be captured from
several points of view, but in our simpliﬁed version, only one point of view is required. Therefore, our
simpliﬁed and not optimized version only takes approximately 85 seconds to do the calibration (50
seconds to project and capture the gray code sequence and 35 seconds to decode the patterns and to
estimate the homography). Nevertheless, this time is not usually critical, since the calibration process
is only executed once when the setup is built. Likewise, the setup must be recalibrated when there is a
change in the camera, the projector or the projection surface.
3.6.2. Virtual Content Projection
In the proposed projection mapping pipeline (Figure 5), as stated in the previous subsection, first, the virtual content is transferred to the camera image using the camera tracking data (T_track, Section 3.3),
which creates the view that is displayed in the screen. Then, this content, which is already referenced
with respect to the camera image coordinate system, is again transformed using the camera–projector calibration (H_calib, Section 3.6.1) to the projector image area that is subsequently projected. Thus,
to project any content, we deﬁne its location in the reference 2D coordinate system of the board and
then we apply the chain rule, which can be represented conceptually by T_track ∗ H_calib.
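The chain rule amounts to composing two planar transforms into one board-to-projector matrix. The following sketch uses made-up matrix values (a tracking transform T_track from board millimeters to camera pixels, and a calibration homography H_calib from camera to projector pixels):

```python
import numpy as np

def to_h(pts):
    """Append a homogeneous coordinate to an N x 2 point array."""
    return np.hstack([pts, np.ones((len(pts), 1))])

# Illustrative transforms (values are made up): T_track maps board mm to
# camera pixels; H_calib maps camera pixels to projector pixels.
T_track = np.array([[2.0, 0.0, 300.0],
                    [0.0, 2.0, 150.0],
                    [0.0, 0.0, 1.0]])
H_calib = np.array([[1.1, 0.0, 25.0],
                    [0.0, 1.1, -10.0],
                    [0.0, 0.0, 1.0]])

# Chain rule: board -> camera -> projector in a single matrix.
M = H_calib @ T_track

# Corners (in board mm) of a rectangle to flicker over a component.
rect_board = np.array([[10.0, 20.0], [40.0, 20.0],
                       [40.0, 35.0], [10.0, 35.0]])
p = (M @ to_h(rect_board).T).T
rect_proj = p[:, :2] / p[:, 2:]   # projector pixels of the rectangle
```

Because both transforms are planar homographies, any content defined in board coordinates can be sent through the single matrix `M` each frame.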
In our application, we decided to project the following information (Figure 6), which answers
three simple questions that are very useful for operators:
"Where?": The place where the operator has to assemble the current electronic component, which
is highlighted with the projection of a white ﬂicking rectangle.
"What?": The reference number of the electronic components that must be assembled in the current
"How many?": The number of the current electronic components that have already been assembled
regarding the total number to be assembled. A fraction "
" is projected, where
is the number of
current components already assembled from the total of j.
The projection of "What?" and "How many?" is located at the border frame (Figure 6), outside the
electronic board, as this area is not used for anything and offers good visibility. The projection of "Where?", on the other hand, is superimposed on the corresponding real position inside the electronic board (Figure 6). This was not an appropriate area to get good contrast due to the material
of the PCB and the limited power of the projector that was used, so we opted to make the projection flicker to capture the operator's visual attention and, consequently, improve its visibility. This has proven to be
as a good solution, since the result of the usability test was positive (Section 4).
Figure 5. Conceptual scheme of the projection mapping pipeline. Virtual content (left) is transferred to the camera image (T_track), and then this content is transformed again (H_calib) to the projector image area that is subsequently projected. See text for details.
Figure 6. Example of virtual content that is projected (highlighted in white) on the printed circuit board during the component assembly process. The projection is seen more clearly live, so we provide a bottom row with zoomed-in versions of the top images to make the projections more visible.
4. Usability Test
With the aim of evaluating the benefits of the AR extension compared to the previous system, a system usability scale (SUS) survey was conducted, comparing the usability of the two systems: on one hand, the original system presented in [ ], where instructions are only displayed
on the screen; on the other hand, the proposed system, where instructions are displayed both on the screen and on the board directly via the projector. The SUS survey is a ten-item scale test giving a global view of subjective assessments of usability [ ], which is used as a standard survey for usability tests.
The proposed test consists of mounting the same PCB with the aid of both solutions, the original and the proposed one, wherein every mounting process is timed and the number of mounting errors is measured. Finally, the SUS test was completed.
A total of 21 people were surveyed. They were between 20 and 45 years old; there were 15 men and six women, one of them color-blind. They did not have any experience in PCB assembly. This was done in order to emulate a newcomer to the production line, since rotation of personnel is common. The test was performed in a laboratory where a replica of the manufacturing production workspace was installed.
They were divided into three groups of seven participants each. Group 1 used the original system first and the proposed one later. Conversely, Group 2 used the proposed system first and the original system second. Groups 1 and 2 did not have any experience mounting the electronic
board; thus, it was fair to assume that the ﬁrst mounting would take longer than the second, as the
users had more experience for the second mounting. For this reason, Group 3 was created. This group
had already mounted the PCB using a different solution, so they already had some knowledge of the
PCB when using both processes. This grouping was done in order to measure time efﬁciency among
processes, but it did not have any impact from the usability point of view.
Figure 7 displays the SUS scores. The higher the score, the more usable the system is. The original and proposed systems achieved average values of 80 and 90 out of 100, respectively. Although a SUS score interpretation is
not straightforward, Bangor et al. [ ] concluded that any system scoring above 68 can be considered usable; they also proposed an adjective scale, where a mean SUS score of around 70 is considered good, one around 85.5 is considered excellent and one around 90.9 is referred to as the best imaginable. Thus, both
systems are highly usable, but the use of augmented reality is preferable.
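For reference, the standard SUS scoring that produces these 0-100 values can be written in a few lines:

```python
def sus_score(responses):
    """System Usability Scale: 10 items rated 1-5; odd-numbered items
    contribute (r - 1), even-numbered items (5 - r); the sum of the
    contributions is scaled by 2.5 to the 0-100 range."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Fully positive answers (5 on odd items, 1 on even items) give 100.
best = sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1])
```

A neutral respondent (all 3s) scores 50, which puts the measured averages of 80 and 90 well above both the neutral point and the 68 usability cut-off.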
Figure 7. Distributions of SUS scores. Blue represents the original system and yellow the proposed one. Black lines mark the average value of both distributions.
As mentioned, mounting times were measured in order to get some objective insights about
system efﬁciency; see Figure 8. As predicted, for Groups 1 and 2, the ﬁrst mounting was usually the
more time consuming one. However, for Group 3, where participants started both mountings with the
same experience, the proposed solution yielded lower mounting times for all participants. In addition,
the feedback provided by the two systems prevented the users from making any errors.
Figure 8. Mounting times of each group. Blue and yellow bars represent the original and proposed system, respectively. Group 1 started with the original, Group 2 started with the proposed, and Group 3 already had experience.
These results show that the AR system is even faster and more comfortable than the previous
system. From the users’ comments, it can be deduced that both velocity and comfort are increased
because the user only needs to look and focus on the board, instead of changing their focus between
screen and board, thereby helping the operator to maintain the same posture. Moreover, the direct
projection onto the board allows the operator to find the placing location more easily, saving operational
time and reducing placement errors. The system was also validated by experienced workers of the
manufacturing company, who also pointed out the enhancement provided by the projection mapping.
In [ ], the usability of the only-screen system is compared with the traditional system used by the manufacturer; the proposed system achieved much higher satisfaction levels than the traditional system. Therefore, the AR extension is also much more usable than the traditional system.
5. Discussion
We propose to use direct projection in the workspace for improving user satisfaction and at
the same time reducing assembly errors. The previous section shows that operators actually ﬁnd
the system more usable, feel more secure with it and require less time to do their tasks. A further advantage is that operators require less training time, as the system gives assistance throughout the
assembly. Moreover, this system allows the production managers to have traceability of the most
complex components or PCBs to be assembled. This enables them to take further measures for ensuring
operator satisfaction while also optimizing production because of the reduction of potential errors.
To guarantee that the projection-based solution is effective, the illumination conditions of the
workspace have to be considered. The ambient light cannot be strong, so that the light emitted by the
projector is predominant and the projected content is shown with contrast and sharpness. A balance
must be achieved between a valid ambient light for object detection (electronic components in our
case) and light that does not overwhelm the visibility of the projection. Similarly, it is preferable to work on
non-specular surfaces, so that no brightness is generated that hinders the visibility of the projection.
In our scenario, we had to deal with this difﬁculty, since PCBs are specular, and therefore, we had to
use more sophisticated visualization techniques to capture the operator’s attention (ﬂickering).
In the use case presented in this paper (assembling small electronic components on a PCB) we have
not had problems with hidden areas of projection. These areas appear when an object that is in the
workspace and in front of the projector has large dimensions and occludes the area behind it. Thus,
the rays emitted by the projector cannot reach this area, and therefore, it is not possible to project
content in this zone. To solve this limitation, a multiprojector conﬁguration should be used.
6. Conclusions and Future Work
Despite the improvements in the last few decades, the use of augmented reality in industry has
not been extended yet due to several reasons, including ergonomics, visual fatigue, content creation,
the lack of IT infrastructure, etc. [ ]. In fact, ergonomics is the main obstacle for AR glasses; thus,
projection based AR systems have been positioned as the alternative because they project data directly
in the workspace, leaving the operator’s hands free and avoiding discomfort due to motion sickness or
vergence-accommodation conflicts [ ].
The fast adoption of new, advanced ICT-related technologies, such as cloud computing and augmented reality, by the manufacturing sector is having a real positive impact in several respects,
such as increasing ﬂexibility, productivity and efﬁciency. In this paper, we propose integrating an AR
system to support operators during the manual assembly of electronic components for improving
workers’ ability to adapt to very variable production conditions. Our results show that, compared
with the old procedure, with the new system the operators generate less errors, especially when they
face a new PCB they have not assembled before. In addition, they feel more comfortable because
they know that there is an additional system that ensures that their work is being done correctly. In
the future, we plan to implement some additional features, such as one to verify the polarity; i.e.,
the orientations of some components. Also, we plan to evaluate the impact of using deep learning
approach for recognizing components in order to increase robustness against severe illumination
Supplementary Materials: The following are available online at www.mdpi.com/xxx/s1, Video S1: title:
Automatic System To Assist Operators in The Assembly of Electronic Components.

Author Contributions: Conceptualization, I.S. and I.B.; formal analysis, M.O.; funding acquisition, D.A. and L.Q.;
investigation, M.O. and H.A.; methodology, I.S.; resources, D.A.; software, M.O., H.A. and I.S.; validation,
F.S.; writing, original draft, H.A.; writing, review and editing, M.O. and F.S. All authors have read and agreed
to the published version of the manuscript.
Funding: We would also like to thank the SPRI agency for funding the SIRA applied research project under the
Hazitek 2018 call, in which the research described in this paper was carried out.
Acknowledgments: We would like to thank the expert operators of Ikor for performing the user evaluation tests.
We also thank Sara Garcia for her help in generating multimedia content.
Conflicts of Interest: The authors declare no conflicts of interest.

References
1. Alvarez, H.; Lajas, I.; Larrañaga, A.; Amozarrain, L.; Barandiaran, I. Augmented reality system to guide
operators in the setup of die cutters. Int. J. Adv. Manuf. Technol. 2019, 103, 1543–1553.
2. Bangor, A.; Kortum, P.; Miller, J. Determining What Individual SUS Scores Mean: Adding an Adjective
Rating Scale. J. Usability Stud. 2009, 4, 114–123.
3. Bimber, O.; Raskar, R. Spatial Augmented Reality: Merging Real and Virtual Worlds; A.K. Peters: Natick, MA,
USA, 2005.
4. Brooke, J. SUS: A quick and dirty usability scale. Usability Eval. Ind. 1996, 189, 4–7.
5. Hahn, J.; Ludwig, B.; Wolff, C. Augmented reality-based training of the PCB assembly process. In Proceedings
of the 14th International Conference on Mobile and Ubiquitous Multimedia, Linz, Austria, 30 November–
2 December 2015; Volume 46, pp. 395–399.
6. Kern, J.; Weinmann, M.; Wursthorn, S. Projector-based Augmented Reality for quality inspection of scanned
objects. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, IV-2/W4, 83–90.
7. Korn, O.; Funk, M.; Schmidt, A. Design Approaches for the Gamification of Production Environments: A Study
Focusing on Acceptance. In Proceedings of the 8th ACM International Conference on Pervasive Technologies
Related to Assistive Environments, Corfu, Greece, 1–3 July 2015; pp. 1–7.
8. de Lacalle, L.N.L.; Posada, J. Special Issue on New Industry 4.0 Advances in Industrial IoT and Visual
Computing for Manufacturing Processes. Appl. Sci. 2019, 9, 4323.
9. Manuri, F.; Pizzigalli, A.; Sanna, A. A State Validation System for Augmented Reality Based Maintenance
Procedures. Appl. Sci. 2019, 9, 2115.
10. Martinetti, A.; Marques, H.; Singh, S.; Dongen, L. Reflections on the Limited Pervasiveness of Augmented
Reality in Industrial Sectors. Appl. Sci. 2019, 9, 3382.
11. Moreno, D.; Taubin, G. Simple, Accurate, and Robust Projector-Camera Calibration. In Proceedings of the
Second International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission,
Zurich, Switzerland, 13–15 October 2012; pp. 464–471.
12. Neumann, U.; Majoros, A. Cognitive, performance, and systems issues for augmented reality applications
in manufacturing and maintenance. In Proceedings of the IEEE 1998 Virtual Reality Annual International
Symposium, Atlanta, GA, USA, 14–18 March 1998; pp. 4–11.
13. Ojer, M.; Serrano, I.; Saiz, F.; Barandiaran, I.; Gil, I.; Aguinaga, D.; Alejandro, D. Real-Time Automatic Optical
System to Assist Operators in the Assembly of Electronic Components. Int. J. Adv. Manuf. Technol. 2019.
14. Palmarini, R.; Erkoyuncu, J.; Roy, R.; Torabmostaedi, H. A systematic review of augmented reality
applications in maintenance. Robot. Comput.-Integr. Manuf. 2018, 49, 215–228.
15. Posada, J.; Toro, C.; Barandiaran, I.; Oyarzun, D.; Stricker, D.; De Amicis, R.; Pinto, E.B.; Eisert, P.; Döllner, J.;
Vallarino, I. Visual Computing as a Key Enabling Technology for Industrie 4.0 and Industrial Internet.
IEEE Comput. Graph. Appl. 2015, 35, 26–40.
16. Posada, J.; Zorrilla, M.; Dominguez, A.; Simões, B.; Eisert, P.; Stricker, D.; Rambach, J.; Dollner, J.; Guevara, M.
Graphics and Media Technologies for Operators in Industry 4.0. IEEE Comput. Graph. Appl. 2018, 38, 119–132.
17. Rodriguez, L.; Quint, F.; Gorecky, D.; Romero, D.; Siller, H.R. Developing a Mixed Reality Assistance System
Based on Projection Mapping Technology for Manual Operations at Assembly Workstations. Procedia Comput.
Sci. 2015, 75, 327–333.
18. Romero, D.; Stahre, J.; Wuest, T.; Noran, O.; Bernus, P.; Fast-Berglund, Å.; Gorecky, D. Towards an
operator 4.0 typology: A human-centric perspective on the fourth industrial revolution technologies.
In Proceedings of the International Conference on Computers and Industrial Engineering (CIE46), Tianjin,
China, 29–31 October 2016.
19. Romero Ramirez, J.; Muñoz Salinas, R.; Medina Carnicer, R. Speeded up detection of squared fiducial
markers. Image Vis. Comput. 2018, 76, 38–47.
20. Rother, C.; Kolmogorov, V.; Blake, A. "GrabCut": Interactive Foreground Extraction Using Iterated Graph
Cuts. ACM Trans. Graph. 2004, 23, 309–314.
21. Sand, O.; Büttner, S.; Paelke, V.; Röcker, C. smARt.Assembly—Projection-Based Augmented Reality for
Supporting Assembly Workers. In Proceedings of the 8th International Conference Virtual, Augmented and
Mixed Reality, Toronto, ON, Canada, 17–22 July 2016; pp. 643–652.
22. Sanna, A.; Manuri, F.; Lamberti, F.; Paravati, G.; Pezzolla, P. Using handheld devices to
support augmented reality-based maintenance and assembly tasks. In Proceedings of the IEEE International
Conference on Consumer Electronics, Las Vegas, NV, USA, 9–12 January 2015; pp. 178–179.
23. Segura, A.; Diez, H.; Barandiaran, I.; Arbelaiz, A.; Álvarez, H.; Simões, B.; Posada, J.; García-Alonso, A.;
Ugarte, R. Visual computing technologies to support the Operator 4.0. Comput. Ind. Eng. 2020.
24. Simões, B.; De Amicis, R.; Barandiaran, I.; Posada, J. X-Reality System Architecture for Industry 4.0 Processes.
Multimodal Technol. Interact. 2018, 2, 72.
25. Wang, X.; Ong, S.K.; Nee, A. Real-virtual components interaction for assembly simulation and planning.
Robot. Comput.-Integr. Manuf. 2016, 41, 102–114.
26. Webel, S.; Engelke, T.; Peveri, M.; Olbrich, M.; Preusche, C. Augmented Reality Training for Assembly and
Maintenance Skills. BIO Web Conf. 2011, 1, 00097.
27. Yuan, M.; Ong, S.K.; Nee, A. Augmented reality for assembly guidance using a virtual interactive tool. Int. J.
Prod. Res. 2008, 46, 1745–1767.
28. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22,
1330–1334.
29. InspectAR. Available online: https://www.inspectar.com (accessed on 10 December 2019).
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).