Projection-Based Augmented Reality Assistance for
Manual Electronic Component Assembly Processes
Marco Ojer 1,*, Hugo Alvarez 1, Ismael Serrano 1, Fátima A. Saiz 1, Iñigo Barandiaran 1,
Daniel Aguinaga 2, Leire Querejeta 2 and David Alejandro 3
1 Vicomtech Foundation, Basque Research and Technology Alliance (BRTA), 20009 Donostia, Spain; halvarez@vicomtech.org (H.A.); iserragr@everis.com (I.S.); fsaiz@vicomtech.org (F.A.S.); ibarandiaran@vicomtech.org (I.B.)
2 Ikor Technology Centre, 20018 Donostia, Spain; daguinaga@ikor.es (D.A.); lquerejeta@ikor.es (L.Q.)
3 IKOR Sistemas Electrónicos, 20018 Donostia, Spain; david.alejandro@ikor.es
* Correspondence: mojer@vicomtech.org
Received: 19 December 2019; Accepted: 16 January 2020; Published: 22 January 2020


Abstract:
Personalized production is pushing industrial automation forward and demanding new tools for
improving the decision-making of operators. This paper presents a new, projection-based augmented
reality system for assisting operators during electronic component assembly processes. The paper
describes both the hardware and software solutions, and reports the results obtained during a
usability test of the new system.
Keywords: computer vision; augmented reality; projection mapping
1. Introduction
It is evident that initiatives such as the German paradigm of "Industry 4.0" or some other similar
ones all around the world are having a deep impact on the manufacturing sector, and thus are reshaping
the industry. The development of such paradigms is accelerating the development and deployment of
advanced ICT-related technologies [15], transforming many aspects, such as the industrial workforce
and the way they develop their tasks. Even though customer-centric and demand-driven production
is moving forward through the progress of industrial automation, the need for a better and more
empowered human workforce is more pressing than ever. The next human workforce should have
new and more powerful tools that allow them to improve their decision-making processes, to more easily
adapt to changing production conditions and to adopt strategies for continuous training. Along with
the development of the Industry 4.0 paradigm appears the concept of Operator 4.0 [18]. This concept is
driven by several objectives, such as simplifying day-to-day work while improving efficiency and
autonomy by focusing on added-value tasks, all in a comfortable and healthy working environment.
This paper proposes a new system based on augmented reality (AR) for assisting operators during
manual assembly of electronic components. As mentioned before, a customer-centric oriented and
personalized production requires continuous changes in production lines. The electronics sector is
not an exception in this regard. This industry has many automated processes for the assembly of
electronic components for electronic boards, also known as printed circuit boards (PCB), but there are
also many manual assembly stages along the production lines. Operators perform the monotonous task
of board assembly over considerable periods of time; therefore, they are likely to experience fatigue
and distractions. Furthermore, the low skill profile required for this task favors rotation of personnel,
which is undesirable because new employees take a certain amount of time to adapt. As a consequence,
manual processes have the highest error ratio of the production process; electronic manufacturers have
identified the necessity of improving these processes as a key point. Therefore, this paper proposes a
system which aims to reduce assembly errors and adaptation times for new employees while increasing
operator comfort, confidence and assembling speed by means of AR.
This paper is structured as follows: Section 2 describes the current state-of-the-art works related
to the application of augmented reality in the manufacturing sector. In Section 3, we show our
approach to assist operators during the manual assembly of electronic components. Section 4
outlines the results of a usability test we carried out with several operators using the proposed approach.
Section 5 discusses the proposed approach and shows how a significant and positive impact has been
achieved in the evaluated production line. Finally, Section 6 gives some concluding remarks and also
mentions some future research directions for improving the next generation of the system.
2. Related Work
Visual Computing technologies (including augmented reality) will be key enabling technologies for
the smart factories of the future [15]. These technologies have demonstrated good capacities for
empowering human operators when performing industrial tasks by providing tools that assist them
and improve their comfort and performance [23]. Consequently, the research community has focused
on these technologies and several related approaches have been proposed [8]. Next, we mention a few
AR works applied to the manufacturing sector.
Augmented reality has been extensively used in many industrial processes, such as maintenance
operations [14]. Some of these solutions [5,22,25–27,29] are oriented toward assembly tasks, in which
an AR technology provides virtual instructions in order to guide the operators. In those solutions,
the virtual content is shown on a screen, forcing the operators to constantly switch their attention between
the physical workspace and the screen. As stated by [12], switching attention between two sources
during a maintenance task (for example, between the documentation and the workspace when using
traditional paper-based instructions, or between a screen and the workspace) might cause a high
cognitive load, which translates into greater probability of errors and an increase of the task completion
time. In contrast, projection-based augmented reality (also referred to, in a broader sense, as spatial
augmented reality (SAR) [3], or simply projection mapping) projects the virtual data directly in the
physical space. This approach allows the operator to have their hands free and is considered an enabling
technology to face the challenge of supporting operators performing tasks [16]. Attracted by these
advantages, several SAR works have been developed for industrial environments [1,6,7,17,21]. Most
of these works are focused on providing guidance to the operators, without verifying if the task is
correct or not. To address this, [9] proposes an AR system that also verifies the operator's task by comparing
the status of every step along the maintenance procedure, represented by a captured image, with a
reference virtual 3D representation of the expected status, which is converted to an image as well by
rendering the virtual 3D data using the tracked real camera location.
Moreover, as more and more visual computing solutions are integrated into industrial shop
floors, the complexity of communication and interaction across different peripherals and industrial
devices increases. Nonetheless, [24] has recently proposed a middleware architecture that enables
communication and interaction across different technologies without manual configuration or
calibration.
From the works cited above, only [5,29] deal with PCBs and are focused on a similar domain
to our work. However, they only address the part of offering augmented instructions on the screen
(without projection). Additionally, compared to all the works cited, our work combines the best
characteristics of each of them. Thus, our work has the following strong points:
- The proposed system verifies whether the operator has performed the operation correctly.
- Instructions are simple, so there is no need to create the multimedia content that is projected. The authoring effort is minimized to only setting the position of each component on the reference board.
- The projection is done on a flat surface, so the calibration step has been simplified to be easy, fast and automatic (the user only has to put the calibration pattern in the workspace).
- The proposed system uses advanced visualization techniques (flickering) to deal with reflections when projecting on PCBs.
- The proposed system supports dynamic projection; i.e., the projection is updated in real time when the PCB is moved.
- A normal RGB camera is used; no depth information is required.
3. Proposed Method
This paper proposes a SAR system to guide and assist operators during the process of assembling
electronic components. This system performs real-time checking of the state of a PCB; i.e., it checks the
presence or absence of electronic components by means of computer vision techniques. It offers visual
information about which component should be assembled and whether previous assemblies have
been correctly done. This work is based on [13], but with the improvement that the virtual content
is directly projected on the PCB using projection mapping techniques. In the following sections we
provide a brief description of the SAR system that we rely on and give a detailed explanation of
the components newly added to the aforementioned system. The system has two work modes: one
consists of model generation (an offline phase), and the other consists of real-time board
inspection and operator guidance (an online phase); see Figure 1. We explain each component in
the following subsections.
Figure 1. Pipeline of the system.
3.1. Setup
The proposed system consists of four different parts: an illumination system, a 2D high-resolution
image acquisition setup, a screen and a projector (see Figure 2). The illumination system, the camera
and the projector must be located at a sufficient height in order not to disturb the operator during
manual operation. Given user experiences and comments, the minimum ergonomic height settled on
was 600 mm. A 12 mega-pixel camera is at the center of the illumination system, at a height of 700 mm.
This positioning, combined with the optical lens, offers a field of view of 500 × 420 mm. The maximum
PCB size was established as 320 × 400 mm, which is covered by the proposed setup.
The projector model used is conventional, more specifically, an Optoma ML750e, which uses
LED technology and has a light output of 700 lumens. It is not a very powerful projector, but it has
proven to be sufficient (Section 3.6.2), and, in return, thanks to its small dimensions, it has allowed us
to achieve a fairly compact setup. It is positioned next to the camera, covering the camera's entire field of view.
Figure 2. Hardware setup of the proposed system. The camera and projector are highlighted with
light-blue and dark-red rectangles, respectively.
The screen is in front of the operator, ideally at the most ergonomic position. The screen shows
the outputs and feedback of the proposed system. It is a complementary visualization, since this
output is also shown directly on the board using the projector.
3.2. Authoring Tool
The main goal of this tool is to generate models that are able to distinguish between the presence
and absence of electronic components on the board. This tool is intended to be used before board
inspection in case there are any components unknown to the system. In this case, an operator with
correct access rights will use this tool to generate the model for this specific component.
The component catalog is immense, on the order of 10,000 different components, and it is
constantly being updated. Furthermore, these components present huge variations in their
characteristics, such as size, shape, texture and color. In order to tackle this problem, [13] proposed a
one-classifier-per-component approach and the definition of a training phase that only needs a single
image of a minimum number of components to generate a model. This training phase can be divided
into different stages: segmentation, image generation and training.
Segmentation: In this stage, the operator takes an image of the new referenced component, using
a foamy background material that offers chromatic contrast with the components. The operator has
to place a set of components with the same reference, almost covering the entire camera field of view.
Experiments show that five well-distributed components are enough to capture the perspective
distortion of the camera. When the capture is ready, the segmentation process starts. The first step
consists of applying a rough, approximate segmentation. After this process, a more accurate
segmentation is carried out using the GrabCut algorithm [20] to improve the component segmentation
result (a sketch of this two-stage segmentation is given after this list).
Image generation: To obtain a high-performance classifier, a substantial number of image samples
that includes as much component variability as possible is necessary. In [13], the authors
propose generating synthetic images of the components and different backgrounds by applying
geometric and photometric transformations (an illustrative sketch is also given after this list).
This step ensures the robustness of the trained classifiers during operation.
Training: In order to generate the classification model from the generated set of images, the first
part is to extract the relevant features from these images. The images of this dataset have a huge
variety in terms of background; some of them are totally uniform, while others have numerous
pinholes and tracks. For this reason, global features obtained from the whole image should be
used instead of focusing on local keypoints. Once the features are extracted, a classifier is trained
with them in order to discriminate between components and background, and it is saved in a
database.
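As referenced in the list above, the two-stage segmentation can be sketched with OpenCV's GrabCut [20]. This is a minimal illustration rather than the exact implementation of [13]: the rough stage is assumed here to be a simple hue-distance threshold against the dominant foam background, and all constants are illustrative.

```python
import cv2
import numpy as np

def rough_segmentation(image_bgr):
    # Rough stage (assumed): threshold the hue distance to the dominant
    # background color, exploiting the chromatic contrast of the foam.
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    bg_hue = int(np.median(hsv[..., 0]))
    dist = cv2.absdiff(hsv[..., 0], np.full_like(hsv[..., 0], bg_hue))
    _, rough = cv2.threshold(dist, 20, 255, cv2.THRESH_BINARY)
    return rough

def segment_components(image_bgr):
    # Accurate stage: refine the rough mask with GrabCut [20].
    rough = rough_segmentation(image_bgr)
    mask = np.where(rough > 0, cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, None, bgd_model, fgd_model,
                iterCount=5, mode=cv2.GC_INIT_WITH_MASK)
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return np.where(fg, 255, 0).astype(np.uint8)
```

The image generation stage can be sketched in the same spirit: paste the segmented component (with an alpha mask from the step above) onto PCB backgrounds under random geometric and photometric transformations. The parameter ranges below are assumptions, not the values used in [13]:

```python
rng = np.random.default_rng(0)

def synthesize_sample(component_rgba, background_bgr):
    # component_rgba: segmented component with an alpha channel.
    # Assumes the (scaled) component fits inside the background image.
    h, w = background_bgr.shape[:2]
    ch, cw = component_rgba.shape[:2]
    # Geometric transformation: random rotation, scale and translation.
    M = cv2.getRotationMatrix2D((cw / 2, ch / 2), rng.uniform(0, 360),
                                rng.uniform(0.8, 1.2))
    M[:, 2] += (rng.uniform(0, w - cw), rng.uniform(0, h - ch))
    warped = cv2.warpAffine(component_rgba, M, (w, h))
    # Photometric transformation: random gain (contrast) and bias (brightness).
    color = warped[..., :3].astype(np.float32)
    color = np.clip(color * rng.uniform(0.7, 1.3) + rng.uniform(-30, 30), 0, 255)
    alpha = warped[..., 3:4].astype(np.float32) / 255.0
    out = background_bgr.astype(np.float32) * (1 - alpha) + color * alpha
    return out.astype(np.uint8)
```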
In [13], a study is conducted that compares the accuracy of different combinations of features
and classifiers. Training and validation were performed with artificially generated images, whereas
testing was performed with real images taken with the proposed setup ensuring performance in real
environments. This study was conducted using 21 different components chosen in order to cover a big
spectrum of components, ranging from multi-colored big components to uniform small components.
In conclusion, a combination of color histograms, histograms of oriented gradients (HOG) and local
binary patterns (LBPs) was chosen as the feature set. Along with a radial-basis-function support vector
machine (RBF-SVM) as the classifier, this combination achieved more than 90% accuracy in validation
and testing. Furthermore, this combination was verified to have a computation time low enough for
a real-time application.
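A sketch of this feature/classifier combination is shown below, with OpenCV for the color histograms and HOG, scikit-image for the LBPs, and scikit-learn for the RBF-SVM. The crop size, bin counts and SVM hyper-parameters are assumptions for illustration; the paper does not report those values here.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

# HOG over a fixed-size crop (window, block, stride and cell sizes are assumptions).
HOG = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def extract_features(crop_bgr):
    """Global descriptor: color histogram + HOG + LBP histogram."""
    img = cv2.resize(crop_bgr, (64, 64))
    hist = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256]).flatten()
    hist /= hist.sum() + 1e-6
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    hog = HOG.compute(gray).flatten()
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hist, hog, lbp_hist])

def train_component_model(crops, labels):
    """One classifier per component reference: component (1) vs. background (0)."""
    X = np.stack([extract_features(c) for c in crops])
    clf = SVC(kernel="rbf", probability=True)  # RBF-SVM with probability output
    clf.fit(X, labels)
    return clf
```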
3.3. Board Tracking
As the proposed system uses the image captured by the camera to recognize components, it is
essential to avoid distortions in the image due to the camera lens. It is therefore necessary to calibrate
the camera, i.e., to know the intrinsic camera parameters, prior to using the system. In our
system, we propose to use the well-known Zhang's camera calibration algorithm [28]. This calibration
process only needs to be done once, and it allows us to calculate the compensation that has to be
applied to each image captured by the camera to avoid distortions.
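As a reference, the one-time calibration and the per-frame compensation map directly onto OpenCV's implementation of Zhang's method [28]; the checkerboard size and file paths below are assumptions.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner checkerboard corners (assumed pattern size)
template = np.zeros((pattern[0] * pattern[1], 3), np.float32)
template[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for path in glob.glob("calibration/*.png"):   # assumed capture folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(template)
        img_pts.append(corners)

# Zhang's method [28]: recover the intrinsic matrix K and distortion coefficients.
rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1],
                                         None, None)

def compensate(frame):
    # Compensation applied to every captured image before further processing.
    return cv2.undistort(frame, K, dist)
```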
During the component assembly phase, the boards have a non-fixed position, with one degree
of freedom for horizontal displacement. They also have different sizes, shapes and colors due to the
mounting boards and materials used. Since each component's position is expressed relative to the
bottom-left corner of the board, the use of markers is proposed with the final purpose of tracking
the board position. In this system, ArUco markers are used [19].
Two ArUco markers are placed to locate the vertical board position, and another two ArUco markers
are placed to locate the horizontal board position. During the assembly, the operator might occlude
the horizontal markers; if this happens, the system assumes the previously captured horizontal
marker positions as the current ones (temporal coherence assumption). The corner of the board is
calculated by intersecting the vertical line and the horizontal line referenced to the markers; see
Figure 3. This corner is necessary to obtain the reference system of the PCB, and therefore, to locate
component positions. If the vertical line calculation is not possible, the component inspection stops.
Thus, visible vertical markers are necessary to track the board correctly.
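The corner computation can be sketched as follows with OpenCV's ArUco module [19]: estimate the centers of the vertical and horizontal marker pairs, fit a line through each pair, and intersect the two lines using homogeneous cross products. The marker dictionary and IDs are assumptions.

```python
import cv2
import numpy as np

DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)  # assumed
VERT_IDS, HORZ_IDS = (0, 1), (2, 3)                              # assumed IDs

def marker_centers(frame, wanted_ids, last_known):
    corners, ids, _ = cv2.aruco.detectMarkers(frame, DICT)
    found = {} if ids is None else {
        int(i): c.reshape(4, 2).mean(axis=0)
        for i, c in zip(ids.flatten(), corners)}
    pts = [found[i] for i in wanted_ids if i in found]
    # Temporal coherence: reuse the previously captured positions if occluded.
    return pts if len(pts) == 2 else last_known

def board_corner(vert_pts, horz_pts):
    # Lines through each marker pair, and their intersection, in homogeneous
    # coordinates: line = p x q, intersection = l1 x l2.
    line = lambda p, q: np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])
    x = np.cross(line(*vert_pts), line(*horz_pts))
    if abs(x[2]) < 1e-9:
        return None  # degenerate configuration: inspection stops
    return x[:2] / x[2]  # PCB reference corner in pixels
```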
Figure 3. Images where the printed circuit board (PCB) is in different positions. The purple lines are
located thanks to the ArUco markers; the corner (purple circle) is the intersection between them and
denotes the PCB reference system.
3.4. Verification
In this step, the main goal is to verify the presence of the components on the board.
First, the assembly region of each component should be located. A list of component coordinates
relative to the board corner is fed to the system, and because the board corner is
already located, the assembly regions can be situated in the image. This coordinate list is created by
the quality engineer during the design process of the board using the manufacturing execution system
(MES) of the company.
A further step is to calculate the detection probability of each component, using the cropped
image of the assembly region. The classification models of the board components are loaded from the
model database. Then, for each cropped image, the selected combination of features is extracted and
fed to the classification model, an RBF-SVM in this case.
The output of the model is a probability for the analyzed image crop of the component. A high
value of this probability represents component presence, whereas low probability means absence.
Note that a larger region usually provides a stronger response than a smaller one because it contains more
borders, texture, colors, etc. To compensate for this, a threshold calculated proportionally from the
region size is applied. This operation minimizes false positives.
When these values are obtained, the output is visualized on the screen and on the board.
The visualization strategy is explained in the next section.
3.5. Screen Visualization
With the verification output, each region location is highlighted on the screen by a rectangle: if the
component is mounted, the rectangle is green, whereas if it is not mounted, it is red. The current
group of components to be mounted is highlighted with a blinking solid orange rectangle in the
visualization. On the right side of the screen, the reference and image of the component to be mounted
are shown; see Figure 4.
Figure 4. Screen visualization of the current state of the PCB.
3.6. Projection Mapping
The main problem with screen-based visualization is that the operator has to constantly check
the state of the assembly on the screen, switching attention between the board and the screen. A more
ergonomic solution is obtained when the projector is used to visualize this output directly onto the
PCB. This improves the posture of the worker and increases the assembly speed and quality, since the
operator does not have to look up to receive work instructions.
Apart from offering assistance in a conventional screen, the proposed system also provides
guidance by projecting relevant virtual content directly onto the PCB. However, to project content
in the desired place and with an adequate degree of immersion, it is first necessary to calibrate the
camera–projector pair.
3.6.1. Camera–Projector Calibration
To project virtual content adequately in a physical space, we must calibrate the setup; i.e.,
find a geometric transformation that adapts the virtual data to the shape of the projection surface.
This transformation can be fixed manually by modifying the position or shape of the virtual
content until the projection gives the desired results, which is a laborious and expensive process
that requires technical skills. However, in those cases where there is also a camera in the setup,
the camera–projector calibration, i.e., finding the correct geometric transformation, can be calculated
automatically. The projector can emit a pattern that is captured and recognized by the camera and
which can be used to estimate the transformation that moves content from the camera’s coordinate
system to the projector’s coordinate system. Additionally, when an object is recognized in the camera
image and the camera pose is known, i.e., the position and orientation with respect to the object are known
(Section 3.3), we have the transformation that relates the object and camera coordinate systems. Thus,
since the virtual content is defined in the same coordinate system as the object, its projection can be
calculated using the chain rule. In this work, we have followed this methodology to calibrate the
camera–projector pair. We propose to place a planar checkerboard in the physical space, and the
projector projects a complete gray code sequence. This structured-light sequence can be decoded, so
that each pixel of the camera is associated with a projector row and column. Therefore, since the 3D
coordinates of the checkerboard corners and their 2D positions (pixels) in the camera and projectors
images are known, a traditional stereo calibration method can be applied to solve the three-dimensional
camera–projector relationship (see [11]). Nonetheless, in our setup, the projection surface is a plane (a
PCB), and it is always parallel to the camera image plane, so we have simplified the camera–projector
relationship to 2D. We have modified the implementation of [11] to estimate a 2D homography that
represents the camera–projector relationship. Although this simplification can be inaccurate for more
complex projection surfaces, it offers good results for planar surfaces and simplifies the calibration
process. In the original calibration version [11], a structured-light sequence must be captured from
several points of view, but in our simplified version, only one point of view is required. Therefore, our
simplified, non-optimized version takes only approximately 85 seconds to perform the calibration (50
seconds to project and capture the gray code sequence and 35 seconds to decode the patterns and to
estimate the homography). Nevertheless, this time is not usually critical, since the calibration process
is only executed once when the setup is built. However, the setup must be recalibrated when there is a
change in the camera, the projector or the projection surface.
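Once the gray code sequence has been decoded (for example, with the structured-light tooling of [11]), every decoded camera pixel is associated with a projector row and column, and the simplified 2D calibration reduces to a single homography fit. A hedged sketch, assuming the decoded row/column maps are already available:

```python
import cv2
import numpy as np

def calibrate_camera_projector(cam_corners, proj_row_map, proj_col_map):
    """Estimate the 2D camera-to-projector homography H_calib.

    cam_corners:   Nx2 checkerboard corners detected in the camera image.
    proj_row_map / proj_col_map: per-camera-pixel projector row/column decoded
                   from the gray code sequence (negative where undecoded).
    """
    cam_pts, proj_pts = [], []
    for (u, v) in cam_corners.astype(int):
        row, col = proj_row_map[v, u], proj_col_map[v, u]
        if row >= 0 and col >= 0:  # skip pixels the decoding could not resolve
            cam_pts.append((u, v))
            proj_pts.append((col, row))
    H, _ = cv2.findHomography(np.float32(cam_pts), np.float32(proj_pts),
                              cv2.RANSAC)
    return H  # maps camera-image points to projector-image points
```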
3.6.2. Virtual Content Projection
In the proposed projection mapping pipeline (Figure 5), as stated in the previous subsection, first,
the virtual content is transferred to the camera image using the camera tracking data (T_track, Section 3.3),
which creates the view that is displayed on the screen. Then, this content, which is already referenced
with respect to the camera image coordinate system, is transformed again using the camera–projector
calibration (H_calib, Section 3.6.1) to the projector image area, and subsequently projected. Thus,
to project any content, we define its location in the 2D reference coordinate system of the board and
then apply the chain rule, which can be represented conceptually as applying T_track followed by H_calib.
In our application, we decided to project the following information (Figure 6), which answers
three simple questions that are very useful for operators:
"Where?": The place where the operator has to assemble the current electronic component, which
is highlighted with the projection of a white flicking rectangle.
"What?": The reference number of the electronic components that must be assembled in the current
step.
"How many?": The number of the current electronic components that have already been assembled
regarding the total number to be assembled. A fraction "
i/j
" is projected, where
i
is the number of
current components already assembled from the total of j.
The projection of "What?" and "How many?" is located at the border frame (Figure 6), outside the
electronic board, as this area is not used for anything and it offers good visibility. The projection of
"Where?" on the other hand, is superimposed on the real position that corresponds to the inside the
electronic board (Figure 6). This was not an appropriate area to get good contrast due to the material
of the PCB and the limited power of the projector that was used, so we opted to flick the projection to
capture the operator’s visual attention, and, consequently, improve its visibility. This has been proven
as a good solution, since the result of the usability test was positive (Section 4).
Figure 5. Conceptual scheme of the projection mapping pipeline. Virtual content (left) is transferred to
the camera image (T_track), and then, this content is transformed again (H_calib) to the projector image
area that is subsequently projected. See text for details.
Figure 6. Example of virtual content that is projected (highlighted in white) on the printed circuit board
during the component assembly process. The projection is seen more clearly live, so the bottom row
provides zoomed-in versions of the top images to show the projections with more quality.
4. Usability Test
With the aim of evaluating the benefits of the AR extension compared to the previous system,
a system usability scale (SUS) survey was conducted to compare the usability of the two
systems: on the one hand, the original system presented in [13], where instructions are only displayed
on the screen; on the other hand, the proposed system, where instructions are displayed both on the
screen and on the board directly via the projector. The SUS survey is a ten-item scale test giving a global
view of subjective assessments of usability [4], and it is used as a standard survey for usability tests.
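For reference, the standard SUS scoring rule [4] maps the ten 1-5 Likert responses to a 0-100 score: odd (positive) items contribute (response - 1), even (negative) items contribute (5 - response), and the sum is multiplied by 2.5:

```python
def sus_score(responses):
    """Standard SUS scoring [4] for ten 1-5 Likert responses."""
    assert len(responses) == 10
    odd = sum(r - 1 for r in responses[0::2])    # items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])   # items 2, 4, 6, 8, 10
    return 2.5 * (odd + even)                    # 0-100 scale

print(sus_score([4] * 10))  # a uniform "agree" profile scores 50.0
```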
The proposed test consists of mounting the same PCB with the aid of both solutions, the original
and the proposed one, wherein every mounting process is timed and the number of mounting errors
is recorded. Finally, the SUS test was completed.
A total of 21 people were surveyed. They were between 20 and 45 years old; there were 15 men and
six women, one of them color-blind. They did not have any experience in PCB assembly. This was
done in order to emulate a newcomer to the production line, since rotation of personnel is common.
The test was performed in a laboratory where a replica of the manufacturing production workspace
was located.
They were divided into three groups of seven participants each. Group 1 used the original
system first and the proposed one later. Conversely, Group 2 used the proposed system
first and the original system second. Groups 1 and 2 did not have any experience mounting the electronic
board; thus, it was fair to assume that the first mounting would take longer than the second, as the
users had more experience for the second mounting. For this reason, Group 3 was created. This group
had already mounted the PCB using a different solution, so they already had some knowledge of the
PCB when using both processes. This grouping was done in order to measure time efficiency among
processes, but it did not have any impact from the usability point of view.
Figure 7 displays the SUS scores. The higher the score, the more usable the system is. The systems
achieved average values of 80 and 90 out of 100, respectively. Although a SUS score interpretation is
not straightforward, Bangor et al. [2] concluded that any system above 68 can be considered usable;
they also proposed an adjective scale, where a mean SUS score of around 70 is considered good, one
around 85.5 is considered excellent and one around 90.9 is referred to as the best imaginable. Thus, both
systems are highly usable, but the use of augmented reality is preferable.
Figure 7. Distributions of SUS scores. Blue represents the original system and yellow the proposed one.
Black lines mark the average values of both distributions.
As mentioned, mounting times were measured in order to get some objective insights about
system efficiency; see Figure 8. As predicted, for Groups 1 and 2, the first mounting was usually the
more time-consuming one. However, for Group 3, where participants started both mountings with the
same experience, the proposed solution yielded lower mounting times for all participants. In addition,
the feedback provided by the two systems prevented the users from making any errors.
Figure 8. Mounting times of each group. Blue and yellow bars represent the original and proposed
system, respectively. Group 1 started with the original, Group 2 started with the proposed and Group
3 already had experience.
These results show that the AR system is even faster and more comfortable than the previous
system. From the users’ comments, it can be deduced that both velocity and comfort are increased
because the user only needs to look and focus on the board, instead of changing their focus between
screen and board, thereby helping the operator to maintain the same posture. Moreover, the direct
projection onto the board allows the operator to find the placement location more easily, saving operational
time and reducing placement errors. The system was also validated by experienced workers of the
manufacturing company, who also pointed out the enhancement provided by the projection mapping.
In [13], the usability of the screen-only system was compared with the traditional system used by the
manufacturer; the proposed system achieved much higher satisfaction levels than the traditional
system. Therefore, the AR extension is also much more usable than the traditional system.
5. Discussion
We propose to use direct projection in the workspace for improving user satisfaction and at
the same time reducing assembly errors. The previous section shows that operators actually find
the system more usable, feel more secure with it and require less time to do their tasks. A further
advantage is that operators require less training time, as the system gives assistance throughout the
assembly. Moreover, this system allows the production managers to have traceability of the most
complex components or PCBs to be assembled. This enables them to take further measures for ensuring
operator satisfaction while also optimizing production because of the reduction of potential errors.
To guarantee that the projection-based solution is effective, the illumination conditions of the
workspace have to be considered. The ambient light cannot be strong, so that the light emitted by the
projector is predominant and the projected content is shown with contrast and sharpness. A balance
must be achieved between ambient light that is sufficient for object detection (electronic components in our
case) and light that does not overpower the visibility of the projection. Similarly, it is preferable to work on
non-specular surfaces, so that no brightness is generated that hinders the visibility of the projection.
In our scenario, we had to deal with this difficulty, since PCBs are specular, and therefore, we had to
use more sophisticated visualization techniques to capture the operator’s attention (flickering).
In the use case presented in this paper (assembling small electronic components on a PCB), we have
not had problems with hidden areas of projection. These areas appear when an object that is in the
workspace and in front of the projector has large dimensions and occludes the area behind it. Thus,
the rays emitted by the projector cannot reach this area, and therefore, it is not possible to project
content in this zone. To solve this limitation, a multiprojector configuration should be used.
6. Conclusions and Future Work
Despite the improvements in the last few decades, the use of augmented reality in industry has
not become widespread yet, for several reasons, including ergonomics, visual fatigue, content creation,
the lack of IT infrastructure, etc. [10]. In fact, ergonomics is the main obstacle for AR glasses; thus,
projection-based AR systems have been positioned as the alternative, because they project data directly
in the workspace, leaving the operator's hands free and avoiding discomfort due to motion sickness or
vergence-accommodation conflicts [16].
The fast adoption of new, advanced ICT-related technologies such as cloud computing and
augmented reality by the manufacturing sector is having a real positive impact in several respects,
such as increasing flexibility, productivity and efficiency. In this paper, we propose integrating an AR
system to support operators during the manual assembly of electronic components for improving
workers' ability to adapt to highly variable production conditions. Our results show that, compared
with the old procedure, with the new system the operators generate fewer errors, especially when they
face a new PCB they have not assembled before. In addition, they feel more comfortable because
they know that there is an additional system that ensures that their work is being done correctly. In
the future, we plan to implement some additional features, such as one to verify the polarity; i.e.,
the orientations of some components. Also, we plan to evaluate the impact of using deep learning
approach for recognizing components in order to increase robustness against severe illumination
changes.
Supplementary Materials: The following are available online at www.mdpi.com/xxx/s1, Video S1: Automatic System To Assist Operators in The Assembly of Electronic Components.
Author Contributions:
Conceptualization, I. S. and I. B.; Formal analysis, M. O.; Funding acquisition, D. A. and L.
Q.; Investigation, M. O. and H. A.; Methodology, I. S.; Resources, D. A.; Software, M. O., H. A. and I. S.; Validation,
F. S.; Writing – original draft, H. A.; Writing – review and editing, M. O. and F. S. All authors have read and agreed
to the published version of the manuscript.
Funding:
We would also like to thank the SPRI agency for funding the SIRA applied research project under the
Hazitek 2018 call, where the research described in this paper was carried out.
Acknowledgments:
We would like to thank the expert operators of Ikor for doing the user evaluation tests. We
also thank Sara Garcia for her help in generating multimedia content.
Conflicts of Interest: The authors declare no conflicts of interest.
References
1. Alvarez, H.; Lajas, I.; Larrañaga, A.; Amozarrain, L.; Barandiaran, I. Augmented reality system to guide operators in the setup of die cutters. Int. J. Adv. Manuf. Technol. 2019, 103, 1543–1553.
2. Bangor, A.; Kortum, P.; Miller, J. Determining What Individual SUS Scores Mean: Adding an Adjective Rating Scale. J. Usability Stud. 2009, 4, 114–123.
3. Bimber, O.; Raskar, R. Spatial Augmented Reality: Merging Real and Virtual Worlds; A.K. Peters: Natick, MA, USA, 2005.
4. Brooke, J. SUS: A quick and dirty usability scale. Usability Eval. Ind. 1996, 189, 4–7.
5. Hahn, J.; Ludwig, B.; Wolff, C. Augmented reality-based training of the PCB assembly process. In Proceedings of the 14th International Conference on Mobile and Ubiquitous Multimedia, Linz, Austria, 30 November–2 December 2015; Volume 46, pp. 395–399.
6. Kern, J.; Weinmann, M.; Wursthorn, S. Projector-based Augmented Reality for quality inspection of scanned objects. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, IV-2/W4, 83–90.
7. Korn, O.; Funk, M.; Schmidt, A. Design Approaches for the Gamification of Production Environments: A Study Focusing on Acceptance. In Proceedings of the 8th ACM International Conference on Pervasive Technologies Related to Assistive Environments, Corfu, Greece, 1–3 July 2015; pp. 1–7.
8. de Lacalle, L.N.L.; Posada, J. Special Issue on New Industry 4.0 Advances in Industrial IoT and Visual Computing for Manufacturing Processes. Appl. Sci. 2019, 9, 4323.
9. Manuri, F.; Pizzigalli, A.; Sanna, A. A State Validation System for Augmented Reality Based Maintenance Procedures. Appl. Sci. 2019, 9, 2115.
10. Martinetti, A.; Marques, H.; Singh, S.; Dongen, L. Reflections on the Limited Pervasiveness of Augmented Reality in Industrial Sectors. Appl. Sci. 2019, 9, 3382.
11. Moreno, D.; Taubin, G. Simple, Accurate, and Robust Projector-Camera Calibration. In Proceedings of the Second International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission, Zurich, Switzerland, 13–15 October 2012; pp. 464–471.
12. Neumann, U.; Majoros, A. Cognitive, performance, and systems issues for augmented reality applications in manufacturing and maintenance. In Proceedings of the IEEE 1998 Virtual Reality Annual International Symposium, Atlanta, GA, USA, 14–18 March 1998; pp. 4–11.
13. Ojer, M.; Serrano, I.; Saiz, F.; Barandiaran, I.; Gil, I.; Aguinaga, D.; Alejandro, D. Real-Time Automatic Optical System to Assist Operators in the Assembling of Electronic Components. Int. J. Adv. Manuf. Technol. 2019.
14. Palmarini, R.; Erkoyuncu, J.; Roy, R.; Torabmostaedi, H. A systematic review of augmented reality applications in maintenance. Robot. Comput.-Integr. Manuf. 2018, 49, 215–228.
15. Posada, J.; Toro, C.; Barandiaran, I.; Oyarzun, D.; Stricker, D.; De Amicis, R.; Pinto, E.B.; Eisert, P.; Döllner, J.; Vallarino, I. Visual Computing as a Key Enabling Technology for Industrie 4.0 and Industrial Internet. IEEE Comput. Graph. Appl. 2015, 35, 26–40.
16. Posada, J.; Zorrilla, M.; Dominguez, A.; Simões, B.; Eisert, P.; Stricker, D.; Rambach, J.; Dollner, J.; Guevara, M. Graphics and Media Technologies for Operators in Industry 4.0. IEEE Comput. Graph. Appl. 2018, 38, 119–132.
17. Rodriguez, L.; Quint, F.; Gorecky, D.; Romero, D.; Siller, H.R. Developing a Mixed Reality Assistance System Based on Projection Mapping Technology for Manual Operations at Assembly Workstations. Procedia Comput. Sci. 2015, 75, 327–333.
18. Romero, D.; Stahre, J.; Wuest, T.; Noran, O.; Bernus, P.; Fast-Berglund, Å.; Gorecky, D. Towards an Operator 4.0 typology: A human-centric perspective on the fourth industrial revolution technologies. In Proceedings of the International Conference on Computers and Industrial Engineering (CIE46), Tianjin, China, 29–31 October 2016.
19. Romero Ramirez, J.; Muñoz Salinas, R.; Medina Carnicer, R. Speeded up detection of squared fiducial markers. Image Vis. Comput. 2018, 76, 38–47.
20. Rother, C.; Kolmogorov, V.; Blake, A. "GrabCut": Interactive Foreground Extraction Using Iterated Graph Cuts. ACM Trans. Graph. 2004, 23, 309–314.
21. Sand, O.; Büttner, S.; Paelke, V.; Röcker, C. smARt.Assembly—Projection-Based Augmented Reality for Supporting Assembly Workers. In Proceedings of the 8th International Conference on Virtual, Augmented and Mixed Reality, Toronto, ON, Canada, 17–22 July 2016; pp. 643–652.
22. Sanna, A.; Manuri, F.; Lamberti, F.; Paravati, G.; Pezzolla, P. Using handheld devices to support augmented reality-based maintenance and assembly tasks. In Proceedings of the IEEE International Conference on Consumer Electronics, Las Vegas, NV, USA, 9–12 January 2015; pp. 178–179.
23. Segura, A.; Diez, H.; Barandiaran, I.; Arbelaiz, A.; Álvarez, H.; Simões, B.; Posada, J.; García-Alonso, A.; Ugarte, R. Visual computing technologies to support the Operator 4.0. Comput. Ind. Eng. 2018, doi:10.1016/j.cie.2018.11.060.
24. Simões, B.; De Amicis, R.; Barandiaran, I.; Posada, J. X-Reality System Architecture for Industry 4.0 Processes. Multimodal Technol. Interact. 2018, 2, 72.
25. Wang, X.; Ong, S.K.; Nee, A. Real-virtual components interaction for assembly simulation and planning. Robot. Comput.-Integr. Manuf. 2016, 41, 102–114.
26. Webel, S.; Engelke, T.; Peveri, M.; Olbrich, M.; Preusche, C. Augmented Reality Training for Assembly and Maintenance Skills. BIO Web Conf. 2011, 1, 00097.
27. Yuan, M.; Ong, S.K.; Nee, A. Augmented reality for assembly guidance using a virtual interactive tool. Int. J. Prod. Res. 2008, 46, 1745–1767.
28. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
29. InspectAR. Available online: https://www.inspectar.com (accessed on 10 December 2019).
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
... Currently, many industrial physical tasks, especially assembly tasks, are still manual, which can frequently result in possible operation errors and low efficiency due to the different proficiency and experience of workers. To minimize these errors and improve efficiency, there has been a lot of research on AR/MR assembly assistance systems to support the manual operation of the worker by using AR/MR technology to provide visual or other forms of assistance [21][22][23][24][25]. And AR/MR assembly assistance has proven to be superior to traditional (paper-based or screen-based) instruction delivery in terms of accuracy and time [24,25]. ...
... To minimize these errors and improve efficiency, there has been a lot of research on AR/MR assembly assistance systems to support the manual operation of the worker by using AR/MR technology to provide visual or other forms of assistance [21][22][23][24][25]. And AR/MR assembly assistance has proven to be superior to traditional (paper-based or screen-based) instruction delivery in terms of accuracy and time [24,25]. ...
Article
Full-text available
With the rapid development of mixed reality (MR) technology, many compact, lightweight, and powerful devices suitable for remote collaboration, such as MR headsets, hand trackers, and 3D cameras, become readily available, providing hardware and software support for remote collaboration. Consequently, exploring MR technologies for remote collaboration on physical industry tasks is becoming increasingly worthwhile. In many complex production scenarios, such as assembly tasks, significant gains can be achieved by having remote experts assist local workers to manipulate objects in local workspaces. However, it can be challenging for a remote expert to carry out effective spatial reference and action demonstration in a local scene. Sharing 3D stereoscopic scenes can provide depth perception and support remote experts to move and explore a local user’s environment freely. Previous studies have demonstrated that gesture-based interaction is natural and intuitive, and interaction based on virtual replicas can provide clear guidance, especially for industrial physical tasks. In this study, we develop an MR remote collaboration system that shares the stereoscopic scene of the local workspace by using real-time 3D video. This system combines gesture cues and virtual replicas in a complementary manner to support the remote expert to create augmented reality (AR) guidance for the local worker naturally and intuitively in the virtual reality immersive space. A formal user study was performed to explore the effects of two different modalities interface in industrial assembly tasks: our novel method of using the combination of virtual replicas and gesture cues in the 3D video (VG3DV), and a method similar to the popular method currently of using gesture cues in the 3D video (G3DV). We found that using the VG3DV can significantly improve the performance and user experience of MR remote collaboration in industrial assembly tasks. Finally, some conclusions and future research directions were given.
... This technology employs black and white markers to detect the augmented object, while location-based applications operate without the usage of markers. This technique relies on the global positioning system (GPS) or a digital compass to determine the user's location, after which real-world physical things are substituted with, or combined with, augmented objects (Parekh et al., 2020) There's also projection-based augmented reality, often known as Spatial Augmented Reality (SAR) or projection mapping, which operates by projecting virtual data directly into actual space (Ojer et al., 2020). ...
... This method uses a digital compass or global positioning system (GPS) to detect the user's location before replacing or combining real-world physical objects with augmented ones (Parekh et al., 2020). There's also projection-based augmented reality, often known as Spatial Augmented Reality (SAR) or projection mapping, which operates by projecting virtual data directly into an actual space (Ojer et al., 2020). ...
Article
Full-text available
One of the most advanced reality technologies for education in recent years is augmented reality (AR). To create a fun learning atmosphere and to aid student learning, several subjects have begun incorporating modern technology into their teaching and learning procedures. In addition to being extensively tested and developed for typical students, AR has also been used successfully to help kids with learning disabilities (SLD). This study is focused on students with learning difficulties, looking at the changes in the usage of augmented reality (AR) technology in education over the previous few years, particularly in the area of physical education. Physical Education (PE) is frequently identified as one of the disciplines that is challenging for kids with learning disabilities to follow. This study makes use of a detailed analysis of an AR application in connection to this subject over the preceding five years because AR has the significant potential to be applied in the field of physical education. The development of this technology in physical education, the kind of AR technology employed, and the kinds of learning disability groups that the technology can help are demonstrated in a clear and understandable manner. The researcher’s perspectives and the chance to advance this study will be helped by this.
... The experiments showed that the GC is a superior guidance option since users are pointed to a certain point more clearly than by AR annotations. Ojer et al. [40] have developed a projection-based AR assistance system for manual Printed Circuit Board (PCB) assembly, which consist of an illumination system, a 2D high-resolution image acquisition setup a screen and a projector. No AR glasses are needed, which reduces worker eye strain, yet the illumination of the worktable area must ensure that the light from the projector remains predominant. ...
Article
Full-text available
Product assembly is often one of the last steps in the production process. Product assembly is often carried out by workers (assemblers) rather than robots, as it is generally challenging to adapt automation to any product. When assembling complex products, it can take a long time before the assembler masters all the steps and can assemble the product independently. Training time has no added value; therefore, it should be reduced as much as possible. This paper presents a custom-developed system that enables the guided assembly of complex and diverse products using modern technologies. The system is based on pick-to-light (PTL) modules, used primarily in logistics as an additional aid in the order picking process, and Computer Vision technology. The designed system includes a personal computer (PC), several custom-developed PTL modules and a USB camera. The PC with a touchscreen visualizes the assembly process and allows the assembler to interact with the system. The developed PC application guides the operator through the assembly process by showing all the necessary assembly steps and parts. Two-step verification is used to ensure that the correct part is picked out of the bin, first by checking that the correct pushbutton on the PTL module has been pressed and second by using a camera with a Computer Vision algorithm. The paper is supported by a use case demonstrating that the proposed system reduces the assembly time of the used product. The presented solution is scalable and flexible as it can be easily adapted to show the assembly steps of another product.
... This technology employs black and white markers to detect the augmented object, while location-based applications operate without the usage of markers. This technique relies on the global positioning system (GPS) or a digital compass to determine the user's location, after which real-world physical things are substituted with, or combined with, augmented objects (Parekh et al., 2020) There's also projection-based augmented reality, often known as Spatial Augmented Reality (SAR) or projection mapping, which operates by projecting virtual data directly into actual space (Ojer et al., 2020). ...
Article
Full-text available
"Augmented Reality (AR) is one of education’s most developed reality technologies in the last few decades. Many subjects have started integrating this technology into their teaching and learning process to create an attractive learning environment and to help the student learning process. As well as for regular students, AR has also been highly tested and developed on students with learning difficulties (SLD) and found positive results. This study focused on students with learning difficulties, which will find out the trends in the development of AR technology in Physical Education (PE). The PE subject is often assessed as one of the subjects which are difficult for children with learning difficulties to follow. By using a systematic review of this topic over the last five years. It is hoped that it will clearly show the development of this technology in physical education, the type of AR technology used, and the types of learning disability groups with which the technology can assist. The results show that the use of AR technology that is integrated into PE learning with SLDs is not found. This is an excellent opportunity for researchers to conduct and develop this research further."
Article
Fast-paced knowledge and expertise sharing (KES) is a typical demand in contemporary workplaces due to dynamic markets and ever-changing work practices. Past and current computer supported cooperative work (CSCW) research has long been investigating how computer technologies can support people with KES. Recent claims have asserted that augmented reality- (AR-)based cyber-physical production systems (CPPS) are poised to bring significant changes in the ways that KES unfolds in manufacturing contexts. This paper scrutinises such claims by implementing a short-term evaluation of an AR-based CPPS and assessing how it can potentially support (1) the generation of AR content by experienced production workers and (2) the visualisation and processing of such content by novice workers. We, therefore, contribute a user study to the CSCW community that sheds light on the use of a particular type of AR-based CPPS for KES in industrial contexts.?
Chapter
Progressions in Medical, Industrial 4.0, and training require more client communication with the information in reality. Extended reality (XR) development could be conceptualized as a wise advancement and a capable data variety device fitting for far off trial and error in image processing. This innovation includes utilizing the Head Mounted Devices (HMD) built-in with a functionalities like data collection, portability and reproducibility. This article will help to understand the different methodologies that are used for 3Dimensional (3D) mode of interaction with the particular data set in industries to refine the system and uncovers the bugs in the machineries. To identify the critical medical issues, the future technology can give a comfort for the medic to diagnose it quickly. Educators currently utilizing video animation are an up-rising pattern. Important methods used for various applications like, Improved Scale Invariant Feature Transform (SIFT), Block Orthogonal Matching Pursuit (BOMP), Oriented fast and Rotated Brief (ORB) feature descriptor and Kanade-Lucas-Tomasi (KLT), Semi-Global Block Matching (SGBM) algorithm etc., In high-speed real time camera, the position recognition accuracy of an object is less than 65.2% as an average in motion due to more noise interferences in depth consistency. So, more optimization is needed in the algorithm of depth estimation. Processing time of a target tracking system is high (10.670 s), that must be reduced to provide increased performance in motion tracking of an object in real-time. XR is a key innovation that is going to work with a change in perspective in the manner in which clients collaborate with information and has just barely as of late been perceived as a feasible answer for tackling numerous basic requirements.KeywordsExtended reality (XR)Machine learning algorithms3D reconstructionMotion trackingImage depth analysisVirtual reality (VR)Augmented reality (AR)Mixed reality (MR)
Article
Full-text available
Augmented Reality (AR) has gradually become a mainstream technology enabling Industry 4.0 and its maturity has also grown over time. AR has been applied to support different processes on the shop-floor level, such as assembly, maintenance, etc. As various processes in manufacturing require high quality and near-zero error rates to ensure the demands and safety of end-users, AR can also equip operators with immersive interfaces to enhance productivity, accuracy and autonomy in the quality sector. However, there is currently no systematic review paper about AR technology enhancing the quality sector. The purpose of this paper is to conduct a systematic literature review (SLR) to conclude about the emerging interest in using AR as an assisting technology for the quality sector in an industry 4.0 context. Five research questions (RQs), with a set of selection criteria, are predefined to support the objectives of this SLR. In addition, different research databases are used for the paper identification phase following the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) methodology to find the answers for the predefined RQs. It is found that, in spite of staying behind the assembly and maintenance sector in terms of AR-based solutions, there is a tendency towards interest in developing and implementing AR-assisted quality applications. There are three main categories of current AR-based solutions for quality sector, which are AR-based apps as a virtual Lean tool, AR-assisted metrology and AR-based solutions for in-line quality control. In this SLR, an AR architecture layer framework has been improved to classify articles into different layers which are finally integrated into a systematic design and development methodology for the development of long-term AR-based solutions for the quality sector in the future.
Article
Full-text available
This work presents an optical inspection-guiding system for electronic board manufacturing. The system monitors in real time the mounting process of electronic components performed by an operator. It visually guides the operator through the mounting process while checking the correctness of its actions. As a consequence, mounting errors are reduced while operator comfort is enhanced. This work also introduces a novel method to generate virtual images from a few real images in order to generate enough data for model training. The proposed method is tested using 7 different descriptor combinations and 4 different classifiers. We have also collected, generated, and evaluated a component dataset of 20 different components, called ECAD. The solution was tested with 16 real scenarios, different electronic boards which are empty or full with components. Finally, a usability test was carried out with 21 different people comparing the original and proposed solutions. The propose system is advantageous since it enhances operator’s comfort and satisfaction, increases mounting speed, and reduces error ratio.
Article
Full-text available
The new advances of IIOT (Industrial Internet of Things), together with the progress in visual computing technologies, are being addressed by the research community with interesting approaches and results in the Industry 4.0 domain[...]
Article
Full-text available
The paper aims to investigate the reasons why Augmented Reality (AR) has not yet fully broken into the industrial market or found wider application in industry. The main research question the paper tries to answer is: what are the factors (and to what extent) that are limiting AR? Firstly, a reflection on the state of the art of AR applications in industry is proposed, to discover the sectors most commonly chosen for deploying the technology so far. Then, based on a subsequent survey, three AR applications were tested in the manufacturing, automotive, and railway sectors, and the paper pinpoints key aspects that condition the technology's adoption in daily working life. To compare whether the perceptions of employees from the railway, automotive, and manufacturing sectors differ significantly, a one-way analysis of variance (ANOVA) was used. Suggestions are then formulated to improve these aspects in industry. Finally, the paper presents its main conclusions, highlighting possible lines of future research.
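For readers unfamiliar with the statistical test used here, a one-way ANOVA checks whether the means of several groups differ more than chance would explain. A minimal sketch with hypothetical survey scores (not the paper's data) follows:

```python
# Illustrative one-way ANOVA over hypothetical per-sector survey scores.
from scipy.stats import f_oneway

railway = [3.2, 4.1, 3.8, 2.9, 3.6]        # hypothetical Likert-scale responses
automotive = [4.0, 4.4, 3.9, 4.2, 4.1]
manufacturing = [3.5, 3.1, 3.7, 3.3, 3.4]

f_stat, p_value = f_oneway(railway, automotive, manufacturing)
# A p-value below 0.05 would suggest that sector perceptions differ significantly.
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```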
Article
Full-text available
Maintenance has been one of the most important domains for augmented reality (AR) since its inception. AR applications enable technicians to receive visual and audio computer-generated aids while performing different activities, such as assembly, repair, or maintenance procedures. These procedures are usually organized as a sequence of steps, each one involving an elementary action to be performed by the user. However, since it is not possible to automatically validate the user's actions, they might execute some steps incorrectly or miss them altogether. Thus, a relevant open problem is to provide users with some sort of automated verification tool. This paper presents a system, used to support maintenance procedures through AR, which tries to address this validation problem. The novel technology consists of a computer vision algorithm able to evaluate, at each step of a maintenance procedure, whether the user has correctly completed the assigned task. The validation occurs by comparing an image of the final state of the machinery, after the user has performed the task, against a virtual 3D representation of the expected final state. Moreover, in order to avoid false positives, the system can identify both motion in the scene and changes in the camera's zoom and/or position, thus enhancing the robustness of the validation phase. Tests demonstrate that the proposed system can effectively help the user detect and avoid errors during the maintenance process.
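The core idea, comparing an observed image against a rendering of the expected state, can be sketched with a simple structural-similarity check. The file names and the 0.8 acceptance threshold below are assumptions for illustration, not the paper's actual pipeline:

```python
# Sketch of image-based step validation: observed final state vs. expected render.
import cv2
from skimage.metrics import structural_similarity as ssim

observed = cv2.imread("after_step.png", cv2.IMREAD_GRAYSCALE)       # camera image
expected = cv2.imread("expected_render.png", cv2.IMREAD_GRAYSCALE)  # 3D render
expected = cv2.resize(expected, (observed.shape[1], observed.shape[0]))

score, diff = ssim(observed, expected, full=True)  # 1.0 means identical images
step_completed = score > 0.8                       # assumed acceptance threshold
print(f"SSIM = {score:.3f}, step completed: {step_completed}")
```

A production system would additionally gate this check on scene motion and camera-pose stability, as the paper describes, to avoid false positives.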
Article
Full-text available
This paper describes an Augmented Reality system for improving the manufacturing process in the packaging sector. It presents a successful use case of how to integrate Augmented Reality technology on a factory shop floor by providing a tool that helps operators in their daily work. Given a product reference, the proposed system automatically digitizes the setting of the die cutter from an image and stores it in a database for later consultation and analysis. Furthermore, the content is not displayed as in a conventional Augmented Reality system (wearable devices such as glasses or mobiles), but projected directly onto the workspace to facilitate its interpretation. Compared to the current workflow, where data is recorded on sheets of paper and stored physically in warehouses, the proposed system offers several advantages, such as preventing data loss, reducing costs, and enabling knowledge to be extracted from post-processing of the digitized data.
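Projecting onto a flat workspace typically reduces to a planar homography between workspace coordinates and projector pixels. The sketch below illustrates that idea with invented calibration values; it is a simplification, not the paper's calibration procedure:

```python
# Hedged sketch of flat-surface projection mapping via a homography.
import cv2
import numpy as np

# Four workspace reference points (mm) and where they must appear in projector
# pixels (illustrative values that would come from a one-off calibration).
workspace_pts = np.array([[0, 0], [400, 0], [400, 300], [0, 300]], dtype=np.float32)
projector_pts = np.array([[112, 95], [1180, 88], [1193, 912], [105, 920]],
                         dtype=np.float32)
H, _ = cv2.findHomography(workspace_pts, projector_pts)

overlay = np.zeros((300, 400, 3), np.uint8)       # content drawn in workspace coords
cv2.circle(overlay, (200, 150), 20, (0, 255, 0), -1)  # e.g., a guidance marker
frame = cv2.warpPerspective(overlay, H, (1280, 1024))  # image sent to the projector
```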
Article
Full-text available
Information visualization has been widely adopted to represent and visualize data patterns, as it offers users fast access to data facts and can highlight specific points beyond plain figures and words. As data comes from multiple sources, in all types of formats, and in unprecedented volumes, the need for more powerful and effective data-visualization tools intensifies. In the manufacturing industry, immersive technology can enhance the way users perceive and interact with data linked to the shop floor. However, prototypes of such technology have so far shown limited results. The low level of digitalization, the complexity of the required infrastructure, the lack of knowledge about Augmented Reality (AR), and the calibration processes required whenever the shop-floor configuration changes hinder the adoption of the technology. In this paper, we investigate the design of middleware that can automate the configuration of X-Reality (XR) systems and create tangible in situ visualizations and interactions with industrial assets. The main contribution of this paper is a middleware architecture that enables communication and interaction across different technologies without manual configuration or calibration. This has the potential to turn shop floors into seamless interaction spaces that empower users with pervasive forms of data sharing, analysis, and presentation that are not restricted to a specific hardware configuration. The novelty of our work lies in its autonomous approach to finding and communicating calibrations and data-format transformations between devices, which does not require user intervention. Our prototype middleware has been validated with a test case in a controlled digital-physical scenario composed of a robot and industrial equipment.
Article
Full-text available
Visual computing technologies play an important role in manufacturing and production, particularly in new Industry 4.0 scenarios with intelligent machines, human-robot collaboration, and learning factories. In this article, we explore challenges and examples of how the fusion of graphics, vision, and media technologies can enhance the role of operators in this new context.
Article
Full-text available
Squared planar markers have become a popular method for pose estimation in applications such as autonomous robots, unmanned vehicles, and virtual trainers. The markers allow estimating the position of a monocular camera with minimal cost, high robustness, and speed. One only needs to create markers with a regular printer, place them in the desired environment so as to cover the working area, and then register their locations from a set of images. Nevertheless, marker detection is a time-consuming process, especially as the image dimensions grow. Modern cameras are able to acquire high-resolution images, but fiducial marker systems have not been adapted in terms of computing speed. This paper proposes a multi-scale strategy for speeding up marker detection in video sequences by wisely selecting the most appropriate scale for detection, identification, and corner estimation. The experiments conducted show that the proposed approach outperforms state-of-the-art methods without sacrificing accuracy or robustness. Our method is up to 40 times faster than the state-of-the-art method, achieving over 1000 fps on 4K images without any parallelization.
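The essence of the multi-scale strategy can be sketched as: detect markers on a downscaled image, then map the corners back and refine them at full resolution. The following is a simplified illustration of that idea using OpenCV's ArUco module (detector API as in OpenCV >= 4.7); the frame path, dictionary, and scale factor are assumptions:

```python
# Simplified multi-scale marker detection: coarse detect, fine corner refinement.
import cv2
import numpy as np

img = cv2.imread("4k_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical 4K frame
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

scale = 0.25                                  # detect at quarter resolution
small = cv2.resize(img, None, fx=scale, fy=scale)
corners, ids, _ = detector.detectMarkers(small)

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
refined = []
for c in corners:
    pts = (c.reshape(-1, 1, 2) / scale).astype(np.float32)  # back to full res
    refined.append(cv2.cornerSubPix(img, pts, (5, 5), (-1, -1), criteria))
```

The paper's contribution is more elaborate (it selects the best scale per stage), but this captures why the approach is fast: the expensive search runs on few pixels, while accuracy is recovered by subpixel refinement on the full image.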
Article
Full-text available
After scanning or reconstructing the geometry of objects, we need to inspect the result of our work. Are there any parts missing? Is every detail covered in the desired quality? We typically do this by looking at the resulting point clouds or meshes of our objects on-screen. What if we could see the information visualized directly on the object itself? Augmented reality is the generic term for bringing virtual information into our real environment. In our paper, we show how we can project any 3D information, such as thematic visualizations or specific monitoring information referenced to our object, onto the object's surface itself, thus augmenting it with additional information. For small objects that can, for instance, be scanned in a laboratory, we propose a low-cost method involving a projector-camera system to solve this task. The user only needs a calibration board with coded fiducial markers to calibrate the system and later estimate the projector's pose for projecting textures with information onto the object's surface. Changes to the projected 3D information or to the projector's pose are applied in real time. Our results clearly show that such a simple setup delivers good-quality augmented information.
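Pose estimation from a board of coded fiducial markers boils down to a perspective-n-point problem: known 3D corner positions on the board matched to their 2D detections. A minimal sketch with cv2.solvePnP follows, assuming already-known camera intrinsics; the marker size, point coordinates, and intrinsics are illustrative values, not the paper's calibration data:

```python
# Minimal PnP sketch: board pose from fiducial-marker corner correspondences.
import cv2
import numpy as np

# Four corners of a single 40 mm marker on the board plane (z = 0), in metres.
object_pts = np.array([[0, 0, 0], [0.04, 0, 0],
                       [0.04, 0.04, 0], [0, 0.04, 0]], dtype=np.float32)
# Matching 2D detections in the camera image (illustrative pixel coordinates).
image_pts = np.array([[320, 240], [420, 242],
                      [418, 338], [322, 336]], dtype=np.float32)
# Assumed pinhole intrinsics of the calibrated camera (fx, fy, cx, cy).
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)  # board-to-camera rotation; tvec is the translation
```

In a projector-camera setup such as the one described, the same machinery, applied with the projector modelled as an inverse camera, yields the projector pose needed to warp textures onto the object's surface.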