
Online Synchronization of Building Model for On-Site Mobile Robotic Construction



37th International Symposium on Automation and Robotics in Construction (ISARC 2020)
Online Synchronization of Building Model
for On-Site Mobile Robotic Construction
S. Ercan Jenny (a), H. Blum (b), A. Gawel (b), R. Siegwart (b), F. Gramazio (a) and M. Kohler (a)
(a) Department of Architecture, ETH Zurich, Switzerland
(b) Department of Mechanical and Process Engineering, ETH Zurich, Switzerland
Abstract -
This research presents a novel method for a data flow that
synchronizes building information with the robot map and
updates building components to their "as-built" states, in or-
der to facilitate an on-site mobile construction process. Our
experiments showcase mobile mapping and localization of a
robotic platform featuring segmentation, plane association
and quantitative evaluation of deviations. For the users of
the on-site mobile robotic system, we present a suitable inter-
face that allows for task level commanding and the selection
of target and reference building components (i.e. walls, floor,
ceiling). Additionally, this interface integrates the online workflow between building construction and the robot map, updating the target building components to their "as-built" states in real time and providing a visual representation of task-specific attributes, in addition to geometries, for building components in the robot map. This is presented as a first step toward integrating users of the system into the proposed robotic workflow to develop decision-making strategies for fitting building tasks to local references.
Keywords -
On-Site Mobile Construction; Localization; Construction
Robotics; Building Model; As-Built; Deviation Analysis;
Robotic Construction Workflow with User Interaction
1 Introduction
Robotic technologies are widely applied in the off-site prefabrication of building components, where high-tech assembly lines can reach their full potential: robots and work-pieces are in fixed locations with constant conditions, and building components can be mass-produced without the need to dynamically adapt the process. However, the majority of building tasks are executed
directly on construction sites. In contrast to off-site pre-
fabrication, on-site building construction often deals with
dynamically changing conditions of large scale building
components in spatially complex and cluttered environ-
ments. Furthermore, on-site work inherently generates
deviations from the "as-planned", which is the state of the
design as it should be. This requires craftsmen to register
the differences and adapt the building tasks according to
the "as-built" condition, which is the state of the construc-
tion as it is [1].
Figure 1. Robot trajectory representing the building task, generated on selected target geometry.

To facilitate an on-site construction cycle in an unbroken digital chain, a mobile construction robot must be
able to understand the context within which it is work-
ing: it must localize itself via a robot map, both globally
and locally, in reference to the already built components
and the task being executed. In addition, it must be able
to detect and understand any divergences between “as-
planned”, and “as-built” conditions. To achieve this in
an on-site robotic construction process, a key challenge
is linking the building information to the mobile robot’s
internal representation of the world (referred to as a robot
map) and perception of its surroundings, using on-board
sensing. This information is often represented as point
clouds where the underlying geometric relationships are
unknown. Such linkages between the building information and the mobile robot make it possible to cope with divergences and the associated inaccuracies of the building materials and components, and to apply decision-making strategies while executing building tasks. To facilitate this link, there
must be a flow of data between the complex building in-
formation and the construction robot.
In this paper we present:
- An online digital workflow between the building model and robot perception
- A suitable interface that allows the users of the on-site mobile robotic system to fit building tasks to local references on-site
- Real-time adaptation of the building components (i.e. walls) with respect to the measured "as-built" state
2 State of the Art
Recent developments indicate an increased use of mobile robots on jobsites to monitor, track progress, or register differences. Still, the problem of how to manage
the flow of data between mobile robots that build and
the complex building information is an emerging topic.
Early automation attempts in the 1990s sought to replace
manual processes with robotic building technologies in
the construction sector, and resulted in early-stage mobile
construction robots such as [2, 3, 4, 5], all of which lacked
the hardware ability to interface to the complex building
information. An early attempt at a mobile construction robot with a limited interface, using integrated sensing to localize itself with respect to building components while executing building tasks on-site, is exemplified in [6]; it did not, however, enable real-time adaptation of building components.
A large amount of research is dedicated to auto-
mated modelling of "as-built" states of building compo-
nents [7, 8], and localizing camera and laser sensors in
a Building Information Model (BIM) as well as tracking
construction progress [9, 10, 11, 12, 13, 8]. While these
works show an integration of BIM with static and mobile
sensing, the assumption is that the "as-built" status reflects
the "as-planned" status and the problem of progress track-
ing is then defined as detecting the presence or absence of
discrete building components. In this paper, we address
the challenge of having an online data flow for an on-site
mobile construction process by detecting and communi-
cating metric deviations, such as walls being placed sev-
eral cm differently than "as-planned". Detection of such
metric deviations enables the adaptation of building models beyond the mere presence or absence of building components.
Current research in the field, such as [14, 15], has put
forward novel approaches for software interfaces that facilitate the adaptation of "as-planned" building information in an on-site mobile construction process. In the case
of [14], for the task of automated brick-laying, the planned
geometries of two pillars are fitted to the robot map, and
their as-built location is extracted to generate the robot
tasks for fabricating a brick wall between them in a mobile
robotic construction process. In the case of [15], a sim-
ilar approach is implemented locally. By tracking visual
features of the geometry that is being built, each discrete
steel member is registered with stereo vision and locally
corrected, all facilitated by the interface between the robot
perception and the building information during the exe-
cution of the building task. These works demonstrated
the feasibility and necessity of deviation measurements to
adapt and execute construction tasks online, but they were
limited to specific pre-programmed geometries and tasks.
No previous work, however, has demonstrated an online data flow for an integrated robotic construction process with flexible work area selection directly on job sites, localizing the construction robot both globally and locally in reference to the already built components (local references such as walls, floor, ceiling). With this research, we propose the online deployment of the building model into the robotic construction process, aiming to introduce a real-time method to plan a robotic workflow by fitting building tasks to local, selected references on-site, via a suitable interface (Figure 2).

Figure 2. Overall workflow.
3 Method
Towards a generalized workflow, our method considers
an abstraction where a building task is executed (i.e. as a
robot trajectory) on a selected target geometry. To execute
such a task, the robot requires poses of the trajectory in a
local coordinate frame, which we anchor at given reference
geometries. In this section, we describe a method that can execute such a task even though the location and orientation of the target mesh faces (belonging to a target geometry), with respect to the reference mesh faces (belonging to reference geometries), may diverge from the "as-planned" state.
Figure 3. The procedure for the proposed method: (1) convert "as-planned" 2D/3D building information to the COMPAS mesh data structure; (2) select target geometry and references using the building model as user interface; (3) add user input to the COMPAS mesh; (4) serialize and publish the mesh including the user input; (5) localize the robot and run the selective ICP algorithm; (6) publish the segmented point cloud and the transformation data for the target geometry (for the selected faces of the wall); (7) apply the transformation to the target geometry and update the building model locally (update wall); (8) generate the robot trajectory on the updated geometry (wall). A "Deviations found?" check controls whether the update is executed.

The procedure for the proposed method consists of the following steps (Figure 3): Firstly, the mesh data structure representing the building components is generated
from the “as-planned” 2D/3D building information, as
described in subsection 3.1. Secondly, the user selects
the target geometry and the references by clicking on the
control points for bounding the areas of interest, and the
program executes necessary steps to find the target and ref-
erence mesh faces that contain all given points as vertices.
This is then included in the mesh data structure via label-
ing the target mesh faces as "selected" and the reference
mesh faces as "reference", as described in subsection 3.2.
Following this, the mesh data structure containing the user
input is serialized and published as a message in the ROS
environment running the robot controller. Next, the robot
is localized and the ICP algorithm is executed on-site, as
described in subsection 3.3, using local references (coined
selective ICP) [16]. The calculated transformation is pub-
lished back and imported into the building model. As
described in subsection 3.4, the transformation is applied
to the target geometry and the building model is updated
locally in the design environment. Finally, the robot tra-
jectory representing the building task is generated on the
"as-built" target wall. These steps are also shown in an accompanying video.
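The eight steps can be condensed into a minimal, runnable sketch. All names and data shapes here are illustrative placeholders, not the authors' API; the "mesh" is reduced to wall centroids, and the selective ICP step to a 2D centroid offset (the z translation is not estimated, as noted in subsection 3.4):

```python
# Minimal, runnable sketch of the eight-step loop in Figure 3.
# Names and data shapes are illustrative placeholders, not the authors' API.

def to_mesh(walls):                              # step 1: "as-planned" info -> mesh
    return {n: {"centroid": list(c), "role": None} for n, c in walls.items()}

def label(mesh, target, references):             # steps 2-3: user selection
    mesh[target]["role"] = "selected"
    for r in references:
        mesh[r]["role"] = "reference"

def selective_icp(mesh, scan_xy):                # steps 5-6: measured deviation
    planned = next(f for f in mesh.values() if f["role"] == "selected")
    return [m - p for m, p in zip(scan_xy, planned["centroid"][:2])]

def update(mesh, target, t_xy):                  # step 7: adapt the model locally
    mesh[target]["centroid"][0] += t_xy[0]       # z is not estimated (subsec. 3.4)
    mesh[target]["centroid"][1] += t_xy[1]

mesh = to_mesh({"wall_1": (0.0, 0.0, 1.5), "wall_2": (4.0, 0.0, 1.5)})
label(mesh, "wall_1", ["wall_2"])
t = selective_icp(mesh, scan_xy=[0.03, 0.02])    # "as-built" centroid from LiDAR
update(mesh, "wall_1", t)
print(mesh["wall_1"]["centroid"])                # -> [0.03, 0.02, 1.5]
# step 8: the robot trajectory would now be generated on this updated geometry
```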
3.1 Building Model Data Structure
A schematic overview of the data representation communicated to the robot is shown in Figure 4.

Figure 4. Data representation communicated between the building model and the robot.

This data representation is initially generated from the "as-planned" 2D/3D building information, using the open-source, Python-based computational framework COMPAS, and visualised in the 3D modeling software Rhino (Figure 2). It contains the mesh data structure of the COMPAS
framework that allows for storing and adapting geometry
and topology, aiming to represent the continuously chang-
ing building information and the robot trajectory (repre-
senting the building task) in relation to target geometries,
i.e. the selected faces of the target wall. In this way, robot trajectory planning and generation from the design environment is aligned with the "as-built" state and the robot map (Figure 1). The communication between the robot controller, running in a ROS environment, and the Python-based design environment is established using the ROS Bridge library roslibpy. Additionally, the kinematic model of the mobile robot is visualized in the design environment using the COMPAS FAB package of the COMPAS framework.
This allows users to visualize the current robot state and task status in relation to the building model in the later stages of the process as well (such as feedback-based plaster spraying, which comes after Step 8 of the procedure shown in Figure 3), which is not included within the scope of this paper.
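The Figure 4 layout (vertices with positions, triangle faces) plus the user labels of subsection 3.2 can be mimicked with the standard library alone; the dict schema below is illustrative and differs in detail from COMPAS's own mesh serialization:

```python
import json

# Illustrative sketch of the Figure 4 representation: vertices with positions
# and triangle faces, plus the user-selected face labels (step 3), serialized
# as a JSON string such as the ROS bridge would carry (step 4).
mesh = {
    "vertices": {
        "A": {"pos": [0.0, 0.0, 0.0]},
        "B": {"pos": [4.0, 0.0, 0.0]},
        "C": {"pos": [4.0, 0.0, 2.8]},
    },
    "triangles": [["A", "B", "C"]],
    "face_labels": {"0": "selected"},   # user input added to the mesh
}

msg = json.dumps(mesh)                  # serialize for publishing
assert json.loads(msg) == mesh          # the round trip is lossless
```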
3.2 Building Model as User Interface
The building model described in subsection 3.1 - with the necessary abstraction level - is used directly on-site for task level
commanding and for the selection of the target geometries
and the necessary references, which are the relevant faces
of the neighboring meshes, constraining the work area for
the building task in the x, y and z axes. Firstly, the user
selects the target geometry by clicking on the control points
(located on the corners of the geometries) for bounding
the area of interest (Figure 1), and the program executes
necessary steps to find the target mesh faces that contain
all given points as vertices. The same steps are repeated
for the selection of the references. Next, the selected vertices are labeled in the mesh data structure, which is then serialized and communicated to the robot.
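The face-lookup step described above can be sketched as follows; the data layout (faces as tuples of vertex keys) is a simplification, not the COMPAS API:

```python
# Sketch of the selection step: return the indices of the mesh faces whose
# vertex sets contain all clicked control points. Data layout is illustrative.
def faces_containing(faces, clicked):
    clicked = set(clicked)
    return [i for i, verts in enumerate(faces) if clicked <= set(verts)]

faces = [("A", "B", "C"), ("B", "C", "D"), ("C", "D", "E")]
print(faces_containing(faces, ["C", "D"]))  # -> [1, 2]
```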
3.3 Robot Localization
The robot localizes against the building model using measurements from the 3D LiDAR described in subsection 4.1. To initialize, the user provides a coarse alignment during robot start-up. The robot subsequently
aligns the LiDAR scan with a sampled point cloud from the
mesh as described in [17], using the Iterative Closest Point
(ICP) algorithm [18]. To increase accuracy and overcome
ambiguities, the alignment is constrained to a few refer-
ence mesh faces [16]. Here, these reference mesh faces are
selected through the interface described in subsection 3.2
and communicated to the robot as described in subsection
3.1. Consequently, all LiDAR scans are aligned to the se-
lected reference frame when measurements on the target
mesh faces are extracted.
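The core alignment step of ICP [18] can be illustrated in a few lines: with correspondences fixed, the optimal rigid transform follows from the Kabsch/SVD method. A selective variant, as in [16], would first restrict the scan to points on the chosen reference faces; that filtering is omitted in this sketch:

```python
import numpy as np

# One alignment step in the spirit of ICP [18]: with point correspondences
# fixed, the rigid transform minimizing the squared error is found in closed
# form via the Kabsch/SVD method. Filtering to reference faces is omitted.
def best_fit_transform(P, Q):
    """Rigid transform (R, t) minimizing ||R @ P + t - Q|| for 3xN arrays."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((Q - cq) @ (P - cp).T)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # guard against reflections
    R = U @ D @ Vt
    return R, cq - R @ cp

# Recover a known rotation about z and an x/y offset from synthetic points.
P = np.random.default_rng(0).random((3, 50))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([[0.1], [0.2], [0.0]])
Q = R_true @ P + t_true
R, t = best_fit_transform(P, Q)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```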
3.4 Adapting the Building Model to As-Built
When the robot is localized, deviations between "as-
planned" and "as-built" can be measured on the target
geometry. Candidate planes from the LiDAR scan are
extracted in order to find the plane associated to the target
geometry. Given measured distances $d_{it}$ between points $i$ on the candidate plane $c$ and the target plane $t$, we associate planes by

$$\arg\min_{c}\ \lambda_1\,\overline{d_{it}} + \lambda_2\,\sigma(d_{it}) + (1 - \lambda_1 - \lambda_2)\,\mathrm{rot}(c, t)$$

where $\lambda_1$, $\lambda_2$ are tuning weights, $\overline{d_{it}}$ and $\sigma(d_{it})$ denote the mean and spread of the measured distances, and $\mathrm{rot}(c, t)$ is the rotation between the face normals.

Figure 5. Mobile robotic platform used in our experiments, here with an additional arm mounted.

The geometric transformation between "as-planned" and "as-built" can then be
found as the translation between the plane centroids and the rotation rot(c, t) between the face normals of the target
geometry. As the current sensor setup does not allow the
measuring of points over the whole height of the building
components (i.e. walls), the z coordinate of the centroid
cannot be estimated and therefore the z translation is not
considered in the experiments. Upon detection of the geo-
metric transformation, it is applied to the target geometry
and the building model is updated locally.
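A sketch of this association rule follows, assuming the two distance terms aggregate the measured distances as mean and spread; the aggregation choices and weight values are assumptions for illustration:

```python
import math

# Hypothetical sketch of plane association: score each candidate plane by the
# weighted mean point-to-target distance, its spread, and the angle between
# face normals; the candidate with the minimum score is associated.
def angle(n1, n2):
    """Angle between two unit normals, clamped for numerical safety."""
    dot = sum(a * b for a, b in zip(n1, n2))
    return math.acos(max(-1.0, min(1.0, dot)))

def score(dists, normal, target_normal, l1=0.5, l2=0.3):
    mean = sum(dists) / len(dists)
    spread = (sum((d - mean) ** 2 for d in dists) / len(dists)) ** 0.5
    return l1 * mean + l2 * spread + (1 - l1 - l2) * angle(normal, target_normal)

target_n = (0.0, 1.0, 0.0)
candidates = [
    {"dists": [0.03, 0.02, 0.04], "normal": (0.0, 1.0, 0.0)},  # near, parallel
    {"dists": [0.50, 0.55, 0.48], "normal": (1.0, 0.0, 0.0)},  # far, orthogonal
]
best = min(range(len(candidates)),
           key=lambda i: score(candidates[i]["dists"],
                               candidates[i]["normal"], target_n))
print(best)  # -> 0
```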
4 Experiments
Experiments are presented in three subsections. The first subsection describes the robotic platform; the second showcases the online selection of references and the work area; and the third focuses on deviation detection and the update of the target geometries locally in the building model. All experiments are conducted
at a parking garage, which is set up as a mock construction
scene at ETH Zurich.
4.1 Robotic Platform
The experiments are conducted on our open-source robotic platform that consists of a wheeled base with LiDAR and IMU sensors, as shown in Figure 5. The platform's capabilities of high-accuracy manipulation [17] and localization in cluttered environments [16] were validated in earlier works. Since no manipulation for the execution of the building task is tested within the scope of this paper, the robotic arm was not mounted on the mobile platform in our experiments.

Figure 6. Top row: Online selection of references and work area (first work area: 0s-4s; second work area: 8s-12s). Bottom row: Robot map and perception.
The LiDAR sensor used has an accuracy of 2 cm, 16
beams, and an opening angle of 30°, leading to height
measurements on target walls of approximately 1 m. The
scope of this work is therefore not a precise deviation analysis or sensing benchmark, which is beyond the capabilities of the sensors used, but the demonstration of an integrated
online workflow. The demonstrated workflow is not spe-
cific to the sensors used here, and can in fact be applied to
any robotic platform that has the capability to localize in a
robot map and to measure spatial information on selected geometries.
4.2 Online Selection of References and Work Area
In the first set of experiments, steps 1-4 of the procedure
shown in Figure 3 are tested in order to showcase the initial
steps of the online data flow proposed for synchronizing
building information and the robot map. Initially, for the
first work area shown in Figure 6, relevant references X,
Y, and Z are selected, where X refers to the reference
constraining the work area in the x-axis, and Y and Z,
refer to the ones constraining the work area in the y and
z axes. Without the need to reload a new building model
to the robot off-site, the references are shifted on-site (as
described in subsection 3.2) to the ones shown in Figure 6,
at 8s. This successfully demonstrates a flexible, online
method for work area selection to execute a building task
(i.e. plaster spraying), for which fitting to relevant and
local references on-site is crucial.
4.3 Deviation Detection and Update
In the second set of experiments, steps 1-8 of the pro-
cedure shown in Figure 3 are tested in order to showcase
the update of building components to their as-built states
for tolerance handling to facilitate an on-site construction
process. For a selected set of target geometries shown in
Figure 7, Walls 1-3, the same set of references X, Y, and
Z (shown in Figure 7) are selected as described in subsec-
tion 3.2, constraining the work areas in x, y, and z axes to
execute the selective ICP algorithm on-site. Robot trajec-
tories are then generated on updated target geometries.
Table 1. Geometric transformations calculated for
the target geometries, resulting from deviation de-
tection: Translation of the mesh face centroid and
rotation of the mesh face normal
Target    X Translation    Y Translation    Rotation
Wall 1    30 ± 11 mm       23 ± 34 mm       1.1° ± 1.4°
Wall 2    13 ± 1 mm        109 ± 1 mm       2.0° ± 1.4°
Wall 3    5 ± 50 mm        36 ± 42 mm       0.9° ± 0.1°
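As an illustration, the Wall 2 values from Table 1 can be applied to a planned wall footprint: an x/y translation of the centroid plus a rotation about the vertical axis, with the z translation left at zero as noted in subsection 3.4. The footprint coordinates are invented for the example:

```python
import math

# Illustrative update of a wall footprint with the Table 1 values for Wall 2:
# translate the centroid in x/y and rotate the footprint about the vertical
# axis; z translation is not estimated by the sensor setup and is left at 0.
def update_wall(corners, tx, ty, rot_deg):
    cx = sum(x for x, y in corners) / len(corners)
    cy = sum(y for x, y in corners) / len(corners)
    c, s = math.cos(math.radians(rot_deg)), math.sin(math.radians(rot_deg))
    return [(cx + c * (x - cx) - s * (y - cy) + tx,
             cy + s * (x - cx) + c * (y - cy) + ty) for x, y in corners]

planned = [(0.0, 0.0), (4.0, 0.0)]  # hypothetical footprint endpoints, metres
as_built = update_wall(planned, 0.013, 0.109, 2.0)
print([(round(x, 3), round(y, 3)) for x, y in as_built])
# -> [(0.014, 0.039), (4.012, 0.179)]
```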
5 Results
In all conducted experiments, the robot correctly associ-
ated its measurements to the selected target geometry and
was able to provide associated sensor readings in the form
of a point cloud, in addition to a parametric analysis of
the deviation. This is important, as it allowed for updating
the target geometry and for adapting the robot trajectory,
representing the building task. The geometric transforma-
tions calculated for each target geometry are presented in
Table 1.
Within the scope of this paper, the experiments focused on demonstrating the online workflow of the robotic construction system.

Figure 7. Deviation detection on selected target geometries, Walls 1-3, and generation of robot trajectories on updated target geometries.

In order to assess achievable degrees of
precision, tests will be performed with higher-quality sen-
sors on the robotic platform, and results will be compared
to a ground truth site survey derived from measurements
with a tripod system.
6 Conclusion and Outlook
In this paper, we present the first implementation steps
of a novel method for an online data flow to synchro-
nize building information with the robot map for facilitat-
ing an on-site construction process in an unbroken digi-
tal chain. This is established by linking the generation
of robot trajectories, representing the building tasks, to
the “as-built” states of selected building components di-
rectly on-site. Within the experiments presented in this
paper, this is achieved by updating the target geometries
in the building model, via the selective ICP algorithm
executed directly on-site. Online deployment of the build-
ing model as an interface for including human actors into
the robotic construction process is also tested, introduc-
ing a flexible method to plan a robotic workflow by fitting
building tasks to local references on-site. However, these experiments do not extend to the actual execution of the tasks, e.g. feedback-based plaster spraying, grinding, or chiseling. For these steps, in-process robot trajectory adaptation will be established with continuous process control, using visual feedback to acquire the current state of the target geometries, i.e. the surfaces being sprayed, chiseled, or ground. The overarching goal of this
research is the development of a digital toolbox, so that
the compatibility of the proposed workflow can be tested
on different mobile robotic platforms deployed for on-site construction.

Within the scope of the experiments presented in this
paper, the proposed method facilitated the digitization of crucial task information, e.g. the selection of relevant references for building tasks, to be imported back into the building model and robot map. In further development,
we will explore methods of dispatching task-specific in-
structions via different types of interfaces and experiment
with on-site robotic workflows based on human collabo-
ration and aim to further leverage the strengths of both
humans and robots, enhancing the capabilities of digital
construction processes.
Acknowledgements

We would like to thank Julian Stiefel for his contributions on mobile robotic deviation analysis. This work was partially supported by the Swiss National Science Foundation (SNF), within the National Centre of Competence in Research Digital Fabrication (NCCR DFAB), and by the HILTI group.

References
[1] Pingbo Tang, Daniel Huber, Burcu Akinci, Robert
Lipman, and Alan Lytle. Automatic reconstruc-
tion of as-built building information models from
laser-scanned point clouds: A review of related tech-
niques. Automation in Construction, 19(7):829–843, 2010.
[2] Jurgen Andres, Thomas Bock, Friedrich Gebhart,
and Werner Steck. First results of the development
of the masonry robot system rocco: A fault toler-
ant assembly tool. In Denis A. Chamberlain, editor,
Automation and Robotics in Construction XI: Pro-
ceedings of the Eleventh International Symposium
on Automation and Robotics in Construction (IS-
ARC), pages 87–93, Brighton, UK, May 1994. Inter-
national Association for Automation and Robotics
in Construction (IAARC). ISBN 9780444820440.
[3] D. Apostolopoulos, H. Schempf, and J. West. Mo-
bile robot for automatic installation of floor tiles. In
Proceedings of IEEE International Conference on
Robotics and Automation, volume 4, pages 3652–
3657 vol.4, 1996.
[4] Ronie Navon. Process and quality control with a video camera, for a floor-tilling robot. Automation in Construction, 10:113–125, 2000.
[5] G. Pritschow, M. Dalacker, J. Kurz, and M. Gaenssle.
Technological aspects in the development of a
mobile bricklaying robot. In Eugeniusz Budny
and Anna McCrea, editors, Proceedings of the
12th International Symposium on Automation and
Robotics in Construction (ISARC), pages 281–
290, Warsaw, Poland, June 1995. International
Association for Automation and Robotics in
Construction (IAARC). ISBN 9788386040025.
[6] V. Helm, S. Ercan, F. Gramazio, and M. Kohler. Mo-
bile robotic fabrication on construction sites: Dim-
rob. In 2012 IEEE/RSJ International Conference on
Intelligent Robots and Systems, pages 4335–4341, 2012.
[7] Tomás Werner and Andrew Zisserman. New tech-
niques for automated architectural reconstruction
from photographs. In European conference on com-
puter vision, pages 541–555. Springer, 2002.
[8] Khashayar Asadi, Hariharan Ramshankar, Mojtaba
Noghabaei, and Kevin Han. Real-time image lo-
calization and registration with bim using perspec-
tive alignment for indoor monitoring of construction.
Journal of Computing in Civil Engineering, 33(5):
04019031, 2019.
[9] Frédéric Bosché. Automated recognition of 3d cad
model objects in laser scans and calculation of as-
built dimensions for dimensional compliance control
in construction. Advanced engineering informatics,
24(1):107–118, 2010.
[10] Viorica Pătrăucean, Iro Armeni, Mohammad Na-
hangi, Jamie Yeung, Ioannis Brilakis, and Carl Haas.
State of research in automatic as-built modelling.
Advanced Engineering Informatics, 29(2):162–171, 2015.
[11] Christopher Kropp, Christian Koch, and Markus
König. Interior construction state recognition with
4d bim registered image sequences. Automation in
construction, 86:11–32, 2018.
[12] YM Ibrahim, Tim C Lukins, X Zhang, Emanuele
Trucco, and AP Kaka. Towards automated progress
assessment of workpackage components in construc-
tion projects using computer vision. Advanced En-
gineering Informatics, 23(1):93–103, 2009.
[13] Kevin Han, Joseph Degol, and Mani Golparvar-
Fard. Geometry- and appearance-based reasoning
of construction progress monitoring. Journal of
Construction Engineering and Management, 144(2):
04017110, 2018.
[14] Kathrin Dörfler, Timothy Sandy, Markus Giftthaler,
Fabio Gramazio, Matthias Kohler, and Jonas Buchli.
Mobile robotic brickwork. In Robotic Fabrication in
Architecture, Art and Design 2016, pages 204–217.
Springer, 2016.
[15] Manuel Lussi, Timothy Sandy, Kathrin Doerfler,
Norman Hack, Fabio Gramazio, Matthias Kohler,
and Jonas Buchli. Accurate and adaptive in situ fabri-
cation of an undulated wall using an on-board visual
sensing system. In 2018 IEEE International Con-
ference on Robotics and Automation (ICRA), pages
1–8. IEEE, 2018.
[16] Hermann Blum, Julian Stiefel, Cesar Cadena, Roland
Siegwart, and Abel Gawel. Precise robot localization
in architectural 3D plans. June 2020.
[17] Abel Gawel, Hermann Blum, Johannes Pankert,
Koen Krämer, Luca Bartolomei, Selen Ercan, Farbod
Farshidian, Margarita Chli, Fabio Gramazio, Roland
Siegwart, et al. A fully-integrated sensing and con-
trol system for high-accuracy mobile robotic building
construction. In IEEE/RSJ International Conference
on Intelligent Robots and Systems (IROS 2019), 2019.
[18] Paul J Besl and Neil D McKay. Method for regis-
tration of 3-d shapes. In Sensor fusion IV: control
paradigms and data structures, volume 1611, pages
586–606. International Society for Optics and Pho-
tonics, 1992.
... The fullymobile robotic platform that is being currently developed to be implemented in future research, should further expand the possibilities for this technology. A constant materialfeed combined with a fully mobile robotic platform -with synchronized arm and base movements [25] -will allow RPS to be applied as a continuous mobile 3D printing process on-site [8,26], easing the challenge of segmentation and to avoid the formation of cold joints. Still, transitioning to the existing building structure -i.e. from floor to wall, wall to ceiling, as well as controlling tolerances according to specific situations (such as corners or deviations in the building structure) will challenge the process applicability. ...
Full-text available
This paper describes the 1:1 scale application of Robotic Plaster Spraying ( RPS ), a novel, adaptive thin-layer printing technique, using cementitious base coat plaster, realized in a construction setting. In this technique, the print layers are vertical unlike most 3DCP processes. The goal is to explore the applicability and scalability of this spray-based printing technique. In this study, RPS is combined with an augmented interactive design setup, the Interactive Robotic Plastering ( IRoP ), which allows users to design directly on the construction site, taking the building structure, as-built state of the on-going fabrication and the material behavior into consideration. The experimental setup is an on-site robotic system that consists of a robotic arm mounted on a semi-mobile vertical axis with an integrated, automated pumping and adaptive spraying setup that is equipped with a depth camera. The user interaction is enabled by a controller-based interaction system, interactive design tools, and an augmented reality interface. The paper presents the challenges and the workflow that is needed to work with a complex material system on-site to produce bespoke plasterwork. The workflow includes an interactive design procedure, localization on-site, process control and a data collection method that enables predicting the behavior of complex-to-simulate cementitious material. The results demonstrate the applicability and scalability of the adaptive thin-layer printing technique and address the challenges, such as maintaining material continuity and working with unpredictable material behavior during the fabrication process.
... Other than the pointbased ICP method, meshes were also used for robot global localization without the need of an initial guess in [25]. Recently, researchers in [26] proposed a novel interface to connect building construction and map representation, which could also detect deviations between as-designed and as-built models via localization results. ...
Full-text available
Conventional sensor-based localization relies on high-precision maps. These maps are generally built using specialized mapping techniques, which involve high labor and computational costs. While in the architectural, engineering and construction industry, building information models (BIMs) are available and can provide informative descriptions of environments. This paper explores an effective way to localize a mobile 3D LiDAR sensor in BIM considering both geometric and semantic properties. Specifically, we first convert original BIM to semantic maps using categories and locations of BIM elements. After that, a coarse-to-fine semantic localization is performed to align laser points to the map via iterative closest point registration. The experimental results show that the semantic localization can track the pose with only scan matching and present centimeter-level errors over 340 meters traveling, thus demonstrating the feasibility of the proposed mapping-free localization framework. The results also show that using semantic information can help reduce localization errors in BIM.
... Similarly, A pose-graph with Lidar measurements that represented by a SLAM and the floor plan was proposed, and the robot localization system that assisted by architectural CAD drawings could achieve a sub-centimeter accuracy [46]. Moreover, a novel method for high-accuracy localization based on the state of the known building structure was presented [47], and the synchronization of building information with the robot's map accelerated to the process of mobile construction [48]. In our previous work, an adaptive robust localization method based on artificial landmarks and generated buildings was also studied [49]. ...
Full-text available
The effectiveness of mobile robot aided for architectural construction depends strongly on its accurate localization ability. Localization of mobile robot is increasingly important for the printing of buildings in the construction scene. Although many available studies on the localization have been conducted, only a few studies have addressed the more challenging problem of localization for mobile robot in large-scale ongoing and featureless scenes. To realize the accurate localization of mobile robot in designated stations, we build an artificial landmark map and propose a novel nonlinear optimization algorithm based on graphs to reduce the uncertainty of the whole map. Then, the performances of localization for mobile robot based on the original and optimized map are compared and evaluated. Finally, experimental results show that the average absolute localization errors that adopted the proposed algorithm is reduced by about 21% compared to that of the original map.
... Such references are part of the task definition and can, for example, be synchronised online with the robot [23]. ...
Quasi-static robotic systems and discrete fabrication strategies fall short of the capabilities needed for automating on-site plastering, which involves operating over large spans and maintaining material continuity. This paper presents continuous, mobile Robotic Plaster Spraying (RPS) – a thin-layer, spray-based printing-in-motion technique using cementitious plaster – realized on a prototypical construction site. The experimental setup consists of a fully mobile, custom wheeled base that is synchronized with a robotic arm, and an integrated pumping and spraying system. In this 1:1 scale application, the print layers are executed during the motion of the mobile robot and they are printed vertically on the walls of an existing building structure. The experiments showcase the potentials of producing bespoke – three-dimensional – or standardized – flat – plasterwork with the proposed technique. The results demonstrate the applicability and scalability of RPS and the findings contribute to the research on mobile additive fabrication.
Conventional sensor-based localization relies on high-precision maps, which are generally built using specialized mapping techniques involving high labor and computational costs. In the architectural, engineering and construction industry, Building Information Models (BIM) are available and can provide informative descriptions of environments. This paper explores an effective way to localize a mobile 3D LiDAR sensor on BIM-generated maps considering both geometric and semantic properties. First, original BIM elements are converted to semantically augmented point cloud maps using categories and locations. After that, a coarse-to-fine semantic localization is performed to align laser points to the map based on iterative closest point registration. The experimental results show that the semantic localization can track the pose successfully with only one LiDAR sensor, thus demonstrating the feasibility of the proposed mapping-free localization framework. The results also show that using semantic information can help reduce localization errors on BIM-generated maps.
Construction performance monitoring has been identified as a key component of construction project success. Real-time and frequent monitoring enables early detection of potential schedule delays and facilitates accurate, rapid communication of progress information. To support the comparison of as-built and as-planned data, this paper proposes automated registration of a video sequence (i.e., a series of image frames) to an as-planned building information model (BIM) in real time. The method recovers the camera poses of image frames in the BIM coordinate system by performing augmented monocular simultaneous localization and mapping (SLAM) together with perspective detection and matching between the image frames and their corresponding BIM views. The results demonstrate the effectiveness of real-time registration of images with BIMs. The presented method can potentially fully automate earlier approaches to progress inference, given visual representations of as-built models aligned with the BIM. Moreover, it facilitates communication on jobsites by associating quality and progress with visuals expressed in the BIM coordinate system.
Deviations from planned schedules in construction projects frequently lead to unexpected financial disadvantages. However, early assessment of delays or accelerations during the construction phase enables the adjustment of subsequent and dependent tasks. Performed manually, this requires many human resources if as-built information is not immediately available; this is particularly true of indoor environments, where a general overview of tasks is not given. In this paper, we present a novel method that increases the degree of automation for indoor progress monitoring. The method recognizes the actual state of construction activities from as-built video data, based on as-planned BIM data, using computer vision algorithms. Two main steps are involved. The first registers the images with the underlying 4D BIM model, i.e., it recovers the pose of each image in the sequence in the coordinate system of the building model; knowing each image's origin then allows advanced interpretation of its content in subsequent processing. In the second step, the relevant tasks of the expected state of the 4D BIM model are projected onto the image space, and the resulting image regions of interest are taken as input for determining the activity state. The method is extensively tested in the experiment section of this paper. Since each consecutive process builds on the output of preceding steps, each process of the introduced method is tested for its standalone characteristics. In addition, general applicability is evaluated by means of two exemplary tasks as a concluding proof of the success of the novel method. All experiments show promising results and point towards automatic indoor progress monitoring.
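The second step above, projecting expected BIM states onto the image space, reduces to a pinhole camera projection of 3D model points once an image's pose is known from the registration step. A minimal sketch, assuming an already-known camera pose (R, t) and intrinsics; all numeric values in the test usage are invented for illustration:

```python
def project_point(pt_world, R, t, fx, fy, cx, cy):
    """Project a 3D world point to pixels (u, v); None if behind the camera."""
    # camera-frame coordinates: p_cam = R * p_world + t
    xc = sum(R[0][k] * pt_world[k] for k in range(3)) + t[0]
    yc = sum(R[1][k] * pt_world[k] for k in range(3)) + t[1]
    zc = sum(R[2][k] * pt_world[k] for k in range(3)) + t[2]
    if zc <= 0:
        return None            # point lies behind the image plane
    # perspective division followed by intrinsic scaling
    return (fx * xc / zc + cx, fy * yc / zc + cy)
```

Projecting all corner points of a BIM element this way yields the image region of interest that the activity-state classifier then inspects.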
This paper describes the implementation of a discrete in situ construction process using a location-aware mobile robot. An undulating dry brick wall is semi-autonomously fabricated in a laboratory environment set up to mimic a construction site. On the basis of this experiment, the following generic functionalities of the mobile robot and its developed software for mobile in situ robotic construction are presented: (1) its localization capabilities using solely on-board sensor equipment and computing, (2) its capability to assemble building components accurately in space, including the ability to align the structure with existing components on site, and (3) the adaptability of computational models to dimensional tolerances as well as to process-related uncertainties during construction. As such, this research advances additive non-standard fabrication technology and fosters new forms of flexible, adaptable and robust building strategies for the final assembly of building components directly on construction sites. While this paper highlights the challenges of the current state of research and experimentation, it also provides an outlook to the implications for future robotic construction and the new possibilities the proposed approaches open up: the high-accuracy fabrication of large-scale building structures outside of structured factory settings, which could radically expand the application space of automated building construction in architecture.
Although adherence to project schedules and budgets is most highly valued by project owners, more than 53% of typical construction projects are behind schedule and more than 66% suffer from cost overruns, partly because of an inability to accurately capture construction progress. To address these challenges, this paper presents new geometry- and appearance-based reasoning methods for detecting construction progress, which has the potential to provide more frequent progress measures using visual data that are already being collected by general contractors. The initial step of geometry-based filtering detects the state of construction of building information modeling (BIM) elements (e.g., in-progress, completed). The next step of appearance-based reasoning captures operation-level activities by recognizing different material types. Two methods have been investigated for the latter step: a texture-based reasoning for image-based 3D point clouds and color-based reasoning for laser-scanned point clouds. This paper presents two case studies for each reasoning approach to validate the proposed methods. The results demonstrate the effectiveness and practical significances of the proposed methods.
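The geometry-based filtering step lends itself to a simple grid-coverage heuristic. The following sketch is not the paper's algorithm; the face-projection choice, grid size and threshold are all assumptions. It estimates how much of a planar wall element's face is covered by as-built scan points and maps that fraction to a coarse state:

```python
def wall_face_coverage(points, bbox_min, bbox_max, grid=8):
    """Fraction of the wall's face (x-z plane of its bbox) covered by points."""
    covered = set()
    for p in points:
        # keep only points inside the element's bounding box
        if all(bbox_min[i] <= p[i] <= bbox_max[i] for i in range(3)):
            u = min(grid - 1, int(grid * (p[0] - bbox_min[0]) / (bbox_max[0] - bbox_min[0])))
            v = min(grid - 1, int(grid * (p[2] - bbox_min[2]) / (bbox_max[2] - bbox_min[2])))
            covered.add((u, v))
    return len(covered) / float(grid * grid)

def element_state(coverage, thresh=0.9):
    """Map a coverage fraction to a coarse construction state."""
    return "completed" if coverage >= thresh else "in-progress"
```

A fully scanned wall yields coverage 1.0 and is labelled "completed"; a half-built wall covers only the lower grid cells and stays "in-progress". The subsequent appearance-based step would then distinguish, e.g., bare concrete from finished plaster within the covered cells.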
Building Information Models (BIMs) are becoming the official standard in the construction industry for encoding, reusing, and exchanging information about structural assets. Automatically generating such representations for existing assets stirs up the interest of various industrial, academic, and governmental parties, as it is expected to have a high economic impact. The purpose of this paper is to provide a general overview of the as-built modelling process, with focus on the geometric modelling side. Relevant works from the Computer Vision, Geometry Processing, and Civil Engineering communities are presented and compared in terms of their potential to lead to automatic as-built modelling.