Guided Next Best View for 3D Reconstruction of
Large Complex Structures
Randa Almadhoun 1,*, Abdullah Abduldayem 1, Tarek Taha 2, Lakmal Seneviratne 1 and Yahya Zweiri 1,3
1 KU Center for Autonomous Robotic Systems (KUCARS), KU Main Campus, Khalifa University of Science and Technology, Abu Dhabi 127788, UAE
2 Algorythma, Autonomous Aerial Lab, Abu Dhabi 112230, UAE
3 Faculty of Science, Engineering and Computing, Kingston University London, London SW15 3DW, UK
Received: 11 September 2019; Accepted: 15 October 2019; Published: 21 October 2019
Abstract: In this paper, a Next Best View (NBV) approach with a profiling stage and a novel utility
function for 3D reconstruction using an Unmanned Aerial Vehicle (UAV) is proposed. The proposed
approach performs an initial scan in order to build a rough model of the structure that is later
used to improve coverage completeness and reduce flight time. Then, a more thorough NBV
process is initiated, utilizing the rough model in order to create a dense 3D reconstruction of
the structure of interest. The proposed approach exploits the reflectional symmetry feature if it
exists in the initial scan of the structure. The proposed NBV approach is implemented with a
novel utility function, which consists of four main components: information theory, model density,
traveled distance, and predictive measures based on symmetries in the structure. This system
outperforms classic information gain approaches with a higher density, entropy reduction and
coverage completeness. Simulated and real experiments were conducted and the results show the
effectiveness and applicability of the proposed approach.
Keywords: autonomous exploration; coverage planning; aerial robots; UAV; view planning;
navigation; 3D reconstruction
1. Introduction
The ability to reconstruct 3D models of large structures has been utilized in various applications
such as infrastructure inspection [ ] and search and rescue [ ]. Reconstruction tasks are typically performed using conventional methods, either by following an autonomous procedure, such as moving a sensor around the structure of interest at fixed waypoints, or a manual procedure. Performing this process manually is time-consuming and provides no guarantees of coverage completeness or reproducibility. In addition, a basic autonomous process may not work on large complex objects due to the presence of occlusions and hard-to-view surfaces.
The increasing autonomy of robots, as well as their ability to perform actions with a high degree
of accuracy, makes them suitable for autonomous 3D reconstruction in order to generate high quality
models. If an existing model is provided, then an exploration path could be computed offline, which
is then executed by the robot during the reconstruction task [ ]. However, these reference models
are usually not available, not provided, or inaccurate. Therefore, a common approach is to instruct
the robot to navigate around the object and iteratively construct a 3D model of it without a priori
knowledge. To navigate iteratively in an informed manner, the Next Best View (NBV) is determined
Remote Sens. 2019,11, 2440; doi:10.3390/rs11202440
based on the data gathered, and a utility function that evaluates the gain from different candidate locations (usually referred to as viewpoints) [10–12].
Several methods have been proposed in the literature to tackle the problem of non-model-based exploration. Some of this work models the exploration problem as a partial differential equation (PDE) whose result is a vector field, where the next best position is selected along the path of steepest descent [ ]. However, solving these PDEs may not account for the motion constraints of the robot and is computationally expensive [ ]. An alternative approach is the genetic algorithm, which computes many random paths that are crossed at random points [15]. The genetic algorithm then selects the best path by evaluating the random paths against the specified task. Moreover, frontier methods perform well in constrained areas and indoors [ ]. In these environments, the frontier may lie at the entrance of a room or the end of a corridor. In outdoor environments, however, a large unexplored space exists, represented by the sky and large empty areas with no obstacles. This can generate a huge frontier that can potentially reach the horizon. The work in [ ] shows that the frontier method spends considerable time evaluating the many candidate viewpoints in outdoor environments, while Oshima et al. [ ] mitigated this by finding the centroids of randomly scattered points near the frontiers using clustering, and then evaluating them as candidate viewpoints. A similar approach is taken in [ ], which generates viewpoints randomly, prioritizes them, and then selects viewpoints based on visibility and distance. In addition, the work in [ ] selects the frontiers that minimize velocity changes.
Additional work that focuses on the surface and accuracy of the reconstructed model is presented in [ ]. The work in [ ] iteratively samples a set of candidate viewpoints and then selects the optimal one based on the expected information gain, the distance, and the change of direction. The work presented in [22] formulates a set cover problem to generate candidate samples that cover frontiers and then generates the connecting path by solving a Traveling Salesman Problem (TSP) to achieve maximal coverage. The work in [ ] focuses on surface quality, where a path is generated for sectors of low quality and uncovered frontiers. Other works perform an initial scan to get an overview of the unknown structure before the main exploration process, as described in [ ]. The limitation of the work presented in [ ] is that the initial scan is performed at a specified altitude and the exploration path is generated offline, similar to model-based approaches. Similarly, the work in [ ] performs scanning at a specified altitude and then generates a 3D model that is exploited to generate the trajectory for an accurate reconstructed model.
Unlike the presented approaches, we propose a methodology that utilizes a UAV platform with a sensor suite that enables it to explore complex unknown structures without an a priori model by following
a Guided NBV strategy. The main aim of the proposed work is to explore unknown structures and to
provide dense 3D reconstructed models. The proposed Guided NBV starts by performing a profiling
step of the structure to provide an initial quick scan that improves coverage completeness and guides
the iterative NBV process. The end result of NBV exploration is a set of scans, gathered in an iterative
manner to cover a complex structure. The constructed 3D model can then be used for inspection or
mapping applications. The path is generated utilizing the Information Gain (IG) of the viewpoints
using a probabilistic volumetric map representation, OctoMap [ ]. An open-source ROS package implementation of our proposed Guided NBV is available in [ ]. The main contributions of this paper are as follows:
• A new frontier-based view generator is proposed that defines frontiers as low-density cells in the volumetric map.
• A novel utility function is proposed to evaluate and select viewpoints based on information gain, distance, model density, and predictive measures.
• A set of simulated and real experiments were performed using structures of different shapes to illustrate the effectiveness of the approach.
Our proposed approach is presented in Section 2, providing a general overview of the main components. Then, each of the main components of the proposed approach is described in detail as follows: mapping in Section 2.1, profiling in Section 2.2, predictive model generation in Section 2.3, viewpoint sampling in Section 2.4, viewpoint evaluation in Section 2.5, and termination in Section 2.6. Section 3 presents both the simulated and real experiments used to validate the proposed methodology and the obtained results. Section 4 summarizes the discussion of the results. Finally, our conclusions are presented in Section 5.
2. Methods
The proposed Guided NBV approach scans the structure encapsulated in a workspace of known
dimensions in order to obtain an initial rough model of the structure. This model is referred to as the
profile in this work, as it illustrates the overall structure shape and the model's main geometric features.
Profiling is the process of generating the profile.
After the profiling process, the standard NBV method steps are performed iteratively, as shown
in Figure 1. The method samples a set of viewpoints, which are evaluated to determine the next best
viewpoint. The selected viewpoint is then followed by the UAV where it updates the model using the
obtained observations. If the termination condition is not satisfied, the process is repeated. Following
these steps forms a full iteration of NBV. All of these steps are discussed in detail in the following sections.
The system can operate using different numbers of camera sensors with varying sensor ranges
and fields of view. There are two main assumptions used in the proposed system: availability of an
accurate global positioning system and a workspace of known dimensions.
Figure 1. A flowchart of the proposed Guided Next Best View (NBV) method.
2.1. 3D Mapping
The 3D mapping module performs the data acquisition related to the NBV process. This module is
also responsible for the 3D mapping associated processes such as density calculations, saving/loading
maps and storing the final structure model. Two representations are used in the 3D mapping module
processes: a 3D occupancy grid map consisting of voxels (OctoMap) to guide the NBV process and a
point cloud for the final 3D reconstruction. The data gathered by the RGBD sensors and LIDAR are used to update these representations at each iteration.
The final model of the object of interest is created using the point cloud data. During the NBV
exploration, the point cloud is used to create a 3D occupancy grid using the OctoMap implementation.
The collected point cloud is used to generate a 3D model of the object once the mission is completed.
Then, the collected images during the NBV exploration can be used to texture the final generated
mesh model.
The 3D occupancy grid divides space into a grid of voxels where each voxel is marked with
one of three states: free (containing no points), occupied (containing points), or unknown. Using the OctoMap, continuous values are stored rather than discrete states in order to determine the likelihood of a voxel being occupied, with a likelihood of 0 being free, 1 being occupied, and 0.5 meaning the voxel could be occupied or free with equal likelihood. Using OctoMap, a set of occupancy and clamping thresholds is defined for the upper and lower bounds of the occupancy probabilities to represent the different states and ensure that confidence is bounded. The notion of entropy from information theory is used to determine the amount of remaining information in the scene. The entropy e_i of a voxel with occupancy probability p_i is computed using Equation (1).

e_i = −p_i log(p_i) − (1 − p_i) log(1 − p_i)    (1)
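Equation (1) can be implemented directly. The following is a minimal sketch (the function name and the choice of a base-2 logarithm, which measures entropy in bits, are ours and are not taken from the paper's released code):

```python
import math

def voxel_entropy(p: float) -> float:
    """Shannon entropy (in bits) of a voxel with occupancy probability p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0  # fully known voxels carry no remaining information
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)
```

An unobserved voxel (p = 0.5) has the maximum entropy of one bit, which is why unknown space dominates the information measure.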
In NBV, the “next best view” has to be determined iteratively at each step. Different sensors
could be used in the 3D mapping process, each with different limitations, including resolution, maximum/minimum range, and Field Of View (FOV). Considering these limitations, it is possible to
quantify the Information Gain (IG) metric, which is the amount of information that a particular view
contains. To compute IG, a ray casting process is performed from the candidate viewpoint, as shown
in Figure 2.
Figure 2. The process of ray tracing. The rays are shown in pink, occupied voxels are green, free voxels are transparent, and unknown voxels are blue. (a) Virtual rays are cast from the selected viewpoint (shown in yellow) while respecting the sensor limitations. (b) The rays traverse the occupancy map until either the ray reaches the sensor's maximum range, or the ray is blocked by an occupied cell.
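As an illustrative sketch of this ray-casting step (our own simplified code, not the authors' implementation), the number of unknown voxels encountered along one virtual ray can be counted as follows; the grid encoding, fixed step size, and nested-list grid are assumptions made for the example:

```python
import math

FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def ray_information_gain(grid, origin, direction, max_range, step=0.5):
    """March a virtual ray through a voxel grid and count unknown voxels.
    The ray stops at an occupied voxel, at the sensor's maximum range,
    or when it leaves the workspace, mirroring the behavior in Figure 2."""
    norm = math.sqrt(sum(c * c for c in direction))
    d = [c / norm for c in direction]          # unit direction vector
    pos = list(origin)
    dims = (len(grid), len(grid[0]), len(grid[0][0]))
    travelled, unknown, seen = 0.0, 0, set()
    while travelled < max_range:
        idx = tuple(int(math.floor(c)) for c in pos)
        if not all(0 <= i < n for i, n in zip(idx, dims)):
            break  # ray left the workspace
        if idx not in seen:
            seen.add(idx)
            v = grid[idx[0]][idx[1]][idx[2]]
            if v == OCCUPIED:
                break  # blocked by an occupied cell
            if v == UNKNOWN:
                unknown += 1
        for k in range(3):
            pos[k] += d[k] * step
        travelled += step
    return unknown
```

Summing this count over all rays in a candidate view's FOV gives a simple information-gain score for that viewpoint.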
The occupancy grid is not suitable for reconstructing the final structure since it cannot capture the fine details of the structure being constructed. Instead, a point cloud is used to construct the final 3D mesh since it is denser and more precise. However, the occupancy grid is a more efficient data representation that facilitates differentiating between free and unexplored space, which is essential information for computing IG during the Guided NBV approach.
Point Density
During the final iterations of the Guided NBV, the proposed approach switches from exploration
to improving the quality of the reconstruction. This ensures a dense and complete coverage without
major gaps in the model. A density measure is included in the proposed method in order to account
for this switch.
Computing the density by only considering the number of points in each voxel provides a poor density estimate for complex structures. Some parts of these structures, such as curved areas, only partially pass through a voxel. This makes the density-based frontier viewpoint generator (described in Section 2.4) select these areas consistently even though their density will not change. Therefore, we introduce a new density measurement that considers the neighboring points of each point in the final point cloud. A point's neighbors are found by searching for points within a radius r_search of the point. The computed data are then stored with the 3D occupancy grid, which stores the average density of the points and the number of points in each voxel. A 2D example of calculating the density in each cell is shown in Figure 3.
Figure 3. A 2D example illustrating the cell density calculation method. (a) Cells are represented as a 2D grid, where points in each cell are presented as red dots, and circles illustrate the neighborhood of two point samples. (b) The classical method for computing density, where the density is simply the number of points in each cell. (c) The density calculated using the proposed method, where a neighborhood search is performed for each point to get a better density estimate.
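The 2D version of this density measure can be sketched as follows (a hedged illustration; the brute-force neighbor search stands in for the kd-tree radius search a real implementation would use, and all names are ours):

```python
import math

def cell_densities(points, cell_size, r_search):
    """For each point, count neighbors within r_search, then average those
    counts per grid cell -- a simplified 2D version of the paper's measure."""
    def neighbours(i):
        p = points[i]
        return sum(1 for j, q in enumerate(points)
                   if j != i and math.dist(p, q) <= r_search)
    cells = {}
    for i, p in enumerate(points):
        key = tuple(int(c // cell_size) for c in p)   # which cell p falls in
        cells.setdefault(key, []).append(neighbours(i))
    # average neighbor count per cell approximates local surface density
    return {k: sum(v) / len(v) for k, v in cells.items()}
```

A cell crossed only by the edge of a curved surface then scores low because its points have few neighbors, rather than merely because the cell holds few points.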
2.2. Profiling
The main aim of the profiling stage is to generate an initial rough model that can guide further
NBV exploration. To capture the profile, a laser range finder is used to perform this initial scan.
Although the profile can be obtained using visual data, structure-from-motion and stereoscopic readings provide poor distance estimation at longer ranges and require multiple viewpoints in order to compute the position of an observed point. The combined distance and angular capabilities of LIDARs therefore allow the system to collect sparse data about a large structure quickly.
The proposed profile approach creates a suitable path that allows the sensor to move using a set
of fixed waypoints on the surface of the structure’s bounded rectangular workspace. This requirement
is reasonable in many scenarios since the object being reconstructed is of a known size in a controlled
environment. In many cases, the structure is contained within a 3D workspace and occupies finite
dimensions. The bounded workspace size can be obtained by either estimation or measurement.
Based on the obtained workspace limits, the motion of the sensor is constrained to the bounds of
this workspace. The sensor data are gathered and processed as the sensor moves. Updating the
occupancy grid is performed by buffering and synchronizing the scan data with the corresponding
sensor positions along the trajectory. Once the entire scan is complete, the map is updated in batch
using the buffered data. A rectangular bounding box is shown in Figure 4, where the UAV is moved to eight fixed waypoints.
Figure 4. Profiling process following the structure's bounding box. The cumulative profile and path are shown after executing: (a) one waypoint; (b) three waypoints; and (c) eight waypoints.
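As a minimal illustration of the profiling path, the eight fixed waypoints could be taken as the corners of the workspace bounding box. This is an assumption on our part (the paper states only that eight fixed waypoints on the workspace surface are used), and the function name is hypothetical:

```python
from itertools import product

def box_profile_waypoints(center, size):
    """Eight corner waypoints of the structure's bounding box.
    center: (x, y, z) of the box center; size: (dx, dy, dz) extents."""
    cx, cy, cz = center
    hx, hy, hz = (s / 2.0 for s in size)
    # one waypoint per corner: every combination of +/- half-extent
    return [(cx + sx * hx, cy + sy * hy, cz + sz * hz)
            for sx, sy, sz in product((-1, 1), repeat=3)]
```

For the 40 × 40 × 20 m workspace of Scenario 1, this yields the eight box corners at which the LIDAR sweeps would be buffered before the batch map update.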
2.3. Exploiting Possible Symmetry
The generated profile may have deficiencies and gaps due to sensor limitations and occlusions.
To enhance the obtained initial model of the structure, the profile can be exploited to make a better prediction by utilizing symmetries that exist in many structures. In this work, reflectional symmetry is utilized, making use of our technique described in [ ]. The main steps of symmetry detection described in [ ] include: computing keypoints and features in the profile, matching keypoints and fitting planes, determining the symmetry plane using the fitted planes, and reflecting a copy of the profile point cloud across the plane of symmetry. Once the symmetry-predicted point cloud is obtained, a new 3D occupancy grid is created from this point cloud with the same resolution and bounds as the main 3D occupancy grid. Areas that contain predicted points are considered occupied, while the rest of the space is assumed to be unknown. Only two states are considered important in the newly generated 3D grid: occupied (i.e., predicted) or not. Voxels are expected to be occluded by unseen prediction if the ray length in the predicted 3D grid is shorter than in the main 3D map. Using ray tracing for the prediction update, the rays are cast in the predictive 3D map, where voxels are set to free space if they intersect with a ray. The work presented in [ ] explains how symmetry prediction reduces travel cost and improves coverage by correctly predicting that unknown cells are part of the surface rather than a large section of unknown space (which would otherwise be given an inflated information gain value).
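The reflection step alone can be sketched as follows. This is a minimal illustration assuming the symmetry plane (unit normal n and offset d, with plane n·x + d = 0) has already been detected; the keypoint-matching and plane-fitting pipeline is not reproduced here:

```python
def reflect_across_plane(points, normal, d):
    """Mirror a point cloud across the plane n.x + d = 0 (n a unit normal).
    Each point moves twice its signed distance along the normal."""
    nx, ny, nz = normal
    out = []
    for x, y, z in points:
        t = nx * x + ny * y + nz * z + d   # signed distance to the plane
        out.append((x - 2 * t * nx, y - 2 * t * ny, z - 2 * t * nz))
    return out
```

The mirrored copy is then voxelized into the predictive occupancy grid, whose cells are treated as occupied until observation proves otherwise.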
2.4. Viewpoint Sampling
The process of determining the next best possible viewpoint in a continuous 3D space is intractable, as there are an infinite number of possible viewpoints. Instead, a few selected viewpoints are sampled, utilizing two proposed approaches: Adaptive Grid Viewpoint Sampling (AGVS) and Frontier Viewpoint Sampling (FVS). The entire Guided NBV process terminates if no more valid candidate viewpoints can be generated. The AGVS method is used initially, until a local minimum is detected based on the entropy change. The viewpoint sampling component then switches to FVS, as illustrated in Figure 1, which directs the robot towards areas with low density. After covering the low-density area, the viewpoint sampling switches back to AGVS.
Our AGVS approach described in [ ] samples a few viewpoints in the vehicle's nearby vicinity. It performs both position-based and orientation-based sampling. This method reduces travel cost by sampling viewpoints in the nearby vicinity, although it is not globally optimal. Figure 5 shows an example of adaptively generated viewpoints at different scales.
A local minimum is assumed if the maximum entropy change per voxel falls below a threshold within the past few iterations. The viewpoint generator then switches to FVS, which places the sensor directly in front of regions of interest and frontiers. A frontier is defined in the literature as the boundary between known and unknown space. In our work, frontiers are defined as voxels with low density. By focusing on low-density frontiers, obtaining new information is guaranteed. Based on a specified density threshold, a set of frontier voxels is generated. The set of frontiers is clustered using a Euclidean-distance-based clustering method and the centroid of each cluster is determined. The nearest centroid is then selected and used to generate the views that fully cover it. The viewpoints are
sampled on a cylindrical surface with the centroid of the cluster as its center. Three layers are generated at different heights from the centroid, and four viewpoints are generated at each layer with orientations pointing towards the central axis (the axis passing through the centroid of the cluster). Using three layers around the cluster with viewpoints pointing to the central axis facilitates full coverage of the cluster from different directions. This is illustrated in Figure 6.
Figure 5. Example of AGVS performed using different scale values multiplied by the discretization length L.
Figure 6. (a) Low-density frontier voxels shown in purple; (b) the centroids (yellow) of the clusters of the frontier voxels; (c) FOV plane visualization where two sensors are mounted on the UAV; and (d) an example of the three layers covering an example cluster of frontier voxels (cubes).
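The cylindrical FVS sampling (three layers, four viewpoints per layer, each oriented towards the central axis) can be sketched as follows; the cylinder radius and layer spacing are free parameters we introduce for illustration:

```python
import math

def frontier_viewpoints(centroid, radius, layer_spacing):
    """Sample 12 candidate viewpoints on a cylinder around a frontier
    cluster: three layers at different heights, four viewpoints per layer,
    each yaw facing the central (vertical) axis through the centroid."""
    cx, cy, cz = centroid
    views = []
    for layer in (-1, 0, 1):                   # below, level with, above
        z = cz + layer * layer_spacing
        for k in range(4):                     # 90-degree spacing
            theta = k * math.pi / 2.0
            x = cx + radius * math.cos(theta)
            y = cy + radius * math.sin(theta)
            yaw = math.atan2(cy - y, cx - x)   # face the central axis
            views.append((x, y, z, yaw))
    return views
```

Each candidate is subsequently checked for validity (free space, workspace bounds, collisions, centroid in FOV) before being scored.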
Remote Sens. 2019,11, 2440 8 of 20
The validity of viewpoints is checked iteratively as they are generated. A viewpoint is validated by checking that it is in free space, within the workspace bounds, not colliding with the octree map, and that the centroid is inside its FOV. The visibility of each viewpoint on the cylinder is then obtained utilizing the concept of occlusion culling described in [ ] to check the frontier cluster coverage. The occlusion culling process extracts the visible points that exist in a viewpoint's FOV. The Guided NBV process terminates if all of the viewpoints generated by the method in use are considered invalid.
2.5. Viewpoint Evaluation
The next best view is determined by selecting the viewpoint that maximizes a utility function.
The utility function is computed for each candidate viewpoint using its visible point cloud and
occupancy grid. Once the best viewpoint is selected, the vehicle navigates to that viewpoint position,
collects data using the mounted sensors, and updates the constructed 3D model accordingly.
The proposed utility function rewards some behaviors and penalizes others. The main objective
of our proposed utility function is to maximize the IG and the density of the final 3D reconstructed model. The utility function also needs to penalize faraway viewpoints and utilize the predictions
made utilizing the reflectional symmetry feature if it exists. The utility function is calculated for each
candidate viewpoint, by evaluating its covered region, in order to select the best one based on the utility value. The proposed utility function formulation is presented in Equation (2), where U is the utility function value, Ê is the normalized expected entropy in the visible candidate region, D̂ is the normalized average point density in the visible candidate region, N_occ is the number of visible occupied nodes in the visible candidate region, P̂ is the normalized ratio of predicted voxels to the total number of visible voxels, d is the Euclidean distance from the current position to the selected viewpoint, and α, β, γ, and λ are the weights of each component. The inclusion of each of these terms is described further below.

U = (β Ê + γ D̂) (1 + α P̂) e^(−λd) log10(N_occ)    (2)
Information Gain (IG) is often used in the literature [ ], aiming to minimize the remaining total information by measuring the total entropy of the voxels in a given viewpoint's visible candidate region. The other part is the weighted information gain described in [ ], which aims at minimizing the Euclidean distance d while maximizing the IG. These two parts are combined in our proposed utility function in addition to the other terms.
Viewpoints with a high number of occupied voxels are given extra weight in order to direct the vehicle towards areas that overlap with previous views. This matters when the profile has vast unexplored areas around the structure, which would make the vehicle select unknown voxels that might be far from the structure in order to maximize information. However, rewarding viewpoints based on a high occupied-voxel count could lead to the selection of uninformative viewpoints. Therefore, the logarithm is used to avoid this, since it rewards increasing magnitudes but does not grow significantly within a magnitude. In our implementation, the base-10 logarithm is used instead of a lower-base logarithm since it has a larger interval between magnitudes. This facilitates differentiating between viewpoints of extremely low and moderate N_occ without rewarding higher N_occ excessively. Any higher-base logarithm can be used in order to further flatten the effect of higher magnitudes.
The density is calculated as described previously, utilizing the average number of points in a voxel. The number of predicted voxels is determined by performing ray casting as described in Section 2.3. The prediction ratio is computed using Equation (3), where P̂ is the normalized ratio of predicted voxels, N_pred is the number of predicted voxels in the viewpoint, and N_occ is the number of occupied voxels in the viewpoint.

P̂ = N_pred / (N_pred + N_occ)    (3)
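Equations (2) and (3) can be combined into a single scoring routine. The following is our hedged reading of the utility computation: the exact grouping of the weighted terms is inferred from the text, the default weights are those of Scenario 1 in Table 1, and the names are ours rather than taken from the released package:

```python
import math

def utility(e_norm, d_norm, n_occ, n_pred, dist,
            alpha=10.0, beta=0.5, gamma=0.5, lam=0.2):
    """Score a candidate viewpoint: normalized entropy e_norm, normalized
    density d_norm, occupied-voxel count n_occ, predicted-voxel count
    n_pred, and travel distance dist, weighted by alpha/beta/gamma/lam."""
    if n_occ <= 0:
        return 0.0  # nothing of the structure is visible from this view
    p_hat = n_pred / (n_pred + n_occ)        # prediction ratio, Equation (3)
    return ((beta * e_norm + gamma * d_norm)  # information + density reward
            * (1.0 + alpha * p_hat)           # boost symmetry-predicted views
            * math.exp(-lam * dist)           # penalize distant viewpoints
            * math.log10(n_occ))              # mild reward for overlap
```

The candidate with the largest score is chosen as the next best view; note how the exponential distance term dominates for faraway candidates while log10 keeps the overlap reward from growing unboundedly.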
2.6. Termination
In general, a suitable termination condition is adapted for each application, but it is difficult to
quantify the desired criteria. In ideal scenarios, once the system has achieved coverage completeness,
the process should terminate. However, computing coverage percentage without the original reference
model is considered difficult. The work presented in [ ] terminates if no viewpoint has an IG above a predefined threshold, while the work in [ ] terminates if the difference in total entropy reduction decreases below a predefined threshold. Three termination conditions are used in our Guided NBV method: comparing the current NBV iteration number with a predefined maximum number of iterations, checking the global entropy change after n iterations [ ], or terminating when no valid viewpoints are generated by the viewpoint sampling component, as mentioned previously.
Using the global entropy change approach, the process terminates if the condition |E_t − E_(t−n)| < E_threshold is satisfied for more than n consecutive iterations. To avoid premature termination when the method is stuck in a local minimum, a sufficiently large value of n is selected. If the total number of unknown voxels is very large, the values of E_t will be large and hence the difference will be small. This can happen for large complex structures represented using voxels of fine resolution, or when the profile quality is not good. Therefore, for large structures, several consecutive iterations need to be selected when checking the differences.
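As a sketch, the global entropy change check can be implemented with a short history of entropy values. This is a hedged illustration (the consecutive-iteration bookkeeping and parameter names are ours), combining it with the maximum-iteration budget:

```python
from collections import deque

def make_termination_check(threshold, n_consecutive, max_iterations):
    """Return a callable that, fed the global entropy each iteration,
    reports whether the Guided NBV loop should stop: either the entropy
    changed by less than `threshold` for more than `n_consecutive`
    iterations in a row, or the iteration budget ran out."""
    history = deque(maxlen=2)   # last two global entropy values
    stalled = 0
    iteration = 0

    def should_terminate(global_entropy):
        nonlocal stalled, iteration
        iteration += 1
        history.append(global_entropy)
        if len(history) == 2 and abs(history[1] - history[0]) < threshold:
            stalled += 1        # another near-flat iteration
        else:
            stalled = 0         # progress was made; reset the streak
        return iteration >= max_iterations or stalled > n_consecutive
    return should_terminate
```

In practice only one of the entropy-based and budget-based conditions is active per experiment, matching the selection described above.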
The NBV termination conditions related to viewpoint validity and the vehicle's motion repetition check are embedded as part of the NBV cycle. The other termination conditions, related to the fixed maximum number of iterations and the global entropy change, are selected based on the application, where only one of them is used. UAVs have limited flight time, which requires selecting a suitable
termination method based on the conducted experiment. In practical applications, the maximum
number of iterations could be selected based on the complexity of the structure, and the amount
of captured information. The amount of captured information at each viewpoint is affected by the
number of used sensors and the voxels ray-tracing process.
3. Experimental Results
To perform the experimental evaluation, four models were selected: an A340 aircraft model and a power plant model used in simulation, and a representative lab model and an airplane used for the indoor real experiments. These models represent environments with different geometrical features in terms of
shapes, curves, textures, and volumes. Two real indoor experiments were conducted using different
structures in order to validate the applicability of the Guided NBV system. In all experiments, collision
check with the structure was performed by constructing a circular collision box around the UAV.
The motion of the UAV was also constrained within the specified structure workspace dimensions.
Figure 7 shows an example of the circular collision box created to check the viewpoints' validity.
The path connecting the current position with the next candidate viewpoint was also checked for
collision, utilizing the generated profile to compute path intersections. If the path intersected with
the profile, the viewpoint was discarded and another candidate viewpoint was checked. This allowed
avoiding collisions with the structure while following the path connecting viewpoints.
3.1. Simulated Experiments
3.1.1. Simulation Setup
The first two scenarios were Software-In-The-Loop (SITL) [ ] experiments using a simulated Iris quadrotor platform. The quadrotor carries RGBD sensors with a range of 0.3–8 m and a laser scanner tilted at 10°, with a FOV of 180° and a range of 0.1–30 m. The SITL system uses the PX4 Firmware [ ], the same firmware used on the real UAV system. The simulated firmware and the on-board computer communicate through the MAVLink protocol using a ROS plugin (mavros) [ ]. The 3D occupancy grid was processed and represented using the OctoMap library [ ], while the processing of the point cloud was performed using the Point Cloud Library [35].
Figure 7. The circular collision box (purple) created around the UAV locations to check the validity of the viewpoints. The path of the UAV is shown in blue and the scans are presented in dark red.
The octree resolution was selected to be 0.2 m for both scenarios since they include complex large
structures that have detailed shapes. A collision radius of 1.5 m was selected based on the size of the
used quadrotor. For the AGVS view generator, the selected sampling distance was 1 m, while the sampling angle was selected to allow the view generator to create a variety of possible viewpoints that point to the structure. Table 1 summarizes the other parameters used in the two simulation scenarios. The specified numbers of vertices and faces of the structures' mesh models indicate the amount of detail each structure contains, in addition to the scale of the coverage problem based on the workspace dimensions.
Table 1. The parameters of the Guided NBV used in the two simulation scenarios.
Parameter                            Scenario 1             Scenario 2
Workspace dimensions                 40 × 40 × 20 m³        50 × 32 × 12 m³
Mesh vertices/faces                  17,899/31,952          7105/4422
Camera FOV                           [60° H, 45° V]         [90° H, 60° V]
Camera tilt angle (pitch)            (−20°, 20°)            0°
Number of cameras                    2                      1
Utility weights (α, β, γ, λ)         (10, 0.5, 0.5, 0.2)    (10, 1, 0, 0.2)
Global entropy change E_threshold    0.001                  0.001
The coverage evaluation was performed by comparing two new occupancy grids: the first
constructed from the final point cloud and the second from points sampled from the
original mesh. The coverage percentage was evaluated based on the percentage of matching voxels
between the two occupancy grids. Applying this approach with different voxel sizes measures two
different types of coverage: overall shape coverage (large voxels, e.g., 0.5 m) and detailed
shape coverage (small voxels, e.g., 0.05 m). In both scenarios, the proposed Guided NBV was compared
against two state-of-the-art utility functions, namely weighted IG [17] and total entropy of voxels
in viewpoint [29], applying the same parameters for all the experiments.
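The voxel-matching evaluation above can be sketched as a set intersection of occupied voxel indices. This is an illustrative reimplementation, not the paper's code; point clouds are assumed to be iterables of (x, y, z) tuples:

```python
# Illustrative sketch of the coverage evaluation: both clouds are voxelized at
# the same resolution, and coverage is the fraction of ground-truth voxels that
# are also occupied in the reconstruction. Function names are hypothetical.

def voxel_indices(points, voxel_size):
    """Map a point cloud (iterable of (x, y, z)) to a set of occupied voxel indices."""
    return {tuple(int(c // voxel_size) for c in p) for p in points}

def coverage_percentage(reconstruction, ground_truth, voxel_size):
    """Percentage of ground-truth voxels also occupied in the reconstruction."""
    gt = voxel_indices(ground_truth, voxel_size)
    rec = voxel_indices(reconstruction, voxel_size)
    return 100.0 * len(gt & rec) / len(gt)
```

Running `coverage_percentage` twice, once with a large voxel size (e.g., 0.5 m) and once with a small one (e.g., 0.05 m), yields the overall-shape and detailed-shape coverage figures reported in the tables.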
3.1.2. Simulation Results
Aircraft Model
A 3D reconstructed model of an aircraft was generated in this experiment by exploring the
bounding box of the structure. A dense point cloud map was generated from the data collected at
the selected viewpoints along the exploration path in simulation.
Figure 8 presents the achieved coverage and the path generated to cover the aircraft model. Table 2
compares the results obtained by our proposed method against the other two approaches
in terms of coverage, distance, and entropy reduction. As shown in the table, the density and
resolution of the profile model are low (15.3% coverage) since the profiling was performed at the
edges of the bounded workspace from a long distance. The overall coverage reaches
a high percentage (99.0%) and the model density improves when our proposed Guided NBV
method was used. Since the aircraft model is symmetric, the utility function weights were selected
to achieve high density and to exploit the available prediction measures. Using the proposed utility
function with the selected weights, shown in Table 1, facilitated capturing more data
by allowing the UAV to travel around the structure more efficiently. The plot in Figure 9
shows the average density attribute across the iterations for each of the utility functions.
As shown in the plot, the proposed utility function achieves higher density values per voxel
in each iteration than the other two approaches. The plot is non-monotonic since the average
density was computed from the size of the point cloud and the number of occupied voxels in the
octree; at complex curved areas, some voxels hold few points compared to others, which
causes the variation in the average density. The details of the structure were captured with
high coverage density when evaluated with voxels of 0.05 m resolution. However, this
improvement in coverage comes at the cost of an increased traveled distance.
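The average density attribute discussed above, total points divided by occupied octree voxels, can be sketched as follows (an illustration under the stated definition, not the paper's implementation):

```python
# Illustrative sketch: average point density per occupied voxel, computed as
# (size of point cloud) / (number of occupied voxels), as described in the text.
from collections import Counter

def average_voxel_density(points, voxel_size):
    """Average number of points per occupied voxel.

    Voxels over flat, well-scanned surfaces accumulate many points, while
    voxels on complex curved areas may hold only a few, so this average can
    fluctuate (non-monotonically) as new regions are scanned.
    """
    counts = Counter(tuple(int(c // voxel_size) for c in p) for p in points)
    return sum(counts.values()) / len(counts)
```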
Figure 8.
The path followed by the UAV in simulation around the aircraft model using three different
utility functions: (a) our proposed utility; (b) Isler et al. [29] utility; and (c) Bircher et al. [17] utility.
Table 2. Scenario 1 (aircraft) final results showing coverage completeness using the proposed utility
function compared to the other approaches.

Profiling  Utility Function  Iterations  Entropy  Distance (m)  Coverage (Res = 0.05 m)  Coverage (Res = 0.10 m)  Coverage (Res = 0.50 m)
Yes        [17]              577         10,504   1347.6        86.5%                    93.9%                    98.8%
Yes        [29]              585         10,846   1500.9        85.1%                    92.0%                    98.5%
Yes        Proposed          635         10,970   1396.5        94.1%                    97.2%                    99.8%
—          Profile           —           —        370.0         15.3%                    63.9%                    87.3%
Figure 9.
The voxel point density vs. the number of iterations, showing the performance of the proposed
approach compared to other state-of-the-art utility functions.
Power Plant Model
Similarly, the power plant structure was reconstructed in 3D by exploring the bounding box of the
model workspace. The data collected at each viewpoint in simulation were used to generate a
dense point cloud map. The generated paths and achieved coverage are shown in Figure 10, and
the results compared with the two other approaches are given in Table 3. As shown in the table,
the resolution and density of the profile model are low (15.1%).
The Guided NBV achieved high coverage percentages compared to the two other
approaches. The longer path distance is compensated by a better coverage percentage and average
density (proposed = 177 pt/voxel, Bircher et al. [17] = 140 pt/voxel, Isler et al. [29] =
152 pt/voxel). Moreover, the Guided NBV achieves high coverage (97.5%) for the power plant
structure compared to Song and Jo [21] (96.32%) and Song and Jo [22] (90.45%) using a similar
setup and structure model.
The power plant model is not symmetric, so the utility function weights were selected to achieve
high density, as shown in Table 1. The plot in Figure 11 shows the average density and
entropy attributes across the iterations, illustrating the effect of FVS in reducing the entropy and
increasing the density in some areas. Using the AGVS sampling method with FVS facilitated
dense coverage, exceeding 90%, mainly due to the ability to take larger steps to escape
local minima.
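The total entropy attribute tracked in these plots follows the standard information-theoretic treatment of probabilistic occupancy maps. A minimal sketch, assuming per-voxel Shannon entropy over occupancy probabilities (an illustration, not the paper's implementation):

```python
# Illustrative sketch: total Shannon entropy (in bits) of a probabilistic
# occupancy map. Each voxel with occupancy probability p contributes
# H(p) = -(p*log2(p) + (1-p)*log2(1-p)); fully known voxels (p = 0 or 1)
# contribute nothing, so the total shrinks as scanning reduces uncertainty.
import math

def map_entropy(occupancy_probabilities):
    """Sum the binary entropy of every voxel's occupancy probability."""
    total = 0.0
    for p in occupancy_probabilities:
        if 0.0 < p < 1.0:
            total -= p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p)
    return total
```

Tracking the change of this quantity between iterations gives the global entropy change used as a termination signal (E_threshold in Table 1).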
Table 3. Scenario 2 (power plant) final results showing coverage completeness using the proposed
utility function compared to the other approaches.

Profiling  Utility Function  Iterations  Entropy  Distance (m)  Coverage (Res = 0.05 m)  Coverage (Res = 0.10 m)  Coverage (Res = 0.50 m)
Yes        [17]              162         14,574   479.5         95.9%                    87.0%                    98.5%
Yes        [29]              360         24,425   945.4         95.7%                    86.3%                    98.1%
Yes        Proposed          210         23,983   469.5         97.5%                    93.6%                    98.9%
—          Profile           —           —        512.0         15.1%                    69.5%                    86.3%
Figure 10.
The path followed by the UAV in simulation around the power plant structure using three
different utility functions: (a) our proposed utility; (b) Isler et al. [29] utility; and (c) Bircher et al. [17] utility.
Figure 11.
The density and total entropy attributes along the iterations during the power plant coverage
using the proposed Guided NBV. The areas where the viewpoints generator switched to FVS are shown
in green.
3.2. Real Experiments
3.2.1. Real Experiment Setup
The third and fourth scenarios were performed in an indoor lab environment using a UAV
platform. A DJI 450 quadrotor-based frame was used with a Pixhawk autopilot system running the PX4
firmware [33]. A Jetson TX2 onboard computer mounted on the quadrotor runs the Guided NBV
process and is connected to the autopilot system via the MAVLink protocol [34]. A ZED Mini camera
with a FOV of (90°, 60°) is mounted on the quadrotor with a 0° tilt angle, at 0.18 m along the z-axis.
A Hokuyo LiDAR is mounted on the quadrotor for profiling at 0.08 m along the z-axis. The position
of the quadrotor is determined using an OptiTrack motion capture system [36]. Figure 12 shows the
quadrotor utilized in this experiment.
Figure 12. The quadrotor hardware components.
A collision radius of 0.8 m was selected based on the size of the DJI 450. For the AGVS viewpoint
generator, the sampling distance was set to 0.5 m, and the sampling angle was selected to
facilitate the generation of a set of possible viewpoints within the indoor lab workspace dimensions.
The utility function weights were also selected, with λ = 0.2. A fixed number of iterations was
selected as the termination condition. Table 4 summarizes the other parameters used in the two
real-experiment scenarios.
Table 4. The parameters of the Guided NBV used in the two real-experiment scenarios.

Parameter             Scenario 3         Scenario 4
Model size            1.3 × 1 × 1.5 m³   4 × 3 × 1 m³
Workspace dimension   4 × 4 × 2 m³       6 × 4 × 2 m³
Octree resolution     0.07 m             0.05 m
Offline 3D reconstruction of the selected structures was performed using the images collected
during the Guided NBV exploration. While the UAV was exploring the environment, an OctoMap
was generated from the processed point cloud.
3.2.2. Real Experiment Results
Representative Model
A representative model in an indoor lab environment was used in this experiment. The proposed
Guided NBV was executed starting with a profiling stage, followed by the iterative NBV
process using the quadrotor platform, as described previously. The generated exploration path,
shown in Figure 13, consists of 40 viewpoints with a path length of 32 m, generated in
8 min. The obtained average density is 33 points/voxel and the entropy reduction is 30,140.
Figure 14 shows the decreasing entropy and increasing density
across the iterations.
(a) Side view (b) Top view
Figure 13.
The path followed by the UAV, shown in orange, including the profiling stage.
The planned trajectory is shown in black, and the reconstruction of the processed point cloud is shown
using OctoMap voxels.
Figure 14.
The density and total entropy attributes along the iterations during the indoor
real experiment.
A feasible exploration path was generated by our proposed Guided NBV approach for the indoor
structure. This is demonstrated in the 3D OctoMap (resolution of 0.04 m) of the structure, which
was generated while exploring the environment with the real quadrotor UAV platform and
collecting data along the path. The final textured 3D reconstructed model is shown in Figure 15.
As a further enhancement, trajectory planning could be performed to connect the waypoints while
optimizing the path for UAV acceleration or velocity. The indoor lab experiment of
Scenario 3 is illustrated in a video at [37].
(a) Side view (b) Top view
Figure 15.
The 3D reconstructed representative model utilizing the images collected during the NBV
mission. The locations of the gathered images are shown as green dots.
Airplane Model
In this experiment, another model of different complexity in terms of shape and texture was
selected which is the airplane. The generated exploration path is shown in Figure 16 and it
consists of 60 waypoints, with a path length of 55.4 m, generated in 15 min. Figure 17 shows the
plot of the obtained increasing average density and decreasing entropy with an average density
of 45 points/voxel and entropy reduction value is 28949.
(a) Top view (b) Front view
Figure 16.
The path followed by the UAV, shown in orange, including the profiling stage.
The planned trajectory is shown in black, and the reconstruction of the processed point cloud is shown
using OctoMap voxels.
The achieved results show the applicability of our proposed NBV exploration method to different
structures, as illustrated in the generated 3D OctoMap (resolution of 0.02 m) of the
airplane structure. A finer OctoMap resolution was used since the airplane contains more
details and edges than the previous representative model. The final textured 3D reconstructed
model of the airplane is shown in Figure 18. The indoor lab experiment of
Scenario 4 is illustrated in a video at [38].
Figure 17.
The density and total entropy attributes along the iterations during the indoor real
experiment for the airplane model.
(a) Backward view (b) Front view
Figure 18.
The 3D reconstructed airplane model utilizing the images collected during the NBV mission.
The locations of the gathered images are shown as green dots.
4. Discussion
Using our proposed Guided NBV facilitates exploring unknown structures to provide a 3D
reconstructed model that can be used for different purposes such as inspection. Experiments were
performed using different structures of interest to demonstrate the effectiveness of the Guided NBV.
According to the results obtained from the simulated structures, the Guided NBV generated feasible
paths for the aircraft and power plant models. This was demonstrated in the final dense point
cloud map, which can be further used to generate a 3D reconstructed model. The proposed approach
was applied to symmetric and non-symmetric structures by modifying the proposed utility function
weights accordingly. The achieved results were compared with two other utility functions, which
produced reconstructed models with lower density and resolution compared to our final reconstructed
model. Our proposed FVS facilitated escaping local minima by taking large steps, which increased
density and reduced entropy. The paths connecting the viewpoints that cover frontiers affect the utility
since the frontiers lie in different locations once the coverage reaches high percentages. In addition,
the path needs to avoid colliding with the structure. This effect is controlled using the penalizing
distance factor parameter. Although covering frontiers with low density increases the traveled distance,
it allows achieving higher density and coverage of the structure.
The proposed Guided NBV was also validated in a real indoor environment using a representative
model and an airplane model. Using a real quadrotor platform, the Guided NBV generated exploration
paths for both models, demonstrating their feasibility in the final generated 3D OctoMap and 3D
textured model. As in the simulated experiments, the images collected along the path could be used
to enhance the texture of the final reconstruction. The generated path could be further enhanced by
performing trajectory planning that optimizes the path between the waypoints while considering the
UAV motion constraints, including velocity and acceleration.
5. Conclusions
In this paper, a Guided NBV approach is proposed to explore unknown large structures and
reconstruct them in 3D. Our approach facilitates the practical deployment of UAVs in reconstruction and
inspection applications when a model of the structure is not known a priori. Our proposed NBV starts
with an initial scanning step called profiling that facilitates the NBV process. Our main contributions
with respect to the state-of-the-art include: a density-frontier-based viewpoint generator, an adaptive
viewpoint generator that avoids local minima, and a novel utility function. The four main
components of the proposed utility function are information gain, distance, density and symmetry
prediction. The total remaining information is minimized using the information theory approach,
while the travel cost is minimized by the distance component. The density component facilitates
generating a final dense 3D model. A set of simulations and real indoor experiments were performed
with structures of different shapes and complexities. The achieved results demonstrate that our
proposed approach improves coverage completeness, density and entropy reduction compared to
other state-of-the-art approaches. The implementation of the proposed algorithm is accessible in [26].
For future work, the proposed method will be extended to use multi-agent systems and deep
reinforcement learning to detect occluded parts accurately. The view selection will also be extended
to consider the energy of the UAV and the covered surface area. The trajectory of the UAV will be
enhanced by performing trajectory planning considering the UAV motion constraints.
Author Contributions:
Methodology, R.A., A.A. and T.T.; Software, R.A., A.A. and T.T.; Supervision, T.T., L.S.
and Y.Z.; Validation, R.A.; Writing—original draft, R.A.; and Writing—review and editing, R.A., T.T. and Y.Z.
Funding: This research received no external funding.
Acknowledgments: This publication was based on work supported by the Khalifa University of Science and
Technology under Award No. RC1-2018-KUCARS.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Faria, M.; Maza, I.; Viguria, A. Applying Frontier Cells Based Exploration and Lazy Theta* Path Planning over Single Grid-Based World Representation for Autonomous Inspection of Large 3D Structures with an UAS. J. Intell. Robot. Syst. 2019, 93, 113–133. [CrossRef]
2. Hollinger, G.A.; Englot, B.; Hover, F.S.; Mitra, U.; Sukhatme, G.S. Active planning for underwater inspection and the benefit of adaptivity. Int. J. Robot. Res. 2013, 32, 3–18. [CrossRef]
3. Jiang, S.; Jiang, W.; Huang, W.; Yang, L. UAV-Based Oblique Photogrammetry for Outdoor Data Acquisition and Offsite Visual Inspection of Transmission Line. Remote Sens. 2017, 9, 278. [CrossRef]
4. Oleynikova, H.; Taylor, Z.; Siegwart, R.; Nieto, J. Safe Local Exploration for Replanning in Cluttered Unknown Environments for Micro-Aerial Vehicles. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'18), Brisbane, Australia, 21–25 May 2018.
5. Selin, M.; Tiger, M.; Duberg, D.; Heintz, F.; Jensfelt, P. Efficient Autonomous Exploration Planning of Large-Scale 3-D Environments. IEEE Robot. Autom. Lett. 2019, 4, 1699–1706.
6. Bircher, A.; Kamel, M.; Alexis, K.; Burri, M.; Oettershagen, P.; Omari, S.; Mantel, T.; Siegwart, R. Three-dimensional coverage path planning via viewpoint resampling and tour optimization for aerial robots. Auton. Robot. 2015, 40, 1059–1078. [CrossRef]
7. Almadhoun, R.; Taha, T.; Seneviratne, L.; Dias, J.; Cai, G. GPU accelerated coverage path planning optimized for accuracy in robotic inspection applications. In Proceedings of the 2016 IEEE 59th International Midwest Symposium on Circuits and Systems (MWSCAS), Abu Dhabi, UAE, 16–19 October 2016; pp. 1–4.
8. Frías, E.; Díaz-Vilariño, L.; Balado, J.; Lorenzo, H. From BIM to Scan Planning and Optimization for Construction Control. Remote Sens. 2019, 11, 1963. [CrossRef]
9. Martin, R.A.; Rojas, I.; Franke, K.; Hedengren, J.D. Evolutionary View Planning for Optimized UAV Terrain Modeling in a Simulated Environment. Remote Sens. 2016, 8, 26. [CrossRef]
10. Vasquez-Gomez, J.I.; Sucar, L.E.; Murrieta-Cid, R.; Herrera-Lozada, J.C. Tree-based search of the next best view/state for three-dimensional object reconstruction. Int. J. Adv. Robot. Syst. 2018, 15, 1–11. [CrossRef]
11. Palomeras, N.; Hurtos, N.; Carreras, M.; Ridao, P. Autonomous Mapping of Underwater 3-D Structures: From view planning to execution. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'18), Brisbane, Australia, 21–25 May 2018; pp. 1965–1971.
12. Martin, R.A.; Blackburn, L.; Pulsipher, J.; Franke, K.; Hedengren, J.D. Potential Benefits of Combining Anomaly Detection with View Planning for UAV Infrastructure Modeling. Remote Sens. 2017, 9, 434. [CrossRef]
13. Shade, R.; Newman, P. Choosing where to go: Complete 3D exploration with stereo. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011; pp. 2806–2811.
14. Heng, L.; Gotovos, A.; Krause, A.; Pollefeys, M. Efficient visual exploration and coverage with a micro aerial vehicle in unknown environments. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 1071–1078.
15. Haner, S.; Heyden, A. Discrete Optimal View Path Planning. In Proceedings of the 10th International Conference on Computer Vision Theory and Applications (VISAPP 2015), Berlin, Germany, 11–14 March 2015; pp. 411–419.
16. Cieslewski, T.; Kaufmann, E.; Scaramuzza, D. Rapid exploration with multi-rotors: A frontier selection method for high speed flight. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 2135–2142.
17. Bircher, A.; Kamel, M.; Alexis, K.; Oleynikova, H.; Siegwart, R. Receding Horizon "Next-Best-View" Planner for 3D Exploration. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 1462–1468.
18. Oshima, S.; Nagakura, S.; Yongjin, J.; Kawamura, A.; Iwashita, Y.; Kurazume, R. Automatic planning of laser measurements for a large-scale environment using CPS-SLAM system. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–3 October 2015; pp. 4437–4444.
19. Palomeras, N.; Hurtos, N.; Vidal Garcia, E.; Carreras, M. Autonomous exploration of complex underwater environments using a probabilistic Next-Best-View planner. IEEE Robot. Autom. Lett. 2019. [CrossRef]
20. Palazzolo, E.; Stachniss, C. Effective Exploration for MAVs Based on the Expected Information Gain. Drones 2018, 2, 9. [CrossRef]
21. Song, S.; Jo, S. Surface-based Exploration for Autonomous 3D Modeling. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'18), Brisbane, Australia, 21–25 May 2018; pp. 4319–4326.
22. Song, S.; Jo, S. Online inspection path planning for autonomous 3D modeling using a micro-aerial vehicle. In Proceedings of the IEEE International Conference on Robotics and Automation, Singapore, 29 May–3 June 2017; pp. 6217–6224.
23. Hepp, B.; Nießner, M.; Hilliges, O. Plan3D: Viewpoint and Trajectory Optimization for Aerial Multi-View Stereo Reconstruction. ACM Trans. Graph. 2017, 38, 4:1–4:17. [CrossRef]
24. Roberts, M.; Dey, D.; Truong, A.; Sinha, S.N.; Shah, S.; Kapoor, A.; Hanrahan, P.; Joshi, N. Submodular Trajectory Optimization for Aerial 3D Scanning. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 5334–5343.
25. Hornung, A.; Wurm, K.M.; Bennewitz, M.; Stachniss, C.; Burgard, W. OctoMap: An efficient probabilistic 3D mapping framework based on octrees. Auton. Robot. 2013, 34, 189–206. [CrossRef]
26. Source Implementation Package of the Proposed Algorithm. Available online: ma_nbv_cpp.git (accessed on 30 May 2018).
27. Abduldayem, A.; Gan, D.; Seneviratne, L.; Taha, T. 3D Reconstruction of Complex Structures with Online Profiling and Adaptive Viewpoint Sampling. In Proceedings of the International Micro Air Vehicle Conference and Flight Competition, Malaga, Spain, 14–16 June 2017; pp. 278–285.
28. Holz, D.; Basilico, N.; Amigoni, F.; Behnke, S. Evaluating the efficiency of frontier-based exploration strategies. In Proceedings of the ISR 2010 (41st International Symposium on Robotics) and ROBOTIK 2010 (6th German Conference on Robotics), Munich, Germany, 7–9 June 2010.
29. Isler, S.; Sabzevari, R.; Delmerico, J.; Scaramuzza, D. An information gain formulation for active volumetric 3D reconstruction. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–20 May 2016; pp. 3477–3484.
30. Quin, P.; Paul, G.; Alempijevic, A.; Liu, D.; Dissanayake, G. Efficient neighbourhood-based information gain approach for exploration of complex 3D environments. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, Germany, 6–10 May 2013; pp. 1343–1348.
31. Paul, G.; Webb, S.; Liu, D.; Dissanayake, G. Autonomous robot manipulator-based exploration and mapping system for bridge maintenance. Robot. Auton. Syst. 2011, 59, 543–554. [CrossRef]
32. Abduldayem, A.A. Intelligent Unmanned Aerial Vehicles for Inspecting Indoor Complex Aero-Structures. Master's Thesis, Khalifa University, Abu Dhabi, UAE, 2017.
33. PX4. Available online: (accessed on 16 July 2019).
34. Mavros. Available online: (accessed on 16 July 2019).
35. Rusu, R.B.; Cousins, S. 3D Is Here: Point Cloud Library (PCL). In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011.
36. Optitrack. Available online: (accessed on 25 March 2019).
37. Experimentation Using Representative Model. Available online: (accessed on 25 March 2019).
38. Experimentation Using Airplane Model. Available online: (accessed on 25 March 2019).
2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (
... Therefore, the main aim of the proposed work is to cover large unknown 3D structures in short time and achieve high coverage. The proposed HCPP develops the conventional Guided NBV in [21] to use multi-robot systems sharing the global map, and combines an LSTM prediction component. The HCPP switches between the conventional viewpoint generation methods and the LSTM prediction component based on a criteria that involves a specific number of alternating iterations and an entropy threshold. ...
... The HCPP follows the steps of the Guided NBV but with an integrated LSTM based next waypoint prediction component. The approach starts by generating a rough model using the profiling process described in [21] by following the structures bounding box. Then an iterative step is applied where each robot performs viewpoint sampling, evaluation and selection, followed by viewpoint execution and mapping. ...
... This is repeated until the termination condition is satisfied (number of iterations is our case). Other termination conditions described in [21] can be used but the focus here is on the maximum number of iterations to compare the HCPP with the conventional CPP work. The hybrid CPP switches between the conventional way of viewpoint generation and selection, and the LSTM viewpoint prediction network based on a specific switching condition. ...
Coverage Path Planning (CPP) is an essential capability for autonomous robots operating in various critical applications such as fire fighting, and inspection. Performing autonomous coverage using a single robot system consumes time and energy. In particular, 3D large structures might contain some complex and occluded areas that shall be scanned rapidly in certain application domains. In this paper, a new Hybrid Coverage Path Planning (HCPP) approach is proposed to explore and cover unknown 3D large structures using a decentralized multi-robot system. The HCPP approach combines a guided Next Best View (NBV) approach with a developed Long Short Term Memory (LSTM) waypoint prediction approach to decrease the CPP exploration time at each iteration and simultaneously achieve high coverage. The hybrid approach is the new ML paradigm which fosters intelligence by balancing between data efficiency and generality allowing the exchange of some CPP parts with a learned model. The HCPP uses a stateful LSTM network architecture which is trained based on collected paths that cover different 3D structures to predict the next viewpoint. This architecture captures the dynamic dependencies of adjacent viewpoints in the long term sequences like the coverage paths. The HCPP switches between these methods triggered by either the number of iterations or an entropy threshold. In the decentralized multi-robot system, the proposed HCPP is embedded in each robot where each one of them shares its global 3D map ensuring robustness. The results performed in a realistic Gazebo robotic simulator confirmed the advantage of the proposed HCPP approach by achieving high coverage on different 3D unknown structures in a shorter time compared to conventional NBV.
... An interesting application of NBV is also proposed in [21], in which an autonomous underwater vehicle (AUV) is programmed to choose its next viewpoint optimally to map or inspect complex underwater structures. Similarly, in [1] an unmanned aerial vehicle (UAV) equipped with NBV was reported for 3D reconstruction of large structures. The utility function in [1], considers four criterion categories: information theory, model density, traveled distance, and predictive measures based on symmetries in the structure. ...
... Similarly, in [1] an unmanned aerial vehicle (UAV) equipped with NBV was reported for 3D reconstruction of large structures. The utility function in [1], considers four criterion categories: information theory, model density, traveled distance, and predictive measures based on symmetries in the structure. Figure 1 shows the flowchart of a typical AOR system. ...
... No matter which classifier or fusion technique is selected, each of them may be used in the context of the proposed next best view system through the flow described in Fig. 1. Averaging, 1 Dataset available at Next-Best-View-Dataset. Naïve Bayes [15], and Dempster-Shafer (DS) [16] fusion algorithms are used in the tests. ...
Full-text available
Active vision is the ability of intelligent agents to dynamically gather more information about their surroundings by physical motion of the camera. In the case of object recognition, active vision enables improved performance by incorporating classification decisions from new viewpoints when there is some degree of uncertainty in the current recognition result. A natural question in an autonomous active vision system is, nonetheless, how to determine the new viewpoint, i.e. in what pose should the camera be moved? This is the traditional question of next best view in active perception systems. Current approaches to the next best view problem either need construction of occupancy grids or require training datasets of 3D objects or multiple captures of the same object in specified poses. Occupancy grid methods are usually dependent on multiple camera movements to perform well, which make them more useful for 3D reconstruction applications than object recognition. In this paper, a next best view method for active object recognition based on object appearance and surface direction is proposed that decides on the next cameras pose without requiring any specifically structured training datasets of 3D objects. It is also designed for single-shot deductions of next viewpoint and is able to determine next best views without the need for substantial knowledge of 3D voxels in the environment around the camera. The experimental results illustrate the efficiency of the proposed method, while showing large improvements in accuracy and F1 score.
... Furthermore, researchers keep working on the development of more advanced autonomous path planning techniques, which can be adopted for dynamic scenes like in disaster mapping applications and other autonomous robot-like tasks [15][16][17][18]. The basic idea is to find the optimal collision-free path of the UAS and its sensor pose in every view, by relying on the best next view (NBV) in an Furthermore, researchers keep working on the development of more advanced autonomous path planning techniques, which can be adopted for dynamic scenes like in disaster mapping applications and other autonomous robot-like tasks [15][16][17][18]. ...
... Furthermore, researchers keep working on the development of more advanced autonomous path planning techniques, which can be adopted for dynamic scenes like in disaster mapping applications and other autonomous robot-like tasks [15][16][17][18]. The basic idea is to find the optimal collision-free path of the UAS and its sensor pose in every view, by relying on the best next view (NBV) in an Furthermore, researchers keep working on the development of more advanced autonomous path planning techniques, which can be adopted for dynamic scenes like in disaster mapping applications and other autonomous robot-like tasks [15][16][17][18]. The basic idea is to find the optimal collision-free path of the UAS and its sensor pose in every view, by relying on the best next view (NBV) in an iterative way [15,19,20]. ...
... The basic idea is to find the optimal collision-free path of the UAS and its sensor pose in every view, by relying on the best next view (NBV) in an Furthermore, researchers keep working on the development of more advanced autonomous path planning techniques, which can be adopted for dynamic scenes like in disaster mapping applications and other autonomous robot-like tasks [15][16][17][18]. The basic idea is to find the optimal collision-free path of the UAS and its sensor pose in every view, by relying on the best next view (NBV) in an iterative way [15,19,20]. Furthermore, advanced scene understanding and SLAM techniques are also applied [18,21]. ...
Full-text available
In the last two decades, unmanned aircraft systems (UAS) were successfully used in different environments for diverse applications like territorial mapping, heritage 3D documentation, as built surveys, construction monitoring, solar panel placement and assessment, road inspections, etc. These applications were correlated to the onboard sensors like RGB cameras, multi-spectral cameras, thermal sensors, panoramic cameras, or LiDARs. According to the different onboard sensors, a different mission plan is required to satisfy the characteristics of the sensor and the project aims. For UAS LiDAR-based mapping missions, requirements for the flight planning are different with respect to conventional UAS image-based flight plans because of different reasons related to the LiDAR scanning mechanism, scanning range, output scanning rate, field of view (FOV), rotation speed, etc. Although flight planning for image-based UAS missions is a well-known and solved problem, flight planning for a LiDAR-based UAS mapping is still an open research topic that needs further investigations. The article presents the developments of a LiDAR-based UAS flight planning tool, tested with simulations in real scenarios. The flight planning simulations considered an UAS platform equipped, alternatively, with three low-cost multi-beam LiDARs, namely Quanergy M8, Velodyne VLP-16, and the Ouster OS-1-16. The specific characteristics of the three sensors were used to plan flights and acquired dense point clouds. Comparisons and analyses of the results showed clear relationships between point density, flying speeds, and flying heights.
... Automated methods for free-form surface mapping have developed significantly in recent years. Photogrammetric solutions involving 3D or 2D cameras have evolved from heavily user-guided [2] to fully automated 3D model reconstruction techniques [3,4]. Although using automated geometry reconstruction is a viable approach for some applications, it weakens the speed advantage given by robotised NDT. ...
Robotic sensing is used in many sectors to improve the inspection of large and/or complex parts, enhancing data acquisition speed, part coverage and inspection reliability. Several automated or semi-automated solutions have been proposed to enable the automated deployment of specific types of sensors. The trajectory to be followed by a robotic manipulator is typically obtained through offline-programmed tool paths for the inspection of a part. This method is acceptable for a part with known geometry in a well-structured and controlled environment, but the part undergoing assessment needs to be precisely registered with respect to the robot reference system. This implies the need for a setup preparation phase for each new part, which can be very laborious and reliant on human experience. This work combines real-time robot control and live sensor data to confer full autonomy on robotic sensing applications. It presents a novel framework that enables fully autonomous single-pass geometric and volumetric inspection of complex parts using a single robotised sensor. A practical and robust robot control sequence allows autonomous correction of the sensor orientation and position to maximise the sensor signal amplitude. It is accompanied by an autonomous in-process path planning method, capable of keeping the inspection resolution uniform throughout the full extent of free-form parts. Last but not least, a by-product of the framework is the progressive construction of the digital model of the part surface throughout the inspection process. The introduced framework is scalable and applicable to widely different fields.
... Processes involving 3D or 2D cameras have evolved from requiring user-inputted positions [1] to fully automated 3D model reconstruction techniques. Automated photogrammetry has been applied to a wide range of scales, from fine-detail model reconstruction using robotic arms [2,3] to large-scale reconstruction using autonomous robots with wide-aperture sensors [4]. A recent example is photogrammetry enabling a two-pass scan within NDT utilising Structure-from-Motion (SfM) [5]. ...
Robotised Non-Destructive Testing (NDT) has revolutionised the field, increasing the speed of repetitive scanning procedures and the ability to reach hazardous environments. The application of robot-assisted NDT within specific industries such as remanufacturing and Aerospace, in which parts are regularly moulded and susceptible to non-critical deformation, has however presented drawbacks. In these cases, digital models for robotic path planning are not always available or accurate. Cutting-edge methods to counter the limited flexibility of robots require an initial pre-scan using camera-based systems in order to build a CAD model for path planning. This paper has sought to create a novel algorithm that enables robot-assisted ultrasonic testing of unknown surfaces within a single pass. Key to the impact of this article is the enabled autonomous profiling with sensors whose aperture is several orders of magnitude smaller than the target surface, for surfaces of any scale. Potential applications of the algorithm presented include autonomous drone and crawler inspections of large, complex, unknown environments, in addition to situations where traditional metrological profiling equipment is not practical, such as in confined spaces. In simulation, the proposed algorithm has completely mapped significantly curved and complex shapes by utilising only local information, outputting a traditional raster pattern when curvature is present only in a single direction. In practical demonstrations, both curved and non-simple surfaces were fully mapped with no required operator intervention. The core limitation of the algorithm in practical cases is the effective range of the applied sensor, and as a stand-alone method it lacks the required knowledge of the environment to prevent collisions. However, since the approach has met success in fully scanning non-obstructive but still significantly complex surfaces, the objectives of this paper have been met.
Future work will focus on low-accuracy environmental sensing capabilities to tackle the challenges faced. The method has been designed to allow single-pass scans for Conformable Wedge Probe UT scanning, but may be applied to any surface scan in cases where the sensor aperture is significantly smaller than the part.
... Examples of unsupervised machine learning with UAV photogrammetry include analyzing photos of plants (more on object recognition after explaining NBV), agricultural models, and forestry management [63][64][65]. NBV appears in recent publications such as Bolourian and Hammad [66], Ashour et al. [67], and Almadhoun et al. [68]. These publications cover autonomous exploration of environments, labeling and discovering items with photogrammetry/LiDAR, and selecting, with unsupervised machine learning, the literal next best view from which the UAV should take a picture. ...
Unmanned aerial vehicles (UAV) enable detailed historical preservation of large-scale infrastructure and contribute to cultural heritage preservation, improved maintenance, public relations, and development planning. Aerial and terrestrial photo data coupled with high-accuracy GPS create hyper-realistic mesh and texture models, high-resolution point clouds, orthophotos, and digital elevation models (DEMs) that preserve a snapshot of history. A case study is presented of the development of a hyper-realistic 3D model that spans the complex 1.7 km² area of the Brigham Young University campus in Provo, Utah, USA and includes over 75 significant structures. The model leverages photos obtained during a mandatory and rare campus closure amid the historic COVID-19 pandemic, and details a large-scale modeling workflow and best-practice data acquisition and processing techniques. The model utilizes 80,384 images and high-accuracy GPS surveying points to create a 1.65 trillion-pixel textured structure-from-motion (SfM) model with an average ground sampling distance (GSD) near structures of 0.5 cm and a maximum of 4 cm. Thirty-one separate model segments, taken from data gathered between April and August 2020, are combined into one cohesive final model with an average absolute error of 3.3 cm and a full model absolute error of <1 cm (relative accuracies from 0.25 cm to 1.03 cm). Optimized and automated UAV techniques complement the data acquisition of the large-scale model, and opportunities are explored to archive as-is building and campus information to enable historical building preservation, facility maintenance, campus planning, public outreach, 3D-printed miniatures, and the possibility of education through virtual reality (VR) and augmented reality (AR) tours.
... Researchers have proposed different view planning algorithms, including complete design and next best view planning strategies, for the second view planning stage of explore-then-exploit approaches. For instance, [40][41][42] proposed online next best view planning for UAVs with an initial 3D model. Other authors proposed different complete design view planning algorithms for 3D reconstruction of buildings using UAVs [12,21,39,43–47]. ...
Imaging network design is a crucial step in most image-based 3D reconstruction applications based on Structure from Motion (SfM) and multi-view stereo (MVS) methods. This paper proposes a novel photogrammetric algorithm for imaging network design for building 3D reconstruction purposes. The proposed methodology consists of two main steps: (i) the generation of candidate viewpoints and (ii) the clustering and selection of vantage viewpoints. The first step includes identifying initial candidate viewpoints, selecting the candidate viewpoints in the optimum range, and defining viewpoint directions. In the second step, four approaches, named façade pointing, centre pointing, hybrid, and combined centre & façade pointing, are proposed. The entire methodology is implemented and evaluated in both simulation and real-world experiments. In the simulation experiment, a building and its environment are computer-generated in the ROS (Robot Operating System) Gazebo environment, and a map is created with the Gmapping Simultaneous Localization and Mapping (SLAM) algorithm using a simulated Unmanned Ground Vehicle (UGV). In the real-world experiment, the proposed methodology is evaluated on a real building for all four approaches against two common baselines, called continuous image capturing and continuous image capturing with clustering and selection. The results of both evaluations reveal that the combined centre & façade pointing approach is more efficient than all other approaches in terms of both accuracy and completeness criteria.
... Hashim et al. [15] propose a view selection recommendation method that minimizes pose ambiguity for object pose estimation using depth sensors. There are also works on robotic arm navigation for autonomous exploration and grasping [28,29,4,49], UAV positioning in remote sensing [1], pose estimation [43], and 3D scene inpainting [13]. ...
We present a classification-based approach for next best view selection and show how a supervisory signal for this task can plausibly be obtained. The proposed approach is end-to-end trainable and aims to achieve the best possible 3D reconstruction quality with a pair of passively acquired 2D views. The proposed model consists of two stages: a classifier and a reconstructor network, trained jointly via indirect 3D supervision from ground-truth voxels. During testing, the proposed method assumes no prior knowledge of the underlying 3D shape when selecting the next best view. We demonstrate the proposed method's effectiveness via detailed experiments on synthetic and real images and show that it provides better reconstruction quality than existing state-of-the-art 3D reconstruction and next best view prediction techniques.
... Additionally, Almadhoun et al. [19], Mendoza et al. [20], and Huang et al. [21] utilized the next-best-view algorithm to generate waypoints in real time. ...
Background: In recent decades, unmanned aerial vehicles (UAVs) have developed rapidly and been widely applied in many domains, including photography, reconstruction, monitoring, and search and rescue. In such applications, one key issue is path and view planning, which tells UAVs exactly where to fly and how to search. Methods: With specific consideration for three popular UAV applications (scene reconstruction, environment exploration, and aerial cinematography), we present a survey that should assist researchers in positioning and evaluating their works in the context of existing solutions. Results/Conclusions: It should also help newcomers and practitioners in related fields quickly gain an overview of the vast literature. In addition to the current research status, we analyze and elaborate on advantages, disadvantages, and potential explorative trends for each application domain.
Autonomous exploration tasks become highly challenging in subterranean applications due to the constraints imposed by the nature of the environments (e.g., dead-end branches, unstructured regions, narrow passages and bifurcations). Robots need to constantly balance their exploration objectives with measures to ensure safety. To address these challenges, we present an informed exploration approach, which exploits a reachability graph to represent the environment's structure and adaptive navigation to find collision-free motions. Our system aims to make the inspection task tractable as well as to maximize the information acquired about the environment while preserving safety. We evaluate our navigation and exploration techniques against several challenging cave scenarios reconstructed using real data. Our experimental results demonstrate that our method enables the robot to make informed decisions and perform exploration more efficiently than existing techniques in confined spaces.
Micro aerial vehicles (MAVs) are an excellent platform for autonomous exploration. Most MAVs rely mainly on cameras for building a map of the 3D environment. Therefore, vision-based MAVs require an efficient exploration algorithm to select viewpoints that provide informative measurements. In this paper, we propose an exploration approach that selects in real time the next-best-view that maximizes the expected information gain of new measurements. In addition, we take into account the cost of reaching a new viewpoint in terms of distance and predictability of the flight path for a human observer. Finally, our approach selects a path that reduces the risk of crashes when the expected battery life comes to an end, while still maximizing the information gain in the process. We implemented and thoroughly tested our approach, and the experiments show that it offers improved performance compared to other state-of-the-art algorithms in terms of precision of the reconstruction, execution time, and smoothness of the path.
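The selection rule this abstract describes, expected information gain discounted by the cost of reaching the viewpoint, is commonly written as u(v) = g(v) * exp(-lambda * d(v)). The sketch below is a minimal illustration of that rule under stated assumptions; the candidate list, gain values, and weight are hypothetical, not the authors' implementation.

```python
import math

def next_best_view(candidates, cost_weight=0.5):
    """Pick the viewpoint maximizing information gain discounted by
    travel cost: u(v) = gain(v) * exp(-cost_weight * dist(v)).
    `candidates` is a list of (name, expected_gain, travel_distance)."""
    def utility(c):
        _, gain, dist = c
        return gain * math.exp(-cost_weight * dist)
    return max(candidates, key=utility)

# Hypothetical candidates: (id, expected newly observed voxels, metres to fly).
views = [("near_low_gain", 10.0, 1.0), ("far_high_gain", 40.0, 8.0)]
best = next_best_view(views, cost_weight=0.5)
```

With a non-zero cost weight the nearby, lower-gain view wins; setting the weight to zero recovers a pure information-gain criterion that would pick the distant view instead.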
Scan planning of buildings under construction is a key issue for an efficient assessment of work progress. This work presents an automatic method aimed at determining the optimal scan positions and the optimal route, based on the use of Building Information Models (BIM) and considering data completeness as the stopping criterion. The method is designed for a Terrestrial Laser Scanner mounted on a mobile robot following a stop & go procedure. The method starts by extracting floor plans from the BIM model according to the planned construction status, including geometry and semantics of the building elements considered for construction control. The navigable space is defined from a binary map considering a security distance to building elements. After a grid-based and a triangulation-based distribution are implemented for generating scan position candidates, a visibility analysis is carried out to determine the optimal number and position of scans. The optimal route to visit all scan positions is addressed by using a probabilistic ant colony optimization algorithm. The method has been tested in simulated and real buildings under very dissimilar conditions and structural construction elements. The two approaches for generating scan position candidates are evaluated, and results show the triangulation-based distribution to be the more efficient approach in terms of processing and acquisition time, especially for large-scale buildings.
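The routing step above (visiting every scan position once while minimizing travel) is a travelling-salesman instance. The abstract solves it with ant colony optimization; the sketch below instead uses a much simpler nearest-neighbour heuristic, just to illustrate the problem shape. The scan positions are hypothetical.

```python
import math

def nearest_neighbour_route(positions, start=0):
    """Greedy TSP heuristic: from the current scan position, always
    visit the nearest unvisited one. Returns the visiting order as
    indices into `positions`."""
    unvisited = set(range(len(positions))) - {start}
    order = [start]
    while unvisited:
        cur = positions[order[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(cur, positions[i]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

# Hypothetical scan positions on a floor plan (metres).
scans = [(0, 0), (10, 0), (10, 10), (0, 10), (5, 5)]
route = nearest_neighbour_route(scans)
```

A greedy route like this is cheap but can be noticeably longer than optimal, which is why metaheuristics such as ant colony optimization are used for large buildings.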
Aerial robots are a promising platform for autonomous inspection of infrastructure. For this application, the world is a large and unknown space, requiring light data structures to store its representation while performing autonomous exploration and path planning for obstacle avoidance. In this paper, we combine frontier-cell-based exploration with the Lazy Theta* path planning algorithm over the same light sparse grid, the octree implementation of OctoMap. Test-driven development has been adopted for the software implementation and the subsequent automated testing process. These tests provided insight into the number of iterations needed to generate a path with different voxel configurations. The results for synthetic and real datasets are analyzed against a baseline regular grid with the same resolution as the maximum resolution of the octree. The number of iterations needed to find frontier cells for exploration was smaller in all cases by at least one order of magnitude. For the Lazy Theta* algorithm, there was a reduction in the number of iterations needed to find the solution in 75% of the cases. These reductions can be explained both by the existing grouping of regions with the same status and by the ability to confine inspection to the known voxels of the octree.
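Frontier cells, as used in this abstract, are free cells adjacent to unknown space. The sketch below finds them on a toy dense 2D occupancy grid; this is an illustrative stand-in for the octree the paper actually operates on, with hypothetical cell values.

```python
FREE, OCCUPIED, UNKNOWN = 0, 1, 2

def frontier_cells(grid):
    """Return (row, col) of free cells with at least one 4-connected
    unknown neighbour -- the frontier between mapped and unmapped space."""
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != FREE:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers

# Toy map: left side explored free, right side unknown, one obstacle.
grid = [
    [0, 0, 2, 2],
    [0, 1, 2, 2],
    [0, 0, 0, 2],
]
cells = frontier_cells(grid)
```

On an octree, the same test is confined to known leaf voxels and their neighbours, which is where the order-of-magnitude iteration savings reported above come from.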
Three-dimensional models of real objects have many applications in robotics. To automatically build a three-dimensional model of an object, it is essential to determine where to place the range sensor in order to completely observe the object. However, the view (position and orientation) of the sensor is not sufficient, given that its corresponding robot state needs to be calculated. Additionally, a collision-free trajectory to reach that state is required. In this article, we directly find the state of the robot whose corresponding sensor view observes the object. This method does not require calculating the inverse kinematics of the robot. Unlike previous approaches, the proposed method guides the search with a tree structure based on a rapidly exploring random tree, overcoming previous sampling techniques. In addition, we propose an information metric that improves the reconstruction performance over previous information metrics.
A modified Next Best View (NBV) approach is presented to improve the 3D reconstruction of complex symmetric structures. Two new stages are introduced to the NBV approach. A profiling stage quickly scans the structure of interest and builds a rough model called the "profile". The profile is relatively sparse but captures the major geometric features of the structure. A symmetry detection stage then determines major lines of symmetry in the profile and labels points of interest. If a point exists in known space but its mirror image lies in unknown space, the mirrored point becomes a point of interest. The reconstruction is performed by a sensor mounted on an Unmanned Aerial Vehicle using a utility function that incorporates the detected symmetry points. We compare the proposed method with the classical information gain utility function in terms of coverage completeness, total iterations, and travel distance.
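The symmetry stage described above can be illustrated with a small sketch. It is hypothetical, not the paper's code, and assumes the symmetry plane is given by a point on the plane and a unit normal: a known point whose reflection falls in unexplored space yields a predicted point of interest.

```python
def reflect(point, plane_point, plane_normal):
    """Reflect a 3D point across a plane defined by a point on the
    plane and its unit normal."""
    d = sum((p - q) * n for p, q, n in zip(point, plane_point, plane_normal))
    return tuple(p - 2.0 * d * n for p, n in zip(point, plane_normal))

def symmetry_points_of_interest(known_points, is_unknown, plane_point, plane_normal):
    """Mirror each known point across the symmetry plane; mirrors that
    land in unexplored space are predicted surface locations worth viewing."""
    return [reflect(p, plane_point, plane_normal)
            for p in known_points
            if is_unknown(reflect(p, plane_point, plane_normal))]

# Hypothetical setup: symmetry plane x = 0; everything with x > 2 unexplored.
plane_pt, plane_n = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
known = [(-3.0, 1.0, 0.5), (-1.0, 0.0, 2.0)]
poi = symmetry_points_of_interest(known, lambda p: p[0] > 2.0, plane_pt, plane_n)
```

The resulting points of interest can then be fed into the NBV utility function so that viewpoints covering predicted (but not yet observed) surfaces are rewarded.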
Exploration is an important aspect of robotics, whether for mapping, rescue missions or path planning in an unknown environment. Frontier Exploration Planning (FEP) and Receding Horizon Next-Best-View Planning (RH-NBVP) are two different approaches with different strengths and weaknesses. FEP explores a large environment consisting of separate regions with ease, but is slow at reaching full exploration due to moving back and forth between regions. RH-NBVP shows great potential and efficiently explores individual regions, but has the disadvantage that it can get stuck in large environments, not exploring all regions. In this work, we present a method that combines both approaches, with FEP as a global exploration planner and RH-NBVP for local exploration. We also present techniques to estimate potential information gain faster, to cache previously estimated gains, and to exploit these to efficiently estimate new queries.
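The hybrid idea can be sketched minimally as follows. This is a hypothetical illustration with made-up data, not the authors' planner: prefer the best local next-best-view while nearby candidates still offer enough gain, and fall back to the closest global frontier otherwise.

```python
import math

def hybrid_goal(robot_pos, local_views, frontiers, min_local_gain=5.0):
    """Hybrid exploration: pick the best local next-best-view; if no
    nearby view offers enough gain, fall back to the closest global
    frontier (frontier exploration planning).
    local_views: list of (position, expected_gain)."""
    if local_views:
        pos, gain = max(local_views, key=lambda v: v[1])
        if gain >= min_local_gain:
            return ("local", pos)
    if not frontiers:
        return ("done", robot_pos)
    return ("global", min(frontiers, key=lambda f: math.dist(robot_pos, f)))

# Hypothetical: local views exhausted (low gain), two distant frontiers remain.
mode, goal = hybrid_goal((0.0, 0.0),
                         [((1.0, 0.0), 2.0)],
                         [(10.0, 0.0), (0.0, 20.0)])
```

The local/global switch is what prevents the planner from getting stuck in one region (the RH-NBVP weakness) while avoiding constant back-and-forth between regions (the FEP weakness).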
Autonomous underwater vehicles (AUVs) have been extensively used for open-sea exploration. However, the mapping or inspection of complex underwater structures, which are of interest from both scientific and industrial points of view, is still carried out by professional divers or remotely operated vehicles. We propose a probabilistic next-best-view planner, targeted at hover-capable AUVs, that will allow them to explore these complex environments without an a priori model. The proposed method is based on scanning the area from different viewpoints in an iterative way. At each step, a viewpoint is chosen from a set of random samples according to a utility function. An obstacle-free path to the selected viewpoint is planned, and the vehicle navigates to it to gather a new scan that will be registered with the previous ones. To evaluate the proposed method, we present four different tests using the Girona 500 AUV, both in simulation and in real scenarios. The results demonstrate the capability to explore complex environments autonomously, producing models of the environment with a high degree of coverage that can enable mapping and inspection applications.