Figure 6 - uploaded by Jhacson Meza
(a) Output LAS point cloud from the SfM pipeline. (b) Ground segmentation results in LAS format.


Source publication
Conference Paper
Full-text available
Digital Elevation Models (DEMs) are used to derive information from the morphology of the land. The topographic attributes obtained from DEM data allow the construction of watershed delineations useful for predicting the behavior of systems and for studying hydrological processes. Imagery acquired from Unmanned Aerial Vehicles (UAVs) and 3D photog...

Contexts in source publication

Context 1
... due to rainfall. From this zone of approximately 144 566 m², we acquired 287 images with a DJI Phantom 3 Professional drone and 9 GCPs distributed on the ground for the geo-referencing process, measured with a differential GPS (dGPS). From the SfM pipeline discussed in section 2, we obtained a geo-referenced point cloud in LAS format, as shown in Fig. 6(a). Based on this point cloud and using the previous segmentation strategy, we removed the non-ground points of the reconstruction and kept the ground points, as seen in Fig. 6(b), where we only keep the earth ...
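The excerpt above describes removing non-ground points from the geo-referenced cloud; the paper's actual segmentation strategy is defined in its section 2 and is not reproduced here. As a rough illustration of the idea only, the sketch below classifies a synthetic point cloud with a simple grid-based filter (keep points within a tolerance of the lowest elevation in their grid cell) — the function name `segment_ground` and all parameters are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a geo-referenced cloud: N points (x, y, z).
n = 5000
xy = rng.uniform(0, 100, size=(n, 2))
ground_z = 0.05 * xy[:, 0]              # gently sloping terrain
z = ground_z + rng.normal(0, 0.05, n)   # ground returns with noise
above = rng.random(n) < 0.2             # 20% non-ground (vegetation, etc.)
z[above] += rng.uniform(1.0, 5.0, above.sum())
pts = np.column_stack([xy, z])

def segment_ground(points, cell=5.0, tol=0.5):
    """Label a point as ground if it lies within `tol` metres of the
    lowest point in its grid cell (a crude morphological-style filter,
    NOT the segmentation strategy used in the paper)."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    mins = {}
    for key, zval in zip(map(tuple, ij), points[:, 2]):
        if key not in mins or zval < mins[key]:
            mins[key] = zval
    floor = np.array([mins[tuple(p)] for p in ij])
    return (points[:, 2] - floor) <= tol

mask = segment_ground(pts)   # True = ground point, False = removed
```

On this synthetic scene, the filter keeps nearly all ground returns and rejects the elevated points; real terrain with steep slopes would need a proper progressive or morphological filter.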
Context 2
... Fig. 7(a) we show the digital surface model generated with the unsegmented point cloud of Fig. 6(a) with ODM. Using the segmented point cloud of Fig. 6(b), we generated a GeoTIFF image (Fig. 7(b)) which has many gaps. By filtering, we obtain the DTM with the gaps filled using nearest-neighbor interpolation, implemented with a function provided by ODM (Fig. 7(c)). In Fig. 7(d) we show a 1D profile of the DSM and the DTM, which have been extracted from the red and blue lines over the Fig. ...
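The context above describes filling the gaps in the segmented DTM raster with nearest-neighbor interpolation (done in the paper with a function provided by ODM). The numpy-only sketch below shows the same gap-filling idea on a toy raster — `fill_nearest` is an illustrative brute-force stand-in, not ODM's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy DTM raster: a sloping surface with NaN gaps where the segmented
# point cloud left no ground points (stand-in for the gappy GeoTIFF).
h, w = 20, 20
yy, xx = np.mgrid[0:h, 0:w]
dtm = 0.1 * xx + 0.05 * yy
gaps = rng.random((h, w)) < 0.15
dtm_gappy = dtm.copy()
dtm_gappy[gaps] = np.nan

def fill_nearest(grid):
    """Fill each NaN cell with the value of the nearest valid cell
    (brute-force nearest-neighbour interpolation; for large rasters a
    KD-tree or distance transform would be used instead)."""
    out = grid.copy()
    valid = ~np.isnan(grid)
    vy, vx = np.nonzero(valid)
    vals = grid[valid]
    for y, x in zip(*np.nonzero(~valid)):
        d2 = (vy - y) ** 2 + (vx - x) ** 2
        out[y, x] = vals[np.argmin(d2)]
    return out

dtm_filled = fill_nearest(dtm_gappy)
```

Valid cells are left untouched; only the gaps inherit the elevation of their closest neighbor, which is why nearest-neighbor filling leaves small terracing artifacts across wide gaps.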

Similar publications

Article
Full-text available
Recently, there has been a great demand for 3D building models in several applications, including cartography and planning applications in urban areas. This led to the development of automated algorithms to extract such models, since they reduce the time and cost compared to manual on-screen digitizing. Most algorithms are built to solve the pr...

Citations

... The whole Structure from Motion pipeline is described in Fig. 3, and the first procedure consists of finding the key points of the image set. A key point is a pixel or group of pixels that is easily recognizable in a pair of images of the same scene [6]. To find this set of key points in each image, the Scale-Invariant Feature Transform (SIFT) [7] algorithm is used. ...
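The citing excerpt refers to SIFT keypoint detection. SIFT's detector finds extrema of a difference-of-Gaussians (DoG) across a scale pyramid; the numpy-only sketch below shows just the single-scale DoG-extremum idea on a synthetic image (function names `gaussian_blur` and `dog_keypoints` and all thresholds are illustrative — real SIFT also searches across scales and computes orientations and descriptors).

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with numpy only (no OpenCV/scipy needed)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    pad = np.pad(img, r, mode='edge')
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, 'valid'), 1, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, k, 'valid'), 0, tmp)

def dog_keypoints(img, sigma=1.6, k=1.6, thresh=0.05):
    """Keypoints = local maxima of |DoG| for one pair of scales."""
    dog = gaussian_blur(img, sigma) - gaussian_blur(img, k * sigma)
    d = np.abs(dog)
    centre = d[1:-1, 1:-1]
    is_max = centre > thresh
    for dy in (-1, 0, 1):              # compare to the 8 neighbours
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            is_max &= centre >= d[1 + dy:d.shape[0] - 1 + dy,
                                  1 + dx:d.shape[1] - 1 + dx]
    ys, xs = np.nonzero(is_max)
    return np.column_stack([ys + 1, xs + 1])   # (row, col) pairs

# Synthetic test image: one bright Gaussian blob at (row=20, col=30).
yy, xx = np.mgrid[0:64, 0:64]
img = np.exp(-((yy - 20) ** 2 + (xx - 30) ** 2) / (2 * 3.0 ** 2))
kp = dog_keypoints(img)
```

On this image the detector fires at (or immediately next to) the blob centre, which is the property SfM relies on: the same scene feature produces matchable keypoints in overlapping UAV images.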
... This supplements intensive surveying of species and also expands the potential application of field survey methods, while accounting for changes in vegetation over time [12]. In addition to the applications of multispectral sensors, RGB (Red, Green, Blue) cameras can also capture elevation data using recent improvements in point cloud reconstruction methods, such as Structure from Motion [13], detecting microtopographical variations and generating accurate digital elevation models [14]. ...
Article
Full-text available
High-resolution images obtained by multispectral cameras mounted on Unmanned Aerial Vehicles (UAVs) are helping to capture the heterogeneity of the environment in images that can be discretized into categories during a classification process. Currently, there is an increasing use of supervised machine learning (ML) classifiers to retrieve accurate results using scarce datasets with samples with non-linear relationships. We compared the accuracies of two ML classifiers using a pixel and an object analysis approach in six coastal wetland sites. The results show that Random Forest (RF) performs better than the K-Nearest Neighbors (KNN) algorithm in the classification of pixels and objects, and that classification based on pixel analysis is slightly better than object-based analysis. The agreement between the classifications of objects and pixels is higher in Random Forest. This is likely due to the heterogeneity of the study areas, where pixel-based classifications are most appropriate. In addition, from an ecological perspective, as these wetlands are heterogeneous, the pixel-based classification reflects a more realistic interpretation of plant community distribution.
Thesis
Full-text available
Drones have become inexpensive and can be used to survey an area of interest and send real-time data to the ground control station. However, low-cost drones suffer from limited flight time due to battery constraints. If the region lacks network infrastructure (a cellular network), such as in remote areas or after a disaster, the range of the drone is limited to the Wi-Fi network range. A system that uses multiple drones can be used to survey and acquire real-time data, as it reduces the amount of flight time for each drone. The range of the drones can be extended using a wireless mesh network, with each drone working as a node of the mesh. For such systems, the operator can either set the flight path for each drone before the mission, or many operators can fly the drones individually. However, it becomes the responsibility of the operators to ensure that the drones do not collide with each other and do not fly beyond the range of the mesh network. The mesh nodes are mobile; therefore, the area of coverage of the network is dynamic throughout the mission. In this study, I propose a centralized system called Pegasus for autonomous exploration and aerial map building with multiple drones using a wireless mesh network. The different phases in the pipeline used in this system to achieve autonomous flight and map building are described. The pipeline consists of the presentation layer, multi-agent coverage path planning layer, motion control layer, real-time image acquisition layer, and map generation layer using structure from motion. To evaluate the system, a simulation framework is presented in this study. Finally, the system is evaluated in the real world with a single drone.