Reconstruction of a 3D Polygon
Representation from full-waveform LiDAR data
Milto Miltiadou¹,², Michael Grant², Matthew Brown¹, Mark Warren², Emma Carolan²
¹ Centre for Digital Entertainment, University of Bath, Bath, UK, mm841@bath.ac.uk
² Remote Sensing Group, Plymouth Marine Laboratory, Plymouth, UK, mmi@pml.ac.uk
Corresponding author: Milto Miltiadou, mmi@pml.ac.uk, 07549700928
ABSTRACT
This study focuses on enhancing the visualisation of full-waveform (FW) LiDAR data. The
intensity profile of each full-waveform pulse is accumulated into a voxel array, building up a
fully-3D representation of the returned intensities. The 3D representation is then polygonised
using functional representation (FRep) of geometric objects. In addition to using the
higher-resolution FW data, the voxels can accumulate evidence from multiple pulses, which
confers greater noise resistance. Moreover, this approach opens up the possibility of observing
the data vertically, even though the pulses are emitted at different angles. Multi-resolution
rendering and visualisation of entire flightlines are also supported.
Introduction: To date, the most common approach to interpreting the data has been
decomposition of the signal into a sum of Gaussian functions and subsequent extraction of
point clouds from the waves (Wagner, Ullrich, Ducic, Melzer, & Studnicka, 2006).
Neuenschwander et al. used this approach for land cover classification (Neuenschwander,
Magruder, & Tyler, 2009), while Reitberger et al. applied it to distinguish deciduous trees
from coniferous trees (Reitberger, Krzystek, & Stilla, 2006). In 2007, Chauve et al. proposed
an improvement of the Gaussian model that increases the density of the point cloud extracted
from the data and consequently improves point-based classifications applied to full-waveform
LiDAR data (Chauve, Mallet, Bretar, Durrieu, Deseilligny, & Puech, 2007).
In this research, particular attention is given to the visualisation of the data. Previous
work on visualising FW LiDAR has used transparent objects and point clouds. Inserting the
waveforms into a 3D volume and visualising them using different transparencies across the
voxels was proposed by Persson et al. in 2005. In "FullAnalyze", a sphere with radius
proportional to its amplitude is created for each waveform sample (Chauve et al., 2009).
However, both publications are restricted to small regions of interest, while entire
flightlines can be visualised using our approach.
It is worth mentioning that the full-waveform LiDAR data were provided by NERC
ARSF. The data were collected on the 8th of April 2010 over the New Forest in the UK using a
small-footprint Leica ALS50-II system. The backscattered signal was saved into LAS1.3 files
after being digitised using 256 samples at 2 ns intervals, corresponding to 76.8 m of
waveform length.
Method: A volumetric approach to polygonising FW LiDAR data is proposed here.
Voxelisation is chosen over Gaussian decomposition to reduce the amount of information lost
during discretisation and to allow multi-resolution regular sampling of the data. First, the
waveforms are inserted into a 3D volume; then an FRep object is defined from the volume;
finally, the FRep object is polygonised using the Marching Cubes algorithm. More details are
given below.
The waveforms are converted into voxels by inserting the waves into a 3D volume,
similar to Persson et al., 2005, but in our case low-level filtering is first applied to
discard noise. Further, to overcome the uneven number of samples per voxel, the average
amplitude of the samples that lie inside each voxel is taken, instead of selecting the sample
with the highest amplitude. Therefore, the accumulated value V of each voxel is

    V = (1/n) * Σ_{i=1..n} I_i

where n is the number of samples inserted into that voxel and I_i is the intensity of sample i.
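The accumulation step above can be sketched as follows. This is a minimal NumPy illustration, not the actual implementation used in this study; the function name and the assumed sample layout (one row of x, y, z, intensity per waveform sample) are our own:

```python
import numpy as np

def voxelise_waveforms(samples, origin, voxel_len, shape):
    """Accumulate geolocated waveform samples into a 3D volume.

    samples: (N, 4) array of [x, y, z, intensity] rows.
    origin:  (3,) lower corner of the volume in world coordinates.
    voxel_len: edge length of a cubic voxel (metres).
    shape:   (nx, ny, nz) number of voxels per axis.

    Returns the per-voxel average intensity V = (1/n) * sum(I_i);
    voxels that receive no samples stay at 0.
    """
    sums = np.zeros(shape)
    counts = np.zeros(shape, dtype=int)
    # Map each sample to the voxel it falls in.
    idx = np.floor((samples[:, :3] - origin) / voxel_len).astype(int)
    # Keep only samples that land inside the volume.
    ok = np.all((idx >= 0) & (idx < np.array(shape)), axis=1)
    idx, inten = idx[ok], samples[ok, 3]
    # Unbuffered scatter-add, so repeated indices accumulate correctly.
    np.add.at(sums, tuple(idx.T), inten)
    np.add.at(counts, tuple(idx.T), 1)
    return np.divide(sums, counts, out=np.zeros(shape), where=counts > 0)
```

Averaging (rather than summing) is what normalises for the uneven number of samples that different pulses contribute to the same voxel.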
The results of the normalisation are shown in the following thickness maps generated
from the same flightline. A thickness map is an image where each pixel value represents the
number of voxels between the first and the last non-empty voxels of each column (z-axis). As
shown below, the quality of the output image is significantly improved when normalisation is
applied.
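A thickness map as defined above can be computed directly from the voxel volume. The sketch below is illustrative (it treats any voxel with accumulated intensity above zero as non-empty, which assumes noise filtering has already been applied):

```python
import numpy as np

def thickness_map(volume):
    """For each (x, y) column, count the voxels between the first and
    last non-empty voxel along z (inclusive); 0 for empty columns."""
    nonempty = volume > 0                      # (nx, ny, nz) boolean
    any_hit = nonempty.any(axis=2)
    z = np.arange(volume.shape[2])
    # First and last non-empty z index per column.
    first = np.where(nonempty, z, volume.shape[2]).min(axis=2)
    last = np.where(nonempty, z, -1).max(axis=2)
    return np.where(any_hit, last - first + 1, 0)
```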
Another problem to be addressed is noise. The system records and digitises 256
samples per pulse; even when the pulse does not hit any object, the system still records
low-amplitude signals, which are noise. For that reason, low-level filtering is applied and
the samples with amplitude lower than the noise level are discarded. Aliasing also appears in
areas of small thickness, such as the ground, but addressing this problem is beyond the scope
of this paper.
Once the pulse samples are inserted into a 3D volume, the volume is then used as a
discrete density function f(X) that represents an FRep object. Recalling from Pasko et al., an
FRep object is defined by a continuous function f(X) where:

    f(X) = α, when X lies on the surface of the object,
    f(X) > α, when X lies inside the object, and
    f(X) < α, when X lies outside the object (Pasko & Savchenko, 1994).

In our case, f(X) is a discrete density function that takes as input a 3D point and returns
the accumulated intensity value of the voxel that the point lies in. X is a 3D point
(x, y, z), where x, y and z are longitude, latitude and height respectively. α is the isolevel
of the object and defines its boundary: f(X) is equal to α if and only if X lies on the
surface of the object. In the original paper α = 0, but in our case α thresholds some of the
noise away from the actual object. α is a user-defined parameter and can vary depending on the
amount of noise that exists in the data. As shown later in the results, as α increases, the
number of non-empty voxels classified as noise increases and the amount of information
preserved decreases.
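The role of f(X) and the isolevel α can be illustrated with a small sketch. The helper below is hypothetical (not the paper's code); it shifts the density by α so that the conventional zero-level-set test of FRep applies, which is equivalent to the α-threshold formulation above:

```python
import numpy as np

def make_frep(volume, origin, voxel_len, isolevel):
    """Return f(X) for the voxelised density: the accumulated intensity
    of the voxel containing X, shifted by the isolevel alpha so that
    f(X) > 0 inside, f(X) < 0 outside, f(X) = 0 on the surface."""
    def f(point):
        i, j, k = np.floor((np.asarray(point) - origin) / voxel_len).astype(int)
        nx, ny, nz = volume.shape
        if 0 <= i < nx and 0 <= j < ny and 0 <= k < nz:
            return volume[i, j, k] - isolevel
        return -isolevel   # outside the volume counts as empty space
    return f
```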
An FRep object is defined by a continuous function and has no fixed resolution. On the
one hand, this is useful for reducing storage and it allows the same object to be rendered at
multiple resolutions. On the other hand, the object has no discrete elements (vertices, faces
and edges), so processing is required before rendering/visualising. This problem is addressed
either by ray-tracing or by polygonising the object. In this case we chose polygonisation
using the Marching Cubes algorithm, which allows direct rendering with commodity
3D-accelerated hardware.
Figure 1: Thickness maps without normalisation and with normalisation
The Marching Cubes algorithm constructs surfaces from implicit objects using a
search table. Assume that f(X) defines the object to be polygonised. First, a 3D volume is
divided into cubes, named voxels. Each voxel is defined by eight corner points, and each point
lies either inside or outside the object; this is determined from the function f(X), as
explained above. Then, by enumerating all the possible corner configurations and linearly
interpolating the intersections along the edges, the surface of the implicit object is
constructed (Lorensen & Cline, 1987).
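Two core steps of the algorithm can be sketched briefly: packing the inside/outside classification of a cell's eight corners into the 0-255 index that selects a triangle configuration from the search table, and linearly interpolating where the surface crosses an edge. This is illustrative only; the full 256-entry triangle table is omitted:

```python
import numpy as np

# Offsets of a cell's eight corners; corner c contributes bit 2**c
# to the case index (ordering is a convention, assumed here).
CORNERS = np.array([(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
                    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)])

def cell_case(volume, cell, isolevel):
    """Classify each corner as inside (value >= isolevel) or outside
    and pack the result into the 0..255 case-table index."""
    i, j, k = cell
    case = 0
    for bit, (di, dj, dk) in enumerate(CORNERS):
        if volume[i + di, j + dj, k + dk] >= isolevel:
            case |= 1 << bit
    return case

def edge_intersection(p0, p1, v0, v1, isolevel):
    """Linearly interpolate the point on edge (p0, p1) where the field
    values (v0, v1) cross the isolevel."""
    t = (isolevel - v0) / (v1 - v0)
    return np.asarray(p0, float) + t * (np.asarray(p1, float) - np.asarray(p0, float))
```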
According to Lorensen and Cline, the normal of each vertex is calculated by measuring
the change of gradient in that area. In our case, this does not lead to a smooth-looking
surface, due to the high gradient changes that exist in the volume, especially where trees
exist. Therefore, for each vertex we take the average normal of its adjacent triangles.
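Averaging the normals of adjacent triangles, as described above, might look like the following NumPy sketch (an illustration of the idea, not the implementation used here):

```python
import numpy as np

def vertex_normals(verts, faces):
    """Per-vertex normals computed as the average of the unit normals
    of adjacent triangles (instead of volume-gradient normals)."""
    v = np.asarray(verts, dtype=float)
    f = np.asarray(faces)
    # Unit normal of each triangle from the cross product of two edges.
    fn = np.cross(v[f[:, 1]] - v[f[:, 0]], v[f[:, 2]] - v[f[:, 0]])
    fn /= np.linalg.norm(fn, axis=1, keepdims=True)
    # Scatter-add each face normal onto its three corner vertices.
    normals = np.zeros_like(v)
    for corner in range(3):
        np.add.at(normals, f[:, corner], fn)
    # Renormalise the averaged normals (guard against isolated vertices).
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.where(lengths > 0, lengths, 1.0)
```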
Results and Experiments:
The output of our system is a 3D polygon mesh. The area of interest is user-defined, so
either an entire flightline or a small area can be visualised (Figure 3). Further, the output
can be derived either from FW LiDAR or from discrete LiDAR, but as shown in Figure 2, polygon
meshes created from the FW data contain more information.
Finally, Figure 4 shows how the results change when increasing or decreasing the three
remaining user-defined parameters of our system: voxel length, isolevel and noise level. The
voxel length controls the resolution of the output; the bigger the voxel length, the lower the
resolution. The isolevel is the boundary that defines whether a voxel is inside or outside the
object. As the isolevel increases, the number of voxels inside the object decreases; for that
reason, if this value is set too high, the object seems to disappear. The noise level serves
the low-level filtering: all the samples with intensities less than the noise level are
ignored. If the noise level is too low, the noise is obvious in the results; if it is too
high, important information is discarded and the object again seems to disappear.
Figure 2: Discrete LiDAR vs. full-waveform LiDAR (voxel length = 1.7 m)
Figure 3: Selecting a region of interest
Voxel Length | Isolevel | Noise Level
10.0 m       | 45       | 5
5.7 m        | 15       | 15
4.44 m       | -45      | 17
1.43 m       | -60      | 30
1.0 m        | -85      | 75
0.67 m       | -100     | 135
Figure 4: Varying the user-defined parameters; the columns show visualisations with different
voxel lengths, with various isolevels and with various noise levels respectively.
Conclusions: To sum up, previous work on visualisation uses either transparent voxels
or spheres, whereas in this paper an approach for generating fully-3D polygon representations
of FW data was presented. A 3D volume representation of FW LiDAR data is first generated by
accumulating the intensity profile of each recorded full-waveform into a voxel array. The 3D
representation is then polygonised using functional representation of objects (FRep).
The output is a 3D polygon representation of the selected data, showing well-separated
structures such as tree canopies and greenhouses. The polygon mesh is suitable for direct
rendering with commodity 3D-accelerated hardware, allowing smooth visualisation. Furthermore,
compared with the results of applying the same method to discrete LiDAR, the polygons
generated from FW LiDAR contain more detail. The user-defined parameters (resolution, noise
level, isolevel and region of interest) also increase the flexibility of our system. Finally,
this method is particularly beneficial for multi-resolution rendering of the data, and entire
flightlines can be visualised.
References:
Chauve, A., Bretar, F., Durrieu, S., Pierrot-Deseilligny, M., & Puech, W. (2009). FullAnalyze:
A research tool for handling, processing and analysing full-waveform LiDAR data. IEEE
International Geoscience & Remote Sensing Symposium. Cape Town, South Africa.
Chauve, A., Mallet, C., Bretar, F., Durrieu, S., Deseilligny, M. P., & Puech, W. (2007).
Processing full-waveform LiDAR data: Modelling raw signals. International Archives of
Photogrammetry, Remote Sensing and Spatial Information Sciences.
Lorensen, W. E., & Cline, H. E. (1987). Marching Cubes: A High Resolution 3D Surface
Construction Algorithm. General Electric Company Corporate Research and Development
Schenectady. New York: ACM.
Neuenschwander, A., Magruder, L., & Tyler, M. (2009). Landcover classification of small-
footprint, full-waveform lidar data. Journal of Applied Remote Sensing, Vol. 3, 033544.
Pasko, A., & Savchenko, V. (1994). Blending operations for the functionally based
constructive geometry.
Persson, A., Soderman, U., Topel, J., & Ahlberg, S. (2005, September). Visualisation and
Analysis of full-waveform airborne laser scanner data. V/3 Workshop "Laser scanning 2005".
Enschede, the Netherlands.
Reitberger, J., Krzystek, P., & Stilla, U. (2006). Analysis of full waveform LiDAR data for
tree species classification.
Wagner, W., Ullrich, A., Ducic, V., Melzer, T., & Studnicka, N. (2006). Gaussian
decompositions and calibration of a novel small-footprint full-waveform digitising airborne
laser scanner. ISPRS Journal of Photogrammetry and Remote Sensing 60, 100-112.
Wagner, W., Ullrich, A., Melzer, T., Briese, C., & Kraus, K. (2004). From single-pulse to full-
waveform airborne laser scanners: potential and practical challenges.
Keywords: Visualisation, full-waveform LiDAR, Voxelisation, FRep, 3D-polygon
Indicate here preferred conference session: LiDAR
Indicate here preference between oral and poster presentation: ORAL