Technische Universität Berlin
Fakultät VI - Institut für Geodäsie und Geoinformationstechnik
Terrestrial Laser Scanning for
Geodetic Deformation Monitoring
submitted by
M.Sc.
Daniel Wujanz
born in Trier
Dissertation approved by the Faculty VI - Planning Building Environment
of the Technische Universität Berlin
in fulfilment of the requirements for the academic degree of
Doktor der Ingenieurwissenschaften
- (Dr.-Ing.) -
Doctoral committee:
Chair: Prof. Dr.-Ing. Harald Schuh, Deutsches GeoForschungsZentrum Potsdam
Reviewer: Prof. Dr.-Ing. Frank Neitzel, Technische Universität Berlin
Reviewer: Prof. Dr. Roderik Lindenbergh, Delft University of Technology, Netherlands
Reviewer: Prof. Dr.-Ing. Ingo Neumann, Leibniz Universität Hannover
Date of the scientific defence: 28 January 2016
Berlin 2016
Declaration of Authorship
I hereby declare that I have written the present thesis independently and have not used any sources or aids other than those indicated. All passages taken literally or in essence from other published or unpublished works have been marked as such.
This thesis has not been submitted in the same or a similar version to any other examination authority.
Daniel Wujanz, Berlin, 08.09.2015
Summary
The determination of geometric changes within an area or object of interest by means of repeated surveys at different points in time is referred to as geodetic deformation monitoring. Despite its development in the early years of the twentieth century, the original processing chain has remained essentially unchanged: the choice of suitable viewpoints, observation of at least two so-called epochs, transformation of all epochs into a common coordinate system and finally the actual monitoring of deformations. In order to acquire an area under investigation, discrete points of interest have to be physically signalised so that repeated observations can be carried out throughout the epochs. Downsides of this approach are, among others, the time-consuming signalisation of the area under investigation as well as the “blindness” against deformations that occur outside the scope of the interest points. The emergence of terrestrial laser scanners (TLS) in engineering geodesy around the turn of the millennium led to a paradigm shift that allows an area under investigation to be observed in a quasi-laminar fashion without the need to signalise points within the object space. In principle, all deformations that occurred between two epochs within the area under investigation can thus be revealed.
Based on the process chain of geodetic deformation monitoring mentioned above, the contribution at hand initially compares methodical differences as well as parallels between established approaches and terrestrial laser scanning. This comparison results in several unsolved problems that are treated as research questions in this thesis.
A substantial preparative step for geodetic deformation monitoring is the choice of suitable viewpoints, both from an economic perspective and from the perspective of engineering geodesy. As existing methods for this task are not directly transferable to TLS, a novel combinatorial search algorithm is proposed and exemplified. Furthermore, a stochastic model for terrestrial laser scanners is introduced that uses intensity values of the received laser signal as input and allows the theoretical precision of observations from a certain viewpoint to be predicted.
A vital task in deformation monitoring under the assumption of a congruency model is the transformation into a stable reference frame. Only if this prerequisite holds can deformations be correctly identified and consequently quantified. For the case of observations onto signalised, discretely observable targets, for instance by tacheometry, several methods have been developed in the past to reveal sets of congruent points respectively points that were subject to deformation, so that this problem domain can be considered solved. If this problem is transferred to TLS, areas where deformation occurred have to be identified and excluded from the computation of transformation parameters between epochs. A look at current literature on TLS-based deformation monitoring shows that nearly all researchers imitate the tacheometric line of action by deploying artificial targets designed for laser scanners. Thereby a beneficial characteristic of TLS is neglected, namely the enormous information density within the object space that can be used to compute transformation parameters. For the previously unsolved problem of automatically distinguishing stable and deformed regions in datasets captured by TLS, two algorithms are proposed. The performance of these implementations is tested regarding their robustness and other criteria, based on practical data.
Furthermore, a method for the determination of statistically significant deformations in TLS datasets is introduced. It counteracts the subjective choice of arbitrary thresholds for the quantification and visualisation of deformations. Finally, a procedure for the visualisation of deformations within the object space is presented that simplifies the at times abstract interpretation of the outcome of deformation monitoring.
Kurzfassung
The determination of geometric changes of an area or object under investigation by means of repeated surveys at different points in time is referred to as geodetic deformation monitoring. Since its methodical development at the beginning of the twentieth century, the original processing chain has remained essentially unchanged and comprises the choice of suitable viewpoints, the survey of at least two so-called epochs, the transformation of the epochs into a common coordinate system and finally the actual deformation measurement. In order to capture an epoch, the area under investigation is first discretised by signalising points of interest so as to enable repeated measurements. Downsides of this approach are, among others, the time-consuming signalisation as well as the “blindness” against deformations that occurred in areas that were not signalised. With the entry of terrestrial laser scanning (TLS) into engineering geodesy around the turn of the millennium, a paradigm shift was initiated that now permits a quasi-laminar survey without introducing targets into the object space. Thereby, in principle, all deformations that occurred between two epochs within the area under investigation can be revealed.
The work at hand first compares, along the already mentioned processing chain of geodetic deformation monitoring, methodical differences as well as parallels between the established procedure and terrestrial laser scanning. The research questions of the subsequent chapters arise from the unsolved problems identified in this comparison.
A substantial preparative step for geodetic deformation monitoring is the selection of suitable viewpoints under economic and engineering-geodetic aspects. Since existing methods cannot be applied directly to terrestrial laser scanning, a novel model-based algorithm for the combinatorial search of suitable viewpoints is presented and demonstrated on an example. In addition, a stochastic model for terrestrial laser scanners is introduced which uses intensity values of the reflected laser signal as input and thus allows the expected precision of measurements from a given viewpoint to be computed.
A decisive aspect of deformation monitoring under the assumption of a congruency model is the transformation of the individual epochs into a stable reference coordinate system. Only if this assumption is fulfilled can deformations that have occurred be correctly identified and finally quantified. If observations onto signalised, discretely sightable targets are available, numerous methods exist to determine congruent groups of points, so that this problem can be regarded as solved. If this problem is transferred to terrestrial laser scanning, areal regions in which deformations have occurred have to be detected and excluded from the computation of transformation parameters between the epochs. A look at current publications on deformation monitoring with TLS shows that nearly all approaches imitate the tacheometric procedure by using targets that have to be introduced into the object space. Thereby an essential advantage of terrestrial laser scanners is neglected, namely the enormously high information density within the object space, which can be used to compute transformation parameters. For the previously unsolved problem of automatically identifying stable and deformed regions in TLS datasets, two algorithms are presented. The performance of the algorithms is tested with respect to their robustness against deformation and further criteria on several practical scenarios.
Furthermore, a method for the determination of statistically significant deformations in TLS datasets is presented, which counteracts the subjective choice of arbitrary thresholds in the quantification and visualisation of deformations. Finally, a procedure for the visualisation of deformations within the object space is presented which facilitates the at times abstract interpretation of the results of a deformation measurement.
Contents
1 Introduction
2 Deformation monitoring: Past and present methodologies
2.1 Fundamentals of TLS
2.1.1 Reflectorless distance measurement approaches
2.1.2 Scanning principles
2.1.3 Impact of spatial sampling
2.1.4 Influences onto reflectorless distance measurements
2.1.5 Calibration of TLS
2.2 On stochastic models for TLS and viewpoint planning
2.2.1 Stochastic modelling
2.2.2 Geodetic viewpoint planning
2.3 Referencing methodologies
2.3.1 Georeferenced approaches
2.3.1.1 Use of artificial targets
2.3.1.2 Direct Georeferencing
2.3.1.3 Kinematic data acquisition
2.3.2 Co-registration approaches
2.3.2.1 Intensity based registration
2.3.2.2 Surface based registration approaches
2.3.2.2.1 Coarse registration algorithms
2.3.2.2.2 Fine matching algorithms
2.3.2.3 Registration based upon geometric primitives
2.3.3 Assessment and categorisation of (Geo-)referencing and matching procedures
2.3.4 Comparative analysis of several registration algorithms
2.3.4.1 Raindrop Geomagic Studio 12
2.3.4.2 Leica Cyclone 7.1
2.3.4.3 GFaI Final Surface 3.0.5
2.3.4.4 4-Points Congruent Sets Algorithm
2.3.4.5 Interpretation of the results
2.4 Identification / Rejection of outliers
2.5 On TLS-based deformation monitoring
2.5.1 Related work: Fields of application
2.5.2 Related work: Methodologies
2.5.3 Procedure assessment
2.6 Concluding remarks
3 On viewpoint planning based on an intensity based stochastic model for TLS
3.1 An intensity based stochastic model for TLS
3.1.1 Experimental determination of stochastic measures
3.1.2 Assessment of the stochastic model
3.2 Viewpoint planning for TLS
3.2.1 On discretisation errors and captured surfaces
3.2.2 An economic planning strategy
3.2.3 Data preparation
3.2.4 Viewpoint planning by geometric means
3.2.4.1 Consideration of sufficient overlap between viewpoints
3.2.4.2 Consideration of sufficient overlap and geometric contrast between viewpoints
3.2.4.3 Reduction of computational costs: Estimating the minimum amount of viewpoints
3.2.4.4 Comparison of the expenditure of work
3.2.5 Viewpoint planning considering stochastic information
3.2.5.1 Exemplification of the procedure on a simple example
3.2.5.2 Exemplification of the procedure on a complex example
3.2.5.3 Exemplification of the procedure on a complex example using radiometric information
3.3 Conclusion
4 Registration of point clouds based on stable areas for deformation monitoring
4.1 Introduction of the test case
4.2 Data preparation
4.3 Identification of stable areas
4.4 DefoScan++: Identification of deformation via comparison of spans between transformed points
4.4.1 Identification of deformation via comparison of corresponding points
4.4.2 Application of the procedure
4.4.3 Results and discussion
4.5 The ICProx-algorithm: Identification of deformation based on comparison of spans between focal points
4.5.1 The maximum subsample method
4.5.1.1 Basic concept
4.5.1.2 Direct solution of the MSS-problem
4.5.1.3 Randomised selection of combinations
4.5.1.4 Preliminary inspection
4.5.1.5 Topological matrix of congruent subsets
4.5.1.6 Exemplification of the procedure
4.5.2 The ICProx-algorithm
4.5.3 Fully automatic deformation monitoring
4.5.4 Dealing with complexity: the combinatory wall
4.5.5 Impact of cell size, number of virtual transformations and degree of contamination
4.6 Results and discussion
5 On TLS based geodetic deformation monitoring
5.1 A rigorous method for C2M-deformation monitoring based on variance-covariance propagation
5.1.1 Schematic description of the proposed method
5.1.2 Practical application and influential impacts
5.1.2.1 Impact of the geometric configuration
5.1.2.2 Application on a practical test case
5.1.2.3 Influences due to registration
5.2 Visualisation of deformation on arbitrary surfaces
5.2.1 Related work
5.2.2 Description of the system
5.2.2.1 Applied components
5.2.2.2 Data preparation
5.2.2.3 Relative orientation
5.2.2.4 Transformation
5.2.3 Quality assurance and inspection
5.3 Conclusions
6 Practical application of the ICProx-algorithm
6.1 Monitoring of a rock glacier
6.2 Monitoring of an ice glacier
6.3 Monitoring of tooth wear
6.4 Subsidence monitoring
6.5 Deformation monitoring in crash testing
6.6 Concluding remarks
7 Conclusion and outlook
Bibliography
List of Figures
List of Tables
1 Introduction
For decades, the vast majority of tasks for engineering geodesists have been associated with the geometric monitoring of natural or man-made structures. In order to fulfil these tasks, an object of interest has to be discretised based on the knowledge of domain experts. This means that certain parts of an object are represented by single points to which coordinates are assigned. A problem of this perception is that geometric changes can only be detected if they affect areas that are repeatedly observed. As a consequence, problems arise if unexpected or rare events occur outside of the inspected area. The development of terrestrial laser scanners (TLS) allows capturing an area under investigation in a quasi-laminar fashion without the necessity of signalisation and can hence be seen as a paradigm shift. Consequently, the emergence of TLS extended the classical sphere of action of engineering geodesists, for instance to geomorphology, archaeology and cultural heritage, biology and environmental sciences, where natural objects and areas are of interest. Note that the abbreviation TLS is used throughout the thesis for terrestrial laser scanners and terrestrial laser scanning in equal measure.
Despite the fact that research in the field of TLS has been conducted intensively since the beginning of the new millennium, only few scientists have analysed or adapted the well-known processing chain of deformation monitoring within the scope of this new technology. Instead, questionable methods and procedures are naively applied in research as well as in practice, not least because laser scanning has in the meantime blossomed into a profitable business. The thesis at hand takes a close look at all procedures that are required to derive geometric changes from terrestrial scans and furthermore strives to reintroduce expertise that engineering geodesists have thoroughly gathered over the span of more than a century.
The roots of deformation monitoring can be traced back to Ganz (1914), where the movement of Rosablanche, a summit in the Swiss Alps, was monitored based on trigonometric observations. The applied methodology was subsequently used and refined to analyse the behaviour of large dams, as extensively reported by Lang (1929). A review of the development of deformation monitoring can be found in Welsch (1979). Even though the origins of this problem domain date back to the early years of the last century, its process chain has remained nearly identical, namely:
Step 1: Network design respectively viewpoint planning in order to determine optimal viewpoints for observation,
Step 2: Data acquisition of at least two epochs,
Step 3: Transformation of all epochs into a common and stable coordinate system,
Step 4: Deformation monitoring.
It should be pointed out that throughout the thesis the term deformation monitoring is used instead of deformation analysis, as causes of deformation usually have to be identified by an expert of the according problem domain and not by an engineering geodesist, as mentioned by Lang (1929 p. 5). Interestingly, Lang (1929 p. 17) also suggests observing an object or area of interest completely, and not by just a few chosen discrete points as in classical engineering geodesy, in order to be able to monitor its behaviour entirely. An extensive description of this aspect will be given in subsection 2.1.3. Lang's argument coincides with current objectives of the German Geodetic Commission (DGK) section engineering geodesy, which states that developments in deformation monitoring should satisfy temporal and spatial continuity (Kuhlmann et al. 2014). Sensors that come very close to these prerequisites are terrestrial laser scanners (TLS), apart from the fact that they acquire data sequentially. Hence, data captured by TLS will be the subject of this thesis.
Heunecke et al. (2013 p. 15) describe the subject of geodetic monitoring “...as the acquisition of geometric changes of a measuring object...” respectively “...the detection of movements and deformation...” (translation from German). The technical requirements for a stated problem vary significantly due to the diversity of problem domains in which the behaviour of an object is of interest. Typical fields of application for deformation monitoring are for instance structural monitoring of objects such as bridges, dams or towers, but also wear of machines, crash testing
and landslide or glacier movement in the geo-scientific domain. In general, two important aspects need to be clarified in a dialogue with specialists of the according expertise, namely the degree of discretisation and the temporal resolution. An important boundary for the first aspect is the so-called Nyquist frequency or Nyquist interval (Shannon 1949), in reference to an article by Nyquist (1928). It describes the sampling frequency that is required to retrieve information without loss; if the sampling frequency is too low in relation to the signal of interest, the result suffers from discretisation errors, an effect referred to as aliasing in this context (Luhmann et al. 2011 p. 133), which consequently leads to misinterpretation. A brief numerical illustration of this criterion is given after the list below. The second aspect, the required temporal resolution, is highly correlated with the stated problem domain, e.g. the occurrence of heavy rainfalls as a cause of landslides or increased production rates as a catalyst for machine wear, and is usually defined within interdisciplinary exchange.

Table 1.1: Model comparison for deformation monitoring (Heunecke et al. 2013 p. 78)

                       Dynamic model                    Static model                     Kinematic model                  Congruency model
Time                   Deformation as a function of     Not explicitly modelled          Deformation as a function        Not explicitly modelled
                       time and stress                                                   of time
Forces                 Deformation caused by forces     Deformation caused by forces     Not modelled                     Not modelled
Condition of object    Moves under stress               Adequately at rest               In motion                        Adequately at rest
                                                        during stress

In general, three types of deformation can be distinguished, namely (Heunecke et al. 2013 p. 92, Scaioni et al. 2013):
Rigid body deformation - the shape of an object of interest remains stable whereas location and / or orientation change, e.g. glacier detachment.
Shape changes - for instance torsion, bending or strain due to external forces such as in crash
testing.
Deposition, degradation, wear or loss of material - e.g. soil erosion.
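To make the spatial part of the sampling requirement discussed above concrete, the following minimal Python sketch applies the Nyquist criterion to TLS sampling; the feature size, range and the small-angle simplification are assumed example values and do not stem from the thesis.

```python
import math

def max_angular_increment(smallest_feature_m: float, max_range_m: float) -> float:
    """Largest admissible angular increment (degrees) so that the point spacing at the
    given range is at most half the size of the smallest feature of interest (Nyquist)."""
    max_spacing = smallest_feature_m / 2.0           # sample the feature at least twice
    return math.degrees(max_spacing / max_range_m)   # small-angle approximation: spacing = range * increment

# Assumed example: detect deformations of 0.2 m extent at ranges of up to 50 m.
increment = max_angular_increment(smallest_feature_m=0.2, max_range_m=50.0)
print(f"Required angular increment: <= {increment:.3f} deg ({increment * 200.0 / 180.0:.3f} gon)")
```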
After data acquisition at a sufficient temporal and spatial resolution, a suitable deformation model needs to be chosen. Technical literature such as Heunecke et al. (2013 p. 78) distinguishes four different cases that are briefly discussed in the following and gathered in table 1.1. The first one, which is referred to as the dynamic model, links temporal and external forces to observed object reactions. As external forces are impossible or relatively hard to control in natural environments or on large artificial structures, this model is not used in this thesis. A prerequisite of static models is that the object of interest needs to remain stable during measurements in order to be able to omit time as an influential factor. Furthermore, the functional relationship between forces that act onto an object and its observed reaction is described. Kinematic models describe the time-dependent behaviour of object points, for instance by polynomial or trigonometric functions. The motivation behind this approach is to be able to draw conclusions about object movements and their parameters at discrete points in time. In doing so, the descriptive kinematic behaviour is monitored, but not its causes. A problem that also arises in this context is the necessity of object generalisation, where discrete points are chosen in order to describe an entire object. The most popular strategy for deformation monitoring, which is also used in this thesis, is the so-called congruency model. This purely geometric perception considers geometric changes between two epochs, for instance by comparison of coordinates. A vital assumption of the congruency model is that some portion of an observed area of interest remains geometrically stable (Neumann & Kutterer 2007).
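The idea of the congruency model can be illustrated by a small numerical sketch: a point observed in two epochs is only declared deformed if its displacement is significant with respect to the combined uncertainty of both epochs. The following Python snippet is merely an illustration with assumed values and a simplified covariance model, not the procedure developed later in this thesis.

```python
import numpy as np

def is_deformed(p_epoch1, p_epoch2, cov1, cov2, quantile=7.815):
    """Simple congruency test for a single point: the squared Mahalanobis distance of the
    displacement between two epochs is compared against a chi-square quantile
    (default: 95 %, 3 degrees of freedom)."""
    d = np.asarray(p_epoch2, float) - np.asarray(p_epoch1, float)   # displacement vector
    cov_d = np.asarray(cov1, float) + np.asarray(cov2, float)       # covariance of the difference
    test_value = float(d @ np.linalg.inv(cov_d) @ d)
    return test_value > quantile

# Assumed example: 1 cm displacement in x, 2 mm standard deviation per coordinate and epoch.
cov = np.eye(3) * 0.002**2
print(is_deformed([10.000, 5.0, 2.0], [10.010, 5.0, 2.0], cov, cov))   # -> True (significant)
```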
The thesis at hand is structured in accordance with the processing chain of deformation monitoring that has already been introduced. Note that the second step of the procedure, the actual data
acquisition, will not be separately discussed in this thesis. At first, an overview of established geodetic methodologies as well as contemporary TLS-based approaches for deformation monitoring is given in chapter 2. Section 2.6 assesses all discussed methods, based on which four research questions within the context of deformation monitoring with TLS data are posed. In order to support the reader in linking individual steps of the deformation monitoring processing chain to the according chapters, the related design step is given in brackets in the following.
A prerequisite for viewpoint planning is the availability of a suitable stochastic model while both
aspects are subject of chapter 3 (Step 1). The most critical part of the above mentioned process
chain concerning the outcome of deformation monitoring is the transformation into a common
coordinate system which is the main topic of this thesis and can be found in chapter 4 (Step 3).
A rigorous method for deformation monitoring based on point clouds captured by terrestrial laser
scanners as well as a visualisation strategy of deformations on arbitrary surfaces is presented in
chapter 5 (Step 4). Chapter 6 features several practical scenarios that are used to evaluate the
capabilities as well as the boundaries of the most promising proposed matching algorithm from the
previous section while chapter 7 concludes the thesis and gives an outlook on open questions.
2 Deformation monitoring: Past and present methodologies
This chapter will give an overview of all essential design steps for deformation monitoring, where both established and contemporary contributions are considered. Deformation monitoring is based on repeatedly observing an object or area of interest, which results in a time series of potentially different states within object space. The required frequency of documentation depends on the characteristic behaviour of the area of interest, e.g. whether it is of linear nature or whether it is triggered by certain influences such as heavy rainfalls or increasing temperatures. After a brief introduction to the functional principle of TLS, influential impacts and calibration routines in subsection 2.1, the focus is set on stochastic models within a geodetic context. An important step when conducting measurements that have to fulfil certain requirements in terms of precision (Ghilani 2010 p. 4) is to perform viewpoint planning in order to fully exploit the potential of the sensor, which will be of interest in section 2.2. Section 2.3 will introduce important aspects of the referencing of point clouds, which is the central topic of this thesis. A practical analysis of commercial registration algorithms has been conducted in order to assess their suitability for deformation monitoring and will hence be extensively discussed. Furthermore, the subjects of outlier identification / rejection as well as the actual monitoring process are of interest in sections 2.4 and 2.5, respectively. The last section of this chapter summarises the discovered problems for which several solutions are proposed in this thesis.
2.1 Fundamentals of TLS
A TLS consists of three major components, namely an emitter unit, a receiver unit as well as a deflection unit (Jutzi & Stilla 2003). The essential technology that allowed the development of TLS is the reflectorless distance measurement, which hence can be seen as the key component and is described in the following subsection. In order to capture an area or object of interest, the emitted laser beam requires deflection into different spatial directions, for which several scanning principles have been developed, as discussed in subsection 2.1.2. As laser scanners can be applied for the acquisition of arbitrary surfaces, which nevertheless have to reflect a sufficient signal, numerous influences onto the outcome can occur, which is the subject of subsection 2.1.4. A prerequisite before conducting engineering surveys is to strictly apply calibrated sensors; approaches to achieve this are discussed in subsection 2.1.5.
2.1.1 Reflectorless distance measurement approaches
The basic components of electro-optical distance measurement units are laser diodes that emit signals of the above mentioned types, optical components that ensure a potentially small beam divergence with increasing distance, as well as a photosensitive diode that receives the reflected signal and provides input for signal processing (Vosselman & Maas 2010, pp. 11). In general, two reflectorless distance measurement methodologies are incorporated in TLS, based either on continuous wave (cw) or on pulsed lasers. The latter is also referred to as the time-of-flight (tof) principle: if the speed of light c as well as the refractive index n are known and the runtime τ of a signal is measured, the according distance ρ between sensor and object can be computed by

\rho = \frac{c}{n} \cdot \frac{\tau}{2}.   (2.1)
In order to demonstrate the requirements for a tof-distance measurement unit, an example is given that assumes the following parameters:

c – speed of light in vacuum: 299792.458 km/s,
n – refractive index: 1,
res_dist – desired resolution of the distance measurement: 5 mm,

while res_temp describes the mandatory temporal resolution. Joeckel & Stober (1999 pp. 74) name dry temperature, air pressure, partial vapour pressure as well as the coefficient of expansion of air as decisive meteorological influences onto distance measurements captured by electro-optic sensors. Based on equation (2.1) the following relation can be formed

\mathrm{res}_{temp} = \frac{2}{c} \cdot \mathrm{res}_{dist}   (2.2)

which yields a required temporal resolution of 0.0335 ns. Joeckel & Stober (1999 pp. 21) point out that the required temporal resolution is independent of the length that is to be measured. Interested readers are referred to Jutzi & Stilla (2006) for more information on the subject of reflectorless tof-distance measurement.
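The figures of this example can be reproduced with a few lines of Python; this is only a restatement of equation (2.2) with the values assumed above.

```python
# Required temporal resolution of a time-of-flight unit for a desired distance
# resolution, following equation (2.2): res_temp = (2 / c) * res_dist.
C = 299_792_458.0      # speed of light in vacuum [m/s]
N = 1.0                # refractive index, as assumed in the example
RES_DIST = 0.005       # desired distance resolution [m]

res_temp = 2.0 * RES_DIST * N / C
print(f"Required temporal resolution: {res_temp * 1e9:.4f} ns")   # approx. 0.033 ns
```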
Another method to determine distances are phase-based techniques, where a continuous wave is emitted instead of pulsed signals (Vosselman & Maas 2010, pp. 7). For this purpose a signal is e.g. amplitude modulated (AM) by forming a sinusoidal wave. The emitted and received signals are compared to each other in terms of waveform, while the resulting phase shift yields the time delay and hence, as already introduced in equation (2.1), the range. A downside of this method is that a low modulation frequency f_m leads to low precision in phase detection and consequently to inaccurate distance measurements. Usage of a higher frequency would in fact lead to a higher achievable accuracy, but would amplify the second problem of the approach: the higher the frequency, the smaller the range of ambiguity. In order to solve this problem, various wavelengths are applied in order to ensure both a potentially large ambiguity interval and a desirable accuracy.
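The multi-frequency idea can be sketched as follows; the modulation frequencies and phase values below are assumed for illustration only. A low modulation frequency provides a coarse but unambiguous range, which is then used to resolve the integer ambiguity of a high, precise modulation frequency.

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def range_from_phase(phase_rad: float, mod_freq_hz: float, cycles: int = 0) -> float:
    """Range from a measured phase shift: one modulation wavelength corresponds to twice
    the range because the signal travels to the object and back."""
    unambiguous_range = C / (2.0 * mod_freq_hz)
    return (cycles + phase_rad / (2.0 * math.pi)) * unambiguous_range

# Assumed example: coarse 1 MHz tone (unambiguous up to ~150 m), fine 100 MHz tone (~1.5 m).
coarse = range_from_phase(phase_rad=1.00895, mod_freq_hz=1e6)       # coarse, low-precision range
fine_interval = C / (2.0 * 100e6)                                   # ambiguity interval of the fine tone
cycles = round(coarse / fine_interval - 0.36306 / (2.0 * math.pi))  # resolve the integer ambiguity
fine = range_from_phase(phase_rad=0.36306, mod_freq_hz=100e6, cycles=cycles)
print(f"coarse: {coarse:.2f} m, fine: {fine:.4f} m")
```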
As this thesis focuses on data processing and not the actual acquisition process, the interested reader is referred to Wehr & Lohr (1999), Jutzi & Stilla (2003), Jutzi (2007) or Vosselman & Maas (2010, pp. 2). Comparable to other optical measuring techniques, reflectorless distance measurements fail respectively produce erroneous results if the signal is subject to transmission, a substantial amount of absorption or total reflection (Stöcker 2000, pp. 375).
2.1.2 Scanning principles
In order to expand a one-dimensional distance measurement unit into a laser scanner, several additional components are required, namely mirror(s), motors and angular detectors. The task of one or more mirrors is to deflect the emitted signal of the laser diode and to transmit the signal that is reflected off an object's surface to a collecting optical group and finally to a photodiode for signal processing. By changing the orientation of the according mirror(s), additional degrees of freedom can be measured. By emitting a signal onto a single mirror that rotates around one axis, a profile in 3D-space is sampled, while the use of two mirrors yields a 3D-scanner. The current mirror orientation is determined by use of angular detectors. In order to be able to compute 3D-coordinates, the following three polar elements are required (Neitzel 2006); a conversion sketch follows the list:

Direction ϕ_i,
Tilt angle λ_i,
Spatial distance s_i as measured by the reflectorless distance measurement unit.
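A minimal sketch of the conversion from these polar elements into local Cartesian coordinates is given below. The exact axis conventions differ between instruments and are not specified here, so the formulation simply assumes a horizontal direction ϕ, a tilt angle λ measured from the horizontal plane and the spatial distance s.

```python
import math

def polar_to_cartesian(direction_rad: float, tilt_rad: float, distance: float):
    """Convert the polar elements of a single TLS measurement into local Cartesian
    coordinates (assumed convention: tilt angle measured from the horizontal plane)."""
    x = distance * math.cos(tilt_rad) * math.cos(direction_rad)
    y = distance * math.cos(tilt_rad) * math.sin(direction_rad)
    z = distance * math.sin(tilt_rad)
    return x, y, z

# Assumed example: direction 45 deg, tilt 10 deg, spatial distance 25 m.
print(polar_to_cartesian(math.radians(45.0), math.radians(10.0), 25.0))
```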
For the precise determination of 3D-coordinates, all required components need to be synchronised and calibrated as described in subsection 2.1.5. The process of scanning an environment can be achieved by various methods or combinations thereof, while figure 2.1 shows four essential approaches.
Figure 2.1: Several scanning strategies and resulting sampling pattern in object space (Vosselman & Maas 2010, p. 17)

Figure 2.1 a illustrates the oscillating mirror principle, which has been used in some of the first generation of laser scanners but is still popular in aerial laser scanning systems (ALS). The basic idea is that an emitted signal is deflected by an oscillating mirror that swivels within a certain range. As the mirror accelerates and slows down before and after reaching the turning points, a varying point density results. The combination of two orthogonally mounted mirrors that oscillate perpendicular to each other leads to so-called camera-view scanners that can only capture data within a certain field-of-view. Figure 2.1 b shows the principle of the rotating polygon mirror, where the deflection occurs in one direction only. The deflected lines are parallel to each other, which results in a far more homogeneous sampling pattern. Figure 2.1 c schematically shows the idea behind a palmer scanner that is mostly used in terrestrial laser scanners, e.g. where the total station principle is applied. In order to realise such a scanner, a mirror is aligned to a laser diode at an angle dissimilar to 90° while the mirror rotation results in a circular shaped scanning pattern. The according pattern in ALS systems is of elliptic shape due to the forward motion of the aerial sensor platform. Figure 2.1 d depicts the concept of a fibre scanner, where a laser pulse is sequentially fed into an array of glass fibres by a rotating mirror. As a result, a fixed resolution is given, leading to a homogeneous ground sampling distance (Vosselman & Maas 2010, p. 16).
2.1.3 Impact of spatial sampling
Classical geodetic acquisition methods describe an object or area of interest by several discrete points that are selected by a geodesist and / or in consultation with an expert of the according field, as already briefly mentioned. The major difference between this established perception and TLS is that no actual point selection is carried out in the field. Instead, an object of interest is covered by a densely sampled set of discrete points - a so-called point cloud. The acquisition method of TLS is referred to as quasi-laminar throughout the thesis due to the fact that it retains some properties of discrete strategies yet holds largely laminar characteristics and hence comes close to the desired ideal of laminar data acquisition. Figure 2.2 illustrates the circumstance where an object of interest has been acquired from two slightly different viewpoints by a TLS, which consequently results in a varying point distribution on the object's surface. As a consequence, one could say that TLS observes non-repeatable points on an object's surface that depend on the sensor's position in relation to the object. Interestingly, the point sampling may even differ between two scans if the orientation and location of the TLS remained stable, due to the imperfection of the angular detectors. In summary, it has to be established that novel thresholds respectively stochastic measures have to be derived that account for the spatial sampling of the acquisition procedure, which will be covered in some of the following subsections.
Figure 2.2: Object of interest (left) and acquired data from two slightly different viewpoints (mid-
dle and right part). It can be seen that the point sampling on the object’s surface
differs between both datasets. Digitised copy of the Bust of Queen Nefertiti, New
Kingdom, 18th dynasty ca. 1340 BCE, Egyptian Museum and Papyrus Collection,
National Museums Berlin, Germany. Unauthorised external use is strictly prohibited.
2.1.4 Influences onto reflectorless distance measurements
On the one hand, reflectorless range finders allow distances to be observed without the necessity to equip a point of interest; on the other hand, this means that a multiplicity of additional falsifying effects emerges compared to classical tacheometer observations onto prisms. Hence, this circumstance has been a relevant scientific subject since the early days of TLS in surveying, as Lichti et al. (2000) demonstrate. Soudarissanane et al. (2011) group the entirety of influences into the following four categories:
Scanner mechanism which includes properties of applied hardware, calibration and settings.
Atmospheric conditions and environmental impacts such as temperature, humidity or air
pressure.
Object properties such as roughness and reflective behaviour of the irradiated surface.
Scanning geometry includes the relative spatial relation between object and TLS during data
acquisition.
Böhler et al. (2003) perform accuracy and performance tests on TLS; the most significant contribution of this work can be found in the development of a 3D-version of the Siemens star, a device that is applied to determine the resolution of a photographic system (Luhmann et al. 2011, pp. 130). The proposed device is referred to as the Böhler star and determines the resolution of a TLS in dependence of the object distance and the local spatial resolution that is defined by the angular increments of the deflection unit. The highest resolution can be achieved at the distance where the spot size or footprint of the laser beam is minimal (Wujanz 2009). Staiger (2005) gives an overview of influential parameters onto the quality of point clouds and compares, within practical experiments, various TLS that were available on the market at the time. The behaviour of a TLS' footprint is discussed by Jacobs (2006), while Reshetyuk (2006b), Bucksch et al. (2007), Vögtle et al. (2008) and Zamecnikova et al. (2014) analyse influences caused by the surface reflectance respectively various materials onto the distance measurement unit. Soudarissanane et al. (2007, 2009, 2011) carried out extensive experiments concerning the error budget of TLS, including influences caused by incidence angle and distance. An overview of influencing factors within the TLS data acquisition and post-processing chain is given by Buckley et al. (2008)
from the perspective of geological monitoring. Hejbudzka et al. (2010) analysed the impact
of meteorological incidents onto distance measurements conducted by TLS which is to the best
knowledge of the author the only article that deals with this vital subject. It has to be mentioned
that this impact is extremely hard to model as laser scanners capture data in a polar fashion hence
every single emitted ray is subject to effects of different degree. Pesci et al. (2011) theoretically and
experimentally assess influences caused by a TLS’ footprint as well as the chosen scan resolution.
2.1.5 Calibration of TLS
An important prerequisite for surveying instruments is a valid calibration, which concerns the distance measurement unit as well as certain conditions regarding the instrument's axes that have to be met. The aim of a calibration procedure is to minimise systematic errors which would otherwise falsify an instrument's observations. Various articles have been published on the subject; they can briefly be distinguished by the type of observations that serve as input for the calibration as well as by the functional model that has been chosen for the TLS of interest. Neitzel (2006) applied Stahlberg's (1997) vectorial description of axis errors, based on the observation of six spheres that have been surveyed in two phases, in order to calibrate a Z+F Imager 5003 TLS whose functional model is in accordance with a tacheometer. Furthermore, three essential conditions are defined that have to be met by an error-free instrument, while the according axes are illustrated in figure 2.3:
Rotation axis and tilting axis are normal to each other,
Tilting axis and collimation axis are in a perpendicular relation,
The collimation axis intersects the rotation axis.
Figure 2.3: Occurrence of a tilting axis error and collimation error (Neitzel 2006)

Rietdorf et al. (2004) respectively Rietdorf (2005) describe a calibration procedure for the estimation of axis errors that applies planar patches as observations, whereas the idea has been adapted by Bae & Lichti (2007) in the form of an on-site calibration routine analogous to photogrammetric solutions (Luhmann et al. 2011, pp. 448). Salo et al. (2008) presented a distance component calibration routine that has been applied to a phase-based TLS. Sound summaries on the subject are presented by Lichti & Licht (2006), Lichti (2007), Reshetyuk (2006a), Schulz (2007), Reshetyuk (2009) and Lichti (2010). In order to check a TLS in the field, e.g. after a longer period during which the instrument has not been used, after a fall or after uncertain transportation, a field test procedure (Gottwald 2008) should be carried out that allows the user to draw conclusions on whether a re-calibration is required or not.
2.2 On stochastic models for TLS and viewpoint planning
The field of deformation monitoring can clearly be assigned to the subject of engineering geodesy, where detailed knowledge about a sensor's error budget in the form of stochastic models is essential in order to draw objective and reliable conclusions about a stated problem. In order to fully employ the potential of an applied sensor in terms of its achievable accuracy, the shape of an object of interest has to be known. This aspect is somewhat paradoxical, as the shape that is actually of interest is usually not known prior to a survey; if this were the case, data acquisition would appear needless at first glance. In summary, it can be established that the survey of an object is actually an iterative process in which the level of detail improves during the course of repeated acquisition, which consequently leads to an improved set of required viewpoints. The following subsections describe the state of the art of stochastic modelling and viewpoint planning, at first from the traditional surveying perspective and subsequently within the context of TLS.
2.2.1 Stochastic modelling
A telling definition of stochastic models is given by Bjerhammar (1973, p. 8), who states that “the stochastic model (random model) is used, when there is no way of making an exact prediction of the outcome (random trial)”. This is always the case for observations; furthermore, only the precision, but not the according accuracy, can be determined in the field. Stochastic models also serve for the weighting of individual observations during parameter estimation. Ghilani (2010 p. 165) argues that “the weight of an observation is a measure of an observation’s relative worth compared to other observations in an adjustment. Weights are used to control the sizes of corrections applied to observations in an adjustment”. In addition, it has to be mentioned that weights or a priori accuracies influence the quality measures of the estimated parameters due to the given functional relations (Niemeier 2008 pp. 124). The weights are the diagonal elements of the weight matrix P that is related to the observation vector l, while correlations between observations are given on the off-diagonal elements. In the following representation of P no correlations are assumed, which is expressed by zero values on the off-diagonals:
P = (Q_{ll})^{-1} = \begin{pmatrix} \dfrac{\sigma_0^2}{\sigma_1^2} & & & 0 \\ & \dfrac{\sigma_0^2}{\sigma_2^2} & & \\ & & \ddots & \\ 0 & & & \dfrac{\sigma_0^2}{\sigma_n^2} \end{pmatrix}.   (2.3)
In order to express relations respectively weights between observations, a reference variance is required, usually denoted by σ_0^2, which is also referred to as variance factor, standard error or variance of the unit weight. A frequently used assumption is σ_0^2 = 1 for the weighting of all observations, while more advanced presuppositions can also be made where individual variance factors are used for different observation groups. Apart from choosing appropriate correlations between observations, the most critical part is the determination of the individual variances σ_n^2. While this circumstance is well known for tacheometric observations, where typical standard deviations are 0.3 mgon for directions and 1 mm + 1 ppm for distances, this issue is a lot more complex for TLS measurements, as already discussed in subsection 2.1.4. This aspect may be the reason why very few studies have focused on the subject of stochastic modelling for TLS observations.
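For illustration, the following Python sketch forms the weight matrix of equation (2.3) from a priori standard deviations, using the tacheometric values quoted above (0.3 mgon for directions, 1 mm + 1 ppm for distances); the choice σ_0 = 1 and the example ranges are assumptions.

```python
import numpy as np

def weight_matrix(sigmas, sigma0=1.0):
    """Diagonal weight matrix P = diag(sigma0^2 / sigma_i^2), i.e. equation (2.3)
    without correlations between the observations."""
    sigmas = np.asarray(sigmas, dtype=float)
    return np.diag(sigma0**2 / sigmas**2)

# A priori standard deviations: two directions [rad] and two distances [m] at 50 m and 120 m.
sigma_dir = 0.3e-3 * np.pi / 200.0                        # 0.3 mgon converted to radians
sigma_dist = [0.001 + 1.0e-6 * s for s in (50.0, 120.0)]  # 1 mm + 1 ppm of the distance
P = weight_matrix([sigma_dir, sigma_dir] + sigma_dist)
print(np.diag(P))
```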
A comparison of different stochastic models for registration of point clouds captured by TLS has
been conducted by Elkhrachy & Niemeier (2006). Point-to-point correspondences for computa-
tion of transformation parameters were established based on artificial tie points. Centre coordinates
of the applied targets have been weighted by using three different stochastic models:
Equal weights which leads to an identity matrix P=I.
Weight proportional to range between scanner and target.
Weight proportional to squared range between scanner and target.
Consequently, the according weight matrices follow as

A = \mathrm{diag}\left(\frac{1}{s_1}, \frac{1}{s_1}, \frac{1}{s_1}, \frac{1}{s_2}, \frac{1}{s_2}, \frac{1}{s_2}\right)   (2.4)

for the second model and

B = \mathrm{diag}\left(\frac{1}{s_1^2}, \frac{1}{s_1^2}, \frac{1}{s_1^2}, \frac{1}{s_2^2}, \frac{1}{s_2^2}, \frac{1}{s_2^2}\right)   (2.5)
for the third model, where s_1 indicates the distance from the origin of the TLS to the target centre in the first scan while s_2 represents the according value for the second scan. It can be seen that less precise targets located further from the scanner receive lower weights and hence have a smaller impact onto the outcome. Tests on a real-world dataset revealed the last model to produce the most accurate results. Even though a stochastic model has been applied for weighting purposes, a global test of the adjustment (Niemeier 2008 pp. 167) is not conducted.
Other related articles such as Schaer et al. (2007) or Bae et al. (2009) also derive stochastic
information for airborne laser scanning respectively TLS but also do not assess the validity of the
according models. Hence, the motivation arises to develop and validate a stochastic model for a
TLS that is of vital importance for:
Planning of optimal viewpoints,
Weighting of observations e.g. for registration or estimation of geometrical parameters,
Outlier identification,
Deformation monitoring
which will be subject of chapter 3.
2.2.2 Geodetic viewpoint planning
A prerequisite before carrying out a classical engineering survey based on total station observations is to perform a sophisticated network design respectively network optimisation. The major aim of this task is to obtain an optimal solution that satisfies homogeneity of the surveyed points in terms of accuracy and reliability, for instance by carefully controlling the redundancy numbers of the observations. While these aspects are purely seen from an engineering perspective, an economic point of view is also essential. For this sake, the required expenditure of work needs to be minimised and has to remain smaller than a predefined value P_A. This measure can either be defined by economic means or by a client, for instance at a construction site where other operations can only be interrupted for a predefined amount of time during the survey.
The following equation describes this problem by

\sum_j a_j \, n_j \le P_A   (2.6)
where a_j denotes the required effort for a single observation while n_j represents the number of repetitions. A detailed summary on network design is for instance given by Niemeier (2008 pp. 331) or Ghilani (2010 pp. 455).
While the established perspective in engineering geodesy is based on chosen discrete points, it is obvious that the mentioned procedure cannot simply be transferred to TLS, which acquires an area of interest in a quasi-laminar fashion. First thoughts on finding optimal TLS viewpoints have been proposed by Soudarissanane et al. (2008) and Soudarissanane & Lindenbergh (2011), which will be discussed in this section. As an input, a 2D map is derived from a given 3D-model of a scene, as “Almost all 3D indoor scene can be reduced to a 2D map by taking an horizontal cross section of the scene at for instance the height of the sensor. This approximation of the 3D surrounding as a 2D map results in less intensive computations” (Soudarissanane & Lindenbergh 2011). In contrast to the aforementioned equation that describes the expenditure of work, in laser scanning a minimum number of viewpoints is desired to cover a region of interest, which still has to feature sufficient overlap between adjacent scans for registration via e.g. fine matching algorithms, which will be extensively discussed in section 2.3.2.2.2. Again, a trade-off has to be found that serves both the number of acquisitions and the required effort for registration. Furthermore, the quality of all measured points should be optimal, for which the following criteria need to be satisfied:
measured points should be optimal for which the following criteria need to be satisfied:
Completeness: All edges of the 2D map should be covered by at least one viewpoint.
Reach: All edges are captured from at least one viewpoint whose distance lies between the minimum distance of the scanner d_min and the maximum distance between instrument and edge d_max.
Incidence angle: All edges are acquired from at least one viewpoint where the according incidence angles fall below a maximum threshold α, as this influence causes the largest falsifying impact according to Soudarissanane et al. (2011).
The problem of determining optimal viewpoints is solved by gridding the given 2D map based on
predefined distances. On each grid point a simulated scan with defined settings is carried out via ray
tracing. As a result, artificial point clouds are computed from which the distances between individual
points and the scanner as well as the according incidence angles on the object’s surface are derived.
These geometric measures are then used to evaluate all viewpoints according to the above mentioned
criteria. Gridding the 2D map dramatically reduces the required computational effort, which can
be quite high depending on the chosen number of potential viewpoints and the angular resolution
of the simulated scanner. The left part of figure 2.4 depicts an example of a visibility polygon
for a complex room. The outer bound polygon P_0 is outlined by red lines. Interior obstruction
polygons P_j, j = 1...6, are represented by blue areas. The interior of these polygons is not visible.
A simulated viewpoint O is depicted by a red star. The visibility polygon V from this location O
is represented by the green area. On the right, a simulation of nineteen viewpoints is shown that
are required to cover all edges under range and incidence angle constraints. The resulting
visibility polygons are represented by grey areas.
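To make the described procedure more tangible, the following Python sketch evaluates candidate viewpoints on a gridded 2D map against the reach and incidence angle criteria and selects a small covering set in a greedy fashion. It is a strong simplification of the published method: occlusion testing via visibility polygons is omitted, edge midpoints stand in for complete edge coverage, and all numerical values are illustrative assumptions.

import numpy as np

def edge_visible(viewpoint, p0, p1, d_min, d_max, alpha_max_deg):
    # range and incidence angle criteria for one map edge, checked at its midpoint
    mid = 0.5 * (np.asarray(p0, float) + np.asarray(p1, float))
    ray = mid - np.asarray(viewpoint, float)
    dist = np.linalg.norm(ray)
    if not (d_min <= dist <= d_max):
        return False
    edge_dir = np.asarray(p1, float) - np.asarray(p0, float)
    normal = np.array([-edge_dir[1], edge_dir[0]])
    normal /= np.linalg.norm(normal)
    cos_inc = abs(np.dot(ray / dist, normal))
    incidence = np.degrees(np.arccos(np.clip(cos_inc, -1.0, 1.0)))
    return incidence <= alpha_max_deg

def greedy_viewpoints(grid, edges, **criteria):
    # greedy set cover: add viewpoints until every edge satisfies the criteria at least once
    uncovered = set(range(len(edges)))
    chosen = []
    while uncovered:
        best, best_cov = None, set()
        for vp in grid:
            cov = {i for i in uncovered if edge_visible(vp, *edges[i], **criteria)}
            if len(cov) > len(best_cov):
                best, best_cov = vp, cov
        if not best_cov:          # remaining edges cannot be covered from this grid
            break
        chosen.append(best)
        uncovered -= best_cov
    return chosen

# toy example: a rectangular room, candidate viewpoints on a coarse interior grid
edges = [((0, 0), (4, 0)), ((4, 0), (4, 3)), ((4, 3), (0, 3)), ((0, 3), (0, 0))]
grid = [(x, y) for x in (1, 2, 3) for y in (1, 2)]
print(greedy_viewpoints(grid, edges, d_min=0.5, d_max=10.0, alpha_max_deg=60.0))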
Figure 2.4: The left part of the figure depicts an example for a visibility polygon based on a 2D
map. On the right nineteen optimal viewpoints are represented by stars (both figures
by Soudarissanane & Lindenbergh 2011).
In conclusion, the content of the mentioned articles has to be rated as an important and vital
contribution within the context of laser scanning, especially from an engineering surveying perspec-
tive. However, some aspects need to be viewed critically and hence require improvement. First, it
has to be said that a 2D map may be a suitable simplification for indoor scenarios but definitely
not for natural outdoor scenes. While the procedure provides an optimal solution regarding the
stated criteria, the user does not know what this means in terms of achievable accuracy.
Finally, the first criterion has to be highlighted as it states that it is sufficient to capture every
edge by at least one viewpoint. This argument neglects the fact that this procedure does not
ensure sufficient resolution to avoid discretisation errors. In addition, a set of viewpoints may be
determined that cannot be registered into a common dataset. Based on these drawbacks, advanced
thoughts and techniques will be presented in section 3.2.
2.3 Referencing methodologies
As already mentioned in the introduction the most influential component in the processing chain of
deformation monitoring is the transformation of various captured epochs into a common reference
coordinate system. In order to achieve this, several approaches are applicable that can be cate-
gorised into georeferenced and co-registration strategies; both are discussed more closely in the
following. The peculiarity of the first category is that the common coordinate frame is described by a
reference coordinate system which is established by additional geodetic observations (Paffenholz
et al. 2010, Reshetyuk 2010). The second methodology, which is also referred to as registration
or alignment, works solely on the captured data.
2.3.1 Georeferenced approaches
The first category of referencing approaches follows the long tradition of deformation monitoring
where superior, predominantly georeferenced coordinate systems serve as reference frames in order
to quantify geometric changes between epochs. To achieve this, several approaches can be
distinguished that require additional sensors or other equipment such as artificial targets, which
are of interest in the following subsections.
2.3.1.1 Use of artificial targets
The most popular registration methodology within the context of deformation monitoring thus far
is based on artificial targets, which need to be placed within the area of interest.
During acquisition of an according epoch, all targets need to be captured in order to
derive transformation parameters into the desired coordinate system. In general, two types of target
centre detection can be distinguished namely geometrically motivated approaches and intensity
based methods. The most typical representatives of the first mentioned group are spherical targets where
the centre can be determined by adjusting a sphere through the acquired point cloud. However,
other geometric arrangements are also applicable, for instance by using corner cube reflectors that
are assembled by three intersecting metal sheets (Wujanz et al. 2013a). Georeferencing can be
directly achieved by tacheometric observations or via global navigation satellite system (GNSS) and
forced-centring. Intensity based target identification applies well-established methods from image
processing such as correlative techniques (Ackermann 1984) that use radiometric information
in form of laser intensity images. A detailed description of target centre determination based on
reflectance imagery is given by Abmayr et al. (2008), Alba et al. (2008b) or Chow et al. (2010).
Numerous downsides of this strategy are known such as the necessity to enter the area of inter-
est, which may be prohibited, dangerous or impossible as well as the prerequisite that at least
some of the attached targets have to remain stable between epochs. Clarification of this re-
quirement, for instance via S-Transformation (Baarda 1973), needs to be conducted before an-
other epoch can be captured. The size of a target determines up to which distance it can be
detected - the larger the distance between scanner and target, the larger the target has to be.
While targets based on geometric primitives can be easily adapted to larger distances between
TLS and target, radiometrically motivated targets are restricted to the minimum extent of the
scanner’s laser footprint. This circumstance is highly correlated to the highest achievable resolu-
tion (Böhler et al. 2003) which peaks at the focal point of the embedded laser’s optic (Wujanz
2009). It is obvious that targets can’t be scaled arbitrarily as they would be difficult to manufacture
and transport as well as prone to gusts of wind that may alter their original position. Also it has
to be mentioned that the distribution of targets within an investigated area has an impact on the
derived transformation parameters and hence also on the result of the deformation monitoring
itself. The biggest drawback of this strategy, however, is the fact that the high redundancy of TLS
is not utilised.
2.3.1.2 Direct Georeferencing
The basic principle behind direct georeferencing is the determination of six degrees of freedom
(6dof) by usage of external sensors such as inclinometers, compasses and GNSS antennas. Usually
the last mentioned sensors also establish the superior coordinate system as a prerequisite for
deformation monitoring. Early thoughts on the topic have been raised by Lichti & Gordon
(2004) who investigated the error budget of the procedure for acquisition of cultural heritage.
Schuhmacher & Böhm (2005) proposed several strategies for direct referencing of terrestrial laser
scans including a low-cost GPS antenna and a digital compass. Due to the high accuracy demands
on the sensors - especially for scenarios with large object distances - the resulting
insufficient alignment of the point clouds requires a refinement using acquired information from
the object space in order to generate the final result. Paffenholz & Kutterer (2008) propose
the additional use of two GPS antennas for determination of position and orientation within a
superior coordinate system. Reshetyuk (2010) presented an approach where two GPS-antennas
are solely deployed. The approach follows the strategy of a traverse where the position of the
scanner is determined by an attached antenna while the second antenna is mounted on a second
tripod. After determination of both positions an orientation is computed where the external
antenna is replaced by a cylindrical target. Paffenholz et al. (2010) proposed a multi sensor
system consisting of a TLS, a GNSS-antenna as well as two inclinometers. An adaptive extended
Kalman filter is introduced which, in combination with the inclinometer data, allows to determine
all desired 6dof. Figure 2.5 shows different setups of the system where the TLS is combined with
different sensors: use of two GNSS-antennas (left), one GNSS antenna with two inclinometers
(centre) and one antenna with one 360° prism for assessment.
Figure 2.5: Various configurations of the developed direct referencing system (Paffenholz et al.
2010, Paffenholz 2012).
An extension of the previously mentioned approach has been published by Paffenholz & Bae
(2012) where transformational and positional uncertainty is considered. Furthermore refinement
of the outcome is conducted by a modified version of the iterative closest point algorithm (ICP)
that will be discussed in detail in section 2.3.2.2.2. For sound summaries on the topic the reader
is referred to Reshetyuk (2009) and Paffenholz (2012). As mentioned in the first section the
congruency model for deformation monitoring assumes a certain part of a scene of interest to
remain stable. This requirement can be overcome by usage of georeferenced approaches as the
comparison of epochs is conducted within a global reference frame. This aspect has to be rated
advantageous in contrast to co-registration approaches that are discussed later in this section.
2.3.1.3 Kinematic data acquisition
A disadvantage of TLS is its sequential sampling process which hence leads to temporal offsets
between measured points and consequently to geometric distortion if the object of interest moves
or the scanner itself moves during data acquisition. Hence, laser scanners are per se not suitable for
observation of kinematic processes or acquisition of a scene by moving a TLS through object space
on a movable sensor platform. The last mentioned strategy would lead to a notable acceleration of
the data acquisition process in the field compared to static TLS where the instrument’s viewpoint
is bound to a tripod. However, if the movement of the object or sensor platform can be described
by tracking its six degrees of freedom (6dof) during scanning, corrections can be applied that yield
a geometric rectification and thus allow kinematic laser scanning, or short k-TLS. The most
prominent example of k-TLS can be seen in mobile mapping (Schwarz & El-Sheimy 2004) where
a kinematic multi sensor platform moves through an area of interest. A sound overview on the
history, components, processing steps and other important aspects concerning mobile mapping in
a general sense is given by El-Sheimy (2005). Nevertheless, several cases can also be defined that
describe kinematic scenarios, for which the object coordinate system (OCS), sensor
coordinate system (SCS) and environment coordinate system (ECS) need to be introduced in this
context:
Case 1: OCS, SCS and ECS remain constant. The object of interest however changes its shape
caused by dynamic influences during data acquisition.
Case 2: OCS and ECS remain constant while the SCS moves within them (mobile mapping).
Case 3: SCS and ECS remain constant while the OCS moves within them.
Case 4: OCS and SCS change independently from each other within the ECS.
Since the focus of this subsection lies on systems that apply active instruments for the purpose of
data acquisition, contributions that apply TLS in a kinematic sense are introduced in the following.
Paffenholz et al. (2008) focus on problems that follow the definition of case 1, where an object
deforms during the measurement and conclusions concerning its behaviour are drawn.
Neitzel et al. (2012) satisfy the same case by performing structural health monitoring (SHM)
within a comparative study between k-TLS, accelerometers as well as ground-based radar.
The majority of scientific effort has nevertheless concentrated on problems related to or focusing
on the second case. Mettenleiter et al. (2008) describe different possibilities and essential
aspects of time synchronisation in detail on the example of the TLS which has been applied in this con-
tribution. Vennegeerts et al. (2008) discuss soft- and hardware-based approaches for temporal
synchronisation and evaluate the geometric outcome derived by both methods. Hesse (2007) uses
a combination of TLS, GPS and inclinometers for a mobile mapping system which avoids the
usage of costly inertial measurement units. While the previous approaches applied terrestrially op-
erating platforms Böder et al. (2010) installed their system including hydrographic sensors onto
a vessel. An ongoing project at Jade University of Applied Sciences, Germany called “WindScan”
(Grosse-Schwiep et al. 2013) applies a combination of TLS and photogrammetry in order to
determine strain on blades of actuated wind generators. Hence, this project, as well as the approach
described by Wujanz et al. (2013c) where a ship has been captured in motion on the water, follows
the third case according to the previously stated definitions. The author is not aware of any
contribution that covers the fourth mentioned case. In order to conduct kinematic laser scanning
the following four steps need to be carried out:
Step 1: Transformation of all sensor coordinate systems into a common coordinate system.
Step 2: Synchronisation / temporal calibration of the system.
Step 3: Data acquisition and determination of the object’s kinematic parameters.
Step 4: Geometric rectification of the point cloud.
For further details on the subject the reader is referred to the cited literature.
2.3.2 Co-registration approaches
A downside of the previously mentioned approaches is that they require additional sensors. Thus,
methods are discussed in the following subsections that solely apply data captured by TLS as an
input for so called co-registration approaches. In general, a prerequisite of all approaches in order
to carry out registration is that a sufficient overlap between point clouds is given. This aspect is
assumed to be given anyway as otherwise deformation monitoring cannot be conducted. A second
requirement is that the data contains sufficient information which is:
Radiometric contrast for intensity based methods: Acquired intensity values feature a het-
erogeneous characteristic.
Geometric contrast for geometrically motivated approaches: The captured surface is discon-
tinuous.
2.3.2.1 Intensity based registration
A major challenge in matching of point clouds is the enormous complexity due to the three dimen-
sional characteristic of the stated problem. By interpreting the 3D-point cloud as a two-dimensional
image computational and methodological issues can be overcome. This can be achieved by using
meta-data that is stored for instance in specific formats such as Leica’s ptx-format. The header
as well as the first data lines of such a file are given in the following. The first two lines specify the
dimensions of the acquired scan which hence can be used to define the dimensions of an im-
age. Lines three to ten can be used to store transformational information in case that the given
dataset has been transformed into another coordinate system. From line eleven onwards a list
that contains 797500 lines is given (number of columns multiplied by the number of rows) where
each line represents a single point. The first three values represent the according coordinates
while the fourth value features the intensity value which is standardised to a range between 0
and 1. This information embodies the key component of the approach as it is used to generate
an image like representation. Line twelve contains a point where a distance measurement failed.
Hence, default values are filled into the file so that the topology of the scanned area remains stable.
 1   1276                              // number of columns
 2   625                               // number of rows
 3   0.000 0.000 0.000                 // skip
 4   1 0 0                             // skip
 5   -0 1 0                            // skip
 6   0 -0 1                            // skip
 7   1 0 0 0.0                         // skip
 8   -0 1 0 0.0                        // skip
 9   0 -0 1 0.0                        // skip
10   0.000 0.000 0.000 1.0             // skip
11   -0.0046 -0.0073 2.3062 0.754      // coordinates and radiometric information
12   0.000 0.000 0.000 0.5             // default values
13                                     // X Y Z Intensity
Figure 2.6 shows on the left how the image matrix is assembled where i stands for the according
intensity. In addition to the intensity values the according coordinates are stored which hence also
allows querying geometric information for each pixel. On the right a generated intensity image can
be seen. Green coloured pixels highlight points for which the distance measurement failed.
$$\begin{pmatrix} i_{1:1} & \cdots & i_{625:1} \\ \vdots & \ddots & \vdots \\ i_{1:1276} & \cdots & i_{625:1276} \end{pmatrix}$$
Figure 2.6: Dimensions of the image matrix (left) and generated intensity image of a point cloud
(right)
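The following Python sketch shows how such a file can be turned into the image matrix of figure 2.6, assuming the simple single-scan layout of the listing above (without the printed line numbers); real ptx files may contain several scans per file, and the storage order is assumed here to be column by column.

import numpy as np

def ptx_to_intensity_image(path):
    with open(path) as f:
        cols = int(f.readline())            # line 1: number of columns
        rows = int(f.readline())            # line 2: number of rows
        for _ in range(8):                  # lines 3-10: transformation information, skipped here
            f.readline()
        xyz = np.zeros((rows, cols, 3))
        intensity = np.zeros((rows, cols))
        for c in range(cols):               # one scan column after the other
            for r in range(rows):
                x, y, z, i = map(float, f.readline().split()[:4])
                xyz[r, c] = (x, y, z)
                intensity[r, c] = i         # failed measurements keep the default values
    return xyz, intensity

# xyz, img = ptx_to_intensity_image("scan.ptx")   # file name is a placeholder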
One of the first articles that applied an intensity based registration by using image processing
strategies has been presented by Böhm & Becker (2007). Based on two intensity images, feature
points and descriptors are computed based on Lowe’s (2004) scale-invariant feature transform
algorithm (SIFT). A set of consistent feature matches is then identified by use of Fischler
& Bolles’ (1981) random sample consensus (RANSAC) method, which revealed an outlier ratio
of roughly 90%. Reasons for this high ratio can be found in repetitive patterns and symmetry
of the object. In terms of achieved accuracy the outcome falls behind conventional methods. A
comparable solution has been presented by Kang et al. (2009) who also apply SIFT for feature
matching yet propose a sequential algorithm for rejection of mismatches. Another problem that is
related to the usage of input images respectively detected feature descriptors is their dependency on
the sensor location. Houshiar et al. (2013) tackle this issue by using map projection methods for
rectification of intensity images before feature matching is conducted. A supplementary approach
has been proposed by Akca (2007a) which extends a previously published method (Grün & Akca
2005). Instead of using image processing techniques where features are detected and matched,
so called quasi-surfaces are generated based on the intensity data. The results are, in terms of
achieved accuracy, comparable to surface based matching. A related technique has been proposed
by Al-Manasir & Fraser (2006) where the exterior orientation between two TLS viewpoints is
determined by photogrammetric means and usage of an integrated camera.
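The following Python sketch outlines the core of such an intensity based strategy in the spirit of Böhm & Becker (2007): SIFT features are matched between two intensity images, mismatches are reduced by a ratio test, and the 3D coordinates stored per pixel yield candidate correspondences for a rigid-body transformation. The calls follow the OpenCV API; the subsequent robust rejection of remaining outliers (e.g. by RANSAC over rigid transformations) and the parameter values are assumptions of this sketch, not a reproduction of the original implementation.

import cv2
import numpy as np

def match_intensity_images(img_a, img_b, xyz_a, xyz_b):
    # img_*: 8-bit intensity images; xyz_*: per-pixel 3D coordinates of identical dimensions
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_a, des_b, k=2)
    pts_a, pts_b = [], []
    for m, n in pairs:
        if m.distance < 0.75 * n.distance:          # Lowe's ratio test against mismatches
            ca, ra = map(int, kp_a[m.queryIdx].pt)  # keypoint position (column, row)
            cb, rb = map(int, kp_b[m.trainIdx].pt)
            pts_a.append(xyz_a[ra, ca])
            pts_b.append(xyz_b[rb, cb])
    return np.array(pts_a), np.array(pts_b)         # 3D correspondence candidates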
2.3.2.2 Surface based registration approaches
As already mentioned in subsection 2.3.2.1, determining transformation parameters in R³ is an
ambitious task. Hence, the registration procedure solely based on using the
overlapping region of point clouds is divided into two steps: a pre-alignment which brings the
datasets into a coarse arrangement and a fine matching that defines the final relative orientation
and location. The first function fulfils two major tasks namely avoidance of local minima as well
as reduction of the computational effort that is required within the iterative fine matching process.
The following subsection will give an overview on selected matching algorithms of both types. It
should be mentioned that a vast amount of contributions has been published on the subject thus
only several methodically different approaches will be discussed.
2.3.2.2.1 Coarse registration algorithms
Aiger et al. (2008) proposed an approach called four-points-congruent-sets-algorithm (4PCS)
where a quadruple of approximately coplanar points from one point cloud is matched to another
randomly chosen set of the same size. The motivation of this procedure is based on the fact
that the search space of the problem dramatically decreases in comparison to triplets. A key com-
ponent of the approach is described by two measures that are invariant under affine transformation.
These measures uniquely define a 4-point set which hence allows to selectively seek approximately
congruent quadruples. Based on these matches a suitable set of transformation parameters is found
by evaluating the resulting overlapping region of two point clouds depending on a relative con-
gruence measure (RANSAC). According to the authors the algorithm is “robust against noise” as
the quadruples are chosen as big as possible based on an overlap estimate between two datasets.
Figure 2.7 illustrates the matching process between a target set of four points as well as a source
set consisting of five points. Two point sets are approximately congruent to each other if e_1 ≈ e_2.
In this case point set {a,b,c,d} as depicted left is congruent to {q1, q3, q4, q5} that is shown right
(Aiger et al. 2008).
Figure 2.7: Based on a quadruple of points two ratios can be computed which are referred to
as r_1 and r_2 (left part). For every vector within a 4-point congruent subset two
possible assignments of the intersection point e_i can be generated, depending on the definition
of the vector (centre). Finally, quadruples are established from a second set of points
to which the first dataset should be related. For the sake of simplification only two
intermediate points e_i are shown in the right part of the figure (Aiger et al. 2008).
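A brief Python sketch may clarify the invariant at the heart of the 4PCS algorithm: the lines through {a, b} and {c, d} of a coplanar quadruple intersect in a point e, and the ratios r_1 = |a - e| / |a - b| and r_2 = |c - e| / |c - d| are preserved under affine transformation. A planar (2D) quadruple is used here to keep the intersection computation simple; the values are illustrative.

import numpy as np

def invariant_ratios(a, b, c, d):
    a, b, c, d = (np.asarray(p, float) for p in (a, b, c, d))
    # solve a + r1*(b - a) = c + r2*(d - c) for the intersection parameters r1, r2
    A = np.column_stack((b - a, -(d - c)))
    r1, r2 = np.linalg.solve(A, c - a)
    return r1, r2

# a congruent copy of the quadruple (rotated and translated) yields identical ratios
quad = [(0.0, 0.0), (4.0, 0.0), (1.0, -1.0), (2.0, 2.0)]
print(invariant_ratios(*quad))   # -> (0.333..., 0.333...)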
Rusu et al. (2008, 2009) introduce the idea of so called persistent feature histograms which gener-
ally coincides with Lowe’s (2004) methodology of descriptors for image matching that are used to
precisely characterise detected features. Therefore, points are analysed in a k-neighbourhood that
is locally enclosed by a sphere of a predefined radius. Based on this environment, a histogram is
computed from the differences of the surface normals between a point of interest (POI) and all k-neighbours
which hence describes the mean surface curvature of the POI. The characterisation process applies
a 16-bin histogram into which the percentage of points within the k-neighbourhood that fall into
predefined intervals is sorted. These intervals are described based on four feature types that
essentially describe measures of angles between all points’ normals and distances among them –
thus they describe datum independent measures. In order to reduce the number of features down
to a set that adequately characterises a dataset, persistence analysis is conducted. For this purpose
the radius of the sphere that encloses the k-neighbourhood is altered in several steps so that sev-
eral histograms arise for the POI while only points are chosen that show unique characteristics.
In order to establish correspondences between two point clouds the according histograms of per-
sistent features are compared and sorted regarding their similarity. Based on the most similar
entries correspondences and finally transformation parameters are obtained. As a final step an
ICP-algorithm including a modified error function is conducted. Figure 2.8 illustrates persistent
features on the Stanford bunny dataset over a series of varying radii as well as the final selection of
persistent points. Apart from the two discussed approaches coarse alignment processes have also
been proposed by Torre-Ferrero et al. (2012), Theiler et al. (2013, 2014), who applied the
4PCS-algorithm only on 3D-keypoints, Chuang &Jaw (2015) and Weber et al. (2015).
Figure 2.8: Persistence over multiple radii (left to right: r_1 = 3 mm, r_2 = 4 mm, r_3 = 5 mm) that
enclose a k-neighbourhood and overall persistent points exemplified on the Stanford
bunny dataset (Rusu et al. 2009)
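The following simplified Python sketch conveys the flavour of these descriptors: for a point of interest, the angles between its normal and the normals of all neighbours inside a sphere of radius r are sorted into a 16-bin histogram. The full method of Rusu et al. uses four Darboux-frame features and a dedicated persistence analysis; only a single feature type is shown here and the radius values are illustrative.

import numpy as np

def normal_angle_histogram(points, normals, poi_idx, radius, bins=16):
    # points, normals: (n, 3) arrays with unit normals; poi_idx: index of the point of interest
    p, n_p = points[poi_idx], normals[poi_idx]
    dists = np.linalg.norm(points - p, axis=1)
    neigh = np.where((dists > 0) & (dists <= radius))[0]
    cos_ang = np.clip(normals[neigh] @ n_p, -1.0, 1.0)
    angles = np.arccos(np.abs(cos_ang))                    # angles in [0, pi/2]
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, np.pi / 2))
    return hist / max(len(neigh), 1)                       # fraction of neighbours per interval

# persistence analysis would repeat this for several radii (e.g. 3, 4 and 5 mm) and keep
# only points whose histograms differ significantly from the mean histogram of the cloud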
2.3.2.2.2 Fine Matching algorithms
After coarse alignment of two datasets relative to each other has been carried out as described in
the previous subsection the final matching process in form of fine matching algorithms follows. The
necessity of conducting coarse matching before fine matching can be explained by a convergence
region in which the two point clouds need to fall into in order to avoid local minima. One of the
first fine matching algorithms, named iterative closest point algorithm (ICP), has been proposed
by Besl & McKay (1992) where point-to-point correspondences are established. This step has to
be rated critical, as illustrated in figure 2.2, due to the fact that points captured by TLS
are non-repeatable, while this issue can be tackled by establishing point-to-surface correspondences
(Chen & Medioni 1991). A comparable methodology has been presented by Zhang (1994).
In the following the general ICP process chain should be described in detail. As a first step coarse
alignment between the datasets, referred to as point cloud A (green shark on the right of figure
2.9) and B (red shark), needs to be carried out which can be achieved either manually, by direct
georeferencing as introduced in subsection 2.3.1.2 or coarse registration algorithms that have been
subject of the previous subsection. This step respectively the outcome is highlighted by a red
rectangle on the left while the result of coarse matching is accentuated on the right by using the
same shape and colour. Assuming that point cloud A’s coordinate system has been defined as a
target system a point subset of the whole is selected in order to cut down computational cost and
to reduce matrix dimensions during computation of transformation parameters. This selection is
referred to as candidate points or candidates and is determined only once during the algorithm.
Now that candidates have been sampled the ICP’s task is to iteratively determine correspondences
between the datasets. This is carried out either by determining the closest point (for instance
Besl & McKay 1992, Zhang 1994) in B to a candidate of point cloud A or the closest triangle
(e.g. Chen & Medioni 1991). In order to avoid implausible matches, correspondences need to
lie within a certain radius and a certain search window that is defined around the face normal
of a candidate (Rusinkiewicz & Levoy 2001). It also has to be mentioned that the search for
correspondences is a computationally demanding procedure which can be tackled more efficiently
by using spatial structures such as k-d-trees (Bentley 1975). This crucial step is highlighted
by a large orange rectangle on the right in the figure in which correspondences based on the
shortest distance between A and B are surrounded by dashed rectangles. Note that established
correspondences iteratively change during the algorithm while the relative alignment gradually
improves. This circumstance is necessary as no correspondences are known before the start of
the algorithm. The iterative part of the ICP is repeated until a convergence criterion is satisfied
as depicted by the green rectangles in the illustration. Several strategies are applicable while
the one that comes to mind at first has its origin in adjustment calculation. In order to solve
a non-linear adjustment problem an iterative strategy is applied. Starting with an approximate
parameter vector $X_0$, parameters are iteratively estimated where the current solution vector $\hat{X}_i$
serves as a new approximate parameter vector $X_{0,i}$. This procedure is repeated until $\hat{X}_i$ falls below
a predefined threshold. Depending on the functional model, different units and types may be stored
in the solution vector $\hat{X}_i$, hence Akca (2007b) introduces separate thresholds for rotational and
translational components. Chen & Medioni’s (1991) strategy compares the average squared error
of residuals between two sets of correspondences within two adjacent iterations to a set threshold.
Grant (2013) determines the root mean square error (RMS) of transformed points between two
iterations and a threshold. As divergence may occur between datasets due to insufficient geometric
contrast a maximum number of iterations is also defined.
Figure 2.9: Schematic illustration of the ICP algorithm (left part) and graphical representation of
the process on the right. The depicted models are modified versions of dataset kk719
from the Squid database (Squid 2014).
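The described process chain can be condensed into the following Python sketch of a point-to-point ICP: a fixed candidate subset is drawn from point cloud A, correspondences to B are re-established in every iteration via a k-d tree, the rigid-body transformation is estimated in closed form via an SVD, and iteration stops once the change of the RMS between two iterations falls below a threshold or the maximum number of iterations is reached. Parameter values and the closed-form solution are assumptions of this sketch rather than a reproduction of any particular implementation.

import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    # closed-form rotation R and translation t minimising |R*src + t - dst|^2
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(A, B, n_candidates=1000, max_dist=1.0, max_iter=50, tol=1e-6):
    rng = np.random.default_rng(0)
    cand = A[rng.choice(len(A), size=min(n_candidates, len(A)), replace=False)]
    tree = cKDTree(B)
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_rms = np.inf
    for _ in range(max_iter):
        d, idx = tree.query(cand)             # closest-point correspondences
        keep = d < max_dist                   # reject implausible matches
        if keep.sum() < 3:                    # insufficient geometric contrast / divergence
            break
        R, t = best_rigid_transform(cand[keep], B[idx[keep]])
        cand = cand @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        rms = np.sqrt(np.mean(d[keep] ** 2))
        if abs(prev_rms - rms) < tol:         # convergence criterion on the RMS change
            break
        prev_rms = rms
    return R_total, t_total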
After a general description of the ICP has been given several extensions and variants are discussed
in the following. As measurements in general are always subject to uncertainty a certain positional
accuracy results for the case of laser scanners. In order to address this issue, Bae & Lichti (2008)
proposed a weighted variant of Chen & Medioni’s (1991) algorithm for which geometric
characteristics are derived, namely change of curvature, estimated surface normal vectors and
the variance of angular changes of normals (Bae 2005). Furthermore a method for accuracy
assessment of the resulting registration error is proposed based on the computed spatial variance.
In order to eliminate outliers from the computation of transformation parameters Fischler &
Bolles’ (1981) random sample consensus (RANSAC) method has been applied. An extensive
summary of the procedure can be found in Bae (2006). An overview about several variations
of the ICP can be found in Rusinkiewicz & Levoy (2001) while Nüchter et al. (2010) focus
on different parameterisations of rigid body transformations for registration of point clouds. An
approach that is conceptually very similar to the ICP is the so called least squares matching (LSM)
approach as published by Grün & Akca (2005) and Akca (2007b). Transformation parameters
are estimated by use of a generalised Gauss-Markov model where the squared Euclidean distance
between two surfaces is minimised. The theoretical fundaments of least squares matching have
been developed amongst others by Ackermann (1984), Förstner (1982, 1984), Grün (1984,
1985), Gülch (1984) and Pertl (1984). Instead of trying to iteratively find corresponding points
search surfaces are matched to template patches. It should be mentioned that LSM can also be
used to solve other correspondence issues for instance matching of a 3D-space curve to a 3D-
surface. Grant et al. (2012a, 2012b, 2013) introduced the idea of symmetric correspondence
where point-to-plane correspondences are established on both scans that are to be co-registered.
Hence, the methodology describes an extension to Chen & Medioni’s (1991) approach where
correspondences are determined between a point a from point cloud A and a triangle derived from
point cloud B and vice versa. Another noteworthy extension is the computation of stochastic
measures of all face normals that are used to assemble a full weight matrix. The critical procedure
of outlier removal is conducted by definition of a threshold value comparable to Bae (2006), while
a closer look at this subject will be given in subsection 4.3. Even though a stochastic model
has been applied for weighting purposes a global test of the adjustment (Niemeier 2008 pp. 167)
is not conducted. In case that more than two point clouds need to be registered at a time a so
called global registration problem has to be solved as discussed by Pulli (1999) or Williams &
Bennamoun (2001). Non-rigid registration methods have been proposed by Li et al. (2008a, 2009)
e.g. for motion reconstruction or matching of objects that changed their shape between scans.
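To illustrate the difference to the point-to-point case sketched earlier, the following short Python sketch states the point-to-plane residual that underlies Chen & Medioni (1991) type correspondences and, in weighted form, approaches such as Bae & Lichti (2008) or Grant et al.: the residual of a transformed candidate point against its corresponding surface point is the signed distance along the surface normal. The per-correspondence standard deviations used for weighting are a placeholder for whatever stochastic model is applied.

import numpy as np

def point_to_plane_residuals(A, B, N):
    # A, B: (n, 3) corresponding points; N: (n, 3) unit normals at the points of B
    return np.einsum('ij,ij->i', A - B, N)     # signed distances along the normals

def weighted_sse(A, B, N, sigma):
    # weighted sum of squares with weights 1 / sigma^2 per correspondence
    r = point_to_plane_residuals(A, B, N)
    return np.sum((r / sigma) ** 2)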
2.3.2.3 Registration based upon geometric primitives
An essential problem of TLS is the fact that captured points are usually not repeatedly observed
during a successive scan. Hence, point-to-point correspondences cannot be formed as it is the
case in classical geodesy e.g. for computation of transformation parameters or determination
of deformation vectors. An established and often used technique to overcome this issue is the
identification and use of geometric primitives within scenes such as cylinders, planes or spheres.
While the latter are mostly brought into a scene of interest in the form of artificial targets, the
previously mentioned shapes are very likely to be found in urban areas: planar shapes occur for
instance on houses or roads, while cylindrically shaped objects are for instance pipes in industrial
facilities or lamp posts. Two central advantages of using geometric primitives for registration
purposes should be highlighted. The first one is that reproducible parameters can be derived,
for instance discrete points (e.g. the centre of a sphere) or normals / axes as for the case of cylinders
and spheres. In addition, the amount of data to be stored reduces dramatically as only geometric
parameters of all detected primitives need to be stored and not the entire point cloud. Figure 2.10
illustrates the circumstance of independence against sampling where an artificial target is depicted
by a grey sphere which has been captured from two different viewpoints at a varying sampling rate
(green and red spheres on the left respectively right). It can be seen that the approximated centre
of the sphere, represented by a blue sphere, remains the same. The second advantage is that the
accuracy of the geometric parameters is higher compared to solely measured TLS points due to
the use of highly redundant observations.
Figure 2.10: Impact of varying sampling and scan position onto the centre of a spherical target
(grey transparent sphere). It can be seen that the centre of the sphere (blue sphere)
remains equal despite the fact that acquired points are different.
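The reproducibility of the derived parameters can be illustrated by a simple algebraic least squares sphere fit, sketched below in Python: the adjusted centre depends on the redundant surface points as a whole and therefore barely changes when the sampling of the sphere changes. The sampling in the example is synthetic and noise-free, which is of course an idealisation.

import numpy as np

def fit_sphere(points):
    # algebraic least squares fit; returns centre (3,) and radius from points on the surface
    P = np.asarray(points, dtype=float)
    A = np.hstack((2.0 * P, np.ones((len(P), 1))))
    b = np.sum(P ** 2, axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre, k = x[:3], x[3]
    return centre, np.sqrt(k + centre @ centre)

# two different samplings of the same sphere (centre (1, 2, 3), radius 0.07 m) yield the same centre
rng = np.random.default_rng(1)
for n in (200, 2000):
    d = rng.normal(size=(n, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    print(fit_sphere(np.array([1.0, 2.0, 3.0]) + 0.07 * d))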
A method that applies straight lines respectively axes of cylinders has been proposed by Licht-
enstein & Benning (2009, 2010) respectively Lichtenstein (2011), predominantly for matching
of industrial facilities as well as buildings. As a first step edges or cylinders need to be identified
within the datasets that should be matched. Subsequently, correspondences between straight lines
need to be established, for which four characteristic states can be distinguished, namely skew,
parallel, identical or intersecting lines. Distances and angles between pairs of straight
lines serve as discriminative measures for correspondence determination based on a correspon-
dence matrix. After determination of an approximate rotation (Schwermann 1995) all straight
line pairs are aligned nearly parallel. Subsequently exact transformation parameters are estimated
based on the corresponding centre points of the shortest distance between two line pairs, as depicted in
figure 2.11. A contribution by von Hansen et al. (2008) applies extracted straight lines in order
to register aerial and terrestrial point clouds.
Figure 2.11: Sequential determination of transformation parameters between two datasets based
on straight lines. After coarse rotation of lines 1-2 and 3-4 the distance d between
the centre points (red and blue circle) is minimised (Lichtenstein & Benning
2010).
A far more popular procedure uses planes instead of straight lines and bears the highest accuracy
potential due to the very high redundancy of planes, which are typically available in urban
environments. Several scientific articles have been published on the subject (e.g. Dold 2005,
He 2005, von Hansen 2006, von Hansen et al. 2006, Dold & Brenner 2006, Pathak et
al. 2009) while one commercial implementation called SCANTRA is available on the market
(Technet 2014) that correlates to the contributions of Rietdorf (2005) and Gielsdorf et al.
(2008). Rabbani & van den Heuvel (2005) proposed an algorithm that detects planes, cylinders
and spheres within an industrial environment for registration purposes. A sound description of
the general methodology for plane based matching is given by Dold (2010). The first step of
the procedure converts given point clouds into a raster-wise representation exploiting the periodical
sampling process of laser scanners, which leads to a matrix where each element contains
a 3D point. Subsequently, a filter mask is run over the matrix as in image processing where local
plane fitting is conducted. Based on the standard deviation of each plane a decision is drawn
whether a planar region has been detected or not. Then a region growing algorithm is applied that
automatically sets seed points and determines connected planar regions as depicted in the left part
of figure 2.12. Optionally, characteristic properties can be assigned to each detected planar segment
in order to simplify the process of correspondence search. In order to avoid perspective falsification
of characteristic properties depending on the chosen viewpoint during data acquisition, a geometric
conversion is undertaken as illustrated in figure 2.12 on the right.
Figure 2.12: Outcome of the plane segmentation process (left) and geometric conversion of ex-
tracted planes (right) for feature characterisation (Dold 2010 pp. 61)
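The decision whether the filter mask currently covers a planar region can be sketched as follows in Python: a plane is fitted to the points under the mask by a principal component analysis and the standard deviation of the point-to-plane residuals is compared to a threshold, which would qualify the location as a seed for region growing. The threshold value is purely illustrative and not taken from the cited work.

import numpy as np

def is_planar(window_points, sigma_max=0.005):
    # window_points: (n, 3) points under the filter mask; sigma_max in metres
    P = np.asarray(window_points, dtype=float)
    centred = P - P.mean(axis=0)
    _, s, Vt = np.linalg.svd(centred, full_matrices=False)
    normal = Vt[-1]                                    # direction of smallest variance
    residuals = centred @ normal
    sigma = np.sqrt(np.sum(residuals ** 2) / max(len(P) - 3, 1))
    return sigma <= sigma_max, normal, sigma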
For this geometric conversion (figure 2.12, right), planes are rotated in such a way that the according
face normals coincide with one axis of an arbitrary coordinate system. Then properties are computed
that help to discriminate segmented patches, such as:
Circumference of a convex hull around a segment.
Encircling rectangle.
Area of a planar patch.
Average intensity value of segmented points.
Optionally: Average value of RGB values in case that a point cloud is textured.
Now that planes as well as descriptive properties have been computed, correspondences need to
be established, which is the most sophisticated part of the approach. Hence, the reader is
referred to Dold (2010 pp. 73) where several techniques are extensively discussed.
The problem of finding transformation parameters based on the now known correspondences between
planes from different datasets is solved in a sequential fashion – first determination and appli-
cation of the rotation, then the translation – as approximate values and an iterative process would
otherwise be required. The determination of rotational transformation parameters is conducted based
on the planes’ face normals for which several approaches can be applied e.g. Sanso (1973), Horn
(1987) or Eggert et al. (1997). After application of rotation onto one dataset translation compo-
nents need to be calculated. Therefore at least three planar segments are required that assemble
a space in order to solve translational fractions in three directions. Additionally the assumption is
made that after application of the rotation corresponding planes have identical face normals apart from
slight deviations caused by the measurement process. Finally, the distances between corresponding
planes are minimised within a least squares adjustment. A schematic description of the procedure
is illustrated in figure 2.13.
Figure 2.13: Schematic description of plane based matching (based on Dold 2010)
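The sequential estimation can be summarised by the following Python sketch, assuming that plane correspondences are already known and that plane i is written as n_i · x = d_i with unit normal n_i: the rotation follows in closed form from the corresponding face normals (here via an SVD solution in the spirit of the cited closed-form methods), the translation afterwards from the plane distances of at least three planes that span space. The toy values at the end are illustrative.

import numpy as np

def rotation_from_normals(normals_src, normals_dst):
    # closed-form rotation R such that R @ n_src best matches n_dst (corresponding unit normals)
    H = normals_src.T @ normals_dst
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T

def translation_from_planes(normals_dst, d_src, d_dst):
    # after applying the rotation: n_dst . t = d_dst - d_src for every corresponding plane pair
    t, *_ = np.linalg.lstsq(normals_dst, d_dst - d_src, rcond=None)
    return t

# three orthogonal planes suffice to fix all three translation components
n_dst = np.eye(3)
print(translation_from_planes(n_dst, np.array([0.0, 0.0, 0.0]), np.array([1.0, 2.0, 3.0])))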
2.3.3 Assessment and categorisation of (Geo-) referencing and matching procedures
Several procedures for co-registration and georeferencing of point clouds have already been dis-
cussed in section 2.3, hence a comparison and assessment should reveal the most suitable one for
the stated problem of carrying out deformation monitoring based on TLS observations. Important
aspects for the evaluation are the following prerequisites that are or are not required by
the according methods, where the first keyword is used as an abbreviation in table 2.1 which gathers
the outcome:
Sensors: Additional sensors are needed for geo-referencing.
Access: The area of interest needs to be entered.
Content: The scenery has to contain certain information.
The leftmost column of the table below contains all methodologies of interest which are coarsely
structured into georeferenced approaches abbreviated by (GEO) and co-registration approaches
(CR).
Table 2.1: Assessment of referencing methods
Methodology Sensors Access Content
GEO: Use of artificial targets Yes Yes Artificial targets
GEO: Direct Georeferencing Yes No None
GEO: Kinematic data acquisition Yes Yes None
CR: Intensity based registration No No Radiometric contrast: e.g. features on a surface
CR: Surface based registration No No Geometric contrast: local geometric variations
CR: Use of geometric primitives No No Geometric primitives: urban scenarios
As the motivation in this thesis is to develop a potentially versatile approach, georeferenced ap-
proaches are ruled out as additional sensors and / or access to the area of interest are required.
Furthermore, the accuracy of the required referencing sensors acts accumulatively in combination with
the TLS and hence decreases the achievable quality. Consequently, co-registration methods bear the
bigger potential for the stated problem. However, intensity based registration requires the existence
of unique and distinguishable features, a prerequisite which may not be satisfied in natural areas.
The same argument can be applied to registration methods that use geometric primitives as
an input, which are well suited for use in artificial scenes. Thus, these strategies are not versatile
enough to be used in various cases of deformation monitoring, which is why surface based registra-
tion is applied in this thesis. However, it has to be mentioned that a perfect co-registration algorithm
should be capable of processing all possible forms of information as an input in order to establish an
integrated matching approach. This procedure may also incorporate several additional radiometric
information layers in the future that prospective instruments such as multi spectral laser scanners
provide (Hemmleb et al. 2005, Wujanz 2009, Hakala et al. 2012).
2.3.4 Comparative analysis of several registration algorithms
In this subsection, different algorithms and commercial products, namely Raindrop Geomagic,
Leica Cyclone, GFaI Final Surface and the academic 4PCS-algorithm, are compared concerning
their implemented quality assurance measures as well as their behaviour against deformed areas.
Therefore, two point clouds with nearly complete overlap fea-
turing non-rigid body deformation which is subsequently denoted as “snow” have been employed.
The “snow” dataset features two epochs of a roof section, see figure 2.14. A snow mantle of roughly
16 cm can be found on the roof in the first dataset while most of the snow has melted when the
second point cloud has been captured. In order to provide a reference set of transformation pa-
rameters all “deformed” areas that are covered by snow have been removed before the registration
process has been started in Final Surface. The matching error of the reference set added up to
3.9 mm based on 2509 points which represented the best 50% of the matching points. Table 2.2
shows the reference values for the second dataset below, where t_x, t_y, t_z denote translations in three
cardinal directions and r_x, r_y, r_z three Euler rotations around the according axes. The major aim
of this experiment is to determine the stability of all algorithms against datasets that contain
outliers in form of deformation. Hence, the “snow” scene has been processed including the snow
cover to provoke potential effects. Furthermore, the impact of tunable parameters on the final
result respectively their quality measures should be determined if possible. Note that parts of this
subsection have been published in Wujanz (2012).
Table 2.2: Reference transformation parameters
t_x [m]   t_y [m]   t_z [m]   r_x [°]   r_y [°]   r_z [°]
-0.007    0.046     0.125     0.203     -0.145    10.633
2.3.4.1 Raindrop Geomagic Studio 12
Raindrop’s Geomagic Studio is able to perform transformations by using geometric primitives and
a surface based matching algorithm. The only implemented quality measure is the average distance
between two datasets while a colour coded inspection map can be computed if one of the point
clouds has been converted into a meshed surface representation. The outcome of the surface based
matching can be influenced by the sample size and a maximum tolerance setting. It is worth
mentioning that no initial alignment of the datasets is needed which works in most cases. Figure
2.14 shows a colour coded visualisation of computed deformations where the units of the colour
bar are given in metres. It can clearly be seen that the dataset on the left side, which has been
computed by applying the reference parameters, shows, as expected, only deformations on the roof
(blue shades) caused by the snow. The yellow patterns in the windows are affected by blinds that
have been lowered in between epochs. A look at the right half of the image would lead to the
conclusion that the wall as well as the roof would lean forward as indicated by the colour coding.
Figure 2.14: Colour coded visualisation of deformations [m]: based on reference transformation
parameters (left) and result 2 derived by Geomagic (right)
Table 2.3 gathers all produced results where the first column depicts the according results. Result
1 has been derived with default settings, whereas result 2 has been processed by applying an
implemented “automatic deviator eliminator” that marginally reduced the average error. After
setting the deviator eliminator down to 0, which actually means that only points are used that
perfectly satisfy the current set of transformation parameters, the computed average error was
oddly enough larger than the deviations computed by applying default settings as depicted by
result 3.
Table 2.3: Transformation parameters computed with Geomagic
Result   t_x [m]   t_y [m]   t_z [m]   r_x [°]   r_y [°]   r_z [°]   Average error
1        -0.067    -0.193     0.138    -0.087     0.029    10.912    72.9
2        -0.001     0.006     0.205     0.345     0.042    10.641    71.2
3        -0.071    -0.220     0.130    -0.065    -0.005    10.903    73.6
2.3.4.2 Leica Cyclone 7.1
Leica’s Cyclone software is capable of performing target based registrations, surface based regis-
trations, using geometric primitives as well as a combination of all mentioned approaches. Imple-
mented quality measures are a histogram depicting the deviations, a function to colour the regis-
tered point clouds differently in order to check the result manually as well as a report that gathers
transformation parameters and a statistical description of the residuals. Control parameters of
the programme are the number of points that are used to compute the transformation parameters
whose default setting is 3% of the points within the overlapping area as well as a maximum search
distance. Figure 2.15 illustrates the impact of the maximum search distance (horizontal axis) on
the average deviations and their according root mean square errors (RMS). As expected, both measures
decreased in general with declining search distance.
Figure 2.15: Influence of maximum search distance onto average deviations (dark grey) and their
according RMS (light grey)
Table 2.4 gathers the according transformation parameters for three selected settings, where the
first column depicts the search radius and the last one the average error, both in mm. Nevertheless,
the allegedly most accurate result is not the closest one to the set of reference parameters. In fact