J. Geod. Sci. 2018; 8:98–114
Research Article Open Access
R. Lehmann* and M. Lösler
Hypothesis testing in non-linear models exemplified by the planar coordinate transformations
https://doi.org/10.1515/jogs-2018-0009
Received February 13, 2018; accepted June 25, 2018
Abstract: In geodesy, hypothesis testing is applied to a wide area of applications, e.g. outlier detection, deformation analysis or, more generally, model optimisation. Due to the possible far-reaching consequences of a decision, high statistical test power of such a hypothesis test is needed. The Neyman–Pearson lemma states that under strict assumptions the often-applied likelihood ratio test has the highest statistical test power and may thus fulfill the requirement. The application, however, is made more difficult as most of the decision problems are non-linear and, thus, the probability density function of the parameters does not belong to the well-known set of statistical test distributions. Moreover, the statistical test power may change if linear approximations of the likelihood ratio test are applied.
The influence of the non-linearity on hypothesis testing is investigated and exemplified by the planar coordinate transformations. Whereas several mathematically equivalent expressions are conceivable to evaluate the rotation parameter of the transformation, the decisions and, thus, the probabilities of type 1 and 2 decision errors of the related hypothesis testing are unequal to each other. Based on Monte Carlo integration, the effective decision errors are estimated and used as a basis of valuation for linear and non-linear equivalents.

Keywords: Hypothesis testing; Likelihood ratio test; Monte Carlo integration; Non-linear model; Coordinate transformation
*Corresponding Author: R. Lehmann: University of Applied Sci-
ences Dresden, Faculty of Spatial Information, E-mail:
M. Lösler: Frankfurt University of Applied Sciences, Faculty of
Architecture, Civil Engineering and Geomatics
1 Introduction
Hypothesis testing plays an important role in the framework of parameter estimation. In the context of outlier detection, hypothesis testing is used to detect and to identify implausible observations (e.g. Lehmann and Lösler 2016, Klein et al. 2017). In congruence analysis, hypothesis testing is introduced to distinguish stable points or areas from unstable parts of an epochally observed network (e.g. Velsink 2015, Lehmann and Lösler 2017). To find an adequate number of model parameters, e.g. in the framework of reverse engineering, hypothesis testing indicates the benefit of a more complex model versus a simplified model (e.g. Ahn 2005).
In geodesy, the likelihood ratio (LR) test is most often applied (Koch 1999, Teunissen 2000). It is based on the Neyman–Pearson lemma, which demonstrates that under various assumptions such a test has the highest statistical test power (Neyman and Pearson 1933). In practice, most of the decision problems are non-linear and the underlying likelihood function must be maximized iteratively, e.g. by ordinary least-squares techniques, with the risk of finding only a local maximum. Moreover, the often-used LR test in the linearized model deteriorates the decision due to a potential loss of statistical test power. Finally, the true probability density function of such a test does not belong to the well-known class of statistical test distributions, and therefore critical values cannot be computed with standard statistical functions. To derive the true probability density function as well as corresponding critical values, a Monte Carlo integration can be carried out (see e.g. Lehmann 2012).
Estimation in non-linear geodetic models has been widely investigated. Teunissen (1985) found that two types of non-linearity exist: the first is inherent in the problem and manifests itself in the non-linearity of the model operator. The second is perhaps introduced by a parametrization, which can even make an inherently linear problem non-linear. This is the case when the planar four-parameter transformation is parameterized by rotation angle and scale.
This investigation focuses on the inuence of the non-
linearity on hypothesis testing exemplied by the planar
coordinate transformations. Here, several mathematical
equivalent expressions are conceivable to evaluate the ro-
tation parameter of the transformation by hypothesis test-
ing. Depending on the degree of non-linearity, the eec-
tive αcan dier in comparison to its usually used χ2equiv-
alent. The planar geodetic coordinate transformation is a
good example to study non-linear eects in geodetic mod-
els, because under standard assumptions on the covari-
ance matrix it admits an analytical solution (Teunissen
1985, 1986). Moreover, the planar coordinate transforma-
tions have a wide range of applications in geodesy.
The paper is organized as follows: After briey in-
troducing the non-linear Gauss–Markov model, we focus
on the LR test as a general decision method. Then the
least squares solutions of planar coordinate transforma-
tions are introduced. As an example for hypothesis testing
in non-linear models, we set up a test problem for the rota-
tion angle and solve it by various dierent applications of
the LR test. Finally, we compare these dierent solutions
in terms of decision errors, for which the method of Monte
Carlo integration is used.
2 Hypothesis test in the non-linear Gauss–Markov model

Throughout this paper, true values of quantities will be denoted by a tilde and estimates by a hat.
We start from the non-linear Gauss–Markov model (GMM)
$$Y = A(\tilde{X}) + e \quad (2.1)$$
where $Y$ is an $n$-vector of observations and $\tilde{X}$ is a $u$-vector of unknown true model parameters. $A$ is a known non-linear operator mapping from the $u$-dimensional parameter space to the $n$-dimensional observation space. $e$ is an unknown random $n$-vector of normally distributed observation errors. The associated stochastic model reads
$$e \sim N(0, \sigma^2 P^{-1}) \quad (2.2)$$
$P$ is a known positive definite $n \times n$ matrix of weights (weight matrix). $\sigma^2$ is the a priori variance factor, which may be either known or unknown. Estimates $\hat{X}$, $\hat{Y}$ of the unknown true values $\tilde{X}$, $\tilde{Y} = Y - e$ are desired.
In geodesy, a decision problem is generally posed as a statistical hypothesis test. Opposing the special model represented by the GMM Eqs. (2.1), (2.2) augmented by non-linear equality constraints
$$B(\tilde{X}) = b \quad (2.3)$$
to a general model represented by the GMM without equality constraints is equivalent to opposing the null hypothesis
$$H_0: B(\tilde{X}) = b \quad (2.4)$$
to the alternative hypothesis
$$H_A: B(\tilde{X}) \neq b. \quad (2.5)$$
The standard solution of the testing problem in classical statistics goes as follows (e.g. Tanizaki 2004 p. 49):
1. A test statistic $T(Y)$ is introduced, which is known to assume extreme values if $H_0$ does not hold true.
2. Under the condition that $H_0$ holds true, the probability distribution of $T(Y)$ is derived, represented by a cumulative distribution function (CDF) $F(T|H_0)$.
3. A probability of type 1 decision error $\alpha$ (significance level) is suitably defined (say 0.01 or 0.05 or 0.10), see Fig. 1.
4. For one-sided tests, a critical value $c$ is derived by $c = F^{-1}(1-\alpha|H_0)$, where $F^{-1}$ denotes the inverse CDF (also known as the quantile function) of $T|H_0$. (For two-sided tests two critical values are needed, but this case does not show up in this investigation.)
5. The empirical value of the test statistic $T(Y)$ is computed from the given observations $Y$. If $T(Y) > c$ then $H_0$ must be rejected, otherwise we fail to reject $H_0$ (a minimal numerical sketch of this decision rule is given below).
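For illustration only, the decision rule of steps 4 and 5 can be sketched as follows, assuming a test statistic that follows a $\chi^2(m)$ distribution under $H_0$; the code is not part of the original study, and the function name and numbers are chosen freely.

```python
# Hedged sketch of steps 4-5, assuming T | H0 ~ chi2(m); names and numbers are illustrative.
from scipy.stats import chi2

def decide(T_obs, m, alpha=0.05):
    """Reject H0 if the empirical test statistic exceeds the critical value."""
    c = chi2.ppf(1.0 - alpha, df=m)   # step 4: c = F^-1(1 - alpha | H0)
    return T_obs > c                  # step 5: one-sided decision

# Example: T(Y) = 5.2 with m = 1 constraint and alpha = 0.05 (c = 3.84) -> reject H0
print(decide(5.2, m=1))
```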
In principle, we are free to choose a test statistic. Even heuristic choices like
$$T(Y) := \left\|B\!\left(\hat{X}\right) - b\right\| \quad (2.6)$$
with some suitable norm $\|\cdot\|$ are conceivable, although the statistical test power (the probability of rejecting $H_0$ when it is false) of such a test might be non-optimal or even poor.
3 The likelihood ratio test

In geodesy, we most often apply the likelihood ratio (LR) test (e.g. Tanizaki 2004 p. 54). The test statistic of the LR test reads
$$T_{LR}(Y) := \frac{\max\left\{L\!\left(X, \sigma^2|Y\right) : B(X) = b\right\}}{\max L\!\left(X, \sigma^2|Y\right)} \quad (3.1)$$

Figure 1: Probability density functions $f$ of the test statistic $T$ under $H_0$ and $H_A$, and the decision errors $\alpha$, $\beta$.
where $L\!\left(X, \sigma^2|Y\right)$ denotes the likelihood function of the GMM, to be maximized with no restriction (denominator) and with the restriction $B(X) = b$ (numerator). For the GMM Eqs. (2.1), (2.2) the likelihood function reads
$$L\!\left(X, \sigma^2|Y\right) = \det\!\left(2\pi\sigma^2P^{-1}\right)^{-0.5}\exp\!\left(-\frac{1}{2\sigma^2}\left(Y - A(X)\right)^TP\left(Y - A(X)\right)\right) \quad (3.2)$$
It is well known that maximizing $L\!\left(X, \sigma^2|Y\right)$ is equivalent to minimizing the least squares error functional (e.g. Koch 1999 p. 161f, Lösler et al. 2017)
$$\Omega(X) := \left(Y - A(X)\right)^TP\left(Y - A(X)\right) \quad (3.3)$$
either with constraints $B(X) = b$ or without constraints. In the first case, $\Omega$ is augmented by the Lagrange term
$$\Omega'(X, k) = \Omega(X) + 2k^T\left(B(X) - b\right) \quad (3.4)$$
where $k$ is the vector of Lagrange multipliers, in geodesy also known as correlates.
To simplify matters, we will restrict the derivation to the case of a known a priori variance factor $\sigma^2$. In this case, (3.1) can be expressed as
$$T_{LR}(Y) = \frac{\exp\!\left(-\frac{\min\Omega'(X,k)}{2\sigma^2}\right)}{\exp\!\left(-\frac{\min\Omega(X)}{2\sigma^2}\right)} = \exp\!\left(-\frac{\min\Omega'(X,k) - \min\Omega(X)}{2\sigma^2}\right) \quad (3.5)$$
Moreover, we may replace $T_{LR}(Y)$ by the fully equivalent test statistic
$$T(Y) := -2\log T_{LR}(Y) = \frac{\min\Omega'(X,k) - \min\Omega(X)}{\sigma^2} \quad (3.6)$$
If $T(Y) > c$ with a properly chosen critical value $c$, then $H_0$ must be rejected, otherwise we fail to reject $H_0$. Note that all these derivations are fully valid even if $A$ or $B$ are non-linear operators.
In the case that $A$ and $B$ are both linear operators, we obtain the expression (e.g. Lehmann and Neitzel 2013)
$$T(Y) = \hat{w}^T\Sigma_{\hat{w}}^{-1}\hat{w} \quad (3.7)$$
where
$$\hat{w} := B\hat{X} - b \quad (3.8)$$
is the vector of estimated misclosures and $\Sigma_{\hat{w}}$ is the related covariance matrix. $\hat{X}$ denotes the minimizer of $\Omega(X)$ in Eq. (3.3), known as the least squares estimate of $X$. Equation (3.7) can be seen as a special case of Eq. (2.6). If $\sigma^2$ is known and Eq. (2.2) holds true, the test statistic Eq. (3.7) follows the distributions
$$T(Y|H_0) \sim \chi^2(m) \quad (3.9a)$$
$$T(Y|H_A) \sim \chi^2(m, \Lambda) \quad (3.9b)$$
with the non-centrality parameter
$$\Lambda = \tilde{w}^T\Sigma_{\hat{w}}^{-1}\tilde{w} \quad (3.10)$$
$m$ denotes the number of independent constraints. The vector of true misclosures $\tilde{w} = B\tilde{X} - b$ and hence also $\Lambda$ are naturally unknown.
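As an aside not contained in the paper, the linear case of Eqs. (3.7)-(3.10) can be evaluated with the central and non-central $\chi^2$ distributions; the following sketch computes the resulting test power for a given vector of true misclosures, with all names chosen by us.

```python
# Hedged sketch: power of the LR test in the linear model, Eqs. (3.7)-(3.10); names are ours.
import numpy as np
from scipy.stats import chi2, ncx2

def linear_lr_power(w_true, Sigma_w, alpha=0.05):
    """1 - beta of the test w_hat^T Sigma_w^-1 w_hat for m = len(w_true) constraints."""
    m = len(w_true)
    Lam = float(w_true @ np.linalg.solve(Sigma_w, w_true))   # non-centrality, Eq. (3.10)
    c = chi2.ppf(1.0 - alpha, df=m)                          # critical value from chi2(m)
    return ncx2.sf(c, df=m, nc=Lam)                          # Pr(T > c | HA), Eq. (3.9b)

# Example: a single misclosure of three times its standard deviation
print(linear_lr_power(np.array([3.0]), np.array([[1.0]])))   # approx. 0.85
```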
All established tests in geodesy belong to the class of LR tests. The rationale of these tests is provided by the famous Neyman–Pearson lemma (Neyman and Pearson 1933), which demonstrates that under various assumptions such a test has the highest statistical test power among all competitors. It is often applied even if these assumptions hold only approximately or not exactly in practice, because we know that the power is still larger than for rival tests (Teunissen 2000, Kargoll 2012, Lehmann and Voß-Böhme 2017).
In truly non-linear models, we generally encounter three special problems:
1. The likelihood function Eq. (3.2) can only be maximized iteratively, with the danger of finding only a local maximum. (Global optimization methods, which promise to find also the global maximum, are not yet widely applied practically because of the considerable computational workload for multidimensional problems.)
2. Test statistic Eq. (3.7) is only an approximation of the true LR test statistic Eq. (3.6), because the likelihood ratio is taken in the linearized GMM.
3. The probability density function (PDF) of $T(Y)$ or some equivalent of it does not belong to the well-known set of statistical test distributions ($t$, $\chi^2$, $F$ etc.), such that the critical values must be computed numerically.
In the next sections, we will illustrate some consequences of these problems. In the conclusions, we will return to these points.
4 The least squares solution of the three-parameter transformation
Figure 2: Planar parameter transformation
In a plane consider two Cartesian reference frames $x, y$ and $X, Y$, which are related by translation and rotation, such that an arbitrary point $P$ has coordinates $x_P, y_P, X_P, Y_P$ satisfying the non-linear transformation equations
$$X_P = X_0 + x_P\cos\epsilon - y_P\sin\epsilon$$
$$Y_P = Y_0 + x_P\sin\epsilon + y_P\cos\epsilon \quad (4.1)$$
with transformation parameters $X_0, Y_0, \epsilon$, see Fig. 2. Related equations can be formulated for the opposite transformation direction.
We start from a set of $N$ points having observed coordinates
$$x_1, y_1, \ldots, x_N, y_N, X_1, Y_1, \ldots, X_N, Y_N \quad (4.2)$$
in both frames. The problem is to nd the best estimates for
X0,Y0,ϵin the least squares sense, also known as the least
squares solution of the three-parameter transformation.
In the following, we restrict ourselves to the case that
the coordinates of one system are non-stochastic (error-
free) xed quantities. Without restriction of generality, the
error-free coordinates are denoted as x1,y1,. . . ,xN,yN.
Moreover, we assume that for each pair of observations
Xi,Yi, both Xiand Yihave the same weight pi. This GMM
reads
$$Y = \begin{pmatrix} X_1 \\ Y_1 \\ \vdots \\ X_N \\ Y_N \end{pmatrix}, \quad \tilde{X} = \begin{pmatrix} \tilde{\epsilon} \\ \tilde{X}_0 \\ \tilde{Y}_0 \end{pmatrix}, \quad A\!\left(\tilde{X}\right) = \begin{pmatrix} \tilde{X}_0 + x_1\cos\tilde{\epsilon} - y_1\sin\tilde{\epsilon} \\ \tilde{Y}_0 + x_1\sin\tilde{\epsilon} + y_1\cos\tilde{\epsilon} \\ \vdots \\ \tilde{X}_0 + x_N\cos\tilde{\epsilon} - y_N\sin\tilde{\epsilon} \\ \tilde{Y}_0 + x_N\sin\tilde{\epsilon} + y_N\cos\tilde{\epsilon} \end{pmatrix}, \quad P = \begin{pmatrix} p_1 & 0 & \cdots & 0 & 0 \\ 0 & p_1 & & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & & p_N & 0 \\ 0 & 0 & \cdots & 0 & p_N \end{pmatrix} \quad (4.3)$$
This setting is one of the rare cases where an analytical solution exists. For the sake of simplicity, we assume that $\sigma^2$ is chosen such that the weights fulfill $\sum p_i = 1$.
For the sake of compact notation, we introduce in either coordinate system the following abbreviations:
1. the weighted barycentres
$$X_* := \sum_{i=1}^N p_iX_i, \quad Y_* := \sum_{i=1}^N p_iY_i, \quad x_* := \sum_{i=1}^N p_ix_i, \quad y_* := \sum_{i=1}^N p_iy_i \quad (4.4)$$
2. the coordinates related to the barycentres as origins
$$\Delta X_i := X_i - X_*, \quad \Delta Y_i := Y_i - Y_*, \quad \Delta x_i := x_i - x_*, \quad \Delta y_i := y_i - y_* \quad (4.5)$$
3. the moments of inertia related to the barycentres
$$h := \sum_{i=1}^N p_i\left(\Delta x_i^2 + \Delta y_i^2\right) = -x_*^2 - y_*^2 + \sum_{i=1}^N p_i\left(x_i^2 + y_i^2\right) \quad (4.6a)$$
$$H := \sum_{i=1}^N p_i\left(\Delta X_i^2 + \Delta Y_i^2\right) = -X_*^2 - Y_*^2 + \sum_{i=1}^N p_i\left(X_i^2 + Y_i^2\right) \quad (4.6b)$$
4. the auxiliary terms
$$c := \sum_{i=1}^N p_i\left(\Delta X_i\Delta x_i + \Delta Y_i\Delta y_i\right) = \sum_{i=1}^N p_i\left(X_i\Delta x_i + Y_i\Delta y_i\right) \quad (4.7a)$$
$$s := \sum_{i=1}^N p_i\left(\Delta Y_i\Delta x_i - \Delta X_i\Delta y_i\right) = \sum_{i=1}^N p_i\left(Y_i\Delta x_i - X_i\Delta y_i\right) \quad (4.7b)$$
In GMM Eq. (4.3) the non-linear least squares solution for $\epsilon, X_0, Y_0$ reads (see appendix 1)
$$\hat{\epsilon} = \arctan\frac{s}{c} \quad (4.8a)$$
$$\hat{X}_0 = X_* - x_*\cos\hat{\epsilon} + y_*\sin\hat{\epsilon} = X_* - \frac{x_*c - y_*s}{\sqrt{c^2+s^2}} \quad (4.8b)$$
$$\hat{Y}_0 = Y_* - x_*\sin\hat{\epsilon} - y_*\cos\hat{\epsilon} = Y_* - \frac{x_*s + y_*c}{\sqrt{c^2+s^2}} \quad (4.8c)$$
$$\Omega\!\left(\hat{X}\right) = h + H - 2\sqrt{c^2+s^2} \quad (4.8d)$$
These formulas do not directly contain the observations $Y$, but only the statistics $c, s, X_*, Y_*$. All other quantities are fixed. Therefore, the vector $(c, s, X_*, Y_*)^T$ is a sufficient statistic of the problem. Moreover, it is normally distributed because of the linear relationship
$$Z := \begin{pmatrix} c \\ s \\ X_* \\ Y_* \end{pmatrix} = \begin{pmatrix} \Delta x_1 & \Delta y_1 & \cdots & \Delta x_N & \Delta y_N \\ -\Delta y_1 & \Delta x_1 & \cdots & -\Delta y_N & \Delta x_N \\ 1 & 0 & \cdots & 1 & 0 \\ 0 & 1 & \cdots & 0 & 1 \end{pmatrix} PY \quad (4.9)$$
The covariance matrix of $Z$ can be derived by covariance propagation applied to Eq. (4.9), using $\Sigma_Y = \sigma^2P^{-1}$:
$$\Sigma_Z = \sigma^2\begin{pmatrix} h & 0 & 0 & 0 \\ 0 & h & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \quad (4.10)$$
Thus, $c, s, X_*, Y_*$ are even independent random variables. For the expectations we obtain
$$E\{c\} = \sum_{i=1}^N p_i\left(E\{X_i\}\Delta x_i + E\{Y_i\}\Delta y_i\right) = \sum_{i=1}^N p_i\left[\left(\tilde{X}_0 + x_i\cos\tilde{\epsilon} - y_i\sin\tilde{\epsilon}\right)\Delta x_i + \left(\tilde{Y}_0 + x_i\sin\tilde{\epsilon} + y_i\cos\tilde{\epsilon}\right)\Delta y_i\right]$$
$$= \sum_{i=1}^N p_i\left[\left(\Delta x_i\cos\tilde{\epsilon} - \Delta y_i\sin\tilde{\epsilon}\right)\Delta x_i + \left(\Delta x_i\sin\tilde{\epsilon} + \Delta y_i\cos\tilde{\epsilon}\right)\Delta y_i\right] = h\cos\tilde{\epsilon} \quad (4.11a)$$
$$E\{s\} = \sum_{i=1}^N p_i\left(E\{Y_i\}\Delta x_i - E\{X_i\}\Delta y_i\right) = h\sin\tilde{\epsilon} \quad (4.11b)$$
$$E\{X_*\} = \sum_{i=1}^N p_iE\{X_i\} = \sum_{i=1}^N p_i\left(\tilde{X}_0 + x_i\cos\tilde{\epsilon} - y_i\sin\tilde{\epsilon}\right) = \tilde{X}_0 + x_*\cos\tilde{\epsilon} - y_*\sin\tilde{\epsilon} \quad (4.11c)$$
$$E\{Y_*\} = \sum_{i=1}^N p_iE\{Y_i\} = \sum_{i=1}^N p_i\left(\tilde{Y}_0 + x_i\sin\tilde{\epsilon} + y_i\cos\tilde{\epsilon}\right) = \tilde{Y}_0 + x_*\sin\tilde{\epsilon} + y_*\cos\tilde{\epsilon} \quad (4.11d)$$
Starting from an initial guess for $\epsilon, X_0, Y_0$, the solution Eq. (4.8) can also be obtained as the limit of a sequence of linearized GMMs. Despite the non-linearity of the GMM, we obtain a unique solution for the parameter estimation problem. Thus, there is no danger of finding only a local minimum here.
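A compact numerical transcription of the closed-form solution Eqs. (4.4)-(4.8) may look as follows. It is a sketch under the assumptions of Eq. (4.3), i.e. error-free source coordinates and $\sum p_i = 1$; the function and variable names are ours, and np.arctan2 is used merely to resolve the quadrant of $\arctan(s/c)$.

```python
# Sketch of the closed-form three-parameter solution (4.4)-(4.8); naming is ours.
import numpy as np

def three_parameter_solution(x, y, X, Y, p):
    x_s, y_s, X_s, Y_s = p @ x, p @ y, p @ X, p @ Y       # weighted barycentres (4.4)
    dx, dy, dX, dY = x - x_s, y - y_s, X - X_s, Y - Y_s   # barycentric coordinates (4.5)
    h = p @ (dx**2 + dy**2)                               # moments of inertia (4.6a)
    H = p @ (dX**2 + dY**2)                               # (4.6b)
    c = p @ (dX*dx + dY*dy)                               # auxiliary terms (4.7a)
    s = p @ (dY*dx - dX*dy)                               # (4.7b)
    eps = np.arctan2(s, c)                                # rotation estimate (4.8a)
    X0 = X_s - x_s*np.cos(eps) + y_s*np.sin(eps)          # translation estimates (4.8b)
    Y0 = Y_s - x_s*np.sin(eps) - y_s*np.cos(eps)          # (4.8c)
    Omega = h + H - 2.0*np.hypot(c, s)                    # minimum of the error functional (4.8d)
    return eps, X0, Y0, Omega

# Example: three points, true eps = 0.1 rad, X0 = 5, Y0 = -2, equal weights, no noise
x = np.array([0.0, 10.0, 0.0]); y = np.array([0.0, 0.0, 10.0])
X = 5.0 + x*np.cos(0.1) - y*np.sin(0.1)
Y = -2.0 + x*np.sin(0.1) + y*np.cos(0.1)
print(three_parameter_solution(x, y, X, Y, np.full(3, 1/3)))   # ~ (0.1, 5.0, -2.0, 0.0)
```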
5 The least squares solution of the four-parameter transformation
In extension of Eq. (4.1) we introduce a scale parameter $\mu$ such that the new observation equations read
$$X_P = X_0 + \mu\left(x_P\cos\epsilon - y_P\sin\epsilon\right)$$
$$Y_P = Y_0 + \mu\left(x_P\sin\epsilon + y_P\cos\epsilon\right) \quad (5.1)$$
By the substitution $a := \mu\cos\epsilon$, $o := \mu\sin\epsilon$ we obtain the linear representation
$$X_P = X_0 + x_Pa - y_Po$$
$$Y_P = Y_0 + x_Po + y_Pa \quad (5.2)$$
with the parameter vector
$$\tilde{X} = \begin{pmatrix} \tilde{a} \\ \tilde{o} \\ \tilde{X}_0 \\ \tilde{Y}_0 \end{pmatrix} \quad (5.3)$$
Hypothesis testing in non-linear models exemplied by the planar coordinate transformations |103
The least squares solution of this linear GMM is simple and well known:
^
a=c
h,^
o=s
h(5.4a)
^
X0=X*x*·^
a+y*·^
o(5.4b)
^
Y0=Y*x*·^
oy*·^
a(5.4c)
^
X=Hc2+s2
h(5.4d)
where h,H,c,sare as dened in Eqs. (4.6a,b), (4.7a,b). This solution permits an estimate of the rotation angle and scale
parameter:
^
ϵ= arctan ^
o
^
a= arctan s
c(5.5a)
^
µ=^
a2+^
o2=c2+s2
h(5.5b)
See also appendix 3. Note that Eq. (5.5a) coincides with Eq. (4.8a).
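The linear solution Eq. (5.4) and the derived estimates Eq. (5.5) can be transcribed in the same way; this is again a sketch with our own naming, using the statistics of Eqs. (4.4)-(4.7).

```python
# Sketch of the linear four-parameter solution (5.4) and derived estimates (5.5); naming is ours.
import numpy as np

def four_parameter_solution(x, y, X, Y, p):
    x_s, y_s, X_s, Y_s = p @ x, p @ y, p @ X, p @ Y
    dx, dy, dX, dY = x - x_s, y - y_s, X - X_s, Y - Y_s
    h = p @ (dx**2 + dy**2)
    c = p @ (dX*dx + dY*dy)
    s = p @ (dY*dx - dX*dy)
    a, o = c/h, s/h                          # (5.4a)
    X0 = X_s - x_s*a + y_s*o                 # (5.4b)
    Y0 = Y_s - x_s*o - y_s*a                 # (5.4c)
    eps = np.arctan2(o, a)                   # (5.5a), identical to arctan(s/c)
    mu = np.hypot(a, o)                      # (5.5b)
    return a, o, X0, Y0, eps, mu

# For the noise-free example above this returns a = cos(0.1), o = sin(0.1), mu = 1.
```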
6 LR hypothesis testing in the three-parameter transformation

As an example of a hypothesis test in a planar transformation model, we want to test a hypothesis for the rotation angle $\tilde{\epsilon}$ of the form
$$H_0: \tilde{\epsilon} = \epsilon_0 \quad \text{vs.} \quad H_A: \tilde{\epsilon} \neq \epsilon_0 \quad (6.1)$$
which can be identified as a special case of Eqs. (2.4), (2.5) by
$$B\tilde{X} = \begin{pmatrix} 1 & 0 & 0 \end{pmatrix}\tilde{X}, \quad b = \epsilon_0, \quad \hat{w} = \hat{\epsilon} - \epsilon_0 \quad (6.2)$$
with $m = 1$. Obviously, $B$ is a linear operator, but $A$ is not. To apply test statistic Eq. (3.7), $A(X)$ must be linearized by Taylor expansion:
$$A(X) = A\!\left(\hat{X}\right) + A\cdot\left(X - \hat{X}\right) + o\!\left(\left\|X - \hat{X}\right\|\right) \quad (6.3)$$
In the following, we investigate four different derivations of a test statistic for problem Eq. (6.1).
(1) Starting from an initial guess for $\epsilon, X_0, Y_0$, a sequence of linear GMMs is computed, until the iteration converges. In the final step, the Jacobian matrix $A$ assumes the form
$$A = \begin{pmatrix}
-x_1\sin\hat{\epsilon} - y_1\cos\hat{\epsilon} & 1 & 0 \\
x_1\cos\hat{\epsilon} - y_1\sin\hat{\epsilon} & 0 & 1 \\
\vdots & \vdots & \vdots \\
-x_N\sin\hat{\epsilon} - y_N\cos\hat{\epsilon} & 1 & 0 \\
x_N\cos\hat{\epsilon} - y_N\sin\hat{\epsilon} & 0 & 1
\end{pmatrix} \quad (6.4)$$
This gives an approximation of the covariance matrix of the estimated parameters (see appendix 4)
$$\Sigma_{\hat{X}} = \sigma^2\left(A^TPA\right)^{-1} = \sigma^2\begin{pmatrix}
\sum p_i\left(x_i^2+y_i^2\right) & & \text{symm.} \\
-x_*\sin\hat{\epsilon} - y_*\cos\hat{\epsilon} & 1 & \\
x_*\cos\hat{\epsilon} - y_*\sin\hat{\epsilon} & 0 & 1
\end{pmatrix}^{-1}$$
$$= \frac{\sigma^2}{h}\begin{pmatrix}
1 & & \text{symm.} \\
x_*\sin\hat{\epsilon} + y_*\cos\hat{\epsilon} & h + \left(x_*\sin\hat{\epsilon} + y_*\cos\hat{\epsilon}\right)^2 & \\
-x_*\cos\hat{\epsilon} + y_*\sin\hat{\epsilon} & \frac{y_*^2 - x_*^2}{2}\sin 2\hat{\epsilon} - x_*y_*\cos 2\hat{\epsilon} & h + \left(x_*\cos\hat{\epsilon} - y_*\sin\hat{\epsilon}\right)^2
\end{pmatrix} \quad (6.5a)$$
$$\sigma^2_{\hat{\epsilon}} = \frac{\sigma^2}{h} \quad (6.5b)$$
When we perform the LR test of Eq. (4.3) using the linear approximation of $A$, we come up with Eq. (3.7), which reads here
$$T_{3.1}(Z) = \left(\hat{\epsilon} - \epsilon_0\right)^T\Sigma_{\hat{\epsilon}}^{-1}\left(\hat{\epsilon} - \epsilon_0\right) = \left(\frac{\hat{\epsilon} - \epsilon_0}{\sigma_{\hat{\epsilon}}}\right)^2 = \frac{h}{\sigma^2}\left(\arctan\frac{s}{c} - \epsilon_0\right)^2 \quad (6.6)$$
("3.1" denotes here the 1st version of the three-parameter test statistic.)
(2) A practically equivalent formulation of Eq. (6.1) is
$$H_0: \tan\tilde{\epsilon} = \tan\epsilon_0 \quad \text{vs.} \quad H_A: \tan\tilde{\epsilon} \neq \tan\epsilon_0 \quad (6.7)$$
(disregarding the impractical non-issue that $\tan\epsilon = \tan(\epsilon + \pi)$).
Here, $B(X)$ is also non-linear and must be linearized by Taylor expansion:
$$B(X) = B\!\left(\hat{X}\right) + B^T\cdot\left(X - \hat{X}\right) + o\!\left(\left\|X - \hat{X}\right\|\right) \quad (6.8)$$
In the final step of the iteration, the Jacobian matrix $B$ assumes the form
$$B = \begin{pmatrix} \cos^{-2}\hat{\epsilon} \\ 0 \\ 0 \end{pmatrix} \quad (6.9)$$
In this case, Eq. (3.7) reads
$$T_{3.2}(Z) = \left(\tan\hat{\epsilon} - \tan\epsilon_0\right)^T\left(B^T\Sigma_{\hat{X}}B\right)^{-1}\left(\tan\hat{\epsilon} - \tan\epsilon_0\right) = \frac{\left(\tan\hat{\epsilon} - \tan\epsilon_0\right)^2}{\frac{\sigma^2}{h}\cos^{-4}\hat{\epsilon}} = \frac{h}{\sigma^2}\left(\frac{sc - c^2\tan\epsilon_0}{c^2+s^2}\right)^2 \quad (6.10)$$
This result is obviously different from Eq. (6.6). One could argue that Eq. (6.10) should be less reliable than Eq. (6.6), because now $B$ must be linearized too. But this argument is not conclusive, because we could have obtained this result also by substituting $t := \tan\epsilon$ in the transformation equations and solving and testing for the new parameter $t$ instead of $\epsilon$. In this case, $B$ would be the same as in Eq. (6.2). The same line of reasoning would apply for other trigonometric functions in Eq. (6.7).
The main reason why (6.6) and (6.10) are different is not the "non-issue" discussed above, but the fact that the linearization errors caused by truncating the corresponding Taylor expansions are different. Proof: use "cot" instead of "tan" in (6.7). Although the same $\tilde{\epsilon} = \epsilon_0 + k\pi$, $k \in \mathbb{Z}$, holds, we arrive at a test statistic different from (6.10).
(3) Applying covariance propagation to Eq. (4.8a) and using the quotient rule and the chain rule, we obtain the expression
$$\sigma^2_{\hat{\epsilon}} = \begin{pmatrix} -\frac{s}{\left(1+\frac{s^2}{c^2}\right)c^2} & \frac{1}{\left(1+\frac{s^2}{c^2}\right)c} & 0 & 0 \end{pmatrix}\Sigma_Z\begin{pmatrix} -\frac{s}{\left(1+\frac{s^2}{c^2}\right)c^2} & \frac{1}{\left(1+\frac{s^2}{c^2}\right)c} & 0 & 0 \end{pmatrix}^T = \frac{\left(c^2+s^2\right)\sigma^2 h}{\left(1+\frac{s^2}{c^2}\right)^2c^4} = \frac{\sigma^2 h}{c^2+s^2} \quad (6.11)$$
This is different from Eq. (6.5b), because the linearization is applied at a later stage. Therefore, we can assume that this is a better approximation than Eq. (6.5b). Using this expression in Eq. (3.7) yields
$$T_{3.3}(Z) = \left(\frac{\hat{\epsilon} - \epsilon_0}{\sigma_{\hat{\epsilon}}}\right)^2 = \frac{c^2+s^2}{\sigma^2 h}\left(\arctan\frac{s}{c} - \epsilon_0\right)^2 \quad (6.12)$$
But still this test statistic is a linear approximation via Eq. (3.7).
(4) To obtain a fully non-linear LR test statistic, we revert to Eq. (3.6):
$$T_{3.4}(Z) = \frac{\min\Omega' - \min\Omega}{\sigma^2} = \frac{\Omega\!\left(\hat{X}_0, \hat{Y}_0, \epsilon_0\right) - \Omega\!\left(\hat{X}_0, \hat{Y}_0, \hat{\epsilon}\right)}{\sigma^2} = \frac{1}{\sigma^2}\left[\left(h + H - 2\left(c\cos\epsilon_0 + s\sin\epsilon_0\right)\right) - \left(h + H - 2\sqrt{c^2+s^2}\right)\right] = \frac{2}{\sigma^2}\left(\sqrt{c^2+s^2} - c\cos\epsilon_0 - s\sin\epsilon_0\right) \quad (6.13)$$
where appendix 1 and Eq. (4.8d) have been used.
Note that this test statistic is as simple to compute as the three previous versions.
7 LR hypothesis testing in the four-parameter transformation

We want to test the same hypothesis Eq. (6.1), but now for the four-parameter transformation. In terms of the substituted model parameters Eq. (5.5a), it can be formulated as
$$H_0: \arctan\frac{\tilde{o}}{\tilde{a}} = \epsilon_0 \quad \text{vs.} \quad H_A: \arctan\frac{\tilde{o}}{\tilde{a}} \neq \epsilon_0 \quad (7.1)$$
This can be identified as a special case of Eqs. (2.4), (2.5) by
$$B\!\left(\tilde{X}\right) = \arctan\frac{\tilde{o}}{\tilde{a}}, \quad b = \epsilon_0, \quad \hat{w} = \arctan\frac{\hat{o}}{\hat{a}} - \epsilon_0 \quad (7.2)$$
with $m = 1$.
In the following, we investigate four different derivations of a test statistic for problem Eq. (7.1).

(1) Acting on $a, o$, operator $B$ is non-linear, but $A$ is linear here. Consequently, $B(X)$ must be linearized as in Eq. (6.8):
$$B = \begin{pmatrix} -\frac{o}{\left(1+\frac{o^2}{a^2}\right)a^2} & \frac{1}{\left(1+\frac{o^2}{a^2}\right)a} & 0 & 0 \end{pmatrix}^T = \frac{1}{o^2+a^2}\begin{pmatrix} -o \\ a \\ 0 \\ 0 \end{pmatrix} \quad (7.3)$$
In this case, Eq. (3.7) reads
$$T_{4.1}(Z) = \left(\arctan\frac{\hat{o}}{\hat{a}} - \epsilon_0\right)^T\left(B^T\Sigma_{\hat{X}}B\right)^{-1}\left(\arctan\frac{\hat{o}}{\hat{a}} - \epsilon_0\right) = \frac{c^2+s^2}{\sigma^2 h}\left(\arctan\frac{s}{c} - \epsilon_0\right)^2 \quad (7.4)$$
where $\Sigma_{\hat{X}}$ is the well-known covariance matrix (e.g. Wolf 1966, Somogyi and Kalmár 1988)
$$\Sigma_{\hat{X}} = \sigma^2\begin{pmatrix} 1/h & 0 & 0 & 0 \\ 0 & 1/h & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \quad (7.5)$$
It turns out that $T_{4.1}(Z) \equiv T_{3.3}(Z)$. However, the corresponding models are different.

(2) Alternatively, we can solve the non-linear four-parameter transformation with parameters $\mu, \epsilon$ instead of $a, o$ by iteration. In the final step, the Jacobian matrix $A$ assumes the form
$$A = \begin{pmatrix}
-x_1\hat{\mu}\sin\hat{\epsilon} - y_1\hat{\mu}\cos\hat{\epsilon} & x_1\cos\hat{\epsilon} - y_1\sin\hat{\epsilon} & 1 & 0 \\
x_1\hat{\mu}\cos\hat{\epsilon} - y_1\hat{\mu}\sin\hat{\epsilon} & x_1\sin\hat{\epsilon} + y_1\cos\hat{\epsilon} & 0 & 1 \\
\vdots & \vdots & \vdots & \vdots \\
-x_N\hat{\mu}\sin\hat{\epsilon} - y_N\hat{\mu}\cos\hat{\epsilon} & x_N\cos\hat{\epsilon} - y_N\sin\hat{\epsilon} & 1 & 0 \\
x_N\hat{\mu}\cos\hat{\epsilon} - y_N\hat{\mu}\sin\hat{\epsilon} & x_N\sin\hat{\epsilon} + y_N\cos\hat{\epsilon} & 0 & 1
\end{pmatrix} \quad (7.6)$$
This gives the covariance matrix of the estimated parameters (see appendix 5)
$$\Sigma_{\hat{X}} = \sigma^2\left(A^TPA\right)^{-1} = \sigma^2\begin{pmatrix}
\hat{\mu}^2\sum p_i\left(x_i^2+y_i^2\right) & & & \text{symm.} \\
0 & \sum p_i\left(x_i^2+y_i^2\right) & & \\
-\hat{\mu}\left(x_*\sin\hat{\epsilon} + y_*\cos\hat{\epsilon}\right) & x_*\cos\hat{\epsilon} - y_*\sin\hat{\epsilon} & 1 & \\
\hat{\mu}\left(x_*\cos\hat{\epsilon} - y_*\sin\hat{\epsilon}\right) & x_*\sin\hat{\epsilon} + y_*\cos\hat{\epsilon} & 0 & 1
\end{pmatrix}^{-1}$$
$$= \frac{\sigma^2}{h}\begin{pmatrix}
\hat{\mu}^{-2} & & & \text{symm.} \\
0 & 1 & & \\
\hat{\mu}^{-1}\left(x_*\sin\hat{\epsilon} + y_*\cos\hat{\epsilon}\right) & y_*\sin\hat{\epsilon} - x_*\cos\hat{\epsilon} & \sum p_i\left(x_i^2+y_i^2\right) & \\
\hat{\mu}^{-1}\left(y_*\sin\hat{\epsilon} - x_*\cos\hat{\epsilon}\right) & -x_*\sin\hat{\epsilon} - y_*\cos\hat{\epsilon} & 0 & \sum p_i\left(x_i^2+y_i^2\right)
\end{pmatrix} \quad (7.7a)$$
$$\sigma^2_{\hat{\epsilon}} = \frac{\sigma^2}{h\hat{\mu}^2} \quad (7.7b)$$
The hypotheses are now formulated as in Eq. (6.1). Acting on $\mu, \epsilon$, operator $A$ is non-linear, but $B$ is linear here and corresponds to Eq. (6.2).
When we perform the LR test using the linear approximation of $A$, we come up with Eq. (3.7), which reads here
$$T_{4.2}(Z) = \left(\hat{\epsilon} - \epsilon_0\right)^T\Sigma_{\hat{\epsilon}}^{-1}\left(\hat{\epsilon} - \epsilon_0\right) = \left(\frac{\hat{\epsilon} - \epsilon_0}{\sigma_{\hat{\epsilon}}}\right)^2 = \frac{c^2+s^2}{h^2}\cdot\frac{h}{\sigma^2}\left(\arctan\frac{s}{c} - \epsilon_0\right)^2 \quad (7.8)$$
It turns out that $T_{4.2}(Z) \equiv T_{4.1}(Z) \equiv T_{3.3}(Z)$.
(3) Let us now study the special case $\epsilon_0 = 0$. Here, the hypotheses can be written as
$$H_0: \tilde{o} = 0 \quad \text{vs.} \quad H_A: \tilde{o} \neq 0 \quad (7.9)$$
In the four-parameter transformation, this can be identified as a special case of Eqs. (2.4), (2.5) by
$$B\tilde{X} = \begin{pmatrix} 0 & 1 & 0 & 0 \end{pmatrix}\tilde{X}, \quad b = 0, \quad \hat{w} = \hat{o} \quad (7.10)$$
In this case, both $A$ and $B$ are linear operators and Eq. (3.7) reads
$$T_{4.3}(Z) = \frac{\hat{o}^2}{\sigma^2_{\hat{o}}} = \frac{h\hat{o}^2}{\sigma^2} = \frac{s^2}{\sigma^2 h} \quad (7.11)$$
where Eq. (5.4) and Eq. (7.5) have been used.
(4) To obtain a fully non-linear LR test statistic, we revert to Eq. (3.6):
$$T_{4.4}(Z) = \frac{\min\Omega' - \min\Omega}{\sigma^2} = \frac{\Omega\!\left(\hat{X}_0, \hat{Y}_0, \epsilon_0, \hat{\mu}\right) - \Omega\!\left(\hat{X}_0, \hat{Y}_0, \hat{\epsilon}, \hat{\mu}\right)}{\sigma^2} = \frac{1}{\sigma^2}\left[H - \frac{\left(c\cos\epsilon_0 + s\sin\epsilon_0\right)^2}{h} - H + \frac{c^2+s^2}{h}\right] = \frac{c^2+s^2 - \left(c\cos\epsilon_0 + s\sin\epsilon_0\right)^2}{\sigma^2 h} = \frac{\left(c\sin\epsilon_0 - s\cos\epsilon_0\right)^2}{\sigma^2 h} \quad (7.12)$$
where appendix 2 and Eq. (5.4d) have been used.
8 Distributions

Due to the coincidence with $T_{3.3}$, the test statistics $T_{4.1}, T_{4.2}$ will not be further discussed.
Note that all derived test statistics $T_i(Z)$ depend on only two of the four elements of $Z$, i.e. $c$ and $s$. This will be highlighted by the notation $T_i(c,s)$ used below:
$$T_{3.1}(c,s) = \frac{h}{\sigma^2}\left(\arctan\frac{s}{c} - \epsilon_0\right)^2 \quad (8.1a)$$
$$T_{3.2}(c,s) = \frac{h}{\sigma^2}\left(\frac{sc - c^2\tan\epsilon_0}{c^2+s^2}\right)^2 \quad (8.1b)$$
$$T_{3.3}(c,s) = \frac{c^2+s^2}{\sigma^2 h}\left(\arctan\frac{s}{c} - \epsilon_0\right)^2 \quad (8.1c)$$
$$T_{3.4}(c,s) = \frac{2}{\sigma^2}\left(\sqrt{c^2+s^2} - c\cos\epsilon_0 - s\sin\epsilon_0\right) \quad (8.1d)$$
$$T_{4.3}(c,s) = \frac{s^2}{\sigma^2 h} \quad (8.1e)$$
$$T_{4.4}(c,s) = \frac{\left(c\sin\epsilon_0 - s\cos\epsilon_0\right)^2}{\sigma^2 h} \quad (8.1f)$$
In the linear or linearized GMM, we obtain from Eq. (3.9) the following distributions of the LR test statistics:
$$T_i(c,s)\,|\,H_0 \sim \chi^2(1), \quad i = 3.1, 3.2, 3.3, 4.3, 4.4 \quad (8.2a)$$
$$T_i(c,s)\,|\,H_A \sim \chi^2\!\left(1,\ \frac{h}{\sigma^2}\left(\tilde{\epsilon} - \epsilon_0\right)^2\right), \quad i = 3.1, 3.3 \quad (8.2b)$$
$$T_{3.2}(c,s)\,|\,H_A \sim \chi^2\!\left(1,\ \frac{h}{\sigma^2}\left(\tan\tilde{\epsilon} - \tan\epsilon_0\right)^2\cos^4\hat{\epsilon}\right) \quad (8.2c)$$
$$T_{4.3}(c,s)\,|\,H_A \sim \chi^2\!\left(1,\ \frac{\tilde{s}^2}{\sigma^2 h}\right) \quad (8.2d)$$
$$T_{4.4}(c,s)\,|\,H_A \sim \chi^2\!\left(1,\ \frac{\left(\tilde{c}\sin\epsilon_0 - \tilde{s}\cos\epsilon_0\right)^2}{\sigma^2 h}\right) \quad (8.2e)$$
However, observing that the test statistics $T_i(c,s)$, $i = 3.1, 3.2, 3.3, 4.4$ in Eq. (8.2a) are obtained by linearization of $A$ or $B$ or both, these distributions can be no more than approximations of the true distributions of $T_i(c,s)$ in the vicinity of $\hat{\epsilon}$. But oftentimes $T_i(c,s)$ is evaluated far away from $\hat{\epsilon}$, especially if $\alpha$ is small. The test statistics $T_{3.4}, T_{4.4}$ are defined in the fully non-linear model and the test statistic $T_{4.3}$ is defined in the fully linear model. Therefore, no such approximation is made here.
Remark: In Eq. (8.2b) it would not be correct to apply Eq. (6.11) instead of Eq. (6.5b). Equation (6.5b) must be used even in case $i = 3.3$, because Eq. (6.5b) is derived from $\Sigma_{\hat{X}}$ in Eq. (6.5a), as is required by Eq. (3.9).
Simplications:
¬If we rotate the source system (x,y) by ϵ0about the
barycentre (x*,y*)and solve the same transformation
problem with the rotated coordinates, c,sare replaced by
c=c·cos ϵ0+s·sin ϵ0
s=c·sin ϵ0+s·cos ϵ0(8.3)
Note that the vector (c,s)Tis the result of the rota-
tion of (c,s)Tby angle ϵ0about the origin (0,0) and has
therefore the same covariance matrix
Σc,s=σ2h0
0h(8.4)
The transformation problems with the rotated coordi-
nates have the solution
cos ^
ϵ=c
c2+s2=c·cos ϵ0+s·sin ϵ0
c2+s2
= cos ^
ϵ·cos ϵ0+ sin ^
ϵ·sin ϵ0= cos ^
ϵϵ0(8.5)
Now, testing Eq. (6.1) is obviously identical to testing
H0:˜
ϵ= 0 vs.HA:˜
ϵ=0(8.6)
with the rotated coordinate system, i.e. with c,s. This ob-
viously results in the same test statistics T3.1,T3.3. Less
obvious, the same applies to T3.4and T4.4by virtue of
T3.4c,s=2
σ2c2+s2c
108 |R. Lehmann and M. Lösler
=2
σ2c2+s2c·cos ϵ0s·sin ϵ0
=T3.4(c,s)(8.7a)
T4.4c,s=s2
2=(c·sin ϵ0+s·cos ϵ0)2
2
=T4.4(c,s)(8.7b)
For T4.3no rotation is necessary, because it only ap-
plies to the special case ϵ0= 0. Moreover, note that for
ϵ0= 0 we nd a coincidence of T4.3and T4.4. This shows
that T4.4follows the χ2-distribution Eq. (8.2a,d,e). Hence,
we will further discuss only T4.4.
However, the situation for T3.2is dierent. Here, a dif-
ferent result is obtained. Note that the solution in terms of
the parameter t:= tan ϵis not rotational invariant. This
becomes obvious in the case of ˜
ϵ=±π/2, where ˜
tnot even
exists.
Disregarding this non-issue, we will continue with
ϵ0= 0, noting that almost no restriction of generality is
made.
(2) If we scale both coordinate systems by the factor $\sigma^{-1}$ and solve the transformation problem with the scaled coordinates, $c, s, h, \sigma$ are replaced by
$$c'' = \frac{c}{\sigma^2}, \quad s'' = \frac{s}{\sigma^2}, \quad h'' = \frac{h}{\sigma^2}, \quad \sigma''^2 = \frac{\sigma^2}{\sigma^2} = 1 \quad (8.8)$$
The weights do not change, such that $\sum p_i = 1$ is retained. Note that the new vector $(c'', s'')^T$ has the covariance matrix
$$\Sigma_{c'',s''} = \sigma^{-2}\begin{pmatrix} h & 0 \\ 0 & h \end{pmatrix} = \begin{pmatrix} h'' & 0 \\ 0 & h'' \end{pmatrix} \quad (8.9)$$
The new solution is $\hat{\epsilon}'' = \hat{\epsilon}$.
Now testing Eq. (8.6) with the scaled coordinates, i.e. with $c'', s''$, obviously results in
$$T_{3.1}\!\left(c'', s''\right) = h''\left(\arctan\frac{s''}{c''}\right)^2 = T_{3.1}\!\left(c', s'\right) \quad (8.10a)$$
$$T_{3.2}\!\left(c'', s''\right) = h''\left(\frac{s''c''}{c''^2+s''^2}\right)^2 = T_{3.2}\!\left(c', s'\right) \quad (8.10b)$$
$$T_{3.3}\!\left(c'', s''\right) = \frac{c''^2+s''^2}{h''}\left(\arctan\frac{s''}{c''}\right)^2 = T_{3.3}\!\left(c', s'\right) \quad (8.10c)$$
$$T_{3.4}\!\left(c'', s''\right) = 2\left(\sqrt{c''^2+s''^2} - c''\right) = T_{3.4}\!\left(c', s'\right) \quad (8.10d)$$
$$T_{4.4}\!\left(c'', s''\right) = \frac{s''^2}{h''} = T_{4.4}\!\left(c', s'\right) \quad (8.10e)$$
Thus, all test statistics are scale invariant, too.
A special problem exists for $T_{3.2}$, which can be written as
$$T_{3.2}\!\left(c'', s''\right) = \frac{h''}{4}\sin^2 2\hat{\epsilon}'' \le \frac{h''}{4} \quad (8.11)$$
The fact that the $\chi^2$ density function is non-zero on the whole positive real line again proves that $T_{3.2}$ does not have the $\chi^2$ distribution.
Henceforth, we drop the double primes, such that
$$\epsilon_0 := 0, \quad \sigma^2 := 1 \quad (8.12)$$
is assumed with almost no loss of generality. (Remember that "almost" here concerns only $T_{3.2}$, which is not rotation invariant.)
The question of which test statistic is best must be answered by the resulting probabilities of decision error:
1. The probability of type 1 decision error $\alpha$ is usually selected by the user. But if the $1-\alpha$-quantile of $\chi^2(1)$ is used for $T_{3.1}, T_{3.2}, T_{3.3}$, the effective $\alpha$ can be different.
2. The probability of type 2 decision error $\beta$ should be small.
Both probabilities $\alpha$ and $\beta$ are linked via the critical value $c$, see Fig. 1.
9 Probability of type 1 decision error

The idea is to compare the $1-\alpha$-quantiles of $\chi^2(1)$ with the quantiles of the true distribution of $T_i|H_0$ obtained by Monte Carlo integration. This method has been successfully used e.g. by Lehmann (2012) for the computation of critical values of normalized and studentized residuals employed in geodetic outlier detection. In principle, it replaces
– random variates by computer-generated pseudo random numbers,
– probability distributions by histograms and
– statistical expectations by arithmetic means
computed from a large number of Monte Carlo experiments, i.e. computations with pseudo random numbers instead of noisy observations.
In the case that $H_0$ is true, we have $\tilde{\epsilon} = 0$, such that from Eq. (4.11) it follows that
$$E\{c|H_0\} = h, \quad E\{s|H_0\} = 0. \quad (9.1)$$
According to Eqs. (4.10), (8.12) we need to generate the following pseudo random numbers:
$$c\,|\,H_0 \sim N(h, h), \quad s\,|\,H_0 \sim N(0, h) \quad (9.2)$$
We use $M = 10^8$ Monte Carlo samples, which turns out to be sufficiently high, because the results change only insignificantly when the computations are repeated with different pseudo random numbers.
We use three stages of non-linearity, expressed by the signal-to-noise ratio: $h = 1000$ means that the signal is 1000 times larger than the noise ($\sigma = 1$), which causes only weak non-linear effects. Analogously, $h = 100$ and $h = 10$ cause medium and strong non-linear effects, respectively.
In Table 1, the $1-\alpha$-quantiles of $\chi^2(1)$ and the quantiles of the true distribution of $T_i|H_0$, $i = 3.1, 3.2, 3.3$ are compared. For $T_{4.3} \equiv T_{4.4}$ we can directly use the quantiles of $\chi^2(1)$. As expected, the largest differences occur for $h = 10$ and $T_{3.2}$. Using the $\chi^2$-quantile as a critical value can be both an advantage and a disadvantage in terms of $\alpha$. Consider $h = 10$ and a desired $\alpha = 0.01$ in $T_{3.1}$: we erroneously select 6.63 as a critical value, instead of 8.92. The true $\alpha$ for $T_{3.1}$ is not 0.01, but even larger than 0.02. By interpolation of the derived quantiles in Table 1, we obtain an effective $\alpha = 0.021$. In contrast to that, we find from Eq. (8.11) that $|T_{3.2}| < 2.5$ always holds, such that 6.63 is never exceeded, which corresponds to an effective $\alpha = 0$.
The true quantiles of $T_{3.4}$ are given in Table 2, but should not be compared to the $\chi^2$-quantiles, because they are obtained in the non-linear model. It is perhaps unexpected that $T_{3.4}$ follows the $\chi^2$ distribution even better than the other test statistics, as can be seen from a comparison of Tables 1 and 2.
10 Probability of type 2 decision error

The aim of this investigation is to find out which test statistic has the highest statistical test power, i.e. the best ability to reject a false $H_0$. For comparison, we plot the power function of $T_i|H_A$, $i = 3.1, \ldots, 4.4$, denoted as
$$1 - \beta_i(|\tilde{\epsilon}|) \quad (10.1)$$
Due to the symmetry of $\beta_i$, all plots are produced only for positive $\tilde{\epsilon}$. Whenever Eqs. (8.2b,c) hold only approximately, we again use Monte Carlo integration to compute the true distribution of $T_i|H_A$. According to Eqs. (4.10), (4.11), (8.12) we need to generate the following pseudo random numbers:
$$c\,|\,H_A \sim N(h\cos\tilde{\epsilon},\, h), \quad s\,|\,H_A \sim N(h\sin\tilde{\epsilon},\, h) \quad (10.2)$$
We find
$$1 - \beta_i(|\tilde{\epsilon}|) = \Pr(T_i > c_i\,|\,H_A), \quad i = 3.1, \ldots, 4.4 \quad (10.3)$$
where $c_i$ is the critical value, which equals the $1-\alpha$-quantile of either the $\chi^2(1)$ distribution or of the true distribution obtained in the preceding section, whenever this is different. The first case is the one practically applied. Below we restrict ourselves to the choice of $\alpha = 0.05$.
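The Monte Carlo evaluation of Eq. (10.3) can be sketched accordingly; the critical value defaults to the $\chi^2(1)$ quantile, i.e. the practically applied first case, and all names are ours.

```python
# Monte Carlo sketch of the power function (10.3) using the generators of Eq. (10.2).
import numpy as np
from scipy.stats import chi2

def mc_power(T, h, eps_true, crit=None, alpha=0.05, M=10**6, seed=2):
    rng = np.random.default_rng(seed)
    if crit is None:
        crit = chi2.ppf(1.0 - alpha, df=1)            # chi2(1) quantile (practically applied)
    c = rng.normal(h*np.cos(eps_true), np.sqrt(h), M) # c | HA, Eq. (10.2)
    s = rng.normal(h*np.sin(eps_true), np.sqrt(h), M) # s | HA, Eq. (10.2)
    return np.mean(T(c, s, h, 0.0) > crit)            # Pr(T > c_i | HA), eps0 = 0

# Example cross-check: for T44 the exact distribution (8.2e) is available, so
# mc_power(T44, 100.0, 0.3) should agree with the non-central chi2 value.
```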
In Fig. 3, the power function Eq. (10.3) is plotted for $T_{4.4}$, which requires no Monte Carlo integration because Eq. (8.2e) holds exactly. We see that the power is increasing with $|\tilde{\epsilon}|$, which is expected, because $H_0$ and $H_A$ are getting more and more different, cf. Fig. 1. Furthermore, the statistical test power is worse when $h$ is small, which is also expected. Remember that $h = 10$ means that the moments of inertia of the points are only 10 times larger than the standard deviation $\sigma = 1$ of the target coordinates, which makes testing hypotheses nearly hopeless.

Figure 3: Power functions for $T_{4.4}$ and various values of $h$.
In Figs. 4-6 the other power functions Eq. (10.3) are plotted relative to that of $T_{4.4}$. A ratio $> 1$ means that $T_i$ outperforms $T_{4.4}$, and vice versa. Test results obtained with $\chi^2(1)$-quantiles are displayed by dotted curves and are denoted by $T(\chi^2)$, while those using the true distributions computed by the Monte Carlo method are displayed by solid curves and are denoted by $T(\alpha)$. For $T_{3.4}$ only the solid curve makes sense.
In the case of weak non-linearity, i.e. $h = 1000$, see Fig. 4, practically no difference is visible. All seven power functions behave equally well.
Table 1: Quantiles of $\chi^2(1)$ (column 2, valid for $T_{4.3} \equiv T_{4.4}$) vs. quantiles of the true distribution of $T_i|H_0$, $i = 3.1, 3.2, 3.3$ (following columns; $T_{3.3} \equiv T_{4.1} \equiv T_{4.2}$)

          T4.3 = T4.4  |        T3.1           |        T3.2           |  T3.3 = T4.1 = T4.2
  alpha   chi2(1-a,1)  | h=10   h=100  h=1000  | h=10   h=100  h=1000  | h=10   h=100  h=1000
  0.10    2.71         | 2.99   2.73   2.71    | 1.93   2.63   2.70    | 2.99   2.73   2.71
  0.05    3.84         | 4.46   3.89   3.85    | 2.28   3.69   3.83    | 4.38   3.89   3.85
  0.02    5.41         | 6.78   5.51   5.42    | 2.46   5.12   5.38    | 6.40   5.51   5.42
  0.01    6.63         | 8.92   6.78   6.64    | 2.49   6.19   6.58    | 8.07   6.77   6.64

Table 2: Quantiles of the true distribution of $T_{3.4}|H_0$

  alpha   h=10   h=100  h=1000
  0.10    2.79   2.71   2.71
  0.05    3.96   3.85   3.84
  0.02    5.59   5.43   5.41
  0.01    6.86   6.65   6.63
Figure 4: Power function ratios for $h = 1000$ (weak non-linearity). Dotted curves: using $\chi^2(1)$-quantiles, denoted by $T(\chi^2)$; solid curves: using true quantiles as critical values, denoted by $T(\alpha)$. Black and red solid curves visually overlap.
In the case of medium non-linearity, i.e. $h = 100$, there is also no great difference between the test statistics, except for $T_{3.2}$ when the $\chi^2(1)$-quantile is used (red dotted curve), see Fig. 5. The reason is that this approximate quantile ($c = 3.69$) differs much from the true value ($c = 3.84$). Otherwise, the $\chi^2(1)$-quantiles outperform the true quantiles.
The strongly non-linear case, i.e. $h = 10$, is depicted in Fig. 6. The differences between the tests are amplified even further. Note the different vertical scales in Figs. 4-6. When the $\chi^2(1)$-quantile $c = 3.84$ is used, $T_{3.2}$ is unable to reject a false $H_0$, no matter how large $|\tilde{\epsilon}|$ is (red dotted curve). This is a consequence of Eq. (8.11) and the price we have to pay for the effective $\alpha = 0$ obtained in the preceding section.
Figure 5: Power function ratios for $h = 100$ (medium non-linearity). Dotted curves: using $\chi^2(1)$-quantiles, denoted by $T(\chi^2)$; solid curves: using true quantiles as critical values, denoted by $T(\alpha)$. Black and red solid curves visually overlap.

Figure 6: Power function ratios for $h = 10$ (strong non-linearity). Dotted curves: using $\chi^2(1)$-quantiles, denoted by $T(\chi^2)$; solid curves: using true quantiles as critical values, denoted by $T(\alpha)$.
Due to the strong non-linearity, the power is again worst if the true quantiles are applied. This behavior is expected, because a shift of the critical value $c$ changes $\alpha$ and $\beta$ in opposite directions, see Fig. 1. It follows that the increase of the probability of type 2 decision error corresponds to the loss of probability of type 1 decision error observed in the preceding section.
All solid curves are free of this eect, because they
truly refer to α= 0.05. This can easily be validated be-
cause for ˜
ϵ= 0 the power is always equal. The only signi-
cant dierences between the powers of Tioccur for strong
non-linearity, so we will only focus on the case h= 10, see
Fig. 6.
In the interval 0<˜
ϵ<0.2the best power is obtained
for T3.3, where the covariance propagation has been ap-
plied to Eq. (4.8a). This is even better than for the full non-
linear test T3.4(green curve). But this advantage is very
small and could be accidental. Remember that there is no
mathematical proof that Eq. (3.6) outperforms Eq. (3.7).
This has been demonstrated here. However, for values of
˜
ϵ>0.4the situation changes, as is displayed in Fig. 7.
Figure 7: Same as Fig. 6, solid curves only, but larger range of $\tilde{\epsilon}$.
Note that a comparison of $T_{3.i}$ vs. $T_{4.j}$ is less instructive, because if the scale is unknown, one should always use the four-parameter transformation, even though a test in a three-parameter transformation model may be more powerful.
Finally, note that the results in this section are not obtained from a "numerical experiment", but are strictly valid for all planar coordinate transformations with error-free coordinates in one coordinate system and the conventional assumption on the weights, Eq. (4.3).
11 Conclusions
We have presented an analysis of the decision errors obtained when performing LR tests in planar coordinate transformation models. Several mathematically equivalent expressions are conceivable to apply the LR test to one specific hypothesis test, Eq. (6.1), but different results are obtained.
At the end of section 3, we named three problems which arise if we apply the LR test to non-linear models in the usual way, and which we now want to comment on further.

(1) The likelihood function Eq. (3.2) can only be maximized iteratively, with the danger of finding only a local maximum. For problems like many transformations, which permit a unique analytical non-linear least squares solution like Eq. (4.8), this problem does not exist. The likelihood function has a unique maximum.

(2) Test statistic Eq. (3.7) gives an LR test only in the linearized GMM, i.e. not in the truly non-linear GMM. While Eq. (3.6) requires the minimization of $\Omega$ and $\Omega'$, Eq. (3.7) relies only on the minimization of $\Omega$; $\min\Omega' - \min\Omega$ is computed only by linear approximation. The consequence could be a small loss of statistical power of the test, depending on the degree of non-linearity. For the planar coordinate transformations with $\alpha = 0.05$ this has not always been found, not even for strong non-linearity. However, if $\alpha$ is chosen smaller, the differences between the power functions amplify.

(3) The PDF of Eq. (3.6) or Eq. (3.7) does not belong to the well-known set of test distributions ($t$, $\chi^2$, $F$ etc.), such that the critical values must be computed numerically. This is usually not done, because it requires numerical effort. But using Monte Carlo integration it is simple, as has been demonstrated in section 9. The advantage would be that we effectively obtain the desired value of $\alpha$. Otherwise, we found a shift of some probability from type 1 to type 2 decision error or back, which is undesired.

The same analytical computation can be done for other problems for which explicit non-linear analytical least squares solutions exist. This includes
– many other transformation problems, also 3D transformations (e.g. Grafarend and Awange 2003), also transformations where coordinates in both systems are error-affected (e.g. Chang 2015)
– many curve and surface fitting problems (e.g. Ahn 2005)

The four-parameter transformation is an exceptional case, because it is intrinsically linear, but can be made non-linear by the parameterization Eq. (5.1). The resulting non-linear effects can be investigated easily by comparison with the linear model Eq. (5.2).
Also, more complex hypothesis tests can be studied in this way, e.g. in the framework of multiple outlier detection. The same approach can be applied to study other decision methods like model selection by information criteria, which has also been applied to transformations and other geodetic models (Lehmann 2014, 2015, Lehmann and Lösler 2016, 2017).
A Appendix 1: Analytical solution for the transformation with fixed scale parameter

The least squares error functional Eq. (3.3) to be minimized reads with Eq. (4.3)
$$\Omega\!\left(\hat{X}\right) = \sum_{i=1}^N p_i\left[\left(X_i - \hat{X}_0 - x_i\cos\hat{\epsilon} + y_i\sin\hat{\epsilon}\right)^2 + \left(Y_i - \hat{Y}_0 - x_i\sin\hat{\epsilon} - y_i\cos\hat{\epsilon}\right)^2\right] = \min \quad (A.1)$$
Two necessary conditions for a minimum read
$$0 = \frac{\partial\Omega}{\partial\hat{X}_0} = -2\sum_{i=1}^N p_i\left(X_i - \hat{X}_0 - x_i\cos\hat{\epsilon} + y_i\sin\hat{\epsilon}\right)$$
$$0 = \frac{\partial\Omega}{\partial\hat{Y}_0} = -2\sum_{i=1}^N p_i\left(Y_i - \hat{Y}_0 - x_i\sin\hat{\epsilon} - y_i\cos\hat{\epsilon}\right)$$
Using $\sum p_i = 1$ gives estimates for the translation parameters:
$$\hat{X}_0 = \sum_{i=1}^N p_i\left(X_i - x_i\cos\hat{\epsilon} + y_i\sin\hat{\epsilon}\right) = X_* - x_*\cos\hat{\epsilon} + y_*\sin\hat{\epsilon}$$
$$\hat{Y}_0 = \sum_{i=1}^N p_i\left(Y_i - x_i\sin\hat{\epsilon} - y_i\cos\hat{\epsilon}\right) = Y_* - x_*\sin\hat{\epsilon} - y_*\cos\hat{\epsilon}$$
Substituting $\hat{X}_0, \hat{Y}_0$ into the least squares error functional yields
$$\min\Omega = \Omega\!\left(\hat{X}\right) = \sum_{i=1}^N p_i\left[\left(\Delta X_i - \Delta x_i\cos\hat{\epsilon} + \Delta y_i\sin\hat{\epsilon}\right)^2 + \left(\Delta Y_i - \Delta x_i\sin\hat{\epsilon} - \Delta y_i\cos\hat{\epsilon}\right)^2\right]$$
$$= \sum_{i=1}^N p_i\left[\Delta X_i^2 + \Delta Y_i^2 + \Delta x_i^2 + \Delta y_i^2 - 2\Delta X_i\left(\Delta x_i\cos\hat{\epsilon} - \Delta y_i\sin\hat{\epsilon}\right) - 2\Delta Y_i\left(\Delta x_i\sin\hat{\epsilon} + \Delta y_i\cos\hat{\epsilon}\right)\right]$$
$$= h + H - 2\left(c\cos\hat{\epsilon} + s\sin\hat{\epsilon}\right)$$
The third necessary condition for a minimum reads
$$0 = \frac{1}{2}\frac{\partial\Omega}{\partial\hat{\epsilon}} = c\sin\hat{\epsilon} - s\cos\hat{\epsilon}$$
This gives the estimate for the rotation parameter
$$\hat{\epsilon} = \arctan\frac{s}{c} = \arcsin\frac{s}{\sqrt{c^2+s^2}} = \arccos\frac{c}{\sqrt{c^2+s^2}}$$
This unique stationary point must be a minimum because $\Omega$ is bounded from below. The minimum is obtained at
$$\Omega\!\left(\hat{X}\right) = h + H - 2\frac{c^2+s^2}{\sqrt{c^2+s^2}} = h + H - 2\sqrt{c^2+s^2}$$
B Appendix 2: Analytical solution for the transformation with fixed rotation parameter

Similar to appendix 1, but with fixed rotation parameter $\epsilon_0$ and with estimated scale parameter $\hat{\mu}$, we start from
$$\Omega\!\left(\hat{X}\right) = \sum_{i=1}^N p_i\left[\left(X_i - \hat{X}_0 - x_i\hat{\mu}\cos\epsilon_0 + y_i\hat{\mu}\sin\epsilon_0\right)^2 + \left(Y_i - \hat{Y}_0 - x_i\hat{\mu}\sin\epsilon_0 - y_i\hat{\mu}\cos\epsilon_0\right)^2\right] = \min$$
and obtain
$$\hat{X}_0 = X_* - x_*\hat{\mu}\cos\epsilon_0 + y_*\hat{\mu}\sin\epsilon_0$$
$$\hat{Y}_0 = Y_* - x_*\hat{\mu}\sin\epsilon_0 - y_*\hat{\mu}\cos\epsilon_0$$
Substituting $\hat{X}_0, \hat{Y}_0$ into the least squares error functional yields
$$\min\Omega = \Omega\!\left(\hat{X}\right) = \sum_{i=1}^N p_i\left[\left(\Delta X_i - \Delta x_i\hat{\mu}\cos\epsilon_0 + \Delta y_i\hat{\mu}\sin\epsilon_0\right)^2 + \left(\Delta Y_i - \Delta x_i\hat{\mu}\sin\epsilon_0 - \Delta y_i\hat{\mu}\cos\epsilon_0\right)^2\right]$$
$$= \hat{\mu}^2 h + H - 2\hat{\mu}\left(c\cos\epsilon_0 + s\sin\epsilon_0\right)$$
The third necessary condition for a minimum reads
$$0 = \frac{1}{2}\frac{\partial\Omega}{\partial\hat{\mu}} = \hat{\mu}h - \left(c\cos\epsilon_0 + s\sin\epsilon_0\right)$$
This gives the estimate for the scale parameter
$$\hat{\mu} = \frac{c\cos\epsilon_0 + s\sin\epsilon_0}{h}$$
This unique stationary point must be a minimum because $\Omega$ is bounded from below. The minimum is obtained at
$$\Omega\!\left(\hat{X}\right) = \frac{\left(c\cos\epsilon_0 + s\sin\epsilon_0\right)^2}{h} + H - 2\frac{c\cos\epsilon_0 + s\sin\epsilon_0}{h}\left(c\cos\epsilon_0 + s\sin\epsilon_0\right) = H - \frac{\left(c\cos\epsilon_0 + s\sin\epsilon_0\right)^2}{h}$$
C Appendix 3: Analytical solution for the four-parameter transformation

Following the line of appendix 2, but replacing $\epsilon_0$ by $\hat{\epsilon}$, gives
$$\hat{\mu} = \frac{c\cos\hat{\epsilon} + s\sin\hat{\epsilon}}{h}, \quad \Omega\!\left(\hat{X}\right) = H - \frac{\left(c\cos\hat{\epsilon} + s\sin\hat{\epsilon}\right)^2}{h}$$
Now also minimizing $\Omega\!\left(\hat{X}\right)$ with respect to $\hat{\epsilon}$ yields a fourth necessary condition
$$0 = \frac{1}{2}\frac{\partial\Omega}{\partial\hat{\epsilon}} = \frac{1}{h}\left(c\cos\hat{\epsilon} + s\sin\hat{\epsilon}\right)\left(c\sin\hat{\epsilon} - s\cos\hat{\epsilon}\right)$$
At least one of the factors must be zero, therefore we obtain two solutions
$$\hat{\epsilon}_1 = \arctan\frac{s}{c}, \quad \hat{\epsilon}_2 = \arctan\!\left(-\frac{c}{s}\right)$$
but the second solution obviously belongs to a maximum of $\Omega$ and is dropped. We thus arrive at
$$\hat{\epsilon} = \arctan\frac{s}{c}$$
Inserting this for $\hat{\mu}$ and $\Omega\!\left(\hat{X}\right)$ gives
$$\hat{\mu} = \frac{\sqrt{c^2+s^2}}{h}, \quad \Omega\!\left(\hat{X}\right) = H - \frac{c^2+s^2}{h}$$
D Appendix 4: Covariance matrix of the linearized GMM of the three-parameter transformation

The normal matrix of the linearized GMM with $A$ in Eq. (6.4) is of the form
$$A^TPA = \begin{pmatrix} u & v & w \\ v & 1 & 0 \\ w & 0 & 1 \end{pmatrix}$$
with
$$u := \sum p_i\left(x_i^2+y_i^2\right), \quad v := -x_*\sin\hat{\epsilon} - y_*\cos\hat{\epsilon}, \quad w := x_*\cos\hat{\epsilon} - y_*\sin\hat{\epsilon}.$$
The corresponding inverse can be readily written down:
$$\left(A^TPA\right)^{-1} = \frac{1}{u-v^2-w^2}\begin{pmatrix} 1 & -v & -w \\ -v & u-w^2 & vw \\ -w & vw & u-v^2 \end{pmatrix} = \frac{1}{h}\begin{pmatrix} 1 & -v & -w \\ -v & h+v^2 & vw \\ -w & vw & h+w^2 \end{pmatrix}$$
because $u - v^2 - w^2 = -x_*^2 - y_*^2 + \sum p_i\left(x_i^2+y_i^2\right) = h$.
E Appendix 5: Covariance matrix of the linearized GMM of the four-parameter transformation

The normal matrix of the linearized GMM with $A$ in Eq. (7.6) is of the form
$$A^TPA = \begin{pmatrix} \hat{\mu}^2u & 0 & \hat{\mu}v & \hat{\mu}w \\ 0 & u & w & -v \\ \hat{\mu}v & w & 1 & 0 \\ \hat{\mu}w & -v & 0 & 1 \end{pmatrix}$$
with $u, v, w$ as in appendix 4.
The corresponding inverse can be readily written down:
$$\left(A^TPA\right)^{-1} = \frac{1}{h}\begin{pmatrix} \hat{\mu}^{-2} & 0 & -\hat{\mu}^{-1}v & -\hat{\mu}^{-1}w \\ 0 & 1 & -w & v \\ -\hat{\mu}^{-1}v & -w & u & 0 \\ -\hat{\mu}^{-1}w & v & 0 & u \end{pmatrix}$$
where $u - v^2 - w^2 = h$ has been used (see appendix 4).
References

Ahn S.J., 2005, Least squares orthogonal distance fitting of curves and surfaces in space. Lecture Notes in Computer Science (LNCS), 3151, Springer, Heidelberg, ISBN 3-540-23966-9.
Chang G., 2015, On least-squares solution to 3D similarity transformation problem under Gauss–Helmert model. J Geod., 89, 6, 573–576, DOI 10.1007/s00190-015-0799-z.
Grafarend E.W., Awange J.L., 2003, Nonlinear analysis of the three-dimensional datum transformation [conformal group C7(3)]. J Geod., 77, 1-2, 66–76, DOI 10.1007/s00190-002-0299-9.
Kargoll B., 2012, On the Theory and Application of Model Misspecification Tests in Geodesy. Deutsche Geodätische Kommission Reihe C, Nr. 674, München.
Klein I., Matsuoka M.T., Guzatto M.P., Nievinski F.G., 2017, An approach to identify multiple outliers based on sequential likelihood ratio tests. Survey Review, 49, 357, 1-9, DOI 10.1080/00396265.2016.1212970.
Koch K.R., 1999, Parameter estimation and hypothesis testing in linear models. 2nd edn., Springer, Heidelberg, DOI 10.1007/978-3-662-03976-2.
Lehmann R., 2012, Improved critical values for extreme normalized and studentized residuals in Gauss–Markov models. J Geod., 86, 16, 1137-1146, DOI 10.1007/s00190-012-0569-0.
Lehmann R., 2014, Transformation model selection by multiple hypothesis testing. J Geod., 88, 12, 1117-1130, DOI 10.1007/s00190-014-0747-3.
Lehmann R., 2015, Observation error model selection by information criteria vs. normality testing. Stud. Geophys. Geod., 59, 4, 489-504, DOI 10.1007/s11200-015-0725-0.
Lehmann R., Lösler M., 2016, Multiple outlier detection: hypothesis tests versus model selection by information criteria. J Surv. Eng., 142, 4, DOI 10.1061/(ASCE)SU.1943-5428.0000189.
Lehmann R., Lösler M., 2017, Congruence analysis of geodetic networks – hypothesis tests versus model selection by information criteria. J Appl. Geodesy, 11, 4, 271-283, DOI 10.1515/jag-2016-0049.
Lehmann R., Neitzel F., 2013, Testing the compatibility of constraints for parameters of a geodetic adjustment model. J Geod., 87, 6, 555-566, DOI 10.1007/s00190-013-0627-2.
Lehmann R., Voß-Böhme A., 2017, On the statistical power of Baarda's outlier test and some alternative. J Geod. Sci., 7, 1, 68-78, DOI 10.1515/jogs-2017-0008.
Lösler M., Lehmann R., Eschelbach C., 2017, Model selection via Akaike information criterion – Application in Congruence Analysis (in German). Allgemeine Vermessungs-Nachrichten, 124, 5, 137-145.
Neyman J., Pearson E.S., 1933, On the problem of the most efficient tests of statistical hypotheses. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 231, 694–706, 289–337, DOI 10.1098/rsta.1933.0009.
Somogyi J., Kalmár J., 1988, Verschiedene robuste Schätzungsverfahren für die Helmerttransformation. Allgemeine Vermessungs-Nachrichten, 95, 4, 141-146.
Tanizaki H., 2004, Computational methods in statistics and econometrics. Marcel Dekker, New York, ISBN-13: 978-0824748043.
Teunissen P.J.G., 1985, The geometry of geodetic inverse linear mapping and non-linear adjustment. Netherlands Geodetic Commission, Publications on Geodesy, New Series, Delft, 1–186.
Teunissen P.J.G., 1986, Adjusting and testing with the models of the affine and similarity transformations. Manuscr. Geod., 11, 214-225.
Teunissen P.J.G., 2000, Testing theory; an introduction. 2nd edition, Series on Mathematical Geodesy and Positioning, Delft University of Technology, The Netherlands, ISBN 90-407-1975-6.
Velsink H., 2015, On the deformation analysis of point fields. J Geod., 89, 11, 1071-1087, DOI 10.1007/s00190-015-0835-z.
Wolf H., 1966, Die Genauigkeit der für eine Helmert-Transformation berechneten Koordinaten. Zeitschrift für Vermessungswesen, 91, 2, 33-34.
... The theory of outlier detection and reliability measures has been applied to coordinate transformations by Heck (1985) and Lehmann (2014). Lehmann and Lösler (2018) pointed out that in the case of nonlinear transformation models, hypotheses testing is performed only in the linearized model and may therefore be inexact. The consideration of multiple outliers in the formulation of test statistics is often recommended. ...
... To assume the form in Eq. (8), these correlations must be dropped. But this simplification is frequently used to derive analytical solutions in the framework of coordinate transformations (Lehmann and Lösler 2018;Shen et al. 2006;Sjöberg 2013). In the following, we take great advantage of this restriction. ...
... After some long but straightforward derivations using Eqs. (10) and (11), the following formulas for the least-squares estimates of the four transformation parameters and the a posteriori variance factor are found (Lehmann and Lösler 2018;Lehmann 2023;Sanso 1973): ...
Article
Coordinate transformations are essential in geodesy, surveying engineering, and many other disciplines working with coordinate systems. The task is to estimate transformation parameters from given coordinates of control points by least-squares adjustment. We focus on the internal reliability of the underlying adjustment model. It measures the ability of an adjustment model to detect discrepancies (biases) in the control points. The best established metric for the internal reliability is the minimum detectable bias (MDB). We derive explicit formulas for the MDB of the most important planar coordinate transformations: the planar similarity transformation and the planar rototranslation transformation. They are worked out for a bias in one coordinate, in both coordinates of a control point, and in all four coordinates of two control points of the target system. We investigate situations, where the MDB is infinite, such that biases are undetectable. The results are formulated in nine theorems.
... In der Geodäsie und in der Messtechnik wird die MCM vorrangig zum Abschätzen von Messunsicherheiten (Lösler u. a. 2016a;Schwarz und Hennes 2017), zur Simulation von komplexen Mess-und Auswerteprozessen (Lehmann und Lösler 2018;Luhmann 2018, S. 621f) oder in der Erstellung und Optimierung von Aufnahme-und Netzkonfigurationen (Schmitt 1977) ...
... Insbesondere kann durch die Wahl der Parametrierung ein lineares Problem in ein nichtlineares Problem münden. Die zweidimensionale Helmert-Transformation ist hierfür ein anschauliches Beispiel (Lehmann und Lösler 2018). ...
... Hierbei ist jedoch zu beachten, dass die Verzerrung bzw. das Bias der Schätzwerte durch die Nichtlinearität im funktionalen Modell hervorgerufen wird, die Größe der Verzerrung jedoch durch die Dispersion Σ e beeinflusst wird(Lehmann und Lösler 2018; Lösler u. a. 2020). Da für nichtlineare Modelle nur in Ausnahmefällen strenge Formeln für den Erwartungswert bzw. ...
Thesis
Die Erde befindet sich in einem kontinuierlichen Wandel, der aus verschiedenen variierenden dynamischen Prozessen und einwirkenden Kräften resultiert. Die globale Erderwärmung, der Anstieg des Meeresspiegels oder tektonische Verschiebungen sind einige der globalen Phänomene, die diesen Veränderungsprozess sichtbar machen. Um diese Veränderungen besser zu verstehen, deren Ursachen zu analysieren und um geeignete Präventivmaßnahmen abzuleiten, ist ein eindeutiger Raumbezug zwingend notwendig. Der International Terrestrial Reference Frame (ITRF) als globales erdfestes kartesisches Koordinatensystem bildet hierbei die fundamentale Basis für einen eindeutigen Raumbezug, zur Bestimmung von präzisen Satellitenorbits oder zum Detektieren von Verformungen der Erdkruste. Die 2015 verabschiedete Resolution „A global geodetic reference frame for sustainable development“ (A/RES/69/266) der Vereinten Nationen (UN) verdeutlicht den hohen Stellenwert und die Notwendigkeit eines solchen globalen geodätischen Bezugssystems. Das Global Geodetic Observing System (GGOS) wurde 2003 durch die International Association of Geodesy (IAG) gegründet. „Advancing our understanding of the dynamic Earth system by quantifying our planet’s changes in space and time“ lautet die 2011 formulierte Zielsetzung, auf die alle Arbeiten von GGOS ausgerichtet sind, um die metrologische Plattform für sämtliche Erdbeobachtungen zu realisieren. Die Bestimmung eines globalen geodätischen Bezugsrahmens, der weltweit eine Positionsgenauigkeit von 1mm ermöglicht, ist eine der großen Herausforderungen von GGOS. Das Erreichen dieses Ziels setzt neben der technischen Weiterentwicklung und dem infrastrukturellen Ausbau geodätischer Raumverfahren das Identifizieren und Quantifizieren von systematischen Abweichungen sowohl im lokalen als auch im globalen Kontext voraus. Die Bestimmung eines globalen geodätischen Bezugsrahmens erfolgt durch eine kombinierte Auswertung aller geodätischen Raumverfahren. Da diese untereinander nur eine geringe physische Verknüpfung aufweisen, stellen lokal bestimmte Verbindungsvektoren, die auch als Local-Ties bezeichnet werden, eine der wesentlichen Schlüsselkomponenten bei der Kombination dar. Ungenaue, fehlerbehaftete und inaktuelle Local-Ties limitieren die Zuverlässigkeit des globalen geodätischen Bezugssystems. In der vorliegenden Arbeit werden ein Modell sowie verschiedene Lösungsverfahren entwickelt, die eine Verknüpfung der geometrischen Referenzpunkte von Radioteleskopen bzw. Laserteleskopen mit anderen geodätischen Raumverfahren durch prozessbegleitende lokale terrestrische Messungen erlauben. Während Radioteleskope zur Interferometrie auf langen Basislinien (VLBI) verwendet werden, ermöglichen Laserteleskope Entfernungsmessungen zu Erdsatelliten (SLR) oder zum Mond (LLR). Die Bestimmung des geometrischen Referenzpunktes von Laser- und Radioteleskopen ist messtechnisch herausfordernd und erfordert eine indirekte Bestimmungsmethode. Bestehende geometrische Methoden sind entweder auf eine bestimmte Teleskopkonstruktion beschränkt oder erfordern ein spezielles Messkonzept, welches ein gezieltes Verfahren des Teleskops voraussetzt. Die in dieser Arbeit hergeleitete Methode weist keine konstruktionsbedingten Restriktionen auf und erfüllt zusätzlich alle Kriterien der durch das GGOS angeregten prozessintegrierten in-situ Referenzpunktbestimmung. Hierdurch wird es möglich, den Referenzpunkt kontinuierlich und automatisiert zu bestimmen bzw. zu überwachen. 
Um die Zuverlässigkeit von VLBI-Daten zu erhöhen und um die Zielsetzung von 1mm Positionsgenauigkeit im globalen Kontext zu erreichen, wird das bestehende VLBI-Netz gegenwärtig durch zusätzliche Radioteleskope unter dem Namen VLBI2010 Global Observing System (VGOS) erweitert. Die hierbei entstehenden VGOS-Radioteleskope zeichnen sich u. a. durch eine sehr kompakte Bauweise und hohe Rotationsgeschwindigkeiten aus. Weitgehend ununtersucht ist das Eigenverformungsverhalten dieser Teleskope. Während für konventionelle Radioteleskope bspw. Signalwegänderungen von z. T. mehreren Zentimetern dokumentiert sind, existieren nur wenige vergleichbare Studien für VGOS-Radioteleskope. Hauptgründe sind zum einen die erhöhten Genauigkeitsanforderungen und zum anderen fehlende Modelle zur Beschreibung der Reflektorgeometrien, wodurch eine direkte Übertragung bisheriger Mess- und Analyseverfahren erschwert wird. In dieser Arbeit werden für VGOS-spezifizierte Radioteleskope Modelle erarbeitet, die eine geometrische Beschreibung der Form des Haupt- und Subreflektors ermöglichen. Basierend auf diesen Modellen lassen sich u. a. Änderungen der Brennweite oder Variationen der Strahllänge infolge von lastfallabhängigen Deformationen geometrisch modellieren. Hierdurch ist es möglich, wesentliche Einflussfaktoren zu quantifizieren, die eine Variation des Signalweges hervorrufen und unkompensiert vor allem zu einer systematischen Verfälschung der vertikalen Komponente der Stationskoordinate führen. Die Wahl eines geeigneten Schätzverfahrens, um unbekannte Modellparameter aus überschüssigen Beobachtungen abzuleiten, wird häufig als trivial und gelöst angesehen. Im Rahmen dieser Arbeit wird gezeigt, dass neben messprozessbedingten systematischen Abweichungen auch systematische Abweichungen durch das gewählte Schätzverfahren entstehen können. So resultieren aus der Anwendung eines Schätzverfahrens, welches ausschließlich in linearen Modellen Gültigkeit besitzt, i.A. keine erwartungstreuen Schätzwerte bei nichtlinearen Problemstellungen. Insbesondere in der Formanalyse des Hauptreflektors eines VLBI-Radioteleskops zeigt sich, dass die resultierenden Schätzwerte verzerrt sind, und diese Verzerrungen Größenordnungen erreichen, die als kritisch zu bewerten sind.
... However, Xue and Yang (2017) studied the linearization error in short-distance positioning applications, such as indoor positioning or laser scanning, and emphasised that the bias in nonlinear least-squares can become significant. As shown by Lehmann and Lösler (2018), the nonlinearity of the least-squares problem depends on the functional model, but the stochastic model controls the impact of the nonlinearity onto the estimates. ...
... Hence, having an implicit nonlinear functional model, the bias of the parameters as well as the related dispersion can be obtained. Teunissen (1990) showed that the nonlinearity of the model is caused by two types of nonlinearity: the nonlinearity of the manifold itself and the nonlinearity of the parameter curves in the manifold. Whereas the first type is intrinsic, the second type depends on the parametrization (see also Lehmann and Lösler 2018). Corresponding explicit and implicit functional models differ in the parametrization but are related to the same manifold. ...
... Based on the first-order dispersion of the estimated parameters of the best-fitting plane, Heinz et al. (2019) recommended increasing the discretisation of the observed plane instead of improving the dispersion of the observations. As demonstrated, the stochastic model controls the impact of the nonlinearity onto the estimates (see also Lehmann and Lösler 2018). If almost unbiased estimates are desired, the user is advised to improve the dispersion of the observations instead of increasing the discretisation. ...
Article
Full-text available
To evaluate the benefit of a measurement procedure onto the estimated parameters, the dispersion of the parameters is usually used. To draw objective conclusions, unbiased or at least almost unbiased estimates are required. In geodesy, most of the functional relations are nonlinear but the statistical properties of the estimates are usually obtained by a linearised substitute-problem. Since the statistical properties of linear models cannot be passed to the nonlinear case, the estimates are biased. In this contribution, the bias of the parameters as well as the bias of the dispersion in nonlinear implicit models is investigated, using a second-order Taylor expansion. Nonlinear implicit models are general models and are used, for instance, in the framework of surface-fitting or coordinate transformation, which considers errors in the coordinates of both the source and the target system. The bias is introduced as a further indicator to validate the benefit of an adapted measurement process using more precise measuring instruments. Since some parametrisations yield an ill-posed problem, the case of a singular equation system is also investigated. To demonstrate the second-order effect onto the estimates, a best-fitting plane is adjusted under varying configurations. Such a configuration is recommended in evaluating uncertainties of optical 3D measuring systems, e.g. in the framework of the VDI/VDE 2634 guideline. The estimated bias is used as an indicator of whether a large number of poor observations provides better results than a small but precise sample.
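The size of such a second-order bias, and its dependence on the stochastic model, can be made tangible with a small Monte Carlo experiment on a deliberately simple nonlinear function of the observations. The sketch below is not the second-order Taylor machinery of the paper above; the point coordinates, noise levels and sample size are arbitrary assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# True planar points and the true (nonlinear) quantity of interest: their distance
p1, p2 = np.array([10.0, 20.0]), np.array([13.0, 24.0])
true_dist = np.linalg.norm(p2 - p1)   # 5 m

n_samples = 200_000
for sigma in (0.001, 0.01, 0.1):      # assumed standard deviation of the coordinate noise [m]
    noisy_p1 = p1 + rng.normal(0.0, sigma, size=(n_samples, 2))
    noisy_p2 = p2 + rng.normal(0.0, sigma, size=(n_samples, 2))
    est_dist = np.linalg.norm(noisy_p2 - noisy_p1, axis=1)
    bias = est_dist.mean() - true_dist
    print(f"sigma = {sigma:5.3f} m  ->  empirical bias = {bias * 1e3:7.4f} mm")
```

The empirical bias grows roughly with the variance of the coordinates, which mirrors the statement that the nonlinearity originates from the functional model while the stochastic model controls the size of the effect.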
... Applying critical values taken from improper distributions leads to wrong decisions, as shown by Lehmann and Lösler (2018). This problem can be solved by improving critical values or by deriving test statistics that follow known statistical test distributions using a proper stochastic model. ...
... Here we set the significance level of the LR test to α = 0.05: Lehmann and Lösler (2018) showed that the loss of power of the test due to nonlinearity is small for that choice. Under some regularity conditions, the test statistic T under H_0 follows a χ² (chi-squared) distribution with p_f degrees of freedom. ...
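The decision rule described in this snippet can be written down in a few lines: the likelihood-ratio statistic T is compared with the upper α-quantile of a χ² distribution with p_f degrees of freedom. The values of T and p_f below are made up for illustration; only the generic rule is shown.

```python
from scipy.stats import chi2

alpha = 0.05   # significance level of the LR test
p_f = 3        # assumed degrees of freedom (number of constrained parameters)
T = 9.21       # assumed value of the likelihood-ratio test statistic

critical_value = chi2.ppf(1.0 - alpha, df=p_f)   # upper alpha-quantile under H_0
p_value = chi2.sf(T, df=p_f)                     # survival function = 1 - cdf

print(f"critical value = {critical_value:.3f}, p-value = {p_value:.4f}")
print("reject H_0" if T > critical_value else "do not reject H_0")
```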
Article
Full-text available
The measurement noise of a terrestrial laser scanner (TLS) is correlated. Neglecting those correlations affects the dispersion of the parameters when the TLS point clouds are mathematically modelled: statistical tests for the detection of outliers or deformation become misleading. The account for correlations is, thus, mandatory to avoid unfavourable decisions. Unfortunately, fully populated variance covariance matrices (VCM) are often associated with computational burden. To face that challenge, one answer is to rescale a diagonal VCM with a simple und physically justifiable variance inflation factor (VIF). Originally developed for a short-range correlation model, we extend the VIF to account for long-range dependence coming from, for example, atmospheric turbulent effects. The validation of the VIF is performed for the congruency test for deformation with Monte Carlo simulations. Our real application uses data from a bridge under load.
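A variance inflation factor of the kind described can be motivated by a classical result: for the arithmetic mean of n equally precise observations with correlation function ρ(k), the variance is inflated by the factor 1 + 2 Σ (1 - k/n) ρ(k) compared with the uncorrelated case. The sketch below evaluates this factor for an assumed exponential (short-range) correlation model; the correlation length and the sample size are arbitrary, and the expression is not the specific VIF derived in the paper above.

```python
import numpy as np

def variance_inflation_factor(n, rho):
    """Inflation of the variance of the arithmetic mean of n observations
    with correlation function rho(k), relative to the uncorrelated case."""
    k = np.arange(1, n)
    return 1.0 + 2.0 * np.sum((1.0 - k / n) * rho(k))

tau = 20.0   # assumed correlation length in samples (exponential model rho(k) = exp(-k/tau))
n = 1000     # assumed number of observations
vif = variance_inflation_factor(n, lambda k: np.exp(-k / tau))
print(f"VIF = {vif:.1f}  ->  standard deviation of the mean inflated by {np.sqrt(vif):.1f}")
```

Rescaling a diagonal VCM by such a factor keeps the computational simplicity of uncorrelated observations while restoring a realistic dispersion level for derived test statistics.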
... Therefore, there is a need for a redefinition of the transformation parameters between the LMG, classical geodetic network, and GNSS global grid systems. Although several related 2D coordinate transformation works have been reported in the literature (Gargula and Gawronek, 2023; Hong, 2021; Qin et al., 2020; Alcaras et al., 2020; Lu et al., 2019; Rofatto et al., 2019; Eteje et al., 2019; Bremner and Santos, 2019; Lehmann and Lösler, 2018; Öcalan, 2018; Goudarzi and Landry, 2017; Ampatzidis and Melachroinos, 2017; Ampatzidis and Demirtzoglou, 2017; Ansari et al., 2017; Soycan et al., 2017), none of the existing studies has applied and compared the suitability of 2D conformal and 2D affine models to unify a mine grid, a national mapping grid and UTM. It was noticed that most of the studies focused on cadastral coordinate transformation, direct projection of the geocentric system to local topocentric coordinates and geological map transformation. ...
Article
Full-text available
In mining operations, coordinate transformation plays a key role in transforming coordinates acquired in the Global Navigation Satellite System (GNSS) into the national and local mine grid systems. Most mining sites have transformation parameters that were determined using only a few common points or the minimum number of co-located points. These parameters, however, only fit within a limited extent of the mine concession, which leads to extrapolation and incorrect transformation results when they are applied beyond the existing co-located points. As the mine expands beyond its operational zones, a new set of transformation parameters has to be defined that avoids extrapolation and covers a wider extent of the mine concession. This study applied, evaluated and compared the two-dimensional (2D) conformal similarity model and the 2D affine model to facilitate the transformation of Local Mine Grid (LMG) coordinates to the Ghana National Projected Grid (GNG) and the Universal Transverse Mercator (UTM) grid, and vice versa. The two models showed similar transformation performance on all grid systems, confirming the consistency of the transformation results. In transforming between GNG and LMG, the 2D conformal results differ from the 2D affine results by 0.0079 m, -0.0128 m, 0.0079 m and 0.0261 m in RMSE_HPE, SD_HPE, Max_HPE and Min_HPE, respectively. Similar observations were made for the transformations between UTM and LMG and between UTM and GNG. Based on the results obtained, both models are applicable for connecting a mine grid system to a national (non-geocentric) grid system and to UTM.
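The 2D conformal (four-parameter similarity) model compared in that study is linear in the parameters a = s·cos θ, b = s·sin θ and the two translations, so it can be estimated by ordinary least squares from co-located points. The following sketch uses fabricated coordinates purely for illustration; in practice the selection and weighting of the co-located points, and whether errors in both coordinate systems are modelled, also matter.

```python
import numpy as np

def fit_2d_conformal(src, dst):
    """Ordinary least-squares 4-parameter similarity transformation:
    x' = a*x - b*y + tx,  y' = b*x + a*y + ty,  with a = s*cos(theta), b = s*sin(theta)."""
    x, y = src[:, 0], src[:, 1]
    ones, zeros = np.ones_like(x), np.zeros_like(x)
    A = np.vstack([np.column_stack([x, -y, ones, zeros]),
                   np.column_stack([y,  x, zeros, ones])])
    l = np.concatenate([dst[:, 0], dst[:, 1]])
    (a, b, tx, ty), *_ = np.linalg.lstsq(A, l, rcond=None)
    return np.hypot(a, b), np.arctan2(b, a), np.array([tx, ty])   # scale, rotation, shift

# Fabricated co-located points (e.g. local mine grid -> national grid), illustration only
src = np.array([[100.0, 200.0], [350.0, 180.0], [220.0, 420.0], [500.0, 400.0]])
theta, s, t = np.deg2rad(0.5), 1.0002, np.array([1200.0, -340.0])
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
dst = s * src @ R.T + t

scale, rotation, shift = fit_2d_conformal(src, dst)
print(scale, np.rad2deg(rotation), shift)   # recovers 1.0002, 0.5 deg and the translation
```

The 2D affine model follows the same least-squares pattern with six parameters instead of four, which is why the two models can be compared directly on the same set of co-located points.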
... As a result, we also do not need to be concerned about whether it is a linear geodetic network (e.g., levelling or GNSS vector network) or a non-linear one (e.g., trilateration), since the univariate model is always linear. Linearization of a nonlinear model may reduce the detection power [35]. In addition, the univariate approach has the benefit of reducing the smearing and masking effects of displacements. ...
Preprint
Full-text available
• A new sequential testing procedure to identify multiple unstable points, regardless of whether they are a priori assumed control points or not.
• The proposed method controls the false positive rate efficiently.
• A new rigorous procedure to determine the maximum number of points to be inspected.
• The procedure avoids the problem of having non-separable models by restricting the maximum number of points to be inspected.
• The method operates in the domain of the observations and not of the coordinates. Consequently, the problem of defining the datum at different epochs and the S-transformation are avoided.
• The fact of always working with a linear model reduces possible test power losses due to model linearization.
... However, this biases the estimate, since transferring statistical properties from linear problems to nonlinear problems is not admissible (Teunissen & Knickmeyer 1988). The bias results from the nonlinearity of the functional model, but its magnitude is essentially governed by the stochastic model (Lehmann & Lösler 2018). Figure 2 exemplarily shows the resulting biases of the X coordinate and of the associated standard deviation of the estimated reference point as a function of the stochastic model. As expected, the bias increases with increasing σ_i, but remains clearly below 1 mm even for σ_i = 10 cm. ...
Chapter
Full-text available
Continuous monitoring of deformations of the Earth's crust is carried out, among other techniques, by Synthetic Aperture Radar (SAR). For the radiometric calibration of these SAR systems, permanently installed trihedral radar reflectors on the ground, so-called corner cube reflectors (CCR), are frequently used. The geometric accuracy, for example the flatness of the panels, but also the accuracy of the orientation, influences the uncertainty of the radar cross-section of the reflectors. If CCRs are also used as geometric reference points, the positions of the reflector phase centres must additionally be provided with superior accuracy. Based on precise terrestrial measurements, these quantities can be derived and evaluated by suitable modelling. A suitable analysis strategy for this purpose is presented in this contribution, and its application is demonstrated using real data collected at the Onsala Space Observatory in 2022.
... Even if the parameter vectors X̂_i are derived by a linear substitute problem within each MC simulation, the linearisation takes place at different expansion points, see Figure 1. Instead of solving a single linear substitute problem using (11), ... However, if the functional model f(X) is nonlinear, the distribution of the parameters depends on the stochastic model and the curvature of f, as shown by Lehmann and Lösler (2018) in the framework of coordinate transformations. The VCMs obtained by (17) and (19), respectively, are comparable if the observations are normally distributed and f is only a moderately nonlinear function. ...
Article
Full-text available
In this contribution it is shown how an extended uncertainty budget of the observations according to the Guide to the Expression of Uncertainty in Measurement (GUM) can be considered in adjustment computations. The extended uncertainty budget results from the combination of Type A standard uncertainties determined with statistical methods and Type B standard uncertainties derived with nonstatistical methods. Two solutions are investigated, namely the adjustment in the classical Gauss-Markov model and the adjustment in the Gauss-Markov model using Monte Carlo simulations for the consideration of the uncertainties of the observations. Numerical examples are given to show that an appropriate interpretation of the dispersion measures for the unknowns is particularly important in order to avoid misinterpretation of the results. Furthermore, the effects of changing the weights of the observations on the adjustment results are shown. Finally, practical advice for the consideration of an extended uncertainty budget of the observations in adjustment computations is given.
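The comparison underlying the snippet and the abstract above, first-order (linearised) variance propagation versus Monte Carlo simulation, can be illustrated on a deliberately simple nonlinear model. This is only a sketch of the principle with an assumed polar-to-Cartesian model and assumed uncertainties, not the GUM workflow of the cited contribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Nonlinear model: polar observations (distance d, direction t) -> planar coordinates
def f(d, t):
    return np.array([d * np.cos(t), d * np.sin(t)])

d0, t0 = 100.0, np.deg2rad(30.0)            # assumed observed values
sigma_d, sigma_t = 0.01, np.deg2rad(0.01)   # assumed standard uncertainties

# First-order (linearised) propagation: Sigma_x = J * Sigma_l * J^T
J = np.array([[np.cos(t0), -d0 * np.sin(t0)],
              [np.sin(t0),  d0 * np.cos(t0)]])
Sigma_l = np.diag([sigma_d**2, sigma_t**2])
Sigma_lin = J @ Sigma_l @ J.T

# Monte Carlo propagation: evaluate the nonlinear model for simulated observations,
# i.e. the linearisation point is effectively different in every sample
samples = f(rng.normal(d0, sigma_d, 200_000), rng.normal(t0, sigma_t, 200_000))
Sigma_mc = np.cov(samples)

print(np.round(Sigma_lin * 1e6, 3))   # [mm^2]
print(np.round(Sigma_mc * 1e6, 3))    # [mm^2]
```

For such a moderately nonlinear model the two covariance matrices agree closely; the differences grow with the curvature of the model and with the noise level of the observations, in line with the remarks quoted above.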
... The second-order correction consists of the Hessian matrix, obtained from the functional model, and the dispersion matrix, defining the stochastic model. Doubtlessly, the nonlinearity comes from the functional model, but the stochastic model controls the size of the bias in the estimates [30,56]. Thanks to the high accuracy of close range photogrammetric systems but also due to the symmetric measurement configuration, the second-order effect becomes negligible, and the first-order solution yields proper estimates. ...
Article
Full-text available
A global geodetic reference system (GGRS) is realized by physical points on the Earth’s surface and is referred to as a global geodetic reference frame (GGRF). The GGRF is derived by combining several space geodetic techniques, and the reference points of these techniques are the physical points of such a realization. Due to the weak physical connection between the space geodetic techniques, so-called local ties are introduced to the combination procedure. A local tie is the spatial vector defined between the reference points of two space geodetic techniques. It is derivable by local measurements at multitechnique stations, which operate more than one space geodetic technique. Local ties are a crucial component within the intertechnique combination; therefore, erroneous or outdated vectors affect the global results. In order to reach the ambitious accuracy goal of 1 mm for a global position, the global geodetic observing system (GGOS) aims for strategies to improve local ties, and, thus, the reference point determination procedures. In this contribution, close range photogrammetry is applied for the first time to determine the reference point of a laser telescope used for satellite laser ranging (SLR) at Geodetic Observatory Wettzell (GOW). A measurement campaign using various configurations was performed at the Satellite Observing System Wettzell (SOS-W) to evaluate the achievable accuracy and the measurement effort. The bias of the estimates was studied using an unscented transformation. Biases occur if nonlinear functions are replaced by linear substitute problems. Moreover, the influence of the chosen stochastic model onto the estimates is studied by means of various dispersion matrices of the observations. It is shown that the resulting standard deviations are two to three times overestimated if stochastic dependencies are neglected.
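The unscented transformation mentioned in the abstract propagates mean and covariance through a nonlinear function by evaluating it at a small set of deterministically chosen sigma points, which also exposes the second-order shift of the mean. The following is a generic textbook-style sketch with standard scaling parameters and a made-up two-point example, not the implementation used by the authors.

```python
import numpy as np

def unscented_transform(func, mu, Sigma, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate mean mu and covariance Sigma through func via 2n+1 sigma points."""
    n = mu.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * Sigma)
    sigma_pts = np.vstack([mu, mu + S.T, mu - S.T])
    w_m = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    w_c = w_m.copy()
    w_m[0] = lam / (n + lam)
    w_c[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    Y = np.array([func(p) for p in sigma_pts])
    mean = w_m @ Y
    diff = Y - mean
    cov = (w_c * diff.T) @ diff
    return mean, cov

# Example: distance and azimuth derived from two noisy planar points (assumed values)
def polar(x):
    dx, dy = x[2] - x[0], x[3] - x[1]
    return np.array([np.hypot(dx, dy), np.arctan2(dy, dx)])

mu = np.array([10.0, 20.0, 13.0, 24.0])
Sigma = np.eye(4) * 0.05**2                 # assumed 5 cm coordinate uncertainty
mean, cov = unscented_transform(polar, mu, Sigma)
print(mean - polar(mu))                     # shift of the mean caused by the nonlinearity
print(np.sqrt(np.diag(cov)))                # propagated standard deviations
```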
Preprint
Full-text available
Congruence analysis is widely used to monitor structural stability through statistical analysis of differences in estimated coordinates of geodetic network points across observation epochs. Traditional methods for the identification of unstable points rely on either iterative hypothesis tests or combinatorial procedures, each with inherent limitations. To overcome these, we propose the Sequential and Combinatorial Geometry-Free Unstable Point Identification (SeqCup-Free) method, which integrates combinatorial analysis and likelihood ratio tests within a sequential framework. Unlike conventional approaches, SeqCup-Free uses observation differences instead of estimated coordinates, which removes the need for geodetic network datum definition and maintains the statistical power of hypothesis tests. Additionally, we introduce a modified version, SeqCup-Mod, which extends the method to non-nested hypothesis tests and achieves high success rates. A critical aspect of our approach is the definition of the maximum number of points considered unstable, which avoids statistical overlap while allowing the system to detect the maximum possible displacements. Results from simulations and real geodetic network data show that SeqCup-Free provides consistent and, in some cases, superior performance compared to classical and recent methods in deformation monitoring.
Article
Full-text available
Baarda’s outlier test is one of the best established theories in geodetic practice. The optimal test statistic of the local model test for a single outlier is known as the normalized residual. Also other model disturbances can be detected and identified with this test. It enjoys the property of being a uniformly most powerful invariant (UMPI) test, but is not a uniformly most powerful (UMP) test. In this contribution we will prove that in the class of test statistics following a common central or non-central χ
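The normalized residual mentioned here, Baarda's w-test statistic, is computed from the residuals and their cofactor matrix of a linear(ised) Gauss-Markov model as w_i = v_i / (σ0 · sqrt(q_vv,ii)). The sketch below uses a deliberately simple model of repeated observations with made-up numbers; it only illustrates the generic formula, not the derivations of the paper above.

```python
import numpy as np

# Simple Gauss-Markov model: five direct observations of one unknown, equal weights
A = np.ones((5, 1))                                # design matrix
l = np.array([10.02, 9.98, 10.01, 10.10, 9.99])    # observations, the 4th carries a gross error
sigma0 = 0.02                                      # a priori standard deviation of unit weight
Q_ll = np.eye(5)                                   # cofactor matrix of the observations

P = np.linalg.inv(Q_ll)                            # weight matrix
Q_xx = np.linalg.inv(A.T @ P @ A)                  # cofactor matrix of the unknowns
x_hat = Q_xx @ A.T @ P @ l                         # least-squares estimate
v = A @ x_hat - l                                  # residuals
Q_vv = Q_ll - A @ Q_xx @ A.T                       # cofactor matrix of the residuals

w = v / (sigma0 * np.sqrt(np.diag(Q_vv)))          # normalized residuals (w-test)
print(np.round(w, 2))   # under H0 each w_i ~ N(0,1); here only |w_4| exceeds 3.29
```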
Article
Full-text available
To extract the best possible information from geodetic and geophysical observations, it is necessary to select a model of the observation errors, mostly the family of Gaussian normal distributions. However, there are alternatives, typically chosen in the framework of robust M-estimation. We give a synopsis of well-known and less well-known models for observation errors and propose to select a model based on information criteria. In this contribution, we compare the Akaike information criterion (AIC) and the Anderson-Darling (AD) test and apply them to the test problem of fitting a straight line. The comparison is facilitated by a Monte Carlo approach. It turns out that the model selection by AIC has some advantages over the AD test.
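The AIC-based choice of an observation-error model can be sketched for the straight-line example: fit the line under a Gaussian and under a Laplace error model by maximum likelihood and compare AIC = 2k - 2 ln L. The simulated data, the candidate models and the crude numerical maximisation below are illustrative assumptions and do not reproduce the Monte Carlo design of the study above.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, laplace

rng = np.random.default_rng(7)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 0.5 * x + rng.laplace(0.0, 0.2, size=x.size)   # simulated non-Gaussian errors

def neg_log_lik(params, dist):
    a, b, log_s = params
    return -np.sum(dist.logpdf(y - (a + b * x), scale=np.exp(log_s)))

b0, a0 = np.polyfit(x, y, 1)                             # rough starting values
x0 = [a0, b0, np.log(np.std(y - (a0 + b0 * x)))]

for name, dist in {"Gaussian": norm, "Laplace": laplace}.items():
    res = minimize(neg_log_lik, x0=x0, args=(dist,), method="Nelder-Mead")
    aic = 2 * 3 + 2 * res.fun                            # AIC = 2k - 2 ln L with k = 3
    print(f"{name:8s}  AIC = {aic:.1f}")
# The error model with the smaller AIC is preferred.
```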
Article
Full-text available
A new approach to determine a multi-point deformation of the earth's surface or objects upon it, represented by point fields measured in two epochs, is presented. The problem of determining which points have been deformed is not approached by testing point-by-point, but by formulating alternative hypotheses that test whether one, two or more subsets of points have been deformed, each subset in its own way. The method is based on the least squares connection adjustment, defines alternative hypotheses and searches for the best one by testing a large number of them. If the best hypothesis is found, a least squares estimation of the deformations is provided. The test results of the presented method are invariant under changes of the S-systems in which the point coordinates are defined. The results of a numerical test of the method applied to a simulated network are given. In designing a geodetic deformation network, minimal detectable deformations can be computed, belonging to likely deformation patterns. The proposed method leads to a reconsideration of the duality of reference and object points. A comparison with the method of testing confidence ellipsoids is made. The relevance of the difference between geometric and physical interpretations of deformations and the consequences of the presented method for future developments are discussed.
Article
Geodetic deformation analysis can be interpreted as a model selection problem. The null model indicates that no deformation has occurred. It is opposed to a number of alternative models, which stipulate different deformation patterns. A common way to select the right model is the usage of a statistical hypothesis test. However, since we have to test a series of deformation patterns, this must be a multiple test. As an alternative solution for the test problem, we propose the p-value approach. Another approach arises from information theory. Here, the Akaike information criterion (AIC) or some alternative is used to select an appropriate model for a given set of observations. Both approaches are discussed and applied to two test scenarios: a synthetic levelling network and the Delft test data set. It is demonstrated that they work but behave differently, sometimes even producing different results. Hypothesis tests are well-established in geodesy, but may suffer from an unfavourable choice of the decision error rates. The multiple test also suffers from statistical dependencies between the test statistics, which are neglected. Both problems are overcome by applying information criteria such as AIC.
Article
In applied engineering geodesy, hypothesis testing is one of the standard tools to evaluate estimated least-squares parameters, to detect outliers in observations, or to prove the stability of points in the framework of deformation analysis. Based on the decision of the hypothesis test, the least-squares model is optimized directly or indirectly, e.g. by adding additional parameters or by removing incorrect observations. In statistics, this approach is known as model selection, and its goal is to find a suitable model, which is indicated by an adequate number of parameters and a high goodness-of-fit at the same time. Based on information theory, one well-known method for model selection is the Akaike information criterion (AIC). The AIC evaluates model candidates via the Kullback-Leibler divergence. This investigation gives a short introduction to the AIC and focuses on the usage of the AIC technique in congruence analysis by means of two case studies.
Article
One of the main challenges in the quality control of geodetic measurements is the reliable identification of multiple outliers. Within this context, the goal of this paper is to present a procedure designated here as Sequential Likelihood Ratio Tests for Multiple Outliers (SLRTMO). To verify its performance, a levelling network was simulated involving one, two and three (simultaneous) outliers. Also a GNSS network involving one and two (simultaneous) outliers was analysed. Results showed that SLRTMO is efficient for single and multiple outliers, simulated with magnitude greater than five standard deviations, with a mean success rate of 79.6% for these cases. Furthermore, the maximum number of outliers to be tested has to be defined according to the redundancy of the network so as to ensure the performance of SLRTMO.
Article
The detection of multiple outliers can be interpreted as a model selection problem: the null model, which indicates an outlier-free set of observations, is opposed to a class of alternative models, which contain a set of additional bias parameters. A common way to select the right model is the usage of a statistical hypothesis test. In geodesy, Baarda's data snooping is most popular. Another approach arises from information theory. Here, the Akaike information criterion (AIC) is used to select an appropriate model for a given set of observations. AIC is based on the Kullback-Leibler divergence, which describes the discrepancy between the model candidates. Both approaches are discussed and applied to test problems: the fitting of a straight line and a geodetic network. Some relationships between data snooping and information criteria are elaborated. In a comparison it turns out that the information criteria approach is simpler and more elegant. But besides AIC there are many alternative information criteria selecting different outliers, and it is not clear which one is optimal.
Article
In this note, the 3D similarity datum transformation problem with the Gauss–Helmert model, also known as the 3D symmetric Helmert transformation, is studied. The closed-form least-squares solution, i.e., without iteration, to this problem is derived. It is found that the rotation parameters in this solution are the same as those for the transformation with the Gauss–Markov model, while the scale and translation parameters differ from each other.
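A common closed-form route to the rotation of a 3D similarity transformation is the SVD-based orthogonal Procrustes solution over the centred point sets; according to the note above, this rotation is shared by the Gauss-Markov and the symmetric Gauss-Helmert formulations, whereas scale and translation differ. The sketch below shows the widely used SVD recipe with fabricated, noise-free points; it is an illustration of the principle and not taken from the cited derivation.

```python
import numpy as np

def rotation_svd(src, dst):
    """Closed-form least-squares rotation between centred point sets (orthogonal Procrustes)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(dst_c.T @ src_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])   # enforce a proper rotation
    return U @ D @ Vt

# Fabricated control points and a known similarity transformation (scale, rotation, shift)
rng = np.random.default_rng(0)
src = rng.uniform(-50.0, 50.0, size=(6, 3))
angle = np.deg2rad(10.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
dst = 1.001 * src @ R_true.T + np.array([100.0, -20.0, 5.0])

R_est = rotation_svd(src, dst)
print(np.round(R_est - R_true, 12))   # ~0: the rotation is recovered independently of scale and shift
```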