Total Least-Squares Adjustment of Condition Equations

Article in Studia Geophysica et Geodaetica 55(3) · July 2011
DOI: 10.1007/s11200-011-0032-3
The usual least-squares adjustment within an Errors-in-Variables (EIV) model is often described as a Total Least-Squares Solution (TLSS), just as the usual least-squares adjustment within a Random Effects Model (REM) has become popular under the name of Least-Squares Collocation (without trend). In comparison to the standard Gauss-Markov Model (GMM), the EIV model is less informative, whereas the REM is more informative. It is known under which conditions exactly the GMM or the REM can be equivalently replaced by a model of Condition Equations or, more generally, by a Gauss-Helmert Model (GHM). Such equivalency conditions are, however, still unknown for the EIV model once it is transformed into such a model of Condition Equations. As a first step, this contribution shows what the respective residual vector and residual matrix would look like if the Total Least-Squares Solution is applied to condition equations with a random coefficient matrix that describes the transformation of the random error vector. The results are demonstrated with numerical examples, which show that this approach may be valuable in its own right.
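For the plain EIV model y ≈ Aξ with unit weights, the TLS solution is classically obtained from the SVD of the augmented matrix [A | y]. The sketch below shows that textbook (Golub/Van Loan-style) construction, not the condition-equation formulation developed in the article; function and variable names are illustrative:

```python
import numpy as np

def tls_fit(A, y):
    """Classical (unweighted) TLS estimate for y ~ A @ xi, computed
    from the SVD of the augmented matrix [A | y]."""
    m = A.shape[1]
    Z = np.column_stack([A, y])
    _, _, Vt = np.linalg.svd(Z)
    V = Vt.T
    # Right singular vector(s) belonging to the smallest singular value
    V_ay = V[:m, m:]
    V_yy = V[m:, m:]
    # TLS solution (assumes the lower-right block is nonsingular)
    return (-V_ay @ np.linalg.inv(V_yy)).ravel()

# Exact data on the line y = 1 + 2x: TLS reproduces the parameters
A = np.column_stack([np.ones(4), np.arange(4.0)])
xi = tls_fit(A, np.array([1.0, 3.0, 5.0, 7.0]))
```

With error-free data the smallest singular value of [A | y] is zero and the TLS estimate coincides with ordinary least squares.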
    • "In geodetic literature Teunissen (1988) was the first who solved an EIV model in an exact form. The equivalent form of the EIV model appears as the condition equation with a random coefficient matrix (Schaffrin and Wieser 2011). From a geodetic point of view, the EIV model is a special case of the nonlinear Gauss Helmert Model (GHM), which generates the standard LS solution after iterative linearization (Neitzel 2010; Schaffrin and Snow 2010; Fang 2011, 2013a; Bányai 2012; Snow 2012). "
    ABSTRACT: It is well known that the errors-in-variables (EIV) model has been treated as a special case of the traditional geodetic model, the nonlinear Gauss–Helmert model (GHM), for more than a century. In this contribution, an adjustment of the EIV model with equality and inequality constraints is investigated based on the nonlinear GHM. In each iteration, the constrained EIV model is linearized to form a quadratic program. Furthermore, the precision description is investigated for the mixed constrained problem. The demonstrated results from the numerical examples show that this approach avoids the large computational expense of the existing combinatorial solution, which normally grows with the number of inequality constraints. Keywords: Total Least-Squares (TLS) · Errors-in-variables model · Equality and inequality constraints · Gauss–Helmert model · Convex quadratic program
    Full-text · Article · Aug 2015
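The iterative linearization of the nonlinear GHM mentioned in the excerpt above can be sketched for the classical straight-line fit with random errors in both coordinates (unit weights assumed). This is an illustrative reconstruction of the standard GHM iteration, not code from any of the cited papers:

```python
import numpy as np

def ghm_line_fit(x, y, iters=25):
    """Fit y = a*x + b with errors in both x and y by iterating a
    linearized Gauss-Helmert model (unit weights; illustrative sketch)."""
    x = np.asarray(x, float); y = np.asarray(y, float)
    a, b = np.polyfit(x, y, 1)            # initial values from ordinary LS
    ex = np.zeros_like(x); ey = np.zeros_like(y)
    for _ in range(iters):
        # condition equations f_i = (y_i+ey_i) - a*(x_i+ex_i) - b = 0,
        # linearized as B e + A dxi + w = 0 with rows
        #   B_i = [-a, 1],  A_i = [-(x_i+ex_i), -1]
        A = np.column_stack([-(x + ex), -np.ones_like(x)])
        # misclosure w = f(l+e) - B e; the e-terms cancel here
        w = y - a * x - b
        m = a * a + 1.0                    # B P^-1 B^T = m * I for unit weights
        # parameter update from the weighted normal equations
        dxi = -np.linalg.solve(A.T @ A / m, A.T @ w / m)
        # residuals from e = -P^-1 B^T M^-1 (A dxi + w)
        lam = (A @ dxi + w) / m
        ex, ey = a * lam, -lam
        a += dxi[0]; b += dxi[1]
        if np.abs(dxi).max() < 1e-12:
            break
    return a, b, ex, ey

a, b, ex, ey = ghm_line_fit([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

For error-free collinear data the iteration terminates immediately with zero residuals; for noisy data it converges to the TLS (orthogonal-fit) line.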
    • "Certainly, approximate solutions can in many cases be obtained from preliminary solutions of the system of equations (for instance selecting a number of equations). Still, this is possible only in the case of relatively simple equations, such as observations of distances or angles, usually in geodetic applications, or in the cases of iterative, converging solutions (Schaffrin and Wieser 2011). On the contrary, in the cases of highly non-linear, redundant systems of equations met in various geophysical problems, preliminary or iterative solutions may lead to local solutions (local minima) very different from the real (global) solution (see figure thirteen in Saltogianni and Stiros 2012a) and hence the conditions of linearization are not met. "
    ABSTRACT: The Topological Inversion algorithm TOPINV (or TGS, Topological Grid Search), initially developed for the inversion of highly non-linear redundant systems of equations, can solve a wide range of underdetermined systems of non-linear equations. This approach is a generalization of a previous conclusion that the algorithm can be used for the solution of certain integer ambiguity problems in Geodesy. The overall approach is based on additional (a priori) information for the unknown variables. In the past, such information was used either to linearize equations around approximate solutions, or to expand systems of observation equations solved on the basis of generalized inverses. In the proposed algorithm, the a priori additional information is used in a third way, as topological constraints on the n unknown variables, leading to a grid in R^n containing an approximation of the real solution. The TOPINV algorithm does not focus on point-solutions, but exploits the structural and topological constraints in each system of underdetermined equations in order to identify an optimal closed space in R^n containing the real solution. The centre of gravity of the grid points defining this space corresponds to a global, minimum-norm solution. The rationale and validity of the overall approach are demonstrated on the basis of examples and case studies, including fault modelling, in comparison with SVD solutions and true (reference) values, in an accuracy-oriented approach.
    Full-text · Article · Mar 2014
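The grid-search idea described in the abstract can be sketched as follows. The function name, the misfit threshold `tol`, and the uniform grid construction are illustrative assumptions, not the published TOPINV implementation:

```python
import numpy as np
from itertools import product

def topinv_sketch(equations, bounds, steps, tol):
    """TOPINV-style search sketch: scan a grid in R^n defined by a-priori
    bounds, keep every grid point whose misfit to all equations is below
    tol, and return the centre of gravity of the accepted set."""
    axes = [np.linspace(lo, hi, steps) for lo, hi in bounds]
    accepted = []
    for point in product(*axes):
        p = np.array(point)
        if all(abs(f(p)) <= tol for f in equations):
            accepted.append(p)
    if not accepted:
        return None            # a-priori bounds or tol too tight
    return np.mean(accepted, axis=0)

# toy system with known solution (2, 1)
eqs = [lambda p: p[0] + p[1] - 3.0,
       lambda p: p[0] - p[1] - 1.0]
est = topinv_sketch(eqs, bounds=[(0.0, 4.0), (0.0, 4.0)], steps=41, tol=0.05)
```

Unlike gradient-based iteration, this scan needs no linearization and cannot be trapped by a local minimum inside the searched box, at the cost of exponential growth of the grid with n.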
    • "Wieser (2008), Schaffrin and Felus (2009), Schaffrin and Wieser (2009), Schaffrin and Wieser (2011), Tong et al. (2011) and Shen et al. (2011) "
    ABSTRACT: In an earlier work, a simple and flexible formulation for the weighted total least squares (WTLS) problem was presented. The formulation allows one to directly apply the existing body of knowledge of least squares theory to errors-in-variables (EIV) models in which the complete covariance matrices of the observation vector and of the design matrix can be employed. This contribution applies one of the well-known theories, least-squares variance component estimation (LS-VCE), to the total least squares problem. LS-VCE is adopted to cope with the estimation of different variance components in an EIV model having a general covariance matrix obtained from the (fully populated) covariance matrices of the functionally independent variables and a proper application of the error propagation law. Two empirical examples using real and simulated data are presented to illustrate the theory. The first example is a linear regression model and the second example is a 2-D affine transformation. For each application, two variance components, one for the observation vector and one for the coefficient matrix, are simultaneously estimated. Because the formulation is based on standard least squares theory, the covariance matrix of the estimates in general, and the precision of the estimates in particular, can also be presented.
    Full-text · Article · Nov 2013
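A single LS-VCE step for estimating the variance components in Qy = Σ_k σ_k Q_k of a linear model y = Aξ + e can be sketched as below. The formulas follow the standard LS-VCE normal equations; all names are illustrative and the sketch omits the EIV-specific covariance propagation treated in the paper:

```python
import numpy as np

def lsvce_step(A, y, Qs, sigma0):
    """One LS-VCE iteration for the components sigma_k in
    Qy = sum_k sigma_k * Q_k, model y = A xi + e (illustrative sketch)."""
    Qy = sum(s * Q for s, Q in zip(sigma0, Qs))
    Qyi = np.linalg.inv(Qy)
    # BLUE of xi and the oblique projector onto the residual space
    Nxi = A.T @ Qyi @ A
    P = np.eye(len(y)) - A @ np.linalg.solve(Nxi, A.T @ Qyi)
    e = P @ y                       # predicted residual vector
    W = Qyi @ P                     # Qy^-1 P (symmetric)
    k = len(Qs)
    N = np.empty((k, k)); r = np.empty(k)
    for i in range(k):
        r[i] = 0.5 * e @ Qyi @ Qs[i] @ Qyi @ e
        for j in range(k):
            N[i, j] = 0.5 * np.trace(W @ Qs[i] @ W @ Qs[j])
    return np.linalg.solve(N, r)    # updated variance components

# two disjoint observation groups, one component each
A = np.ones((4, 1))
y = np.array([1.0, -1.0, 0.0, 0.0])
Q1 = np.diag([1.0, 1.0, 0.0, 0.0])
Q2 = np.diag([0.0, 0.0, 1.0, 1.0])
sig = lsvce_step(A, y, [Q1, Q2], [1.0, 1.0])
```

In practice the step is repeated with the updated components until convergence; note that, being unbiased, LS-VCE can legitimately return a negative component estimate, as it does for the second group here.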