Article

Minimum mean squared error (MSE) adjustment and the optimal Tykhonov-Phillips regularization parameter via reproducing best invariant quadratic uniformly unbiased estimates (repro-BIQUUE)

Author: B. Schaffrin

Abstract

In a linear Gauss–Markov model, the parameter estimates from BLUUE (Best Linear Uniformly Unbiased Estimate) are not robust against possible outliers in the observations. Moreover, by giving up the unbiasedness constraint, the mean squared error (MSE) risk may be further reduced, in particular when the problem is ill-posed. In this paper, the α-weighted S-homBLE (Best homogeneously Linear Estimate) is derived via formulas originally used for variance component estimation on the basis of the repro-BIQUUE (reproducing Best Invariant Quadratic Uniformly Unbiased Estimate) principle in a model with stochastic prior information. In the present model, however, such prior information is not included, which allows the comparison of the stochastic approach (α-weighted S-homBLE) with the well-established algebraic approach of Tykhonov–Phillips regularization, also known as R-HAPS (Hybrid APproximation Solution), whenever the inverse of the “substitute matrix” S exists and is chosen as the R matrix that defines the relative impact of the regularizing term on the final result.
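For orientation, the estimator being compared can be written compactly in standard Gauss–Markov notation (a sketch using conventional symbols, not necessarily the paper's exact notation): with observations y = Aξ + e, e ~ (0, σ²P⁻¹), the Tykhonov–Phillips (R-HAPS) solution and its MSE matrix read

```latex
\hat{\xi}_\alpha = (A^{\top} P A + \alpha R)^{-1} A^{\top} P y ,
\qquad R = S^{-1},
% The MSE matrix splits into a dispersion part and a squared-bias part:
\mathrm{MSE}\{\hat{\xi}_\alpha\}
  = \sigma^{2} N_\alpha^{-1} A^{\top} P A \, N_\alpha^{-1}
  + \alpha^{2} N_\alpha^{-1} S^{-1} \xi \xi^{\top} S^{-1} N_\alpha^{-1},
\qquad N_\alpha = A^{\top} P A + \alpha S^{-1}.
```

Minimizing the trace of this MSE matrix over α is what "optimal" means in the title, while the repro-BIQUUE principle supplies the variance-ratio interpretation of α mentioned in the presentation abstract further below.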


... In turn, Raus and Hämarik (2007) used the quasioptimality principle [40], while Brezinski et al. (2009) demonstrated an approach concerning error estimation based on extrapolation [41]. Schaffrin (2008) introduced an optimal Tikhonov RP modifying best invariant quadratic uniformly unbiased estimates [42]. Also, Hochstenbach et al. (2015) prepared a method when no information about the error norm in the data is available [43]. ...
Article
The double-parameter iterative Tikhonov regularization is assessed for strengthening the single-epoch model-based precise GNSS positioning within the framework of cognitive meanings. The simultaneous iterative Tikhonov regularization of the least-squares (LS) estimators of the parameters of interest in the single-epoch GNSS model is analyzed to enhance their accuracy properties. Regularization parameters (RP) are collected in the regularization operator, which can play a standardizing role in the regularization principle. Thus, the double-parameter approach can consider the heteroscedasticity of the LS estimators of interest in the regularization principle. This approach employs the quality-based mean squared error (mse) matrix trace minimization criterion to select optimal double-RP values simultaneously. Then, the unconstrained LS estimation is iteratively regularized, obtaining regularized float solutions. The numerical tests indicate that the double-parameter approach successfully strengthens the single-epoch GNSS measurement models due to considering the heteroscedasticity of the LS estimators of interest in the regularization principle. The regularized variance-covariance (vc) matrix describes float solutions of improved precision properties at the cost of losing the LS estimator's unbiasedness. The accuracy is thus superior in the sense of mse. Therefore, the regularized LS estimator is well-scaled but with a biased localization. It also provides a more peaked probability density function (PDF) of real-valued ambiguities, obtaining a lower failure rate (FR) of integer least-squares (ILS) ambiguity resolution (AR). In this way, the regularized ILS estimator performs well in the ambiguity domain in the presence of regularized bias.
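A minimal numpy sketch of the selection criterion just described, with the LS solution x_ls standing in for the unknown true parameters (a common surrogate when minimizing tr(MSE)); the split of the parameters into two blocks of sizes k and n−k is an assumption for illustration, not the authors' implementation:

```python
import numpy as np

def mse_trace(N, x_ls, sigma2, lam1, lam2, k):
    """Trace of the (approximate) MSE matrix of the doubly regularized
    solution; the unknown true parameters are replaced by x_ls."""
    n = N.shape[0]
    R = np.diag(np.r_[np.full(k, lam1), np.full(n - k, lam2)])
    M = np.linalg.inv(N + R)
    var = sigma2 * np.trace(M @ N @ M)   # dispersion part
    b = M @ (R @ x_ls)                   # approximate bias vector
    return var + b @ b                   # tr(MSE) = variance + squared bias

def best_double_rp(N, x_ls, sigma2, k, grid):
    """Grid search for the RP pair (lam1, lam2) minimizing tr(MSE)."""
    return min(((mse_trace(N, x_ls, sigma2, l1, l2, k), l1, l2)
                for l1 in grid for l2 in grid))[1:]
```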
... In accordance with B. Schaffrin (2008), consider the following two identities: ...
... Residual vector(s) and residual matrix: In analogy to the formulas derived by B. Schaffrin (2008) and in view of the treatment of the EIV-Model with prior information on the parameters by B. Schaffrin (2009), consider the following two adapted identities: ...
Presentation
Full-text available
Abstract: A relatively new approach to determine the Tykhonov regularization parameter within a Gauss-Markov Model is based on the repro-BIQUUE principle (reproducing Best Invariant Quadratic Uniformly Unbiased Estimation) and the idea that this parameter resembles a variance ratio. In this contribution, a similar algorithm will be developed for the more challenging Errors-In-Variables Model (EIV-Model), where the standard Total Least-Squares (TLS) adjustment is replaced by a form of regularized TLS estimation of Tykhonov type. An attempt will be made to further allow different variance components to describe the relative variance of input and output data within the EIV-Model.
... However, other methods can also be considered, e.g. generalized cross-validation [36], L-curve [18], the quasi-optimality principle [37], reproducing best invariant quadratic uniformly unbiased estimates (repro-BIQUUE) [38], or error estimation based on extrapolation [39]. The L-curve method is also mathematically attractive because it balances the norms of model parameters and observation residuals. ...
Article
Full-text available
The Modified Ambiguity Function Approach (MAFA) implicitly conducts the search procedure of carrier phase GNSS integer ambiguity resolution (IAR) in the coordinate domain using the integer least squares (ILS) principle, i.e. MAFA-ILS. One of the still open scientific problems is an accurate definition of the search region, especially in the context of instantaneous IAR. In doing so, the float solution results, which encompass the float position (FP) and its variance-covariance (VC) matrix, must be improved, as these are necessary for defining the search region. For this reason, the ambiguity parameters are separately regularized, and then the baseline parameters are conditioned on the regularized float ambiguities, as sketched below. The conditional-regularized estimation is thus designed, obtaining the regularized FP (RFP) and its VC-matrix. This solution is promising because its accuracy is enhanced in the sense of mean squared error (MSE) thanks to the improved precision at the cost of regularized bias. The optimal regularization parameter (RP) values obtained for the ambiguity parameters balance the contributions of improved precision and bias in the regularized float baseline solution's MSE. Therefore, the regularized search region is defined accurately in the coordinate domain to contain such approximate coordinates that more frequently give the correct ILS solution. It also contains fewer MAFA-ILS candidates, improving the search procedure's numerical efficiency. The regularized ILS estimator performs well in the presence of bias, increasing the probability of correct IAR in the coordinate domain.
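The conditional-regularized step can be illustrated as follows (a hedged sketch combining regularization of the ambiguity block with the standard conditioning formula; function and variable names are hypothetical, not the authors' algorithm):

```python
import numpy as np

def conditional_regularized_float(a_hat, b_hat, Q_aa, Q_ba, R):
    """Regularize the float ambiguities a_hat, then condition the float
    baseline b_hat on them via the textbook conditional-adjustment rule."""
    N_aa = np.linalg.inv(Q_aa)                          # ambiguity normal matrix
    a_reg = np.linalg.solve(N_aa + R, N_aa @ a_hat)     # regularized ambiguities
    # baseline conditioned on the regularized ambiguities:
    b_cond = b_hat - Q_ba @ np.linalg.solve(Q_aa, a_hat - a_reg)
    return a_reg, b_cond
```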
... On the other hand, an excessive increase in the RP can cause undesirable bias in the estimated vector of unknown parameters. Many methods to determine the RP have been developed [35,36,37,38]. Hochstenbach et al. proposed in [39] an interesting technique to determine the optimal RP when no knowledge of the norm of error (4) in the data is available. ...
Article
This paper analyses the regularization of an ill-conditioned mathematical model in single-epoch precise GNSS positioning. The regularization parameter (RP) is selected as the parameter that minimizes the criterion of the Mean Squared Error (MSE) function. Crucial for RP estimation is ensuring stable initial least-squares (LS) estimates, so that the unknown quadratic matrix of actual values can be replaced with the LS covariance matrix. For this purpose, two different data models are proposed, and two research scenarios are formed. Two regularized LS estimations are tested against the non-regularized LS approach. The first one is the classic regularization of LS estimation; the second one is its iterative counterpart, sketched after this abstract. For the LS estimator of iterative regularization, the regularized bias is significantly lower, while the overall accuracy is improved in the sense of MSE. The regularized variance-covariance matrix of better precision can mitigate the impact of regularized bias on integer least-squares (ILS) estimation to some extent. Therefore, iterative LS regularization is well-suited for single-epoch integer ambiguity resolution (AR). The performance of the ILS estimator is studied in the context of the probability of correct integer AR in the presence of regularized bias.
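The iterative counterpart is, in spirit, the classical iterated Tikhonov scheme, sketched here under that assumption (not the paper's own code):

```python
import numpy as np

def iterated_tikhonov(N, c, lam, iters=10):
    """Iterated Tikhonov: x_{i+1} = (N + lam*I)^{-1} (c + lam * x_i),
    which progressively reduces the regularization bias.
    Here N = A^T P A and c = A^T P y from the normal equations."""
    x = np.zeros(N.shape[0])
    M = N + lam * np.eye(N.shape[0])
    for _ in range(iters):
        x = np.linalg.solve(M, c + lam * x)
    return x
```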
... In this paper, we have used a modified L-curve method for determining the regularization parameter. In another approach, Schaffrin (2008) has proposed an iterative method for the regularization parameter determination. ...
Article
This contribution presents the Tikhonov regularized weighted total least squares (TRWTLS) solution in an errors-in-variables (EIV) model. Previous attempts solved this problem based on the hybrid approximation solution (HAPS) within a nonlinear Gauss-Helmert model. The present formulation is a generalized form of the classical nonlinear Gauss-Helmert model, formulated in an EIV general mixed observation model. It is a follow-up to previous work, throughout which the WTLS problems were formulated relying on standard least squares (SLS) theory. Two cases, namely the EIV parametric model and the classical nonlinear mixed model, can be considered special cases of the general mixed observation model. These formulations are conceptually simple because they are formulated based on SLS theory, so the existing SLS knowledge can be applied directly to the ill-posed mixed EIV model. Two geodetic applications are then adopted to illustrate the developed theory. As a first case, the 2D affine transformation parameters (six-parameter affine transformation) for ill-scattered data points are solved by the TRWTLS method. Second, the circle fitting problem, as a nonlinear case, is tackled in a nonlinear mixed model not only for well-scattered but also for ill-scattered data points (a toy regularized analogue is sketched below). Finally, all results indicate that Tikhonov regularization provides a stable and reliable solution to an ill-posed WTLS problem, and hence an efficient method applicable to many engineering problems.
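As a toy stand-in for the regularized circle fit (an ordinary Gauss-Newton iteration with Tikhonov damping of the normal matrix, not the full TRWTLS/EIV machinery):

```python
import numpy as np

def circle_fit_tikhonov(x, y, alpha, iters=20):
    """Fit center (a, b) and radius r to points (x, y); alpha*I damps
    the normal matrix, which stabilizes fits to short, ill-scattered arcs."""
    a, b = x.mean(), y.mean()
    r = np.hypot(x - a, y - b).mean()
    for _ in range(iters):
        d = np.hypot(x - a, y - b)                    # point-to-center distances
        J = np.column_stack([-(x - a) / d, -(y - b) / d, -np.ones_like(d)])
        f = d - r                                     # radial residuals
        N = J.T @ J + alpha * np.eye(3)               # damped normal matrix
        da, db, dr = np.linalg.solve(N, -J.T @ f)
        a, b, r = a + da, b + db, r + dr
    return a, b, r
```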
... Typically, the Tikhonov regularization method with a single regularization parameter (Tikhonov 1963; Hoerl and Kennard 1970a, b; Tikhonov and Arsenin 1977; Xu and Rummel 1994b; Koch and Kusche 2002; Schaffrin 2008a) or with multiple regularization parameters (Xu et al. 2006), and the truncated singular value decomposition (Hansen 1990; Xu 1998; Wu et al. 2013) are applied. To determine the regularization parameter, one may use generalized cross-validation (GCV) (Golub et al. 1979; Kent and Mohammadzadeh 2000; Xu 2009), the mean squared error (MSE) criterion (Hoerl and Kennard 1970a, b; Schaffrin 2008b), or the L-curve method (Hansen and O'Leary 1993; Calvetti et al. 2000). Xu and Rummel (1994a) investigated the statistical performance of the estimated regularization parameter. ...
Article
In the geodetic community, an adjustment framework is established by the four components of model choice, parameter estimation, variance component estimation (VCE), and quality control. For linear ill-posed models, parameter estimation and VCE have been extensively investigated. However, at least to the best of our knowledge, nothing is known about the quality control of hypothesis testing in ill-posed models, although it is indispensable and rather important. In this paper, we extend the theory of hypothesis testing to ill-posed models. As Tikhonov regularization is typically applied to stabilize the solution of ill-posed models, the solution and its associated residuals are biased. We first derive the statistics of the overall-test, the w-test and the minimal detectable bias for an ill-posed model. Due to the bias influence, neither the overall-test nor the w-test statistic follows the distributions used in well-posed models. We then develop bias-corrected statistics. As a result, the bias-corrected w-test statistic can sufficiently be approximated by a standard normal distribution, while the bias-corrected overall-test statistic can be approximated by two non-central chi-square distributions. Finally, some numerical experiments with a Fredholm first kind integral equation are carried out to demonstrate the performance of our designed hypothesis testing statistics in an ill-posed model.
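A rough illustration of the distributional effect (standard notation; the paper's bias-corrected statistics are more refined than this sketch): if the regularized residuals ê_α carry a bias b = E{ê_α}, the overall-test statistic becomes approximately non-central,

```latex
T_\alpha = \frac{\hat e_\alpha^{\top} P \, \hat e_\alpha}{\sigma_0^{2}}
\;\approx\; \chi^{2}(r, \lambda_{nc}),
\qquad
\lambda_{nc} = \frac{b^{\top} P \, b}{\sigma_0^{2}},
```

so the central chi-square and standard-normal quantiles of well-posed testing no longer apply until the bias contribution is estimated and removed.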
... With the aim of achieving the highest signal-to-noise ratio of the gravity field solution, an optimal regularization factor should be determined before solving the regularized normal equations. The Mean Square Error (MSE) method (Schaffrin, 2008; Shen et al., 2012; Xu et al., 1994), which minimizes the trace of the MSE matrix, is employed to choose the optimal regularization parameter in this study. The MSE of any regularized solution can be calculated according to ...
Article
Full-text available
Mass transport estimates based on filtered Gravity Recovery and Climate Experiment (GRACE) monthly spherical harmonic gravity field solutions generally suffer from resolution loss and signal attenuation. To develop high‐resolution solutions from GRACE Level‐1B data, this study proposes a new regularization method. Transforming spatial constraints from GRACE‐based filtered mass changes into the spectral domain and imposing them on the spherical harmonics, we resolve high‐resolution gravity field solutions expressed as spherical harmonics instead of mascons. The proposed method greatly enhances the spatial resolution and signal strength of spherical harmonic solutions. Using the presented method, we have produced a time series of high‐resolution (degree 180) spherical harmonic solutions named Tongji‐RegGrace2019, which can be directly used without further smoothing. To evaluate Tongji‐RegGrace2019, we conduct global (trend and annual amplitude) and regional comparisons (groundwater loss signals over India, hydrology signals over river basins, and ice melting signals over Greenland, the Antarctic Peninsula and Patagonia) among various GRACE solutions. Our analyses show that Tongji‐RegGrace2019 agrees well with the Center for Space Research and Jet Propulsion Laboratory mascon solutions in terms of signal power and spatial resolution. Over the selected areas, the correlation coefficients of mass changes between Tongji‐RegGrace2019 and the mascon solutions are at least 82%. Compared to the filtered solution, higher spatial resolution and stronger signal power are achieved by Tongji‐RegGrace2019 and the mascon solutions, which have the potential to retrieve signals at a smaller spatial scale. Over the Patagonia Icefield, the improvement of trend estimates by Tongji‐RegGrace2019 with respect to the filtered solution is about 150%.
... The MSE was used to assess the unbiasedness of the predictor, and the optimal value of the MSE should be approximately zero [29], [42]. The RMSE was used to check the goodness of fit of the prediction, and models with smaller RMSE values are preferred because a low RMSE means that the fitted values are close to the observed values [43]. ...
Article
Full-text available
Visible and near-infrared reflectance (Vis-NIR) spectroscopy can provide low-cost and high-density data for mapping various soil properties. However, a weak correlation between the spectra and measurements of soil heavy metals makes spectroscopy difficult to use in predicting incipient risk areas. In this study, we introduce a new spectral index (SI) based on Vis-NIR spectra and use it as a covariate in ordinary cokriging (OCK) to improve the mapping of soil heavy metals. The SI was defined from the highest covariance between spectra and heavy metal content in the partial least squares regression (PLSR) model (see the sketch below). The proposed mapping approach was compared with an ordinary kriging (OK) predictor that uses only soil heavy metal data and an OCK predictor that uses soil organic matter (SOM) and Fe as covariates. To this end, a total of 100 topsoil (0-20 cm) samples were collected in an agricultural area near Longkou City, and the contents of As, Pb and Zn in the soil were determined. The results showed that OCK with the SI provided better results in terms of unbiasedness and accuracy compared to the other methods. Additionally, we explored the SI through simple strategies based on spectral analysis and correlation statistics and found that the SI synthesized most of the soil properties affected by heavy metals and was not limited to Fe and SOM. In summary, the SI method is cost-effective for improving soil heavy metal mapping and can be applied to other areas.
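The SI construction can be sketched with scikit-learn's PLSR, reading the definition above literally (the function name and the choice n_components=5 are assumptions, not the authors' settings):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def spectral_index(spectra, metal, n_components=5):
    """Fit PLSR between Vis-NIR spectra and measured heavy-metal content,
    then use the first latent X-score, i.e. the projection along the
    direction of highest covariance with the response, as the SI."""
    pls = PLSRegression(n_components=n_components)
    pls.fit(spectra, metal)
    return pls.transform(spectra)[:, 0]  # first X-score per sample
```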
... Methods which give up the unbiasedness to obtain an improved relative accuracy, as expressed by the Mean Squared Prediction Error (MSPE), are popular in various statistical and mathematical applications. These methods, often termed "shrinkage estimators", were developed to solve problems that contain near-linear dependence (multicollinearity) among the predictor variables (e.g., Montgomery and Friedman, 1993; Hoerl and Kennard, 1970), to treat cases with non-normal data, and to obtain an improved MSPE (Schaffrin 1985, 2000a, 2008). However, it has not been until fairly recently, through the work of Schaffrin (1993) and Gotway and Cressie (1993), that shrinkage methods have been extended to geostatistical analysis. ...
Article
Full-text available
This article studies the Optimal Biased Kriging (OBK) approach, which is an alternative geostatistical method that gives up the unbiasedness condition of Ordinary Kriging (OK) to gain an improved Mean Squared Prediction Error (MSPE). The system of equations for the optimal linear biased Kriging predictor is derived and its MSPE is compared with that of Ordinary Kriging. A major impediment in implementing this system of equations and performing Kriging interpolation with massive datasets is the inversion of the spatial coherency matrix. This problem is investigated and a novel method, called "homeogram tapering", which exploits spatial sorting techniques to create sparse matrices for efficient matrix inversion, is described. Finally, as an application, results from experiments performed on a geoid undulation dataset from Korea are presented. A precise geoid is usually the indispensable basis for meaningful hydrological studies over wide areas. These experiments use the theory presented here along with a relatively new spatial coherency measure, called the homeogram, also known as the non-centered covariance function.
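The motivation for tapering can be sketched as follows (a simple linear taper on the coherency matrix, assumed here only for illustration; the paper's homeogram tapering with spatial sorting is more elaborate):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import spsolve

def tapered_kriging_weights(C, c0, dist, taper_range):
    """Zero out spatial coherency beyond taper_range so the system
    matrix becomes sparse and cheap to solve; returns w = C_t^{-1} c0."""
    taper = np.clip(1.0 - dist / taper_range, 0.0, None)  # linear taper (assumption)
    C_sparse = csr_matrix(C * taper)                      # sparse tapered coherency
    return spsolve(C_sparse.tocsc(), c0)
```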
... The regularization parameter in Eqs. (21) and (23) is usually determined through heuristic methods such as the L-curve criterion, generalized cross-validation, maximum likelihood, Morozov's discrepancy principle, the quasi-optimality criterion, and the Cp plot (Mallows 1973; Hansen 1992; Golub and von Matt 1997), including analytical derivation efforts (Cai et al. 2004; Schaffrin 2008). The empirical determination of the regularization parameter through the plot of the weighted residual sum of squares versus the regularization parameter also proves to be sufficient in many problems (Bürgmann et al. 2002; Wright et al. 2003; Aktuğ et al. 2010). ...
Article
Geodetic network design and optimization is a very well-known concept in geodesy. However, in many cases, the available geodetic network configuration with respect to the estimation model is insufficient because of physical and financial limitations. For the case of estimating the datum transformation parameters between two datums, the colocated points are only an unevenly and inhomogeneously distributed subset of the available national/regional networks. Because the transformation parameters are defined with respect to an earth-centered, earth-fixed (ECEF) frame, the very limited geographic coverage of the national/regional networks often leads to a weakly multicollinear estimation problem. Such limited geographical coverage is often coupled with intrinsic geometrical distortions as well as the relatively lower precision of the observations, in particular when transforming a terrestrial network into a space-based network. In such cases, the individual parameters become highly correlated and oversensitive to the network configuration, and the individual transformation parameters cannot be estimated reliably. In this study, the concept of an idealized three-dimensional (3D) regional network geometry is introduced, its inverse cofactor matrix is analytically derived, and a regularized estimation method based on the inverse cofactor matrix of an ideal network distribution is presented to deal with the weakly multicollinear datum transformation problem. The efficiency of the proposed method is shown in three realistically simulated networks. The proposed method outperforms standard least squares in terms of mean square error (MSE) and reduces the correlations among the parameters.
... In the downward continuation, the noise in geopotential differences will be amplified, and a regularization method is needed to reduce the noise amplification. Examples of regularization methods applied to GRACE gravity recovery are those based on variance components [Koch and Kusche, 2002], repro-BIQUUE [Schaffrin, 2008], and updated covariance functions [Han et al., 2005a]. A simplified method is to use Tikhonov regularization [Tikhonov and Arsenin, 1977] with an optimal regularization parameter estimated from the L-curve criterion [Hansen and O'Leary, 1993]. ...
Article
Full-text available
We present a method of directly estimating surface mass anomalies at regional scales using satellite-to-satellite K-band Ranging (KBR) data from the Gravity Recovery and Climate Experiment (GRACE) twin-satellite mission. Geopotential differences based primarily on KBR measurements are derived using a modified energy integral method with an improved method to calibrate accelerometer measurements. Surface mass anomalies are computed based on a downward continuation process, with optimal regularization parameters estimated using the L-curve criterion (sketched below). We derive the covariance functions in both the space and space-time domains and use them as light constraints in the regional gravity estimation process in the Amazon basin study region. The space-time covariance function has a time-correlation distance of 1.27 months, which is evidence that observations in neighboring months are correlated and that the correlation should be taken into account. However, most current GRACE solutions do not consider such temporal correlations. In our study, the artifacts in the regional gravity solution are mitigated by using the covariance functions. The averaged commission errors are estimated to be only 6.86% and 5.85% for the solutions based on the space covariance function (SCF) and the space-time covariance function (STCF), respectively. Our regional gravity solution in the Amazon basin study region, which requires no further post-processing, shows enhanced regional gravity signatures and reduced gravity artifacts, and the gravity solution agrees with NASA/GSFC's GRACE MASCON solution to about 1 cm RMS in terms of water thickness change over the Amazon basin study region. The regional gravity solution also retains the maximum signal energy while suppressing the short-wavelength errors.
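The L-curve criterion used here can be sketched generically: compute Tikhonov solutions on a grid of parameters, plot log residual norm against log solution norm, and pick the corner as the point of maximum discrete curvature (a textbook sketch, not the authors' implementation):

```python
import numpy as np

def l_curve_corner(A, y, lambdas):
    """Return the candidate regularization parameter at the L-curve corner."""
    pts = []
    for lam in lambdas:
        x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
        pts.append((np.log(np.linalg.norm(A @ x - y)),   # residual norm
                    np.log(np.linalg.norm(x))))          # solution norm
    rho, eta = np.array(pts).T
    d1r, d1e = np.gradient(rho), np.gradient(eta)
    d2r, d2e = np.gradient(d1r), np.gradient(d1e)
    kappa = (d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5
    return lambdas[int(np.argmax(kappa))]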
... It is not an easy task, however, depending on a proper choice of R on the one hand and of the optimality criterion on the other. There are many algorithms to determine the regularization parameter, e.g., Tikhonov (1963a, b); Hoerl and Kennard (1970a); Golub et al. (1979); Morozov (1984); Schaffrin (2008); Wang and Xiao (2001) and Wang et al. (2008). Xu et al. (2006b) and Xu (2009) recently extended the MSE criterion and the generalized cross-validation method to solve inverse ill-posed problems by combining different types of data with a number of unknown variance components. ...
Article
Full-text available
A regularized solution is well-known to be biased. Although the biases of the estimated parameters can only be computed with the true values of parameters, we attempt to improve the regularized solution by using the regularized solution itself to replace the true (unknown) parameters for estimating the biases and then removing the computed biases from the regularized solution. We first analyze the theoretical relationship between the regularized solutions with and without the bias correction, derive the analytical conditions under which a bias-corrected regularized solution performs better than the ordinary regularized solution in terms of mean squared errors (MSE) and design the corresponding method to partially correct the biases. We then present two numerical examples to demonstrate the performance of our partially bias-corrected regularization method. The first example is mathematical with a Fredholm integral equation of the first kind. The simulated results show that the partially bias-corrected regularized solution can improve the MSE of the ordinary regularized function by 11%. In the second example, we recover gravity anomalies from simulated gravity gradient observations. In this case, our method produces the mean MSE of 3.71 mGal for the resolved mean gravity anomalies, which is better than that from the regularized solution without bias correction by 5%. The method is also shown to successfully reduce the absolute maximum bias from 13.6 to 6.8 mGal.
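The core bias-correction idea reads, in a minimal numpy sketch (Tikhonov with R = I is assumed here; recall that the paper corrects the bias only partially and derives the conditions under which the correction helps):

```python
import numpy as np

def bias_corrected_tikhonov(N, c, alpha):
    """Estimate the regularization bias, -alpha*(N + alpha*I)^{-1} x_true,
    with the regularized solution replacing the unknown truth, then
    remove the estimated bias. N = A^T P A, c = A^T P y."""
    M_inv = np.linalg.inv(N + alpha * np.eye(N.shape[0]))
    x_reg = M_inv @ c                      # ordinary regularized solution
    bias_est = -alpha * (M_inv @ x_reg)    # x_reg stands in for the true x
    return x_reg - bias_est                # bias-corrected solution
```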
... Due to the small change in satellite-receiver geometry over short observation times and the under-determined observation problem, [2], [5] suggested a regularization method based on biased estimation. The under-determined matrix problem will always occur whenever one considers the bias effects of each random variable in the estimation problem. ...
Conference Paper
Full-text available
In short-timespan processing of GPS observations, the combination of code and carrier-phase observations has the shortcoming that the normal matrix is often ill-conditioned, giving unstable computations. The regularized least-squares (RLS) and the iterative least-squares (ILS) methods are often proposed as alternatives to the conventional least-squares (CLS) method. RLS is claimed to give better reliability of GPS ambiguity solving for short-period observations and to improve the quality of the normal equation, also reducing the condition number of the normal matrix. The regularization parameter is determined by minimizing the trace of the mean square error matrix. However, the regularization induces a biased estimation, and its benefits are difficult to confirm. On the other hand, ILS does not improve the estimated results and their stochastic properties, apart from stabilizing the normal matrix and reducing its condition number; this, however, depends on the given initial estimate vector. In this work we investigate the performance of RLS and ILS as compared to CLS when applied to single-frequency epoch-by-epoch processing with ambiguity solving by the LAMBDA method. Results show that, while the investigated methods do not produce significant differences in terms of estimated baseline precision, improvements are instead observed in the condition number of the normal matrix, with ILS producing the best results when using initial values estimated from CLS. On the other hand, the RLS method fails to improve the condition number for the epoch-by-epoch strategy. All methods also give practically equal reliability for ambiguity resolution, where the evaluation is taken in terms of success rate.
... It has many applications in different applied sciences, including geodesy; see, e.g., Sjöberg (1984c), Sjöberg (1985), Xu et al. (2006) and Schaffrin (2008) and the references therein. This theory has been applied by Fotopoulos (2005), Kiamehr and Eshagh (2008) and Eshagh (2010c) for the error calibration of geoid, orthometric and ellipsoidal heights. ...
Article
In this paper, we analyze the general errors-in-variables (EIV) model, allowing both the uncertain coefficient matrix and the dispersion matrix to be rank-deficient. We derive the weighted total least-squares (WTLS) solution in the general case and find that, under the model consistency condition: (1) if the coefficient matrix is of full column rank, the parameter vector and the residual vector can be uniquely determined independently of the singularity of the dispersion matrix, which naturally extends the Neitzel/Schaffrin rank condition (NSC) of previous work; (2) in the rank-deficient case, the estimable functions and the residual vector can be uniquely determined. As a result, a unified approach for WTLS is provided by using generalized inverse matrices (g-inverses) as a principal tool. This method is unified because it fully considers the generality of the model setup, such as singularity of the dispersion matrix and multicollinearity of the coefficient matrix. It is flexible because it does not require distinguishing different cases before the adjustment. We analyze two examples, including the adjustment of the translation elimination model, where centralized coordinates for the symmetric transformation are applied, and the unified adjustment, where the higher-dimensional transformation model is explicitly compatible with the lower-dimensional transformation problem.
Article
Although the least-squares (LS) adjustment within the Gauss-Markoff model (GMM) and the model with condition equations as its dual problem have been interpreted geometrically, the LS adjustment formulated within the Gauss-Helmert model (GHM) has not yet been merged into this common pattern. We formulate the GHM from the GMM based on a partial orthogonality between their respective coefficient matrices. Then, the LS adjustment within these three models is interpreted geometrically, which implies that the GHM is not the combined model but the intermediate model between the GMM and the model of condition equations. Meanwhile, the case of a singular dispersion matrix is analyzed once more, but now in the most general way, where the restriction of the Neitzel-Schaffrin rank condition is relaxed. The findings of this part can be summarized as: (1) the LS solution is developed within the general GHM, which yields the unique best linear uniformly unbiased estimation if the parameter coefficient matrix is of full rank; (2) we demonstrate that Baarda's S-transformation also holds with the most general model setup; (3) in the general case, we prove that the unique S-homBLUMBE (best homogeneously linear uniformly minimum bias estimation) can be achieved by S^{-1}-norm minimization within the LS solution set; (4) the S-homBLE (best homogeneously linear estimation) type is analyzed for the GHM for the first time, and we find it can function as the intermediate algebraic connection between the LS and the S-homBLUMBE solution.
Article
During downward continuation of airborne gravity, ill-conditioning affects different parameters to different degrees. In order to eliminate or alleviate these effects to an appropriate level, we put forward a new algorithm named regularization by grouping correction. Using the signal-to-noise ratio to assess the ill-conditioning effects, the parameters are grouped. The regularization matrix is constructed following the grouping-correction idea. The regularization parameter is selected by minimizing the mean square error. Using simulated airborne gravity data based on EGM2008 as true values of the gravity field, the effectiveness of the method is verified. Compared with three other methods, the new method has higher accuracy.
Chapter
The linear observation equation is usually expressed as (6.1) y = Aξ + e, where A denotes the non-random design matrix, ξ the vector of unknown parameters, and y the vector of measurements, contaminated by the random error vector e with zero mean and variance-covariance matrix σ²P⁻¹, where P is the weight matrix and σ² the variance of unit weight. If the coefficient matrix A of the observation equation possesses a very large condition number, the observation equation is ill-conditioned, which falls under the ill-posed problems defined by Hadamard (1932). In geodesy, ill-posed problems are frequently encountered in satellite gravimetry due to downward continuation, or in geodetic data processing due to the collinearity among the parameters that are to be estimated. The most useful and necessary adjustment algorithms for data processing are outlined in the second part of this chapter. The adjustment algorithms discussed here include least squares adjustment, sequential application of least squares adjustment via accumulation, sequential least squares adjustment, conditional least squares adjustment, a sequential application of conditional least squares adjustment, block-wise least squares adjustment, a sequential application of block-wise least squares adjustment, a special application of block-wise least squares adjustment for code-phase combination, an equivalent algorithm to form the eliminated observation equation system, and the algorithm to diagonalize the normal equation and equivalent observation equation. A priori constrained adjustment and filtering are discussed for solving rank-deficient problems. After a general discussion of a priori parameter constraints, a special case of the so-called a priori datum method is given. A quasi-stable datum method is also discussed. A summary is given at the end of this part of the chapter.
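For instance, the sequential application of least-squares adjustment via accumulation amounts to summing normal-equation contributions batch by batch (a textbook sketch of one of the algorithms listed above):

```python
import numpy as np

def sequential_ls(batches):
    """Accumulate normal equations over batches (A_i, y_i, P_i) and solve
    once at the end; equivalent to a one-shot LS adjustment of all data."""
    N, c = None, None
    for A, y, P in batches:
        Ni, ci = A.T @ P @ A, A.T @ P @ y
        N = Ni if N is None else N + Ni
        c = ci if c is None else c + ci
    return np.linalg.solve(N, c)
```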
Article
In geodesy and geophysics, many large-scale over-determined linear systems of equations need to be solved, and they are often ill-conditioned. When the conjugate gradient method is used, their ill-conditioning effects on the solutions must be overcome. Here the conjugate gradient method is improved using regularization ideas. Firstly, by constructing an interference source vector, a new equation is derived whose ill-conditioning is diminished greatly and which has the same solution as the original normal equation. Then the new equation is solved by the conjugate gradient method. Finally, the effectiveness of the new method is verified by numerical experiments on the downward continuation of airborne gravity to the Earth's surface. In the numerical experiments, the new method is compared with the LS, CG and Tikhonov methods, and its accuracy is the highest.
Chapter
The optimization problem which appears in treating overdetermined linear systems of equations is a standard topic in any textbook on optimization. Here we consider again a weakly nonlinear system as a problem which allows a Taylor expansion. We start in the first section with a front-page example: an inconsistent linear system with a three-dimensional observation space and a two-dimensional parameter space.
Chapter
The special Gauss-Markov model with datum defect, the stochastic analogue of Minimum Norm Least-Squares, is treated here first by the Best Linear Minimum Bias Estimator (BLUMBE), namely by Theorem 6.3, in the first section. Theorem 6.5 offers the estimation of σ² by Best Invariant Quadratic Uniformly Unbiased Estimation, to be compared with the result of Theorem 6.6, namely the estimation of σ² by Best Invariant Quadratic Estimation. Six numerical models are compared based on practical leveling measurements.
Chapter
Here we follow E. Grafarend, "Variance-covariance-component estimation of HELMERT type in the Gauss-Helmert model", in detail. In the first section we carefully define the Gauss-Helmert model of condition equations with unknown parameters in a linear model. The second section is E. Grafarend's model of variance-covariance-component estimation of HELMERT type within linear models of inhomogeneous condition equations: Bε = By − c, with By not an element of R(A) + c. In contrast, section three is E. Grafarend's model of variance-covariance-component estimation of HELMERT type within the linear model of inhomogeneous condition equations with unknown parameters, namely within the linear model Ax + ε + Bε = By − c, with By not an element of R(A) + c.
Chapter
Condition equations in their linear form are a standard topic in the geodetic sciences. We mention, for example, F.R. Helmert (1907) and H. Wolf (1986).
Chapter
Up to now, we have only considered a "univariate Gauss–Markov model". Its generalization towards a multivariate Gauss–Markov model will be given in Sect. 14.1. At first, we define a multivariate linear model by Definition 14.1, giving its first and second order moments. Its algebraic counterpart via multivariate LESS is the subject of Definition 14.2. Lemma 14.3 characterizes the multivariate LESS solution. Its multivariate Gauss–Markov counterpart is given by Theorem 14.4. In case we have constraints in addition, we define by Definition 14.5 what we mean by a "multivariate Gauss–Markov model with constraints". The complete solution by means of the "multivariate Gauss–Markov model with constraints" is given by Theorem 14.5.
Chapter
Algebraic techniques of Groebner bases and Multipolynomial resultants are presented in this chapter as efficient tools for solving explicitly systems of linear and nonlinear equations. Similar to the Gauss elimination technique applied to linear systems of equations, Groebner bases and Multipolynomial resultants are useful in eliminating several variables in multivariate systems of nonlinear equations in such a manner that the end product results in univariate polynomial equations whose roots are readily determined using existing functions such as the roots command in MATLAB.
Chapter
The bias problem in probabilistic regression has been the subject of Sect. 4-37 for simultaneous determination of first moments as well as second central moments by inhomogeneous multilinear, namely bilinear, estimation. Based on the review of the first author “Variance-covariance component estimation: theoretical results and geodetic application” (Statistical and Decision, Supplement Issue No. 2, pages 401–441, 105 references, Oldenbourg Verlag, München 1989), we collected 5 postulates for simultaneous determination of first and second central moments.
Chapter
We define the fifth problem of probabilistic regression by the inhomogeneous general linear Gauss-Markov model including fixed effects as well as random effects, namely by Aξ + CE{z} + y = E{y}, together with the variance-covariance matrices Σ_z and Σ_y being unknown, as well as ξ, E{z}, and E{y}, y. It is the standard model of Kolmogorov-Wiener prediction in its general form.
Chapter
A special nonlinear problem is the three-dimensional datum transformation solved by the Procrustes algorithm. A definition of the three-dimensional datum transformation with the coupled unknowns of type dilatation (also called scale factor), translation, and rotation follows afterwards.
Book
Here we present a nearly complete treatment of the grand universe of linear and weakly nonlinear regression models within the first 8 chapters. Our point of view is both algebraic and stochastic. For example, there is an equivalence lemma between a best linear uniformly unbiased estimation (BLUUE) in a Gauss-Markov model and a least squares solution (LESS) in a system of linear equations: while BLUUE is a stochastic regression model, LESS is an algebraic solution. In the first six chapters we concentrate on underdetermined and overdetermined linear systems as well as systems with a datum defect. We review estimators/algebraic solutions of type MINOLESS, BLIMBE, BLUMBE, BLUUE, BIQUE, BLE, BIQE and Total Least Squares. The highlight is the simultaneous determination of the first moment and the second central moment of a probability distribution in an inhomogeneous multilinear estimation by the so-called E-D correspondence as well as its Bayes design. In addition, we discuss continuous networks versus discrete networks, the use of Grassmann-Pluecker coordinates, criterion matrices of type Taylor-Karman, as well as FUZZY sets. Chapter seven is a speciality: the treatment of an overdetermined system of nonlinear equations on curved manifolds. The von Mises-Fisher distribution is characteristic for circular or (hyper)spherical data. Our last chapter, eight, is devoted to probabilistic regression, the special Gauss-Markov model with random effects leading to estimators of type BLIP and VIP, including Bayesian estimation. A great part of the work is presented in four appendices. Appendix A is a treatment of tensor algebra, namely linear algebra, matrix algebra and multilinear algebra. Appendix B is devoted to sampling distributions and their use in terms of confidence intervals and confidence regions. Appendix C reviews the elementary notions of statistics, namely random events and stochastic processes. Appendix D introduces the basics of Groebner basis algebra, its careful definition, the Buchberger algorithm, especially the C. F. Gauss combinatorial algorithm.
Article
Full-text available
The satellite gravity gradiometry (SGG) data of the recent European satellite mission, the Gravity field and steady-state Ocean Circulation Explorer (GOCE), can be used as an external source for the quality description of terrestrial gravity anomalies and Earth gravity models (EGMs). In this study, integral estimators are provided and modified in a least-squares sense to regenerate the SGG data of GOCE from terrestrial gravity anomalies and an existing EGM. Based on the differences between the generated and real GOCE SGG data, condition adjustment models are constructed and variance component estimation (VCE) is applied for balancing the a priori errors of the data with these differences. Here, a 1-month orbit of GOCE is considered over Iran, and the condition adjustment models and VCE process are used to calibrate the errors of the GOCE data, the terrestrial gravity anomalies of the area and the EGM. Numerical studies over Iran show that the a priori errors of the GOCE data and the EGM were properly presented. Also, the average error of the terrestrial gravity anomalies, with a resolution of 0.5° × 0.5°, after the condition adjustment and VCE process using Tzz, Txz, Tyz and −Txx − Tyy, is about 30 mGal.
Article
The Earth's gravity field modelling is an ill-posed problem whose solution is sensitive to errors in the data. Satellite gravity gradiometry (SGG) is a space technique for measuring the second-order derivatives of the geopotential for modelling this field, but the measurements should be validated prior to use. The existing terrestrial gravity anomalies and Earth gravity models can be used for this purpose. In this paper, the second-order vertical-horizontal (VH) and horizontal-horizontal (HH) derivatives of the extended Stokes formula in the local north-oriented frame are modified using the biased, unbiased and optimum types of least-squares modification. These modified integral estimators are used to generate the VH and HH gradients at the 250 km level for validation of the SGG data. It is shown that, unlike the integral estimator for generating the second-order radial derivative of the geopotential, the system of equations from which the modification parameters are obtained is unstable for all types of modification with large cap sizes and high degrees, and regularization is strongly required for solving the system. Numerical studies in Fennoscandia show that the SGG data can be estimated with an accuracy of 1 mE using an integral estimator modified by the biased type of least-squares modification. In this case an integration cap size of 2.5° and a degree of modification of 100 for integrating 30′ × 30′ gravity anomalies are required.
Article
The method of generalized cross-validation (GCV) has been widely used to determine the regularization parameter, because the criterion minimizes the average predicted residuals of measured data and depends solely on the data. The data-driven advantage is valid only if the variance-covariance matrix of the data can be represented as the product of a given positive definite matrix and a scalar unknown noise variance. In practice, important geophysical inverse ill-posed problems have often been solved by combining different types of data. The stochastic model of measurements in this case contains a number of different unknown variance components. Although the weighting factors, or equivalently the variance components, have been shown to significantly affect joint inversion results of geophysical ill-posed problems, they have been either assumed to be known or empirically chosen. No solid statistical foundation is available yet to correctly determine the weighting factors of different types of data in joint geophysical inversion. We extend the GCV method to accommodate both the regularization parameter and the variance components. The extended version of GCV essentially consists of two steps, one to estimate the variance components by fixing the regularization parameter and the other to determine the regularization parameter by using the GCV method and by fixing the variance components. We simulate two examples: a purely mathematical integral equation of the first kind, modified from the first example of Phillips (1962), and a typical geophysical example of downward continuation to recover the gravity anomalies on the surface of the Earth from satellite measurements. Based on the two simulated examples, we extensively compare the iterative GCV method with existing methods; the comparisons show that the method works well to correctly recover the unknown variance components and determine the regularization parameter. In other words, our method lets the data speak for themselves, decide the correct weighting factors of different types of geophysical data, and determine the regularization parameter. In addition, we derive an unbiased estimator of the noise variance by correcting the biases of the regularized residuals. A simplified formula to save computation time is also given. The two new estimators of the noise variance are compared with six existing methods through numerical simulations. The simulation results show that the two new estimators perform as well as Wahba's estimator for highly ill-posed problems and outperform the existing methods for moderately ill-posed problems.
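The two-step alternation can be sketched for two data blocks with unknown variance factors (a self-contained numpy sketch; the naive mean-squared-residual variance update is an assumption for brevity, not the paper's unbiased estimator):

```python
import numpy as np

def gcv_two_block(A1, y1, A2, y2, lams, iters=5):
    """Alternate: (i) fix variance factors and pick lambda by GCV on the
    whitened, stacked system; (ii) fix lambda and update the per-block
    variance factors from the residuals."""
    m = A1.shape[1]
    s1 = s2 = 1.0
    lam = lams[0]
    for _ in range(iters):
        A = np.vstack([A1 / np.sqrt(s1), A2 / np.sqrt(s2)])   # whitened design
        y = np.concatenate([y1 / np.sqrt(s1), y2 / np.sqrt(s2)])
        best = (np.inf, lam)
        for cand in lams:                                      # step (i): GCV search
            H = A @ np.linalg.solve(A.T @ A + cand * np.eye(m), A.T)
            r = y - H @ y
            V = len(y) * (r @ r) / np.trace(np.eye(len(y)) - H) ** 2
            if V < best[0]:
                best = (V, cand)
        lam = best[1]
        x = np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ y)
        s1 = np.mean((y1 - A1 @ x) ** 2)                       # step (ii): crude VC update
        s2 = np.mean((y2 - A2 @ x) ** 2)
    return lam, s1, s2
```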
Article
In this contribution a variation of Golub/Hansen/O'Leary's Total Least-Squares (TLS) regularization technique is introduced, based on the Hybrid APproximation Solution (HAPS) within a nonlinear Gauss–Helmert Model. By applying a traditional Lagrange approach to a series of iteratively linearized Gauss–Helmert Models, a new iterative scheme has been found that, in practice, can generate the Tykhonov regularized TLS solution, provided that some care is taken to do the updates properly. The algorithm actually parallels the standard TLS approach as recommended in some of the geodetic literature, but unfortunately all too often in combination with erroneous updates that would still show convergence, although not necessarily to the (unregularized) TLS solution. Here, a key feature is that both standard and regularized TLS solutions result from the same computational framework, unlike the existing algorithms for Tykhonov-type TLS regularization. The new algorithm is then applied to a problem from archeology. There, both the radius and the center-point coordinates of a circle have to be determined, of which only a small part of the arc had been surveyed in-situ, thereby giving rise to an ill-conditioned set of equations. According to the archaeologists involved, this circular arc served as the starting line of a racetrack in the ancient Greek stadium of Corinth, ca. 500 BC. The present study compares previous estimates of the circle parameters with the newly developed "Regularized TLS Solution of Tykhonov type."
Article
The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model, Y − E_Y = (X − E_X) · Ξ, that includes an observation matrix, Y, another observation matrix, X, and matrices of randomly distributed errors, E_Y and E_X. Two special cases of the MTLS approach include the standard multivariate least-squares approach, where only the observation matrix Y is perturbed by random errors, and, on the other hand, the data least-squares approach, where only the coefficient matrix X is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem by using the nonlinear Euler–Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative algorithm, modify it, and compare it with a new 'closed form' solution that is based on the singular-value decomposition. For an application, the total least-squares approach is used to estimate the affine transformation parameters that convert cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing the columns in the coefficient matrix, are investigated. This case study illuminates the issue of "symmetry" in the treatment of two sets of coordinates for identical point fields, a topic that had already been emphasized by Teunissen (1989, Festschrift to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335–342). The differences between the standard least-squares and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion matrix of the estimated parameters.
Book
Full-text available
Contents: Introduction - Inverse problems modeled by integral equations of the first kind: Causation - Parameter estimation in differential equations: Model identification - Mathematical background for inverse problems - Some methodology for inverse problems - An annotated bibliography on inverse problems.
Article
Full-text available
Some results of Groetsch, King and Murio on a general regularized finite element method for Fredholm equations of the first kind are improved in this note. A sufficient condition for weak convergence of the approximations is also given. Taken together, the main results of this paper are exact finite element analogues of classical results on Tikhonov regularization in infinite dimensional spaces.
Article
Full-text available
The use of biased estimation in data analysis and model building is discussed. A review of the theory of ridge regression and its relation to generalized inverse regression is presented along with the results of a simulation experiment and three examples of the use of ridge regression in practice. Comments on variable selection procedures, model validation, and ridge and generalized inverse regression computation procedures are included. The examples studied here show that when the predictor variables are highly correlated, ridge regression produces coefficients which predict and extrapolate better than least squares and is a safe procedure for selecting variables.
Article
Full-text available
Solving discrete ill-posed problems via Tikhonov regularization introduces the problem of determining a regularization parameter. There are several methods available for choosing such a parameter, yet, in general, the uniqueness of this choice is an open question. Two empirical methods for determining a regularization parameter (which appear in the biomedical engineering literature) are the composite residual and smoothing operator and the zero-crossing method. An equivalence is established between the zero-crossing method and a minimum product criterion, which has previously been linked with the L-curve method. Finally, the uniqueness of a choice of regularization parameter is established under certain restrictions on the Fourier coefficients of the data in the ill-posed problem.
Article
Full-text available
The response surface technique called ridge analysis was originally introduced by Hoerl (1959) more than 25 years ago. Despite tremendous advantages over more conventional response surface procedures when more than two independent variables are present, ridge analysis has received little attention in the statistical literature since then, although numerous applications have appeared in engineering journals. This situation may be partially due to the fact that this procedure led to the discovery of ridge regression, which has completely overshadowed ridge analysis in the literature since. This discussion will briefly review the mathematics of ridge analysis, its literature, practical advantages, and relationship to ridge regression.
Article
Full-text available
A regularization method which has been applied very successfully to improperly posed problems is that of Tikhonov. A disadvantage of this technique is the arbitrariness of the choice of a regularizing parameter which influences the smoothness of the solution. In this paper a criterion for the proper choice of an optimal regularizing parameter is proposed. A procedure based on the singular value decomposition technique is applied, which allows the determination of a regularization parameter in a very economical way.
Article
Full-text available
Consider the ridge estimate β̂(λ) for β in the model y = Xβ + ε, with σ² unknown: β̂(λ) = (XᵀX + nλI)⁻¹Xᵀy. We study the method of generalized cross-validation (GCV) for choosing a good value for λ from the data. The estimate is the minimizer of V(λ) given by V(λ) = (1/n)‖(I − A(λ))y‖² / [(1/n)tr(I − A(λ))]², where A(λ) = X(XᵀX + nλI)⁻¹Xᵀ. This estimate is a rotation-invariant version of Allen's PRESS, or ordinary cross-validation. This estimate behaves like a risk-improvement estimator, but does not require an estimate of σ², so it can be used when n − p is small, or even if p ≥ 2n in certain cases. The GCV method can also be used in subset selection and singular value truncation methods for regression, and even to choose from among mixtures of these methods.
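The reconstructed V(λ) above translates directly into code (a plain implementation for one candidate λ, suitable for scanning a grid of values):

```python
import numpy as np

def gcv(X, y, lam):
    """V(lambda) = (1/n)||(I - A)y||^2 / [(1/n) tr(I - A)]^2,
    with A = X (X^T X + n*lam*I)^{-1} X^T."""
    n = len(y)
    A = X @ np.linalg.solve(X.T @ X + n * lam * np.eye(X.shape[1]), X.T)
    r = (np.eye(n) - A) @ y
    return (r @ r / n) / (np.trace(np.eye(n) - A) / n) ** 2
```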
Article
Full-text available
Swindel (1976) introduced a modified ridge regression estimator based on prior information. Sarkar (1992) suggested a new estimator by combining in a particular way the two approaches followed in obtaining the restricted least squares and ordinary ridge regression estimators. In this paper we compare the mean square error matrices of the modified ridge regression estimator based on prior information and the restricted ridge regression estimator introduced by Sarkar (1992). We state a sufficient condition for the mean square error matrix of the modified ridge regression estimator to exceed the mean square error matrix of the restricted ridge regression estimator.
Article
It has long been argued that Minimum Mean Square Error Estimation, although theoretically superior to the least-squares adjustment, is impractical in the absence of any prior information on the unknown parameters. The Empirical BLE therefore applies another estimate from the same dataset, e.g. the BLUUE (Best Linear Uniformly Unbiased Estimate) or the ridge estimate, in order to overcome this problem. Here, we introduce the repro-BLE (Best Linear Estimate with the reproducing property) which - if it exists - belongs to the same class of (nonlinear) estimates, but with the provision that the vector used to form the empirical mean square error risk coincides with the eventual estimate, thus fulfilling the 'reproducing property'. A few elementary examples for the case of direct observations clarify this approach and may help to understand the behavior of repro-BLE in comparison to the more commonly used Empirical BLE, or to the BLUUE that is generated by a (weighted) least-squares adjustment. The more general Gauss-Markov model will be treated in a second part.
Article
Tikhonov’s regularization techniques are widely applied to geophysical and geodetic inverse problems. A single regularization parameter is frequently adopted and needs to be properly chosen in order to obtain a stable solution. In this paper we generalize the ordinary regularization method by introducing more than one regularization parameter, based on consideration of the minimum mean square error of the estimator. It is shown that the new method results in a smaller mean square error of the estimate than the ordinary regularization method, if the regularization parameters are properly selected. As one example of the most important applications of the proposed method, we discuss the problem of the determination of potential fields using satellite observations. As a result of the theory presented here we expect the following questions can be answered in the positive: (1) Have the methods used conventionally and based upon the use of empirical spectra of the potential coefficients, such as Kaula’s rule or modified versions in determination of gravitational models, been adequate in stabilizing the solution in terms of minimum mean square error? (2) Can we solve the unstable problem of determination of potential fields from satellite-tracking data without use of empirical spectra of the potential coefficients? (3) Is it possible to further improve the recently produced potential models, if the proposed method is utilized? Numerical confirmation concerning the size of the improvement will be left to a following contribution, which inevitably requires large scale simulations. The differences of interpreting a geophysical inverse problem between Bayesians and frequentists are also detailed, and the practical implications are especially stressed.
Chapter
It is well known that Tikhonov's regularization method for ill-posed problems has a direct correspondence to certain predictors in the context of random effects models. Hence not only the iteration scheme of King/Chillingworth (1979) or E. Schock (1984) can easily be derived in full analogy to the iterated inhomBLIP (Best inhomogeneously LInear Predictor), but also an apparently new scheme is readily developed following the iterated homBLUP (Best homogeneously Linear weakly Unbiased Predictor) approach by B. Schaffrin (1985), which bears some relation to a robustified Krige-type prediction. The performance of the proposed iteration scheme, with particular regard to its convergence properties, is shown by an example from geodesy.
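For reference, a minimal sketch of iterated Tikhonov regularization, the standard form of the iteration schemes referenced above; the correspondence to the predictor-based schemes is the subject of the chapter and is not reproduced here.

    import numpy as np

    def iterated_tikhonov(A, y, alpha, steps):
        """Iterated Tikhonov: x_{k+1} = x_k + (A^T A + alpha*I)^{-1} A^T (y - A x_k)."""
        n = A.shape[1]
        M = A.T @ A + alpha * np.eye(n)   # regularized normal matrix, reused each step
        x = np.zeros(n)
        for _ in range(steps):
            x = x + np.linalg.solve(M, A.T @ (y - A @ x))
        return x                          # steps = 1 recovers ordinary Tikhonov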
Article
When solving applied problems one can often use certain prior information on the desired solution. Two classical, commonly used methods that allow one to employ additional prior information are the minimax method and the Bayes method. In this paper, one more method for using prior information is proposed, which is a generalization of the maximum likelihood method. This method, named the “generalized maximum likelihood method” (GMLM), enables one to make use of prior information on the solution in stochastic and deterministic forms. The GMLM can be applied to the estimation of solutions of systems of algebraic equations (linear and nonlinear). Additional prior information on the desired vector, beyond the main system of equations, is assumed to be available; it can be given in various forms. The main problem to be solved with the aid of the GMLM is to find optimal estimators of the desired vector taking this additional prior information into account. A great many applied problems are reducible to systems of algebraic equations, among them problems of processing and interpretation of complicated physical experiments. Some of these problems, for example inverse problems of mathematical physics, are ill-posed and therefore unstable even with respect to small errors in the measured quantities.
Article
Hoerl and Kennard (1970) have proposed a method of estimation for multiple regression problems which involves adding small positive quantities to the diagonal of X^T X. They use a type of mean square error to justify the procedure. This paper considers some generalizations of mean square error and provides a stronger defence of the method.
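A minimal sketch of the estimator in question, on an illustrative near-collinear design (the data here are made up for illustration):

    import numpy as np

    def ridge(X, y, k):
        """Ridge estimate beta(k) = (X^T X + k*I)^{-1} X^T y."""
        p = X.shape[1]
        return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 5))
    X[:, 4] = X[:, 3] + 0.01 * rng.normal(size=30)   # near-collinear columns
    y = X @ np.ones(5) + rng.normal(size=30)
    print(ridge(X, y, 0.0))   # k = 0: ordinary least squares
    print(ridge(X, y, 0.1))   # k > 0: smaller variance at the cost of some bias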
Article
A principal objective of this paper is to discuss a class of biased linear estimators employing generalized inverses. A second objective is to establish a unifying perspective. The paper exhibits theoretical properties shared by generalized inverse estimators, ridge estimators, and corresponding nonlinear estimation procedures. From this perspective it becomes clear why all these methods work so well in practical estimation from nonorthogonal data.
Article
This paper is an exposition of the use of ridge regression methods. Two examples from the literature are used as a base. Attention is focused on the RIDGE TRACE which is a two-dimensional graphical procedure for portraying the complex relationships in multifactor data. Recommendations are made for obtaining a better regression equation than that given by ordinary least squares estimation.
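A minimal sketch of how such a ridge trace can be produced (illustrative data; the cited paper works from its own examples): each coefficient of the ridge estimate is plotted against k, and one looks for the smallest k at which the coefficient paths have stabilized.

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 5))
    X[:, 4] = X[:, 3] + 0.01 * rng.normal(size=30)   # induce near-collinearity
    y = X @ np.ones(5) + rng.normal(size=30)

    ks = np.logspace(-4, 2, 100)
    betas = np.array([np.linalg.solve(X.T @ X + k * np.eye(5), X.T @ y) for k in ks])

    plt.semilogx(ks, betas)   # one curve per coefficient
    plt.xlabel("k"); plt.ylabel("ridge coefficients"); plt.title("Ridge trace")
    plt.show()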
Chapter
Contents: Preface. Symbols and Acronyms.
1. Setting the Stage: Problems with Ill-Conditioned Matrices; Ill-Posed and Inverse Problems; Prelude to Regularization; Four Test Problems.
2. Decompositions and Other Tools: The SVD and its Generalizations; Rank-Revealing Decompositions; Transformation to Standard Form; Computation of the SVE.
3. Methods for Rank-Deficient Problems: Numerical Rank; Truncated SVD and GSVD; Truncated Rank-Revealing Decompositions; Truncated Decompositions in Action.
4. Problems with Ill-Determined Rank: Characteristics of Discrete Ill-Posed Problems; Filter Factors; Working with Seminorms; The Resolution Matrix, Bias, and Variance; The Discrete Picard Condition; L-Curve Analysis; Random Test Matrices for Regularization Methods; The Analysis Tools in Action.
5. Direct Regularization Methods: Tikhonov Regularization; The Regularized General Gauss-Markov Linear Model; Truncated SVD and GSVD Again; Algorithms Based on Total Least Squares; Mollifier Methods; Other Direct Methods; Characterization of Regularization Methods; Direct Regularization Methods in Action.
6. Iterative Regularization Methods: Some Practicalities; Classical Stationary Iterative Methods; Regularizing CG Iterations; Convergence Properties of Regularizing CG Iterations; The LSQR Algorithm in Finite Precision; Hybrid Methods; Iterative Regularization Methods in Action.
7. Parameter-Choice Methods: Pragmatic Parameter Choice; The Discrepancy Principle; Methods Based on Error Estimation; Generalized Cross-Validation; The L-Curve Criterion; Parameter-Choice Methods in Action; Experimental Comparisons of the Methods.
8. Regularization Tools.
Bibliography. Index.
Article
A mean squared error criterion is used to compare five estimators of the coefficients in a linear regression model: least squares, principal components, ridge regression, latent root, and a shrunken estimator. Each of the biased estimators is shown to offer improvement in mean squared error over least squares for a wide range of choices of the parameters of the model. The results of a simulation involving all five estimators indicate that the principal components and latent root estimators perform best overall, but the ridge regression estimator has the potential of a smaller mean squared error than either of these.
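A small simulation in the spirit of this comparison, restricted to three of the five estimators (least squares, ridge with a fixed k, and principal components); the design, parameter values, and replication count are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(2)
    n, p, sigma = 40, 6, 1.0
    X = rng.normal(size=(n, p))
    X[:, 5] = X[:, 4] + 0.05 * rng.normal(size=n)   # near-collinearity
    beta = np.ones(p)

    def pcr(X, y, r):
        """Principal-components estimate using the r largest singular values."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return Vt[:r].T @ ((U[:, :r].T @ y) / s[:r])

    mse = {"ls": 0.0, "ridge": 0.0, "pcr": 0.0}
    for _ in range(500):
        y = X @ beta + sigma * rng.normal(size=n)
        b_ls = np.linalg.lstsq(X, y, rcond=None)[0]
        b_rr = np.linalg.solve(X.T @ X + 0.5 * np.eye(p), X.T @ y)
        b_pc = pcr(X, y, p - 1)
        for name, b in [("ls", b_ls), ("ridge", b_rr), ("pcr", b_pc)]:
            mse[name] += np.sum((b - beta) ** 2) / 500
    print(mse)   # empirical MSE of each estimator over the replications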
Article
Abstract not available.
Article
Key features of this updated and thoroughly revised edition: additional material on geophysical/acoustic tomography, and a detailed discussion of the application of inverse theory to tectonic, gravitational and geomagnetic studies.
Article
Hoerl and Kennard introduced a class of biased estimators (ridge estimators) for the parameters in an ill-conditioned linear model. In this paper the ridge estimators are viewed as a subclass of the class of linear transforms of the least squares estimator. An alternative class of estimators, labeled shrunken estimators, is considered. It is shown that these estimators satisfy the admissibility condition proposed by Hoerl and Kennard. In addition, both the ridge estimators and shrunken estimators are derived as minimum norm estimators in the class of linear transforms of the least squares estimator. The former minimizes the Euclidean norm and the latter minimizes the design-dependent norm. The class of estimators which are minimum variance linear transforms of the least squares estimator is obtained, and the members of this class are shown to be stochastically shrunken estimators. An example is computed to show the behavior of the different estimators.
Article
The general form of ridge regression proposed by Hoerl and Kennard is examined in the context of the iterative procedure they suggest for obtaining optimal estimators. It is shown that a non-iterative, closed-form solution is available for this procedure. The solution is found to depend upon certain convergence/divergence conditions which relate to the ordinary least-squares estimators. Numerical examples are given.
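For concreteness, a sketch of the iterative procedure being analyzed, in which k is repeatedly re-estimated from the current ridge coefficients; the paper's actual contribution, the closed-form limit and its convergence/divergence conditions, is not reproduced here.

    import numpy as np

    def iterate_k(X, y, tol=1e-10, max_iter=1000):
        """Iterate k = p*s^2/(b'b), re-estimating b by ridge at the current k."""
        n, p = X.shape
        b = np.linalg.lstsq(X, y, rcond=None)[0]
        s2 = np.sum((y - X @ b) ** 2) / (n - p)   # residual variance estimate
        k = p * s2 / (b @ b)                      # initial (least-squares) value
        for _ in range(max_iter):
            b = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)
            k_new = p * s2 / (b @ b)
            if abs(k_new - k) < tol:
                break
            k = k_new
        return k, b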
Article
The solution of ill-posed problems is non-trivial in the sense that frequently applied methods like least-squares fail. The ill-posedness of the problem is refiected by very small changes in the input data which may result in very large changes in the output data. Hence, some sort of stabilization or regularization is required. Some examples of (geodetic) ill-posed problems are given. Several regularization methods exist to compute stabie solutions, along with several ways of determining the so-called regularization parameter(s). The idea of the regularization methods is discussed as weil as the determination of optimal regularization parameters. Moreover, the different methods are compared, emphasizing the quality or accuracy of the methods. Finally, the differences between methods and parameter choice rules are illuminated by an example from airborne gravimetry.
Article
In this paper we examine the relationship between the general ridge estimator and the standardized ridge estimator.
Article
The Tikhonov regularization method for discrete ill-posed problems is considered. For the practical choice of the regularization parameter alpha, some authors use a plot of the norm of the regularized solution versus the norm of the residual vector for all alpha considered. This paper contains an analysis of the shape of this plot and gives a theoretical justification for choosing the regularization parameter so that it is related to the "L-corner" of the plot considered in the logarithmic scale. Moreover, a new criterion for choosing alpha is introduced (independent of the shape of the plot) which gives a new interpretation of the "corner criterion" mentioned above. The existence of the "L-corner" is discussed.
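A minimal numerical sketch of the L-curve on a toy ill-conditioned problem, with the corner located by a crude discrete-curvature heuristic; this is a generic illustration, not the specific criterion introduced in the paper.

    import numpy as np

    rng = np.random.default_rng(3)
    A = np.vander(np.linspace(0, 1, 40), 12, increasing=True)   # ill-conditioned
    y = A @ rng.normal(size=12) + 1e-3 * rng.normal(size=40)

    alphas = np.logspace(-12, 2, 200)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ y
    eta, rho = [], []                    # log solution norm, log residual norm
    for a in alphas:
        f = s**2 / (s**2 + a)            # Tikhonov filter factors
        x = Vt.T @ (f * beta / s)
        eta.append(np.log(np.linalg.norm(x)))
        rho.append(np.log(np.linalg.norm(A @ x - y)))

    # Crude corner detection: maximize curvature of the discrete (rho, eta) curve.
    rho, eta = np.array(rho), np.array(eta)
    d1r, d1e = np.gradient(rho), np.gradient(eta)
    d2r, d2e = np.gradient(d1r), np.gradient(d1e)
    kappa = (d1r * d2e - d2r * d1e) / ((d1r**2 + d1e**2) ** 1.5 + 1e-30)
    print("corner alpha ~", alphas[np.argmax(np.abs(kappa))])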
Article
The standard gravity field of a level ellipsoid (i.e., the earth) is obtained by study of a harmonic expansion formulated on the surface of some ellipsoid containing the earth and extended to an ellipsoid contained by the earth. The study is based on an inversion of the exterior Dirichlet problem; the Tykhonov regularization of the improperly posed boundary problem provides the solution technique. The solution involves generalized inverse operators of the best approximation, minimum norm best approximation and hybrid approximation solution types. Numerical results are presented.
Article
An algorithm is given for selecting the biasing parameter, k, in RIDGE regression. By means of simulation it is shown that the algorithm has the following properties: (i) it produces an averaged squared error for the regression coefficients that is less than least squares, (ii) the distribution of squared errors for the regression coefficients has a smaller variance than does that for least squares, and (iii) regardless of the signal-to-noise ratio, the probability that RIDGE produces a smaller squared error than least squares is greater than 0.50.
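Property (iii) is easy to probe by simulation. The sketch below uses the familiar one-shot choice k = p*s^2/(b'b) computed from the least-squares fit, which may differ in detail from the algorithm of the paper, and counts how often ridge beats least squares in summed squared coefficient error.

    import numpy as np

    rng = np.random.default_rng(4)
    n, p = 30, 5
    X = rng.normal(size=(n, p))
    beta = rng.normal(size=p)
    wins = 0
    for _ in range(2000):
        y = X @ beta + rng.normal(size=n)
        b_ls = np.linalg.lstsq(X, y, rcond=None)[0]
        s2 = np.sum((y - X @ b_ls) ** 2) / (n - p)
        k = p * s2 / (b_ls @ b_ls)                 # one-shot selection rule
        b_rr = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)
        wins += np.sum((b_rr - beta) ** 2) < np.sum((b_ls - beta) ** 2)
    print("ridge wins fraction:", wins / 2000)     # expected to exceed 0.50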
Article
In this paper we propose a cubically convergent algorithm for determining the regularization parameter. Our basic tools are Tikhonov regularization and Morozov's and the damped Morozov's discrepancy principles. Numerical experiments on integral equations of the first kind are presented to compare the efficiency of the proposed algorithms.
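For orientation, a minimal sketch of the plain (non-damped, and only slowly convergent) Morozov discrepancy principle: alpha is chosen so that the residual norm matches the noise level delta, exploiting the monotonicity of the residual in alpha. The paper's cubically convergent iteration is not reproduced here.

    import numpy as np

    def discrepancy_alpha(A, y, delta, lo=1e-14, hi=1e6, iters=60):
        """Find alpha with ||A x_alpha - y|| ~ delta by bisection in log(alpha)."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        beta = U.T @ y

        def residual(a):
            x = Vt.T @ (s * beta / (s**2 + a))   # Tikhonov solution for this alpha
            return np.linalg.norm(A @ x - y)

        for _ in range(iters):
            mid = np.sqrt(lo * hi)               # geometric midpoint
            if residual(mid) < delta:
                lo = mid                          # residual too small: increase alpha
            else:
                hi = mid
        return np.sqrt(lo * hi)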
Article
For the determination of gravity field parameters from satellite observations, regularization (with, e.g., Kaula's rule) is widely used, because the derived normal equations are ill-conditioned. The procedure is interpreted as a kind of collocation. Alternatively, it can be viewed as biased estimation. Thus several questions arise concerning the bias of the estimated geopotential fields, the uncertainty about the proper value of Kaula's constant, and the unrealistic accuracy measure of the estimate. These factors may affect its application, depending on the magnitudes of the bias values and the accuracy difference between least-squares (LS) collocation and biased estimation. Two tests are carried out based on the GEM-T1 model. Test A uses the actual GEM-T1 coefficients; because test A is influenced by the biased, underestimated values, a second test B assumes that Kaula's rule reflects the magnitudes of the geopotential coefficients. The results show that the coefficients of lower degrees are well determined if Kaula's rule is applied to degree and order 6 and above. In test A, the bias of each coefficient reaches 20 per cent of the estimated value at degree 19, and more than 30 per cent beyond degree 25. The computation of the mean squared errors of biased estimation indicates that the accuracy measure of LS collocation is very conservative globally. The reason for this is the underestimation of the coefficients. Test B shows that the bias of each coefficient increases to 20 per cent of the estimated value at degree 15 and 30 per cent at degree 19. More than 100 per cent is reached at degree 25. About 4/5 of the total number of coefficients have accuracy measures that are too optimistic if the variance-covariance matrix of LS collocation is used.
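The percentage-bias bookkeeping used in such tests can be sketched as follows: writing the regularized solution as x = (N + R)^{-1} b with normal-equation matrix N and a regularizing weight matrix R, its bias with respect to the true coefficients x_true is -(N + R)^{-1} R x_true. The diagonal R below is a stand-in for a Kaula-type (degree-dependent) weight matrix, and the numbers are purely illustrative.

    import numpy as np

    rng = np.random.default_rng(5)
    A = rng.normal(size=(100, 20))
    x_true = rng.normal(size=20)
    N = A.T @ A                                    # normal-equation matrix
    R = np.diag(np.linspace(1.0, 50.0, 20))        # stand-in Kaula-type weights
    bias = -np.linalg.solve(N + R, R @ x_true)     # bias of the regularized estimate
    print(100 * np.abs(bias / x_true))             # percentage bias per coefficient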
Article
Geophysical and geodetic inverse problems are often ill-posed. They are smoothed to guarantee stable solutions. Geophysical and geodetic applications of smoothness techniques like Tikhonov's regularization method seem to have been limited to one realization of sampling. However, smoothness (or ridge) parameters are data-related but empirically chosen. It is expected that the ridge parameters, and thus the resolutions of models, will be different from one realization of sampling to another. Therefore, the chief motivation of this paper is to investigate large-sample properties of some smoothness (i.e. biased) estimators in terms of mean-square error. Some potentially applicable biased estimators are included in this simulation. The example is the recovery of local gravity fields from gradiometric observables. On the basis of 500 realizations of sampling, we extensively investigate the mean-square error and bias problems, the best and worst performances, and the statistical properties of the ridge parameters. All of the biased estimators indeed improve the least-squares solution, but the sizes of the improvements are quite different. If the iterative ridge estimator is employed, the average root mean-square error of the surface gravity anomalies is much less than 5 mgal.