## No full-text available

To read the full-text of this research, you can request a copy directly from the author.

... Owing to human limitations, imperfect instruments, unfavorable physical conditions and improper measurement routines, which together define the measurement conditions, all measurement results most likely contain errors. To reduce the effect of measurement errors on the final results, one needs to improve the overall conditions of the measurement using least squares adjustment (Fan, 1997). ...

... The least squares method is a classical method that defines the optimal estimate of X (the unknowns) by minimizing the sum of the squared, weighted observation residuals (Fan, 1997). ...

... According to Fan (1997), the linear system is: ...
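As a concrete illustration of solving such a weighted linear system, the numpy sketch below computes the estimate that minimises the sum of squared, weighted residuals; the design matrix, observations and weights are invented for the example, not taken from the cited work.

```python
import numpy as np

# Illustrative linear system l = A x + v with a diagonal weight matrix P.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # design matrix (3 observations, 2 unknowns)
l = np.array([1.02, 1.98, 3.05])    # observation vector
P = np.diag([1.0, 1.0, 4.0])        # weights (inverse variances)

# Weighted least squares estimate: x_hat = (A^T P A)^-1 A^T P l
N = A.T @ P @ A                     # normal matrix
x_hat = np.linalg.solve(N, A.T @ P @ l)
v = l - A @ x_hat                   # residuals; x_hat minimises v^T P v
```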

Today, advanced GPS receivers are improving the accuracy of positioning information, but in critical locations such as urban areas satellite availability is limited, above all due to signal blocking, which degrades the achievable accuracy. For this reason, different measurement methods should be used. The objective of this thesis is to evaluate and compare the precision, accuracy and time expenditure of the total station (TS), the Global Positioning System (GPS) and the terrestrial laser scanner (TLS). Comparing the precision, accuracy and required time of these three measurement methods improves our knowledge of how much precision and accuracy can be achieved, and at what time expense. To investigate this, a reference network consisting of 14 control points was measured five times with a Leica 1201 TS and served as the reference for comparison with RTK and TLS measurements. The reference network points were also measured five times with the GPS RTK method so as to compare its accuracy, precision and time expenditure with those of the TS. In addition, in order to compare the accuracy, precision and time expense of the total station and TLS, the north-eastern façade of the L building at the KTH campus in Stockholm, Sweden was scanned five times with an HDS 2500 scanner using six target points. These six target points were also measured five times with the TS. A comparison was then made to evaluate the quality of the target-point coordinates determined by both measurements. The data were processed in the Cyclone, Geo Professional School and Leica Geo Office software. According to the results obtained, the reference network points measured with the TS were determined with 1 mm precision in both horizontal and vertical coordinates. When using the RTK method on the same reference network points, accuracies of 9 mm in the horizontal and 1.5 cm in the vertical coordinates were achieved.
The RTK measurements, repeated five times, were determined with a maximum standard deviation of 8 mm (point I) for the horizontal and 1.5 cm (point A) for the vertical coordinates. The precision of the remaining control points is below these levels. The coordinates of the six target points measured with the TS on the L building façade were determined with a standard deviation of 8 mm for the horizontal and 4 mm for the vertical coordinates. When using TLS on the same target points, an accuracy of 2 mm was achieved for both horizontal and vertical coordinates. The TLS measurements, repeated five times, were determined with a maximum standard deviation of 1.6 cm (point WM3) for the horizontal and 1.2 cm (point BW11) for the vertical coordinates. The precision of the remaining control points is below these levels. With regard to time expenditure, the total station consumed more time than the other two methods: the TS took 82 min more than RTK, while the TS and TLS took similar times (38 min and 32 min, respectively).

... For instance, if two points are selected on each control line, at least four lines are needed for the calculation. If ij > 8, the parameters of the rigorous LBTM can be calculated based on least squares adjustment [40]. ...

... Error equations such as Equation (9) can be constructed for each GCP and each point on the control lines. They can be further normalized and solved on the basis of least squares adjustment [40] through overall iteration. In contrast to existing LBTMs, it is important to select more than two points on each control line to strengthen the control network of the overall adjustment in the solution process of the rigorous LBTM. ...

... The flow chart of image rectification based on rigorous LBTM is shown in Figure 3. ...

High-precision geometric rectification of High Resolution Satellite Imagery (HRSI) is the basis of digital mapping and Three-Dimensional (3D) modeling. By taking advantage of line features as basic geometric control conditions instead of control points, the Line-Based Transformation Model (LBTM) provides a practical and efficient way of image rectification. It can accurately build the mathematical relationship between image space and the corresponding object space, while dramatically reducing the workload of ground control and feature recognition. Based on a generalization and analysis of existing LBTMs, a novel rigorous LBTM is proposed in this paper, which can further eliminate the geometric deformation caused by sensor inclination and terrain variation. This improved nonlinear LBTM is constructed on a generalized point strategy and resolved by least squares overall adjustment. Geo-positioning accuracy experiments with IKONOS, GeoEye-1 and ZiYuan-3 satellite imagery are performed to compare the rigorous LBTM with other relevant line-based and point-based transformation models. Both theoretical analysis and experimental results demonstrate that the rigorous LBTM is more accurate and reliable without adding extra ground control. The geo-positioning accuracy of satellite imagery rectified by the rigorous LBTM can reach about one pixel with eight control lines and can be further improved by optimizing the horizontal and vertical distribution of the control lines.

... In this study, we employ least-squares adjustment of a non-constrained 2-dimensional triangulation network (free-network) using indirect observations method [16,17]. Specifically, the observed angles, distances and approximate coordinates are precisely optimised in a rigorous least-squares way. ...

... A geodetic network is called a free network when it lacks essential information such as the position, orientation and scale of the network and the datum parameters of the coordinate reference system [14,17]. The measurements of the baselines are based on an observational process, either during triangulation or trilateration [14]. ...

... The free geodetic network is solved by normal least-squares equations formed from a number of observation equations [17] ...

We utilise minimum-norm least squares based on the indirect observations method to adjust our 2-dimensional triangulation network. The main objective of this paper is to optimally adjust the approximate coordinates of the nodes (points) of the given network. The network observations (11 measured distances and 17 angles) have been combined in a linear system of equations and adjusted following a free-network adjustment procedure to rigorously adjust the approximate coordinates of the network points. We obtained better converged values by applying an iterative procedure; the minimum corrections for the free-network coordinates were obtained after five iterations. The data snooping procedure has been used to test the reliability and precision of the network observations. The t-test criterion was then applied for gross error detection; five angles and two lines are suspected to include gross errors at a critical value of 1.98.
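The minimum-norm least-squares idea used above can be sketched on a toy rank-deficient system (not the authors' actual triangulation network); the Moore-Penrose pseudoinverse picks, among all least-squares solutions, the one with the smallest coordinate corrections.

```python
import numpy as np

# Toy rank-deficient system: only the difference of the two unknowns is
# observed, so the normal matrix is singular (a 1-D datum defect), as in
# a free network lacking datum information.
A = np.array([[1.0, -1.0],
              [1.0, -1.0]])
l = np.array([0.98, 1.02])

# Minimum-norm least-squares solution via the Moore-Penrose pseudoinverse:
# among all x minimising ||A x - l||, pick the one with smallest ||x||.
x_hat = np.linalg.pinv(A) @ l
```

Here both observations constrain only x1 - x2, so ordinary normal equations have no unique solution; the pseudoinverse distributes the adjusted difference symmetrically.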

... There are two methods for introducing Eq. (9) into the adjustment model. The former introduces the planarity constraint as a true value to attain "adjustment by elements with constraints" (Fan, 2005), while the latter introduces the planarity constraint as weighted observations to accomplish "adjustment by elements with pseudo observations" (Fan, 2005). ...

... Actual accuracy is calculated by the root mean square (RMS) error using the check points, while theoretical accuracy is expressed by the diagonal elements of the inverse of the normal matrix. The expression of theoretical accuracy is (Fan, 2005): ...
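The two accuracy measures described in the excerpt can be sketched as follows; the check-point discrepancies and design matrix are synthetic, chosen only to show the computation.

```python
import numpy as np

# Actual accuracy: RMS error over check points (synthetic discrepancies).
diff = np.array([0.3, -0.5, 0.4, -0.2])      # check-point discrepancies
rms = np.sqrt(np.mean(diff ** 2))

# Theoretical accuracy: standard deviations of the unknowns from the
# diagonal of the inverse normal matrix, scaled by sigma_0.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
N = A.T @ A                                   # normal matrix (unit weights)
sigma0 = 1.0                                  # a priori std of unit weight
theoretical_std = sigma0 * np.sqrt(np.diag(np.linalg.inv(N)))
```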

In the digital conservation of the Dunhuang wall painting, bundle adjustment is a critical step in precise orthoimage generation. The error propagation of the adjustment model is accelerated because the near-planar photographic object intensifies correlation of the exterior orientation parameters and the less than 60% forward overlap of adjacent images weakens the geometric connection of the network. According to the photographic structure adopted in this paper, strong correlation of the exterior orientation parameters can be verified theoretically. In practice, the additional constraints of near-planarity and exterior orientation parameters are combined with bundle adjustment to control the error propagation. The positive effects of the additional constraints are verified by experiments, which show that the introduction of weighted observation equations into bundle adjustment contributes a great deal to the theoretical and actual accuracies of the unknowns as well as the stability of the adjustment model.

... That is, the acquired values must be compared to a baseline set of values (true values) in order to establish the accuracy of the laser scanner. These types of errors can be categorised, according to Fan (2010), as: mistakes; ...

... Another method is comparison of results, which will highlight any discrepancies. According to Fan (2010), errors can be categorised into three sources. A mistake, also known as a blunder, can arise from misuse of an instrument, incorrect measuring or computational error. ...

Laser scanning is a 21st-century surveying technique used to generate high-density 3D point cloud data for surveying, mapping and monitoring purposes, for example rock mass movement in mining. For high-precision and accurate work, the instrument must be used correctly and with regular calibration and checks to ensure that it consistently performs according to expectations and manufacturer specifications. The aim of this research was to develop a short-range scanning laboratory for testing the accuracy of terrestrial laser scanning systems for rock engineering applications. This research was based on methods used by previous researchers in testing the accuracy of the instrument and the development of a suitable facility for such testing. The procedure used in developing the short-range testing facility included selecting a venue whose size and shape suit the requirements of a short-range laser scanning laboratory. This was followed by the construction of the master control beacon and the creation of additional scan set-ups in order to capture all the points in the facility. Targets were strategically placed on the wall and roof of the laboratory in order to determine the centre-point coordinates of each target. A Leica Total Station TCR 1201+ and a Trimble S6 Total Station were used to establish accurate coordinates for the control beacon and the targets, respectively. Thereafter, the targets were scanned using a FARO Focus XD 130 terrestrial laser scanner. Comparisons were performed using the coordinates from the terrestrial laser scanning and those of the total station to examine the point accuracy of the scans. The comparison between the scanner coordinates and the total station coordinates showed that the FARO Focus laser scanner mostly performed within manufacturer specifications, though not always.
This implies the instrument is capable of generating accurate and reliable point cloud data that can be used for monitoring underground rock mass movements. Errors regarded as mistakes in the final analysis occurred as a result of target design (in terms of size and orientation) and oblique lines of sight.

... The measurement of the values of X is accompanied by inaccuracies which, by their way of expression, are absolute or relative [2], [3], [4], [5], [6]. The absolute inaccuracy ∆X is expressed in the unit of the measured variable. ...

... By their character of change the inaccuracies are random or systematic [2], [3], [4], [5], [6]. The nature and physical meaning of the random and systematic inaccuracies of a measurement are different. ...

Let an indirectly measurable variable Y be represented as a function of a finite number of directly measurable variables X1, X2, ..., Xn. We introduce maximum absolute and relative inaccuracies of second order of Y; this idea continues our research on a new principle for representing the maximum inaccuracies of Y using the inaccuracies of X1, X2, ..., Xn. Using inaccuracies of second order, we determine the maximum inaccuracies of the indirectly measurable variable Y with a quadratic approximation, which gives more precise values. We give an algorithmic, easily applicable method for determining their numerical values. The maximum inaccuracies of second order defined here provide the opportunity for a more precise determination of the inaccuracy when measuring indirectly measurable variables.
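A minimal numeric sketch of the idea, assuming an illustrative function Y = x1 * x2^2 (the function and inaccuracy values are invented, not from the paper): the first-order estimate uses absolute first derivatives, and the second-order estimate adds the quadratic (Hessian) terms of the Taylor expansion.

```python
import numpy as np

# Illustrative indirectly measured variable: Y = f(x1, x2) = x1 * x2**2.
def f(x1, x2):
    return x1 * x2 ** 2

x1, x2 = 2.0, 3.0          # measured values
dx1, dx2 = 0.01, 0.02      # maximum absolute inaccuracies of the inputs

# First-order (linear) maximum absolute inaccuracy: sum of |df/dxi| * dxi.
# Here df/dx1 = x2**2 and df/dx2 = 2*x1*x2.
d1 = abs(x2 ** 2) * dx1 + abs(2 * x1 * x2) * dx2

# Second-order refinement: add (1/2) * sum_ij |d2f/dxi dxj| * dxi * dxj.
# Non-zero Hessian entries: d2f/dx1dx2 = 2*x2 (twice, by symmetry) and
# d2f/dx2dx2 = 2*x1.
d2 = d1 + 0.5 * (2 * abs(2 * x2) * dx1 * dx2 + abs(2 * x1) * dx2 ** 2)
```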

... The term filtration came to geodesy from electronics, where it means extracting the necessary information from a signal distorted by a certain type of noise [1]. Modern geodetic technologies belong to the class of continuous measuring devices; examples of such technologies are global navigation satellite systems [2] and automated geodetic deformation monitoring systems [3]. ...

... The vector which contains these parameters is called the state vector of the dynamic system. The linear model of a dynamic system in the KF can be described by the following equations [1]: ...

During geodetic monitoring with GNSS technology, one of the important steps is the correct processing and analysis of the measured displacements. We used a Kalman filter smoothing algorithm, which allows evaluating not only displacements but also the speed, acceleration and other characteristics of the deformation model. One of the important issues is the calculation of the observation weight matrix in the Kalman filter. The recursive Kalman filtering algorithm can calculate and refine the weights during processing. However, the weights obtained in this way do not always correspond exactly to the actual observation accuracy. We established the observation weights based on the accuracy of the baseline measurements. In the presented study, we proposed and investigated different models for establishing the accuracy of the baselines. The proposed models and the processing of the measured displacements were tested on an experimental geodetic GNSS network. The results show that, despite the different weight models, changing the weights by up to a factor of 2 does not change the Kalman filtering accuracy substantially. No significant improvements in Kalman filtering accuracy were obtained for baselines shorter than 10 km. Therefore, for typical GNSS monitoring networks with baselines in the 10-15 km range, we recommend using any of the models. A compulsory condition for obtaining correct and reliable results is checking the results for blunders. For baselines longer than 15 km, we propose using a weight model that includes the baseline standard deviation from the network adjustment and corrections for the baseline length and its accuracy.
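A toy constant-velocity Kalman filter over a simulated displacement series can sketch the prediction/update recursion the study builds on; the matrices and data below are illustrative, not the authors' processing model.

```python
import numpy as np

# Toy constant-velocity Kalman filter for a monitored displacement series.
# State: [displacement, velocity]; observations: displacement only.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])              # observation model
Q = 1e-4 * np.eye(2)                    # process noise covariance
R = np.array([[0.01]])                  # observation noise covariance

x = np.zeros(2)                         # initial state
P = np.eye(2)                           # initial state covariance

for z in [0.1, 0.22, 0.29, 0.41, 0.52]:  # simulated displacements
    # Predict step
    x = F @ x
    P = F @ P @ F.T + Q
    # Update step
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
```

The observation noise covariance R plays the role of the weight model discussed above: scaling R rescales the Kalman gain and hence the influence of each measured displacement.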

... (1-7), where the residual vector containing all residuals is derived together with the optimal estimate and its variance-covariance matrix (Fan, 2010). ...

... We need to linearize the non-linear observation equations of quadratic shapes to be able to use the least squares method for acquiring best-fit shapes. The general linearization procedure is as follows (Fan, 2010): ...
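A sketch of the linearization procedure for one quadratic shape, a sphere, using Gauss-Newton iterations (the points and initial guess are illustrative; the thesis also fits cones and cylinders in the same way):

```python
import numpy as np

# Fit a sphere (centre c, radius r) to 3-D points by linearising the
# observation equations f_i = ||p_i - c|| - r = 0 (Gauss-Newton).
pts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
                [-1.0, 0.0, 0.0], [0.0, -1.0, 0.0], [0.0, 0.0, -1.0]])

x = np.array([0.1, -0.1, 0.1, 0.5])        # initial guess [cx, cy, cz, r]
for _ in range(20):
    d = pts - x[:3]
    dist = np.linalg.norm(d, axis=1)
    f = dist - x[3]                         # misclosures at current estimate
    # Jacobian of f w.r.t. [c, r]: rows are [-(p - c)/||p - c||, -1].
    J = np.hstack([-d / dist[:, None], -np.ones((len(pts), 1))])
    dx = np.linalg.lstsq(J, -f, rcond=None)[0]   # least squares correction
    x = x + dx

centre, radius = x[:3], x[3]
```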

The importance of single-tree-based information for forest management and related industries in countries like Sweden, approximately 65% of which is covered by forest, is the motivation for developing algorithms for tree detection and species identification in this study. Most previous studies in this field are based on aerial and spectral images, and less attention has been paid to detecting trees and identifying their species using laser points and clustering methods.
In the first part of this study, two main clustering approaches (hierarchical and K-means) are compared qualitatively in detecting 3-D ALS points that pertain to individual tree clusters. Further tests are performed on test sites using the supervised K-means algorithm, in which the initial clustering points are defined as seed points. These points, which represent the top point of each tree, are detected from a cross-section analysis of the test area. Comparing the three methods (hierarchical, ordinary K-means and supervised K-means), the supervised K-means approach shows the best result for clustering single-tree points. An average accuracy of 90% is achieved in detecting trees. Comparing the results of the thesis algorithms with results from the DPM software, developed by the Visimind Company for analysing LiDAR data, shows a more than 85% match in detecting trees.
Identification of tree species is the second task of this thesis. For this analysis, 118 trees are extracted as reference trees of three species, spruce, pine and birch, which are the dominant species in Swedish forests. In total, six methods, including best-fitted 3-D shapes (cone, sphere and cylinder) based on the least squares method, point density, hull ratio and slope changes of the tree outer surface, are developed for identifying these species. The methods are applied to all extracted reference trees individually. For aggregating the results of all these methods, a fuzzy logic system is used because of its strength in combining fuzzy sets with no distinct boundaries. The best model obtained from the fuzzy system provides 73%, 87% and 71% accuracy in identifying the birch, spruce and pine trees, respectively. The overall accuracy in species categorization is 77%, and this percentage increases when only coniferous and deciduous classes are distinguished: classifying spruce and pine as coniferous versus birch as deciduous yielded 84% accuracy.

... Physical and geometric quantities such as angles, distances, heights, and gravity are measured and processed. In this case, a great amount of data arises [1]. A quantity is always measured differently even when it is measured many times under the same conditions [2]. ...

Classical outlier tests based on least squares (LS) have significant disadvantages in some situations. The adjustment computation and classical outlier tests deteriorate when the observations include outliers. Robust techniques that are not sensitive to outliers have been developed to detect them. Several methods use robust techniques such as M-estimators, the L1-norm, the least trimmed squares, etc. Among them, the least trimmed squares (LTS) has a high breakdown point. After a theoretical explanation, the adjustment computation in this study has been carried out based on both least squares (LS) and least trimmed squares (LTS). A certain polynomial with arbitrary values has been used for the applications. In this way, the performance of these techniques has been investigated.
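A minimal sketch contrasting LS with a brute-force LTS on a straight-line fit with one gross error (the data are arbitrary, in the spirit of the study's polynomial experiments); LTS keeps the h observations with the smallest squared residuals and is therefore unaffected by the outlier.

```python
import numpy as np
from itertools import combinations

# Fit y = a + b*x by LS and by least trimmed squares (LTS) with one outlier.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 + 1.0 * x
y[5] += 30.0                       # gross error on the last observation

A = np.column_stack([np.ones_like(x), x])
ls = np.linalg.lstsq(A, y, rcond=None)[0]       # classical LS estimate

# Brute-force LTS: keep the h-subset with the smallest sum of squared
# residuals (h = 5 of 6 here); feasible only for tiny data sets.
h, best = 5, (np.inf, None)
for idx in combinations(range(len(x)), h):
    idx = list(idx)
    coef = np.linalg.lstsq(A[idx], y[idx], rcond=None)[0]
    rss = np.sum((y[idx] - A[idx] @ coef) ** 2)
    if rss < best[0]:
        best = (rss, coef)
lts = best[1]
```

The LS slope is pulled far from the true value of 1 by the single blunder, while LTS recovers the clean fit, illustrating the high breakdown point mentioned above.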

... It is worth mentioning that digitalization uncertainties are essentially generated by human factors and measurement routine; accordingly, they can be considered as "chance" or "stochastic" errors, to be treated in the framework of the classical theory of errors [32,33]. ...

The Molise region (southern Italy) fronts the Adriatic Sea for nearly 36 km and has been suffering from erosion since the mid-20th century. In this article, an in-depth analysis has been conducted in the time-frame 2004–2016, with the purpose of discussing the most recent shoreline evolution trends and individuating the climate forcings that best correlate with them. The results of the study show that an intense erosion process took place between 2011 and 2016, both at the northern and southern parts of the coast. This shoreline retreat is at a large extent a downdrift effect of hard protection systems. Both the direct observation of the coast and numerical simulations, performed with the software GENESIS, indicate that the shoreline response is significantly influenced by wave attacks from approximately 10° N; however, the bimodality that characterizes the Molise coast wave climate may have played an important role in the beach dynamics, especially where structural systems alternate to unprotected shore segments.

... One way to present the total uncertainty of each point is to use the variance values. For instance, the uncertainty of point p can be obtained by (Fan 2010) ...

The design of surveying networks inside tunnels is of crucial importance, as an optimal design enables the network to fulfill its demanded quality parameters, for instance precision and reliability. The precision of a geodetic network in tunnels usually drops drastically as the network expands inside the tunnel and the distance to known control points increases. The reliability of tunnel geodetic networks is also fairly low due to weak network geometry (limited space in the lateral section of the tunnels). This paper studies the uncertainty of tunnel surveying networks under different observation plans in the West Link project, where an eight-kilometer railway tunnel is to be constructed underneath the city of Gothenburg in Sweden. Adding more free station set-ups and involving observations from the tunnel wall-bracket points can improve the network uncertainty and reliability. Moreover, including orientation measurements (gyro-observations) has a significant effect on preventing a quick precision drop of the network in long tunnel corridors.

... This technique consists of finding a line that fits a data set following a certain criterion. The most common criterion, which will also be employed in this work, is least squares adjustment [28]. ...

Increasingly, patients exposed to radiation from computed axial tomography (CT) will face a greater risk of developing tumors or cancer caused by cell mutation in the future. A lower dose level would decrease the number of these possible cases. However, it could also result in medical specialists (radiologists) not being able to detect anomalies or lesions. This work explores a way of addressing these concerns, reducing unnecessary radiation without compromising the diagnosis. We contribute a novel methodology in the CT area to predict the precise radiation dose that a patient should be given to accomplish this goal. Specifically, from a real dataset composed of the dose data of over fifty thousand patients classified into standardized protocols (skull, abdomen, thorax, pelvis, etc.), we eliminate atypical information (outliers) and then generate regression curves employing diverse well-known machine learning techniques. As a result, we have chosen the best analytical technique per protocol, a selection thoroughly carried out according to traditional dosimetry parameters, to accurately quantify the dose level that the radiologist should apply in each CT test.
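A hedged sketch of the outlier-removal-then-regression pipeline described above, using an IQR rule and a polynomial curve; the protocol data, noise and dose model are entirely synthetic, and the actual study compares several machine learning techniques per protocol.

```python
import numpy as np

# Hypothetical per-protocol dose curve: drop outliers by the IQR rule,
# then fit a regression curve (here a 2nd-degree polynomial) to the rest.
weight = np.array([50, 55, 60, 65, 70, 75, 80, 85, 90, 95.0])
dose = (0.002 * weight ** 2 + 0.1 * weight
        + np.array([0.3, -0.2, 0.1, 0.0, 30.0,     # one gross outlier
                    -0.1, 0.2, -0.3, 0.1, 0.0]))

# IQR rule: discard points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR].
q1, q3 = np.percentile(dose, [25, 75])
iqr = q3 - q1
keep = (dose >= q1 - 1.5 * iqr) & (dose <= q3 + 1.5 * iqr)

# Regression curve on the cleaned data.
coeffs = np.polyfit(weight[keep], dose[keep], deg=2)
predict = np.poly1d(coeffs)
```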

... In addition, it can be noticed that the Multiple Linear Regression improves the results. The improvement is minimal with respect to the Simple Linear Regression based on the stored water height, but it is remarkable when compared with that based on the temperature, whose relation could be considered merely moderate [7]. ...

... Furthermore, an optimal network should have the capability to detect gross errors in the observations and minimise the effect of the undetected ones on the adjustment results (Fan, 2010). Baarda (1968) proposed a global test for outlier detection and data snooping for the localisation of gross errors and introduced the concept of reliability. ...

An optimal design of a geodetic network helps surveying engineers maximise the efficiency of the network. A number of pre-defined quality requirements, i.e. precision, reliability, and cost, are fulfilled by performing an optimisation procedure. Today, this is almost always accomplished by implementing analytical solutions, where human intervention in the process cycle is limited to defining the requirements. Nevertheless, a trial-and-error method can be beneficial in some applications. In order to solve an optimisation problem analytically, it can be classified into different orders, where an optimal datum, configuration, and optimal observation weights are sought such that the precision, reliability and cost criteria are satisfied.
In this thesis, which is a compilation of six peer-reviewed papers, we optimised and redesigned a number of GNSS-based monitoring networks in Sweden by developing new methodologies. In addition, optimal design and efficiency of total station establishment with RTK-GNSS is investigated in this research.
Sensitivity of a network in detecting displacements is of importance for monitoring purposes. In the first paper, a precision criterion was defined to enable a GNSS-based monitoring network to detect 5 mm displacements at each network point. Developing an optimisation model considering this precision criterion, reliability and cost yielded a decrease of 17% in the number of observed single baselines, implying a reliable and precise network at lower cost. The second paper concerned a case where the precision of observations could be improved in forthcoming measurements, so a new precision criterion was developed to account for this assumption. A significant change was seen in the optimised design of the network for subsequent measurements. Up to that point, only the weights of single baselines had been subject to optimisation, while in the third paper the effect of mathematical correlations between GNSS baselines was considered in the optimisation. Hence, the sessions of observations, including more than two receivers, were optimised. Four out of ten sessions with three simultaneously operating receivers were eliminated in a monitoring network with a designed displacement detection of 5 mm. The sixth paper was the last one dealing with optimisation of GNSS networks. The area of interest was divided into a number of three-dimensional elements, and the precision of deformation parameters was used in developing a precision criterion. This criterion enabled the network to detect displacements of 3 mm at each point.
A total station can be set up in the field by different methods, e.g. free station or set-up over a known point. A real-time updated free station method uses RTK-GNSS to determine the coordinates and orientation of a total station. The efficiency of this method in height determination was investigated in the fourth paper. The research produced promising results, suggesting the method as an alternative to traditional levelling under some conditions. Moreover, an optimal location for the total station in free station establishment was studied in the fifth paper. It was numerically shown that the height component has no significant effect on the optimal localisation.

... Such an observation model represents a combined adjustment by parameters of indirect and direct observations (Feil, 1989), but if no more than one absolute measurement is introduced for each station, it also corresponds to adjustment with pseudo-observations (e.g. Fan, 1997; Niemeier, 2002). ...

The paper presents the new joint adjustment of the Croatian First Order Gravity Network, for the first time adjusted as a whole. The adjustment involves absolute and relative gravity measurements, the latter performed in the course of four survey stages. Firstly, the measurements are concisely described. Revision of the absolute and pre-processing of the relative measurements are briefly presented. The applied adjustment model is described. Accordingly, the gravity values of all stations (absolute and relative), corrections of the linear calibration coefficient and linear drift coefficients are included in the functional model as unknown parameters. The absolute measurements are included in the adjustment as observations. The new adjustment resulted in significantly different gravity values as compared to previous adjustments (of individual stages of the network). The differences in gravity values are an order of magnitude greater than the expected accuracy. It is shown that the differences are mainly due to errors in the gravimeters' calibration constants, which were neglected in the previous adjustments. Because of the significant differences, the new linear transformation function from the Potsdam to the Croatian Gravity System is determined.

... and the value of the maximal relative inaccuracy of Y is ∆₁Y/Y, where ∆₁Y is defined by (1) and [6,7]. The value of the maximal absolute inaccuracy ∆₁Y according to our method [1] is ...

In this paper we refine and generalize some of our previous results on inaccuracy (error) theory. We define conditions which characterize different types of functions. Via these functions, an indirectly measurable variable Y can be analytically represented. We also present criteria for comparing the maximal absolute and relative inaccuracies of the indirectly measurable variable Y of first and second order for two experiments. We correct some of our previous conclusions regarding the application of the dimensionless scale for evaluating the quality of an experiment. Furthermore, we give two numerical counterexamples.

... Furthermore, an optimal network should have the capability to detect gross errors in the observations and minimise the effect of the undetected ones on the adjustment results (Fan 2010). Baarda (1968) proposed a global test for outlier detection and data snooping for the localisation of gross errors and introduced the concept of reliability. ...

... The method used in forming matrix A, matrix L and matrix P in trilateration was also applied in traversing; their mathematical model is shown in equation (6), with a reduced normal equation as in equation (7) (Fan, 2010). ...

Using a multistage sampling of respondents, this study estimates ordinary least squares and logistic regression models of violent crime victimization risks in different residential neighbourhoods of Minna, Nigeria. It focuses on the effects of the neighbourhood built environment, in the form of non-residential land uses, and of neighbourhood-level social and economic characteristics. As the outcomes show, residents with a higher level of education had a lower risk of violent crime victimization, as did high-income individuals. Results showed that individuals' risks of violent crime victimization were significantly higher if respondents lived in high-density rather than low-density residential neighbourhoods. Findings indicated that the neighbourhood-level presence of commercial and recreational land uses significantly increases residents' risk of violent crime victimization in the study area. The study concludes that violent crime victimization risks varied significantly across neighbourhoods and that socio-economic, structural and land-use variables accounted for this. Implications for future research and crime prevention policy are discussed.

... The error ellipse of the projected point in the image can be calculated from its covariance matrix [Draper and Smith, 1981; Fan, 1997]. The lengths of the error ellipse axes are determined by the eigenvalues, while the eigenvectors give the directions of the two principal axes of the error ellipse. ...
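The eigenvalue construction described in the snippet can be sketched directly; the covariance values below are invented for illustration:

```python
import numpy as np

# Illustrative 2x2 covariance matrix of a projected point (e.g. in mm^2)
C = np.array([[4.0, 1.5],
              [1.5, 2.0]])

eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order
a = np.sqrt(eigvals[1])                # semi-major axis of the 1-sigma ellipse
b = np.sqrt(eigvals[0])                # semi-minor axis
# Orientation of the major axis (angle from the x-axis)
theta = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])
```

The same orientation follows from the closed form θ = ½·atan2(2σxy, σx² − σy²).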

Mobile mapping is the process of collecting geospatial data with a moving vehicle. These vehicles are often equipped with two types of sensors: remote sensing (cameras, lidar, radar) and geo-localization (GNSS, IMU, odometer). Precise and robust georeferencing has been a major challenge for the implementation of mobile mapping systems. Indeed, in dense urban environments, signal masking and multipath errors corrupt the measurements and lead to large positioning errors. High-precision IMUs make it possible to bridge the positioning gaps and ensure a drift low enough to fulfil the accuracy requirements of mapping. Nowadays, hybrid positioning systems (GNSS/IMU/odometer) are mature enough to provide reliable industrial solutions for the collection of georeferenced data. National and private mapping agencies have started to collect the raw data required for building geospatial repositories at very large scales. However, the very high cost of positioning systems incorporating high-precision IMUs restricts their use to the establishment of geospatial reference data, and more affordable positioning solutions are needed for map updating. The objective of this thesis is to provide a low-cost positioning solution that can be used on a large number of map updating vehicles. We propose to use one or more cameras on a vehicle as a georeferencing system. Indeed, the vehicle's trajectory can be estimated using visual odometry techniques. To limit the drift of the trajectory due to the accumulation of errors, we propose a registration on a set of precisely georeferenced visual landmarks. These landmarks are reconstructed using the reference data generated by precise and expensive mapping systems. Natural road features such as road markings and traffic signs were chosen as visual landmarks. A local bundle adjustment algorithm has been adapted to estimate the pose of the vehicle using a sequence of images acquired by one or more embedded cameras.
A rigorous approach that takes the uncertainties into account makes it possible to tune automatically the weights of every constraint in the equation system of the adjustment and to estimate the uncertainties of the parameters. These are used in a propagation-based matching algorithm that accelerates the process of tracking interest points between images and eliminates many false matches. This significantly reduces the drift of the visual odometry by reducing the sources of errors. The remaining part of the drift is removed using the georeferenced visual landmarks. The process of matching the image sequence with the landmarks is guided by the uncertainty of the poses; it adds a set of absolute constraints to the equation system of the bundle adjustment, and the drift is drastically reduced. Each step of the algorithm is evaluated on real image sequences with ground truth.

... This is equivalent to 2.83 times the standard error. In geodesy and photogrammetry one often sets 2 or 3 times the initial standard error σ as the accepted error tolerance (Fan, 1997). In some other citations even the E90 error (= ±1.645 ...
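The multipliers quoted here correspond to two-sided coverage probabilities of the normal distribution, which can be checked with the standard library; this sketch simply tabulates them:

```python
from statistics import NormalDist

nd = NormalDist()
# Two-sided probability that a normally distributed random error
# falls inside a +/- k*sigma tolerance
coverage = {k: 2 * nd.cdf(k) - 1 for k in (1.645, 2.0, 2.83, 3.0)}

for k, p in coverage.items():
    print(f"+/-{k} sigma covers {100 * p:.2f} % of random errors")
```

±1.645σ gives the E90 (90 %) error, while ±2.83σ covers roughly 99.5 % of random errors.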

In geodesy three surfaces are encountered: the physical surface of the earth, the geoid and the reference ellipsoid, giving rise to the orthometric height (H), the ellipsoidal height (h) and the geoidal separation (N). The orthometric height and the ellipsoidal height are referenced to the geoid and the reference ellipsoid respectively. The vertical separation between the ellipsoid and the geoid is the geoidal separation. A mathematical relation depicting the surface of the geoid with regard to the reference ellipsoid is the geoid model; it relates the geoidal separation to the horizontal location.
The Global Navigation Satellite System provides precise locations of points on the surface of the earth. The vertical location provided is the ellipsoidal height, which needs conversion to a more usable quantity, the orthometric height. This is done by integrating ellipsoidal heights with a geoid model, and the accuracy of the conversion depends on the accuracy of the geoid model. Therefore, the development of geoid models has become a current area of research in geodesy.
The objective of this study is to develop a local geoid model by employing various polynomial models and thereafter to analyse the accuracy of these models. The test area is in Papua New Guinea. The geometric method is used to compute the geoidal separation from ellipsoidal and orthometric heights; thereafter the horizontal coordinates and the geoidal separation are used to develop the geoid surface using second, third and fourth degree polynomials. The study shows that the third degree polynomial provided an accuracy of ±20 cm.
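The geometric method described above reduces to an ordinary least-squares fit of a polynomial surface to the geoidal separations at control points. A synthetic sketch with a second-degree polynomial (all coordinates and separations invented for illustration, not the Papua New Guinea data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic control points: plan coordinates (km) and geoidal separations (m)
x = rng.uniform(0.0, 10.0, 30)
y = rng.uniform(0.0, 10.0, 30)
true_N = 45.0 + 0.3 * x - 0.2 * y + 0.01 * x * y + 0.005 * x**2 - 0.004 * y**2
N_obs = true_N + rng.normal(0.0, 0.02, 30)   # N obtained by the geometric method

# Second-degree surface: N = a0 + a1*x + a2*y + a3*x*y + a4*x^2 + a5*y^2
A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
coef, *_ = np.linalg.lstsq(A, N_obs, rcond=None)

residuals = N_obs - A @ coef
rmse = float(np.sqrt(np.mean(residuals**2)))  # accuracy estimate of the model
```

Third- and fourth-degree models only add columns to A; the ±20 cm figure in the study is the analogous residual statistic for its real control points.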

... Model development begins with evaluating the parameters φ, β1, β2 on the basis of daily observations of the input tk, ΔTk and the output Sk. To do this, we find the minimum of the function [15] ...

Modern geodetic equipment allows observations to be carried out very quickly, providing high accuracy and productivity. Achieving high measurement accuracy is impossible without taking into account the external factors that influence the observed object. Therefore, in order to evaluate the influence of thermal displacement on the results of geodetic monitoring, a mathematical model of the horizontal displacement of above-ground pipelines was theoretically grounded and built. In this paper we used data from experimental studies on the existing pipelines "Soyuz" and "Urengoy - Pomary - Uzhgorod". The above-ground pipeline was considered as a dynamic system "building - environment". Based on the characteristics of dynamic systems, the correlation between the factors of thermal influence and the horizontal displacement of the pipeline axis was defined.
Establishing patterns between the input factors and the output response of the object can be useful not only for geodetic control, but also for their consideration in the design of new objects. It was found that the greatest influence on the accuracy of geodetic observations can come from the dispersion of high-frequency oscillations caused by daily thermal displacement. The magnitude of this displacement exceeds the actual measurement error.
The article presents the results of the calculation of high-frequency oscillations of an above-ground gas pipeline.
The results made it possible to substantiate the accuracy and methodology of geodetic observations of the horizontal displacement of pipeline axes, taking into account the influence of cyclic thermal displacement.
The research results were recommended for practical use by enterprises that operate main gas pipelines; they were successfully tested by specialists of PJSC "Ukrtransgaz" (Kharkiv, Ukraine) during the technical state control of an aerial pipeline crossing in Ukraine and can also be used to form the relevant regulations.

... There are different methods [1, 2, 3, 4] for determining the inaccuracy (error) in the value of Y in a given experiment. In [5, 6, 7] we studied the maximum absolute and maximum relative inaccuracy of the indirectly measurable variable ...

Let an indirectly measurable variable Y be represented as a function of a finite number of directly measurable variables X1, X2, ..., Xn. In our previous research we: 1) represented the maximum inaccuracies of Y in the first degree of approximation as linear functions of the inaccuracies of X1, X2, ..., Xn; 2) defined the spaces of the maximum inaccuracies and a dimensionless scale for quality (accuracy) evaluation of an experiment in them; 3) introduced the maximum inaccuracies in the second degree of approximation.
In the current paper we prove that the maximum inaccuracies of Y in the second degree of approximation are quadratic forms in the inaccuracies of X1, X2, ..., Xn and that these forms describe certain types of quadric hypersurfaces of the parabolic class. Moreover: 1) we give a complete algebraic classification of these hypersurfaces; 2) we define a dimensionless scale for quality (accuracy) evaluation of the experiment given the maximum inaccuracies in the second degree of approximation.

... Statistically, field observations and the resulting measurements are never exact. Any observation can contain various types of errors (Fan 1997; New Jersey Institute of Technology 2007). Random errors are caused by various subjective and objective factors in the process of memory production and the presentation of information, for example rounding and farmers' memory errors. ...

Premium ratemaking is an important issue to guarantee insurance balance of payments. Most ratemaking methods require large samples of long-term loss data or farm-level yield data, which are often unavailable in developing countries. This study develops a crop insurance ratemaking method with survey data. The method involves a questionnaire survey on characteristic yield information (average yield, high yield, and low yield) of farming households’ cropland. After compensating for random error, the probability distributions of farm-level yields are simulated with characteristic yields based on the linear additive model. The premium rate is calculated based on Monte Carlo yield simulation results. This method was applied to Dingxing County, North China to arrive at the insurance loss cost ratio and calculate the necessary premium rate. The method proposed in this study could serve as a feasible technique for crop insurance ratemaking in regions that lack sufficient long-term yield data, especially in developing countries with smallholder agriculture.
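The final step of such a method (premium rate from simulated yields) can be sketched as follows; the yield distribution and coverage level are invented for illustration, not taken from the Dingxing County data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical farm-level yield model (t/ha); in the paper these draws come
# from distributions fitted to surveyed characteristic yields
mean_yield, sd_yield = 6.0, 1.2
yields = np.clip(rng.normal(mean_yield, sd_yield, 100_000), 0.0, None)

coverage = 0.8                                 # insured share of expected yield
guarantee = coverage * mean_yield              # guaranteed yield level
shortfall = np.maximum(guarantee - yields, 0.0)

# Pure premium rate = expected loss / guaranteed yield (loss cost ratio)
loss_cost_ratio = float(shortfall.mean() / guarantee)
```

With these assumed numbers the loss cost ratio comes out around 2 %; the paper's contribution lies in building the yield distributions from survey data and compensating for random error before this Monte Carlo step.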

... One possibility to calculate an error variance is the application of the law of error propagation (e.g. Fan, 1997) to Eq. (37), which, under consideration of (33) and (34) and after a few transformations, results in ...

... Since the initial value of R SC1 need not be calculated precisely, the correlations of the nine parameters of R SC1 are not considered in this step. The normal equations are constructed and calculated on the basis of least squares adjustment (14). ...

Rotation photogrammetric systems are widely used for 3D information acquisition, where high-precision calibration is one of the critical steps. This study first shows how to derive the rotation model and deviation model in the object space coordinate system according to the basic structure of the system and the geometric relationship of the related coordinate systems. Then, overall adjustment of multi-images from a surveying station is employed to calibrate the rotation matrix and the deviation matrix of the system. The exterior orientation parameters of images captured by other surveying stations can be automatically calculated for 3D reconstruction. Finally, real measured data from Wumen wall of the Forbidden City is employed to verify the performance of the proposed calibration method. Experimental results show that this method is accurate and reliable and that a millimetre level precision can be obtained in practice.

... Most practical theories assume that all measurements contain some type of error due to measurement conditions regarding instrument, human and environmental aspects. Under this assumption, repeated measurements improve this factor, thus producing a more reliable and accurate product [16]. ...

Digital Terrain Models (DTMs) are widely and intensively used as a computerized mapping and modeling infrastructure representing our environment. There exist many different types of wide-coverage DTMs generated by various acquisition and production techniques, which differ significantly in terms of geometric attributes and accuracy. In terms of quality and accuracy, most studies investigate relative accuracy, relying solely on coordinate-based comparison approaches that ignore the local spatial discrepancies existing in the data. Our long-term goal is to analyze the absolute accuracy of such models based on hierarchical feature-based spatial registration, which relies on the represented topography and morphology and takes the existing local spatial discrepancies into account. This registration is the preliminary stage of the quality analysis, in which a relative DTM comparison is performed to determine the accuracy of the two models. This paper focuses on the second stage of the analysis, applying the same mechanism to multiple DTMs to compute the absolute accuracy, exploiting the fact that this solution system has a high level of redundancy. The suggested approach not only computes a posteriori absolute accuracies of DTMs, which are usually unknown, but also thoroughly analyzes the absolute accuracies of existing local trends. The methodology is carried out by developing an accuracy computation analysis that simultaneously uses multiple different independent wide-coverage DTMs describing the same relief. A comparison mechanism is employed on DTM pairs using a Least Squares Adjustment (LSA) process, in which absolute accuracies are computed based on theory-of-errors concepts. A simulation of four synthetic DTMs is presented and analyzed to validate the feasibility of the proposed approach.

First, this paper introduces a statistical model of gross errors, namely the Bernoulli–Gaussian (BG) model, which characterizes the gross error as a product of a Bernoulli variable and a Gaussian variable. The BG model offers a framework to interpret various causes of outliers through the perspective of gross errors. In addition, it unifies commonly used observation models for outliers by adjusting the range of BG model parameters. Second, this paper proposes an estimation method for BG model parameters based on the expectation maximization (EM) algorithm. This approach attributes different gross error parameters for distinct types of observations, facilitating parameter estimation in both single-source and multisource observation systems. Additionally, by organizing equations in the form of individual observations, its applicability can be broadened to both static and dynamic scenarios. Finally, a normal sample example and a Global Navigation Satellite System (GNSS) positioning example verified the effectiveness of the proposed method for estimating the BG model parameters.

This contribution introduces a statistical model of gross errors, called the Bernoulli-Gaussian (BG) model, in which the gross error is the product of a Bernoulli variable and a Gaussian variable. First, with the BG model, different causes of outliers can be interpreted from the perspective of gross errors. Likewise, the commonly used observation models, such as the mean shift model and the variance inflation model, can be unified by the BG model by choosing different value ranges of the model parameters. Second, based on the EM (expectation maximization) algorithm, an estimation method for the BG model parameters in linear observation equations is proposed. With this method, the BG model parameters can be estimated in both static and dynamic observation systems. Finally, normal sample examples and GNSS examples demonstrated that estimating the BG model parameters via the EM algorithm is effective.
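A BG gross error e = b·g (b Bernoulli, g Gaussian) added to Gaussian measurement noise yields a two-component Gaussian mixture, so its parameters can be recovered with a few EM iterations. The sketch below simulates such errors and estimates the outlier probability p and the outlier variance; all numbers are illustrative and the noise variance is assumed known, which is a simplification of the papers' full linear-model setting:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate BG gross errors: e = n + b*g with n ~ N(0, s0^2),
# b ~ Bernoulli(p), g ~ N(0, sg^2)
s0, sg, p = 1.0, 10.0, 0.1
n_obs = 20_000
e = rng.normal(0.0, s0, n_obs) + (rng.random(n_obs) < p) * rng.normal(0.0, sg, n_obs)

def pdf(x, var):
    # Zero-mean normal density
    return np.exp(-x**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

# EM for the mixture N(0, s0^2) vs N(0, s0^2 + sg^2), with s0 assumed known
p_hat, var_out = 0.5, 25.0        # initial guesses
for _ in range(200):
    num = p_hat * pdf(e, var_out)
    r = num / (num + (1.0 - p_hat) * pdf(e, s0**2))  # outlier responsibilities
    p_hat = float(r.mean())                          # M-step: mixing weight
    var_out = float(np.sum(r * e**2) / np.sum(r))    # M-step: outlier variance
```

The estimates should land near the simulated p = 0.1 and s0² + sg² = 101; the cited works extend this idea to full observation equations rather than raw error samples.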

A harmful effect of anthropogenic activities in urban environments is the increase of thermal discomfort and, subsequently, a negative effect on humans' mental and physical performance. Therefore, it is of high importance to detect, monitor, and predict thermal discomfort, especially its temporal and spatial patterns in cities. The objective of this study is to propose a new method for modeling outdoor thermal comfort based on remote sensing and climatic datasets. To do so, several datasets were utilized, including those from Landsat, the Moderate Resolution Imaging Spectroradiometer (MODIS), the Digital Elevation Model (DEM) from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), and climatic datasets from local meteorological stations. The method was tested in the city of Tehran, Iran. For modeling outdoor thermal comfort, a Least Squares Adjustment (LSA) model was formulated based on Principal Component Analysis (PCA). In this model, the Principal Components (PCs) of the environmental and surface biophysical parameters were considered as independent variables and the Discomfort Index (DI) as the dependent variable. Finally, by determining the optimal values of the adjustment coefficients for each independent variable, maps of outdoor thermal comfort at different timestamps were produced and analyzed. The results of the modeling showed that the correlation coefficient and Root Mean Square Error (RMSE) between the modeled and observed outdoor thermal comfort values at the meteorological stations were 0.86 and 1.80 for the training data set and 0.89 and 2.04 for the testing data set, respectively, while they were 0.85 and 1.15 for the self-deployed devices. The average value of DI in the warm season of the year was 8.5 °C higher than in the cold season. Further, in both the warm and cold seasons the mean value of DI for bare land was found to be higher than for other land covers, whereas that of water bodies was lower than the others.
Our findings suggest that outdoor thermal comfort can be modeled efficiently using LSA with remote sensing and climatic datasets.

It is known that a distance can be measured directly using distance measurement instruments, or indirectly from the coordinates of its two end points using the familiar Pythagorean formula. In most geomatics engineering studies, the error in distances is expressed by the uncertainty of directly measured distances only, while there are not enough studies that highlight the error in a computed distance. This paper presents the error in distance in a newly formulated approach, which considers the planimetric position error at both ends of the computed distance to determine the error in that distance. Furthermore, the research concludes that the size and direction of the error ellipses at the end points and the azimuth of the distance are the main factors that combine to define the value of the error in distances measured indirectly.
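The propagation described here can be written compactly: project the covariance matrices of both end points onto the unit vector along the distance (i.e., along its azimuth). A numerical sketch with invented coordinates and point covariances, assuming the two points are uncorrelated:

```python
import numpy as np

# End points of the computed distance (illustrative coordinates, metres)
p1 = np.array([100.0, 200.0])
p2 = np.array([400.0, 600.0])

# Planimetric covariance matrices of the two points (mm^2), assumed independent
C1 = np.array([[9.0, 2.0], [2.0, 4.0]])
C2 = np.array([[4.0, -1.0], [-1.0, 6.0]])

d = float(np.linalg.norm(p2 - p1))
u = (p2 - p1) / d                  # unit vector along the distance (azimuth)

# Error propagation for d = sqrt((x2-x1)^2 + (y2-y1)^2):
# sigma_d^2 = u^T C1 u + u^T C2 u
var_d = float(u @ C1 @ u + u @ C2 @ u)
sigma_d = float(np.sqrt(var_d))    # standard error of the computed distance, mm
```

The projection u^T C u is largest when the azimuth lies along the major axis of the error ellipse, which is exactly the dependence on ellipse size, orientation and azimuth that the paper identifies.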

We utilise minimum-norm least squares based on the method of indirect observations to adjust a 2-dimensional triangulation network. The main objective of this paper is to optimally adjust the approximate coordinates of the nodes (points) of the given network. The network observations (11 measured distances and 17 angles) have been combined into a linear system of equations and adjusted in a free-network adjustment procedure to rigorously adjust the approximate coordinates of the network points. Better converged values were obtained by applying an iterative procedure; the minimum corrections for the free-network coordinates are obtained after five iterations. The data snooping procedure has been used to test the reliability and precision of the network observations. The t-test criterion is then applied for gross error detection: five angles and two lines are suspected to include gross errors at a critical value of 1.98.
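The minimum-norm least-squares solution used for free networks can be computed with the pseudoinverse: among all least-squares solutions of the rank-deficient system it picks the one of smallest norm. A toy rank-deficient example (height differences with invented values) stands in here for the triangulation network:

```python
import numpy as np

# Rank-deficient "free network" toy model: only differences between three
# unknowns are observed, so the absolute datum is undetermined
A = np.array([[-1.0,  1.0,  0.0],
              [ 0.0, -1.0,  1.0],
              [-1.0,  0.0,  1.0],
              [ 1.0,  0.0, -1.0]])
l = np.array([1.00, 2.01, 3.02, -2.99])   # observed differences (illustrative)

# Minimum-norm least-squares solution via the pseudoinverse
x = np.linalg.pinv(A) @ l
```

The datum defect appears as the vector of ones in the null space of A; the pseudoinverse solution is orthogonal to it and therefore has zero mean, which plays the role of the free-network datum condition.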

This paper concerns the determination of 3-dimensional transformation parameters between SK-63 in Kyrgyzstan and ITRF2005. The study uses 70 Kyrgyz geodetic points where SK-63 map projection coordinates, levelled heights and GPS-derived ITRF2005 coordinates are available. Comparison of planar coordinates derived from the original ITRF2005 coordinates with coordinates derived from transformation using the estimated parameters shows an average point position error of about 1 metre. Numerical results also indicate that transformation parameters estimated in Russia are not suitable for use in Kyrgyzstan, due to larger point residuals (3 metres on average) and a systematic westward shift.
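A linearized 7-parameter (Helmert) transformation of the kind estimated in such studies can be sketched with synthetic common points; the parameter values, point cloud and noise level below are invented, not the SK-63/ITRF2005 estimates:

```python
import numpy as np

rng = np.random.default_rng(1)

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ p == np.cross(v, p)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# 20 synthetic common points (coordinates in metres, illustrative only)
X1 = rng.uniform(-1000.0, 1000.0, (20, 3))

# "True" 7 parameters: translation, scale, small rotation angles (rad)
t_true = np.array([5.0, -3.0, 2.0])
s_true = 2e-6
r_true = np.array([1e-5, -2e-5, 1.5e-5])

X2 = X1 + t_true + s_true * X1 + np.cross(r_true, X1)
X2 = X2 + rng.normal(0.0, 0.01, X2.shape)        # 1 cm coordinate noise

# Linearized Helmert model: X2 - X1 = t + s*X1 + r x X1, with r x p = -skew(p) @ r
A = np.vstack([np.hstack([np.eye(3), p.reshape(3, 1), -skew(p)]) for p in X1])
l = (X2 - X1).ravel()
params, *_ = np.linalg.lstsq(A, l, rcond=None)   # [tx, ty, tz, s, rx, ry, rz]
```

Comparing transformed and directly measured coordinates at independent check points then gives the average point position error quoted in the abstract.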

Many Global Positioning System/Global Navigation Satellite System (GPS/GNSS) methods have been applied to cadastral surveying since the rapid development of satellite-based positioning. These methods are reported to offer efficiency, speed and economy compared to conventional ones. This study aims to comprehensively evaluate the GPS/GNSS methods most commonly used for cadastral surveying. Furthermore, a median-based comparison strategy was developed for the distribution of the results. The research results showed differences of a few centimetres between the coordinates obtained from the terrestrial and the GPS/GNSS techniques. Furthermore, the developed robust criteria verified the compatibility of the results. It is clear that the GPS/GNSS-based methods achieve highly accurate output in real time and comply well with surveying standards in Turkey. In addition, the robust criteria appear to be a fast, effective and objective way to compare the results, especially for the height component.

An Arabic book (free of charge) discussing the mathematics used in surveying engineering practice.

This study aims to investigate the ability of different least squares adjustment techniques to detect deformation. A simulated geodetic network is used for this purpose. The observations are collected with a Total Station instrument in three epochs, and different least squares adjustment methods are used to analyze the simulated network. The applied methods are adjustment by elements, adjustment using variance-covariance components, and Tikhonov regularization. For the numerical computation, we utilized an existing geodetic network around the simulated network, and the deformation (changes in the simulated network) was imposed on the object using a simulator in each epoch. The obtained results demonstrate that a more accurate outcome for the detection of small deformations is possible by estimating variance-covariance components. The difference between the estimated and the simulated deformations in the best scenario, i.e., applying variance-covariance components, is 0.2 and 0.1 mm in the x and y directions. In comparison, for the adjustment by elements and Tikhonov regularization methods the differences are 1.1 and 0.1 mm in the x direction and 1.4 and 1.1 mm in the y direction, respectively. In addition, it is also possible to model the deformation, and it can therefore be seen how the calculated displacement will affect the result of deformation modelling. It has been demonstrated that determining reasonable variance-covariance components is very important for estimating a realistic deformation model and monitoring geodetic networks.

Online Material: Matlab scripts and example data for adaptive Kalman filter.
The 2011 Mw 9.0 Tohoku-Oki earthquake was the most disastrous known event in Japan, triggering a powerful tsunami wave with a maximum amplitude of ∼40 m (Mori et al., 2012). At least 15,883 people were killed and over 6145 injured, with a total economic loss of more than $235 billion U.S., making it the costliest natural disaster in world history (Emergency Disaster Countermeasures Headquarters, 2011; Hennessy-Fiske, 2011). The earthquake occurred at 14:46:24 Japan Standard Time (05:46:24 UTC), with epicenter location at 38.297° N, 142.372° E and depth of 30 km (U.S. Geological Survey Earthquake Hazards Program, 2011). Seismic waveforms were recorded by the continuous Global Navigation Satellite Systems (GNSS) monitoring network of the GNSS Earth Observation Network (GEONET) (Sagiya et al., 2001) and the strong-motion seismograph networks of the Kyoshin Network (K-NET) and Kiban Kyoshin Network (KiK-net) (Aoi et al., 2004). These have been separately or jointly applied to rapid earthquake magnitude determination (Ohta et al., 2012; Wright et al., 2012; Melgar, Crowell, et al., 2013), tsunami warning (Melgar and Bock, 2013), earthquake rupture process inversion (Ide et al., 2011; Suzuki et al., 2011; Yagi and Fukahata, 2011; Yokota et al., 2011; Yue and Lay, 2011; Frankel, 2013), ionosphere perturbation detection (Heki, 2011; Tsugawa et al., 2011; Komjathy et al., 2012), study of the Earth's free oscillations (Mitsui and Heki, 2012), etc.
For those studies, the acquisition of robust seismic waveforms is vital but defective, especially in real-time mode. High-rate Global Positioning System (GPS) is capable of recording seismic waveforms with centimeter-level accuracy (Genrich and Bock, 2006), regardless of relative kinematic positioning mode (Nikolaidis et al., 2001; Larson et al., 2003; Blewitt et al., 2006, 2009; Bilich et al., 2008; Crowell …
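As a bare-bones stand-in for the adaptive Kalman filters used to fuse high-rate GPS and strong-motion data, the following minimal constant-velocity Kalman filter (all settings illustrative, not from the online material) shows the basic predict/update cycle on a noisy 1-D position series:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 1-D constant-velocity Kalman filter
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])                # only position is observed
Q = 1e-3 * np.array([[dt**3 / 3, dt**2 / 2],
                     [dt**2 / 2, dt]])    # process noise covariance
R = np.array([[0.04]])                    # observation noise variance (0.2^2)

t = np.arange(0.0, 20.0, dt)
truth = 0.5 * t                           # true position, constant velocity
obs = truth + rng.normal(0.0, 0.2, t.size)

x = np.zeros(2)
P = np.eye(2)
est = []
for z in obs:
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    est.append(x[0])

rmse = float(np.sqrt(np.mean((np.array(est) - truth) ** 2)))
```

An adaptive filter additionally tunes Q or R from the innovation sequence during shaking, which is what makes the approach usable when the noise levels change abruptly.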

Well-credited and widely used ionospheric models, such as the International Reference Ionosphere (IRI) or NeQuick, describe the variation of the electron density with height by means of a piecewise profile tied to the F2-peak parameters: the electron density, NmF2, and the height, hmF2. Accurate values of these parameters are crucial for retrieving reliable electron density estimations from those models. When direct measurements of these parameters are not available, the models compute them using the so-called ITU-R database, which was established in the early 1960s. This paper presents a technique aimed at routinely updating the ITU-R database using radio occultation electron density profiles derived from GPS measurements gathered by low Earth orbit satellites. Before being used, these radio occultation profiles are validated by fitting an electron density model to them. A re-weighted Least Squares algorithm is used for down-weighting unreliable measurements (occasionally, entire profiles) and for retrieving NmF2 and hmF2 values—together with their error estimates—from the profiles. These values are used to update the database monthly; the database consists of two sets of ITU-R-like coefficients that could easily be implemented in the IRI or NeQuick models. The technique was tested with radio occultation electron density profiles that are delivered to the community by the COSMIC/FORMOSAT-3 mission team. Tests were performed for solstice and equinox seasons in high and low solar activity conditions. The global mean error of the resulting maps—estimated by the Least Squares technique—is between … and … elec/m³ for the F2-peak electron density (which is equivalent to 7 % of the value of the estimated parameter) and from 2.0 to 5.6 km for the height (2 %).
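Re-weighted least squares of this kind can be sketched with a robust straight-line fit: the residuals are converted to weights at each iteration so that unreliable measurements are down-weighted. The data and tuning constants below are illustrative (Huber-type weights with a MAD scale estimate), not the electron density model of the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic measurements: straight line with 10 % gross errors injected
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.1, 50)
y[::10] += 3.0                          # unreliable measurements

A = np.column_stack([np.ones_like(x), x])
w = np.ones_like(y)
for _ in range(20):
    W = A.T * w                          # weighted normal equations: A^T W
    coef = np.linalg.solve(W @ A, W @ y)
    v = y - A @ coef
    s = 1.4826 * np.median(np.abs(v - np.median(v)))  # robust scale (MAD)
    # Huber-type weights: 1 inside 1.345*s, decaying outside
    w = np.where(np.abs(v) <= 1.345 * s, 1.0, 1.345 * s / np.abs(v))
```

After convergence the gross errors carry small weights and the fitted coefficients are close to the uncontaminated line; the same loop, applied profile by profile, is how occasional entire profiles end up effectively discarded.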

The concept of outlier detection by statistical hypothesis testing in geodesy is briefly reviewed. The performance of such tests can only be measured or optimized with respect to a proper alternative hypothesis. Firstly, we discuss the important question whether gross errors should be treated as non-random quantities or as random variables. In the first case, the alternative hypothesis must be based on the common mean shift model, while in the second case, the variance inflation model is appropriate. Secondly, we review possible formulations of alternative hypotheses (inherent, deterministic, slippage, mixture) and discuss their implications. As measures of optimality of an outlier detection, we propose the premium and protection, which are briefly reviewed. Finally, we work out a practical example: the fit of a straight line. It demonstrates the impact of the choice of an alternative hypothesis for outlier detection.

Some ground objects in digital topographic maps have particular geometric features. Generally speaking, the straight lines and curves composing roads are tangent, and the corners of buildings are usually right angles. Due to inevitable errors in the measuring process, the original geometric features of ground objects are damaged. In this regard, this paper proposes to treat the measuring errors in the contours of ground objects by least-squares adjustment, that is, the concept of “adjustment oriented to ground objects”. Six condition equations that are common in the contours of ground objects are derived. Meanwhile, processing methods are proposed to convert the tangency conditions between straight line segments and circular arcs into perpendicularity conditions of straight line segments and into side conditions. Software based on ObjectARX has also been developed. After such treatment, the contours of ground objects strictly satisfy the geometric conditions.

In recent years, the method of self-calibration widely used in photogrammetry has been found suitable for the estimation of systematic errors in terrestrial laser scanners. Since high correlations can be present between the estimated parameters, ways to reduce them have to be found. This paper presents a unified approach to the self-calibration of terrestrial laser scanners, in which the parameters in a least-squares adjustment are treated as observations by assigning appropriate weights to them. The higher these weights are, the lower the parameter correlations are expected to be. Self-calibration of a pulsed laser scanner, a Leica Scan Station, was performed with the unified approach. The scanner position and orientation were determined during the measurements with the help of a total station, and the point clouds were directly georeferenced. The significant systematic errors were the zero error of the laser rangefinder and the vertical circle index error. Most parameter correlations were comparatively low. In particular, precise knowledge of the horizontal coordinates of the scanner centre helped greatly to achieve low correlation between these parameters and the zero error. The approach was shown to be advantageous compared to adjustment with stochastic (weighted) inner constraints, where the parameter correlations were higher. At the same time, the collimation error could not be estimated reliably due to its high correlation with the scanner azimuth, because of the limited vertical distribution of the targets in the calibration field. While this problem can be solved for a scanner with a nearly spherical field-of-view, it will complicate the calibration of scanners with a limited vertical field-of-view. Investigations into the influence of the precision of the scanner position and levelling on the adjustment results led to two important findings.
First, it is not necessary to level the scanner during the measurements when using the unified approach since the parameter correlations are relatively low anyway. Second, the scanner position has to be known with a precision of about 1 mm in order to get a reliable estimate of the zero error.

3.1 Adjustment by Elements in Linear Models ........................... 95
    3.1.1 Basic Formulas ...................................... 95
    3.1.2 Special Applications: Direct Adjustment and Linear Regression ........... 102
3.2 Observation Equations ...................................... 105
    3.2.1 Selection of Unknown Parameters and Datum Problem ................ 105
    3.2.2 Linearization of Non-linear Observation Equations .................. 105
3.5 Sequential Adjustment by Elements .............................. 124
    3.5.1 Adjustment by Elements in Two Groups ........................ 124
    3.5.2 General Sequential Adjustment by Elements ...................... 126
4.2.2 Minimum-Norm Solution ................................... 146
4.2.3 Least Squares Solutions ................................... 150
4.2.4 Minimum-Norm Least-Squares Solution .......................... 154
4.3 Free Network Adjustment .................................... 157
    4.3.1 Free Networks ...................................... 157
    4.3.2 Structure of Matrix D .................................. 157
    4.3.3 Interpretation of Free Network Solution ........................ 160
    4.3.4 Alternative Formulations ................................ 164
5.2 Helmert's Method ........................................ 172
    5.2.1 Helmert's Method in Adjustment by Elements ...................... 172
5.3 Best Quadratic Unbiased Estimates ............................... 180
    5.3.1 BQUE in Adjustment by Elements ............................ 180
    5.3.2 BQUE in Condition Adjustment ............................. 184
6.3 Reliability of Observations
    6.3.1 Local Redundancies ................................... 204

Reliability of Observations.................................... 204
6.3.1 Local Redundancies................................... 204
6.3.2 Errors of Hypothesis Tests................................ 205
6.3.3 Internal and External Reliability............................ 206