GPS Carrier Phase Ambiguity Fixing Concepts
ABSTRACT: High-precision relative GPS positioning is based on the very precise carrier phase measurements. A prerequisite for obtaining high-precision relative positioning results is that the double-differenced carrier phase ambiguities become sufficiently separable from the baseline coordinates. Different approaches are in use and have been proposed to ensure a sufficient separability between these two groups of parameters. In particular, the approaches that explicitly aim at resolving the integer values of the double-differenced ambiguities have been very successful. Once the integer ambiguities are successfully fixed, the carrier phase measurements start to act as if they were high-precision pseudorange measurements, thus allowing for a baseline solution of comparably high precision. Fixing the ambiguities to integer values is, however, a non-trivial problem, in particular if one aims at numerical efficiency. This topic has therefore been a rich source of GPS research over the last decade or so. Starting from rather simple but time-consuming integer rounding schemes, the methods have evolved into complex and effective algorithms. Among the different approaches that have been proposed for carrier phase ambiguity fixing are those documented in Counselman and Gourevitch [1981], Remondi [1984; 1986; 1991], Hatch [1986; 1989; 1991], Hofmann-Wellenhof and Remondi [1988], Seeber and Wübbena [1989], Blewitt [1989], Abbott et al. [1989], Frei and Beutler [1990], Euler and Goad [1990], Kleusberg [1990], Frei [1991], Wübbena [1991], Euler and Landau [1992], Erickson [1992], Goad [1992], Teunissen [1993a; 1994a, b], Hatch and Euler [1994], Mervart et al. [1994], De Jonge and Tiberius [1994], and Goad and Yang [1994]. The purpose of the present lecture notes is to present the theoretical concepts of the GPS ambiguity fixing problem, to formulate procedures for solving it, and to outline some of the intricacies involved. Several examples are included in the
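The contrast drawn above between simple integer rounding and the more effective algorithms can be illustrated with a toy integer least-squares (ILS) computation. The sketch below is plain Python with purely illustrative numbers (not real GPS data), and a brute-force box search stands in for efficient search methods such as LAMBDA; it shows that for correlated float ambiguities, component-wise rounding need not return the integer vector that minimizes the weighted squared residual:

```python
import itertools

# Hypothetical 2-ambiguity example: float solution a_hat with covariance Qa.
# All numbers below are illustrative only.
a_hat = [5.45, 3.55]
Qa = [[0.0865, 0.0364],
      [0.0364, 0.0847]]   # correlated double-differenced float ambiguities

# Inverse of the 2x2 covariance (the weight matrix of the float solution).
det = Qa[0][0] * Qa[1][1] - Qa[0][1] * Qa[1][0]
Qinv = [[ Qa[1][1] / det, -Qa[0][1] / det],
        [-Qa[1][0] / det,  Qa[0][0] / det]]

def quad_form(r, Q):
    """Weighted squared residual r' Q r."""
    return sum(r[i] * Q[i][j] * r[j] for i in range(2) for j in range(2))

def ils_brute_force(a_hat, Qinv, radius=3):
    """Integer least-squares by exhaustive search in a small box around the
    rounded float solution -- feasible only in low dimension; practical
    methods organize this search far more efficiently."""
    centre = [round(a) for a in a_hat]
    best, best_cost = None, float("inf")
    for off in itertools.product(range(-radius, radius + 1), repeat=2):
        z = [c + o for c, o in zip(centre, off)]
        cost = quad_form([a - zi for a, zi in zip(a_hat, z)], Qinv)
        if cost < best_cost:
            best, best_cost = z, cost
    return best, best_cost

z_round = [round(a) for a in a_hat]             # naive component-wise rounding
z_ils, cost_ils = ils_brute_force(a_hat, Qinv)  # integer least-squares minimum
```

With these numbers, rounding gives [5, 4] while the ILS minimizer is [6, 4]: the correlation in Qa tilts the cost surface so that the nearest integer vector in the metric of Qa differs from the componentwise-nearest one.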

 "In many applications, it is advantageous if the basis vectors are short and close to orthogonal [1]. For more than a century, lattice reduction has been investigated by many people, and several types of reduction have been proposed, including the KZ reduction [2], the Minkowski reduction [3], the LLL reduction [4], and Seysen's reduction [5]. Lattice reduction plays an important role in many research areas, such as cryptography (see, e.g., [6]), communications (see, e.g., [1], [7]), and GPS (see, e.g., [8]), where the closest vector problem (CVP) and/or the shortest vector problem (SVP) need to be solved: "
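As a concrete illustration of the reductions listed in the quoted passage, here is a minimal textbook LLL implementation in plain Python. It recomputes the Gram-Schmidt orthogonalization after every update, so it is simple rather than efficient, and it is a generic sketch rather than any of the cited algorithms:

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(basis):
    """Gram-Schmidt orthogonalization; returns the orthogonal vectors
    and the mu coefficients mu[i][j] = <b_i, b*_j> / <b*_j, b*_j>."""
    n = len(basis)
    ortho, mu = [], [[0.0] * n for _ in range(n)]
    for i in range(n):
        v = [float(x) for x in basis[i]]
        for j in range(i):
            mu[i][j] = dot(basis[i], ortho[j]) / dot(ortho[j], ortho[j])
            v = [vi - mu[i][j] * oj for vi, oj in zip(v, ortho[j])]
        ortho.append(v)
    return ortho, mu

def lll(basis, delta=0.75):
    """Textbook LLL reduction of an integer basis (list of row vectors)."""
    basis = [list(b) for b in basis]
    n = len(basis)
    ortho, mu = gram_schmidt(basis)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):          # size reduction of b_k
            q = round(mu[k][j])
            if q:
                basis[k] = [bk - q * bj for bk, bj in zip(basis[k], basis[j])]
                ortho, mu = gram_schmidt(basis)
        if dot(ortho[k], ortho[k]) >= (delta - mu[k][k - 1] ** 2) * dot(ortho[k - 1], ortho[k - 1]):
            k += 1                              # Lovasz condition holds
        else:
            basis[k - 1], basis[k] = basis[k], basis[k - 1]
            ortho, mu = gram_schmidt(basis)
            k = max(k - 1, 1)
    return basis
```

For the 2-D basis (201, 37), (1648, 297) this returns the much shorter basis (1, 32), (40, 1), which spans the same lattice: the determinant is preserved up to sign.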
Article: A Modified KZ Reduction Algorithm
ABSTRACT: The Korkine-Zolotareff (KZ) reduction has been used in communications and cryptography. In this paper, we modify a very recent KZ reduction algorithm proposed by Zhang et al., resulting in a new algorithm which can be much faster and more numerically reliable, especially when the basis matrix is ill conditioned. 
 "The two key elements of the ambiguity discrimination test are the test statistic and the corresponding critical value (CV). Although well-documented acceptance test procedures are available for ambiguity validation testing, a rigorous discrimination testing procedure is still lacking (Teunissen 1996). "
ABSTRACT: As a prerequisite of network differential global positioning system applications, the network ambiguity must be determined. Ambiguity resolution and validation are important aspects of this process. However, validation theory is still under investigation. This paper presents an improved network ambiguity validation method that incorporates additional knowledge measured from the network. This process involves the detection of outliers among the baseline measurement errors. By breaking the spatial correlation, incorrectly fixed ambiguities cause the corresponding baseline measurement errors to appear as outliers, which may be discovered and identified with the proposed outlier detection algorithm and outlier identification algorithm, respectively. These detection and identification procedures are performed iteratively until all of the wrong baseline ambiguities are corrected. Because the validation procedure is unconnected to the initial integer ambiguity estimation process, any available ambiguity resolution method may be used to obtain the initial integers, without algorithm correction. When the network ambiguity combinations do not pass the validation algorithm, the method uses a direct estimation algorithm to obtain the correct ambiguity. By using a direct estimation algorithm rather than a search process, this new method consumes less computational time than conventional methods. This study compares the performance of this new method with those of the conventional F-ratio and W-ratio test validation algorithms by using Monte Carlo simulation techniques. Results from a field experiment conducted on data from the United States continuously operating reference stations (US CORS) reveal that this validation algorithm accelerates the convergence process of ambiguity determination.
Surveys in Geophysics 03/2012; 34(2). DOI: 10.1007/s10712-012-9211-1
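The detect-then-identify loop described in the abstract can be sketched generically as iterative screening of standardized residuals. Everything below is a hypothetical illustration: the function name, the simple max-residual rule, and the 3.29 critical value (two-sided 0.1% normal test) are illustrative choices, not the paper's actual algorithm.

```python
def screen_outliers(residuals, sigma, threshold=3.29):
    """Iteratively flag the largest standardized residual exceeding the
    critical value, in the spirit of a detection/identification loop.
    'residuals' are baseline measurement residuals, 'sigma' their
    nominal standard deviation; both and the threshold are illustrative."""
    flagged = []
    active = list(range(len(residuals)))
    while active:
        w = [abs(residuals[i]) / sigma for i in active]  # standardized residuals
        i_max = w.index(max(w))
        if w[i_max] <= threshold:
            break                              # detection: no remaining outlier
        flagged.append(active.pop(i_max))      # identification: remove worst one
    return sorted(flagged)
```

In a network setting, each flagged index would point to a baseline whose fixed ambiguity is suspect and should be re-estimated before the loop is repeated.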
 "This ILS procedure is efficiently mechanized in the LAMBDA method, see e.g. (Teunissen, 1998a). Note that the success rate of integer least-squares estimation is independent of the parameterization of the float ambiguities. "
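A quantity closely related to the quoted remark is the integer bootstrapping success rate, a standard and easy-to-compute lower bound on the integer least-squares success rate. Unlike the ILS success rate itself, this bound does depend on the ambiguity parameterization, which is why a decorrelating Z-transformation is usually applied before bootstrapping. A minimal sketch (the function names are illustrative):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bootstrap_success_rate(cond_stds):
    """Integer bootstrapping success rate
        P = prod_i ( 2 * Phi(1 / (2 * sigma_i)) - 1 ),
    where sigma_i are the conditional standard deviations of the
    sequentially conditioned ambiguities (in cycles)."""
    p = 1.0
    for s in cond_stds:
        p *= 2.0 * norm_cdf(1.0 / (2.0 * s)) - 1.0
    return p
```

For a single ambiguity with a conditional standard deviation of half a cycle the bound is 2*Phi(1) - 1, about 0.68; shrinking the conditional standard deviations (as decorrelation does) pushes the product toward 1.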
Conference Paper: GNSS ambiguity resolution: which subset to fix?
Proceedings of International Global Navigation Satellite Systems Society, IGNSS Symposium 2011; 01/2011