Journal of Global Positioning Systems (2005)
Vol. 4, No. 1-2: 223-229
Real Time Quality Assessment for CORS Networks
Simon Fuller, Philip Collier, Allison Kealy
Department of Geomatics, The University of Melbourne, Melbourne, Australia
e-mail: s.fuller2@pgrad.unimelb.edu.au Tel: +61 3 8344 4509
Received: 15 November 2004 / Accepted: 13 July 2005
Abstract. The growing use of real time high accuracy
Global Positioning System (GPS) techniques has resulted
in an increase in the number of critical decisions made on
the basis of a GPS derived position. When making these
decisions mobile users require assurance that the GPS
position quality meets their requirements. Providers of
Continuously Operating Reference Stations (CORS),
upon whom mobile users generally rely, must also
be able to assure users that their data meets agreed quality
standards. Unfortunately, the realistic and reliable
description of position and data quality is an area in
which GPS has traditionally been weak. Research being
undertaken as part of the Cooperative Research Centre
for Spatial Information (CRC-SI) is attempting to address
this problem by assessing and reporting on the quality of
raw GPS observations in real time. This paper examines a
number of existing approaches to assessing the quality of
raw GPS observations and presents a conceptual
architecture for the development of a real time quality
control system.
Key words: GPS, Quality, Real Time, Stochastic
Modelling, CORS
1 Introduction
The increasing usage of high accuracy real time GPS
positioning in a wide range of applications has resulted in
a proportional rise in the number of critical decisions
made on the basis of GPS positions. These decisions may
be critical from a safety-of-life, financial, or
environmental perspective. In making these decisions
GPS users must be capable of determining if the quality
of the position meets their requirements. Furthermore,
they must be confident that the indicators of position
quality that their decision is based on are realistic and
reliable in all conditions, at any time.
To obtain high accuracy real time GPS positions, mobile
users rely heavily on information from external sources,
be it from a local GPS basestation, a regional CORS
network, or a global correction service. Thus the quality
of the mobile user’s position is intrinsically linked to the
quality of the external data. Mobile users must be assured
that the information provided to them is of sufficient
quality to meet their requirements. It follows that
suppliers of GPS data products, e.g. CORS providers,
must be able to deliver quality information to the mobile
user in real time.
Research being undertaken as part of the positioning
program of the CRC-SI is attempting to address many of
the issues associated with the real time assessment of
CORS and mobile user positioning quality. The aim of
this research is to develop real time procedures for CORS
networks and mobile users that will improve the
reliability of the mobile user’s position and provide a
realistic assessment of the position quality. To
accomplish this an understanding of existing approaches
to quality control and the ability of these approaches to be
adapted to real time operation is required. This paper
presents a review of the current methods for assessing the
quality of CORS and mobile user data and positions, in
conjunction with an analysis of the potential of these
methods to operate in real time. Finally a conceptual
architecture for the real time quality control of CORS
networks and mobile users is proposed.
2 Quality control for CORS networks and mobile users
The positioning accuracy and quality achievable by GPS
is dependent on the raw data quality and the processing
algorithm chosen. The quality control of GPS
observations falls into two parallel categories – the
validation and description of the raw data quality,
independent of its future application, and the quality
control undertaken as part of the processing algorithm
(Brown et al., 2003). Given the wide range of processing
algorithms available the quality control processes
employed by these algorithms are not of particular concern
at this stage. Suffice it to say that the quality control and
end results of the chosen processing algorithm will be
dependent on the provision of high quality
observation data and an accurate stochastic model, both
of which are a direct outcome of quality control of raw
observation data.
The methods and procedures for the validation and
description of raw data quality are generally independent
of the processing algorithm chosen. The aspects of raw
data quality control considered here include data
completeness, the detection and repair of cycle slips and
receiver clock jumps, and the description of raw data
quality in the form of stochastic models.
2.1 Data completeness
The most basic form of raw data quality control consists
of statistics that describe the amount and completeness of
the data collected by a GPS receiver. The consequences
of ignoring data completeness in a quality control process
can be severe, leading to difficulty in detecting outliers
and cycle slips, increased time to resolve ambiguities, the
introduction of multiple ambiguities, a weaker solution
due to limited data availability, and in the worst case an
inability to compute a solution (Brown et al., 2003).
Three aspects of data completeness are generally
considered; data gaps - being epochs with incomplete or
no observations; missing epochs - whereby observations
are not recorded for a satellite that is visible; and the
availability of sufficient ephemeris information for a
satellite. Statistics on data completeness, when analysed
over extended time periods, can be useful for determining
problems with receiver hardware and software (Brown et
al., 2003), site-specific problems (Brown et al., 2003,
Jonkman and de Jong, 2000a), and abnormalities in the
satellite constellation or ephemeris information (Jonkman
and de Jong, 2000a). Current quality control software
packages such as GQC (Brown et al., 2003) and TEQC
(Estey and Meertens, 1999) operate in a post-processing
mode and are well suited to this sort of task.
From a real time quality control (RT-QC) perspective
data gaps, missing epochs, satellite constellation
problems and so forth need to be closely monitored and
appropriate action taken to notify users of any problems
that may impact on the quality of their position solution.
Additionally, data gaps and missing epochs are likely to
have a detrimental impact on the ability of any real time
algorithms for the detection of cycle slips, or the
generation of stochastic models, to carry out their
assigned tasks.
2.2 Systematic biases in observation data
High accuracy GPS positioning is dependent upon the
identification and removal of the main error sources that
impact upon the observation quality. In relation to the
quality control of raw data, receiver clock jumps, cycle
slips, and quasi-random (e.g. multipath, diffraction,
ionospheric scintillation etc.) effects are the main error
sources that can degrade observation quality. The impact
of quasi-random errors is not considered here; they are
dealt with briefly in the section describing stochastic
modelling. However, the influence of quasi-random
errors does hamper cycle slip detection, mainly due to the
fact that their influence on the phase observations is not
limited to an integer number of cycles (Kim and Langley,
2001). The treatment of true systematic errors in the RT-
QC context is discussed in the following sections.
2.2.1 Receiver clock jumps
GPS receivers align themselves with GPS time using a
variety of techniques. Some receivers constantly
synchronise their clock with GPS time (so called “Clock
Steering”) whilst others allow their clock to drift and
periodically introduce corrections of approximately 1
millisecond to keep the clock close to GPS time (Fig. 1).
Other receivers allow the clock to drift unchecked and
simply keep track of the bias and bias rate of change
(Rizos, 1999, Gurtner, 1999, Fraser, 2004).
Fig 1: Receiver Clock Jumps (receiver clock offset in milliseconds against time of day, 23 May 2002)
Of concern from a RT-QC perspective is the second
technique (illustrated in Fig. 1), whereby clock jumps are
introduced into the raw observations. These jumps
produce a systematic bias in the undifferenced code and
phase observations, as shown in the following equation:
Φ(t + Δt) = Φ(t) + Φ̇·Δt = Φ(t) + ρ̇·Δt − c·Δt   (1)

where Φ represents the carrier phase (or pseudorange)
observation and Φ̇ its rate of change with respect to time;
Δt represents the clock jump; ρ̇ is the satellite
dependent geometric range rate; and c is the speed of
light in a vacuum. The clock jumps themselves are quite
small (less than or equal to 1 millisecond) but they have
two distinct effects on the code and phase observables.
The term c·Δt represents a constant receiver dependent
effect on the geometric range whilst the term ρ̇·Δt
represents the contribution of the satellite dependent
geometric range rate at the time of the clock jump (Kim
and Langley, 2001). The first of these terms (c·Δt) is
removed during subsequent single or double difference
processing. The latter term (ρ̇·Δt) does not cancel during
differencing, as it is dependent on a particular
satellite-receiver combination.
Thus the term ρ̇·Δt introduces a systematic bias into the
geometric range rate. The size of the bias is dependent on
the particular geometric range rate. Assuming a
maximum possible rate of 900m/s, a one-millisecond
jump could potentially introduce 0.9m of error into the
geometric range. From a RT-QC perspective it is crucial
that these effects are estimated and removed in real time.
Without correcting for such an effect it may be difficult
to detect and repair cycle slips, estimate an accurate
stochastic model, and undertake subsequent quality
control (e.g. during the processing algorithm).
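The two terms of equation (1) can be checked numerically. The following worked example uses the 900 m/s worst-case range rate quoted above; the function names are illustrative only.

```python
C = 299_792_458.0            # speed of light in a vacuum (m/s)

def satellite_term(rho_dot, delta_t):
    """rho_dot * delta_t: satellite dependent bias (m), survives differencing."""
    return rho_dot * delta_t

def receiver_term(delta_t):
    """c * delta_t: receiver dependent term (m), removed by single or
    double differencing since it is common to all satellites."""
    return C * delta_t

# worst case from the text: 900 m/s geometric range rate, 1 ms jump
bias = satellite_term(900.0, 1e-3)     # 0.9 m of geometric range error
common = receiver_term(1e-3)           # ~300 km, cancels in SD/DD processing
```

The contrast between the two magnitudes shows why the small surviving term is the dangerous one: it is easily masked by the enormous common term until differencing removes the latter.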
2.2.2 Cycle slip detection and repair
Cycle slips are discontinuities of an integer number of
cycles in the carrier phase observations caused by a loss
of lock in the receiver’s carrier tracking loops. Hofmann-
Wellenhof et al. (1992) describe three potential causes for
cycle slips. Firstly, the most likely cause of cycle slips is
physical obstruction of the satellite signal by natural
or man-made features (e.g. buildings, trees, bridges etc.).
Secondly, low signal to noise ratios (SNR) due to
ionospheric conditions, multipath, rapid changes in
receiver position, or low satellite elevation can produce
cycle slips. Finally, failures in the receiver software or
malfunctioning satellite oscillators may cause cycle slips,
however such incidents are rare.
To take advantage of the superior measurement precision
of the phase observables, cycle slips must be removed
from the phase data before further processing can occur.
This process involves detecting the location of the cycle
slip (in time), determining the number of L1 and/or L2
cycles that comprise the slip, and then correcting all
phase observations of the affected satellite subsequent to
the time of the cycle slip (Kim and Langley, 2001,
Hofmann-Wellenhof et al., 1992).
The focus on Real Time Kinematic (RTK) positioning in
recent times has moved the detection and repair of cycle
slips, traditionally a post-processed activity, into the real-
time domain. RTK positioning is dependent on the
resolution of the integer ambiguities, a process greatly
aided by the presence of clean, cycle slip free data. The
push for instantaneous ambiguity resolution has led to
the development of real-time algorithms for the detection
and repair of cycle slips.
One such algorithm is the instantaneous cycle slip
correction technique proposed by Kim and Langley
(2001). This algorithm utilises the triple difference (TD)
observables of the carrier phases in conjunction with
Doppler and code observables. TD observations are
generally free of the majority of GPS biases, such as
receiver and satellite clock offsets, integer ambiguities,
atmospheric effects, multipath, and satellite orbits. Thus,
the size of the remaining biases and noise should be less
than a few centimetres, provided that the observation
interval is relatively short. Cycle slips would then be
evident in the TD observations as large spikes, several
orders of magnitude larger than the mean bias and noise.
These assumptions may not hold in all cases, for example
severe ionospheric disturbances, very long baselines, or
rapid variations in the receiver position may lead to the
triple difference biases and noise exceeding the L1 and
L2 wavelengths, without cycle slips being present. In
such situations the observation interval can be reduced to
a level such that the biases and noise exhibited by the
TDs are once again at the centimetre level and therefore,
useful in detecting and repairing cycle slips (Kim and
Langley, 2001).
Cycle slip candidates are obtained by examining the
mean and variance of the predicted TD residuals (being
the difference between the observed TDs and the
computed TDs). If dual frequency carrier phase
observations are available the number of candidates can
be reduced through the use of TDs formed from the
geometry free linear combination observations.
Following identification of the cycle slip candidates a
least squares estimation is carried out to determine the
two candidates (best and second best) that minimise the
least squares residuals. The statistical likelihood of these
two candidates is assessed and if they are considered
significantly different then the best candidate is accepted
and the slip is repaired. In a final step a reliability test on
the cleaned data is carried out to determine if further,
unspecified, errors remain in the observations.
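The candidate-screening idea can be sketched very simply: flag TD residuals that spike above the expected centimetre-level noise. The threshold and simulated values below are assumptions for illustration; the full Kim and Langley (2001) algorithm additionally uses Doppler and code observables, a least squares search over integer candidates, and a final reliability test.

```python
L1_WAVELENGTH = 0.1903   # metres, approximate L1 carrier wavelength

def td_slip_candidates(td_residuals, threshold=0.5 * L1_WAVELENGTH):
    """Indices of predicted TD residuals large enough to be slip candidates.

    Residuals are observed-minus-computed TDs in metres; anything above
    half an L1 cycle cannot be explained by centimetre-level TD noise.
    """
    return [i for i, r in enumerate(td_residuals) if abs(r) > threshold]

# simulated residuals: centimetre-level noise plus one two-cycle slip
res = [0.004, -0.007, 0.002, 2 * L1_WAVELENGTH + 0.003, -0.005]
cands = td_slip_candidates(res)
size = round(res[cands[0]] / L1_WAVELENGTH)   # estimated slip in cycles
```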
Another example of an algorithm capable of real-time
cycle slip detection and repair has been proposed by de
Jong (1998) and was implemented in the Dutch
Permanent GPS Network and during the International
GLONASS Experiment (Jonkman and de Jong, 2000b).
The algorithm is based on the use of a Kalman filter in
conjunction with the recursive Detection, Identification
and Adaptation (DIA) procedure developed by Teunissen
(1990). The DIA procedure consists of an overall model
test to detect any unspecified errors in the observation or
dynamic models (Detection). If an unspecified error is
encountered a number of alternative models,
incorporating different bias parameters, are tested. The
226 Journal of Global Positioning Systems
model producing the highest test statistic is considered
the most likely to represent the “correct” observation
model (Identification). Finally the original observation
model is modified to reflect the identified bias
(Adaptation).
The DIA algorithm was developed to be independent of
the positioning application the data was intended for.
Thus no external information such as receiver-satellite
geometry, clock offsets or atmospherics should be
required. This is accomplished through the use of the
geometry free linear combination for both the observation
and dynamic models (Jonkman and de Jong, 2000b). Of
particular note is that this method is applied on a satellite-
by-satellite basis for a single receiver, thus no observation
differencing is required. This is advantageous in the sense
that data from other receivers is not required to detect
cycle slips. Further studies by de Jong (1998) showed that
the DIA geometry free approach, on a satellite-by-
satellite basis, was theoretically capable of detecting slips
of a single cycle in magnitude, provided the observation
interval is relatively short.
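The Detection-Identification-Adaptation cycle can be illustrated on a scalar series. This toy version is a sketch only: the operational procedure (Teunissen, 1990) is multivariate and runs recursively inside a Kalman filter, whereas here a known innovation standard deviation and a discrete set of bias hypotheses (integer cycle slips) are assumed.

```python
L1_WAVELENGTH = 0.1903   # metres, approximate L1 carrier wavelength

def dia_step(predicted, observed, sigma, hypotheses, k=4.0):
    """One Detection-Identification-Adaptation step on a scalar observable.

    Returns (adapted_observation, identified_bias).
    """
    w = (observed - predicted) / sigma      # normalised innovation
    if abs(w) <= k:                         # Detection: model accepted
        return observed, 0.0
    # Identification: the bias hypothesis that best explains the innovation
    best = min(hypotheses, key=lambda b: abs(observed - b - predicted))
    # Adaptation: correct the observation for the identified bias
    return observed - best, best

# candidate biases: slips of -5..+5 cycles (excluding zero)
slips = [n * L1_WAVELENGTH for n in range(-5, 6) if n != 0]
obs, bias = dia_step(predicted=1.000, observed=1.3816,
                     sigma=0.004, hypotheses=slips)
```

Here the innovation is roughly 95 sigma, so Detection fires, the two-cycle hypothesis wins Identification, and Adaptation returns the corrected observation.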
2.3 Stochastic Modelling
GPS data processing involves the determination of
various unknown parameters (e.g. station coordinates,
tropospheric estimates, integer ambiguities etc.) from a
set of observations. Generally these observations consist
of different types of measurements on different
frequencies (e.g. code and phase measurements on L1
and L2) and there are usually large numbers of them
when compared to the unknown parameters. In the
positioning community the accepted methodology for
determining the parameters is least squares (LSQ)
estimation. LSQ estimation relates the observed
quantities to the unknown parameters through a set of
mathematical equations known as a functional model.
The noise or precision of the observed quantities is
represented using a stochastic model.
A great deal of work has been put into the development
of functional models for GPS data processing. The
stochastic model has received less attention from
researchers until relatively recently. As a result simple
stochastic models are frequently used in LSQ based GPS
data processing algorithms (Tiberius et al., 1999).
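The respective roles of the functional and stochastic models are easiest to see in the simplest LSQ case, a single parameter observed several times: the functional model is y_i = x + e_i and the stochastic model supplies the weights w_i = 1/sigma_i^2. The numbers below are illustrative only.

```python
def weighted_mean(obs, sigmas):
    """LSQ estimate of a single parameter x from observations y_i = x + e_i.

    The stochastic model (one sigma per observation) becomes the weight
    matrix; here it is diagonal, i.e. uncorrelated observations.
    Returns (x_hat, standard deviation of x_hat).
    """
    w = [1.0 / s ** 2 for s in sigmas]
    x_hat = sum(wi * yi for wi, yi in zip(w, obs)) / sum(w)
    return x_hat, (1.0 / sum(w)) ** 0.5

# same observations, but the stochastic model downweights the last one
obs = [10.00, 10.02, 10.40]
sigmas = [0.01, 0.01, 0.10]      # a priori quality of each observation
x, sx = weighted_mean(obs, sigmas)
```

With a naive equal-weight model the estimate would be pulled toward the poor third observation, and its formal precision would be misreported; this is the mechanism behind the overly optimistic quality estimates described above.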
Stochastic models are used in three phases of GPS
processing, quality control of the raw observations,
ambiguity resolution, and computation of the unknown
parameters (Kim and Langley, 2001). The statistical
quantities used in cycle slip detection and repair
algorithms are derived from the chosen stochastic model.
Thus the use of incorrect or oversimplified stochastic
models may result in faulty slip detection, thereby
introducing biases into the ambiguity resolution and
parameter estimation processes. Similarly, the
performance of instantaneous, real-time ambiguity
resolution strategies is greatly improved when using
accurate stochastic models. Accurate stochastic models
reduce the ambiguity search space and ensure that the
fixed ambiguities are correct. An incorrect stochastic
model could potentially result in faulty ambiguity
resolution, with unsatisfactory consequences for the
accuracy of the positioning application. Finally, the
estimated quality of the unknown parameters (obtained
from the LSQ estimation) is implicitly dependent on the
a priori stochastic model. An incorrect a priori model
may lead to overly optimistic estimates of the derived
position quality, leading users to believe they have met
quality requirements when, in fact, they have not (Tiberius
et al., 1999).
A number of methods have been proposed to provide
more realistic stochastic models for the various GPS
observables. Four approaches will be considered here -
the elevation dependent method (Euler and Goad, 1991),
the SNR or C/No approach (Brunner et al., 1999, Richter
and Euler, 2001), a rigorous least squares estimation
approach, and a method based on time differencing (Kim
and Langley, 2001).
2.3.1 Elevation dependent modelling
The dependence of observation noise on satellite
elevation has been known for some time and can mainly
be attributed to the receiver antenna’s gain pattern, with
additional contributions from atmospheric attenuation and
multipath (Kim and Langley, 2001, Tiberius et al., 1999).
Modelling the observation noise with respect to satellite
elevation can be carried out using functions tailored to
individual receivers (Euler and Goad, 1991) or using
general functions that can be applied regardless of
receiver type (Hugentobler et al., 2004). One drawback of
the elevation dependent approach is that it only considers
the variance of the individual observations. Cross
correlations between observation types (e.g. C1 and P2)
are neglected, as are spatial and temporal correlations.
Thus a fully populated variance covariance matrix is not
available when using this method.
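A weighting function of the common sigma(E) = a + b·exp(−E/E0) form (Euler and Goad, 1991) can be sketched as follows. The coefficient values are illustrative assumptions; in practice they are tailored to a receiver type or replaced by a generic function such as one based on 1/sin(E).

```python
import math

def sigma_elev(elev_deg, a=0.003, b=0.005, e0=10.0):
    """A priori phase standard deviation (m) at elevation elev_deg.

    a  - noise floor at high elevation
    b  - additional noise at the horizon
    e0 - decay constant of the elevation dependence (degrees)
    """
    return a + b * math.exp(-elev_deg / e0)

def weight(elev_deg):
    """Diagonal weight for the stochastic model. Note only the variance
    is modelled - no cross, spatial or temporal correlation."""
    return 1.0 / sigma_elev(elev_deg) ** 2

low, high = sigma_elev(10.0), sigma_elev(90.0)   # low vs high satellite
```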
2.3.2 C/N0 Based modelling
GPS signal power is expressed in the form of carrier-to-
noise power density ratios (C/N0), also known as signal to
noise ratios (SNR). The C/N0 measurements generated by
GPS receivers are an indication of how well the receiver
hardware is tracking the incoming GPS signals. As such
they provide a direct indication of the quality of the phase
observations (Richter and Euler, 2001, Kim and Langley,
2001, Brunner et al., 1999). The C/N0 approach to
stochastic modelling seeks to take advantage of this
information to provide a more realistic assessment of the
observation noise.
C/N0 values are highly correlated with satellite elevation,
due in the most part to the antenna gain pattern, but also
influenced by atmospheric refraction and multipath.
Initial work focussed on this link to produce stochastic
models that were, in effect, elevation dependent
(Hartinger and Brunner, 1999). Further work by Brunner
et al. (1999) extended the simple C/N0 models to account
for the fact that C/N0 is also influenced by signal
diffraction. C/N0 values observed in “clean”
environments can be treated as a “known” template for
C/N0 values observed in other environments. Deviations
of the observed values from the template are considered
to be the result of diffraction and down weighting (or
removal) of the observations occurs as a result. The
practical difficulties of providing templates for the
various receiver-antenna combinations have been
discussed in Richter and Euler (2001).
Problems with this method include the dependence on
C/N0 values, which may not be available from all
receivers, and the fact that cross, spatial, and temporal
correlations are not considered.
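The C/N0 approach can be sketched in the SIGMA-ε form, sigma^2 = C·10^(−(C/N0)/10) (Hartinger and Brunner, 1999), together with the template comparison used by the SIGMA-Δ style diffraction extension. The constant C, the template value, and the rejection threshold are all illustrative assumptions; in practice they are calibrated per receiver-antenna combination.

```python
def sigma2_cn0(cn0_dbhz, C=0.244):
    """Phase variance (m^2) from a C/N0 measurement in dB-Hz.

    C is an empirically determined, receiver dependent constant
    (the value here is purely illustrative).
    """
    return C * 10.0 ** (-cn0_dbhz / 10.0)

def diffraction_check(cn0_obs, cn0_template, reject_db=10.0):
    """Template screening: True if the observed C/N0 falls so far below
    the clean-sky template that diffraction is suspected."""
    return (cn0_template - cn0_obs) > reject_db

s_strong = sigma2_cn0(50.0)    # high C/N0 -> small variance
s_weak = sigma2_cn0(30.0)      # 20 dB lower C/N0 -> 100x larger variance
flagged = diffraction_check(cn0_obs=33.0, cn0_template=48.0)
```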
2.3.3 Least squares estimation
The least squares estimation approach offers a rigorous
solution to the problem of estimating a priori stochastic
models. Results in Barnes et al. (1998) indicate that using
the optimal stochastic model, estimated from the LSQ
residuals, significantly affects positioning results, when
compared to alternative modelling approaches (e.g. C/N0
approach). The basis of this approach is the direct
estimation of every element in the a priori variance
covariance matrix from the a posteriori observation
residuals. Due to the recursive nature of this process it
can be incorporated into a Kalman filter or sequential
least squares adjustment (Kim and Langley, 2001).
One technique to carry out the estimation of the variance
covariance elements is Minimum Norm Quadratic
Unbiased Estimation (MINQUE) developed by Rao
(1971) and utilised for static baseline processing by
Wang (1998). Unfortunately, MINQUE and similar
techniques are computationally intensive and not suited to
real-time processing. The optimality of the least squares
estimation approach is not guaranteed, as the estimation
technique may make assumptions about the correlations
that do not hold in all cases (e.g. temporal correlations
may be ignored). Furthermore, a certain level of
observation redundancy is required to produce reasonable
estimates, a situation that may not exist in all positioning
scenarios (Kim and Langley, 2001).
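The simplest member of this residual-based family is the a posteriori variance factor, sigma0^2 = v'Wv / f, with v the LSQ residuals and f the redundancy; MINQUE generalises the idea to estimate every (co)variance component, at far greater computational cost. The residual values below are illustrative.

```python
def variance_factor(residuals, sigmas, n_params):
    """A posteriori variance of unit weight from weighted residuals.

    residuals - LSQ residuals v (m); sigmas - a priori sigma of each
    observation (diagonal stochastic model); n_params - number of
    estimated parameters, so redundancy f = n_obs - n_params.
    """
    f = len(residuals) - n_params
    vWv = sum((v / s) ** 2 for v, s in zip(residuals, sigmas))
    return vWv / f

# residuals of a 1-parameter fit to 5 observations, a priori sigma 1 cm
res = [0.012, -0.009, 0.015, -0.011, 0.008]
s0_sq = variance_factor(res, [0.01] * 5, n_params=1)
```

A variance factor well above 1 indicates the a priori stochastic model is optimistic relative to the actual residuals, which is exactly the failure mode the section warns about.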
2.3.4 Differencing in the time domain
Differencing in the time domain was proposed by Kim
and Langley (2001) to overcome the three main problems
in the existing modelling approaches - the lack of a fully
populated variance covariance matrix, no temporal
correlations, and no observation redundancy in long
baseline solutions. This method takes the view that high
order differencing in time (differencing TDs to produce
quadruple differences (QDs), then differencing QDs to
produce quintuple differences (dQDs)) will remove all
systematic biases and correlations, leaving only white
noise.
The assumption that systematic biases and correlations
are removed is justified on the basis that the differencing
process is in effect the application of consecutive
subtractive filters. These filters remove biases (e.g.
receiver and satellite clock offsets), damp low frequency
effects (e.g. atmospherics, multipath), and amplify high
frequency effects (e.g. noise, ionospheric scintillation).
For short baselines the effects of the correlated biases are
assumed to be ignorable, thereby implying the temporal
correlations are also ignorable. However, temporal
correlations may still exist, particularly in high multipath
environments, thus high order differencing is still
required. For long baselines the correlated biases are not
ignorable and consequently time correlations will exist
(Kim and Langley, 2001). Assuming the dQDs are free of
systematic biases and correlations they represent white
noise at the dQD level. The variance covariance matrix of
the dQDs can then be formed from a set of arbitrary dQD
samples. Using the mathematical relationship between the
various differencing levels, variance covariance matrices
for any difference (i.e. zero, single, double) can be
derived.
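The differencing cascade and the back-propagation step can be sketched on simulated data. Under the white-noise assumption dQD = TD(t+2) − 2·TD(t+1) + TD(t), so var(dQD) = (1² + 2² + 1²)·var(TD) = 6·var(TD); simulated white noise stands in for a real TD series here, and the point is only this variance relationship.

```python
import random
from statistics import pvariance

def difference(series):
    """One level of differencing in the time domain."""
    return [b - a for a, b in zip(series, series[1:])]

random.seed(1)
sigma_td = 0.01                                  # true TD noise sigma (m)
td = [random.gauss(0.0, sigma_td) for _ in range(20000)]

qd = difference(td)       # quadruple differences (QDs)
dqd = difference(qd)      # quintuple differences (dQDs)

var_dqd = pvariance(dqd)
var_td_recovered = var_dqd / 6.0                 # back-propagation to TD level
```

Real data would require the full variance covariance relationship between difference levels rather than this scalar factor, but the sketch shows why the choice of time interval matters: the assumption only holds while the dQDs are genuinely white.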
Of concern here is the generation of the dQD variance
covariance matrix. One solution is the estimation of
covariance functions. However, this is a computationally
intensive process not really suited to real-time use. If a
simpler technique is utilised one must question its
effectiveness in correctly modelling the cross, spatial, and
temporal correlations, particularly when extrapolating
back from the dQDs. Furthermore, this method is
Fig. 2: Proposed RT-QC Architecture (for both the CORS network and the mobile user, raw data feeds an RT-QC process producing quality models and a separate position solution process; on each side the solution quality is compared against the quality models).
dependent on the selection of an appropriate time interval
for the differencing. The assumption that the dQD
observable represents white noise requires the high
frequency biases and correlations (which are amplified by
the use of subtractive filters) to be insignificant. This may
not always be the case (e.g. in unstable ionospheric
conditions) and it may be necessary to adjust the time
interval in response to changes in the behaviour of the
high frequency biases.
3 RT-QC Architecture
The aim of the research being undertaken is the
development of real time procedures for CORS networks
and mobile users that will improve the reliability of the
mobile user’s position and provide a realistic assessment
of the position quality. Through an examination of the
existing approaches to assessing raw data quality an
understanding of the various aspects and limitations of
raw data quality assessment has been developed. To
proceed further, a conceptual architecture for a proposed
RT-QC system has been developed and is shown in Fig. 2.
The RT-QC architecture is built around the idea that the
assessment of raw data quality (RT-QC box) should be
carried out independent of the processing algorithm
(Position Solution box). However, in the initial stages of
the project information from the position solution will be
considered during the quality control process. The red
boxes indicate the current approach to assessing the
quality of CORS and mobile user positions and raw data.
As Fig. 2 shows, this research is attempting to develop
procedures whereby quality models of the CORS network
data can be transmitted to a mobile user, thereby
improving the quality of the mobile user’s position and
the estimates of position quality.
4 Conclusions
The number of critical decisions made on the basis of
GPS positions has increased proportionally with the use
of GPS within the community. When faced with a
decision that may have severe consequences GPS users
must be confident that their position has been determined
to a sufficient level of quality to justify the decision and
that the indicators of quality their decisions are based on are
realistic and reliable. The quality of a GPS position is a
direct result of the raw data quality and the processing
algorithm chosen. This paper has presented a review of
some existing methods for the assessment and reporting
of raw GPS data quality and the potential of these
methods to be adapted for use in a real time environment.
A conceptual architecture of a RT-QC system has been
presented as a way forward for future research in this
area.
Acknowledgements
This work has been undertaken as part of Project 1.2 in
the CRC-SI. The authors would like to acknowledge the
support of the industry partners in this project.
References
Barnes, J. B., N. Ackroyd, et al. (1998). Stochastic modelling
for very high precision real-time kinematic GPS in an
engineering environment. FIG XXI International
Congress. Commission 6, Engineering Surveys, Brighton,
UK.
Brown, N., A. Kealy, et al. (2003). Quality control concepts for
regional GPS reference station networks. SatNav 2003,
The 6th International Symposium on Satellite Navigation
Technology, Melbourne, Australia; 1-13.
Brunner, F. K., H. Hartinger, et al. (1999). GPS signal
diffraction modelling: the stochastic SIGMA-D model.
Journal of Geodesy 73: 259-267.
de Jong, C. D. (1998). A unified approach to real-time
integrity monitoring of single- and dual-frequency GPS
and GLONASS observations. Acta Geodaetica et
Geophysica Hungarica 33(2-4): 247-257.
Estey, L. H. and C. M. Meertens (1999). TEQC: The Multi-Purpose
Toolkit for GPS/GLONASS Data. GPS Solutions 3(1): 42-49.
Euler, H. J. and C. C. Goad (1991). On optimal filtering of
GPS dual frequency observations without using orbit
information. Bulletin Geodesique 65(2): 130-143.
Fraser, R. (2004). SiRF Output Message ID 28. Personal
Communication.
Gurtner, W. (1999). Ashtech clock jumps in RINEX files.
http://listserv.unb.ca/bin/wa?A2=ind9909&L=canspace&T
=0&F=&S=&P=4271 .
Accessed 3/11/04
Hartinger, H. and F. K. Brunner (1999). Variances of GPS
phase observations: The SIGMA-E Model. GPS Solutions
2(4): 35-43.
Hofmann-Wellenhof, B., H. Lichtenegger, et al. (1992). GPS
Theory and Practice. Springer-Verlag Wien New York.
Hugentobler, U., R. Dach, et al. (2004). Bernese GPS Software,
Version 5.0 DRAFT. Bern.
Jonkman, N. F. and C. D. de Jong (2000a). Integrity Monitoring
of IGEX-98 Data. Part I: Availability. GPS Solutions
3(4): 10-23.
Jonkman, N. F. and C. D. de Jong (2000b). Integrity Monitoring
of IGEX-98 Data. Part II: Cycle Slip and Outlier
Detection. GPS Solutions 3(4): 24-34.
Kim, D. and R. B. Langley (2001). Quality Control Techniques
and Issues in GPS Applications: Stochastic Modelling and
Reliability Test. International Symposium on GPS/GNSS,
Jeju Island, Korea.
Land Victoria (2004). Victorian GPSnet Website.
http://www.land.vic.gov.au .
Accessed 11/11/04
Rao, C. R. (1971). Estimation of Variance and Covariance
components - MINQUE Theory. Journal of Multivariate
Analysis 1: 257-275.
Richter, B. and H. J. Euler (2001). Study of improved
observation modelling for surveying type applications in
multipath environment.
http://www.leicaatl.com/support/gps/Technical_papers/727
422en.pdf .
Accessed 8/11/04
Rizos, C. (1999). Simultaneity considerations for GPS phase
reductions.
http://www.gmat.unsw.edu.au/snap/gps/gps_survey/chap6/
638.htm .
Accessed 2/9/04
Teunissen, P. J. G. (1990). An integrity and quality control
procedure for use in multi sensor integration. ION GPS
90, Colorado Springs; 513-522.
Tiberius, C. C. J. M., N. F. Jonkman, et al. (1999). The
Stochastics of GPS Observables. GPS World 10(2): 49-54.
Wang, J. (1998). Stochastic modeling for static GPS baseline
data processing. Journal of Surveying Engineering 124(4):
171-181.