Least squares and linear estimators
The measurement errors consist of two parts: systematic error and random error. For many techniques, the systematic errors have been largely eliminated or suppressed through careful calibration, investigation, and modeling. The measurement error is therefore considered to be primarily random error and to follow a Gaussian distribution. In most cases of data analysis we have a good a priori model for the parameters, so only the adjustments to the a priori model are estimated. Since the adjustments are usually several orders of magnitude smaller than the parameters themselves, linear algebra can be used to estimate them. For errors that follow a Gaussian distribution, the optimal solution is the least-squares estimate.
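As a concrete illustration (a minimal sketch, not from the source; the exponential model and a priori values are assumptions chosen for demonstration), the following NumPy code linearizes a simple nonlinear model around a priori parameter values, so that only the small adjustments are estimated by linear least squares:

```python
import numpy as np

# Assumed example model: y = a * exp(b * t), linearized around a priori (a0, b0).

def model(t, a, b):
    return a * np.exp(b * t)

def design_matrix(t, a, b):
    # Partial derivatives of the model with respect to the parameters,
    # evaluated at the a priori values: the columns of the design matrix A.
    dy_da = np.exp(b * t)
    dy_db = a * t * np.exp(b * t)
    return np.column_stack((dy_da, dy_db))

t = np.linspace(0.0, 1.0, 20)
a_true, b_true = 2.0, 0.50
rng = np.random.default_rng(0)
y_obs = model(t, a_true, b_true) + rng.normal(scale=0.01, size=t.size)

a0, b0 = 1.9, 0.45                  # a priori parameter values
y1 = y_obs - model(t, a0, b0)       # prefit residuals: the observables Y1
A1 = design_matrix(t, a0, b0)       # design matrix at the a priori point

# The adjustments (da, db) are small, so the linear approximation is adequate.
dx, *_ = np.linalg.lstsq(A1, y1, rcond=None)
a_hat, b_hat = a0 + dx[0], b0 + dx[1]
```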
Assume the linearized observation equation is

$$Y_1 = A_1 X + e_1 \qquad (1)$$

where $Y_1$ is the vector of observables, $X$ is the vector of estimated parameters, $A_1$ is the design matrix, and $e_1$ is the measurement error with variance $\sigma_1^2$. The normal equations are

$$(A_1^T W_1 A_1)\, X = A_1^T W_1 Y_1 \qquad (2)$$

where $W_1 = \sigma_1^{-2}$. The least-squares solution and its covariance matrix are

$$X = (A_1^T W_1 A_1)^{-1} A_1^T W_1 Y_1 \qquad (3)$$

$$C_X = (A_1^T W_1 A_1)^{-1} \qquad (4)$$

If there is another set of observations

$$Y_2 = A_2 X + e_2 \qquad (5)$$

where the variance of $e_2$ is $\sigma_2^2$, the combined normal equations are

$$(A_1^T W_1 A_1 + A_2^T W_2 A_2)\, X = A_1^T W_1 Y_1 + A_2^T W_2 Y_2 \qquad (6)$$

The combined solution and its covariance are

$$X = (A_1^T W_1 A_1 + A_2^T W_2 A_2)^{-1} \left( A_1^T W_1 Y_1 + A_2^T W_2 Y_2 \right) \qquad (7)$$

$$C_X = (A_1^T W_1 A_1 + A_2^T W_2 A_2)^{-1} \qquad (8)$$

General constraints can be expressed in the form of equation (5); in that case the combined solution (7) is the constrained solution. In geodetic data analysis, most constraints have $Y_2 = 0$. Parameter-space constraints (constraints on $X$) are widely used in geodetic data analysis; data-space constraints and inequality constraints are less common and are not discussed here.
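The following NumPy sketch traces equations (2) through (8) on simulated data (the matrices, noise levels, and constraint values are assumptions chosen for illustration): it forms the weighted normal equations, solves for the parameters and their covariance, and then adds a second observation set acting as a parameter-space constraint with $Y_2 = 0$.

```python
import numpy as np

# Assumed simulated data; W1 and W2 are diagonal weight matrices (inverse
# variances), and the constraint is applied as a pseudo-observation with Y2 = 0.

rng = np.random.default_rng(1)
X_true = np.array([1.0, -0.5, 0.25])

# First observation set: Y1 = A1 X + e1, with variance sigma1^2 (eq. 1)
A1 = rng.normal(size=(30, 3))
sigma1 = 0.02
Y1 = A1 @ X_true + rng.normal(scale=sigma1, size=30)
W1 = np.eye(30) / sigma1**2

# Normal equations (2) and the least-squares solution and covariance (3)-(4)
N1 = A1.T @ W1 @ A1
b1 = A1.T @ W1 @ Y1
X_hat = np.linalg.solve(N1, b1)
C_X = np.linalg.inv(N1)

# Second observation set (5): here a loose constraint on the third parameter,
# i.e. A2 picks out that parameter and Y2 = 0.
A2 = np.array([[0.0, 0.0, 1.0]])
sigma2 = 0.1                      # constraint standard deviation (assumed)
Y2 = np.zeros(1)
W2 = np.eye(1) / sigma2**2

# Combined (constrained) normal equations (6), solution (7), covariance (8)
N = N1 + A2.T @ W2 @ A2
b = b1 + A2.T @ W2 @ Y2
X_con = np.linalg.solve(N, b)
C_con = np.linalg.inv(N)
```

Note that tightening the constraint (smaller $\sigma_2$) pulls the constrained estimate of the third parameter toward zero and shrinks its formal uncertainty, which is the intended behavior of a parameter-space constraint.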