
Proof of Eq. (8-21) is left to the reader [Hint: $X_i - \bar{X} = (X_i - \mu) - (\bar{X} - \mu)$].

Taking expectations of Eq. (8-21) in accordance with Eq. (8-16), we get

$$E\left[\sum_{i=1}^{n}(X_i - \bar{X})^2\right] = \sum_{i=1}^{n}E[X_i^2] - nE[\bar{X}^2]. \qquad (8-22)$$

Noting from Eq. (5-29) that $E[X_i^2] = \sigma^2 + \mu^2$, and from Eqs. (8-19), (8-20), and (5-29) that $E[\bar{X}^2] = \sigma^2/n + \mu^2$, we get

$$E\left[\sum_{i=1}^{n}(X_i - \bar{X})^2\right] = n(\sigma^2 + \mu^2) - n\left(\frac{\sigma^2}{n} + \mu^2\right) = (n-1)\sigma^2. \qquad (8-23)$$

It therefore follows that

$$E[S^2] = E\left[\frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X})^2\right] = \sigma^2, \qquad (8-24)$$

which shows that $S^2$ is an unbiased estimator of $\sigma^2$. This is the reason why the term $n-1$ is used for the sample variance. If the sum of the squares of the deviations were to be divided by $n$ rather than by $n-1$, the resulting estimator would be biased.

If the population from which the sample is drawn is normally distributed, it can be shown that $S^2$ is also a consistent estimator of $\sigma^2$ and that the distribution of $S^2$ is related to the chi-square distribution; specifically,

$$\frac{(n-1)S^2}{\sigma^2} \qquad (8-25)$$

has a chi-square distribution with $n-1$ degrees of freedom.


8.7. CONFIDENCE INTERVAL FOR THE MEAN
If a random sample of size n is drawn from a normally distributed population with mean μ and variance σ², the sample mean $\bar{X}$ has a normal distribution with mean μ and variance σ²/n. Thus, according to Eq. (5-23), the quantity

$$Z = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \qquad (8-26)$$

has a standard normal distribution with zero mean and unit variance, and it follows that

$$P[-z \le Z \le z] = 2\Phi(z) - 1, \qquad (8-27)$$

where Φ(z) is the value of the standard normal distribution function, obtainable from Table I of Appendix B.
Rearranging the inequality inside the brackets of Eq. (8-27), we get

$$P\left[\bar{X} - \frac{z\sigma}{\sqrt{n}} \le \mu \le \bar{X} + \frac{z\sigma}{\sqrt{n}}\right] = 2\Phi(z) - 1, \qquad (8-28)$$

which reads as follows: The probability that μ lies between $\bar{X} - z\sigma/\sqrt{n}$ and $\bar{X} + z\sigma/\sqrt{n}$ is $2\Phi(z) - 1$.

When a specific numerical value $\bar{x}$ is provided for $\bar{X}$, the foregoing probability statement becomes a confidence statement. The values $\bar{x} - z\sigma/\sqrt{n}$ and $\bar{x} + z\sigma/\sqrt{n}$ are known as confidence limits, the interval between them is known as a confidence interval, and $2\Phi(z) - 1$ is known as the degree of confidence, or confidence level, often stated as a percentage. The construction of a confidence interval for a particular distribution parameter, such as μ, is known as interval estimation.
EXAMPLE 8-5
In Example 8-3, the sample mean of 20 independent measurements of a distance was
calculated to be 537.615 m. If the standard deviation of each measurement (i.e., the
standard deviation of the population) is known to be 0.033 m, construct a 0.95 (95%)
confidence interval for the population mean, μ.
Solution
For $2\Phi(z) - 1 = 0.95$, $\Phi(z) = 1.95/2 = 0.975$. From Table I of Appendix B, z = 1.96. Therefore, the confidence limits are

$$\bar{x} - \frac{z\sigma}{\sqrt{n}} = 537.615 - \frac{1.96(0.033)}{\sqrt{20}} = 537.601\ \text{m}$$

and

$$\bar{x} + \frac{z\sigma}{\sqrt{n}} = 537.615 + \frac{1.96(0.033)}{\sqrt{20}} = 537.629\ \text{m}.$$

Thus, we can say with 95% confidence that μ lies in the interval 537.601 m to 537.629 m.
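As a quick numerical check (not part of the original text), the following Python sketch uses SciPy's standard normal quantile to reproduce the confidence limits of Example 8-5; the variable names are illustrative only.

```python
from scipy.stats import norm

# Sample mean, known population sigma, and sample size from Example 8-5
x_bar, sigma, n = 537.615, 0.033, 20

conf = 0.95
z = norm.ppf((1 + conf) / 2)          # Phi(z) = 0.975  ->  z = 1.96

half_width = z * sigma / n ** 0.5     # z * sigma / sqrt(n)
print(f"{x_bar - half_width:.3f} m to {x_bar + half_width:.3f} m")
# -> 537.601 m to 537.629 m, matching Example 8-5
```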
In Example 8-5 the standard deviation σ of the population was known. More often, however, σ is unknown and must be estimated, usually by the sample standard deviation S. Thus, instead of using Eq. (8-26), we must use

$$T = \frac{\bar{X} - \mu}{S/\sqrt{n}}. \qquad (8-29)$$

It is easily shown from Eqs. (8-7), (8-25), (8-26), and (8-29) that T has a t distribution with n − 1 degrees of freedom. Thus, instead of Eq. (8-28), we have

$$P\left[\bar{X} - \frac{tS}{\sqrt{n}} \le \mu \le \bar{X} + \frac{tS}{\sqrt{n}}\right] = 2F(t) - 1. \qquad (8-30)$$

When specific numerical values $\bar{x}$ and s are provided for $\bar{X}$ and S, we obtain a confidence interval with confidence limits $\bar{x} - ts/\sqrt{n}$ and $\bar{x} + ts/\sqrt{n}$, and degree of confidence $2F(t) - 1$.

EXAMPLE 8-6
With reference to Example 8-3, in which the sample mean of 20 independent
measurements of a distance is calculated to be 537.615 m, and to Example 8-4 in which
the sample standard deviation of the same 20 measurements is calculated to be 0.035 m,
construct a 0.95 (95%) confidence interval for the population mean, μ.
Solution
For $2F(t) - 1 = 0.95$, $F(t) = 1.95/2 = 0.975$. Degrees of freedom = n − 1 = 20 − 1 = 19. From Table III of Appendix B, $t = t_{0.975,19} = 2.09$. Therefore the confidence limits are

$$\bar{x} - \frac{ts}{\sqrt{n}} = 537.615 - \frac{(2.09)(0.035)}{\sqrt{20}} = 537.599\ \text{m}$$

and

$$\bar{x} + \frac{ts}{\sqrt{n}} = 537.615 + \frac{(2.09)(0.035)}{\sqrt{20}} = 537.631\ \text{m}.$$

Thus, we can say with 95% confidence that μ lies in the interval 537.599 m to 537.631 m.
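The t-based interval of Example 8-6 can be checked the same way; this hedged sketch assumes SciPy is available and simply swaps the normal quantile for the t quantile with n − 1 degrees of freedom.

```python
from scipy.stats import t

# Sample mean and sample standard deviation from Examples 8-3 and 8-4
x_bar, s, n = 537.615, 0.035, 20

conf = 0.95
t_val = t.ppf((1 + conf) / 2, df=n - 1)   # t_{0.975,19} = 2.09

half_width = t_val * s / n ** 0.5         # t * s / sqrt(n)
print(f"{x_bar - half_width:.3f} m to {x_bar + half_width:.3f} m")
# -> 537.599 m to 537.631 m, matching Example 8-6
```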
In establishing a confidence interval for the mean of a distribution it has been assumed that
the random sample is drawn from a normal distribution. If the population distribution is
not normal, but the sample size is large, X will have a distribution that is approximately
normal, and Eqs. (8-28) and (8-30) are still valid for all practical purposes.
Finally, as the sample size increases, we see that the values of t approach the
corresponding values of z. Indeed, for a sample size of 30 or larger, the t

distribution can be approximated very well by the standard normal distribution.
8.8. CONFIDENCE INTERVAL FOR THE VARIANCE
When a random sample of size n is drawn from a normal population, the relationship given by Eq. (8-25), that $(n-1)S^2/\sigma^2$ has a chi-square distribution with n − 1 degrees of freedom, can be used to construct a confidence interval for the population variance, σ². Thus,

$$P\left[\chi^2_{a,n-1} \le \frac{(n-1)S^2}{\sigma^2} \le \chi^2_{b,n-1}\right] = b - a, \qquad (8-31)$$

where $\chi^2_{a,n-1}$ and $\chi^2_{b,n-1}$ are the $a$th and $b$th percentiles, respectively, of the chi-square distribution with n − 1 degrees of freedom.

From Eq. (8-31) it follows that

$$P\left[\frac{(n-1)S^2}{\chi^2_{b,n-1}} \le \sigma^2 \le \frac{(n-1)S^2}{\chi^2_{a,n-1}}\right] = b - a, \qquad (8-32)$$

and when a specific numerical value $s^2$ is provided for $S^2$, we obtain a confidence interval with limits

$$\frac{(n-1)s^2}{\chi^2_{b,n-1}} \quad\text{and}\quad \frac{(n-1)s^2}{\chi^2_{a,n-1}}$$

and degree of confidence $b - a$.

In constructing an appropriate confidence interval for σ², it is customary to make the two percentiles complementary, i.e., $a + b = 1$.

If a confidence interval for the standard deviation σ is desired, positive square roots of the confidence limits for σ² are taken.

EXAMPLE 8-7
With reference once more to Example 8-4, in which a sample variance of 0.00121 m² is calculated from 20 independent measurements of a distance, construct a 0.95 confidence interval for σ² and the corresponding confidence interval for σ.

Solution
For $b - a = 0.95$ and $a + b = 1$ we get $a = 0.025$ and $b = 0.975$. Degrees of freedom = n − 1 = 19. From Table II of Appendix B, $\chi^2_{0.975,19} = 32.9$ and $\chi^2_{0.025,19} = 8.91$. Therefore the confidence limits are

$$\frac{(n-1)s^2}{\chi^2_{0.975,19}} = \frac{(19)(0.00121)}{32.9} = 0.00070\ \text{m}^2$$

and

$$\frac{(n-1)s^2}{\chi^2_{0.025,19}} = \frac{(19)(0.00121)}{8.91} = 0.00258\ \text{m}^2.$$

Thus, we can say with 95% confidence that σ² lies in the interval 0.00070 m² to 0.00258 m². The corresponding 95% confidence interval for σ is 0.026 m to 0.051 m.
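A short sketch (again an illustrative check, not from the text) reproduces Example 8-7 with SciPy's chi-square quantiles.

```python
from scipy.stats import chi2

# Sample variance from Example 8-4
s2, n = 0.00121, 20
a, b = 0.025, 0.975                 # complementary percentiles, b - a = 0.95

chi_lo = chi2.ppf(a, df=n - 1)      # ~8.91
chi_hi = chi2.ppf(b, df=n - 1)      # ~32.9

var_lo = (n - 1) * s2 / chi_hi      # lower limit for sigma^2
var_hi = (n - 1) * s2 / chi_lo      # upper limit for sigma^2
print(f"sigma^2: {var_lo:.5f} to {var_hi:.5f} m^2")
print(f"sigma  : {var_lo ** 0.5:.3f} to {var_hi ** 0.5:.3f} m")
# -> sigma^2: 0.00070 to 0.00258 m^2;  sigma: 0.026 to 0.051 m
```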

8.9. STATISTICAL TESTING


It is often desirable to ascertain from a sample whether or not a population has a particular
probability distribution. The usual course of action that is taken is to make a statement
about the probability distribution of the population, and then test to see if the sample
drawn from the population is consistent with the statement.
The statement that is made about the probability distribution of the population is called a
statistical hypothesis. If the hypothesis specifies the probability distribution completely, it
is known as a simple hypothesis; otherwise, it is known as a composite hypothesis.
For every hypothesis $H_0$ there is a complementary alternative $H_1$. $H_0$ and $H_1$ are often called the null hypothesis and alternative hypothesis, respectively.


An hypothesis is tested by drawing a sample from the population in question, computing
the value of a specific sample statistic, and then making the decision to accept or reject the
hypothesis on the basis of the value of the statistic. The statistic used for making the test is
called the test statistic.
Testing of a statistical hypothesis $H_0$ is not infallible, since it is based upon a sample drawn from a population rather than upon the entire population itself. Four possible outcomes can occur:
1. $H_0$ is accepted, when $H_0$ is true.
2. $H_0$ is rejected, when $H_0$ is true.
3. $H_0$ is accepted, when $H_0$ is false.
4. $H_0$ is rejected, when $H_0$ is false.
If outcome (1) or outcome (4) occurs, no error is made in that the correct course of action has been taken. Outcome (2) is known as a Type I error; outcome (3) is known as a Type II error.
The size of the Type I error, designated α, is defined as the probability of rejecting $H_0$ when $H_0$ is true, i.e.,

$$\alpha = P[\text{reject } H_0 \text{ when } H_0 \text{ is true}]. \qquad (8-33)$$

When α is fixed at some level for $H_0$ and is expressed as a percentage, it is known as the significance level of the test. Although the choice of significance level is arbitrary, common practice indicates a significance level of 5% as "significant" and 1% as "highly significant".

8.10. TEST OF THE MEAN OF A PROBABILITY DISTRIBUTION


Under certain conditions we may expect the mean μ of a probability distribution to have a specific value $\mu_0$. The hypothesis that $\mu = \mu_0$ can be tested by drawing a sample of size n and using the sample mean $\bar{X}$ as the test statistic. Specifically, we have

$$H_0: \mu = \mu_0$$
$$H_1: \mu \ne \mu_0.$$

$\bar{X}$ is assumed to be normally distributed, or at least approximately normally distributed. Under the hypothesis that $\mu = \mu_0$, the following probability statement can be derived from Eq. (8-27), assuming σ is known:

$$P[(\mu_0 - c) \le \bar{X} \le (\mu_0 + c)] = 2\Phi(z) - 1, \qquad (8-34)$$

where $c = z\sigma/\sqrt{n}$.

If σ is unknown, the following probability statement can be derived from Eq. (8-30):

$$P[(\mu_0 - c) \le \bar{X} \le (\mu_0 + c)] = 2F(t) - 1, \qquad (8-35)$$

where $c = ts/\sqrt{n}$.

$H_0$ is accepted if $\bar{x}$, the specific value of $\bar{X}$ calculated from the sample, lies between $\mu_0 - c$ and $\mu_0 + c$; otherwise, $H_0$ is rejected. The regions of acceptance and rejection are shown in Fig. 8-4. If α is the probability that $H_0$ is rejected when it is true, then 1 − α must be the probability that $H_0$ is accepted when it is true. It follows, then, that

$$1 - \alpha = 2\Phi(z) - 1 \quad\text{for } \sigma \text{ known} \qquad (8-36a)$$

or

$$1 - \alpha = 2F(t) - 1 \quad\text{for } \sigma \text{ unknown}. \qquad (8-36b)$$

Fig. 8-4.

Solving for Φ(z), or F(t), we get:

$$\Phi(z) = 1 - \frac{\alpha}{2} \qquad (8-37a)$$

or

$$F(t) = 1 - \frac{\alpha}{2}. \qquad (8-37b)$$

Thus, the value of z or t is obtained from the significance level of the test, α. Specifically, Table I of Appendix B is used to evaluate z; Table III of Appendix B is used to evaluate t. [Note: In Table III, $t = t_{p,n-1}$, where $p = F(t)$.]

EXAMPLE 8-8
An angle is measured 10 times. Each measurement is independent and made with the same precision, i.e., the 10 measurements constitute a random sample of size 10. The sample mean and sample standard deviation are calculated from the measurements: $\bar{x}$ = 42°12'14.6", s = 3.7". Test at the 5% level of significance the hypothesis that μ, the population mean of the measurements, is 42°12'16.0" against the alternative that μ is not 42°12'16.0".

Solution
$\mu_0$ = 42°12'16.0". For 5% level of significance, α = 0.05. Thus,

$$p = F(t) = 1 - \frac{\alpha}{2} = 0.975.$$

Degrees of freedom = n − 1 = 9. From Table III of Appendix B, $t = t_{0.975,9} = 2.26$. Thus,

$$c = \frac{ts}{\sqrt{n}} = \frac{(2.26)(3.7)}{\sqrt{10}} = 2.6''$$

and so

$$\mu_0 - c = 42°12'13.4'' \quad\text{and}\quad \mu_0 + c = 42°12'18.6''.$$

Since $\bar{x}$ = 42°12'14.6" lies between $\mu_0 - c$ and $\mu_0 + c$, the hypothesis that μ = 42°12'16.0" is accepted at the 5% level of significance.
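The acceptance region of Example 8-8 is equally easy to script; the sketch below works in seconds of arc relative to 42°12' (a convenience assumption, not from the text).

```python
from scipy.stats import t

x_bar, s, n = 14.6, 3.7, 10   # seconds of arc relative to 42 deg 12 min
mu_0 = 16.0
alpha = 0.05

t_val = t.ppf(1 - alpha / 2, df=n - 1)   # t_{0.975,9} = 2.26
c = t_val * s / n ** 0.5                 # half-width of the acceptance region

accept = (mu_0 - c) <= x_bar <= (mu_0 + c)
print(f"region: {mu_0 - c:.1f}'' to {mu_0 + c:.1f}'', accept H0: {accept}")
# -> region: 13.4'' to 18.6'', accept H0: True
```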


8.11. TEST OF THE VARIANCE OF A PROBABILITY DISTRIBUTION
Under the assumption that a population is normally distributed, we can test the null hypothesis $H_0$ that the population variance σ² is $\sigma_0^2$, against the alternative that it is not $\sigma_0^2$, using the sample variance $S^2$ as the test statistic.

Noting that $(n-1)S^2/\sigma_0^2$ is distributed as chi-square with n − 1 degrees of freedom, we can make the following probability statement:

$$P\left[\chi^2_{a,n-1} \le \frac{(n-1)S^2}{\sigma_0^2} \le \chi^2_{b,n-1}\right] = b - a, \qquad (8-38)$$

from which we get

$$P\left[\frac{\chi^2_{a,n-1}\sigma_0^2}{n-1} \le S^2 \le \frac{\chi^2_{b,n-1}\sigma_0^2}{n-1}\right] = b - a. \qquad (8-39)$$

$H_0$ is accepted if $s^2$, the specific value of $S^2$ calculated from the sample, lies between $\chi^2_{a,n-1}\sigma_0^2/(n-1)$ and $\chi^2_{b,n-1}\sigma_0^2/(n-1)$; otherwise, $H_0$ is rejected. The regions of acceptance and rejection are shown in Fig. 8-5.

Again, 1 − α is the probability that $H_0$ is accepted when it is true. Thus,

$$1 - \alpha = b - a, \qquad (8-40)$$

Fig. 8-5.

and for $a + b = 1$ (complementary percentiles), we obtain

$$a = \frac{\alpha}{2} \qquad (8-41)$$

and

$$b = 1 - \frac{\alpha}{2}. \qquad (8-42)$$

This test procedure can, of course, be used to test the standard deviation as well as the
variance.
EXAMPLE 8-9
Referring to the data in Example 8-8, test at the 5% level of significance the hypothesis that σ, the population standard deviation of the measurements, is 2.0" against the alternative that σ is not 2.0".

Solution
$\sigma_0$ = 2.0". From Example 8-8, the sample standard deviation is s = 3.7". Now α = 0.05. Thus $a = \alpha/2 = 0.025$, and $b = 1 - (\alpha/2) = 0.975$. Degrees of freedom = n − 1 = 9. From Table II, Appendix B, $\chi^2_{0.025,9} = 2.70$ and $\chi^2_{0.975,9} = 19.0$. Thus,

$$\frac{\chi^2_{0.025,9}\,\sigma_0^2}{n-1} = \frac{(2.70)(2.0)^2}{9} = 1.20\ \text{(seconds of arc)}^2$$

and

$$\frac{\chi^2_{0.975,9}\,\sigma_0^2}{n-1} = \frac{(19.0)(2.0)^2}{9} = 8.44\ \text{(seconds of arc)}^2.$$

Now $s^2 = (3.7)^2 = 13.7$ (seconds of arc)². Since 13.7 does not lie between 1.20 and 8.44, the hypothesis that σ = 2.0" is rejected at the 5% level of significance.
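The variance test of Example 8-9 follows the same pattern; this sketch is a numerical check only, and its small differences from the text come from table rounding.

```python
from scipy.stats import chi2

s, n = 3.7, 10          # sample standard deviation and sample size
sigma_0 = 2.0           # hypothesized population standard deviation
alpha = 0.05
df = n - 1

lo = chi2.ppf(alpha / 2, df) * sigma_0 ** 2 / df       # ~1.20 arcsec^2
hi = chi2.ppf(1 - alpha / 2, df) * sigma_0 ** 2 / df   # ~8.45 (8.44 with table value 19.0)

print(f"acceptance region for s^2: {lo:.2f} to {hi:.2f}; s^2 = {s ** 2:.1f}")
# s^2 = 13.7 lies outside the region, so H0: sigma = 2.0'' is rejected
```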

8.12. BIVARIATE NORMAL DISTRIBUTION


The probability distribution of two jointly distributed random variables was discussed in
general terms in Chapter 5. We shall now look at a particular joint distribution of two
random variables: the bivariate normal distribution. This distribution is very useful when
dealing with planimetric (x, y) positions in surveying.
The joint density function of two random variables X and Y which have a bivariate normal distribution is

$$f(x,y) = \frac{1}{2\pi\sigma_x\sigma_y\sqrt{1-\rho^2}}\exp\left\{-\frac{1}{2(1-\rho^2)}\left[\left(\frac{x-\mu_x}{\sigma_x}\right)^2 - 2\rho\left(\frac{x-\mu_x}{\sigma_x}\right)\left(\frac{y-\mu_y}{\sigma_y}\right) + \left(\frac{y-\mu_y}{\sigma_y}\right)^2\right]\right\}, \qquad (8-43)$$

in which $\mu_x$ and $\sigma_x$ are the mean and standard deviation, respectively, of X; $\mu_y$ and $\sigma_y$ are the mean and standard deviation, respectively, of Y; and ρ is the correlation coefficient of X and Y as defined by Eq. (5-55).

This density function has the form of a bell-shaped surface over the x, y coordinate plane, centered at $x = \mu_x$, $y = \mu_y$, as shown in Fig. 8-6. The marginal density functions for X and Y are, respectively,

$$f(x) = \frac{1}{\sigma_x\sqrt{2\pi}}\exp\left[-\frac{1}{2}\left(\frac{x-\mu_x}{\sigma_x}\right)^2\right] \qquad (8-44)$$

Fig. 8-6.

and

$$f(y) = \frac{1}{\sigma_y\sqrt{2\pi}}\exp\left[-\frac{1}{2}\left(\frac{y-\mu_y}{\sigma_y}\right)^2\right], \qquad (8-45)$$

which are the usual density functions for individual normally distributed random variables. The two marginal density functions are also shown in Fig. 8-6.

A plane that is parallel to the x, y coordinate plane will cut the bivariate density surface in an ellipse (see Fig. 8-6). The equation of this ellipse is obtained by setting f(x, y) in Eq. (8-43) equal to the height K of the intersecting plane above the x, y plane, and simplifying. The result is

$$\left(\frac{x-\mu_x}{\sigma_x}\right)^2 - 2\rho\left(\frac{x-\mu_x}{\sigma_x}\right)\left(\frac{y-\mu_y}{\sigma_y}\right) + \left(\frac{y-\mu_y}{\sigma_y}\right)^2 = (1-\rho^2)c^2, \qquad (8-46)$$

where $c^2 = -\ln\left[4\pi^2K^2\sigma_x^2\sigma_y^2(1-\rho^2)\right]$, a constant.
EXAMPLE 8-10
The parameters of a bivariate normal distribution are $\mu_x = 4$, $\mu_y = 5$, $\sigma_x = 1$, $\sigma_y = 0.5$, and $\rho = 0.5$. A plane intersects the density function at $K = 0.1$ above the x, y coordinate plane. Evaluate and plot the ellipse of intersection.

Solution

$$c^2 = -\ln\left[4\pi^2(0.1)^2(1)^2(0.5)^2(1-0.25)\right] = 2.60$$
$$(1-\rho^2)c^2 = (1-0.25)(2.60) = 1.95.$$

Thus, the equation of the ellipse of intersection is

$$\left(\frac{x-4}{1}\right)^2 - 2(0.5)\left(\frac{x-4}{1}\right)\left(\frac{y-5}{0.5}\right) + \left(\frac{y-5}{0.5}\right)^2 = 1.95.$$

Simplifying, we get

$$(x-4)^2 - 2(x-4)(y-5) + 4(y-5)^2 = 1.95.$$

Letting $u = x - 4$ and $v = y - 5$, we have

$$u^2 - 2uv + 4v^2 = 1.95.$$

Solving for u in terms of v, we get

$$u = v \pm \sqrt{1.95 - 3v^2}.$$

Thus,

$$x - 4 = y - 5 \pm \sqrt{1.95 - 3(y-5)^2}$$

or

$$x = y - 1 \pm \sqrt{1.95 - 3(y-5)^2}.$$

Values for x and y are listed in Table 8-4, and the ellipse is plotted in Fig. 8-7.
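Table 8-4 can be regenerated directly from the closed-form solution for x; the following sketch (illustrative only) steps y through the same values used in the text.

```python
# Ellipse of intersection from Example 8-10:
# (x - 4)^2 - 2(x - 4)(y - 5) + 4(y - 5)^2 = 1.95,
# solved for x as x = y - 1 +/- sqrt(1.95 - 3(y - 5)^2)
for i in range(9):
    y = 4.2 + 0.2 * i
    disc = 1.95 - 3.0 * (y - 5.0) ** 2
    if disc >= 0.0:
        root = disc ** 0.5
        print(f"y = {y:.1f}:  x = {y - 1 - root:.2f} and {y - 1 + root:.2f}")
# Reproduces the x, y pairs of Table 8-4
```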
It can be shown through appropriate differentiation of Eq. (8-46) that the extreme points of the ellipse (A, B, C, and D in Fig. 8-7) have the coordinates

$$x = \mu_x \pm c\sigma_x, \quad y = \mu_y \pm \rho c\sigma_y$$

and

$$x = \mu_x \pm \rho c\sigma_x, \quad y = \mu_y \pm c\sigma_y.$$

Table 8-4

y      x
4.2    3.03, 3.37
4.4    2.47, 4.33
4.6    2.39, 4.81
4.8    2.45, 5.15
5.0    2.60, 5.40
5.2    2.85, 5.55
5.4    3.19, 5.61
5.6    3.67, 5.53
5.8    4.63, 4.97

Fig. 8-7.

If the ellipse is enclosed within an imaginary box, indicated by the broken lines in Fig. 8-7, we see that the half-dimensions of the box are $c\sigma_x$ and $c\sigma_y$. We can also see that ρ acts as a proportioning factor in locating A, B, C, and D.

Points E and G (Fig. 8-7) on the ellipse can be located by setting $x - \mu_x$ in Eq. (8-46) equal to zero and solving for y:

$$y = \mu_y \pm c\sigma_y\sqrt{1-\rho^2}.$$

Similarly, points F and H (Fig. 8-7) are located by setting $y - \mu_y$ in Eq. (8-46) equal to zero and solving for x:

$$x = \mu_x \pm c\sigma_x\sqrt{1-\rho^2}.$$

When $\sigma_x = \sigma_y = \sigma$ and $\rho = 0$, Eq. (8-46) reduces to

$$(x-\mu_x)^2 + (y-\mu_y)^2 = \sigma^2c^2, \qquad (8-47)$$

which is the equation of a circle with radius σc.

8.13. ERROR ELLIPSES

In the previous section, the general case of the bivariate normal distribution was considered. This is the usual model which applies to survey measurements. If we wish to focus on the random error components only, we can set $\mu_x$ and $\mu_y$ equal to zero and get a probability distribution that centers on the origin of the x, y coordinate system.

When $\mu_x = \mu_y = 0$, Eq. (8-43) reduces to

$$f(x,y) = \frac{1}{2\pi\sigma_x\sigma_y\sqrt{1-\rho^2}}\exp\left\{-\frac{1}{2(1-\rho^2)}\left[\left(\frac{x}{\sigma_x}\right)^2 - 2\rho\frac{xy}{\sigma_x\sigma_y} + \left(\frac{y}{\sigma_y}\right)^2\right]\right\} \qquad (8-48)$$

and Eq. (8-46) becomes

$$\left(\frac{x}{\sigma_x}\right)^2 - 2\rho\frac{xy}{\sigma_x\sigma_y} + \left(\frac{y}{\sigma_y}\right)^2 = (1-\rho^2)c^2. \qquad (8-49)$$

Equation (8-49) represents a family of error ellipses centered on the origin of the x, y coordinate system. When c = 1, Eq. (8-49) is the equation of the standard error ellipse. The size, shape and orientation of the standard error ellipse are governed by the distribution parameters $\sigma_x$, $\sigma_y$ and ρ. Six examples illustrating the effects of different combinations of distribution parameters are shown in Fig. 8-8.

Fig. 8-8.

A typical standard error ellipse is shown in Fig. 8-9. Since c = 1, the imaginary box (broken line) that encloses the ellipse has half-dimensions $\sigma_x$ and $\sigma_y$. In general, the principal axes of the ellipse, x′ and y′, do not coincide with the coordinate axes x and y; the major axis of the ellipse, x′, makes an angle θ with the x-axis.

Fig. 8-9.

A positional error is expressed in the x, y coordinate system by random vector $\begin{bmatrix}X\\Y\end{bmatrix}$; the same error is expressed in the x′, y′ coordinate system by random vector $\begin{bmatrix}X'\\Y'\end{bmatrix}$. The orthogonal (rotational) transformation which relates the two vectors is

$$\begin{bmatrix}X'\\Y'\end{bmatrix} = \begin{bmatrix}\cos\theta & \sin\theta\\-\sin\theta & \cos\theta\end{bmatrix}\begin{bmatrix}X\\Y\end{bmatrix}, \qquad (8-50)$$

where θ is the angle of rotation.

Now the covariance matrices for random vectors $\begin{bmatrix}X\\Y\end{bmatrix}$ and $\begin{bmatrix}X'\\Y'\end{bmatrix}$ are

$$\begin{bmatrix}\sigma_x^2 & \sigma_{xy}\\\sigma_{xy} & \sigma_y^2\end{bmatrix} \quad\text{and}\quad \begin{bmatrix}\sigma_{x'}^2 & 0\\0 & \sigma_{y'}^2\end{bmatrix},$$

respectively. The off-diagonal terms in the covariance matrix for $\begin{bmatrix}X'\\Y'\end{bmatrix}$ are zero because X′ and Y′ are uncorrelated (x′ and y′ are the principal axes of the ellipse).

Applying the general law of propagation of variances and covariances, Eq. (6-19), to the vector relationship given by Eq. (8-50), we get:

$$\begin{bmatrix}\sigma_{x'}^2 & 0\\0 & \sigma_{y'}^2\end{bmatrix} = \begin{bmatrix}\cos\theta & \sin\theta\\-\sin\theta & \cos\theta\end{bmatrix}\begin{bmatrix}\sigma_x^2 & \sigma_{xy}\\\sigma_{xy} & \sigma_y^2\end{bmatrix}\begin{bmatrix}\cos\theta & -\sin\theta\\\sin\theta & \cos\theta\end{bmatrix}. \qquad (8-51)$$

Multiplying the matrices and equating corresponding elements, we obtain

$$\sigma_{x'}^2 = \sigma_x^2\cos^2\theta + 2\sigma_{xy}\sin\theta\cos\theta + \sigma_y^2\sin^2\theta \qquad (8-52)$$

$$\sigma_{y'}^2 = \sigma_x^2\sin^2\theta - 2\sigma_{xy}\sin\theta\cos\theta + \sigma_y^2\cos^2\theta \qquad (8-53)$$

$$0 = (\sigma_y^2 - \sigma_x^2)\sin\theta\cos\theta + \sigma_{xy}(\cos^2\theta - \sin^2\theta). \qquad (8-54)$$

Substituting $\frac{1}{2}\sin 2\theta$ for $\sin\theta\cos\theta$, and $\cos 2\theta$ for $\cos^2\theta - \sin^2\theta$ in Eq. (8-54), we obtain

$$\frac{1}{2}(\sigma_y^2 - \sigma_x^2)\sin 2\theta + \sigma_{xy}\cos 2\theta = 0,$$

from which we get

$$\tan 2\theta = \frac{2\sigma_{xy}}{\sigma_x^2 - \sigma_y^2}. \qquad (8-55)$$

The quadrant of 2θ is determined in the usual way from the signs of the numerator $2\sigma_{xy}$ and denominator $\sigma_x^2 - \sigma_y^2$.

Eliminating θ from Eqs. (8-52) and (8-53) results in the following expressions for the variances of X′ and Y′:

$$\sigma_{x'}^2 = \frac{\sigma_x^2 + \sigma_y^2}{2} + \left[\left(\frac{\sigma_x^2 - \sigma_y^2}{2}\right)^2 + \sigma_{xy}^2\right]^{1/2} \qquad (8-56)$$

$$\sigma_{y'}^2 = \frac{\sigma_x^2 + \sigma_y^2}{2} - \left[\left(\frac{\sigma_x^2 - \sigma_y^2}{2}\right)^2 + \sigma_{xy}^2\right]^{1/2}. \qquad (8-57)$$

The standard deviations $\sigma_{x'}$ and $\sigma_{y'}$ are the semimajor axis and semiminor axis, respectively, of the standard error ellipse.

It can be demonstrated that the variances $\sigma_{x'}^2$ and $\sigma_{y'}^2$ are the eigenvalues of the covariance matrix of the random vector $\begin{bmatrix}X\\Y\end{bmatrix}$.

EXAMPLE 8-11
The random error in the position of a survey station is expressed by a bivariate normal distribution with parameters $\mu_x = \mu_y = 0$, $\sigma_x = 0.22$ m, $\sigma_y = 0.14$ m, and $\rho = 0.80$. Evaluate the semimajor axis, semiminor axis, and orientation of the standard error ellipse associated with this position error.

Solution

$$\sigma_{xy} = \rho\sigma_x\sigma_y = (0.80)(0.22)(0.14) = 0.0246\ \text{m}^2,$$

$$\frac{\sigma_x^2 + \sigma_y^2}{2} = \frac{(0.22)^2 + (0.14)^2}{2} = 0.0340\ \text{m}^2,$$

$$\left[\left(\frac{\sigma_x^2 - \sigma_y^2}{2}\right)^2 + \sigma_{xy}^2\right]^{1/2} = \left[\left(\frac{(0.22)^2 - (0.14)^2}{2}\right)^2 + (0.0246)^2\right]^{1/2} = 0.0285\ \text{m}^2.$$

Thus,

$$\sigma_{x'}^2 = 0.0340 + 0.0285 = 0.0625\ \text{m}^2$$

and

$$\sigma_{y'}^2 = 0.0340 - 0.0285 = 0.0055\ \text{m}^2.$$

Thus the semimajor axis is

$$\sigma_{x'} = \sqrt{0.0625} = 0.25\ \text{m}$$

and the semiminor axis is

$$\sigma_{y'} = \sqrt{0.0055} = 0.074\ \text{m}.$$

Now

$$\tan 2\theta = \frac{2\sigma_{xy}}{\sigma_x^2 - \sigma_y^2} = \frac{2(0.0246)}{(0.22)^2 - (0.14)^2} = 1.711.$$

Since $2\sigma_{xy}$ and $\sigma_x^2 - \sigma_y^2$ are both positive, 2θ lies in the first quadrant. Thus, $2\theta = \tan^{-1}(1.711) = 59.7°$, and the orientation of the error ellipse is θ = 29.8°.
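The semiaxes and orientation of Example 8-11 follow directly from Eqs. (8-55) through (8-57); the sketch below is an illustrative transcription using NumPy.

```python
import numpy as np

# Standard error ellipse parameters from Example 8-11
sx, sy, rho = 0.22, 0.14, 0.80
sxy = rho * sx * sy                              # 0.0246 m^2

mean = (sx ** 2 + sy ** 2) / 2.0                 # 0.0340 m^2
half = np.hypot((sx ** 2 - sy ** 2) / 2.0, sxy)  # 0.0285 m^2

semi_major = np.sqrt(mean + half)                # sigma_x' = 0.25 m, Eq. (8-56)
semi_minor = np.sqrt(mean - half)                # sigma_y' = 0.074 m, Eq. (8-57)
theta = 0.5 * np.degrees(np.arctan2(2 * sxy, sx ** 2 - sy ** 2))  # 29.8 deg, Eq. (8-55)

print(semi_major, semi_minor, theta)
```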

To determine the probability associated with an error ellipse, it is most convenient to consider independent (uncorrelated) random errors X and Y. (If the errors are correlated, they can always be transformed into uncorrelated errors by rotation through angle θ.)

For uncorrelated random errors, ρ = 0 and Eq. (8-49) reduces to

$$\frac{x^2}{\sigma_x^2} + \frac{y^2}{\sigma_y^2} = c^2. \qquad (8-58)$$

Now consider the position of a point defined by the two random errors X and Y. This point will lie on or within the error ellipse if

$$\frac{x^2}{\sigma_x^2} + \frac{y^2}{\sigma_y^2} \le c^2. \qquad (8-59)$$

Since X and Y are two independent normal random variables with zero means, the random variable

$$U = \frac{X^2}{\sigma_x^2} + \frac{Y^2}{\sigma_y^2} \qquad (8-60)$$

has a chi-square distribution with two degrees of freedom. The probability density function of U can be easily derived from the general chi-square density function, Eq. (8-3), noting that for two degrees of freedom, n/2 = 1. Thus, the probability density function of U is

$$f(u) = \frac{1}{2}e^{-u/2} \quad\text{for } u \ge 0. \qquad (8-61)$$

The probability that the position given by values of X and Y lies on or within the error ellipse is

$$P\left[\frac{X^2}{\sigma_x^2} + \frac{Y^2}{\sigma_y^2} \le c^2\right] = P[U \le c^2] = \int_0^{c^2}\frac{1}{2}e^{-u/2}\,du = 1 - e^{-c^2/2}. \qquad (8-62)$$

$P[U \le c^2]$ is represented by the volume under the bivariate normal density surface within the region defined by the error ellipse. The probability $P[U \le c^2]$ for various values of c is given in Table 8-5. Since for the standard error ellipse c = 1, we see from Table 8-5 that the probability is 0.394 that the position of a point plotted from the two random errors will lie on or within the standard error ellipse.

Table 8-5

c        P[U ≤ c²]
1.000    0.394
1.177    0.500
1.414    0.632
2.000    0.865
2.146    0.900
2.447    0.950
3.000    0.989
3.035    0.990
3.500    0.998

EXAMPLE 8-12
For the random position error given in Example 8-11, evaluate the semimajor and semiminor axes of the error ellipse within which it is 0.90 probable that the error in position will lie.

Solution
For $P[U \le c^2] = 0.90$, c = 2.146 (Table 8-5). From Example 8-11, $\sigma_{x'} = 0.25$ m and $\sigma_{y'} = 0.074$ m. Thus, the semimajor axis of the error ellipse is

$$c\sigma_{x'} = 2.146(0.25) = 0.54\ \text{m},$$

and the semiminor axis of the error ellipse is

$$c\sigma_{y'} = 2.146(0.074) = 0.16\ \text{m}.$$
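Equation (8-62) also inverts neatly: for a desired probability P, the scale factor is $c = \sqrt{-2\ln(1-P)}$. The sketch below (an illustrative check) regenerates the entries of Table 8-5 used in Example 8-12.

```python
import math

# Invert Eq. (8-62):  P[U <= c^2] = 1 - exp(-c^2/2)  =>  c = sqrt(-2 ln(1 - P))
for p in (0.394, 0.500, 0.900, 0.950, 0.990):
    c = math.sqrt(-2.0 * math.log(1.0 - p))
    print(f"P = {p:.3f}  ->  c = {c:.3f}")
# P = 0.900 gives c = 2.146, the value used in Example 8-12
```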


PROBLEMS
8-1
Show that the distribution function for the chi-square random variable Y in Example 8-1 is

$$F(y) = 1 - \left(1 + \frac{y}{2}\right)e^{-y/2} \quad\text{for } y \ge 0.$$

Evaluate F(y) for y = 0.297, 3.36, and 9.49 and compare with the corresponding entries in Table II of Appendix B.
8-2
Show that the distribution function for the random variable T in Example 8-2 is

$$F(t) = \frac{1}{2} + \frac{t(t^2 + 6)}{2(t^2 + 4)^{3/2}}.$$

Evaluate F(t) for t = 0.134, 0.741, and 2.13 and compare with the corresponding entries in Table III of Appendix B.

8-3
Given a random variable T which has a t distribution with 10 degrees of freedom, determine from Table III of Appendix B the probability that T takes on a value: (a) less than 0.70; (b) greater than 0.70; (c) between 0.26 and 0.70; (d) between −3.17 and 3.17; (e) between −1.81 and 0.26.
Note: Since the t distribution is symmetric about zero, $P[T \le -t] = 1 - P[T \le t]$.

8-4
A distance is measured 25 times. All measurements are independent and have the same precision (i.e., the measurements constitute a random sample of size 25). Following are the measurements:

231.354 m   231.361 m   231.384 m   231.347 m   231.335 m
231.312 m   231.355 m   231.347 m   231.366 m   231.361 m
231.320 m   231.348 m   231.341 m   231.338 m   231.337 m
231.361 m   231.341 m   231.350 m   231.333 m   231.355 m
231.322 m   231.331 m   231.376 m   231.335 m   231.344 m

Evaluate the sample mean, sample median, sample midrange, sample range, sample mean deviation, sample variance, and sample standard deviation.
8-5
The following 20 observations of a pair of angles, α and β, are obtained:

OBSERVATION   α               β
1             31°14'16.2"     42°08'24.0"
2             31°14'15.2"     42°08'24.4"
3             31°14'15.6"     42°08'24.5"
4             31°14'14.5"     42°08'23.8"
5             31°14'14.0"     42°08'25.7"
6             31°14'15.8"     42°08'26.1"
7             31°14'16.0"     42°08'21.8"
8             31°14'14.1"     42°08'23.3"
9             31°14'16.4"     42°08'24.8"
10            31°14'13.6"     42°08'23.2"
11            31°14'16.7"     42°08'25.7"
12            31°14'14.1"     42°08'22.7"
13            31°14'15.3"     42°08'26.2"
14            31°14'15.2"     42°08'25.8"
15            31°14'12.9"     42°08'25.3"
16            31°14'17.9"     42°08'24.0"
17            31°14'16.2"     42°08'27.4"
18            31°14'14.2"     42°08'25.8"
19            31°14'14.1"     42°08'23.7"
20            31°14'14.8"     42°08'24.0"

Compute the sample variances, $S_\alpha^2$ and $S_\beta^2$, and the sample covariance, $S_{\alpha\beta}$. From these sample statistics compute the sample correlation coefficient for α and β.

8-6
Angles α and β in Problem 8-5 are added to form a new angle $\gamma = \alpha + \beta$. Compute a sample of 20 values for γ directly from the 20 pairs of α and β values in Problem 8-5. Then compute from these data the sample variance $s_\gamma^2$, the sample covariance $s_{\gamma\alpha}$, and the sample correlation coefficient for γ and α. Check the computed values for $s_\gamma^2$ and $s_{\gamma\alpha}$ by applying variance-covariance propagation to the vector function

$$\begin{bmatrix}\gamma\\\alpha\end{bmatrix} = \begin{bmatrix}1 & 1\\1 & 0\end{bmatrix}\begin{bmatrix}\alpha\\\beta\end{bmatrix},$$

using the sample variances and covariance for α and β, computed in Problem 8-5, as elements of the covariance matrix for $\begin{bmatrix}\alpha\\\beta\end{bmatrix}$.

8-7
A distance is measured 10 times. All measurements are independent and have the same precision. The standard deviation of each measurement is known to be 0.025 m. The following values are obtained: 307.532 m, 307.500 m, 307.474 m, 307.549 m, 307.490 m, 307.527 m, 307.556 m, 307.502 m, 307.489 m, and 307.514 m. Evaluate the sample mean for the distance and the standard deviation of this sample mean. Construct 50% and 95% confidence intervals for the population mean of the distance.
8-8
If in Problem 8-7 the standard deviation of each measurement is not known, construct 50% and 95% confidence intervals for the unknown standard deviation, under the assumption that the measurements are normally distributed. Is there any significant difference between the sample standard deviation and the standard deviation (0.025 m) given in Problem 8-7?
8-9
An angle is measured six times with the following results:

40°10'15.6"   40°10'16.4"
40°10'10.8"   40°10'13.5"
40°10'08.9"   40°10'11.0"

The measurements are assumed to be normally distributed and independent. Construct 90% and 99% confidence intervals for the population mean, variance, and standard deviation.
8-10
An angle is measured 16 times with a theodolite. The measured values are:

1. 52°35'24"     9. 52°35'24"
2. 52°35'28"    10. 52°35'29"
3. 52°35'22"    11. 52°35'35"
4. 52°35'20"    12. 52°35'31"
5. 52°35'25"    13. 52°35'29"
6. 52°35'29"    14. 52°35'26"
7. 52°35'18"    15. 52°35'30"
8. 52°35'26"    16. 52°35'31"

It is suspected that the theodolite was disturbed between the eighth and ninth measurements. Construct 90% confidence intervals for the means of the first eight and last eight measurements. Is there evidence the theodolite was disturbed?
8-11
An angle is independently measured 10 times with the same precision. The observed values are 90°00'05", 90°00'10", 90°00'00", 90°00'07", 89°59'54", 89°59'58", 90°00'06", 90°00'03", 89°59'57", 90°00'10".
(a) Test the hypothesis that the mean of the measurement equals 90°00'00" against the alternative that the mean does not equal 90°00'00". Use a 5% level of significance.
(b) Test the hypothesis that the standard deviation of the measurement is 4" against the alternative that it is not 4". Use a 5% level of significance.
8-12
A level rod is observed 15 times with a precise level that is equipped with a micrometer. Following are the rod readings (assumed to be a random sample from a normal population):

1412.80 mm   1412.85 mm   1412.87 mm
1413.09 mm   1412.50 mm   1412.80 mm
1412.86 mm   1412.84 mm   1412.66 mm
1412.80 mm   1412.84 mm   1412.84 mm
1412.78 mm   1413.02 mm   1412.72 mm

Test the following hypotheses at the 5% level of significance:
(a) $H_0: \mu = 1413.00$ mm against $H_1: \mu \ne 1413.00$ mm
(b) $H_0: \mu = 1412.75$ mm against $H_1: \mu \ne 1412.75$ mm
(c) $H_0: \sigma = 0.08$ mm against $H_1: \sigma \ne 0.08$ mm
(d) $H_0: \sigma = 0.20$ mm against $H_1: \sigma \ne 0.20$ mm

8-13
Two independent calibrations of a 50 m steel tape yield two different values for its length: 50.0026 m and 50.0008 m. The standard deviation of a single tape calibration is known to be 0.7 mm.
(a) Test at the 2% level of significance for any significant difference between the two calibration values.
(b) Assuming there is no significant difference between the two calibration values, construct a 99% confidence interval for the length of the tape based upon the mean of the two calibration values.
8-14
Plane coordinates X and Y of a survey station have a bivariate normal distribution. The mean and standard deviation of X are 1700.50 m and 0.20 m, respectively; the mean and standard deviation of Y are 810.65 m and 0.10 m, respectively. The coefficient of correlation between X and Y is 0.60. Evaluate the principal dimensions (semimajor and semiminor axes) and orientation of the standard error ellipse associated with this survey station position.
8-15
The following covariance matrix is associated with the random error in the horizontal (x, y) position of a point:

$$\begin{bmatrix}0.090 & 0.096\\0.096 & 0.160\end{bmatrix}\ \text{m}^2.$$

Under the assumption that the random error has a bivariate normal distribution, evaluate the principal dimensions and orientation of its standard error ellipse. Sketch the standard error ellipse, showing pertinent dimensions.
8-16
If X and Y have a bivariate normal distribution with $\mu_x = \mu_y = 0$, $\sigma_x = \sigma_y = \sigma$, and $\sigma_{xy} = 0$, it can be shown that the radial distance $R = \sqrt{X^2 + Y^2}$ has the following density function:

$$f(r) = \frac{r}{\sigma^2}\exp\left(-\frac{r^2}{2\sigma^2}\right) \quad\text{for } r \ge 0.$$

This is the density function of the Rayleigh distribution. Evaluate $P[R \le \sigma]$ and compare with the corresponding entry in Table 8-5.
8-17
The computation of a closed traverse results in the following misclosure in the x and y coordinates:

$$x_c - x_0 = 3.3\ \text{cm}$$
$$y_c - y_0 = 6.9\ \text{cm},$$

where $x_0$ and $y_0$ are the given coordinates of the origin of the survey and $x_c$ and $y_c$ are the computed coordinates. The covariance matrix of the computed coordinates, referenced to the given coordinates, is

$$\begin{bmatrix}4.63 & 0.87\\0.87 & 8.83\end{bmatrix}\ \text{cm}^2.$$

Is the misclosure acceptable in the sense that it lies within the 0.95 probability error ellipse?
8-18
The x, y position of a survey station is computed by the method of least squares. The initial (approximate) position of the station is given by $x_0 = 1040.60$ m, $y_0 = 2143.50$ m, and the normal equations in the least squares solution are

$$\begin{bmatrix}1.125 & 0.250\\0.250 & 0.500\end{bmatrix}\begin{bmatrix}\Delta x\\\Delta y\end{bmatrix} = \begin{bmatrix}0.40\\0.20\end{bmatrix}.$$

The reference variance is 0.040 m². Assume the solution requires only one iteration. Evaluate the least squares position of the survey station and the principal dimensions and orientation angle of its 99% confidence region (an ellipse identical in size, shape and orientation to the corresponding 0.99 probability error ellipse but centered on the least squares position).
8-19
The principal axes of a standard error ellipse coincide with the x and y axes (Fig. 8-10). The standard deviations $\sigma_x$ and $\sigma_y$ are as shown in the figure. Let $\sigma_\phi = \sigma_{OP}$ be the standard deviation in any direction φ. In general, P does not lie on the ellipse; instead, its locus is the so-called pedal curve, shown as a broken line in Fig. 8-10.
(a) Show that

$$\sigma_\phi^2 = \sigma_x^2\cos^2\phi + \sigma_y^2\sin^2\phi.$$

(Hint: Put the x′ axis in the direction of OP.)
(b) Then show that the equation of the pedal curve in the x, y coordinate system is

$$(x^2 + y^2)^2 - \sigma_x^2x^2 - \sigma_y^2y^2 = 0.$$

8-20
A steel tape of length l is used to measure the distance between two survey stations. A total of n full tape lengths are required to measure the distance. Random errors $E_1, E_2, \ldots, E_n$ in aligning the tape introduce a positive error V in the observed distance. Assume these alignment errors are independent and normally distributed with zero mean and standard deviation σ, and that the effect of each error $E_i$ in the direction of taping can be approximated by $E_i^2/2l$.
(a) Show that $V = \frac{\sigma^2}{2l}Y$, where Y is distributed as chi-square with n degrees of freedom.
(b) Derive expressions for the mean and standard deviation of V.
(c) If a 50 m tape is used to measure a distance of 800 m, and σ = 0.50 m, evaluate v such that $P[V \le v] = 0.95$.

Fig. 8-10

General Least Squares Adjustment

9.1. INTRODUCTION

In Examples 3-5 and 4-6 a straight line is fitted through three points. In these examples, the y-coordinates were assumed to be the observations, while the x-coordinates were considered as constants. Based on the straight-line equation

$$y - ax - b = 0, \qquad (9-1)$$

the condition equations took the form for the adjustment of indirect observations

$$v + B\Delta = f. \qquad (9-2)$$

The elements of the unknown parameters vector Δ are the slope a and the y-intercept b, and the residuals are those associated with the observed y-coordinates.

Let us now consider the case in which not only the y-coordinate but also the x-coordinate of each point is an observed quantity. The straight-line equation (9-1), written for any point, will then contain two observations, x and y, and two parameters, a and b. In this form, it does not fit the condition equations form of either one of the two techniques of least squares adjustment discussed in Chapter 4. In adjustment of indirect observations, where the form of the condition equations is given by Eq. (9-2), each condition equation contains only one observation. In adjustment of observations only, where the condition equations are of the form

$$Av = f, \qquad (9-3)$$

no parameters are included in the conditions. Consequently, for the case at hand, there is need for a more general least squares technique that can handle combined observations and parameters in the condition equations without the restriction of having only one observation in each equation. Such a technique is the subject of this chapter.

To permit as much generality as possible, the technique will place no restriction on the structure of the covariance, cofactor, or weight matrices of the observations, i.e., it will accept measurements that may be correlated and/or of unequal precision.

Before proceeding with the derivation of the general least squares technique, we shall rework the straight-line problem of Examples 3-5 and 4-6, now taking the x-coordinates as well as the y-coordinates as observations.
EXAMPLE 9-1
A straight line, $y = ax + b$, must be fitted through three points. The following data are given:

POINT   x (cm)   y (cm)   σx² (cm²)   σy² (cm²)
1       2.00     3.20     0.04        0.10
2       4.00     4.00     0.04        0.08
3       6.00     5.00     0.04        0.08

These are precisely the same data as given for Example 4-6, Part (2), with variances for the x-coordinates added. All measured coordinates are assumed to be uncorrelated. It is required, under these new conditions, to find least squares estimates for the two parameters a and b.

Solution
For any one of the three points, the equation of the straight line is

$$F = (y + v_y) - a(x + v_x) - b = 0.$$

Since a and $v_x$ are both unknown, this equation is nonlinear in the unknowns. It cannot therefore be used directly, as was the case in Example 4-6, but must be linearized as follows:

$$(y - a_0x - b_0) + \frac{\partial F}{\partial x}v_x + \frac{\partial F}{\partial y}v_y + \frac{\partial F}{\partial a}\Delta a + \frac{\partial F}{\partial b}\Delta b = 0$$

or

$$a_0v_x - v_y + x\,\Delta a + \Delta b = y - a_0x - b_0,$$

where $a_0$, $b_0$ are approximate values for the two unknown parameters. Combining the terms in the residuals together, and those in the parameter corrections together, and expressing the results in matrix notation, we have

$$\begin{bmatrix}a_0 & -1\end{bmatrix}\begin{bmatrix}v_x\\v_y\end{bmatrix} + \begin{bmatrix}x & 1\end{bmatrix}\begin{bmatrix}\Delta a\\\Delta b\end{bmatrix} = y - a_0x - b_0.$$

The linearized equations for all three points can be written in this form and combined as follows:

$$\begin{bmatrix}a_0 & -1 & 0 & 0 & 0 & 0\\0 & 0 & a_0 & -1 & 0 & 0\\0 & 0 & 0 & 0 & a_0 & -1\end{bmatrix}\begin{bmatrix}v_{x_1}\\v_{y_1}\\v_{x_2}\\v_{y_2}\\v_{x_3}\\v_{y_3}\end{bmatrix} + \begin{bmatrix}x_1 & 1\\x_2 & 1\\x_3 & 1\end{bmatrix}\begin{bmatrix}\Delta a\\\Delta b\end{bmatrix} = \begin{bmatrix}y_1 - a_0x_1 - b_0\\y_2 - a_0x_2 - b_0\\y_3 - a_0x_3 - b_0\end{bmatrix},$$

which can be written as

$$Av + B\Delta = f.$$
Values for the approximations $a_0$ and $b_0$ can be obtained by finding the equation of a straight line that passes through any two of the three points. Taking points 2 and 3, for example, we have:

$$y_2 = a_0x_2 + b_0 \quad\text{and}\quad y_3 = a_0x_3 + b_0.$$

Solving for $a_0$ and $b_0$, we get

$$a_0 = \frac{y_3 - y_2}{x_3 - x_2} = \frac{5.00 - 4.00}{6.00 - 4.00} = 0.500$$

and

$$b_0 = y_2 - a_0x_2 = 4.00 - 0.500(4.00) = 2.00.$$

Thus,

$$A = \begin{bmatrix}0.5 & -1 & 0 & 0 & 0 & 0\\0 & 0 & 0.5 & -1 & 0 & 0\\0 & 0 & 0 & 0 & 0.5 & -1\end{bmatrix}, \quad B = \begin{bmatrix}2 & 1\\4 & 1\\6 & 1\end{bmatrix},$$

$$f = \begin{bmatrix}3.20 - 0.5(2.00) - 2.00\\4.00 - 0.5(4.00) - 2.00\\5.00 - 0.5(6.00) - 2.00\end{bmatrix} = \begin{bmatrix}0.20\\0\\0\end{bmatrix}.$$


Since both the x and y coordinates are treated as observed variables, and since there are six coordinate values in total, the covariance matrix is 6 × 6. Furthermore, since all the observations are uncorrelated, the covariance matrix is diagonal, i.e.,

$$\Sigma = \text{diag}(\sigma_{x_1}^2,\ \sigma_{y_1}^2,\ \sigma_{x_2}^2,\ \sigma_{y_2}^2,\ \sigma_{x_3}^2,\ \sigma_{y_3}^2) = \text{diag}(0.04,\ 0.10,\ 0.04,\ 0.08,\ 0.04,\ 0.08)\ \text{cm}^2.$$

The observation vector ℓ and the residual vector v are

$$\ell = \begin{bmatrix}x_1\\y_1\\x_2\\y_2\\x_3\\y_3\end{bmatrix} \quad\text{and}\quad v = \begin{bmatrix}v_{x_1}\\v_{y_1}\\v_{x_2}\\v_{y_2}\\v_{x_3}\\v_{y_3}\end{bmatrix}.$$
The constant term vector f can be written as

$$f = \begin{bmatrix}-b_0\\-b_0\\-b_0\end{bmatrix} - \begin{bmatrix}a_0 & -1 & 0 & 0 & 0 & 0\\0 & 0 & a_0 & -1 & 0 & 0\\0 & 0 & 0 & 0 & a_0 & -1\end{bmatrix}\begin{bmatrix}x_1\\y_1\\x_2\\y_2\\x_3\\y_3\end{bmatrix},$$

which, symbolically, is

$$f = d - A\ell,$$

where d is obviously the constant vector $[-b_0,\ -b_0,\ -b_0]^t$.

Now, the vector of observations ℓ can be transformed into a vector of equivalent observations, $\ell_c = A\ell$. The same transformation applies to the vector of residuals, v, i.e., $v_c = Av$. Thus

$$f = d - \ell_c,$$

and the linearized condition equations become

$$v_c + B\Delta = d - \ell_c = f,$$

which is the form of Eq. (4-7) for the technique of adjustment of indirect observations.

From the general law of propagation of variances and covariances, expressed by Eq. (6-19), we can obtain the covariance matrix of $\ell_c$:

$$\Sigma_c = A\Sigma A^t = \begin{bmatrix}0.11 & 0 & 0\\0 & 0.09 & 0\\0 & 0 & 0.09\end{bmatrix}\ \text{cm}^2.$$

Letting $\sigma_0^2 = 0.09\ \text{cm}^2$, the weight matrix of the equivalent observations is [see Eq. (4-19)]:

$$W_c = \sigma_0^2\Sigma_c^{-1} = \begin{bmatrix}0.818 & 0 & 0\\0 & 1 & 0\\0 & 0 & 1\end{bmatrix},$$

and the least squares solution, according to Eqs. (4-28), (4-29), and (4-38), is

$$N = B^tW_cB = \begin{bmatrix}55.272 & 11.636\\11.636 & 2.818\end{bmatrix}, \quad t = B^tW_cf = \begin{bmatrix}0.3272\\0.1636\end{bmatrix},$$

$$N^{-1} = \begin{bmatrix}0.1384 & -0.5715\\-0.5715 & 2.7147\end{bmatrix}, \quad \Delta = N^{-1}t = \begin{bmatrix}-0.0482\\0.2571\end{bmatrix}.$$

The corrected parameter values are

$$a_1 = a_0 + \Delta a = 0.5 - 0.0482 = 0.4518$$
$$b_1 = b_0 + \Delta b = 2.0 + 0.2571 = 2.2571.$$

The corrected values must now be used as new approximations in the solution for a new correction vector. Thus

$$A_1 = \begin{bmatrix}0.4518 & -1 & 0 & 0 & 0 & 0\\0 & 0 & 0.4518 & -1 & 0 & 0\\0 & 0 & 0 & 0 & 0.4518 & -1\end{bmatrix}, \quad B_1 = \begin{bmatrix}2 & 1\\4 & 1\\6 & 1\end{bmatrix},$$

and, as before,

$$f_1 = \begin{bmatrix}0.0393\\-0.0643\\0.0321\end{bmatrix}, \quad \Sigma_{c_1} = A_1\Sigma A_1^t = \begin{bmatrix}0.1082 & 0 & 0\\0 & 0.0882 & 0\\0 & 0 & 0.0882\end{bmatrix}\ \text{cm}^2,$$

with the corresponding weight matrix of the equivalent observations

$$W_{c_1} = \begin{bmatrix}0.8151 & 0 & 0\\0 & 1 & 0\\0 & 0 & 1\end{bmatrix},$$

$$N_1 = B_1^tW_{c_1}B_1 = \begin{bmatrix}55.260 & 11.630\\11.630 & 2.815\end{bmatrix}, \quad t_1 = B_1^tW_{c_1}f_1 = \begin{bmatrix}-0.0005\\-0.0002\end{bmatrix},$$

$$N_1^{-1} = \begin{bmatrix}0.1387 & -0.5729\\-0.5729 & 2.7219\end{bmatrix}, \quad \Delta_1 = N_1^{-1}t_1 = \begin{bmatrix}0.00005\\-0.00026\end{bmatrix}.$$

Since the values in $\Delta_1$ are sufficiently small, the iterative procedure is terminated, and the final estimates of the parameters are

$$\hat{a} = 0.452, \qquad \hat{b} = 2.257\ \text{cm}.$$
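The whole iteration of Example 9-1 can be scripted compactly. The following NumPy sketch is an illustrative transcription (not from the text); it works with the covariance matrix Σ directly, which leaves Δ, the residuals, and the fitted parameters unchanged relative to the cofactor/weight formulation.

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0])
y = np.array([3.2, 4.0, 5.0])
# Covariance matrix of the six observations (x1, y1, x2, y2, x3, y3)
Sigma = np.diag([0.04, 0.10, 0.04, 0.08, 0.04, 0.08])

a, b = 0.5, 2.0                      # approximations from points 2 and 3
for _ in range(3):                   # iterate the conditions A v + B delta = f
    A = np.zeros((3, 6))
    for i in range(3):
        A[i, 2 * i], A[i, 2 * i + 1] = a, -1.0
    B = np.column_stack([x, np.ones(3)])
    f = y - a * x - b
    Wc = np.linalg.inv(A @ Sigma @ A.T)      # weights of the equivalent observations
    N = B.T @ Wc @ B
    t = B.T @ Wc @ f
    delta = np.linalg.solve(N, t)
    a, b = a + delta[0], b + delta[1]

v = Sigma @ A.T @ Wc @ (f - B @ delta)       # residuals, Eq. (9-31)
print(a, b)      # -> about 0.452 and 2.257, as in Example 9-1
print(v)         # -> rounds to the residuals of Example 9-2
```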

We have seen in the preceding example that in some problems the condition equations are more readily written in the form $Av + B\Delta = f$ than in either of the two simpler forms discussed in Chapter 4. While it is possible in principle to solve any problem by any technique of least squares, it is quite often more convenient, and more efficient, to solve a given problem by a particular technique. Among others, curve-fitting problems and coordinate-transformation problems are best solved by the general technique of this chapter when all the coordinates in the condition equations are observed variables.

9.2. DERIVATION

Given a mathematical model, the minimum number of observations necessary for its unique determination is $n_0$. When n measurements which are consistent with the model are acquired, such that $n > n_0$, the redundancy is

$$r = n - n_0. \qquad (9-4)$$

This means that among the n observational variables there exist r independent condition equations (see Sections 3.3, 4.1, and 4.4). If we wish to carry unknown parameters into the adjustment, then we must write an additional condition equation for each parameter. Hence, if u parameters are carried, the number of condition equations will be

$$c = r + u. \qquad (9-5)$$

The lower limit for the value of u is obviously zero, in which case no parameters are carried and there are c = r conditions among the observations only. On the other hand, the upper limit for the value of u is $n_0$, for which $c = r + n_0 = n$, so that we may not write any more conditions than the total number of observations. Thus,

$$0 \le u \le n_0 \qquad (9-6)$$
$$r \le c \le n, \qquad (9-7)$$

the lower limits of which define the case for adjustment of observations only, and the upper limits define the case for adjustment of indirect observations, the two special cases treated in Chapter 4 (see Section 9.4). In many problems, the number of parameters carried in the adjustment is neither zero nor $n_0$. In such cases, neither one of the two special techniques of Chapter 4 would be directly suitable. Instead, c condition equations exist which, when they are originally linear, take the form

$$A(\ell + v) + B\Delta = d, \qquad (9-8)$$

where

A is a c × n rectangular coefficient matrix (c ≤ n);
B is a c × u rectangular coefficient matrix (u ≤ c);
ℓ is the n × 1 given observational vector;
v is the corresponding n × 1 vector of residuals;
Δ is the u × 1 vector of parameters;
d is a c × 1 vector of constants.

The precision of the n given observations may be expressed by either a covariance matrix Σ, a cofactor matrix Q, or a weight matrix W. In the subsequent derivation, the cofactor matrix will be used.

There are two possible procedures for deriving the least squares estimate of the parameter vector Δ. The first was essentially followed in Example 9-1, in which the condition equations combining observations and parameters are transformed into the form of indirect observation adjustment. For the sake of completeness, this procedure is summarized here. Let

$$\ell_c = A\ell \qquad (9-9)$$

represent a vector of c equivalent observations, each of which is a linear combination of the n original observations; and let

$$v_c = Av \qquad (9-10)$$

be the corresponding c residuals. If Q is the cofactor matrix of the given observations, then $Q_c$, the cofactor matrix of the equivalent observations, is, according to Eq. (6-27), given by

$$Q_c = AQA^t \qquad (9-11)$$

(dimensions: $c\times c = (c\times n)(n\times n)(n\times c)$). In terms of the equivalent observations, the condition equations given by Eq. (9-8) become

$$\ell_c + v_c + B\Delta = d \qquad (9-12)$$

or

$$v_c + B\Delta = d - \ell_c = f.$$

If the weight matrix of the equivalent observations is

$$W_c = Q_c^{-1} = (AQA^t)^{-1}, \qquad (9-13)$$

then the normal equation matrices for the conditions expressed by Eq. (9-12) are, according to Eqs. (4-28) and (4-29),

$$N = B^tW_cB = B^t(AQA^t)^{-1}B \qquad (9-14)$$

and

$$t = B^tW_cf = B^t(AQA^t)^{-1}f, \qquad (9-15)$$

and the least squares estimates of the parameters are, from Eq. (4-38),

$$\Delta = N^{-1}t. \qquad (9-16)$$

The reader should recognize that Eq. (9-13) is identical to Eq. (4-53) in Chapter 4, and should understand that the use of the symbols $Q_c$ and $W_c$ in Chapter 4 was justified since they, indeed, represent cofactor and weight matrices, respectively, of the set of equivalent observations.
In the second derivation procedure we shall apply the minimum criterion directly. Recall from Chapter 4 that the minimum criterion for observations with a weight matrix W is

$$\phi = v^tWv = \text{minimum}. \qquad (9-17)$$

Although in Chapter 4 we restricted ourselves to uncorrelated observations for the sake of simplicity, this expression applies to all cases regardless of the structure of the weight matrix. Thus, W may be a full matrix, for which case the observations are correlated with equal or unequal precision; or it may be a diagonal matrix for uncorrelated observations with unequal precision; or it may be a scalar or identity matrix reflecting uncorrelated observations with equal precision.

The linear, or linearized, condition equations are

$$Av + B\Delta = f. \qquad (9-18)$$

When they are originally linear, then from Eq. (9-8)

$$f = d - A\ell. \qquad (9-19)$$

The more common case is to have the conditions nonlinear with c functions of the general form

$$F(\ell, x) = 0, \qquad (9-20)$$

in which ℓ is the vector of n observations and x the vector of u parameters. Thus, the linearization of Eq. (9-20) leads to Eq. (9-18), with

$$A = \frac{\partial F}{\partial \ell}, \quad B = \frac{\partial F}{\partial x}, \quad f = -F(\ell, x_0), \qquad (9-21)$$

where the three matrices A, B, and f are evaluated at the numerical values ℓ for the observations and a set of u approximate values $x_0$ for the unknown parameters.

In a manner similar to the technique of adjustment of observations only (see Section 4.4), a vector k of c Lagrange multipliers is used. The minimum criterion then becomes

$$\phi' = v^tWv - 2k^t(Av + B\Delta - f) = \text{minimum}. \qquad (9-22)$$

To achieve a minimum, the partial derivatives of φ′ with respect to both v and Δ must be equated to zero, or

$$\frac{\partial\phi'}{\partial v} = 2v^tW - 2k^tA = 0$$

$$\frac{\partial\phi'}{\partial\Delta} = -2k^tB = 0,$$

which after transposition and rearrangement become

$$Wv = A^tk \qquad (9-23)$$
$$B^tk = 0. \qquad (9-24)$$

Solving Eq. (9-23) for v, we get

$$v = W^{-1}A^tk = QA^tk, \qquad (9-25)$$

and substituting into Eq. (9-18), we get

$$AQA^tk + B\Delta = f,$$

which, in view of Eq. (9-11), becomes

$$Q_ck = f - B\Delta. \qquad (9-26)$$

Solving Eq. (9-26) for k yields

$$k = Q_c^{-1}(f - B\Delta) = W_c(f - B\Delta). \qquad (9-27)$$

Finally, substituting into Eq. (9-24) gives

$$B^tW_c(f - B\Delta) = 0$$

or

$$(B^tW_cB)\Delta = B^tW_cf, \qquad (9-28)$$

i.e.,

$$N\Delta = t, \qquad (9-29)$$

where

$$N = B^tW_cB = B^t(AQA^t)^{-1}B$$

and

$$t = B^tW_cf = B^t(AQA^t)^{-1}f,$$

which are precisely the relationships given by Eqs. (9-14) and (9-15). Equation (9-16) can then be used to solve for Δ.

If the condition equations are nonlinear, the vector estimate of the parameters is

$$x = x_0 + \Delta. \qquad (9-30)$$

The vector of residuals, v, is obtained by substituting the right-hand side of Eq. (9-27) for k in Eq. (9-25):

$$v = QA^tW_c(f - B\Delta). \qquad (9-31)$$

The vector of adjusted observations, $\hat\ell$, is obtained by adding v to the observation vector, ℓ, i.e.,

$$\hat\ell = \ell + v. \qquad (9-32)$$
EXAMPLE 9-2
Using the data of Example 9-1, calculate the adjusted coordinates of the three points.

Solution

$$\sigma_0^2 = 0.09\ \text{cm}^2$$

and

$$\Sigma = \text{diag}(0.04,\ 0.10,\ 0.04,\ 0.08,\ 0.04,\ 0.08)\ \text{cm}^2.$$

Thus

$$Q = \frac{1}{\sigma_0^2}\Sigma = \text{diag}(0.444,\ 1.111,\ 0.444,\ 0.889,\ 0.444,\ 0.889).$$

Values for the final iteration are

$$A = \begin{bmatrix}0.4518 & -1 & 0 & 0 & 0 & 0\\0 & 0 & 0.4518 & -1 & 0 & 0\\0 & 0 & 0 & 0 & 0.4518 & -1\end{bmatrix},$$

$$B = \begin{bmatrix}2 & 1\\4 & 1\\6 & 1\end{bmatrix}, \quad f = \begin{bmatrix}0.0393\\-0.0643\\0.0321\end{bmatrix}, \quad \Delta \approx 0,$$

$$W_c = \begin{bmatrix}0.8151 & 0 & 0\\0 & 1 & 0\\0 & 0 & 1\end{bmatrix}.$$

Thus

$$v = QA^tW_c(f - B\Delta) \approx QA^tW_cf = \begin{bmatrix}0.01\\-0.04\\-0.01\\0.06\\0.01\\-0.03\end{bmatrix}\ \text{cm}$$

and

$$\hat\ell = \ell + v = \begin{bmatrix}2.00\\3.20\\4.00\\4.00\\6.00\\5.00\end{bmatrix} + \begin{bmatrix}0.01\\-0.04\\-0.01\\0.06\\0.01\\-0.03\end{bmatrix} = \begin{bmatrix}2.01\\3.16\\3.99\\4.06\\6.01\\4.97\end{bmatrix}\ \text{cm},$$

i.e., the adjusted coordinates of the three points are

POINT   x (cm)   y (cm)
1       2.01     3.16
2       3.99     4.06
3       6.01     4.97

The reader can verify that these adjusted positions lie on the line $y = 0.452x + 2.26$.

In addition to the estimates themselves, it is equally important to calculate the precision of the estimates. This is discussed in the following section.

9.3. PRECISION ESTIMATION

To derive $Q_{\Delta\Delta}$, the cofactor matrix of the parameter estimates, Δ, we first substitute Eq. (9-15) into Eq. (9-16) to get

$$\Delta = N^{-1}B^tW_cf. \qquad (9-33)$$

The vector f is then replaced by $d - A\ell$, according to Eq. (9-19), to give

$$\Delta = N^{-1}B^tW_c(d - A\ell). \qquad (9-34)$$

The only random vector in Eq. (9-34) is ℓ. Its cofactor matrix is Q. Thus, the Jacobian of Δ with respect to ℓ is

$$J_{\Delta\ell} = -N^{-1}B^tW_cA, \qquad (9-35)$$

and from the propagation law expressed by Eq. (6-28), we get

$$Q_{\Delta\Delta} = J_{\Delta\ell}QJ_{\Delta\ell}^t = N^{-1}B^tW_c(AQA^t)W_cBN^{-1} = N^{-1}, \qquad (9-36)$$

since $W_c$ and N are symmetric and, from Eqs. (9-13) and (9-14), $W_c = (AQA^t)^{-1}$ and $N = B^tW_cB$, respectively.

If the condition equations are originally linear, then Δ is the vector of estimated parameters, and $Q_{\Delta\Delta}$, given by Eq. (9-36), is the corresponding cofactor matrix. If, however, the condition equations are nonlinear, they are linearized by a series expansion (see Chapter 2), and Δ is a vector of parameter corrections to be added to the approximations $x_0$. It is important, then, to derive the cofactor matrix $Q_{xx}$ for the final vector of estimates x. Equation (9-30) indicates that the final estimate is the sum of $x_0$ and Δ. If $x_0$ is taken as the parameter approximation at the beginning of the last iteration of the least squares solution and Δ is the final correction, then

$$Q_{xx} = Q_{\Delta\Delta} = N^{-1}, \qquad (9-37)$$

because $x_0$ can be regarded as a vector of numerical constants the components of which have been determined by all iterations preceding the last.

Equation (9-37) shows that the cofactor matrix of the parameters estimated by least squares turns out to be simply the inverse of the normal equations coefficient matrix N. Thus, the precision of the parameter estimates is a byproduct of the calculation of the parameter estimates themselves.

While Eq. (9-37) gives the cofactor matrix, or relative covariance matrix, of the estimated parameters, it is more often necessary to find the absolute precision of the parameter estimates, i.e., their covariance matrix, $\Sigma_{xx}$. If the reference variance, $\sigma_0^2$, is known beforehand (a priori), then it is a straightforward matter to calculate $\Sigma_{xx}$:

$$\Sigma_{xx} = \sigma_0^2Q_{xx} = \sigma_0^2N^{-1}. \qquad (9-38)$$

EXAMPLE 9-3
With reference to Example 9-1, calculate the covariance matrix for the two parameter estimates $\hat{a}$ and $\hat{b}$.

Solution
From Example 9-1, the cofactor matrix of the parameter estimates is

$$Q_{xx} = Q_{\Delta\Delta} = N_1^{-1} = \begin{bmatrix}0.1387 & -0.5729\\-0.5729 & 2.7219\end{bmatrix}.$$

The a priori reference variance $\sigma_0^2$ is 0.09 cm². Thus

$$\Sigma_{xx} = \sigma_0^2N^{-1} = \begin{bmatrix}0.0125 & -0.0516\\-0.0516 & 0.2450\end{bmatrix}.$$

If, however, $\sigma_0^2$ is not known beforehand, an estimate for it, $\hat\sigma_0^2$, can be calculated a posteriori from the results of the adjustment:

$$\hat\sigma_0^2 = \frac{v^tWv}{c - u} = \frac{v^tWv}{r}, \qquad (9-39)$$

where
v is the vector of observational residuals;
W is the a priori weight matrix of the observations;
c is the number of condition equations;
u is the number of parameters;
r is the redundancy (degrees of freedom), which is equal to $n - n_0$, $n_0$ being the total number of observations required to define the underlying model uniquely.*

The a posteriori covariance matrix of the parameter estimates is then

$$\Sigma_{xx} = \hat\sigma_0^2Q_{xx} = \hat\sigma_0^2N^{-1}. \qquad (9-40)$$

The a priori value of $\sigma_0^2$, if given, can be statistically tested by using $\hat\sigma_0^2$ in place of $s^2$ and r in place of n − 1 in Section 8.11, Chapter 8. When $\hat\sigma_0^2$ is consistent with $\sigma_0^2$, the latter should always be used in calculating covariance matrices such as $\Sigma_{xx}$. This is because the value of $\hat\sigma_0^2$ is only one estimate from one data set with limited redundancy, while $\sigma_0^2$ is presumed to be far better known. Should $\hat\sigma_0^2$ turn out to be inconsistent with $\sigma_0^2$, then several steps are taken to determine the reason. [This topic is outside the scope of this book; for those interested, see Mikhail (1976).]

To evaluate the quadratic form $v^tWv$, we use the relationship $v = QA^tW_c(f - B\Delta)$ of Eq. (9-31). Thus,

$$v^tWv = \left[QA^tW_c(f - B\Delta)\right]^tW\left[QA^tW_c(f - B\Delta)\right],$$

which reduces to

$$v^tWv = f^tW_cf - t^t\Delta, \qquad (9-41)$$

if we note that $W_c$, N, and $N^{-1}$ are symmetric, that QW = I, and that $W_c = (AQA^t)^{-1}$, $N = B^tW_cB$, $t = B^tW_cf$, and $\Delta = N^{-1}t$, from Eqs. (9-13) through (9-16), respectively.

When the condition equations are nonlinear, the iterative procedure is usually carried out until the final value of Δ is so small as to be essentially zero. Hence, for the nonlinear case, Eq. (9-41) reduces to

$$v^tWv = f^tW_cf. \qquad (9-42)$$

*It is interesting to note that when W = I and $n_0 = 1$ (so that r = n − 1), Eq. (9-39) reduces to $\hat\sigma_0^2 = v^tv/(n-1)$, the familiar expression for sample variance.

Two other cofactor matrices are of interest: $Q_{vv}$, the cofactor matrix of the residuals; and, more important, $Q_{\hat\ell\hat\ell}$, the cofactor matrix of the adjusted observations.

To derive $Q_{vv}$ we note from Eqs. (9-27), (9-15), and (9-19) that

$$k = W_c(f - BN^{-1}t) = W_c(I - BN^{-1}B^tW_c)f = W_c(I - BN^{-1}B^tW_c)(d - A\ell), \qquad (9-43)$$

and from Eq. (9-25) that

$$v = QA^tk.$$

Applying the general law of cofactor propagation to Eq. (9-43), and noting that d is a vector of constants, we obtain

$$Q_{kk} = \left[W_c(I - BN^{-1}B^tW_c)A\right]Q\left[W_c(I - BN^{-1}B^tW_c)A\right]^t,$$

which reduces to

$$Q_{kk} = W_c(I - BN^{-1}B^tW_c), \qquad (9-44)$$

since $I - BN^{-1}B^tW_c$ is idempotent.* Applying the general law of cofactor propagation to Eq. (9-25), we obtain

$$Q_{vv} = QA^tQ_{kk}AQ = QA^tW_c(I - BN^{-1}B^tW_c)AQ. \qquad (9-45)$$

To derive $Q_{\hat\ell\hat\ell}$, we note from Eqs. (9-32), (9-25) and (9-43) that

$$\hat\ell = \ell + v = \left[I - QA^tW_c(I - BN^{-1}B^tW_c)A\right]\ell + c', \qquad (9-46)$$

where $c' = QA^tW_c(I - BN^{-1}B^tW_c)d$, a vector of constants. Applying the general law of cofactor propagation to Eq. (9-46) we get

$$Q_{\hat\ell\hat\ell} = \left[I - QA^tW_c(I - BN^{-1}B^tW_c)A\right]Q\left[I - QA^tW_c(I - BN^{-1}B^tW_c)A\right]^t,$$

which reduces to

$$Q_{\hat\ell\hat\ell} = Q - QA^tW_c(I - BN^{-1}B^tW_c)AQ. \qquad (9-47)$$

*An idempotent matrix has the property that it is equal to its square.

It is important to recognize from Eqs. (9-45) and (9-47) that

$$Q_{\hat\ell\hat\ell} = Q - Q_{vv}, \qquad (9-48)$$

which is the same relationship as given by Eqs. (6-46) and (6-61).

EXAMPLE 9-4
With reference to the problem of Examples 9-1 and 9-2, calculate the covariance matrix for the adjusted coordinates.

Solution
From Examples 9-1 and 9-2,

$$Q = \text{diag}(0.444,\ 1.111,\ 0.444,\ 0.889,\ 0.444,\ 0.889),$$

$$A = \begin{bmatrix}0.4518 & -1 & 0 & 0 & 0 & 0\\0 & 0 & 0.4518 & -1 & 0 & 0\\0 & 0 & 0 & 0 & 0.4518 & -1\end{bmatrix}, \quad B = \begin{bmatrix}2 & 1\\4 & 1\\6 & 1\end{bmatrix},$$

$$W_c = \begin{bmatrix}0.8151 & 0 & 0\\0 & 1 & 0\\0 & 0 & 1\end{bmatrix}, \quad N^{-1} = \begin{bmatrix}0.1387 & -0.5729\\-0.5729 & 2.7219\end{bmatrix}.$$

Thus,

$$BN^{-1}B^tW_c = \begin{bmatrix}0.8030 & 0.3941 & -0.1969\\0.3212 & 0.3579 & 0.3217\\-0.1605 & 0.3217 & 0.8403\end{bmatrix}$$

and

$$I - BN^{-1}B^tW_c = \begin{bmatrix}0.1970 & -0.3941 & 0.1969\\-0.3212 & 0.6421 & -0.3217\\0.1605 & -0.3217 & 0.1597\end{bmatrix},$$

and

$$QA^t = \begin{bmatrix}0.20 & 0 & 0\\-1.111 & 0 & 0\\0 & 0.20 & 0\\0 & -0.889 & 0\\0 & 0 & 0.20\\0 & 0 & -0.889\end{bmatrix}.$$

Using Eq. (9-45), the cofactor matrix of the residual vector v can be calculated:

$$Q_{vv} = QA^tW_c(I - BN^{-1}B^tW_c)AQ = \begin{bmatrix}0.006 & -0.036 & -0.013 & 0.057 & 0.006 & -0.029\\-0.036 & 0.198 & 0.071 & -0.317 & -0.036 & 0.159\\-0.013 & 0.071 & 0.026 & -0.114 & -0.013 & 0.057\\0.057 & -0.317 & -0.114 & 0.507 & 0.057 & -0.254\\0.006 & -0.036 & -0.013 & 0.057 & 0.006 & -0.028\\-0.029 & 0.159 & 0.057 & -0.254 & -0.028 & 0.126\end{bmatrix}.$$

From Eq. (9-48) we can get the cofactor matrix of the adjusted coordinates, $Q_{\hat\ell\hat\ell} = Q - Q_{vv}$:

$$Q_{\hat\ell\hat\ell} = \begin{bmatrix}0.438 & 0.036 & 0.013 & -0.057 & -0.006 & 0.029\\0.036 & 0.913 & -0.071 & 0.317 & 0.036 & -0.159\\0.013 & -0.071 & 0.418 & 0.114 & 0.013 & -0.057\\-0.057 & 0.317 & 0.114 & 0.382 & -0.057 & 0.254\\-0.006 & 0.036 & 0.013 & -0.057 & 0.438 & 0.028\\0.029 & -0.159 & -0.057 & 0.254 & 0.028 & 0.763\end{bmatrix}.$$

Finally, since $\sigma_0^2 = 0.09\ \text{cm}^2$, the covariance matrix of the adjusted coordinates is

$$\Sigma_{\hat\ell\hat\ell} = \sigma_0^2Q_{\hat\ell\hat\ell} = \begin{bmatrix}0.0394 & 0.0032 & 0.0012 & -0.0051 & -0.0005 & 0.0026\\0.0032 & 0.0822 & -0.0064 & 0.0285 & 0.0032 & -0.0143\\0.0012 & -0.0064 & 0.0376 & 0.0103 & 0.0012 & -0.0051\\-0.0051 & 0.0285 & 0.0103 & 0.0344 & -0.0051 & 0.0229\\-0.0005 & 0.0032 & 0.0012 & -0.0051 & 0.0394 & 0.0025\\0.0026 & -0.0143 & -0.0051 & 0.0229 & 0.0025 & 0.0687\end{bmatrix}\ \text{cm}^2.$$
9.4. SPECIAL CASES
In Chapter 4, two specific techniques of least squares adjustment were presented. These
techniques are:
1. Adjustment of indirect observations.
2. Adjustment of observations only.
After studying the general case covered in this chapter, it should be clear that the two
techniques of Chapter 4 are comparatively simpler. They are, in fact, two special cases of
the general technique, as was indicated in Section 9.2.
Adjustment of Indirect Observations
With reference to Section 9.2, this special case is achieved when u (the number of parameters carried in the adjustment) is equal to its upper limit, $n_0$ (the minimum number of observations necessary for a unique determination), making c (the number of condition equations) equal to its upper limit, n (the number of observations).

According to Eq. (9-18), the initial condition equations can be expressed in general linear or linearized form as

$$A_0v + B_0\Delta = f_0, \qquad (9-49)$$

where $A_0$, $B_0$, and $f_0$ are the initial matrix and vector inputs.

When n = c, the $A_0$ matrix becomes a square n × n matrix. Furthermore, since the condition equations are independent, $A_0$ must be nonsingular, i.e., $A_0$ must have an inverse. Thus, if Eq. (9-49) is premultiplied through by $A_0^{-1}$, and B and f are set equal to $A_0^{-1}B_0$ and $A_0^{-1}f_0$, respectively, we get

$$v + B\Delta = f, \qquad (9-50)$$

which is identical to Eq. (9-2), the form of the condition equations for adjustment of indirect observations. The distinctive feature of this equation is that the coefficient matrix of v is an identity matrix.

With A replaced by the identity matrix, we have, from Eqs. (9-11), (9-13), (9-14), and (9-15),

$$Q_c = IQI = Q \qquad (9-51)$$

$$W_c = Q_c^{-1} = Q^{-1} = W \qquad (9-52)$$

$$N = B^tW_cB = B^tWB \qquad (9-53)$$
and

$$t = B^tW_cf = B^tWf. \qquad (9-54)$$

Equations (9-53) and (9-54) are identical to Eqs. (4-28) and (4-29), respectively, developed in Chapter 4.

When A = I and $W_c = W$, as they are in this special case, Eq. (9-31) reduces to

$$v = f - B\Delta, \qquad (9-55)$$

which is identical to Eq. (4-32), an obvious rearrangement of the condition equation.

If N is calculated using Eq. (9-53), $Q_{\Delta\Delta} = N^{-1}$, as given by Eq. (9-36), is identical to the cofactor matrix for Δ obtained in Eq. (6-43).

Finally, with A = I and $W_c = W$, Eq. (9-45) reduces to

$$Q_{vv} = Q - BN^{-1}B^t, \qquad (9-56)$$

which is identical to Eq. (6-44); and Eq. (9-47) reduces to

$$Q_{\hat\ell\hat\ell} = BN^{-1}B^t, \qquad (9-57)$$

which is identical to Eq. (6-45).

Thus, in every respect, the method of least squares adjustment of indirect observations is a special case of the general procedure.
Adjustment of Observations Only
With reference once more to Section 9.2, this special case is achieved when u is equal to its lower limit, zero, making c equal to its lower limit, r (the redundancy). With no parameters carried in the adjustment, the BΔ term in Eq. (9-18) vanishes, leaving

A v = f,   (9-58)

which is identical to Eq. (9-3), the form of the condition equations for adjustment of observations only.
To make the BΔ term vanish we can simply set B = 0. If this is done, Eq. (9-27) reduces to

k = W_c f,   (9-59)

which is identical to Eq. (4-52). Equation (9-25), which is

v = Q Aᵗ k,   (9-25)

needs no reduction since it is already identical to Eq. (4-48).
As for the cofactor matrices, when B = 0, Eq. (9-45) reduces to

Q_vv = Q Aᵗ W_c A Q,   (9-60)

which is identical to Eq. (6-58), and Eq. (9-47) reduces to

Q_ℓ̂ℓ̂ = Q − Q Aᵗ W_c A Q,   (9-61)

which is identical to Eq. (6-60). Obviously, the relationship expressed by Eq. (9-48),

Q_ℓ̂ℓ̂ = Q − Q_vv,

is consistent with Eqs. (9-60) and (9-61).
This should make it clear that the technique of adjustment of observations only is also a special case of the general procedure.
In principle, a problem that can be solved by the general method of adjustment can also be
solved by using the special procedures, provided appropriate transformations are made.
The necessary transformations may be relatively complicated, however, and it is not
advocated that they be attempted. Instead, it is suggested that the most appropriate of the
three available techniques be selected to solve a specific adjustment problem.
Selection of the most appropriate technique is based on experience.
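To make the relationship among the three techniques concrete, the following sketch (in Python with NumPy, an illustration language assumed here rather than taken from the text) carries out one iteration of the general case, Eqs. (9-11) through (9-39). The function name and its inputs are hypothetical.

    import numpy as np

    def general_adjustment(A, B, f, Q):
        """One iteration of the general least squares adjustment.

        A : (c, n) coefficient matrix of the observations
        B : (c, u) coefficient matrix of the parameters
        f : (c,)   constant-terms vector of the condition equations
        Q : (n, n) a priori cofactor matrix of the observations
        """
        Qc = A @ Q @ A.T                      # Eq. (9-11)
        Wc = np.linalg.inv(Qc)                # Eq. (9-13)
        N = B.T @ Wc @ B                      # Eq. (9-14)
        t = B.T @ Wc @ f                      # Eq. (9-15)
        delta = np.linalg.solve(N, t)         # Eq. (9-16)
        v = Q @ A.T @ Wc @ (f - B @ delta)    # Eq. (9-31)
        Q_dd = np.linalg.inv(N)               # Eq. (9-36)
        r = A.shape[0] - B.shape[1]           # redundancy r = c - u
        s0_sq = v @ np.linalg.inv(Q) @ v / r  # Eq. (9-39), a posteriori variance
        return delta, v, Q_dd, s0_sq

    # Adjustment of indirect observations is the special case A = I (so Wc = W);
    # adjustment of observations only is the special case u = 0, where the B-delta
    # term is absent and the solution reduces to k = Wc f, v = Q A^t k.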

9.5. SUMMARY OF SYMBOLS AND EQUATIONS

The basic symbols and equations of the least squares method are summarized for quick
reference when solving adjustment problems.

Symbols

ℓ    is the vector of observations.
ℓ̂    is the vector of adjusted observations.
v    is the vector of residuals.
x    is the vector of parameters.
x⁰   is the vector of approximate values for x.
Δ    is the vector of parameter estimates (linear case) or of parameter corrections (nonlinear case).
x̂    is the vector of parameter estimates (nonlinear case).
d    is a vector of constants.
f    is the condition equations constant terms vector.
ℓ_c  is the vector of equivalent observations.
n    is the number of given measurements, and thus the number of elements in ℓ, ℓ̂, and v.
n₀   is the number of observations necessary to specify uniquely the model that underlies the adjustment problem.
r    is the redundancy, or the number of statistical degrees of freedom.
u    is the number of parameters carried in the adjustment, and thus the number of elements in Δ, x, x⁰, and x̂.
c    is the number of independent condition equations, and the number of elements in d, f, and ℓ_c.
A    is a c × n coefficient matrix for the observations.
B    is a c × u coefficient matrix for the parameters.
Q    is the n × n a priori cofactor matrix of ℓ.
W    is the n × n a priori weight matrix of ℓ.
Q_c  is the c × c cofactor matrix of ℓ_c.
W_c  is the c × c weight matrix of ℓ_c.
N    is the u × u coefficient matrix of the normal equations.
t    is the u × 1 vector of "constants" in the normal equations.
k    is the c × 1 vector of Lagrange multipliers.
Q_ℓ̂ℓ̂, Q_x̂x̂, Q_vv, and Q_ℓℓ are the cofactor matrices of ℓ̂, x̂, v, and ℓ, respectively.
σ₀²  is the reference variance (a priori value).
σ̂₀²  is the least squares estimate of the reference variance (a posteriori value).
Σ_ℓ̂ℓ̂, Σ_x̂x̂, Σ_vv, and Σ_ℓℓ are the covariance matrices of ℓ̂, x̂, v, and ℓ, respectively.

Equations, General Case

r = n − n₀   (9-4)
c = r + u   (9-5)
0 ≤ u ≤ n₀   (9-6)
r ≤ c ≤ n   (9-7)

The general linear or linearized form of the condition equations is

A v + B Δ = f,   (9-18)

where for the linear case

f = d − A ℓ,   (9-19)

and for the nonlinear case

A = ∂F/∂ℓ,  B = ∂F/∂x,  f = −F(ℓ, x⁰),   (9-21)

in which

F = F(ℓ̂, x̂) = 0.   (9-20)

The least squares solution is

Q_c = A Q Aᵗ   (9-11)
W_c = Q_c⁻¹ = (A Q Aᵗ)⁻¹   (9-13)
N = Bᵗ W_c B = Bᵗ (A Q Aᵗ)⁻¹ B   (9-14)
t = Bᵗ W_c f = Bᵗ (A Q Aᵗ)⁻¹ f   (9-15)
Δ = N⁻¹ t   (9-16)
x̂ = x⁰ + Δ   (9-30)
v = Q Aᵗ W_c (f − B Δ)   (9-31)
ℓ̂ = ℓ + v.   (9-32)

The cofactor matrices are

Q_ΔΔ = N⁻¹   (9-36)
Q_x̂x̂ = N⁻¹   (9-37)
Q_ℓ̂ℓ̂ = Q − Q Aᵗ W_c (I − B N⁻¹ Bᵗ W_c) A Q   (9-47)
Q_ℓ̂ℓ̂ = Q − Q_vv.   (9-48)

The estimate of the reference variance is

σ̂₀² = vᵗ W v / (c − u) = vᵗ W v / r,   (9-39)

where

vᵗ W v = fᵗ W_c f − tᵗ Δ.   (9-41)

The covariance matrices are

Σ_x̂x̂ = σ₀² Q_x̂x̂ = σ₀² N⁻¹, if σ₀² is known a priori;
Σ_x̂x̂ = σ̂₀² Q_x̂x̂ = σ̂₀² N⁻¹, if σ₀² is not known a priori;   (9-38)

Σ_ℓ̂ℓ̂ = σ₀² Q_ℓ̂ℓ̂, if σ₀² is known a priori;
Σ_ℓ̂ℓ̂ = σ̂₀² Q_ℓ̂ℓ̂, if σ₀² is not known a priori.   (9-40)

Equations, Special Case: Adjustment of Indirect Observations

u = n₀ (upper limit)   (9-6)
c = n (upper limit).   (9-7)

The n condition equations are of the form

v + B Δ = f.   (9-50)

The least squares solution is

W_c = Q_c⁻¹ = Q⁻¹ = W   (9-52)
N = Bᵗ W_c B = Bᵗ W B   (9-53)
t = Bᵗ W_c f = Bᵗ W f   (9-54)
Δ = N⁻¹ t   (9-16)
x̂ = x⁰ + Δ (nonlinear case)   (9-30)
v = f − B Δ   (9-55)
ℓ̂ = ℓ + v.   (9-32)

The cofactor matrices are

Q_ΔΔ = N⁻¹   (9-36)
Q_x̂x̂ = Q_ΔΔ = N⁻¹   (9-37)
Q_vv = Q − B N⁻¹ Bᵗ   (9-56)
Q_ℓ̂ℓ̂ = B N⁻¹ Bᵗ.   (9-57)

Equations, Special Case: Adjustment of Observations Only

u = 0 (lower limit)   (9-6)
c = r (lower limit).   (9-7)

The r condition equations are of the form

A v = f.   (9-58)

The least squares solution is

Q_c = A Q Aᵗ   (9-11)
W_c = Q_c⁻¹   (9-13)
k = W_c f   (9-59)
v = Q Aᵗ k   (9-25)
ℓ̂ = ℓ + v.   (9-32)

The cofactor matrices are

Q_vv = Q Aᵗ W_c A Q   (9-60)
Q_ℓ̂ℓ̂ = Q − Q_vv.   (9-48)

PROBLEMS
9-1  Figure 9-1 depicts a plane isosceles triangle ABC. The sides and height of the triangle are measured; the observations are ℓ₁, ℓ₂, and ℓ₃, as shown. If the base x is the only parameter to be carried in the adjustment, give the model elements (n, n₀, r, u, and c) and write the condition equations in the linearized form A v + B Δ = f.

Fig. 9-1.

9-2  In Fig. 9-2, distances OA, AB, BC, and CO are observed. The observed values are ℓ₁, ℓ₂, ℓ₃, and ℓ₄, respectively. All angles are held fixed. It is required to find least squares estimates for the coordinates of C (x₀ and y₀). (a) Write suitable condition equations for this problem in the form A v = f. (b) Write the condition equations in the form A v + B Δ = f, carrying x₀ and y₀ as the parameters.

9-3  Two sides and the three angles of the triangle in Fig. 9-3 are measured. The measured values are ℓ₁, ℓ₂, α₁, α₂, and α₃, as shown. Least squares estimates for side x and altitude h are required. Write suitable condition equations for this problem in the form A v + B Δ = f, carrying x and h as the parameters.

9-4  If in Problem 9-1, ℓ₁ = 1000.00 m, ℓ₂ = 1000.10 m, and ℓ₃ = 800.25 m, all uncorrelated and with equal precision, find the least squares estimate for x.

Fig. 9-2.

Fig. 9-3.

9-5  If in Problem 9-2, ℓ₁ = 1000.20 m, ℓ₂ = 500.55 m, ℓ₃ = 707.75 m, and ℓ₄ = 1118.60 m, and the covariance matrix for ℓ is

    | 200  100    0    0 |
    | 100  200    0    0 |  cm²,
    |   0    0  100   50 |
    |   0    0   50  100 |

find the least squares estimates for x₀ and y₀. Also determine the principal dimensions and orientation of the standard error ellipse for the computed position of C.

9-6  With reference to the triangle in Problem 9-3, α₁ = 40°00'00", α₂ = 95°00'00", α₃ = 45°00'30", ℓ₁ = 1000.00 m, and ℓ₂ = 1550.00 m. The observations are uncorrelated, the standard deviation of each observed angle is 15", and the standard deviation of each observed side is 0.10 m. Find least squares estimates for x and h. Evaluate also the covariance matrix for x and h, and construct 95% confidence intervals for x and h.
9-7  With reference to Fig. 9-4, the following angles are measured:

ℓ₁ = ANGLE AOB = 10°00'00"
ℓ₂ = ANGLE BOC = 8°00'00"
ℓ₃ = ANGLE AOC = 18°00'07"
ℓ₄ = ANGLE COD = 5°00'00"
ℓ₅ = ANGLE DOE = 12°00'00"
ℓ₆ = ANGLE BOE = 25°00'12"

Fig. 9-4.

The covariance matrix for the observation vector is a 6 × 6 symmetric matrix, in units of (seconds of arc)², with diagonal elements 4 and off-diagonal elements of 2 or 0. Find the least squares estimate for angle COD. Construct also a 99% confidence interval for the angle COD.
9-8  The following data are observed:

x(m)    y(m)
1.00    2.95
2.05    3.05
4.10    3.60
4.85    4.30
8.00    4.80
9.10    5.25

All observations (x and y) are independent and have the same precision. Find least squares estimates for the parameters of the straight line y = ax + b fitted to the data. Find also the a posteriori estimate for the reference variance and use this value to estimate the standard deviation of each observation and the standard deviations of the estimates for a and b. Test at the 2% level of significance the hypothesis that b = 2.70 m.

9-9  Rework Example 4-8, Chapter 4, using the time as an observed variable as well as the altitude, with standard deviations 1.0 s and 20", respectively.
9-10  Calculate the covariance matrix for the least squares estimates of the maximum altitude and the time at which maximum altitude occurs, as determined in Problem 9-9. Evaluate also the standard deviations and coefficient of correlation for these least squares estimates.

9-11  The angles shown in Fig. 9-5 are measured with a theodolite. The observed values and their weights are

ANGLE   OBSERVED VALUE   WEIGHT
a       70°14'30"
b       62°27'14"
c       103°38'26"
d       61°52'04"
e       109°10'04"
f       76°56'36"

Fig. 9-5.

(a) Use the method of least squares to determine adjusted values for these angles. (b) Construct 95% confidence intervals for angles b and f.
9-12  The following data are given for the level net in Fig. 9-6:

LINE   FROM   TO   OBSERVED ELEVATION DIFFERENCE (m)   DISTANCE (km)
1      A      B    +34.090                             2.90
2      B      C    +15.608                             6.15
3      C      A    −49.679                             17.32
4      A      D    +16.010                             15.84
5      D      B    +18.125                             9.22
6      D      C    +33.704                             10.50

Fig. 9-6.

The elevation of A is fixed at 324.120 m above mean sea level. The variance of each elevation difference is directly proportional to the distance.
(a) Determine least squares estimates for the elevations of B, C, and D and evaluate the cofactor matrix for these estimates. (b) Evaluate the a posteriori reference variance and construct 95% confidence intervals for the elevations of B, C, and D.

Applications in Plane Coordinate Surveys
10.1. INTRODUCTION
Many survey projects are based upon two-dimensional positioning within a plane rectangular coordinate system. This chapter addresses the application of least squares adjustment in such plane coordinate surveys. It includes formulation and linearization of the three basic condition equations (distance, azimuth, and angle) encountered in the adjustment of plane coordinates by the method of indirect observations (also known as the method of variation of coordinates), least squares position adjustment for a typical procedure employed in plane coordinate surveying, and least squares transformation of coordinates from one plane rectangular system to another.
10.2. THE DISTANCE CONDITION AND ITS LINEARIZATION
The adjusted distance, Ŝ_ij, between two points i and j is given by

Ŝ_ij = [(X̂_j − X̂_i)² + (Ŷ_j − Ŷ_i)²]^(1/2),   (10-1)

where (X̂_i, Ŷ_i) and (X̂_j, Ŷ_j) are the rectangular plane coordinates of i and j, respectively. This is the distance condition.
Linearization of Eq. (10-1) according to Eq. (2-18) is

Ŝ_ij = S⁰_ij + (∂Ŝ_ij/∂X_i) ΔX_i + (∂Ŝ_ij/∂Y_i) ΔY_i + (∂Ŝ_ij/∂X_j) ΔX_j + (∂Ŝ_ij/∂Y_j) ΔY_j,   (10-2)

where

S⁰_ij = [(X⁰_j − X⁰_i)² + (Y⁰_j − Y⁰_i)²]^(1/2),   (10-3)

noting that (X⁰_i, Y⁰_i) and (X⁰_j, Y⁰_j) are approximate values for the coordinates of i and j, respectively.
Equation (10-2) assumes that all four coordinates are unknown variables for which approximations are necessary. If one of the two points is a control point, the coordinates of this point would be known and would thus be taken as constants. In this case, Eq. (10-2) would include only two partial derivatives; namely, the derivatives with respect to the two coordinates of the unknown point. For example, if point j is a control point, Eq. (10-2) reduces to

Ŝ_ij = S⁰_ij + (∂Ŝ_ij/∂X_i) ΔX_i + (∂Ŝ_ij/∂Y_i) ΔY_i,   (10-4)

in which

S⁰_ij = [(X_j − X⁰_i)² + (Y_j − Y⁰_i)²]^(1/2),   (10-5)

noting that (X_j, Y_j) are the known coordinates of the control point j, and (X⁰_i, Y⁰_i) are approximate values for the coordinates of the unknown point i. For the remainder of this section we will consider the general case of having all four coordinates as unknowns.
Now, according to Eq. (9-32), the adjusted distance is

Ŝ_ij = S_ij + v_ij,   (10-6)

where S_ij is the observed value of the distance and v_ij is the corresponding residual. Thus, it follows from Eqs. (10-2) and (10-6) that

v_ij − (∂Ŝ_ij/∂X_i) ΔX_i − (∂Ŝ_ij/∂Y_i) ΔY_i − (∂Ŝ_ij/∂X_j) ΔX_j − (∂Ŝ_ij/∂Y_j) ΔY_j = S⁰_ij − S_ij.   (10-7)

Denoting the negatives of the partial derivatives by

b₁ = −∂Ŝ_ij/∂X_i,  b₂ = −∂Ŝ_ij/∂Y_i,  b₃ = −∂Ŝ_ij/∂X_j,  b₄ = −∂Ŝ_ij/∂Y_j,

and letting

f_ij = S⁰_ij − S_ij,   (10-8)

we obtain

v_ij + b₁ ΔX_i + b₂ ΔY_i + b₃ ΔX_j + b₄ ΔY_j = f_ij.   (10-9)

If we let

v = [v_ij],  B = [b₁  b₂  b₃  b₄],  Δ = [ΔX_i  ΔY_i  ΔX_j  ΔY_j]ᵗ,  f = [f_ij],

we see that Eq. (10-9) can be written in the familiar matrix form

v + B Δ = f   (10-10)

of the condition equation for adjustment of indirect observations.


The first element of the B matrix is

b₁ = −∂Ŝ_ij/∂X_i = −(1/2)[(X̂_j − X̂_i)² + (Ŷ_j − Ŷ_i)²]^(−1/2) · 2(X̂_j − X̂_i)(−1) = (X̂_j − X̂_i)/Ŝ_ij.   (10-11)

Since b₁ must be evaluated before the final (adjusted) coordinates of i and j are known, approximate values must be used. Thus,

b₁ = (X⁰_j − X⁰_i)/S⁰_ij.   (10-12)

Similarly, the remaining elements of the B matrix are

b₂ = (Y⁰_j − Y⁰_i)/S⁰_ij,   (10-13)
b₃ = −(X⁰_j − X⁰_i)/S⁰_ij = −b₁,   (10-14)

and

b₄ = −(Y⁰_j − Y⁰_i)/S⁰_ij = −b₂.   (10-15)
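As a concrete check of Eqs. (10-8) and (10-12) through (10-15), the short sketch below (Python with NumPy, an illustration language assumed here) builds the B-matrix row and constant term for one measured distance. The function name and arguments are illustrative, not from the text.

    import numpy as np

    def distance_condition_row(Xi0, Yi0, Xj0, Yj0, S_obs):
        """B-matrix row [b1 b2 b3 b4] and constant term f for one distance,
        following Eqs. (10-8) and (10-12) through (10-15)."""
        dX = Xj0 - Xi0
        dY = Yj0 - Yi0
        S0 = np.hypot(dX, dY)          # Eq. (10-3): approximate distance
        b1 = dX / S0                   # Eq. (10-12)
        b2 = dY / S0                   # Eq. (10-13)
        b3, b4 = -b1, -b2              # Eqs. (10-14), (10-15)
        f = S0 - S_obs                 # Eq. (10-8)
        return np.array([b1, b2, b3, b4]), f

For line PA of Example 10-1 in Section 10.5, this reproduces b₁ = −0.68168 and b₂ = −0.73165.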

10.3. THE AZIMUTH CONDITION AND ITS LINEARIZATION
The azimuth α_ij of a line from point i to point j (in that specific direction) is defined as the horizontal angle measured clockwise from the reference meridian to the line. Since it is common practice for plane coordinate surveys in North America to measure azimuths clockwise from the north, we shall adopt this convention and work with it consistently in order to avoid confusion.
Figure 10-1 shows a plane coordinate system with the positive Y-axis north and the positive X-axis east. Also shown is a line between two points i and j having plane coordinates (X_i, Y_i) and (X_j, Y_j), respectively. The adjusted azimuth of this line is given by

α̂_ij = arctan[(X̂_j − X̂_i)/(Ŷ_j − Ŷ_i)].   (10-16)

This is the azimuth condition.

Fig. 10-1.

The azimuth, of course, can have a value anywhere between 0° and 360°. It is important, then, to make sure that the quadrant of α_ij is correctly determined. This means that the signs of both the numerator (X_j − X_i) and the denominator (Y_j − Y_i) of the tangent function must be taken into account. If a computational aid, such as a hand calculator or computer, is used and it yields only a positive or negative acute angle β_ij, an appropriate constant must be added to β_ij in order to obtain the azimuth. Table 10-1 provides the necessary information to ensure that the correct value of α_ij is obtained. See also Fig. 10-2.

TABLE 10-1
X_j − X_i   Y_j − Y_i   QUADRANT          AZIMUTH α_ij
Positive    Positive    I   (0°–90°)      α_ij = β_ij
Positive    Negative    II  (90°–180°)    α_ij = β_ij + 180°
Negative    Negative    III (180°–270°)   α_ij = β_ij + 180°
Negative    Positive    IV  (270°–360°)   α_ij = β_ij + 360°
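In code, the quadrant bookkeeping of Table 10-1 is usually delegated to the two-argument arctangent. The sketch below (Python, an assumed choice) is one way to obtain a 0°–360° azimuth consistent with the table.

    import math

    def azimuth_deg(Xi, Yi, Xj, Yj):
        """Azimuth of line i -> j, clockwise from north (the Y-axis), in [0, 360)."""
        # atan2 takes (easting difference, northing difference) for a
        # clockwise-from-north convention and resolves the quadrant itself.
        a = math.degrees(math.atan2(Xj - Xi, Yj - Yi))
        return a % 360.0

For line EC of Example 10-1 below, this returns 350.29°, matching the hand computation in Section 10.5.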

Linearization of Eq. (10-16) yields

α̂_ij = α⁰_ij + (∂α̂_ij/∂X_i) ΔX_i + (∂α̂_ij/∂Y_i) ΔY_i + (∂α̂_ij/∂X_j) ΔX_j + (∂α̂_ij/∂Y_j) ΔY_j,   (10-17)

where

α⁰_ij = arctan[(X⁰_j − X⁰_i)/(Y⁰_j − Y⁰_i)].   (10-18)

Fig. 10-2.

Now the adjusted azimuth is

α̂_ij = α_ij + v_ij,   (10-19)

where α_ij is the observed value of the azimuth and v_ij is the corresponding residual. Thus, from Eqs. (10-17) and (10-19), we get

v_ij − (∂α̂_ij/∂X_i) ΔX_i − (∂α̂_ij/∂Y_i) ΔY_i − (∂α̂_ij/∂X_j) ΔX_j − (∂α̂_ij/∂Y_j) ΔY_j = α⁰_ij − α_ij.   (10-20)

Denoting the negatives of the partial derivatives by

b₁ = −∂α̂_ij/∂X_i,  b₂ = −∂α̂_ij/∂Y_i,  b₃ = −∂α̂_ij/∂X_j,  b₄ = −∂α̂_ij/∂Y_j,

and letting

f_ij = α⁰_ij − α_ij,   (10-21)

we obtain

v_ij + b₁ ΔX_i + b₂ ΔY_i + b₃ ΔX_j + b₄ ΔY_j = f_ij,   (10-22)

which can also be expressed in the usual matrix form as given by Eq. (10-10). To evaluate the coefficients b₁, b₂, b₃, and b₄, we recall from differential calculus that

∂(arctan t)/∂x = [1/(1 + t²)] ∂t/∂x,   (10-23)

for which it is assumed that t is a function of x. For

t = (X̂_j − X̂_i)/(Ŷ_j − Ŷ_i)  and  x = X̂_i,

we have

b₁ = −∂α̂_ij/∂X_i = −[(Ŷ_j − Ŷ_i)²/((X̂_j − X̂_i)² + (Ŷ_j − Ŷ_i)²)] · [−1/(Ŷ_j − Ŷ_i)] = (Ŷ_j − Ŷ_i)/Ŝ²_ij.   (10-24)

Using approximate values of the coordinates to evaluate b₁, we have

b₁ = (Y⁰_j − Y⁰_i)/(S⁰_ij)².   (10-25)

In similar fashion,

b₂ = −(X⁰_j − X⁰_i)/(S⁰_ij)²,   (10-26)
b₃ = −(Y⁰_j − Y⁰_i)/(S⁰_ij)² = −b₁,   (10-27)

and

b₄ = (X⁰_j − X⁰_i)/(S⁰_ij)² = −b₂.   (10-28)

It should be pointed out that all four coordinates are being carried as unknown variables. This is the general case. If either i or j is a control point, having known coordinates, the two terms of Eq. (10-20) that correspond to the known coordinates are dropped.
10.4. THE ANGLE CONDITION AND ITS LINEARIZATION
In surveying, horizontal angles may be turned to the right (clockwise) or to the left (counterclockwise). Since the azimuth has been defined as a clockwise angle, we shall consider here angles that are clockwise only.
In Fig. 10-3, θ̂_jik is the adjusted horizontal angle at point i measured clockwise from line ij to line ik. It is obvious that

θ̂_jik = α̂_ik − α̂_ij,   (10-29)

where α̂_ik is the adjusted azimuth of line ik, and α̂_ij is the adjusted azimuth of line ij. If the rectangular plane coordinates of i, j, and k are (X_i, Y_i), (X_j, Y_j), and (X_k, Y_k), respectively, it follows that

θ̂_jik = arctan[(X̂_k − X̂_i)/(Ŷ_k − Ŷ_i)] − arctan[(X̂_j − X̂_i)/(Ŷ_j − Ŷ_i)].   (10-30)

This is the angle condition.

Fig. 10-3.

In a manner similar to the presentation of Eq. (10-22) for the azimuth condition, the linearized form of the angle condition can be written as

v_jik + b₁ ΔX_i + b₂ ΔY_i + b₃ ΔX_j + b₄ ΔY_j + b₅ ΔX_k + b₆ ΔY_k = f_jik.   (10-31)

Note that

θ̂_jik = θ_jik + v_jik,   (10-32)

where θ_jik is the observed value of the angle and v_jik is the corresponding residual. Note also that

f_jik = θ⁰_jik − θ_jik,   (10-33)

where

θ⁰_jik = arctan[(X⁰_k − X⁰_i)/(Y⁰_k − Y⁰_i)] − arctan[(X⁰_j − X⁰_i)/(Y⁰_j − Y⁰_i)],   (10-34)

in which (X⁰_i, Y⁰_i), (X⁰_j, Y⁰_j), and (X⁰_k, Y⁰_k) are approximate values for the coordinates of i, j, and k, respectively. Finally, note that

b₁ = (Y⁰_k − Y⁰_i)/(S⁰_ik)² − (Y⁰_j − Y⁰_i)/(S⁰_ij)²   (10-35)
b₂ = −(X⁰_k − X⁰_i)/(S⁰_ik)² + (X⁰_j − X⁰_i)/(S⁰_ij)²   (10-36)
b₃ = (Y⁰_j − Y⁰_i)/(S⁰_ij)²   (10-37)
b₄ = −(X⁰_j − X⁰_i)/(S⁰_ij)²   (10-38)
b₅ = −(Y⁰_k − Y⁰_i)/(S⁰_ik)² = −b₁ − b₃   (10-39)
b₆ = (X⁰_k − X⁰_i)/(S⁰_ik)² = −b₂ − b₄,   (10-40)

where

(S⁰_ij)² = (X⁰_j − X⁰_i)² + (Y⁰_j − Y⁰_i)²   (10-41)

and

(S⁰_ik)² = (X⁰_k − X⁰_i)² + (Y⁰_k − Y⁰_i)².   (10-42)

The information provided by Table 10-1 is applicable as well to Eqs. (10-30) and (10-34).
In order to be general, the coordinates of all three points, i, j, and k, are carried as unknown variables in Eq. (10-31). In practice, one or two of the three points may be control points, in which case the corresponding correction terms in Eq. (10-31) would be dropped.
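Since Eqs. (10-35) through (10-40) are simply differences of azimuth coefficients for the lines ik and ij, the b-row for an angle can be assembled from two azimuth rows. The sketch below (Python, an assumed choice, with illustrative function names) does exactly that.

    def azimuth_b(Xi0, Yi0, Xj0, Yj0):
        """Azimuth-condition coefficients [b1 b2 b3 b4], Eqs. (10-25)-(10-28)."""
        dX, dY = Xj0 - Xi0, Yj0 - Yi0
        S2 = dX * dX + dY * dY
        return [dY / S2, -dX / S2, -dY / S2, dX / S2]

    def angle_b(Xi0, Yi0, Xj0, Yj0, Xk0, Yk0):
        """Angle-condition coefficients [b1 ... b6], Eqs. (10-35)-(10-40)."""
        bij = azimuth_b(Xi0, Yi0, Xj0, Yj0)   # line ij
        bik = azimuth_b(Xi0, Yi0, Xk0, Yk0)   # line ik
        b1 = bik[0] - bij[0]                  # Eq. (10-35)
        b2 = bik[1] - bij[1]                  # Eq. (10-36)
        b3, b4 = bij[0], bij[1]               # Eqs. (10-37), (10-38)
        b5, b6 = -bik[0], -bik[1]             # Eqs. (10-39), (10-40)
        return [b1, b2, b3, b4, b5, b6]

Note that b1 + b3 + b5 = 0 and b2 + b4 + b6 = 0, the checks implied by Eqs. (10-39) and (10-40).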
10.5. POSITION FIXING BY DISTANCE
Position fixing by distance is a typical plane coordinate survey procedure. In this procedure, the measurements are the distances between a set of known control points and a point whose position is unknown. Since we need to determine two coordinates for the unknown point, at least two distances must be measured. Whenever more than two distances are measured, the least squares procedure is used for calculating the adjusted position of the unknown point.
Since for each measured distance there is one known point and one unknown point, the form of the linearized condition equation is as given by Eq. (10-4). In this equation, j refers to the known point and i to the unknown point. It follows that Eq. (10-9) reduces to

v_ij + b₁ ΔX_i + b₂ ΔY_i = f_ij,   (10-43)

where, from Eq. (10-8),

f_ij = S⁰_ij − S_ij,

and, from Eqs. (10-12) and (10-13),

b₁ = (X_j − X⁰_i)/S⁰_ij  and  b₂ = (Y_j − Y⁰_i)/S⁰_ij,

noting from Eq. (10-5) that

S⁰_ij = [(X_j − X⁰_i)² + (Y_j − Y⁰_i)²]^(1/2).

Since Eq. (10-43) is in the linearized form of the condition equation for adjustment of indirect observations, this least squares adjustment procedure can be used to solve for the position of i. In principle, the least squares solution should be iterated in order to compensate for the higher order terms that are dropped in the linearization. However, if the initial approximations X⁰_i and Y⁰_i are close to the final values, as they normally should be, one iteration is usually sufficient.
The following example demonstrates the procedure.
EXAMPLE 10-1
With reference to Fig. 10-4, the position of an unknown point P is to be determined by measuring distances with an EDM instrument from P to five control points A, B, C, D, and E. The known positions of the control points and the observed distances are:

CONTROL POINT   x(m)      y(m)      OBSERVED DISTANCE (m)
A               698.41    1005.07   4122.109
B               580.14    2207.37   3444.530
C               5482.77   8503.22   4897.717
D               6191.16   7160.26   4129.233
E               6095.81   4920.30   2739.177

The a priori standard deviation of each distance measurement is given (in meters) by

σ = (0.020² + 0.040² S²)^(1/2),

where S is the distance in km. [See Eq. (7-22).] All measurements are assumed to be independent.
The reference variance is taken to be σ₀² = 0.0020 m² for S = 1 km, i.e., σ₀² = 0.020² + 0.040².

1. Compute an approximate position for P using the observed distances to C and E.
2. Determine the least squares (adjusted) position of P using all five observed distances.
3. Determine the adjusted values of the distances.
4. Evaluate the covariance matrix of the adjusted position of P based upon the a priori reference variance.
5. Evaluate the a posteriori estimate of the reference variance, and test this value against the given reference variance at the 5% level of significance.
6. Evaluate the semimajor axis, semiminor axis, and orientation of the standard error ellipse for P using the covariance matrix evaluated in Part 4.
7. Plot the 95% confidence region for the position of P.

Solution
1. Calculation of the approximate position of P. Figure 10-4 depicts the general layout of the problem.

Fig. 10-4.

ΔX_EC = X_C − X_E = 5482.77 − 6095.81 = −613.04 m
ΔY_EC = Y_C − Y_E = 8503.22 − 4920.30 = 3582.92 m.

Thus,

EC = (613.04² + 3582.92²)^(1/2) = 3634.987 m.

From the cosine law,

cos(PEC) = (EC² + PE² − PC²)/(2 · EC · PE)
         = (3634.987² + 2739.177² − 4897.717²)/(2 × 3634.987 × 2739.177)
         = −0.1642790.

Therefore (PEC) = 99.45535°.
Now the azimuth of EC is

α_EC = arctan(−613.04/3582.92) = 350.29067°.

Thus, the azimuth of EP is

α_EP = α_EC − (PEC) = 350.29067° − 99.45535° = 250.83532°,

and so

ΔX_EP = 2739.177 sin 250.83532° = −2587.369 m
ΔY_EP = 2739.177 cos 250.83532° = −899.229 m.

Thus, the approximate coordinates of P are

X⁰_P = X_E + ΔX_EP = 6095.81 − 2587.37 = 3508.44 m
Y⁰_P = Y_E + ΔY_EP = 4920.30 − 899.23 = 4021.07 m.
2. Least squares determination of the position of P. For each measured line, values for X_j − X⁰_P, Y_j − Y⁰_P, S⁰_ij, and f_ij are calculated and listed in Table 10-2, with X⁰_P = 3508.44 m, Y⁰_P = 4021.07 m, and

S⁰_ij = [(X_j − X⁰_P)² + (Y_j − Y⁰_P)²]^(1/2).

Table 10-2
LINE   X_j(m)    Y_j(m)    X_j−X⁰_P(m)   Y_j−Y⁰_P(m)   S⁰_ij(m)   S_ij(m)    f_ij = S⁰_ij − S_ij(m)
PA     698.41    1005.07   −2810.03      −3016.00      4122.199   4122.109    0.090
PB     580.14    2207.37   −2928.30      −1813.70      3444.481   3444.530   −0.049
PC     5482.77   8503.22    1974.33       4482.15      4897.719   4897.717    0.002
PD     6191.16   7160.26    2682.72       3139.19      4129.346   4129.233    0.113
PE     6095.81   4920.30    2587.37        899.23      2739.178   2739.177    0.001

From this table the b-coefficients can be calculated:

LINE   b₁ = (X_j−X⁰_P)/S⁰_ij   b₂ = (Y_j−Y⁰_P)/S⁰_ij
PA     −0.68168                −0.73165
PB     −0.85014                −0.52655
PC      0.40311                 0.91515
PD      0.64967                 0.76021
PE      0.94458                 0.32828

Thus, according to Eq. (10-43), the linearized condition equations are

    | v₁ |   | −0.68168  −0.73165 |            |  0.090 |
    | v₂ |   | −0.85014  −0.52655 | | ΔX_P |   | −0.049 |
    | v₃ | + |  0.40311   0.91515 | | ΔY_P | = |  0.002 |,
    | v₄ |   |  0.64967   0.76021 |            |  0.113 |
    | v₅ |   |  0.94458   0.32828 |            |  0.001 |

which is of the form v + BΔ = f.
The weights of the observed distances are obtained from the relationship w = σ₀²/σ², noting that σ₀² = 0.0020 m² and σ² = 0.020² + 0.040² S². Thus the following results hold:

LINE   S(km)   σ²(m²)   w = σ₀²/σ²
PA     4.12    0.0276   0.0725
PB     3.44    0.0193   0.1036
PC     4.90    0.0388   0.0515
PD     4.13    0.0277   0.0722
PE     2.74    0.0124   0.1613

Since the observations are independent, the weight matrix is

W = diag(0.0725, 0.1036, 0.0515, 0.0722, 0.1613).

Thus,

BᵗW = | −0.04942  −0.08807  0.02076  0.04691  0.15236 |
      | −0.05304  −0.05455  0.04713  0.05489  0.05295 |

N = BᵗWB = | 0.29132  0.18721 |
           | 0.18721  0.16977 |

t = BᵗWf = | 0.00536 |
           | 0.00425 |

N⁻¹ = |  11.781  −12.992 |
      | −12.992   20.217 |

Δ = N⁻¹t = | 0.008 | m,
           | 0.016 |

and the least squares position of P is

X̂_P = X⁰_P + ΔX_P = 3508.44 + 0.01 = 3508.45 m
Ŷ_P = Y⁰_P + ΔY_P = 4021.07 + 0.02 = 4021.09 m.

The corrections ΔX_P and ΔY_P are so small that the solution need not be iterated.
3. Adjusted distances. The residuals are

v = f − BΔ = (0.107, −0.034, −0.016, 0.096, −0.012)ᵗ m,

and so the adjusted distances are

Ŝ = S + v = (4122.109 + 0.107, 3444.530 − 0.034, 4897.717 − 0.016, 4129.233 + 0.096, 2739.177 − 0.012)ᵗ
          = (4122.216, 3444.496, 4897.701, 4129.329, 2739.165)ᵗ m.

4. Evaluation of the covariance matrix of the adjusted position of P.

Q_x̂x̂ = N⁻¹ = |  11.781  −12.992 |
              | −12.992   20.217 |

Σ_x̂x̂ = σ₀² Q_x̂x̂ = 0.0020 |  11.781  −12.992 | = |  0.0236  −0.0260 | m².
                          | −12.992   20.217 |   | −0.0260   0.0404 |

5. Evaluation and test of the a posteriori estimate of the reference variance.

vᵗWv = 0.0725(0.107)² + 0.1036(−0.034)² + 0.0515(−0.016)² + 0.0722(0.096)² + 0.1613(−0.012)² = 0.00165 m²

r = n − n₀ = 5 − 2 = 3  (degrees of freedom).

Thus

σ̂₀² = vᵗWv/r = 0.00165/3 = 0.00055 m²

and

σ̂₀ = (0.00055)^(1/2) = 0.023 m.

To test at the 5% level of significance, set α = 0.05. Then a = α/2 = 0.025 and b = 1 − α/2 = 0.975 [Eqs. (8-41) and (8-42)]. From Table III of Appendix B, χ²₀.₀₂₅,₃ = 0.216 and χ²₀.₉₇₅,₃ = 9.35. Thus

(χ²₀.₀₂₅,₃/r) σ₀² = (0.216/3)(0.0020) = 0.00014 m²

and

(χ²₀.₉₇₅,₃/r) σ₀² = (9.35/3)(0.0020) = 0.00623 m².

Now σ̂₀² = 0.00055 m². Since 0.00014 < σ̂₀² < 0.00623, the hypothesis that σ₀² = 0.0020 m² is accepted, i.e., σ̂₀² is consistent with σ₀² at the 5% level of significance (refer to Section 8.11).

6. Evaluation of the semimajor axis, semiminor axis, and orientation of the standard error ellipse for P. From Part 4,

Σ_x̂x̂ = |  0.0236  −0.0260 | m².
        | −0.0260   0.0404 |

Thus, σ²_x = 0.0236 m², σ²_y = 0.0404 m², and σ_xy = −0.0260 m², and so

(σ²_x + σ²_y)/2 = 0.0320 m²

and

{[(σ²_x − σ²_y)/2]² + σ²_xy}^(1/2) = 0.0273 m².

Thus,

σ²_x' = 0.0320 + 0.0273 = 0.0593 m²   [Eq. (8-56)]
σ²_y' = 0.0320 − 0.0273 = 0.0047 m².   [Eq. (8-57)]

The semimajor axis is

σ_x' = (0.0593)^(1/2) = 0.244 m.

The semiminor axis is

σ_y' = (0.0047)^(1/2) = 0.069 m.

Now,

tan 2θ = 2σ_xy/(σ²_x − σ²_y) = 2(−0.0260)/(0.0236 − 0.0404) = −0.0520/−0.0168 = 3.095.   [Eq. (8-55)]

Thus, 2θ = 252°, and the orientation θ of the ellipse is 126°.

7. Plot of the 95% confidence region for the position of P. For a probability of 0.95, c = 2.447 (Table 8-5). Thus, the semimajor axis of the 95% confidence ellipse is 2.447 × 0.244 = 0.597 m, and the semiminor axis is 2.447 × 0.069 = 0.169 m. Again, the orientation of the ellipse is 126°. The 95% confidence ellipse is plotted in Fig. 10-5.

Fig. 10-5.
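The whole of this example can be scripted. The following sketch (Python with NumPy, an assumed choice) reproduces Parts 2 through 5 to within rounding, using the control coordinates and observations given above.

    import numpy as np

    ctrl = np.array([[698.41, 1005.07], [580.14, 2207.37], [5482.77, 8503.22],
                     [6191.16, 7160.26], [6095.81, 4920.30]])   # A..E
    S_obs = np.array([4122.109, 3444.530, 4897.717, 4129.233, 2739.177])
    P0 = np.array([3508.44, 4021.07])       # approximate position from Part 1

    d = ctrl - P0                           # (X_j - X_P0, Y_j - Y_P0)
    S0 = np.hypot(d[:, 0], d[:, 1])         # approximate distances
    B = d / S0[:, None]                     # rows [b1, b2], Eqs. (10-12), (10-13)
    f = S0 - S_obs                          # Eq. (10-8)

    sigma2 = 0.020**2 + 0.040**2 * (S_obs / 1000.0)**2   # a priori variances
    W = np.diag(0.0020 / sigma2)                         # weights, sigma0^2 = 0.0020 m^2

    N = B.T @ W @ B
    t = B.T @ W @ f
    delta = np.linalg.solve(N, t)           # parameter corrections (m)
    v = f - B @ delta                       # residuals
    print(P0 + delta)                       # approx. [3508.45, 4021.09]
    print(v @ W @ v / (len(S_obs) - 2))     # approx. 0.00055 m^2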

10.6. TWO-PARAMETER SIMILARITY TRANSFORMATION

Figure 10-6 depicts the relationship between two plane rectangular coordinate systems (x, y) and (s, t), which have a common origin but are rotated one with respect to the other by an angle θ. The relationship between the two sets of coordinates (x_i, y_i) and (s_i, t_i) of any point i can be obtained by noting from Fig. 10-6 that

x_i = r_i cos φ_i   (10-44)
y_i = r_i sin φ_i   (10-45)

and

s_i = r_i cos(φ_i − θ)   (10-46)
t_i = r_i sin(φ_i − θ).   (10-47)

Thus

s_i = r_i cos φ_i cos θ + r_i sin φ_i sin θ = x_i cos θ + y_i sin θ   (10-48)

and

t_i = r_i sin φ_i cos θ − r_i cos φ_i sin θ = y_i cos θ − x_i sin θ.   (10-49)

Fig. 10-6.

In this development, the scale of the (s, t) system is identical to the scale of the (x, y) system. If, however, we introduce another plane rectangular coordinate system (x', y') such that for point i

x'_i = λ s_i   (10-50)

and

y'_i = λ t_i,   (10-51)

where λ is a scale factor, then (x', y') can have a scale that is different from that of (x, y). Thus,

x'_i = λ x_i cos θ + λ y_i sin θ = (λ cos θ) x_i + (λ sin θ) y_i   (10-52)

and

y'_i = λ y_i cos θ − λ x_i sin θ = −(λ sin θ) x_i + (λ cos θ) y_i.   (10-53)

Since there are two parameters, λ and θ, involved in Eqs. (10-52) and (10-53), and since it is more convenient not to work with trigonometric functions, this transformation from the (x, y) system to the (x', y') system can be simplified by introducing two new parameters:

a = λ cos θ   (10-54)

and

b = λ sin θ.   (10-55)

The transformation in its simplified form is then

x'_i = a x_i + b y_i   (10-56)
y'_i = −b x_i + a y_i.   (10-57)

In such a transformation, x_i, y_i may represent survey coordinates in a local rectangular system while x'_i, y'_i may be coordinates in a shifted UTM (Universal Transverse Mercator) system. The local system is oriented at an angle θ with respect to the UTM system, and the scale factor λ is that of the UTM projection.

When parameters a and b are found, it is easy to calculate λ and θ as follows:

λ = (a² + b²)^(1/2)   (10-58)
θ = arctan(b/a).   (10-59)

If one point only has known coordinates in both systems, Eqs. (10-56) and (10-57) can be solved directly for a and b. When two or more points have known coordinates in both systems, then redundancy will exist and a least squares solution is needed. If both sets of coordinates are observed variables, the general technique of adjustment presented in Chapter 9 is the appropriate approach to take. If, however, only the x, y coordinates or the x', y' coordinates are considered as observed variables, the technique of least squares adjustment of indirect observations can be applied. The technique of adjustment of observations only can also be applied after some preliminary manipulation of the condition equations.

Application of General Least Squares Adjustment

The two transformation equations, Eqs. (10-56) and (10-57), are first rewritten as follows:

f₁ᵢ = a x_i + b y_i − x'_i = 0   (10-60)
f₂ᵢ = −b x_i + a y_i − y'_i = 0,   (10-61)

and then linearized:

(∂f₁ᵢ/∂x_i) v_xᵢ + (∂f₁ᵢ/∂y_i) v_yᵢ + (∂f₁ᵢ/∂x'_i) v_x'ᵢ + (∂f₁ᵢ/∂y'_i) v_y'ᵢ + (∂f₁ᵢ/∂a) Δa + (∂f₁ᵢ/∂b) Δb = x'_i − a₀x_i − b₀y_i   (10-62)

(∂f₂ᵢ/∂x_i) v_xᵢ + (∂f₂ᵢ/∂y_i) v_yᵢ + (∂f₂ᵢ/∂x'_i) v_x'ᵢ + (∂f₂ᵢ/∂y'_i) v_y'ᵢ + (∂f₂ᵢ/∂a) Δa + (∂f₂ᵢ/∂b) Δb = y'_i + b₀x_i − a₀y_i.   (10-63)

In Eqs. (10-62) and (10-63), a₀ and b₀ are the parameter approximations, and Δa and Δb are their corresponding corrections. The partial derivatives are evaluated as follows:

∂f₁ᵢ/∂x_i = a₀,   ∂f₁ᵢ/∂y_i = b₀,   ∂f₁ᵢ/∂x'_i = −1,   ∂f₁ᵢ/∂y'_i = 0,
∂f₁ᵢ/∂a = x_i,    ∂f₁ᵢ/∂b = y_i,
∂f₂ᵢ/∂x_i = −b₀,  ∂f₂ᵢ/∂y_i = a₀,   ∂f₂ᵢ/∂x'_i = 0,    ∂f₂ᵢ/∂y'_i = −1,
∂f₂ᵢ/∂a = y_i,    ∂f₂ᵢ/∂b = −x_i.

In matrix form, Eqs. (10-62) and (10-63) are

    |  a₀  b₀  −1   0 | | v_xᵢ  |   | x_i   y_i | | Δa |   | x'_i − a₀x_i − b₀y_i |
    | −b₀  a₀   0  −1 | | v_yᵢ  | + | y_i  −x_i | | Δb | = | y'_i + b₀x_i − a₀y_i |,   (10-64)
                        | v_x'ᵢ |
                        | v_y'ᵢ |

which has the form A v + B Δ = f.
For n points, A is the 2n × 4n block-diagonal matrix built by repeating the 2 × 4 block of Eq. (10-64) once per point; v is the 4n × 1 vector (v_x₁, v_y₁, v_x'₁, v_y'₁, …, v_xₙ, v_yₙ, v_x'ₙ, v_y'ₙ)ᵗ; B is the 2n × 2 matrix whose pair of rows for point i is (x_i, y_i) and (y_i, −x_i); Δ = (Δa, Δb)ᵗ; and f is the 2n × 1 vector whose pair of elements for point i is x'_i − a₀x_i − b₀y_i and y'_i + b₀x_i − a₀y_i.

Three basic assumptions are made:

1. All coordinates are uncorrelated.
2. All x and y coordinates have the same standard deviation, σ₁.
3. All x' and y' coordinates have the same standard deviation, σ₂.

Under these assumptions, the covariance matrix of the coordinates is

Σ = diag(σ₁², σ₁², σ₂², σ₂², …, σ₁², σ₁², σ₂², σ₂²)  (4n × 4n)   (10-65)*

and the cofactor matrix is

Q = diag(Q₁, Q₁, Q₂, Q₂, …, Q₁, Q₁, Q₂, Q₂)  (4n × 4n),   (10-66)

where Q₁ = σ₁²/σ₀² and Q₂ = σ₂²/σ₀², noting that σ₀² is the reference variance. It then follows from Eq. (9-11) that

Q_c = A Q Aᵗ = (a₀² Q₁ + b₀² Q₁ + Q₂) I  (2n × 2n)   (10-67)
    = [(a₀² + b₀²) Q₁ + Q₂] I  (a scalar matrix).   (10-68)

Thus, from Eq. (9-13),

W_c = Q_c⁻¹ = w_c I  (2n × 2n),   (10-69)

where

w_c = 1/[(a₀² + b₀²) Q₁ + Q₂].

*The symbol "diag(a₁₁, a₂₂, …, aₘₘ)" represents a diagonal matrix whose main diagonal elements are a₁₁, a₂₂, …, aₘₘ. All other elements in the matrix are, of course, zero.

Now, from Eq. (9-14), the coefficient matrix of the normal equations is

N = Bᵗ W_c B = w_c Bᵗ B = w_c | Σ(x_i² + y_i²)        0         |,   (10-70)
                              |       0         Σ(x_i² + y_i²) |

and from Eq. (9-15) the vector of constants is

t = Bᵗ W_c f = w_c Bᵗ f = w_c | Σ(x_i x'_i + y_i y'_i) − a₀ Σ(x_i² + y_i²) |,   (10-71)
                              | Σ(y_i x'_i − x_i y'_i) − b₀ Σ(x_i² + y_i²) |

where all sums run from i = 1 to n. Thus, letting

p = Σ(x_i² + y_i²),   (10-72)
q = Σ(x_i x'_i + y_i y'_i),   (10-73)

and

q' = Σ(y_i x'_i − x_i y'_i),   (10-74)

the normal equations become

w_c | p  0 | | Δa |        | q − a₀ p  |
    | 0  p | | Δb | = w_c  | q' − b₀ p |,   (10-75)

and, according to Eq. (9-16), the solution to the normal equations is

| Δa |            1   | 1/p   0  |      | q − a₀ p  |   | q/p − a₀  |
| Δb | = N⁻¹ t = ――― |  0   1/p | w_c  | q' − b₀ p | = | q'/p − b₀ |,   (10-76)
                 w_c

noting that w_c should never be zero. Thus

Δa = q/p − a₀  and  Δb = q'/p − b₀,   (10-77)

and the corrected values for a and b are

â = a₀ + Δa = q/p   (10-78)

and

b̂ = b₀ + Δb = q'/p,   (10-79)

which indicates that â and b̂ can be computed directly from functions of coordinate values only, without requiring initial approximations a₀ and b₀. It is important to recognize that in this particular development w_c conveniently drops out. This is possible, of course, only if W_c is a scalar matrix, which depends on the basic assumptions made with respect to the creation of the covariance matrix Σ. If other conditions are imposed (for example, x and y are correlated, or the standard deviation of x differs from the standard deviation of y), W_c will likely be a nonscalar matrix, in which case the solution for â and b̂ is not as simple as that given by Eqs. (10-78) and (10-79).
With w_c dropping out of the solution, we find that standard deviations σ₁ and σ₂, or cofactors Q₁ and Q₂, have no influence at all on the computation of the parameter estimates â and b̂. The only requirement is that σ₁ and σ₂ cannot both be zero; otherwise the denominator of w_c would be zero, making a solution impossible. If σ₁ = 0 and σ₂ ≠ 0, the x, y coordinates are assumed fixed (errorless) and all adjustment goes into the x', y' coordinates; if σ₁ ≠ 0 and σ₂ = 0, the x', y' coordinates are fixed and all adjustment goes into the x, y coordinates. In either case, the same values for â and b̂ are obtained.
Although σ₁ and σ₂ may not have any influence on the evaluation of â and b̂ themselves, they do affect evaluation of the covariance matrix of â and b̂, which is

Σ = σ₀² Q_ΔΔ = σ₀² N⁻¹ = (σ₀²/w_c) | 1/p   0  | = [σ₀²/(w_c p)] I.   (10-80)
                                   |  0   1/p |

In determining the redundancy of this least squares solution, in which all four coordinates of each one of the n points involved are observations, we must realize that the minimum number of observations needed is

n₀ = 4 + 2(n − 1),   (10-81)

i.e., all four coordinates of one point are needed to find a and b, and two coordinates are needed to specify the position of each one of the n − 1 remaining points. The redundancy is thus

r = 4n − n₀ = 2n − 2.   (10-82)
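Equations (10-72) through (10-79) reduce the whole adjustment to three coordinate sums. A minimal sketch (Python with NumPy, an assumed choice) follows; the function name and array layout are illustrative.

    import numpy as np

    def two_param_similarity(xy, xy_prime):
        """Least squares a, b (Eqs. 10-78, 10-79) plus scale and rotation
        (Eqs. 10-58, 10-59) for x' = a x + b y, y' = -b x + a y."""
        x, y = xy[:, 0], xy[:, 1]
        xp, yp = xy_prime[:, 0], xy_prime[:, 1]
        p = np.sum(x**2 + y**2)               # Eq. (10-72)
        q = np.sum(x * xp + y * yp)           # Eq. (10-73)
        qp = np.sum(y * xp - x * yp)          # Eq. (10-74)
        a, b = q / p, qp / p                  # Eqs. (10-78), (10-79)
        scale = np.hypot(a, b)                # Eq. (10-58)
        theta = np.degrees(np.arctan2(b, a))  # Eq. (10-59), quadrant-safe
        return a, b, scale, theta

Applied to the three points of Example 10-2 below, this yields a ≈ 0.939177 and b ≈ 0.341793.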

EXAMPLE 10-2

Following are the coordinates of three points in two plane survey coordinate systems that have a common origin:

POINT   x(m)      y(m)      x'(m)     y'(m)
1       1314.31   2540.26   2102.35   1936.44
2       2078.70   3511.23   3152.60   2587.35
3       4900.60   5000.39   6311.60   3021.20

Each one of the x and y coordinates has a standard deviation of 0.20 m, and each one of the x' and y' coordinates has a standard deviation of 0.05 m. All coordinates are assumed to be uncorrelated.
Calculate the least squares estimates for the transformation parameters a and b that are used in the transformation of the x, y coordinates into the x', y' coordinates. Also, calculate the covariance matrix and standard deviations of â and b̂, and the scale and rotation parameter estimates, λ̂ and θ̂.

Solution
From Eqs. (10-72), (10-73), and (10-74),

p = Σ(x_i² + y_i²) = 73849842 m²
q = Σ(x_i x'_i + y_i y'_i) = 69358096 m²
q' = Σ(y_i x'_i − x_i y'_i) = 25241381 m².

From Eqs. (10-78) and (10-79), the least squares estimates of the transformation parameters are

â = q/p = 69358096/73849842 = 0.939177
b̂ = q'/p = 25241381/73849842 = 0.341793.

Now σ₁² = (0.20)² = 0.0400 m² and σ₂² = (0.05)² = 0.0025 m². Select σ₀² = 0.0100 m². Then

Q₁ = σ₁²/σ₀² = 0.0400/0.0100 = 4

and

Q₂ = σ₂²/σ₀² = 0.0025/0.0100 = 0.25.

Thus, from Eq. (10-69), with â and b̂ substituted for a₀ and b₀, respectively,

w_c = 1/[(â² + b̂²)Q₁ + Q₂] = 1/[0.998876(4) + 0.25] = 0.2355,

and so, from Eq. (10-80), the covariance matrix of â and b̂ is

Σ = (σ₀²/w_c)(1/p) I = (0.0100/0.2355)(1.35 × 10⁻⁸) I = | 5.73 × 10⁻¹⁰       0        |.
                                                        |      0        5.73 × 10⁻¹⁰ |

The standard deviations of â and b̂ are

σ_â = σ_b̂ = (5.73 × 10⁻¹⁰)^(1/2) = 2.39 × 10⁻⁵.

From Eqs. (10-58) and (10-59), the scale and rotation parameter estimates are

λ̂ = (â² + b̂²)^(1/2) = (0.998876)^(1/2) = 0.999438

and

θ̂ = arctan(b̂/â) = arctan(0.341793/0.939177) = 19.99787° = 19°59'52".
Application of Least Squares Adjustment of Indirect Observations
If the number of condition equations is equal to the number of observations, and there is only one observation in each condition equation, the special technique of least squares adjustment of indirect observations can be applied. Such is the case when the condition equations are Eqs. (10-56) and (10-57) and the coordinates x'_i and y'_i are the observations, with x_i and y_i fixed. If, however, x_i and y_i are the observations and x'_i and y'_i are fixed, Eqs. (10-56) and (10-57) are not in the appropriate form because each equation contains two observations. However, we can use an equivalent pair of condition equations that are in the appropriate form. These equations are obtained by first expressing Eqs. (10-56) and (10-57) in matrix form,

| x'_i |   |  a  b | | x_i |
| y'_i | = | −b  a | | y_i |,   (10-83)

and then premultiplying both sides of Eq. (10-83) by the inverse of the coefficient matrix to yield

| x_i |   |  a  b |⁻¹ | x'_i |       1      | a  −b | | x'_i |
| y_i | = | −b  a |   | y'_i | = ――――――――― | b   a | | y'_i |,   (10-84)
                                  a² + b²

or

x_i = [a/(a² + b²)] x'_i − [b/(a² + b²)] y'_i   (10-85)
y_i = [b/(a² + b²)] x'_i + [a/(a² + b²)] y'_i.   (10-86)

It should be obvious that Eqs. (10-85) and (10-86) are condition equations that satisfy the basic requirements for application of the special technique of adjustment of indirect observations. However, these condition equations are no longer linear in the parameters a and b. While it is possible to linearize these equations and then proceed with the least squares solution, a much simpler way to attack the problem is to replace the original two parameters a and b with two new parameters c and d such that

c = a/(a² + b²)   (10-87)
d = b/(a² + b²).   (10-88)

It is absolutely essential when changing parameters that the number of independent parameters remains the same. With this change, the condition equations become

x_i = c x'_i − d y'_i   (10-89)
y_i = d x'_i + c y'_i,   (10-90)

which are linear in c and d.


The least squares solution for the estimates of c and d now follows. Since the adjusted coordinates and parameter estimates must satisfy the condition equations, we have

x_i + v_xᵢ = ĉ x'_i − d̂ y'_i   (10-91)
y_i + v_yᵢ = d̂ x'_i + ĉ y'_i,   (10-92)

which, in matrix form, is

| v_xᵢ |   | x'_i  −y'_i | | ĉ |   | x_i |
| v_yᵢ | = | y'_i   x'_i | | d̂ | − | y_i |.   (10-93)

For n points, the matrices are

    | v_x₁ |        | x'₁  −y'₁ |                     | x₁ |
    | v_y₁ |        | y'₁   x'₁ |        | ĉ |        | y₁ |
v = |  ⋮   |,   B = |     ⋮     |,   Δ = | d̂ |,   f = | ⋮  |.
    | v_xₙ |        | x'ₙ  −y'ₙ |                     | xₙ |
    | v_yₙ |        | y'ₙ   x'ₙ |                     | yₙ |

Under the assumptions that all x_i and y_i coordinates are uncorrelated and have the same standard deviation, the weight matrix W can be set equal to an identity matrix I. Thus, according to Eq. (9-53), the coefficient matrix of the normal equations is

N = Bᵗ W B = Bᵗ B = | Σ(x'_i² + y'_i²)         0        |,   (10-94)
                    |        0         Σ(x'_i² + y'_i²) |

and, according to Eq. (9-54), the vector of constants is

t = Bᵗ W f = Bᵗ f = | Σ(x_i x'_i + y_i y'_i) |.   (10-95)
                    | Σ(y_i x'_i − x_i y'_i) |

Letting

p' = Σ(x'_i² + y'_i²),   (10-96)
q = Σ(x_i x'_i + y_i y'_i),  as in Eq. (10-73),

and

q' = Σ(y_i x'_i − x_i y'_i),  as in Eq. (10-74),

the normal equations are

| p'  0 | | ĉ |   | q  |
| 0  p' | | d̂ | = | q' |,   (10-97)

and the solution to the normal equations is

ĉ = q/p'  and  d̂ = q'/p'.   (10-98)

Least squares estimates for the original parameters a and b can then be obtained through inversion of Eqs. (10-87) and (10-88), i.e.,

â = ĉ/(ĉ² + d̂²)   (10-99)
b̂ = d̂/(ĉ² + d̂²).   (10-100)
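A sketch of this variant (Python, an assumed choice, with illustrative names), estimating c and d and then recovering a and b via Eqs. (10-99) and (10-100):

    import numpy as np

    def two_param_indirect(xy, xy_prime):
        """Indirect-observations variant: x, y observed, x', y' held fixed."""
        x, y = xy[:, 0], xy[:, 1]
        xp, yp = xy_prime[:, 0], xy_prime[:, 1]
        p_prime = np.sum(xp**2 + yp**2)         # Eq. (10-96)
        q = np.sum(x * xp + y * yp)             # Eq. (10-73)
        q_prime = np.sum(y * xp - x * yp)       # Eq. (10-74)
        c, d = q / p_prime, q_prime / p_prime   # Eq. (10-98)
        denom = c * c + d * d
        return c / denom, d / denom             # Eqs. (10-99), (10-100): a, b

With the data of Example 10-2 this returns the same â and b̂ as the general method, as Example 10-3 verifies by hand.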

The covariance matrix of ĉ and d̂ is, of course,

Σ_ĉd̂ = σ₀² Q_ĉd̂ = σ₀² N⁻¹ = (σ₀²/p') I.   (10-101)

To get the covariance matrix of â and b̂, we first obtain the Jacobian matrix

J = | ∂a/∂c  ∂a/∂d |        1       | d² − c²   −2cd    |
    | ∂b/∂c  ∂b/∂d | = ――――――――――― | −2cd      c² − d² |.   (10-102)
                       (c² + d²)²

Then the general law of propagation of variances and covariances, Eq. (6-20), is applied:

Σ_âb̂ = J Σ_ĉd̂ Jᵗ = (σ₀²/p')(â² + b̂²)² I.   (10-103)

EXAMPLE 10-3
Use the coordinate data given in Example 10-2 to calculate least squares estimates for the parameters a and b in accordance with the method of adjustment of indirect observations, given that the standard deviation of each x_i and y_i coordinate is 0.20 m, the x_i and y_i are uncorrelated, and the x'_i and y'_i coordinates are errorless. Calculate also the standard deviations of the estimates of a and b, and compare these values with those obtained from the general adjustment method based upon the statistical data given for this example.

Solution
From Eqs. (10-96), (10-73), and (10-74),

p' = Σ(x'_i² + y'_i²) = 73766886 m²
q = Σ(x_i x'_i + y_i y'_i) = 69358096 m², as before,
q' = Σ(y_i x'_i − x_i y'_i) = 25241381 m², as before.

From Eq. (10-98),

ĉ = q/p' = 69358096/73766886 = 0.9402335
d̂ = q'/p' = 25241381/73766886 = 0.3421777,

and, from Eqs. (10-99) and (10-100),

â = ĉ/(ĉ² + d̂²) = 0.9402335/(0.9402335² + 0.3421777²) = 0.939177

and

b̂ = d̂/(ĉ² + d̂²) = 0.3421777/(0.9402335² + 0.3421777²) = 0.341793.

Note that these values for â and b̂ are identical to the values obtained in Example 10-2. The standard deviations of â and b̂ are obtained directly from Eq. (10-103):

σ_â = σ_b̂ = σ₀ (â² + b̂²)/(p')^(1/2).

Now, since W = I, σ₀ = σ_x = σ_y = 0.20 m. Also,

â² + b̂² = 0.939177² + 0.341793² = 0.998876,

using the least squares estimates for a and b. Thus

σ_â = σ_b̂ = 0.20(0.998876)/(73766886)^(1/2) = 2.33 × 10⁻⁵.

To compare these values with those obtained from the general adjustment method, we first note that in this example σ₀ = σ₁ = 0.20 m and σ₂ = 0. Thus

Q₁ = 1,  Q₂ = 0,

and

w_c = 1/[(â² + b̂²)Q₁ + Q₂] = 1/0.998876 = 1.001125.

Thus, from Eq. (10-80), the standard deviations of â and b̂ are

σ_â = σ_b̂ = [σ₀²/(w_c p)]^(1/2) = 0.20/[1.001125 × 73849842]^(1/2) = 2.33 × 10⁻⁵,

which agrees with the value obtained using the technique of adjustment of indirect observations.
Adjustment of Observations Only
Least squares estimates for the transformation parameters a and b may also be calculated using the special technique of adjustment of observations only. Let coordinates x'_i and y'_i be taken as the observed variables while coordinates x_i and y_i are considered fixed. As already mentioned, one point with known coordinates in both coordinate systems (x, y and x', y') is sufficient to determine a and b. Since two observations x'_i and y'_i are involved per point, n₀ = 2. If n points are used to find least squares estimates for a and b, the total number of observations involved is 2n and the redundancy is 2n − 2. It then follows that for adjustment of observations only, 2n − 2 condition equations are to be written.
Since the transformation equations contain the parameters a and b, it is clear that some preliminary manipulation is necessary to obtain condition equations with observations only (i.e., no parameters). We proceed as follows. For the first point, the pair of transformation equations is

x'₁ = a x₁ + b y₁   (10-104)
y'₁ = −b x₁ + a y₁,   (10-105)

which can be expressed in the following matrix form:

| x'₁ |   | x₁   y₁ | | a |
| y'₁ | = | y₁  −x₁ | | b |.   (10-106)

Solving for a and b in terms of x₁, y₁, x'₁, and y'₁, we get

| a |   | x₁   y₁ |⁻¹ | x'₁ |        1       | x₁   y₁ | | x'₁ |
| b | = | y₁  −x₁ |   | y'₁ | = ――――――――――― | y₁  −x₁ | | y'₁ |,   (10-107)
                                  x₁² + y₁²

from which

a = (x₁x'₁ + y₁y'₁)/(x₁² + y₁²)   (10-108)

and

b = (y₁x'₁ − x₁y'₁)/(x₁² + y₁²).   (10-109)

For the second point, the pair of transformation equations is

x'₂ = a x₂ + b y₂   (10-110)
y'₂ = −b x₂ + a y₂,   (10-111)

which, after substitution for a and b, become

x'₂ (x₁² + y₁²) = (x₁x'₁ + y₁y'₁) x₂ + (y₁x'₁ − x₁y'₁) y₂   (10-112)
y'₂ (x₁² + y₁²) = −(y₁x'₁ − x₁y'₁) x₂ + (x₁x'₁ + y₁y'₁) y₂.   (10-113)

Similar equations can be written for the remaining points, yielding a total of 2n − 2 condition equations in terms of observations only.
Although it is possible to manipulate the transformation equations into a system of condition equations that can be used as a basis for this special technique of adjustment, it is also obvious that these equations are much more complicated than the condition equations used in the other techniques of adjustment. Moreover, this technique includes inversion of a matrix Q_c of order 2n − 2, which can be much larger than the simple 2 × 2 matrix N that is inverted in the other adjustment techniques. Clearly, the technique of least squares adjustment of observations only is not as well suited as the other techniques to this type of problem.
If this technique of adjustment is applied to the problem, the solution yields adjusted observations which can then be used to calculate â and b̂; for example, using the adjusted coordinates (x̂'₁, ŷ'₁) for point 1:

â = (x₁x̂'₁ + y₁ŷ'₁)/(x₁² + y₁²)   (10-114)
b̂ = (y₁x̂'₁ − x₁ŷ'₁)/(x₁² + y₁²).   (10-115)

It should be pointed out that the same values for â and b̂ will be obtained using the adjusted coordinates of any one of the other points.
It is left as an exercise for the reader to apply least squares adjustment of observations only to solve for a and b using the data given in Example 10-2.

10.7. FOUR-PARAMETER SIMILARITY TRANSFORMATION

In the discussion in Section 10.6, it was assumed that the two coordinate systems have a common origin. This simplified the transformation by reducing the number of parameters to two, a and b. More often in practice the two coordinate systems have different origins, and the general two-dimensional similarity transformation involves four parameters:

x' = a x + b y + k₁   (10-116)
y' = −b x + a y + k₂,

in which k₁ and k₂ are two shifts which represent the coordinates of the origin of the x, y coordinate system in the x', y' system (see Fig. 10-7). In a manner similar to that followed in the development in Section 10.6, and under the same stochastic assumptions, the coefficient matrix N and vector of constants t are found to be

           | Σ(x_i² + y_i²)        0          Σx_i    Σy_i |
N = w_c    |       0         Σ(x_i² + y_i²)   Σy_i   −Σx_i |  (symmetric)   (10-117)
           |      Σx_i            Σy_i          n       0  |
           |      Σy_i           −Σx_i          0       n  |

Fig. 10-7.

           | Σ(x_i x'_i + y_i y'_i) − a₀ Σ(x_i² + y_i²) − k₁₀ Σx_i − k₂₀ Σy_i |
t = w_c    | Σ(y_i x'_i − x_i y'_i) − b₀ Σ(x_i² + y_i²) − k₁₀ Σy_i + k₂₀ Σx_i |,   (10-118)
           | Σx'_i − a₀ Σx_i − b₀ Σy_i − n k₁₀                                |
           | Σy'_i + b₀ Σx_i − a₀ Σy_i − n k₂₀                                |

where all sums run from i = 1 to n,

and w_c is the scalar given by Eq. (10-69). Since N is not a diagonal matrix, the general solution would be to calculate

(Δa, Δb, Δk₁, Δk₂)ᵗ = N⁻¹ t,   (10-119)

add these corrections to the approximations a₀, b₀, k₁₀, k₂₀, and iterate the solution. A considerable simplification is possible when we replace the x_i, y_i coordinates by u_i, v_i whose origin is the centroid of all coordinates x_i, y_i, or

u_i = x_i − x̄ = x_i − (1/n) Σx_i
v_i = y_i − ȳ = y_i − (1/n) Σy_i.   (10-120)

When u_i, v_i are used, N becomes a diagonal matrix because

Σu_i = 0  and  Σv_i = 0.

Using the auxiliaries

p = Σ(u_i² + v_i²),  q = Σ(u_i x'_i + v_i y'_i),  q' = Σ(v_i x'_i − u_i y'_i),
r = Σx'_i,  r' = Σy'_i,

we get

N = w_c diag(p, p, n, n)
t = w_c (q − p a₀, q' − p b₀, r − n k₁₀, r' − n k₂₀)ᵗ.

Finally,

â = a₀ + Δa = a₀ + (1/p)(q − p a₀) = q/p  [see also Eq. (10-78)]   (10-121)
b̂ = b₀ + Δb = b₀ + (1/p)(q' − p b₀) = q'/p  [see also Eq. (10-79)]   (10-122)
k̂₁ = k₁₀ + Δk₁ = k₁₀ + (1/n)(r − n k₁₀) = r/n   (10-123)
k̂₂ = k₂₀ + Δk₂ = k₂₀ + (1/n)(r' − n k₂₀) = r'/n.   (10-124)

Notice, again, that under the assumption that all x_i, y_i (and consequently u_i, v_i) are uncorrelated and have equal variance σ₁², and all x'_i, y'_i are uncorrelated and have equal variance σ₂², the parameter estimates â, b̂, k̂₁, k̂₂ are calculated without need for approximations. Given â, b̂, k̂₁, k̂₂ and the coordinates x_p, y_p of any point P, its transformed coordinates x'_p, y'_p are calculated by first computing u_p = x_p − x̄ and v_p = y_p − ȳ and then

x'_p = â u_p + b̂ v_p + k̂₁  and  y'_p = −b̂ u_p + â v_p + k̂₂.

If the centroid (x̄, ȳ) of the x_i, y_i coordinates is used as an origin, the two shifts k₁, k₂ will drop out and this case reduces to the two-parameter transformation discussed in Section 10.6. The reader should ascertain this fact by analyzing Eqs. (10-117) through (10-124). It is also left as a recommended exercise for the reader to develop the case of least squares adjustment of indirect observations for the four-parameter transformation. [Those interested in further study of transformations and their adjustment should consult Mikhail (1976).]
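A sketch of the centroid-reduced solution, Eqs. (10-120) through (10-124) (Python with NumPy, an assumed choice; the function name is illustrative):

    import numpy as np

    def four_param_similarity(xy, xy_prime):
        """Four-parameter similarity via centroid reduction, Eqs. (10-120)-(10-124).
        Returns a, b and the shifts k1, k2 for centroid-reduced coordinates."""
        n = len(xy)
        u = xy[:, 0] - xy[:, 0].mean()          # Eq. (10-120)
        v = xy[:, 1] - xy[:, 1].mean()
        xp, yp = xy_prime[:, 0], xy_prime[:, 1]
        p = np.sum(u**2 + v**2)
        a = np.sum(u * xp + v * yp) / p         # Eq. (10-121)
        b = np.sum(v * xp - u * yp) / p         # Eq. (10-122)
        k1 = xp.sum() / n                       # Eq. (10-123)
        k2 = yp.sum() / n                       # Eq. (10-124)
        return a, b, k1, k2

Transformed coordinates of any point then follow from x' = a·u + b·v + k₁ and y' = −b·u + a·v + k₂, with u and v measured from the centroid of the x, y coordinates.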

PROBLEMS
10-1  The positions of stations B and C in Fig. 10-8 are known and fixed.

STATION   x(m)       y(m)
B         1000.000   1000.000
C         714.754    1380.328

Fig. 10-8.

The following observations are made: three angles of 17°11'15", 119°09'39", and 43°38'54", each with standard deviation 10", and two distances, b = 1404.615 m (σ_b = 0.080 m) and c = 1110.082 m (σ_c = 0.040 m). All observations are independent.
(a) Determine least squares estimates for the coordinates of station A. (b) Evaluate the covariance matrix for the position of A found in (a). (c) Compute the adjusted angles and distances and their covariance matrix.

10-2  The positions of stations A and C and the azimuths of AA' and CC' in Fig. 10-9 are known and fixed.

STATION   X(m)       Y(m)      LINE   AZIMUTH
A         421.420    352.115   AA'    327°32'50"
C         1862.977   195.987   CC'    16°32'09"

The following observations are made: angles of 91°03'29", 253°41'47", and 64°14'23", and distances d₁ = 890.455 m and d₂ = 921.300 m.

Fig. 10-9.

All observations are independent; the standard deviation of each angle observation is 10", and the standard deviation of each distance observation is 0.050 m.
(a) Determine least squares estimates for the coordinates of B. (b) Determine the standard error ellipse for the position of B. (c) Compute 90% and 99% confidence intervals for the adjusted distances d̂₁ and d̂₂. (d) Compute the coefficient of correlation between d̂₁ and d̂₂.

10-3  Angles α, β, and γ in Fig. 10-10 are observed at P with a theodolite. The observations are independent and have standard deviation σ = 2.5". The positions of A, B, C, and D are known and fixed. The given data are:

ANGLE   OBSERVED VALUE        POINT   X(m)       Y(m)
α       35°16'00"             A       5000.000   5371.180
β       16°52'33"             B       5266.841   5330.315
γ       32°24'51"             C       5256.564   5191.664
                              D       5236.650   4979.677

(a) Use two of the observed angle values and an appropriate resection procedure to obtain approximate coordinates for P. (b) Determine the least squares position for P. (c) Evaluate the covariance matrix for the position determined in (b) and compute the principal dimensions and orientation of the standard error ellipse. (d) Evaluate the a posteriori reference variance and use it to test the given value of σ at the 5% level of significance.

Fig. 10-10.

10-4  Following are the coordinates for four points in two plane coordinate systems that have a common origin:

POINT   x(m)      y(m)      x'(m)    y'(m)
1       779.94    250.15    317.15   191.71
2       2045.66   1743.46   718.23   981.58
3       902.51    1789.47   210.07   882.31
4       1548.31   −135.24   695.82   102.40

All coordinates are uncorrelated and have the same precision.
(a) Calculate least squares estimates for the transformation parameters a and b [Eqs. (10-56) and (10-57)] that are used to transform the x, y coordinates into the x', y' coordinates. (b) Calculate least squares estimates for the scale and rotation parameters λ and θ. (c) Evaluate the a posteriori reference variance and use it to compute the covariance matrix for the estimates of a and b and the covariance matrix for the estimates of λ and θ.

10-5  Following are the coordinates for six points in two plane coordinate systems that do not have a common origin:

POINT   x(m)      y(m)      x'(m)     y'(m)
1       1018.77   104.33    1221.04   3633.22
2       1016.60   935.85    2010.36   3597.72
3       2002.35   128.62    1195.02   2701.71
4       2000.99   1043.58   2061.10   2658.20
5       2994.24   99.42     1118.30   1765.28
6       2979.03   1050.87   2018.98   1732.26

APPENDIX A

An Introduction to Matrix Algebra

A.1. DEFINITIONS
A matrix is a group of numbers or symbols collected in an array form. The following are examples of matrices:

| 1  2  4 |     | 1 |                    | a  b |
| 6  0  3 |,    | 5 |,    [1  9  3],     | c  d |.
   (1)           (2)         (3)            (4)

Every matrix has a specified number of rows and a specified number of columns. Thus matrix (1), above, has 2 rows and 3 columns and is said to be a 2 × 3 matrix. Similarly, (2) is a 2 × 1 matrix, (3) is a 1 × 3 matrix, and (4) is a 2 × 2 matrix. The two numbers representing the rows and columns are referred to as the matrix dimensions.
A matrix is designated by a boldface capital Roman letter. Thus, an m × n matrix can be symbolically written as

    | a₁₁  a₁₂  …  a₁ₙ |
A = | a₂₁  a₂₂  …  a₂ₙ |.
    |  ⋮    ⋮        ⋮  |
    | aₘ₁  aₘ₂  …  aₘₙ |

A lowercase letter with a double subscript designates an element in a matrix. Thus a_ij represents a typical element of the matrix A. The first subscript, i, refers to the number of the row in which a_ij lies, starting with 1 at the top and proceeding down to m at the bottom. The second subscript, j, refers to the number of the column containing a_ij, starting with 1 at the left and proceeding to n at the right. Thus a_ij lies at the intersection of the ith row and jth column. For example, a₂₃ in matrix (1) above is 3, while a₁₂ in matrix (3) is 9.
The smallest matrix dimension is 1 × 1.


A.2. TYPES OF MATRICES
A square matrix is a matrix in which the number of rows equals the number of columns. In this case, A is a square matrix of order m. The principal (or main) diagonal of a square matrix is composed of all elements a_ij for which i = j. The following are examples of square matrices:

A = | 1  2 |,    B = | a  b  c |.
    | 3  4 |         | d  e  f |
                     | g  h  k |

The main diagonal of A is composed of the elements 1 and 4, while that of B contains the elements a, e, and k.
A row matrix, or row vector, is a matrix composed of only one row. It is designated by a lowercase boldface Roman letter. For example,

a = [a₁, a₂, …, aₙ]  (1 × n)   and   c = [1, 2, 4]  (1 × 3).

A column matrix, or column vector, is a matrix composed of only one column. For example,

    | b₁ |
b = | b₂ |  (m × 1)   and   d = | 1 |  (2 × 1).
    | ⋮  |                      | 3 |
    | bₘ |

A diagonal matrix is a square matrix in which all elements not on the main diagonal are zero. For example,

    | d₁₁   0   …   0  |
D = |  0   d₂₂  …   0  |,   where d_ij = 0 for all i ≠ j.
    |  ⋮         ⋱     |
    |  0    0   …  dₘₘ |

The following are examples of diagonal matrices:

    | 1  0  0 |              | p  0  0 |
G = | 0  0  0 |   and   H = | 0  q  0 |.
    | 0  0  3 |              | 0  0  r |

A scalar is defined as a single number.
A scalar matrix is a diagonal matrix whose main diagonal elements are all equal to the same scalar. For example,

| a  0  0 |                                       | 2  0  0 |
| 0  a  0 |   (a_ij = 0 for all i ≠ j,            | 0  2  0 |
| 0  0  a |    a_ij = a for all i = j)    and     | 0  0  2 |

are scalar matrices.
A unit or identity matrix is a diagonal matrix whose main diagonal elements are all equal to 1. A unit matrix will always be referred to by I. Thus,

    | 1  0  0 |
I = | 0  1  0 |   (a_ij = 0 for all i ≠ j; a_ij = 1 for all i = j).
    | 0  0  1 |

A null or zero matrix is a matrix whose elements are all zero. It is denoted by a boldface zero, 0.
A triangular matrix is a square matrix whose elements above (or below), but not including, the main diagonal are all zero. An upper triangular matrix takes the form

    | a₁₁  a₁₂  …  a₁ₙ |
A = |  0   a₂₂  …  a₂ₙ |,   with a_ij = 0 for i > j.
    |  ⋮         ⋱  ⋮  |
    |  0    0   …  aₘₘ |

The matrix

A = | 1  3  4 |
    | 0  1  0 |
    | 0  0  7 |

is an example of an upper triangular matrix of order 3.
A lower triangular matrix is of the form

    | a₁₁   0   …   0  |
A = | a₂₁  a₂₂  …   0  |,   where a_ij = 0 for i < j.
    |  ⋮         ⋱     |
    | aₘ₁  aₘ₂  …  aₘₘ |

The matrix

B = | 18   0 |
    |  2  11 |

is a lower triangular matrix of order 2.
A.3. EQUALITY OF MATRICES
Two matrices $\mathbf{A}$ and $\mathbf{B}$ of the same dimensions are equal if each element $a_{ij} = b_{ij}$ for all $i$ and
$j$. Matrices of different dimensions cannot be equated. Some relationships which apply to
matrix equality include:

$$\text{If } \mathbf{A} = \mathbf{B}\text{, then } \mathbf{B} = \mathbf{A} \text{ for all } \mathbf{A} \text{ and } \mathbf{B}. \tag{A-1}$$

$$\text{If } \mathbf{A} = \mathbf{B} \text{ and } \mathbf{B} = \mathbf{C}\text{, then } \mathbf{A} = \mathbf{C} \text{ for all } \mathbf{A}, \mathbf{B}, \text{ and } \mathbf{C}. \tag{A-2}$$

For example, let

$$
\mathbf{A} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}
\qquad \text{and} \qquad
\mathbf{B} = \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}.
$$

If $\mathbf{A} = \mathbf{B}$ (noting that both are $2 \times 2$ matrices), then $b_{11} = 1$, $b_{12} = 2$, $b_{21} = 3$, and $b_{22} = 4$.


A.4. SUMS OF MATRICES
The sum of two matrices $\mathbf{A}$ and $\mathbf{B}$, of the same dimensions, is a matrix $\mathbf{C}$ of the same
dimensions, the elements of which are given by $c_{ij} = a_{ij} + b_{ij}$ for all $i$ and $j$. Matrices of
different dimensions cannot be added. The following relationships apply to matrix
addition:

$$\mathbf{A} + \mathbf{B} = \mathbf{B} + \mathbf{A} \quad \text{(commutative law)} \tag{A-3}$$

$$\mathbf{A} + (\mathbf{B} + \mathbf{C}) = (\mathbf{A} + \mathbf{B}) + \mathbf{C} = \mathbf{A} + \mathbf{B} + \mathbf{C} \quad \text{(associative law)} \tag{A-4}$$

With $\mathbf{0}$ as the null or zero matrix, we have

$$\mathbf{A} + \mathbf{0} = \mathbf{0} + \mathbf{A} = \mathbf{A} \tag{A-5}$$

$$\mathbf{A} + (-\mathbf{A}) = \mathbf{0}, \tag{A-6}$$

where $-\mathbf{A}$ is the matrix composed of $-a_{ij}$ as elements. For example, if

$$
\mathbf{A} = \begin{bmatrix} 1 & 2 & 0 \\ 0 & 3 & 5 \end{bmatrix}, \qquad
\mathbf{B} = \begin{bmatrix} 6 & 4 & 2 \\ 3 & 2 & 7 \end{bmatrix}, \qquad \text{and} \qquad
\mathbf{C} = \begin{bmatrix} x & y & z \\ u & v & w \end{bmatrix},
$$

and $\mathbf{C} = \mathbf{B} - \mathbf{A}$, compute the values of the six elements $x, y, z, u, v, w$ of $\mathbf{C}$. First, compute

$$
\mathbf{B} - \mathbf{A} = \begin{bmatrix} 6 & 4 & 2 \\ 3 & 2 & 7 \end{bmatrix} - \begin{bmatrix} 1 & 2 & 0 \\ 0 & 3 & 5 \end{bmatrix} = \begin{bmatrix} 5 & 2 & 2 \\ 3 & -1 & 2 \end{bmatrix};
$$

then form

$$
\mathbf{C} = \begin{bmatrix} x & y & z \\ u & v & w \end{bmatrix} = \begin{bmatrix} 5 & 2 & 2 \\ 3 & -1 & 2 \end{bmatrix}.
$$

Thus, $x = 5$, $y = 2$, $z = 2$, $u = 3$, $v = -1$, and $w = 2$.

A.5. SCALAR MULTIPLICATION OF MATRICES


Multiplication of a matrix $\mathbf{A}$ by a scalar $\alpha$ results in another matrix $\mathbf{B} = \alpha\mathbf{A}$ whose elements
are $b_{ij} = \alpha a_{ij}$ for all $i$ and $j$. (In general, a scalar is denoted by a lowercase Greek letter.)
For example, for $\alpha = 3$ and

$$
\mathbf{A} = \begin{bmatrix} 7 & 2 \\ 3 & 4 \end{bmatrix},
$$

we get

$$
\mathbf{B} = 3\mathbf{A} = \begin{bmatrix} 21 & 6 \\ 9 & 12 \end{bmatrix}.
$$

The following relations hold for scalar multiplication:

$$\alpha(\mathbf{A} + \mathbf{B}) = \alpha\mathbf{A} + \alpha\mathbf{B} \tag{A-7}$$
$$(\alpha + \beta)\mathbf{A} = \alpha\mathbf{A} + \beta\mathbf{A} \tag{A-8}$$
$$\alpha(\mathbf{AB}) = (\alpha\mathbf{A})\mathbf{B} = \mathbf{A}(\alpha\mathbf{B}) \tag{A-9}$$
$$\alpha(\beta\mathbf{A}) = (\alpha\beta)\mathbf{A}. \tag{A-10}$$

A.6. MATRIX MULTIPLICATION


The product of two matrices is another matrix. The two matrices must be conformable
for multiplication, i.e., the number of columns of the first matrix must equal the number of
rows of the second matrix. Thus, if $\mathbf{A}$ is an $m \times k$ matrix and $\mathbf{B}$ is a $k \times n$ matrix, the product
$\mathbf{AB}$, in that order, is another matrix $\mathbf{C}$ with $m$ rows (as in $\mathbf{A}$) and $n$ columns (as in $\mathbf{B}$). Each
element $c_{ij}$ in $\mathbf{C}$ is obtained by multiplying each one of the $k$ elements of the $i$th row in $\mathbf{A}$
by the corresponding element in the $j$th column of $\mathbf{B}$ and adding. Algebraically, this is
written as

$$c_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{ik}b_{kj}. \tag{A-11}$$

This process may be shown schematically as follows:

$$
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1k} \\
\vdots & \vdots & & \vdots \\
a_{i1} & a_{i2} & \cdots & a_{ik} \\
\vdots & \vdots & & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mk}
\end{bmatrix}
\begin{bmatrix}
b_{11} & \cdots & b_{1j} & \cdots & b_{1n} \\
b_{21} & \cdots & b_{2j} & \cdots & b_{2n} \\
\vdots & & \vdots & & \vdots \\
b_{k1} & \cdots & b_{kj} & \cdots & b_{kn}
\end{bmatrix}
=
\begin{bmatrix}
c_{11} & \cdots & \cdots & c_{1n} \\
\vdots & c_{ij} & & \vdots \\
c_{m1} & \cdots & \cdots & c_{mn}
\end{bmatrix},
\qquad
c_{ij} = \sum_{r=1}^{k} a_{ir} b_{rj}. \tag{A-12}
$$
To illustrate the multiplication of matrices, let

$$
\underset{3,2}{\mathbf{A}} = \begin{bmatrix} 1 & 2 \\ 0 & 1 \\ 1 & 0 \end{bmatrix}, \qquad
\underset{2,2}{\mathbf{B}} = \begin{bmatrix} 1 & 1 \\ 2 & 0 \end{bmatrix}, \qquad
\underset{3,3}{\mathbf{C}} = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 2 \\ 1 & 0 & 1 \end{bmatrix}.
$$

Then

$$
\underset{3,2}{\mathbf{A}}\,\underset{2,2}{\mathbf{B}} = \begin{bmatrix} 1 & 2 \\ 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 1 & 1 \\ 2 & 0 \end{bmatrix}
= \begin{bmatrix} 1\cdot1+2\cdot2 & 1\cdot1+2\cdot0 \\ 0\cdot1+1\cdot2 & 0\cdot1+1\cdot0 \\ 1\cdot1+0\cdot2 & 1\cdot1+0\cdot0 \end{bmatrix}
= \begin{bmatrix} 5 & 1 \\ 2 & 0 \\ 1 & 1 \end{bmatrix},
$$

while

$$
\underset{3,3}{\mathbf{C}}\,\underset{3,2}{\mathbf{A}} = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 2 \\ 1 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 0 & 1 \\ 1 & 0 \end{bmatrix}
= \begin{bmatrix} 1\cdot1+0\cdot0+1\cdot1 & 1\cdot2+0\cdot1+1\cdot0 \\ 0\cdot1+1\cdot0+2\cdot1 & 0\cdot2+1\cdot1+2\cdot0 \\ 1\cdot1+0\cdot0+1\cdot1 & 1\cdot2+0\cdot1+1\cdot0 \end{bmatrix}
= \begin{bmatrix} 2 & 2 \\ 2 & 1 \\ 2 & 2 \end{bmatrix}.
$$

Matrix multiplication is not commutative; that is, in general $\mathbf{TS} \neq \mathbf{ST}$ even if both matrices
are square. For example, if

$$
\mathbf{T} = \begin{bmatrix} 1 & 2 \\ 5 & 0 \end{bmatrix} \qquad \text{and} \qquad \mathbf{S} = \begin{bmatrix} 3 & 4 \\ 0 & 2 \end{bmatrix},
$$

then

$$
\mathbf{TS} = \begin{bmatrix} 1 & 2 \\ 5 & 0 \end{bmatrix}\begin{bmatrix} 3 & 4 \\ 0 & 2 \end{bmatrix} = \begin{bmatrix} 3 & 8 \\ 15 & 20 \end{bmatrix},
$$

while

$$
\mathbf{ST} = \begin{bmatrix} 3 & 4 \\ 0 & 2 \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 5 & 0 \end{bmatrix} = \begin{bmatrix} 23 & 6 \\ 10 & 0 \end{bmatrix},
$$

with the obvious result that $\mathbf{TS} \neq \mathbf{ST}$.
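The same products can be reproduced with NumPy's @ operator, which implements Eq. (A-11); a minimal sketch for illustration:

    import numpy as np

    T = np.array([[1, 2], [5, 0]])
    S = np.array([[3, 4], [0, 2]])
    print(T @ S)    # [[ 3  8] [15 20]]
    print(S @ T)    # [[23  6] [10  0]]  -- TS != ST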
The following relationships regarding matrix multiplication hold:

$$\mathbf{AI} = \mathbf{IA} = \mathbf{A}, \text{ in which } \mathbf{I} \text{ is the unit or identity matrix} \tag{A-13}$$

$$\mathbf{A}(\mathbf{BC}) = (\mathbf{AB})\mathbf{C} = \mathbf{ABC} \quad \text{(associative law)} \tag{A-14}$$

$$\mathbf{A}(\mathbf{B} + \mathbf{C}) = \mathbf{AB} + \mathbf{AC} \tag{A-15}$$
$$(\mathbf{A} + \mathbf{B})\mathbf{C} = \mathbf{AC} + \mathbf{BC} \quad \text{(distributive laws)} \tag{A-16}$$
One important property of matrix multiplication which distinguishes it from scalar
multiplication is that the product of two matrices can be the null or zero matrix without
either matrix being the zero matrix. The following are examples of this property:

$$1. \quad \begin{bmatrix} 0 & 0 & 2 \end{bmatrix}\begin{bmatrix} 3 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \end{bmatrix}.$$

$$2. \quad \begin{bmatrix} 1 & -1 \\ 1 & -1 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} 3 & 5 \\ 3 & 5 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}.$$
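The second of these can be confirmed directly; a minimal NumPy sketch using the matrices above:

    import numpy as np

    A = np.array([[1, -1], [1, -1], [0, 0]])
    B = np.array([[3, 5], [3, 5]])
    print(A @ B)    # 3 x 2 null matrix, although neither A nor B is zero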

A.7. THE TRANSPOSE OF A MATRIX


The transpose of the $m \times n$ matrix $\mathbf{A}$ is an $n \times m$ matrix formed from $\mathbf{A}$ by interchanging
rows and columns such that the $i$th row of $\mathbf{A}$ becomes the $i$th column of the transposed
matrix. We denote the transpose of $\mathbf{A}$ by $\mathbf{A}^t$. If $\mathbf{B} = \mathbf{A}^t$, it follows that $b_{ij} = a_{ji}$ for all $i$ and $j$.
For example, if

$$
\mathbf{A} = \begin{bmatrix} 3 & 2 \\ 1 & 1 \end{bmatrix}, \quad \text{then} \quad \mathbf{A}^t = \begin{bmatrix} 3 & 1 \\ 2 & 1 \end{bmatrix};
$$

if

$$
\mathbf{B} = \begin{bmatrix} 1 & 6 \\ 0 & 4 \\ 5 & 0 \end{bmatrix}, \quad \text{then} \quad \mathbf{B}^t = \begin{bmatrix} 1 & 0 & 5 \\ 6 & 4 & 0 \end{bmatrix};
$$

if

$$
\mathbf{C} = [a \;\; b \;\; c], \quad \text{then} \quad \mathbf{C}^t = \begin{bmatrix} a \\ b \\ c \end{bmatrix}.
$$

The following relationships apply to the transpose of a matrix:

$$(\mathbf{A} + \mathbf{B})^t = \mathbf{A}^t + \mathbf{B}^t \tag{A-17}$$
$$(\mathbf{AB})^t = \mathbf{B}^t\mathbf{A}^t \tag{A-18}$$
$$(\mathbf{A}^t)^t = \mathbf{A} \tag{A-19}$$
$$(\alpha\mathbf{A})^t = \alpha\mathbf{A}^t. \tag{A-20}$$

Eq. (A-17) can be readily verified by recalling that matrix addition is element by element.
Thus, whether transposing follows addition, or addition follows transposing, the result will
be the same. Equation (A-18) is quite important, since it shows that transposing a matrix
product amounts to transposing each matrix and then reversing the sequence before
performing the multiplication. Since $\mathbf{AB}$ is an $m \times n$ matrix, $(\mathbf{AB})^t$ is, by definition, an $n \times m$
matrix, which is the same as the dimensions of the product $\underset{n,k}{\mathbf{B}^t}\,\underset{k,m}{\mathbf{A}^t}$. To illustrate Eq. (A-18), let

$$
\underset{2,3}{\mathbf{A}} = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 2 & 3 \end{bmatrix} \qquad \text{and} \qquad \underset{3,1}{\mathbf{B}} = \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix};
$$

then

$$
\mathbf{AB} = \mathbf{C} = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 2 & 3 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 2 \\ 8 \end{bmatrix}, \quad \text{so that} \quad \mathbf{C}^t = [2 \;\; 8];
$$

while

$$
\mathbf{B}^t\mathbf{A}^t = [1 \;\; 1 \;\; 2]\begin{bmatrix} 1 & 0 \\ 1 & 2 \\ 0 & 3 \end{bmatrix} = [2 \;\; 8],
$$

which is equal to $\mathbf{C}^t$.
Equations (A-19) and (A-20) are straightforward and should be verified by the reader,
using numerical examples.
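As one such numerical check, Eq. (A-18) can be retested with the matrices of the example above; a NumPy sketch added for illustration:

    import numpy as np

    A = np.array([[1, 1, 0], [0, 2, 3]])
    B = np.array([[1], [1], [2]])
    print((A @ B).T)    # [[2 8]]
    print(B.T @ A.T)    # [[2 8]], equal to (AB)^t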
When the original matrix is square, the operation of transposition does not affect the
elements of the main diagonal. For example, for

$$
\mathbf{A} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}
$$

the transpose is

$$
\mathbf{A}^t = \begin{bmatrix} a & c \\ b & d \end{bmatrix}.
$$

It follows that if the matrix is the identity matrix $\mathbf{I}$, a diagonal matrix $\mathbf{D}$, or a scalar matrix
$\mathbf{K}$, it is equal to its transpose. Hence, $\mathbf{I}^t = \mathbf{I}$, $\mathbf{D}^t = \mathbf{D}$, and $\mathbf{K}^t = \mathbf{K}$.

If $\mathbf{x}$ is a column matrix, also called a column vector, then $\mathbf{x}^t\mathbf{x}$ is a positive scalar which is
equal to the sum of the squares of the vector components, or the square of its length. For
example, if

$$
\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}, \quad \text{then} \quad
\mathbf{x}^t\mathbf{x} = [x_1 \;\; x_2 \;\; x_3]\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = x_1^2 + x_2^2 + x_3^2.
$$

On the other hand, $\mathbf{x}\mathbf{x}^t$ is a square symmetric matrix, as explained in the following section.

A.8. SYMMETRIC MATRICES


A square matrix is symmetric if it is equal to its transpose, i.e., $\mathbf{A}$ is symmetric if $\mathbf{A}^t = \mathbf{A}$.
Since transposing a matrix does not change the elements of the main diagonal, the
elements above the main diagonal of a symmetric matrix are "mirror images" of those
below the diagonal. For example,

$$
\begin{bmatrix} 3 & 2 & 1 \\ 2 & 5 & 6 \\ 1 & 6 & 4 \end{bmatrix}, \qquad
\begin{bmatrix} a & b \\ b & c \end{bmatrix}, \qquad \text{and} \qquad
\begin{bmatrix} 6 & 3 & 0 & 1 \\ 3 & 7 & 2 & 0 \\ 0 & 2 & 4 & 0 \\ 1 & 0 & 0 & 5 \end{bmatrix}
$$

are symmetric matrices.

For any matrix $\mathbf{A}$ (not necessarily square), both $\mathbf{A}^t\mathbf{A}$ and $\mathbf{A}\mathbf{A}^t$ are symmetric. The proof is
direct; for example, if $\mathbf{C} = \mathbf{A}\mathbf{A}^t$, then $\mathbf{C}^t = (\mathbf{A}\mathbf{A}^t)^t = (\mathbf{A}^t)^t\mathbf{A}^t = \mathbf{A}\mathbf{A}^t = \mathbf{C}$; i.e., $\mathbf{C}$ is
symmetric. If $\mathbf{B}$ is a symmetric matrix of suitable dimensions, then for any matrix $\mathbf{A}$, both
$\mathbf{A}\mathbf{B}\mathbf{A}^t$ and $\mathbf{A}^t\mathbf{B}\mathbf{A}$ are symmetric. For example, let

$$
\mathbf{A} = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 2 & 1 \end{bmatrix} \qquad \text{and} \qquad \mathbf{B} = \begin{bmatrix} 3 & 1 \\ 1 & 4 \end{bmatrix};
$$

then

$$
\mathbf{A}^t\mathbf{B}\mathbf{A} = \begin{bmatrix} 1 & 0 \\ 1 & 2 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 3 & 1 \\ 1 & 4 \end{bmatrix}\begin{bmatrix} 1 & 1 & 0 \\ 0 & 2 & 1 \end{bmatrix}
= \begin{bmatrix} 3 & 1 \\ 5 & 9 \\ 1 & 4 \end{bmatrix}\begin{bmatrix} 1 & 1 & 0 \\ 0 & 2 & 1 \end{bmatrix}
= \begin{bmatrix} 3 & 5 & 1 \\ 5 & 23 & 9 \\ 1 & 9 & 4 \end{bmatrix},
$$

which is obviously symmetric.
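A quick numerical confirmation of this example, using the same A and B (a NumPy sketch added for illustration):

    import numpy as np

    A = np.array([[1, 1, 0], [0, 2, 1]])
    B = np.array([[3, 1], [1, 4]])
    M = A.T @ B @ A
    print(M)                        # [[ 3  5  1] [ 5 23  9] [ 1  9  4]]
    print(np.array_equal(M, M.T))   # True: A^t B A is symmetric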
A.9. THE INVERSE OF A MATRIX
Division of matrices is not defined. In fact, we may have $\mathbf{AB} = \mathbf{AC}$ without having $\mathbf{B} = \mathbf{C}$.
This implies that the operation of "dividing" by $\mathbf{A}$, even if $\mathbf{A} \neq \mathbf{0}$, is not possible. As an
example, let

$$
\mathbf{A} = \begin{bmatrix} 2 & 0 \\ 4 & 0 \end{bmatrix}, \qquad
\mathbf{B} = \begin{bmatrix} 2 & 2 \\ 5 & 3 \end{bmatrix}, \qquad
\mathbf{C} = \begin{bmatrix} 2 & 2 \\ 1 & 4 \end{bmatrix},
$$

where obviously $\mathbf{B} \neq \mathbf{C}$. Computing both $\mathbf{AB}$ and $\mathbf{AC}$ gives

$$
\mathbf{AB} = \begin{bmatrix} 4 & 4 \\ 8 & 8 \end{bmatrix} = \mathbf{AC}.
$$

In place of division, the concept of matrix inversion is used. The inverse of a square
matrix $\mathbf{A}$, if it exists, is the unique matrix $\mathbf{A}^{-1}$ with the following property:

$$\mathbf{A}\mathbf{A}^{-1} = \mathbf{A}^{-1}\mathbf{A} = \mathbf{I}, \tag{A-21}$$

where $\mathbf{I}$ is the identity matrix. Thus, for

$$
\mathbf{A} = \begin{bmatrix} 3 & 1 \\ 2 & 1 \end{bmatrix},
$$

the matrix

$$
\mathbf{A}^{-1} = \begin{bmatrix} 1 & -1 \\ -2 & 3 \end{bmatrix}
$$

is its inverse because

$$
\begin{bmatrix} 1 & -1 \\ -2 & 3 \end{bmatrix}\begin{bmatrix} 3 & 1 \\ 2 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.
$$

The properties of the inverse are as follows:

$$(\mathbf{AB})^{-1} = \mathbf{B}^{-1}\mathbf{A}^{-1} \tag{A-22}$$
$$(\mathbf{A}^{-1})^{-1} = \mathbf{A} \tag{A-23}$$
$$(\mathbf{A}^t)^{-1} = (\mathbf{A}^{-1})^t \tag{A-24}$$
$$(\alpha\mathbf{A})^{-1} = \alpha^{-1}\mathbf{A}^{-1}. \tag{A-25}$$

The square matrix which has an inverse is called nonsingular, while the matrix which does
not have an inverse is called singular.

It was shown previously that $\mathbf{AB}$ can equal $\mathbf{0}$ without either $\mathbf{A} = \mathbf{0}$ or $\mathbf{B} = \mathbf{0}$. If, however,
$\mathbf{AB} = \mathbf{0}$ and either $\mathbf{A}$ or $\mathbf{B}$ is nonsingular, then the other matrix must be a null matrix. Hence, the
product of two nonsingular matrices cannot be a null or zero matrix.

In order to present a method for computing a matrix inverse, the concept and properties of
determinants are first introduced.
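Before doing so, note that the defining property (A-21) is easy to verify numerically for the 2 × 2 example above; a NumPy sketch added for illustration:

    import numpy as np

    A = np.array([[3.0, 1.0], [2.0, 1.0]])
    A_inv = np.linalg.inv(A)
    print(A_inv)        # [[ 1. -1.] [-2.  3.]]
    print(A_inv @ A)    # identity matrix, within floating-point round-off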



A.10. DETERMINANTS, MINORS, AND COFACTORS

Associated with each square matrix $\mathbf{A}$ is a unique scalar value called the determinant of $\mathbf{A}$.
It is denoted either by $\det \mathbf{A}$ or by $|\mathbf{A}|$. Thus, for $\mathbf{A} = \begin{bmatrix} 3 & 1 \\ 1 & 2 \end{bmatrix}$, the determinant is expressed
as $\begin{vmatrix} 3 & 1 \\ 1 & 2 \end{vmatrix}$, the value of which is computed as shown below. The student should be
careful to differentiate between the square brackets used for the matrix and the vertical
lines used for the determinant.

The determinant of order $n$ (for an $n \times n$ square matrix) can be defined in terms of
determinants of order $n-1$ and less. In order to apply this procedure, the determinant of
a $1 \times 1$ matrix must be defined. Accordingly, for a matrix consisting of a single element, the
determinant is defined as the value of the element, i.e., for $\mathbf{A} = [a_{11}]$, $|\mathbf{A}| = \det \mathbf{A} = a_{11}$.

If $\mathbf{A}$ is an $n \times n$ matrix, and one row and one column of $\mathbf{A}$ are deleted, the resulting matrix
is an $(n-1) \times (n-1)$ submatrix of $\mathbf{A}$. The determinant of such a submatrix is called a minor
of $\mathbf{A}$, and it is designated by $m_{ij}$, where $i$ and $j$ correspond to the deleted row and column,
respectively. More specifically, $m_{ij}$ is known as the minor of the element $a_{ij}$ in $\mathbf{A}$. For
example, consider

$$
\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}.
$$

Each element of $\mathbf{A}$ has a minor. The minor of $a_{11}$, for example, is obtained by deleting the
first row and first column from $\mathbf{A}$ and taking the determinant of the $2 \times 2$ submatrix that
remains, i.e.,

$$
m_{11} = \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}.
$$

In similar fashion, the minors of $a_{12}$ and $a_{13}$ are

$$
m_{12} = \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} \qquad \text{and} \qquad
m_{13} = \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}.
$$

The cofactor $c_{ij}$ of an element $a_{ij}$ is defined as

$$c_{ij} = (-1)^{i+j} m_{ij}. \tag{A-26}$$

Obviously, when the sum of the row number $i$ and column number $j$ is even, $c_{ij} = m_{ij}$; and
when $i + j$ is odd, $c_{ij} = -m_{ij}$.

The determinant of an $n \times n$ matrix $\mathbf{A}$ can now be defined as

$$|\mathbf{A}| = a_{11}c_{11} + a_{12}c_{12} + \cdots + a_{1n}c_{1n}, \tag{A-27}$$

which states that the determinant of $\mathbf{A}$ is the sum of the products of the elements of the
first row of $\mathbf{A}$ and their corresponding cofactors. (It is equally possible to define $|\mathbf{A}|$ in
terms of any other row or column, but for simplicity, the first row only is used.) On the
basis of this definition, the $2 \times 2$ matrix

$$
\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
$$

has cofactors

$$c_{11} = (+1)a_{22} = a_{22} \qquad \text{and} \qquad c_{12} = (-1)a_{21} = -a_{21},$$

and the determinant of $\mathbf{A}$ is

$$|\mathbf{A}| = a_{11}c_{11} + a_{12}c_{12} = a_{11}a_{22} - a_{12}a_{21}.$$

Thus, if

$$
\mathbf{A} = \begin{bmatrix} 3 & 1 \\ 1 & 2 \end{bmatrix},
$$

then $|\mathbf{A}| = (3)(2) - (1)(1) = 5$.

For the $3 \times 3$ matrix

$$
\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix},
$$

the cofactors of the first row are

$$c_{11} = \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} = a_{22}a_{33} - a_{23}a_{32},$$

$$c_{12} = -\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} = -(a_{21}a_{33} - a_{23}a_{31}),$$

$$c_{13} = \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix} = a_{21}a_{32} - a_{22}a_{31},$$

and so the determinant of $\mathbf{A}$, according to Eq. (A-27), is

$$|\mathbf{A}| = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31}).$$

For example, if

$$
\mathbf{A} = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 2 & 3 \\ 1 & 0 & -1 \end{bmatrix},
$$

then

$$|\mathbf{A}| = 1(-2 - 0) - 0(0 - 3) + 1(0 - 2) = -4.$$
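The cofactor expansion of Eq. (A-27) translates directly into a short recursive routine; the following NumPy sketch (an illustrative addition) reproduces the example above and compares it with the library determinant:

    import numpy as np

    def det_by_first_row(A):
        """Determinant by cofactor expansion along the first row, Eq. (A-27)."""
        n = A.shape[0]
        if n == 1:
            return A[0, 0]
        total = 0
        for j in range(n):
            # minor m_{1j}: delete row 1 and column j+1
            minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
            total += (-1) ** j * A[0, j] * det_by_first_row(minor)
        return total

    A = np.array([[1, 0, 1], [0, 2, 3], [1, 0, -1]])
    print(det_by_first_row(A))       # -4
    print(round(np.linalg.det(A)))   # -4, library check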
A.11. COFACTOR AND ADJOINT MATRICES
The cofactor matrix $\mathbf{C}$ of a matrix $\mathbf{A}$ is the square matrix of the same order as $\mathbf{A}$ in which
each element $a_{ij}$ is replaced by its cofactor $c_{ij}$. For example, the cofactor matrix of

$$
\mathbf{A} = \begin{bmatrix} 1 & 2 \\ -3 & 4 \end{bmatrix}
$$

is

$$
\mathbf{C} = \begin{bmatrix} 4 & 3 \\ -2 & 1 \end{bmatrix}.
$$

The adjoint matrix of $\mathbf{A}$, denoted by adj $\mathbf{A}$, is the transpose of its cofactor matrix, i.e.,

$$\text{adj } \mathbf{A} = \mathbf{C}^t. \tag{A-28}$$

It can be shown that

$$\mathbf{A}(\text{adj } \mathbf{A}) = (\text{adj } \mathbf{A})\mathbf{A} = |\mathbf{A}|\,\mathbf{I}. \tag{A-29}$$

Thus, for

$$
\mathbf{A} = \begin{bmatrix} 1 & 2 \\ -3 & 4 \end{bmatrix},
$$

we have

$$|\mathbf{A}| = (1)(4) - (2)(-3) = 10, \qquad \text{adj } \mathbf{A} = \mathbf{C}^t = \begin{bmatrix} 4 & -2 \\ 3 & 1 \end{bmatrix},$$

and

$$
\mathbf{A}(\text{adj } \mathbf{A}) = \begin{bmatrix} 1 & 2 \\ -3 & 4 \end{bmatrix}\begin{bmatrix} 4 & -2 \\ 3 & 1 \end{bmatrix} = \begin{bmatrix} 10 & 0 \\ 0 & 10 \end{bmatrix} = 10\,\mathbf{I},
$$

or

$$
(\text{adj } \mathbf{A})\mathbf{A} = \begin{bmatrix} 4 & -2 \\ 3 & 1 \end{bmatrix}\begin{bmatrix} 1 & 2 \\ -3 & 4 \end{bmatrix} = \begin{bmatrix} 10 & 0 \\ 0 & 10 \end{bmatrix} = 10\,\mathbf{I},
$$

which demonstrates the relationship expressed by Eq. (A-29).


A.12. MATRIX INVERSION USING THE ADJOINT MATRIX
Comparison of Eqs. (A-21) and (A-29) leads directly to a procedure for evaluating the
inverse from the adjoint matrix, namely,

$$\mathbf{A}^{-1} = \frac{\text{adj } \mathbf{A}}{|\mathbf{A}|}. \tag{A-30}$$

Referring to the example in Section A.11, the inverse of

$$
\mathbf{A} = \begin{bmatrix} 1 & 2 \\ -3 & 4 \end{bmatrix}
$$

is

$$
\mathbf{A}^{-1} = \frac{1}{10}\begin{bmatrix} 4 & -2 \\ 3 & 1 \end{bmatrix} = \begin{bmatrix} 0.4 & -0.2 \\ 0.3 & 0.1 \end{bmatrix}.
$$

As another example, let us evaluate the inverse of

$$
\mathbf{A} = \begin{bmatrix} 3 & 1 & -1 \\ -2 & 1 & 0 \\ 1 & -2 & 1 \end{bmatrix}.
$$

First, the determinant of $\mathbf{A}$ is

$$|\mathbf{A}| = 3(1 - 0) - 1(-2 - 0) + (-1)(4 - 1) = 2,$$

and the elements of the cofactor matrix are

$$c_{11} = 1, \quad c_{12} = 2, \quad c_{13} = 3,$$
$$c_{21} = 1, \quad c_{22} = 4, \quad c_{23} = 7,$$
$$c_{31} = 1, \quad c_{32} = 2, \quad c_{33} = 5.$$

Thus,

$$
\mathbf{C} = \begin{bmatrix} 1 & 2 & 3 \\ 1 & 4 & 7 \\ 1 & 2 & 5 \end{bmatrix}, \qquad
\text{adj } \mathbf{A} = \mathbf{C}^t = \begin{bmatrix} 1 & 1 & 1 \\ 2 & 4 & 2 \\ 3 & 7 & 5 \end{bmatrix},
$$

and

$$
\mathbf{A}^{-1} = \frac{\text{adj } \mathbf{A}}{|\mathbf{A}|} = \frac{1}{2}\begin{bmatrix} 1 & 1 & 1 \\ 2 & 4 & 2 \\ 3 & 7 & 5 \end{bmatrix}
= \begin{bmatrix} 0.5 & 0.5 & 0.5 \\ 1.0 & 2.0 & 1.0 \\ 1.5 & 3.5 & 2.5 \end{bmatrix}.
$$

It is left as an exercise for the reader to demonstrate that $\mathbf{A}\mathbf{A}^{-1} = \mathbf{A}^{-1}\mathbf{A} = \mathbf{I}$.

From Eq. (A-30) it should be clear that the determinant of a matrix must not be zero in
order for the inverse to exist. Consequently, nonsingular matrices have nonzero
determinants, whereas singular matrices have zero determinants.
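Equation (A-30) is easily implemented as a short routine; the following NumPy sketch (an illustrative addition) reproduces the 3 × 3 inverse above:

    import numpy as np

    def inverse_by_adjoint(A):
        """Inverse via the adjoint matrix, Eq. (A-30): A^{-1} = adj A / |A|."""
        n = A.shape[0]
        C = np.zeros_like(A, dtype=float)    # cofactor matrix
        for i in range(n):
            for j in range(n):
                minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
                C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
        return C.T / np.linalg.det(A)        # adj A = C^t

    A = np.array([[3, 1, -1], [-2, 1, 0], [1, -2, 1]], dtype=float)
    print(inverse_by_adjoint(A))
    # [[0.5 0.5 0.5]
    #  [1.  2.  1. ]
    #  [1.5 3.5 2.5]]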
A.13. LINEAR EQUATIONS
Linear equations are encountered frequently in survey adjustment problems. Of particular
interest are systems of equations in which the number of equations is equal to the number
of unknowns, as, for example, the following set of n equations in n unknowns:

$$
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\
&\;\;\vdots \\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n.
\end{aligned}
\tag{A-31}
$$

In Eq. (A-31), the $a_{ij}$ are numerical coefficients, the $b_i$ are constants, and the $x_j$ are the
unknowns. Equation (A-31) may be expressed in matrix form as

$$\mathbf{A}\mathbf{x} = \mathbf{b}, \tag{A-32}$$

in which

$$
\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}, \qquad
\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \qquad
\mathbf{b} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}.
$$

If the determinant of $\mathbf{A}$ is nonzero, the solution to Eq. (A-31), or to Eq. (A-32), is a unique
set of $n$ numerical values for the $x_j$ that satisfy all equations simultaneously. This solution
is obtained by premultiplying both sides of Eq. (A-32) by $\mathbf{A}^{-1}$ (which exists because
$|\mathbf{A}| \neq 0$) to give

$$\mathbf{A}^{-1}\mathbf{A}\mathbf{x} = \mathbf{A}^{-1}\mathbf{b}$$

or

$$\mathbf{x} = \mathbf{A}^{-1}\mathbf{b}, \tag{A-33}$$

since $\mathbf{A}^{-1}\mathbf{A} = \mathbf{I}$ by definition. Thus, the solution is equivalent to finding the inverse of the
coefficient matrix. As an example, consider the following set of three equations in three
unknowns:

$$
\begin{aligned}
3x_1 + x_2 - x_3 &= 2 \\
-2x_1 + x_2 &= -1 \\
x_1 - 2x_2 + x_3 &= 3,
\end{aligned}
$$

or

$$
\begin{bmatrix} 3 & 1 & -1 \\ -2 & 1 & 0 \\ 1 & -2 & 1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} 2 \\ -1 \\ 3 \end{bmatrix}.
$$

The inverse of the coefficient matrix has already been computed in Section A.12.
Thus, by Eq. (A-33),

$$
\mathbf{x} = \mathbf{A}^{-1}\mathbf{b} = \begin{bmatrix} 0.5 & 0.5 & 0.5 \\ 1.0 & 2.0 & 1.0 \\ 1.5 & 3.5 & 2.5 \end{bmatrix}
\begin{bmatrix} 2 \\ -1 \\ 3 \end{bmatrix} =
\begin{bmatrix} 2 \\ 3 \\ 7 \end{bmatrix}
$$

is the solution to the given set of equations. It is easily seen that when $x_1 = 2$, $x_2 = 3$, and
$x_3 = 7$, all three equations are satisfied.

Computing the inverse by the adjoint matrix method becomes quite tedious and time-consuming when $n$ is greater than 3. In such cases, more efficient procedures are usually
employed for finding the inverse and solving the equations. These procedures are not
within the scope of this appendix.
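In practice such a procedure is usually invoked through a library routine; a NumPy sketch for the example system (an illustrative addition):

    import numpy as np

    A = np.array([[3, 1, -1], [-2, 1, 0], [1, -2, 1]], dtype=float)
    b = np.array([2, -1, 3], dtype=float)

    # np.linalg.solve factors A (LU decomposition) instead of forming A^{-1}
    x = np.linalg.solve(A, b)
    print(x)    # [2. 3. 7.]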
A.14. BILINEAR AND QUADRATIC FORMS
If $\mathbf{x}$ is a vector of $m$ variables, $\mathbf{y}$ is a vector of $n$ variables, and $\mathbf{A}$ is an $m \times n$ matrix, the
scalar function

$$u = \mathbf{x}^t\mathbf{A}\mathbf{y} \tag{A-34}$$

is known as a bilinear form. For example, if

$$
\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}, \qquad
\mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}, \qquad \text{and} \qquad
\mathbf{A} = \begin{bmatrix} 3 & 1 \\ 2 & 1 \\ 1 & 2 \end{bmatrix},
$$

then

$$
u = [x_1 \;\; x_2 \;\; x_3]\begin{bmatrix} 3 & 1 \\ 2 & 1 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \end{bmatrix}
= 3x_1y_1 + 2x_2y_1 + x_3y_1 + x_1y_2 + x_2y_2 + 2x_3y_2
$$

is a bilinear form.

If $\mathbf{x}$ is a vector of $n$ variables, and $\mathbf{A}$ is a square symmetric matrix of order $n$, the
scalar function

$$q = \mathbf{x}^t\mathbf{A}\mathbf{x} \tag{A-35}$$

is known as a quadratic form. For example, if

$$
\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \qquad \text{and} \qquad \mathbf{A} = \begin{bmatrix} 3 & 2 \\ 2 & 1 \end{bmatrix},
$$

then

$$
q = [x_1 \;\; x_2]\begin{bmatrix} 3 & 2 \\ 2 & 1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = 3x_1^2 + 4x_1x_2 + x_2^2
$$

is a quadratic form.

A good example of a quadratic form is the weighted sum of the squared residuals,

$$\mathbf{v}^t\mathbf{W}\mathbf{v}, \tag{A-36}$$

which is minimized in least squares. In this particular quadratic form, $\mathbf{v}$ is the vector of
observational residuals and $\mathbf{W}$ is the symmetric weight matrix of the observations.
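As a small numerical illustration of Eq. (A-36) — the residual and weight values below are hypothetical, chosen only for this sketch:

    import numpy as np

    v = np.array([0.02, -0.01, 0.03])    # hypothetical residual vector
    W = np.diag([1.0, 4.0, 2.0])         # hypothetical symmetric weight matrix
    print(v @ W @ v)                     # 0.0026, the quadratic form v^t W v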
A.15. DIFFERENTIATION OF VECTORS, BILINEAR FORMS, AND QUADRATIC FORMS
If a vector $\mathbf{y}$ is composed of $m$ functions of some or all of the components of another
vector $\mathbf{x}$, the partial derivative of $\mathbf{y}$ with respect to $\mathbf{x}$ is an $m \times n$ matrix, $\mathbf{J}_{yx}$, called the
Jacobian matrix, which is composed of elements that are the partial derivatives of the
individual components of $\mathbf{y}$ with respect to the individual components of $\mathbf{x}$, i.e.,

$$
\mathbf{J}_{yx} = \frac{\partial \mathbf{y}}{\partial \mathbf{x}} = \begin{bmatrix}
\dfrac{\partial y_1}{\partial x_1} & \dfrac{\partial y_1}{\partial x_2} & \cdots & \dfrac{\partial y_1}{\partial x_n} \\
\vdots & \vdots & & \vdots \\
\dfrac{\partial y_m}{\partial x_1} & \dfrac{\partial y_m}{\partial x_2} & \cdots & \dfrac{\partial y_m}{\partial x_n}
\end{bmatrix}. \tag{A-37}
$$
For example, if

$$
\mathbf{y} = \begin{bmatrix} x_1^2 + 2x_2 - 3x_3^2 \\ 7 - 2x_2^2 + 5x_3 \end{bmatrix},
$$

then

$$
\mathbf{J}_{yx} = \begin{bmatrix}
\dfrac{\partial y_1}{\partial x_1} & \dfrac{\partial y_1}{\partial x_2} & \dfrac{\partial y_1}{\partial x_3} \\
\dfrac{\partial y_2}{\partial x_1} & \dfrac{\partial y_2}{\partial x_2} & \dfrac{\partial y_2}{\partial x_3}
\end{bmatrix}
= \begin{bmatrix} 2x_1 & 2 & -6x_3 \\ 0 & -4x_2 & 5 \end{bmatrix}.
$$
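A Jacobian such as this can be spot-checked by central finite differences; a NumPy sketch (an illustrative addition, using the functions of the example above):

    import numpy as np

    def y(x):
        x1, x2, x3 = x
        return np.array([x1**2 + 2*x2 - 3*x3**2, 7 - 2*x2**2 + 5*x3])

    def J(x):
        x1, x2, x3 = x
        return np.array([[2*x1, 2, -6*x3], [0, -4*x2, 5]])

    x0 = np.array([1.0, 2.0, 3.0])
    h = 1e-6
    J_num = np.column_stack([(y(x0 + h*e) - y(x0 - h*e)) / (2*h)
                             for e in np.eye(3)])
    print(np.allclose(J_num, J(x0)))    # True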

For the bilinear form $u = \mathbf{x}^t\mathbf{A}\mathbf{y}$, where $\mathbf{A}$ is independent of $\mathbf{x}$ and $\mathbf{y}$, the following partial
derivatives can be derived:

$$\frac{\partial u}{\partial \mathbf{x}} = \left[\frac{\partial u}{\partial x_1}, \frac{\partial u}{\partial x_2}, \ldots, \frac{\partial u}{\partial x_m}\right] = \mathbf{y}^t\mathbf{A}^t \tag{A-38}$$

$$\frac{\partial u}{\partial \mathbf{y}} = \left[\frac{\partial u}{\partial y_1}, \frac{\partial u}{\partial y_2}, \ldots, \frac{\partial u}{\partial y_n}\right] = \mathbf{x}^t\mathbf{A}. \tag{A-39}$$

The partial derivative with respect to $\mathbf{x}$ is derived by setting $\mathbf{A}\mathbf{y} = \mathbf{b}$, a column vector of $m$
constants, and then expressing $u$ as follows:

$$u = \mathbf{x}^t\mathbf{A}\mathbf{y} = \mathbf{x}^t\mathbf{b} = x_1b_1 + x_2b_2 + \cdots + x_mb_m.$$

It is then easily seen that

$$\frac{\partial u}{\partial \mathbf{x}} = \left[\frac{\partial u}{\partial x_1}, \frac{\partial u}{\partial x_2}, \ldots, \frac{\partial u}{\partial x_m}\right] = [b_1, b_2, \ldots, b_m] = \mathbf{b}^t = \mathbf{y}^t\mathbf{A}^t.$$

The partial derivative with respect to $\mathbf{y}$ is derived in similar fashion by setting $\mathbf{A}^t\mathbf{x} = \mathbf{c}$, a
column vector of $n$ constants, and then expressing $u$ as follows:

$$u = \mathbf{x}^t\mathbf{A}\mathbf{y} = (\mathbf{A}^t\mathbf{x})^t\mathbf{y} = \mathbf{c}^t\mathbf{y} = c_1y_1 + c_2y_2 + \cdots + c_ny_n.$$

Thus

$$\frac{\partial u}{\partial \mathbf{y}} = \left[\frac{\partial u}{\partial y_1}, \frac{\partial u}{\partial y_2}, \ldots, \frac{\partial u}{\partial y_n}\right] = [c_1, c_2, \ldots, c_n] = \mathbf{c}^t = \mathbf{x}^t\mathbf{A}.$$
The partial derivative of the quadratic form $q = \mathbf{x}^t\mathbf{A}\mathbf{x}$ with respect to $\mathbf{x}$ is

$$\frac{\partial q}{\partial \mathbf{x}} = \left[\frac{\partial q}{\partial x_1}, \frac{\partial q}{\partial x_2}, \ldots, \frac{\partial q}{\partial x_n}\right] = 2\mathbf{x}^t\mathbf{A}, \tag{A-40}$$

where $\mathbf{A}$ is symmetric and independent of $\mathbf{x}$. This partial derivative is derived by noting
that

$$
q = \mathbf{x}^t\mathbf{A}\mathbf{x} = [x_1, x_2, \ldots, x_n]
\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
= a_{11}x_1^2 + a_{22}x_2^2 + \cdots + a_{nn}x_n^2 + 2a_{12}x_1x_2 + \cdots + 2a_{1n}x_1x_n + \cdots + 2a_{2n}x_2x_n + \cdots,
$$

since $\mathbf{A}$ is symmetric. Thus

$$\frac{\partial q}{\partial x_1} = 2a_{11}x_1 + 2a_{12}x_2 + \cdots + 2a_{1n}x_n = 2\mathbf{x}^t\mathbf{a}_1,$$

where $\mathbf{a}_1 = [a_{11}, a_{12}, \ldots, a_{1n}]^t$, the first column of $\mathbf{A}$;

$$\frac{\partial q}{\partial x_2} = 2a_{21}x_1 + 2a_{22}x_2 + \cdots + 2a_{2n}x_n = 2\mathbf{x}^t\mathbf{a}_2,$$

where $\mathbf{a}_2 = [a_{21}, a_{22}, \ldots, a_{2n}]^t$, the second column of $\mathbf{A}$; and so on.
Hence,

$$
\frac{\partial q}{\partial \mathbf{x}} = \left[\frac{\partial q}{\partial x_1}, \frac{\partial q}{\partial x_2}, \ldots, \frac{\partial q}{\partial x_n}\right]
= [2\mathbf{x}^t\mathbf{a}_1, 2\mathbf{x}^t\mathbf{a}_2, \ldots, 2\mathbf{x}^t\mathbf{a}_n]
= 2\mathbf{x}^t[\mathbf{a}_1, \mathbf{a}_2, \ldots, \mathbf{a}_n] = 2\mathbf{x}^t\mathbf{A}.
$$
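Equation (A-40) can likewise be verified by finite differences; a NumPy sketch using the symmetric matrix of Section A.14 (an illustrative addition):

    import numpy as np

    A = np.array([[3.0, 2.0], [2.0, 1.0]])    # symmetric
    x = np.array([1.0, -2.0])

    q = lambda x: x @ A @ x
    h = 1e-6
    grad = np.array([(q(x + h*e) - q(x - h*e)) / (2*h) for e in np.eye(2)])
    print(grad)         # approximately [-2.  0.]
    print(2 * x @ A)    # [-2.  0.], i.e., 2 x^t A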

APPENDIX B

Tables



Table I. Values of the Standard Normal Distribution Function

$$\Phi(z) = \int_{-\infty}^{z} \frac{1}{\sqrt{2\pi}}\, e^{-u^2/2}\, du = P(Z \leq z)$$

   z     .00    .01    .02    .03    .04    .05    .06    .07    .08    .09
  -3.   .0013  .0010  .0007  .0005  .0003  .0002  .0002  .0001  .0001  .0000
  -2.9  .0019  .0018  .0017  .0017  .0016  .0016  .0015  .0015  .0014  .0014
  -2.8  .0026  .0025  .0024  .0023  .0023  .0022  .0021  .0021  .0020  .0019
  -2.7  .0035  .0034  .0033  .0032  .0031  .0030  .0029  .0028  .0027  .0026
  -2.6  .0047  .0045  .0044  .0043  .0041  .0040  .0039  .0038  .0037  .0036
  -2.5  .0062  .0060  .0059  .0057  .0055  .0054  .0052  .0051  .0049  .0048
  -2.4  .0082  .0080  .0078  .0075  .0073  .0071  .0069  .0068  .0066  .0064
  -2.3  .0107  .0104  .0102  .0099  .0096  .0094  .0091  .0089  .0087  .0084
  -2.2  .0139  .0136  .0132  .0129  .0126  .0122  .0119  .0116  .0113  .0110
  -2.1  .0179  .0174  .0170  .0166  .0162  .0158  .0154  .0150  .0146  .0143
  -2.0  .0228  .0222  .0217  .0212  .0207  .0202  .0197  .0192  .0188  .0183
  -1.9  .0287  .0281  .0274  .0268  .0262  .0256  .0250  .0244  .0238  .0233
  -1.8  .0359  .0352  .0344  .0336  .0329  .0322  .0314  .0307  .0300  .0294
  -1.7  .0446  .0436  .0427  .0418  .0409  .0401  .0392  .0384  .0375  .0367
  -1.6  .0548  .0537  .0526  .0516  .0505  .0495  .0485  .0475  .0465  .0455
  -1.5  .0668  .0655  .0643  .0630  .0618  .0606  .0594  .0582  .0570  .0559
  -1.4  .0808  .0793  .0778  .0764  .0749  .0735  .0722  .0708  .0694  .0681
  -1.3  .0968  .0951  .0934  .0918  .0901  .0885  .0869  .0853  .0838  .0823
  -1.2  .1151  .1131  .1112  .1093  .1075  .1056  .1038  .1020  .1003  .0985
  -1.1  .1357  .1335  .1314  .1292  .1271  .1251  .1230  .1210  .1190  .1170
  -1.0  .1587  .1562  .1539  .1515  .1492  .1469  .1446  .1423  .1401  .1379
   -.9  .1841  .1814  .1788  .1762  .1736  .1711  .1685  .1660  .1635  .1611
   -.8  .2119  .2090  .2061  .2033  .2005  .1977  .1949  .1922  .1894  .1867
   -.7  .2420  .2389  .2358  .2327  .2297  .2266  .2236  .2206  .2177  .2148
   -.6  .2743  .2709  .2676  .2643  .2611  .2578  .2546  .2514  .2483  .2451
   -.5  .3085  .3050  .3015  .2981  .2946  .2912  .2877  .2843  .2810  .2776
   -.4  .3446  .3409  .3372  .3336  .3300  .3264  .3228  .3192  .3156  .3121
   -.3  .3821  .3783  .3745  .3707  .3669  .3632  .3594  .3557  .3520  .3483
   -.2  .4207  .4168  .4129  .4090  .4052  .4013  .3974  .3936  .3897  .3859
   -.1  .4602  .4562  .4522  .4483  .4443  .4404  .4364  .4325  .4286  .4247
   -.0  .5000  .4960  .4920  .4880  .4840  .4801  .4761  .4721  .4681  .4641

    .0  .5000  .5040  .5080  .5120  .5160  .5199  .5239  .5279  .5319  .5359
    .1  .5398  .5438  .5478  .5517  .5557  .5596  .5636  .5675  .5714  .5753
    .2  .5793  .5832  .5871  .5910  .5948  .5987  .6026  .6064  .6103  .6141
    .3  .6179  .6217  .6255  .6293  .6331  .6368  .6406  .6443  .6480  .6517
    .4  .6554  .6591  .6628  .6664  .6700  .6736  .6772  .6808  .6844  .6879
    .5  .6915  .6950  .6985  .7019  .7054  .7088  .7123  .7157  .7190  .7224
    .6  .7257  .7291  .7324  .7357  .7389  .7422  .7454  .7486  .7517  .7549
    .7  .7580  .7611  .7642  .7673  .7703  .7734  .7764  .7794  .7823  .7852
    .8  .7881  .7910  .7939  .7967  .7995  .8023  .8051  .8078  .8106  .8133
    .9  .8159  .8186  .8212  .8238  .8264  .8289  .8315  .8340  .8365  .8389
   1.0  .8413  .8438  .8461  .8485  .8508  .8531  .8554  .8577  .8599  .8621
   1.1  .8643  .8665  .8686  .8708  .8729  .8749  .8770  .8790  .8810  .8830
   1.2  .8849  .8869  .8888  .8907  .8925  .8944  .8962  .8980  .8997  .9015
   1.3  .9032  .9049  .9066  .9082  .9099  .9115  .9131  .9147  .9162  .9177
   1.4  .9192  .9207  .9222  .9236  .9251  .9265  .9278  .9292  .9305  .9319
   1.5  .9332  .9345  .9357  .9370  .9382  .9394  .9406  .9418  .9430  .9441
   1.6  .9452  .9463  .9474  .9484  .9495  .9505  .9515  .9525  .9535  .9545
   1.7  .9554  .9564  .9573  .9582  .9591  .9599  .9608  .9616  .9625  .9633
   1.8  .9641  .9648  .9656  .9664  .9671  .9678  .9686  .9693  .9700  .9706
   1.9  .9713  .9719  .9726  .9732  .9738  .9744  .9750  .9756  .9762  .9767
   2.0  .9772  .9778  .9783  .9788  .9793  .9798  .9803  .9808  .9812  .9817
   2.1  .9821  .9826  .9830  .9834  .9838  .9842  .9846  .9850  .9854  .9857
   2.2  .9861  .9864  .9868  .9871  .9874  .9878  .9881  .9884  .9887  .9890
   2.3  .9893  .9896  .9898  .9901  .9904  .9906  .9909  .9911  .9913  .9916
   2.4  .9918  .9920  .9922  .9925  .9927  .9929  .9931  .9932  .9934  .9936
   2.5  .9938  .9940  .9941  .9943  .9945  .9946  .9948  .9949  .9951  .9952
   2.6  .9953  .9955  .9956  .9957  .9959  .9960  .9961  .9962  .9963  .9964
   2.7  .9965  .9966  .9967  .9968  .9969  .9970  .9971  .9972  .9973  .9974
   2.8  .9974  .9975  .9976  .9977  .9977  .9978  .9979  .9979  .9980  .9981
   2.9  .9981  .9982  .9982  .9983  .9984  .9984  .9985  .9985  .9986  .9986
   3.   .9987  .9990  .9993  .9995  .9997  .9998  .9998  .9999  .9999  1.0000

Reprinted with permission of Macmillan Publishing Co., Inc. from Introduction to
Probability and Statistics by B.W. Lindgren and G.W. McElrath. Copyright 1969 by
B.W. Lindgren and G.W. McElrath.
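Individual entries of Table I can be reproduced from the error function; a Python sketch added for illustration:

    import math

    def phi(z):
        """Standard normal distribution function, Phi(z) = P(Z <= z)."""
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    print(round(phi(1.96), 4))    # 0.975
    print(round(phi(-2.40), 4))   # 0.0082, matching the -2.4 row of the table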

Table II. Percentiles of the Chi-Square Distribution

DEGREES OF
 FREEDOM   χ²_0.005  χ²_0.01  χ²_0.025  χ²_0.05  χ²_0.10  χ²_0.20  χ²_0.30

     1       .000      .000      .001     .004     .016     .064     .148
     2       .010      .020      .051     .103     .211     .446     .713
     3       .072      .115      .216     .352     .584     1.00     1.42
     4       .207      .297      .484     .711     1.06     1.65     2.20
     5       .412      .554      .831     1.15     1.61     2.34     3.00
     6       .676      .872      1.24     1.64     2.20     3.07     3.83
     7       .989      1.24      1.69     2.17     2.83     3.82     4.67
     8       1.34      1.65      2.18     2.73     3.49     4.59     5.53
     9       1.73      2.09      2.70     3.33     4.17     5.38     6.39
    10       2.16      2.56      3.25     3.94     4.87     6.18     7.27
    11       2.60      3.05      3.82     4.57     5.58     6.99     8.15
    12       3.07      3.57      4.40     5.23     6.30     7.81     9.03
    13       3.57      4.11      5.01     5.89     7.04     8.63     9.93
    14       4.07      4.66      5.63     6.57     7.79     9.47     10.8
    15       4.60      5.23      6.26     7.26     8.55     10.3     11.7
    16       5.14      5.81      6.91     7.96     9.31     11.2     12.6
    17       5.70      6.41      7.56     8.67     10.1     12.0     13.5
    18       6.26      7.01      8.23     9.39     10.9     12.9     14.4
    19       6.83      7.63      8.91     10.1     11.7     13.7     15.4
    20       7.43      8.26      9.59     10.9     12.4     14.6     16.3
    21       8.03      8.90      10.3     11.6     13.2     15.4     17.2
    22       8.64      9.54      11.0     12.3     14.0     16.3     18.1
    23       9.26      10.2      11.7     13.1     14.8     17.2     19.0
    24       9.89      10.9      12.4     13.8     15.7     18.1     19.9
    25       10.5      11.5      13.1     14.6     16.5     18.9     20.9
    26       11.2      12.2      13.8     15.4     17.3     19.8     21.8
    27       11.8      12.9      14.6     16.2     18.1     20.7     22.7
    28       12.5      13.6      15.3     16.9     18.9     21.6     23.6
    29       13.1      14.3      16.0     17.7     19.8     22.5     24.6
    30       13.8      15.0      16.8     18.5     20.6     23.4     25.5
    40       20.7      22.1      24.4     26.5     29.0     32.3     34.9
    50       28.0      29.7      32.3     34.8     37.7     41.4     44.3
    60       35.5      37.5      40.5     43.2     46.5     50.6     53.8

DEGREES OF
 FREEDOM   χ²_0.50  χ²_0.70  χ²_0.80  χ²_0.90  χ²_0.95  χ²_0.975  χ²_0.99  χ²_0.995

     1       .455     1.07     1.64     2.71     3.84     5.02      6.63     7.88
     2       1.39     2.41     3.22     4.61     5.99     7.38      9.21     10.6
     3       2.37     3.66     4.64     6.25     7.81     9.35      11.3     12.8
     4       3.36     4.88     5.99     7.78     9.49     11.1      13.3     14.9
     5       4.35     6.06     7.29     9.24     11.1     12.8      15.1     16.7
     6       5.35     7.23     8.56     10.6     12.6     14.4      16.8     18.5
     7       6.35     8.38     9.80     12.0     14.1     16.0      18.5     20.3
     8       7.34     9.52     11.0     13.4     15.5     17.5      20.1     22.0
     9       8.34     10.7     12.2     14.7     16.9     19.0      21.7     23.6
    10       9.34     11.8     13.4     16.0     18.3     20.5      23.2     25.2
    11       10.3     12.9     14.6     17.3     19.7     21.9      24.7     26.8
    12       11.3     14.0     15.8     18.5     21.0     23.3      26.2     28.3
    13       12.3     15.1     17.0     19.8     22.4     24.7      27.7     29.8
    14       13.3     16.2     18.2     21.1     23.7     26.1      29.1     31.3
    15       14.3     17.3     19.3     22.3     25.0     27.5      30.6     32.8
    16       15.3     18.4     20.5     23.5     26.3     28.8      32.0     34.3
    17       16.3     19.5     21.6     24.8     27.6     30.2      33.4     35.7
    18       17.3     20.6     22.8     26.0     28.9     31.5      34.8     37.2
    19       18.3     21.7     23.9     27.2     30.1     32.9      36.2     38.6
    20       19.3     22.8     25.0     28.4     31.4     34.2      37.6     40.0
    21       20.3     23.9     26.2     29.6     32.7     35.5      38.9     41.4
    22       21.3     24.9     27.3     30.8     33.9     36.8      40.3     42.8
    23       22.3     26.0     28.4     32.0     35.2     38.1      41.6     44.2
    24       23.3     27.1     29.6     33.2     36.4     39.4      43.0     45.6
    25       24.3     28.2     30.7     34.4     37.7     40.6      44.3     46.9
    26       25.3     29.2     31.8     35.6     38.9     41.9      45.6     48.3
    27       26.3     30.3     32.9     36.7     40.1     43.2      47.0     49.6
    28       27.3     31.4     34.0     37.9     41.3     44.5      48.3     51.0
    29       28.3     32.5     35.1     39.1     42.6     45.7      49.6     52.3
    30       29.3     33.5     36.2     40.3     43.8     47.0      50.9     53.7
    40       39.3     44.2     47.3     51.8     55.8     59.3      63.7     66.8
    50       49.3     54.7     58.2     63.2     67.5     71.4      76.2     79.5
    60       59.3     65.2     69.0     74.4     79.1     83.3      88.4     92.0

Reprinted with permission of Macmillan Publishing Co., Inc. from Introduction to
Probability and Statistics by B.W. Lindgren and G.W. McElrath. Copyright 1969 by
B.W. Lindgren and G.W. McElrath. The table was adapted from Table VIII of Biometrika
Tables for Statisticians, Vol. 1, 3rd Edition (1966) by E.S. Pearson and H.O. Hartley,
originally prepared by Catherine M. Thompson, and is reprinted with the kind permission
of the Biometrika Trustees.
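Entries of Table II can be reproduced with the chi-square inverse distribution function; a short sketch, assuming SciPy is available:

    from scipy.stats import chi2

    # chi2.ppf gives the percentile (inverse CDF); compare with Table II
    print(round(chi2.ppf(0.95, 10), 1))     # 18.3
    print(round(chi2.ppf(0.005, 30), 1))    # 13.8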
