
Parameter Estimation: Subspace Methods

Pei-Jung Chung
1 Introduction
An important problem in array processing is the case of plane waves with unknown signal
waveforms arriving at the array. For example, in the narrow band snapshots model with
a linear array, the received snapshots are
X(k) = V(\psi) f(k) + n(k), \qquad k = 1, 2, \ldots, K, \qquad (1)

where the matrix

V(\psi) = [\, v(\psi_1) \;\; v(\psi_2) \;\; \cdots \;\; v(\psi_D) \,] \qquad (2)

consists of D array manifold vectors. The ith column of V(ψ), v(ψ_i), corresponds to the
plane wave with wavenumber ψ_i (equivalently, direction of arrival θ_i). The problem of
central interest is to estimate the unknown parameter vector ψ = [ψ_1 ψ_2 . . . ψ_D].
In this chapter and the next, we introduce estimation methods based on various
criteria. In Section 2, we discuss the MVDR (Capon) beamformer. In Section 3 and
Section 4, we develop two important subspace-based methods: the MUSIC (Multiple
Signal Classification) algorithm and the ESPRIT (Estimation of Signal Parameters via
Rotational Invariance Techniques) algorithm.
2 MVDR (Capon) Algorithm
The MVDR beamformer was previously discussed in the context of waveform estimation.
To use these results in the parameter estimation context, we use the weight vector based
on the sample covariance matrix C_X,

w_{\mathrm{MVDR}}^{H} = \frac{v(\psi)^{H} C_X^{-1}}{v(\psi)^{H} C_X^{-1} v(\psi)}. \qquad (3)
Here MVDR represents both the MPDR and MVDR beamformers. Then we compute
the average output power of the beamformer,

\hat{P}_{\mathrm{MVDR}}(\psi)
= \frac{1}{K} \sum_{k=1}^{K} \left| w_{\mathrm{MVDR}}^{H} X(k) \right|^{2}
= \frac{1}{K} \sum_{k=1}^{K} w_{\mathrm{MVDR}}^{H} X(k) X(k)^{H} w_{\mathrm{MVDR}}
= w_{\mathrm{MVDR}}^{H} C_X w_{\mathrm{MVDR}}. \qquad (4)
Using the expression (3), we can further simplify (4) to

\hat{P}_{\mathrm{MVDR}}(\psi) = \frac{1}{v(\psi)^{H} C_X^{-1} v(\psi)}. \qquad (5)
Then

1. we vary the steering vector v(ψ) over the region of interest, for example, −π ≤ ψ ≤ π,

2. plot the value of the estimated power P̂_MVDR(ψ) versus ψ,

3. and find the D peaks. These values correspond to the D wavenumber estimates
   ψ̂_1, ψ̂_2, . . . , ψ̂_D.
Beamscan We can also use the conventional beamformer with

w_c = v(\psi) \qquad (6)

to estimate the output power P̂_c(ψ). The DOA estimate can be obtained by the above
procedure.
Example See Fig. 9.2.1.
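The scan described above is easy to prototype numerically. The following is a minimal sketch (not part of the original notes), assuming an N-element standard ULA whose manifold vector in ψ-space is v(ψ) = [1, e^{jψ}, . . . , e^{j(N−1)ψ}]^T; the snapshot matrix X (N × K) and the wavenumber grid are taken as given, and the conventional weights are normalized by N here, which the text leaves unnormalized.

```python
import numpy as np

def steering_vector(psi, N):
    """ULA array manifold vector in psi-space (phase reference at the first element)."""
    return np.exp(1j * psi * np.arange(N))

def mvdr_spectrum(X, psi_grid):
    """Evaluate P_MVDR(psi) = 1 / (v(psi)^H C_X^{-1} v(psi)) on a grid, cf. (5)."""
    N, K = X.shape
    C = X @ X.conj().T / K                       # sample covariance matrix C_X
    C_inv = np.linalg.inv(C)
    P = np.empty(len(psi_grid))
    for m, psi in enumerate(psi_grid):
        v = steering_vector(psi, N)
        P[m] = 1.0 / np.real(v.conj() @ C_inv @ v)
    return P

def beamscan_spectrum(X, psi_grid):
    """Conventional beamformer (beamscan) power w^H C_X w with w = v(psi)/N, cf. (6)."""
    N, K = X.shape
    C = X @ X.conj().T / K
    P = np.empty(len(psi_grid))
    for m, psi in enumerate(psi_grid):
        w = steering_vector(psi, N) / N          # normalized conventional weights (my choice)
        P[m] = np.real(w.conj() @ C @ w)
    return P

# The D largest peaks of either spectrum give the wavenumber estimates psi_hat_1, ..., psi_hat_D.
```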
3 MUSIC
3.1 Spectral MUSIC
This section discusses the MUSIC (Multiple Signal Classification) algorithm first pro-
posed by Schmidt [3]. The MUSIC algorithm and other subspace-based methods utilize
the eigenstructure of the spatial covariance matrix

S_X = V(\psi) S_f V(\psi)^{H} + \sigma_w^{2} I. \qquad (7)
Compared to the maximum likelihood (ML) approach, the subspace methods require less
computational time and provide acceptable performance in many situations of interest.
As discussed in the chapter on Space-Time Signals, the exact spatial covariance
matrix can be decomposed into the signal and noise subspaces. More precisely,

S_X = U_S \Lambda_S U_S^{H} + U_N \Lambda_N U_N^{H}, \qquad (8)

where U_S consists of the D signal eigenvectors and U_N consists of the (N − D) noise
eigenvectors. The diagonal matrix Λ_S = diag(λ_1, λ_2, . . . , λ_D) contains the D largest
eigenvalues of S_X and Λ_N = diag(λ_{D+1}, . . . , λ_N) contains the remaining eigenvalues.
The signal subspace spans the column space of the array manifold matrix V(ψ),

\mathrm{sp}(U_S) = \mathrm{sp}(V(\psi)), \qquad (9)

and the noise subspace is orthogonal to sp(V(ψ)),

\mathrm{sp}(U_N) \perp \mathrm{sp}(V(\psi)). \qquad (10)
Therefore, any array manifold vector v(ψ) lying in the subspace sp(V(ψ)) is orthogonal
to the noise subspace,

U_N^{H} v(\psi) = 0, \qquad (11)

which implies that the squared norm of U_N^H v(ψ) is zero,

\| U_N^{H} v(\psi) \|^{2} = v(\psi)^{H} U_N U_N^{H} v(\psi) = 0. \qquad (12)
As v(ψ_i) is a column vector of V(ψ), we have

v(\psi_i)^{H} U_N U_N^{H} v(\psi_i) = 0, \qquad i = 1, 2, \ldots, D. \qquad (13)
This property suggests that we can find the DOA estimate ψ̂ by varying the array
manifold vector over the region of interest and selecting the D minima as the estimates.
In practice, the spatial covariance matrix is not available. The eigenvectors of S_X
are estimated from the sample covariance matrix C_X. The matrix U_N is replaced by
the estimate Û_N. This leads to the following criterion,

\hat{Q}_{\mathrm{MU}}(\psi) = v^{H}(\psi) \hat{U}_N \hat{U}_N^{H} v(\psi), \qquad (14)

or equivalently,

\hat{Q}_{\mathrm{MU}}(\psi) = v^{H}(\psi) \left[ I - \hat{U}_S \hat{U}_S^{H} \right] v(\psi). \qquad (15)
Note that the MUSIC criterion is a scalar function of a one-dimensional parameter. It
requires only a one-dimensional search to find the D minima.
Assume the number of signals D is known. Given the data set X(k), k =
1, 2, . . . , K, the MUSIC algorithm proceeds as follows.
1. Compute the sample covariance matrix C_X = (1/K) Σ_{k=1}^{K} X(k) X(k)^H.

2. Compute its eigenvalues λ̂_1 ≥ λ̂_2 ≥ · · · ≥ λ̂_N and the corresponding
   eigenvectors u_1, u_2, . . . , u_N. Then Û_N = [ u_{D+1} · · · u_N ].

3. Plot Q̂_MU(ψ) = v^H(ψ) Û_N Û_N^H v(ψ) over the region of interest in ψ.

4. Choose the D minima of Q̂_MU(ψ) as the estimate ψ̂.
In order to implement MUSIC, we need enough snapshots to get a reasonable estimate
of S_X and we have to know the array manifold v(ψ). Note that the spectral MUSIC
algorithm is applicable to arbitrary array geometry and its extension to the two-
dimensional case is straightforward.
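As a concrete illustration of the four steps above, here is a minimal NumPy sketch of spectral MUSIC (my own, under the same ULA assumption as before; the function name and the grid are illustrative, not from the text):

```python
import numpy as np

def music_spectrum(X, D, psi_grid):
    """Spectral MUSIC: evaluate Q_MU(psi) of (14) on psi_grid; its D minima locate the sources."""
    N, K = X.shape
    C = X @ X.conj().T / K                       # step 1: sample covariance matrix C_X
    _, eigvec = np.linalg.eigh(C)                # step 2: eigenvectors, eigenvalues ascending
    U_N = eigvec[:, :N - D]                      # noise eigenvectors (N - D smallest eigenvalues)
    Q = np.empty(len(psi_grid))
    for m, psi in enumerate(psi_grid):
        v = np.exp(1j * psi * np.arange(N))      # ULA array manifold vector v(psi)
        w = U_N.conj().T @ v
        Q[m] = np.real(w.conj() @ w)             # step 3: ||U_N^H v(psi)||^2
    return Q

# Step 4: pick the D deepest local minima of Q (equivalently the D largest peaks of 1/Q)
# as the wavenumber estimates psi_hat_1, ..., psi_hat_D.
```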
3.2 Root MUSIC
For a standard linear array, we can use a polynomial representation of the MUSIC
spectrum. The array manifold polynomial vector is defined as

v_z(z) = [\, 1 \;\; z \;\; \cdots \;\; z^{N-1} \,]^{T}, \qquad (16)

which is the array manifold vector, up to a phase shift, evaluated at z = \exp(j\psi),

[\, v_z(z) \,]_{z = e^{j\psi}} = e^{j \left( \frac{N-1}{2} \right) \psi} \, v(\psi). \qquad (17)
Then the MUSIC spectrum (14) can be written as

\hat{Q}_{\mathrm{MU},z}(z) = v_z^{T}\!\left(\tfrac{1}{z}\right) \hat{U}_N \hat{U}_N^{H} v_z(z) \qquad (18)

= v_z^{T}\!\left(\tfrac{1}{z}\right) \left[ I - \hat{U}_S \hat{U}_S^{H} \right] v_z(z). \qquad (19)
If the eigendecomposition corresponded to the true spectral matrix S_X, then the exact
MUSIC spectrum could be obtained by evaluating Q̂_MU,z(z) on the unit circle,

\hat{Q}_{\mathrm{MU},z}(z) \big|_{z = e^{j\psi}} = \hat{Q}_{\mathrm{MU}}(\psi), \qquad (20)

and the D roots would correspond to the locations of the D signals in ψ-space.
In practice, we compute the roots of Q̂_MU,z(z) and choose the D roots that are inside
the unit circle and closest to the unit circle. We denote these roots by ẑ_i, i = 1, 2, . . . , D.
Then

\hat{\psi}_i = \arg \hat{z}_i, \qquad i = 1, 2, \ldots, D. \qquad (21)
Since we are using an estimated covariance matrix, there will be errors in the locations of
the roots. The effect of these errors is illustrated in Fig 9.8. We observe that the radial
component of the error in ẑ_i will not affect ψ̂_i. However, it will affect the MUSIC spectrum.
Thus, we would expect that spectral MUSIC has less resolution capability than root MUSIC.
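Root MUSIC is also easy to prototype. The sketch below (mine, not from the notes) builds the polynomial of (18) from the sums along the diagonals of Û_N Û_N^H and then picks the D roots inside and closest to the unit circle; it assumes the first-element phase reference v_z(z) = [1, z, . . . , z^{N−1}]^T of (16).

```python
import numpy as np

def root_music(X, D):
    """Root MUSIC for a standard ULA: returns the wavenumber estimates psi_hat_i = arg(z_i)."""
    N, K = X.shape
    C = X @ X.conj().T / K                       # sample covariance matrix
    _, eigvec = np.linalg.eigh(C)
    U_N = eigvec[:, :N - D]                      # noise subspace estimate
    G = U_N @ U_N.conj().T                       # N x N matrix U_N U_N^H
    # Q(z) = v_z^T(1/z) G v_z(z) = sum_l c_l z^l, where c_l is the sum of the l-th diagonal of G.
    coeffs = np.array([np.trace(G, offset=l) for l in range(-(N - 1), N)])
    roots = np.roots(coeffs[::-1])               # roots of z^{N-1} Q(z); highest power first
    inside = roots[np.abs(roots) < 1.0]          # keep the roots inside the unit circle
    closest = inside[np.argsort(1.0 - np.abs(inside))[:D]]   # D roots closest to the circle
    return np.angle(closest)                     # psi_hat_i = arg(z_i), cf. (21)
```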
Exercise Given the number of signals D and the data set X(k), k = 1, 2, . . . , K, summarize
the steps of the root MUSIC algorithm.
4 ESPRIT
In this section we introduce a subspace algorithm referred to as ESPRIT (Estimation
of Signal Parameters via Rotational Invariance Techniques). It was derived by Roy and
Kailath [2]. There are several versions of ESPRIT. We discuss LS (least squares) and
TLS (total least squares) ESPRIT.
We develop the ESPRIT algorithm in the context of a uniform linear array. Consider
a ULA (uniform linear array) of N elements. Below, we illustrate with a 10-element array.
The first step in ESPRIT is to find two identical subarrays. The number of sensors
in each subarray is denoted by N_s, with N_s > D. Assume that
the 1st sensor of the original array is the 1st sensor of the 1st subarray, and
the (d_s + 1)th sensor of the original array is the 1st sensor of the 2nd subarray.
The parameter d_s denotes the distance between the two subarrays, measured in units
of p, where p is the inter-element spacing.
Original array: sensors 1-10.
(a) Subarray 1: sensors 1-9; subarray 2: sensors 2-10 (d_s = 1).
(b) Subarray 1: sensors 1-7; subarray 2: sensors 4-10 (d_s = 3).
(c) Subarray 1: sensors 1-5; subarray 2: sensors 6-10 (d_s = 5).
We can specify the subarrays by selection matrices. For the subarrays in group (a),
the 1st subarray is specified by the 9 × 10 matrix

J_{s1} = [\, I_{9 \times 9} \;\; 0_{9 \times 1} \,] \qquad (22)

and the 2nd subarray is specified by the 9 × 10 matrix

J_{s2} = [\, 0_{9 \times 1} \;\; I_{9 \times 9} \,], \qquad (23)

where I_{9×9} is a 9 × 9 identity matrix and 0_{9×1} is a 9 × 1 zero matrix.
Similarly, in group (b), the 1st subarray is specified by the 7 × 10 matrix

J_{s1} = [\, I_{7 \times 7} \;\; 0_{7 \times 3} \,] \qquad (24)

and the 2nd subarray is specified by the 7 × 10 matrix

J_{s2} = [\, 0_{7 \times 3} \;\; I_{7 \times 7} \,]. \qquad (25)

In group (c), the 1st subarray is specified by the 5 × 10 matrix

J_{s1} = [\, I_{5 \times 5} \;\; 0_{5 \times 5} \,] \qquad (26)

and the 2nd subarray is specified by the 5 × 10 matrix

J_{s2} = [\, 0_{5 \times 5} \;\; I_{5 \times 5} \,]. \qquad (27)
Thus, for the subarrays in groups (a), (b), and (c), the selection matrices can be defined as

J_{s1} = [\, I_{N_s \times N_s} \;\; 0_{N_s \times d_s} \,], \qquad (28)

J_{s2} = [\, 0_{N_s \times d_s} \;\; I_{N_s \times N_s} \,]. \qquad (29)
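The general selection matrices (28)-(29) are straightforward to build programmatically. A minimal sketch (mine; N, N_s and d_s are assumed inputs satisfying N_s + d_s = N, as in the three groups above):

```python
import numpy as np

def selection_matrices(N, Ns, ds):
    """Build J_s1 = [I_{Ns x Ns} 0_{Ns x ds}] and J_s2 = [0_{Ns x ds} I_{Ns x Ns}]
    for an N-element ULA with two Ns-element subarrays displaced by ds elements."""
    assert Ns + ds == N, "subarray size and displacement must satisfy Ns + ds = N"
    J_s1 = np.hstack([np.eye(Ns), np.zeros((Ns, ds))])
    J_s2 = np.hstack([np.zeros((Ns, ds)), np.eye(Ns)])
    return J_s1, J_s2

# Example corresponding to group (b): J_s1, J_s2 = selection_matrices(10, 7, 3)
```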
If we denote the array manifold matrix of the total array as V and the array manifold
matrix of the ith subarray (i = 1, 2) as V_i, then

V_1 = J_{s1} V \qquad (30)

and

V_2 = J_{s2} V. \qquad (31)
The ESPRIT algorithm exploits the shift invariance property of the array, which
implies

V_2 = V_1 \Phi, \qquad (32)

where

\Phi =
\begin{bmatrix}
e^{j d_s \psi_1} & 0 & \cdots & 0 \\
0 & e^{j d_s \psi_2} & \cdots & 0 \\
0 & 0 & \ddots & 0 \\
0 & 0 & \cdots & e^{j d_s \psi_D}
\end{bmatrix}. \qquad (33)

The parameters ψ_i, i = 1, . . . , D, are the wavenumbers of the D signals in ψ-space.
Recall that for a ULA,

\psi = \frac{2\pi}{\lambda} \, d \cos\theta. \qquad (34)
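As a quick numerical check of (34) (my example, not in the original text): with half-wavelength spacing d = λ/2, a source at θ = 60° has ψ = (2π/λ)(λ/2) cos 60° = π/2, and for d_s = 1 the corresponding diagonal entry of Φ in (33) is e^{j d_s ψ} = e^{jπ/2} = j.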
The source signal spectral matrix and the steering matrix are assumed to have full
rank, i.e., rank(S_f) = D and rank(V) = D. This restriction does not allow fully corre-
lated signals.
The columns of V span the signal subspace; therefore, we have

U_s = V T, \qquad (35)

where T is a non-singular D × D matrix. This relationship says that the signal eigen-
vectors are linear combinations of the array manifold vectors of the D sources.
Selecting the subarray signal subspaces gives

U_{s1} = J_{s1} U_s = J_{s1} V T = V_1 T \qquad (36)

and

U_{s2} = J_{s2} U_s = J_{s2} V T = V_2 T. \qquad (37)
The relation

U_{s1} = V_1 T \qquad (38)

implies

V_1 = U_{s1} T^{-1}. \qquad (39)

Similarly,

U_{s2} = V_2 T = V_1 \Phi T \qquad (40)

implies

U_{s2} = U_{s1} T^{-1} \Phi T. \qquad (41)
Defining

\Psi = T^{-1} \Phi T, \qquad (42)

(41) becomes

U_{s1} \Psi = U_{s2}, \qquad (43)

which relates the signal subspace of the 1st subarray to that of the 2nd subarray.
Note that in (42), the matrices Φ and Ψ are similar, so the eigenvalues of Ψ
are the diagonal elements of Φ. If we can estimate Ψ and compute its eigenvalues, we
can obtain an estimate of ψ_1, ψ_2, . . . , ψ_D. Since the number of sensors of the subarray
N_s is greater than D, this is an overdetermined set of equations. In practice, we have to
estimate U_{s1} and U_{s2} by

\hat{U}_{s1} = J_{s1} \hat{U}_s \qquad (44)

and

\hat{U}_{s2} = J_{s2} \hat{U}_s. \qquad (45)
Then (43) is replaced by

\hat{U}_{s1} \hat{\Psi} = \hat{U}_{s2}. \qquad (46)
LS-ESPRIT
If we solve (46) using a least squares approach, then we minimize the difference
between Û_{s1} Ψ and Û_{s2},

\hat{\Psi} = \arg\min_{\Psi} \left\| \hat{U}_{s2} - \hat{U}_{s1} \Psi \right\|_F^{2}
= \arg\min_{\Psi} \mathrm{tr}\!\left\{ [\hat{U}_{s2} - \hat{U}_{s1} \Psi]^{H} [\hat{U}_{s2} - \hat{U}_{s1} \Psi] \right\}. \qquad (47)
The result is

\hat{\Psi} = [\hat{U}_{s1}^{H} \hat{U}_{s1}]^{-1} \hat{U}_{s1}^{H} \hat{U}_{s2}. \qquad (48)
The steps of the LS-ESPRIT algorithm are summarized as follows.

1. Perform the eigendecomposition of C_X to obtain Û_s.

2. Find Û_{s1} and Û_{s2} using Û_{s1} = J_{s1} Û_s and Û_{s2} = J_{s2} Û_s.

3. Find the LS estimate Ψ̂_LS = [Û_{s1}^H Û_{s1}]^{-1} Û_{s1}^H Û_{s2}.

4. Find the eigenvalues of Ψ̂_LS, denoted by Φ̂_1, Φ̂_2, . . . , Φ̂_D.

5. Find the estimates in ψ-space by using

   \hat{\psi}_i = \frac{1}{d_s} \arg \hat{\Phi}_i, \qquad i = 1, 2, \ldots, D. \qquad (49)
TLS-ESPRIT
Because both Û_{s1} and Û_{s2} are estimates containing errors, Golub and Van Loan
suggest that a total least squares (TLS) approach is more appropriate [1]. If the TLS
approach is used,

\hat{\Psi}_{TLS} = -\hat{V}_{12} \hat{V}_{22}^{-1}, \qquad (50)
where V̂_{12} and V̂_{22} are D × D matrices defined by the eigendecomposition of the 2D × 2D
matrix

\hat{C} =
\begin{bmatrix} \hat{U}_{s1}^{H} \\ \hat{U}_{s2}^{H} \end{bmatrix}
\begin{bmatrix} \hat{U}_{s1} & \hat{U}_{s2} \end{bmatrix}
=
\begin{bmatrix} \hat{V}_{11} & \hat{V}_{12} \\ \hat{V}_{21} & \hat{V}_{22} \end{bmatrix}
\hat{\Lambda}_E
\begin{bmatrix} \hat{V}_{11}^{H} & \hat{V}_{21}^{H} \\ \hat{V}_{12}^{H} & \hat{V}_{22}^{H} \end{bmatrix}, \qquad (51)
with

\hat{\Lambda}_E = \mathrm{diag}[\hat{\lambda}_{E1}, \hat{\lambda}_{E2}, \ldots, \hat{\lambda}_{E,2D}], \qquad (52)

where the eigenvalues are ordered

\hat{\lambda}_{E1} \ge \hat{\lambda}_{E2} \ge \ldots \ge \hat{\lambda}_{E,2D}. \qquad (53)
The TLS-ESPRIT algorithm replaces the LS estimate in step 3 with Ψ̂_TLS given by (50)
and then proceeds with steps 4 and 5 of the LS-ESPRIT algorithm to find
Φ̂_1, Φ̂_2, . . . , Φ̂_D and ψ̂_i, i = 1, . . . , D.
5 Coherent Signals
In this section, we study the problem of correlated and coherent signals. Correlated
signals are signals whose correlation coefficient ρ is nonzero. Coherent signals are signals
for which the magnitude of the correlation coefficient, |ρ|, equals one. Correlated signals
occur in a multipath environment or in systems where smart jammers are utilized to
interfere with radar or communication systems.
The subspace methods are adversely affected by correlation or coherence between
signals. In this section, we develop algorithms to preprocess the data to reduce the effect
of signal coherence. We focus on a popular spatial smoothing algorithm.
Consider the linear array in Fig 9.45 (a). We construct a set of L subarrays of length
M ≥ D + 1 as shown in Fig 9.45 (b). Each subarray is shifted by one element from the
preceding subarray. The ith subarray has the ith element as its initial element. Typical
reference subarrays for M odd and even are shown in Fig 9.45 (c), (d).
For forward-only spatial smoothing, we compute the spectral covariance matrix as

\hat{S}_{SS} = \frac{1}{KL} \sum_{k=1}^{K} \sum_{i=1}^{L} X_M^{(i)}(k) \, [X_M^{(i)}(k)]^{H}, \qquad (54)

where X_M^{(i)}(k) is the kth snapshot at the ith subarray.
For forward-backward spatial smoothing, we compute

\hat{S}_{FBSS} = \frac{1}{2KL} \sum_{k=1}^{K} \sum_{i=1}^{L}
\left\{ X_M^{(i)}(k) [X_M^{(i)}(k)]^{H} + J \, [X_M^{(i)}(k)]^{*} [X_M^{(i)}(k)]^{T} J \right\}, \qquad (55)

where J is the exchange (reversal) matrix.
We use the sample spectral matrix in either (54) or (55) in one of the algorithms
that we have developed previously. We use a symmetric steering vector for a centered
M-element array,

v_s(\psi) = [\, e^{-j \left( \frac{M-1}{2} \right) \psi} \;\; \cdots \;\; e^{j \left( \frac{M-1}{2} \right) \psi} \,]^{T}. \qquad (56)
The advantage of spatial smoothing is that the correlation between the signals will be
reduced. For most algorithms, smaller correlation improves both the threshold behavior
and the MSE above threshold. The disadvantage of spatial smoothing is that the aperture
of the subarray is smaller, which will degrade the performance. The trade-off between
these two effects will depend on the algorithm that is used.
The spectral MUSIC algorithm provides a simple example. We perform an eigende-
composition of (55) and use the (M − D) eigenvectors corresponding to the (M − D)
smallest eigenvalues to construct the noise subspace matrix Û_FBSS,N.
The MUSIC spectrum using forward-backward spatial smoothing becomes

\hat{Q}_{\mathrm{MU,FBSS}}(\psi) = v_s(\psi)^{H} \hat{U}_{FBSS,N} \hat{U}_{FBSS,N}^{H} v_s(\psi). \qquad (57)

The estimates ψ̂_i, i = 1, . . . , D, are obtained from the minima of Q̂_MU,FBSS(ψ).
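A minimal sketch of the forward-backward smoothing step (55) in NumPy (mine; it assumes the standard choice L = N − M + 1 of maximally overlapping subarrays, which the text does not state explicitly):

```python
import numpy as np

def fb_spatial_smoothing(X, M):
    """Forward-backward spatially smoothed covariance matrix, cf. (55), for an N-element ULA."""
    N, K = X.shape
    L = N - M + 1                                # number of overlapping M-element subarrays
    J = np.eye(M)[::-1]                          # M x M exchange (reversal) matrix
    S = np.zeros((M, M), dtype=complex)
    for i in range(L):
        Xi = X[i:i + M, :]                       # snapshots of the ith subarray
        Ci = Xi @ Xi.conj().T / K                # forward term, averaged over the K snapshots
        S += Ci + J @ Ci.conj() @ J              # add the backward (conjugated, reversed) term
    return S / (2 * L)

# S_fbss = fb_spatial_smoothing(X, M) can replace C_X in the spectral MUSIC routine,
# together with the M-element symmetric steering vector v_s(psi) of (56).
```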
Example 9.6.1
Reading Optimum Array Processing, Sections 9.1, 9.2.2, 9.2.3, 9.3.1, 9.3.2, 9.6.1, 9.6.2.
References
[1] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University
Press, Baltimore, 3rd edition, 1996.
[2] R. Roy and T. Kailath. Estimation of signal parameters via rotational invariance
techniques. IEEE Trans. Acoustics, Speech, and Signal Processing, 37(7):984-995, July 1989.
[3] R. O. Schmidt. Multiple emitter location and signal parameter estimation. IEEE
Trans. Antennas and Propagation, 34(3):276-280, March 1986.