
Spectrum Sensing in Cognitive Radio with Subspace Matching

Shujie Hou, Robert Qiu, James P. Browning, Michael Wicks

Cognitive Radio Institute, Department of Electrical and Computer Engineering,
Center for Manufacturing Research, Tennessee Technological University,
Cookeville, TN 38505, USA
Email: shou42@students.tntech.edu, rqiu@tntech.edu

Air Force Research Laboratory, Wright-Patterson AFB, OH 45433, USA
Email: James.Browning@wpafb.af.mil

Sensor Systems Division, University of Dayton Research Institute,
300 College Park, Dayton, OH 45469, USA
Email: Michael.Wicks@udri.udayton.edu

Abstract—Spectrum sensing has been put forward to make more
efficient use of the scarce radio frequency spectrum. The leading
eigenvector of the sample covariance matrix has previously been
applied to spectrum sensing under the frameworks of PCA and kernel
PCA. In this paper, spectrum sensing with subspace matching is
proposed. The subspace is spanned by the eigenvectors corresponding
to the dominant non-zero eigenvalues of the sample covariance
matrix; that is, several eigenvectors are applied to spectrum
sensing rather than the leading one alone. The distance between
subspaces is measured by the projection Frobenius norm. Simulations
are carried out on both simulated and captured DTV signals.

I. INTRODUCTION

Spectrum sensing is a cornerstone of cognitive radio: it detects
the availability of radio frequency bands for possible use by a
secondary user without interference to the primary user.
Traditional techniques proposed for spectrum sensing include energy
detection, matched filter detection, cyclostationary feature
detection, covariance-based detection, and feature-based detection
[1]–[9].
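Of these techniques, energy detection is the simplest to illustrate: the average power of the received vector is compared with a threshold. A minimal sketch in Python/NumPy; the dimension and threshold values are illustrative assumptions, not taken from any of the cited papers:

```python
import numpy as np

def energy_detector(x, threshold):
    """Declare H1 (primary signal present) when the average energy
    of the received vector x exceeds the threshold."""
    return bool(np.mean(np.abs(x) ** 2) > threshold)

# Illustrative check with noise variance 1: a deterministic strong
# input trips the detector, an all-zero vector does not.
d = 32                  # assumed vector dimension
threshold = 1.5         # assumed threshold (set from a target Pf in practice)
print(energy_detector(np.zeros(d), threshold))      # quiet input -> False
print(energy_detector(3 * np.ones(d), threshold))   # strong input -> True
```

In practice the threshold is calibrated so that the false alarm rate under noise-only input meets a target value.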
The secondary user receives a d-dimensional column vector x. Based
on the received signal, there are two hypotheses: H0, under which
the primary signal is absent, and H1, under which the primary
signal is present:

    H0 : x = n
    H1 : x = s + n                                           (1)

in which s and n are assumed to be independent, s is the primary
signal, and n is zero-mean white Gaussian noise. The secondary user
decides between the two hypotheses H0 and H1.
The two quantities used most often to measure the performance of a
spectrum sensing algorithm are the detection probability Pd and the
false alarm probability Pf, defined as

    Pd = Prob(detect H1 | x = s + n)
    Pf = Prob(detect H1 | x = n)                             (2)

in which Prob denotes probability.
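The two probabilities in (2) are typically estimated by Monte Carlo trials: draw many realizations under each hypothesis and count how often the detector declares H1. A sketch under assumed parameters; the sinusoidal primary signal and energy detector here are placeholders for any detector:

```python
import numpy as np

def estimate_pd_pf(detector, make_signal, d, trials=2000, snr_db=5.0, seed=1):
    """Monte Carlo estimates of Pd = Prob(detect H1 | x = s + n) and
    Pf = Prob(detect H1 | x = n) for a given detector."""
    rng = np.random.default_rng(seed)
    amp = 10 ** (snr_db / 20)        # amplitude scaling for the chosen SNR
    h1_detections = h0_detections = 0
    for _ in range(trials):
        n = rng.normal(size=d)       # zero-mean white Gaussian noise
        s = amp * make_signal(d)     # primary signal realization
        h1_detections += detector(s + n)   # detect H1 when signal present
        h0_detections += detector(n)       # detect H1 when only noise
    return h1_detections / trials, h0_detections / trials

# Hypothetical setup: unit-power sinusoid, simple energy detector.
sinusoid = lambda d: np.sqrt(2) * np.sin(2 * np.pi * 0.1 * np.arange(d))
energy_det = lambda x: np.mean(x ** 2) > 1.3
pd, pf = estimate_pd_pf(energy_det, sinusoid, d=32)
```

Sweeping `snr_db` while holding the threshold fixed at a target Pf produces detection-probability curves of the kind reported later in the paper.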


The leading eigenvector (the eigenvector corresponding to the
largest eigenvalue) of the sample covariance matrix has been proved
stable for a non-white wide-sense stationary (WSS) signal. Spectrum
sensing with the leading eigenvector of the sample covariance
matrix was proposed and successfully demonstrated in hardware [9]
under the framework of principal component analysis (PCA). It has
further been generalized to the framework of kernel PCA [10] via
the kernel trick. However, these previous techniques rely on the
leading eigenvector alone. In this paper, spectrum sensing with
subspace matching is proposed under both the PCA and kernel PCA
frameworks. The subspace is spanned by the eigenvectors
corresponding to the dominant non-zero eigenvalues of the sample
covariance matrix.

Spectrum sensing with subspace matching is a straightforward
extension of leading-eigenvector detection. In practice, subspace
matching for spectrum sensing can be implemented in two ways.
First, if training samples of the primary user's signal are
available, the subspace of the primary user's signal can be learned
a priori and taken as the template. The subspace learned from the
signal received by the secondary user should then show large
similarity to the template under hypothesis H1. Second, the
received signal can be divided into two segments. Since the
subspace of the primary user's signal is stable, the subspaces of
the two segments should show large similarity whenever the primary
user's signal is present in the received signal. In this paper,
training samples of the primary user's signal are assumed to be
available.
II. SPECTRUM SENSING WITH LEADING EIGENVECTOR UNDER THE
FRAMEWORKS OF PCA AND KERNEL PCA

In this section, spectrum sensing with the leading eigenvector
under the frameworks of PCA [9] and kernel PCA [10] is briefly
reviewed. The training vectors of the primary signal,
s1, s2, ..., sM, are assumed to be available a priori. Denoting the
vectors received by the secondary user as x1, x2, ..., xM, spectrum
sensing with the leading eigenvector under the framework of PCA can
be summarized as follows.

1) The sample covariance matrix of the training set is computed by

       Rs = (1/M) Σ_{i=1}^{M} (si − us)(si − us)^T           (3)

   in which T denotes transpose and us is the mean vector of the
   training samples.
2) The leading eigenvector v1 of Rs is obtained by
   eigen-decomposition,

       Rs = V Λ V^T                                          (4)

   in which Λ is a diagonal matrix whose ith diagonal element is
   λi, with λ1 ≥ λ2 ≥ ... ≥ λd ≥ 0 the eigenvalues of Rs, and
   V = (v1, v2, ..., vd), where vi is the eigenvector of Rs
   corresponding to λi.
3) Likewise, the leading eigenvector ṽ1 of the sample covariance
   matrix of x1, x2, ..., xM is derived.
4) The primary signal is detected if the similarity between v1 and
   ṽ1, measured by the inner product, is larger than a pre-defined
   threshold,

       ⟨v1, ṽ1⟩ > Tpca                                       (5)

   where ⟨·,·⟩ denotes the inner product.


In this paper, the inner product is used as the measure of
similarity. In [9], the autocorrelation is used instead, which is

    max_{l=0,1,...,d} Σ_{k=1}^{d} v1[k] ṽ1[k + l] > Tpca.    (6)

The threshold value is determined by simulation such that
Pf = 10%.
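Steps 1)-4) above can be sketched directly in Python/NumPy. The absolute value on the inner product is an implementation detail we add (an eigenvector is only defined up to sign), and the threshold is a hypothetical placeholder:

```python
import numpy as np

def leading_eigenvector(vectors):
    """Leading eigenvector of the sample covariance matrix, as in (3)-(4)."""
    X = np.asarray(vectors)                   # M x d matrix of row vectors
    R = np.cov(X, rowvar=False, bias=True)    # (1/M) sum (si - us)(si - us)^T
    eigvals, eigvecs = np.linalg.eigh(R)      # eigenvalues in ascending order
    return eigvecs[:, -1]                     # eigenvector of the largest one

def pca_leading_detect(train, received, threshold):
    """Detection rule (5): compare the inner product of the two leading
    eigenvectors with a pre-defined threshold."""
    v1 = leading_eigenvector(train)
    v1_recv = leading_eigenvector(received)
    return bool(abs(np.dot(v1, v1_recv)) > threshold)
```

Feeding the detector the same vectors twice gives an inner product of 1, the maximum similarity; uncorrelated noise drives it toward 0.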
The leading-eigenvector detection mentioned above is based on data
in the original space. In [10], leading-eigenvector detection based
on Φ(s1), Φ(s2), ..., Φ(sM) and Φ(x1), Φ(x2), ..., Φ(xM) is
proposed for spectrum sensing with the use of a kernel function.
Φ is the mapping that maps the original-space data s and x to the
feature-space data Φ(s) and Φ(x).

Assuming Φ(s1), Φ(s2), ..., Φ(sM) have zero mean, the sample
covariance matrix of the training set in the feature space is

    RΦ(s) = (1/M) Σ_{i=1}^{M} Φ(si) Φ(si)^T.                 (7)

Similarly, the sample covariance matrix RΦ(x) of
Φ(x1), Φ(x2), ..., Φ(xM) can be obtained, where
Φ(x1), Φ(x2), ..., Φ(xM) are also assumed to have zero mean. The
leading eigenvectors of RΦ(s) and RΦ(x) are employed for detection.

A kernel function, which relies only on the inner product of
feature-space data, is defined as [11]

    k(xi, xj) = ⟨Φ(xi), Φ(xj)⟩                               (8)

to implicitly map the original-space data x into a
higher-dimensional feature space F. With the use of a kernel
function, the mapping Φ need not be known explicitly.
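This can be made concrete with the polynomial kernel used later in the paper. For degree 2, c = 0, and 2-dimensional inputs, the feature map can be written out explicitly, and the kernel value matches the feature-space inner product. The explicit map below is a standard textbook construction, chosen here for illustration:

```python
import numpy as np

def poly_kernel(x, y, c=0.0, degree=2):
    """Polynomial kernel: k(x, y) = (<x, y> + c)^degree, c >= 0."""
    return (np.dot(x, y) + c) ** degree

def phi(x):
    """Explicit feature map for degree 2, c = 0, 2-dimensional input:
    phi(x) = (x1^2, x2^2, sqrt(2) x1 x2), so <phi(x), phi(y)> = <x, y>^2."""
    return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2) * x[0] * x[1]])

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])
# The kernel evaluates the feature-space inner product without ever
# forming phi -- this is what "need not be known explicitly" means.
print(poly_kernel(x, y), np.dot(phi(x), phi(y)))   # both approximately 1.0
```

For higher degrees and dimensions the explicit feature space grows combinatorially, while the kernel evaluation stays a single inner product and a power.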
The detection algorithm with the leading eigenvectors of RΦ(s) and
RΦ(x) can be summarized as follows.

1) Given a kernel function k, the kernel (Gram) matrix
   Kij = k(si, sj), i, j = 1, ..., M, based on the training samples
   is obtained. K is positive semi-definite.
2) Eigen-decompose the kernel matrix K to get the eigenvectors
   α1, α2, ..., αM corresponding to the eigenvalues
   λ1 ≥ λ2 ≥ ... ≥ λM ≥ 0.
3) Another kernel matrix K̃ij = k(xi, xj) is derived based on the
   received vectors x1, x2, ..., xM.
4) Likewise, the eigenvectors α̃1, α̃2, ..., α̃M corresponding to
   the eigenvalues λ̃1 ≥ λ̃2 ≥ ... ≥ λ̃M ≥ 0 are obtained by
   eigen-decomposing K̃.
5) The eigenvectors of RΦ(s) and RΦ(x) can be expressed as

       (v1^f, ..., vM^f) = (Φ(s1), Φ(s2), ..., Φ(sM))(α1, ..., αM),    (9)
       (ṽ1^f, ..., ṽM^f) = (Φ(x1), Φ(x2), ..., Φ(xM))(α̃1, ..., α̃M).  (10)

6) vi^f and ṽi^f, i = 1, 2, ..., M, are normalized through
   αi = αi/√λi and α̃i = α̃i/√λ̃i.
7) The primary signal is detected if the similarity between v1^f
   and ṽ1^f is larger than a pre-defined threshold,

       ⟨v1^f, ṽ1^f⟩ = α1^T K^t α̃1 > Tkpca,                  (11)

   in which Tkpca is the derived threshold value and
   K^t_ij = k(si, xj).
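Steps 1)-7) can be sketched compactly. The α vectors come from eigen-decomposing the Gram matrices, and the normalization in step 6) makes the feature-space eigenvectors unit-norm, so the similarity (11) reaches 1 for identical inputs. The kernel choice and data below are illustrative:

```python
import numpy as np

def top_alpha(K):
    """Leading eigenvector of a Gram matrix, normalized by 1/sqrt(lambda)
    (step 6) so the implied feature-space eigenvector has unit norm."""
    eigvals, eigvecs = np.linalg.eigh(K)        # ascending eigenvalues
    return eigvecs[:, -1] / np.sqrt(eigvals[-1])

def kpca_leading_similarity(S, X, kernel):
    """Similarity (11): <v1^f, v1~^f> = alpha1^T K^t alpha1~."""
    K = np.array([[kernel(a, b) for b in S] for a in S])       # K_ij = k(si, sj)
    K_recv = np.array([[kernel(a, b) for b in X] for a in X])  # K~_ij = k(xi, xj)
    Kt = np.array([[kernel(a, b) for b in X] for a in S])      # Kt_ij = k(si, xj)
    return top_alpha(K) @ Kt @ top_alpha(K_recv)
```

With X equal to S, the similarity is exactly the squared norm of v1^f, i.e. 1; detection then compares this statistic against Tkpca.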
III. SPECTRUM SENSING WITH SUBSPACE MATCHING UNDER THE
FRAMEWORKS OF PCA AND KERNEL PCA

In the previous section, only the leading eigenvectors v1 and ṽ1
(v1^f and ṽ1^f) of the sample covariance matrices are used for
spectrum sensing in the original and feature spaces. In this
section, leading-eigenvector spectrum sensing is generalized to use
several dominant eigenvectors.

Under the framework of PCA, the eigenvectors v1, v2, ..., vp of the
sample covariance matrix Rs and ṽ1, ṽ2, ..., ṽp of the sample
covariance matrix Rx are used, where p is a parameter to be
determined. The two linear subspaces spanned by v1, v2, ..., vp and
ṽ1, ṽ2, ..., ṽp are represented for simplicity by the matrices

    A = (v1, v2, ..., vp),                                   (12)
    B = (ṽ1, ṽ2, ..., ṽp).                                   (13)

As with leading-eigenvector detection, under hypothesis H1 the
subspace B should show large similarity to the subspace A learned
from the training vectors of the primary signal. Therefore, under
hypothesis H1 the distance between the subspaces A and B should be
small.
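Under the PCA framework, the matching test can be sketched as follows, using the projection Frobenius norm between the column spaces of A and B. Unlike a single inner product, this distance is unaffected by the sign ambiguity of individual eigenvectors, since AA^T is unchanged when any column of A flips sign. The threshold below is a placeholder:

```python
import numpy as np

def dominant_eigvecs(vectors, p):
    """A (or B): the p dominant eigenvectors of the sample covariance
    matrix, stacked as columns, as in (12)-(13)."""
    R = np.cov(np.asarray(vectors), rowvar=False, bias=True)
    eigvals, eigvecs = np.linalg.eigh(R)       # ascending order
    return eigvecs[:, ::-1][:, :p]             # d x p, dominant first

def subspace_distance(A, B):
    """Projection Frobenius norm: d(A, B) = ||A A^T - B B^T||_F^2."""
    return np.linalg.norm(A @ A.T - B @ B.T, ord='fro') ** 2

def subspace_detect(train, received, p, threshold):
    """Declare the primary signal present when the two subspaces are close."""
    A = dominant_eigvecs(train, p)
    B = dominant_eigvecs(received, p)
    return bool(subspace_distance(A, B) < threshold)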

The distance between two subspaces is measured in this paper by the
projection Frobenius norm [12],

    d(A, B) = ‖AA^T − BB^T‖_F^2                              (14)

where ‖·‖_F is the Frobenius norm of a matrix. In this paper, only
the case in which A and B have the same dimension p is considered.
Under this condition, the projection Frobenius norm equals
2‖sin θ‖_2^2, where θ is the p-dimensional vector of principal
angles between A and B and ‖·‖_2 is the l2 norm.

The primary signal is claimed to be detected by the secondary
user if

    d(A, B) = ‖AA^T − BB^T‖_F^2 < Tsubpca,                   (15)

where Tsubpca is the threshold value for subspace matching under
the framework of PCA.

Likewise, in the feature space, the eigenvectors
v1^f, v2^f, ..., vpf^f of RΦ(s) and ṽ1^f, ṽ2^f, ..., ṽpf^f of
RΦ(x) are used. The subspaces spanned by v1^f, v2^f, ..., vpf^f and
ṽ1^f, ṽ2^f, ..., ṽpf^f are represented by the matrices C and D,
respectively; C and D are considered real matrices here. The
distance between C and D can be calculated as follows,

    d(C, D) = ‖CC^T − DD^T‖_F^2
            = trace((CC^T − DD^T)^T (CC^T − DD^T))
            = trace(CC^T CC^T + DD^T DD^T − CC^T DD^T − DD^T CC^T)
            = trace(C^T C) + trace(D^T D) − 2 trace(D^T C C^T D)
            = 2pf − 2 trace((C^T D)^T (C^T D))               (16)

where

    C^T D = (v1^f, ..., vpf^f)^T (ṽ1^f, ..., ṽpf^f)
          = (α1, ..., αpf)^T K^t (α̃1, ..., α̃pf).            (17)

The primary signal is detected if d(C, D) < Tsubkpca. The
parameters p and pf are determined by

    Σ_{i=1}^{p} λi / Σ_{i=1}^{d} λi ≥ 85%,
    Σ_{i=1}^{pf} λi / Σ_{i=1}^{M} λi ≥ 85%.                  (18)

So far the mean vector of Φ(si), i = 1, 2, ..., M, has been assumed
to be zero. In fact, the kernel matrix [13] based on the zero-mean
data

    Φ(si) − (1/M) Σ_{i=1}^{M} Φ(si)                          (19)

is

    Kc = K − 1M K − K 1M + 1M K 1M                           (20)

in which (1M)ij := 1/M. This centering in the feature space is not
performed in this paper.

The kernel used in this paper is the polynomial kernel

    k(xi, xj) = (⟨xi, xj⟩ + c)^de, c ≥ 0,                    (21)

where de is the order of the polynomial.

IV. EXPERIMENTS

In this section, the training vectors are derived from the discrete
samples s(n), s(n + 1), ..., s(L + n − 1) by

    s1 = (s(n), s(n + 1), ..., s(n + d − 1)),
    s2 = (s(n + 1), s(n + 2), ..., s(n + d)),
    ......
    sM = (s(L + n − d), ..., s(L + n − 1)),                  (22)

where M = L − d + 1. Likewise, the received vectors x1, x2, ..., xM
are derived from the discrete samples x(n), x(n + 1), ...,
x(L + n − 1) in the same way. In the experiments, the length
L = 500 is used.

A. Experiments on Simulated Signal

The primary signal is assumed to be the sum of three sinusoids,
each with unit amplitude. The polynomial kernel of order 2 with
c = 0 is used. Based on 1000 Monte Carlo simulations, the detection
probabilities with dimensions d = 32 and d = 128 are shown in
Fig. 1 and Fig. 2, respectively. In Fig. 2, the results are shown
with both the inner product and the autocorrelation as the measure
of similarity for leading-eigenvector detection under the framework
of PCA.

From Fig. 1 and Fig. 2, it can be seen that the performance of
subspace matching under the framework of PCA is much better than
the corresponding leading-eigenvector detection. However, the
performance of subspace matching under the framework of kernel PCA
is only comparable with the corresponding leading-eigenvector
detection.

[Fig. 1. The detection probability with Pf = 10% for the simulated
signal (d = 32); Pd versus SNR in dB for kernel PCA with leading
eigenvector, kernel PCA with subspace, PCA with leading
eigenvector, and PCA with subspace.]

[Fig. 2. The detection probability with Pf = 10% for the simulated
signal (d = 128), additionally including PCA with the leading
eigenvector measured by correlation.]

The calculated threshold values with Pf = 10% are shown in Fig. 3.
The threshold values are normalized by dividing by their
corresponding maximal values. None of these methods suffers from
the noise uncertainty problem. The threshold values for the
subspace methods are more robust than those of the corresponding
leading-eigenvector detection.

[Fig. 3. Normalized threshold values Tkpca, Tsubkpca, Tpca, and
Tsubpca versus SNR in dB.]

B. Experiments on Captured DTV Signal

A DTV signal [14] captured in Washington D.C. is employed for the
spectrum sensing experiment in this section. The polynomial kernel
of order 3 with c = 0 is used. Based on 1000 Monte Carlo
simulations, the detection probability with d = 32 is shown in
Fig. 4 with false alarm probability 10%. The normalized threshold
values with Pf = 10% are shown in Fig. 5.

[Fig. 4. The detection probability with Pf = 10% for the DTV
signal.]

[Fig. 5. Normalized threshold values versus SNR in dB.]

From Fig. 4, it can be seen that the performance of subspace
matching under the framework of kernel PCA is better than the
corresponding leading-eigenvector detection. However, the
performance of subspace matching under the framework of PCA is
worse than the corresponding leading-eigenvector detection.

V. CONCLUSION

Spectrum sensing with subspace matching is proposed under the
frameworks of PCA and kernel PCA. The experimental results are
compared with the corresponding leading-eigenvector detection. The
simulations show that the proposed subspace matching method is
applied to spectrum sensing successfully. The performance of the
subspace matching method under the framework of PCA is better than
the corresponding leading-eigenvector detection for the simulated
signal, whereas for the DTV signal the subspace matching method
under the framework of kernel PCA is better than the corresponding
leading-eigenvector detection.
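The experimental pipeline can be sketched end to end: build the sliding-window vectors of (22), then pick the subspace dimension by the 85% energy rule of (18). The sinusoid frequencies below are illustrative choices, not the ones used in the paper:

```python
import numpy as np

def to_vectors(samples, d):
    """Sliding-window training/received vectors as in (22); M = L - d + 1."""
    samples = np.asarray(samples)
    return np.array([samples[i:i + d] for i in range(len(samples) - d + 1)])

def energy_subspace(X, energy=0.85):
    """Dominant eigenvectors capturing at least `energy` of the total
    eigenvalue mass, the 85% rule of (18)."""
    R = np.cov(X, rowvar=False, bias=True)
    eigvals, eigvecs = np.linalg.eigh(R)
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]    # descending order
    ratios = np.cumsum(eigvals) / np.sum(eigvals)
    p = int(np.searchsorted(ratios, energy)) + 1          # smallest p meeting the rule
    return eigvecs[:, :p]

# Illustrative primary signal: sum of three unit-amplitude sinusoids,
# with L = 500 and d = 32 as in the paper.
t = np.arange(500)
s = sum(np.sin(2 * np.pi * f * t) for f in (0.05, 0.12, 0.2))
A = energy_subspace(to_vectors(s, 32))
```

For a sum of three real sinusoids the covariance matrix has at most six significant eigenvalues (two per sinusoid), so the 85% rule keeps only a handful of eigenvectors; the resulting A is then compared with the subspace of the received vectors via the projection Frobenius norm of (14).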
ACKNOWLEDGMENT

This work is funded by the National Science Foundation through two
grants (ECCS-0901420 and ECCS-0821658) and by the Office of Naval
Research through two grants (N00014-10-1-0810 and N00014-11-1-0006).
REFERENCES

[1] S. Haykin, D. Thomson, and J. Reed, "Spectrum sensing for cognitive
radio," Proceedings of the IEEE, vol. 97, no. 5, pp. 849–877, 2009.
[2] J. Ma, G. Li, and B. Juang, "Signal processing in cognitive radio,"
Proceedings of the IEEE, vol. 97, no. 5, pp. 805–823, 2009.
[3] D. Cabric, S. Mishra, and R. Brodersen, "Implementation issues in
spectrum sensing for cognitive radios," in Signals, Systems and
Computers, 2004. Conference Record of the Thirty-Eighth Asilomar
Conference on, vol. 1, pp. 772–776, IEEE, 2004.
[4] T. Yucek and H. Arslan, "A survey of spectrum sensing algorithms for
cognitive radio applications," Communications Surveys & Tutorials,
IEEE, vol. 11, no. 1, pp. 116–130, 2009.
[5] Y. Zeng and Y. Liang, "Maximum-minimum eigenvalue detection for
cognitive radio," in Personal, Indoor and Mobile Radio Communications,
2007. PIMRC 2007. IEEE 18th International Symposium on, pp. 1–5,
IEEE, 2007.
[6] Y. Zeng, C. Koh, and Y. Liang, "Maximum eigenvalue detection: theory
and application," in Communications, 2008. ICC '08. IEEE International
Conference on, pp. 4160–4164, IEEE, 2008.
[7] Y. Zeng and Y. Liang, "Spectrum-sensing algorithms for cognitive radio
based on statistical covariances," Vehicular Technology, IEEE
Transactions on, vol. 58, no. 4, pp. 1804–1815, 2009.
[8] Y. Zeng and Y. Liang, "Covariance based signal detections for cognitive
radio," in New Frontiers in Dynamic Spectrum Access Networks, 2007.
DySPAN 2007. 2nd IEEE International Symposium on, pp. 202–207, IEEE,
2007.
[9] P. Zhang, R. Qiu, and N. Guo, "Demonstration of spectrum sensing with
blindly learned feature," accepted by Communications Letters, IEEE,
2011.
[10] S. Hou and R. Qiu, "Spectrum sensing for cognitive radio using
kernel-based learning," arXiv preprint arXiv:1105.2978, 2011.
[11] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector
Machines and Other Kernel-based Learning Methods. Cambridge University
Press, 2000.
[12] R. Basri, T. Hassner, and L. Zelnik-Manor, "A general framework for
approximate nearest subspace search," in Computer Vision Workshops
(ICCV Workshops), 2009 IEEE 12th International Conference on,
pp. 109–116, IEEE, 2009.
[13] B. Scholkopf, A. Smola, and K. Muller, "Nonlinear component analysis
as a kernel eigenvalue problem," Neural Computation, vol. 10, no. 5,
pp. 1299–1319, 1998.
[14] V. Tawil, "51 captured DTV signal." http://grouper.ieee.org/groups
/802/22/Meeting documents/2006 May/Informal Documents, May 2006.
