
Filtering and Identification

Lecture 7: Subspace Identification and Instrumental Variables

Michel Verhaegen and Jan-Willem van Wingerden

Delft Center for Systems and Control
Delft University of Technology

Overview

Recap: Deterministic Subspace Identification
Identification of Output Error Models
Identification of Innovation Models
Estimating the Kalman Gain
Identification and Control: Experimental Results

The Deterministic Case

LTI system:

$$x(k+1) = A\,x(k) + B\,u(k), \qquad y(k) = C\,x(k) + D\,u(k)$$

Data equation: $Y_{0,s,N} = \mathcal{O}_s X_{0,N} + \mathcal{T}_s U_{0,s,N}$

Recovery of the column space of the extended observability matrix $\mathcal{O}_s$:

Let $\Pi^{\perp}_{U_{0,s,N}} = I - U_{0,s,N}^{T}\big(U_{0,s,N} U_{0,s,N}^{T}\big)^{-1} U_{0,s,N}$, then:

$$Y_{0,s,N}\,\Pi^{\perp}_{U_{0,s,N}} = \mathcal{O}_s X_{0,N}\,\Pi^{\perp}_{U_{0,s,N}}, \qquad \operatorname{range}\big(Y_{0,s,N}\,\Pi^{\perp}_{U_{0,s,N}}\big) = \operatorname{range}(\mathcal{O}_s)$$

The RQ factorization provides an efficient solution:

$$\begin{bmatrix} U_{0,s,N} \\ Y_{0,s,N} \end{bmatrix} = \begin{bmatrix} R_{11} & 0 \\ R_{21} & R_{22} \end{bmatrix} \underbrace{\begin{bmatrix} Q_1 \\ Q_2 \end{bmatrix}}_{Q}, \qquad Q\,Q^{T} = I, \qquad Y_{0,s,N}\,\Pi^{\perp}_{U_{0,s,N}} = R_{22}\,Q_2$$

The 5-line MATLAB solution: Basic MOESP

GIVEN: the i/o data sequences $\{u(k), y(k)\}_{k=0}^{N-1}$ and the integer s specifying the number of block rows of the Hankel matrices $U_{0,s,N}$, $Y_{0,s,N}$ (with s > n).

THEN DO:
Construct the Hankel matrices $U_{0,s,N}$, $Y_{0,s,N}$ (U, Y)
RQ factorization: r = triu(qr([U ; Y]));
Extract R22
Range (column space) calculation + order detection:
[uu,ss,vv] = svd(R22);
semilogy(diag(ss),'xr');
Estimate $A_T$, $C_T$
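The recipe above can be sketched in Python/NumPy rather than MATLAB; the simulated second-order system, the seed, and the singular-value threshold are illustrative assumptions for this sketch, not part of the lecture:

```python
import numpy as np

def block_hankel(w, first, s, N):
    """s x N Hankel matrix whose (i, j) entry is w(first + i + j)."""
    return np.array([w[first + i : first + i + N] for i in range(s)])

# Illustrative 2nd-order LTI system (an assumption for this sketch)
A = np.array([[1.5, -0.7], [1.0, 0.0]])
B = np.array([1.0, 0.0])
C = np.array([1.0, 1.0])

rng = np.random.default_rng(0)
T = 1200
u = rng.standard_normal(T)
x = np.zeros(2)
y = np.empty(T)
for k in range(T):               # noise-free simulation: y = Cx, x <- Ax + Bu
    y[k] = C @ x
    x = A @ x + B * u[k]

s, N = 6, T - 6 + 1
U = block_hankel(u, 0, s, N)
Y = block_hankel(y, 0, s, N)

# "RQ" factorization via QR of the transpose: [U; Y] = L Q, L lower triangular
L = np.linalg.qr(np.vstack([U, Y]).T)[1].T
R22 = L[s:, s:]                  # Y_{0,s,N} * Pi_perp = R22 Q2

# Range (column space) calculation + order detection
uu, sv, _ = np.linalg.svd(R22)
n = int(np.sum(sv > 1e-6 * sv[0]))   # order read off the singular-value gap
Os = uu[:, :n]                       # basis for range(O_s)

# Shift invariance of O_s gives A (up to similarity) and C (output dim = 1)
A_hat = np.linalg.pinv(Os[:-1]) @ Os[1:]
C_hat = Os[:1]
```

In the noise-free case the singular-value gap is sharp, and A_hat equals A up to a similarity transformation, so their eigenvalues coincide.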

subid_basic.m


Overview

Recap: Deterministic Subspace Identification
Identification of Output Error Models
Identification of Innovation Models
Estimating the Kalman Gain
Identification and Control: Experimental Results

The Output-error identification problem

[Figure: block diagram. The signal-generating model (SGM) [A, B, C, D] is driven by u(k); the additive coloured noise v(k) is added to its output to give y(k). The identified model [A, B, C, D]_T produces the prediction and the error ε(k).]

Problem: given $\{u(k), y(k)\}_{k=0}^{N-1}$, estimate $[A, B, C, D]_T$ and $x(0)$ consistently (up to a similarity transformation $T$).

Assumptions about the additive disturbance v(k):
zero-mean
WSS stochastic process with a rational spectrum
uncorrelated with the input u(k)

tst_sub.m


Retrieving the subspace $\operatorname{range}(\mathcal{O}_s)$?

Recall the data equation (with v(k)):

$$Y_{0,s,N} = \mathcal{O}_s X_{0,N} + \mathcal{T}_s U_{0,s,N} + V_{0,s,N}$$

Then the projected data equation (to remove the effect of the input) is:

$$Y_{0,s,N}\,\Pi^{\perp}_{U_{0,s,N}} = \mathcal{O}_s X_{0,N}\,\Pi^{\perp}_{U_{0,s,N}} + V_{0,s,N}\,\Pi^{\perp}_{U_{0,s,N}}$$

with

$$\Pi^{\perp}_{U_{0,s,N}} = I_N - U_{0,s,N}^{T}\big(U_{0,s,N} U_{0,s,N}^{T}\big)^{-1} U_{0,s,N}$$

To remove the effect of v(k) we have to interpret products like $V_{0,s,N}\,\Pi^{\perp}_{U_{0,s,N}}$.

Ergodicity: the link between statistics and algebra

$$V_{0,s,N}\,\Pi^{\perp}_{U_{0,s,N}} = V_{0,s,N} - V_{0,s,N}\,U_{0,s,N}^{T}\big(U_{0,s,N} U_{0,s,N}^{T}\big)^{-1} U_{0,s,N}$$

Then $\frac{1}{N} V_{0,s,N}\, U_{0,s,N}^{T}$ equals

$$\frac{1}{N}\begin{bmatrix} v(0) & v(1) & \cdots & v(N-1) \\ v(1) & v(2) & \cdots & v(N) \\ \vdots & \vdots & & \vdots \\ v(s-1) & v(s) & \cdots & v(N+s-2) \end{bmatrix} \begin{bmatrix} u(0)^{T} & u(1)^{T} & \cdots & u(s-1)^{T} \\ u(1)^{T} & u(2)^{T} & \cdots & u(s)^{T} \\ \vdots & \vdots & & \vdots \\ u(N-1)^{T} & u(N)^{T} & \cdots & u(N+s-2)^{T} \end{bmatrix}$$

With probability 1, in the limit $N \to \infty$ this equals

$$\begin{bmatrix} E[v(k)u(k)^{T}] & E[v(k)u(k+1)^{T}] & \cdots \\ E[v(k+1)u(k)^{T}] & \ddots & \\ \vdots & & \end{bmatrix} = 0$$

since v(k) and u(k) are uncorrelated.
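The limit argument above can be illustrated numerically; the AR(1) disturbance, the sample size, and the seed are assumptions for this sketch. For independent u(k) and v(k), every entry of $\frac{1}{N} V_{0,s,N} U_{0,s,N}^{T}$ is a sample cross-correlation that shrinks toward 0 as N grows:

```python
import numpy as np

rng = np.random.default_rng(1)
s, N = 4, 100_000
u = rng.standard_normal(N + s)

# Coloured, zero-mean disturbance: AR(1) filter of white noise, independent of u
e = rng.standard_normal(N + s)
v = np.empty(N + s)
v[0] = e[0]
for k in range(1, N + s):
    v[k] = 0.8 * v[k - 1] + e[k]

V = np.array([v[i:i + N] for i in range(s)])   # V_{0,s,N}
U = np.array([u[i:i + N] for i in range(s)])   # U_{0,s,N}

M = V @ U.T / N        # sample version of the matrix of E[v(k+i) u(k+j)^T]
print(np.abs(M).max()) # small, and shrinks like 1/sqrt(N)
```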

Consistent Estimation via Instrumental Variables

Recall the projected data equation:

$$Y_{0,s,N}\,\Pi^{\perp}_{U_{0,s,N}} = \mathcal{O}_s X_{0,N}\,\Pi^{\perp}_{U_{0,s,N}} + V_{0,s,N}\,\Pi^{\perp}_{U_{0,s,N}}$$

Problem: how to remove $V_{0,s,N}$ from the projected data equation?

Instrumental variables: multiply from the right by $Z_N^{T}$:

$$Y_{0,s,N}\,\Pi^{\perp}_{U_{0,s,N}} Z_N^{T} = \mathcal{O}_s\, X_{0,N}\,\Pi^{\perp}_{U_{0,s,N}} Z_N^{T} + V_{0,s,N}\,\Pi^{\perp}_{U_{0,s,N}} Z_N^{T}$$

Two requirements on the instrumental variable matrix $Z_N$:

1. $\operatorname{rank}\Big(\lim_{N\to\infty} \frac{1}{N}\, X_{0,N}\,\Pi^{\perp}_{U_{0,s,N}} Z_N^{T}\Big) = n$

2. $\lim_{N\to\infty} \frac{1}{N}\, V_{0,s,N}\,\Pi^{\perp}_{U_{0,s,N}} Z_N^{T} = 0$

A first choice for $Z_N$?

Two requirements on the instrumental variable matrix $Z_N$:

1. $\operatorname{rank}\Big(\lim_{N\to\infty} \frac{1}{N}\, X_{0,N}\,\Pi^{\perp}_{U_{0,s,N}} Z_N^{T}\Big) = n$

2. $\lim_{N\to\infty} \frac{1}{N}\, V_{0,s,N}\,\Pi^{\perp}_{U_{0,s,N}} Z_N^{T} = 0$

Since u(k) and v(k) are independent, a possible choice for $Z_N$ is $Z_N = U_{0,s,N}$.

Requirement 2 is then satisfied:

$$\lim_{N\to\infty} \frac{1}{N}\, V_{0,s,N}\,\Pi^{\perp}_{U_{0,s,N}} U_{0,s,N}^{T} = 0$$

But requirement 1 fails, because $\Pi^{\perp}_{U_{0,s,N}} U_{0,s,N}^{T} = 0$ exactly:

$$\operatorname{rank}\Big(\lim_{N\to\infty} \frac{1}{N}\, X_{0,N}\,\Pi^{\perp}_{U_{0,s,N}} U_{0,s,N}^{T}\Big) = 0 \neq n$$
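The failure of this first choice is exact, not asymptotic: $\Pi^{\perp}_{U_{0,s,N}}$ projects onto the orthogonal complement of the row space of $U_{0,s,N}$, so it annihilates $U_{0,s,N}^{T}$ identically. A small NumPy check confirms this (the random full-row-rank U is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(2)
s, N = 3, 50
U = rng.standard_normal((s, N))                     # any full-row-rank U_{0,s,N}

# Pi_perp = I - U^T (U U^T)^{-1} U, the projector onto the complement of U's rows
Pi = np.eye(N) - U.T @ np.linalg.solve(U @ U.T, U)

print(np.linalg.norm(U @ Pi))   # zero up to round-off: the projector kills U
```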

Splitting up the data

For the output sequence $\{y(k)\}_{k=0}^{N+2s-2}$ we can define the Hankel matrices (past and future):

$$Y_{0,s,N}^{\text{past}} = \begin{bmatrix} y(0) & y(1) & \cdots & y(N-1) \\ y(1) & y(2) & \cdots & y(N) \\ \vdots & \vdots & & \vdots \\ y(s-1) & y(s) & \cdots & y(N+s-2) \end{bmatrix}$$

$$Y_{s,s,N}^{\text{future}} = \begin{bmatrix} y(s) & y(s+1) & \cdots & y(N+s-1) \\ y(s+1) & y(s+2) & \cdots & y(N+s) \\ \vdots & \vdots & & \vdots \\ y(2s-1) & y(2s) & \cdots & y(N+2s-2) \end{bmatrix}$$

idem for the input u(k), etc.
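The past/future split can be captured by a single helper parameterized by the first sample index (the helper name and the toy signal are illustrative):

```python
import numpy as np

def block_hankel(w, first, s, N):
    """s x N Hankel matrix whose (i, j) entry is w(first + i + j)."""
    return np.array([w[first + i : first + i + N] for i in range(s)])

y = np.arange(20.0)                  # stand-in signal y(0), ..., y(19)
s, N = 3, 5
Y_past = block_hankel(y, 0, s, N)    # rows start at y(0), ..., y(s-1)
Y_future = block_hankel(y, s, s, N)  # rows start at y(s), ..., y(2s-1)
```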

Instrumental variables for the Output-Error problem

Consider the projected data equation for the future matrices:

$$Y_{s,s,N}\,\Pi^{\perp}_{U_{s,s,N}} = \mathcal{O}_s X_{s,N}\,\Pi^{\perp}_{U_{s,s,N}} + V_{s,s,N}\,\Pi^{\perp}_{U_{s,s,N}}$$

and take $Z_N$ equal to the past $U_{0,s,N}$; then we have:

$$\lim_{N\to\infty}\frac{1}{N}\, V_{s,s,N}\,\Pi^{\perp}_{U_{s,s,N}} U_{0,s,N}^{T} = 0$$

and we hope that the following condition is satisfied:

$$\operatorname{rank}\Big(\lim_{N\to\infty}\frac{1}{N}\, X_{s,N}\,\Pi^{\perp}_{U_{s,s,N}} U_{0,s,N}^{T}\Big) = n$$

This is the PI-MOESP method (Past Inputs as instruments).

RQ for efficient implementation

Let the following RQ data compression be given:

$$\begin{bmatrix} U_{s,s,N} \\ U_{0,s,N} \\ Y_{s,s,N} \end{bmatrix} = \begin{bmatrix} R_{11} & 0 & 0 \\ R_{21} & R_{22} & 0 \\ R_{31} & R_{32} & R_{33} \end{bmatrix} \begin{bmatrix} Q_1 \\ Q_2 \\ Q_3 \end{bmatrix}$$

Lemma: given the RQ factorization, we have:

1. $Y_{s,s,N}\,\Pi^{\perp}_{U_{s,s,N}}\, U_{0,s,N}^{T} = R_{32} R_{22}^{T}$

2. As a result: $\operatorname{range}\Big(\lim_{N\to\infty} \frac{1}{N}\, R_{32} R_{22}^{T}\Big) = \operatorname{range}(\mathcal{O}_s)$
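A Python/NumPy sketch of the PI-MOESP computation above; the second-order system, the AR(1) coloured disturbance, the seed, and the known order n = 2 are all assumptions for this sketch:

```python
import numpy as np

def block_hankel(w, first, s, N):
    return np.array([w[first + i : first + i + N] for i in range(s)])

# Illustrative 2nd-order system with coloured output noise (output-error setting)
A = np.array([[1.5, -0.7], [1.0, 0.0]])
B = np.array([1.0, 0.0])
C = np.array([1.0, 1.0])

rng = np.random.default_rng(3)
T = 20_000
u = rng.standard_normal(T)
e = 0.2 * rng.standard_normal(T)
v = np.empty(T)                      # coloured noise: AR(1)-filtered white noise
v[0] = e[0]
for k in range(1, T):
    v[k] = 0.9 * v[k - 1] + e[k]

x = np.zeros(2)
y = np.empty(T)
for k in range(T):
    y[k] = C @ x + v[k]
    x = A @ x + B * u[k]

s = 6
N = T - 2 * s + 1
Uf = block_hankel(u, s, s, N)        # future inputs  U_{s,s,N}
Up = block_hankel(u, 0, s, N)        # past inputs    U_{0,s,N} (the instruments)
Yf = block_hankel(y, s, s, N)        # future outputs Y_{s,s,N}

# RQ compression of [Uf; Up; Yf]; then Yf * Pi_perp * Up^T = R32 R22^T
L = np.linalg.qr(np.vstack([Uf, Up, Yf]).T)[1].T
R22 = L[s:2 * s, s:2 * s]
R32 = L[2 * s:, s:2 * s]
G = R32 @ R22.T                      # range(G) approximates range(O_s)

uu, sv, _ = np.linalg.svd(G)
Os = uu[:, :2]                       # order n = 2 assumed known here
A_hat = np.linalg.pinv(Os[:-1]) @ Os[1:]
```

Despite the coloured noise, the past inputs are uncorrelated with it, so the eigenvalues of A_hat converge to those of A as N grows.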

tst_sub_01.m


Can we do better?

Yes, we can: when more information is available about the noise!

Overview

Recap: Deterministic Subspace Identification
Identification of Output Error Models
Identification of Innovation Models
Estimating the Kalman Gain
Identification and Control: Experimental Results

The Innovation Model identification problem

The innovation model [A, B, C, D, K] is driven by the input u(k) and the innovation e(k):

$$x(k+1) = A\,x(k) + B\,u(k) + K\,e(k)$$
$$y(k) = C\,x(k) + D\,u(k) + e(k)$$

Assumptions about the innovation e(k):
zero-mean
white-noise stochastic process
uncorrelated with the input u(k)

Be aware that the innovation model often is an augmented model.

The Data Equation for the Innovation Model

With the innovation model given as:

$$x(k+1) = A\,x(k) + B\,u(k) + K\,e(k), \qquad y(k) = C\,x(k) + D\,u(k) + e(k)$$

the data equation reads:

$$Y_{i,s,N} = \mathcal{O}_s X_{i,N} + \mathcal{T}_{u,s} U_{i,s,N} + \mathcal{T}_{e,s} E_{i,s,N}$$

Condition 4: x(k) and e(k+τ) are uncorrelated for τ > 0

Statistically (Lemma): let the innovation model be given:

$$x(k+1) = A\,x(k) + B\,u(k) + K\,e(k), \qquad y(k) = C\,x(k) + D\,u(k) + e(k)$$

with e(k) a zero-mean white-noise sequence, independent of u(k) and x(0) for all k. Then, for τ > 0:

$$E[x(k)\,e^{T}(k+\tau)] = 0, \qquad E[e(k+\tau)\,x^{T}(k)] = 0$$
$$E[y(k)\,e^{T}(k+\tau)] = 0, \qquad E[e(k+\tau)\,y^{T}(k)] = 0$$

Proof: since

$$x(k) = A^{k}x(0) + \sum_{j=0}^{k-1} A^{j}B\,u(k-j-1) + \sum_{j=0}^{k-1} A^{j}K\,e(k-j-1),$$

we have

$$E[x(k)\,e^{T}(k+\tau)] = A^{k}E[x(0)e^{T}(k+\tau)] + \sum_{j=0}^{k-1} A^{j}B\,E[u(k-j-1)e^{T}(k+\tau)] + \sum_{j=0}^{k-1} A^{j}K\,E[e(k-j-1)e^{T}(k+\tau)] = 0,$$

where every expectation vanishes by the independence and whiteness assumptions.

Condition 4: x(k) and e(k+τ) are uncorrelated for τ > 0 (Ctd)

Algebraically (by ergodicity): consider the data equation for i = s:

$$Y_{s,s,N} = \mathcal{O}_s X_{s,N} + \mathcal{T}_{u,s} U_{s,s,N} + \mathcal{T}_{e,s} E_{s,s,N}$$

then,

$$\lim_{N\to\infty}\frac{1}{N}\begin{bmatrix} e(s) & e(s+1) & \cdots & e(s+N-1) \\ e(s+1) & e(s+2) & \cdots & e(s+N) \\ \vdots & \vdots & & \vdots \\ e(2s-1) & e(2s) & \cdots & e(N+2s-2) \end{bmatrix}\begin{bmatrix} y^{T}(0) & y^{T}(1) & \cdots & y^{T}(s-1) \\ y^{T}(1) & y^{T}(2) & \cdots & y^{T}(s) \\ \vdots & \vdots & & \vdots \\ y^{T}(N-1) & y^{T}(N) & \cdots & y^{T}(N+s-2) \end{bmatrix} = 0$$

i.e.,

$$\lim_{N\to\infty}\frac{1}{N}\, E_{s,s,N}\, Y_{0,s,N}^{T} = 0$$

Instrumental variables for the Innovation Model problem

Consider the projected data equation:

$$Y_{s,s,N}\,\Pi^{\perp}_{U_{s,s,N}} = \mathcal{O}_s X_{s,N}\,\Pi^{\perp}_{U_{s,s,N}} + E_{s,s,N}\,\Pi^{\perp}_{U_{s,s,N}}$$

and take $Z_N$ equal to $\begin{bmatrix} U_{0,s,N} \\ Y_{0,s,N} \end{bmatrix}$ (past inputs and outputs); then we have the PO consistency condition:

$$\lim_{N\to\infty}\frac{1}{N}\, E_{s,s,N}\,\Pi^{\perp}_{U_{s,s,N}} \begin{bmatrix} U_{0,s,N} \\ Y_{0,s,N} \end{bmatrix}^{T} = 0$$

and we hope that the following condition is satisfied:

$$\operatorname{rank}\Big(\lim_{N\to\infty}\frac{1}{N}\, X_{s,N}\,\Pi^{\perp}_{U_{s,s,N}} \begin{bmatrix} U_{0,s,N} \\ Y_{0,s,N} \end{bmatrix}^{T}\Big) = n$$

This is the PO-MOESP method (Past Outputs, i.e., past inputs and outputs, as instruments).

RQ for efficient implementation:

$$\begin{bmatrix} U_{s,s,N} \\ U_{0,s,N} \\ Y_{0,s,N} \\ Y_{s,s,N} \end{bmatrix} = \begin{bmatrix} R_{11} & 0 & 0 \\ R_{21} & R_{22} & 0 \\ R_{31} & R_{32} & R_{33} \end{bmatrix} \begin{bmatrix} Q_1 \\ Q_2 \\ Q_3 \end{bmatrix}$$

where the stacked past data $\begin{bmatrix} U_{0,s,N} \\ Y_{0,s,N} \end{bmatrix}$ forms the second block row $[R_{21}\;\; R_{22}]$.

Lemma: given the RQ factorization, we have:

1. $Y_{s,s,N}\,\Pi^{\perp}_{U_{s,s,N}} \begin{bmatrix} U_{0,s,N} \\ Y_{0,s,N} \end{bmatrix}^{T} = R_{32} R_{22}^{T}$

2. As a result: $\operatorname{range}\Big(\lim_{N\to\infty} \frac{1}{N}\, R_{32} R_{22}^{T}\Big) = \operatorname{range}(\mathcal{O}_s)$

tst_sub_02.m

Consider the innovation model:

$$x(k+1) = \begin{bmatrix} 1.5 & -0.7 \\ 1 & 0 \end{bmatrix} x(k) + \begin{bmatrix} 1 \\ 0 \end{bmatrix} u(k) + \begin{bmatrix} 0.5 \\ 0 \end{bmatrix} e(k)$$

$$y(k) = \begin{bmatrix} 1 & 1 \end{bmatrix} x(k) + e(k)$$

In the experiment we can vary the number of data samples N and the number of block rows s of the Hankel matrices. The total number of trials is 100.
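The exact contents of tst_sub_02.m are not shown; the following Python/NumPy sketch reproduces what a single trial of PO-MOESP on the innovation model above plausibly looks like (sample size, noise level, and seed are assumptions):

```python
import numpy as np

def block_hankel(w, first, s, N):
    return np.array([w[first + i : first + i + N] for i in range(s)])

# Innovation model from the slide
A = np.array([[1.5, -0.7], [1.0, 0.0]])
B = np.array([1.0, 0.0])
K = np.array([0.5, 0.0])
C = np.array([1.0, 1.0])

rng = np.random.default_rng(4)
T = 20_000
u = rng.standard_normal(T)
e = 0.1 * rng.standard_normal(T)

x = np.zeros(2)
y = np.empty(T)
for k in range(T):
    y[k] = C @ x + e[k]
    x = A @ x + B * u[k] + K * e[k]

s = 6
N = T - 2 * s + 1
Uf = block_hankel(u, s, s, N)
Up = block_hankel(u, 0, s, N)
Yp = block_hankel(y, 0, s, N)   # past outputs join the instruments (the "PO" in PO-MOESP)
Yf = block_hankel(y, s, s, N)

# RQ of [Uf; Up; Yp; Yf]; the instrument block [Up; Yp] has 2s rows
L = np.linalg.qr(np.vstack([Uf, Up, Yp, Yf]).T)[1].T
R22 = L[s:3 * s, s:3 * s]
R32 = L[3 * s:, s:3 * s]
G = R32 @ R22.T                 # range(G) approximates range(O_s)

uu, sv, _ = np.linalg.svd(G)
Os = uu[:, :2]
A_hat = np.linalg.pinv(Os[:-1]) @ Os[1:]
```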

A happy marriage

[Figure: distribution of the VAF values for 100 identification experiments on the acoustical duct. Three histograms, with VAF from 0 to 100 on the horizontal axis: PEM, 53.1 %; SI, 89.5 %; SI and PEM, 96.9 %.]

Overview

Recap: Deterministic Subspace Identification
Identification of Output Error Models
Identification of Innovation Models
Estimating the Kalman Gain
Identification and Control: Experimental Results

Estimating the Kalman Gain from i/o data

Given the innovation model:

$$x(k+1) = A\,x(k) + B\,u(k) + K\,e(k), \qquad y(k) = C\,x(k) + e(k)$$

with K the Kalman gain. Substituting $e(k) = y(k) - C\hat{x}(k)$ gives the predictor form:

$$\hat{x}(k+1) = (A - KC)\,\hat{x}(k) + B\,u(k) + K\,y(k)$$

From this it follows that, for $s = 1, 2, \ldots$ and any $j$:

$$\hat{x}(j+s) = (A-KC)^{s}\,\hat{x}(j) + \underbrace{\big[(A-KC)^{s-1}B \;\cdots\; B \;\big|\; (A-KC)^{s-1}K \;\cdots\; K\big]}_{\mathcal{L}_s} \begin{bmatrix} u(j) \\ \vdots \\ u(j+s-1) \\ y(j) \\ \vdots \\ y(j+s-1) \end{bmatrix}$$

How to get information on $\mathcal{L}_s$, and therefore on the Kalman gain K?

Estimating the Kalman Gain from i/o data

Stacking the state recursion for all columns gives, neglecting the $(A-KC)^{s}$ term (small for large s, since A - KC is stable):

$$\hat{X}_{s,N} = \big[\hat{x}(s)\;\; \hat{x}(s+1)\;\; \cdots\big] \approx \mathcal{L}_s \begin{bmatrix} U_{0,s,N} \\ Y_{0,s,N} \end{bmatrix}$$

Consider the data equation:

$$Y_{s,s,N} = \mathcal{O}_s X_{s,N} + \mathcal{T}_s U_{s,s,N} + V_{s,s,N} \approx \underbrace{\mathcal{O}_s \mathcal{L}_s}_{L^{z}} \begin{bmatrix} U_{0,s,N} \\ Y_{0,s,N} \end{bmatrix} + \underbrace{\mathcal{T}_s}_{L^{u}}\, U_{s,s,N} + V_{s,s,N}$$

Estimating the Kalman Gain from i/o data

Consider the following linear least-squares problem:

$$\big[\hat{L}^{u}_{N}\;\;\hat{L}^{z}_{N}\big] = \arg\min_{L^{u},\,L^{z}} \Bigg\| Y_{s,s,N} - \big[L^{u}\;\;L^{z}\big] \begin{bmatrix} U_{s,s,N} \\ U_{0,s,N} \\ Y_{0,s,N} \end{bmatrix} \Bigg\|_{F}^{2}$$

then,

$$\lim_{N\to\infty} \hat{L}^{z}_{N} = \mathcal{O}_s \mathcal{L}_s + \underbrace{\mathcal{O}_s\,(A-KC)^{s}\,(\cdot)}_{\text{bias, vanishing for large } s}$$

with

$$\mathcal{L}_s = \big[(A-KC)^{s-1}B \;\; (A-KC)^{s-2}B \;\cdots\; B \;\big|\; (A-KC)^{s-1}K \;\; (A-KC)^{s-2}K \;\cdots\; K\big]$$

Estimating all system matrices

Knowing $\hat{L}^{z}_{N} = U_n \Sigma_n V_n^{T} + \text{bias}$, we have an estimate $\hat{\mathcal{L}}_s = \Sigma_n V_n^{T}$, and hence an estimate of the Kalman-filter state sequence:

$$\hat{X}_{s,N} = \big[\hat{x}(s)\;\; \hat{x}(s+1)\;\cdots\;\hat{x}(s+N-1)\big] = \Sigma_n V_n^{T} \begin{bmatrix} U_{0,s,N} \\ Y_{0,s,N} \end{bmatrix}$$

Therefore, we can define the linear least-squares problem:

$$\min_{A_T,\,B_T,\,K_T} \Bigg\| \big[\hat{x}(s+1)\;\;\hat{x}(s+2)\;\cdots\big] - \big[A_T\;\;B_T\;\;K_T\big] \begin{bmatrix} \hat{x}(s) & \hat{x}(s+1) & \cdots \\ u(s) & u(s+1) & \cdots \\ y(s) & y(s+1) & \cdots \end{bmatrix} \Bigg\|_{F}^{2}$$

Estimating the Kalman gain from i/o data: 2 LSQ problems + 1 SVD.
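The two least-squares problems and the SVD can be sketched in Python/NumPy. Everything below is an illustrative assumption: in particular, K is chosen deadbeat (all eigenvalues of A - KC at zero) so that the bias term $\mathcal{O}_s(A-KC)^{s}$ vanishes exactly for s >= 2; with a general stable A - KC one would instead take s large:

```python
import numpy as np

def block_hankel(w, first, s, N):
    return np.array([w[first + i : first + i + N] for i in range(s)])

# Innovation model; this deadbeat K (eig(A - KC) = 0) is an assumption for the sketch
A = np.array([[1.5, -0.7], [1.0, 0.0]])
B = np.array([1.0, 0.0])
C = np.array([1.0, 1.0])
K = np.array([0.8125, 0.6875])

rng = np.random.default_rng(5)
T = 6000
u = rng.standard_normal(T)
e = 0.1 * rng.standard_normal(T)

x = np.zeros(2)
y = np.empty(T)
for k in range(T):
    y[k] = C @ x + e[k]
    x = A @ x + B * u[k] + K * e[k]

s, n = 4, 2
N = T - 2 * s
Uf = block_hankel(u, s, s, N)
Up = block_hankel(u, 0, s, N)
Yp = block_hankel(y, 0, s, N)
Yf = block_hankel(y, s, s, N)

# LSQ problem 1: Yf ~ [L_u  L_z] [Uf; Up; Yp], so that L_z ~ O_s L_s
Z = np.vstack([Uf, Up, Yp])
Theta = np.linalg.lstsq(Z.T, Yf.T, rcond=None)[0].T
Lz = Theta[:, s:]                     # the s x 2s block multiplying [Up; Yp]

# SVD: L_z = U_n S_n V_n^T, then state sequence X = S_n V_n^T [Up; Yp]
_, sv, vvt = np.linalg.svd(Lz)
X = (np.diag(sv[:n]) @ vvt[:n]) @ np.vstack([Up, Yp])   # X[:, j] estimates x_T(s + j)

# LSQ problem 2: X(:, 1:) ~ [Phi  B  K][X; u; y], with Phi = A - KC (transformed basis)
reg = np.vstack([X[:, :-1], u[s:s + N - 1], y[s:s + N - 1]])
sol = np.linalg.lstsq(reg.T, X[:, 1:].T, rcond=None)[0].T
Phi_hat, B_hat, K_hat = sol[:, :n], sol[:, n], sol[:, n + 1]

# Read out C in the same basis from y(k) ~ C_T x_T(k) + e(k)
C_hat = np.linalg.lstsq(X.T, y[s:s + N], rcond=None)[0]
resid = y[s:s + N] - C_hat @ X        # residual variance should approach var(e)
```

Because all the estimated matrices live in an unknown state basis, they are checked through basis-invariant quantities: the eigenvalues of Phi_hat and the one-step output residual.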

Next: Probing some future developments

Approximate H2 optimal AO control
