
Butterworth-Heinemann
0959-1524(95)00009-7
J. Proc. Cont. Vol. 5, No. 6, pp. 363-374, 1995
Copyright © 1995 Elsevier Science Ltd
Printed in Great Britain. All rights reserved
0959-1524/95 $10.00 + 0.00
An efficient method for on-line identification of
steady state
Songling Cao and R. Russell Rhinehart*
Department of Chemical Engineering, Texas Tech University, Lubbock, TX 79409-3121, USA
Received 8 December 1994; revised 14 March 1995
A novel method for the on-line identification of steady state in noisy processes is developed. The method
uses critical values of an F-like statistic, and its computational efficiency and robustness to process noise
distribution and non-noise patterns provide advantages over existing methods. The distribution of the sta-
tistic is obtained through Monte-Carlo simulations, and analytically derived shifts in the distribution due
to process ramp changes and autocorrelations in the process data are shown to cross check with simula-
tions. Application is demonstrated on experimentally measured pH, temperature and pressure data.
Keywords: steady-state identification; automated; stochastic
Identification of steady state is an important task for satisfactory control of many processes. Steady-state models are widely used in the process control functions of model identification, optimization¹, fault detection², sensor analysis, data reconciliation³, and control⁴. Since manufacturing and chemical processes are inherently nonstationary, selected model parameters must be adjusted frequently to keep the model true to the corresponding process. However, parameter adjustment of steady-state models should only be performed with nearly steady-state data; otherwise, in-process inventory changes will lead to model error.
By contrast, certain supervisory model-based control functions are best performed at non-steady-state conditions. One such function is the incremental adjustment of parameters of dynamic models. If a process is well controlled, the controlled variables remain constant for long periods of time. These are pseudo-steady-state periods during which the manipulated variables change in response to disturbances. In those periods, noise may dominate the dynamics of the model-relevant state variables. Whenever noise is used to adjust model parameters, the parameter values become statistically uncertain, and the model becomes invalid and functionally useless or degraded. Incremental adjustment of dynamic model parameters should be triggered by non-steady-state conditions.
Another potential application of steady-state identification is to determine the stopping criterion for iterative numerical methods. Many mathematical techniques such as nonlinear regression and optimization are iterative and require a stopping criterion. Instead of iterating a fixed number of times, the procedure should be stopped when the objective function attains near steady-state values (with respect to iteration number).

*Author to whom correspondence should be addressed
Development of a practicable technique for steady-state identification will help realize the full benefits of on-line model-based process control techniques. There are several existing methods for steady-state identification, and herein the procedure will be developed for a process that generates sequential data. If a process is sampled at every fixed time interval, a time series or sequence of data will be generated. Here, the measured data will be viewed as representing the true process value with additive noise and disturbances, and the steady-state condition means that the true process value stays unchanged. Note that here the concept of steady state is less strict than 'stationary' or 'strictly stationary' in statistics. In statistics, 'stationary' requires not only the mean of time-series data to be constant but also requires the distribution and autocorrelation, if any, to remain unchanged with time. By contrast, for deterministic process modelling purposes the steady-state condition does not require that the associated noise and disturbances be stationary. In chemical processes the amplitude and distribution of noise can change with flow rates, pH, composition, etc.; however, if the true process value does not change, the process is considered to be at steady state.

Here noise is considered to consist of random, independent perturbations added to the true process value.
The distribution of these perturbations is usually close to Gaussian for chemical process measurements. By contrast to noise, an autocorrelated disturbance is a perturbation added to the true value at each sampling, but the magnitude of any perturbation is correlated to previous values as well as having a random, independent component. In this work, the autocorrelated disturbance is modelled as an nth-order autoregressive process driven by Gaussian distributed noise.

Noise can be attributed to such events as mechanical vibration, thermal motion, flow turbulence and spurious electronic signals which contaminate transmitted values. Autocorrelated disturbances can be attributed to such events as nonideal mixing and sensor dynamics. But assuredly, if the sampling interval were small enough, one would find that the influence of the above-mentioned noise effects would persist for two or more samplings. So the distinction between correlated and uncorrelated noise is subjective, and depends on whether the persistence of the event is long enough that the sampling interval detects autocorrelation or short enough that the perturbations appear independent.
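The distinction can be made concrete by simulating both kinds of perturbation. Below is a minimal sketch, assuming Python (the function names are illustrative, not from the paper), of independent Gaussian noise versus a first-order (n = 1) autoregressive disturbance added to a steady true value:

```python
import random

def noisy_series(true_value, sigma, n):
    """Steady process value plus independent Gaussian noise."""
    return [true_value + random.gauss(0.0, sigma) for _ in range(n)]

def ar1_disturbed_series(true_value, sigma, phi, n):
    """Steady process value plus an AR(1) disturbance driven by Gaussian noise.

    Each perturbation carries a fraction phi of the previous one, so successive
    samples are correlated; phi = 0 reduces to independent noise.
    """
    series, d = [], 0.0
    for _ in range(n):
        d = phi * d + random.gauss(0.0, sigma)
        series.append(true_value + d)
    return series
```

Sampled finely, the second series shows strong lag-1 autocorrelation; sampled coarsely, it becomes indistinguishable from the first, which is exactly the subjectivity noted above.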
Existing approaches
A direct approach to steady-state identification is to perform a linear regression over a data window and then perform a t-test on the regression slope. If the slope is significantly different from zero, the process is almost certainly not at steady state⁵. This is usually an off-line technique. On-line versions require considerable data storage, associated computational effort and user expertise. For instance, the user must define the data window to be longer than the autocorrelation persistence. Further, in the middle of a definite oscillation, where the linear regression slope was temporarily zero, the method would give a false reading. At each time interval, the whole data window must be updated and the linear regression must be reperformed. If the data window is long, there will be considerable computational effort, and recognition of changes would be delayed. If the window is short, noise will confound the analysis; and the window length which would balance these desires would change with changes in noise amplitude. There are no universal rules to choose the length of the data window, and the selection will be judgmental.
An alternative method⁶ uses an F-test type statistic, a ratio of variances as measured on the same set of data by two different methods. The data in the most recent window are averaged, and the variance is first conventionally calculated as the mean square deviation from the average. The variance can also be calculated from the mean of squared differences of successive data. If the time series is stationary (i.e. if the process is at steady state) then, ideally, the ratio of the variances will be unity. However, due to random noise the actual ratio of the variances will not be exactly unity; the ratio will be near unity. Alternatively, if the process is not at steady state the ratio will be unusually large (with respect to the distribution of ratio values which are expected at steady-state conditions).
The second method is a valid, but primitive, approach with several undesirable features. First, it requires user expertise; the user must choose the horizon to balance the desire for the identification to be insensitive to noise versus its insensitivity to long-past upsets. Second, a large number of data must be stored and manipulated at each sampling. Third, autocorrelation in the measured signal will affect the statistic. A process may be at steady state but may also have short-lived transient fluctuations which last for a few sampling intervals. The presence of significant autocorrelation will always produce a 'not at steady state' message.
Recently, there have been three industrial responses⁷⁻⁹ to the question 'How can steady-state conditions be identified automatically?' posed in a periodical. One group⁷ uses a moving-average chart common to statistical process control (SPC). The technique plots a moving average and the upper and lower '3σ' control limits. When the moving average violates the control limits, an alarm indicates that a significant event has occurred. While this method is often used to trigger control, it cannot indicate steady state. For instance, continual oscillation about the mean may not make the not-at-steady-state moving average violate the control limit. When the noise level reduces, the control limits contract and earlier data may create a false violation alarm. Finally, proper SPC application⁵,¹⁰,¹¹ requires updating both the average and the variance of the moving window of the most recent 100 process measurements. This is computationally expensive.
Another method⁸ compares the average calculated from a recent history to a 'standard' based on an earlier history. The t-statistic⁵ is used to test whether the average is unchanged: the steady-state hypothesis. However, this method also suffers from some problems. First, the hypotheses of steady state and equal mean are quite different. If the process is oscillating or ramping and the data window happens to bracket the 'standard' mean, then the t-test could accept the equal-mean hypothesis when, in fact, the process steady-state hypothesis should be rejected. Further, during any non-steady-state period the traditional root-mean-square (rms) technique for calculating the process variance produces a biased value. Since the process variance changes, the proper method for updating the variance cannot be the straightforward rms technique. Finally, storing and processing all of the data is a computational burden.
The third method, reported from practice⁹, is to calculate the process measurement standard deviation over a moving window of recent data history. Presumably the rms method is used. Whenever the process is not at steady state the measured standard deviation will be larger than its steady-state value. Therefore, when the measured standard deviation is greater than some threshold value, the not-at-steady-state condition is triggered. Those authors appropriately note that 'success with this method relies on the ability to determine the key unit variables, the process-variable time period used for calculation, and the [threshold] standard deviation.' Further, when the process variance changes, the threshold values must change. Again, the storage and operation on the data window is a computational burden.
A new method for steady-state identification
The design of the new method is styled after the primitive F-test type of statistic. It is the ratio of variances, R, as measured on the same set of data by two different methods.

The primitive way of estimating the variance would be:

s^2 = \frac{1}{N-1} \sum_{i=1}^{N} (X_i - \bar{X}_N)^2 \qquad (1)
The modification (or simplification) begins with a conventional exponentially weighted moving average, or conventional first-order filter, of a process variable X_i. This requires little storage and is computationally fast. In algebraic notation:

X_{f,i} = \lambda_1 X_i + (1 - \lambda_1) X_{f,i-1} \qquad (2)

where 0 < λ₁ < 1.
If the previous filtered value X_{f,i-1} is used to replace the sample mean \bar{X}_N, a mean square deviation can be defined as:

v^2 = E((X_i - X_{f,i-1})^2)

and can be estimated by:

\hat{v}^2 = \frac{1}{N-1} \sum_{i=1}^{N} (X_i - X_{f,i-1})^2 \qquad (3)

Assuming that {X_i} is uncorrelated, using the previous value of X_f, X_{f,i-1}, prevents autocorrelation between X_i and X_{f,i-1}, and allows one to easily estimate σ² from v².
Define:

d_i = X_i - X_{f,i-1} \qquad (4)

If the process is at a steady-state condition and there is no autocorrelation in the sequential measurements, then X_i and X_{f,i-1} are independent, and the variance of d is related to the variances of X and X_f⁵:

\sigma_d^2 = \sigma_X^2 + \sigma_{X_f}^2 \qquad (5)

Further, for the exponentially weighted moving average, when {X_i} are independent and stationary, the variance of X_f from Equation (2) becomes¹²:

\sigma_{X_f}^2 = \frac{\lambda_1}{2 - \lambda_1} \sigma_X^2 \qquad (6)

Equations (5) and (6) yield:

\sigma_d^2 = \frac{2}{2 - \lambda_1} \sigma_X^2 = v^2 \qquad (7)

from which the noise variance can be estimated if v² is known:

\hat{\sigma}_X^2 = \frac{2 - \lambda_1}{2} \hat{v}^2 \qquad (8)
However, Equation (3) is computationally expensive; so, use a filtered value instead of a traditional average:

v_{f,i}^2 = \lambda_2 (X_i - X_{f,i-1})^2 + (1 - \lambda_2) v_{f,i-1}^2 \qquad (9)

If the process is stationary:

E(v_{f,i}^2) = E((X_i - X_{f,i-1})^2) = v^2

So, Equation (9) is an unbiased estimate of v², and the variance of v²_{f,i} is:

\mathrm{var}(v_{f,i}^2) = \frac{\lambda_2}{2 - \lambda_2} \mathrm{var}((X_i - X_{f,i-1})^2)

which means that Equation (9) provides a computationally efficient, unbiased estimate of E((X_i - X_{f,i-1})²). Then the estimate of the noise variance from this first approach will be:

s_{1,i}^2 = \frac{2 - \lambda_1}{2} v_{f,i}^2 \qquad (10)

Actually, since Equation (9) requires X_{f,i-1}, one would compute Equation (9) before Equation (2) to eliminate the need to store the previous 'average'.

Using this method, s²_{1,i} will be increased from its steady-state value by a recent shift in the mean. Such a measure could be used to trigger the not-at-steady-state condition⁹; however, the threshold is dependent on both the measurement units and the unknown process noise variance.
The second method to estimate the variance will use the mean squared differences of successive data. Define:

\delta^2 = E((X_i - X_{i-1})^2) \qquad (11)

and δ² could be estimated by:

\hat{\delta}^2 = \frac{1}{N-1} \sum_{i=2}^{N} (X_i - X_{i-1})^2 \qquad (12)

However, Equation (12) is computationally expensive; so, use a filtered approach:

\delta_{f,i}^2 = \lambda_3 (X_i - X_{i-1})^2 + (1 - \lambda_3) \delta_{f,i-1}^2 \qquad (13)

Again, Equation (13) provides an unbiased estimate of δ².

It is easily shown that the second estimate of the noise variance would be:

s_{2,i}^2 = \frac{\delta_{f,i}^2}{2} \qquad (14)

Taking the ratio of the two estimates of variance as determined by Equation (10) to Equation (14):

R_i = \frac{s_{1,i}^2}{s_{2,i}^2} = \frac{(2 - \lambda_1) v_{f,i}^2}{\delta_{f,i}^2} \qquad (15)
Summarizing: use Equation (9) to calculate v²_{f,i}, then use Equation (2) to calculate X_{f,i}, then use Equation (13) to calculate δ²_{f,i}, then use Equation (15) to calculate R_i. Each is a direct, no-logic, low-storage, low-operation calculation. In practice, it would be preferable to compare δ²_{f,i}·R_crit (where R_crit is the threshold value of R) to (2 − λ₁)v²_{f,i} to prevent the possibility of a divide by zero in Equation (15). For each observed variable, the method requires the direct, simple calculation of three filtered values. In total there are three variables to be stored, 10 multiplications, eight additions, and one comparison per observed variable.
There are three possible process behaviours which affect the value of R⁶:

1. If the process data are at steady state (the process mean is constant, and the additive noise is independent and identically distributed), then R will be near 1.
2. If the process data mean shifts, or if the noise is autocorrelated, then R will be greater than 1. When there is a shift in mean, both calculations of the variance will be influenced temporally. The first calculation will increase more and persist longer, so R will be greater than 1 for a period of time, and that is the way that the 'not at steady state' condition can be identified.
3. If the sequentially sampled process data alternate between high and low extremes, then R will be less than 1. This would be very uncommon in chemical processes; hence this work only tests whether R > R_crit.
If {X_i} are stationary and independent, and if the process is at steady state, there will be a probability density function of R (pdf(R)). Critical values for R, R_crit, can be calculated from the pdf(R) (actually the cdf(R)). Once R becomes greater than some threshold value, R_crit, we know at a certain confidence level that the process is not at steady state.
Probability density function of R
Theoretically, if {X_i} are stationary and independent, the pdf(R) is a function of λ₁, λ₂ and λ₃, and it also depends on the nature of the random variable {X_i}. But if R is used as a general-purpose steady-state identifier, we desire that pdf(R) not be sensitive to the nature of X_i. Fortunately, for useful λ values it is not. Figure 1 gives a comparison of pdf(R)s for different distributions of X_i. For most of the cases, the pdf(R)s are almost identical in spite of the distribution of the noise (normal, uniform, gamma, etc.). The pdf(R) of the exponential distribution is a little different, but its right tail is indistinguishable from those of the other distributions. As a result their critical values will be very close, and critical values based on the assumption of Gaussian distributed noise will adequately represent any process. This observation is empirical but not unexpected in light of the robustness of the similar F-statistic.
The pdf(R)s in Figure 1 and Figure 2 are calculated by simulations using computer-generated pseudo-random numbers. To generate a Gaussian distributed sequence, a pair of independent, uniformly distributed [0,1] random variables r₁, r₂ are first generated from a uniform random number generator (in this work the Borland C/C++ library function is used). Then a Gaussian distributed process value X_i with mean μ and standard deviation σ can be generated¹³ as:

X_i = \mu + \sqrt{-2\sigma^2 \ln(r_1)} \cdot \sin(2\pi r_2) \qquad (16)

Figure 1 Probability density function for the R-statistic for a variety of distributions of process noise (λ₁ = λ₂ = 0.1)

Figure 2 Probability density function for the R-statistic for several choices of the filter parameter set (λ₁, λ₂, λ₃)
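Equation (16) is the sine form of the familiar Box-Muller transform. A minimal sketch (in Python here, rather than the Borland C/C++ routine the authors used; the function name is ours):

```python
import math
import random

def gaussian(mu, sigma):
    """One N(mu, sigma^2) sample from two uniform draws (Equation (16))."""
    r1 = 1.0 - random.random()   # shift [0,1) to (0,1] so log(r1) is defined
    r2 = random.random()
    return mu + math.sqrt(-2.0 * sigma**2 * math.log(r1)) * math.sin(2.0 * math.pi * r2)
```

The companion cosine term would give a second independent sample per pair of uniforms; the sine form alone matches the equation as printed.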
Given the λ values and a time series of random numbers, the pdf(R) can be constructed as follows. First estimate the mean and variance of the random variable by using the first ten numbers in the sequence. Use the estimated mean as the initial value of both X_f and X_{i−1}. Use the estimated variance as the initial value of v²_{f,i}. Use twice the estimated variance as the initial value of δ²_{f,i}. Then, for each new random number X_i, use Equations (9), (2), (13) and (15) to calculate a new R_i and update X_f, X_{i−1}, v²_{f,i} and δ²_{f,i}. In this work this procedure is repeated for one million samples after the first 10. Each time a new R value is generated it is put into a histogram which has 300 bins and ranges from 0 to 3.0 (bin width 0.01). The histogram data can later be used to give the pdf(R). The method would also be applied in this sequential manner to determine steady state.
As an alternative technique to using sequential simulated data, the pdf(R) can also be generated from actual process (heat exchanger and flow controller) data by the technique known as bootstrapping. In our bootstrapping method, 100 flowrate data were sampled from the process and put into an array. During the pdf(R) generation, this array of data is resampled randomly, one at a time, with replacement. The same procedure is used to construct the new pdf(R), which is found to be nearly identical to the pdf(R) of computer-generated Gaussian data.
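A sketch of that bootstrapping variant, assuming Python and illustrative names: a fixed array of measured values is resampled with replacement, one value at a time, and fed through the same filter cascade.

```python
import random

def bootstrap_r_values(measurements, n_draws, lam1=0.1, lam2=0.1, lam3=0.1):
    """Resample a fixed data array with replacement and collect R values."""
    mean = sum(measurements) / len(measurements)
    var = sum((x - mean) ** 2 for x in measurements) / (len(measurements) - 1)
    xf, x_prev, vf2, df2 = mean, mean, var, 2.0 * var
    rs = []
    for _ in range(n_draws):
        x = random.choice(measurements)   # sampling with replacement
        vf2 = lam2 * (x - xf) ** 2 + (1 - lam2) * vf2
        xf = lam1 * x + (1 - lam1) * xf
        df2 = lam3 * (x - x_prev) ** 2 + (1 - lam3) * df2
        x_prev = x
        rs.append((2 - lam1) * vf2 / df2)
    return rs
```

Resampling with replacement destroys any autocorrelation in the record, so the resulting pdf(R) should match the uncorrelated steady-state case, consistent with what the authors found.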
Simulations also show that, for uncorrelated data, λ₁ does not have a significant effect on the pdf(R). Only λ₂ and λ₃ affect the pdf(R) significantly. Increasing λ₂ and λ₃ increases the variability of R, because λ₂ and λ₃ are the filter factors on the two variances. Figure 2 shows the pdf(R)s for some λ₁, λ₂ and λ₃ values.
Selection of λ values

λ₁, λ₂ and λ₃ are all filter factors. Small filter factors can significantly reduce the noise influence on the estimates of the process variances, and the pdf(R) will be narrowed and 'centred' near unity. So, small λ values can make the pdf(R) at steady state and the pdf(R) at non-steady state split apart, and both Type I and Type II errors⁵ can be reduced. But, dynamically, small filter factors can make the R statistic lag far behind the present process state.

The need for rapid tracking of the process can be depicted in the following example of a nonstationary process. Suppose there is a ramp on the mean of the process data. The slope of the ramp is s and the sampling time interval is T. Assume that the additive noise is Gaussian distributed with zero mean and standard deviation σ_x and is uncorrelated. Appendix A shows that for this type of process:

E(s_{1,i}^2) = \frac{2 - \lambda_1}{2} E(v_{f,i}^2) = \left[ \frac{2 - \lambda_1}{2} \left( \frac{sT}{\lambda_1 \sigma_x} \right)^2 + 1 \right] \sigma_x^2 \qquad (17)

This equation gives an idea of how the average R can be changed, and the pdf(R) shifted, by the ramp. Figures 3 and 4 show the pdf(R)s with (B) and without (A) a ramp on the means of the process signals. Note that while the pdf(R)s of the A curves are both centred on 1, the pdf(R)s of the B curves are centred on 3.5. For small λ values the two pdf(R)s are clearly distinct from each other, and the Type I and Type II errors can be minimized.

Now consider that the process has been stationary for a long time, and then suddenly there is a ramp on the mean of the process data. Because of the filtering nature of R, at the point of change the pdf(R) will not jump from A to B. Rather, it will move gradually from A to B, and the lower the λs, the slower the transition will be. In Figures 3 and 4, the curves labelled 'C' are pdf(R)s when the process has been on a ramp for 20 data points after an initial steady state. The curves labelled 'D' are pdf(R)s when the process has been back at steady state for 20 data points after a long persisting ramp. It is obvious that for smaller λs it will take a longer time for the pdf(R) to get away from the previous process state. So, if smaller λ values are used it will take a longer time for the steady-state identifier to detect any changes in the process.

Figure 3 Probability density function for the R-statistic for four cases: (A) during steady state; (B) during a long ramp disturbance; (C) 20 samples of ramp after a long steady-state period; (D) 20 samples of steady state after a long ramp period. All curves are with λ₁ = λ₂ = λ₃ = 0.1

Figure 4 Probability density function for the R-statistic for four cases: (A) during steady state; (B) during a long ramp disturbance; (C) 20 samples of ramp after a long steady-state period; (D) 20 samples of steady state after a long ramp period. All curves are with λ₁ = λ₂ = λ₃ = 0.01

The choice of λ values and R_crit should balance our desire to reduce the Type I errors (trigger a 'not at steady state' response when the process is at steady state) and the Type II errors (no 'not at steady state' response when the process is not at steady state) against the need to rapidly track the process. Small λ values make the steady-state pdf(R) and the not-at-steady-state pdf(R) distinct from each other (Figures 3 and 4). So, if the Type I error is fixed, small λ values usually reduce the Type II error. On the other hand, large λ values mean less filtering and can track the process more closely. We suggest that λ₁ = 0.2, λ₂ = 0.1 and λ₃ = 0.1 are values which offer a useful compromise. Then R_{0.95} = 1.44, R_{0.975} = 1.56, R_{0.99} = 1.73 and R_{0.995} = 1.86 (R_α can be easily calculated from the cumulative distribution function, cdf(R), by linear interpolation). So R_crit = 2.0 will be a good choice.
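The linear interpolation on the cdf mentioned above can be sketched directly against a simulated R histogram of the kind described earlier (300 bins over [0, 3]); the function name is ours:

```python
def r_quantile(hist, alpha, rmax=3.0):
    """Return R_alpha from a histogram of R by linear interpolation on the cdf."""
    total = sum(hist)
    width = rmax / len(hist)
    target = alpha * total
    cum = 0
    for i, count in enumerate(hist):
        if cum + count >= target and count > 0:
            # fraction of bin i needed to reach the target cumulative count
            frac = (target - cum) / count
            return (i + frac) * width
        cum += count
    return rmax
```

Applied to a long steady-state simulation with the suggested λ values, this is how quantiles such as R_{0.95} = 1.44 are read off.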
Sampling intervals
Autocorrelation usually increases both the average and
variability of the pdf(R). From a process analysis per-
spective, autocorrelations can be classified as either long
term or short term. Again, the classification is based on
the persistence of the effect relative to the desired process
analysis. When the process is not at steady state there is
long-term autocorrelation. For example, change in inlet
flow rate of a liquid entering a heat exchanger will cause
a dynamic response of the temperature of liquid leaving
the heat exchanger. The long-term autocorrelation
caused by this kind of process dynamics will change the
pdf(R) when a process moves from steady-state to non-
steady-state condition and identifying such events is the
objective of the new identification technique.
Short-term autocorrelation is extrinsic to the process
phenomena one is interested in, and is due to uncertain
events t hat occur and cannot be controlled or even pre-
dicted. For example, in a heat exchanger, if some of the
tubes are partially plugged, the outlet temperatures
from those tubes will be different from the others.
Because of the non-ideal mixing of the exit fluid of each
tube, packets of hot and cold fluid will exit the
exchanger. The temperature sensor on the outlet of the
heat exchanger may show short-term autocorrelation
even though the process may actually, from the operating
and control point of view, be at steady state.
Either short-term or long-term autocorrelation can
change the pdf(R) and can result in false not-at-steady-
state readings. Our solution to differentiate between
long-term and short-term autocorrelation is to make the
sampling intervals long enough such t hat the influence
of short-term autocorrelation on the sampled data is
negligible. The user must decide the time persistence of
influences which are not to be ' watched' by the steady-
state identifier. Practically, the autocorrelations of rep-
resentative steady-state dat a could be calculated and the
sampling interval selected to be long enough such that
the autocorrelation between successive sampling data is
zero within confidence limits. This is a standard statisti-
cal process control procedure in choosing sampling
interval for control charts¹⁰,¹¹. Compared to the linear
regression method, the selection of sampling interval is
nonjudgmental, mechanical and independent of noise
level of the process.
Figures 5, 6 and 7 show the pdf(R)s when the deviations are driven by a first-order autoregressive process [AR(1)]. When sampling at every data point, the degree of autocorrelation between two successive sampled data is so strong (autocorrelation at lag 1, ρ₁ = 0.7) that the pdf of R is significantly different from the uncorrelated situation. If sampling at every fifth data point, the degree of autocorrelation is less strong (ρ₁ = 0.168), but the pdf(R) is still quite different from uncorrelated data. However, when sampling at every tenth data point, the autocorrelation becomes negligible (ρ₁ = 0.0282) and the pdf(R) is nearly identical to the uncorrelated case. Also note that in Figure 5 the average R for autocorrelated data is big enough to trigger a 'not-at-steady-state' message provided that a normal value of R_crit (≈ 2-3) is used. Similar results are also found for a second-order autoregressive process [AR(2)]. So, generally, the longer the sampling interval, the lower the degree of autocorrelation, and the closer the pdf(R) is to the uncorrelated case.

Figure 5 Probability density function for the R-statistic. A comparison of correlated and uncorrelated data with a sampling interval of 1

Figure 6 Probability density function for the R-statistic. A comparison of correlated and uncorrelated data with a sampling interval of 5

Figure 7 Probability density function for the R-statistic. A comparison of correlated and uncorrelated data with a sampling interval of 10
For a first-order autoregressive process which is sampled every P data points, Appendix B gives:

E(s_{2,i}^2) = \frac{1}{2} E(\delta_{f,i}^2) = \frac{1}{2} E((X_i - X_{i-P})^2) = (1 - \rho_P)\sigma_x^2 \qquad (18)

and:

E(s_{1,i}^2) = \frac{2 - \lambda_1}{2} E(v_{f,i}^2) = \frac{2 - \lambda_1}{2} E((X_i - X_{f,i-P})^2) = \frac{1 - \rho_P}{1 - (1 - \lambda_1)\rho_P} \sigma_x^2 \qquad (19)

where ρ_P is the autocorrelation coefficient at lag P.
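Dividing Equation (19) by Equation (18), the expected inflation of R due to sampled autocorrelation reduces to 1/(1 − (1 − λ₁)ρ_P). A small numeric check of this ratio, assuming λ₁ = 0.2 (the value suggested earlier) and the lag-1 autocorrelations quoted above for the AR(1) example:

```python
def expected_r_inflation(rho_p, lam1=0.2):
    """E(s1^2)/E(s2^2) from Equations (18) and (19): 1 / (1 - (1 - lam1) * rho_P)."""
    return 1.0 / (1.0 - (1.0 - lam1) * rho_p)
```

For ρ₁ = 0.7 the mean R is inflated by roughly 2.3, past a typical R_crit of 2; subsampling until ρ_P = 0.0282 brings the factor down to about 1.02, which is why the ten-point interval recovers the uncorrelated pdf(R).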
For stable autoregressive processes, the bigger the step size P, the smaller the ρ_P. So, the above equations indicate that a large sampling interval makes the means of those estimates close to the true process variance and, as a result, R, the ratio of those two estimates, will be centred towards unity and the pdf(R) will be close to that of the uncorrelated case. By adjusting the sampling interval, short-term autocorrelations can be differentiated from long-term autocorrelation and can be prevented from fooling the steady-state identifier. Selecting the sampling interval to 'eliminate' the influence of autocorrelation is a standard statistical process control practice¹⁰,¹¹.

Figure 8 Demonstration of the steady-state identifier on distillation column feed temperature at the exit of an electrical heater. λ₁ = 0.2, λ₂ = λ₃ = 0.1, R_crit = 2.0

Figure 9 Demonstration of the steady-state identifier on the effluent pH value of a commercial in-line pH control system. λ₁ = 0.2, λ₂ = λ₃ = 0.1, R_crit = 2.0

Figure 10 Demonstration of the steady-state identifier on the cooling water pressure of a heat exchanger. λ₁ = 0.2, λ₂ = λ₃ = 0.1, R_crit = 2.0
Results
The steady-state identifier has been tested on temperature data from a distillation column feed preheater^{14} (Figure 8), on pH measurements from a commercial in-line pH control system^{15} (Figure 9), and on tube fluid pressure data from a pilot-scale heat exchanger^{16} (Figure 10). In all cases the steady-state identifier agrees with a visual recognition of the system state. It is also observed that the steady-state identifier is insensitive to changes in noise level. In Figures 9 and 10 there are dramatic increases in noise amplitude, but because the mean of the data is unchanged, the identifier indicates that the system is still at steady state, without user interaction. P = 1, λ1 = 0.2, λ2 = λ3 = 0.1 and R_crit = 2.0 were used in the testing. We found that these values of λ1, λ2, λ3 and R_crit, with P chosen so that the autocorrelation coefficient ρ_P < 0.05, balance the desire to reduce Type I and Type II errors against the desire for fast tracking of the process.
The steady-state identifier also responds correctly to the dynamic changes of the systems. In Figure 8, most of the process changes are nearly step changes, and those changes trigger the steady-state identifier almost instantly. But when the process returns to steady state, because of the filtering nature of the identifier, it takes some time for R (which may have a high value after a step change) to fall below the critical value and trigger a new 'steady-state' signal. The amount of delay primarily depends on the λ values, the nature of the process, and the trigger value. For most processes and λ values which are useful, the amount of delay is acceptable.
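The identification procedure the Results describe can be summarized as a small exponential-filter routine. The sketch below is an illustration written for this discussion, not the authors' code: the filter updates for X_f, ν²_f and δ²_f, the two variance estimates and the ratio R follow the equations given in the appendices, the parameter values λ1 = 0.2, λ2 = λ3 = 0.1 and R_crit = 2.0 are those used in the testing, and the class name and test signals are invented for the example.

```python
import random

class SteadyStateIdentifier:
    """R-statistic steady-state identifier (illustrative sketch of the method)."""

    def __init__(self, lam1=0.2, lam2=0.1, lam3=0.1, r_crit=2.0):
        self.l1, self.l2, self.l3 = lam1, lam2, lam3
        self.r_crit = r_crit
        self.xf = None      # filtered process value X_f
        self.nu2 = 0.0      # filtered squared deviation from the previous X_f
        self.d2 = 0.0       # filtered squared difference of successive data
        self.x_prev = None

    def update(self, x):
        """Process one measurement; return True while 'steady state' is declared."""
        if self.xf is None:
            self.xf, self.x_prev = x, x
            return True
        # nu2 uses the previous filtered value, so update it before X_f
        self.nu2 = self.l2 * (x - self.xf) ** 2 + (1.0 - self.l2) * self.nu2
        self.xf = self.l1 * x + (1.0 - self.l1) * self.xf
        self.d2 = self.l3 * (x - self.x_prev) ** 2 + (1.0 - self.l3) * self.d2
        self.x_prev = x
        s1 = (2.0 - self.l1) / 2.0 * self.nu2   # variance estimate from filtered deviation
        s2 = self.d2 / 2.0                      # variance estimate from filtered differences
        r = s1 / s2 if s2 > 0.0 else float("inf")
        return r <= self.r_crit

random.seed(1)
ssid = SteadyStateIdentifier()
# a noisy but steady signal, then a persistent ramp (both invented test inputs)
steady = [ssid.update(10.0 + random.gauss(0.0, 0.5)) for _ in range(2000)]
ramp = [ssid.update(10.0 + i + random.gauss(0.0, 0.5)) for i in range(500)]
```

With these settings the routine flags the steady stretch as steady most of the time and rejects the ramp once the filters adapt, mirroring the behaviour reported for Figures 8-10.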
Conclusion

A new statistically-based method has been developed for automated identification of steady state. The method is computationally inexpensive when compared to conventional techniques. The R-statistic is dimensionless and independent of the measurement level. Because it is a ratio of estimated variances, it is also independent of the process variance. Simulations and actual data show that, for recommended values of λ, critical values are also effectively independent of the magnitude and distribution of the noise. Tests on a variety of experimental processes show that the method agrees with visual recognition.
Acknowledgements

The authors wish to thank Dr B. 'Soundar' Ramchandran for his preliminary investigations, Mehul M. Desai, Prabu Murugan and Priyabrata Dutta for help in obtaining the experimental results, and Professor Kamal Chanda (Texas Tech University, Department of Math) for his guidance. The authors appreciate both the financial support and technical guidance from the following members of the Texas Tech Process Control and Optimization Consortium: Albemarle Corp. (formerly Ethyl Corp.); Amoco Oil Co.; Arco Exploration & Production; Conoco, Inc.; Diamond Shamrock; The Dow Chemical Co.; The Dynamic Matrix Control Corp.; Exxon Co., USA; Hyprotech, Inc.; Johnson Yokogawa Corp.; Phillips Petroleum Co.; and Setpoint, Inc.
References

1 Lin, T.D.V. Hydrocarb. Process. April 1993, 107
2 Keller, J.Y., Darouach, M. and Krzakala, G. Comput. Chem. Eng. 1994, 18(10), 1001
3 Albers, J.E. Hydrocarb. Process. March 1994, 65
4 Forbes, J.F. and Marlin, T.E. Ind. Eng. Chem. Res. 1994, 33, 1919
5 Bethea, R.M. and Rhinehart, R.R. 'Applied Engineering Statistics', Marcel Dekker, New York, NY, 1991
6 Crow, E.L., Davis, F.A. and Maxfield, M.W. 'Statistics Manual', Dover Publications, New York, NY, 1955, 63
7 Loar, J. in 'Control for the Process Industries', Putman Publications, Chicago, IL, Vol VII, No 11, November 1994, 62
8 Alekman, S.L. in 'Control for the Process Industries', Putman Publications, Chicago, IL, Vol VII, No 11, November 1994, 62
9 Jubien, G. and Bihary, G. in 'Control for the Process Industries', Putman Publications, Chicago, IL, Vol VII, No 11, November 1994, 64
10 'Manual on Presentation of Data and Control Chart Analysis', 6th edition, ASTM Manual Series MNL-7, American Society for Testing and Materials, Philadelphia, PA, 1991
11 Oakland, J.S. 'Statistical Process Control: A Practical Guide', John Wiley, New York, NY, 1986
12 Box, G.E.P. and Jenkins, G.M. 'Time Series Analysis: Forecasting and Control', Holden-Day, 1976
13 Edwards, P. and Hamson, M. 'Guide to Mathematical Modeling', CRC Press, Boca Raton, FL, 1990
14 Dutta, P. and Rhinehart, R.R. in 'Proceedings of the 1995 American Control Conference', Seattle, WA, June 1995, 1787
15 Mahuli, S.K., Rhinehart, R.R. and Riggs, J.B. J. Proc. Cont. 1992, 2(3)
16 Rhinehart, R.R. in 'Proceedings of the 1994 American Control Conference', Baltimore, MD, June 1994, 3122
17 Brockwell, P.J. and Davis, R.A. 'Time Series: Theory and Methods', Springer-Verlag, Berlin, 1987
Appendix A: Means of 'variances' for the process with a ramp

If a process has a long persisting ramp on its mean, the process data X_i can be expressed as:

X_i - X_0 = s (t_i - t_0) + n_i    (A.1)

where s is the slope of the ramp, t_i is the time at the ith point, and n_i is the noise at the ith point. For simplicity and without loss of generality, treat X_i and t_i as deviations from X_0 and t_0, so:

X_i = s t_i + n_i
The filtered process value is then:

X_{f,i} = (1 - \lambda_1) X_{f,i-1} + \lambda_1 (s t_i + n_i)    (A.2)

X_{f,i} can be expressed as the sum of X_{fm,i}, the filtered value of the mean of the process data, and X_{fn,i}, the filtered value of the noise. Define:

X_{fm,i} = (1 - \lambda_1) X_{fm,i-1} + \lambda_1 s t_i    (A.3)

X_{fn,i} = (1 - \lambda_1) X_{fn,i-1} + \lambda_1 n_i    (A.4)

Adding the above two equations together and comparing to Equation (A.2):

X_{f,i} = X_{fm,i} + X_{fn,i}    (A.5)

Here X_{fm,i} is the non-noisy part of X_{f,i}, while X_{fn,i} is the noisy part.
Let t_i = t_{i-1} + T and assume that X_{fm,i}, the non-noisy part of X_{f,i}, has reached the steady slope s. That is:

X_{fm,i} = s T + X_{fm,i-1}

Substituting into Equation (A.3) yields:

X_{fm,i} = s t_i + (1 - \frac{1}{\lambda_1}) s T    (A.6)

Taking the variance of Equation (A.4), and because X_{fn,i-1} and n_i are independent:

\sigma_{fn,i}^2 = (1 - \lambda_1)^2 \sigma_{fn,i-1}^2 + \lambda_1^2 \sigma_n^2    (A.7)

If the distribution of n_i is stationary, the distribution of X_{fn,i} will also be stationary:

\sigma_{fn,i}^2 = \sigma_{fn,i-1}^2 = \sigma_{fn}^2

Substituting into Equation (A.7):

\sigma_{fn}^2 = \frac{\lambda_1}{2 - \lambda_1} \sigma_n^2    (A.8)
If the process is stationary, from Equations (9) and (10):

E(s_{1,i}^2) = \frac{2 - \lambda_1}{2} E(\nu_{f,i}^2)
= \frac{2 - \lambda_1}{2} E((X_i - X_{f,i-1})^2)
= \frac{2 - \lambda_1}{2} E((s t_i + n_i - X_{fm,i-1} - X_{fn,i-1})^2)
= \frac{2 - \lambda_1}{2} E((n_i + \frac{s T}{\lambda_1} - X_{fn,i-1})^2)
= \frac{2 - \lambda_1}{2} (\frac{s^2 T^2}{\lambda_1^2} + \sigma_n^2 + \sigma_{fn}^2)    (A.9)

(n_i and X_{fn,i-1} are independent and their means are both zero.)

Inserting \sigma_{fn}^2 from Equation (A.8) into (A.9):

E(s_{1,i}^2) = \frac{2 - \lambda_1}{2} (\frac{s^2 T^2}{\lambda_1^2} + \frac{2 \sigma_n^2}{2 - \lambda_1})    (A.10)

This value not only depends on the variance of the noise, but also on the slope and the filter factor \lambda_1. A large s (steep slope) and a small \lambda_1 (large lag) can result in large values of s_{1,i}^2.
For the est i mat e s~ cal cul at ed fr om the filtered
squared-di fference of successive dat a, if the process is
st at i onary:
E ( v } , , ) = E ( ( X , - X,_,) 2)
So:
E ( s ~ j ) = - ~ . E ( ( X i - X , _ , f ) = - ~ . E ( ( s T + n, - n, , ) ' )
(A. I1)
Because n, and nj_l are i ndependent and their means are
all zero:
1
E ( s 2 i ) = ~ " E ( ( X i - X , i ) 2 )
= l ( s=V : + a~ + G~)
2
s 2 T 2
- + 0- n
2 (A.12)
The ratio of the means of those two variances becomes:

\frac{E(s_{1,i}^2)}{E(s_{2,i}^2)} = \frac{(2 - \lambda_1) E((X_i - X_{f,i-1})^2)}{E((X_i - X_{i-1})^2)} = \frac{(2 - \lambda_1) \frac{s^2 T^2}{\lambda_1^2} + 2 \sigma_n^2}{s^2 T^2 + 2 \sigma_n^2}    (A.13)

In dimensionless form:

\frac{E(s_{1,i}^2)}{E(s_{2,i}^2)} = \frac{\frac{2 - \lambda_1}{2 \lambda_1^2} (\frac{s T}{\sigma_n})^2 + 1}{\frac{1}{2} (\frac{s T}{\sigma_n})^2 + 1}    (A.14)

This is not a mean of R; it is a division of two means, but it provides a good estimate of the effect of a ramp in level on R. If a process is stationary, this value should be unity (here we assume that the only correlation between successive data is the ramp, and that n_i and n_{i+1} are independent). By looking at how the average R differs from unity, we can estimate how far away a process is from being stationary.
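Equation (A.14) is simple to evaluate directly. The helper below is an illustrative function written for this discussion (its name and interface are invented); it reproduces the analytical value 3.589 listed in Table 1 for sT = 0.5, σ_n = 3.0 and λ1 = 0.1.

```python
def ramp_ratio(s_times_T, sigma_n, lam1):
    """Equation (A.14): E(s1^2)/E(s2^2) for a mean that ramps by s*T per sample."""
    z = (s_times_T / sigma_n) ** 2   # dimensionless group (sT/sigma_n)^2
    return ((2.0 - lam1) / (2.0 * lam1 ** 2) * z + 1.0) / (z / 2.0 + 1.0)

print(round(ramp_ratio(0.5, 3.0, 0.1), 3))   # → 3.589, the analytical column of Table 1
print(ramp_ratio(0.0, 3.0, 0.1))             # → 1.0 for a stationary process (no ramp)
```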
Table 1 A comparison of estimates (s T = 0.5, \sigma_n = 3.0)

\lambda_1   \lambda_2   \lambda_3   average s_{1,i}^2 / average s_{2,i}^2   E(s_{1,i}^2)/E(s_{2,i}^2) by (A.14)   average R
0.1         0.1         0.1         3.580                                   3.589                                  3.953
0.1         0.05        0.05        3.574                                   3.589                                  3.799
0.1         0.01        0.01        3.580                                   3.589                                  3.641
0.1         0.005       0.005       3.578                                   3.589                                  3.618
A numerical comparison between the average R and the ratio of the mean s_{1,i}^2 to the mean s_{2,i}^2 is given in Table 1. This process is simulated by using computer-generated Gaussian distributed pseudo-random numbers as described earlier. Each time a new process value is generated, s_{1,i}^2, s_{2,i}^2 and R are updated and put into three different histograms. This simulation process is repeated a million times, and upon completion the pdf(R), pdf(s_{1,i}^2), pdf(s_{2,i}^2), average R, average s_{1,i}^2 and average s_{2,i}^2 can all be calculated from the histograms. All columns in Table 1 are simulated experimental data except for the ratio of E(s_{1,i}^2) and E(s_{2,i}^2), which is calculated by Equation (A.14). Given the ramp, the noise level and the \lambda values, the ratio of the mean s_{1,i}^2 to the mean s_{2,i}^2 can be calculated and used as an estimate of the average R. The deviation of the average R from unity indicates the ease with which the ramp can be identified by the steady-state identifier. The first column of results verifies that the analytical solution for E(s_1^2)/E(s_2^2) is accurate.
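The simulation procedure just described can be sketched as follows. This is an illustrative re-creation rather than the original code: it ramps the mean by sT = 0.5 per sample with σ_n = 3.0 Gaussian noise and λ1 = λ2 = λ3 = 0.1, then compares the ratio of the time-averaged s²_{1,i} to the time-averaged s²_{2,i} with the 3.589 predicted by Equation (A.14). It keeps running sums rather than the histogram bookkeeping used for Table 1, so the figures will differ slightly run to run.

```python
import random

random.seed(2)
lam = 0.1                     # lambda_1 = lambda_2 = lambda_3, as in Table 1
sT, sigma_n = 0.5, 3.0        # ramp per sample and noise level from Table 1
xf = 0.0
x_prev = random.gauss(0.0, sigma_n)
nu2, d2 = sigma_n ** 2, 2.0 * sigma_n ** 2       # rough initial guesses for the filters
sum1 = sum2 = 0.0
n_burn, n_avg = 2000, 200000
for i in range(1, n_burn + n_avg + 1):
    x = sT * i + random.gauss(0.0, sigma_n)      # ramping process value
    nu2 = lam * (x - xf) ** 2 + (1.0 - lam) * nu2   # deviation from previous X_f
    xf = lam * x + (1.0 - lam) * xf
    d2 = lam * (x - x_prev) ** 2 + (1.0 - lam) * d2
    x_prev = x
    if i > n_burn:                               # accumulate after the filters settle
        sum1 += (2.0 - lam) / 2.0 * nu2          # s1^2
        sum2 += d2 / 2.0                         # s2^2
ratio = sum1 / sum2
print(ratio)   # close to the 3.589 predicted by Equation (A.14)
```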
Appendix B: Means of 'variances' for the first-order autoregressive process*

For the first-order autoregressive process:

X_i = \phi X_{i-1} + a_i    (B.1)

where a_i = \mu + n_i, n_i is zero-mean noise, and 0 < \phi < 1 for the process to be stable.

We also have:

X_{i-1} = \phi X_{i-2} + a_{i-1}    (B.2)

Substituting Equation (B.2) into (B.1), and doing so recursively:

X_i = \phi^P X_{i-P} + a_i + a_{i-1} \phi + a_{i-2} \phi^2 + a_{i-3} \phi^3 + \cdots + a_{i-P+1} \phi^{P-1}    (B.3)

So, if the process is sampled every P data points, it can still be described as a first-order autoregressive process:

X_i = \Phi X_{i-P} + A_i    (B.4)

where:

\Phi = \phi^P    (B.5)

A_i = a_i + a_{i-1} \phi + a_{i-2} \phi^2 + a_{i-3} \phi^3 + \cdots + a_{i-P+1} \phi^{P-1}    (B.6)
Now calculate the mean of X_i. From Equation (B.1):

E(X_i) = \phi E(X_{i-1}) + E(a_i)    (B.7)

Because the process is stationary:

E(X_i) = E(X_{i-1}) = E(X)

So:

E(X) = \phi E(X) + E(a)    (B.8)

E(X) = \frac{E(a)}{1 - \phi}    (B.9)

As for the variance of X_i, from Equation (B.1), because X_{i-1} and a_i are independent:

\sigma_{X_i}^2 = \phi^2 \sigma_{X_{i-1}}^2 + \sigma_{a_i}^2

For a stationary process:

\sigma_X^2 = \phi^2 \sigma_X^2 + \sigma_a^2    (B.10)

So:

\sigma_X^2 = \frac{\sigma_a^2}{1 - \phi^2}    (B.11)
If the process is sampled every P data points (P > 1), the mean and variance of X_i will stay the same.

Now calculate the mean of the squared difference of successive data. Because the mean of the difference of successive data is zero, the mean of the squared difference of successive data is equal to the variance of the difference of successive data.

If sampling each data point, let:

d_i = X_i - X_{i-1}    (B.12)

From Equation (B.1):

d_i = X_i - X_{i-1} = \phi X_{i-1} + a_i - X_{i-1} = (\phi - 1) X_{i-1} + a_i    (B.13)

a_i and X_{i-1} are independent, so:

var(X_i - X_{i-1}) = (\phi - 1)^2 \sigma_X^2 + \sigma_a^2 = (\phi - 1)^2 \sigma_X^2 + (1 - \phi^2) \sigma_X^2 = (2 - 2\phi) \sigma_X^2    (B.14)
If sampling every P data points, Equation (B.13) becomes:

X_i - X_{i-P} = \Phi X_{i-P} + A_i - X_{i-P} = (\Phi - 1) X_{i-P} + A_i    (B.15)

(* For an excellent review of time series analysis, refer to References 12 and 17.)

A_i and X_{i-P} are also independent, so:

var(X_i - X_{i-P}) = (\Phi - 1)^2 \sigma_X^2 + \sigma_A^2 = (\Phi - 1)^2 \sigma_X^2 + (1 - \Phi^2) \sigma_X^2 = (2 - 2\Phi) \sigma_X^2    (B.16)

That is:

\frac{1}{2} E((X_i - X_{i-P})^2) = (1 - \Phi) \sigma_X^2 = (1 - \rho_P) \sigma_X^2    (B.17)

For the estimate s_{2,i}^2 calculated from the filtered squared-difference of successive data:

E(s_{2,i}^2) = \frac{1}{2} E(\delta_{f,i}^2)

Because:

\delta_{f,i}^2 = \lambda_3 (X_i - X_{i-1})^2 + (1 - \lambda_3) \delta_{f,i-1}^2

if the process is stationary, E(\delta_{f,i}^2) = E(\delta_{f,i-1}^2).

The above two equations lead to:

E(\delta_{f,i}^2) = E((X_i - X_{i-1})^2)

So:

E(s_{2,i}^2) = \frac{1}{2} E(\delta_{f,i}^2) = (1 - \rho_P) \sigma_X^2    (B.18)

This is the mean of the 'variance' calculated by the filtered squared differences of successive data. If \phi is small, which means the degree of autocorrelation is weak, or P (the sampling interval) is large, this number will be close to the true variance of the process.
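Equation (B.17) is easy to confirm numerically. The following sketch (an illustration, with parameter values chosen for the example) simulates a first-order autoregressive process with φ = 0.8 and σ_a = 1, then compares ½·mean((X_i − X_{i−P})²) at P = 5 with (1 − φ^P)σ_X², using σ_X² = σ_a²/(1 − φ²) from Equation (B.11).

```python
import random

random.seed(3)
phi, sigma_a, P = 0.8, 1.0, 5
burn, n = 200, 200000
x, series = 0.0, []
for i in range(burn + n):
    x = phi * x + random.gauss(0.0, sigma_a)   # X_i = phi * X_{i-1} + a_i, with mu = 0
    if i >= burn:                              # discard the start-up transient
        series.append(x)

# half the mean squared difference at lag P
half_msd = sum((series[i] - series[i - P]) ** 2
               for i in range(P, len(series))) / (2.0 * (len(series) - P))
sigma_x2 = sigma_a ** 2 / (1.0 - phi ** 2)     # Equation (B.11)
theory = (1.0 - phi ** P) * sigma_x2           # Equation (B.17), rho_P = phi**P
print(half_msd, theory)                        # the two agree closely
```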
Before we can estimate the mean of the 'variance' calculated by the filtered squared-deviation from the filtered process value, we first look at the filtered process data X_{f,i}. From Equations (2) and (B.1):

X_{f,i} = (1 - \lambda_1) X_{f,i-1} + \lambda_1 X_i = (1 - \lambda_1) B X_{f,i} + \lambda_1 X_i    (B.19)

X_i = \phi X_{i-1} + a_i = \phi B X_i + a_i    (B.20)

where B is the backward shift operator.

From Equations (B.19) and (B.20):

X_{f,i-1} = \frac{\lambda_1 B}{1 - (1 - \lambda_1) B} X_i = \frac{\lambda_1 B}{1 - (1 - \lambda_1) B} \cdot \frac{1}{1 - \phi B} a_i    (B.21)

and:

X_i - X_{f,i-1} = \frac{1}{1 - \phi B} a_i - \frac{\lambda_1 B}{1 - (1 - \lambda_1) B} \cdot \frac{1}{1 - \phi B} a_i    (B.22)

That is:

X_i - X_{f,i-1} = \frac{1 - B}{1 - (1 - \lambda_1 + \phi) B + \phi (1 - \lambda_1) B^2} a_i    (B.23)

The deviation from the previous filtered value, X_i - X_{f,i-1}, is a second-order autoregressive, first-order moving average (ARMA(2,1)) process.
Let:

d_i = X_i - X_{f,i-1}    (B.24)

The above equation is equivalent to:

d_i = (1 - \lambda_1 + \phi) d_{i-1} - \phi (1 - \lambda_1) d_{i-2} + a_i - a_{i-1}    (B.25)

The average of d_i is zero, so the mean of the squared-deviation from the filtered value is equal to the variance of d_i, which can be calculated from the autocovariance function r_0 of d_i, the ARMA(2,1) process.

The calculation of r_0 for an ARMA(2,1) process is standard^{17}, and it is found that for d_i:

r_0 = \frac{2}{2 - \lambda_1} \cdot \frac{1 - \phi}{1 - (1 - \lambda_1) \phi} \sigma_X^2

So:

\frac{2 - \lambda_1}{2} E((X_i - X_{f,i-1})^2) = \frac{2 - \lambda_1}{2} r_0 = \frac{1 - \phi}{1 - (1 - \lambda_1) \phi} \sigma_X^2    (B.26)

This is the mean of the 'variance' calculated by the squared deviation from the filtered value.
If a process is sampled every P data points, the analysis is identical; however, \Phi replaces \phi, so:

\frac{2 - \lambda_1}{2} E((X_i - X_{f,i-P})^2) = \frac{1 - \Phi}{1 - (1 - \lambda_1) \Phi} \sigma_X^2

The mean of the estimate s_{1,i}^2 calculated by the filtered squared-deviation from the filtered process value is:

E(s_{1,i}^2) = \frac{2 - \lambda_1}{2} E(\nu_{f,i}^2)

Because:

\nu_{f,i}^2 = \lambda_2 (X_i - X_{f,i-1})^2 + (1 - \lambda_2) \nu_{f,i-1}^2

if the process is stationary:

E(\nu_{f,i}^2) = E(\nu_{f,i-1}^2)

The above two equations lead to:

E(\nu_{f,i}^2) = E((X_i - X_{f,i-1})^2)

So:

E(s_{1,i}^2) = \frac{1 - \Phi}{1 - (1 - \lambda_1) \Phi} \sigma_X^2    (B.27)

Table 2 A comparison of estimates

P       average s_{1,i}^2   E(s_{1,i}^2) by (B.30)   average s_{2,i}^2   E(s_{2,i}^2) by (B.29)   (avg s_{1,i}^2)/(avg s_{2,i}^2)   average R
P = 1   16.854              15.813                   5.036               5.029                    3.347                             3.2814
P = 3   19.554              19.476                   14.344              14.294                   1.363                             1.4265
P = 5   20.051              20.050                   18.171              18.097                   1.103                             1.1570
P = 8   20.266              20.245                   19.915              19.818                   1.018                             1.0680
For the first-order autoregressive process:

\rho_P = \Phi    (B.28)

where \rho_P is the autocorrelation function at lag P and, by definition:

\rho_P = \frac{r_P}{r_0}

So, Equations (B.18) and (B.27) can be rewritten as:

E(s_{2,i}^2) = (1 - \rho_P) \sigma_X^2    (B.29)

and:

E(s_{1,i}^2) = \frac{2 - \lambda_1}{2} E(\nu_{f,i}^2) = \frac{2 - \lambda_1}{2} E((X_i - X_{f,i-P})^2) = \frac{1 - \rho_P}{1 - (1 - \lambda_1) \rho_P} \sigma_X^2    (B.30)
Equation (B.29) is also true for any order of autoregressive process. For an n-th order stationary autoregressive process:

\frac{1}{2} E((X_i - X_{i-P})^2) = \frac{1}{2} E(((X_i - E(X)) - (X_{i-P} - E(X)))^2)
= \frac{1}{2} (E((X_i - E(X))^2) + E((X_{i-P} - E(X))^2) - 2 E((X_i - E(X))(X_{i-P} - E(X))))
= \sigma_X^2 - \mathrm{cov}(X_i, X_{i-P}) = \sigma_X^2 - \rho_P \sigma_X^2 = (1 - \rho_P) \sigma_X^2    (B.31)

Equation (B.30) is not valid for higher-order autoregressive processes, but the following example shows that Equation (B.30) can give a good approximation for a second-order autoregressive process.
An example second-order autoregressive process is described as:

X_i = 0.85 X_{i-1} - 0.15 X_{i-2} + a_i    (B.32)

where X_i is the process variable and a_i is independent Gaussian distributed with mean equal to 50 and standard deviation equal to 3. This process is simulated by using computer-generated Gaussian distributed pseudo-random numbers as described earlier. Each time a new process value is generated, s_{1,i}^2, s_{2,i}^2 and R are updated and put into three different histograms. This simulation process is repeated a million times, and upon completion the pdf(R), pdf(s_{1,i}^2), pdf(s_{2,i}^2), average R, average s_{1,i}^2 and average s_{2,i}^2 can all be calculated from the histograms.

In the simulation \lambda_1 = \lambda_2 = \lambda_3 = 0.1 are used; the results are summarized in Table 2. (In this example the autocorrelation function \rho_P in Equations (B.29) and (B.30) was calculated analytically. In real processes, however, autocorrelation functions are usually calculated numerically.)
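The analytical columns of Table 2 follow from the Yule-Walker recursion for the autocorrelation function of process (B.32). The sketch below was written for this discussion (it is not the authors' code): it computes ρ_k from ρ_1 = φ1/(1 − φ2) and ρ_k = φ1 ρ_{k−1} + φ2 ρ_{k−2}, the process variance from σ_X² = σ_a²/(1 − φ1 ρ_1 − φ2 ρ_2), and then evaluates Equations (B.29) and (B.30) at P = 3.

```python
phi1, phi2, sigma_a, lam1 = 0.85, -0.15, 3.0, 0.1   # process (B.32) and lambda_1

# Yule-Walker recursion: rho_1 = phi1/(1 - phi2), rho_k = phi1*rho_{k-1} + phi2*rho_{k-2}
rho = [1.0, phi1 / (1.0 - phi2)]
for _ in range(2, 10):
    rho.append(phi1 * rho[-1] + phi2 * rho[-2])

# AR(2) process variance: sigma_X^2 = sigma_a^2 / (1 - phi1*rho_1 - phi2*rho_2)
sigma_x2 = sigma_a ** 2 / (1.0 - phi1 * rho[1] - phi2 * rho[2])

P = 3
e_s2 = (1.0 - rho[P]) * sigma_x2                                   # Equation (B.29)
e_s1 = (1.0 - rho[P]) / (1.0 - (1.0 - lam1) * rho[P]) * sigma_x2   # Equation (B.30)
print(round(e_s1, 2), round(e_s2, 2))   # → 19.48 14.29, matching the P = 3 row of Table 2
```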
In Table 2, it is observed that the average of s_{2,i}^2 is extremely close to E(s_{2,i}^2); the small difference is due to numerical error. Remember that the bin width used in the histograms is 0.01, which can place a limitation on the accuracy of the average of s_{2,i}^2. The average of s_{1,i}^2 is generally very close to E(s_{1,i}^2) calculated from Equation (B.30), especially when the sampling interval is large and the autocorrelation is weak. It is also observed that there is a strong relation between the ratio of the average of s_{1,i}^2 to the average of s_{2,i}^2 and the average of R. When the sampling interval is large, both the average of s_{1,i}^2 and the average of s_{2,i}^2 are close to the true process variance, the average of R will be close to 1, and the pdf(R) will be close to the pdf(R) for an uncorrelated process.

Equations (B.29) and (B.30) can be used to select the minimum sampling period that keeps autocorrelation from triggering the steady-state identifier. For a real process, the autocorrelation function can be calculated numerically when the process is stationary, as determined by visual inspection. Then, by Equations (B.29) and (B.30), E(s_{1,i}^2) and E(s_{2,i}^2) can be calculated for each different sampling interval.
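The selection rule suggested here (pick the smallest P for which the numerically estimated ρ_P is below a tolerance such as 0.05) can be sketched as follows. The function names are invented for the illustration; the test data is a stationary AR(1) series with φ = 0.7, for which ρ_P = 0.7^P drops below 0.05 at P = 9.

```python
import random

def acf(x, lag):
    """Numerically estimated autocorrelation of the series x at the given lag."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x) / n
    ck = sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag)) / n
    return ck / c0

def min_sampling_interval(x, threshold=0.05, max_lag=50):
    """Smallest sampling interval P whose estimated |rho_P| is below the threshold."""
    for p in range(1, max_lag + 1):
        if abs(acf(x, p)) < threshold:
            return p
    return None

# Stationary AR(1) test data with phi = 0.7 (an invented example process)
random.seed(4)
x, data = 0.0, []
for _ in range(300000):
    x = 0.7 * x + random.gauss(0.0, 1.0)
    data.append(x)
P = min_sampling_interval(data)
print(P)   # near the analytical answer of 9
```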
