
Centro de Investigación y de Estudios Avanzados
del Instituto Politécnico Nacional

Unidad Guadalajara


Análisis no Lineal de Sistemas de Potencia
usando Descomposición Modal de Koopman
y Teoría de Perturbación


Tesis que presenta:

Marcos Alfredo Hernández Ortega

para obtener el grado de:

Doctor en Ciencias

en la especialidad de:

Ingeniería Eléctrica


Director de Tesis:
Dr. Arturo Román Messina


CINVESTAV del IPN Unidad Guadalajara. Guadalajara, Jalisco, Enero de 2019.
Centro de Investigación y de Estudios Avanzados
del Instituto Politécnico Nacional

Unidad Guadalajara


Nonlinear Power System Analysis using
Koopman Mode Decomposition and
Perturbation Theory


A thesis presented by:
Marcos Alfredo Hernández Ortega

to obtain the degree of:
Doctor of Science

in the subject of:
Electrical Engineering

Thesis Advisor:
Dr. Arturo Román Messina


CINVESTAV del IPN Unidad Guadalajara. Guadalajara, Jalisco, November.
Análisis no Lineal de Sistemas de Potencia
usando Descomposición Modal de Koopman
y Teoría de Perturbación


Tesis de Doctorado en Ciencias
en Ingeniería Eléctrica


Por:
Marcos Alfredo Hernández Ortega
Maestro en Ciencias
CINVESTAV del IPN Unidad Guadalajara 2012-2014


Becario de CONACyT, expediente no. 282116


Director de Tesis:
Dr. Arturo Román Messina


CINVESTAV del IPN Unidad Guadalajara. Guadalajara, Enero de 2019.
Nonlinear Power System Analysis using
Koopman Mode Decomposition and
Perturbation Theory


Doctor of Science Thesis
in Electrical Engineering


By:
Marcos Alfredo Hernández Ortega
Master of Science
CINVESTAV del IPN Unidad Guadalajara


Scholarship granted by CONACYT, No. 282116


Thesis Advisor:
Dr. Arturo Román Messina


CINVESTAV del IPN Unidad Guadalajara. November.

ii
Acknowledgements

I would like to express my sincere gratitude to my advisor, Dr. Arturo Román Messina, for sharing his knowledge, understanding and patience, and for providing me with the necessary support and enthusiasm required to make this dissertation possible.

I would also like to thank my family for the support they have provided me throughout my entire life.

To my professors and friends at CINVESTAV Guadalajara, as well as to my friends at Indaparapeo, Michoacán, México.

Finally, acknowledgements are also due to CONACYT for its support.
Abstract

Power system transient motions may vary from localized behavior to large-scale global motion and may be highly correlated, especially in the case of poorly damped or unstable oscillations following large disturbances. Such interactions can be usefully described in terms of modes interacting with each other via the network structure and via the initial operating conditions.

Understanding how these modes evolve and interact is key not only to assessing system stability, but also to identifying the dynamic devices and their control systems involved in the oscillations, and to designing controllers.

In this dissertation, a new model-based framework that combines Koopman mode analysis and perturbation theory is proposed, where non-linear behavior is interpreted as a projection of the Koopman operator eigenfunctions of an extended coordinate system onto the physical variables of the system.
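
For orientation, the type of expansion this framework produces can be written schematically, in the notation of the Glossary, as

$$\Delta\mathbf{x}(t)\;\approx\;\sum_{j}\varphi_{j}\big(\Delta\mathbf{x}(0)\big)\,e^{\lambda_{j}t}\,\mathbf{v}_{j}\;+\;\sum_{j,k}\varphi_{j,k}\big(\Delta\mathbf{x}(0)\big)\,e^{(\lambda_{j}+\lambda_{k})t}\,\mathbf{v}_{j,k}\;+\;\cdots$$

where the $\varphi_{j}$ and $\varphi_{j,k}$ are Koopman eigenfunctions evaluated at the initial condition, the $\mathbf{v}_{j}$ and $\mathbf{v}_{j,k}$ are the first- and second-order Koopman modes, and the exponents of the higher-order terms are sums of the linear eigenvalues $\lambda_{j}$; Chapters 2 and 4 make this construction precise.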

Efficient schemes for the computation of these extended models are derived, together with quantitative measures for assessing the observability and participation of the non-linear modes.
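
As a point of reference, when a purely linear response is expanded in its modes $e^{\lambda_{j}t}\mathbf{v}_{j}$, the $j$-th mode is seen in the output signals through the vector

$$C_{1}\mathbf{v}_{j},$$

whose entry magnitudes rank the outputs for that mode; Chapter 5 derives the corresponding Koopman observability measures ($\mathbf{o}_{j}$, $\hat{\mathbf{o}}_{j}$, $\bar{\mathbf{o}}_{j}$ in the Glossary) for the non-linear eigenfunctions.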

Furthermore, a novel linearization method based on perturbation theory and the chain rule is introduced to reduce the computational burden of obtaining the matrices of coefficients.
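
As a baseline for the kind of computation this method accelerates, a minimal sketch of the conventional perturbation-based center-difference approximation (CDA) of the first-order matrix of coefficients A11 is given below; the function name and the toy system are illustrative only, and the recursive, chain-rule-based scheme itself is developed in Chapter 3.

```python
import numpy as np

def cda_jacobian(f, x_sep, rho=1.0e-5):
    """Center-difference approximation (CDA) of A11 = df/dx evaluated at the
    stable equilibrium point x_sep, using a small perturbation rho."""
    n = x_sep.size
    A11 = np.zeros((n, n))
    for i in range(n):
        e_i = np.zeros(n)
        e_i[i] = rho
        # i-th column: (f(x_sep + rho*e_i) - f(x_sep - rho*e_i)) / (2*rho)
        A11[:, i] = (f(x_sep + e_i) - f(x_sep - e_i)) / (2.0 * rho)
    return A11

# Illustrative use on a toy two-state system (not one of the thesis test cases).
f = lambda x: np.array([x[1], -np.sin(x[0]) - 0.1 * x[1]])
print(cda_jacobian(f, np.zeros(2)))
```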

Subsequently, the development of a (truncated) non-linear quadratic controller with structural constraints is presented, based on the perturbed Koopman mode analysis method and linear optimal control theory.
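
For context, the unconstrained linear building block of such a design is the standard discrete-time LQR gain obtained from the discrete algebraic Riccati equation (DARE); a minimal sketch follows, with matrix names taken from the Glossary and SciPy assumed to be available. The structurally constrained and truncated quadratic designs are the subject of Chapter 7.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def dlqr_gain(A11, B11, Q11, R):
    """Unconstrained discrete-time LQR: returns K11 such that u = -K11 @ x
    minimizes sum(x' Q11 x + u' R u), by solving the DARE for P11."""
    P11 = solve_discrete_are(A11, B11, Q11, R)
    K11 = np.linalg.solve(R + B11.T @ P11 @ B11, B11.T @ P11 @ A11)
    return K11
```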

Finally, numerical results for several multi-machine power systems are used to validate and illustrate the proposed methodologies. Future work and research directions arising from this work are also outlined.

iii
Resumen

Los movimientos transitorios del sistema pueden variar desde un comportamiento local a eventos de escala global, y pueden estar altamente correlacionados, especialmente en el caso de oscilaciones pobremente amortiguadas o inestables que surgen después de grandes disturbios. Tales interacciones pueden ser descritas útilmente en términos de modos que interactúan por medio de la estructura de la red y de las condiciones iniciales.

Entender cómo estos modos evolucionan e interactúan es clave no solamente para evaluar la estabilidad del sistema, sino también para identificar los dispositivos dinámicos, y sus sistemas de control, que estén envueltos en las oscilaciones, y para el diseño de controles.

En esta tesis se introduce un nuevo marco de trabajo basado en modelos que combina el análisis modal de Koopman y la teoría de perturbación. En este marco, el comportamiento no lineal es interpretado como la proyección de las funciones propias del operador de Koopman de un sistema extendido sobre las variables físicas del sistema.

Se derivan esquemas eficientes para su cálculo, así como medidas cuantitativas de observabilidad y participación de los modos no lineales.

Por otra parte, con el objetivo de reducir el costo computacional del cálculo de las matrices de coeficientes, se propone una novedosa técnica de linealización basada en la teoría de la perturbación y la regla de la cadena.

Posteriormente, se presenta el desarrollo de un controlador cuadrático no lineal (truncado) con limitaciones estructurales, basado en el análisis perturbado de Koopman y la teoría de control lineal óptimo.

Resultados numéricos en varios sistemas de potencia multi-máquina son usados para validar e ilustrar las metodologías propuestas. Así también, se enlistan los trabajos futuros y las ramas de investigación originados de este trabajo.

iv
Table of Contents
Index of tables………………………………………………….x

Index of figures……………………………………………….xii

Glossary............................................................................. xv 
A.  Abbreviations ........................................................................................ xv 
B.  Functions .............................................................................................. xvi 
C.  Indexes ................................................................................................ xvii 
D.  Parameters ......................................................................................... xviii 
E.  Sets ....................................................................................................... xxi 
F.  Symbols ............................................................................................... xxii 
G.  Variables ............................................................................................ xxiii

Chapter 1. Introduction...................................................... 1 
1.1 Background and motivation .................................................................... 2 
1.2 Problem statement ................................................................................... 2 
1.3 A brief review of previous work ............................................................... 3 
1.4 Dissertation objectives ............................................................................. 6 
1.5 Research contributions ............................................................................ 6 
1.6 Organization of the dissertation .............................................................. 7 
1.7 Publications .............................................................................................. 8 
1.7.1 Conference papers ............................................................................. 8 
1.7.2 Refereed journal papers .................................................................... 8 
1.8 References................................................................................................. 9

Chapter 2. Koopman Mode Analysis ............................... 14 


2.1 The Koopman operator .......................................................................... 15 
2.1.1 Koopman eigenfunctions ................................................................. 16 
2.1.2 Koopman modes ............................................................................... 17 
2.2 Special cases of analysis ...................................................................... 18 
2.2.1 KMA of diagonal systems ................................................................ 18 
2.2.2 KMA of a linear system ................................................................... 19 
2.2.3 KMA of the output variables of a linear system ............................ 19 
2.3 Approximate methods ............................................................................ 21 
2.3.1 Numerical data-driven methods ..................................................... 22 
2.3.1.1 Koopman mode decomposition ............................................................... 22 
2.3.1.2 Dynamic mode decomposition ................................................................ 25 
2.3.2 Model-based approaches ................................................................. 26 
2.3.2.1 Extended dynamic mode decomposition ................................................. 27 

v
2.3.2.2 Finite linear representations ..................................................................... 29 
2.4 Concluding remarks ............................................................................... 29 
2.5 References............................................................................................... 30

Chapter 3. Recursive Linearization................................. 32 


3.1 Mathematical background ..................................................................... 33 
3.2 Conventional linearization methods ..................................................... 34 
3.2.1 First-order linearization methods ................................................... 35 
3.2.1.1 Analytical linearization ............................................................................ 36 
3.2.1.2 Forward-difference approximation (FDA) .............................................. 36 
3.2.1.3 Center-difference approximation (CDA) ................................................. 38 
3.2.2 Second- and higher-order approximations ..................................... 39 
3.2.2.1 Analytic linearization ............................................................................... 40 
3.2.2.2 Perturbation-based methods ..................................................................... 42 
3.3 Recursive linearization .......................................................................... 45 
3.3.1 Mathematical formulation .............................................................. 46 
3.3.1.1 First order ................................................................................................. 46 
3.3.1.2 Second order ............................................................................................ 47 
3.3.1.3 Third order ............................................................................................... 50 
3.3.2 An efficient scheme for the third-order recursive linearization .... 51 
3.3.2.1 Perturbation vectors ................................................................................. 52 
3.3.2.2 First stage ................................................................................................. 53 
3.3.2.3 Second stage ............................................................................................ 54 
3.3.2.4 Third stage ............................................................................................... 55 
3.3.3 Recursive linearization for complex-valued systems ..................... 56 
3.4 Concluding remarks ............................................................................... 57 
3.5 References............................................................................................... 57

Chapter 4. Koopman eigenfunctions-based extended


model ................................................................................. 59 
4.1 Mathematical formulation ..................................................................... 60 
4.1.1 Perturbed non-linear models .......................................................... 60 
4.1.2 Transformation to an upper-triangular system ............................. 61 
4.1.3 Transformation to a diagonal system ............................................. 63 
4.1.4 Initial conditions for the Koopman eigenfunctions ........................ 64 
4.1.5 Solution of the non-linear response in physical variables ............. 65 
4.2 Outline of the proposed framework ....................................................... 66 
4.3 Single-machine, infinite-bus system: an illustrative example ............. 68 
4.4 Non-linearity indexes ............................................................................. 70 
4.5 Efficient computation of PKMA............................................................. 71 
4.5.1 Higher-order matrices of coefficients .............................................. 72 
4.5.1.1 Quadratic eigenfunctions ......................................................................... 73 
4.5.1.2 Cubic eigenfunctions ............................................................................... 74 
4.5.2 Efficient eigendecomposition of the extended system .................... 74 

vi
4.5.3 Efficient computation of initial conditions ..................................... 77 
4.5.4 Sparsity-promoting criteria............................................................. 78 
4.6 Concluding remarks ............................................................................... 80 
4.7 References............................................................................................... 81 

Chapter 5. Koopman Observability Measures ................ 84 


5.1 Linear modal observability measures ................................................... 85 
5.1.1 Quantitative measures of observability.......................................... 85 
5.1.2 Effect of constant entries................................................................. 86 
5.2 Koopman observability measures under Koopman operator theory ... 87 
5.3 Koopman measures of observability in KMD ....................................... 89 
5.3.1 The ranking criteria problem .......................................................... 89 
5.3.2 An observability-based approach .................................................... 90 
5.4 Koopman observability measures under perturbation theory ............. 91 
5.4.1 Derivation of the quantitative observability measures ................. 91 
5.4.2 Weighted observability measures ................................................... 93 
5.4.3 Importance for wide-area monitoring and control ......................... 94 
5.5 Concluding remarks ............................................................................... 94 
5.6 References............................................................................................... 95

Chapter 6. Recursive Linearization of Power System


Models ............................................................................... 97 
6.1 Non-linear model of a multi-machine power system ............................ 98 
6.1.1 Transient model of the generator ................................................... 98 
6.1.2 Simple exciter dynamics.................................................................. 99 
6.1.3 Power system stabilizer................................................................. 100 
6.2 Recursive analysis of third order......................................................... 101 
6.2.1 Recursive linearization of the dynamical equations .................... 101 
6.2.2 Analysis of the system’s embedded sub-functions ....................... 102 
6.2.2.1 Synchronous machines and grid equations ............................................ 102 
6.2.2.2 Simple exciter and PSS .......................................................................... 103 
6.3 Input matrices and state-input interactions ....................................... 104 
6.3.1 First-order dynamics ..................................................................... 105 
6.3.2 Second-order dynamics .................................................................. 105 
6.3.3 Third-order dynamics .................................................................... 106 
6.4 Third-order analysis of the output matrices ....................................... 106 
6.4.1 Bus voltages ................................................................................... 107 
6.4.1.1 Non-linear equations .............................................................................. 107 
6.4.1.2 Analysis of non-linear content ............................................................... 107 
6.4.2 Active power flows ......................................................................... 108 
6.4.2.1 Non-linear equations .............................................................................. 108 
6.4.2.2 Recursively linearized modules ............................................................. 109 
6.5 Concluding remarks ............................................................................. 110 
6.6 References............................................................................................. 110
vii
Chapter 7. PKMA-based Truncated Non-linear Quadratic
Controller ....................................................................... 112 
7.1 Problem formulation ............................................................................ 113 
7.1.1 Linear structurally-constrained LQR controllers ........................ 113 
7.1.1.1 Distributed control ................................................................................. 114 
7.1.1.2 Constrained LQR control ....................................................................... 114 
7.1.2 General characteristics of the required quadratic controller ...... 117 
7.2 LQR design for quadratic Carleman models....................................... 118 
7.2.1 Quadratic Carleman linearization-based models......................... 118 
7.2.2 Bilinear-quadratic optimal control ............................................... 118 
7.2.3 Optimal control of quadratic Carleman models ........................... 119 
7.3 Controller design in Jordan space ....................................................... 121 
7.3.1 First order ...................................................................................... 122 
7.3.2 Second order .................................................................................. 123 
7.3.3 Structurally-constrained design ................................................... 125 
7.3.3.1 Sparse structure ...................................................................................... 125 
7.3.3.2 Iterative process ..................................................................................... 126 
7.3.3.3 Bilinear effect on the quadratic PKMA model ...................................... 128 
7.3.3.4 Diagonalization of the closed-loop system ............................................ 129 
7.3.3.5 Truncated quadratic LQR controller. ..................................................... 131 
7.4 Scheme for future online application .................................................. 132 
7.5 Concluding remarks ............................................................................. 135 
7.6 References............................................................................................. 135

Chapter 8. Numerical Results ....................................... 138 


8.1 Systems of study .................................................................................. 139 
8.1.1 Three-machine test system ........................................................... 139 
8.1.2 Two-area, four-machine power system ......................................... 140 
8.1.3 16-generator, 68-bus power system .............................................. 141 
8.1.4 IEEE 50-machine power system ................................................... 143 
8.1.5 Other multi-machine power systems ............................................ 144 
8.2 Recursive linearization ........................................................................ 145 
8.2.1 Size-variable synthetic case of application ................................... 146 
8.2.1.1 Non-linear equations .............................................................................. 146 
8.2.1.2 Recursive linearization analysis............................................................. 147 
8.2.2 Comparison with conventional linearization techniques............. 147 
8.2.2.1 Qualitative comparison .......................................................................... 147 
8.2.2.2 Quantitative comparison – synthetic case of study ................................ 148 
8.2.2.3 Quantitative comparison – multi-machine power systems .................... 151 
8.3 Perturbed Koopman mode analysis..................................................... 153 
8.3.1 Three-machine test system ........................................................... 154 
8.3.2 IEEE 50-machine test system ....................................................... 155 
8.3.3 Qualitative comparison with the MNF ......................................... 157 
8.3.4 Non-linearity indexes .................................................................... 157 

viii
8.3.5 Complexity analysis ...................................................................... 159 
8.4 Koopman observability measures........................................................ 160 
8.4.1 Application to transient stability data ......................................... 161 
8.4.1.1 The IEEE 39-bus system........................................................................ 161 
8.4.1.2 Six-area, 377-machine model of the Mexican Interconnected System
(MIS) .................................................................................................................. 162 
8.4.2 Application to measured data ....................................................... 167 
8.4.3 Application to extended Koopman eigenfunctions-based models 170 
8.4.3.1 Two-area power system ......................................................................... 171 
8.4.3.2 16-machine test system .......................................................................... 173 
8.5 PKMA-based quadratic non-linear controller ..................................... 175 
8.5.1 Two-area, four-machine power system ......................................... 175 
8.5.2 IEEE 16-machine, 68-bus power system ...................................... 176 
8.5.2.1 Full quadratic non-linear controller ....................................................... 176 
8.5.2.2 Truncated quadratic non-linear controller ............................................. 179 
8.5.2.3 Online design ......................................................................................... 181 
8.6 Concluding remarks ............................................................................. 184 
8.7 References............................................................................................. 185

Chapter 9. Conclusions .................................................. 189 


9.1 General conclusions ............................................................................. 189 
9.2 Future work .......................................................................................... 190

ix
Index of tables
Table 8.1. Loading conditions in p.u. on a 100 MVA base. .......................... 140 
Table 8.2. Linear modes of the three-machine system. ............................... 140 
Table 8.3. Selected linear stability eigenmodes of the two-area, four-machine
power system. ................................................................................................ 141 
Table 8.4. Linear stability eigenmodes of the 16-machine, 68-bus power
system. ........................................................................................................... 142 
Table 8.5. Selected oscillatory modes of the IEEE 50-machine system. ..... 144 
Table 8.6. Selected multi-machine cases of study. ....................................... 144 
Table 8.7. Computational times and achieved accuracy. ............................. 155 
Table 8.8. Koopman modes with largest amplitudes. .................................. 156 
Table 8.9. Computational times and achieved accuracy. ............................. 157 
Table 8.10. Eight largest non-linear interaction indices for Koopman
eigenfunctions. IEEE 50-machine test system ............................................. 158 
Table 8.11. Eight largest non-linear interaction indices for the 3rd order MNF.
........................................................................................................................ 159 
Table 8.12. Computational times and achieved accuracy. ........................... 160 
Table 8.13. Complexity of extended Koopman model for the IEEE 50-machine
system ............................................................................................................ 160 
Table 8.14. Nine top-ranked KMs. Norm and absolute value criteria. ....... 162 
Table 8.15. Eigenvalues of the MIS for the base case condition. ................. 163 
Table 8.16. Five top-ranked KMs for Case 1. ............................................... 164 
Table 8.17. Comparison of modal estimates for Case 2.  Time window: 0-10
sec. .................................................................................................................. 166 
Table 8.18. Computational effort in seconds. Time window: 0-10 sec. ........ 167 
Table 8.19. Identified KMs. Time interval: 0-160 sec. ................................. 169 
Table 8.20. Most observable slow non-linear Koopman modes of the two-area
system. ........................................................................................................... 173 
Table 8.21. Most observable slow non-linear Koopman modes of the 16-
machine system. ............................................................................................ 174 
Table 8.22. Quadratic controller’s performance for different configurations.
........................................................................................................................ 179 
Table 8.23. Numerical comparison for different numbers of non-linear modes.
........................................................................................................................ 180 

x
Table 8.24. Time required for computing K11 for the different configurations.
........................................................................................................................ 183 
Table 8.25. Value of J for the regarded communication networks. ............. 183 
Table 8.26. Time required for the regarded communication networks. Seconds.
........................................................................................................................ 183 

xi
Index of figures

Fig. 3.1. Comparison of the conventional first-order linearization process with


the proposed recursive linearization............................................................... 45 
Fig. 3.2. Flow diagram of the proposed algorithm for implementation. ........ 52 
Fig. 4.1. Flowchart of the proposed methodology. Extensions to the basic
model are indicated with an italic font. .......................................................... 67 
Fig. 4.2. Single-machine, infinite bus test system. Parameters are expressed
in pu on a 2200 MVA base [9]. ........................................................................ 68 
Fig. 6.1. Schematic representation of the utilized simple exciter model. ..... 99 
Fig. 6.2. Schematic representation of the considered power system stabilizer
model. ............................................................................................................. 101 
Fig. 6.3. Scheme of the recursive linearization of the vector θbus. ................ 108 
Fig. 7.1. Scheme of a sparse communication structure (continuous blue lines),
in comparison with the full communication network (including the red dotted
lines). .............................................................................................................. 115 
Fig. 7.2. An illustrative construction of the quadratic communication
network. ......................................................................................................... 125 
Fig. 7.3. Flow diagram of the quadratic PKMA-based LQR controller design.
The blocks marked with * and ** require some attention. .......................... 133 
Fig. 7.4. Scheme for the online application of the truncated quadratic PKMA
controller. ....................................................................................................... 134 
Fig. 8.1. Schematic representation of the two-area, four-machine power
system of [1]. .................................................................................................. 140 
Fig. 8.2. Schematic representation of the two-area, four-machine power
system of [2]. .................................................................................................. 141 
Fig. 8.3. Schematic representation of the 16-machine, 68-bus power system
[4]. ................................................................................................................... 143 
Fig. 8.4. Comparison of the CPU time required for computing (a) A11 and (b)
H2 of the synthetic system with the Analytic linearization (AL), recursive
linearization (RL), forward-difference approximation (FDA), and center-
difference approximation (CDA) methods. ................................................... 148 
Fig. 8.5. Comparison of the CPU time required for computing H3 of the
synthetic system with the recursive linearization (RL), forward-difference
approximation (FDA), and center-difference approximation (CDA) methods.
........................................................................................................................ 149 

xii
Fig. 8.6. (a) Absolute error of the computed A11 matrix of the synthetic system. (b) Eigenvalue with the largest magnitude. Comparison of the RL, FDA, and CDA methods with N = 100 and perturbation from ρ = 1×10^-4 through ρ = 1×10^-1. ...................................................................................................... 149

Fig. 8.7. Absolute error of the computed matrices (a) H2 and (b) H3. Comparison of the RL, FDA, and CDA methods. N = 100 and perturbation from ρ = 1×10^-4 through ρ = 1×10^-1. .......................................................................... 150

Fig. 8.9. (a) Absolute error of the computed A11 matrix. (b) Eigenvalue corresponding to the inter-area mode. Comparison of the RL, FDA, and CDA methods for the 50-machine power system with a PSS at bus 111 and a perturbation magnitude from ρ = 1×10^-5 through ρ = 1×10^-1. ......................... 151

Fig. 8.10. Comparison of the recursive linearization method with the conventional linearization techniques for the power systems of Table 8.4. (a) CPU time required for computing H2 and (b) H3. (c) Absolute error of the computed matrix H2 and (d) H3 with ρ = 1×10^-4. ............................................................ 152

Fig. 8.11. Absolute error of the computed matrices (a) H2 and (b) H3. Comparison of the RL, FDA, and CDA methods. IEEE 50-machine power system with a PSS at bus 111 and perturbation from ρ = 1×10^-5 through ρ = 1×10^-1. ........................................................................................................ 153
Fig. 8.12. Comparison of PKMA method of third order with the third-order
MNF, the linear solution, and the full solution. Speed deviation of Gen 3. 155 
Fig. 8.13. Comparison of PKMA estimates with the linear solution, the full
solution, and the MNF without terms of fourth and sixth order. (a) Speed
deviation of Gen 94. (b) Rotor speed deviation of Gen 137. ......................... 156 
Fig. 8.14. Speed deviation of dominant machines. (a) Case 1. (b) Case 2. .. 164 
Fig. 8.15. Global behavior of the leading KMs in Table 8.16 based on the
modulus of the Koopman measures of observability. ................................... 165 
Fig. 8.16. Global behavior of the three leading oscillatory KMs in Table 8.17
based on the Koopman observability measures. .......................................... 167 
Fig. 8.17. Normalized time traces of recorded data. (a) Frequency
measurements. (b) Power and voltage measurements................................. 168 
Fig. 8.18. Details of the recorder measurements. (a) Voltage (PMU #22) and
power (PMUs #19, 20, and 21) measurements. (b) Frequency measurements
(PMUs #1-18). ................................................................................................ 168 
Fig. 8.19. Time evolution of the 0.99 Hz Koopman mode. ........................... 169 
Fig. 8.20. Mode shape of the 0.99 Hz KM. (a) PMUs #1-18. (b) PMUs #19-22.
........................................................................................................................ 170 
Fig. 8.21. Highest ςj when the rotor speed deviations of all the generators are the system's output signals. Two-area power system. ................................. 171

xiii

Fig. 8.22. Highest ς̂j when the rotor speed deviations of all the generators are the system's output signals. Two-area power system. ................................. 172

Fig. 8.23. Highest ς̄j when the rotor speed deviations of all the generators are the system's output signals. Two-area power system. ................................. 173

Fig. 8.24. Highest ςj when the rotor speed deviations of all the generators are the system's output signals. 16-machine power system. ............................. 174
Fig. 8.25. Scheme for the two-area, four-machine system used for analysis.
........................................................................................................................ 175 
Fig. 8.26. Scheme for the two-area, four-machine system used for analysis.
........................................................................................................................ 176 
Fig. 8.27. Communication structures: a) Ω1 Case 1, b) Ω2 Case 1, c) Ω1 Case 2, and d) Ω2 Case 2. ........................................................................................ 177

Fig. 8.28. Communication structure Ω1ord for configurations (a) 2, and (b) 4.
........................................................................................................................ 178
Fig. 8.29. Comparison of speed deviations for the linear LQR (black) and the
quadratic LQR (blue). .................................................................................... 178 
Fig. 8.30. Extra configurations for the 16-machine power system. ............. 182 

xiv
Glossary

A. Abbreviations

AL      Analytical linearization.
CDA     Center-difference approximation.
DARE    Discrete algebraic Riccati equation.
DFT     Discrete Fourier transform.
DMD     Dynamic mode decomposition.
FACT    Flexible AC transmission device.
FDA     Forward-difference approximation.
GDARE   Generalized discrete algebraic Riccati equation.
KM      Koopman mode.
KMA     Koopman mode analysis.
KMD     Koopman mode decomposition.
KV      Koopman eigenvalue.
LQR     Linear quadratic regulator.
ODE     Ordinary differential equation(s).
PSS     Power system stabilizer.
RL      Recursive linearization.
sep     Stable equilibrium point.
        Sub-space system identification.
        Small-signal stability analysis.
SVD     Singular value decomposition.

B. Functions

F̂               Orthogonal projection onto the state variables ignored by the sparse structure of communication.
f: Z → Z        Non-linear function of the continuous-time evolution of a dynamical system.
f: Z → X        Non-linear function projecting the Koopman eigenfunctions onto the physical variables of the system.
f̃: G̃ → X        Non-linear functions that project the inner dynamics of the dynamical systems to the state variables.
g: X → G        Non-linear function defining the observables of a system.
ḡ: (X,U) → X    Continuous-time non-linear function defining a dynamical system in X.
g̃: (X,U) → G̃    Continuous-time non-linear functions defining the inner dynamics of a dynamical system.
h̄: Z → Y        Homeomorphism between S and T.
h: Y → Y        Continuous-time non-linear function defining a dynamical system in Jordan coordinates.
h′: X → Ǔ       Non-linear function projecting the state variables of a dynamical system onto the output signals.
h̃: S → Ǔ        Non-linear function projecting the functions of space S into the output signals.
J: (X,U) → ℜ    Objective function of the LQR.
S: Y → Y        Discrete-time mapping for the evolution of the dynamical system in y-coordinates.
T: Z → Z        Discrete map for the evolution of the dynamical system in z-coordinates.
UT              Discrete-time Koopman operator generated by T.
C. Indexes

d∈ℵ             Dimension of the disturbance applied to the system.
j,k,l,m,p,q∈ℵ   Index terms.
J∈ℵ             Number of observables.
L∈ℵ             Number of output signals of the system.
M∈ℵ             Dimension of the finite-dimensional Koopman eigenfunctions space.
M̃∈ℵ             Number of sensors of the snapshot data matrix X̂.
m∈ℵ             Number of non-linear functions of the dictionary.
N∈ℵ             Dimension of the state variables space.
Ñ∈ℵ             Number of snapshots of the snapshot data matrix X̂.
Nk∈ℵ            Total number of existing Koopman eigenfunctions of order k.
n∈ℵ             Order of approximation for the Taylor series of a non-linear system.
nb∈ℵ            Number of buses of a power system.
ng∈ℵ            Number of generators of a power system.
nl∈ℵ            Number of lines of a power system.
r∈ℵ             Number of subfunctions generating G̃.
R∈ℵ             Number of input variables of a dynamical system.
Rk∈ℵ            Number of permutations of k-th order of the input variables.
S∈ℵ             Number of subfunctions generating S.
D. Parameters

B12∈ℜng×ng      Imaginary part of the matrix of transfer admittances projecting the voltages of the generation buses to the system bus voltages.
Bgrid∈ℜng×ng    Imaginary part of the matrix of transfer admittances between generation buses.
BL∈ℜnl×nl       Matrix of series susceptances of the tie lines.
Bsh∈ℜnl×nl      Matrix of shunt susceptances of the tie lines.
Dj∈ℜ            Damping coefficient of the j-th generator.
δj∈ℜ            Angular position of the j-th machine rotor.
Δδj∈ℜ           Angle deviation of the j-th machine rotor.
Ebus∈ℜnb        Magnitude of the bus voltages of the system.
Efdj∈ℜ          Field voltage in pu.
E′Dj, E′Qj∈ℜ    D- and Q-axis components of E′j in pu.
E′dj, E′qj∈ℜ    d- and q-axis components of E′j in pu.
edj, eqj∈ℜ      d- and q-axis components of ETj in pu.
Errj∈ℜ          Error signal.
ETj∈ℜ           Absolute value of the j-th machine terminal voltage.
FV∈ℜnb          Auxiliary variable for calculating the phases of the bus voltages, θ.
G12∈ℜnb×ng      Real part of the matrix of transfer admittances projecting the voltages of the generation buses to the system bus voltages.
Ggrid∈ℜng×ng    Real part of the matrix of transfer admittances between generation buses.
GL∈ℜnl×nl       Matrix of series conductances of the tie lines.
Hj∈ℜ            Inertia constant of generator j.
Ij∈ℂng          Internal current in pu.
IL∈ℂnl          Vector of complex currents flowing through the tie lines.
ILim∈ℜnl        Vector of imaginary currents flowing through the tie lines.
ILre∈ℜnl        Vector of real currents flowing through the tie lines.
ITj∈ℂng         Complex terminal current, system frame.
ITimj∈ℜng       Imaginary part of the complex terminal current, system frame.
ITrej∈ℜng       Real part of the complex terminal current, system frame.
ITrej, ITimj∈ℜ  Real and imaginary components of the j-th machine's terminal current in pu.
idj, iqj∈ℜ      d- and q-axis components of the j-th machine's current in pu.
KAj∈ℜ           Voltage regulator gain in p.u.
ω0∈ℜ            Synchronous frequency.
Δωj∈ℜ           Rotor speed deviation of the j-th machine rotor.
Pj∈ℜ            Active power injected by machine j.
PEj∈ℜ           Electrical active output power in p.u.
Pline∈ℜnl       Vector of active power flows.
PMax∈ℜ          Maximum transferred active power of the SMIB system.
PMechj∈ℜ        Mechanical power input of machine j.
Qj∈ℜ            Reactive power generated by machine j.
Raj∈ℜ           Internal resistance of machine j in pu.
T′d0j∈ℜ         d-axis open circuit time constant of generator j.
T′q0j∈ℜ         q-axis open circuit time constant of generator j.
Tej∈ℜ           Electrical torque of generator j in pu.
TAj∈ℜ           Voltage regulator time constant in sec.
Tbj∈ℜ           Transient gain reduction time constant in sec.
TCj∈ℜ           Transient gain reduction time constant in sec.
TRj∈ℜ           Transducer filter time constant in sec.
t∈ℜ             Time.
θj∈ℜ            Angle of the j-th machine terminal voltage.
VAsj∈ℜ          Regulator voltage state variable in p.u.
Vbim∈ℜnb        Imaginary part of the vector of complex bus voltages.
Vbre∈ℜnb        Real part of the vector of complex bus voltages.
Vbus∈ℂnb        Vector of complex bus voltages.
VExj∈ℜ          Field voltage in p.u.
Vrefj∈ℜ         Voltage reference value in p.u.
VTrj∈ℜ          Voltage transducer output in p.u.
Xdj∈ℜ           d-axis reactance of generator j.
X′dj∈ℜ          Transient reactance in d-axis of generator j.
Xqj∈ℜ           q-axis reactance of generator j.
X′qj∈ℜ          Transient reactance in q-axis of generator j.
Y12∈ℂnb×ng      Matrix of transfer admittances projecting the voltages of the generation buses to the system bus voltages.
Ygrid∈ℂng×ng    Matrix of transfer admittances between generation buses.
YL∈ℂnl×nl       Matrix of series admittances of the tie lines.

E. Sets

ℂ               Complex numbers space.
G⊂ℂJ            Space of the observables.
G̃⊂ℂr            Space of the functions of the state variables describing inner dynamics of the dynamical systems.
ℵ               Space of natural numbers.
Ω1⊂ℜR×N         Family of constant matrices with a sparse structure. Linear residues, linear controller.
Ω1ord⊂ℜR×N      Family of constant matrices with a sparse structure. Linear and quadratic residues, linear controller.
Ω2⊂ℜR×N         Family of constant matrices with a sparse structure. Quadratic residues, linear controller.
Ω2ord⊂ℜR×N2     Family of constant matrices with a sparse structure. Linear and quadratic residues, quadratic controller.
ℜ               Real numbers space.
S⊂ℂS            Space of the functions of the state variables describing subfunctions of the output signals.
U⊂ℜR            Space of the input signals.
X⊂ℜN            Space of the state variables.
Y⊂ℂN            Space of the Jordan variables.
Ǔ⊂ℜL            Space of the output variables of a dynamical system.
Z⊂ℂM            Koopman eigenfunctions space.

F. Symbols

⊂               Part of.
∈               Belongs to.
i, j            Unitary imaginary number.
Δ               Increment.
Δj              Increment of order j.
Δ(j)            Recursive linearization of order j.
˙ (overdot)     Time derivative of first order.
H               Conjugate transposition operation.
T               Transposition operation.
⊙               Hadamard product.
∘               Symbol for the function of composition.
*               Symbol denoting the adjoint operator.
⟨·,·⟩           Symbol denoting the inner product.
-1              Symbol denoting the function of inversion.
†               Symbol to denote the Moore-Penrose pseudoinverse.
||·||F          Frobenius norm.
G. Variables

1ng∈ℜng         Vector of dimension ng full of ones.
A11(i)∈ℜN       Column number i of matrix A11.
A11∈ℜN×N        Continuous-time Jacobian matrix of the system.
Ā11∈ℜN×N        Discrete-time Jacobian matrix of the system.
B11∈ℜN×R        Continuous-time linear matrix of input signals.
B̄11∈ℜN×R        Discrete-time linear matrix of input signals.
Bd∈ℜN×d         Linear disturbance input matrix.
Bjk∈ℜNj×Rk      Matrix of input signals of order k affecting the dynamics of order j.
B̃kj∈ℜR×(k-1)R   Matrix of k-th order partial derivatives for the time derivative of the state variable xj with respect to the input signals vector u.
B̄jk∈ℂNj×Rk      Matrix of input signals of order k affecting the dynamics of order j. In Jordan coordinates.
B̂k∈ℜm×M̃         Matrix of coefficients for the representation of the observables based on the dictionary functions ψ̂j.
βjk∈ℂ           Residue of the k-th linear eigenmode in the state variable j.
C1∈ℜL×N         Linear matrix of output signals of a dynamical system.
Cj∈ℜL×Nj        Matrix of output signals of order j.
C1(i)∈ℜL        Column i of matrix C1.
Cnord∈ℜL×M      Matrix of output signals of the extended system of order n.
C̆∈ℜÑ×Ñ          Companion matrix associated with the data matrices X̂, Ŷ.
C̃∈ℂM̃×M̃          Reduced-order projection of matrix C̆.
C̃kj∈ℜL×(k-1)N   Matrix of k-th order partial derivatives for the time derivative of the output variable yj with respect to the vector x.
c̆∈ℜÑ            Last column of matrix C̆.
DJ∈ℜL×N         Matrix defining the output signals used for the LQR design.
d(k)∈ℜN×d       Disturbance applied to a dynamical system.
ej              Column number j of the identity matrix IN×N.
ε1∈ℜ            Tolerance for the linear structurally constrained controller.
ε2∈ℜ            Tolerance for calculating the first-order gain matrix in the quadratic extended system.
ε̄1∈ℜ            Threshold for the first sparsity-promoting criterion of Section 4.5.4.
ε̄2∈ℜ            Threshold for the second sparsity-promoting criterion of Section 4.5.4.
η1              Threshold value for determining the sparse structure of the linear controller.
η2              Threshold value for determining the sparse structure of the quadratic controller.
Fk(Δx, Δu)      Vector of the k-th order non-linear functions of Δx and Δu describing the evolution of the state variables.
Fk∈ℜN×Nk        Matrix of coefficients of the k-th order terms of the Taylor series for the state variables x.
F̃kj∈ℜN×(k-1)N   Matrix of k-th order partial derivatives for the time derivative of the state variable xj with respect to the state vector x.
ωj∈ℜ            Angular frequency of the linear stability eigenvalues λj.
Hk(Δy)          Vector of the k-th order non-linear functions of the Jordan variables Δy.
Hk∈ℂL×Nk        Matrix of coefficients of the k-th order terms of the Taylor series for the Jordan coordinates y.
H12∈ℂL×N2       Matrix of coefficients of the second-order terms of the Taylor series for the Jordan coordinates y. Open-loop system.
Hnord∈ℂM×M      Linear matrix of the n-th order Koopman eigenfunctions-based extended system.
H′k(Δx)         Vector of the k-th order non-linear functions of Δx describing the evolution of the output variables.
H̄12∈ℂL×N2       Matrix of coefficients of the second-order terms of the Taylor series for the Jordan coordinates y. Closed-loop system.
Ĥ12∈ℂL×N2       Matrix projecting the quadratic Jordan terms of the closed-loop system to the linear Jordan variables of the open-loop dynamical system.
h2k(j,l)∈ℂ      Coefficient of order 2 corresponding to the interaction of Koopman eigenfunctions φj(Δy) and φl(Δy) into the time evolution of φk(Δy).
h3k(j,l,m)∈ℂ    Coefficient of order 3 corresponding to the interaction of Koopman eigenfunctions φj(Δy), φl(Δy), and φm(Δy) into the time evolution of φk(Δy).
IN×N            Identity matrix of dimensions N×N.
Icj∈ℜR×Nj       Indicator matrix equal to Ωjord when the symbols ∗ are replaced by 1's.
I2j(k,l)∈ℜ      Index of importance of eigenfunction φkl on the evolution of φj.
I3j(k,l,m)∈ℜ    Index of importance of eigenfunction φklm on the evolution of φj.
I3j,k(l,m,p)∈ℜ  Index of importance of eigenfunction φlmp on the evolution of φjk.
K̂∈ℂm×m          Finite representation of the Koopman operator obtained by means of the EDMD technique.
K11∈Ω1          Linear feedback matrix.
Kjk∈ℜNj×Nk      Feedback matrix of k-th order terms over the j-th order physical dynamics.
K̄jk∈ℂNj×Nk      Feedback matrix of k-th order Jordan terms over the j-th order Jordan variables.
K̂jk∈ℂNj×Nk      Feedback matrix of k-th order Jordan terms over the j-th order Jordan variables.
L11∈ℜN×N        Auxiliary matrix to enforce the sparse structure Ω1.
Lj,k∈ℜN×N       Matrix of coefficients of third order relating the interaction between Δx and the inputs Δuj and Δuk.
L̃j3∈ℜN×R2       Matrix of third-order partial derivatives for the time derivative of the state variable xj with respect to x, u, and u.
Λ1∈ℂN×N         Diagonal matrix with the continuous-time linear stability eigenvalues.
Λk∈ℂNk×Nk       Diagonal matrix with the k-th order eigenvalues of the open-loop system.
Λnord∈ℂM×M      Diagonal matrix with the system eigenvalues of order 1 through n.
Λ̄k∈ℂNk×Nk       Diagonal matrix with the k-th order eigenvalues of the closed-loop system.
λj∈ℂ            Linear stability eigenvalues of the system.
λ̄j∈ℂ            Eigenvalue of Hnord.
Mj∈ℜR×N2        Unitary matrix projecting the interaction of states and the linear feedback to the quadratic terms.
M̄1∈ℂN×N         Diagonal matrix with the discrete-time linear stability eigenvalues.
M̃j∈ℜN×N2        Matrix of coefficients of third order relating the interaction between Δ2x and the input Δuj.
M̂∈ℂN×N          Diagonal matrix with the approximate Koopman eigenvalues μ̃j.
M̆∈ℂM̃×M̃          Diagonal matrix with the approximate Koopman eigenvalues μ̂j.
M̃j3∈ℜN×RN       Matrix of third-order partial derivatives for the time derivative of the state variable xj with respect to x, x, and u.
μj∈ℂ            Koopman operator eigenvalues.
μ̃j∈ℂ            Approximate Koopman eigenvalues.
μ̂j∈ℂ            Approximate Koopman eigenvalues obtained through C̃.
Nj∈ℜN×R         Matrix of coefficients of the second-order term of the Taylor series relating the interaction between the states Δx and the inputs Δu.
Nkj∈ℜNj×R       Matrix of coefficients of second order relating the interaction between the state variables Δx and the input Δuk corresponding to the dynamics of order j.
N2j∈ℜN×R        Matrix of second-order partial derivatives for the time derivative of the state variable xj with respect to x and u.
N̄kj∈ℂNj×R       Matrix of coefficients of second order relating the interaction between the open-loop Jordan variables Δy and the input Δuk corresponding to the dynamics of order j.
N̂kj∈ℂNj×R       Matrix of coefficients of second order relating the interaction between the closed-loop Jordan variables Δȳ and the input Δuk corresponding to the dynamics of order j.
oj∈ℜN           Vector of Koopman observability measures of matrix OPKMA.
ôj∈ℜN           Vector of Koopman observability measures of matrix ÔPKMA.
ōj∈ℜN           Vector of Koopman observability measures of matrix ŌPKMA.
OKMA∈ℜN×M       Matrix of Koopman observability measures under Koopman operator theory.
OKMD∈ℜÑ×M       Matrix of Koopman observability measures for the Koopman mode decomposition technique.
OPKMA∈ℜN×M      Matrix of Koopman observability measures for the perturbed Koopman mode analysis method.
ÔPKMA∈ℜN×M      Weighted matrix of Koopman observability measures for the perturbed Koopman mode analysis method.
ŌPKMA∈ℜN×M      Weighted matrix of Koopman observability measures for the perturbed Koopman mode analysis method.
p1(t)∈ℜR        Joint optimization variable.
p̂jk∈ℜ           Residue of the k-th linear eigenmode in the state variable j.
P11∈ℜN×N        Positive semi-definite matrix satisfying a discrete-time Riccati equation.
P̄11∈ℂN×N        Hermitian matrix satisfying a Riccati equation in Jordan coordinates. Linear model.
P̄2ord∈ℂN2ord×N2ord  Hermitian matrix satisfying a Riccati equation in Jordan coordinates. Quadratic model.
Φk(Δy)∈ℂNk      Vector of Koopman eigenfunctions of order k in Jordan coordinates, under the perturbation theory framework.
Φnord(Δy)∈ℂM    Vector of Koopman eigenfunctions of order 1 through n in Jordan coordinates, under the perturbation theory framework.
φj∈ℂ            Koopman eigenfunctions in Jordan coordinates.
ϕj∈ℂ            Koopman eigenfunctions in state variables' coordinates.
φ̂j∈ℂ            Approximate Koopman eigenfunctions of the EDMD method.
Ψ(znord)∈ℂM     Vector of Koopman eigenfunctions of order 1 through n in z-coordinates, under the perturbation theory framework.
Ψk(t)∈ℂNk       Koopman eigenfunctions of order k in z-coordinates.
Ψ̂∈ℂM̃×m          Matrix of dictionary functions ψ̂j.
ψ̂j∈ℂM̃           Non-linear function of the dictionary for the EDMD technique.
ρ∈ℜ             Small perturbation used in the perturbation-based linearization methods.
Q11∈ℜN×N        Positive semi-definite matrix weighting the state variables in the LQR design. Physical coordinates.
Q̄11∈ℜN×N        Positive semi-definite matrix weighting the state variables in the LQR design. Jordan coordinates.
Q2ord∈ℜN2ord×N2ord  Positive semi-definite matrix weighting the state variables of the quadratic Carleman model for the LQR design.
Q̄2ord∈ℜN2ord×N2ord  Positive semi-definite matrix weighting the Jordan variables of the quadratic Carleman model for the LQR design.
R∈ℜR×R          Positive semi-definite matrix weighting the input signals in the LQR design.
σj∈ℜ            Decay rate of the linear stability eigenvalues λj.
ςj∈ℜ            Maximum value of the vector of observability measures oj.
ς̂j∈ℜ            Maximum value of the vector of observability measures ôj.
ς̄j∈ℜ            Maximum value of the vector of observability measures ōj.
Σ̂∈ℂM̃×M̃          Diagonal matrix with the singular values of X̂.
T̃∈ℂN×N          Matrix of right eigenvectors of the approximate Koopman operator.
T̂∈ℂM̃×M̃          Matrix of right eigenvectors of C̃.
t∈ℜ             Time.
τj,k,…∈ℜ        Time constant of the Koopman mode λj,k,….
u∈ℜR            Vector of input variables of a dynamical system.
uOpt∈ℜR         Optimal feedback obtained with the full LQR.
Δ2u∈ℜR2         Vector containing all the input variables' perturbations of order 2.
Δ3u∈ℜR3         Vector containing all the input variables' perturbations of order 3.
uj+∈ℜR          Perturbed input vector formed by adding a perturbation ρ to usep in the j-th position.
uj-∈ℜR          Perturbed input vector formed by subtracting a perturbation ρ from usep in the j-th position.
Δuj+∈ℜR         Difference between uj+ and usep.
Δuj-∈ℜR         Difference between uj- and usep.
U11∈ℂN×N        Matrix of left eigenvectors of the Jacobian matrix of the system.
Û∈ℂM̃×M̃          Left matrix of the SVD of X̂.
υjk(l)∈ℂ        Coefficient that projects the l-th non-linear state of Δ2x to the quadratic term ΔyjΔyk.
vj∈ℂN           Right eigenvector of the Jacobian matrix of the system.
ṽj∈ℂN           (Linear) Koopman modes.
ṽj,k∈ℂN         Koopman modes of order two.
ṽj,k,l∈ℂN       Koopman modes of order three.
V11∈ℂN×N        Matrix of right eigenvectors of the Jacobian matrix of the open-loop system.
V̂∈ℂM̃×Ñ          Matrix of data-based approximate Koopman modes.
V̂11∈ℂN×N        Matrix of right eigenvectors of the Jacobian matrix of the closed-loop system.
V̄11∈ℂN×N        Matrix that projects from the Jordan coordinates of the open-loop system to the closed-loop system's Jordan coordinates.
V̌∈ℂÑ×M̃          Right matrix of the SVD of X̂.
W̄nord∈ℂM×M      Normalized matrix of right eigenvectors of Hnord.
Wnord∈ℂM×M      Matrix of right eigenvectors of matrix Hnord.
Wjk∈ℂNj×Nk      Sub-matrix of Wnord that projects eigenfunctions Ψk(t) into eigenfunctions Φj(t).
W̄jk∈ℂNj×Nk      Sub-matrix of W̄nord.
wj∈ℂN           Left eigenvector of the Jacobian matrix of the system.
w̃j∈ℂM           Right eigenvector of Hnord.
w̄j∈ℂM           Normalized right eigenvector of Hnord.
X̂, Ŷ∈ℜM̃×Ñ       Recorded snapshot data matrices.
xk∈ℜN           State variables of the system in the discrete-time framework.
x∈ℜN            State variables of the system in the continuous-time framework.
Δ2xi,j∈ℜN2      Perturbed state vector containing all the perturbations of order 2 for the recursive linearization technique.
Δ2x∈ℜN2         Vector containing all the state variables' perturbations of order 2.
Δ3x∈ℜN3         Vector containing all the state variables' perturbations of order 3.
Δ3xi,j,k∈ℜN3    Perturbed state vector containing all the perturbations of order 3 for the recursive linearization technique.
xi+∈ℜN          Perturbed state vector formed by adding a perturbation ρ to xsep in the i-th position.
xi-∈ℜN          Perturbed state vector formed by subtracting a perturbation ρ from xsep in the i-th position.
Δxi+∈ℜN         Difference between xi+ and xsep.
Δxi-∈ℜN         Difference between xi- and xsep.
Δxi∈ℜN          Mean of xi+ and -xi-.
ẋu,i+∈ℜN        Vector of time derivatives of the system state variables after adding a perturbation ρ to the i-th input variable.
Δẋu,i+∈ℜN       Difference between ẋu,i+ and ẋsep.
Δkx∈ℜNk         State vector containing all the perturbations of order k for the recursive linearization technique.
Δ1xi∈ℜN         Perturbed state vector of order 1 for the recursive linearization technique.
ξj∈ℂm           Eigenvector of the approximate Koopman operator K̂.
y∈ℂN            Jordan coordinates of the open-loop system.
ȳ∈ℂN            Jordan coordinates of the closed-loop system.
ǔ∈ℜL            Vector of output variables of a dynamical system.
Δǔi+∈ℜL         Increments in the output variables when xi+ is used.
Δǔi-∈ℜL         Increments in the output variables when xi- is used.
z∈ℂM            Koopman coordinates.
zj∈ℂM           Koopman variables of order j.
znord∈ℂM        Koopman variables of order 1 through n.

Chapter 1
Introduction

This introductory chapter presents a brief description of the research work in this disser‐
tation.  

The  background  and  motivations  are  explained  as  well  as  the  statement  of  the 
problem that is addressed in the research. 

A concise review of the previous work related to the topics treated in this disser‐
tation is presented. Also, the goals and objectives of this research work, the obtained re‐
sults, and the limitations of the proposed approach are stated.  

Moreover, the main contributions are summarized, and the publications generat‐
ed from this work are listed.  

The last part of the chapter is an outline of the general structure of the disserta‐
tion. 

 
1.1 Background and motivation
The description and characterization of spatiotemporal oscillatory dynamics is a 
central question in power system stability and control [1]‐[5]. Global mode anal‐
ysis techniques are of interest because they can be used to extract a small set of 
global  variables  with  which  to  characterize  the  short‐term  swing  dynamics  in 
cases where linear methods fail. 

With such an approach, the essential dynamical coordinates or modes of 
high‐dimensional  measured  trajectories  that  have  an  important  influence  on 
system behavior can be isolated, and the mechanisms leading to a system insta‐
bility can be identified [4]. 

Transient motions associated with system modes can vary from localized 
behavior to large‐scale global motion and may be highly correlated for a given 
time  interval,  especially  in  the  case  of  poorly  damped  or  unstable  oscillations 
following large disturbances [1], [2]. Such interactions can be usefully described 
in terms of modes interacting with each other via the network structure and via 
the initial operating conditions [2], [3]. 

Understanding  how  these  modes  evolve  and  interact  is  key  not  only  to 
assessing system stability but also to identifying dynamic devices and their con‐
trol systems involved in the oscillations and to design controllers [6], [7].  

The extracted global coordinates can then be correlated with specific sys‐
tem behavior and be used to identify coherent patterns and global trends. This is 
the subject of this research. 

1.2 Problem statement

The increasing complexity of electric power systems and their operation under 
high loading conditions require the availability of efficient techniques capable to 
provide useful information about intrinsic global dynamic behavior [2], [3], [8]‐
[10],  as  well  as  appropriate  schemes  for  monitoring  and  control  those  slow 
wide‐area oscillations [11]‐[13]. 
2
Model‐based  techniques  for  describing  global  behavior  of  complex  tran‐
sient  processes  are  of  particular  interest  for  characterization  of  wide‐area  phe‐
nomena.  

Large  stressed  interconnected  systems  when  subjected  to  large  disturb‐


ances, due to its number of degrees of freedom and the sparse geographical dis‐
tribution,  exhibit  highly  complex  phenomena  including  modal  interactions, 
temporarily  chaotic  vibrations,  and  intermodulation.  To  determine  the  mecha‐
nisms  governing  these  physical  variations,  it  is  essential  to  characterize  the 
large‐scale interactions between the system modes [8].  

However, the application of this type of analyses is limited to small pow‐
er  systems  due  to  the  enormous  computational  burden  [2],  opening  an  oppor‐
tunity to develop methodologies that allow this kind of non‐linear analysis to be 
applied on medium‐sized and large multi‐machine power systems. 

Finally, the use of efficient analysis techniques is of utmost importance to 
the  ultimate  utilization  of  these  non‐linear  analysis  techniques  to  the  develop‐
ment of new control laws or devices. 

In the following section, the research work related to the mentioned areas 
of investigation is briefly described. Areas of application and future research are 
also identified. 

1.3 A brief review of previous work


Over  the  last  few  years,  a  data‐driven  framework  for  the  analysis  of  high‐
dimensional measured data, based on the Koopman operator with the ability to 
characterize the spatiotemporal structure of stability phenomena using spectral 
analysis [14], [15] has been introduced.  

Central to this framework is the decomposition of observables or outputs 
based on the projection onto eigenfunctions of a linear Koopman operator asso‐

3
ciated  with  the  dynamical  evolution  of  the  underlying  transient  process  [16]‐
[18]. 

Analytical  model‐based  methods  to  efficiently  and  accurately  compute 


the Koopman eigen‐tuples of large and complex dynamical systems are surpris‐
ingly  sparse.  Apart  from  a  limited  number  of  publications  using  perturbation 
methods,  the  problem  of  characterization  of  large  non‐linear  power  system 
models has received limited attention.  

In the literature, non‐linear analysis techniques have been efficiently used 
to  investigate  global  system  behavior  [3],  [8]‐[10]  and  modal  interactions  [19], 
[20], to define non‐linearity indexes [2], [5], [9], [21], [22], to predict power sys‐
tem inter‐area separation [20], to assess the effect of renewable energy [23], and 
to analyze [19] or design controllers [24]‐[27].  

In reference [25], the authors redesign a non‐linear controller to eliminate 
Hopf  bifurcations  from  the  systemʹs  normal  operation  region.  References  [26] 
and  [27]  use  non‐linear  participation  factors  to  locate  power  system  stabilizers 
(PSSs) and to retune the generatorʹs excitation system, respectively. 

On the other hand, many wide‐area controllers have been designed based 
on  linearized  representations  of  the  system  and  optimal  linear  control  theory 
[11]‐[13],  whereas  non‐linear  controllers  have  been  designed  based  on  linear 
feedback linearization [24], [28]‐[29] or Lyapunov energy functions [30]‐[31]. 

As  stated  in  [24],  the  Lyapunov  function‐based  methods  need  to  find  a 
Lyapunov  energy  function  for  a  certain  system,  while  the  normal  forms  meth‐
ods have well‐established  procedures. These  methods are based on  Lie  deriva‐
tives of a non‐linear power system model [24] and can be used to study the ef‐
fect of non‐linear modal interaction on power system dynamic behavior as well 
as to design and locate system controllers. 

The extension of these approaches to deal with large, sparse system mod‐
els has been recently  addressed in  the power system  literature [32] but  several 

4
issues  warrant  further  investigation.  These  include  the  incorporation  of  trans‐
mission network devices,  the  efficient solution  of  the  resulting  models and  the 
analysis of system‐device interactions, among other issues. More recently, mod‐
el‐driven  approaches  with  the  ability  to  analyze  complex  system  models  have 
been  proposed  that  provide  an  alternate  characterization  of  non‐linear  system 
behavior [33]. These models are of special interest here since they can be used in 
conjunction with data‐based analysis methods based on Koopman mode analy‐
sis. 

In  closing,  it  must  be  mentioned  that  the  characterization  of  dynamical 
systems’  global  behavior  and  the  development  of  new  control  laws  has  been 
usually  addressed  under  perturbation  theory,  with  techniques  such  as  linear 
stability analysis [2], [34], the method of normal forms [2], [9], [35], and modal 
series [23], [36].  

Key issues of these approaches that require further investigation include 
[2], [3], [9]: 

1. The  obtained  representations  are  valid  in  a  region  around  a  stable 


(unstable) equilibrium point. 

2. The computational burden of computing the higher‐order matrices of 
coefficients is high 

3. It is required to have knowledge about the system stable equilibrium 
points. 

4. Obtaining  the  system  response  in  the  non‐linear  coordinates  is  com‐
putationally demanding. 

5. The  size  of  the  problem  grows  exponentially  with  system  dimension 
and stress. 

5
The emergence of new techniques and approaches in non‐linear systems 
theory  opens  the  possibility  to  extend  the  realm  of  applicability  of  non‐linear 
methods to circumvent some of the above limitations.   

1.4 Dissertation objectives


The  primary  objective  of  this  dissertation  is  to  provide  a  novel  alternative  ap‐
proach  to  characterizing  the  global  behavior  of  complex  power  system  models 
based on Koopman mode decomposition of a perturbed analytical model. As a 
by‐product,  techniques  to  design  non‐linear  feedback  controls  are  investigated 
and methods to automate the analysis techniques are developed. 

Specific objectives include: 

1. The development of a general high‐order theory for studying non‐linear 
behavior in complex power system models. 

2. The design of simple and straightforward schemes for the efficient appli‐
cation of the developed methods.  

3. To gain insight into the non‐linear phenomena arising from complex sys‐
tem formulations, and how these phenomena can be visualized and char‐
acterized in terms of relevant modes of motion.  

4. To  derive  measures  of  observability  to  identify  critical  system  modes 
based on the notion of observables in Koopman mode analysis. 

5. To develop an analytical framework for relating the measured evolution 
of a non‐linear response with the obtained higher‐order approximation.  

1.5 Research contributions

The main original contributions of this research work are: 

 The  development  of  an  analytical  approach  based  on  the  chain  rule  to 
rapidly linearize a dynamical system up to a certain order of analysis.  
6
 The  development  of  a  method  that  combines  Koopman  mode  analysis 
with  perturbation  theory  to  obtain  a  closed‐form  approximation  to  the 
system response, in  which the  influence of  mode interactions on  system 
response can be singled out in an efficient manner. 

 The  development  of  an  efficient  methodology  to  compute  the  proposed 
non‐linear  perturbed  model,  as  well  as  the  corresponding  initial  condi‐
tions and eigendecomposition.  

 The derivation of non‐linear measures of observability, measures of non‐
linearity, and non‐linear residues. 

 The development of a quadratic non‐linear controller with structural con‐
straints and the inclusion of a truncated set of non‐linear dynamics.  

1.6 Organization of the dissertation

This  dissertation  is  structured  into  nine  chapters.  After  this  introductory  chap‐
ter, Chapter 2 presents a theoretical background on Koopman operator analysis 
and briefly describes its most relevant properties. A review of both, data‐driven 
and model‐based Koopman‐based analysis techniques is presented along with a 
review of Koopman operator theory. 

Chapter 3 presents a novel methodology for the linearization of dynam‐
ical systems of differential equations. This method is named “Recursive Lineari‐
zation” and is based on the chain rule and perturbation theory that extend exist‐
ing analysis methods to the non‐linear case. Efficient algorithms for its straight‐
forward and fast implementation are also introduced. 

In  addition,  a  new  method  that  combines  Koopman  mode  analysis  and 
perturbation  theory  to  analyze  and  characterize  weak  non‐linearities  of  a  dy‐
namical  system  around  a  stable  system  equilibrium  point  is  then  presented  in 
Chapter 4. Efficient schemes for its implementations are also provided. 

7
In Chapter 5 Koopman observability measures in the Koopman operator 
theory and  the data‐driven  frameworks are derived. These  non‐linear observa‐
bility measures are extended to the perturbed Koopman mode analysis. 

An  illustrative  application  of  the  recursive  linearization  method  to  ana‐
lyze multi‐machine power systems is presented in Chapter 6. Additionally, non‐
linear interactions of the type state‐state and state‐input are assessed for a cubic 
extended Koopman eigenfunctions‐based model.  

In Chapter 7 the development of a truncated quadratic non‐linear control‐
ler to damping wide‐area oscillations is presented. The proposed control law is 
based on optimal control theory and the perturbed Koopman mode analysis in 
Chapter  four.  Structural  constraints  of  the  communication  network  are  consid‐
ered for the controller design, and a structural architecture for online application 
is suggested. 

Numerical results are provided in Chapter 8 for various test systems 

Finally,  in  Chapter  9  conclusions,  and  suggestions  for  future  work  are 
summarized. 

1.7 Publications

The following publications are the result of this research work. 

1.7.1 Conference papers 
‐ M. A. Hernández‐Ortega and A. R. Messina, “Koopman mode analysis of 
measured oscillations: An observability‐based approach,” North American 
Power Symposium (NAPS), Charlotte, NC, October 2015.  

‐ M. A. Hernández‐Ortega and A. R. Messina, “Koopman mode analysis of 
very large datasets: Numerical experience,” Transmission and Distribution‐
Latin America (T&D LA), Morelia, Mex., Sept. 2016. 

8
1.7.2 Refereed journal papers 
‐ M.  A.  Hernández‐Ortega  and  A.  R.  Messina,  “An  observability‐based 
approach  to  extract  spatiotemporal  patterns  from  power  system 
Koopman mode analysis,” Electric Power Components and Systems, vol. 45, 
no. 4, pp. 355‐365, Feb. 2017. 

‐ M.  A.  Hernández‐Ortega  and  A.  R.  Messina,  “Nonlinear  power  system 
analysis using Koopman mode decomposition and perturbation theory,” 
IEEE  Transactions  on  Power  Systems,  vol.  33,  no.  5,  pp.  5124‐5134,  Sept. 
2018. 

‐ M. A. Hernández‐Ortega and A. R. Messina, “Recursive linearization of 
third  order  for  electric  power  systems,”  IEEE  Transactions  on  Power  Sys‐
tems, submitted. 

‐ M.  A.  Hernández‐Ortega,  A.  Chakrabortty,  and  A.  R.  Messina,  “Design 
of  a  non‐linear  quadratic  controller  for  damping  wide‐area  oscillations 
through  the  perturbed  Koopman  mode  analysis,”  IEEE  Transactions  on 
Power Systems, submitted. 

1.8 References

[1] J.  Thapar,  V.  Vittal,  W.  Kliemann,  and  A.  A.  Fouad,  “Application  of  the 
normal form of vector fields to predict interarea separation in power sys‐
tems,” IEEE Trans. Power Syst., vol. 12, no. 2, pp. 844‐850, May 1997.  

[2]  J. J. Sanchez‐Gasca, V. Vittal, M. J. Gibbard, A. R. Messina, D. J. Vowles, S. 
Liu,  and  U.  D.  Annakkage,  “Inclusion  of  higher  order  terms  for  small‐
signal (modal) analysis: committee report‐task force on assessing the need 
to  include  higher  order  terms  for  small‐signal  (modal)  analysis”,  IEEE 
Trans. Power Syst., vol. 20, no. 4, pp. 1886–1904, 2006.  

9
[3] V.  Vittal,  N.  Bhatia,  and  A.  A.  Fouad,  “Analysis  of  the  inter‐area  mode 
phenomenon in power systems following large disturbances,” IEEE Trans. 
Power Syst., vol. 6, no. 4, pp. 1515‐1521, Nov. 1991. 

[4] Y.  Susuki  and  I.  Mezić,  “Nonlinear  Koopman  modes  and  precursor  to 
power  system  swing  instabilities,”  IEEE  Trans.  Power  Syst.,  vol.  27,  no.  3, 
pp. 1182‐11191, Aug. 2012.   

[5] H. Amano, T. Kumano, and T. Inoue, “ Nonlinear stability indexes of pow‐
er swing oscillation using normal form analysis,” IEEE Trans. Power Syst., 
vol. 21, no. 2, pp. 825‐834, May 2006.  

[6] J. J. Sanchez‐Gasca and D. J. Trudnowski, “Identification of electromechan‐
ical modes in power systems,” IEEE Task Force Report, Special Publication 
TP462, June 2012.  

[7] A. R. Messina, Inter‐area Oscillations in Power Systems: A Nonlinear and 
Nonstationary Perspective, New York, Springer, 2009. 

[8] P. Kundur, Power System Stability and Control, New York, Mc‐Graw‐Hill, 
1994. 

[9] T. Tian, X. Kestelyn, O. Thomas, A. Hiroyuki, and A. R. Messina, “An ac‐
curate third‐order normal form approximation for power system nonlinear 
analysis,” IEEE Trans. Power Syst., vol. 33, no. 2, pp. 2128‐2139, Mar. 2018. 

[10] N.  Pariz,  M.  H.  Shanechi,  and  E.  Vaahedi,  ʺExplaining  and  validating 
stressed  power  systems  behavior  using  modal  series,”  IEEE  Trans.  Power 
Syst., vol. 18, no. 2, pp. 778‐785, May 2003.  

[11] A.  Chakrabortty  and  P.  M.  Khargonekar,  “Introduction  to  wide‐area  con‐
trol  of  power  systems,”  IEEE  American  Control  Conference  (ACC)  2013,  pp. 
6758‐6770, Washington DC, USA, June 2013. 

10
[12] A. Chakrabortty, “Wide‐area damping control of power systems using dy‐
namic clustering and TCSC‐based redesigns,” IEEE Trans. Smart Grid, vol. 
3, no. 3, pp. 1503‐1514, May 2012.  

[13] A. Jain, A. Chakrabortty, and E. Biyik, “An online structurally constrained 
LQR  design  for  damping  oscillations  in  power  system  networks,”  IEEE 
American  Control  Conference  (ACC)  2017,  pp.  2093‐2098,  Seattle,  WA,  USA, 
July 2017. 

[14] M. Budišíc, R. Mohr, and I. Mezić, “Applied Koopmanism,” Chaos: Inter‐
discip. J. Nonlinear Sci., vol. 22, no. 4, pp. 047510, December 2012. 

[15] A.  Mauroy  and  I.  Mezić.,  “Global  stability  analysis  using  the  eigenfunc‐
tions  of  the  Koopman  operator,”  IEEE  Trans.  Automatic  Control,  vol.  61, 
no. 11, pp. 3356–3369, Nov. 2016. 

[16] Y. Susuki and I. Mezić, “Nonlinear Koopman modes and power system as‐
sessment without models,” IEEE Trans. Power Syst., vol. 29, no. 2, pp. 899–
907, March. 2014. 

[17] E. Barocio, B. C. Pal, N. F. Thornhill, and A. R. Messina, “A dynamic mode 
decomposition  framework  for  global  power  system  oscillation  analysis,” 
IEEE Trans. Power Syst., vol. 30, no. 6, pp. 2902–2912, November 2015.  

[18] I. Mezić, “Analysis of fluid flows via spectral properties of Koopman oper‐
ator”, Annu. Rev. Fluid Mech., vol. 45, pp. 357‐378, 2013. 

[19] C.‐M.  Lin,  V.  Vittal,  W.  Klienmann,  and  A.  A.  Fouad,  “Investigation  of 
modal interaction and its effect on control performance in stressed power 
systems using normal forms of vector fields,” IEEE Trans. Power Syst., vol. 
11, no. 2, pp. 781‐787, May 1996. 

[20] X. Y. Ni, V. Vittal, W. Klienmann, and A. A. Fouad, “Nonlinear modal in‐
teraction  in  hvdc/ac  power  systems  with  dc  power  modulation,”  IEEE 
Trans. Power Syst., vol. 11, no. 4, pp. 2011‐2017, Nov. 1996. 

11
[21] S. K. Starret and A. A. Fouad, “Nonlinear measures of mode‐machine par‐
ticipation [transmission system stability],” IEEE Trans. Power Syst., vol. 13, 
no. 2, pp. 389‐394, May 1998. 

[22] J. Arroyo, E. Barocio, R. Betancourt, and A. R. Messina, “A bilinear analysis 
technique  for  detection  and  quantification  of  nonlinear  modal  interaction 
in power systems,” 2006 IEEE PES GM, Montreal, Canada, June 2006. 

[23] R. Z. Davarani, R. Ghazi, and N. Pariz, “Non‐linear analysis of DFIG based 
wind farm in stressed power systems,” IET Renew. Power Gener., vol. 8, no. 
8, pp. 867‐877, Nov. 2014. 

[24] A. K. Singh and B. C. Pal, “Decentralized nonlinear control for power sys‐
tems  using  normal  forms  and  detailed  models,ʺ  IEEE  Trans.  Power  Syst., 
vol. 33, no. 2, pp. 1160‐1172, Mar. 2018. 

[25] P. M. Vahdati, A. Kazemi, M. H. Amini, and L. Vanfretti, ʺHopf bifurcation 
control of  power system nonlinear dynamics via a state feedback control‐
ler‐Part I: Theory and modeling,ʺ IEEE Trans. Power Syst., vol. 32, No. 4, pp. 
3217‐3228, July 2017.  

[26] S. Liu, A. R. Messina, and V. Vittal, “A normal form analysis approach to 
siting power system stabilizers (PSSs) and assessing power system nonlin‐
ear  behavior,”  IEEE  Trans.  Power  Syst.,  vol.  21,  no.  4,  pp.  1755‐1762,  Nov. 
2006. 

[27] H.  Lomei,  M.  Assili,  D.  Sutanto,  and  K.  M.  Muttaqi,  “A  new  approach  to 
reduce  the  nonlinear  characteristics  of  a  stressed  power  system  by  using 
the normal form technique in the control design of the excitation system,” 
IEEE  Trans.  on  Industry  Applications,  vol.  53,  no.  1,  pp.  492‐500,  Jan./Feb. 
2017. 

[28] M. A. Mahmud, M. J. Hossain, H. R. Pota, and Amanullah M. T. Oo, ʺRo‐
bust  partial  feedback  linearizing  excitation  controller  design  for  multi‐

12
machine power systems,ʺ IEEE Trans. Power Syst., vol. 32, No. 1, pp. 3‐16, 
Jan. 2017.  

[29] J. W. Chapman, M. D. Ilic, C. A. King, L. Eng, and H. Kaufman, ʺStabilizing 
a multimachine power system via decentralized feedback linearizing exci‐
tation  control,ʺ  IEEE  Trans.  Power  Syst.,  vol.  8,  No.  3,  pp.  830‐839,  Aug. 
1993.  

[30] H. Liu, Z. Hu, and Y. Song, “Lyapunov‐based decentralized excitation con‐
trol for global asymptotic stability and voltage regulation of multi‐machine 
power systems,” IEEE Trans. Power Syst., vol. 27, No. 4, pp. 2262‐2270, Nov. 
2002.  

[31] R.  Yousefian  and  S.  Kamalasadan,  “A  Lyapunov  function  based  optimal 
hybrid  power  system  controller  for  improved  transient  stability,”  Electric 
Power Syst. Res., vol. 137, pp. 6‐15, Aug. 2015. 

[32] I.  Martínez,  A.  R.  Messina,  and  V.  Vittal,  “Normal  form  analysis  of  com‐
plex  system  models:  a  structure  preserving  approach,”  IEEE  Trans.  Power 
Syst., vol. 22, no. 4, pp. 1908‐1915, Nov. 2007.  

[33] M.  A.  Hernández‐Ortega  and  A.  R.  Messina,  “Nonlinear  power  system 
analysis  using  Koopman  mode  decomposition  and  perturbation  theory,” 
IEEE Trans. Power Syst., vol. 33, no. 5, pp. 5124‐5134, Sept. 2018. 

[34] J.  Persson  and  L.  Söder,  “Comparison  of  three  linearization  methods,” 
Proc. of the 16th PSCC, Glasgow, 14‐18 July 2008. 

[35] V. Vittal, N. Bhatia, and A. A. Fouad, “Analysis of stressed power systems 
following  using  normal  forms,”  Proc.  of  the  IEEE  ISCAS,  vol.  5,  pp.  2553‐
2556, USA, 1992. 

[36] O.  Rodríguez  and  A.  Medina,  “Stressed  power  systems  analysis  by  using 
higher  order  modal  series  method:  a  basic  study,”  IEEE  PES  Transmission 
and Distribution Conference and Exposition 2010, LA, USA, 2010. 

13
Chapter 2
Koopman Mode Analysis
Over the last few years, Koopman mode analysis has emerged as an alternative approach
to the study of data-driven system representations.

In this context, Koopman mode analysis captures the full information of a (non-
)linear system through the spectral analysis of the Koopman operator and provides a
very general analysis tool to investigate the stability of both, model-based, and data-
driven-based representations. The associated Koopman modes represent spatial flow
structures and have an associated temporal frequency and growth rate that may be
viewed as a non-linear generalization of linear global modes.

In this chapter, a brief description of Koopman mode analysis is developed in a


discrete-time framework. The main properties of this technique are presented, and special
attention is focused on its mathematical formulation.

Further, the most relevant algorithms for estimating approximations of the


Koopman modes are presented. Numerical issues related to its implementation are also
discussed.
2.1 The Koopman operator
Following [1]-[3], consider a continuous-time dynamical system evolving on a
finite M-dimensional smooth manifold Z, such that, for all z∈Z

zɺ (t ) = f (z(t )), z(0) = z(t0 ) (2. 1)

where f is a vector field.

In discrete time, mapping the state z(t0) forward in time, yields zk+1 =
T(z(k)) = T(zk), where zk = z(k∆t), and T is a discrete map, T(∆t): Z → Z.

Let T denote the iterated map T: Z → Z, projecting from the space Z ⊂ M

onto itself, describing a non-linear system in a certain system of coordinates


zk∈ . Then, we can define a set of observables xk∈ℜN, as the projection of the
M

space Z onto the space X ⊂ ℜN, through a function f [1], [2].

The evolution of the system and the observables is given by:

z k +1 = T(z k )
(2. 2)
x k +1 = f (z k +1 ) = f (T(z k ))

Mathematically, the evolution of the observables induces a (discrete-time)


Koopman operator UT, as:

[U T f ](z k ) = [ f T](z k ) (2. 3)

where the terms within brackets [∙] are functions acting over the vector of varia-
bles zk, and f ○T denotes the composition of the functions f and T.

It should be pointed out that, in general, UT is infinite-dimensional. By re-


stricting the Koopman operator to a subspace spanned by a limited number of
observables, however, a finite-dimensional linear representation of a non-linear
system can be obtained [2], [3].

More importantly, when the space of observables is a vector space, the


Koopman operator is linear [2]. Therefore, the study of the spectral properties of

15
UT gives insight into the dynamics of the system in a similar way to the case of
finite-dimensional systems [2], [3].

2.1.1 Koopman eigenfunctions


Let {ψ1, ψ2, …, ψM} denote a set of eigenfunctions of the Koopman operator. In
the discrete-time case, UT satisfies:

[UT ψ j ](z) = µ jψ j (z) (2. 4)

where the terms μj ∈ are the Koopman eigenvalues (KVs).

The following four properties of the Koopman eigenfunctions can readily


be verified [2]:

Property 2.1 (Linearity). Since UT is linear, it holds that for any functions
f1, f2, and scalars α, β

[U T (α f1 + β f 2 )](z 0 ) = α [U T f1 ](z 0 ) + β [U T f 2 ](z 0 ) (2. 5)

Property 2.2 (Potency). Let ψ (z) = zk, k∈ℵ. Then


__

[UT ψ ]( z ) = ψ (µz ) = µ k z k = µ kψ ( z ) (2. 6)

and thus, the function ψ(z) is an eigenfunction of UT with eigenvalue μk.

Property 2.3 (Algebraic structure). If ψ1 and ψ2 are eigenfunctions of UT


with eigenvalues μ1 and μ2, then ψ1ψ2(z) is an eigenfunction of UT with eigenval-
ue μ1μ2:

U T (ψ 1ψ 2 )  ( z ) = [U T ψ 1 ] ( z ) [U T ψ 2 ] ( z ) = ( µ1 µ 2 ) [ψ 1ψ 2 ] ( z ) (2. 7)

Property 2.4 (Spectral equivalence of topologically conjugate systems).


Let S : Y → Y and T : Z → Z be topologically conjugate and let ℋ : Z → Y be the
homeomorphism between them, such that S ○ ℋ = ℋ ○ T. If ψ is an eigenfunction
of US with eigenvalue μ, then ψ ○ ℋ is an eigenfunction of UT at eigenvalue μ [1].

16
2.1.2 Koopman modes
Assume that the function f is defining the observables in the span of a set of lin-
early independent eigenfunctions ψj. Then, the function f in (2.2) can be expand-
ed as

f ( z ) = ∑ j =1ψ j ( z ) vɶ j

(2. 8)

where the vectors of coefficients ṽj ∈ N are the Koopman modes (KMs), and are,
in general, not necessarily orthogonal [2], [3].

The dynamic evolution of the system (2.2), given by the iterations of (2.7)
and (2.8) from the initial value z0, is [2], [4]:

 [
f (z1 ) = f (T(z 0 )) = U T1 f (z 0 ) ]

[ ]
 f (z 2 ) = f (T(z1 )) = U T f (z1 ) = U T f (z 0 )
1 2
[ ] (2. 9)

 ⋮
 [
f (z k ) = U Tk f (z 0 ) ]
and, in accordance with (2.3), (2.4), and (2.8)

∞ ∞
f ( z k ) = ∑ U T k ψ j  ( z 0 ) vɶ j = ∑ µ kjψ j ( z 0 ) vɶ j (2. 10)
j =1 j =1

where the terms μj are the discrete-time Koopman eigenvalues with a decay rate
σ j and oscillate to at a unique frequency γj, such that for the considered frame-
work, their evolution is given by μj k = exp (λjk∆t) = exp ((σj +iγj)k∆t).

To further illustrate this notion, let now {μ1, μ2, …, μN} be the set of linear
eigenvalues of f, and {ψ1, ψ2,…, ψN} be the set of corresponding linear eigenfunc-
tions. By making use of properties 2.2 and 2.3 in equation (2.10), one has that

∞ ∞ N N
f ( z k ) = ∑ j =1 µ kjψ j ( z 0 ) vɶ j + ∑∑∑∑ µ jp µlq ψ jpψ lq  ( z 0 ) vɶ ( p ,q , j ,l )
N
(2. 11)
p =1 q =1 j =1 l =1

where ṽ(p,q,j,l) is the Koopman mode corresponding to the eigenvalue ( µ jp µ lq ).

17
2.2 Particular cases of analysis
Three particular cases for analysis that are of interest to this research are
discussed below:

1. Koopman mode analysis (KMA) of a diagonal system,

2. Koopman mode analysis of a linear system, and

3. Koopman mode analysis of the output signals of a linear system.

2.2.1 KMA of diagonal systems


Let x = [x1, x2, …, xN]T and ẋ = A11x, where A11∈ N×N is a transformation matrix. If
{λ1, λ2,… , λN} is the set of eigenvalues of A11 and λj≠λk for j≠k, then A11 can be
decomposed as A11=V11Λ11U11, where V11 and U11 are the matrices of right and
left eigenvectors of A11, respectively, and Λ11 = diag(λ1, λ2, …, λN) [5].

Use of the transformation x = V11y = V11[y1 y2 … yN]T in the linear model


yields

yɺ = V11−1 [ A11x] = V11−1A11V11y = Λ1y (2. 12)

where Λ1= U11A11V11, and ẏ= [ẏ1 ẏ2 … ẏN]T.

In this relation, the maps Λ1 and A11 are topologically conjugate by


Λ1(V11)-1 = (V11)-1 A11. Further, from (2.12) it can be seen that φj(y) = yj are eigen-
functions of UΛ corresponding to eigenvalues λ_j.

By making use of properties 2.3 and 2.4 with ℋ = (V11)-1, it is immediately


seen that:

ϕ (j ,pk,q ) (y ) = [ϕ j (y )]p [ϕk (y )]q = y jp ykq (2. 13)

is an eigenfunction of UΛ with eigenvalue (pλj+qλk) [1], [2].

18
2.2.2 KMA of a linear system
Let us consider again a linear system with the form ẋ = A11x, and let A11 be de-
composed as A11= V11Λ1U11 with V11 = [v1 v2 ∙∙∙ vN], where vj is a right eigenvector
of A11 corresponding to the linear eigenvalue λ_j. Then, if wj is the j-th right ei-
genvector of the adjoint A11, A11*, we can define the observable ϕj(x) = 〈x, wj〉 and
write [3]

U Aφ j ( x ) = φ j ( A11x ) = A11x, w j = x, A11


*
w j = x, λ *j w j
(2. 14)
= λ j x, w j = λ jφ j ( x )

One concludes that that ϕj is a linear eigenfunction of the Koopman oper-


ator UA. Let be the vector-valued observable be defined as g(x) = x. Then

N N
g ( x ) = ∑ x, w j v j = ∑φ j ( x ) vɶ j (2. 15)
j =1 j =1

From this expression, we can see that the eigenvector vj of the linear map
A11 is the Koopman mode corresponding to ϕj.

2.2.3 KMA of the output variables of a linear system


It can be noted that in Section 2.2.1 the Koopman operator acting on the dynam-
ics of the regarded observables is indeed the linear transformation A11, while in
Section 2.2.2 it is the diagonal matrix Λ1.

In this section, the Koopman operator of a discrete-time linear system


where the observables are defined as the output signals is found. This case is
interesting because it illustrates the way an infinite-dimensional linear operator
acts linearly over the evolution of a bunch of observables. This case has inspired
the developments presented in Chapter 4.

Consider the discrete-time linear system

x k +1 = A11x k
(2. 16)
yˆ k = C1x k

19
where A 11∈ℜN×N is the discrete-time system matrix, C 1∈ℜL×N is the discrete-
time matrix of output signals and ŷk∈ℜL are the system output signals evaluated
at time step k.

We can then define the observables as:

g ( x k ) = yˆ k = C1x k (2. 17)

Thus, at the initial discrete-time point k=0, equation (2.17) gives

g ( x0 ) = yˆ 0 = C1x0 (2. 18)

Then, at k =1

g ( x1 ) = yˆ 1 = C1x1
= C1A11x 0 (2. 19)
+ −1 +
= C1A11C x 0 = C1V11M1 V C x 0
1 11 11

U 1A U 1A

where the symbol + denotes the Moore-Penrose pseudoinverse and M 1∈ N×N


is
a diagonal matrix containing the discrete-time eigenvalues of the system.

Furthermore, from (2.19) it can be noted that for a time step k

g ( x k ) = yˆ k = C1x k
= C1 A11
k
x0 (2. 20)
+ −1 +
= C1 A C yˆ 0 = C1V11M V C yˆ 0
k
11 1
k
1 11 1

U Ak U Ak

and the evolution of the observables can be expressed as:

g ( x k ) = ∑ j =1 µ kj φ j ( x0 ) vɶ j
N
(2. 21)

where the Koopman modes ṽj are calculated as:

ɶ = [ vɶ vɶ ⋯ vɶ ] = C V
V (2. 22)
1 2 N 1 11

20
and the values of the Koopman eigenfunctions may be calculated from:

Θ ( x0 ) = φ1 ( x0 ) φ2 ( x0 ) ⋯ φN ( x0 )  = V11−1x0
T
(2. 23)

Now, by extrapolating the above results to the system (2.2), if T is an infi-


nite-dimensional matrix describing the time-evolution of the Koopman eigen-
functions, and if the state variables of the system (2.2) were output signals, or
the observables, equations (2.17) through (2.23) illustrate how the dynamics of
the Koopman modes are projected onto the observed state variables.

Although a finite set of observables could be recorded, the quantity of the


dynamics regarding the Koopman operator is infinite.

2.3 Approximate methods


Although in Section 2.2, some examples of linear systems and their correspond-
ing Koopman operators were shown, in general, an explicit finite representation
of the Koopman operator for non-linear dynamical systems is unreachable [1].

In literature, two types of strategies are followed to obtain approxima-


tions to the Koopman operator as well as its tuples (Koopman eigenvalues,
Koopman eigenfunctions, and Koopman modes):

i) Data-driven approaches: These techniques do not use any information


about the equations describing the dynamics of the system. The most
important ones are the Koopman mode decomposition (KMD) tech-
nique [3], the dynamic mode decomposition (DMD) method [6], and
the extended dynamic mode decomposition (EDMD) [7].

ii) Model-based approaches: This type of approaches use the equations


modeling a non-linear system to obtain a finite representation of the
Koopman operator. These finite representations may be exact or ap-
proximated, depending on the dimension of the subspace of
Koopman eigenfunctions containing the underlying dynamics [8].

21
In what follows, the data-driven approaches and the two types of finite
representations of the Koopman operator (exact and approximated) are de-
scribed.

2.3.1 Numerical data-driven methods

2.3.1.1 Koopman mode decomposition

The Koopman mode decomposition (KMD) method is a variant of the standard


Arnoldi method, based on the recursive generation of Krylov subspaces [3].

Consider a network of sensors that represent PMUs or other measuring


devices [9]. Let now ~
x j(tk), j=1,…, Mɶ , k=0, …, Ñ, denote an element of observa-
tion, where Mɶ is the number of sensors, Nɶ +1 is the number of samples. In
power systems’ practical applications, typically Mɶ < Nɶ +1.

ɶ
Following [2] and [7], if xɶ k ∈ℜM is a snapshot, then the snapshot data ma-
ɶ ∈ℜMɶ × Nɶ can be defined as:
ɶ ,Y
trices X

ɶ = xɶ
X xɶ 2 ⋯ xɶ Mɶ −1  (2. 24)
 0

ɶ = xɶ xɶ ⋯ xɶ ɶ 
Y (2. 25)
 1 2 M

where for the specific case of the KMD technique,

ɶ =g X
Y ( )
ɶ = XC
ɶˆ (2. 26)

ɶ ɶ
i.e., Ĉ ∈ℜN ×N is the associated companion matrix resembling the effect of the
~
Koopman operator in the data set X , and is constructed as:

0 0 0 0 cˆ0 
1 0 0 ⋯ 0 cˆ1 

ˆ = 0
C 1 0 0 cˆ2  (2. 27)
 
 ⋮ ⋱ ⋮ ⋮ 
0 0 0 ⋯ 1 cˆNɶ −1 

22
where ĉ = (ĉ0, …, ĉ Nɶ −1 )T is the vector of coefficients of the linear combination [10]

xɶ Nɶ = g ( xɶ Nɶ −1 ) = cˆ0 xɶ 0 + ⋯ + cˆNɶ −1xɶ Nɶ −1 + r = Xc


ɶˆ (2. 28)

and r is the residue of the linear combination. We note that, in the limit, r ≈ 0 as
~
Mɶ increases and rank( X ) → Nɶ .

~ ~
Consider now the eigendecomposition Ĉ= T -1 Μ̂ T , Μ̂ =diag( µ~1,…, µ~Nɶ ).
The eigenvalues µ~ j ∈ C of Ĉ are then approximations to the eigenvalues μj of g,

called empirical Ritz eigenvalues. The empirical Ritz eigenvectors v̂ j∈ are
defined to be the columns of the matrix V̂= [ v̂ 1 v̂ 2 ⋅⋅⋅ v̂ Nɶ ], where [3]

V ɶ Tɶ −1
ˆ =X (2. 29)

Using this approach, each snapshot can be decomposed as:


xɶ k = ∑ µɶ jk vˆ j , k = 0,… , Nɶ − 1 (2. 30)
j =1

Then, the last snapshot is given by:


xɶ Nɶ = ∑ µɶ jN vˆ j + r, r ⊥ span {xɶ 0 ,… , xɶ Nɶ −1}
ɶ
(2. 31)
j =1

It can be observed from (2.21), (2.30), and (2.31) that the Ritz eigenvalues,
µ~ j, are approximations to the Koopman eigenvalues μj as the empirical Ritz ei-
genvectors v̂ j are approximations to the Koopman modes ṽj multiplied by the
value of the corresponding Koopman eigenfunctions ϕj(x0), i.e. ϕj(x0)ṽj. However,
equation (2.30) approximates each snapshot via a finite sum of modes, rather
than an infinite sum.

There are three main sources of computational error affecting the imple-
mentation of the Koopman mode decomposition method:

23
1. Ill-conditioning of the computations: This is the most common and im-
ɶ in (2.29), when T
portant factor. It is associated with the inversion of T ɶ

has large entries originated from highly-unstable Koopman eigenvalues


estimations. Two factors generate this condition:

a. The presence of highly unstable dynamics in the analyzed data.


The only one solution for this case is to adjust the considered sam-
pling window.

b. The second reason is an inaccurate computation of the empirical


Ritz eigenvalues. Numerical experience indicates that this problem
lies in the process of computing the vector ĉ. In literature, two
methods are used to compute the linear combination ĉ: In [11], the
~
authors use the Moore-Penrose pseudoinverse of X for

( )
+
ɶ
cˆ = X xɶ Ñ (2. 32)

~
whereas in [12] the economy-size QR decomposition of X is used.
~ ɶ ɶ ɶ
This means that with X = QR, Q∈ ℜ M × M , R∈ ℜ M × Ñ , the vector ĉ is
calculated as:

cˆ = R +Q−1xɶ Ñ (2. 33)

For the case of power systems, we propose the use of the formula

( )
+
ɶ TX
cˆ = X ɶ ɶɶ
Xx (2. 34)
Ñ

which is more stable for the cases where Mɶ < Nɶ +1.

2. Dependence of the results on the last snapshot: This implies that the
Koopman mode estimates may vary from one sampling window to an-
other. Numerical experience shows that this variation on the estimations
decreases as the dataset and the sampling rate increases.

24
3. Similarity with the discrete Fourier transform (DFT): Analogously to the
DFT, the rate of oscillation of the Koopman modes is limited by the
Nyquist criterion; the Koopman eigenvalues are separated around the
unit circle by a differential of frequency Δf almost equal to that defined
for the DFT, except for the most dominant Koopman modes.

This essentially means that the resolution of the Koopman mode decom-
position method increases as the observational period increases.

As can be noted, the second and third numerical issues associated with
the Koopman mode decomposition method are intrinsically enhanced when
large datasets are used. It should be stressed, however that the results of these
numerical methods are still estimations of the true Koopman modes.

A remarkable advantage of this technique is that the sub-process of calcu-


lating the approximate Koopman eigenvalues depends on the computation of
the vector ĉ, which in fact is the less demanding stage of the algorithm.

2.3.1.2 Dynamic mode decomposition

A variation of the Koopman mode decomposition method is the dynamic mode


decomposition (DMD) algorithm. This method is based on the projection of
large-scale datasets onto a dynamical system with fewer freedom degrees. The
dynamic modes are intended to characterize the most energetic Koopman struc-
tures contained in the analyzed dataset [6].

ɶ such that X
This variant computes the SVD of the matrix X ɶ = ÛΣŴH,
Mɶ ×Mɶ ɶ ɶ ɶ ɶ
where Û∈ , Ŵ∈ N ×M , and Σ∈ M ×M is a diagonal matrix containing the
ɶ . ŴH denotes the conjugate transpose of Ŵ [5].
singular values of X

By substituting the above decomposition in (2.26) and rearranging we ob-


tain

ɶ =U
C ɶ ˆ −1 = ΣW
ˆ H YWΣ ˆ ˆ −1
ˆ H CWΣ (2. 35)

25
~
where C generally allows a more robust implementation and preserves the most
dominant patterns encountered in matrix Ĉ. This alternate algorithm is exact
ɶ is full-rank [11].
when X

ɶ is then performed by solving


The extraction of the dynamic patterns of X
the diagonalization problem

ɶ ˆ −1 = TM
ˆ H YWΣ
U ˆ Tˆ −1 (3. 36)


where M =diag( µˆ 1 , … , µˆ J ) contains the set of most dominant eigenvalues of Ĉ
~
and T̂ ∈ Nɶ × Nɶ
is the matrix of right eigenvectors of C .

The dynamic modes are set as V̂ = Û T̂ , whereas the time coefficients are
ˆ −1ΣW
computed either with T ˆ H or by scaling the columns of V̂ by appropriate
complex scalars.

The dynamic mode decomposition shares strengths and weaknesses with


other modal decompositions forms such as the proper orthogonal decomposi-
tion (POD) method [3].

In this sense, the main numerical issue of the dynamic mode decomposi-
tion method resides in the fact that its estimates may result in mode mixing and
lead to an ill-conditioning problem, especially for datasets with a reduced num-
ber of sensors [12].

2.3.2 Model-based approaches


In this section, two model-based approaches to approximate the Koopman op-
erator are reviewed in the context of this research. The first one, the extended
dynamic mode decomposition (EDMD), is model-based but data-driven. The
discussion also extends to the analysis of finite approximations based on the
system dynamical equations.

26
2.3.2.1 Extended dynamic mode decomposition

The extended dynamic mode decomposition (EDMD) was first proposed in [7].
This method approximates not just the Koopman modes and eigenvalues, but
also the Koopman eigenfunctions.

ɶ and Y
Here, the definition of the previously used snapshots matrices X ɶ ,
is extended by simply considering

yɶ k = g (xɶ k ) (2. 37)

ɶ and Y
where X ɶ are not necessarily time series [13].

Additionally, a dictionary of candidate non-linear functions ψ̂ j of the col-


ɶ is required:
umns of X

ɶ ) = ψˆ (X
 1 ) ψˆ 2 (X) ⋯ ψˆ m (X) 
ˆ (X
Ψ ɶ ɶ ɶ (2. 38)

that may consist of constant, polynomial, trigonometric, or other non-linear


functions [7], [14].

The optimal choice of the functions being part of the dictionary is an open
problem, but here it is assumed to be rich enough to accurately approximate the
most dominant Koopman tuples [7].

The time evolution of the selected observables can then be written in


terms of the dictionary functions as:

ɶ ) = [Ψ
ˆ ]( X
[U Ψ ɶ)=Ψ
ˆ gˆ ]( X ɶ )K
ˆ (X ˆ +r (2. 39)
O

ˆ ∈C m× m is a finite representation of the Koopman


where r is a residue and K O

operator, related with the dictionary of functions.

ˆ is computed by correlating the func-


In literature, this approximation K O

ɶ ) and Ψ
ˆ (X
tions Ψ ɶ ) [7], or by sparse regression [14].
ˆ (Y

27
ˆ with eigenvalue µ~ j, the ap-
Thus, if ξj is the j-th right eigenvector of K O

proximation to the j-th Koopman eigenfunctions is given by:

φˆj (xɶ k ) = Ψ
ˆ (xɶ )ξ
k j (2. 40)

Now, the observables can be conveniently expressed as part of the span


of the functions ψ̂ j as:

m
g j ( xɶ k ) = ∑ψˆ l ( xɶ k )bˆk ( l ) = Ψ
ˆ ( xɶ )bˆ
k k (2. 41)
l =1

where b̂ k∈ m is a vector of coefficients.

Therefore, the vector of observables at time instant k may be expressed as


follows

g ( xɶ k ) = Bˆ T Ψ
ˆ T ( xɶ )
k (2. 42)

where Bˆ T = [bˆ 1 bˆ 2 ⋯ bˆ m ] .

ˆ (xɶ ) = [φˆ (xɶ ) φˆ (xɶ ) ⋯ φˆ (xɶ )] , and using (2.40), it is


Assuming that Φ k 1 k 2 k m k

obtained that:

ˆ (xɶ ) = Ψ
Φ ˆ (xɶ )Ξ (2. 43)
k k

where Ξ=[ξ1 ξ2 ··· ξm].

This means that, as in reference [7],

m
ˆ ˆ T (xɶ ) = ∑ vˆ φˆ (xɶ )
g (xɶ k ) = VΦ (2. 44)
k j j k
j =1

with
ˆ = [ vˆ
V 1 vˆ 2 ⋯ vˆ m ] = (Ξ−1Bˆ )T (2. 45)

28
Although this method is thought to use data for obtaining the Koopman
tuples, knowledge about the nature of the dynamical system and the analyzed
data is required for a proper selection of the dictionary’s basis functions [7], [14].

2.3.2.2 Finite linear representations

An alternative philosophy to obtain an approximate representation of the


Koopman operator, based on the dynamical system model is presented in [8].

Conceptually, this proposal is based on finding a finite invariant space


where a finite linear representation of the Koopman operator can be easily ob-
tained. This means that it can be just applied to systems characterized by non-
linear polynomial functions.

In Chapter 4 of this dissertation, this idea is extended by means of the


perturbation theory and the use of a few terms of the Taylor series to analyze
more complex systems.

The objective of the presented type of techniques is to determine a finite


linear operator that appropriately represents the dominant linear and non-linear
dynamics of a dynamical system. Then, the theory of linear systems and optimal
linear control can be easily extended to the obtained linear representation.

2.4 Concluding remarks


In this chapter, the theoretical background and main properties and aspects of
Koopman mode analysis are reviewed. The Koopman operator theory provides
a rigorous framework that enables the analysis of non-linear responses in terms
of Koopman modes.

Important features of the Koopman operator spectral analysis, when ap-


plied to some special cases, were also presented.

Furthermore, the currently used methods to estimate the Koopman


modes, eigenvalues, and eigenfunctions were briefly reviewed. Two categories

29
were used: methods based on data and methods based on the dynamical equa-
tions.

Among the data-based techniques, the Koopman mode decomposition


(KMD) and the dynamic mode decomposition (DMD) provide an approxima-
tion based on the analyzed observables, whilst the extended dynamic mode de-
composition (EDMD) is able to extract more complex patterns by extending the
kernel basis functions.

Also of interest, the model-based methods use the non-linear equations


describing a dynamical system to define a set of appropriate basis functions that
may characterize the non-linear behavior of the response.

Of utmost importance, is the idea of using a finite set of non-linear func-


tions that properly approximate a system response by extending the space of the
basis functions. This idea is exploited below to develop new tools based on
Koopman operator theory.

2.5 References

[1] M. Budišíc, R. Mohr, and I. Mezić, “Applied Koopmanism,” Chaos: Inter-


discip. J. Nonlinear Sci., vol. 22, no. 4, pp. 047510, December 2012.

[2] I. Mezić, “Analysis of fluid flows via spectral properties of Koopman oper-
ator”, Annu. Rev. Fluid Mech., vol. 45, pp. 357–378, 2013.

[3] C. Rowley, I. Mezić, S. Bagheri, P. Schlatter, and D. Henningson, “Spectral


analysis of fluid flows,” J. Fluid Mech., vol. 641, pp. 115–127, 2009.

[4] A. Mauroy and I. Mezić., “Global stability analysis using the eigenfunc-
tions of the Koopman operator,” IEEE Trans. Automatic Control, vol. 61,
no. 11, pp. 3356–3369, Nov. 2016.

[5] L. Perko, Differential Equations and Dynamical Systems, 3rd edition, Springer-
Verlag, NY, 2001.

30
[6] P. J. Schmid, “Dynamic mode decomposition of numerical and experi-
mental data,” J. Fluid Mechanics, vol. 656, Cambridge University Press 2010,
pp. 5–28.

[7] M. O. Williams, I. G. Kevredikis, and C. W. Rowley, “A data-driven ap-


proximation of the Koopman operator: extending dynamic mode decom-
position”, J. Nonlinear Sci., vol. 25, pp. 1307–1346, 2015.

[8] S. L. Brunton, B. W. Brunton, J. L. Proctor, and J. N. Kutz, “Koopman in-


variant subspaces and finite linear representations of nonlinear dynamical
systems for control,” PLoS ONE, vol. 11, no. 2, pp.

[9] E. Barocio, B. C. Pal, N. F. Thornhill, and A. R. Messina, “A dynamic mode


decomposition framework for global power system oscillation analysis,”
IEEE Trans. Power Syst., vol. 30, no. 6, pp. 2902–2912, November 2015.

[10] A. Ruhe, “Rational Krylov sequence methods for eigenvalue computa-


tions,” Linear Alg. Appl., vol. 58, pp. 279–316, 1984.

[11] K. K. Chen, J. H. Tu, and C. W. Rowley, “Variants of dynamic mode de-


composition: boundary conditions, Koopman and Fourier analyses,” J.
Nonlinear Sci., vol. 22, no. 6, pp. 887–915, 2012.

[12] P. J. Schmid, “Application of the dynamic mode decomposition to experi-


mental data,” Experiments in Fluids, vol. 50, no. 4, pp. 1123–1130, 2011.

[13] J. H. Tu, C. W. Rowley, D. M. Luchtenburg, S. L. Brunton, and J. N. Kutz,


“On dynamic mode decomposition: Theory and applications,” J. Comput.
Dyn., vol. 1, no. 2, pp. 391-421, 2014.

[14] S. L. Brunton, J. L. Proctor, and J. N. Kutz, “Discovering governing equa-


tions from data by sparse identification of nonlinear dynamical systems,”
Proc. of the Nat. Acad. Sci. (PNAS), vol. 113, no. 15, pp. 3932-3937, 2016.

31
 

Chapter 3
Recursive Linearization
Recent studies in the use of analytical perturbation methods have shown that the exploi‐
tation of the structure of the underlying models can greatly enhance the performance of 
these  tools  and  produce,  in  certain  applications,  efficient  algorithms  for  performing 
modal analysis.  

In this chapter, a novel methodology for the linearization of a dynamical system 
up  to  the  third  order  based  on  the  use  of  chain‐rule  based techniques  is  presented that 
provides a straightforward and fast alternative for the linearization of large and complex 
power system dynamic models. 

First, the concept of recursive linearization is introduced to aid in simplifying the 
system model. Expressions are developed and techniques for exploiting the structure of 
the expansions are then presented. Associated relationships for computing second‐ and 
higher‐order approximations to the power series expansion terms of a non‐linear analyt‐
ical  model  are  also  developed.  This  renders  the  proposed  framework  highly  suitable  for 
automated non‐linear analysis of complex models. 

   
3.1 Mathematical background
Non‐linear  dynamical  systems  often  exhibit  complicated  performance  that  fall 
outside the domain of traditional linear analysis methods. To obtain more local‐
ized and accurate information, extensions to linear methods are required, which 
account for non‐linear behavior around a  stable  system equilibrium point (sep) 
[1]‐[6]. 

Obtaining these higher‐order approximations is generally a computation‐
ally demanding task that makes the application of non‐linear analysis methods 
difficult [3], [5]. In this chapter, a novel linearization scheme is presented, which 
reduces  computational  effort  when  analyzing  complex  dynamical  systems  ex‐
pressed  as  a  set  of  (real‐  or  complex‐valued)  ordinary  differential  equations 
(ODEs). 

As a basis for our discussion, assume that the non‐linear equations corre‐
sponding to a dynamical system can be expressed as a set of ODEs [1], [2] 

  x t   ĝ xt , u t    (3. 1) 

  yˆ t   h' x t    (3. 2) 

where  xX  are  the  state  variables,  uU  are  the  input  signals,  and  ŷŶ  are  the 
output signals; XN, UR, and ŶL represent the space where the state var‐
iables, the entries, and the outputs evolve, respectively.  

The function ĝ: (X, U) → X defines the evolution of the dynamical system, 


and h': X → Ŷ is a non‐linear function projecting the state variables of a dynam‐
ical system onto the output signals.   

In  the  context  above,  the  functions  defining  the  time  derivatives  of  the 
state variables can be stacked into a vector ĝ(x, u) = [ĝ1(x, u) ĝ 2(x, u) … ĝN(x, u)]T, 
where: 

  x j  gˆ j  x1,, xN , u1,, uR    (3. 3) 

33
Equation  (3.3)  can  also  be  written  as  a  function  f j  of  embedded  sub‐
~ j (x, u) as: 
functions  g

~
x j  f j  g~1  x1 ,  , x N , u1 ,  , u R , g~2  x1 ,  , x N , u1 ,  , u R ,  , g~r  x1 ,  , x N , u1 ,  , u R    (3. 4) 

~ ~ ~ ~ ~ ~ ~
where  f =[ f 1 f 2 … f N]T, f : G → X, and  G  represents the space generated by 
~ j(x, u).  
the sub‐functions  g

In  equation  (3.3),  the  functions  ĝj  have  a  more  complex  representation 
~ ~ j of (3.4). 
than the sub‐functions  f j or  g

Analogously, for the output signals ŷj, we can write 

~
  yˆ j  h ' j  x1 , , x N   h j s1  x1 , , x N , , s S  x1 , , x N    (3. 5) 
~ ~ ~ ~ ~
where  h =[ h 1 h 2 … h L]T and  h : S → Ŷ, where S is the space generated by the 
sub‐functions sj(x). 

From the above description, the power series expansion of ẋ about a sep 
results in [3], [4]:  

  x  F1x, u  F2 x, u  F3 x, u    (3. 6) 

where F1, F2, …, Fj refer to the terms of first‐, second‐, and j‐th order. 

Additionally, for the output signals, we get 

  yˆ  H'1 x  H'2 x  H'3 x    (3. 7) 

where H’j(Δx) contains terms of order j affecting the outputs’ evolution. 

In the next sections, background on linear analysis techniques for compu‐
ting perturbation models is provided. 

3.2 Conventional linearization methods

Broadly speaking, two approaches exist for linearizing system models: the ana‐
lytical methods and the perturbation‐based linearization methods [7], [8]. 

34
Although the conventional  analytical  method  is  accurate,  it can be quite 
computationally  expensive  [3].  On  the  other  hand,  the  utilization  of  perturba‐
tion‐based methods offers a rapid performance but sacrifices accuracy [8]. 

 Recent  developments  in  the  application  of  linear  analysis  techniques 


have provided mechanisms to adaptively compute automated system represen‐
tations.  For completeness, conventional methods used to obtain linearized mod‐
els of first‐, second‐, and third‐order are presented. 

3.2.1 First‐order linearization methods 
If  the  model  of  (3.6)  presents  a  small  perturbation  Δx,  only  a  few  terms  in  the 
expansion are usually necessary [1]‐[3]. 

For  linear  stability  analysis,  a  linear  dynamical  model  is  created  from  a 
stable  equilibrium,  xsep,  that  shows  how  the  system  responds  to  small  disturb‐
ances [2]. These models can be expressed in the form 

  x  F1  x,u   A11x  B11u   (3. 8) 

  yˆ  H'1  x   C1x   (3. 9) 

where A11NN, B11NR, and C1L×N. 

First‐order linearization of a non‐linear system requires that matrices A11, 
B11, and C1 of (3.8) and (3.9) are computed. These methods can be classified into 
three categories [8]:  

a) Analytical Linearization (AL), 

b) Forward‐Difference Approximation (FDA), and 

c) Center‐Difference Approximation (CDA). 

The first method is an analytical technique based on the non‐linear equa‐
tions  describing  a  dynamical  system.  The  other  two  methods,  in  contrast,  are 
numerical approximations. 

35
In the following, the mathematical background of the three approaches is 
presented.  A  numerical  comparison  of  these  methods  and  the  proposed  meth‐
odology is deferred until Chapter 8. 

3.2.1.1 Analytical linearization 

In the analytical linearization (AL) method, a description of the dynamical sys‐
tem in the form of equations (3.1) and (3.2) is used to obtain A11, B11, and C1 di‐
rectly from the linear matrices 

 gˆ1 gˆ1 
 x 
xN 
 1 
  A11         (3. 10) 
 ˆ 
 g N 
gˆ N 
 x1 xN  sep

 gˆ1 gˆ1 
 u 
uR 
 1 
  B11         (3. 11) 
 ˆ 
 g N 
gˆ N 
 u1 uR  sep

and 

 h '1 h '1 
 x  x 
 1 N

  C1         (3. 12) 
 
 h 'L  h 'L 
 x1 xN  sep

3.2.1.2 Forward‐difference approximation (FDA) 

The forward‐difference approximation is a numerical method based on the ad‐
dition of a small perturbation to each state variable to calculate the columns of 
the matrix A11 one at a time [7], [8]. 

36
Starting from an equilibrium condition, a perturbed state vector xi+ is de‐
fined, where a small perturbation ρ is added to the i-th state variable. The differ‐
ence between xi+ and xsep is then denoted as: 

xi   xi   x sep   0    0  
T
  (3. 13) 

In this setting, the time derivatives ẋi+ are calculated with the vector Δxi+
as explained below. To make this precise, let the difference between ẋi+ and the 
time derivatives at equilibrium (ẋsep = 0) be defined as: 

  x i   gˆ (xi  )  gˆ (xsep )  x i   x sep  x i    (3. 14) 

Then, we can calculate each column i of A11, denoted as A11(i), by  noting 
that for Δẋi+ we have 

0

 
  x i   A11xi    A11(1)  A11(i )  A11( N )       A11(i )   (3. 15) 
 

 0 

so that 

x i 
  A11(i )    (3. 16) 

In a similar way, the column i of matrix C1 can be obtained from 

yˆ i 
  C1(i )    (3. 17) 

where Δŷi+ is the vector of functions of equation (3.5) evaluated with Δxi+, minus 
the same equations evaluated with Δxsep. 

Then, the j-th column of matrix B11, B11(j), is calculated as: 

x U j 
  B11( j )    (3. 18) 

37
with  

  x U j   x U j   x sep  x U j    (3. 19) 

and ẋUj+ being the value of (3.3) for a perturbation in the input signal j. 

The  size  of  the  perturbation  ρ  affects  the  results  obtained  with  the  FDA 
method; these are closer to the AL method as ρ gets smaller [7], [8].  

3.2.1.3 Center‐difference approximation (CDA) 

For this method, let the mean value between xi+ and ‐xi- be expressed as: 

xi   xi 
  0    0  
T
  xi  (3. 20) 
2

Given the size of the perturbation, , the time derivatives ẋi+ and ẋi- can be 
calculated.  

The difference between ẋi+ and the time derivatives at equilibrium (ẋsep = 
0) is determined using (3.14), whereas for the case of ẋi- we have that: 

  x i   x sep  x i   x i    (3. 21) 

The average of the two vectors Δẋi+ and Δẋi- is then given by: 

x i  x i x i  x i
  x i     (3. 22) 
2 2

Following  a  similar  procedure  to  that  of  the  FDA  method,  each  column 
A11(i) can be calculated as: 

x i x i   x i 
  A11(i )     (3. 23) 
 2

Similarly, C1(i) can be expressed in terms of average vectors as: 

yˆ i yˆ i   yˆ i 


  C1(i )     (3. 24) 
 2

38
where Δŷi+ was defined above and Δŷi- is the vector of functions of equation (3.5) 
evaluated at Δxi-, minus the same equations evaluated at Δxsep. 

Further, the j-th column of matrix B11, B11(j), is calculated from 

x U j x U j   x U j 
  B11( j )     (3. 25) 
 2

where ẋUj- represents the value of equation (3.3) with a negative perturbation in 
the input signal j. 

3.2.2 Second‐ and higher‐order approximations 
The above methods extend readily to the higher‐dimensional case. Following [3] 
and [4], the terms of second‐ and third‐order in (3.6) are:  


 xT F21x   xT Ν12 u   uT B12 u 
1  1  1 
  F2  x,u               (3. 26) 
2 2 N 2
x F2 x
T N x Ν2 u 
T u B2 u 
T N
     

and  

 x  0   x  0  
 T 1    
 x F3      x  T  1 
 x M3      u 
  0  x     0  x  
1  1 
F3  x,u        
3!   3!  
 x  0
  x  0  
T N 
x F3 
    x  xT M N      u 
3   
  
  0  x    
 0  x  
 u  0   u  0 
 T 1     T 1 
 x L3      u   u B3      u 
  0  u     0  u  
1  1 
      
3!  3! 
u  0  u  0 
   
T N   T N 
 x L3 
    u  u B3 
    u 
   
  0  u     0  u  
  (3. 27) 

39
3.2.2.1 Analytic linearization 
j j
As is seen from equations (3.26) and (3.27),  F2  and  F3 , j = 1, 2, …, N are constant 
matrices given by: 

  2 gˆ j  2 gˆ j 
 2  
  x1 x1xN 
  F2j         (3. 28) 
 2 
  gˆ j  2 gˆ j 
  
 xN x1  2 xN  sep

and 

  3 gˆ j  3 gˆ j  3 gˆ j  3 gˆ j 
 3  
  x1  2 x1x2 x1x2 x3 x1 2 xN 
  F3j         (3. 29) 
 3 
  gˆ j  3 gˆ j  3 gˆ j  3 gˆ j 
 2  
  x1xN x1x2 xN x2 x3xN  3 xN  sep

j
whereas  B 2j  and  B 3 , j = 1, 2, …, N are given by: 

  2 gˆ j  2 gˆ j 
 2  
  u1 u1uR 
  B 2j         (3. 30) 
 2 
  gˆ j  2 gˆ j 
  
 uR u1  2uR  sep

and 

 3 gˆ j 3 gˆ j 3 gˆ j 3 gˆ j 
 3  
  u1  2u1u2 u1u2u3 u1 2uR 
  B3j         (3. 31) 
 3 
  gˆ j 3 gˆ j 3 gˆ j 3 gˆ j 
 2  
  u1uR u1u2uR u2u3uR 3uR  sep

40

 j , and  L j  can be computed using 
Similarly, the coefficient matrices  Ν 2j ,  M 3 3

the second‐order sensitivity matrices 

  2 gˆ j  2 gˆ j 
  
 x1u1 x1uR 

  Ν 2j         (3. 32) 
 2 
  gˆ j  2 gˆ j 
  
 xN u1 xN uR  sep

 3 gˆ j 3 gˆ j 3 gˆ j 
3 gˆ j
 2  
  x1u1 x1x2u1 x1x3u1 x1xN uR 
   j 
M       (3. 33) 
3
 
  gˆ j 3 gˆ j 3 gˆ j 3 gˆ j 
3

  
 xN x1u1 xN x2u1 xN x3u1  2 xN uR  sep

 3 gˆ j 3 gˆ j 3 gˆ j 
3 gˆ j
  
 x1 2u1 x1u2u1 x1u3u1 x1 2uR

~
  L3j         (3. 34) 
 3 gˆ j 3 gˆ j 3 gˆ j 
 gˆ j
3

  
 xN  u1 xN u2u1 xN u3u1 xN  2uR  sep
2

To third order, the output signals ŷ, are: 

~
 xT C12 x 
1 
  H'2 x       (3. 35) 
2 ~
x C x
T L
 2 

 x  0 
 T ~1  
 x C3      x 
  0  x 
1 
  H'3 x      (3. 36) 
3!  x  0  

T~L
x C3 
    x
 
  0  x 

41
where:  

  2 h' j  2 h' j 
 2 
 x x1xN 
~j  1
  C2         (3. 37) 
  2
h' j  2
h' j 
 x x   2 x 
 N 1 N   sep

  3h' j  3h' j  3h' j  3h' j 


 3  
 x1  2 x1x2 x1x2x3 x1 2 xN 
~j 
  C3         (3. 38) 
  3h' j  h' j
3
 3h' j  h' j 
3

 2  
  x1xN x1x2xN x2x3xN  3 xN  sep

~ ~
for  C 2j , and  C 3j , j = 1, 2, …, L.

Higher order terms can be obtained in like manner. 

3.2.2.2 Perturbation‐based methods 

The higher‐order coefficients of expressions (3.6) and (3.7) can also be approxi‐
mated by perturbation‐based methods.  

First, the second‐ and third‐order terms of equations (3.26) and (3.27) are 
rewritten in the alternate form 

F2  x,u   F2  2 x   j 1 u j N j x  B12  2u  
R
  (3. 39) 

R R R
  F3  x,u   F3 3x   u j M j  2 x   u j uk L j ,k x  B13  3u   (3. 40) 
j 1 j 1 k  j

where F2N×N2,  F3N×N3,  NjN×N,  Mj  N×N2,  B12N×R2,  B13N×R3,  and 


Lj,kN×N. 

 In these expressions, Nk and Rk are the number of permutations, of order 
k, of the state variables and of the input variables, respectively, that are stacked 
into the vectors 2x=[x1x1… xNxN]TN2, 3x=[x1x1x1 … xNxNxN]TN3, 
2u=[u1u1 … uRuR]TR2, and 3u=[u1u1u1 … uRuRuR]TR3. 

42
Using  the  above  framework,  equations  (3.35)  and  (3.36),  with  C2L×N2 
and C3L×N3, can be readily written in the compact form 

  H'2  x   C2 2 x   (3. 41) 

  H'3  x   C33x   (3. 42) 

Based on [8] and beginning from equation (3.14), the second‐order partial 
derivative of  ĝ (x, u) with respect to xjxk can be approximated by:  

2 x ij  (x i ) j  x i (x j )  x i (xsep ) 


    (3. 43) 
 gˆ (xij )  gˆ (xi )  gˆ (x j )  gˆ (xsep )

where  xij  denotes  the  vector  of  states  x  with  a  perturbation    in  the  positions  i 
and j. For notational simplicity, the symbol denoting the direction of the pertur‐
bation has been omitted here in order to generalize for both, the FDA and CDA 
methods.  

Then, the column of F2 corresponding to the quadratic term xjxk can be 
calculated from 

2 x ij
  F2(i , j )    (3. 44) 
2

For  the  third‐order  perturbation‐based  methods  we  compute  the  cubic 


differences [8] 

3x ijk  (2x ij )k  2 x ij (xk )  2 x ij (xsep ) 


   gˆ (xijk )  gˆ (xij )  gˆ (x jk )  gˆ (xik )    (3. 45) 
 gˆ (xi )  gˆ (x j )  gˆ (xk )  gˆ (xsep )

whereas the column of F3 corresponding to xixjxk is estimated as:  

3x ijk
  F3(i, j ,k )    (3. 46) 
3

43
The same schemes applied in sections 3.2.1.2 and 3.2.1.3 are applied here 
to obtain the FDA and CDA approximations.  

This  means  that  equations  (3.43)  through  (3.46)  are  used  with  a  positive 
perturbation   for the FDA, whereas for the CDA a negative perturbation ‐ is 
also considered, and the obtained dynamics are averaged.  

Extensions to this  framework  to  include the effect of the input  variables 


and for assessing the higher‐order effects actuating over the output signals can 
be easily obtained. 

There are several limitations to these approaches.  

1. The complexity of the system model grows exponentially. This, in 
turn, leads to an exponential growth in the required computational 
effort [3]. 

2. The second caveat to conventional analysis is the sensitivity of the 
estimation  accuracy  on  the  size  of  the  perturbation  .  While  the 
FDA method is fast, it leads to large estimation errors. On the oth‐
er hand, the CDA method is more accurate but results in a twofold 
increased of CPU effort when compared with the FDA method [9].  

3. The  action  of  the  limiters  in  the  physical  devices  is  generally  not 
properly  addressed  and  may  lead  to  miscalculations  in  the  esti‐
mated models.  

Numerical studies for assessing the behavior of the presented method in 
the context of these limitations and in comparison with the performance of the 
recursive linearization are provided in Chapter 8.  

In  the  following  Section,  the  recursive  linearization  technique  is  intro‐
duced.  The  mathematical  formulation  and  an  efficient  scheme  for  its 
computation are provided.   

44
3.3 Recursive linearization
Several  approaches  to  the  automated  linearization  of  non‐linear  models  are 
available in the literature [7]‐[10]. 

This section introduces a methodology for the linearization of dynamical 
systems up to an arbitrary order. The developed procedure is based on pertur‐
bation theory and the chain rule and is applicable for ODE systems.  

The kernel of the recursive linearization method is obtained by analyzing 
separately the embedded functions describing the parts composing a dynamical 
system instead of dealing with the entire non‐linear expression. This is illustrat‐
ed in Fig. 3.1 for the first‐order linearization process.  

To  introduce  the  proposed  linearization  scheme  for  equations  (3.1)  and 
(3.2) let us express (3.6) and (3.7) in the compact form 

  x  1 gˆ  2 gˆ  3 gˆ  1x  2x  3x   (3. 47) 

  ŷ  1h'2h'3h'   (3. 48) 

where  Δ(j)ĝ  represents  the  j-th  order  vector  of  linearized  modules  of  ĝ,  Δ(j)h’  is 
the j-th order vector of modules of h’, and Δ(k)ẋ is the set of dynamical equations 
denoting the recursive linearization of order k. 

Fig. 3.1 Comparison of the conventional first-order linearization process with the proposed
recursive linearization.

45
3.3.1 Mathematical formulation 
In order to compute the third‐order expansion of equations (3.47) and (3.48), we 
introduce below the mathematical basis of the recursive linearization.  

The presented developments are focused on the effect of the state varia‐
bles. The inclusion of u and extensions to the analysis of ŷ are straightforward. 

3.3.1.1 First order 

The  chain  rule  provides  a  mechanism  to  compute  the  first‐order  terms  in  the 
system  model  Δ(1)ĝ  and  Δ(1)h’ in equations  (3.47)  and  (3.48).  Applying  the  chain 
rule [11] to the j-th function ĝj of equation (3.1) for ĝj yields 

gˆ j fj gˆ j g1


x j  x1    xN  x1 
x1 sep xN sep g1 x1 sep
sep
    (3. 49) 
f g1 f g r f g r
  j xN  j x1    j xN
g1 xN sep
g1 x1 sep
g r xN sep
sep sep sep

or 

 
fj  g1 g1 
x j 
  x1    xN    
g1 x1 xN
sep  
 sep sep 
    (3. 50) 
  
f j  g r g 
  x1    r xN 
g r  x1 xN 
sep
 sep sep 

Defining  

g~k
1 ~ g~k
   gk  x1    xN   (3. 51) 
x1 sep
x N sep

equation (3.50) can be rewritten as  

f j f j f j r fj


x j   1 gˆ j   1 g1   1 g 2     1 g r    1 g k   (3. 52) 
g1 g 2 g r k 1 g
k
sep sep sep sep

46
A key strength of this approach is its scalability. The form of these equa‐
~ k have other simpler functions embedded, 
tions suggests that if the functions  g
the same scheme presented above can be easily extended. 

3.3.1.2 Second order 

As with the first‐order case, we have:  

gˆ j gˆ j
1  gˆ j
2

x j  x1    xN  x12   


x1 sep xN sep 2  2 x1 sep
              (3. 53) 
1  gˆ j  2 gˆ j  2 gˆ j
2

 xN2  x1x2    xN 1xN


2  2 xN sep
x1x2 xN 1xN sep
sep

or 

1  gˆ j 1  gˆ j
2 2

x j  1 gˆ j  x   
2
1 xN2 
2 x12 sep
2 xN 2 sep
    (3. 54) 
 2 gˆ j  2 gˆ j
 x1x2    xN 1xN
x1x2 sep
xN 1xN sep

Then, if we analyze the second partial derivative of  ĝj, with respect to the 
k-th state variable, we have that [11] 

~ ~ ~ ~ ~
 2 gˆ j 2 f j   f j    f j g~1 f j g~2 f j g~r 
       ~  ~  ~   (3. 55) 
xk
2
xk
2
xk  xk  xk  g1 xk g 2 xk g r xk 

and if we take the first term of the right‐hand part of (3.55) we get:  

  fj g1  fj  2 g1 g1 j   fj  fj  2 g1


     
xk  g1 xk  g1 xk
2
xk xk  g1  g1 xk 2

g1    fj  g1   fj  g 2   fj  g r 


           
  xk  g1  g1  xk g 2  g1  xk g r  g1  xk    (3. 56) 

fj  2 g1 g1  2 fj g1 g1  2 fj g 2 g1  fj g r
2

    
g1 xk 2 xk g12 xk xk g1g 2 xk xk g1g r xk

47
Moreover, for the second partial derivative of  ĝj, with respect to the  k-th 
and the l-th state variables, we have:  

 2 gˆ j  2 fj  fj 

 
 
xk xl xk xl xk
 xl 
    (3. 57) 
  fj g1 fj g 2 fj g r 
    
xk  g1 xl g 2 xl g r xl 

Now, by taking into account the first term of (3.57), as in (3.56), we get:  

  fj g1  fj  2 g1 g1   fj  fj  2 g1


     
xk  g1 xl  g1 xk xl xl xk  g1  g1 xk xl
g    fj  g1   fj  g 2   fj  g r 
 1          
       xl  g1  g1  xk g 2  g j  xk g r  g1  xk  (3. 58) 

fj  2 g1 g1  2 fj g1 g1  2 fj g 2 g1  fj g r
2

    
g1 xk xl xl g12 xk xl g1g 2 xk xl g1g r xk

Equations (3.55) through (3.58) are now used to expand (3.54). After rear‐
rangement, one has that:  

1 fj N N   2 g 
x j     gˆ j 
1

2 g1
 

k 1 l 1 xk xl
1
xk xl   

sep  sep 
1 fj  N N  2 g 
   r
xk xl  
2 g r  k 1 l 1 xk xl 
sep  sep 
2 2
1  fj
2  N g  1  fj
2  N g 
    1 xk      r xk     (3. 59) 
2 g12  k 1 xk  2 g r 2  k 1 xk 
sep  sep  sep  sep 
 2 fj  N g   N g 
  1 xm     2 xm    
g1g 2  m 1 xm   m 1 xm 
sep  sep   sep 
 2 fj  N g   N g 
   r 1 xm     r xm 
g r 1g r  m 1 xm   m 1 xm 
sep  sep   sep 

48
Defining 

1  2 g~k  2 g~k 1  2 g~k


2  g~k 
2 2
  x1  x1x2    xN   (3. 60) 
2 x12 sep
x1x2 sep
2 xN 2 sep

and using (3.60) and (3.51), equation (3.59) can be rewritten as 

fj fj
x j     gˆ j     g1       g r 
1 2 2

g1 g r
sep sep

1  fj 1  fj
2 2

     g 
2 2
   g1
1 1
      r
  (3. 61) 
2 g12 2 g r 2
sep sep

 2 fj  2 fj
   g1   g 2       g r 1   g r
1 1 1 1

g1g 2 g r 1g r
sep sep

By recursively applying (3.60) we can define 

~ ~
f j 2  ~f j
2 
 gˆ j  ~  g1    ~ 2  g~r 
g 1 g r
sep sep
~ 2~
1  fj 1  fj
2

  
2 g~ 2
 g 
1 ~
1
2
 
2 g~r 2
  g~ 
1
r
2
   (3. 62a) 
1 sep sep
~ ~
2 f j 1 ~ 1 ~ 2 f j
 ~ ~  g1 g 2    ~ ~ 1 g~r 11 g~r
g g 1 2 g g r 1 r
sep sep

or  

~ 2~
f j r
2  ~1 r r  fj
 gˆ j   ~
2 
 g k   ~ ~ 1 g~k 1 g~l
k 1 g k 2 k 1 l 1 g k gl
sep sep
    (3. 62b) 

Finally, substituting (3.62) in (3.61), it is obtained that  

  x j  1 gˆ j  2  gˆ j   (3. 63) 

As in the first‐order linearization, if there are other simpler functions em‐
~ , the presented scheme can be extended. 
bedded in the definition of functions  g k

49
3.3.1.3 Third order 

The Taylor series expansion of ĝj up to third order is 

1  gˆ j 1  gˆ j
3 3

x j   1 gˆ j    2 gˆ j  x13    xN3 


6 x13 sep
6 xN3 sep

1  gˆ j 1  gˆ j
3 3

   x12 x2    xN2 xN 1     (3. 64) 


2 x12 x2 sep
2 xN2 xN 1 sep

 gˆ j
3
 gˆ j
3

 x1x2 x3    xN  2 xN 1xN


x1x2 x3 sep
xN  2 xN 1xN sep

Details are omitted. 
Then, defining 

1  g j 1  g j
3 3
 3
 g j  x  3
1 x12 x2 
6 x13 sep
2 x12x2 sep
    (3. 65) 
 g j
3
1  g j
3

 x1x2 x3    xN3


x1x2x3 sep 6 xN3 sep

and using definitions (3.51) and (3.62) we can obtain: 

rfj r  2 fj
 gˆ j  
 3
 g k  
 3
  2 g k  1 g k 
k 1 g
k k 1 g 12
sep sep

 fj
2

   g   g    g   g  
r r
    g g 2
k
1
l
1
k
2
l   (3. 66) 
k 1 l  k 1 k l sep

1 r r r  3 fj
   1 g k 1 g l  1 g m
6 k 1 l 1 m 1 g k g l g m
sep

Therefore, equation (3.64) may also be expressed as: 

  x j   1 gˆ j    2  gˆ j    3 gˆ j   (3. 67) 

3.3.2 An efficient scheme for the third‐order recursive linearization 
As emphasized above, the computation of terms  Δ(1)ẋ,  Δ(2)ẋ, and  Δ(3)ẋ is succes‐
sive. In Fig. 3.2 below, an efficient scheme for the computation of the third‐order 

50
recursive linearization is depicted that exploits the inner structure of the meth‐
od.  The  algorithm  focuses  on  the  computation  of  coefficient  matrices  corre‐
sponding  to  the  state  variables  dynamics.  A  physical  real‐valued  system  has 
been considered.  

Three  main  stages  indicated  by  dotted‐lined  rectangles  can  be  distin‐
guished. First, some basic definitions are introduced. 

Fig. 3.2 Flow diagram of the proposed algorithm for implementation.

51
3.3.2.1 Perturbation vectors 

So far, the choice of perturbation vectors has been limited to linear models [8]. 
To introduce the general ideas that follow, let the high‐order perturbation vec‐
tor, xi,j,… be defined as:  

xk  1if k {i, j,}


xi , j ,   x1  xN  with 
T
    (3. 68) 
xk  0 otherwise

and, more generally, to order k  

T
   k xi , j ,  x1k x1k 1x2  xNk    (3. 69) 

For the complex case,  

xl  1 if l  j

1x j ik   x1  xN 
T
  with xl  i if l  k   (3. 70) 
x  0 otherwise
 l

with i=√-1. 

Higher‐order  perturbed  vectors,  lxj±ik,  and  derivatives,  (l)ẋj±ik,  are  de‐


fined similarly. 

3.3.2.2 First stage 

In  this  first  stage,  corresponding  to  the  rightmost  dotted  rectangle  in  Fig.  3.2, 
just  one  state  variables  is  perturbed.  Thus,  the  linear  perturbed  vector  Δxi  in 
(3.68) is defined  for  each  perturbed  state,  and the corresponding linear deriva‐
tive (1)ẋi , the i-th column of A11, A11(i), can be calculated from 

0 
 
 
   1 x i  A11xi   A11(1)  A11(i )  A11( N )  1   A11(i )   (3. 71) 
 
 
0 

for i = 1, …, N. 

52
In this setting,  Δxi and the quadratic and cubic vectors  Δ2xi,i and  Δ3xi,i are 
used as described in the definition (3.69). 

The columns of F2 related to the terms Δxi2 are obtained by evaluating the 
equations  of  second  order  presented  in  the  previous  section.  Using  this  proce‐
dure, the vector of perturbed derivatives Δ(2)ẋi,i is obtained, which is equal to the 
column of F2 corresponding to Δxi2, F2(i,i): 

0 
0 
 
 
                  (2) x i ,i  F2  2 xi ,i  F2(1,1) F2(1,2)  F2(i ,i )  F2( N , N )     F2( i ,i )   (3. 72) 
1 
 
 
0 

Additionally, the columns of  F3 related to the terms Δxi3 are computed by 
iteratively  evaluating  the  linearized  dynamical  equations  to  obtain  Δ(3)ẋi,i,i,  as 
shown below 

0
0
 
 
          (3) x i ,i ,i  F33xi ,i ,i  F3(1,1,1) F3(1,1,2)  F3(i ,i ,i )  F3( N , N , N )     F3(i ,i ,i ) (3. 73) 
1 
 
 
0

3.3.2.3 Second stage 

In  the  central  dotted  rectangle  of  Fig.  3.2,  two  different  states  are  perturbed 
simultaneously; recalling Section 3.3.2.1, the vector Δxi,j for i ≠ j is defined.  

This step defines the vector Δ2xi,j in equation (3.69), which is used to eval‐
uate the equations of second order to obtain Δ(2)ẋi,j. This results in 

2  x i , j  F2 2 xi , j  F2(i ,i )  F2( j , j )  F2(i , j )  


2
  (3. 74) 

53
so that 

  F2(i , j )  22 x i , j  F2(i ,i )  F2( j , j )   (3. 75) 

In order to obtain the columns  F3(iik) and  F3(ikk), use is made of (3.70) with 


Δxj+ik, which allows to define the third‐order perturbed vector Δ3xj+ik. 

Evaluating the corresponding recursive third‐order equations with  Δ3xj+ik 
we obtain that Δ(3)ẋj+ik can also be expressed as:  

   (3) x j ik  F33x j ik  F3( j , j , j )  iF3( k ,k ,k )  iF3( j , j ,k )  F3( j ,k ,k )   (3. 76) 

so that 

  F3( j , k ,k )  Re  (3) x j ik  F3( j , j , j )  iF3( k ,k , k )    (3. 77) 

  F3( j , j ,k )  Im  (3) x j  ik  F3( j , j , j )  iF3( k ,k ,k )    (3. 78) 

Therefore,  all  the  columns  F3(iik)  can  be  rapidly  obtained  with  the  latter 
procedure. 

3.3.2.4 Third stage 

Finally,  for  the  third  stage  of  the  algorithm,  three  different  state  variables  are 
perturbed at the same time for computing the columns F3(ijk) by using the vector 
Δxi,j,k, properly defined according to (3.68). 

The vector  Δxi,j,k allows to define  Δ3xi,j,k. Both coefficients are used to ob‐


tain Δ(3)ẋi,j,k, defined as:  

 (3) x i , j ,k  F33xi , j ,k  F3(i ,i ,i )  F3( j , j , j )  F3( k ,k ,k )  F3(i ,i , j ) 


    (3. 79) 
 F3( j , j ,i )  F3(i ,i ,k )  F3( k ,k ,i )  F3( j , j ,k )  F3( k ,k , j )  F3(i , j ,k )

where, in terms of this notation, 

F3(i , j ,k )  (3) x i , j ,k  F3(i ,i ,i )  F3( j , j , j )  F3( k ,k ,k )  F3(i ,i , j ) 


    (3. 80) 
 F3( j , j ,i )  F3(i ,i ,k )  F3( k ,k ,i )  F3( j , j ,k )  F3( k ,k , j )

54
3.3.3 Recursive linearization for complex‐valued systems 
The third‐order recursive linearization procedure presented above has been de‐
veloped  for  analyzing  physical  systems  where  the  state  variables  evolve  in  a 
subset of the real numbers space.  

In some cases, however, a linear transformation may be required to ana‐
lyze  the  dynamical  equations  in  a  complex‐valued  space.  In  this  sense,  the  re‐
cursive  linearization  method  can  be  easily  expanded  to  the  complex  numbers’ 
domain. In fact, only the part presented in equation (3.76) through (3.78) corre‐
sponding  to  the  computation  of  F3(jkk)  and  F3(jjk)  needs  to  be  modified.  This  is 
because, as the system itself is complex, the presented strategy does not provide 
a means to distinguish the term F3(jkk) from F3(jjk).  

Therefore,  for  complex‐valued  systems,  the  perturbed  vectors  Δxj+ik  and 


Δxj-ik are used according to expression (3.70). Then, the vectors  Δ3xj+ik and  Δ3xj-ik 
rise, and by evaluating the linearized dynamical equations of the system we ob‐
tain Δ(3)ẋj+ik and Δ(3)ẋj-ik, respectively.  

The latter term can then be expressed as:  

   (3) x j ik  F3 3x j ik  F3( j , j , j )  iF3( k ,k ,k )  iF3( j . j ,k )  F3( j ,k ,k )   (3. 81) 

Then, from (3. 76) we can define two terms, 1 and 2, as:  

  1   (3) x j ik  F3( j , j , j )  iF3( k ,k ,k )  iF3( j , j ,k )  F3( j ,k ,k )   (3. 82) 

   2   (3) x j ik  F3( j , j , j )  iF3( k ,k ,k )  iF3( j , j ,k )  F3( j ,k ,k )   (3. 83) 

so that the columns F3(jkk) from F3(jjk) can be calculated as follows 

1
  F3( j , k , k )   1   2    (3. 84) 
2

  F3( j , j ,k )  i  2  F3( j ,k ,k )    (3. 85) 

The same procedure may be used to obtain higher order cascade approx‐
imations. 

55
3.4 Concluding remarks
In this Chapter, a novel methodology for linearizing non‐linear dynamical sys‐
tems up to third  order has been presented, but the approach in general can be 
extended to higher dimensions. For later discussion, the technique is referred to 
as the “Recursive Linearization” (RL) method and is based on the chain rule and 
perturbation theory.  

The mathematical formulation of the RL method was developed to obtain 
simple  formulae  for  its  application  to  (real‐  or  complex‐valued)  non‐linear  dy‐
namical systems expressed as a set of ordinary differential equations. Addition‐
ally,  a  straightforward  scheme  for  computing  the  linear,  quadratic,  and  cubic 
parts of the model was presented for its implementation.  

An illustrative case of application of the reported methodology to multi‐
machine  power  systems  is  presented  further  in  Chapter  6,  whereas  a  synthetic 
size‐variable  non‐linear  system  is  utilized  in  Chapter  8.  Numerical  results  for 
the synthetic system and for several multi‐machine power systems are also pre‐
sented in the latter chapter.  

3.5 References

[1] L.  Perko,  Differential  Equations  and  Dynamical  Systems,  3rd  ed.,  Springer‐
Verlag, New York, USA, 2001. 

[2] R. H. Enns & G. C. McGuire, Nonlinear Physics with Maple for Scientists and 
Engineers, Springer, New York, 2000. 

[3] J.  J.  Sanchez‐Gasca,  V.  Vittal,  M.J.  Gibbard,  A.R.  Messina,  D.J.  Vowles,  S. 
Liu, and U.D. Annakkage, “Inclusion of higher order terms for small‐signal 
(modal) analysis: Committee report‐task force on assessing the need to in‐
clude  higher  order  terms  for  small‐signal  (modal)  analysis,”  IEEE  Trans. 
Power Syst., vol. 20, no. 4, pp. 1886‐1904,  Nov. 2005. 

56
[4] C. A. Tsiligiannis and G. Lyberatos, “Normal forms, resonance, and bifur‐
cation  analysis  via  the  Carleman  linearization,”  J.  Math.  Anal.  Appl.,  vol. 
139, pp. 123‐138, 1989. 

[5] E.  Salajegheh  and  J.  Salajegheh,  “Optimum  design  of  structures  with  dis‐
crete variables using higher order  approximation,” Comput.  Methods  Appl. 
Mech. Engrg., vol. 191, no. 13‐14, pp. 1395‐1419, Jan. 2002. 

[6] V.  Vittal,  W.  Kliemann,  S.  K.  Starret,  and  A.  A.  Fouad,  “Analysis  of 
stressed power systems using normal forms,” Proc. of the IEEE ISCAS, vol. 
5, pp. 2553‐2556, USA, 1992. 

[7] P. E. Hill, W. Murray, and M. H. Wright, Practical Optimization, Academic 
Press, 1981.  

[8] J.  Persson  &  L.  Söder,  ʺComparison  of  three  linearization  methods,ʺ  Proc. 
of the 16th PSCC, Glasgow, 14‐18 July 2008. 

[9] W. Fellin et al, Analyzing Uncertainty in Civil Engineering, Springer, Berlin, 
Germ. 2005.  

[10] P. W. Sauer, M. A. Pai, and J. H. Chow, Power System Dynamics and Stabil‐
ity: With Synchrophasor Measurement and Power System Toolbox, Second edi‐
tion, John Wiley & Sons, NJ, USA, 2017.  

[11] E. Kreyszig, Advanced Engineering Mathematics, John Wiley & Sons, 2005. 

57
Chapter 4
Koopman eigenfunctions-based
extended model
Koopman mode analysis has shown considerable promise for the analysis and
characterization of the global behavior of power system transient processes recorded us-
ing wide-area sensors.

In this chapter, a new framework for feature extraction and mode decomposition
of model-based power system representations based on the Koopman operator is present-
ed. By combining perturbation theory with Koopman mode analysis, the proposed tech-
nique allows obtaining a closed-form approximation to the system response, in which the
influence of mode interactions can be singled out in an efficient manner.

First, a systematic and general high-order theory for studying non-linear behav-
ior in large power system models is presented. Drawing on this framework, weak non-
linearities up to an arbitrary order can be explicitly represented using conventional per-
turbation analysis. Non-linear behavior is then interpreted as the projection of the
Koopman operator eigenfunctions of an extended coordinate system onto the physical
variables of the system.

Methods for the specification and selection of the initial conditions of the
Koopman eigenfunctions-based extended model are proposed, and indices to measure
non-linear behavior are developed. Also, algorithms for efficient computation and spar-
sity promoting criteria intended to obtain reduced-order non-linear models are suggest-
ed.
4.1 Mathematical formulation
In this section, the linear and non-linear Koopman eigenfunctions obtained from
the linearization of a dynamical system are used to construct an extended ana-
lytical system that includes these eigenfunctions up to an arbitrary order.

First, some basic concepts are introduced.

4.1.1 Perturbed non-linear models


In the perturbation theory framework, a dynamical system model is reinterpret-
ed as a perturbed model [1], [2]. Let x  x1 x2  x N T . Around a stable sys-
tem equilibrium point (sep), each of the state variables is re-expressed as:

x j t   x j sep  x j t  (4. 1)

where xjsep is a term in equilibrium and Δxj(t) is a time-varying incremental part.

It follows from (4.1) that

dx j t  dx j sep dx j t 
 x j t     x j t 
dt dt dt

Therefore, under perturbation theory, the non-linear expression describ-


ing the time evolution of a dynamical system can be expressed as:

x j t   ĝ j ( x j  x j (t )) (4. 2)
sep

In order to avoid confusion, it must be highlighted that the time-varying


incremental term Δxj is the value of the j-th state variable determined from the
non-linear equations of the dynamical system after subtracting its value in equi-
librium.

Henceforth, the representation of the state variables as shown in expres-


sion (4.1) and (4.2) is adopted in order to obtain perturbed models of the dynam-
ical systems.

59
4.1.2 Transformation to an upper-triangular system
In order to obtain an expression for the analyzed dynamical system in terms of
the Koopman eigenfunctions obtained from its linearization, we use Property
2.4 and the results of Section 2.2.1 and propose a homeomorphism that trans-
forms the physical system

x  ĝx (4. 3)

into a system that is diagonal under a first-order linearization:

y  hy  (4. 4)

The required homeomorphism is, in fact, given by the matrix of left ei-
genvectors of the Jacobian of ĝ, (V11)-1, so that (4.3) is transformed into [1]

y  h  y   V111 gˆ  V11y  (4. 5)

From perturbation theory, equation (4.5) can be expanded about a stable


system equilibrium point (sep), as the power series

y  Λy  H 2 y   H 3 y   H 4 y    (4. 6)

where Hk (Δy)=[h1k (Δy) h2k (Δy) ∙∙∙ hNk (Δy)]T, and h1k (Δy) is the k-th order
non-linear function of Δy corresponding to Δẏk [1]-[3].

Define now the Koopman eigenfunctions in the y-coordinates space as


φj(Δy) = Δyj, φjk(Δy) = φj(Δy)φk(Δy) = ΔyjΔyk,…

Using this framework, the terms hjk(Δy) in the Koopman expansion (4.6)
can be expressed as [2], [3]

h2 k y    Nj1  lN j hk2 jl y j yl   Nj1  lN j hk2 jl  j y l y  (4. 7)

h3k  y    j 1  l  j  m l hk3 jlm y j yl ym 


N N N

  j 1  l  j  ml hk3 jlm  j  y  l  y  m  y 


N N N

(4. 8)

60
It follows that each term of equations (4.6) can be rewritten in terms of the
linear and non-linear Koopman eigenfunctions as [4]:

y k  k k y    Nj1 lN j hk2 jl  j y l y  


(4. 9)
  Nj1 lN j  m  l hk jlm  j y l y  m y   
N 3

Assume now that the variables Δyk are the observables of an extended
system characterized by the dynamics of the Koopman eigenfunctions φj up to
the n-th order, such that:

 Φ1  y    Λ1 H12 H13 H1n   Φ1  y  


    
Φ 2  y    Λ2 H 23 H 2 n  Φ 2  y  
 Φ3  y     Λ3   Φ3  y    H nord Φ nord  y  (4. 10)
    
   0  
Φ  y    Λ n  Φ n  y  
 
 n  

where Φk (Δy) denotes the vector of Koopman eigenfunctions of order k, Λ1 =


diag (λ1, λ2,…,λN), Λ2 = diag (2λ1, λ1+λ2, λ1+λ3, …, 2λ2, …, 2λN), …, Λn = diag (nλ1,
(n-1)λ1+λ2, …, (n-2)λ1+2λ2, …, nλ2, …, nλN).

In the equation above, the matrices H1k, k=1,…, n are constant matrices de-
fined as [2], [3]

  k1  y   k1  y   k1  y  


 k 
 1  y  1  y  2  y   N  y  
k k 1

 k 
   2  y   k  2  y   k  2  y  
H1k   1  y  1  y  2  y   N  y  
k k 1 k

 
 
 k 
   N  y   k N  y    N  y  
k

   y k 1  y 
k 1
2  y   N  y   sep
k
 1

Matrices Hjk, j = 2,…, n-1, k = j + 1, …, n, are linear combinations of the co-


efficients of the matrix H1l with l = k-j+1.

61
Formally,

H jk  Flineal  H1,k  j 1  (4. 11)

with Flineal being a linear function. More attention is focused on (4.11) in Section
4.5. By now, it is assumed that the sub-matrices of the system (4.10) are given.

Equation (4.10) provides an approximation of order n to the discrete-time


Koopman operator UT of equation (2.3), where T = Hnord, in which the terms of
order n+1 or greater are neglected. This representation is closely related to the
Carleman linearization procedure and provides an alternative form to compute
the normal form representation of a non-linear dynamical system [5] as dis-
cussed below.

4.1.3 Transformation to a diagonal system


It can be noted that in the extended system of equation (4.10), Hnord is a constant
upper-triangular matrix, so that its spectrum is determined by the terms on its
diagonal [1].

In order to provide some intuition for the proposed transformation as-


1
sume that no resonance conditions are found. A homeomorphism Wnord is pro-
1 1
posed such that Wnord Λnord = Wnord Hnord – see Section 2.2, where:

 Λ1 
 Λ2
0 
Λ nord   (4. 12)
 0  
 Λ n 

1
In terms of this representation, the new coordinates are znord = Wnord
Φnord(Δy), and

z nord  Λ nord z nord (4. 13)

Modifications to this basic approach to account for resonance conditions


are described in [5] and are, therefore, not discussed here.

62
The rest of the process is similar to other related analysis techniques. No-
tice from (4.13), that we can define the terms ψj(znord) = zj as the Koopman eigen-
functions of the Koopman operator UΛnord at eigenvalues j  diag(Λnord) in the z-
coordinates system.

The solution of system (4.13) is:

Ψnord  t   z nord  t   exp  Λnord t  z nord  0  (4. 14)

where znord(0) is the vector of initial conditions in the z-coordinates.

4.1.4 Initial conditions for the Koopman eigenfunctions


The computation of the vector of initial conditions znord(0) gives a closed-form
solution for the characterization of the non-linear response (4.10), for the n-th
order of approximation. This characterization is accurate if, as supposed, the
terms of order n+1 or greater are negligible [2], [3].

The procedure can be summarized as follows:

a) Given the initial conditions of the physical variables Δx0, calculate the
vector of initial conditions in the y-coordinates, Δy0 =V-1Δx0 [3],

b) Compute the initial point for the evolution of the extended system of
equation (4.10) as a function of the Koopman eigenfunctions φj(Δy), start-
ing with Δyj(0) as:

 j y 0  y j 0 
 j y 0 k y 0  y j 0 y k 0 
(4. 15)
 j y 0 k y 0 l y 0   y j 0 y k 0 yl 0 

c) Compute the vector znord(0) as:

z nord  0   Wnord
1
Φnord  0  (4. 16)

63
4.1.5 Solution of the non-linear response in physical variables
An advantage of using the Koopman-based approach is its simplicity. Once the
solution of equation (4.13) is obtained, the non-linear response of the system can
be expressed in the physical variables of the system as in (4.4).

First, expression (4.14) is transformed to the coordinates of the Koopman


eigenfunctions-based extended model through the transformation

Φ nord t   Wnord Ψ nord t  (4. 17)

or more explicitly as:

 Φ1 t  I N  N W12 W13  W1n   e Λ1t Ψ1 0  


Φ t   0  
 2   W22 W23  W2 n  e Λ 2t Ψ 2 0 
Φ 3 t    0 0 W33  W3n   e Λ 3t Ψ 3 0  (4. 18)
    
           
Φ n t   0 0 0  Wnn  e Ψ n 0 
 Λ n t

where IN×N is an N×N identity matrix, sub-matrices Wjj, j = 2, 3, …, n are diagonal


matrices of complex coefficients, and Wjk, j =2, 3, …, n, k = j+1, …, n are complex-
valued matrices.

It can be seen from equation (4.18) that the time evolution of the
Koopman modes of order 2 to n, i.e. eΛ2t, eΛ3t, …, eΛnt, are projected onto the first-
order Koopman eigenfunctions Φ1(t) through the matrices W12, W13, …, and W1n,
respectively.

Transforming back to the physical domain results in

x(t )  V11 Φ1 (t ) (4. 19)

where V11 is the matrix of right eigenvectors of the linear state representation. In
terms of the notation above,

x  t   V1ord e Λ1t Ψ1  0   V2ord e Λ2t Ψ 2  0  


(4. 20)
 V3ord e Λ3t Ψ3  0    Vnord e Λnt Ψ n  0 

64
which may be written as

x  t   v1 1 (0)e 1t   v N  N (0)e N t 


 v1,1 1,1 (0)e 2 1t   v N , N N , N (0)e 2 N t  (4. 21)
 v1,1,1 1,1,1 (0)e 31t   v N , N , N N , N , N (0)e 3N t 

with Ṽ1ord=V11, and

V2 ord  V 11W12  [ v11 v1 2 vN N ]


V3ord  V 11W13  [ v111 v11 2 vN N N ] (4. 22)

Here, vectors ṽj, ṽjk, ṽjkl, … represent the spatial structures of the
Koopman modes of first and higher order, and the vectors ṽj coincide with the
linear right eigenvectors of the linear stability modes of the system [4].

It should be emphasized that even if the vectors ṽj are orthogonal, the


vectors ṽjk, ṽjkl, are not necessarily orthogonal [4], [6].

4.2 Outline of the proposed framework


A simplified flowchart illustrating the proposed framework is given in Fig. 4.1.
Starting with a given operating point, the analytical system model is expanded
into a Taylor series as [3]

x  A11x  F2  x   F3  x   (4. 23)

Then, the linear transformation Δx=V11Δy is applied to (4.23) to obtain


(4.6). Equations (4.7) through (4.22) are used to compute fundamental system
behavior.

The inclusion of measured data is possible due to recent advances in dy-


namic state estimation [7], [8]. Basic outputs of the hybrid framework include
modal parameters, observability-based measures, and optimum observables.

The basic algorithm of the proposed methodology can be summarized as


shown in Algorithm 4.1.

65
Fig. 4.1. Flowchart of the proposed methodology. Extensions to the basic model are indicated
with an italic font.

Algorithm 4.1. The perturbed KMA procedure

Given a non-linear model of the form (4.3):


1) Compute the system response using a step-by-step transient stability
program and determine the conditions x0= xcl at the end of the disturb-
ance.
2) Approximate the post-disturbance trajectory behavior using the n-th
order model (4.23). Compute the modal decomposition of the linear
model, A=V11Λ1U11.
3) Transform (4.23) into (4.6) using the procedure in Section 4.1.2.
4) Construct the extended model (4.10):
a. Compute the decomposition Λnord (Wnord)-1=(Wnord)-1Hnord.
b. Compute z(0) using the procedure in Section 4.1.4.
5) Generate time-domain solutions in the z, y, and physical variables using
(4.14), (4.17), and (4.19), respectively.

Several refinements to this model are possible such as the computation of


sensitivities to changing operating conditions, the analysis of actuated control
systems, and the computation of optimum observables (sensor placement).

66
4.3 Single-machine, infinite-bus system: an illustrative example
To illustrate the development of the theory, consider a single-machine, infinite-
bus (SMIB) system shown in Fig. 4.2 [9].

Fig. 4.2. Single-machine, infinite bus test system. Parameters are expressed in pu on a 2200 MVA
base [9].

Two powerflow cases are considered:

a) A low-stress condition with P = 0.90 pu, Q = 0.30 pu, ET =1.0 pu, θ =36.0°,
and

b) A high-stress condition with P = 1.12 pu, ET =1.0 pu, θ = 47.0°, Q = 0.49 pu.

In both cases, the perturbation of interest is an increment on the angular


position of the rotor, ∆δ, of 30 degrees. Additionally, the maximum transferred
active power, PMax, is equal to 1.1762 pu and the damping coefficient of the ma-
chine is D =10 pu [10].

Following [9] and [11], the fourth-order machine model is given by:

gˆ1 (x)    0 
1
gˆ 2 (x)    (PMech  Te  D )
2H
1 (4. 24)
gˆ 3 (x)  E 'q  (E fd  E 'q  (X d  Xd )id )
T 'd 0
1
gˆ 4 (x)  Ed  [(X q  Xq )iq  Ed ]
T 'q 0

where the above symbols have the usual meaning [11].

67
Defining x=[δ Δω E’q E’d]T and applying the transformation x=V11y, the
generator terminal voltages E’D and E’Q can be expressed in D-Q (network coor-
dinates) as:

 4   4   4   4 
E ' D y     v4 j y j  sin   v1 j y j     v3 j y j  cos   v1 j y j 
 
 j 1   j 1   j 1   j 1 
 4   4   4   4 
E 'Q y     v3 j y j  sin   v1 j y j     v4 j y j  cos   v1 j y j 
 
 j 1   j 1   j 1   j 1 

where it is assumed that:

E ' D  E 'd sin ( )  E 'q cos ( )

E 'Q  E'q sin ( )  E'd cos( )

It then follows that:

ITre (y)  jITim (y)  [G grid  jBgrid ][E 'D (y)  jE 'Q (y)]

where Ggrid+jBgrid is the matrix of transfer admittances between the system ma-
chines.

Transforming to d-q coordinates,

  
id (y )  ITre (y )sin  4j 1 v1 j y j  ITim (y )cos  4j 1 v1 j y j 
iq (y )  ITre (y )cos 4j 1 v1 j y j   ITim (y )sin 4j 1 v1 j y j 

and

ed  y    j 1 v4 j y j  R aid  y   Xd iq  y 
4

eq  y    j 1 v3 j y j  R aiq  y   Xd id  y 
4

with

PE (y)  ed (y) id (y)  eq (y) iq (y)

Te(y)  PE (y)  Ra  id2 (y)  iq2 (y) 

68
Using the procedure presented in Section 4.1.2, the system of equations
(4.24) becomes

gˆ1 (V11y )  0  j 1 v2 j y j
4

gˆ 2 (V11y ) 
1
2H

PMech  Te(y )  D  j 1 v2 j y j
4

gˆ 3 (V11y ) 
1
T'd 0

E fd   j 1 v3 j y j   X d  Xd  id (y )
4

gˆ 4 (V11y ) 
1 
T 'q 0 
 X q  Xq  iq (y )   j 1 v4 j y j 
4

so that

 y1   h 1 (y )   gˆ1 (V11y ) 
 y   h (y )  ˆ 
 2    2   V 1  g 2 (V11y ) 
 y3   h3 (y )  11
 gˆ 3 (V11y ) 
     
 y4   h4 (y )   gˆ 4 (V11y ) 

It is observed from the above analysis that equation (4.6) can be obtained
directly from the above-transformed equations, or alternatively using the proce-
dure presented in Section 4.2.

Extensions to more complex system representations is immediate.

4.4 Non-linearity indexes


To assess the relative importance of the non-linear terms into a dynamical re-
sponse, a non-linear interaction index is introduced here inspired by [12].

First, recalling (4.18), the cubic expression for the time-evolving Koopman
eigenfunctions in Jordan space φj(t) can be expressed as follows:

 j (t )   j (t )   k 1  l k w12 j ( k ,l ) k ,l (t )   k 1  l k  ml w12 j ( k ,l ,m) k ,l ,m (t ) (4. 25)


N N N N N

where the terms w12j(k,l) and w13j(k,l,m) are weights pondering the relative im-
portance of non-linear terms in system response.

69
Therefore, a non-linearity index (NInj), of order n, is defined as

| max( w1,n j  k ,l ,  max( k ,l , (t ))) |


NIn j 
| max( j (t )) |

where max(ψk,l…(t))=ψk,l…(0) for stable modes and max(ψk,l…(t))=ψk,l…(T) for unsta-


ble modes, and w1nj(k,l,..) represents the interaction coefficient of the linear modes
λk, λl, …(refer to (4.25)) for the Koopman eigenfunction φj contained in matrix
W1n in (4.18).

This index provides a measure of the effect of the terms of order n relative
to the first order terms.

4.5 Efficient computation of PKMA


One of the most limiting drawbacks of the perturbation-based methods for ana-
lyzing non-linearities in dynamical systems is the huge computational effort [3],
[13], [14] required to obtain the Koopman coefficients.

In order to overcome this limitations weakness in the perturbed


Koopman mode analysis (PKMA) framework, two strategies are proposed in
this research:

1. The utilization of a recursive linearization (RL) procedure to compute


the matrices of coefficients that are the input data for the PKMA
method.

2. An efficient procedure to compute some steps of the perturbed


Koopman mode analysis method.

In this section, we focus on the second point, by means of the following


four efficient sub-processes:

1. Efficient computation of the higher-order matrices of coefficients.

2. Efficient eigendecomposition of matrix Hnord.

70
3. Efficient solution of the Koopman extended system.

4. Utilization of sparsity promoting criteria to reduce the computational


burden.

These proposed efficient procedures are presented below for a cubic


Koopman eigenfunctions-based extended model.

4.5.1 Higher-order matrices of coefficients


In order to derive an efficient numerical algorithm to compute the matrices H kj ,
j = 2,… , n-1, k = j + 1, …, n, concepts from the recursive linearization method are
used here.

Following Section 4.1.2, the first-, second-, and third-order Koopman ei-
genfunctions in the Jordan space are:

 j y  yj (4. 26)

 j ,k  y   y j ,k  y j yk (4. 27)

 j ,k ,l  y   y j ,k ,l  y j yk yl (4. 28)

This leads to:

 j y  yj (4. 29)

 j ,k  y   y j ,k  y j yk  yk y j (4. 30)

 j ,k ,l  y   y j ,k ,l  yk yl y j  y j yl yk  y j yk yl (4. 31)

The Taylor series of expression (4.26), according to equations (4.6) and


(4.10), and with the recursive linearization’s notation of equation (3.50), is:

N N N N N
 j  y   y j   j y j   h2j k ,l yk yl   h3j k ,l ,m yk yl ym (4. 32)
k 1 l  k k 1 l  k m l
 (1) y j
 (2) y j  (3) y j

These expressions are used below for the following derivations.

71
4.5.1.1 Quadratic eigenfunctions

Use of (3.57) in (4.30) results in:

 
 (1) j ,k  y    (1) y j ,k  [ y j yk  yk y j ]sep y j  [ y j yk  yk y j ]sep yk 
y j yk (4. 33)
 yksep y j  yk sep  (1) y j  y j sep  (1) yk  y jsep yk  0

With these assumptions, the values at equilibrium of the Jordan variables


yj, yk, as well as their time derivatives, are equal to zero.

Then, the application of (3.72) to equation (4.30) gives

(2) j ,k  y    (2) y j ,k  yk sep  (2) y j  y j sep  (2) yk  y j  (1) yk  yk  (1) y j 


(4. 34)
 y j (1) yk  yk (1) y j  ( j  k )y j yk

which is equivalent to matrix Λ2 defining the effect of the quadratic eigenfunc-


tions over themselves, as seen in (4.10).

Now, use of (3.82) in (4.30) yields

 (3) j ,k  y    (3) y j ,k  yk sep  (3) y j  y j sep  (3) yk  y j  (2) yk  yk  (2) y j 


N N N N (4. 35)
 y j  (2) yk  yk  (2) y j  y j  h2k l ,m yl ym  yk  h2j l ,m yl ym
l 1 m l l 1 m l

This expression can be used to efficiently compute the matrix H 32 of


(4.10).

4.5.1.2 Cubic eigenfunctions

Following a similar procedure to that of the quadratic formulation, application


of (3.57), (3.72) and (3.82) to (4.34) gives

 (1) j ,k ,l  y    (1) y j ,k ,l  ( yk sep yl sep  yl sep yk sep )y j  ( y j sep yl sep  yl sep y j sep )yk 
 ( y j sep yksep  yk sep y jsep )yl  yk sep yl sep  (1) y j  (4. 36)
 y j sep yl sep  (1) yk  y j sep yk sep  (1) yl  0

72
 (2) j ,k ,l  y    (2) y j ,k ,l  yk sep yl sep  (2) y j  y j sep yl sep  (2) yk  y j sep yk sep  (2) yl 
 yl sep y j yk  yksep y j yl  yl sep yk yl  y j sep yk  (1) yl  yk sep y j  (1) yl  (4. 37)
 y j sep yl  (1) yk  yl sep y j  (1) yk  yk sep yl  (1) y j  yl sep yk  (1) y j  0

and

 (3) j ,k ,l  y    (3) y j ,k ,l  yk sep yl sep  (3) y j  y j sep yl sep  (3) yk  y j sep yk sep  (3) yl 
 y j sep yk  (2) yl  yk sep y j  (2) yl  y j sep yl  (2) yk 
 yl sep y j  (2) yk  yk sep yl  (2) y j  yl sep yk  (2) y j  (4. 38)
 y j yk  (1) yl  y j yl  (1) yk  yk yl  (1) y j 
 ( j  k  l )y j yk yl

This is the analytical explanation for the appearance of two zero-valued


matrices and the diagonal matrix Λ3 in the third row of Hnord, as shown in equa-
tion (4.10).

The projection of the input signals’ effect on the higher-order eigenfunc-


tions can be derived in a similar way.

4.5.2 Efficient eigendecomposition of the extended system


To gain insight into the structure of matrix Wnord of (4.18), a simple two-state
example is considered.

In this representation, the cubic extended matrix, H3ord, is

1 0 1
h21,1 1
h21,2 1
h22,2 1
h31,1,1 1
h31,1,2 1
h31,2,2 1
h32,2,2 
 
0 2 h221,1 h221,2 h22 2,2 h321,1,1 h321,1,2 h321,2,2 2
h3 2,2,2 
0 0 21 0 0 h31,11,1,1 h31,11,1,2 h31,11,2,2 h31,12,2,2 
 
0 0 0 1  2 0 h31,21,1,1 h31,21,1,2 h31,21,2,2 h31,2 2,2,2 
H 3ord 0 0 0 0 22 h32,21,1,1 h32,21,1,2 h32,21,2,2 h32,2 2,2,2 
 
0 0 0 0 0 31 0 0 0 
0 0 0 0 0 0 21  2 0 0 
 
0 0 0 0 0 0 0 1  22 0 
 
0 0 0 0 0 0 0 0 32 

where the terms h3j ,k l ,m, p can be determined with the help of definition (4.35).

73
To understand the structure of the columns w j of W3ord, we evaluate the
eigenvalue problem W3ord w j=  j w j:

1 0 1
h21,1 1
h21,2 1
h22,2 1
h31,1,1 1
h31,1,2 1
h31,2,2 1
h32,2,2   w1 j    j w1 j 
    
0 2 h221,1 h221,2 h22 2,2 h321,1,1 h321,1,2 h321,2,2 h32 2,2,2   w2 j   j w2 j 
0 0 21 0 0 h31,11,1,1 h31,11,1,2 h31,11,2,2 h31,12,2,2   w3 j    j w3 j 
    
0 0 0 1  2 0 h31,21,1,1 h31,21,1,2 h31,21,2,2 h31,2 2,2,2   w4 j   j w4 j 
0 0 0 0 22 h32,21,1,1 h32,21,1,2 h32,21,2,2 h32,2 2,2,2   w5 j     j w5 j 
    
0 0 0 0 0 31 0 0 0   w6 j    j w6 j 
 
0 0 0 0 0 0 21  2 0 0   w7 j   j w7 j 
  
0 0 0 0 0 0 0 1  22 0   w8 j    j w8 j 
    
0 0 0 0 0 0 0 0 32   w9 j    j w9 j 

(4. 39)

where  j is an eigenvalue of H3ord and N=2. The constants N2=3 and N3=4 denote
the number of quadratic and cubic Koopman eigenfunctions, respectively.

From equation (4.39), it can be easily noted that w 1=e1 and w 2=e2, where
ej is the j-th column of the identity matrix IN3ord×N3ord, and N3ord=N+N2+N3.

~
Then, for the subset  j {2λ1, λ1+ λ2, 2λ2}, we have that w kj =0 for k=3,…,
N3ord, j=3, 4, 5, and j≠k. Furthermore, by setting w jj=1, it can be defined that:

h2l j ,k
w2l j ,k  (4. 40)
 j  k  l
~
Now, for  j {3λ1, 2λ1+ λ2, λ1+ 2λ2, 3λ2}, it can be found that w kj =0 for
j,k=6,…, N3ord, and j≠k. Once again, w jj can be set to a unitary value, resulting in:

h3m, p j ,k ,l
w3m, pj,k ,l  (4. 41)
( j  k  l )  (m   p )

and

h3m j ,k ,l   p 1  q  p h2m p ,q w3p ,qj ,k ,l


N N
m
w  (4. 42)
3 j , k ,l
 j  k  l  m

74
Using expressions (4.40) through (4.42), W3ord can be rewritten as

 I N N W12 W13 
W3ord  0 N 2 N I N 2 N 2 W23  
 0 N 3 N 0 N 3 N 2 I N 3 N 3 
1 0 w121,1 w121,2 w12 2,2 1
w31,1,1 1
w31,1,2 1
w31,2,2 w31 2,2,2 
 2 2 
0 1 w21,1 w21,2 w222,2 2
w31,1,1 2
w31,1,2 2
w31,2,2 w322,2,2 
0 0 1 0 0 w31,11,1,1 w31,11,1,2 w31,11,2,2 w31,12,2,2 
 
0 0 0 1 0 w31,21,1,1 w31,21,1,2 w31,21,2,2 w31,22,2,2 
 0 0 0 0 1 w32,21,1,1 w32,21,1,2 w32,21,2,2 w32,22,2,2 
 
0 0 0 0 0 1 0 0 0 
0 0 0 0 0 0 1 0 0 
 
0 0 0 0 0 0 0 1 0 
 
0 0 0 0 0 0 0 0 1 

The computation of W3ord with these formulations is much easier and


more efficient than the utilization of conventional eigendecomposition algo-
rithms.

It can be noted that when all the diagonal elements of W3ord are chosen to
be equal to 1, the definitions (4.40) and (4.42) are consistent with the standard
normal forms method [3], [14].

In fact, definitions (4.40) through (4.42) can be used to obtain participa-


tion factors and residues, in a similar way to the normal forms method [15].

However, for other cases (when the norm of w j must be unitary, for in-
stance) the computed non-linear participation factors and residues would not be
the same for both techniques.

4.5.3 Efficient computation of initial conditions


The first step to obtaining the time evolution of the extended system in physical
coordinates is to calculate z3ord(0). Here, we take again the two-state system to
expand equation W3ord z3ord = Φ3ord as follows

75
 y10  1 0 w121,1 w121,2 w12 2,2 1
w31,1,1 1
w31,1,2 1
w31,2,2 w31 2,2,2   z10 
 y   2 2  
 20  0 1 w21,1 w21,2 w222,2 2
w31,1,1 2
w31,1,2 2
w31,2,2 w322,2,2   z20 
 y1,10  0 0 1 0 0 w31,11,1,1 w31,11,1,2 w31,11,2,2 w31,12,2,2   z1,10 
    
 y1,20  0 0 0 1 0 w31,21,1,1 w31,21,1,2 w31,21,2,2 w31,22,2,2   z1,20 
 y1,20   0 0 0 0 1 w32,21,1,1 w32,21,1,2 w32,21,2,2 w32,22,2,2   z1,20  (4. 43)
    
 y1,1,10  0 0 0 0 0 1 0 0 0   z1,1,10 
 y  0 0 0 0 0 0 1 0 0   z1,1,20 
 1,1,20    
 y1,2,20  0 0 0 0 0 0 0 1 0   z1,2,20 
 y    
 2,2,20  0 0 0 0 0 0 0 0 1   z2,2,20 

where yj0=yj(0), yjk0=yj(0)yk(0), and yjkl0=yj(0)yk(0)yl(0).

It can be seen from equation (4.43) that

z jkl 0  y jkl 0

N N N
z jk 0  y jk 0    w3jk lmp zlmp 0
l 1 m l p  m

N N N N N
z j 0  y j 0   k2j kl zkl 0   k3j klm zklm 0
k 1 l  k k 1 l  k m  l

1
whose use is simpler than the direct computation of W3ord .

4.5.4 Sparsity-promoting criteria


With the aim of increasing the sparsity of the third-order system, matrix H3ord is
first rewritten in an extended manner:

 1 (y )  1 0 1
h21,1 h21 N , N 1
h31,1,1 h31N , N , N   1 (y ) 
    
    
  N (y )   0 N h2N 1,1 h2N N , N h3N 1,1,1 h3N N , N , N    N (y ) 
    
 1,1 (y )   0 0 21 0 h31,11,1,1 h31,1N , N , N   1,1 (y ) 
   
     
  N , N (y )   0 0 0 2 N h3N , N 1,1,1 h31,1N , N , N    N , N (y ) 
  (y )   0 0 0 0 31 0   1,1,1 (y ) 
 1,1,1    
    
     
 N , N , N (y )   0 0 0 0 0 3N   N , N , N (y ) 
(4. 44)

76
We note that the extended system (4.44) is sparse in the lower triangular
part, but, the matrices Hjk, j = 1,…, n-1, k = j + 1, …, n are not.

Therefore, three indexes of importance can be defined as follows:

h2j k ,l
I 2j k , l  100 (4. 45)
j

h3j k ,l ,m
I j
 100 (4. 46)
j
3 k ,l , m

h3j ,k l ,m, p
I j ,k
 100 (4. 47)
 j  k
3 l ,m, p

Furthermore, from (4.44) it can be noted that the dynamics are dominated
by the diagonal elements of H3ord.

Based on this observation and the importance indexes of equations (4.45)


through (4.47), a first criterion promoting sparsity is proposed:

h2j k ,l  0 if I 2j k ,l  1 (4. 48a)

h3j k ,l ,m  0 if I3j k ,l ,m  1
(4. 48b)

h3j ,k l ,m, p  0 if I3j ,k l ,m, p  1


(4. 48c)

A second sparsity-promoting criterion for H3ord is based on the following


time constants:

1
j  (4. 49)
Re ( j )

1
 jk  (4. 50)
Re ( j  k )

1
 jkl  (4. 51)
Re ( j  k  l )

77
Then, with the help of (4.49), (4.50), and (4.51), the following criteria can
be stated:

h2j k ,l  0 for k  1, , N , l  k, , N , if 5 j   2 (4. 52a)

h3j k ,l ,m  0 for k  1, , N , l  k, , N , m  l, , N , if 5 j   2
(4. 52b)

h2j k ,l  0 for j  1, , N , if 5 k ,l   2
(4. 52c)

h3j k ,l ,m  0 for j  1, , N , if 5 k ,l ,m   2
(4. 52d)

h3j ,k l ,m, p  0 for j  1, , N , k  j, , N , if 5 l ,m, p   2


(4. 52e)

After applying the criteria (4.48) and (4.52), the sparsity of H3ord can be
highly increased. Furthermore, the use of the two proposed criteria promotes
the sparsity, but also leads to an order reduction of the extended system.

In fact, each criterion of (4.52) directly leads to the elimination of a col-


umn and a row of H3ord, whereas (4.48) could lead to the same phenomenon if
after its application a higher-order mode gets isolated.

4.6 Concluding remarks


In this chapter, a general model-based analysis tool for the study of power sys-
tem dynamic behavior that combines Koopman mode analysis with perturba-
tion theory is proposed. The formulation can be used to supplement information
on current data-driven approaches in which knowledge of the underlying sys-
tem model is not required.

The method can be used to analyze the influence of multimode interac-


tions of various types on the non-linear response of non-linear dynamical sys-
tems, as well as to assess the nature and strength of interactions between system
components.

The usefulness of the proposed framework lies in its simplicity; the phys-
ical variables of a system are interpreted as the observables of an extended sys-

tem that has the linear and non-linear Koopman functions as its state variables.
Such insights can guide the selection of control structures and can also help to
uncover hidden features in system dynamics that might not be accurately identi-
fied using conventional linear analysis.

Using this framework, efficient techniques for the PKMA implementation


are obtained, as well as useful definitions of non-linear indexes, residues, and
sparsity promoting criteria.

In the next chapter, quantitative measures of observability are first derived from linear systems theory and then extended to Koopman operator theory, the Koopman mode decomposition (KMD) algorithm, and the perturbed Koopman mode analysis presented here.

These observability measures and other concepts presented in this chap-


ter are used in the following chapters to assess the non-linear effects intrinsic to
the modeling used for power systems and to develop non-linear quadratic con-
trollers.

4.7 References
[1] L. Perko, Differential Equations and Dynamical Systems, 3rd edition, Springer-
Verlag, NY, 2001.

[2] R. H. Enns and G. C. McGuire, Non-linear Physics with Maple for Scientists
and Engineers, Springer, New York, 2000.

[3] J. J. Sanchez-Gasca, V. Vittal, M.J. Gibbard, A.R. Messina, D.J. Vowles, S.


Liu, and U.D. Annakkage, “Inclusion of higher order terms for small-signal
(modal) analysis: Committee report-task force on assessing the need to in-
clude higher order terms for small-signal (modal) analysis,” IEEE Trans.
Power Syst., vol. 20, no. 4, pp. 1886-1904, Nov. 2005.

[4] M. Budišić, R. Mohr, and I. Mezić., “Applied Koopmanisms,” Chaos: An In-


terdisciplinary Journal of Nonlinear Sci., vol. 22, no 4, Dec. 2012.

[5] C. A. Tsiligiannis and G. Lyberatos, “Normal forms, resonance, and bifur-
cation analysis via the Carleman linearization,” J. Math. Anal. Appl., vol.
139, pp. 123-138, 1989.

[6] I. Mezić, “Analysis of fluid flows via spectral properties of Koopman oper-
ator,” Annu. Rev. Fluid Mech., vol. 45, pp. 357-378, 2013.

[7] M. Ghosal and V. Rao, “Fusion of multirate measurements for non-linear


dynamic state estimation of the power systems,” IEEE Trans. Smart Grid,
in press, http://ieeexplore.ieee.org/document/8004460/.

[8] A. K. Singh and B. C. Pal, “Decentralized dynamic state estimation in pow-


er systems using unscented transformation,” IEEE Trans. Power Syst., vol.
29, no. 2, pp. 794-804, Mar. 2014.

[9] P. Kundur, Power System Stability and Control, Mc-Graw Hill, NY, USA,
1994.

[10] I. Martinez, A. R. Messina, and E. Barocio, “Higher-order normal form


analysis of stressed power systems: a fundamental study,” Elect. Power
Comps. and Syst., vol. 32, no. 12, pp. 1301-1317, 2004.

[11] P. Anderson and A. Fouad, Power System Control and Stability (IEEE Press
Power Engineering Series). Piscataway, NJ, USA, 2003.

[12] S. Liu, Assessing placement of controllers and nonlinear behavior of electrical


power systems using normal form information, Ph.D. dissertation, Iowa State
Univ. Ames, IA, USA, 2006.

[13] V. Vittal, N. Bhatia, and A. A. Fouad, “Analysis of the inter-area mode


phenomenon in power systems following large disturbances,” IEEE Trans.
Power Syst., vol. 6, no. 4, pp. 1515-1521, Nov. 1991.

[14] T. Tian, X. Kestelyn, O. Thomas, A. Hiroyuki, and A. R. Messina, “An ac-


curate third-order normal form approximation for power system non-

linear analysis,” IEEE Trans. Power Syst., vol. 33, no. 2, pp. 2128-2139, Mar.
2018.

[15] S. K. Starret and A. A. Fouad, “Non-linear measures of mode-machine par-


ticipation [transmission system stability],” IEEE Trans. Power Syst., vol. 13,
no. 2, pp. 389-394, May 1998.

Chapter 5
Koopman Observability Measures
Koopman mode analysis has shown considerable promise for the analysis and
characterization of global behavior of power system transient processes recorded using
wide-area sensors.

In this chapter, a framework for feature extraction and mode decomposition of


spatiotemporal dynamics based on the Koopman operator is presented. A physical inter-
pretation of the Koopman modes as columns of a matrix of observability measures is de-
rived, and criteria for selecting a reduced set of Koopman modes are proposed.

This approach makes it possible to selectively isolate and quantify the dominant physical mechanisms underlying the recorded power system response, and it can be used for wide-area monitoring and assessment.
5.1 Linear modal observability measures

5.1.1 Quantitative measures of observability


A strategic selection of relevant Koopman modes can be obtained from the no-
tion of linear measures of observability. To introduce the general ideas that fol-
low, consider a linear system described by the state-space model [1], [2]

$$\dot{\mathbf{x}} = \mathbf{A}_{11}\mathbf{x} + \mathbf{B}_{11}\mathbf{u}, \qquad \mathbf{x}_0 = \mathbf{x}(0) \qquad (5.1)$$

$$\hat{\mathbf{y}} = \mathbf{C}_1\mathbf{x} \qquad (5.2)$$

where $\mathbf{x}\in\mathbb{R}^N$ are the state variables, $\mathbf{A}_{11}\in\mathbb{R}^{N\times N}$ is the system matrix, $\mathbf{B}_{11}\in\mathbb{R}^{N\times R}$ is the input matrix, $\mathbf{u}\in\mathbb{R}^R$ is the vector of system inputs, $\mathbf{x}_0$ is the vector of initial conditions, $\mathbf{C}_1\in\mathbb{R}^{L\times N}$ is the output matrix, and $\hat{\mathbf{y}}\in\mathbb{R}^L$ is the vector of system outputs [1], [2].

A useful alternative measure of observability is motivated by the follow-


ing expression for ŷ [2]:

$$\hat{\mathbf{y}}(t) = \sum_{j=1}^{N}\mathbf{C}_1\mathbf{v}_j\left(\mathbf{w}_j^T\mathbf{x}_0 + \mathbf{w}_j^T\mathbf{B}_{11}\int_0^t e^{-\lambda_j\tau}\mathbf{u}(\tau)\,d\tau\right)e^{\lambda_j t} \qquad (5.3)$$

where $\lambda_j$ is the j-th eigenvalue of $\mathbf{A}_{11}$, and $\mathbf{v}_j\in\mathbb{C}^N$ and $\mathbf{w}_j\in\mathbb{C}^N$ are the corresponding right and left eigenvectors, respectively. Other useful alternative definitions of observability are given in [3].

From this expression and ignoring the effect of the input signals u(t), it is
clear that the elements of the vector C1vj determine the extent to which the j-th
mode appears at the different outputs of the system.

Then, writing the rows of $\mathbf{C}_1$ as $\mathbf{c}_1^T, \mathbf{c}_2^T, \dots, \mathbf{c}_L^T$, we obtain the matrix of linear observability measures $\mathbf{C}_1\mathbf{V}_{11}$

$$\mathbf{C}_1\mathbf{V}_{11} = \begin{bmatrix}\mathbf{c}_1^T\mathbf{v}_1 & \mathbf{c}_1^T\mathbf{v}_2 & \cdots & \mathbf{c}_1^T\mathbf{v}_N\\ \vdots & \vdots & & \vdots\\ \mathbf{c}_L^T\mathbf{v}_1 & \mathbf{c}_L^T\mathbf{v}_2 & \cdots & \mathbf{c}_L^T\mathbf{v}_N\end{bmatrix} \qquad (5.4)$$

where the magnitude of the terms cjTvk measures how much the k-th mode ap-
pears in the j-th output of ŷ(t).

Further, the observability measure cjTvk can also be interpreted as the j-th
output signal at t = 0+, subject to an initial condition x(0) = vk, i.e.

yˆ j  0   cTj v k  wTj v j   cTj v k (5. 5)

From the preceding result, it follows that any given vector of initial con-
ditions x(0) can be expressed as a linear combination

$$\mathbf{x}(0) = \sum_{j=1}^{N} a_j\,\mathbf{v}_j$$

with $a_j\in\mathbb{C}$.

Consequently, the linear observability measures can now be written as $a_k\,\mathbf{c}_j^T\mathbf{v}_k$, these being the elements of the matrix product $\mathbf{C}_1\mathbf{V}_{11}\mathbf{K}_a$,

$$\mathbf{C}_1\mathbf{V}_{11}\mathbf{K}_a = \underbrace{\begin{bmatrix}\mathbf{c}_1^T\mathbf{v}_1 & \mathbf{c}_1^T\mathbf{v}_2 & \cdots & \mathbf{c}_1^T\mathbf{v}_N\\ \vdots & \vdots & & \vdots\\ \mathbf{c}_L^T\mathbf{v}_1 & \mathbf{c}_L^T\mathbf{v}_2 & \cdots & \mathbf{c}_L^T\mathbf{v}_N\end{bmatrix}}_{\mathbf{C}_1\mathbf{V}_{11}}\underbrace{\begin{bmatrix}a_1 & & 0\\ & \ddots & \\ 0 & & a_N\end{bmatrix}}_{\mathbf{K}_a} \qquad (5.6)$$

where $\mathbf{K}_a\in\mathbb{C}^{N\times N}$ is a diagonal matrix containing the coefficients $a_j$.
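A short computational sketch of (5.6) is given below. It assumes that the eigenvectors returned by the numerical eigensolver are used directly as the columns of V11, which is only one possible normalization.

```python
import numpy as np

def linear_observability_measures(A11, C1, x0):
    """Matrix C1*V11*Ka of (5.6) for a given initial condition x0."""
    lam, V11 = np.linalg.eig(A11)      # right eigenvectors as columns of V11
    a = np.linalg.solve(V11, x0)       # coefficients a_j of x0 = sum_j a_j v_j
    return (C1 @ V11) * a              # each column j scaled by a_j
```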

5.1.2 Effect of constant entries


The analysis extends readily to the state model in (5.1) when the input signals
are constant values, i.e., u(t) = u0. Straightforward analysis yields

$$\hat{\mathbf{y}}(t) = \sum_{j=1}^{N}\mathbf{C}_1\mathbf{v}_j\mathbf{w}_j^T\mathbf{x}_0\,e^{\lambda_j t} - \sum_{j=1}^{N}\frac{1}{\lambda_j}\mathbf{C}_1\mathbf{v}_j\mathbf{w}_j^T\mathbf{B}_{11}\mathbf{u}_0\left(1 - e^{\lambda_j t}\right) \qquad (5.7)$$

Noting further that

$$\mathbf{B}_{11}\mathbf{u}_0 = \sum_{j=1}^{N} b_j\,\mathbf{v}_j$$

with $b_j\in\mathbb{C}$, equation (5.7) can be re-expressed as:

$$\hat{\mathbf{y}}(t) = \underbrace{\begin{bmatrix}\mathbf{c}_1^T\mathbf{v}_1 & \cdots & \mathbf{c}_1^T\mathbf{v}_N\\ \vdots & & \vdots\\ \mathbf{c}_L^T\mathbf{v}_1 & \cdots & \mathbf{c}_L^T\mathbf{v}_N\end{bmatrix}}_{\mathbf{C}_1\mathbf{V}_{11}}\underbrace{\begin{bmatrix}a_1+\tfrac{b_1}{\lambda_1} & & 0\\ & \ddots & \\ 0 & & a_N+\tfrac{b_N}{\lambda_N}\end{bmatrix}}_{\mathbf{K}_{coef}}\begin{bmatrix}e^{\lambda_1 t}\\ \vdots\\ e^{\lambda_N t}\end{bmatrix} - \begin{bmatrix}\mathbf{c}_1^T\mathbf{v}_1 & \cdots & \mathbf{c}_1^T\mathbf{v}_N\\ \vdots & & \vdots\\ \mathbf{c}_L^T\mathbf{v}_1 & \cdots & \mathbf{c}_L^T\mathbf{v}_N\end{bmatrix}\begin{bmatrix}\tfrac{b_1}{\lambda_1}\\ \vdots\\ \tfrac{b_N}{\lambda_N}\end{bmatrix} \qquad (5.8)$$

Close inspection of this model shows that the first term of the right-hand
part (rhp) of (5.8) captures the time-varying behavior of ŷ(t), whereas the second
term on the rhp is interpreted as an offset value of ŷ(t) associated with the con-
stant-value inputs.

Motivated by this analysis, we examine the use of matrix C1V11Ka to de-


termine approximate quantitative values of modal observability associated with
specific system behavior.

As a byproduct, an appropriate adjustment of the coefficient matrix Ka al-


lows selectively extracting system dynamics of interest associated with specific
system behavior as well as identifying spatial measures of modal observability.

5.2 Koopman observability measures under Koopman operator theory


To pursue the analogy between the Koopman observability measures and the
linear observability measures, let h’:X → Ŷ be the output function (a generaliza-
tion of matrix C1), which projects the space spanned by the state variables onto
the subspace of the observed variables, as:

yˆ  tk   h '(x(tk )) (5. 9)

such that

x  tk 1   gˆ (x(tk )) (5. 10)

Then [4]-[5],

$$\hat{\mathbf{y}}(t_{k+1}) = \mathbf{g}\left(\mathbf{x}(t_{k+1})\right) = \left(\mathbf{g}\circ\hat{\mathbf{g}}\right)\left(\mathbf{x}(t_k)\right) = U\mathbf{g}\left(\mathbf{x}(t_k)\right) = U^{k+1}\mathbf{g}\left(\mathbf{x}(0)\right) = \sum_{j=1}^{\infty}\mu_j^{k+1}\varphi_j(\mathbf{x}_0)\,\tilde{\mathbf{v}}_j \qquad (5.11)$$

where $\mathbf{g}(\mathbf{x}_0)=\hat{\mathbf{y}}_0$, $\varphi_j$ is the j-th Koopman eigenfunction, and $\tilde{\mathbf{v}}_j\in\mathbb{C}^N$ is the corresponding Koopman mode.

For the special case of a linear system

$$\mathbf{x}_{k+1} = \mathbf{A}_{11}\mathbf{x}_k$$

we define the Koopman eigenfunctions

$$\varphi_j(\mathbf{x}_k) = \left\langle\mathbf{x}_k, \mathbf{w}_j\right\rangle$$

for the observables $\hat{\mathbf{g}}(\mathbf{x}_k) = \mathbf{A}_{11}\mathbf{x}_k$.

In this case,

$$\hat{\mathbf{y}}(t_k) = \mathbf{g}\left(\mathbf{x}(t_k)\right) = \mathbf{C}_1\mathbf{x}(t_k) \qquad (5.12)$$

and we have from (5.11) that:

$$\hat{\mathbf{y}}_{k+1} = \left(\mathbf{g}\circ\hat{\mathbf{g}}\right)(\mathbf{x}_k) = \mathbf{C}_1\sum_{j=1}^{\infty}\mu_j\varphi_j(\mathbf{x}_k)\,\mathbf{v}_j = \sum_{j=1}^{\infty}\mu_j\varphi_j(\mathbf{x}_k)\,\mathbf{C}_1\mathbf{v}_j \qquad (5.13)$$

which is equivalent to (5.7).

The second term on the right-hand side of equation (5.8) corresponds to a Koopman mode with $\lambda_j = 0$, $\mu_j = \exp(\lambda_j\Delta t) = 1$, and therefore

$$\tilde{\mathbf{v}}_j \;\text{ is analogous to }\; \mathbf{C}_1\mathbf{v}_j \qquad (5.14)$$

As a consequence, the Koopman modes ṽj can be stacked into a matrix
OKMA (a matrix of Koopman observability measures) as

$$\mathbf{O}_{KMA} = \left[\tilde{\mathbf{v}}_1\;\;\tilde{\mathbf{v}}_2\;\;\cdots\;\;\tilde{\mathbf{v}}_M\right] \qquad (5.15)$$

This matrix OKMA is analogous to the matrix C1V11 of Section 5.1.1.

It follows that if the entire set of Koopman eigen-tuples of the analyzed


dynamical system is available, these non-linear quantitative measures will give
the means to identify the most dominant dynamics.

5.3 Koopman measures of observability in KMD

5.3.1 The ranking criteria problem


A fundamental issue when dealing with data-based numerical approximations
of the Koopman operator is to distinguish between the physical and the spuri-
ous modes generated and to properly rank the most dominant ones [6].

Several criteria to select the most dominant Koopman modes have been
proposed in literature. For the Koopman mode decomposition (KMD) method,
Koopman modes can be ranked by their amplitude, frequency or growth rate, or
energy parameters.

For other similar approaches, such as dynamic mode decomposition


(DMD), the most used criterion is the energy content of the temporal coefficients
of the Koopman decomposition [7].

In turn, the Koopman modes are ranked by either

1. The amplitude of the empirical Ritz eigenmodes, $\|\hat{\mathbf{v}}_j\|$, [5], [7], [8], or

2. The magnitude of the approximate Koopman eigenvalues, $|\tilde{\lambda}_j|$ [9], [10].

However, these criteria may lead to false conclusions and do not provide a means to separate the physically meaningful modes from the spurious ones.

The following section reinterprets the notion of modal observability


measures in terms of the Koopman modes.

5.3.2 An observability-based approach


Since the empirical Ritz values and eigenvectors obtained from the Koopman
mode decomposition method approximate the Koopman eigenvalues and the
Koopman modes [8], it follows that:

$$\hat{\mathbf{y}}(t_k) = \sum_{j=1}^{\infty}\mu_j^{k}\varphi_j(\mathbf{x}_0)\,\tilde{\mathbf{v}}_j \approx \sum_{j=1}^{M}\tilde{\mu}_j^{k}\,\hat{\mathbf{v}}_j \qquad (5.16)$$

Numerical experience with the analysis of complex measured data shows that the most observable modes are related to those Ritz values with the smallest absolute values.

Drawing on this observation, attention is focused on the subset of Ritz values {λ̃1, λ̃2, …, λ̃p} ordered such that |λ̃1| > |λ̃2| > ··· > |λ̃p| > ··· > |λ̃Ñ|, with p < Ñ, so that the analysis centers on the dominant inter-area behavior.

The selection of the optimal number of Koopman modes p retained in this subset is not a trivial problem. In our experience, p = 2Ñ/3 is found to give good results.

From (5.16), it is evident that:

$$\hat{\mathbf{v}}_j \approx \varphi_j(\mathbf{x}_0)\,\tilde{\mathbf{v}}_j$$

for

$$\frac{\hat{\mathbf{v}}_j}{\|\hat{\mathbf{v}}_j\|} \approx \tilde{\mathbf{v}}_j$$

where $\|\hat{\mathbf{v}}_j\|$ denotes the L2 norm of $\hat{\mathbf{v}}_j$.


Thus, it is postulated that the empirical Ritz eigenvectors $\hat{\mathbf{v}}_j$, in analogy with matrix $\mathbf{C}_1\mathbf{V}_{11}$, can be interpreted as the columns of a matrix

$$\mathbf{O}_{KMD} = \left[\hat{\mathbf{v}}_1\;\;\hat{\mathbf{v}}_2\;\;\cdots\;\;\hat{\mathbf{v}}_p\right] = \begin{bmatrix}\hat{v}_{11} & \hat{v}_{12} & \cdots & \hat{v}_{1p}\\ \vdots & \vdots & & \vdots\\ \hat{v}_{M1} & \hat{v}_{M2} & \cdots & \hat{v}_{Mp}\end{bmatrix} \qquad (5.17)$$

In this sense,

$$|\hat{v}_{jk}| \approx \left(a_k + \frac{b_k}{\lambda_k}\right)\mathbf{c}_j^T\mathbf{v}_k$$

The columns of the observability matrix in equation (5.17) are then normalized, such that:

$$\bar{\mathbf{v}}_j = \frac{\hat{\mathbf{v}}_j}{\kappa}$$

where $\kappa = \max\left(|\hat{v}_{jk}|\right)$ for $j = 1,\dots,M$, $k = 1,\dots,p$, and $\tilde{\lambda}_j$ belonging to the selected subset.

Clearly, if ŷ(t)=x(t), then

$$\frac{\hat{\mathbf{v}}_j}{\|\hat{\mathbf{v}}_j\|} \approx \mathbf{v}_j \qquad (5.18)$$

i.e., the normalized Koopman modes will approximate the linear stability modes.

In the more general case, the Koopman observability measures extend the
linear results to the non-linear setting.
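As an illustration of the ranking step in Section 5.3.2, the following sketch orders the empirical Ritz pairs, retains p of them, and assembles the observability matrix (5.17). The 2/3 rule and the max-normalization mirror the choices discussed above; the function and variable names are illustrative only.

```python
import numpy as np

def rank_koopman_modes(ritz_vals, ritz_vecs, p=None):
    """Order the empirical Ritz pairs by |lambda~| and build O_KMD of (5.17).
    ritz_vecs holds one Koopman/Ritz mode per column."""
    n_modes = ritz_vals.size
    if p is None:
        p = int(round(2 * n_modes / 3))          # p = 2*N~/3 as suggested in the text
    order = np.argsort(-np.abs(ritz_vals))       # |lambda~_1| > |lambda~_2| > ...
    O_kmd = ritz_vecs[:, order[:p]]
    O_kmd = O_kmd / np.max(np.abs(O_kmd))        # normalization by kappa
    return ritz_vals[order[:p]], O_kmd
```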

5.4 Koopman observability measures under perturbation theory

5.4.1 Derivation of the quantitative observability measures


After obtaining a Koopman eigenfunctions-based extended model of a dynam-
ical system, a large number of linear and non-linear Koopman dynamics are
found.

Koopman observability measures are introduced here under perturbation


theory, to reduce the set of dynamics that must be taken into account.

First, making use of the recursive linearization method, equation (5.9) can
be approximated as

yˆ    h'    h'    h' = C1x  C2x2  C3x3


1 2 3
(5. 19)

and from (4.20)

x  t   V1ord e Λ1t Ψ1  0   V2 ord e Λ 2t Ψ 2  0   V3ord e Λ3t Ψ 3  0 


z1 (t ) z 2 (t ) z 3 (t )

we can write

yˆ  C1V1ord z1 (t )  C1V2ord z 2 (t )  C1V3ord z 3 (t ) 


(5. 20)
Cˆ z (t )  C ˆ z (t )  C ˆ z (t )  O  4 
22 2 23 3 33 3

where Ĉ22 L×N 2


and Ĉ23 L×N 3
are associated with C2Δx2 and Ĉ33 L×N 3
is relat-
ed to C3Δx3. The term O(4) contains the elements of fourth and higher order
originated from C2Δx2 and C3Δx3.

When the output signals are a linear combination of the state variables,
matrices C2 and C3 are null. In this case, equation (5.20) is simplified as:

yˆ  C1V1ord z1 (t )  C1V2ord z 2 (t )  C1V3ord z3 (t ) (5. 21)

Now, following [2], to accurately define the Koopman observability


measures, the columns of the matrix W3ord (refer to equation 4.55) need to be
normalized so that ||wj||2=1 for j =1, 2, …, M.

By defining the third-order matrix of output signals in the space of the


Koopman eigenfunctions as:

C3ord  C1V11 0 N N 2 0 N N 3  (5. 22)

the matrix of Koopman observability measures, OPKMA, can be written as:

OPKMA  C3ord W3ord  [C1V11 C1V11W12 C1V11W13 ] (5. 23)

where W 12 and W 13 are the corresponding sub-matrices of matrix W 3ord, which


is equal to matrix W3ord after the unitary normalization of its columns.
90
5.4.2 Weighted observability measures
In analogy with the modal observability measures of [2], the matrix $\mathbf{O}_{PKMA}$ contains general information about how observable the linear and non-linear modes are in the measured output signals.

In some cases, however, it is important to determine the most observable dynamics excited during a specific event. In this setting, a set of weighted Koopman observability measures can be defined as:

$$\hat{\mathbf{O}}_{PKMA} = \mathbf{C}_{3ord}\overline{\mathbf{W}}_{3ord}\,\mathrm{diag}\left(\mathbf{z}_{3ord}(0)\right) = \left[\mathbf{C}_1\mathbf{V}_{11}\mathrm{diag}\left(\mathbf{z}_1(0)\right)\;\;\mathbf{C}_1\mathbf{V}_{11}\overline{\mathbf{W}}_{12}\mathrm{diag}\left(\mathbf{z}_2(0)\right)\;\;\mathbf{C}_1\mathbf{V}_{11}\overline{\mathbf{W}}_{13}\mathrm{diag}\left(\mathbf{z}_3(0)\right)\right] \qquad (5.24)$$

where the disturbance of interest produces the vector of initial conditions


z3ord(0)=[z1(0)T z2(0)T z3(0)T ]T.

The weighted Koopman observability measures of equation (5.24) are equivalent to the residues corresponding to an extended system of order n for a set of measured output signals [11].

A useful alternative to definition (5.24) is found by considering the time constants introduced with the sparsity-promoting criteria of Section 4.5.4:

$$\breve{\mathbf{O}}_{PKMA} = \mathbf{C}_{3ord}\overline{\mathbf{W}}_{3ord}\,\mathrm{diag}\left(\mathbf{z}_{3ord}(0)\right)\mathrm{diag}\left(\boldsymbol{\tau}_{3ord}\right) = \left[\mathbf{C}_1\mathbf{V}_{11}\mathrm{diag}\left(\mathbf{z}_1(0)\right)\mathrm{diag}\left(\boldsymbol{\tau}_1\right)\;\;\mathbf{C}_1\mathbf{V}_{11}\overline{\mathbf{W}}_{12}\mathrm{diag}\left(\mathbf{z}_2(0)\right)\mathrm{diag}\left(\boldsymbol{\tau}_2\right)\;\;\mathbf{C}_1\mathbf{V}_{11}\overline{\mathbf{W}}_{13}\mathrm{diag}\left(\mathbf{z}_3(0)\right)\mathrm{diag}\left(\boldsymbol{\tau}_3\right)\right] \qquad (5.25)$$

Here, $\boldsymbol{\tau}_1 = [\tau_1\,\cdots\,\tau_N]^T$, $\boldsymbol{\tau}_2 = [\tau_{1,1}\,\cdots\,\tau_{N,N}]^T$, $\boldsymbol{\tau}_3 = [\tau_{1,1,1}\,\cdots\,\tau_{N,N,N}]^T$, and $\boldsymbol{\tau}_{3ord} = [\boldsymbol{\tau}_1^T\;\boldsymbol{\tau}_2^T\;\boldsymbol{\tau}_3^T]^T$, where $\tau_j$, $\tau_{jk}$, and $\tau_{jkl}$ are the corresponding time constants defined in equations (4.49) to (4.51).

The weighted residues allow the analysis to be centered on the most excited, most observable, and slowest oscillations.
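A compact sketch of the weighted measures (5.24) is given below. All inputs are assumed to be available from the PKMA computation; W12b and W13b stand for the column-normalized blocks of W3ord.

```python
import numpy as np

def weighted_observability(C1, V11, W12b, W13b, z1_0, z2_0, z3_0):
    """Weighted Koopman observability measures of (5.24)."""
    CV = C1 @ V11
    blocks = [CV @ np.diag(z1_0),            # linear modes weighted by z1(0)
              CV @ W12b @ np.diag(z2_0),     # quadratic modes weighted by z2(0)
              CV @ W13b @ np.diag(z3_0)]     # cubic modes weighted by z3(0)
    return np.hstack(blocks)
```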

5.4.3 Importance for wide-area monitoring and control
The provided non-linear measures of observability can be used to assess the sta-
bility of the system and to generate proper laws of wide-area oscillations control
[12]-[14].

In this sense, the use of the Koopman observability measures can be


viewed from two perspectives:

1. The perspective of the optimal measuring devices placement, or

2. The perspective of optimally using the installed measuring devices.

From the first point of view, the proposed non-linear measures of observability can be used to optimally locate sensors in a network, ensuring that both the linear and the non-linear dominant dynamics will be monitored appropriately.

From the second perspective, when a set of PMU devices is already installed, the primary objective is to determine which variables must be measured and which modes will be observable in each of them.

Of utmost interest, the combination of both perspectives can lead to a distributed monitoring and control scheme, where local data can be processed to provide global information and to calculate a distributed control law [14].

5.5 Concluding remarks


In this chapter, an observability-based approach to extract dominant spatiotem-
poral patterns for Koopman mode analysis has been presented.

The method interprets global dynamic behavior in terms of linear combi-


nations of Koopman modes from which techniques to isolate and quantify the
dominant physical mechanisms underlying the observed dynamical behavior
are derived.

A physical interpretation of the Koopman modes as columns of a matrix
of observability measures has been provided that extends linear results to the
non-linear setting.

Furthermore, the use of observability measures is seen to add important


information to modal analysis, which is of interest in the analysis and identifica-
tion of reduced-order models and provides direct comparison to conventional
linear analysis techniques.

Finally, some possible future advances in wide-area monitoring and control were mentioned. In this regard, Chapter 7 proposes a wide-area non-linear controller in which the weighted Koopman observability measures are used to identify the most dominant quadratic modes.

5.6 References
[1] K. Ogata, Modern Control Engineering, New Jersey, Prentice Hall, 2002.

[2] S. M. Chan, “Modal controllability and observability of power system


models,” Elect. Power Energy Syst., vol. 6, no. 2, pp. 83–88, April 1984.

[3] J. Qi, K. Sun, and W. Kang, “Optimal PMU placement for power system
dynamic state estimation by using empirical observability Grammian,”
IEEE Trans. on Power Syst., vol. 30, no. 4, pp. 2041–2054, July 2015.

[4] M. Budišíc, R. Mohr, and I. Mezić, “Applied Koopmanism,” Chaos: Inter-


discip. J. Nonlinear Sci., vol. 22, no. 4, pp. 047510, December 2012.

[5] I. Mezić, “Analysis of fluid flows via spectral properties of Koopman oper-
ator”, Annu. Rev. Fluid Mech., vol. 45, pp. 357–378, 2013.

[6] K. K. Chen, J. H. Tu, and C. W. Rowley, “Variants of dynamic mode de-


composition: boundary conditions, Koopman and Fourier analyses,” J.
Nonlinear Sci., vol. 22, no. 6, pp. 887–915, 2012.

[7] E. Barocio, B. C. Pal, N. F. Thornhill, and A. R. Messina, “A dynamic mode
decomposition framework for global power system oscillation analysis,”
IEEE Trans. Power Syst., vol. 30, no. 6, pp. 2902–2912, November 2015.

[8] C. Rowley, I. Mezić, S. Bagheri, P. Schlatter, and D. Henningson, “Spectral


analysis of fluid flows,” J. Fluid Mech., vol. 641, pp. 115–127, 2009.

[9] Y. Susuki and I. Mezić, “Nonlinear Koopman modes and coherency identi-
fication of coupled swing dynamics,” IEEE Trans. Power Syst., vol. 26, no.
4, pp. 1894-1904, Nov. 2011.

[10] J. F. Raak, Investigation of Power Grid Islanding based on Nonlinear Koopman


Modes, M.S. thesis, Royal Institute of Technology, Stockholm, Sweden,
2013.

[11] S. K. Starret and A. A. Fouad, “Nonlinear measures of mode-machine par-


ticipation [transmission system stability],” IEEE Trans. Power Syst., vol. 13,
no. 2, pp. 389-394, May 1998.

[12] A. Chakrabortty and P. M. Khargonekar, “Introduction to wide-area control of


power systems,” IEEE American Control Conference (ACC) 2013, pp. 6758-6770,
Washington, DC, USA, June 2013.

[13] A. Jain, A. Chakrabortty, and E. Biyik, “An online structurally constrained


LQR design for damping oscillations in power system networks,” IEEE
American Control Conference (ACC) 2017, pp. 2093-2098, Seattle, WA, USA,
July 2017.

[14] S. Nabavi, J. Zhang, and A. Chakrabortty, "Distributed optimization algo-


rithms for wide-area oscillation monitoring in power systems using inter-
regional PMU-PDC architectures,” IEEE Trans. Smart Grid, vol. 6, no. 5, pp.
2529-2538, Sept. 2015.

Chapter 6
Recursive Linearization of Power
System Models
The recursive linearization (RL) method developed in Chapter 3 is illustrated on a sim‐
ple power system model. By assuming that the system is represented by a transient mod‐
el, explicit analytical expressions are derived that show the efficiency and applicability of 
automated linearization.  

This simple model captures the inherent non‐linear dynamics and can be used to 
study interactions between state variables as well as interactions with the system’s input 
signals when a cubic Koopman eigenfunctions‐based model is utilized.  

This analysis is intended to give valuable information for predicting the system’s 
dynamical response in the presence of specific entries. 

Based on this theoretical framework, the projection of the linear, quadratic, and 
cubic representation onto some selected output signals is determined. This knowledge is 
of fundamental importance for connecting the analytical knowledge obtained by means 
of the model‐based Koopman mode analysis with the data‐driven techniques based on the 
Koopman operator theory. 
6.1 Non-linear model of a multi-machine power system
To illustrate the application of recursive linearization to complex system repre‐
sentations,  a  non‐linear  power  system  model  in  which  terms  higher  than  the 
third order in the amplitude of motion are discarded, is adopted.  

In this representation, synchronous machines are represented by a transi‐
ent d‐q model and equipped with a simple exciter. Loads are considered as con‐
stant impedances and a power system stabilizer is added to the generator mod‐
el. The methods adopted, however, are general and can be applied to arbitrary 
system representations. 

6.1.1 Transient model of the generator 
The equations of motion for the rotor of generator j are [1]‐[5]: 

d j
   0   j    (6. 1) 
dt

d  j 1
 
dt

2H j
 PMech j  Tej  D j  j    (6. 2) 

dEq j 1
 
dt

Td 0 j
VEx j  (X d j  Xd j ) id j  Eq j    (6. 3) 

dEd j 1
    X q j  Xq j  iq j  E 'd j    (6. 4) 
dt Tq0 j  

where the above symbols have the usual meaning [5]. 

From the theory of synchronous machines, the electrical torque Tej in (6.2) 
is defined as:  

$$T_{e_j} = \underbrace{e_{d_j}i_{d_j} + e_{q_j}i_{q_j}}_{\text{Electrical active output power}} + \underbrace{R_{a_j}\left(i_{d_j}^2 + i_{q_j}^2\right)}_{\text{Electrical losses}} \qquad (6.5)$$

where edj and eqj are calculated from the expressions 

$$e_{d_j} = E'_{d_j} - R_{a_j}i_{d_j} + X'_{d_j}i_{q_j} \qquad (6.6)$$

and

$$e_{q_j} = E'_{q_j} - R_{a_j}i_{q_j} - X'_{d_j}i_{d_j} \qquad (6.7)$$

Similarly,  the  d‐axis  and  q‐axis  components  of  the  terminal  current  on 
system framework, idj and iqj, are calculated as:  

$$i_{d_j} = I_{Tre_j}\sin(\delta_j) - I_{Tim_j}\cos(\delta_j) \qquad (6.8)$$

$$i_{q_j} = I_{Tre_j}\cos(\delta_j) + I_{Tim_j}\sin(\delta_j) \qquad (6.9)$$

in which, ITrej and ITimj are the real and imaginary parts of ITj, the complex ter‐
minal current in the system frame, so that 

$$\begin{bmatrix}I_{Tre_1} + jI_{Tim_1}\\ \vdots\\ I_{Tre_{ng}} + jI_{Tim_{ng}}\end{bmatrix} = \mathbf{Y}_{red}\begin{bmatrix}E'_{D_1} + jE'_{Q_1}\\ \vdots\\ E'_{D_{ng}} + jE'_{Q_{ng}}\end{bmatrix} \qquad (6.10)$$

where j = √(−1), ng is the number of generators of the power system, and Yred is the admittance matrix of the system reduced to the internal buses of the generators.

In equation (6.10), E’Dj and E’Qj are the terminal voltages in the network 
frame, given by:  

$$E'_{D_j} = E'_{d_j}\sin(\delta_j) + E'_{q_j}\cos(\delta_j) \qquad (6.11)$$

$$E'_{Q_j} = E'_{q_j}\sin(\delta_j) - E'_{d_j}\cos(\delta_j) \qquad (6.12)$$

6.1.2 Simple exciter dynamics 
A block‐diagram representation of the exciter used in this analysis is shown in 
Fig. 6.1 [3]. 

 
Fig. 6.1. Schematic representation of the utilized simple exciter model.

With reference to Fig. 6.1, we can write [3], [5]: 

$$\frac{dV_{Tr_j}}{dt} = \frac{E_{T_j} - V_{Tr_j}}{T_{R_j}} \qquad (6.13)$$

$$\frac{dV_{As_j}}{dt} = \frac{Err_j - V_{As_j}}{T_{b_j}} \qquad (6.14)$$

$$\frac{dE_{fd_j}}{dt} = \frac{K_{A_j}V_{A_j} - E_{fd_j}}{T_{A_j}} \qquad (6.15)$$

where $E_{T_j}$ is the absolute value of the machine terminal voltage in p.u., defined by

$$E_{T_j} = \sqrt{e_{d_j}^2 + e_{q_j}^2} \qquad (6.16)$$

and pssoutj is the output signal of the PSS [3].  

6.1.3 Power system stabilizer 
A  block  diagram  of  the  considered  power  system  stabilizer  (PSS)  is  shown  in 
Fig. 6.2, where the input signal is the rotor speed deviation of machine j, and the 
output is a modulation signal that modifies the reference value of the exciter – 
refer to Fig. 6.1 [3]. 

For this model,

$$\frac{d}{dt}pss_{1_j} = \frac{1}{T_{W_j}}\left(G_{PSS_j}\,\Delta\omega_j - pss_{1_j}\right) \qquad (6.17)$$

$$\frac{d}{dt}pss_{2_j} = \frac{1}{T_{d1_j}}\left(Var_{1_j} - pss_{2_j}\right) \qquad (6.18)$$

$$\frac{d}{dt}pss_{3_j} = \frac{1}{T_{d2_j}}\left(Var_{2_j} - pss_{3_j}\right) \qquad (6.19)$$

where pss1 j, pss2 j, and pss3 j are the PSS’ state variables, GPSS j is the gain of the 


PSS, and Tw j is the washout time constant.  

 
Fig. 6.2. Schematic representation of the considered power system stabilizer model.
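Before proceeding to the recursive analysis, a minimal computational sketch of the rotor equations (6.1)-(6.2) is given below. It assumes NumPy arrays indexed by generator and a nominal angular frequency of 377 rad/s (60 Hz), which is an assumption rather than part of the model description.

```python
import numpy as np

def rotor_derivatives(delta, dw, Pmech, Te, H, D, w0=2 * np.pi * 60):
    """Right-hand side of the rotor equations (6.1)-(6.2) for all machines."""
    ddelta = w0 * dw                              # equation (6.1)
    ddw = (Pmech - Te - D * dw) / (2.0 * H)       # equation (6.2)
    return ddelta, ddw
```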

6.2 Recursive analysis of third order

In  this  section,  the  third‐order  recursive  linearization  method  is  applied  to  the 
simplified system model. For reference, the state vector is defined as x = [δ^T, Δω^T, E′q^T, E′d^T, VTr^T, VAs^T, Efd^T, pss1^T, pss2^T, pss3^T]^T.

  A method to determine the linearized system equations is as follows. 

6.2.1 Recursive linearization of the dynamical equations 
As a first step, the dynamical system equations are linearized to obtain the mod‐
el of equation (3.50),  

$$\dot{\mathbf{x}} = \Delta^{(1)}\dot{\mathbf{x}} + \Delta^{(2)}\dot{\mathbf{x}} + \Delta^{(3)}\dot{\mathbf{x}}$$

where 

$$\Delta\dot{\boldsymbol{\delta}} = \omega_0\,\Delta\boldsymbol{\omega} \qquad (6.20)$$

$$\Delta\dot{\boldsymbol{\omega}} = \Big(\Delta\mathbf{P}_{Mech} - \underbrace{\Delta^{(1)}\mathbf{T}_e - \mathbf{D}\cdot\Delta\boldsymbol{\omega}}_{\text{First order}} - \underbrace{\Delta^{(2)}\mathbf{T}_e}_{\text{Second order}} - \underbrace{\Delta^{(3)}\mathbf{T}_e}_{\text{Third order}}\Big)\cdot\breve{\mathbf{M}} \qquad (6.21)$$

$$\Delta\dot{\mathbf{E}}'_q = \Big(\Delta\mathbf{E}_{fd} - \underbrace{\left(\mathbf{X}_d - \mathbf{X}'_d\right)\cdot\Delta^{(1)}\mathbf{i}_d - \Delta\mathbf{E}'_q}_{\text{First order}} - \underbrace{\left(\mathbf{X}_d - \mathbf{X}'_d\right)\cdot\Delta^{(2)}\mathbf{i}_d}_{\text{Second order}} - \underbrace{\left(\mathbf{X}_d - \mathbf{X}'_d\right)\cdot\Delta^{(3)}\mathbf{i}_d}_{\text{Third order}}\Big)\cdot\breve{\mathbf{T}}'_{d0} \qquad (6.22)$$

$$\Delta\dot{\mathbf{E}}'_d = \Big(\underbrace{\left(\mathbf{X}_q - \mathbf{X}'_q\right)\cdot\Delta^{(1)}\mathbf{i}_q - \Delta\mathbf{E}'_d}_{\text{First order}} + \underbrace{\left(\mathbf{X}_q - \mathbf{X}'_q\right)\cdot\Delta^{(2)}\mathbf{i}_q}_{\text{Second order}} + \underbrace{\left(\mathbf{X}_q - \mathbf{X}'_q\right)\cdot\Delta^{(3)}\mathbf{i}_q}_{\text{Third order}}\Big)\cdot\breve{\mathbf{T}}'_{q0} \qquad (6.23)$$

In  an  analogous  manner,  the  linear  equations  for  the  synchronous  ma‐
chines model and their controllers are, 

    ((1) E  V  (2) E  (3) E )  T  


V (6. 24) 
Tr
 TTr
T T R
First order Sec ond order Third order

    (V  V  pssout  V )  T  
V (6. 25) 
As ref Tr As b

and 

    (K   (1) V  E )  T  
E (6. 26) 
fd A A fd A

so that, 

   1  TW   GPSS ω  pss1   


pss (6. 27) 

   2  Td 1  (1) Var1  pss 2   


pss (6. 28) 

   3  Td 2  (1) Var2  pss3   


pss (6. 29) 

In these equations, the symbol ∙ represents the Hadamard product [6], i.e., x∙y = [x1y1 x2y2 … xnyn]^T for x, y ∈ ℝ^n. Additionally, the symbol x̆ is used for the element-wise reciprocal, x̆ ≡ [1/x1 1/x2 … 1/xn]^T.

6.2.2 Analysis of the system’s embedded sub‐functions 
The linearized modules of the inner dynamics of equations (6.1) to (6.4), (6.13) to 
(6.15), and (6.17) to (6.19) can be obtained by following the procedure below. 

6.2.2.1 Synchronous machines and grid equations 
From (6.5) through (6.9), we can solve for the electric torque, Te, as:  

 ( k ) Te   ( k 1) id  Ed   ( k 1) iq  Eq 


    eq sep  R a  iq  Xd  id    ( k ) iq    (6. 30) 
  ed sep  R a  id  Xd  iq    ( k ) id

for k=1, 2,… 

The modules $\Delta^{(k)}\mathbf{i}_q$ and $\Delta^{(k)}\mathbf{i}_d$ are given by:

$$\Delta^{(k)}\mathbf{i}_q = \sum_{j=0}^{k}\frac{1}{j!}\left[\cos\!\left(\boldsymbol{\delta}_{sep} + \frac{j\pi}{2}\right)\cdot\Delta^{(k-j)}\mathbf{I}_{Tre} + \sin\!\left(\boldsymbol{\delta}_{sep} + \frac{j\pi}{2}\right)\cdot\Delta^{(k-j)}\mathbf{I}_{Tim}\right]\cdot\Delta\boldsymbol{\delta}^{j} \qquad (6.31)$$

$$\Delta^{(k)}\mathbf{i}_d = \sum_{j=0}^{k}\frac{1}{j!}\left[\sin\!\left(\boldsymbol{\delta}_{sep} + \frac{j\pi}{2}\right)\cdot\Delta^{(k-j)}\mathbf{I}_{Tre} - \cos\!\left(\boldsymbol{\delta}_{sep} + \frac{j\pi}{2}\right)\cdot\Delta^{(k-j)}\mathbf{I}_{Tim}\right]\cdot\Delta\boldsymbol{\delta}^{j} \qquad (6.32)$$

where:

$$\Delta\boldsymbol{\delta}^{j} = \left[\Delta\delta_1^{\,j}\;\;\Delta\delta_2^{\,j}\;\;\cdots\;\;\Delta\delta_{ng}^{\,j}\right]^T$$

Noting now that $\mathbf{Y}_{red} = \mathbf{G}_{red} + j\mathbf{B}_{red}$, we can write

$$\Delta^{(k)}\mathbf{I}_{Tre} = \mathbf{G}_{red}\,\Delta^{(k)}\mathbf{E}'_D - \mathbf{B}_{red}\,\Delta^{(k)}\mathbf{E}'_Q \qquad (6.33)$$

$$\Delta^{(k)}\mathbf{I}_{Tim} = \mathbf{G}_{red}\,\Delta^{(k)}\mathbf{E}'_Q + \mathbf{B}_{red}\,\Delta^{(k)}\mathbf{E}'_D \qquad (6.34)$$

Comparison of equations (6.11) and (6.12) with (6.8) and (6.9), shows that 
both groups of equations have the same structure. Further, the formulations for 
(k)E’D and (k)E’Q are analogous to those in equations (6.31) and (6.32). The de‐
tails are omitted. 
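A short sketch of the current modules (6.31)-(6.32) is shown below. The lists dITre and dITim, holding the j-th order modules of the terminal currents, are an assumed data layout; the analogous routine for Δ(k)id differs only in the trigonometric factors.

```python
import numpy as np
from math import factorial

def delta_iq(k, delta_sep, d_delta, dITre, dITim):
    """k-th order module of i_q according to (6.31).
    dITre[j], dITim[j]: j-th order modules of the terminal current (j = 0..k)."""
    out = np.zeros_like(d_delta)
    for j in range(k + 1):
        shift = j * np.pi / 2.0
        out += (np.cos(delta_sep + shift) * dITre[k - j]
                + np.sin(delta_sep + shift) * dITim[k - j]) * d_delta ** j / factorial(j)
    return out
```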

6.2.2.2 Simple exciter and PSS 
Linearization of the terminal voltage terms, $\Delta^{(k)}\mathbf{E}_T$, in (6.16) yields

$$\begin{aligned}\Delta^{(k)}\mathbf{E}_T \approx{}& \left[\sum_{j=0}^{K_1}\Delta^{(j)}\mathbf{e}_d\cdot\Delta^{(k-j)}\mathbf{e}_d + \Delta^{(j)}\mathbf{e}_q\cdot\Delta^{(k-j)}\mathbf{e}_q\right]\cdot\breve{\mathbf{E}}_{T_{sep}}\\ &-\frac{1}{2}\sum_{j=1}^{K}\left[\sum_{l=0}^{K_1(j)}\frac{1}{l!(j-l)!}\left(\Delta^{(l)}\mathbf{e}_d\cdot\Delta^{(j-l)}\mathbf{e}_d + \Delta^{(l)}\mathbf{e}_q\cdot\Delta^{(j-l)}\mathbf{e}_q\right)\right]\left[\sum_{m=0}^{K_2(j)}\frac{1}{m!(k-j-m)!}\left(\Delta^{(m)}\mathbf{e}_d\cdot\Delta^{(k-j-m)}\mathbf{e}_d + \Delta^{(m)}\mathbf{e}_q\cdot\Delta^{(k-j-m)}\mathbf{e}_q\right)\right]\cdot\breve{\mathbf{E}}^{3}_{T_{sep}}\\ &+(-1)^{k-1}\,\frac{2k-3}{k!}\left(\Delta^{(0)}\mathbf{e}_d\cdot\Delta^{(1)}\mathbf{e}_d + \Delta^{(0)}\mathbf{e}_q\cdot\Delta^{(1)}\mathbf{e}_q\right)^{k}\cdot\breve{\mathbf{E}}^{\,2(k-1)+1}_{T_{sep}}\end{aligned} \qquad (6.35)$$

where k = 1, 2, 3; K = ⌊k/2⌋, K1(j) = ⌊j/2⌋, and K2(j) = ⌊(k−j)/2⌋.

In (6.35), the terms (k)ed and (k)eq are given by 

$$\Delta^{(k)}\mathbf{e}_d = \Delta^{(k)}\mathbf{E}'_d - \mathbf{R}_a\cdot\Delta^{(k)}\mathbf{i}_d + \mathbf{X}'_d\cdot\Delta^{(k)}\mathbf{i}_q \qquad (6.36)$$

$$\Delta^{(k)}\mathbf{e}_q = \Delta^{(k)}\mathbf{E}'_q - \mathbf{R}_a\cdot\Delta^{(k)}\mathbf{i}_q - \mathbf{X}'_d\cdot\Delta^{(k)}\mathbf{i}_d \qquad (6.37)$$

The modules corresponding to the other internal dynamics represented in 
the diagrams of figures 6.1 and 6.2 can be easily obtained by applying the same 
procedure. 

6.3 Input matrices and interactions state-input

In this section, the effect of the input signals on the non-linear evolution of the considered multi-machine power system model is studied. From our previous analyses, we assume that the third-order model of the system considering inputs, u, can be written as

$$\begin{bmatrix}\dot{\mathbf{y}}\\ \dot{\mathbf{y}}_2\\ \dot{\mathbf{y}}_3\end{bmatrix}=\begin{bmatrix}\boldsymbol{\Lambda}_1 & \mathbf{H}_{12} & \mathbf{H}_{13}\\ \mathbf{0} & \boldsymbol{\Lambda}_2 & \mathbf{H}_{23}\\ \mathbf{0} & \mathbf{0} & \boldsymbol{\Lambda}_3\end{bmatrix}\begin{bmatrix}\mathbf{y}\\ \mathbf{y}_2\\ \mathbf{y}_3\end{bmatrix}+\begin{bmatrix}\tilde{\mathbf{B}}_{11} & \tilde{\mathbf{B}}_{12} & \tilde{\mathbf{B}}_{13}\\ \tilde{\mathbf{B}}_{21} & \tilde{\mathbf{B}}_{22} & \tilde{\mathbf{B}}_{23}\\ \tilde{\mathbf{B}}_{31} & \tilde{\mathbf{B}}_{32} & \tilde{\mathbf{B}}_{33}\end{bmatrix}\begin{bmatrix}\mathbf{u}\\ \mathbf{u}_2\\ \mathbf{u}_3\end{bmatrix}+\begin{bmatrix}\left(\sum_{j=1}^{N}y_j\tilde{\mathbf{N}}^1_j+\sum_{j=1}^{N}\sum_{k\ge j}^{N}y_jy_k\tilde{\mathbf{M}}^1_{jk}\right)\mathbf{u}+\left(\sum_{j=1}^{N}y_j\tilde{\mathbf{L}}^1_j\right)\mathbf{u}_2\\ \left(\sum_{j=1}^{N}y_j\tilde{\mathbf{N}}^2_j+\sum_{j=1}^{N}\sum_{k\ge j}^{N}y_jy_k\tilde{\mathbf{M}}^2_{jk}\right)\mathbf{u}+\left(\sum_{j=1}^{N}y_j\tilde{\mathbf{L}}^2_j\right)\mathbf{u}_2\\ \left(\sum_{j=1}^{N}y_j\tilde{\mathbf{N}}^3_j+\sum_{j=1}^{N}\sum_{k\ge j}^{N}y_jy_k\tilde{\mathbf{M}}^3_{jk}\right)\mathbf{u}+\left(\sum_{j=1}^{N}y_j\tilde{\mathbf{L}}^3_j\right)\mathbf{u}_2\end{bmatrix} \qquad (6.38)$$

with $\mathbf{y}_2 = [y_1y_1\;\;y_1y_2\;\;\cdots\;\;y_Ny_N]^T$, $\mathbf{u}_2 = [u_1u_1\;\;u_1u_2\;\;\cdots\;\;u_Nu_N]^T$, $\mathbf{y}_3 = [y_1y_1y_1\;\;y_1y_1y_2\;\;\cdots\;\;y_Ny_Ny_N]^T$, and $\mathbf{u}_3 = [u_1u_1u_1\;\;u_1u_1u_2\;\;\cdots\;\;u_Nu_Nu_N]^T$.

Recall from Chapter 3, that A11 is the Jacobian matrix of the system, V11 is 
the  corresponding  matrix  of  right  eigenvectors,  and  Λ1  a  diagonal  matrix  con‐
taining the linear stability eigenvalues. 

From these assumptions, the matrix $\tilde{\mathbf{B}}_{11}$ in (6.38) is computed as:

$$\tilde{\mathbf{B}}_{11} = \mathbf{V}_{11}^{-1}\mathbf{B}_{11} \qquad (6.39)$$

where B11 is the input matrix of the system.

The other interaction matrices are defined similarly.  

6.3.1 First‐order dynamics  
In deriving the first-order dynamics, we note from equations (6.1) and (6.21), and from Figure 6.1, that PMech does not have any direct influence on the second- or higher-order terms. The same is observed for the reference voltage, Vref. As a result, $\tilde{\mathbf{B}}_{12} = \mathbf{0}$ and $\tilde{\mathbf{B}}_{13} = \mathbf{0}$.

From the analysis above, it can be noted that there is no interaction between the input signals, PMech and Vref, and the state variables. Therefore, the matrices $\tilde{\mathbf{N}}^1_j$, $\tilde{\mathbf{M}}^1_{jk}$, and $\tilde{\mathbf{L}}^1_j$ are null.

6.3.2 Second‐order dynamics 
Assume that equation (4.32) is expressed in the form 

$$\dot{y}_j = \lambda_j y_j + \sum_{k=1}^{R} b1_{j,k}\,u_k + \sum_{k=1}^{N}\sum_{l=k}^{N} h2^{j}_{k,l}\,y_k y_l + \sum_{k=1}^{N}\sum_{l=k}^{N}\sum_{m=l}^{N} h3^{j}_{k,l,m}\,y_k y_l y_m \qquad (6.40)$$

where $\tilde{\mathbf{B}}_{11} = \left[\,b1_{j,k}\,\right]$.

Then, equations (4.33) to (4.35) and (6.40) are used to describe the evolu‐
tion  of  the  quadratic  Koopman  eigenfunctions.  Straightforward  computation 
shows that:  

y j ,k  y j y k  yk y j 


 R N N

           y j  k yk   bk1l ul   h2k lm yl ym     (6. 41) 
 l 1 l 1 m l 
 R N N

 yk   j y j   b j l ul   h2j lm yl ym   O  4 
1

 l 1 l 1 m l 

where fourth and higher‐order effects are contained in O(4) and are neglected in 
the analysis that follows. 

103
Of interest, it can be noted from (6.41) that, for the quadratic Koopman eigenfunctions, the only set of non-linear interactions that is not null is the one related to the matrices $\tilde{\mathbf{N}}^2_j$, obtained from $\tilde{\mathbf{B}}_{11}$, i.e.,

$$\tilde{\mathbf{N}}^2_j = F_{lineal}\left(\tilde{\mathbf{B}}_{11}\right)$$

6.3.3 Third‐order dynamics 
From (6.40) and equations (4.36), (4.37), and (4.38), we have that

$$\dot{y}_{jkl} = \dot{y}_j y_k y_l + y_j\dot{y}_k y_l + y_j y_k\dot{y}_l = y_j y_k\left(\lambda_l y_l + \sum_{m=1}^{R}b1_{l,m}\,u_m\right) + y_j y_l\left(\lambda_k y_k + \sum_{m=1}^{R}b1_{k,m}\,u_m\right) + y_k y_l\left(\lambda_j y_j + \sum_{m=1}^{R}b1_{j,m}\,u_m\right) + O(4) \qquad (6.42)$$

Of note, the matrices of coefficients $\tilde{\mathbf{B}}_{31}$, $\tilde{\mathbf{B}}_{32}$, $\tilde{\mathbf{B}}_{33}$, $\tilde{\mathbf{N}}^3_j$, and $\tilde{\mathbf{L}}^3_j$ are null, whilst $\tilde{\mathbf{M}}^3_{jk}$ can be obtained from the coefficients $b1_{j,k}$ via

$$\tilde{\mathbf{M}}^3_{jk} = F_{lineal}\left(\tilde{\mathbf{B}}_{11}\right)$$

which leads upon simplification to the third‐order model 

 
$$\begin{bmatrix}\dot{\mathbf{y}}\\ \dot{\mathbf{y}}_2\\ \dot{\mathbf{y}}_3\end{bmatrix}=\begin{bmatrix}\boldsymbol{\Lambda}_1 & \mathbf{H}_{12} & \mathbf{H}_{13}\\ \mathbf{0} & \boldsymbol{\Lambda}_2 & \mathbf{H}_{23}\\ \mathbf{0} & \mathbf{0} & \boldsymbol{\Lambda}_3\end{bmatrix}\begin{bmatrix}\mathbf{y}\\ \mathbf{y}_2\\ \mathbf{y}_3\end{bmatrix}+\begin{bmatrix}\tilde{\mathbf{B}}_{11} & \mathbf{0} & \mathbf{0}\\ \sum_{j=1}^{N}y_j\tilde{\mathbf{N}}^2_j & \mathbf{0} & \mathbf{0}\\ \sum_{j=1}^{N}\sum_{k\ge j}^{N}y_jy_k\tilde{\mathbf{M}}^3_{jk} & \mathbf{0} & \mathbf{0}\end{bmatrix}\begin{bmatrix}\mathbf{u}\\ \mathbf{u}_2\\ \mathbf{u}_3\end{bmatrix} \qquad (6.43)$$

Attention  is  now  turned  to  the  assessment  of  the  non‐linear  content  un‐
derlying some output signals of interest.   

6.4 Third-order analysis of the output matrices

The  selected  observables  of  interest  are  the  magnitudes  and  phases  of  the  bus 
voltages  and  the  active  power  flow  being  transferred  through  the  tie  lines  [2], 
[3]. The analysis can be extended to include other output functions.  

6.4.1 Bus voltages 

6.4.1.1 Non‐linear equations  
Let Y12=G12+jB12 be the matrix of admittances that projects the internal voltages 
of the generators onto the bus voltages of the system as [3] 

$$\mathbf{V}_{bus} = \mathbf{V}_{bre} + j\mathbf{V}_{bim} = \left(\mathbf{G}_{12} + j\mathbf{B}_{12}\right)\left(\mathbf{E}'_D + j\mathbf{E}'_Q\right) \qquad (6.44)$$

It follows that

$$\mathbf{E}_{bus} = \sqrt{\mathbf{V}_{bre}^2 + \mathbf{V}_{bim}^2} \qquad (6.45)$$

and

$$\boldsymbol{\theta}_{bus} = \arctan\left(\mathbf{F}_V\right) \qquad (6.46)$$

where

$$\mathbf{F}_V = \breve{\mathbf{V}}_{bre}\cdot\mathbf{V}_{bim} \qquad (6.47)$$

6.4.1.2 Analysis of non‐linear content 
Insight  into  the  non‐linear  effects  of  the  state  variables  on  the  evolution  of  the 
bus voltages can be obtained from cubic recursive linearization of the model. 

In assessing these effects, it should be observed that the analysis of equation (6.44) is straightforward, whereas the formulation (6.45) for computing the bus voltage magnitudes Ebus is equivalent to the analysis of the terminal voltage magnitudes in (6.16). As a direct consequence, the terms Δ(k)Vbre and Δ(k)Vbim need not be included in the analysis, whilst, for the modular analysis of Ebus, the same structure presented in (6.35) must be considered.

Analysis  of  the  vector  of  bus  voltages’  phases,  θbus,  is  more  complicated 
due to the nature of the function arctan and deserves some context. The lineari‐
zation of the bus angles is illustrated in Figure 6.3. 

 

Fig. 6.3. Scheme of the recursive linearization of the vector θbus.

where $\mathbf{K}_{F_V}$ denotes the element-wise reciprocal of $\left(\mathbf{1}_{ng} + \mathbf{F}_{V_{sep}}\cdot\mathbf{F}_{V_{sep}}\right)$.

The  linearization  of  the  auxiliary  term  FV  follows  the  same  lines  as  that 
presented in Fig. 6.3. The derivation has been omitted for reasons of space. 

6.4.2 Active power flows 
Let the vector of active power flows of all the lines of the system be Pline ∈ ℝ^nl, where nl is the number of tie lines [1]-[3], and let VS ∈ ℝ^nl and VR ∈ ℝ^nl be the vectors of bus voltage magnitudes at the sending and receiving ends of the lines, respectively, obtained from simple projections of the vector of bus voltages Vbus.

  The  procedure  adopted  to  compute  the  output  matrices  is  summarized 
below. 

6.4.2.1 Non‐linear equations 
Thus, for instance, if VSj is equal to the voltage at bus 1,

$$V_{S_j} = \mathbf{e}_1^T\,\mathbf{V}_{bus}$$

where ej is the j-th column of a properly-sized identity matrix.

Noting that YL is the matrix of series admittances of the lines and that Bsh 
is the vector of shunt susceptances, the line currents are determined from [3] 

$$\mathbf{I}_L = \mathbf{Y}_L\left(\mathbf{V}_S - \mathbf{V}_R\right) + \mathbf{B}_{sh}\cdot\mathbf{V}_S \qquad (6.48)$$

Recalling that $\mathbf{Y}_L = \mathbf{G}_L + j\mathbf{B}_L$, we can write

$$\mathbf{V}_S = \mathbf{V}_{Sre} + j\mathbf{V}_{Sim}$$

$$\mathbf{V}_R = \mathbf{V}_{Rre} + j\mathbf{V}_{Rim}$$

Similarly, the real and imaginary parts of the line currents, $\mathbf{I}_{Lre}$ and $\mathbf{I}_{Lim}$, are determined from equation (6.48) and can be expressed as:

$$\mathbf{I}_{Lre} = \mathbf{G}_L\left(\mathbf{V}_{Sre} - \mathbf{V}_{Rre}\right) - \mathbf{B}_L\left(\mathbf{V}_{Sim} - \mathbf{V}_{Rim}\right) + \mathbf{B}_{sh}\cdot\mathbf{V}_{Sre} \qquad (6.49)$$

$$\mathbf{I}_{Lim} = \mathbf{G}_L\left(\mathbf{V}_{Sim} - \mathbf{V}_{Rim}\right) + \mathbf{B}_L\left(\mathbf{V}_{Sre} - \mathbf{V}_{Rre}\right) + \mathbf{B}_{sh}\cdot\mathbf{V}_{Sim} \qquad (6.50)$$

and [1], [2]

$$\mathbf{S}_{line} = \mathbf{V}_S\cdot\mathbf{I}_L^{*}$$

Also,

$$\mathbf{P}_{line} = \mathbf{V}_{Sre}\cdot\mathbf{I}_{Lre} + \mathbf{V}_{Sim}\cdot\mathbf{I}_{Lim} \qquad (6.51)$$

Based  on  these  representations,  the  non‐linear  effects  of  the  state  varia‐
bles on the output variables can be examined as discussed below.  
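As a small aside, the following sketch evaluates the line flows (6.48)-(6.51) directly from complex bus voltages. The treatment of the shunt term as j·Bsh·VS is one plausible reading of (6.48) and is an assumption of this sketch.

```python
import numpy as np

def line_active_power(VS, VR, YL, Bsh):
    """Active power flow of each line per (6.48)-(6.51).
    VS, VR: complex sending/receiving voltages; YL: series admittances; Bsh: shunt susceptances."""
    IL = YL * (VS - VR) + 1j * Bsh * VS               # line currents, cf. (6.48)
    return VS.real * IL.real + VS.imag * IL.imag      # P_line, eq. (6.51)
```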

6.4.2.2 Recursively linearized modules 
Applying  the  recursive  linearization  method  to  equations  (6.49)  and  (6.50)  re‐
sults in:  

$$\Delta^{(k)}\mathbf{I}_{Lre} = \mathbf{G}_L\left[\Delta^{(k)}\mathbf{V}_{Sre} - \Delta^{(k)}\mathbf{V}_{Rre}\right] - \mathbf{B}_L\left[\Delta^{(k)}\mathbf{V}_{Sim} - \Delta^{(k)}\mathbf{V}_{Rim}\right] + \mathbf{B}_{Sh}\cdot\Delta^{(k)}\mathbf{V}_{Sre} \qquad (6.52)$$

$$\Delta^{(k)}\mathbf{I}_{Lim} = \mathbf{B}_L\left[\Delta^{(k)}\mathbf{V}_{Sre} - \Delta^{(k)}\mathbf{V}_{Rre}\right] + \mathbf{G}_L\left[\Delta^{(k)}\mathbf{V}_{Sim} - \Delta^{(k)}\mathbf{V}_{Rim}\right] + \mathbf{B}_{Sh}\cdot\Delta^{(k)}\mathbf{V}_{Sim} \qquad (6.53)$$

Similarly, it can be proved that:  

$$\Delta^{(k)}\mathbf{P}_{line} = \sum_{j=0}^{k}\left[\Delta^{(j)}\mathbf{V}_{Sre}\cdot\Delta^{(k-j)}\mathbf{I}_{Lre} + \Delta^{(j)}\mathbf{V}_{Sim}\cdot\Delta^{(k-j)}\mathbf{I}_{Lim}\right] \qquad (6.54)$$

Finally, the modular expression of order k for the selected output signals, 
following the notation of equation (3.48), would be:

$$\Delta^{(k)}\mathbf{h}' = \begin{bmatrix}\Delta^{(k)}\mathbf{E}_{bus}\\ \Delta^{(k)}\boldsymbol{\theta}_{bus}\\ \Delta^{(k)}\mathbf{P}_{line}\end{bmatrix} \qquad (6.55)$$

6.5 Concluding remarks


In  this  Chapter,  the  recursive  linearization  method  was  illustrated  on  a  simple 
power system model. The model, however, is general and can be applied to ar‐
bitrary power system representations. 

An interesting observation  from  the  analysis of higher‐order expansions 


is that even if the inputs of the system do not have a direct non‐linear effect on 
the evolution of the system, input‐state interactions arise from the higher‐order 
Koopman eigenfunctions.  

It  is  also  observed  that  although  the  recursive  linearization  provides  a 
simple algebraic formulation, the number of required operations may grow ex‐
ponentially for some complex non‐linear functions. This may be a limiting factor 
when higher‐order representations of complex dynamical systems are required. 

In  the  next  Chapter,  the  usefulness  of  the  obtained  third‐order  model  is 
demonstrated when it is used to calculate a non‐linear quadratic controller.  

6.6 References
[1] P. Kundur, Power System Stability and Control, The EPRI Power System En‐
gineering Series, McGraw‐Hill, New York, 1994. 

[2] P.  W.  Sauer  and  M.  A.  Pai,  Power  system  dynamics  and  stability,  Prentice 
Hall, New Jersey, 1998. 

[3] G.  Rogers,  Power  Systems  Oscillations,  Kluwer  Academic  Publishers,  MA, 
2000. 

[4] J.  J.  Sanchez‐Gasca,  V.  Vittal,  M.J.  Gibbard,  A.R.  Messina,  D.J.  Vowles,  S. 
Liu, and U.D. Annakkage, “Inclusion of higher order terms for small‐signal 
(modal) analysis: Committee report‐task force on assessing the need to in‐
clude  higher  order  terms  for  small‐signal  (modal)  analysis,”  IEEE  Trans. 
Power Syst., vol. 20, no. 4, pp. 1886‐1904,  Nov. 2005.  

[5] P.  Anderson and  A.  Fouad,  Power  System  Control  and  Stability  (IEEE  Press 
Power Engineering Series). Piscataway, NJ, USA, 2003. 

[6] K. J. Horadam, Hadamard Matrices and Their Applications, Princeton Univer‐
sity Press, NJ, USA, 2006. 

Chapter 7
PKMA-based Truncated Non-
linear Quadratic Controller
In this chapter, the development of a truncated non-linear quadratic controller based on
the perturbed Koopman mode analysis (PKMA) method and the linear optimal control
theory is presented.

The theoretical developments behind the non-linear controller design are first es-
tablished, including the analysis of the bilinear effect arising in the quadratic Koopman
eigenfunctions-based extended models, the structural constraints on the quadratic gains
matrix, and the study of the second-order phenomena in the closed-loop system.

The derivation of the truncated quadratic controller is then provided, highlight-


ing all the considerations made during the designing process.

Additionally, efficient algorithms for computing some sub-processes and


implementing the obtained non-linear control law are presented. The use of the
Koopman observability measures is considered to center the analysis on the most domi-
nant quadratic modes depending on the applied disturbance.

Finally, the main advantages and drawbacks of the proposed approach are
discussed, and a scheme for future online applications is provided.
7.1 Problem formulation
The high dimensionality of electric power systems and the possibly-limited
communications lead to the necessity of structurally constrained controllers [1]-
[3]. Moreover, the power system operation under high loading conditions and
the occurrence of stressing disturbances may require the design of high-order
controllers capable of extending the area of attraction of the linear quadratic
regulator (LQR) and of increasing the reliability of the system [4]-[7].

In this section, the design of structurally-constrained LQR controllers is reviewed. In addition, the general features required for designing a quadratic non-linear control law are established.

7.1.1 Linear structurally-constrained LQR controllers


Following [1] and [2], let us consider a discrete-time linearized power system
model:

x  k  1  A11x  k   B11u  k   Bd d  k  (7. 1)

where x(k)=[x1(k), …, xN(k)]T is the vector of state variables, u(k)=[u1(k), …, up(k)]T


is the vector of input variables, A11N×N is the Jacobian matrix of the system,
and B11N×p is the control input matrix. The matrix BdN×d is the disturbance
input matrix, and d(k) is the disturbance applied to the system. The quantity
Bdd(k) is usually unknown [1].

The objective is to design a feedback control of the form [8]

$$\mathbf{u}(k) = \mathbf{K}_{11}\mathbf{x}(k) \qquad (7.2)$$

that damps the closed-loop oscillations, where $\mathbf{K}_{11}$ satisfies the structural constraints (denoted here by $\mathcal{S}_1$) and minimizes the objective function:

$$J = \sum_{k=0}^{\infty}\left[\mathbf{x}^T(k)\mathbf{Q}_{11}\mathbf{x}(k) + \mathbf{u}^T(k)\mathbf{R}\mathbf{u}(k)\right] \qquad (7.3)$$

where $\mathbf{Q}_{11}\in\mathbb{R}^{N\times N}$ and $\mathbf{R}\in\mathbb{R}^{p\times p}$ are positive semi-definite matrices.

7.1.1.1 Distributed control
The problem of designing a distributed control strategy for determining $\mathcal{S}_1$ was treated in [1] based on modal participation analysis. In this sense, the evolution of the j-th system state variable can be expressed as:

$$x_j(k) = \sum_{l=1}^{N}\hat{p}_{j,l}\,\mu_l^{\,k} = \sum_{l=1}^{\hat{N}}\hat{p}_{j,l}\,\mu_l^{\,k} + \beta_j(k) \qquad (7.4)$$

where $\hat{p}_{j,l}$ is the residue of the l-th linear eigenmode in the state variable $x_j$, N is the number of state variables of the system, and $\hat{N}$ is the number of electromechanical (or dominant) modes. The term $\beta_j(k)$ represents the joint effect of the non-dominant linear modes.

By using this partition, and considering those residues whose value exceeds a threshold η1, a sparse structure is determined for the gain matrix K11, which would have the form [1]:

$$\begin{bmatrix}u_1\\ u_2\\ \vdots\\ u_p\end{bmatrix} = \underbrace{\begin{bmatrix}\times & 0 & \cdots & \times\\ 0 & \times & \cdots & \times\\ \vdots & & & \vdots\\ \times & 0 & \cdots & 0\end{bmatrix}}_{\mathcal{S}_1}\begin{bmatrix}x_1\\ x_2\\ \vdots\\ x_N\end{bmatrix} \qquad (7.5)$$

with symbols × indicating the positions where a gain must be placed for the
feedback gain matrix K11.

In order to emphasize the importance of the sparsity of matrix K11, Fig. 7.1 illustrates the full communication network of a 6-machine system, including all the links connecting the generators, in comparison with a sparse communication structure determined by $\mathcal{S}_1$ and consisting of the continuous blue links. For more details on the determination of $\mathcal{S}_1$, please refer to [1].
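A minimal sketch of the thresholding step is shown below. It only illustrates how a 0/1 indicator can be obtained from residue magnitudes; the mapping of residues to actuator/state positions of the gain matrix follows [1] and is not reproduced here.

```python
import numpy as np

def sparse_structure(residues, eta1):
    """0/1 indicator of the sparse feedback structure.
    residues[i, j]: magnitude of the residue associated with input i and state j
    (an assumed, already-arranged layout); eta1: threshold of (7.5)."""
    return (np.abs(residues) > eta1).astype(int)
```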

7.1.1.2 Constrained LQR control


Once $\mathcal{S}_1$ has been determined using the strategy described above, the problem is to obtain the optimal gain matrix $\mathbf{K}_{11}\in\mathcal{S}_1$ that minimizes the function J of equation (7.3).
Fig. 7.1. Scheme of a sparse communication structure (continuous blue lines), in comparison
with the full communication network (including the red dotted lines).

Following [2], the optimal gain matrix K11 that minimizes (7.3) is given by

$$\mathbf{K}_{11} = -\left(\mathbf{R} + \mathbf{B}_{11}^T\mathbf{P}_{11}\mathbf{B}_{11}\right)^{-1}\mathbf{B}_{11}^T\mathbf{P}_{11}\mathbf{A}_{11} \qquad (7.6)$$

with P11 being the unique positive semi-definite N×N matrix satisfying the discrete algebraic Riccati equation (DARE)

$$\mathbf{P}_{11} = \mathbf{Q}_{11} - \mathbf{A}_{11}^T\mathbf{P}_{11}\mathbf{B}_{11}\left(\mathbf{R} + \mathbf{B}_{11}^T\mathbf{P}_{11}\mathbf{B}_{11}\right)^{-1}\mathbf{B}_{11}^T\mathbf{P}_{11}\mathbf{A}_{11} + \mathbf{A}_{11}^T\mathbf{P}_{11}\mathbf{A}_{11} \qquad (7.7)$$

However, the computed gain matrix, in general, does not satisfy the structural constraint $\mathcal{S}_1$ [1].

In order to obtain the desired K11, we first define the indicator matrix Ic1, which is equal to $\mathcal{S}_1$ but with 1's instead of the ×'s. Then, the following identity must hold for the structurally-constrained gain matrix K11

$$\hat{F}\left(\mathbf{K}_{11}\right) = \mathbf{K}_{11} - \mathbf{K}_{11}\cdot\mathbf{I}_{c1} = \mathbf{0} \qquad (7.8)$$

where $\hat{F}(\mathbf{K}_{11})$ is the orthogonal projection of K11 onto the state variables ignored by $\mathcal{S}_1$ [2]. In (7.8), the symbol · represents the Hadamard product [9].

Then, according to [2], for the discrete-time system (7.1) and a control law of the form (7.2), if the gain matrix K11 satisfies

$$\mathbf{K}_{11} = \mathbf{L}_{11} - \left(\mathbf{R} + \mathbf{B}_{11}^T\mathbf{P}_{11}\mathbf{B}_{11}\right)^{-1}\mathbf{B}_{11}^T\mathbf{P}_{11}\mathbf{A}_{11} \qquad (7.9)$$

where L11 is an arbitrary matrix, and P11 is the solution of the generalized discrete algebraic Riccati equation (GDARE):

$$\mathbf{P}_{11} = \mathbf{Q}_{11} - \mathbf{A}_{11}^T\mathbf{P}_{11}\mathbf{B}_{11}\left(\mathbf{R} + \mathbf{B}_{11}^T\mathbf{P}_{11}\mathbf{B}_{11}\right)^{-1}\mathbf{B}_{11}^T\mathbf{P}_{11}\mathbf{A}_{11} + \mathbf{A}_{11}^T\mathbf{P}_{11}\mathbf{A}_{11} + \mathbf{L}_{11}^T\left(\mathbf{R} + \mathbf{B}_{11}^T\mathbf{P}_{11}\mathbf{B}_{11}\right)\mathbf{L}_{11} \qquad (7.10)$$

then the closed-loop system A11-B11K11 is Hurwitz [2].

Additionally, in order to enforce the structural constraints, the matrix L11 must be chosen as [2]:

$$\mathbf{L}_{11} = \hat{F}\left(\Gamma\left(\mathbf{P}_{11}\right)\right) \qquad (7.11)$$

with

$$\Gamma\left(\mathbf{P}_{11}\right) = \left(\mathbf{R} + \mathbf{B}_{11}^T\mathbf{P}_{11}\mathbf{B}_{11}\right)^{-1}\mathbf{B}_{11}^T\mathbf{P}_{11}\mathbf{A}_{11} \qquad (7.12)$$

A simple and practical algorithm for the iterative solution of the problem of finding P11 can be found in reference [1] and is presented here as Algorithm 7.1.

Algorithm 7.1. Algorithm to find P11 for the configuration $\mathcal{S}_1$.

1) Obtain $\mathbf{P}_{11}^{(0)}$ from (7.7) – Initialize

2) Set k = 0

3) $\mathbf{L}_{11}^{(k+1)} = \hat{F}\left(\Gamma\left(\mathbf{P}_{11}^{(k)}\right)\right)$ – Enforce structure $\mathcal{S}_1$

4) $\mathbf{Q}_{11}^{(k+1)} = \mathbf{Q}_{11}^{(0)} + \mathbf{L}_{11}^{(k+1)T}\left(\mathbf{R} + \mathbf{B}_{11}^T\mathbf{P}_{11}^{(k)}\mathbf{B}_{11}\right)\mathbf{L}_{11}^{(k+1)}$, and solve (7.7) with $\mathbf{Q}_{11}^{(k+1)}$ in place of $\mathbf{Q}_{11}$ to obtain $\mathbf{P}_{11}^{(k+1)}$ – Enforce structure $\mathcal{S}_1$

5) If $\left\|\mathbf{P}_{11}^{(k+1)} - \mathbf{P}_{11}^{(k)}\right\| / \left\|\mathbf{P}_{11}^{(0)}\right\| < \epsilon_1$, then – Check convergence

6) $\mathbf{K}_{11} = \mathbf{L}_{11}^{(k+1)} - \Gamma\left(\mathbf{P}_{11}^{(k+1)}\right)$

7) Else, k ← k + 1 and go to step 3 – end if.
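A minimal Python sketch of Algorithm 7.1 is given below. It relies on the standard SciPy DARE solver and assumes the convergence tolerance and iteration limit are user choices; it is a sketch of the iteration, not a validated implementation.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def structured_lqr(A, B, Q, R, Ic1, tol=1e-6, max_iter=50):
    """Structurally constrained discrete-time LQR, following Algorithm 7.1.
    Ic1: 0/1 indicator of the allowed gain entries (the structure S1)."""
    gamma = lambda P: np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # Gamma(P) of (7.12)
    P = solve_discrete_are(A, B, Q, R)                               # step 1: initialize
    P0_norm = np.linalg.norm(P)
    for _ in range(max_iter):
        L = gamma(P) * (1 - Ic1)                                     # step 3: keep only ignored entries
        Qk = Q + L.T @ (R + B.T @ P @ B) @ L                         # step 4: updated weight
        P_new = solve_discrete_are(A, B, Qk, R)
        converged = np.linalg.norm(P_new - P) / P0_norm < tol        # step 5: convergence check
        P = P_new
        if converged:
            break
    K = gamma(P) * (1 - Ic1) - gamma(P)                              # steps 3 and 6: K = L - Gamma(P)
    return K, P                                                      # u(k) = K x(k), cf. (7.2)
```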

7.1.2 General characteristics of the required quadratic controller
As has been established previously in several research works, the inclusion of second-order terms in the small-signal stability analysis of power systems can be very demanding and impractical [4]-[6].

The second-order controller design strategy proposed in this document aims to avoid these drawbacks and to open the possibility of online application for damping inter-area oscillations in power systems.

With this objective in mind, the main requirements for designing the de-
sired controller are:

1) Adequate assessment of the second-order phenomena that need to be


considered.

2) The introduction of ranking criteria to identify the most dominant


quadratic terms.

3) The extension of the structurally-constrained controller design de-


scribed in Section 7.1.1 to include quadratic terms.

4) A methodology to only include a small set of dominant quadratic


terms in the designing process, to reduce the computational burden.

5) The definition of a criterion to identify the cases requiring a quadratic


controller.

Furthermore, the computation of the quadratic LQR controller with the


considerations described above must be accomplished in a short period so that
the proposed scheme can be applied online.

7.2 LQR design for quadratic Carleman models

7.2.1 Quadratic Carleman linearization-based models


Following [4], [6], [7], and [10], let us assume that a non-linear dynamical system is approximated using the perturbed model of equation (3.6), rewritten here for convenience up to the second order of approximation:

$$\dot{\mathbf{x}} = \mathbf{F}_1\left(\mathbf{x},\mathbf{u}\right) + \mathbf{F}_2\left(\mathbf{x},\mathbf{u}\right)$$

where

$$\mathbf{F}_1\left(\mathbf{x},\mathbf{u}\right) = \mathbf{A}_{11}\mathbf{x} + \mathbf{B}_{11}\mathbf{u}$$

$$\mathbf{F}_2\left(\mathbf{x},\mathbf{u}\right) = \mathbf{F}_2\mathbf{x}^2 + \sum_{j=1}^{N}\mathbf{N}^1_j\,x_j\,\mathbf{u} + \mathbf{B}_{12}\mathbf{u}^2$$

The matrices A11, F2, B11, B12, and N1j are properly sized real-valued matrices of coefficients [7].

Now, the second-order non-linear variables can be defined as

$$x_{jk} = x_j\,x_k$$

and the following second-order Carleman model can be built [7]:

$$\begin{bmatrix}\dot{\mathbf{x}}\\ \dot{\mathbf{x}}_2\end{bmatrix} = \underbrace{\begin{bmatrix}\mathbf{A}_{11} & \mathbf{A}_{12}\\ \mathbf{0} & \mathbf{A}_{22}\end{bmatrix}}_{\mathbf{A}_{2ord}}\underbrace{\begin{bmatrix}\mathbf{x}\\ \mathbf{x}_2\end{bmatrix}}_{\mathbf{x}_{2ord}} + \begin{bmatrix}\mathbf{B}_{11}\\ \sum_{j=1}^{N}x_j\,\mathbf{N}^2_j\end{bmatrix}\mathbf{u} \qquad (7.13)$$

In (7.13), the quadratic effects of B12 and N1j have been considered as
null, based on the results of the previous Chapter.

7.2.2 Bilinear-quadratic optimal control


The system of equation (7.13) is strictly a bilinear system [7], [10], [11]. The
quadratic cost functional to be minimized for this problem is given by [11]:

$$J = \frac{1}{2}\int_{0}^{\infty}\left(\mathbf{x}_{2ord}^T\mathbf{Q}_{2ord}\,\mathbf{x}_{2ord} + \mathbf{u}^T\mathbf{R}\,\mathbf{u}\right)dt \qquad (7.14)$$
where Q2ord and R are symmetric positive definite matrices, and the matrix of weights $\mathbf{Q}_{2ord}\in\mathbb{R}^{N_{2ord}\times N_{2ord}}$ is defined as [12]:

$$\mathbf{Q}_{2ord} = \begin{bmatrix}\mathbf{Q}_{11} & \mathbf{0}\\ \mathbf{0} & \mathbf{0}\end{bmatrix} \qquad (7.15)$$

For this type of dynamical system, there exist procedures for determining an optimal control law [11], [13]-[15], where the optimal feedback uOpt is computed as

$$\mathbf{u}_{Opt} = -\mathbf{R}^{-1}\left(\mathbf{B}_{11} + \sum_{j=1}^{N}x_j(t)\,\mathbf{N}^2_j\right)^{T}\mathbf{p}_1(t) \qquad (7.16)$$

with

$$\mathbf{p}_1(t) = \mathbf{K}_{11}(t)\,\mathbf{x}(t) \qquad (7.17)$$
being an adjoint variable of the optimization process [11].

Moreover, the optimal control law is calculated iteratively in time for a


specific vector of initial conditions x0 [11], [13]-[15].

As these techniques are computationally demanding and dependent on the initial conditions x0, their application to wide-area oscillation damping in power systems is impractical.

In the next Section, an alternative manner of calculating a feedback con-


trol for the system of (7.13) is presented.

7.2.3 Optimal control of quadratic Carleman models


As can be seen from equation (7.13), the bilinearity of the model arises from the
evolution of the quadratic terms, and it is directly related to the interaction of
the first-order variables.

In order to assess how a quasi-linear feedback control can be found for a


system of the form (7.13), we first compute the optimal uOpt for the linear system

$$\dot{\mathbf{x}} = \mathbf{A}_{11}\mathbf{x} + \mathbf{B}_{11}\mathbf{u} \qquad (7.18)$$

which is given by [8]

$$\mathbf{u}_{Opt} = -\mathbf{R}^{-1}\mathbf{B}_{11}^T\mathbf{P}_{11}\mathbf{x} = \mathbf{K}_{11}\mathbf{x} \qquad (7.19)$$

Now, by substituting (7.19) in (7.13), we get

$$\begin{bmatrix}\dot{\mathbf{x}}\\ \dot{\mathbf{x}}_2\end{bmatrix} = \begin{bmatrix}\mathbf{A}_{11} & \mathbf{A}_{12}\\ \mathbf{0} & \mathbf{A}_{22}\end{bmatrix}\begin{bmatrix}\mathbf{x}\\ \mathbf{x}_2\end{bmatrix} + \begin{bmatrix}\mathbf{B}_{11}\mathbf{K}_{11} & \mathbf{0}\\ \mathbf{0} & \mathbf{K}_{22}\end{bmatrix}\begin{bmatrix}\mathbf{x}\\ \mathbf{x}_2\end{bmatrix} \qquad (7.20)$$

where $\mathbf{K}_{22}\mathbf{x}_2$ comes from the reinterpretation of

$$\left(\sum_{j=1}^{N}x_j\,\mathbf{N}^2_j\right)\mathbf{K}_{11}\mathbf{x} \qquad (7.21)$$

The projection of (7.21) onto the term $\mathbf{K}_{22}\mathbf{x}_2$ is achieved by introducing a set of unitary matrices $\mathbf{M}_j$ that project the interaction of the matrix of coefficients $\mathbf{K}_{11}$ with the bilinear matrices $\mathbf{N}^2_j$, as shown below:

$$\sum_{j=1}^{N}\mathbf{N}^2_j\,\mathbf{K}_{11}\,\mathbf{M}_j = \mathbf{K}_{22} \qquad (7.22)$$

With the help of (7.22), a new algorithm for determining a quasi-linear


design of an LQR controller for the system of (7.13) can be proposed:

Algorithm 7.2. Iterative calculation of the quadratic LQR.

Given a non-linear model of the form (7.13),

1) Compute the input signal of iteration 0, $\mathbf{u}^{(0)}$, as the linear optimal solution $\mathbf{u}_{Opt}$ for the linear system (7.18) according to (7.19).

2) For iteration k, using $\mathbf{K}_{11}^{(k-1)}$ compute matrix $\mathbf{K}_{22}^{(k)}$ with (7.22).

3) Calculate $\mathbf{u}^{(k)}$ as the optimal feedback for the linear system

$$\begin{bmatrix}\dot{\mathbf{x}}\\ \dot{\mathbf{x}}_2\end{bmatrix} = \begin{bmatrix}\mathbf{A}_{11} & \mathbf{A}_{12}\\ \mathbf{0} & \mathbf{A}_{22} + \mathbf{K}_{22}^{(k)}\end{bmatrix}\begin{bmatrix}\mathbf{x}\\ \mathbf{x}_2\end{bmatrix} + \begin{bmatrix}\mathbf{B}_{11}\\ \mathbf{0}\end{bmatrix}\mathbf{u}^{(k)} \qquad (7.23)$$

as

$$\mathbf{u}^{(k)} = \mathbf{K}_{11}^{(k)}\mathbf{x} + \mathbf{K}_{12}^{(k)}\mathbf{x}_2 \qquad (7.24)$$

4) If $\left\|\mathbf{K}_{11}^{(k)} - \mathbf{K}_{11}^{(k-1)}\right\| > \epsilon_2$, go to step 2. If not, go to the next step.

5) Compute the eigenvalues of the controlled dynamical system

$$\begin{bmatrix}\dot{\mathbf{x}}\\ \dot{\mathbf{x}}_2\end{bmatrix} = \begin{bmatrix}\mathbf{A}_{11} + \mathbf{B}_{11}\mathbf{K}_{11}^{(k)} & \mathbf{A}_{12} + \mathbf{B}_{11}\mathbf{K}_{12}^{(k)}\\ \mathbf{0} & \mathbf{A}_{22} + \mathbf{K}_{22}^{(k)}\end{bmatrix}\begin{bmatrix}\mathbf{x}\\ \mathbf{x}_2\end{bmatrix} \qquad (7.25)$$
As can be noted in Algorithm 7.2, the effect of the third-order terms originated in $\dot{\mathbf{x}}_2$ by $\mathbf{K}_{12}^{(k)}$ is neglected.

The inclusion of matrix $\mathbf{K}_{22}^{(k)}$ in (7.23) to calculate the gain matrices $\mathbf{K}_{11}^{(k)}$ and $\mathbf{K}_{12}^{(k)}$ in (7.24) is intended to avoid ill-conditioned optimization issues and to achieve a better calculation of $\mathbf{K}_{12}^{(k)}$. It must be pointed out that, in general, one iteration of Algorithm 7.2 is enough.

However, the main drawback of this proposal is that the entire set of second-order terms must be aggregated into the analysis to adequately compute the gain matrices, exponentially increasing the required computational time.

In the next section, this issue is addressed, together with the problem of including structural constraints.

7.3 Controller design in Jordan space


In this Section, the quadratic controller design is performed in the Jordan coor-
dinates using the perturbed Koopman mode analysis (PKMA) technique and the
optimal linear control theory.

An objective comparison against the designing process based on the Car-


leman linearization presented above is provided. The advantages of using a sec-
ond-order Koopman eigenfunctions-based extended model instead of a Car-
leman quadratic model are highlighted and exploited to obtain a strategy to
compute a truncated quadratic controller.

7.3.1 First order
Following [8], for a linear system of the form (7.18), the optimal feedback law is
the one shown in (7.19) that minimizes the function J of (7.3).

Now, using the Jordan transformation

$$\mathbf{x} = \mathbf{V}_{11}\mathbf{y} \qquad (7.26)$$

the system of (7.18) is transformed into:

$$\dot{\mathbf{y}} = \boldsymbol{\Lambda}_1\mathbf{y} + \overline{\mathbf{B}}_{11}\mathbf{u} \qquad (7.27)$$

where

$$\overline{\mathbf{B}}_{11} = \mathbf{V}_{11}^{-1}\mathbf{B}_{11} \qquad (7.28)$$

Then, the optimal feedback uOpt of equation (7.19) is re-expressed as:

$$\mathbf{u}_{Opt} = \overline{\mathbf{K}}_{11}\mathbf{y} \qquad (7.29)$$

with

$$\overline{\mathbf{K}}_{11} = \mathbf{K}_{11}\mathbf{V}_{11} \qquad (7.30)$$

Therefore, the cost function J of (7.14) is now written as:

$$J = \frac{1}{2}\int_{0}^{\infty}\left(\mathbf{y}^H\overline{\mathbf{Q}}_{11}\mathbf{y} + \mathbf{u}^T\mathbf{R}\mathbf{u}\right)dt \qquad (7.31)$$

where $(\cdot)^H$ denotes the conjugate transpose operation and $\overline{\mathbf{Q}}_{11}$ is Hermitian [8].

In order to gain insight into the nature of $\overline{\mathbf{Q}}_{11}$, we recall the part of equation (7.3) regarding the cost of the state variables, which can be expressed as:

$$\mathbf{x}^T\mathbf{Q}_{11}\mathbf{x} = \mathbf{f}_J\left(\mathbf{x}\right)^T\mathbf{f}_J\left(\mathbf{x}\right)$$

where

$$\mathbf{f}_J\left(\mathbf{x}\right) = \mathbf{D}_J\mathbf{x} \qquad (7.32)$$

Thus, using (7.32) the matrix $\overline{\mathbf{Q}}_{11}$ can be defined as:

$$\overline{\mathbf{Q}}_{11} = \mathbf{V}_{11}^{-1}\mathbf{D}_J^T\mathbf{D}_J\mathbf{V}_{11} \qquad (7.33)$$

Then, the optimization problem is solved similarly to the case of physical coordinates, with the algebraic Riccati equation

$$\boldsymbol{\Lambda}_1^H\overline{\mathbf{P}}_{11} + \overline{\mathbf{P}}_{11}\boldsymbol{\Lambda}_1 - \overline{\mathbf{P}}_{11}\overline{\mathbf{B}}_{11}\mathbf{R}^{-1}\overline{\mathbf{B}}_{11}^T\overline{\mathbf{P}}_{11} + \overline{\mathbf{Q}}_{11} = \mathbf{0} \qquad (7.34)$$

with $\overline{\mathbf{P}}_{11} = \mathbf{V}_{11}^{-1}\mathbf{P}_{11}\mathbf{V}_{11}$ being a Hermitian matrix [8].

7.3.2 Second order


Now, with the perturbed Koopman mode analysis (PKMA) technique the following bilinear representation is obtained,

$$\begin{bmatrix}\dot{\mathbf{y}}\\ \dot{\mathbf{y}}_2\end{bmatrix} = \begin{bmatrix}\boldsymbol{\Lambda}_1 & \mathbf{H}_{12}\\ \mathbf{0} & \boldsymbol{\Lambda}_2\end{bmatrix}\begin{bmatrix}\mathbf{y}\\ \mathbf{y}_2\end{bmatrix} + \begin{bmatrix}\overline{\mathbf{B}}_{11}\\ \sum_{j=1}^{N}y_j\,\overline{\mathbf{N}}^2_j\end{bmatrix}\mathbf{u} \qquad (7.35)$$

with the proper definition of $\overline{\mathbf{N}}^2_j\in\mathbb{C}^{N_2\times R}$.

Following the procedure presented in Section 7.2.3, the optimal feedback would be given as:

$$\mathbf{u}_{Opt} = \begin{bmatrix}\overline{\mathbf{K}}_{11} & \overline{\mathbf{K}}_{12}\end{bmatrix}\begin{bmatrix}\mathbf{y}\\ \mathbf{y}_2\end{bmatrix} \qquad (7.36)$$

The definition of K 11 is given by equation (7.30), whereas the relationship


between the matrix K 12 and the physical variables needs to be determined.

Using the complete projection of the second-order Carleman model of (7.20) onto the Jordan space through the transformation (7.26), we would obtain

$$\begin{bmatrix}\dot{\mathbf{y}}\\ \dot{\mathbf{y}}_2\end{bmatrix} = \begin{bmatrix}\boldsymbol{\Lambda}_1 & \mathbf{V}_{11}^{-1}\mathbf{A}_{12}\mathbf{V}_{22}\\ \mathbf{0} & \boldsymbol{\Lambda}_2\end{bmatrix}\begin{bmatrix}\mathbf{y}\\ \mathbf{y}_2\end{bmatrix} + \begin{bmatrix}\overline{\mathbf{B}}_{11}\\ \sum_{j=1}^{N}y_j\,\hat{\mathbf{N}}^2_j\end{bmatrix}\mathbf{u} \qquad (7.37)$$
with $\hat{\mathbf{N}}^2_j$ denoting $\mathbf{V}_{22}^{-1}\mathbf{N}^2_j$, which is different from the matrix $\overline{\mathbf{N}}^2_j$ of (7.35), as will be shown below.

Following this idea, the computed control law would result in a matrix $\hat{\mathbf{K}}_{12}$ that can be interpreted through

$$\hat{\mathbf{K}}_{12}\mathbf{y}_2 = \hat{\mathbf{K}}_{12}\mathbf{V}_{22}\mathbf{x}_2 \qquad (7.38)$$

where

$$\mathbf{A}_{22} = \mathbf{V}_{22}^{-1}\boldsymbol{\Lambda}_2\mathbf{V}_{22}$$
Then, from equation (7.37) we can define:

$$\hat{\mathbf{H}}_{12} = \mathbf{V}_{11}^{-1}\mathbf{A}_{12}\mathbf{V}_{22} = \left[\hat{\mathbf{h}}12_1\;\;\hat{\mathbf{h}}12_2\;\;\cdots\;\;\hat{\mathbf{h}}12_{N_2}\right]$$

If we also define $\mathbf{H}_{12} = \left[\mathbf{h}12_1\;\;\mathbf{h}12_2\;\;\cdots\;\;\mathbf{h}12_{N_2}\right]$ from (7.35), we can note that

$$\mathbf{h}12_j = \alpha_j\,\hat{\mathbf{h}}12_j$$

i.e., the columns of matrix H12 obtained through the PKMA method are equal to the columns of matrix $\hat{\mathbf{H}}_{12}$ multiplied by a complex-valued scalar αj.
However, the transformation from the Carleman model (7.13) to the


model of equation (7.37) needs to be avoided to reduce the computational bur-
den.

With this in mind, we first consider the matrix U_{11} = V_{11}^{-1} = [u_{jk}], so that the
Jordan variables y_j can be expressed in terms of the state variables x_k as:

y_j = \sum_{k=1}^{N} u_{jk}\, x_k

and using the definition of the second-order terms provided in Chapter 4 we get

y_{jk} = y_j y_k = \left( \sum_{l=1}^{N} u_{jl} x_l \right)\!\left( \sum_{m=1}^{N} u_{km} x_m \right) = \sum_{l=1}^{N} u_{jl} u_{kl} x_l^2 + \sum_{l=1}^{N} \sum_{m=l+1}^{N} \left( u_{jl} u_{km} + u_{jm} u_{kl} \right) x_l x_m = [\, \upsilon_{jk,1} \;\; \upsilon_{jk,2} \;\; \cdots \;\; \upsilon_{jk,N_2} \,]\, x_2 \qquad (7.39)

Therefore, if a predetermined set of quadratic terms needs to be
considered, it can easily be projected onto the physical space by means of (7.39).

The interpretation of the computed gain matrix \bar{K}_{12} in terms of the
state variables x_j is then reached by using the projection (7.39) instead of (7.38).
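A small sketch of this projection is given below, under the reconstructed form of (7.39): the row that maps a selected quadratic Jordan term y_j y_k onto the physical quadratic vector x2 = [x1x1, x1x2, ..., xNxN] is assembled directly from U11 = inv(V11). The upper-triangular, row-wise ordering of x2 is an assumption.

import numpy as np

def quadratic_projection_row(U11, j, k):
    """Return upsilon_{jk} such that y_j*y_k = upsilon_{jk} @ x2."""
    N = U11.shape[0]
    row = []
    for l in range(N):
        for m in range(l, N):
            if l == m:
                row.append(U11[j, l] * U11[k, l])                          # x_l^2 term
            else:
                row.append(U11[j, l] * U11[k, m] + U11[j, m] * U11[k, l])  # x_l*x_m term
    return np.array(row)

# quick numerical check against a random state
N = 3
V11 = np.random.rand(N, N) + 1j * np.random.rand(N, N)
U11 = np.linalg.inv(V11)
x = np.random.rand(N)
y = U11 @ x
x2 = np.array([x[l] * x[m] for l in range(N) for m in range(l, N)])
assert np.isclose(y[0] * y[1], quadratic_projection_row(U11, 0, 1) @ x2)

In this way, only the rows associated with the retained quadratic terms need to be built, avoiding the full transformation of the Carleman model.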

7.3.3 Structurally-constrained design


The structural constraints considered for the linear LQR design shown in Sec-
tion 7.1.1 are extended here for the quadratic Koopman eigenfunctions-based
representation of a dynamical system.

The developments provided in the previous subsection are used to obtain


a scheme where a few quadratic terms can be included.

7.3.3.1 Sparse structure


As with the first-order system representation, the first step is to determine the
sparse structure of the controller, \mathcal{S}_{1ord}, which is now defined as:

\mathcal{S}_{1ord} = \mathcal{S}_{1} + \mathcal{S}_{2} \qquad (7.40)

where \mathcal{S}_{1} was defined in (7.5) and \mathcal{S}_{2} is obtained as follows.

Fig. 7.2. An illustrative construction of the quadratic communication network.

First, based on the results of Section 7.1.1 and the weighted observability
measures defined in Section 5.4.2, the residues corresponding to the second-
order modes are calculated.

Secondly, a new threshold η2 is introduced for the second-order terms.


The matrix \mathcal{S}_{2} is then built following the same philosophy of [1], but for the
most dominant quadratic terms.

Therefore, the combination of \mathcal{S}_{1} and \mathcal{S}_{2} originates the matrix \mathcal{S}_{1ord}, as
drawn in Fig. 7.2. Once \mathcal{S}_{1ord} has been determined, the full structure of the second-order gain matrix is formed as:

\begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_p \end{bmatrix} =
\underbrace{\begin{bmatrix} \times & 0 & \cdots & \times \\ 0 & \times & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ \times & 0 & \cdots & 0 \end{bmatrix}}_{\mathcal{S}_{1ord}}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{bmatrix} +
\underbrace{\begin{bmatrix} \times & 0 & 0 & \times & \cdots & 0 \\ 0 & 0 & 0 & 0 & \cdots & 0 \\ \vdots & & & & & \vdots \\ 0 & \times & 0 & 0 & \cdots & \times \end{bmatrix}}_{\mathcal{S}_{2ord}}
\begin{bmatrix} x_1 x_1 \\ x_1 x_2 \\ x_1 x_3 \\ x_1 x_4 \\ \vdots \\ x_N x_N \end{bmatrix} \qquad (7.41)

where a '×' is placed in row j of \mathcal{S}_{2ord}, at the position related to the term x_k x_l, if
and only if both elements of the j-th row of \mathcal{S}_{1ord} related to x_k and x_l are non-zero.

Then, the indicator matrix I_{c2ord} is defined as:

I_{c2ord} = [\, I_{c1} \;\; I_{c2} \,]

with I_{c1} and I_{c2} being the matrices \mathcal{S}_{1ord} and \mathcal{S}_{2ord} when the symbols '×' are
replaced by 1's.
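The construction rule can be made concrete with the following short sketch, which builds the second-order indicator I_c2 from a given first-order indicator I_c1 according to the '×'-placement rule just stated. The upper-triangular, row-wise ordering of the quadratic terms is assumed.

import numpy as np

def build_Ic2(Ic1):
    p, N = Ic1.shape
    pairs = [(k, l) for k in range(N) for l in range(k, N)]
    Ic2 = np.zeros((p, len(pairs)), dtype=int)
    for j in range(p):
        for col, (k, l) in enumerate(pairs):
            # entry is 1 only if row j keeps both x_k and x_l
            if Ic1[j, k] == 1 and Ic1[j, l] == 1:
                Ic2[j, col] = 1
    return Ic2

# I_c2ord = [I_c1  I_c2], as in the text
Ic1 = np.array([[1, 0, 1], [0, 1, 1]])
Ic2ord = np.hstack([Ic1, build_Ic2(Ic1)])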

7.3.3.2 Iterative process


In order to obtain a structurally-constrained design of the quadratic LQR con-
troller, the function F̂ (·) of equation (7.8) is modified to define the subspace
containing the gain matrix

\bar{K}_{2ord} = [\, \bar{K}_{11} \;\; \bar{K}_{12} \,] \qquad (7.42)
This modified definition of equation (7.8) is given by

\hat{F}\!\left( \bar{K}_{2ord} \right) = \bar{K}_{2ord}\, \upsilon_{2ord} - I_{c2ord} \circ \left( \bar{K}_{2ord}\, \upsilon_{2ord} \right) \qquad (7.43)

where \bar{K}_{2ord}\, \upsilon_{2ord} can be rewritten as:

\bar{K}_{2ord}\, \upsilon_{2ord} = [\, \bar{K}_{11} \;\; \bar{K}_{12} \,] \begin{bmatrix} V_{11}^{-1} & 0 \\ 0 & \upsilon_{2} \end{bmatrix} \qquad (7.44)

and the relationship y_2 = \upsilon_2\, x_2 is found from (7.39).

Then, the iterative processes described in Algorithms 7.1 and 7.2 can be
followed to obtain the matrices \bar{K}_{2ord} and

\bar{P}_{2ord} = \begin{bmatrix} \bar{P}_{11} & \bar{P}_{12} \\ 0 & \bar{P}_{22} \end{bmatrix} \qquad (7.45)

with

\bar{Q}_{2ord} = \begin{bmatrix} \bar{Q}_{11} & 0 \\ 0 & 0 \end{bmatrix} \qquad (7.46)

However, this procedure has two major disadvantages:

1. Definition (7.43) requires the use of all the quadratic terms.

2. The computational burden of the iterative process to calculate \bar{K}_{2ord} and
\bar{P}_{2ord} from (7.36) and Algorithm 7.2 is much higher than that of designing
the first-order controller.

At this point, an important observation was made during the computation
of \bar{P}_{2ord}: the submatrix \bar{P}_{11} of (7.45) converges to the same value whether it is
computed with Algorithm 7.1 or by the process described above for the
second-order model.

So, Algorithm 7.3 below is proposed as the most efficient way to calculate
the gain matrix \bar{K}_{2ord}.

Algorithm 7.3. Computation of the quadratic LQR in Jordan coordinates.
1) Calculate the residues for the first- and second-order modes.
2) Identify the most dominant linear and non-linear modes above the
thresholds η1 and η2.
3) Construct \mathcal{S}_{1ord} and \mathcal{S}_{2ord}.
4) With \mathcal{S}_{1ord}, use Algorithm 7.1 to obtain \bar{K}_{11}.
5) Use Algorithm 7.2 to calculate \bar{K}_{12} and \bar{K}_{22}.

With Algorithm 7.3, only one Riccati equation needs to be solved for the
complete second-order model.

However, the computational cost of this single Riccati equation for an
N_{2ord}-dimensional system is still excessively high, and it is concentrated in the
bilinear part of the model. This effect is studied in the following sub-section.

7.3.3.3 Bilinear effect on the quadratic PKMA model


For assessing the projection of the bilinear part onto the Jordan variables and its
impact on the quadratic PKMA-based model, we recall the system in Jordan
coordinates of equation (7.35):

\begin{bmatrix} \dot{y} \\ \dot{y}_2 \end{bmatrix} = \begin{bmatrix} \Lambda_1 & H_{12} \\ 0 & \Lambda_2 \end{bmatrix} \begin{bmatrix} y \\ y_2 \end{bmatrix} + \begin{bmatrix} \bar{B}_{11} \\ \sum_{j=1}^{N} y_j N_2^j \end{bmatrix} u

Then, by considering u_{Opt} = -\bar{K}_{11} y from (7.29) and using the developments
of Section 7.2.3, the effect of the linear control of (7.29) on the quadratic
variables can be expressed as:

\dot{y}_2 = \left( \Lambda_2 + \bar{K}_{22} \right) y_2 \qquad (7.47)

so that the resulting quadratic model is analogous to equation (7.20):

\begin{bmatrix} \dot{y} \\ \dot{y}_2 \end{bmatrix} = \begin{bmatrix} \Lambda_1 + \bar{K}_{11} & H_{12} \\ 0 & \Lambda_2 + \bar{K}_{22} \end{bmatrix} \begin{bmatrix} y \\ y_2 \end{bmatrix} \qquad (7.48)

In general, the matrix \Lambda_2 + \bar{K}_{22} is diagonally dominant, but the properties
obtained through the projection to the Jordan coordinates no longer hold after
the bilinear effect is included.

7.3.3.4 Diagonalization of the closed-loop system


As observed before, the effect of the matrices \bar{K}_{11} and \bar{K}_{22} in the PKMA model
is that they eliminate the three most important properties of this type of model:

1. The general sparsity of the extended model,

2. The submatrices on the diagonal are themselves diagonal, and

3. The quadratic terms are decoupled from each other.

In order to recover these properties, a procedure based on the recursive
linearization method (analogous to the procedure presented in Chapter 4) is
required to obtain

\begin{bmatrix} \dot{\tilde{y}} \\ \dot{\tilde{y}}_2 \end{bmatrix} = \begin{bmatrix} \tilde{\Lambda}_1 & \tilde{H}_{12} \\ 0 & \tilde{\Lambda}_2 \end{bmatrix} \begin{bmatrix} \tilde{y} \\ \tilde{y}_2 \end{bmatrix} \qquad (7.49)

where

\tilde{V}_{11}^{-1} \left( \Lambda_1 + \bar{K}_{11} \right) \tilde{V}_{11} = \tilde{\Lambda}_1 \qquad (7.50)

\tilde{V}_{22}^{-1} \left( \Lambda_2 + \bar{K}_{22} \right) \tilde{V}_{22} = \tilde{\Lambda}_2 \qquad (7.51)

\tilde{H}_{12} = \tilde{V}_{11}^{-1} H_{12} \tilde{V}_{22} \qquad (7.52)

Although in this new space the three previously mentioned properties


are recovered, the required eigendecomposition processes are highly demand-
ing.

With the objective of bypassing these eigendecompositions and the
additional calculation of the matrix \bar{K}_{22}, a new procedure is proposed.

Let us begin with the linear model of equation (7.18), in which an optimal
gain matrix is calculated as in (7.19). Therefore, the closed-loop linear model of
the dynamical system is:

\dot{x} = \left( A_{11} - B_{11} K_{11} \right) x \qquad (7.53)

Then, the linear transformation

x = \hat{V}_{11}\, \tilde{y} \qquad (7.54)

is proposed, which transforms (7.53) into

\dot{\tilde{y}} = \tilde{\Lambda}_1\, \tilde{y} \qquad (7.55)

Following this idea, the matrix representing the effect of the quadratic
Koopman eigenfunctions, \tilde{H}_{12}, can be computed by means of the recursive
linearization method (Chapter 3) and utilizing the non-linear equation:

\dot{\tilde{y}} = \hat{V}_{11}^{-1}\, \hat{g}\!\left( \hat{V}_{11}\, \tilde{y} \right) \qquad (7.56)

where

\hat{g}(x) = g(x, u)\big|_{u = -K_{11} x} \qquad (7.57)

Thus, the resulting PKMA representation is in fact that previously pre-


sented in equation (7.49).
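A brief sketch of this closed-loop construction, eqs. (7.53)-(7.57), is given below: the linear feedback is substituted into the vector field, and the closed-loop Jacobian is diagonalized to obtain the transformation of (7.54). The function g() and the gain K11 are placeholders for the system non-linearity and the gain obtained from the linear design, not the thesis test systems.

import numpy as np

def g(x, u, A11, B11):
    # hypothetical non-linear vector field: linear part plus a mild quadratic term
    return A11 @ x + B11 @ u + 0.05 * x * x

def g_hat(x, A11, B11, K11):
    # closed-loop vector field of (7.57): u is replaced by -K11 x
    return g(x, -K11 @ x, A11, B11)

A11 = np.array([[0.0, 1.0], [-2.0, -0.4]])
B11 = np.array([[0.0], [1.0]])
K11 = np.array([[1.0, 0.8]])                  # gain from the linear design

Acl = A11 - B11 @ K11                         # closed-loop Jacobian, eq. (7.53)
lam_cl, V11_hat = np.linalg.eig(Acl)          # transformation of (7.54)
# The recursive linearization of g_hat, expressed through V11_hat as in (7.56),
# then yields the closed-loop quadratic coupling matrix of (7.49)/(7.59).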

Now, in order to design the second-order part of the controller, \bar{K}_{12}, we
need to project back to the standard Jordan coordinates with the transformation

y = \tilde{V}_{11}\, \tilde{y} \qquad (7.58)

In consequence, the dynamical model that will be used henceforth is as
follows:

\begin{bmatrix} \dot{y} \\ \dot{y}_2 \end{bmatrix} = \begin{bmatrix} \Lambda_1 & \hat{H}_{12} \\ 0 & \Lambda_2 \end{bmatrix} \begin{bmatrix} y \\ y_2 \end{bmatrix} + \begin{bmatrix} \bar{B}_{11} \\ 0 \end{bmatrix} u \qquad (7.59)

with

\hat{H}_{12} = \tilde{V}_{11}\, \tilde{H}_{12} \qquad (7.60)

and \bar{B}_{11} as defined in (7.28).

The design of the quadratic LQR controller with a few second-order


terms can then be performed by using equation (7.59).

A subtle detail is that, according to equations (7.26) and (7.54),

\hat{V}_{11} = V_{11}\, \tilde{V}_{11} \qquad (7.61)

\hat{U}_{11} = \tilde{U}_{11}\, U_{11} \qquad (7.62)

In general, the matrices V_{11}, U_{11}, \hat{V}_{11}, and \hat{U}_{11} need to be computed from
(7.18) and (7.53), respectively. The arrays \tilde{V}_{11} and \tilde{U}_{11} are then recommended to be
calculated from (7.61) and (7.62) to avoid scaling issues.
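The recommendation around (7.61)-(7.62) can be sketched as follows: instead of extracting the intermediate transformation from a separate eigendecomposition (whose eigenvectors carry an arbitrary scaling), it is composed from the open- and closed-loop eigenvector matrices, which keeps both transformations consistently scaled. The matrices below are placeholders.

import numpy as np

A11 = np.array([[0.0, 1.0], [-2.0, -0.4]])
B11 = np.array([[0.0], [1.0]])
K11 = np.array([[1.0, 0.8]])

_, V11 = np.linalg.eig(A11)                  # open loop, eq. (7.18)
U11 = np.linalg.inv(V11)
_, V11_hat = np.linalg.eig(A11 - B11 @ K11)  # closed loop, eq. (7.53)
U11_hat = np.linalg.inv(V11_hat)

V11_tilde = U11 @ V11_hat                    # from (7.61): V_hat = V11 V_tilde
U11_tilde = U11_hat @ V11                    # from (7.62): U_hat = U_tilde U11

# consistency check
assert np.allclose(V11 @ V11_tilde, V11_hat)
assert np.allclose(U11_tilde @ U11, U11_hat)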

7.3.3.5 Truncated quadratic LQR controller.


Considering all the developments presented above, an efficient and straightfor-
ward algorithm to compute a truncated quadratic controller can be derived.
Such an algorithm can be summarized as follows:

Algorithm 7.4. Computation of the truncated quadratic LQR controller.

1) Calculate the residues for the first- and second-order modes of the
open-loop system of (7.18).
2) Normalize the residues taking the one with the maximum absolute
value as the base value.
3) Identify the most dominant (non-)linear modes above the thresholds η1
and η2, expressed now as a percentage of the most dominant one.
4) Construct \mathcal{S}_{1ord} and \mathcal{S}_{2ord}.
5) With \mathcal{S}_{1ord}, use Algorithm 7.1 to obtain \bar{K}_{11}.
6) Calculate matrices V_{11}, U_{11}, \hat{V}_{11}, \hat{U}_{11}, and then matrices \tilde{V}_{11} and \tilde{U}_{11}.
7) Compute matrix \hat{H}_{12}, calculate the residues of the closed-loop system,
and normalize them.
8) Identify the most dominant quadratic closed-loop modes that are
above the threshold η3.
9) Use the identified non-linear modes to compute the required columns
of the closed-loop gain matrix \hat{K}_{12}.
10) Compute the rows of the closed-loop projection \hat{\upsilon}_{2}.
11) Compute the feedback matrix K_{2ord} with

K_{2ord} = [\, \bar{K}_{11} \;\; \hat{K}_{12} \,] \begin{bmatrix} V_{11}^{-1} & 0 \\ 0 & \hat{\upsilon}_{2} \end{bmatrix} \qquad (7.63)

In step 8 of the above algorithm, η3 is a new threshold for the
closed-loop system, and the matrix \hat{K}_{12} projects the closed-loop quadratic
modes to the state variables of the dynamical system.

Additionally, υ̂ 2 is the projection of the quadratic Koopman eigenmodes


to the closed-loop quadratic variables using a formulation analogous to equa-
tion (7.39) but using Û11 instead of U11.
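The final assembly of step 11, eq. (7.63), can be sketched compactly as below, under the notation reconstructed above: the truncated quadratic gain acting on the physical states and the retained quadratic terms is built from the linear Jordan gain, the closed-loop quadratic columns, and the closed-loop projection rows. All matrices are random placeholders with compatible dimensions (p inputs, N states, q retained quadratic terms).

import numpy as np

p, N, q = 2, 4, 3
K11_bar = np.random.rand(p, N) + 1j * np.random.rand(p, N)   # linear gain in Jordan coords
K12_hat = np.random.rand(p, q) + 1j * np.random.rand(p, q)   # retained quadratic columns
V11 = np.random.rand(N, N) + 1j * np.random.rand(N, N)       # open-loop right eigenvectors
ups2_hat = np.random.rand(q, N * (N + 1) // 2)               # closed-loop projection rows

blk = np.block([
    [np.linalg.inv(V11), np.zeros((N, ups2_hat.shape[1]))],
    [np.zeros((q, N)),   ups2_hat]
])
K_2ord = np.hstack([K11_bar, K12_hat]) @ blk                 # eq. (7.63)
# In practice only the real part of the resulting feedback is applied.
print(K_2ord.shape)   # (p, N + N*(N+1)/2)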

7.4 Scheme for future online application


In this Section, a general scheme for the future online application of the truncat-
ed quadratic LQR controller is presented. Figure 7.3 below shows the flow dia-
gram of the proposed strategy.

It is important to note that some information is assumed to be calculated


offline to be available when a fault is identified.

Fig. 7.3. Flow diagram of the quadratic PKMA-based LQR controller design. The blocks
marked with * and ** require some attention.

In the blue block marked with an *, the definition of C1 is a fundamental
point. It is highly recommended to define C1 in accordance with the state varia-
bles weighted by matrix Q11.

Additionally, the block marked with ** in the purple part indicates that
some attention needs to be paid to the process of defining T1 and, in general, to
selecting the best communication structure.

Furthermore, in Fig. 7.4 a scheme for the online application of the


quadratic non-linear controller is proposed, where the linear and quadratic parts
of the controller can be applied in a cascade.

It can be observed from Fig. 7.4 that the proposed control law is applied
as a supplementary control. Additionally, a relaxation of the value of
ε1 can be considered to reduce the required CPU time.

For the provided strategy, the intrinsic calculation of the vector of initial
conditions x0 is possible thanks to the dynamic state estimation technique intro-
duced in [16]. Although this technique is not treated in this dissertation, its use
is assumed for future online applications.

Fig. 7.4. Scheme for the online application of the truncated quadratic PKMA controller.

7.5 Concluding remarks
In this chapter, a strategy for designing a (truncated) second-order linear quad-
ratic regulator (LQR) based on the perturbed Koopman mode analysis (PKMA)
was derived for damping wide-area oscillations.

The design of the proposed controller comprises the open-loop
quadratic analysis, the design of the linear feedback matrix, the inclusion of the
bilinear effects of the model, and the study of the closed-loop system.

The problem of the structurally-constrained design was also treated for
the quadratic approximation, and some important simplifications and assump-
tions that greatly reduce the required computational burden were pointed out.

Moreover, the general scheme for the application of the proposed strate-
gy was provided, highlighting the main considerations that must be taken into
account to assure its applicability in real time.

An important open issue is the definition of the thresholds for identifying


the most dominant linear and non-linear open-loop modes as well as the most
dominant quadratic closed-loop dynamics. Moreover, the selection of the value
of ε1 for online applications is also essential.

Numerical results and applications of the different schemes presented


here are offered in Chapter 8.

7.6 References
[1] A. Jain, A. Chakrabortty, and E. Biyik, “An online structurally constrained
LQR design for damping oscillations in power system networks,” IEEE
American Control Conference (ACC) 2017, pp. 2093-2098, Seattle, WA, USA,
July 2017.

[2] J. Geromel, A. Yamakani, and V. Armentano, "Structurally constrained
controllers for discrete-time linear systems," J. of Optim. Theory Appl., vol.
61, no. 1, pp. 73-94, 1989.

[3] A. Chakrabortty and P. M. Khargonekar, “Introduction to wide-area control of


power systems,” IEEE American Control Conference (ACC) 2013, pp. 6758-6770,
Washington, DC, USA, June 2013.

[4] J. J. Sanchez-Gasca, V. Vittal, M.J. Gibbard, A.R. Messina, D.J. Vowles, S.


Liu, and U.D. Annakkage, “Inclusion of higher order terms for small-signal
(modal) analysis: Committee report-task force on assessing the need to in-
clude higher order terms for small-signal (modal) analysis,” IEEE Trans.
Power Syst., vol. 20, no. 4, pp. 1886-1904, Nov. 2005.

[5] V. Vittal, N. Bhatia, and A. A. Fouad, “Analysis of the inter-area mode


phenomenon in power systems following large disturbances,” IEEE Trans.
Power Syst., vol. 6, no. 4, pp. 1515-1521, Nov. 1991.

[6] T. Tian, X. Kestelyn, O. Thomas, A. Hiroyuki, and A. R. Messina, “An ac-


curate third-order normal form approximation for power system nonlinear
analysis,” IEEE Trans. Power Syst., vol. 33, no. 2, pp. 2128-2139, Mar. 2018.

[7] J. Arroyo, E. Barocio, R. Betancourt, and A. R. Messina, “A bilinear analysis


technique for detection and quantification of nonlinear modal interaction
in power systems,” 2006 IEEE PES GM, Montreal, Canada, June 2006.

[8] B. D. O. Anderson and J. B. Moore, Optimal Control: Linear Quadratic Methods,


Dover, 1989.

[9] K. J. Horadam, Hadamard Matrices and Their Applications, Princeton Univer-


sity Press, NJ, USA, 2006.

[10] C. A. Tsiliaginnis and G. Lyberatos, “Normal forms, resonance, and bifur-


cation analysis via the Carleman linearization,” J. Math. Anal. Appl., vol.
139, pp. 123-138, 1989.

[11] W. A. Cebuhar and V. Constanza, “Approximation procedures for the op-
timal control of bilinear and nonlinear systems,” J. Optim. Theory Appl., vol.
43, no. 4, pp. 615-627, August 1984.

[12] S. L. Brunton, B. W. Brunton, J. L. Proctor, and J. N. Kutz, “Koopman in-


variant subspaces and finite linear representations of nonlinear dynamical
systems for control,” PLoS ONE, vol. 11, no. 2, 2016.

[13] E. P. Hofer and B. Tibken, “An iterative method for the finite-time bilinear-
quadratic control problem,” J. Optim. Theory Appl., vol. 57, no. 3, pp. 411-
427, June 1988.

[14] Z. Aganovic and Z. Gajic, “The successive approximation procedure for fi-
nite optimal control of bilinear systems,” IEEE Trans. Autom. Control, vol.
39, no. 9, pp. 1932-1935, Sept. 1994.

[15] Z. Aganovic and Z. Gajic, “Successive approximation procedure for


steady-state optimal control of bilinear systems,” J. Optim. Theory Appl.,
vol. 84, no. 2, pp. 273-291, Feb. 1995.

[16] A. K. Singh and B. C. Pal, “Decentralized dynamic state estimation in pow-


er systems using unscented transformation," IEEE Trans. Power Syst., vol.
29, no. 2, pp. 794-804, Mar. 2014.

Chapter 8
Numerical Results
In this Chapter, numerical simulation results are presented for the four proposed meth-
ods: the recursive linearization method, the perturbed Koopman mode analysis (PKMA),
the (weighted) Koopman observability measures, and the PKMA-based truncated non-
linear quadratic controller.

First, qualitative and quantitative comparisons of the recursive linearization


with the conventional linearization methods are provided for a synthetic, non-linear,
size-variable case study and several multi-machine power systems.

Then, the efficiency and accuracy of the perturbed Koopman mode analysis is
demonstrated by comparison with fully non-linear simulations and with the method of
normal forms (MNF).

Moreover, numerical studies of the observability-based approach to extract spati-
otemporal patterns are carried out for the model-based and the data-driven formu-
lations. In the case of the data-based approach, both simulations and measured data are used.

Finally, the proposed structurally constrained quadratic control law is com-
pared against existing linear controllers, providing cases where the quadratic controller
is vastly superior. Some interesting results regarding the future online application are
also presented.
8.1 Systems of study
Several multi-machine power systems are used to perform the numerical studies
presented in this Chapter. In this section, the power systems used for the model-
based analyses are briefly described.

Two groups of case studies are first established: a set of four power sys-
tems that are used throughout the entire Chapter, and some other multi-machine
grids that are used exclusively for validating the recursive linearization method.

The set of four multi-machine power systems is formed by:

1. The three-machine, 9-bus test system of [1].

2. The two-area, four-machine power system of [2] with the operating con-
ditions reported in [3]. A d-q transient model equipped with a simple ex-
citer is considered for each generator.

3. The 16-generator, 68-bus system used in [4]. All the generators are
described with a transient model and a simple exciter.

4. The IEEE 50-machine, 145-bus system of [5], with the operating condi-
tions provided in [6], where the six largest generators are modeled with a
transient model and a simple exciter. The other generators use a classical
model.

A more detailed description of the listed systems, as well as the infor-
mation about their linear stability modes, is provided below. The considered
disturbance is specified before each application.

8.1.1 Three-machine test system


For this case, the three generators are modeled with a fourth-order model with-
out exciter. The base case is obtained from [1] by using the loading conditions
shown in Table 8.1.

Table 8.1. Loading conditions in p.u. on a 100 MVA base.

Bus   PLj    QLj
5     1.00   0.50
7     0.80   0.35
9     0.72   0.30

Moreover, Table 8.2 shows the two electromechanical modes of concern,


while Fig. 8.1 provides a graphic diagram of the system.

Fig. 8.1. Schematic representation of the three-machine power system of [1].

Table 8.2. Linear modes of the three-machine system.

# Eigenvalue Freq. [Hz] Damp. [%]

1, 2 -1.11± j12.13 1.9306 9.1235

3, 4 -0.30± j6.82 1.0852 4.4240

8.1.2 Two-area, four-machine power system


The system data and operating conditions were taken from Appendix B of refer-
ence [3], a high-stress operating condition where 410 MW are flowing through
the tie line. Fig. 8.2 shows a schematic representation of this system.

The most important linear stability eigenmodes of the system are
contained in Table 8.3.

Fig. 8.2. Schematic representation of the two-area, four-machine power system of [2].

Table 8.3. Selected linear stability eigenmodes of the two-area, four-machine power system.

# Eigenvalue Freq. [Hz] Damp. [%] Mode description

1, 2 -1.25± j7.71 1.23 16.02 Local Area 1

3, 4 -1.57±j7.50 1.19 20.46 Local Area 2

5, 6 -0.18±j1.96 0.31 9.31 Inter-Area

7, 8 -1.04±j0.95 0.15 73.75 EFd 2

9, 10 -0.65±j0.87 0.14 59.78 EFd 3

11, 12 -0.26±j0.39 0.06 55.44 EFd 1

15 -0.45 - - δ3

16 -3.41 - - EFd 3

17 -4.07 - - EFd 3

8.1.3 16-generator, 68-bus power system


As mentioned previously, the system data were taken from reference [4], where
the time constants of the simple exciters, TRj, Tbj, and TCj (refer to the diagram of
Fig. 6.1) are set to a value of zero.

For the considered operating condition, the most critical linear
eigenmodes of the system are shown in Table 8.4 below. Special emphasis is
put on the electromechanical dynamics.

Table 8.4. Linear stability eigenmodes of the 16-machine, 68-bus power system.

# Eigenvalue Freq. [Hz] Damp. [%] Mode description

1, 2 -0.48± j11.36 1.81 4.18 Gen 11

3, 4 -0.58±j9.58 1.52 6.03 Gen 4

5, 6 -0.40±j9.46 1.51 4.18 Gen 5&6 vs. Gen 4&7

7, 8 -0.62±j9.44 1.50 6.58 Gen 6 vs. Gen 7

9, 10 -0.25±j8.31 1.32 3.03 Gen 10

11, 12 -0.32±j7.94 1.26 4.00 Gen 2 vs. Gen 3

13, 14 -0.23±j7.63 1.21 3.04 Gen 1,8,&10 vs. Gen 2,3,&9

15, 16 -0.31±j7.42 1.18 4.14 Gen 12 vs. Gen 13

17, 18 -0.29±j7.32 1.17 3.93 Gen 4&5 vs. Gen 6&7

19, 20 -0.17±j6.36 1.01 2.63 Area 1 vs. Area 5

21, 22 -0.22±j5.17 0.82 4.23 Area 3 vs. Area 4

23, 24 -0.11±j4.79 0.76 2.19 Area 1 vs. Area 4

25, 26 -0.10±j4.00 0.64 2.54 Area 1,2,4 vs. Area 3,4,5

27, 28 -0.065±j3.61 0.57 1.81 Area 1,2,4 vs. Area 3,5

29, 30 0.21±j1.73 0.275 -11.93 Inter-Area

31, 32 -1.88±j1.12 0.18 85.94 E’d 4 & E’d 7

33, 34 -1.03±j0.29 0.047 96.22 E’d 9 & E’q 9

35, 36 -0.89±j0.077 0.012 99.63 E’d 16 & E’q 16

37, 38 -0.61±j0.014 0.002 99.97 E’q 16

Additionally, Fig. 8.3 depicts a diagram of the 16-machine power system,
showing the five areas in which it can be divided.

Fig. 8.3. Schematic representation of the 16-machine, 68-bus power system [4].

8.1.4 IEEE 50-machine power system


A high-stress operating condition used in previous research is considered to
enable comparison with other approaches.

In this model, generators at buses 93, 104, 105, 106, 110, and 111 are repre-
sented by a fourth-order d-q model and equipped with a simple exciter. All the
other generators are represented by classical models [6].

For later reference, Table 8.5 shows selected oscillatory modes; for the
high-stress operating condition, the system exhibits two unstable modes at
about 1.16 Hz and 0.33 Hz.

Table 8.5. Selected oscillatory modes of the IEEE 50-machine system.

# Eigenvalue Freq. [Hz] Damp. [%] Mode description

9, 10 -0.05± j18.17 2.891 0.29 Local Gen 60

17, 18 -0.05±j14.70 2.340 0.36 Local Gen 139

55, 56 -0.13±j9.75 1.552 1.34 Local Gen 111, 104

65, 66 -0.050±j8.46 1.346 0.59 Local Gen 95

69, 70 -0.051±j8.26 1.315 0.62 Local Gen 102

73, 74 -0.316±j7.91 1.259 3.99 Local Gen 106, 105

75, 76 0.0067±j7.31 1.163 -0.09 Local Gen 89, 103

77, 78 -0.127±j7.29 1.159 1.74 Local Gen 89, 103

87, 88 -0.145±j6.15 0.978 2.36 Local Gen 110, 102

91, 92 -0.07±j5.035 0.801 1.39 Local Gen 94, 90, 95

97, 98 0.138±j2.125 0.338 -6.48 Inter-area Gen 137, 140

8.1.5 Other multi-machine power systems


With the objective of assessing the performance of the recursive linearization
when applied to multi-machine power systems, several case studies are consid-
ered. Their information is summarized in Table 8.6 below.

The four highlighted power systems correspond to those presented in the


previous sub-sections.

Table 8.6. Selected multi-machine case studies.

Analyzed system                       Models*              # states   # buses   # lines
SMIB of [2]                           CL                   2          4         4
                                      TR                   4          4         4
3-machine, 9-bus system of [1]        CL                   6          9         9
                                      TR                   12         9         9
2-area system of [2]                  CL                   8          7         6
                                      TR                   16         7         6
2-area system used in [4]             TR, SMP1             20         8         8
                                      TR, SMP3             28         8         8
2-area system used in [7]             TR, SMP3, 1PSS       31         8         8
                                      TR, SMP3, 4PSS       40         8         8
16-machine system of [8]              CL                   32         68        86
16-machine system used in [4]         TR                   64         68        86
                                      TR, SMP1             80         68        86
10-machine system of [9]              TR, SMP3             70         39        45
48-machine NPCC system used in [10]   CL                   96         140       233
                                      CL, TR               150        140       233
50-machine system of [5]              CL                   100        145       453
50-machine system used in [6]         CL, TR               112        145       453
                                      CL, TR, SMP3         130        145       453
                                      CL, TR, SMP3, 1PSS   133        145       453

* CL – classical model, TR – transient d-q model, SMP1 – simple exciter with TCj = TBj = 0,
SMP3 – simple exciter with three state variables, #PSS – number of PSSs, if included.

8.2 Recursive linearization


In this section, the recursive linearization method is validated by means of com-
parisons against the conventional linearization methods.

The mentioned validation is divided into two parts. In the first one, a syn-
thetic, size-variable, non-linear system is utilized for comparing the performance
of the considered linearization schemes in terms of the required computational
time and accuracy.

Secondly, several multi-machine power systems are used for obtaining


similar curves of performance.

8.2.1 Size-variable synthetic case of application


Here, an embedded, non-linear, and size-variable synthetic system is introduced
and analyzed through the recursive linearization method.

8.2.1.1 Non-linear equations


The utilized test system for the following studies is given by the non-linear
equation:

\dot{x}_j = 0.5 - \sin(x_j) + a_j\, x_j^2 \sqrt{0.5 + \sum_{k=1}^{N} x_j x_k} \qquad (8.1)

The size of the test system is varied by increasing the parameter N from 1
up to a prescribed number of state variables.

Additionally, in (8.1) the constant values aj are calculated such that xj =


0.75 for j = 1, 2, …, N defines a system stable equilibrium point.

According to the theoretical background presented in Chapter 3, the sys-


tem (8.1) is equivalent to equation (3.3). Thus, in analogy to equation (3.4), we
can rewrite (8.1) as

\dot{x}_j = f_j(g_1, g_2, g_3, g_4) = g_1 - g_2 + a_j\, g_3\, g_4 \qquad (8.2)

where g_1 = 0.5,

g_2 = \sin(x_j) \qquad (8.3)

g_3 = x_j^2 \qquad (8.4)

g_4 = \sqrt{0.5 + g_5} \qquad (8.5)

g_5 = \sum_{k=1}^{N} x_j x_k \qquad (8.6)

8.2.1.2 Recursive linearization analysis
Using formulae (3.51), (3.62), and (3.65), the expressions (8.2) through (8.6) are
linearized, resulting in the third-order model

\dot{x} = \Delta^{(1)} f + \Delta^{(2)} f + \Delta^{(3)} f \qquad (8.7)

where

\Delta^{(k)} f_j = -\Delta^{(k)} g_2 + a_j \left( \sum_{m=0}^{k} \Delta^{(m)} g_3\, \Delta^{(k-m)} g_4 \right) \qquad (8.8)

Additionally, the k-th order modules for the sub-functions g_j are given by

\Delta^{(k)} g_2 = \frac{1}{k!} \sin\!\left( x_{j,sep} + \frac{k\pi}{2} \right) \left( \Delta x_j \right)^{k} \qquad (8.9)

\Delta^{(k)} g_3 = \frac{2}{k!} \sum_{m=1}^{k} x_j^{(m)}\, x_j^{(k-m)} \qquad (8.10)

\Delta^{(k)} g_4 = \frac{\Delta^{(k)} g_5}{2\, g_{4,sep}} - \frac{1}{8} \sum_{l=1}^{k-1} \frac{\Delta^{(k-l)} g_5\, \Delta^{(l)} g_5}{g_{4,sep}^{3}} + \cdots + \frac{1}{k!} \left[ \prod_{l=0}^{k-1} \left( \frac{1}{2} - l \right) \right] \frac{\left( \Delta^{(1)} g_5 \right)^{k}}{g_{4,sep}^{2k-1}} \qquad (8.11)

\Delta^{(k)} g_5 = \frac{1}{k!} \sum_{l=0}^{k} \Delta^{(k-l)} x_k\, \Delta^{(l)} x_k \qquad (8.12)

It is evident from (8.1) that \Delta^{(k)} g_1 = 0 for k = 1, 2, \ldots, so that it is not included in
equation (8.8).

8.2.2 Comparison with conventional linearization techniques


In this Section, a comparison of the proposed recursive linearization scheme
against the conventional linearization techniques is performed by making use of
the synthetic test system presented above and the multi-machine power systems of
Table 8.6.

8.2.2.1 Qualitative comparison


In general, the recursive linearization has the same accuracy as the direct analyt-
ic linearization, but with much less computational effort, because it is based on
analytical formulations that result in modular expressions, not on numerical ap-
proximations. The next sub-sections illustrate this by means of some numerical re-
sults.

On the other hand, the proposed linearization scheme has two ad-
vantages over the perturbation-based methods. The first one is that, whereas the accu-
racy of the FDA and CDA methods depends on the size of the perturbation ρ
[11], [12], the accuracy of the recursive linearization does not.

The second advantage can be observed from expressions (8.8) through
(8.12) above: in contrast with the perturbation-based methods, the evaluation of
all the transcendental functions is made only at the system equilibrium point,
before the iterative process, instead of at every single iteration.

This means that the recursive linearization merely requires the evaluation of
multiplications and additions, decreasing the required computational burden.
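For reference, the forward- and center-difference approximations (FDA/CDA) used in the comparison can be sketched as below; these are the standard finite-difference Jacobians, written here only to illustrate their dependence on the perturbation size rho.

import numpy as np

def jacobian_fda(f, x0, rho):
    n = x0.size
    f0 = f(x0)
    J = np.zeros((f0.size, n))
    for k in range(n):
        e = np.zeros(n); e[k] = rho
        J[:, k] = (f(x0 + e) - f0) / rho
    return J

def jacobian_cda(f, x0, rho):
    n = x0.size
    J = np.zeros((f(x0).size, n))
    for k in range(n):
        e = np.zeros(n); e[k] = rho
        J[:, k] = (f(x0 + e) - f(x0 - e)) / (2.0 * rho)
    return J

# Usage with the synthetic system sketched above:
# J_fda = jacobian_fda(f, np.full(10, 0.75), rho=1e-4)
# J_cda = jacobian_cda(f, np.full(10, 0.75), rho=1e-4)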

8.2.2.2 Quantitative comparison – synthetic case study

First, the computational effort in terms of the required time is shown in Fig. 8.4
for the first- and second-order approximations and in Fig. 8.5 for the matrix H3.
The depicted curves were obtained by varying the number of states from 1
through 100 and averaging 100 computations per point.


Fig. 8.4. Comparison of the CPU time required for computing (a) A11 and (b) H2 of the
synthetic system with the Analytic linearization (AL), recursive linearization (RL), forward-
difference approximation (FDA), and center-difference approximation (CDA) methods.

Fig. 8.5. Comparison of the CPU time required for computing H3 of the synthetic system
with the recursive linearization (RL), forward-difference approximation (FDA), and center-
difference approximation (CDA) methods.

In figures 8.4(b) and 8.5, the required computational time of the AL


method has been omitted for practicality.

Now, the accuracy of the proposed method and the perturbation-based
techniques is evaluated in Fig. 8.6 for the first-order approximations. In the left
part, the computation error of the RL, FDA, and CDA methods with respect
to the AL results is presented for N = 100.

On the other hand, Fig. 8.6(b) depicts how the estimation of the eigenvalue
with the largest magnitude changes when the parameter ρ is varied.


Fig. 8.6. (a) Absolute error of the computed A11 matrix of the synthetic system. (b)
Eigenvalue with the largest magnitude. Comparison of the RL, FDA, and CDA methods
with N = 100 and perturbation from ρ=1×10-4 through ρ =1×10-1.

It can be seen that the dependence of the perturbation-based methods on
ρ may considerably affect the accuracy of the estimated linear stability eigenval-
ues.

In Fig. 8.6(a), the absolute error is calculated with the following formula:

\text{Error} = \left\| A_{AL} - A_{approx} \right\|_F \qquad (8.13)

where AAL is the Jacobian matrix estimated with the AL method and Aapprox de-
notes the approximations to A11 obtained by the RL, FDA, and CDA methods.
The symbol ||·||F means the Frobenius norm.

Then, Fig. 8.7 shows the computation error of the three linearization
methods with respect to the analytic linearization (AL) for the matrices H2 and
H3, with N = 100. Similar curves are obtained for other values of N.


Fig. 8.7. Absolute error of the computed matrices (a) H2 and (b) H3. Comparison of the RL,
FDA, and CDA methods. N = 100 and perturbation from ρ=1×10-4 through ρ =1×10-1.

From figures 8.4 and 8.5, it can be seen that the difference between the
computational times of the recursive linearization and the perturbation-based
methods grows exponentially. Additionally, the error reached by the RL method
is approximately ten decades below the error of the perturbation methods, as
can be appreciated from figures 8.6 and 8.7.

8.2.2.3 Quantitative comparison – multi-machine power systems
In this Section, the comparison of the recursive linearization with the conven-
tional linearization techniques is presented for the multi-machine power sys-
tems listed in Table 8.6.

Below, Fig. 8.8 depicts the required time and the accuracy of the consid-
ered methods when calculating the Jacobian matrix of the system.


Fig. 8.8. Comparison of the RL method with the conventional linearization techniques when
applied to the power systems of Table 8.6. (a) Required computational time and (b) Absolute
error of the computed A11 matrix.


Fig. 8.9. (a) Absolute error of the computed A11 matrix. (b) Eigenvalue corresponding to the
inter-area mode. Comparison of the RL, FDA, and CDA methods for the 50-machine power
system with a PSS at bus 111 and a perturbation magnitude from ρ=1×10-5 through ρ =1×10-1.

Of interest, in Fig. 8.9 the effect of the perturbation magnitude on the ac-
curacy of the considered methods is illustrated by evaluating the absolute error of
the estimation and the computed eigenvalue related to the inter-area mode.

Then, Fig. 8.10 compares the required computational time and the abso-
lute error of the RL, FDA, and CDA methods for the second- and third-order
cases. In Fig. 8.10(d), the accuracy of the perturbation-based methods is
measured against the matrix H3 computed with the RL method due to the
enormous amount of resources and time required by the AL method.


Fig. 8.10. Comparison of the recursive linearization method with the conventional lineariza-
tion techniques for the power systems of Table 8.6. (a) CPU time required for computing H2
and (b) H3. (c) Absolute error of the computed matrix H2 and (d) H3 with ρ=1×10-4.

It can be noted from the two bottom images of Fig. 8.10 that the er-
ror of the FDA and CDA methods is considerable. In order to assess how the
accuracy of these methods behaves when the magnitude of ρ is varied, Fig. 8.11
shows the absolute error of the FDA, CDA, and RL methods for the IEEE 50-
machine power system with a PSS at bus 111.


Fig. 8.11. Absolute error of the computed matrices (a) H2 and (b) H3. Comparison of the RL,
FDA, and CDA methods. IEEE 50-machine power system with a PSS at bus 111 and perturba-
tion from ρ=1×10-5 through ρ =1×10-1.

Similar curves can be obtained for the other multi-machine power sys-
tems, demonstrating the superiority of the proposed linearization scheme.

8.3 Perturbed Koopman mode analysis


In this section, the perturbed Koopman mode analysis (PKMA) framework is
tested on two multi-machine power systems:

 The three-machine test system of Section 8.1.1, and

 The IEEE 50-machine test system of Section 8.1.4.

To assess the accuracy of the proposed framework, results are compared


with the method of normal forms (MNF) described in [13].

For comparison, a third-order PKMA representation is adopted to effi-


ciently approximate system behavior.

Following the developments in Chapter 4, one has that

\dot{\Phi}_{3ord}(y) = \begin{bmatrix} \Lambda_1 & H_{12} & H_{13} \\ 0 & \Lambda_2 & H_{23} \\ 0 & 0 & \Lambda_3 \end{bmatrix} \Phi_{3ord}(y) = H_{3ord}\, \Phi_{3ord}(y)

where Λ1 = diag (λ1, λ2,…,λN), Λ2 = diag (2λ1, λ1+λ2, …, 2λN), …, Λ3 = diag (3λ1,
2λ1+λ2, …, 3λN).

Use of equation (4.17),

\Phi_{nord}(t) = W_{nord}\, \Psi_{nord}(t)

results then in the model

\begin{bmatrix} \Phi_1(t) \\ \Phi_2(t) \\ \Phi_3(t) \end{bmatrix} = \begin{bmatrix} I_{N \times N} & W_{12} & W_{13} \\ 0 & W_{22} & W_{23} \\ 0 & 0 & W_{33} \end{bmatrix} \begin{bmatrix} e^{\Lambda_1 t}\, \Psi_1(0) \\ e^{\Lambda_2 t}\, \Psi_2(0) \\ e^{\Lambda_3 t}\, \Psi_3(0) \end{bmatrix} \qquad (8.14)

and

x(t) = \underbrace{v_1 \phi_1(0) e^{\lambda_1 t}}_{a_1(t)} + \cdots + \underbrace{v_N \phi_N(0) e^{\lambda_N t}}_{a_N(t)} + v_{11}\, \phi_1\phi_1(0)\, e^{2\lambda_1 t} + \cdots + v_{NN}\, \phi_N\phi_N(0)\, e^{2\lambda_N t} + v_{111}\, \phi_1\phi_1\phi_1(0)\, e^{3\lambda_1 t} + \cdots + \underbrace{v_{NNN}\, \phi_N\phi_N\phi_N(0)\, e^{3\lambda_N t}}_{a_{NNN}(t)}
= a_1(t) + \cdots + a_N(t) + \cdots + a_{111}(t) + \cdots + a_{NNN}(t) \qquad (8.15)

8.3.1 Three-machine test system


For this case, the simulated disturbance is a solid three-phase fault at bus 8
cleared in 50 milliseconds. Fig. 8.12 compares the performance of the third-order
PKMA response with the full system solution and the third-order MNF.

Table 8.7 presents the required time for PKMA and MNF and compares
the approximation error for Gen #3. The error is measured in decibels (dB) for a
time window of 10 seconds with a time step equal to 0.01 seconds, where

\text{Error}(x_{approx}, x_{full}) = 10 \log_{10} \left( \sum_{k=1}^{N} (x_{full,k})^2 \Big/ \sum_{k=1}^{N} (x_{approx,k} - x_{full,k})^2 \right)

For completeness, the second-order normal form approximation and the
linear solution are included. Results are found to be in very good agreement.

Computational times are only representative of this analysis since the


codes are not optimized for speed.

Fig. 8.12. Comparison of PKMA method of third order with the third-order MNF, the linear
solution, and the full solution. Speed deviation of Gen 3.

Table 8.7. Computational times and achieved accuracy.

                         MNF                         PKMA
Approximation/method     Time [s]   Error [dB]*      Time [s]   Error [dB]*
Second order             0.0518     36.4167          0.0225     43.2168
Third order              0.3875     41.5806          0.1621     59.8289

* Speed deviation of Gen 3.

8.3.2 IEEE 50-machine test system


To enable the comparison with other approaches, a high-stress operating condi-
tion used in previous research was considered (See Appendix B of [14]). The
perturbation of interest is a solid three-phase fault at bus 111 cleared in 10 milli-
seconds.


Fig. 8.13. Comparison of PKMA estimates with the linear solution, the full solution, and the
MNF without terms of fourth and sixth order. (a) Speed deviation of Gen 94. (b) Rotor speed
deviation of Gen 137.

Fig. 8.13 compares the curves obtained with the full solution, the linear
solution, and the PKMA and MNF of second and third order. Then, Table 8.8
compares the eight Koopman eigenvalues with largest magnitude at t = 10 sec
with the MNF. Results are found to be in excellent agreement.

Table 8.8. Koopman modes with largest amplitudes.

                                               max(|aj(t=10)|)       Error2 [%]*
Mode                           Eigenvalue      MNF       PKMA        MNF       PKMA
λ97, λ98                       0.14±j2.125     0.4084    0.4141      18.920    18.798
λ75, λ76                       0.007±j7.31     0.0642    0.0646      18.614    18.492
2λ97+λ98, 2λ98+λ97             0.41±j2.125     0.0556    0.0578      5.1365    5.0276
λ97+λ98                        0.276           0.0474    0.0496      2.6465    2.1906
2λ97, 2λ98                     0.276±j4.25     0.0221    0.0231      2.4864    1.9256
λ75+λ76+λ97, λ75+λ76+λ98       0.15±j2.125     0.0137    0.0143      2.1706    1.7523
λ76+λ97, λ75+λ98               0.145±j5.18     0.0115    0.0120      2.1770    1.7572

* The corresponding third-order PKMA and MNF representation errors are 1.245% and
1.662%, respectively.

As shown in Table 8.8, the first five modes capture over 98.1% of the
total energy.

It can be noted that at t = 10 sec, the inter-area mode 97, 98 has the highest
amplitude. Also of interest, the unstable linear mode 75,76 has a similar ampli-
tude to that of mode combinations 2λ97+λ98, 2λ98+λ97, λ98+λ97, λ75+λ76+λ97, and
λ75+λ76+ λ98 involving second- and third-order terms.

8.3.3 Qualitative comparison with the MNF


A qualitative comparison between power system normal form analysis and the
perturbed KMA developed in Chapter 4 is presented in Table 8.9.

Table 8.9. Qualitative comparison of the PKMA and the MNF.

Feature / method                                   MNF [3], [6], [13]              PKMA
Reported order of representation                   Third order*                    Arbitrary order
Additional terms of the 3rd-order representation   Fourth- and sixth-order terms   -
Computation of initial conditions                  Non-linear optimization         Closed-form linear solution
Design of PSS/FACTS controllers                    Available                       Under development
DAE model                                          Available                       Under development
Actuated system                                    Not reported                    Chapter 7

* Theoretically possible to an arbitrary order.

Although the two methods bear similarities, PKMA is being developed to


supplement information to data-driven approximations of the Koopman opera-
tor rather than specifically assessing modal behavior.

8.3.4 Non-linearity indexes


Recalling Section 4.4, Table 8.10 shows the eight Koopman eigenfunctions φj
with the largest non-linearity indices for the second- and third-order representa-
tions computed over a time window of 10 seconds.

Table 8.10. Eight largest non-linear interaction indices for Koopman eigenfunctions. IEEE
50-machine test system.

2nd-order representation            3rd-order representation
j of φj   NI2j    λk,l              j of φj   NI3j   λk,l,m          j of φj   NI2j    λk,l
9, 10     15.1    λ97+λ98           9, 10     4.96   2λ97+λ98        9, 10     14.47   λ97+λ98
111       7.52    λ97+λ98           17, 18    1.53   3λ97            111       7.99    λ97+λ98
5, 6      4.56    λ97+λ98           5, 6      1.14   2λ97+λ98        5, 6      4.50    λ97+λ98
1, 2      3.36    λ97+λ98           3, 4      0.81   2λ97+λ98        1, 2      3.37    λ97+λ98
17, 18    2.32    λ97+λ98           1, 2      0.56   2λ97+λ98        3, 4      1.76    λ97+λ98
3, 4      1.75    λ97+λ98           13, 14    0.45   3λ97            117       1.72    λ97+λ98
117       1.67    λ97+λ98           47, 48    0.44   2λ97+λ98        37, 38    1.64    λ97+λ98
37, 38    1.63    λ97+λ98           37, 38    0.40   2λ97+λ98        116       1.57    λ99+λ100

Examination of the numerical results in Table 8.10 shows that the size of the
third-order non-linearity index NI3j relative to NI2j is large, indicating a signifi-
cant effect of third-order terms.

Results show that the Koopman mode analysis identifies the second-order
mode λ97+λ98 and the third-order mode 2λ97+λ98 as the most relevant modes, in close
agreement with the results in Table 8.8.

For completeness, Table 8.11 shows non-linearity indices based on the
MNF for the case study presented in Section 8.3.2. Here, second-order non-
linearity indices were obtained using the approach in [14], whereas the third-
order non-linearity index presented in [13] is used for the third-order terms.

The stable dynamics appearing in the first row of Table 8.11 arise from the
definition of interaction indices used in the MNF [3]. They represent a large partici-
pation of the non-linear stable modes λ75+λ116 and λ99+λ100+λ116 (for the second-
and third-order terms, respectively) in dynamics dominated by a linear unsta-
ble oscillation [3], [13].

Table 8.11. Eight largest non-linear interaction indices for the 3rd-order MNF.

j        MI3(j)×Tr3*   λk,l,m            j        II(j)×Tr2**   λk,l
75, 76   -28.9657      λ99+λ100+λ116     75, 76   -22.4184      λ75+λ116
9, 10    -0.5901       2λ97+λ98          9, 10    -4.7042       λ97+λ98
97, 98   -0.4896       2λ99+λ100         5, 6     -1.5278       λ97+λ98
17, 18   -0.2408       3λ97              111      -1.3032       λ97+λ98
51, 52   -0.1638       2λ97+λ87          1, 2     -1.1185       λ97+λ98
5, 6     -0.1421       2λ97+λ87          59, 60   -0.9725       λ75+λ97
3, 4     -0.0993       2λ97+λ87          17, 18   -0.6597       λ97+λ98
65, 66   -0.0945       λ75+λ97+λ98       47, 48   -0.6545       λ75+λ97

* As defined in [13], ordered from most unstable to least unstable.
** As defined in [3], ordered from most unstable to least unstable.

Overall, the agreement is qualitatively good showing the correctness of


the adopted approach.

8.3.5 Complexity analysis


The increase in the complexity of the model and its memory requirements when
using higher-order representation is an issue of concern. As the dimension of
the parameter space increases, the dimension of Anord increases rapidly with
order.

Table 8.12 compares the CPU effort and accuracy of the PKMA method
with the MNF. Results are found to be similar. Again, no efforts were made to
optimize the performance of the models.

To investigate the CPU storage requirements involved in constructing the


state representation, Table 8.13 shows the number of states and percentage of
nonzero elements for each of the three orders of approximation described above.

Table 8.12. Computational times and achieved accuracy.

                            PKMA                   MNF**
Order of approximation      2nd       3rd          2nd       3rd
CPU time [sec.]*            4.226     8,805.302    4.4479    10,480.0
Error Δωr Gen 94 [dB]       11.238    10.249       11.144    9.854
Error Δωr Gen 137 [dB]      8.677     13.709       8.666     12.576

* The time required to compute matrices H2 and H3 is not taken into account.
** Third-order MNF without fourth- and sixth-order terms.

Table 8.13. Complexity of the extended Koopman model for the IEEE 50-machine system.

Order of approximation   Number of states   Number of nonzero elements [%]
Linear                   129                0.7752
Second order             8,514              0.5039
Third order              374,659            0.1333

Several interesting features arise from the interpretation of this model:

a) Matrix W is sparse with the sub-matrices Wjj being diagonal.

b) Matrix W23, although it can be very large, is extremely sparse (99.9%
of its elements are equal to zero). In the implementation of the algorithm, on-
ly the sub-matrices W12 and W13 need to be stored.

8.4 Koopman observability measures


Now, the Koopman non-linear observability measures proposed in Chapter 5
are validated. First, simulated and measured data are utilized to compare the
data-based Koopman observability measures against several identification
methods from the literature.

Then, the (weighted) non-linear observabilities defined for Koopman ex-


tended systems are evaluated to identify and isolate the most critical dynamics.

8.4.1 Application to transient stability data
Two test systems are considered for comparing and verifying the ability of the
proposed methodology: 1) The IEEE 39-bus test system and 2) a 6-area, 377-
machine test system.

8.4.1.1 The IEEE 39-bus system


The procedures developed in Section 5.3 are tested on the IEEE 39-bus, 10-
machine system. Datasets of speed deviations from transient stability simulations of the
non-linear system model were used to assess the ability of KMD to characterize
system behavior.

Based on the system response, the data matrix X \in \mathbb{R}^{M \times N} is defined as X =
[\Delta\omega_1 \; \Delta\omega_2 \; \cdots \; \Delta\omega_{ng}]^T, where \Delta\omega_j is the vector of speed deviations of the j-th genera-
tor and ng is the number of generators.

To excite the inter-area mode, a three-phase stub fault is applied at bus 1


cleared after 0.005 sec. Measurements were recorded at a rate of 100 samples/sec
and cover a period of 17.5 sec, for a total number of Ñ=1751 snapshots. This
yields a total of 1750 empirical Ritz eigenvalues calculated with the KMD meth-
od.
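For orientation, a generic sketch of how empirical Ritz values can be extracted from a snapshot matrix via a DMD-type least-squares computation is given below. This is a standard construction and not necessarily the exact KMD variant used in this thesis (which produces Ñ-1 Ritz values).

import numpy as np

def ritz_values(X):
    """X: (channels x snapshots) data matrix; returns approximate Ritz values."""
    X0, X1 = X[:, :-1], X[:, 1:]
    # least-squares propagator restricted to the data subspace (reduced DMD)
    U, s, Vh = np.linalg.svd(X0, full_matrices=False)
    Atilde = U.conj().T @ X1 @ Vh.conj().T @ np.diag(1.0 / s)
    return np.linalg.eigvals(Atilde)

# Continuous-time estimates follow as log(mu)/dt for each discrete Ritz value mu.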

Table 8.14 shows a comparison between the linear modes obtained


through small signal stability analysis (SSSA) of the linear system model and the
main KMs computed with the proposed methodology.

As shown in Table 8.14, the four dominant modes with the largest ob-
servability measure in columns 4 and 7 at about 0.58 Hz and 0.91 Hz are correct-
ly identified and ranked by the proposed approach.

Also of interest, the unstable mode at 0.082 Hz associated with the excita-
tion system of generator 2 is properly characterized. The analysis of other modes
is not pursued here.

Table 8.14. Nine top-ranked KMs. Norm and absolute value criteria.

     SSSA                                   Top-ranked KMs
#    Freq. [Hz]  Damp. [%]  Max. Obs.       Freq. [Hz]  Damp. [%]  Max. Obs.   Description
1    0.5896      6.6409     1.000           0.5759      6.2399     1.000       Inter-area mode
2    -           -          -               0.0256      67.7348    0.681       General trend
3    0.9198      3.9656     0.459           0.9173      3.4371     0.437       Gen 5 vs. Gen 9
4    -           -          -               0.5268      7.0296     0.235       Spurious KM 1
5    -           -          -               0.6360      4.9042     0.204       Spurious KM 1
6    1.5046      3.9655     0.240           1.4828      2.5305     0.161       Gen 8 vs. Gen 10
7    1.2699      3.4682     0.431           1.3031      1.9375     0.142       Gen 8 & Gen 10
8    1.4885      6.5294     0.195           1.4289      2.6330     0.138       Gen 8 & Gen 10
18   0.0874      -17.0212   0.062           0.0855      -14.5619   0.079       AVR Gen 2

8.4.1.2 Six-area, 377-machine model of the Mexican Interconnected System (MIS)


This test system is the 6-area, 377-machine detailed model of the Mexican Inter-
connected System (MIS) used in previous studies [15].

The six-area model of the MIS exhibits four critical inter-area modes in-
volving the interaction of machines in several parts of the system, as well as a
very observable local mode involving oscillations between the generators of the
Western system:

- Two critical inter-area modes at 0.29 and 0.52 Hz representing the in-
teraction of machines in the North and South systems and the East
and Central and West systems, respectively.

- Three higher-frequency modes at 0.62, 0.77, and 0.92 Hz representing


localized interaction of machines in the North and South systems.

For reference and comparison, Table 8.15 gives the five slowest modes of
the system for the base case condition obtained using the DSATools package
[16].

A number of case studies resulting in non-linear transient behavior were


selected as candidates for the application of the proposed framework analysis
involving both stable and unstable system response. These include:

1) Case 1. Double line-outage of 400 kV lines in the Southeastern system.


This case strongly excites inter-area modes 1, 3, and 4 in Table 8.15
and results in stable system response, and

2) Case 2. Simultaneous line-outage of two 400 kV circuits and a major


generator in the Southeastern system. This case strongly excites inter-
area modes 1, 2, and 4 and results in unstable oscillations involving
major generators.

Table 8.15. Eigenvalues of the MIS for the base case condition.

Description         Eigenvalue       Freq. [Hz]   Damp. [%]
Inter-area mode 1   -0.017±j1.828    0.290        5.94
Inter-area mode 2   -0.014±j3.245    0.516        2.83
Inter-area mode 3   -0.021±j3.906    0.621        3.532
Inter-area mode 4   -0.023±j4.854    0.772        3.023
Local mode 5        -0.031±j5.698    0.906        3.44

Selected simulation results for the aforementioned contingency scenarios
are shown in Fig. 8.14. For clarity of exposition, 42 generators (ng = 42) of the 377
machines are selected for analysis. Generator speed measurements at these ma-
chines were used to build the snapshot matrix X using a sampling frequency
of 80 Hz and a time period of 10 sec, resulting in a total of 794 snapshots (793
approximate KMs).


Fig. 8.14. Speed deviation of dominant machines. (a) Case 1. (b) Case 2.

For Case 1, Table 8.16 compares the modal estimates from KMD with
those from DMD and Prony analysis, while Fig. 8.15 shows the associated spa-
tial observability measures.

Table 8.16. Five top-ranked KMs for Case 1.

     Koopman modes                       DMD modes**                        Prony modes*,**
#    Freq. [Hz]  Damp. [%]  Max. Obs.    Freq. [Hz]  Damp. [%]  Energy [pu]  Freq. [Hz]  Damp. [%]  Energy [pu]
1    0.576       2.680      1.000        0.574       2.721      0.873        0.574       2.523      0.493
2    0.060       46.416     0.340        0.037       71.199     1.000        0.038       68.026     1.000
3    0.297       5.897      0.209        0.298       5.405      0.509        0.298       5.399      0.192
4    0.772       2.987      0.208        0.777       3.019      0.186        0.771       3.299      0.033
5    0.616       3.306      0.069        0.651       77.935     0.242        0.668       53.431     0.004

* Prony analysis of the 20 dominant speed deviations in Fig. 8.14(a).
** Ranked by energy.

As shown, the proposed approach is fully capable of determining the


dominant modes 1 through 4 at about 0.30, 0.57, 0.62, and 0.77 Hz. The physical
significance of other modes should be interpreted cautiously, since they may be
associated with trends or non-linear effects.


Fig. 8.15. Global behavior of the leading KMs in Table 8.16 based on the modulus of the
Koopman measures of observability.

As seen in Table 8.16, other approaches work well in identifying the main
inter-area modes but may fail to rank properly their relative importance. Of
special concern, modes of little physical significance for the analysis of inter-area
phenomena are ranked first, showing the need for better classification tech-
niques.

Now, for Case 2, Table 8.17 shows the slowest oscillatory modes extracted
using Koopman analysis.

For completeness, four methods that aim at extracting spatiotemporal


patterns from simultaneously recorded data are compared here:

1) The BPA/PNNL Prony method in the Dynamic Systems Identification


(DSI) Toolbox (beta version limited to the analysis of 20 simultaneous
signals) [17].

2) The basic DMD framework in [18], [19].

3) A home-made multi-signal Prony method based on the Kumaresan-


Tufts method [20].

4) A stochastic subspace identification method (SSI) based on [21], [22].

Table 8.17. Comparison of modal estimates for Case 2. Time window: 0-10 sec.

Koopman**               KT Prony Analysis**      BPA Prony Analysis*      SSI***
Freq. [Hz]  Damp. [%]   Freq. [Hz]  Damp. [%]    Freq. [Hz]  Damp. [%]    Freq. [Hz]  Damp. [%]
0.266       -1.802      0.267       -1.520       0.267       -1.970       0.262       -0.450
0.751       1.183       0.758       1.777        0.757       1.778        0.760       9.100
0.472       -1.298      0.499       -0.667       0.488       1.471        0.487       -0.300
0.539       -4.192      0.582       4.420        0.547       3.895        0.561       3.610
0.952       2.537       0.941       4.765        0.932       5.308        0.950       21.970

* Prony analysis of the 20 dominant speed deviations in Fig. 8.14(b).
** Simultaneous analysis of 39 speed deviations in Fig. 8.14(b).
*** SSI analysis of the first 37 signals in Fig. 8.14(b).

The various techniques identify two unstable linear inter-area modes at


about 0.27 and 0.49 Hz although some differences are noted, especially with SSI
analysis. Koopman modal analysis, in turn, identifies a second harmonic mode
of the 0.266 Hz mode at about 0.53 Hz, indicating the presence of non-linear be-
havior that is not accurately identified with other methods for the selected time
window.

The analysis of Prony results in column 3 in Table 8.17 shows that partial
analysis of global behavior may result in inaccurate characterization of inter-
area mode 2 (0.49 Hz).

The analysis of observability measures in Fig. 8.16, in turn, shows the


modal distribution of inter-area modes 1-3 in Table 8.15.

In an effort to assess the computational effort associated with the applica-


tion of KMA, the performance of the method for several sampling rates was
evaluated. Table 8.18 summarizes the CPU time involved in the analysis of Case
2 for two sampling rates: 8 and 80 Hz.


Fig. 8.16. Global behavior of the three leading oscillatory KMs in Table 8.17 based on the
Koopman observability measures.

Table 8.18. Computational effort in seconds. Time window: 0-10 sec.

Sampling rate [Hz]   KMD*     DMD*     BPA Prony software**   KT algorithm*   SSI***
8                    0.0032   0.0016   0.4793                 0.4041          -
80                   0.0082   0.0036   -                      1.4321          0.0510

* Simultaneous analysis of 39 speed deviations in Fig. 8.14(b).
** Prony analysis of the 20 dominant speed deviations in Fig. 8.14(b).
*** SSI analysis of 37 speed deviations in Fig. 8.14(b). No meaningful results are
obtained for this case.

As shown, KMA compares well with other modal identification tech-


niques. No physically meaningful results are obtained for short observation
windows using SSI analysis. For the observation window and sampling rate se-
lected, numerical issues limit the application of SSI to 37 signals.

Results are illustrative only since the performance of the methods de-
pends on various factors, and the codes are not optimized for speed and
memory usage.

8.4.2 Application to measured data


In an effort to further verify the accuracy of the proposed framework when deal-
ing with larger datasets, wide-area PMU data from a real event in the MIS are
used to test the ability of observability measures analysis to identify the optimal
modes.

The observation set contains 3201 measurements of heterogeneous data
and covers a period of 160 sec; measurements were collected at a rate of 20 sam-
ples/sec at 22 system locations encompassing three major geographical regions
[23].

In order to make the measurements comparable, a normalization by


means of the variance was made, and then, the amplitudes were normalized to
have a maximum value of 1. For illustration, Fig. 8.17 shows the preprocessed
records of measured data including frequency, tie-line active power, and volt-
age measurements.


Fig. 8.17. Normalized time traces of recorded data. (a) Frequency measurements. (b) Power
and voltage measurements.

Insight into the nature of oscillatory behavior can be gleaned from the
analysis of the relative oscillations in Figure 8.18.


Fig. 8.18. Details of the recorded measurements. (a) Voltage (PMU #22) and power (PMUs
#19, 20, and 21) measurements. (b) Frequency measurements (PMUs #1-18).

Application of the KMD results in 3200 KMs, while DMD analysis results
in 22 dynamic modes. Table 8.19 compares modal estimates obtained using
KMD, Prony, and DMD analyses for the dataset in Fig. 8.17.

Table 8.19. Identified KMs. Time interval: 0-160 sec.

Koopman obs. analysis               DMD (energy)                        Prony analysis*
Freq. [Hz]  Damp. [%]  Max. Obs.    Freq. [Hz]  Damp. [%]  Energy [pu]  Freq. [Hz]  Damp. [%]  Energy [pu]
0.000       -          1.000        0.000       -          1.000        0.026       12.744     1.000
0.993       -0.258     0.949        0.981       0.845      0.178        0.939       2.955      0.109
0.000       -          0.841        0.000       -          0.643        0.008       51.276     0.851

* Prony analysis of the 20 dominant signals in Fig. 8.17.

Several observations are of interest here. First, DMD analysis and Prony
analysis necessitate more signals to approximate the signals’ trend. In addition,
modal estimates may be inconsistent or result in multiple modes with similar
frequencies, thus making physical interpretation difficult.

Of interest, Figures 8.19 and 8.20 show the time evolution and the corre-
sponding spatial pattern of the 0.99 Hz KM (refer to Fig. 8.18).

Fig. 8.19. Time evolution of the 0.99 Hz Koopman mode.


Fig. 8.20. Mode shape of the 0.99 Hz KM. (a) PMUs #1-18. (b) PMUs #19-22.

The results correlate very well with observed system behavior in figures
8.17 and 8.18, showing the accuracy and soundness of the developed frame-
work.

Experience shows that the accuracy of modal estimates using these tech-
niques may be enhanced by enlarging the measurement set so as to improve its
observability measure.

8.4.3 Application to extended Koopman eigenfunctions-based models


The huge dimension of the Koopman extended systems demands means for se-
lecting a small set of critical linear and non-linear oscillations to be analyzed and
controlled.

Therefore, the use of (weighted) Koopman observability measures intro-


duced in Section 5.4 is explored here.

First, in order to be able to visualize the obtained results, we define the
following measures:

\varsigma_j = \max(o_j) \qquad (8.16)

\hat{\varsigma}_j = \max(\hat{o}_j) \qquad (8.17)

\bar{\varsigma}_j = \max(\bar{o}_j) \qquad (8.18)

where O_{PKMA} = [\, o_1 \; o_2 \; \cdots \; o_M \,], \hat{O}_{PKMA} = [\, \hat{o}_1 \; \hat{o}_2 \; \cdots \; \hat{o}_M \,], and \bar{O}_{PKMA} = [\, \bar{o}_1 \; \bar{o}_2 \; \cdots \; \bar{o}_M \,].
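A small sketch of these visualization measures is given below: for every (linear or non-linear) Koopman mode j, the largest entry of its column of observability measures is retained and the modes are ranked by it. The observability matrix itself is assumed to be available from the expressions of Section 5.4.

import numpy as np

def ranked_modes(O_pkma, top=10):
    """O_pkma: (outputs x modes) matrix of observability measures."""
    sigma = np.max(np.abs(O_pkma), axis=0)    # one measure per mode, as in (8.16)
    order = np.argsort(sigma)[::-1]
    return order[:top], sigma[order[:top]]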

Two test systems are used here: 1) the two-area power system of Section
8.1.2 and 2) the 16-machine power system of Section 8.1.3.

8.4.3.1 Two-area power system


The simulated disturbance for this study is a three-phase solid fault at bus 5
cleared after 35 milliseconds. Fig. 8.21 shows the highest ςj when the rotor speed
deviations are regarded as the output signals of the system. The observability
measures were normalized.

Fig. 8.21. Highest ςj when the rotor speed deviations of all the generators are the system’s
output signals. Two-area power system.

In Fig. 8.21, it can be seen that, in accordance with the general definition
of observability measures [24], the most observable mode is the real linear mode
number 15. The second and third most observable dynamics are the local area
modes, whereas the inter-area mode λ5,6 does not appear in the graph.

Thus, the observability measures of matrix OPKMA seem to be inadequate
for our purposes, and the use of the weighted measures of matrix $\hat{O}_{PKMA}$ is
tested. The results are drawn in Fig. 8.22.

Fig. 8.22. Highest $\hat{\varsigma}_j$ when the rotor speed deviations of all the generators are the system's
output signals. Two-area power system.

As can be observed, the most excited dynamics now appear in Fig. 8.22.
However, there are two drawbacks in this classification:

1) Some of the top-ranked modes are fast dynamics that are not of interest
for analyzing wide-area phenomena, and

2) The first-ranked mode is not the inter-area mode.

Thus, the weighted Koopman observabilities $\bar{O}_{PKMA}$, defined in equation (5.25), are used to obtain the data depicted below in Fig. 8.23.

From Fig. 8.23 it can be noted that the Koopman observability measures
$\bar{O}_{PKMA}$ provide the sought means for ranking the most observable and slowest
dynamics.

For illustration, the second- and third-order modes appearing in Fig. 8.23
are presented in Table 8.20.

Fig. 8.23. Highest $\bar{\varsigma}_j$ when the rotor speed deviations of all the generators are the system's
output signals. Two-area power system.

Table 8.20. Most observable slow non-linear Koopman modes of the two-area system.

Combination                    Eigenvalue     Freq. [Hz]   Damp. [%]
λ5+λ6+λ9, λ5+λ6+λ10            -1.01±i0.87    0.14         75.96
λ5+λ6+λ8, λ5+λ6+λ7             -1.40±i0.95    0.15         82.80
λ5+λ8, λ6+λ7                   -1.22±i1.00    0.16         77.22
λ5+λ6+λ6, λ5+λ5+λ6             -0.55±i1.96    0.31         27.00
λ5+λ6                          -0.37          -            -
λ5+λ10, λ6+λ9                  -0.83±i1.09    0.17         60.25
λ5+λ8+λ10, λ6+λ7+λ9            -1.87±i0.14    0.02         99.72
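As a consistency check on Table 8.20 (and, later, Table 8.21), the eigenvalue of a combination mode is the sum of the participating linear eigenvalues, and its frequency and damping ratio follow from the usual definitions f = |Im λ|/(2π) and ζ = -Re λ/|λ|. A short verification using the rounded values from the first row of the table:

```python
# Frequency and damping ratio of a combination mode from its eigenvalue;
# the eigenvalue below is the rounded table entry for lambda_5+lambda_6+lambda_9.
import numpy as np

def freq_damp(lam):
    f = abs(lam.imag) / (2 * np.pi)      # frequency in Hz
    zeta = -lam.real / abs(lam) * 100    # damping ratio in %
    return f, zeta

lam_combo = complex(-1.01, 0.87)
print(freq_damp(lam_combo))              # approximately (0.14 Hz, 76 %)
```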

8.4.3.2 16-machine test system


For the case of the 68-bus, 16-machine system, a three-phase stub fault is applied
at bus 8 and cleared in 0.07 seconds.

By using the Koopman observability measures $\bar{O}_{PKMA}$, the results presented in Fig. 8.24 are found. The information about the non-linear modes seen
there is given in Table 8.21.

Fig. 8.24. Highest $\bar{\varsigma}_j$ when the rotor speed deviations of all the generators are the system's
output signals. 16-machine power system.

Table 8.21. Most observable slow non-linear Koopman modes of the 16-machine system.

Combination                       Eigenvalue      Freq. [Hz]   Damp. [%]
λ29+2λ30, λ30+2λ29                -0.007±i2.27    0.36         0.31
λ29+λ30                           -0.005          -            -
λ25+λ29+2λ30, λ26+λ29+2λ30        -0.095±i4.38    0.70         2.19
2λ29, 2λ30                        -0.005±i4.54    0.72         0.11
λ19+λ29+2λ30, λ20+λ29+2λ30        -0.181±i6.53    1.04         2.77
λ21+λ29+2λ30, λ22+λ29+2λ30        -0.138±i5.74    0.91         2.40

It can be seen from Table 8.21 that the rapid decay in the magnitudes of
the weighted Koopman observabilities of Fig. 8.24 is due to the small damping
coefficients of the system stability eigenvalues.

Three important points must be observed here:

1) Second- and third-order modes can rank higher than several linear
inter-area modes.

2) Linear modes precede the higher-order dynamics that are the product
of their interaction.

3) These weighted Koopman observability measures can be used to identify the most critical modes that need to be controlled.

Further application of these non-linear observability measures for controller design is presented below.

8.5 PKMA-based quadratic non-linear controller


The proposed scheme for a quadratic non-linear controller based on the extended Koopman eigenfunctions-based models is applied here to the two-area power
system and the 68-bus, 16-machine power system.

8.5.1 Two-area, four-machine power system


The two-area, four-machine power system of Section 8.1.2 is used here, with the
system architecture provided in Fig. 8.25, for evaluating the full PKMA-based
quadratic controller.

The simulated disturbance is a three-phase fault in one of the lines connecting buses 5 and 6, as shown in Fig. 8.25. The fault occurs at t = 0.001 sec and
is cleared after 0.035 seconds at bus 5 and after 0.036 seconds at bus 6, without
tripping the line.

In Fig. 8.26, the full quadratic controller is compared against the system
response without any control and with the linear LQR controller.

Fig. 8.25. Scheme for the two-area, four-machine system used for analysis.

Fig. 8.26. Comparison of the linear LQR with the full quadratic nonlinear controller.

The weight matrices used for the controller design are Q11, with a weight
of 50 in the positions corresponding to the rotor speed deviations, and R = I4.
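For reference, a hedged sketch of the linear LQR design step described above is shown below. The two-area system matrices are not reproduced in this section, so a placeholder 8-state model is used; the state ordering, and the tiny weight added to the non-speed states to keep the Riccati equation well posed in this toy example, are assumptions rather than details given in the text.

```python
# Linear LQR sketch: Q places a weight of 50 on the rotor-speed states and
# R = I_4, as stated above; A and B are placeholders, not the real system.
import numpy as np
from scipy.linalg import solve_continuous_are

rng = np.random.default_rng(3)
n, m = 8, 4                                     # 8 states, 4 machine inputs
A = rng.standard_normal((n, n)) - 3 * np.eye(n) # placeholder, stable-ish model
B = rng.standard_normal((n, m))

speed_idx = [1, 3, 5, 7]                        # assumed positions of the speeds
Q = 1e-6 * np.eye(n)                            # small regularization (assumed)
Q[speed_idx, speed_idx] = 50.0
R = np.eye(m)

P = solve_continuous_are(A, B, Q, R)            # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)                 # state-feedback gain, u = -K x
print(K.shape)                                  # (4, 8)
```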

8.5.2 IEEE 16-machine, 68-bus power system


The 16-machine test system presented in Section 8.1.3 is used here for evaluating
the full and the truncated Koopman quadratic controllers when structural con-
straints are taken into account.

8.5.2.1 Full quadratic non-linear controller


Two disturbances are simulated for this test system:

1) A three-phase stub fault at bus 51 cleared after 0.01 sec.

2) A three-phase solid fault in the line between buses 16 and 17, cleared by tripping the line after 0.01 seconds.

Fig. 8.27 below shows the communication networks obtained for the two
cases when η1 =0.25 and η2 =0.03. It can be noted that, for the selected parame-
ters, the first case does not require a quadratic non-linear controller.



Fig. 8.27. Communication structures: (a) first-order structure, Case 1; (b) second-order structure, Case 1; (c) first-order structure, Case 2; and (d) second-order structure, Case 2.

Therefore, Case 2 is used for testing the proposed methodology for de-
signing a non-linear control law.

Four configurations are used in the following analyses:

1) Full communication between all generators.

2) Considering η1=0.125 and η2=0.02 (Fig. 8.28(a)).

3) Considering η1=0.25 and η2=0.03 (Fig. 8.27(d)).

4) Considering η1=0.3 and η2=0.04 (Fig. 8.28(b)).

In the above, R = diag(10, …, 10), and Q11 is constructed to place a weight of 100
on the speed deviations, the rotor angle deviations, and the differences between
the rotor angle deviations of all the generators [25].


Fig. 8.28. First-order communication structure for configurations (a) 2 and (b) 4.

The speed deviations of all the generators are depicted in Fig. 8.29 with the
linear LQR controller and with the quadratic non-linear controller, considering a
full communication structure.

Fig. 8.29. Comparison of speed deviations for the linear LQR (black) and the quadratic LQR
(blue).

Furthermore, Table 8.22 provides a numerical comparison for the two cases and the four configurations. A period of 10 seconds with a time step of
0.001 sec is used for calculating the objective function J. The parameter ε1 has
been set to 1×10⁻³ with a maximum of 100 iterations.
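A short sketch of how the objective function J can be evaluated numerically over a simulated closed-loop trajectory is given below. The quadratic form and the trajectories are assumptions for illustration; only the 10 s horizon and the 0.001 s step are taken from the text.

```python
# Numerical evaluation of an assumed quadratic cost over a trajectory:
# J ~ sum_k (x_k' Q x_k + u_k' R u_k) * dt, with dt = 0.001 s over 10 s.
import numpy as np

def objective(xs, us, Q, R, dt):
    """xs: (N, n) state trajectory, us: (N, m) input trajectory."""
    J = 0.0
    for x, u in zip(xs, us):
        J += (x @ Q @ x + u @ R @ u) * dt
    return J

# Hypothetical decaying trajectories, for illustration only
N, n, m = 10000, 8, 4                          # 10 s at 0.001 s steps
rng = np.random.default_rng(4)
decay = np.exp(-np.linspace(0, 5, N))[:, None]
xs = rng.standard_normal((N, n)) * decay
us = rng.standard_normal((N, m)) * decay
Q, R = np.eye(n), np.eye(m)
print(objective(xs, us, Q, R, dt=1e-3))
```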

Table 8.22. Quadratic controller’s performance for different configurations.

         Conf.   θ       ξ [Linear]*   ξ [Quadratic]
Case 1   1       1.0     1.000         1.000
         2       0.785   1.343         1.344
         3       0.488   1.895         1.896
         4       0.320   2.076         2.078
Case 2   1       1.0     Unstable      1.000
         2       0.785   1.092         0.881
         3       0.488   0.661         0.631
         4       0.320   0.520         0.486

* Using the value of Jfull for the corresponding quadratic controller.

The sparsity index θ and the performance loss index ξ used in Table 8.22 are
defined as follows:

$$\theta = \frac{\|I_c\|_1}{n_g^2}, \qquad \xi = \frac{J_{conf}}{J_{full}}$$

where $\|a\|_1 = \|[a_1\ a_2\ \cdots\ a_N]^T\|_1 = \sum_{j=1}^{N} |a_j|$, ng is the number of generators, and Jfull and
Jconf are the values of the objective function J reached with the full communication structure and with another network configuration, respectively.
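Under the definitions above, both indices reduce to a few lines of code. The sketch below assumes that Ic is the binary (0/1) communication matrix of the generators and that its 1-norm is taken entrywise, an interpretation consistent with θ = 1.0 for the full structure in Table 8.22; the J values used for ξ are illustrative placeholders.

```python
# Sparsity and performance loss indices, assuming Ic is the 0/1 communication
# matrix of the n_g generators (entrywise 1-norm, as in the vector definition).
import numpy as np

def sparsity_index(Ic):
    ng = Ic.shape[0]
    return np.abs(Ic).sum() / ng**2          # theta = ||Ic||_1 / n_g^2

def performance_loss(J_conf, J_full):
    return J_conf / J_full                   # xi = J_conf / J_full

ng = 16
Ic_full = np.ones((ng, ng))
print(sparsity_index(Ic_full))               # 1.0 for the full structure
print(performance_loss(1343.0, 1000.0))      # illustrative values giving xi = 1.343
```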

The simulated fault of Case 2 is so severe that the system does not go
back to the pre-fault operating point. For this case, the designed controller has
better performance for sparser communication structures.

8.5.2.2 Truncated quadratic non-linear controller


Now, a numerical comparison of the linear LQR, the full quadratic LQR, and
second-order controllers with some non-linear terms is presented in Table 8.23
for the two configurations of Fig. 8.28. The values of the objective function J
were obtained for a period of 10 seconds, with a time step of 0.001 sec from t = 0
to t = 1 sec and of 0.01 sec for t > 1 sec.

Table 8.23. Numerical comparison for a different number of non-linear modes.

Conf.     η3      No. 2nd-order modes   J            J [%]
Full      -       0 (Linear)            Unstable     -
          -       3160 (All)            270,818.11   100.00
          0.1     0                     -            -
          0.075   0                     -            -
          0.05    0                     -            -
          0.025   6                     Unstable     -
          0.01    56                    346,906.60   128.10
8.28(a)   -       0 (Linear)            278,019.31   114.44
          -       3160 (All)            242,932.38   100.00
          0.1     4                     286,463.18   117.92
          0.075   4                     286,463.18   117.92
          0.05    9                     277,608.17   114.27
          0.025   28                    250,220.36   103.00
          0.01    109                   242,040.95   99.63
8.28(b)   -       0 (Linear)            184,116.62   104.00
          -       3160 (All)            177,027.37   100.00
          0.1     2                     196,033.88   101.73
          0.075   6                     226,032.66   127.68
          0.05    10                    196,033.88   110.73
          0.025   36                    180,921.99   102.20
          0.01    116                   233,892.18   132.12

It seems from the first part of Table 8.23, corresponding to the full communication structure, that a larger number of second-order terms has a more
beneficial effect on the system response.

However, for sparser structurally-constrained configurations the same does not happen: the system's performance does not improve steadily as the
number of second-order terms is increased.

The aforementioned behavior can be explained from two perspectives:

1) Pre-fault system perspective: The applied disturbance changes the configuration of the system, introducing another disturbance, denoted as
Bd d(k) in equation (7.1), which would have a bilinear effect that has
not been considered here. Consequently, some non-linear gains could
negatively affect the performance of the controller.

2) Post-fault system perspective: If a small-signal analysis were performed, it would be found that the post-fault closed-loop scenario is stable.
However, besides the initial perturbation x0, the difference between
the steady-state reference voltages of the exciters acts as a constant
input that interacts with the second-order gains, changing the effect of
some of them.

This phenomenon requires an additional method to predict which non-linear modes' gains would be harmful to the system in different situations.
These modes would then be automatically ignored by the truncated second-order controller, and better system responses could be reached, as observed in
Table 8.23 for configuration 8.28(a) with η3 = 0.01, where the truncated controller even outperforms the full quadratic one.

8.5.2.3 Online design


First, two more configurations are added for the following analyses. These are
shown in Fig. 8.30.


Fig. 8.30. Extra configurations for the 16-machine power system.

Henceforth, for ease of reference, we rename the previously presented configurations in the following way:

1) Configuration 0: With all machines communicating.

2) Configuration 1: The left-side configuration of Fig. 8.30.

3) Configuration 2: The left-side configuration of Fig. 8.28.

4) Configuration 3: The right-side configuration of Fig. 8.30.

5) Configuration 4: The right-side configuration of Fig. 8.28.

Configurations 1 and 3 were obtained with η1 = 0.1, η2 = 0.01, and η1 = 0.125, η2 = 0.03, respectively, for the applied disturbance.

Now, Table 8.24 shows the time required to compute the corresponding
K11 gain matrix for configurations 1 through 4 with different values of ε1. The
highlighted data indicate those configurations that could not stabilize the sys-
tem due to the amount of time required to calculate the linear LQR.

In order to evaluate the effectiveness of the control matrices $K_{11}^{1ord}$
found with ε1 = 0.1, we show in Table 8.25 the value of the objective function J
obtained for the different configurations and convergence tolerances.

Table 8.24. Time required for computing K11 for the different configurations.

                ε1 = 1×10⁻¹             ε1 = 1×10⁻²             ε1 = 1×10⁻³
Configuration   Iterations   Time [s]   Iterations   Time [s]   Iterations   Time [s]
1               2            0.126      12           0.695      51           3.017
2               2            0.115      12           0.755      73           4.361
3               2            0.122      15           0.883      104          6.240
4               2            0.127      20           1.169      119          6.863

Table 8.25. Value of J for the considered communication networks.

Configuration   ε1 = 1×10⁻¹   ε1 = 1×10⁻²   ε1 = 1×10⁻³
1               259,019.60    266,646.90    277,556.10
2               278,160.90    281,521.60    290,869.10
3               277,789.60    278,019.30    279,341.90
4               192,713.80    184,275.40    184,116.60

It can be noted that the differences in the value of J are small. Table
8.26 below shows the time required to calculate the second-order controller,
as well as the number of non-linear modes utilized and the time required by the
sub-process of computing Ĥ12.

Table 8.26. Time required for the considered communication networks. Seconds.

                Full 2nd order        Truncated 2nd order with ε1 = 1×10⁻¹
Configuration   J [%]*   Time         2nd-order modes   J [%]*   Time Ĥ12   Total time
1               89.42    2,168.19     67                91.44    1.4511     2.118
2               85.93    2,146.39     64                93.41    1.4477     2.099
3               87.36    2,106.58     66                91.26    1.4555     2.180
4               90.46    2,126.91     76                155.12   1.4312     1.925

* In percentage of the J of the linear LQR.

As can be noted from Table 8.26, the difference in the required time between the full and the truncated second-order controllers is enormous, while the
performance is similar. Nonetheless, the risk of badly-tuned controllers exists, as
happened with configuration 4.

Further analysis demonstrated that this last behavior is due to the inaccuracy of the linear gain matrix; results similar to those of the other configurations
are obtained for more accurate solutions of K11.

Furthermore, it can be noted from Table 8.26 that the required time to
compute the truncated second-order LQR is high in comparison with the linear
LQR (refer to Table 8.24).

Thus, the proposed quadratic non-linear controller can be considered as a supplementary control that enhances the performance of the linear controller.

8.6 Concluding remarks


In this chapter, numerical results for the methods proposed in this dissertation
are provided.

First, a synthetic size-variable non-linear system and several multi-machine power systems were used to evaluate the performance of the recursive
linearization (RL) method against conventional linearization techniques. The
comparisons showed that the recursive linearization has the accuracy of the direct analytic method with even less computational effort than that required by
the perturbation-based methods.

Secondly, numerical results for two case studies were used to demonstrate the efficiency and accuracy of the perturbed Koopman mode analysis.
Comparisons against the fully non-linear simulations, the linear approximation,
and the method of normal forms (MNF) validate the proposed method.

Then, the data-driven and the model-based observability approaches
were applied to extract dominant spatiotemporal patterns from power
systems' dynamical responses. The framework provides a criterion for selecting
the dominant Koopman modes from a large set of simultaneous recordings (for
the data-driven method) or from a large number of higher-order dynamics
obtained from the Koopman extended systems (for the model-based method).

Finally, two case studies were presented to illustrate the truncated quadratic non-linear controller. In both, the simulated disturbance leads the system to a highly unstable response. For the fully-communicated scheme, the proposed quadratic controller was clearly superior, since the linear LQR controller
could not stabilize the system.

Additionally, the time required to compute the truncated quadratic non-linear controller is relatively low in comparison with the full case; low enough for
it to be applied online. However, it must be considered as a supplementary
control for the linear LQR, and reducing the time required to obtain the
second-order gain matrix remains essential.

In this sense, relaxing the convergence criterion for obtaining the linear
gain matrix under certain structural constraints is essential for assuring
system stability, allowing sparser communication structures to be considered.
Likewise, determining which modal gains become detrimental when a topology
change occurs is a paramount issue.

8.7 References
[1] P. M. Anderson and A. A. Fouad, Power system control and stability, IEEE
Press, John Wiley & Sons, NJ, USA, 2002.

[2] P. Kundur, Power System Stability and Control, Mc-Graw Hill, NY, USA,
1994.

[3] J. J. Sanchez-Gasca, V. Vittal, M. J. Gibbard, A. R. Messina, D. J. Vowles, S. Liu, and U. D. Annakkage, “Inclusion of higher order terms for small-signal (modal) analysis: Committee report-task force on assessing the need to include higher order terms for small-signal (modal) analysis,” IEEE Trans. Power Syst., vol. 20, no. 4, pp. 1886-1904, Nov. 2005.

[4] J. Arroyo, E. Barocio, R. Betancourt, and A. R. Messina, “A bilinear analysis


technique for detection and quantification of nonlinear modal interaction
in power systems,” 2006 IEEE PES GM, Montreal, Canada, June 2006.

[5] V. Vittal, “Transient stability test systems for direct stability methods,”
IEEE Trans. Power Syst., vol. 7, no. 1, pp. 37-43, Feb. 1992.

[6] S. Liu, A. R. Messina, and V. Vittal, “A normal form analysis approach to


siting power system stabilizers (PSSs) and assessing power system nonlin-
ear behavior,” IEEE Trans. Power Syst., vol. 21, no. 4, pp. 1755-1762, Nov.
2006.

[7] S. Liu, A. R. Messina, and V. Vittal, “A normal form-based approach to


place power system stabilizers,” 2006 IEEE PES GM, Montreal, Canada,
June 2006.

[8] P. W. Sauer and M. A. Pai, Power system dynamics and stability, Prentice-
Hall, NJ, USA, 1998.

[9] R. Ramos et al., “Benchmark systems for small-signal stability analysis and
control,” IEEE PES Task Force on Benchmark Systems for Stability Con-
trols, IEEE Inc., NJ, USA, Tech. Rep. PES-TR18, Aug. 2015.

[10] J. H. Chow, Power system coherency and model reduction, Springer, NY, USA,
2013.

[11] P. E. Gill, W. Murray, and M. H. Wright, Practical Optimization, Academic
Press, 1981.

[12] J. Persson and L. Söder, “Comparison of three linearization methods,” Proc.
of the 16th PSCC, Glasgow, 14-18 July 2008.

[13] T. Tian, X. Kestelyn, O. Thomas, A. Hiroyuki, and A. R. Messina, “An ac-
curate third-order normal form approximation for power system non-
linear analysis,” IEEE Trans. Power Syst., vol. 33, no. 2, pp. 2128-2139, Mar.
2018.

[14] S. Liu, “Assessing placement of controllers and nonlinear behavior of elec-


trical power systems using normal form information,” Ph.D. dissertation.
Iowa State Univ., Ames, IA, USA, 2006.

[15] A. R. Messina and V. Vittal, “Extraction of dynamic patterns from wide-


area measurements using empirical orthogonal functions,” IEEE Trans.
Power Syst., vol. 22, no. 2, pp. 682-692, May 2007.

[16] DSA ToolsTM, Powertech Labs Inc., [Online] Available: www.dsatools.com

[17] BPA/PNNL, Dynamic Systems Identification (DSI) Toolbox. [Online].


Available: ftp://ftp.bpa.gov/pub/WAMS_Information/.

[18] C. W. Rowley, I. Mezić, S. Bagheri, P. Schlatter, and D. Henningson, “Spectral analysis of nonlinear flows,” J. Fluid Mech., vol. 641, pp. 115-127, 2009.

[19] P. J. Schmid, “Dynamic mode decomposition of numerical and experi-


mental data,” J. Fluid Mechanics, vol. 656, Cambridge University Press 2010,
pp. 5–28.

[20] D. W. Tufts and R. Kumaresan, “Singular value decomposition and im-


proved frequency estimation using linear prediction,” IEEE Trans. Acoust.,
Speech, Signal Process., vol. ASSP-30, no. 4, pp.671-675, Aug. 1982.

[21] J. J. Ramos, R. J. Betancourt, and E. Barocio, “Visualization of inter-area os-


cillations using an extended subspace identification technique,” IEEE North
American Power Symposium, KS, USA, Sept. 2013.

[22] T. Katayama, Subspace Methods for System Identification, US: Springer, 2005.

[23] E. Martínez and A. R. Messina, “Modal analysis of measured inter-area os-
cillations in the Mexican interconnected system: The July 31, 2008 event,”
Proceedings of the IEEE/PES GM, Detroit, MI, USA, July 2011.

[24] S. M. Chan, “Modal controllability and observability of power system


models,” Elect. Power Energy Syst., vol. 6, no. 2, pp. 83-88, April 1984.

[25] A. Jain, A. Chakrabortty, and E. Biyik, “An online structurally constrained


LQR design for damping oscillations in power system networks,” IEEE
American Control Conference (ACC) 2017, pp. 2093-2098, Seattle, WA, USA,
July 2017.

 

Chapter 9
Conclusions
9.1 General conclusions
In this dissertation, a theoretical framework for the efficient computation of a
higher-order dynamical system representation is proposed, which can be used to
identify the most critical linear and non-linear dynamics and to design appropriate
control laws.

The importance and usefulness of the proposed framework lie in its simplicity and reduced computational resource demand, leading to a broader field of
application.

Several general conclusions can be drawn from this analysis. 

i) The  use  of  the  recursive  linearization  method  simplifies  the  applica‐
tion of the non‐linear analysis techniques and may be used to facilitate 
the online application of new analysis and control tools. 

ii) The perturbed Koopman mode analysis can be used to analyze the in‐
fluence  of  multimode  interactions  of  various  types  on  the  non‐linear 
response of non‐linear dynamical systems, as well as to assess the na‐
ture and strength of interactions between system components. 

iii) The use of observability measures is seen to add valuable information 
to  modal  analysis,  which  is  of  interest  in  the  analysis  and  identifica‐
tion  of  reduced‐order  models  and  provides  a  direct  comparison  to 
conventional linear analysis techniques. 

iv) The proposed truncated quadratic non-linear controller can be easily
computed by using the recursive linearization method and the presented procedure. It enhances the linear LQR controller by adding a
small set of critical non-linear modes and increases system stability
under conditions where the linear controller fails.

9.2 Future work


Some aspects of the proposed methodology that can be improved, and some
open areas of investigation identified in this dissertation, are the following:

a) Extension of the recursive linearization to include other non-linear
devices, as well as formulations for dynamical systems defined by
differential-algebraic equations (DAEs).

b) Development  of  a  hybrid  model‐based,  data‐driven  approach  based 


on the power systems’ Koopman eigenfunctions model that improves 
the prognosis ability of both techniques. 

c) Application of the observability-based approach for optimal wide-area
system monitoring and for the placement of PMUs to observe and
control linear and non-linear global phenomena.

d) Obtain a projection relating the most excited open-loop non-linear
modes with the most excited closed-loop non-linear modes, to determine a small set of columns of the second-order interaction matrix
that need to be computed.

e) Develop a method to assess the influence of topology changes on the
effect of the non-linear modal gains.
