
1960 IRE TRANSACTIONS ON CIRCUIT THEORY 212

Modes in Linear Circuits*


C. A. DESOER†, SENIOR MEMBER, IRE

I. INTRODUCTION

THE PURPOSE of this paper is to develop the concept of modes in linear time-invariant circuits. It is well known that the chapter on modes in lossless circuits or in conservative mechanical systems belongs naturally to any good text on the subject [1]-[3], [5]. The purpose of this paper is to show that, provided a suitable point of view is taken, the concept of modes does indeed apply to any linear circuit;¹ more precisely, it will be shown that it applies to any lossy or lossless, active or passive, reciprocal or nonreciprocal circuit. The importance of the mode concept lies in the fact that it allows one to consider a linear circuit as a whole, to break up its behavior as a sum of very simple patterns, and to visualize easily the effect of externally applied forces. Furthermore, it provides a very informative description of the resonance phenomenon. In particular it will be shown that

1) any free oscillation of a linear circuit can be thought of as a superposition of noninteracting modes of oscillation,
2) in the case of free oscillations the amount of excitation of each mode can easily be determined on the basis of initial conditions,
3) any forcing function can be considered as exciting each mode independently,
4) the resonance phenomenon can be given a very simple interpretation pointing out, in particular, the importance of the proper type of excitation.

The contribution of this paper lies not so much in the mathematical developments as in the physical interpretation of some modern mathematical formalism [4]. In view of the fact that certain aspects are of great importance to linear circuits, they had to be developed at length in the present paper. Throughout the paper, emphasis will be placed on viewing the circuit as a whole. Consequently, vector notation will be used together with geometrical interpretation in order to give an intuitive grasp of the formalism. Examples are put at the end of the paper and are referred to at suitable places.

II. THE CLASSICAL CASE OF LOSSLESS RECIPROCAL CIRCUITS

Let us start with a short review of the classical theory of modes as it applies to lossless reciprocal circuits [1]-[3], [5]. Given any lossless reciprocal circuit, there are well-established rules for choosing a set of independent loops and writing the loop equations [6].² Let there be n independent loop currents q̇₁(t), q̇₂(t), ..., q̇ₙ(t), where q̇ = dq/dt. The mesh equations read

$\sum_{j=1}^{n} \left( L_{ij}\ddot{q}_j(t) + S_{ij}q_j(t) \right) = 0 \qquad (i = 1, 2, \dots, n).$

Or, in vector form, denoting by q(t) the column vector whose components are q₁(t), q₂(t), ..., qₙ(t), and by L and S the matrices whose elements are L_ij and S_ij,

$L\ddot{q}(t) + Sq(t) = 0. \qquad (1)$
* Received by the PGCT, August 24, 1959; revised manuscript received December 7, 1959. This research was supported by the United States Air Force, through the AF Office of Scientific Research, Air Research and Development Command.
† Dept. of Elec. Engrg., University of California, Berkeley, Calif.
¹ In the literature one can find many statements to the effect that the mode concept does not apply to RLC circuits [13].

These equations, together with the initial conditions q(0), q̇(0), determine the free oscillations of the circuit. Here L and S are real symmetric matrices since they represent the inductance and elastance matrices [6] of a reciprocal circuit. The magnetic and electric energy quadratic forms are³

$T = \tfrac{1}{2}\sum_{i,j} L_{ij}\dot{q}_i\dot{q}_j = \tfrac{1}{2}(\dot{q}, L\dot{q}) \qquad (2)$

$V = \tfrac{1}{2}\sum_{i,j} S_{ij}q_i q_j = \tfrac{1}{2}(q, Sq). \qquad (3)$

In most cases, one of the matrices L or S (or both) is positive definite; let us assume that L is positive definite. Consider the following eigenvalue problem: find the λ's for which the vector a satisfying

$Sa = \lambda La \qquad (4)$

is not identically zero. It is classically demonstrated that [5]:

1) There is a nontrivial solution, a ≠ 0, only if λ is equal to one of the eigenvalues λ₁, λ₂, ..., λₙ.
2) For each k, λₖ is real and positive; hence let λₖ = ωₖ², and note that ωₖ is the angular frequency of the kth mode.
3) Associated with each λₖ there is a vector solution of (4) which can be taken to be real; this gives n real vectors, the eigenvectors a₁, a₂, ..., aₙ. It is convenient to normalize them by the requirement (aₖ, Laₖ) = 1 for k = 1, 2, ..., n.
4) It can be shown that the vectors aₖ obey a special type of orthogonality requirement,

$(a_i, La_k) = \delta_{ik} \qquad (i, k = 1, 2, \dots, n) \qquad (5)$

and, consequently, by (4),

$(a_i, Sa_k) = \lambda_k\,\delta_{ik} = \omega_k^2\,\delta_{ik} \qquad (i, k = 1, 2, \dots, n). \qquad (6)$

5) If the eigenvalues are distinct, the eigenvectors a₁, a₂, ..., aₙ are linearly independent; this means that no linear combination of these vectors can be made equal to zero. Geometrically this means that any pair of eigenvectors, say a_i, a_k, defines in the n-dimensional space a hyperplane distinct from that defined by any other pair. It can also be shown that any vector, and the vector q(t) of (1) in particular, can be expressed uniquely as a linear combination of the aₖ's, viz.,

$q(t) = \sum_{k=1}^{n} p_k(t)\,a_k \qquad (7)$

where the scalar p_k(t) is the expansion coefficient. Eq. (1) becomes a new vector equation

$\sum_{k} \left( \ddot{p}_k(t)\,La_k + p_k(t)\,Sa_k \right) = 0.$

Multiplying successively this equation by each a_i (i = 1, 2, ..., n),

$\sum_{k} \left[ \ddot{p}_k(t)(a_i, La_k) + p_k(t)(a_i, Sa_k) \right] = 0 \qquad (i = 1, 2, \dots, n)$

and from (5) and (6)

$\ddot{p}_i(t) + \omega_i^2\,p_i(t) = 0 \qquad (i = 1, 2, \dots, n). \qquad (8)$

The initial conditions for these differential equations can be evaluated directly from (7) and (5) by

$p_i(0) = (a_i, Lq(0)), \qquad \dot{p}_i(0) = (a_i, L\dot{q}(0)).$

The above manipulations have the following geometric interpretation. Think of the vector q(t) as specifying the position of a point in n-dimensional space by its n coordinates q₁, q₂, ..., qₙ. This n-dimensional space is classically called the configuration space of the circuit [5]. If the initial coordinates and velocities are known, then the vector equation (1) specifies completely the future evolution of the system. Instead of considering the point q(t) as having its position specified by the coordinates q₁(t), q₂(t), ..., qₙ(t), consider (7) and note that the eigenvectors a₁, a₂, ..., aₙ are fixed vectors in the configuration space. Once they are found, the n coordinates p₁(t), p₂(t), ..., pₙ(t) specify completely the point q(t) by (7). The beauty of this point of view is that now the equations (8) governing the coordinates p_i(t) are very simple. Not only is the evolution in time of the ith coordinate independent of that of all other coordinates, but the solution assumes the simple form

$p_i(t) = \alpha_i \cos\omega_i t + \beta_i \sin\omega_i t = \gamma_i \cos(\omega_i t + \epsilon_i)$

where α_i, β_i, γ_i and ε_i are integration constants. We say that each eigenvector a_i and corresponding angular frequency ω_i pertains to a different mode of oscillation. The differential equation governing the time evolution of each mode has the simple form (8), or, in terms of the initial conditions,

$p_i(t) = (a_i, Lq(0)) \cos\omega_i t + (a_i, L\dot{q}(0))\,\omega_i^{-1} \sin\omega_i t. \qquad (9)$

As a conclusion, any free oscillation of a lossless reciprocal circuit can be visualized as the superposition of normal modes of oscillation. The degree to which each mode of oscillation is excited by the initial conditions is given by the simple rule (9). We could consider the case of forced oscillations using the same principles. Next the case of lossless nonreciprocal circuits should be tackled. The question following that is, what happens if losses are present, if the circuit is an active circuit, etc. To answer all these questions it is more convenient to drop the conventional notation and use a slightly different one, the advantage gained being that it will cover all cases. The principle, however, is similar.

² An entirely analogous development would follow for the case of node analysis.
³ Here the notation (x, y) denotes the scalar product between x and y. This scalar product is often denoted by x·y in the engineering literature.

III. REDUCTION OF THE EQUATIONS TO A STANDARD FORM

Before developing the required formalism it is necessary to indicate that the equation of any linear circuit can be reduced to the simple form (14) below. In each particular case, the selection of variables is a matter of choice; for example, they might be loop currents, node-pair voltages, or mixtures of both as in Bashkow's A-matrix method [18]. An example of a case where the node equations lead to the standard form almost directly is to be found in Example 1 of Section IX. If loop currents are selected as variables, they lead to a set of second-order equations of the form

$L\ddot{q} + R\dot{q} + Sq = 0. \qquad (10)$

Let there be l independent loops, so that L, R and S are the l × l inductance, resistance and elastance matrices respectively; q is an l-dimensional column vector. If the inductance matrix L is nonsingular,⁴ a simple way to reduce this set of equations to a set of first-order equations is as follows: if one multiplies on the left by L⁻¹, the result is

$\ddot{q} + L^{-1}R\dot{q} + L^{-1}Sq = 0. \qquad (11)$

⁴ The author has obtained a general technique applicable to the case where the three matrices L, R, and S are singular; however, it is based on some results of matrix theory which would take too long to discuss here.
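The reduction of the second-order system (10) to first-order form can be sketched numerically. The following is a minimal numpy sketch; the element values of L, R, and S are assumed for illustration and are not those of the paper's examples. (With R = 0 and L, S positive definite, the same construction recovers the purely oscillatory modes of Section II.)

```python
import numpy as np

# Assumed 2-loop circuit (illustrative values, not the paper's examples).
L = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # inductance matrix (nonsingular)
R = np.array([[3.0, 0.0],
              [0.0, 1.0]])   # resistance matrix
S = np.array([[4.0, 1.0],
              [1.0, 2.0]])   # elastance matrix

Linv = np.linalg.inv(L)
l = L.shape[0]

# y = (q1, ..., ql, r1, ..., rl) with r = dq/dt obeys  dy/dt = A y:
A = np.block([[np.zeros((l, l)), np.eye(l)],
              [-Linv @ S,        -Linv @ R]])

# For an eigenmode of A, check that it satisfies the original system (10):
lam, U = np.linalg.eig(A)
b = U[:, 0]                      # one eigenvector: (q-part, r-part)
q, r = b[:l], b[l:]
assert np.allclose(r, lam[0] * q)               # r = dq/dt for the mode e^{lam t}
res = L @ (lam[0] ** 2 * q) + R @ (lam[0] * q) + S @ q
assert np.allclose(res, 0)                      # the mode satisfies (10)
```

Since L, R, and S above are symmetric positive definite, the circuit is dissipative and every eigenvalue of A has a negative real part.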


In practice, this result is obtained by considering (10) as a set of linear equations in the q̈_i(t) and solving for them. This set of l second-order equations (10) is equivalent to the following set of 2l first-order equations, in which r(t) is a new column vector defined as follows:

$\dot{q}(t) = r(t), \qquad \dot{r}(t) = -L^{-1}Sq(t) - L^{-1}Rr(t). \qquad (12)$

If y(t) is the column vector q₁(t), ..., q_l(t), r₁(t), ..., r_l(t), this can be written compactly, thus,

$\dot{y}(t) = \begin{bmatrix} 0 & I \\ -L^{-1}S & -L^{-1}R \end{bmatrix} y(t) \qquad (13)$

where I is the l × l unit matrix and 0 the l × l null matrix, and

$\dot{y}(t) = Ay(t). \qquad (14)$

Note that (14) is a vector equation: A is a 2l × 2l matrix and y(t) is a 2l-dimensional vector. For simplicity denote 2l by n. Let y₁(t), y₂(t), ..., yₙ(t) be the components of y(t), and a_ik (i, k = 1, 2, ..., n) be the elements of the matrix A; then (14) reads⁵

$\dot{y}_i = a_{i1}y_1 + a_{i2}y_2 + \cdots + a_{in}y_n \qquad (i = 1, 2, \dots, n). \qquad (15)$

Although in some applications [19] the a_ik's turn out to be complex, we shall assume throughout the paper that the a_ik's are real. This is the standard form that will be used throughout the paper. It should be stressed that this is not the only way of obtaining the equations in the form (14) and that in each case one choice is definitely preferred to others.⁶ Although the preceding discussion does not constitute a demonstration, it is a fact that the equations of any linear circuit, be it active or passive, reciprocal or nonreciprocal, can be written in the form (14) irrespective of the initial choice of variables. This fact has been established for a broad class of circuits by Bashkow [18].

The geometric interpretation of (14) is the following: y(t) defines a point in n space. This point is usually said to define the state of the circuit and is referred to as the phase space point representing the state of the circuit.⁷ From differential equation theory, it follows that if y(0) is specified, the future behavior of y(t) is completely defined by (14). One may visualize (14) as giving at any instant the velocity of the phase space point, thus completely defining its trajectory in phase space.

It should be stressed that the configuration space, the space of the vector q(t) of Section II, and the phase space, the space of the vector y(t) of the present section, are different spaces. Consider for example two series RLC circuits coupled by mutual induction. There are two loop currents, therefore the vector q(t) is a two-dimensional vector. In the phase space representation, however, y(t) will be a four-dimensional vector. As a rule the number of components of y(t) is the minimum number of independent initial conditions that must be specified in order that the trajectory of y(t) be completely determined.

IV. THE MODE EXPANSION

There are well-known methods for solving (14) [7]. The purpose of this section, however, is to develop the concept of noninteracting modes. To this end a certain formalism must be developed, and its usefulness will only become apparent later. Example 1 of Section IX roughly parallels the development below. A concurrent reading of that example may facilitate considerably the reading of the general presentation.

Since the formalism will include arbitrary linear circuits, complex characteristic roots will appear in the analysis; it is therefore useful to define the scalar product of two vectors x and y whose components (ξ₁, ξ₂, ..., ξₙ) and (η₁, η₂, ..., ηₙ) are complex numbers. By definition we set⁸

$(x, y) = \sum_{i=1}^{n} \xi_i^{*}\,\eta_i.$

Note that the scalar product of two complex vectors is usually a complex number, but the scalar product of a vector with itself is a real non-negative number.
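This conjugating scalar product can be checked numerically; a minimal sketch with arbitrary illustrative vectors (numpy's `vdot` conjugates its first argument, matching the definition above):

```python
import numpy as np

# (x, y) = sum_i conj(xi_i) * eta_i; x, y are arbitrary illustrative vectors.
x = np.array([1 + 2j, 3 - 1j])
y = np.array([2 - 1j, 1 + 4j])

def sp(a, b):
    return np.vdot(a, b)        # conjugates its first argument

assert np.isclose(sp(x, y), np.conj(sp(y, x)))   # conjugate symmetry
z = sp(x, x)
assert np.isclose(z.imag, 0) and z.real >= 0     # (x, x) is real and non-negative
```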

The proposed defining characteristic of a mode is the following: if the kth mode and only the kth mode is excited, then all the variables have the same time behavior, say

$y_i = \beta_i e^{\lambda t} \qquad (i = 1, 2, \dots, n) \qquad (16)$

where the β_i's are appropriate constants and λ is also a constant. In order to find out whether such modes are possible, substitute (16) into (14) and get

$\lambda b\,e^{\lambda t} = Ab\,e^{\lambda t}$

where b is the column vector whose components are β₁, β₂, ..., βₙ. Upon simplification,

$[A - \lambda I]\,b = 0. \qquad (17)$

⁵ A word about notation: capital letters will denote matrices (e.g., A); lower case will denote vectors [e.g., u(t)]; and Greek letters will denote scalars (e.g., λ_i, α, ξ). The notable exception is t for time.
⁶ Example 2 will illustrate this fact.
⁷ Recently the term state space has been used in order to emphasize the fact that one is not restricted to the (p, q) phase space of the Hamiltonian formulation of classical mechanics.

This is a set of n linear homogeneous algebraic equations in n unknowns, the components of b. A nonzero solution will exist if and only if

$\det(A - \lambda I) = 0.$
⁸ The superscript *, as in ξ_i*, denotes the complex conjugate.

The expansion of the determinant leads to an algebraic equation of nth degree with real coefficients. Therefore it has n roots λ₁, λ₂, ..., λₙ. Since the coefficients are real, complex roots, if any, occur in complex conjugate pairs. The roots λ_i are called the eigenvalues of the matrix A. It should be stressed that the eigenvalues λ_i are identical to the natural frequencies of the circuit, usually defined as the roots of the characteristic equation of the circuit. Throughout the rest of the paper we shall assume that the eigenvalues are distinct. This assumption is made because of complications that may occur when there are multiple eigenvalues.⁹ This assumption is not physically as restrictive as it sounds, for to any linear circuit N that has multiple eigenvalues, there corresponds another linear circuit N′ that has distinct eigenvalues, and N′ is obtained from N by modifying some of its elements by an arbitrarily small amount [14]. To each eigenvalue λ_i there corresponds through (17) a set of n linear homogeneous equations

$[A - \lambda_i I]\,b = 0 \qquad (18)$

whose solutions are the vectors u₁, u₂, ..., uₙ. These vectors are the eigenvectors of the matrix A, the eigenvector u_i corresponding to the eigenvalue λ_i. Since the linear homogeneous equations (18) define the components of the eigenvectors only within an arbitrary constant, let the eigenvectors u₁, u₂, ..., uₙ be normalized; that is, the sum of the absolute values squared of the components of each is set equal to 1. Geometrically speaking, this amounts to setting the length of each eigenvector to 1. In the formalism defined above, the eigenvectors are defined by

$Au_i = \lambda_i u_i, \qquad (u_i, u_i) = 1 \qquad (i = 1, 2, \dots, n). \qquad (19)$

These developments are classical [4], [7], but the purpose of repeating them here is to provide a foundation for their geometrical interpretation. For ease of visualization consider the case where n = 3 and where λ₁, λ₂, λ₃ are all real. Such a case occurs in the RC circuit of Example 1. In Fig. 1 are shown the three eigenvectors u₁, u₂ and u₃. The geometric meaning of the fact that u₁e^{λ₁t} is a solution of (14) is that if the initial conditions, which by definition specify the vector y(0), correspond to a point A₁ along the straight line defined by u₁, then the subsequent behavior of the network is represented by the motion of A₁ along the line OA₁. The distance from the representative point A₁ to the origin is OA₁e^{λ₁t}. It is the simplicity of the time behavior of that particular type of solution that makes the concept of eigenvalues and eigenvectors very useful. For a detailed description see Example 1 in Section IX.
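The defining relations (19) can be checked numerically; a minimal numpy sketch for an assumed real 3 × 3 state matrix (the values are illustrative, not those of Example 1):

```python
import numpy as np

# Assumed real 3x3 state matrix (illustrative values only).
A = np.array([[0.0,  1.0,  0.0],
              [-2.0, -0.4, 1.0],
              [0.0,  0.5, -1.0]])
lam, U = np.linalg.eig(A)       # numpy returns unit-norm eigenvector columns

for i in range(3):
    assert np.allclose(A @ U[:, i], lam[i] * U[:, i])   # A u_i = lam_i u_i
assert np.allclose(np.linalg.norm(U, axis=0), 1.0)      # (u_i, u_i) = 1

# Since A has real coefficients, complex eigenvalues occur in conjugate pairs:
assert np.allclose(np.sort_complex(lam), np.sort_complex(np.conj(lam)))
```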

Fig. 1—Perspective view of the three eigenvectors u₁, u₂, u₃ of Example 1, Section IX.
Similarly, if the initial conditions were described by a point A₂ situated on the straight line supporting the vector u₂, the motion is also along that straight line, and the distance of the representative point to the origin is OA₂e^{λ₂t}. Any motion of the circuit having the simple form

$y(t) = \alpha_i u_i e^{\lambda_i t} \qquad (\alpha_i = \text{constant scalar})$

is called a mode of oscillation. When the eigenvalues are distinct the eigenvectors are linearly independent, and any vector in phase space, say y(t), can be represented uniquely as a linear combination of the eigenvectors,

$y(t) = \sum_{i=1}^{n} \xi_i(t)\,u_i. \qquad (20)$
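The expansion (20) amounts to solving a linear system for the coordinates ξ_i; a minimal numpy sketch with an assumed 3 × 3 A and initial state (illustrative values only):

```python
import numpy as np

# Assumed state matrix and initial state (illustrative values only).
A = np.array([[-1.0, 0.5, 0.0],
              [0.0, -2.0, 0.3],
              [0.2, 0.0, -3.0]])
lam, U = np.linalg.eig(A)             # columns of U are the eigenvectors u_i

y0 = np.array([1.0, -1.0, 2.0], dtype=complex)
xi0 = np.linalg.solve(U, y0)          # coordinates of y0 along the eigenvectors
assert np.allclose(U @ xi0, y0)       # (20) reconstructs y0

# Each coordinate evolves independently, xi_i(t) = xi_i(0) e^{lam_i t};
# compare the resulting y(t) with the power-series matrix exponential:
def expm(M, terms=40):
    out = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out += term
    return out

t = 0.7
y_t = U @ (xi0 * np.exp(lam * t))
assert np.allclose(y_t, expm(A * t) @ y0)
```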

In order to consider the time behavior of y(t), substitute (20) into (14):

$\sum_{i=1}^{n} \dot{\xi}_i(t)\,u_i = \sum_{i=1}^{n} \xi_i(t)\,\lambda_i u_i \qquad (21)$

and, invoking the linear independence of the eigenvectors,

$\dot{\xi}_i(t) = \lambda_i\,\xi_i(t) \qquad (i = 1, 2, \dots, n). \qquad (22)$

⁹ The reader is well aware that, in the case of multiple roots, solutions of the form t^k e^{λt} are possible. A discussion of these cases would lead us too far away from the main point.

Eqs. (20) and (21) have the following interpretation: at any time the state of the circuit can be thought of as being the superposition of the modes of oscillation, each


one of them excited by various amounts ξ₁(t), ξ₂(t), ..., ξₙ(t). The amount by which any particular mode is excited obeys (22), which demonstrates that there is no interaction between the modes, and that each mode evolves in time at the pace specified by the corresponding eigenvalue. This property is often referred to by saying that the modes are uncoupled or noninteracting.

V. RECIPROCAL BASIS AND SPECTRAL REPRESENTATION

Suppose we define a new set of vectors v₁, v₂, ..., vₙ by the relationships

$(v_i, u_j) = \delta_{ij} \qquad (i, j = 1, 2, \dots, n). \qquad (23)$

In particular, the vector v₁ is obtained by solving the n scalar equations

$(v_1, u_1) = 1, \quad (v_1, u_2) = 0, \quad \dots, \quad (v_1, u_n) = 0. \qquad (24)$

Each one of the vectors v_i is uniquely defined because its n components are the solution of a set of n linear equations whose determinant is nonzero. Indeed the ith row of the determinant consists of the components of u_i; hence the linear independence of the eigenvectors implies that the determinant is nonzero. The set of vectors v₁, v₂, ..., vₙ defined by (23) is said to be the reciprocal basis to u₁, u₂, ..., uₙ.

A typical use of the relation (23) is the following: take the scalar product of (21) by v_i; because of (23), it reduces to

$\dot{\xi}_i(t) = \lambda_i\,\xi_i(t). \qquad (25)$

Similarly, the scalar product of (20), evaluated at t = 0, by v_i yields the initial value of the ξ_i(t):

$\xi_i(0) = (v_i, y(0)). \qquad (26)$

This result, together with (20) and (22), gives the free oscillations of the circuit in terms of the initial conditions in the form

$y(t) = \sum_{i=1}^{n} (v_i, y(0))\,e^{\lambda_i t}\,u_i. \qquad (27)$

This last relation is the most concise expression of the mode expansion. At a glance we see that at any time y(t) is a weighted sum of the modes of oscillation e^{λ_i t}u_i, and that the amount of excitation of each mode is given by (v_i, y(0)). Formally, (27) may be compared with the standard solution [4], [7]

$y(t) = e^{At}\,y(0) \qquad (28)$

where e^{At} is the matrix defined by the expansion $\sum_{n \geq 0} (A^n/n!)\,t^n$. This suggests the identification

$e^{At} = \sum_{i=1}^{n} e^{\lambda_i t}\,u_i)(v_i \qquad (29)$

where the dyad u_i)(v_i is defined by the rule which defines the scalar product of the dyad with any vector x [4]:

$[u_i)(v_i]\,x = u_i\,(v_i, x);$

that is, the result is the column vector u_i multiplied by the scalar (v_i, x). The dyad can be thought of as a matrix of rank 1; its range is one-dimensional and lies along the vector u_i. In a similar fashion the successive relations

$Ay = A\Big(\sum_i (v_i, y)\,u_i\Big) = \sum_i (v_i, y)\,Au_i = \sum_i (v_i, y)\,\lambda_i u_i$

suggest the representation

$A = \sum_{i=1}^{n} \lambda_i\,u_i)(v_i \qquad (30)$

where the dyad u_i)(v_i has the same meaning as before. The right-hand sides of (29) and (30) give the spectral representations of e^{At} and A, respectively. A typical use of (29) is in the standard solution (28) to yield (27) directly.

We shall now have to consider the geometric interpretation of these formal manipulations. It should, however, be stressed that: a) the above formalism is quite general, the only assumptions made being that the matrix A has real coefficients and distinct eigenvalues; and b) the statements 1) and 2) of the Introduction are clearly established in this general context.

VI. GEOMETRIC INTERPRETATION

A. Real Eigenvalues

If all eigenvalues are real, the eigenvectors are real. Consequently, the v_i's will also be real vectors. All equations such as (20), (22), and (27)-(29) involve only real quantities. The ξ_i(t), the components of y(t) along each eigenvector, are monotonic, either decaying or increasing exponentially with time as required by (25). Examples 1 and 2 illustrate the meaning of the formalism.

B. Complex Eigenvalues

An example of such occurrence is given as Example 3 of Section IX. If some eigenvalues are complex, they can be grouped in complex conjugate pairs. Suppose λ₁ is complex; then λ₁* is also an eigenvalue. Let it be labeled λ₂. In Appendix I it is shown that u₂ = u₁*. Separating real and imaginary parts, let

$\lambda_1 = -\alpha_1 + j\omega_1 \qquad (31)$

$u_1 = u_1' + ju_1'', \qquad 2v_1 = v_1' + jv_1''. \qquad \text{Then} \qquad u_2 = u_1' - ju_1''. \qquad (32)$

It is shown in Appendix I that

$2v_2 = v_1' - jv_1'' \qquad (33)$

and that the contribution of the first two terms of (27) is

$e^{-\alpha_1 t}\big[(v_1', y(0))\cos\omega_1 t + (v_1'', y(0))\sin\omega_1 t\big]u_1' + e^{-\alpha_1 t}\big[(v_1'', y(0))\cos\omega_1 t - (v_1', y(0))\sin\omega_1 t\big]u_1'' \qquad (34)$

or, in its equivalent polar form,

$e^{-\alpha_1 t}\,\gamma\big[\cos(\omega_1 t - \varphi)\,u_1' - \sin(\omega_1 t - \varphi)\,u_1''\big]. \qquad (35)$

The bracketed quantity is the superposition of two sinusoidal oscillations along the directions of u₁′ and u₁″, respectively the real and imaginary parts of the eigenvector u₁. The amplitude and the phase of each harmonic oscillation depend on the initial conditions, as can be seen from (34). Furthermore, the fact that u₁′ and u₁″ are not parallel implies that the ith component of the vector expression (35), say A_i e^{-α₁t} cos(ω₁t − φ_i), has a phase φ_i that varies from component to component. This is a significant difference from the classical treatment of the lossless reciprocal circuit where, in configuration space, all the q_i have the same phase when only one mode is excited, since in that case the motion is described by (9).

Insight into the relationship between the vectors v₁ and v₂ and the eigenvectors is provided by the following property, which is established in Appendix II: if λ₁ is complex and if λ₂ = λ₁*, then the real vectors v₁′ and v₁″ are orthogonal to every vector of the subspace spanned by u₃, u₄, ..., uₙ. Furthermore, the vectors v₁′ and v₁″ constitute a reciprocal basis to u₁′ and u₁″ respectively. Using the above formalism, this is expressed by

$(v_1', u_1') = 1, \quad (v_1'', u_1') = 0, \quad (v_1', u_1'') = 0, \quad (v_1'', u_1'') = 1. \qquad (36)$

Fig. 2—Relative position and magnitude of u₁′, u₁″ with respect to v₁′, v₁″. The figure is drawn under the special assumption that u₃, u₄, ..., uₙ are orthogonal to u₁′ and u₁″.
Fig. 3—Examples of motions when only one mode is excited in the case where λ₂ = λ₁* and u₂ = u₁*.
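The reciprocal-basis relation (23) and the spectral representation (29) can be checked numerically for a complex conjugate pair; a minimal numpy sketch with an assumed 2 × 2 A (note that the rows of U⁻¹ play the role of the v_i here; with the paper's conjugating scalar product the v_i would be the conjugates of these rows):

```python
import numpy as np

# Assumed 2x2 state matrix with a complex conjugate pair of eigenvalues.
A = np.array([[0.0, 1.0],
              [-2.0, -0.6]])
lam, U = np.linalg.eig(A)
V = np.linalg.inv(U)                # row i of V plays the role of v_i

assert np.allclose(V @ U, np.eye(2))         # (23): (v_i, u_j) = delta_ij
assert np.isclose(lam[0], np.conj(lam[1]))   # conjugate pair

def expm(M, terms=40):
    out = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out += term
    return out

t = 1.3
# spectral representation: e^{At} = sum_i e^{lam_i t} u_i )( v_i
F = sum(np.exp(lam[i] * t) * np.outer(U[:, i], V[i]) for i in range(2))
assert np.allclose(F, expm(A * t))           # (29)
assert np.allclose(F.imag, 0)                # conjugate modes combine into a real e^{At}
```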

The geometric relationship between the four real vectors u₁′, u₁″, v₁′ and v₁″ is illustrated by Fig. 2, where for drafting convenience it is assumed that u₃, u₄, ..., uₙ are orthogonal to u₁ and u₂, so that u₁′, u₁″, v₁′, v₁″ lie in the same plane. Note that the normalization condition (u₁, u₁) = 1 becomes (u₁′, u₁′) + (u₁″, u₁″) = 1 in the case of complex eigenvectors; i.e., the sum of the squares of the lengths of u₁′ and u₁″ is equal to unity. The condition (v₁′, u₁″) = 0 requires that v₁′ be perpendicular to u₁″, and (v₁′, u₁′) = 1 requires that the product of the length of u₁′ by the length of the projection of v₁′ on u₁′ be equal to unity. These projections are illustrated in Fig. 2.

For example, if y(0) = u₁′, then only the first two modes are excited and (34) together with (36) yields

$y(t) = e^{-\alpha_1 t}\cos\omega_1 t\,u_1' - e^{-\alpha_1 t}\sin\omega_1 t\,u_1''. \qquad (36a)$

If y(0) = u₁″, then

$y(t) = e^{-\alpha_1 t}\sin\omega_1 t\,u_1' + e^{-\alpha_1 t}\cos\omega_1 t\,u_1''. \qquad (36b)$

These two particular motions are illustrated in Fig. 3.

In summary, it should be stressed that, in the case of a real eigenvector, the representative point in phase space moves along the eigenvector with a law of motion of the form γe^{λt}. In the case of a complex eigenvector u₁ = u₁′ + ju₁″, the motion takes place in the plane defined by u₁′ and u₁″ and has the form of an exponential spiral. [See (35) and Fig. 3.]

VII. FORCED OSCILLATIONS

Consider the circuit shown in Fig. 4. It is identical to that of Fig. 5 (see Example 1) except for the presence of the three current sources i₁(t), i₂(t) and i₃(t). The three node equations then take the form of (37) below.


Fig. 4—Linear circuit with three sources.

This indicates that in the case of forced oscillations the equations read

$\dot{y} = Ay + f \qquad (37)$

where the column vector f represents the forcing functions. In many cases all but one component of f are equal to zero. By appropriate manipulations the equations of any circuit can be brought into the form of (37); we shall take it to be the standard form for the case of forced oscillations.

Since (37) represents a linear system, the superposition theorem applies; hence the effect of a forcing function can be determined by a convolution integral once the unit impulse response is known. In the present case, the concept analogous to the unit impulse response of the ordinary transfer function approach is more involved for two reasons. First, n variables of the circuit are being observed, namely y₁, y₂, ..., yₙ, the components of y; and second, there are possibly n independent forcing functions, namely the n components φ₁, φ₂, ..., φₙ of f. It will become apparent that these two facts require the introduction of a matrix which for the system (37) plays the role of the unit impulse response. This matrix is defined as follows: consider the circuit with zero initial conditions at t = 0−, and apply successively the following n vector forcing functions:

$f_1 = \begin{bmatrix} \delta(t) \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \quad f_2 = \begin{bmatrix} 0 \\ \delta(t) \\ \vdots \\ 0 \end{bmatrix}, \quad \dots, \quad f_n = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ \delta(t) \end{bmatrix}.$

To each one of these forcing functions there corresponds, for t > 0, a vector response: y₁(t), y₂(t), ..., yₙ(t). These n vectors can be arranged into a matrix where y_i(t) is the ith column (i = 1, 2, ..., n). Thus the element of the kth row and ith column is the kth component of the vector response when the excitation is defined by f_i, namely, a unit impulse appearing in the ith scalar equation. Thus the n² elements of the matrix describe the behavior of the n components of y(t) for each of the n possible unit impulse excitations.

The matrix just defined is referred to by several names. Mathematicians often call it the principal fundamental matrix; some physicists prefer to use the analogy to Green's functions and call it the Green's dyadic corresponding to the initial conditions defined by the impulsive excitation. Probably the best terminology is transition matrix, because it describes completely the time evolution of the system [15]. We shall denote it by F(t; 0), where the first argument t indicates the time at which the vector responses y_i(t) are observed, and the second argument, 0, reminds us that the impulse has been applied at t = 0. This notation is a little more complicated than necessary for this case, but it leads automatically to the correct expressions for the time-varying case.

Consider the forcing function f(t) = [φ₁(t), φ₂(t), ..., φₙ(t)]. Consider the contribution to the vector response y(t) due to the effect of f(t) acting during the infinitesimal interval (τ, τ + dτ). That contribution is, in the limit of dτ → 0, equal to that caused by impulses

$\varphi_i(\tau)\,d\tau\,\delta(t - \tau) \qquad (38)$

applied to each of the scalar equations (37). By superposition, the response is

$\sum_{i=1}^{n} y_i(t - \tau)\,\varphi_i(\tau)\,d\tau \qquad (39a)$

because y_i(t − τ) is the vector response at time t due to a unit impulse applied to the ith equation at time τ. In matrix notation the vector response may be written as

$F(t; \tau)\,f(\tau)\,d\tau. \qquad (39b)$

Note that the second argument of F is τ, since the impulses have been applied at time τ. In order to evaluate (39a), note that the impulses (38) are applied to the system (37), assumed to be at rest just before their application. In other words, at τ−, the initial conditions of (37) are y_i(τ−) = 0 (i = 1, 2, ..., n). The impulsive functions on the right-hand side of (37) are balanced by the impulsive behavior at t = τ of the derivatives ẏ_i(t). Therefore, between τ− and τ+, y_i(t) jumps from 0 to φ_i(τ)dτ. Since after the application of the impulses the forcing function is identically zero, we are back to the initial value problem. The solution may be written down by using (27), after having observed that the initial value vector is f(τ)dτ:

$\sum_{i=1}^{n} (v_i, f(\tau))\,d\tau\,u_i\,e^{\lambda_i(t-\tau)} \qquad (t > \tau). \qquad (40)$

From (40) we deduce the expression for the transition matrix

$F(t; \tau) = \sum_{i=1}^{n} u_i)(v_i\,e^{\lambda_i(t-\tau)}$

and, by comparison with (29),

$F(t; \tau) = e^{A(t-\tau)}.$

Note that (40) and (39b) give only the contribution to y(t) due to the forcing function f(t) acting during the interval (τ, τ + dτ). If the initial conditions at t = 0 are y(0) and if the forcing function is applied from t = 0 on, then the superposition theorem gives the resulting oscillation in the form


$y(t) = \sum_{i=1}^{n} (v_i, y(0))\,e^{\lambda_i t}\,u_i + \int_0^t \sum_{i=1}^{n} (v_i, f(\tau))\,e^{\lambda_i(t-\tau)}\,u_i\,d\tau. \qquad (41)$

The contribution of the forcing function may be written as

$\sum_{i=1}^{n} \left[\int_0^t (v_i, f(\tau))\,e^{\lambda_i(t-\tau)}\,d\tau\right] u_i.$

The bracketed quantity is a scalar. It is the amount by which the ith mode of oscillation is excited by the forcing function. This leads to the important conclusion that any forcing function excites each mode independently; i.e., the amount of excitation of any particular mode does not depend on the amount of excitation of the other modes.

In order to grasp the full meaning of (41) it is useful to consider a few special cases. Suppose the eigenvector u_k is real and the vector f(t) lies along u_k, say f(t) = β(t)u_k. The sum in (40) reduces to a single term in view of (23). Thus only the kth mode is excited; more precisely, at any time t it is excited by the amount

$\int_0^t \beta(\tau)\,e^{\lambda_k(t-\tau)}\,d\tau.$

Suppose the eigenvector u_k is complex, u_k = u_k′ + ju_k″. First consider the case where f(t) = β(t)u_k′. From (36a), and assuming zero initial conditions,

$y(t) = \left[\int_0^t \beta(\tau)e^{-\alpha_k(t-\tau)}\cos\omega_k(t-\tau)\,d\tau\right]u_k' - \left[\int_0^t \beta(\tau)e^{-\alpha_k(t-\tau)}\sin\omega_k(t-\tau)\,d\tau\right]u_k''.$

If, on the other hand, f(t) = β(t)u_k″, from (36b),

$y(t) = \left[\int_0^t \beta(\tau)e^{-\alpha_k(t-\tau)}\sin\omega_k(t-\tau)\,d\tau\right]u_k' + \left[\int_0^t \beta(\tau)e^{-\alpha_k(t-\tau)}\cos\omega_k(t-\tau)\,d\tau\right]u_k''.$

In both cases the resulting oscillation is restricted to the plane defined by the real vectors u_k′ and u_k″. The difference in the two cases lies in the amplitude and phase of the resulting oscillation.

There are many well-known examples of forcing functions that do not excite particular modes. In most practical cases this results from symmetry: balanced circuits, push-pull amplifiers [8] and phantom circuits of telephone practice [9]. In those cases the forcing function f(t) maintains a certain direction in phase space such that it is at all times orthogonal to a number of the v_k's and, consequently, does not excite the corresponding modes. Thus one speaks of a symmetric excitation exciting only symmetric modes and none of the antisymmetric modes.

VIII. RESONANCE

Here we impose a sinusoidal time variation on the forcing function. In particular we set

$f(t) = \mathrm{Re}\,[g\,e^{j\omega t}]$

where ω is the angular frequency of the forcing function and g is a real vector fixed in phase space.¹⁰ Following the conventional procedure we take g e^{jωt} to be the excitation, remembering that the real part of the final result is the physically meaningful answer. With zero initial conditions we get from (41)

$y(t) = \sum_{i=1}^{n} (v_i, g)\left(\int_0^t e^{j\omega\tau}\,e^{\lambda_i(t-\tau)}\,d\tau\right)u_i$

or

$y(t) = \sum_{i=1}^{n} \frac{(v_i, g)}{j\omega - \lambda_i}\left(e^{j\omega t} - e^{\lambda_i t}\right)u_i.$

If the circuit is strictly stable, i.e., if Re λ_i < 0 for i = 1, 2, ..., n, then for large t, e^{λ_i t} → 0 and

$y(t) \to \sum_{i=1}^{n} \frac{(v_i, g)}{j\omega - \lambda_i}\,e^{j\omega t}\,u_i \qquad \text{as } t \to \infty;$

that is, as t → ∞, the sinusoidal steady state is reached. The oscillation of the ith mode has a peak amplitude of

$\frac{|(v_i, g)|}{|j\omega - \lambda_i|}$

if λ_i is real. If λ_i is complex, one has to consider both u_i and u_i*, and the oscillation of the ith mode in the (u_i′, u_i″) plane has an amplitude involving the contributions of both conjugate modes. Let λ_i = −α_i + jβ_i. Then in the high-Q case, i.e., β_i ≫ α_i, if ω ≈ β_i the above expression reduces to approximately |(v_i, g)| / |jω − λ_i|.

From these results we draw the following conclusions:

1) In the sinusoidal steady state, the amplitude of oscillation of each mode depends on the direction in phase space of the forcing function. This is often referred to by saying that the excitation must be strongly coupled to the mode. A discussion analogous to that of the previous section could be inserted here to illustrate the point.

2) In the sinusoidal steady state, the amplitude of oscillation is inversely proportional to |jω − λ_i|, which is the distance between the points jω and λ_i in the s plane. This conclusion is of course well known: in the Laplace transform approach, jω − λ_i is a factor in the denominator of the residue corresponding to the ith natural frequency.

Thus, to get the largest possible steady-state oscillation in a particular mode, one picks first the proper type of excitation, namely, that which makes the product |(v_i, g)| as large as possible, and then picks the right frequency, namely, ω = β_i.

¹⁰ We could have allowed g to have complex components, which would allow each component of f(t) to have a different phase. The resulting complication is, however, not illuminating.

IX. NUMERICAL EXAMPLES

Example 1

As a first example consider the RC circuit shown in Fig. 5. It is well known that such a circuit has natural frequencies located on the negative real axis of the s plane; thus the eigenvalues will be negative numbers [3]. Use the voltages across each capacitance as variables; the three node equations then read

    v̇1 + 1.1 v1 − v2 = 0
    v̇2 − v1 + 2 v2 − v3 = 0    (42)
    (1/2) v̇3 − v2 + 1.2 v3 = 0.

Fig. 5 - Linear circuit used in Example 1, Section IX.

Multiply the last equation by 2. The system of equations (42) now has the standard form of (14), Section III, with

    A = [ −1.1    1     0
             1   −2     1
             0    2  −2.4 ].

In order to find the eigenvectors, one must first solve the characteristic equation, namely, the third-degree algebraic equation det (A − λI) = 0. The roots are

    λ1 = −1.5878,  λ2 = −0.1126,  λ3 = −3.7996.

Insert successively λ1, λ2 and λ3 in the system of three homogeneous equations (A − λI)b = 0. This leads to three solutions that are determined only within a constant factor. This factor is arbitrarily selected so that the length of each vector solution will be unity. The resulting normalized eigenvectors are, respectively,

    u1 = (−0.6108, 0.2980, 0.7337)
    u2 = (0.6064, 0.5986, 0.5235)
    u3 = (−0.2078, 0.5609, −0.8015).

These eigenvectors are illustrated on Fig. 1. From Section IV it follows that if the initial conditions are v1(0) = −0.2078, v2(0) = 0.5609, v3(0) = −0.8015 (or any set of numbers proportional to the components of u3), the resulting behavior is

    v1(t) = −0.2078 e^{−3.7996t}
    v2(t) = 0.5609 e^{−3.7996t}
    v3(t) = −0.8015 e^{−3.7996t}.

These three waveforms are illustrated on Fig. 6.

Fig. 6 - Waveforms observed in the circuit of Fig. 5 when the third mode only is excited.

If, on the other hand, the initial conditions vector v(0) had been proportional to the eigenvector u2, then

    v1(t) = 0.6064 e^{−0.1126t}
    v2(t) = 0.5986 e^{−0.1126t}
    v3(t) = 0.5235 e^{−0.1126t}.

These waveforms are illustrated on Fig. 7.

Fig. 7 - Waveforms observed in the circuit of Fig. 5 when only the second mode is excited. Note the difference in scale between Fig. 6 and Fig. 7.
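The eigenvalues quoted above can be verified with a short script. The sketch below uses no libraries: it expands det(A − λI) by hand into the cubic λ³ + 5.5λ² + 6.64λ + 0.68 and locates its roots by bisection, then checks that the quoted u3 satisfies A u3 = λ3 u3 to the printed precision.

```python
# Numerical check of Example 1: the characteristic polynomial of
#   A = [[-1.1, 1, 0], [1, -2, 1], [0, 2, -2.4]]
# is det(A - lam*I) = -(lam^3 + 5.5*lam^2 + 6.64*lam + 0.68),
# whose roots should match the eigenvalues quoted in the text.

def char_poly(lam):
    return lam**3 + 5.5 * lam**2 + 6.64 * lam + 0.68

def bisect(f, lo, hi, tol=1e-12):
    """Find a root of f in [lo, hi] by bisection (f changes sign there)."""
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) == 0 or hi - lo < tol:
            return mid
        if (f(mid) > 0) == (flo > 0):
            lo, flo = mid, f(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Brackets located by inspecting the sign of the polynomial:
roots = sorted(bisect(char_poly, *br) for br in [(-4, -3), (-2, -1), (-1, 0)])
print(roots)   # approximately [-3.7996, -1.5878, -0.1126]

# The quoted eigenvector u3 should satisfy A u3 = lam3 u3:
A = [[-1.1, 1.0, 0.0], [1.0, -2.0, 1.0], [0.0, 2.0, -2.4]]
u3 = [-0.2078, 0.5609, -0.8015]
lam3 = -3.7996
residual = max(abs(sum(A[i][k] * u3[k] for k in range(3)) - lam3 * u3[i])
               for i in range(3))
print(residual)   # small: u3 is an eigenvector to 4-figure accuracy
```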

When the initial energy stored is distributed in the circuit proportionately to the components of u3, the rate of exponential decay of the stored energy is the maximum possible for the circuit. On the other hand, when the initial energy stored is distributed along u2, the rate of exponential decay of the stored energy is minimum.

Eq. (27) is the most convenient way to compute the response to an arbitrary set of initial conditions. It requires the determination of the vectors v1, v2 and v3 of the reciprocal basis, which are found by solving (23). In particular, the equations for the components of v2 = (v21, v22, v23) are

    −0.6108 v21 + 0.2980 v22 + 0.7337 v23 = 0
     0.6064 v21 + 0.5986 v22 + 0.5235 v23 = 1
    −0.2078 v21 + 0.5609 v22 − 0.8015 v23 = 0.

Solving the three corresponding systems yields the vectors v1, v2 and v3 of the reciprocal basis.

Example 2 [10]-[12]

Consider the symmetrical multivibrator shown on Fig. 8. The equivalent circuit used for the transistor is shown on Fig. 9. The transfer function relating the collector current ic(t) to the base current ib(t) is the conventional one, whose frequency-domain representation is

    ic/ib = β0 / (1 + jω/ωβ).

The time-domain interpretation of this transfer function is the differential equation

    ic(t) + (1/ωβ) dic(t)/dt = β0 ib(t).

Fig. 8 - Symmetrical multivibrator of Example 2. The element values include GL = 10^-3 mho, Gb = 1.1 × 10^-3 mho and β0 = 100.

Fig. 9 - Transistor equivalent circuit used in the analysis of Example 2.

The equivalent circuits used to analyze the multivibrator are shown in Fig. 10(a) and 10(b). The notation is as follows: vc1, vb1 are the collector-to-ground and base-to-ground voltages of the first transistor; ib1 and ic1 are the current into the base and the current into the collector of the first transistor; vcb1 is the voltage between the collector of transistor 1 and the base of transistor 2. Thus vcb1 is the voltage across the first coupling capacitor. An appropriate set of circuit equations is obtained as follows: the first three equations are obtained by considering Fig. 10(a), and the latter three by considering Fig. 10(b). Eqs. (43a) and (43c) are node equations, and (43b) follows from the transistor representation; for the second transistor, for instance,

    (1/ωβ) dic2/dt = −ic2 + β0 gb vb2.    (43b)

Fig. 10 - Equivalent circuits used for the analysis of the multivibrator of Example 2.

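Example 2's analysis rests on the fact that the equations of a symmetric circuit decouple in sum and difference variables. The mechanism can be sketched in general block form; the 2x2 blocks B (coupling inside one half) and C (coupling between the halves) below are illustrative placeholders, not the multivibrator's actual equations.

```python
# Sketch of the symmetry argument used in Example 2: for equations of the
# block-symmetric form
#     d/dt [x1; x2] = [[B, C], [C, B]] [x1; x2],
# the sum and difference variables  s = x1 + x2,  d = x1 - x2  decouple:
#     ds/dt = (B + C) s,   dd/dt = (B - C) d.
# Illustrative 2x2 blocks (not the multivibrator's element values).

def mat_add(X, Y, sign=1):
    return [[X[i][j] + sign * Y[i][j] for j in range(len(X))]
            for i in range(len(X))]

def mat_vec(X, v):
    return [sum(X[i][j] * v[j] for j in range(len(v))) for i in range(len(X))]

B = [[-1.0, 0.5], [0.2, -2.0]]   # coupling inside one half of the circuit
C = [[0.0, 0.3], [0.3, 0.0]]     # coupling between the two halves

x1 = [1.0, -0.5]
x2 = [0.25, 2.0]

# Full 4th-order right-hand side:
dx1 = [a + b for a, b in zip(mat_vec(B, x1), mat_vec(C, x2))]
dx2 = [a + b for a, b in zip(mat_vec(C, x1), mat_vec(B, x2))]

# Decoupled description:
s = [a + b for a, b in zip(x1, x2)]
d = [a - b for a, b in zip(x1, x2)]
ds = mat_vec(mat_add(B, C), s)       # should equal dx1 + dx2
dd = mat_vec(mat_add(B, C, -1), d)   # should equal dx1 - dx2

err = max(max(abs(ds[i] - (dx1[i] + dx2[i])) for i in range(2)),
          max(abs(dd[i] - (dx1[i] - dx2[i])) for i in range(2)))
print(err)
```

The even modes are the eigenvectors of B + C and the odd (switching) modes those of B - C, which is why the six multivibrator equations split into two independent sets of three.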
In order to take advantage of the symmetry, let us introduce new variables:

    η1 = vcb1 + vcb2    η4 = vcb1 − vcb2
    η2 = ic1 + ic2      η5 = ic1 − ic2
    η3 = vb1 + vb2      η6 = vb1 − vb2.

As expected from symmetry considerations [8], [10], [11], with these variables the six equations separate into two independent sets of three variables each. Since the main interest lies in the switching of the conduction from one transistor to the other, we consider exclusively the odd-symmetric variables η4, η5 and η6. After substituting, and noting that the third equation is algebraic, we eliminate η6 by using that last equation to express η6 in terms of η4 and η5. The result is simplified numerically by the following normalization:

a) measure time in microseconds;
b) measure η4 in millivolts;
c) measure η5 in amperes.

The result is a second-order system in η4 and η5. Its eigenvalues and eigenvectors are

    λ1 = +48.8,  u1 = (0.999, −0.0357);
    λ2 = −0.81,  u2 = (−0.883, −0.468).

As expected, the first mode is unstable; the second one is stable. The meaning of the components follows. If by external excitation the difference vcb1 − vcb2 is made equal to 0.999 mv and ic1 − ic2 to −35.7 ma, then only the unstable mode is excited, and all the energy transferred will increase exponentially in proportion to e^{2×48.8t}, where t is expressed in microseconds. It should be noted that here the components of the eigenvector u1 specify the initial distribution of stored energy required to achieve minimum rise time.

Example 3

Consider the RLC circuit shown on Fig. 11. Writing two node equations and one loop equation, one gets the circuit equations in the standard form of (14).

Fig. 11 - RLC circuit used as Example 3.

The matrix of this circuit has three eigenvalues, one real and a pair of complex conjugate ones:

    λ1 = −1.4534,  λ2 = −0.7733 + j1.468,  λ3 = λ2*.

For example, if the initial conditions are along u1 [say, v1(0) = 0.834, v2(0) = 0.503 and v3(0) = −0.228], only the exponentially damped mode is excited and the resulting voltages and currents are very similar to those of the RC circuit of Example 1. On the other hand, if the vector representing the initial conditions lies in the plane defined by u2' and u2'' [where u2' = (−0.100, 0.268, −0.4118) and u2'' = (−0.646, 0.322, 0.468)], then only the oscillatory mode is excited. Throughout the circuit the voltages and the currents are damped sinusoids of different amplitudes and phases, but they all have in common the decrement −0.7733 and the angular frequency 1.468 rad/second. For such initial conditions, although this circuit is a three-reactance circuit, the voltages and currents have the simple waveforms of the standard RLC resonant circuit.

X. CONCLUSION
The validity of the mode concept has been established by demonstrating that it applies to any linear time-invariant circuit, be it active or passive, reciprocal or not. The four statements exhibited in the Introduction have been demonstrated in detail. An obvious point that should be stressed is that although this paper has been written using linear circuits as a vehicle, the mode concept is applicable to any linear system, such as a linear servomechanism, for example. It is clear that as a tool for analysis the mode concept is unlikely to displace the transfer function. Its importance lies in the insight it provides into the behavior of linear systems. By insight we mean that it is a convenient way to visualize what a linear system can do and what it cannot do. It is a tool for diagnosis and correction in that it shows why a particular circuit behaves in a particular way and what feature of said circuit should be modified in order to obtain a more desirable performance. In addition to the above-mentioned insight, the mode concept gives a representation for any linear system. This representation is useful because it is simple and has an obvious physical meaning. This last point is very important because the engineer thinks intuitively. Therefore a convenient analytical tool that very closely resembles his


intuition is extremely useful. Example 2 above bears this point out. It is also the most convenient way of expressing analytically the concept of controllability recently developed by Kalman in his discussion of multiple-input servomechanisms [15]. It has also been useful for solving an optimum control problem [16]. Finally, it should be mentioned that Lur'e's canonical representation of a class of nonlinear servomechanisms is based on this concept [17].
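The modal representation praised above, free response as y(t) = Σi (vi, y(0)) e^{λit} ui, can be sketched numerically and checked against the matrix exponential computed by its Taylor series. The 2x2 matrix below is an illustrative choice, not one of the paper's circuits.

```python
# Sketch of the modal representation of the free response,
#   y(t) = sum_i (v_i, y(0)) * e^{lam_i t} * u_i,
# compared with y(t) = e^{At} y(0), the exponential computed by its
# Taylor series.  Illustrative 2x2 matrix (eigenvalues -1 and -2).

import math

A = [[0.0, 1.0], [-2.0, -3.0]]
lams = [-1.0, -2.0]
us = [[1.0, -1.0], [1.0, -2.0]]      # eigenvectors
vs = [[2.0, 1.0], [-1.0, -1.0]]      # reciprocal basis (rows of U^-1)
y0 = [1.0, 0.0]
t = 0.7

# Modal sum:
y_modal = [0.0, 0.0]
for lam, u, v in zip(lams, us, vs):
    c = (v[0] * y0[0] + v[1] * y0[1]) * math.exp(lam * t)
    y_modal = [y_modal[k] + c * u[k] for k in range(2)]

# Taylor series for e^{At} y0:  sum_m (A t)^m y0 / m!
y_series = y0[:]
term = y0[:]
for m in range(1, 40):
    term = [t / m * sum(A[i][k] * term[k] for k in range(2)) for i in range(2)]
    y_series = [y_series[i] + term[i] for i in range(2)]

err = max(abs(y_modal[i] - y_series[i]) for i in range(2))
print(err)
```

The modal form makes the decay rates and the mixing coefficients (vi, y(0)) explicit, which is exactly the diagnostic value the conclusion claims for the mode concept.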
APPENDIX I

The purposes of this Appendix are first to establish that if λ1 is complex and λ2 = λ1*, then u2 = u1* and v2 = v1*; and second to derive (34) and (35) of the text.

1) Refer back to the definition and normalization condition of the eigenvectors (19); in particular,

    Au1 = λ1 u1      (44a)
    (u1, u1) = 1     (44b)
    Au2 = λ2 u2      (44c)
    (u2, u2) = 1.    (44d)

The matrix A has all its coefficients real; hence when (44c) is written in the homogeneous form

    (A − λ2 I) u2 = 0,    (45)

it is identical to that which would be obtained from (44a), except for the replacement of λ1 by λ2 = λ1*. Hence the coefficients of the two systems are the complex conjugates of one another; thus u1* is a solution of (45). In view of (44b), it also fulfills the normalization condition (44d).

2) Consider now v1 and v2. The vector v1 is the solution of (24). Take the complex conjugate of (24), taking into account that u1* = u2 and u2* = u1; then

    (v1*, u2) = 1
    (v1*, u1) = 0                            (46)
    (v1*, uk) = 0    (k = 3, 4, ..., n).

If the eigenvectors u3, u4, ..., un are real, uk* = uk for k ≥ 3, and the last (n − 2) equations in (46) are identical to those defining v2 [see (23)]. The first two are identical too, except for the order in which they appear. If some or all of the eigenvectors u3, u4, ..., un are complex, they can be grouped in complex conjugate pairs. Let u3 and u4 form such a pair; then the fact that they appear with the superscript * in (46) amounts to a reordering of the equations, since u3* = u4 and u4* = u3. Thus in all cases the system (46) is identical to that which defines v2, and v1* = v2.

3) Let the first two terms of (27) be denoted by z(t); that is,

    z(t) = (v1, y(0)) e^{λ1t} u1 + (v2, y(0)) e^{λ2t} u2.    (47)

Since y(0) is real, and since λ2, u2 and v2 are respectively the complex conjugates of λ1, u1 and v1, the second term of (47) is the complex conjugate of the first; thus

    z(t) = 2 Re [(v1, y(0)) e^{λ1t} u1].    (48)

Inserting the definitions (31), namely

    λ1 = −α1 + jω1
    u1 = u1' + ju1''
    2v1 = v1' + jv1'',

into (48), we get

    z(t) = e^{−α1t} [(v1', y(0))(u1' cos ω1t − u1'' sin ω1t) + (v1'', y(0))(u1'' cos ω1t + u1' sin ω1t)]

or

    z(t) = e^{−α1t} [(v1', y(0)) cos ω1t + (v1'', y(0)) sin ω1t] u1'
         + e^{−α1t} [(v1'', y(0)) cos ω1t − (v1', y(0)) sin ω1t] u1''.    (49)

The bracketed terms are scalars, so that at any time t, z(t) is a linear combination of the real vectors u1' and u1''. Note that all the quantities appearing in (49) are real quantities.

APPENDIX II

The following is to be established: if λ1 is complex and λ2 = λ1*, then the real vectors v1' and v1'' are orthogonal to every vector of the subspace spanned by u3, u4, ..., un; furthermore, v1' and v1'' constitute a reciprocal basis to u1' and u1'' in the sense that

    (v1', u1') = 1    (v1', u1'') = 0
    (v1'', u1') = 0   (v1'', u1'') = 1.

Case 1

Suppose that for k ≥ 3, uk is real. From (24) we have

    (v1, uk) = 0    (k = 3, 4, ..., n).

Each of these scalar products is a complex number. Substitute for v1 its expression in terms of its real and imaginary parts; that is,

    2(v1, uk) = (v1' + jv1'', uk) = (v1', uk) − j(v1'', uk) = 0.

Equate the real and imaginary parts to zero:

    (v1', uk) = 0,  (v1'', uk) = 0    (k = 3, 4, ..., n).

Hence the real vectors v1' and v1'' are orthogonal to each of the uk (k = 3, 4, ..., n); in other words, they are orthogonal to every vector of the subspace spanned by u3, u4, ..., un.

From (24),

    (v1, u1) = 1
    (v1, u2) = 0.    (50)

Substituting the expressions of v1, u1 and u2 in terms of their real and imaginary parts,

    (v1' + jv1'', u1' + ju1'') = 2
    (v1' + jv1'', u1' − ju1'') = 0;    (51)

hence

    (v1', u1') + (v1'', u1'') = 2      (v1', u1'') − (v1'', u1') = 0
    (v1', u1') − (v1'', u1'') = 0      (v1', u1'') + (v1'', u1') = 0.    (52)

Adding and subtracting,

    (v1', u1') = 1    (v1', u1'') = 0
    (v1'', u1') = 0   (v1'', u1'') = 1.    (53)

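The relations (53) can be checked numerically for a small system with one complex conjugate pair. The companion-form matrix below is an illustrative choice (eigenvalues -1 +/- j), not taken from the paper; the scalar product is the conjugate-linear one used in the text, so vi = conj(row i of U^-1).

```python
# Numerical illustration of eqs. (50)-(53): for a real 2x2 matrix with a
# complex conjugate pair of eigenvalues, the real vectors v1', v1''
# (with 2 v1 = v1' + j v1'') form a reciprocal basis to the real vectors
# u1', u1'' (with u1 = u1' + j u1'').  Illustrative matrix, eigenvalues
# -1 +/- j, not one of the paper's circuits.

A = [[0.0, 1.0], [-2.0, -2.0]]
lam1 = complex(-1.0, 1.0)
lam2 = lam1.conjugate()

# Eigenvectors of the companion-form matrix: u = (1, lam).
u1 = [1.0 + 0j, lam1]
u2 = [1.0 + 0j, lam2]

# Rows r_i of U^-1 (U = [[1, 1], [lam1, lam2]]) satisfy r_i . u_j =
# delta_ij; with the conjugate-linear scalar product (v, u) = sum
# conj(v_k) u_k used in the text, v_i = conj(r_i).
detU = lam2 - lam1
r1 = [lam2 / detU, -1.0 / detU]
v1 = [z.conjugate() for z in r1]

u1p = [z.real for z in u1]            # u1'
u1pp = [z.imag for z in u1]           # u1''
v1p = [2 * z.real for z in v1]        # v1'   (from 2 v1 = v1' + j v1'')
v1pp = [2 * z.imag for z in v1]       # v1''

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

gram = [[dot(v1p, u1p), dot(v1p, u1pp)],
        [dot(v1pp, u1p), dot(v1pp, u1pp)]]
print(gram)   # should be the 2x2 identity, as asserted by (53)
```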

Case 2

One or more pairs of the uk's are complex for k = 3, 4, ..., n. Suppose um and um+1 form such a pair, so that um+1 = um*. From (24),

    (v1, um) = 0
    (v1, um+1) = 0.

Break up these vectors into their real and imaginary parts; then a manipulation almost identical to that which led to (53) gives

    (v1', um') = 0    (v1', um'') = 0
    (v1'', um') = 0   (v1'', um'') = 0.

Thus the real vectors v1' and v1'' are orthogonal to the real vectors um' and um''; hence they are orthogonal to every vector of the subspace spanned by u3, u4, ..., un. It is also obvious that (53) still holds in this case.

BIBLIOGRAPHY

[1] J. W. Strutt, Lord Rayleigh, "The Theory of Sound," The Macmillan Co., New York, N. Y., vol. 1; 1894.
[2] H. W. Bode, "Network Analysis and Feedback Amplifier Design," D. Van Nostrand Co., Inc., Princeton, N. J.; 1945.
[3] E. A. Guillemin, "Synthesis of Passive Networks," John Wiley and Sons, Inc., New York, N. Y.; 1957. Also, "Communication Networks," John Wiley and Sons, Inc., New York, N. Y., vol. 2; 1935.
[4] B. Friedman, "Principles and Techniques of Applied Mathematics," John Wiley and Sons, Inc., New York, N. Y.; 1956.
[5] H. Goldstein, "Classical Mechanics," Addison-Wesley Publishing Co., Reading, Mass.; 1950.
[6] E. A. Guillemin, "Introductory Circuit Theory," John Wiley and Sons, Inc., New York, N. Y.; 1953.
[7] W. Kaplan, "Ordinary Differential Equations," Addison-Wesley Publishing Co., Reading, Mass.; 1958.
[8] H. J. Zimmermann and S. J. Mason, "Electronic Circuit Theory," John Wiley and Sons, Inc., New York, N. Y.; 1959.
[9] K. S. Johnson, "Transmission Circuits for Telephonic Communication," D. Van Nostrand Co., Inc., New York, N. Y.
[10] D. O. Pederson, "Regeneration analysis of junction transistor multivibrators," IRE TRANS. ON CIRCUIT THEORY, vol. CT-2, pp. 171-178; June, 1955.
[11] J. G. Linvill, "Nonsaturating pulse circuits using two junction transistors," PROC. IRE, vol. 43, pp. 826-834; July, 1955.
[12] J. J. Suran and F. A. Reibert, "The terminal analysis and synthesis of junction transistor multivibrators," IRE TRANS. ON CIRCUIT THEORY, vol. CT-3, pp. 26-37; March, 1956.
[13] A. Bers, "The degrees of freedom in RLC networks," IRE TRANS. ON CIRCUIT THEORY, vol. CT-6, pp. 91-94; March, 1959.
[14] A. Papoulis, "Displacement of the zeros of the impedance Z(p) due to incremental variations in the network elements," PROC. IRE, vol. 43, pp. 79-81; January, 1955.
[15] R. E. Kalman and J. E. Bertram, "Control system analysis and design via the 'Second Method' of Lyapunov. I. Continuous systems," J. Basic Engrg., ASME, pp. 371-393; June, 1960.
[16] C. A. Desoer, "The bang bang servo problem treated by variational techniques," Information and Control, vol. 2; December, 1959.
[17] "Some Nonlinear Problems in the Theory of Automatic Control," Her Majesty's Stationery Office, London, Eng.; 1957. (Translation.)
[18] T. R. Bashkow, "The A matrix, new network description," IRE TRANS. ON CIRCUIT THEORY, vol. CT-4, pp. 117-120; September, 1957.
[19] C. A. Desoer, "Steady-state transmission through a network containing a single time-varying element," IRE TRANS. ON CIRCUIT THEORY, vol. CT-6, pp. 244-252; September, 1959.