Non-Linear Elastic Deformations
Language: English
Release date: April 26, 2013
ISBN: 9780486318714

    Non-Linear Elastic Deformations - R. W. Ogden

    1983

    Preface

    This book is concerned with the mathematical theory of non-linear elasticity, the application of this theory to the solution of boundary-value problems (including discussion of bifurcation and stability) and the analysis of the mechanical properties of solid materials capable of large elastic deformations. The setting is purely isothermal and no reference is made to thermodynamics. For the most part attention is restricted to the quasi-static theory, but some brief relevant discussion of time-dependent problems is included.

    Apart from much basic material the book includes many previously unpublished results and also provides new approaches to some problems whose solutions are known. In part the book can be regarded as a research monograph but, at the same time, parts of it should also be suitable as a postgraduate text. Problems designed to develop further the text material are given throughout and some of these contain statements of new results.

    Because so much of the theory depends on the use of tensors, Chapter 1 concentrates on the development of much of the tensor algebra and analysis which is used in subsequent chapters. However, there are parts of the book (in particular, Sections 4.4 and 7.2) which do not rely on a knowledge of tensors and can be read accordingly. Chapter 2 provides a detailed development of the basic kinematics of deformation and motion. Chapter 3 deals with the balance laws for a general continuum and the concept of stress. Prominence is given to the nominal stress tensor and the notion of conjugate stress and strain tensors is examined in detail.

    In Chapter 4 the properties of the constitutive laws of both Cauchy- and Green-elastic materials are studied and, in particular, the implications of objectivity and material symmetry are assessed. Considerable attention is devoted to isotropic constitutive laws for both (internally) constrained and unconstrained materials. The basic boundary-value problems of non-linear elasticity are formulated in Chapter 5 and the governing equations are solved for a selection of problems in respect of unconstrained and incompressible isotropic materials. A section dealing with variational aspects of boundary-value problems is included along with a short discussion of conservation laws.

    Chapter 6, the longest chapter, is concerned with incremental deformations superposed on an underlying finite deformation. The resulting (linearized) boundary-value problem is formulated and its structure discussed in relation to the analysis of uniqueness, stability and bifurcation. The role of the strong ellipticity inequality is examined. Constitutive inequalities are discussed and the implications of their failure in relation to bifurcation (or branching) is assessed from the local (i.e. incremental) viewpoint. Global aspects of non-uniqueness are also considered. The incremental theory is then applied to some representative problems whose bifurcation behaviour is studied in detail.

    In the final chapter, Chapter 7, the theory of elasticity is applied to certain deformations and geometries associated with simple experimental tests, in particular the pure homogeneous biaxial deformation of a rectangular sheet. The relevant theory is provided in a concise form as a background for comparison with experimental results, isotropic materials being considered for simplicity of illustration. This is then used to assess the elastic response of certain rubberlike materials. The incremental theory governing the change in deformation due to a small change in material properties is developed and applied to the case of a slightly compressible material and this in turn is illustrated by means of rubberlike materials.

    The book concentrates on ‘exact’ theories in the sense that no discussion of ‘special’ theories, such as shell, rod or membrane theories, or of numerical methods is included. (Excellent separate accounts of these topics are available elsewhere.) Within this framework a broad spectrum of topics has been covered and a balanced overview attempted (although this is, not surprisingly, influenced by the areas of the subject on which the writer has been actively engaged). Attention is confined to twice-continuously differentiate deformations on the whole, with discontinuities being touched on only briefly, in Chapter 6, in relation to failure of ellipticity.

    References to standard works for background reading are given throughout the text but historical attributions and detailed lists of references to papers are not provided. Only where further development of the textual material might be required are references to the more recent papers cited, but the list of references is not intended to be exhaustive. References are indicated by the author’s name followed by the year of publication in the text and gathered together at the end of each chapter.

    CHAPTER 1

    Tensor Theory

    The use of vector and tensor analysis is of fundamental importance in the development of the theory which describes the deformation and motion of continuous media. In non-linear elasticity theory, in particular, little progress can be made or insight gained without the use of tensor formulations. This first chapter is therefore devoted to an account of the vector and tensor algebra and analysis which underlies the requirements of subsequent chapters. Some theorems of tensor algebra, however, are not dealt with here but postponed until the later chapters in which they are needed.

    It is assumed that the reader is familiar with elementary vector and matrix algebra, including determinants, with the concept of a vector space, including linear independence and the notion of a basis, and with linear mappings. Also some familiarity with the index (or suffix) notation and the summation convention is assumed. Nevertheless, certain basic ideas are summarized in the early part of this chapter, primarily to establish notations but also for convenience of reference.

    1.1 EUCLIDEAN VECTOR SPACE

    A (real) vector space V is a set of elements (called vectors†) such that (a) u + v ∈ V, u + v = v + u, u + (v + w) = (u + v) + w for all u, v, w ∈ V; (b) V contains the zero vector 0 such that u + 0 = u for all u ∈ V, and for every u ∈ V there is an inverse element, denoted − u, such that u + (− u) = 0; (c) αu ∈ V, 1u = u, α(βu) = (αβ)u, (α + β)u = αu + βu, α(u + v) = αu + αv for all scalars α, β, where 1 denotes unity.

    A Euclidean vector space E is a real vector space such that, for any pair of vectors u, v ∈ E, there is defined a scalar, denoted u·v, with the properties

    u·v = v·u,    (1.1.1)

    u·u ≥ 0,    (1.1.2)

    equality in (1.1.2) holding if and only if u = 0. The scalar product (or ‘dot’ product) u·v of u and v is bilinear (that is, linear in each element of the product).† Thus

    (αu + βv)·w = α(u·w) + β(v·w)

    for all scalars α, β and all u, v, w ∈ E, and dually by (1.1.1).

    The magnitude (or modulus) of u is denoted by |u| and defined as the positive square root of u·u, that is |u| = (u·u)^(1/2).

    If |u| = 1 then u is said to be a unit vector.

    If u·v = 0 then u and v are said to be orthogonal.

    The above discussion applies to a vector space of arbitrary finite dimension. With a view to the application in subsequent chapters to continuous bodies occupying three-dimensional physical space we confine the remaining development in this chapter to an underlying three-dimensional Euclidean space. (Generalization to n dimensions, however, is for the most part a straightforward matter.)

    In three dimensions we denote the vector product of u and v by u ∧ v. It is a vector with the properties

    It follows immediately from (1.1.5) that

    If u and v are unit vectors, (1.1.6) can be written

    |u ∧ v|² + (u·v)² = 1

    and this, together with (1.1.9), leads naturally to the following geometrical interpretations of the scalar and vector products. We write

    u·v = |u||v| cos θ,    (1.1.10)

    u ∧ v = (|u||v| sin θ)k,    (1.1.11)

    where (1.1.10) defines the angle θ between the directions of u and v, and k is a unit vector in the direction of u ∧ v (0 ≤ θ ≤ π).
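As a numerical illustration (not from the book), the geometric interpretations u·v = |u||v| cos θ and |u ∧ v| = |u||v| sin θ, together with the unit-vector relation |u ∧ v|² + (u·v)² = 1, can be checked for sample 3-vectors using plain Python lists:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    # components of the vector product u ^ v
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def norm(u):
    return math.sqrt(dot(u, u))

u = [1.0, 0.0, 0.0]
v = [1.0, 1.0, 0.0]
theta = math.acos(dot(u, v) / (norm(u) * norm(v)))      # angle between u and v
assert math.isclose(theta, math.pi / 4)
assert math.isclose(norm(cross(u, v)), norm(u) * norm(v) * math.sin(theta))

# for unit vectors, |u ^ v|^2 + (u.v)^2 = 1
uhat = [x / norm(u) for x in u]
vhat = [x / norm(v) for x in v]
assert math.isclose(norm(cross(uhat, vhat))**2 + dot(uhat, vhat)**2, 1.0)
```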

    So far the discussion has been in invariant (or absolute) notation, that is no reference to a ‘basis’, ‘axes’ or ‘components’ is made, implied or required. It turns out that much of the theory to be developed is more concise and transparent in such notation than in terms of corresponding component notations. However, there are circumstances in which use of the component forms of vectors (and tensors) is helpful. In particular, practical ideas can often be fixed more readily by reference to components (and associated basis vectors) and algebraic manipulations can be made more convincing for the beginner. The component representation of vectors is therefore examined in the following two sub-sections.

    1.1.1 Orthonormal bases and components

    A basis is a set of three linearly independent vectors. An orthonormal basis is a set of three vectors, here denoted e1, e2, e3 and collectively by {ei}, such that

    ei·ej = δij = 1 if i = j, 0 if i ≠ j,    (1.1.12)

    for any pair of indices i, j. (Italic letters i, j, ..., p, q, ... are used for indices running over the values 1, 2, 3.) The Kronecker delta symbol, δij, is defined by the right-hand equality in (1.1.12).

    With reference to the basis {ei} a vector u is decomposed as

    u = u1e1 + u2e2 + u3e3,    (1.1.13)

    where u1, u2, u3 are called the components of u relative to the given basis.

    The summation convention allows (1.1.13) to be written in the compact form

    u = ujej,    (1.1.14)

    in which summation over an index (in this case j) from 1 to 3 is implied by its repetition. This convention is followed throughout the book without further comment except where an explicit statement is made to the contrary.

    If the dot product of equation (1.1.14) with ei is taken, use of (1.1.12) leads to

    ui = u.ei.

    Thus, for an arbitrary choice of orthonormal basis {ei} the component ui of a vector u is defined as the scalar product of u with the basis vector ei.
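A quick numerical check (not from the book): the components ui = u·ei taken with respect to a rotated orthonormal basis reconstruct u through u = ujej. Plain Python lists stand in for vectors:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# an orthonormal basis {e_i} rotated by pi/6 about the third axis
c, s = math.cos(math.pi / 6), math.sin(math.pi / 6)
e = [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]

u = [3.0, -1.0, 2.0]
comps = [dot(u, ei) for ei in e]                      # u_i = u . e_i
# u = u_j e_j reassembles the original vector
rebuilt = [sum(comps[j] * e[j][k] for j in range(3)) for k in range(3)]
assert all(math.isclose(a, b) for a, b in zip(rebuilt, u))
```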

    It is left as an exercise for the reader to show that u·v = uivi.

    It is assumed here that the orthonormal basis {ei} forms a right-handed triad of unit vectors, that is

    e1 ∧ e2 = e3,  e2 ∧ e3 = e1,  e3 ∧ e1 = e2.

    In summation notation these are put jointly as

    ei ∧ ej = εijkek,    (1.1.15)

    where εijk, which is called the alternating symbol, is defined by

    εijk = 1 if (i, j, k) is an even permutation of (1, 2, 3), − 1 if (i, j, k) is an odd permutation of (1, 2, 3), 0 if any two of i, j, k are equal.    (1.1.16)

    We note, in particular, the cyclic properties

    εijk = εjki = εkij    (1.1.17)

    and the antisymmetry

    εijk = − εjik    (1.1.18)

    (on any pair of indices) which follow immediately from the definition (1.1.16).

    In this notation the vector product u ∧ v becomes

    u ∧ v = uivjei ∧ ej = εijkuivjek,    (1.1.19)

    and the three components of this are

    (u ∧ v)k = εijkuivj.

    The triple scalar product (u ∧ v)·w is written

    (u ∧ v)·w = εijkuivjwk    (1.1.20)

    on use of (1.1.12) and (1.1.19). This leads to the convenient determinantal representation

    (u ∧ v)·w = | u1 u2 u3 |
                | v1 v2 v3 |
                | w1 w2 w3 |,    (1.1.21)

    from which the properties

    (u ∧ v)·w = (v ∧ w)·u = (w ∧ u)·v

    follow immediately together with their anticyclic counterparts. These properties can, of course, be established without reference to the basis {ei}; see, for example, Chadwick (1976, p. 13).† A further result which can be seen immediately from (1.1.21) is that u, v, w are linearly dependent if and only if (u ∧ v)·w = 0.
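The determinantal representation of the triple scalar product, and the linear-dependence criterion, can be spot-checked numerically (a sketch, not from the book):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def det3(A):
    # cofactor expansion of a 3x3 determinant along the first row
    return (A[0][0]*(A[1][1]*A[2][2] - A[1][2]*A[2][1])
          - A[0][1]*(A[1][0]*A[2][2] - A[1][2]*A[2][0])
          + A[0][2]*(A[1][0]*A[2][1] - A[1][1]*A[2][0]))

u, v, w = [1.0, 2.0, 3.0], [0.0, -1.0, 2.0], [4.0, 0.0, 1.0]
# (u ^ v).w equals the determinant with rows u, v, w
assert math.isclose(dot(cross(u, v), w), det3([u, v, w]))

# linearly dependent vectors give a vanishing triple product
assert math.isclose(dot(cross(u, v), [a + b for a, b in zip(u, v)]), 0.0)
```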

    Problem 1.1.1 Let det A denote the determinant of the 3 × 3 matrix A which has elements Aij. Use (1.1.20) and (1.1.21) to show that

    det A = εijkAi1Aj2Ak3.    (1.1.22)

    Deduce that

    εpqr det A = εijkAipAjqAkr,    (1.1.23)

    and hence show that

    det A = (1/3!)εijkεpqrAipAjqAkr.    (1.1.24)

    Problem 1.1.2 If A and B denote two 3 × 3 matrices, use (1.1.22) and (1.1.23) to show that

    det (AB) = det A det B.    (1.1.25)

    (The reader is reminded that (AB)ij = AipBpj.)

    From the definitions of δij and εijk it is easily seen that

    εijk = | δi1 δi2 δi3 |
           | δj1 δj2 δj3 |
           | δk1 δk2 δk3 |.

    Use of this with (1.1.25) leads to the representation

    εijkεpqr = | δip δiq δir |
               | δjp δjq δjr |
               | δkp δkq δkr |,    (1.1.26)

    and on setting r = k and summing over k from 1 to 3 in (1.1.26) we obtain the useful identity

    εijkεpqk = δipδjq − δiqδjp.    (1.1.27)
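The epsilon–delta identity εijkεpqk = δipδjq − δiqδjp can be verified exhaustively by brute force (a sketch, not from the book; indices run 0–2 rather than 1–3):

```python
def eps(i, j, k):
    # alternating symbol for 0-based indices
    return (j - i) * (k - i) * (k - j) // 2 if {i, j, k} == {0, 1, 2} else 0

def delta(i, j):
    return 1 if i == j else 0

# check the contracted identity for all free index values
for i in range(3):
    for j in range(3):
        for p in range(3):
            for q in range(3):
                lhs = sum(eps(i, j, k) * eps(p, q, k) for k in range(3))
                rhs = delta(i, p) * delta(j, q) - delta(i, q) * delta(j, p)
                assert lhs == rhs
```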

    With reference to the basis {ei} the triple vector product u ∧ (v ∧ w) is expanded as

    u ∧ (v ∧ w) = εijkui(v ∧ w)jek = εijkεpqjuivpwqek

    by use of (1.1.15), (1.1.17) and (1.1.19). Application of (1.1.27) reduces this to

    (u·w)vkek − (u·v)wkek,

    and the identity

    u ∧ (v ∧ w) = (u·w)v − (u·v)w

    follows.
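The triple vector product identity u ∧ (v ∧ w) = (u·w)v − (u·v)w is easy to confirm numerically for arbitrary sample vectors (a sketch, not from the book):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

u, v, w = [1.0, 2.0, 3.0], [-2.0, 0.0, 1.0], [0.5, -1.0, 4.0]
lhs = cross(u, cross(v, w))                       # u ^ (v ^ w)
rhs = [dot(u, w) * vi - dot(u, v) * wi for vi, wi in zip(v, w)]
assert all(math.isclose(a, b) for a, b in zip(lhs, rhs))
```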

    1.1.2 Change of basis

    We now consider a second (right-handed) orthonormal basis {e′i} oriented with respect to {ei} as depicted in Fig. 1.1.

    Since {ei} is a basis, each of e′1, e′2, e′3 is expressible as a linear combination of e1, e2, e3. We therefore write

    e′i = Qijej    (1.1.28)

    and, on taking the dot product of (1.1.28) with ej, it is seen that the coefficients Qij are given by

    Qij = e′i·ej.    (1.1.29)

    Fig. 1.1 Orientation of the basis vectors e′i relative to ei.

    The definition (1.1.10) with (1.1.29) shows that the Qij’s are the direction cosines of the vectors e’i relative to the ej, as indicated in Fig. 1.1.

    By orthonormality and (1.1.28), we have

    δij = e′i·e′j = QipQjqep·eq = QipQjp.    (1.1.30)

    It is convenient to represent the collection of coefficients Qij as a matrix Q, with transpose QT. Then (1.1.30) shows that QT is the inverse matrix of Q and so

    QQT = QTQ = I,    (1.1.31)

    where I is the identity matrix, or, in component notation,

    QipQjp = QpiQpj = δij.    (1.1.32)

    A matrix Q satisfying (1.1.31) is said to be an orthogonal matrix.

    Premultiplication of (1.1.28) by Qij and use of (1.1.32) leads to the dual connections

    e′i = Qijej,  ej = Qije′i    (1.1.33)

    between the basis vectors.

    From (1.1.22), (1.1.25) and (1.1.31) we obtain

    (det Q)² = det Q det QT = det (QQT) = det I = 1,

    and hence

    det Q = ± 1.

    Here we have restricted attention to the situation in which det Q = + 1, this corresponding to maintenance of right-handedness of the basis vectors. In this case Q is said to be proper orthogonal (it may be interpreted as a rotation which takes {ei} into {e′i}; see Section 1.3.5). For a change of basis in which right-handedness is not preserved, on the other hand, det Q = − 1 and Q is improper orthogonal.

    Let vi, v′i be the components of a vector v with respect to the bases {ei}, {e′i} respectively. Then, by use of (1.1.33),

    v′ke′k = vjej = vjQkje′k.

    Taking the dot product of this with e′i and applying (1.1.32), we obtain

    v′i = Qijvj,  vi = Qjiv′j,    (1.1.35)

    which show that the components of v transform under change of (orthonormal) basis according to the same rule (1.1.33) applicable to the basis vectors themselves.

    As a specific example we consider a change of basis for which e′3 = e3 and

    e′1 = cos θ e1 + sin θ e2,  e′2 = − sin θ e1 + cos θ e2,

    corresponding to a positive (i.e. anticlockwise) rotation through an angle θ about e3. Then

    Q = |  cos θ   sin θ   0 |
        | − sin θ  cos θ   0 |    (1.1.37)
        |    0       0     1 |

    and it is easily confirmed that (1.1.32) are satisfied and det Q = 1.
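The claimed properties of a rotation about e3, namely QQT = I and det Q = 1, can be confirmed numerically (a sketch, not from the book, for an arbitrary angle):

```python
import math

th = 0.7  # an arbitrary rotation angle about e3
Q = [[math.cos(th),  math.sin(th), 0.0],
     [-math.sin(th), math.cos(th), 0.0],
     [0.0,           0.0,          1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

QT = [[Q[j][i] for j in range(3)] for i in range(3)]  # transpose of Q
prod = matmul(Q, QT)
for i in range(3):
    for j in range(3):
        # Q Q^T should be the identity matrix
        assert math.isclose(prod[i][j], 1.0 if i == j else 0.0, abs_tol=1e-12)

det = (Q[0][0]*(Q[1][1]*Q[2][2] - Q[1][2]*Q[2][1])
     - Q[0][1]*(Q[1][0]*Q[2][2] - Q[1][2]*Q[2][0])
     + Q[0][2]*(Q[1][0]*Q[2][1] - Q[1][1]*Q[2][0]))
assert math.isclose(det, 1.0)   # proper orthogonal
```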

    Problem 1.1.3 Write down (a) the Q corresponding to a rotation θ about e2, (b) the Q corresponding to a rotation ϕ about e1, (c) the Q corresponding to (b) followed by (a).

    Problem 1.1.4 Show that

    is an improper orthogonal matrix that represents a change of basis equivalent to a reflection in the plane through e3 inclined at a positive angle θ to e1.

    1.1.3 Euclidean point space: Cartesian coordinates

    The mechanical behaviour of continuous media is most conveniently described in terms of scalars, vectors and tensors which in general vary from point to point in the material, and may therefore be regarded as functions of position in the physical space occupied by the material. In order to express this formally in mathematical terms the notion of a Euclidean point space is required.

    Let ℰ be a set of elements which we refer to as points. If, for each pair (x, y) of points x, y ∈ ℰ, there exists a vector,† denoted v(x, y), such that

    (a) v(x, y) = v(x, z) + v(z, y)

    for all x, y, z ∈ ℰ, and

    (b) v(x, y) = v(x, z) implies y = z

    for each x ∈ ℰ, then ℰ is said to be a Euclidean point space (it is not a vector space).

    From (a) it is easily shown that

    v(x, x) = 0

    and hence that

    v(x, y) = − v(y, x).    (1.1.38)

    For what follows it is convenient to adopt the notation x(y) for v(x, y). Then, if a fixed (but arbitrary) point o is chosen for reference, x(o) is called the position vector of the point x relative to o, and o is referred to as the origin.

    By (1.1.38) and (1.1.41), we obtain

    x(y) = x(o) − y(o),

    and this is independent of the choice of o. It is therefore convenient to write this in the conventional form x − y and to use the abbreviated notation x in place of x(o).

    The distance d(x, y) between two points x, y is defined according to

    d(x, y) = |x − y|.    (1.1.42)

    It is straightforward to establish that the mapping d is a metric, that is

    (a) d(x, y) = d(y, x),
    (b) d(x, y) ≤ d(x, z) + d(z, y),
    (c) d(x, y) ≥ 0, with equality if and only if x = y,

    for all points x, y, z. (a) follows from the definition (1.1.42), (b) by use of the inequality (x − z)·(z − y) ≤ d(x, z)d(z, y), which can be obtained from (1.1.10) with (1.1.42), and (c) follows from (a) and (b) by putting x = y and using (1.1.42).

    Since the point space is endowed with a metric it is a metric space.

    The angle θ between the lines joining o to x and o to y is defined by means of the scalar product. Thus, by (1.1.10),

    cos θ = x·y/(|x||y|)

    for an arbitrary choice of origin o.

    With an origin o chosen, an arbitrary point x corresponds to a unique position vector x. Let {ei} be an orthonormal basis. Then the components xi of x are given by xi = x·ei, so that each basis vector determines a mapping, ei say, such that ei(x) = x·ei (i = 1, 2, 3) for every point x. The origin o, together with the collection of mappings ei, is denoted {o, ei} and this is said to form a (rectangular) Cartesian coordinate system. The components xi are called (rectangular) Cartesian coordinates of the point x in the coordinate system {o, ei}. The distinction between the mapping ei and the basis vector ei can be ignored for most of the applications envisaged in this book, and, moreover, when o is fixed the point x may be identified with its position vector x relative to o. We shall emphasize this identification in later sections.

    With respect to rectangular Cartesian coordinate systems {o, ei} and {o′, e′i} the point x has coordinates xi and x′i respectively, where the basis vectors are related by (1.1.33). Let o′(o) be denoted by c. Then

    xjej = x′je′j + c,

    and it follows that

    x′i = Qij(xj − cj) = Qijxj − c′i,    (1.1.44)

    where ci, c′i are the components of c relative to the bases {ei}, {e′i} respectively. When c = 0 the transformation law (1.1.44) for the components of x is equivalent to that given in (1.1.35).

    From (1.1.44) we obtain

    ∂x′i/∂xj = Qij,

    and the chain rule for partial derivatives may be used to show that

    QikQjk = δij,

    thus confirming (1.1.30).

    The matrix of partial derivatives ∂x′i/∂xj is the Jacobian matrix corresponding to the (linear) coordinate transformation (1.1.44) and, in that such a matrix plays an important role for more general (non-linear) coordinate transformations, the above provides a lead-in to our discussion of curvilinear coordinates in Section 1.5.4.

    Until Section 1.5.4, therefore, we take the terms ‘orthonormal’ and ‘rectangular Cartesian’ to be equivalent.

    1.2 CARTESIAN TENSORS

    1.2.1 Motivation: stress in a continuum

    Consider an infinitesimal element of surface area dS in a continuous medium. Let n be the unit normal to dS (by convention we take n, rather than − n, to be the positive unit normal; see Section 1.5.5 for a definition of positive unit normal). In general, the material on one side of dS exerts a force on the material on the other side (that into which n points in our convention). As indicated in Fig. 1.2, we denote the force by t(n)dS, where t(n) is called the stress vector and is such that t(− n) = − t(n). It has dimensions of force per unit area and it depends on the orientation of dS, that is on n. In fact, as is shown in Chapter 3, t(n) depends linearly on n. We express this dependence by writing

    t(n) = Tn,    (1.2.1)

    Fig. 1.2 The force t(n) per unit area acting on an infinitesimal surface area dS with unit normal n.

    where T, which is independent of n, is a linear mapping.

    Equation (1.2.1) is in invariant form. When referred to the orthonormal basis {ei}, however, (1.2.1) is decomposed as

    ti = Tijnj,    (1.2.2)

    where Tij are called the components of T relative to the basis {ei}.

    If, for example, we take n = e1 then the components of n are ni = δi1 and (1.2.2) becomes ti(e1)=Ti1, and similarly ti(e2) = Ti2, ti(e3) = Ti3. The nine components Tij can therefore be thought of as representing the components ti(ej)of the vectors t(ej),j = 1,2,3, corresponding to the forces per unit area on three mutually perpendicular planes (at a point in the material). In other words, the state of stress in the material can be represented by the components Tij, for any chosen basis {ei}, or, in invariant form, by the linear mapping T. This provides a physical interpretation of T which will be elaborated in Chapter 3.
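The interpretation of the columns of (Tij) as tractions on coordinate planes can be illustrated with a small numerical example (hypothetical stress values, not from the book):

```python
import math

# hypothetical stress components T_ij (force per unit area)
T = [[10.0,  2.0, 0.0],
     [ 2.0, -5.0, 0.0],
     [ 0.0,  0.0, 3.0]]

# traction t_i = T_ij n_j on a plane with unit normal n
n = [1.0 / math.sqrt(2), 1.0 / math.sqrt(2), 0.0]
t = [sum(T[i][j] * n[j] for j in range(3)) for i in range(3)]
assert all(math.isfinite(ti) for ti in t)

# taking n = e_1 picks out the first column: t_i(e_1) = T_i1
e1 = [1.0, 0.0, 0.0]
t_e1 = [sum(T[i][j] * e1[j] for j in range(3)) for i in range(3)]
assert t_e1 == [T[0][0], T[1][0], T[2][0]]
```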

    The linear mapping T is also called a tensor (or more specifically a second-order tensor). In the present context it is a stress tensor, but, as will be seen in Chapter 3, many different tensor measures of stress can be constructed so we do not refer to T as the stress tensor. Unless indicated otherwise tensors are denoted by bold-face, upper-case letters such as T, U, V, W,... in this chapter.

    As a specific example, we take T = − pI, where p is a scalar and I is the identity tensor (in components Tij = − pδij), so that t(n) = − pn. Physically, this corresponds to the situation in an inviscid fluid, for which the pressure p acts in the direction normal to an arbitrary surface and no shearing (viscous) forces act parallel to the surface.

    We now determine how the components of T transform under change of orthonormal basis. Let ti, Tij, nj and t′i, T′ij, n′j be the components of t, T, n with respect to bases {ei} and {e′i} respectively, the basis vectors being related by (1.1.33). Then

    t′i = T′ijn′j,  ti = Tijnj.    (1.2.3)

    But, since t and n are vectors, their components transform according to (1.1.35); thus

    t′i = Qiptp,  n′j = Qjqnq.    (1.2.4)

    Combining (1.2.3) and (1.2.4) we obtain

    QipTpqnq = T′ikQkqnq.

    But this holds for arbitrary n so that

    QipTpq = T′ikQkq,

    and use of (1.1.32) after post-multiplication of this equation by Qjq leads to

    T′ij = QipQjqTpq.    (1.2.5)

    This is the transformation rule for the (rectangular Cartesian) components of the second-order tensor T under change of basis (1.1.33). Indeed, (1.2.5) provides the basis for the characterization of second- and higher-order Cartesian tensors as defined in the following section.

    Let T, T′ respectively denote the matrices (Tij) and (T′ij). Then (1.2.5) may be represented as

    T′ = QTQT.

    We emphasize that it is important to distinguish between the tensor T and its component representation T. Although this matrix representation is useful for manipulative purposes in respect of second-order tensors it does not generalize conveniently to tensors of higher order.
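The agreement between the matrix form T′ = QTQT and the component rule T′ij = QipQjqTpq can be checked for a sample rotation and tensor (a sketch, not from the book):

```python
import math

th = 0.3
Q = [[math.cos(th),  math.sin(th), 0.0],
     [-math.sin(th), math.cos(th), 0.0],
     [0.0,           0.0,          1.0]]
T = [[1.0, 4.0, 2.0], [0.0, -3.0, 5.0], [7.0, 1.0, 2.0]]  # arbitrary components

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

QT = [[Q[j][i] for j in range(3)] for i in range(3)]
Tp_matrix = matmul(matmul(Q, T), QT)                  # T' = Q T Q^T

# component rule T'_ij = Q_ip Q_jq T_pq
Tp_components = [[sum(Q[i][p] * Q[j][q] * T[p][q]
                      for p in range(3) for q in range(3))
                  for j in range(3)] for i in range(3)]

for i in range(3):
    for j in range(3):
        assert math.isclose(Tp_matrix[i][j], Tp_components[i][j], abs_tol=1e-12)
```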

    1.2.2 Definition of a Cartesian tensor

    An entity T which has components Tijk... (n indices) relative to a rectangular Cartesian basis {ei} and transforms like

    T′ijk... = QipQjqQkr...Tpqr...    (1.2.6)

    under a change of basis ei → e′i ≡ Qijej, where Q = (Qij) is a (proper) orthogonal matrix, is called a Cartesian tensor of order n. We abbreviate this phrase as CT(n) for convenience. All indices run over the values 1, 2, 3 so that a CT(n) has 3^n components.

    Examples

    (a) The tensor described in Section 1.2.1 is a CT(2), and its components transform according to (1.2.5).

    (b) A vector is a CT(1) whose components transform in accordance with (1.1.35).

    (c) A scalar is a CT(0) and is unchanged by a transformation of basis.

    (d) The Kronecker delta is a CT(2) since

    δ′ij = e′i·e′j = QipQjqep·eq = QipQjqδpq,

    use having been made of (1.1.12) and (1.1.33). Furthermore, it follows from the definition of δpq and (1.1.32) that δ′ij = δij, that is the components of the identity tensor I are unaffected by a change of basis. In fact, I is a member of an important special class of Cartesian tensors, referred to as isotropic tensors, which are discussed more fully in Section 1.2.5.

    Problem 1.2.1 If S is a CT(3) and T is a CT(2) with components Sijk, Tlm respectively with respect to the basis {ei}, show that SijkTlm are the components of a CT(5). Generalize this result to the situation where S is a CT(m) and T is a CT(n). (An invariant notation to represent such a product is introduced in Section 1.2.3.)

    Deduce that Uijk = SijpTkp and vi = SijkTjk are respectively the components of a CT(3) and a CT(1).

    Problem 1.2.2 If T is a CT(2) and Tn = 0 for arbitrary vectors n show that T = 0 (the zero second-order tensor), that is Tij = 0 with respect to an arbitrary basis. (This result was used in the derivation of equation (1.2.5).)

    Problem 1.2.3 If S and T are each CT(n)’s and α,β are scalars, prove that αS + βT is also a CT(n). This shows that Cartesian tensors of the same order may be added, but no meaning can be attached to the sum of tensors of different orders.

    Problem 1.2.4

    Problem 1.2.5 If Eij are the components of a CT(2) and λ, μ are scalars, deduce that

    are also the components of a CT(2), and show that

    are defined by

    and hence show that

    1.2.3 The tensor product

    Consider two vectors u and v. The product uivj of their components transforms according to

    u′iv′j = QipQjqupvq,

    so uivj are the components of a CT(2) with respect to the basis {ei}. This tensor is denoted by u ⊗ v and is called the tensor product (or dyadic product) of u and v (the notation uvT is also used, by analogy with the notation used for matrix transposes in Section 1.1.2).

    As the defining property of u ⊗ v we have, in invariant notation,

    (u ⊗ v)w = (v·w)u    (1.2.7)

    for all u, v, w.

    In respect of the basis vectors ei equation (1.2.7) gives

    (ei ⊗ ej)n = njei    (1.2.8)

    for all vectors n. If T is an arbitrary CT(2) with components Tij with respect to the basis {ei} then multiplication of (1.2.8) by Tij with summation over i and j leads to

    Tij(ei ⊗ ej)n = Tijnjei = Tn,

    since, by definition, Tn has components Tijnj (see Section 1.2.1).

    Since n is arbitrary, it follows from the result of Problem 1.2.2 that

    T = Tijei ⊗ ej.    (1.2.9)

    Thus (1.2.9) provides a representation for an arbitrary CT(2) T with respect to an arbitrarily chosen basis {ei}. (This will be discussed in more detail in Section 1.3.)

    In passing we remark that in general u ⊗ v ≠ v ⊗ u for a pair of vectors u, v.

    For the identity tensor I, with components δij, we have

    I = ei ⊗ ei    (1.2.10)

    for an arbitrary (orthonormal) basis {ei}.

    Similarly, u ⊗ v ⊗ w is a third-order tensor with components uivjwk with respect to the basis {ei}.

    More generally, a CT(n) may be represented as

    T = Tij...kei ⊗ ej ⊗ ... ⊗ ek.

    The tensor product S ⊗ T of a CT(m) S and a CT(n) T is a CT(m + n) such that

    (S ⊗ T)ij...kpq...r = Sij...kTpq...r.

    1.2.4 Contraction

    Let Ti1i2...in be the components of a CT(n) with n ≥ 2. Set any two indices equal, iq = ip say, and sum over ip from 1 to 3. These indices are then said to be contracted and the order of the tensor is reduced by two. This result follows from the transformation rule (1.2.6) on use of (1.1.32). The general proof is left as an exercise for the reader, but we examine here some special cases.

    (a) The tensor product u ⊗ v of two vectors becomes their dot product u·v on contraction, since uivj becomes uivi.

    (b) If T is a CT(2) with components Tij then this contracts to the scalar Tii. Since

    T′ii = QipQiqTpq,

    it follows by (1.1.32) that

    T′ii = δpqTpq = Tpp.

    This scalar is called the trace of T, and is denoted by tr T. It is a particular example of a scalar invariant of T (full discussion of scalar invariants is given in Section 1.3 and Chapter 4).

    (c) If S and T are CT(2)’s then S ⊗ T is a CT(4).

    Let Sij, Tkl be the components of S, T with respect to the basis {ei}. Then the indices can be contracted in a number of ways.

    For example, SijTjl are the components of the CT(2) which is written ST, and this in turn contracts to the scalar tr (ST) = SijTji = tr (TS). Similarly SijTkj are the components of STT, where TT denotes the transpose of T. The results tr(ST) = tr(S^T T^T) and tr(S T^T) = tr(S^T T) are then easily established.
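These trace identities are easy to spot-check numerically for arbitrary matrices of components (a sketch, not from the book):

```python
import math

S = [[1.0, 2.0, 0.0], [3.0, -1.0, 4.0], [0.5, 0.0, 2.0]]
T = [[2.0, 0.0, 1.0], [1.0, 1.0, 0.0], [-3.0, 2.0, 5.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def tr(A):
    # trace: contraction A_ii
    return sum(A[i][i] for i in range(3))

assert math.isclose(tr(matmul(S, T)), tr(matmul(T, S)))
assert math.isclose(tr(matmul(S, transpose(T))),
                    tr(matmul(transpose(S), T)))
```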

    If S = T then TT is denoted by T², TT² by T³ and so on, and tr(T²), tr (T³) are further examples of scalar invariants of T.

    For the products of higher-order tensors there are many possible contractions available but we do not need to go into details here.

    Problem 1.2.6 If T is an arbitrary CT(2) and I is the identity tensor, deduce that T = TI. Hence use the representation (1.2.10) for I to show that T has the representation (1.2.9). The decomposition Tn = Tiknkei may be used.

    1.2.5 Isotropic tensors

    If the components of a Cartesian tensor T are unchanged under an arbitrary (subject to right-handedness being maintained) transformation of rectangular Cartesian basis then T is said to be an isotropic tensor.

    Examples

    (a) CT(0): all scalars are isotropic.

    (b) CT(1): there are no non-trivial isotropic vectors.

    If v is an isotropic vector then its components must satisfy Qijvj = vi for arbitrary proper orthogonal matrices Q. The choice

    Q = |  0  1  0 |
        | −1  0  0 |,
        |  0  0  1 |

    corresponding to θ = π/2 in (1.1.37), leads to v1 = v2 = 0 and, similarly, another choice of Q gives v3 = 0. Hence v = 0 is the only isotropic vector.

    (c) CT(2): scalar multiples of the identity I are the only isotropic CT(2)’s. For this particular example we work through the proof in detail. Let T be an isotropic CT(2). Then its components Tij must satisfy

    QipQjqTpq = Tij    (1.2.11)

    or, in matrix form,

    QTQT = T

    for all proper orthogonal Q.

    The choice of Q used in (b) above leads to

    |  T22  −T21   T23 |   | T11  T12  T13 |
    | −T12   T11  −T13 | = | T21  T22  T23 |,
    |  T32  −T31   T33 |   | T31  T32  T33 |

    so that T22 = T11, T12 = − T21, T23 = T13 = T31 = T32 = 0.

    The choice

    Q = | 1  0  0 |
        | 0  0  1 |
        | 0 −1  0 |

    then yields T12 = 0, T33 = T11, so that Tij = T11δij. Since (1.2.11) is unaffected by multiplication of T by an arbitrary scalar, the required result follows.

    (d) CT(3): scalar multiples of the tensor which has components εijk (defined by (1.1.16)) are the only isotropic CT(3)’s.

    Firstly, we note from (1.1.23) that

    ε′ijk = QipQjqQkrεpqr = (det Q)εijk.

    If Q is proper orthogonal then det Q = 1 and it follows that ε′ijk = εijk in accordance with the definition of isotropy. If det Q = − 1, on the other hand, then ε′ijk = − εijk. This problem does not arise in respect of even-order tensors, but explains why proper orthogonal changes of basis are used in the definition of isotropy. It follows that scalar multiples of εijk are isotropic, but it is left to the reader to prove that these are the only ones (follow the method used for CT(2)’s above).
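The sign behaviour of the alternating symbol under an improper change of basis can be verified by brute force (a sketch, not from the book; 0-based indices, with a reflection as the improper orthogonal Q):

```python
def eps(i, j, k):
    # alternating symbol for 0-based indices
    return (j - i) * (k - i) * (k - j) // 2 if {i, j, k} == {0, 1, 2} else 0

# a reflection: improper orthogonal, det Q = -1
Q = [[1, 0, 0], [0, 1, 0], [0, 0, -1]]

# eps'_ijk = Q_ip Q_jq Q_kr eps_pqr should equal (det Q) eps_ijk = -eps_ijk
for i in range(3):
    for j in range(3):
        for k in range(3):
            lhs = sum(Q[i][p] * Q[j][q] * Q[k][r] * eps(p, q, r)
                      for p in range(3) for q in range(3) for r in range(3))
            assert lhs == -eps(i, j, k)
```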

    (e) CT(4): if the tensor T has components Tijkl with respect to an arbitrary basis {ei} then the only independent isotropic forms of Tijkl are scalar multiples of

    δijδkl,  δikδjl,  δilδjk.

    The most general isotropic CT(4) is therefore expressible in the component form

    Tijkl = αδijδkl + βδikδjl + γδilδjk,

    where α, β, γ are scalars. The proof of this is omitted, but details can be found in Jeffreys (1952), for example.

    The products εijkεpqk are the components of a fourth-order isotropic tensor. Hence it is expressible in the form

    εijkεpqk = αδijδpq + βδipδjq + γδiqδjp,

    where α, β, γ are to be determined. But, because of the antisymmetry property (1.1.18) of εijk, it follows immediately that α = 0 and γ = − β. The choice ij = pq = 12 gives β = 1 and the result (1.1.27) is obtained.

    (f) Isotropic tensors of all higher orders have components expressible as linear combinations of products of Kronecker deltas and alternating symbols†.

    For CT(5), for example, with ijklm as indices, there are ten sets of components (not all independent), namely

    δijεklm, δikεjlm, δilεjkm, δimεjkl, δjkεilm, δjlεikm, δjmεikl, δklεijm, δkmεijl, δlmεijk.

    Every isotropic CT(6) has components expressible as linear combinations of products of Kronecker deltas alone. Amongst the indices ijklmn there are fifteen independent products of the form δijδklδmn. A consequence of this is that εijkεlmn, being the components of an isotropic CT(6), is expressible as

    εijkεlmn = | δil δim δin |
               | δjl δjm δjn |,
               | δkl δkm δkn |

    which is equivalent to (1.1.26). The reader is invited to consider the consequences of these results for isotropic CT(n)’s of all higher even and odd orders.

    When the underlying vector space has dimension two, or greater than three, parallel results can be obtained.

    1.3 TENSOR ALGEBRA

    Following the discussion of Cartesian tensors in terms of component transformations under change of basis in Section 1.2 we now develop the theory in its equivalent invariant form. Since second-order tensors are most important in applications attention is confined to these in Sections 1.3.1 to 1.3.5, while Section 1.3.6 is devoted to a short discussion of higher-order tensors.

    1.3.1 Second-order tensors

    As we indicated in Section 1.2, a second-order Cartesian tensor T may be regarded as a mapping which assigns to each vector u a vector, denoted Tu, independently of any choice of basis (either orthonormal or otherwise), and this is the starting point for the development in this section.

    The tensor T is said to be linear if

        T(αu + βv) = αTu + βTv                              (1.3.1)

    for all vectors u, v and all scalars α, β. If an orthonormal basis {ei} is chosen for E then (1.3.1) has the component form

        Tij(αuj + βvj) = αTijuj + βTijvj.

    For the most part, however, we avoid using such component representations in this section.

    The set of all second-order tensors is itself a vector space, with the element αS + βT defined according to

        (αS + βT)u = α(Su) + β(Tu)

    for all vectors u.

    The product ST is defined by

        (ST)u = S(Tu)

    for all vectors u.

    Note that ST has the Cartesian component form SikTkj as discussed in Section 1.2.4(c).

    The zero tensor 0 maps every vector to the zero vector o, and the identity tensor I maps every vector to itself. Thus

        0u = o,    Iu = u

    for every vector u; with respect to any orthonormal basis the components of I are δij, with which the Kronecker delta symbol may be identified.

    If an orthonormal basis {ei} is chosen for E then the tensor product ei ⊗ ej is defined so that

        (ei ⊗ ej)u = (ej·u)ei                               (1.3.5)

    for every vector u.

    For an arbitrary member T equation (1.3.5) leads to

    and hence

    Since this holds for all u, v, we obtain the representation

        T = T(ei, ej)ei ⊗ ej

    for T with respect to the basis {ei}. This is equivalent to (1.2.9) and we identify T(ei, ej) as the component Tij of T relative to the basis {ei}. Equation (1.3.6) then becomes

        T(u, v) = Tijuivj,

    and this can also be written as the scalar product of u with Tv. Thus

        T(u, v) = u·(Tv).

    In general, however, this is not equal to v·(Tu). This leads us to define the transpose TT of T by

        TT(u, v) = T(v, u)

    or, equivalently,

        u·(TTv) = v·(Tu).

    (This generalizes the definition given in Section 1.2.4 in respect of Cartesian tensors.) With respect to an arbitrary orthonormal basis the components of TT are given by

        (TT)ij = Tji.

    The properties

        (TT)T = T,    (αS + βT)T = αST + βTT,    (ST)T = TTST

    follow immediately from the definitions.

    A second-order tensor T is said to be symmetric if TT = T. In components, this implies that Tji = Tij with respect to an arbitrary basis {ei}. Note that the identity tensor I is symmetric and

        I = ei ⊗ ei.

    A second-order tensor T is said to be skew-symmetric (or antisymmetric) if TT = -T, or, in components, Tji = -Tij.

    The reader should confirm that symmetry or antisymmetry of the components Tij is preserved under change of orthonormal basis. Note that a symmetric tensor has six independent components while an antisymmetric one has three, and that a general second-order tensor (with nine independent components) can be written as the sum of a symmetric part and an antisymmetric part. Thus

        T = ½(T + TT) + ½(T - TT).
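The decomposition into symmetric and antisymmetric parts can be illustrated numerically; the sketch below is an added example (the random tensor is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))  # components of a generic second-order tensor

S = 0.5 * (T + T.T)  # symmetric part:      S_ij = S_ji
A = 0.5 * (T - T.T)  # antisymmetric part:  A_ij = -A_ji
```

Here S has six independent components, A has three (its diagonal vanishes), and T = S + A.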

    The trace of T may be defined with respect to an orthonormal basis {ei} as in Section 1.2.4(b). Thus

        tr T = Tii.

    Likewise, the determinant of T is defined as the determinant of the matrix T of components of T with respect to an orthonormal basis. Thus, by (1.1.22),

        det T = εijkT1iT2jT3k.

    Using the fact that εijk are the components of an isotropic tensor it is easy to establish, by (1.1.23) applied to Qij and the transformation rule (1.2.5), that det T is independent of the choice of basis. In other words, like tr T, det T is a scalar invariant of T.
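The invariance of tr T and det T under an orthogonal change of basis, in components T′ = QTQᵀ, can be seen numerically; this sketch is an illustrative addition with arbitrarily chosen values:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3))  # components in the original basis

# Orthogonal change of basis (rotation about e3)
c, s = np.cos(0.3), np.sin(0.3)
Q = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

Tp = Q @ T @ Q.T  # transformed components T'_ij = Q_ip Q_jq T_pq
```

Both scalar invariants are unchanged by the transformation, as the text states.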

    If det T ≠ 0 then there exists a unique inverse tensor, denoted T-1, such that

        T T-1 = T-1 T = I.

    such that det S ≠ 0.

    The adjugate tensor of T, denoted adj T, is then defined by

        adj T = (det T)T-1,

    although it may also be defined when T-1 does not exist. It is easily shown that

        adj (ST) = (adj T)(adj S).
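Assuming the classical adjugate adj T = (det T)T-1 for invertible T (the definition used in the sketch below; the helper name is mine, not the book's), the defining relation and the product rule can be checked numerically:

```python
import numpy as np

def adjugate(T):
    # Classical adjugate via (det T) T^{-1}; valid when T is invertible
    return np.linalg.det(T) * np.linalg.inv(T)

rng = np.random.default_rng(2)
# Shift by 3I so the random matrices are comfortably invertible
S = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)
T = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)
```

With this definition T(adj T) = (det T)I, and adj (ST) = (adj T)(adj S), the factors appearing in reversed order just as for the inverse.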

    Problem 1.3.1 are as defined in Problem 1.2.4. Show that (a) if Tij are the components of a symmetric CT(2) then

    and (b) if Tij are the components of a skew-symmetric CT(2) then

    Problem 1.3.2 If Tijuj are the components of a CT(1) for every choice of the vector components uj, deduce that Tij are the components of a CT(2).

    Problem 1.3.3 If Wij are the components of an antisymmetric CT(2) W then the vector w with components

        wi = -½εijkWjk

    is called the axial vector of W. Show that, for an arbitrary vector a, w ∧ a = Wa.

    Deduce that u ∧ v is the axial vector of v ⊗ u - u ⊗ v.

    Problem 1.3.4 If

        [  0    T12   0 ]
        [ -T12   0    0 ]
        [  0     0    0 ]

    is the matrix representing the components of an antisymmetric CT(2) T with respect to a basis {ei}, show that, for any change of basis ei → e′i = Qijej such that e′3 = e3, the matrix representing the components of T is unchanged.
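A numerical check for Problem 1.3.4 (an added sketch; the matrix below, with the single independent component T12 as its only non-zero entries, is an assumed antisymmetric form, and the rotation angle is arbitrary):

```python
import numpy as np

t12 = 1.7  # the single independent component (assumed form)
T = np.array([[0.0,  t12, 0.0],
              [-t12, 0.0, 0.0],
              [0.0,  0.0, 0.0]])

# Rotation of the basis about e3, so that e'_3 = e_3
theta = 0.9
c, s = np.cos(theta), np.sin(theta)
Q = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

Tp = Q @ T @ Q.T  # transformed components T'_ij = Q_ip Q_jq T_pq
```

For this rotation the transformed matrix of components coincides with the original one.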

    1.3.2 Eigenvalues and eigenvectors of a second-order tensor

    Let T be a second-order tensor. A
