Lecture Notes
Textbook
Students are strongly advised to acquire a copy of the Textbook:
About Homework
The Student Handbook 2.10 (f) says:
"As a rough guide you should be spending approximately twice the number of instruction hours in private study, mainly working through the examples sheets and reading your lecture notes and the recommended text books."

In respect of this course, MATH10212 Linear Algebra B, this means that students are expected to spend 8 (eight!) hours a week in private study of Linear Algebra.
Communication
The Course Webpage is
http://www.maths.manchester.ac.uk/~avb/math10212-Linear-Algebra-B.html
Twitter: https://twitter.com/math10212
$$\sum_{i=1}^{3} p_i g_i,$$
where
$$P = \begin{pmatrix} p_1 \\ p_2 \\ p_3 \end{pmatrix} \quad\text{and}\quad G = \begin{pmatrix} g_1 \\ g_2 \\ g_3 \end{pmatrix}.$$
Physicists use an even shorter notation and, instead of
$$p_1 g_1 + p_2 g_2 + p_3 g_3 = \sum_{i=1}^{3} p_i g_i,$$
write
$$p_1 g_1 + p_2 g_2 + p_3 g_3 = p_i g_i,$$
omitting the summation sign entirely. This particular trick was invented by Albert Einstein, of all people. I do not use physics tricks in my lectures, but am prepared to give a few additional lectures to physics students.
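The summation that the Einstein convention abbreviates is easy to mimic in code; a minimal sketch, with made-up sample values for the entries of p and g:

```python
# Sum p_i * g_i over i: the expression the summation sign abbreviates.
p = [2, 3, 5]
g = [1, 4, 6]

total = sum(p_i * g_i for p_i, g_i in zip(p, g))
assert total == 2 * 1 + 3 * 4 + 5 * 6
```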
Unlike, say, Calculus, Linear Algebra focuses more on the development of a special mathematical language than on procedures.

We shall abbreviate the words "a system of simultaneous linear equations" just to "a linear system".
A solution of the system is a list
(s1 , . . . , sn ) of numbers that makes each
equation a true identity when the values
s1 , . . . , sn are substituted for x1 , . . . , xn ,
respectively. For example, in the system
above (2, 1) is a solution.
The set of all possible solutions is called
the solution set of the linear system.
Two linear systems are equivalent if they have the same solution set.
We shall prove later in the course that a system of linear equations has either no solution, exactly one solution, or infinitely many solutions.

Existence and uniqueness questions
$$\begin{pmatrix} 1 & 2 & 3 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix}$$
and the augmented matrix
$$\begin{pmatrix} 1 & 2 & 3 & 1 \\ 1 & 1 & 0 & 2 \\ 0 & 1 & 1 & 3 \end{pmatrix};$$
notice how the coefficients are aligned in columns, and how missing coefficients are replaced by 0.

The augmented matrix in the example above has 3 rows and 4 columns; we say that it is a 3 × 4 matrix. Generally, a matrix with m rows and n columns is called an m × n matrix.
Elementary row operations

Replacement: Replace one row by the sum of itself and a multiple of another row.

Interchange: Interchange two rows.

Scaling: Multiply all entries in a row by a nonzero constant.

Equivalence of linear systems
$$\begin{pmatrix} 0 & 2 & 2 & 2 & 2 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 3 & 3 \\ 1 & 1 & 1 & 2 & 2 \end{pmatrix}$$
A pivot is a nonzero number in a pivot
position which is used to create zeroes in
the column below it.
A rule for row reduction: working from left to right, use a pivot to create zeros below it, so that the matrix acquires an echelon shape
$$\begin{pmatrix} \blacksquare & * & * & * \\ 0 & \blacksquare & * & * \\ 0 & 0 & 0 & \blacksquare \end{pmatrix},$$
and this is a reduced echelon matrix:
$$\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
Let
$$\begin{pmatrix} 1 & 0 & -5 & 1 \\ 0 & 1 & 1 & 4 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$
be the augmented matrix of a linear system; then the system is equivalent to
$$\begin{aligned} x_1 - 5x_3 &= 1 \\ x_2 + x_3 &= 4 \\ 0 &= 0. \end{aligned}$$
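The row-reduction procedure is mechanical enough to code directly. A minimal sketch (not the textbook's algorithm verbatim), using exact fractions and the three elementary row operations, applied to the example augmented matrix from above:

```python
from fractions import Fraction

def rref(matrix):
    """Reduce a matrix to reduced echelon form by elementary row operations."""
    m = [[Fraction(x) for x in row] for row in matrix]
    rows, cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(cols):
        # interchange: find a row with a nonzero entry in this column
        pivot = next((r for r in range(pivot_row, rows) if m[r][col] != 0), None)
        if pivot is None:
            continue
        m[pivot_row], m[pivot] = m[pivot], m[pivot_row]
        # scaling: make the pivot equal to 1
        m[pivot_row] = [x / m[pivot_row][col] for x in m[pivot_row]]
        # replacement: create zeros in the rest of the column
        for r in range(rows):
            if r != pivot_row and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return m

# augmented matrix of the example system from the beginning of the lecture
aug = [[1, 2, 3, 1],
       [1, 1, 0, 2],
       [0, 1, 1, 3]]
reduced = rref(aug)
solution = [row[-1] for row in reduced]
assert solution == [-3, 5, -2]
```

Substituting back into the original equations confirms the answer: (-3) + 2(5) + 3(-2) = 1, (-3) + 5 = 2, and 5 + (-2) = 3.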
Existence and uniqueness. A linear system is consistent if and only if an echelon form of its augmented matrix has no row of the form
$$\begin{pmatrix} 0 & \cdots & 0 & b \end{pmatrix} \quad\text{with } b \text{ nonzero}.$$
1. u + v = v + u
2. (u + v) + w = u + (v + w)
3. u + 0 = 0 + u = u
4. u + (−u) = −u + u = 0
5. c(u + v) = cu + cv
6. (c + d)u = cu + du
7. c(du) = (cd)u
8. 1u = u

(Here −u denotes (−1)u, and 0 denotes the zero vector, all of whose entries are 0.)
Linear combinations

Given vectors v1, v2, …, vp in Rn and scalars c1, c2, …, cp, the vector
$$y = c_1 v_1 + \cdots + c_p v_p$$
is called a linear combination of v1, v2, …, vp with weights c1, c2, …, cp.
The linear system
$$\begin{aligned} x_2 + x_3 &= 2 \\ x_1 + x_2 + x_3 &= 3 \\ x_1 + x_2 - x_3 &= 2 \end{aligned}$$
can be written as equality of two vectors:
$$\begin{pmatrix} x_2 + x_3 \\ x_1 + x_2 + x_3 \\ x_1 + x_2 - x_3 \end{pmatrix} = \begin{pmatrix} 2 \\ 3 \\ 2 \end{pmatrix},$$
which is the same as
$$x_1 \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} + x_2 \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} + x_3 \begin{pmatrix} 1 \\ 1 \\ -1 \end{pmatrix} = \begin{pmatrix} 2 \\ 3 \\ 2 \end{pmatrix}.$$
The system has the augmented matrix
$$\begin{pmatrix} 0 & 1 & 1 & 2 \\ 1 & 1 & 1 & 3 \\ 1 & 1 & -1 & 2 \end{pmatrix}$$
with columns
$$a_1 = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}, \quad a_2 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \quad a_3 = \begin{pmatrix} 1 \\ 1 \\ -1 \end{pmatrix}$$
and
$$b = \begin{pmatrix} 2 \\ 3 \\ 2 \end{pmatrix};$$
the vector equation expresses the right part of the system as a linear combination of columns in its matrix of coefficients.
A vector equation
$$x_1 a_1 + x_2 a_2 + \cdots + x_n a_n = b$$
has the same solution set as the linear system whose augmented matrix is
$$\begin{pmatrix} a_1 & a_2 & \cdots & a_n & b \end{pmatrix}.$$
In particular, b can be generated by a linear combination of a1, a2, …, an if and only if there is a solution of the corresponding linear system.
Definition. If v1, …, vp are in Rn, then the set of all linear combinations of v1, …, vp is denoted by Span{v1, …, vp} and is called the subset of Rn spanned (or generated) by v1, …, vp.

That is, Span{v1, …, vp} is the collection of all vectors which can be written in the form
$$c_1 v_1 + c_2 v_2 + \cdots + c_p v_p$$
with c1, …, cp scalars.
Therefore solving the linear system with augmented matrix
$$\begin{pmatrix} a_1 & a_2 & \cdots & a_n & b \end{pmatrix}$$
is the same as finding an expression of the vector of the right part of the system as a linear combination
$$b = x_1 a_1 + x_2 a_2 + \cdots + x_n a_n$$
of the columns of its matrix of coefficients.
In our example,
$$a_1 = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}, \quad a_2 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \quad a_3 = \begin{pmatrix} 1 \\ 1 \\ -1 \end{pmatrix}$$
and
$$b = \begin{pmatrix} 2 \\ 3 \\ 2 \end{pmatrix}.$$
In the matrix product notation it becomes
$$\begin{pmatrix} 0 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & -1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 2 \\ 3 \\ 2 \end{pmatrix}$$
or
$$Ax = b,$$
where
$$A = \begin{pmatrix} a_1 & a_2 & a_3 \end{pmatrix}, \quad x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}.$$
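As a sanity check, the product Ax really is the linear combination x1 a1 + x2 a2 + x3 a3 of the columns of A. A short sketch with the vectors of this example; the solution x = (1, 3/2, 1/2) used here was found by hand and is an assumption of the sketch:

```python
from fractions import Fraction as F

# columns of A and right-hand side of the example system
a1, a2, a3 = [0, 1, 1], [1, 1, 1], [1, 1, -1]
b = [2, 3, 2]

# candidate solution x = (1, 3/2, 1/2)
x = [F(1), F(3, 2), F(1, 2)]
# Ax computed as the linear combination x1*a1 + x2*a2 + x3*a3
Ax = [x[0] * a1[i] + x[1] * a2[i] + x[2] * a3[i] for i in range(3)]
assert Ax == b
```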
A set
$$\{ v_1, \ldots, v_p \}$$
of two or more vectors is linearly dependent if and only if at least one of the vectors in the set is a linear combination of the others.
Theorem 1.7.8: dependence of big sets. If a set contains more vectors than entries in each vector, then the set is linearly dependent. Thus, any set
$$\{ v_1, \ldots, v_p \}$$
in Rn is linearly dependent if p > n.
The identity matrix. An n × n matrix with 1s on the diagonal and 0s elsewhere is called the identity matrix In. For example,
$$I_1 = \begin{pmatrix} 1 \end{pmatrix}, \quad I_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},$$
$$I_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad I_4 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
The columns of the identity matrix In will be denoted
$$e_1, e_2, \ldots, e_n.$$
In short:
$$T(x) = Ax.$$
The range of a matrix transformation. The range of T is the set of all linear combinations of the columns of A. Indeed, this can be immediately seen from the fact that each image T(x) has the form
$$T(x) = Ax = x_1 a_1 + \cdots + x_n a_n.$$
Definition: Linear transformations. A transformation
$$T : \mathbb{R}^n \to \mathbb{R}^m$$
is linear if:
T(u + v) = T(u) + T(v) for all vectors u, v in Rn;
T(cu) = cT(u) for all vectors u and all scalars c.

For example, in R3
$$e_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad e_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \quad e_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.$$
Properties of linear transformations. If T is a linear transformation, then
$$T(0) = 0$$
and
$$T(cu + dv) = cT(u) + dT(v).$$
$$x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{pmatrix} = x_1 \begin{pmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix} + x_2 \begin{pmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} + \cdots + x_n \begin{pmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix} = x_1 e_1 + x_2 e_2 + \cdots + x_n e_n.$$
The identity transformation. It is easy to check that
$$I_n x = x \quad\text{for all } x \in \mathbb{R}^n.$$
Therefore the linear transformation associated with the identity matrix is the identity transformation of Rn:
$$\mathbb{R}^n \to \mathbb{R}^n, \quad x \mapsto x.$$
For a linear transformation T, compute
$$\begin{aligned} T(x) &= T(x_1 e_1 + x_2 e_2 + \cdots + x_n e_n) \\ &= T(x_1 e_1) + \cdots + T(x_n e_n) \\ &= x_1 T(e_1) + \cdots + x_n T(e_n), \end{aligned}$$
and then switch to matrix notation:
$$T(x) = \begin{pmatrix} T(e_1) & \cdots & T(e_n) \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = Ax.$$

Theorem: One-to-one and onto in terms of matrices. Let
$$T : \mathbb{R}^n \to \mathbb{R}^m$$
be a linear transformation. Then T is one-to-one if and only if the equation
$$T(x) = 0$$
has only the trivial solution.
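The computation above says that a linear map is completely determined by the images of the standard basis vectors: its matrix has columns T(e1), …, T(en). A small sketch with a made-up linear map T (the particular map is an assumption, chosen only for illustration):

```python
# Hypothetical linear map on R^2, used only to illustrate the construction.
def T(v):
    x1, x2 = v
    return (x1 + x2, 2 * x1)  # linear: respects sums and scalar multiples

e1, e2 = (1, 0), (0, 1)
col1, col2 = T(e1), T(e2)  # columns of the matrix A of T

def apply_A(v):
    """Multiply by the matrix whose columns are T(e1), T(e2)."""
    x1, x2 = v
    return (x1 * col1[0] + x2 * col2[0], x1 * col1[1] + x2 * col2[1])

v = (3, -4)
assert apply_A(v) == T(v)  # Ax reproduces T(x)
```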
$$A = \begin{pmatrix} a_{11} & \cdots & a_{1j} & \cdots & a_{1n} \\ \vdots & & \vdots & & \vdots \\ a_{i1} & \cdots & a_{ij} & \cdots & a_{in} \\ \vdots & & \vdots & & \vdots \\ a_{m1} & \cdots & a_{mj} & \cdots & a_{mn} \end{pmatrix},$$
with columns
$$a_j = \begin{pmatrix} a_{1j} \\ \vdots \\ a_{ij} \\ \vdots \\ a_{mj} \end{pmatrix}.$$
Matrices

A square matrix is a matrix with equal numbers of rows and columns. The diagonal entries of a square matrix are the entries a11, a22, …; for example, the diagonal entries of
$$A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}$$
are 1, 5, and 9.

A diagonal matrix is a square matrix whose entries outside the diagonal are all zero. For example, the matrices
$$\begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}, \quad \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 2 \end{pmatrix}, \quad \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
are diagonal.

Zero matrix. By definition, 0 is an m × n matrix whose entries are all zero. For example, the matrices
$$\begin{pmatrix} 0 & 0 \end{pmatrix}, \quad \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}, \quad \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
are zero matrices, and the square ones among them are diagonal!

Sums. If
$$A = \begin{pmatrix} a_1 & a_2 & \cdots & a_n \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} b_1 & b_2 & \cdots & b_n \end{pmatrix}$$
are two m × n matrices written in terms of their columns, then
$$A + B = \begin{pmatrix} a_1 + b_1 & a_2 + b_2 & \cdots & a_n + b_n \end{pmatrix} = \begin{pmatrix} a_{11} + b_{11} & \cdots & a_{1j} + b_{1j} & \cdots & a_{1n} + b_{1n} \\ \vdots & & \vdots & & \vdots \\ a_{i1} + b_{i1} & \cdots & a_{ij} + b_{ij} & \cdots & a_{in} + b_{in} \\ \vdots & & \vdots & & \vdots \\ a_{m1} + b_{m1} & \cdots & a_{mj} + b_{mj} & \cdots & a_{mn} + b_{mn} \end{pmatrix}.$$
Similarly, for a scalar c,
$$cA = \begin{pmatrix} ca_{11} & \cdots & ca_{1j} & \cdots & ca_{1n} \\ \vdots & & \vdots & & \vdots \\ ca_{i1} & \cdots & ca_{ij} & \cdots & ca_{in} \\ \vdots & & \vdots & & \vdots \\ ca_{m1} & \cdots & ca_{mj} & \cdots & ca_{mn} \end{pmatrix}.$$
Composition of linear transformations. Let
$$T : \mathbb{R}^n \to \mathbb{R}^m, \quad x \mapsto Bx,$$
and
$$S : \mathbb{R}^m \to \mathbb{R}^p, \quad y \mapsto Ay,$$
where B has columns b1, …, bn. Their composition
$$(S \circ T)(x) = S(T(x))$$
is a linear transformation
$$S \circ T : \mathbb{R}^n \to \mathbb{R}^p.$$
From the previous lecture, we know that linear transformations are given by matrices. What is the matrix C = AB of S ∘ T?
Multiplication of matrices. To answer the above question, we need to compute A(Bx) in matrix form. Write x as
$$x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$$
and observe
$$Bx = x_1 b_1 + \cdots + x_n b_n.$$
Hence
$$\begin{aligned} A(Bx) &= A(x_1 b_1 + \cdots + x_n b_n) \\ &= A(x_1 b_1) + \cdots + A(x_n b_n) \\ &= x_1 A b_1 + \cdots + x_n A b_n \\ &= \begin{pmatrix} Ab_1 & Ab_2 & \cdots & Ab_n \end{pmatrix} x, \end{aligned}$$
where
$$Ab_j = \begin{pmatrix} a_1 & \cdots & a_m \end{pmatrix} \begin{pmatrix} b_{1j} \\ \vdots \\ b_{mj} \end{pmatrix} = b_{1j} a_1 + \cdots + b_{mj} a_m.$$

Mnemonic rules:
$$[m \times n \text{ matrix}] \cdot [n \times p \text{ matrix}] = [m \times p \text{ matrix}].$$
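The description of AB column by column, with the j-th column of AB equal to A bj, translates directly into code; a minimal pure-Python sketch (the sample matrices are made up):

```python
def mat_vec(A, v):
    """A*v as the linear combination v1*a1 + ... + vn*an of the columns of A."""
    m, n = len(A), len(A[0])
    return [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]

def mat_mul(A, B):
    """AB column by column: the j-th column of AB is A times the j-th column of B."""
    cols_B = [[row[j] for row in B] for j in range(len(B[0]))]
    cols_AB = [mat_vec(A, bj) for bj in cols_B]
    return [[col[i] for col in cols_AB] for i in range(len(A))]

A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [1, 0]]
assert mat_mul(A, B) == [[2, 1], [4, 3]]
```

Note that multiplying by B on the right permutes the columns of A, as expected.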
1. A(BC) = (AB)C
2. A(B + C) = AB + AC
3. (B + C)A = BA + CA
4. c(AB) = (cA)B = A(cB)
5. Im A = A = A In
Powers of a matrix. For a square matrix A and a positive integer k,
$$A^k = A \cdots A \quad (k \text{ times}).$$
If A ≠ 0 then we set
$$A^0 = I.$$
The transpose of a matrix. The transpose A^T of an m × n matrix A is the n × m matrix whose rows are formed from the corresponding columns of A:
$$\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}^T = \begin{pmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{pmatrix}.$$
Theorem 2.1.3: Properties of transpose. Let A and B denote matrices whose sizes are appropriate for the following sums and products. Then we have:
1. (A^T)^T = A
2. (A + B)^T = A^T + B^T
3. (cA)^T = c A^T
4. (AB)^T = B^T A^T
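One more standard property of the transpose, (AB)^T = B^T A^T, with its characteristic reversal of order, is easy to confirm on an example; a throwaway sketch with made-up matrices:

```python
def transpose(M):
    return [[M[i][j] for i in range(len(M))] for j in range(len(M[0]))]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 3],
     [4, 5, 6]]          # 2 x 3
B = [[1, 0],
     [0, 1],
     [2, 2]]             # 3 x 2

# note the reversed order of the factors on the right-hand side
assert transpose(mat_mul(A, B)) == mat_mul(transpose(B), transpose(A))
```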
$$S(x) = A^{-1} x$$
is the only transformation satisfying
$$S(T(x)) = x \quad\text{for all } x \in \mathbb{R}^n$$
and
$$T(S(x)) = x \quad\text{for all } x \in \mathbb{R}^n.$$
$$A = \begin{pmatrix} 1 & 2 & a \\ 3 & 4 & b \\ p & q & z \end{pmatrix} = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}$$
A is a 3 × 3 matrix which can be viewed as a 2 × 2 partitioned (or block) matrix with blocks
$$A_{11} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \quad A_{12} = \begin{pmatrix} a \\ b \end{pmatrix}, \quad A_{21} = \begin{pmatrix} p & q \end{pmatrix}, \quad A_{22} = \begin{pmatrix} z \end{pmatrix}.$$
Addition of partitioned matrices. If
matrices A and B are of the same size
and partitioned the same way, they can
be added block-by-block.
Similarly, partitioned matrices can be
multiplied by a scalar blockwise.
Multiplication of partitioned matrices. If the column partition of A matches the row partition of B, then AB can be computed by the usual row-column rule, with blocks treated as matrix entries.
Example.
$$A = \begin{pmatrix} 1 & 2 & a \\ 3 & 4 & b \\ p & q & z \end{pmatrix} = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}, \quad B = \begin{pmatrix} B_1 \\ B_2 \end{pmatrix},$$
where B1 consists of the first two rows of B and B2 is its last row. Then
$$AB = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} \begin{pmatrix} B_1 \\ B_2 \end{pmatrix} = \begin{pmatrix} A_{11} B_1 + A_{12} B_2 \\ A_{21} B_1 + A_{22} B_2 \end{pmatrix} = \begin{pmatrix} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} B_1 + \begin{pmatrix} a \\ b \end{pmatrix} B_2 \\[4pt] \begin{pmatrix} p & q \end{pmatrix} B_1 + z B_2 \end{pmatrix}.$$
Column-row expansion.
$$AB = \begin{pmatrix} \mathrm{col}_1(A) & \mathrm{col}_2(A) & \cdots & \mathrm{col}_n(A) \end{pmatrix} \begin{pmatrix} \mathrm{row}_1(B) \\ \mathrm{row}_2(B) \\ \vdots \\ \mathrm{row}_n(B) \end{pmatrix} = \mathrm{col}_1(A)\,\mathrm{row}_1(B) + \cdots + \mathrm{col}_n(A)\,\mathrm{row}_n(B).$$
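The column-row expansion writes AB as a sum of column-times-row products; a small sketch verifying it on made-up 2 × 2 matrices:

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def outer(col, row):
    """col_k(A) row_k(B): an (m x 1) times (1 x p) product, an m x p matrix."""
    return [[c * r for r in row] for c in col]

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]

cols_A = [[row[k] for row in A] for k in range(2)]
terms = [outer(cols_A[k], B[k]) for k in range(2)]
col_row_sum = [[terms[0][i][j] + terms[1][i][j] for j in range(2)]
               for i in range(2)]
assert col_row_sum == mat_mul(A, B)
```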
(a) 0 ∈ H.

The set of all vectors x such that
$$Ax = 0$$
(the null space of A) is a subspace of Rn.

Let
$$\mathcal{B} = \{ b_1, \ldots, b_p \}$$
be a basis for a subspace H. For each x ∈ H, the coordinates of x relative to the basis B are the weights c1, …, cp such that
$$x = c_1 b_1 + \cdots + c_p b_p,$$
and the coordinate vector of x is
$$[x]_{\mathcal{B}} = \begin{pmatrix} c_1 \\ \vdots \\ c_p \end{pmatrix}.$$

Dimension. The dimension of a nonzero subspace H, denoted by dim H, is the number of vectors in any basis of H.
$$\det \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} = a_{11} a_{22} - a_{12} a_{21}.$$
The determinant of a 3 × 3 matrix
$$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$$
is
$$a_{11} \det \begin{pmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{pmatrix} - a_{12} \det \begin{pmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{pmatrix} + a_{13} \det \begin{pmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{pmatrix}.$$
For example, the determinant of
$$A = \begin{pmatrix} 2 & 0 & 7 \\ 0 & 1 & 0 \\ 1 & 0 & 4 \end{pmatrix}$$
equals
$$2 \det \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix} + 7 \det \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = 2 \cdot 4 + 7 \cdot (-1) = 8 - 7 = 1.$$

Submatrices. By definition, the submatrix Aij is obtained from the matrix A by crossing out row i and column j. For example, if
$$A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix},$$
then
$$A_{22} = \begin{pmatrix} 1 & 3 \\ 7 & 9 \end{pmatrix}, \quad A_{31} = \begin{pmatrix} 2 & 3 \\ 5 & 6 \end{pmatrix}.$$

Cofactor expansion across the first row:
$$\det A = a_{11} \det A_{11} - a_{12} \det A_{12} + \cdots + (-1)^{1+n} a_{1n} \det A_{1n} = \sum_{j=1}^{n} (-1)^{1+j} a_{1j} \det A_{1j}.$$
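Cofactor expansion is naturally recursive, since each det A1j is itself a smaller determinant; a compact (and deliberately naive, O(n!)) sketch:

```python
def det(M):
    """Determinant by cofactor expansion across the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # submatrix M_1j: cross out row 1 and column j
        sub = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(sub)
    return total

A = [[2, 0, 7],
     [0, 1, 0],
     [1, 0, 4]]
assert det(A) == 1  # matches the hand computation above
```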
Consider the lower triangular matrix
$$A = \begin{pmatrix} a_{11} & 0 & 0 & 0 & 0 \\ a_{21} & a_{22} & 0 & 0 & 0 \\ a_{31} & a_{32} & a_{33} & 0 & 0 \\ a_{41} & a_{42} & a_{43} & a_{44} & 0 \\ a_{51} & a_{52} & a_{53} & a_{54} & a_{55} \end{pmatrix}.$$
All entries in the first row of A, with the possible exception of a11, are zeroes:
$$a_{12} = \cdots = a_{15} = 0.$$
(Recall that the signs (−1)^{i+j} in the cofactor expansion form a chessboard pattern of alternating + and − signs, starting with + in the top left corner.)
Theorem 3.1.2: The determinant of a triangular matrix. If A is a triangular n × n matrix, then det A is the product of the diagonal entries of A.

Proof: This proof contains more details than the one given in the textbook. Expanding across the first row, we get
$$\det A = a_{11} \det \begin{pmatrix} a_{22} & 0 & 0 & 0 \\ a_{32} & a_{33} & 0 & 0 \\ a_{42} & a_{43} & a_{44} & 0 \\ a_{52} & a_{53} & a_{54} & a_{55} \end{pmatrix}.$$
But the smaller matrix A11 is also lower triangular, and therefore we can conclude by induction that
$$\det \begin{pmatrix} a_{22} & 0 & 0 & 0 \\ a_{32} & a_{33} & 0 & 0 \\ a_{42} & a_{43} & a_{44} & 0 \\ a_{52} & a_{53} & a_{54} & a_{55} \end{pmatrix} = a_{22} \cdots a_{55},$$
hence
$$\det A = a_{11} (a_{22} \cdots a_{55}) = a_{11} a_{22} \cdots a_{55}$$
is the product of the diagonal entries of A.

The basis of induction is the case n = 2; of course, in that case
$$\det \begin{pmatrix} a_{11} & 0 \\ a_{21} & a_{22} \end{pmatrix} = a_{11} a_{22} - 0 \cdot a_{21} = a_{11} a_{22}.$$
In particular, for a diagonal matrix,
$$\det \begin{pmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & d_n \end{pmatrix} = d_1 d_2 \cdots d_n.$$
Example. Let
$$A = \begin{pmatrix} 1 & 2 & 0 \\ 0 & 1 & 0 \\ 0 & 3 & 4 \end{pmatrix};$$
then
$$\det A = 1 \cdot \det \begin{pmatrix} 1 & 0 \\ 3 & 4 \end{pmatrix} - 2 \cdot \det \begin{pmatrix} 0 & 0 \\ 0 & 4 \end{pmatrix} + 0 \cdot \det \begin{pmatrix} 0 & 1 \\ 0 & 3 \end{pmatrix} = 1 \cdot 4 - 2 \cdot 0 + 0 \cdot 0 = 4.$$
Similarly,
$$A^T = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 3 \\ 0 & 0 & 4 \end{pmatrix}$$
and
$$\det A^T = 1 \cdot \det \begin{pmatrix} 1 & 3 \\ 0 & 4 \end{pmatrix} + 0 + 0 = 1 \cdot 4 = 4.$$

Example. Earlier we computed
$$\det A = \det \begin{pmatrix} 2 & 0 & 7 \\ 0 & 1 & 0 \\ 1 & 0 & 4 \end{pmatrix}$$
by expansion across the first row. Now we do it by expansion across the third column:
$$\begin{aligned} \det A &= 7 \cdot (-1)^{1+3} \det \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} + 0 \cdot (-1)^{2+3} \det \begin{pmatrix} 2 & 0 \\ 1 & 0 \end{pmatrix} + 4 \cdot (-1)^{3+3} \det \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix} \\ &= -7 - 0 + 8 \\ &= 1. \end{aligned}$$
then
$$\det B = \det A.$$
(c) If one column of A is multiplied by k to produce B, then
$$\det B = k \det A.$$
Proof: Column operations on A are row operations on A^T, which has the same determinant as A.

Computing determinants. In computations by hand, the quickest method of computing determinants is to work with both columns and rows:
Example.
$$\det \begin{pmatrix} 1 & 0 & 3 \\ 1 & 3 & 1 \\ 1 & 1 & -1 \end{pmatrix} \overset{C_3 - 3C_1}{=} \det \begin{pmatrix} 1 & 0 & 0 \\ 1 & 3 & -2 \\ 1 & 1 & -4 \end{pmatrix} \overset{-2\ \text{out of}\ C_3}{=} -2 \det \begin{pmatrix} 1 & 0 & 0 \\ 1 & 3 & 1 \\ 1 & 1 & 2 \end{pmatrix}$$
and, expanding across the first row,
$$= -2 \cdot 1 \cdot (-1)^{1+1} \det \begin{pmatrix} 3 & 1 \\ 1 & 2 \end{pmatrix} = -2 \cdot (3 \cdot 2 - 1 \cdot 1) = -2 \cdot 5 = -10.$$
(Alternatively, finish with row operations: R1 − 3R2 turns det(3 1; 1 2) into det(0 −5; 1 2), and interchanging the rows gives −det(1 2; 0 −5) = 5, so again the answer is −2 · 5 = −10.)
The same technique handles larger determinants: combine row operations such as R3 + R2 and R4 + R3 with cofactor expansion across the 1st column, keeping track of the sign (−1)^{i+j} of each cofactor.

Question: the submatrices appearing in the cofactor expansion of a matrix of size n × n are of size (n − 1) × (n − 1). But how can one determine the correct sign?

And one more exercise: you should now be able to instantly compute
$$\det \begin{pmatrix} 1 & 2 & 3 & 4 & 0 \\ 1 & 2 & 3 & 0 & 5 \\ 1 & 2 & 0 & 4 & 5 \\ 1 & 0 & 3 & 4 & 5 \\ 0 & 2 & 3 & 4 & 5 \end{pmatrix}.$$
After expansion across the 1st column,
$$\begin{aligned} (-1)\cdot(-1)^{3+1} \det \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ -1 & -1 & 2 \end{pmatrix} &\overset{R_3 + R_2}{=} -\det \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 0 & -1 & 3 \end{pmatrix} \\ &= -1\cdot(-1)^{2+1} \det \begin{pmatrix} 1 & 1 \\ -1 & 3 \end{pmatrix} \\ &= \det \begin{pmatrix} 1 & 1 \\ -1 & 3 \end{pmatrix} \overset{R_2 + R_1}{=} \det \begin{pmatrix} 1 & 1 \\ 0 & 4 \end{pmatrix} = 4. \end{aligned}$$

Theorem 3.2.4: Invertible matrices. A square matrix A is invertible if and only if
$$\det A \ne 0.$$
Recall that this theorem has been already known to us in the case of 2 × 2 matrices A.

Example. The matrix
$$A = \begin{pmatrix} 2 & 0 & 7 \\ 0 & 1 & 0 \\ 1 & 0 & 4 \end{pmatrix}$$
has the determinant
$$\det A = 2 \det \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix} + 7 \det \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = 2 \cdot 4 + 7 \cdot (-1) = 8 - 7 = 1,$$
hence A is invertible.
Moreover,
$$A^{-1} = \begin{pmatrix} 4 & 0 & -7 \\ 0 & 1 & 0 \\ -1 & 0 & 2 \end{pmatrix}.$$
Indeed, one checks directly that A A^{-1} = A^{-1} A = I3.
Example. Take
$$A = \begin{pmatrix} 1 & 0 \\ 3 & 2 \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} 1 & 5 \\ 0 & 1 \end{pmatrix};$$
then
$$\det A = 2 \quad\text{and}\quad \det B = 1.$$
But
$$AB = \begin{pmatrix} 1 & 0 \\ 3 & 2 \end{pmatrix} \begin{pmatrix} 1 & 5 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 5 \\ 3 & 17 \end{pmatrix}$$
and
$$\det(AB) = 1 \cdot 17 - 5 \cdot 3 = 2,$$
which is exactly the value of det A · det B. Check!

Corollary. If A is invertible, then
$$\det(A^{-1}) = \frac{1}{\det A}.$$
Cramer's rule. If A is an invertible n × n matrix, the unique solution of Ax = b is given by
$$x_i = \frac{\det A_i(b)}{\det A}, \quad i = 1, 2, \ldots, n,$$
where Ai(b) denotes the matrix obtained from A by replacing its i-th column by b.
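Cramer's rule, although inefficient for large systems, is a two-liner in the 2 × 2 case; a sketch with made-up data:

```python
from fractions import Fraction as F

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def cramer_2x2(A, b):
    """Solve Ax = b for an invertible 2x2 matrix A via Cramer's rule."""
    d = det2(A)
    A1 = [[b[0], A[0][1]], [b[1], A[1][1]]]  # replace column 1 by b
    A2 = [[A[0][0], b[0]], [A[1][0], b[1]]]  # replace column 2 by b
    return [F(det2(A1), d), F(det2(A2), d)]

A = [[1, 2],
     [3, 4]]
b = [5, 6]
x = cramer_2x2(A, b)
assert x == [F(-4), F(9, 2)]
# check: substituting back gives Ax = b
assert [A[0][0] * x[0] + A[0][1] * x[1],
        A[1][0] * x[0] + A[1][1] * x[1]] == b
```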
$$A^{-1} = \frac{1}{\det A} \begin{pmatrix} C_{11} & C_{21} & \cdots & C_{n1} \\ C_{12} & C_{22} & \cdots & C_{n2} \\ \vdots & \vdots & & \vdots \\ C_{1n} & C_{2n} & \cdots & C_{nn} \end{pmatrix},$$
where Cij = (−1)^{i+j} det Aij are the cofactors of A; note the transposed order of the indices.
For a 2 × 2 matrix
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$
the cofactors are
$$\begin{aligned} C_{11} &= (-1)^{1+1} d = d, \\ C_{21} &= (-1)^{2+1} b = -b, \\ C_{12} &= (-1)^{1+2} c = -c, \\ C_{22} &= (-1)^{2+2} a = a, \end{aligned}$$
and we recover the familiar formula
$$A^{-1} = \frac{1}{\det A} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$
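The 2 × 2 inverse formula is easy to test with exact arithmetic; a sketch with a made-up matrix:

```python
from fractions import Fraction as F

def inverse_2x2(A):
    """(1/det A) * (d -b; -c a), assuming det A != 0."""
    (a, b), (c, d) = A
    det = F(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1, 2],
     [3, 4]]
inv = inverse_2x2(A)
# the product A * A^{-1} should be the identity matrix
prod = [[sum(A[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[1, 0], [0, 1]]
```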
Quiz. Compute
$$\det \begin{pmatrix} 1 & 2 & 3 \\ 0 & 2 & 3 \\ 1 & 2 & 6 \end{pmatrix} \quad\text{and}\quad \det \begin{pmatrix} 0 & 0 & 71 \\ 0 & 93 & 0 \\ 87 & 0 & 0 \end{pmatrix}.$$
Eigenvectors. An eigenvector of an n × n matrix A is a nonzero vector x such that
$$Ax = \lambda x$$
for some scalar λ.

Eigenvalues. A scalar λ is called an eigenvalue of A if there is a non-trivial solution x of Ax = λx; the set of all solutions of
$$(A - \lambda I)x = 0$$
is a subspace of Rn called the eigenspace of A corresponding to the eigenvalue λ.

Theorem: Linear independence of eigenvectors. If v1, …, vp are eigenvectors that correspond to distinct eigenvalues λ1, …, λp of an n × n matrix A, then the set
$$\{ v_1, \ldots, v_p \}$$
is linearly independent.
The equation
$$\det(A - \lambda I) = 0$$
is the characteristic equation for A.

Characterisation of eigenvalues. Theorem. A scalar λ is an eigenvalue of an n × n matrix A if and only if λ satisfies the characteristic equation
$$\det(A - \lambda I) = 0.$$

Zero as an eigenvalue. λ = 0 is an eigenvalue of A if and only if det A = 0.
For a triangular matrix
$$A = \begin{pmatrix} d_1 & a & b \\ 0 & d_2 & c \\ 0 & 0 & d_3 \end{pmatrix}$$
its characteristic polynomial det(A − λI) equals
$$\det \begin{pmatrix} d_1 - \lambda & a & b \\ 0 & d_2 - \lambda & c \\ 0 & 0 & d_3 - \lambda \end{pmatrix}$$
and therefore
$$\det(A - \lambda I) = (d_1 - \lambda)(d_2 - \lambda)(d_3 - \lambda).$$
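So for a triangular matrix the eigenvalues are exactly the diagonal entries. This can be checked numerically; a sketch with a made-up triangular matrix:

```python
def det3(M):
    """Determinant of a 3x3 matrix by the first-row cofactor expansion."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# hypothetical upper triangular matrix with d1 = 1, d2 = 4, d3 = 6
A = [[1, 2, 3],
     [0, 4, 5],
     [0, 0, 6]]

def char_poly(lam):
    """det(A - lam * I)."""
    return det3([[A[i][j] - (lam if i == j else 0) for j in range(3)]
                 for i in range(3)])

# the diagonal entries are exactly the roots of the characteristic equation
for lam in (1, 4, 6):
    assert char_poly(lam) == 0
assert char_poly(2) != 0
```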
$$A = PDP^{-1}$$

Similarity is an equivalence relation:
reflexive: A ~ A;
symmetric: A ~ B implies B ~ A;
transitive: A ~ B and B ~ C implies A ~ C.
Conjugation. The operation
$$A^P = P^{-1} A P$$
is called conjugation of A by P.

Properties of conjugation:
$$I^P = I$$
$$(A + B)^P = A^P + B^P$$
$$(AB)^P = A^P B^P; \ \text{as a corollary, } (A^k)^P = (A^P)^k \text{ and } (A^{-1})^P = (A^P)^{-1}$$
$$(A^P)^Q = A^{PQ}$$
$$(cA)^P = c\,A^P$$
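The multiplicativity of conjugation is a one-line cancellation, P^{-1}ABP = (P^{-1}AP)(P^{-1}BP). A quick numerical sketch with made-up 2 × 2 matrices:

```python
from fractions import Fraction as F

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inverse_2x2(A):
    (a, b), (c, d) = A
    det = F(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

def conj(A, P):
    """A^P = P^{-1} A P."""
    return mat_mul(mat_mul(inverse_2x2(P), A), P)

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
P = [[1, 1], [0, 1]]

# (AB)^P = A^P B^P
assert conj(mat_mul(A, B), P) == mat_mul(conj(A, P), conj(B, P))
```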
Definitions
Most concepts introduced in the previous lectures for the vector spaces Rn can
be transferred to arbitrary vector spaces.
Let V be a vector space.
Given vectors v1 , v2 , . . . , vp in V and
scalars c1 , c2 , . . . , cp , the vector
$$y = c_1 v_1 + \cdots + c_p v_p$$
is called a linear combination of v1, v2, …, vp with weights c1, c2, …, cp.
If v1, …, vp are in V, then the set of all linear combinations of v1, …, vp is denoted by Span{v1, …, vp} and is called the subset of V spanned (or generated) by v1, …, vp.

That is, Span{v1, …, vp} is the collection of all vectors which can be written in the form
$$c_1 v_1 + c_2 v_2 + \cdots + c_p v_p$$
with c1, …, cp scalars.
If B = {b1, …, bn} is a basis of V, then every x in V can be written as
$$x = x_1 b_1 + \cdots + x_n b_n;$$
the scalars x1, …, xn are called coordinates of x with respect to the basis B.

A transformation T from a vector space V to a vector space W is a rule that assigns to each vector x in V a vector T(x) in W. The set V is called the domain of T, the set W is the codomain of T.

A transformation
$$T : V \to W$$
is linear if:
T(u + v) = T(u) + T(v) for all vectors u, v in V;
T(cu) = cT(u) for all vectors u and all scalars c.

If
$$T : V \to W$$
is a linear transformation, its kernel
$$\operatorname{Ker} T = \{ x \in V : T(x) = 0 \}$$
is a subspace of V, while its image
$$\operatorname{Im} T = \{ y \in W : T(x) = y \text{ for some } x \in V \}$$
is a subspace of W.
MATH10212 Linear Algebra B. Lectures 25-27: Symmetric matrices and inner product
A matrix A is symmetric if A^T = A; for example,
$$A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 3 & 4 \\ 3 & 4 & 5 \end{pmatrix}.$$
Notice that symmetric matrices are necessarily square.
Inner (or dot) product. For vectors u, v in Rn their inner product (also called scalar product or dot product) u · v is defined as
$$u \cdot v = u^T v = \begin{pmatrix} u_1 & u_2 & \cdots & u_n \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix} = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n.$$
(a) u · v = v · u
(b) (u + v) · w = u · w + v · w
(c) (cu) · v = c(u · v)
(d) u · u ≥ 0, and u · u = 0 if and only if u = 0
$$\|v\| = \sqrt{v \cdot v} = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2}.$$
In particular,
$$\|v\|^2 = v \cdot v.$$
$$\begin{aligned} (u + v) \cdot (u + v) &= u \cdot u + u \cdot v + v \cdot u + v \cdot v \\ &= u \cdot u + u \cdot v + u \cdot v + v \cdot v \\ &= u \cdot u + 2\, u \cdot v + v \cdot v \\ &= \|u\|^2 + 2\, u \cdot v + \|v\|^2. \end{aligned}$$
An orthogonal set. A set of vectors
$$\{ u_1, \ldots, u_p \}$$
in Rn is orthogonal if
$$u_i \cdot u_j = 0 \quad\text{whenever } i \ne j.$$

Theorem 6.2.4. If
$$S = \{ u_1, \ldots, u_p \}$$
is an orthogonal set of nonzero vectors, then S is linearly independent.

Proof. Assume the contrary: there is a linear dependence
$$c_1 u_1 + \cdots + c_p u_p = 0$$
with some weight ci non-zero. Taking the inner product of both sides with ui, we get
$$c_i\, u_i \cdot u_i = 0.$$
Since ui is non-zero,
$$u_i \cdot u_i \ne 0$$
and therefore ci = 0. This argument works for every index i = 1, …, p. We get a contradiction with our assumption that one of the ci is non-zero.

Theorem. If v1, …, vp are eigenvectors of a symmetric matrix corresponding to distinct eigenvalues λ1, …, λp, then
$$\{ v_1, \ldots, v_p \}$$
is an orthogonal set.

Proof. We need to prove the following:
if u and v are eigenvectors of a symmetric matrix A for distinct eigenvalues λ and μ, so that Au = λu and Av = μv, then u · v = 0. Consider the following sequence of equalities:
$$\begin{aligned} \lambda (u \cdot v) &= (\lambda u) \cdot v \\ &= (Au) \cdot v \\ &= (Au)^T v \\ &= (u^T A^T) v \\ &= (u^T A) v \\ &= u^T (Av) \\ &= u^T (\mu v) \\ &= \mu\, u^T v \\ &= \mu (u \cdot v). \end{aligned}$$
Hence
$$\lambda\, u \cdot v = \mu\, u \cdot v$$
and
$$(\lambda - \mu)\, u \cdot v = 0.$$
Since λ − μ ≠ 0, we have u · v = 0.
Corollary. If A is an n × n symmetric matrix with n distinct eigenvalues
$$\lambda_1, \ldots, \lambda_n,$$
then the corresponding eigenvectors
$$v_1, \ldots, v_n$$
form an orthogonal basis of Rn.

An orthonormal basis in Rn is an orthogonal basis u1, …, un made of unit vectors:
$$\|u_i\| = 1 \quad\text{for all } i = 1, 2, \ldots, n.$$

Theorem. If u1, …, un is an orthonormal basis of Rn and
$$v = x_1 u_1 + \cdots + x_n u_n,$$
then
$$x_i = v \cdot u_i \quad\text{for all } i = 1, 2, \ldots, n.$$

Proof. Let v = x1 u1 + · · · + xn un; then, for i = 1, 2, …, n,
$$v \cdot u_i = x_1\, u_1 \cdot u_i + \cdots + x_n\, u_n \cdot u_i = x_i\, u_i \cdot u_i = x_i.$$

Theorem: Eigenvectors of symmetric matrices. Let A be a symmetric n × n matrix with n distinct eigenvalues
$$\lambda_1, \ldots, \lambda_n.$$
Then Rn has an orthonormal basis made of eigenvectors of A.
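The coordinate formula xi = v · ui can be tested directly; a sketch with a made-up orthonormal basis of R² (a rotated standard basis):

```python
from fractions import Fraction as F

# hypothetical orthonormal basis of R^2
u1 = [F(3, 5), F(4, 5)]
u2 = [F(-4, 5), F(3, 5)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# the basis really is orthonormal
assert dot(u1, u1) == 1 and dot(u2, u2) == 1 and dot(u1, u2) == 0

v = [F(1), F(2)]
# coordinates with respect to the orthonormal basis: x_i = v . u_i
x1, x2 = dot(v, u1), dot(v, u2)
# reconstruct v from its coordinates
assert [x1 * u1[i] + x2 * u2[i] for i in range(2)] == v
```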
1. u · v = v · u
2. (u + v) · w = u · w + v · w
3. (cu) · v = c(u · v)
4. u · u ≥ 0, and u · u = 0 if and only if u = 0
Inner product space. A vector space
with an inner product is called an inner
product space.
Example: School Geometry. The ordinary Euclidean plane of school geometry is an inner product space:

Vectors: directed segments starting at the origin O.

Dot product:
$$u \cdot v = \|u\| \|v\| \cos \theta,$$
where θ is the angle between the vectors u and v.

Example: C[0, 1]. The vector space C[0, 1] of real-valued continuous functions on the segment [0, 1] becomes an inner product space if we define the inner product by the formula
$$f \cdot g = \int_0^1 f(x) g(x)\, dx.$$
The only axiom that needs care is axiom 4: if
$$f \cdot f = \int_0^1 f(x)^2\, dx = 0,$$
then f(x) = 0 for all x in [0, 1]. This requires the use of properties of continuous functions; since we study linear algebra, not analysis, I am leaving it to the readers as an exercise.

Length. The length of the vector u is defined as
$$\|u\| = \sqrt{u \cdot u}.$$
Notice that
$$\|u\|^2 = u \cdot u,$$
and ‖u‖ = 0 if and only if u = 0.
$$\begin{aligned} (u + v) \cdot (u + v) &= u \cdot u + u \cdot v + v \cdot u + v \cdot v \\ &= u \cdot u + 2\, u \cdot v + v \cdot v \\ &= \|u\|^2 + 2\, u \cdot v + \|v\|^2. \end{aligned}$$
Hence
$$|u \cdot v| \le \|u\| \|v\|.$$

Pythagorean theorem:
$$\|u + v\|^2 = \|u\|^2 + \|v\|^2$$
if and only if
$$u \cdot v = 0.$$

The Cauchy-Schwarz Inequality: an example. In the vector space Rn with the dot product of
$$u = \begin{pmatrix} u_1 \\ \vdots \\ u_n \end{pmatrix} \quad\text{and}\quad v = \begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix}$$
defined as
$$u \cdot v = u^T v = u_1 v_1 + \cdots + u_n v_n,$$
the Cauchy-Schwarz inequality
$$|u \cdot v| \le \|u\| \|v\|$$
takes the form
$$|u_1 v_1 + \cdots + u_n v_n| \le \sqrt{u_1^2 + \cdots + u_n^2}\; \sqrt{v_1^2 + \cdots + v_n^2}.$$
In the inner product space C[0, 1] the Cauchy-Schwarz inequality becomes
$$\left| \int_0^1 f(x) g(x)\, dx \right| \le \sqrt{\int_0^1 f(x)^2\, dx}\; \sqrt{\int_0^1 g(x)^2\, dx},$$
or, equivalently,
$$\left( \int_0^1 f(x) g(x)\, dx \right)^2 \le \left( \int_0^1 f(x)^2\, dx \right) \left( \int_0^1 g(x)^2\, dx \right).$$
Let
$$q(t) = at^2 + bt + c$$
be a quadratic function in the variable t with the property that
$$q(t) \ge 0 \quad\text{for all } t;$$
then b² − 4ac ≤ 0. Applying this to
$$q(t) = \|tu + v\|^2 = t^2 \|u\|^2 + 2t\, u \cdot v + \|v\|^2 \ge 0$$
yields
$$|u \cdot v| \le \|u\| \|v\|.$$

Distance. We define the distance between vectors u and v as
$$d(u, v) = \|u - v\|.$$
It satisfies the axioms of a metric:
1. d(u, v) = d(v, u).
2. d(u, v) ≥ 0.
3. d(u, v) = 0 iff u = v.
For an orthogonal basis v1, …, vn of the inner product space V,
$$u = c_1 v_1 + \cdots + c_n v_n \quad\text{iff}\quad c_i = \frac{u \cdot v_i}{v_i \cdot v_i}.$$

Orthonormal basis. A basis v1, …, vn in the inner product space V is orthonormal if
$$v_i \cdot v_j = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \ne j. \end{cases}$$