
Chapter 3: The Image, its Mathematical and Physical Background


3.1. Overview
3.2. Linear Integral Transforms
3.3. Images as Stochastic Processes
3.4. Image Formation Physics

3-0

3.1. Overview
3.1.1. Linearity
Let L be an operator, mapping, function, or process
(a) Additivity: L(x + y) = L(x) + L(y)
(b) Homogeneity: L(ax) = aL(x)
(c) Linearity: L(ax + by) = aL(x) + bL(y)
a, b: scalars
x, y: elements of a vector space, e.g., vectors, functions

3-1

3.1.2. Dirac Delta Function


Heaviside (unit step) function:

H(t) = 0 for t < 0,  1 for t ≥ 0

H(t − a) = 0 for t < a,  1 for t ≥ a,   a > 0

Pulse: H(t − a) − H(t − b) = 1 for a ≤ t < b,  0 otherwise,   a < b

Impulse: δ_τ(t) = (1/τ)[H(t) − H(t − τ)]

Dirac delta function: δ(t) = lim_{τ→0} δ_τ(t)

∫_{−∞}^{∞} δ(t) dt = 1,   δ(t) = 0 for t ≠ 0

3-2

2D delta function:

∫∫ δ(x, y) dx dy = 1,   δ(x, y) = 0 for (x, y) ≠ (0, 0)

Sampling (sifting) property:

∫∫ f(x, y) δ(x − a, y − b) dx dy = f(a, b)

Image function:

f(x, y) = ∫∫ f(a, b) δ(a − x, b − y) da db
3-3


Assignment: Show f(x) = ∫ f(a) δ(a − x) da

3.1.3. Convolution (∗)

h(x) = (f ∗ g)(x) = ∫ f(τ) g(x − τ) dτ = ∫ f(x − τ) g(τ) dτ
3-4

Discrete case:

h(k) = Σ_{n=0}^{N−1} f_e(n) g_e(k − n) = Σ_{n=0}^{N−1} f_e(k − n) g_e(n)

f_e, g_e: extended (zero-padded) f, g of lengths A and B

If N < A + B − 1, wraparound error occurs; choose N ≥ A + B − 1.
3-6
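Below is a minimal numerical sketch of the discrete convolution above (assuming NumPy; the signals f, g and the lengths are arbitrary illustration choices). Zero-padding both sequences to N ≥ A + B − 1 makes the circular sum equal the linear convolution, while a shorter N shows the wraparound error.

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0])          # length A = 3
g = np.array([1.0, 1.0, 1.0, 1.0])     # length B = 4

# Linear convolution: h(k) = sum_n f_e(n) g_e(k - n), with N >= A + B - 1 = 6
N = len(f) + len(g) - 1
fe = np.pad(f, (0, N - len(f)))        # extended (zero-padded) sequences
ge = np.pad(g, (0, N - len(g)))
h = np.array([sum(fe[n] * ge[(k - n) % N] for n in range(N)) for k in range(N)])
print(h)                                # [1. 3. 6. 6. 5. 3.] == np.convolve(f, g)

# Circular convolution with N = 4 < A + B - 1: wraparound error
N4 = 4
fe4, ge4 = np.pad(f, (0, N4 - len(f))), g.copy()
h4 = np.array([sum(fe4[n] * ge4[(k - n) % N4] for n in range(N4)) for k in range(N4)])
print(h4)                               # [6. 6. 6. 6.]: the tail wraps onto the head
```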

In practice, convolution with a small mask m(s, t):

p'(x, y) = Σ_{s=−1}^{1} Σ_{t=−2}^{2} m(s, t) p(x − s, y − t)
         = m(−1, −2) p(x + 1, y + 2) + m(−1, −1) p(x + 1, y + 1) + ⋯ + m(1, 2) p(x − 1, y − 2)
5-7

Properties: f ∗ g = g ∗ f,   f ∗ (g ∗ h) = (f ∗ g) ∗ h
f ∗ (g + h) = f ∗ g + f ∗ h
a(f ∗ g) = (af) ∗ g = f ∗ (ag)
(f ∗ g)′ = f′ ∗ g = f ∗ g′

Assignment: show (f ∗ g)′ = f′ ∗ g = f ∗ g′
2-D convolution:

(f ∗ g)(x, y) = ∫∫ f(a, b) g(x − a, y − b) da db
             = ∫∫ f(x − a, y − b) g(a, b) da db = (g ∗ f)(x, y)

Discrete: h(i, j) = Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} f_e(m, n) g_e(i − m, j − n)

3-8

Correlation (∘):

(f ∘ g)(x) = ∫ f(τ) g(x + τ) dτ = ∫ f(x + τ) g(τ) dτ

Discrete: (f_e ∘ g_e)(k) = Σ_{n=0}^{N−1} f_e(n) g_e(k + n),   N ≥ A + B − 1

3-9
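A short NumPy sketch of the discrete correlation above (the test signals are arbitrary); it also checks that correlating f with g equals convolving the index-reversed f with g.

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0])                     # length A = 3
g = np.array([0.0, 1.0, 0.5])                     # length B = 3
N = len(f) + len(g) - 1                           # N >= A + B - 1 avoids wraparound
fe, ge = np.pad(f, (0, N - len(f))), np.pad(g, (0, N - len(g)))

# Direct evaluation of (f o g)(k) = sum_n f_e(n) g_e(k + n)
corr = np.array([sum(fe[n] * ge[(k + n) % N] for n in range(N)) for k in range(N)])

# Correlation equals convolution with a circularly reversed first signal
fr = np.array([fe[(-n) % N] for n in range(N)])
conv = np.array([sum(fr[n] * ge[(k - n) % N] for n in range(N)) for k in range(N)])
print(np.allclose(corr, conv))                    # True
```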

3.2. Linear Integral Transforms


Transform: domain 1 → domain 2
e.g., spatial domain → frequency domain

Advantages: implicit properties become explicit

3.2.3. Fourier Transform


Fourier analysis = Fourier series (periodic functions) + Fourier transform (non-periodic functions)

3-10

Fourier series -- A periodic (period T) function f(x) can be written as

f(x) = a_0/2 + Σ_{n=1}^{∞} [a_n cos(nωx) + b_n sin(nωx)]

where ω = 2π/T,
a_n = (2/T) ∫_{−T/2}^{T/2} f(x) cos(nωx) dx,   n = 0, 1, 2, …
b_n = (2/T) ∫_{−T/2}^{T/2} f(x) sin(nωx) dx,   n = 1, 2, …

In complex form, f(x) = Σ_{n=−∞}^{∞} c_n exp(jnωx), where

c_n = (1/T) ∫_{−T/2}^{T/2} f(x) exp(−jnωx) dx   (Assignment)

In the continuous (non-periodic) case,

f(x) = ∫_0^{∞} [a(ξ) cos 2πξx + b(ξ) sin 2πξx] dξ = ∫_{−∞}^{∞} c(ξ) e^{j2πξx} dξ

3-11

Some functions are formed by a finite number of sinusoidal components, e.g.,

f(x) = sin x + (1/3) sin 2x + (1/5) sin 4x

Some functions require an infinite number of sinusoidal components, e.g.,

f(x) = sin x + (1/3) sin 3x + (1/5) sin 5x + (1/7) sin 7x + (1/9) sin 9x + ⋯
3-12
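A small NumPy sketch of the second series above: partial sums of the odd-harmonic sine series approach a square wave (the 4/π factor is the standard Fourier-series amplitude for a ±1 square wave; the test setup is only for illustration).

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 1000)
square = np.sign(np.sin(x))                    # target: a square wave of period 2*pi

def partial_sum(x, n_terms):
    """Sum of the first n_terms odd harmonics: sin x + sin(3x)/3 + sin(5x)/5 + ..."""
    k = np.arange(1, 2 * n_terms, 2)           # odd harmonics 1, 3, 5, ...
    return np.sum(np.sin(np.outer(k, x)) / k[:, None], axis=0)

for n in (1, 3, 10, 100):
    err = np.sqrt(np.mean(((4 / np.pi) * partial_sum(x, n) - square) ** 2))
    print(n, err)                               # RMS error shrinks as harmonics are added
```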

The spectrum of a periodic function (composed of sinusoidal components) is discrete, consisting of components at dc, 1/T, and its multiples.

For non-periodic functions, T → ∞, i.e., 1/T → 0, and the spectrum of the function becomes continuous.
3-13

Fourier transform (FT)

1-D:  F{f(t)} = F(ξ) = ∫_{−∞}^{∞} f(t) e^{−2πjξt} dt

      F⁻¹{F(ξ)} = f(t) = ∫_{−∞}^{∞} F(ξ) e^{2πjξt} dξ

e^{−j2πξt} = cos 2πξt − j sin 2πξt

∫ f(t) e^{−j2πξt} dt = ∫ f(t)(cos 2πξt − j sin 2πξt) dt
                     = ∫ f(t) cos 2πξt dt − j ∫ f(t) sin 2πξt dt = F(ξ)

F(ξ) aggregates the frequency-ξ sinusoidal component of f.
3-14

Complex spectrum: F(ξ) = Re F(ξ) + j Im F(ξ)

Phase spectrum: φ(ξ) = tan⁻¹[Im F(ξ) / Re F(ξ)]

Amplitude spectrum: |F(ξ)| = sqrt(Re² F(ξ) + Im² F(ξ))

Power spectrum: P(ξ) = |F(ξ)|² = Re² F(ξ) + Im² F(ξ)

Properties:
1. F(0) = ∫ f(t) dt,   f(0) = ∫ F(ξ) dξ
2. ∫ |f(t)|² dt = ∫ |F(ξ)|² dξ   (Parseval's theorem)

3-15
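A quick NumPy check of the two properties using the discrete transform (np.fft.fft omits the 1/N factor on the forward transform, which is why the discrete Parseval relation carries a 1/N on the spectrum side); the random test signal is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(256)
F = np.fft.fft(f)                      # unnormalized forward DFT

# Property 1 (discrete analogue): F(0) is the sum (dc component) of the signal
print(np.isclose(F[0], f.sum()))       # True

# Property 2 (Parseval): sum |f(n)|^2 == (1/N) sum |F(k)|^2 for this convention
print(np.isclose(np.sum(np.abs(f) ** 2), np.sum(np.abs(F) ** 2) / len(f)))  # True
```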

Examples:

3-16

Discrete Fourier transform

Input signal: f(n), n = 0, 1, …, N−1

F(k) = (1/N) Σ_{n=0}^{N−1} f(n) exp(−2πj nk/N),   k = 0, 1, …, N−1

f(n) = Σ_{k=0}^{N−1} F(k) exp(2πj nk/N),   n = 0, 1, …, N−1

3-17
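A direct NumPy sketch of the definitions above, including the 1/N factor on the forward transform (np.fft.fft uses the same exponent but without 1/N, hence the scaling in the comparison); the helper names dft/idft are only illustrative.

```python
import numpy as np

def dft(f):
    """F(k) = (1/N) sum_n f(n) exp(-2*pi*j*n*k/N), as defined on the slide."""
    N = len(f)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # W[k, n] = exp(-2*pi*j*n*k/N)
    return (W @ f) / N

def idft(F):
    """f(n) = sum_k F(k) exp(+2*pi*j*n*k/N)."""
    N = len(F)
    n = np.arange(N)
    W = np.exp(2j * np.pi * np.outer(n, n) / N)
    return W @ F

f = np.array([2, 3, 4, 5, 6, 7, 8, 1], dtype=float)
F = dft(f)
print(np.allclose(F, np.fft.fft(f) / len(f)))      # True: same up to the 1/N scaling
print(np.allclose(idft(F), f))                     # True: perfect reconstruction
```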

Properties:
Linearity: a f_1 + b f_2 ⇔ a F_1 + b F_2
Periodicity:
F(u, v) = F(u, v + N) = F(u + M, v)
f(m, n) = f(m + M, n) = f(m, n + N)
Conjugate symmetry: F(u, v) = F*(−u, −v)

3-18

Shifting: f(x) exp(j2π u_0 x / N) ⇔ F(u − u_0)
          f(x − x_0) ⇔ F(u) exp(−j2π u x_0 / N)

Let u_0 = N/2 ⇒ exp(j2π u_0 x / N) = exp(jπx) = (e^{jπ})^x = (cos π + j sin π)^x = (−1)^x

f = {f_0, f_1, …, f_{N−1}},   F = {F_0, F_1, …, F_{N−1}}
f′(x) = f(x) exp(j2π u_0 x / N) = (−1)^x f(x)

f′ = {f_0, −f_1, f_2, −f_3, …, f_{N−2}, −f_{N−1}}
F′ = {F_{N/2}, …, F_{N−1}, F_0, F_1, …, F_{N/2−1}}
3-19

Examples:
1-D signal f = {2, 3, 4, 5, 6, 7, 8, 1}
F = {36, −9.6569 + 4j, −4 − 4j, 1.6569 − 4j, 4, 1.6569 + 4j, −4 + 4j, −9.6569 − 4j}
(values computed without the 1/N scaling)

f′ = {2, −3, 4, −5, 6, −7, 8, −1}
F′ = {4, 1.6569 + 4j, −4 + 4j, −9.6569 − 4j, 36, −9.6569 + 4j, −4 − 4j, 1.6569 − 4j}

2-D image: an image and its Fourier spectrum F (figure)
3-20
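A NumPy check of the example and of the shifting property behind it, using the unnormalized FFT (which is why F(0) = 36 here): multiplying f(x) by (−1)^x circularly shifts the spectrum by N/2.

```python
import numpy as np

f = np.array([2, 3, 4, 5, 6, 7, 8, 1], dtype=float)
F = np.fft.fft(f)                       # unnormalized DFT, so F[0] = sum(f) = 36
print(np.round(F, 4))

f_mod = f * (-1.0) ** np.arange(len(f)) # f'(x) = (-1)^x f(x)
F_mod = np.fft.fft(f_mod)

# The modulated spectrum is the original one circularly shifted by N/2
print(np.allclose(F_mod, np.roll(F, -len(f) // 2)))   # True
```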

Rotation
Polar coordinates: x = r cos θ, y = r sin θ;   u = w cos φ, v = w sin φ
f(x, y) → f(r, θ),   F(u, v) → F(w, φ)
f(r, θ + θ_0) ⇔ F(w, φ + θ_0)

Correlation theorem: f ∘ g ⇔ F* G,   f* g ⇔ F ∘ G   (Assignment)
Convolution theorem: f ∗ g ⇔ F G,   f g ⇔ F ∗ G

3-21
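A NumPy sketch verifying the convolution and correlation theorems on zero-padded sequences (the test signals are arbitrary).

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0, 0.0, 0.0, 0.0])   # zero-padded to N >= A + B - 1
g = np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0])

# Convolution theorem: f * g <-> F G
conv_spatial = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))
print(np.allclose(conv_spatial, np.convolve([1, 2, 3], [1, 1, 1, 1])))   # True

# Correlation theorem: f o g <-> conj(F) G (for real f)
corr_spectral = np.real(np.fft.ifft(np.conj(np.fft.fft(f)) * np.fft.fft(g)))
corr_direct = np.array([sum(f[n] * g[(k + n) % len(f)] for n in range(len(f)))
                        for k in range(len(f))])
print(np.allclose(corr_spectral, corr_direct))                            # True
```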

3-22

Any real function f(x) can always be decomposed into its even f_e(x) and odd f_o(x) parts, i.e.,

f(x) = f_e(x) + f_o(x),   f_e(t) = [f(t) + f(−t)] / 2,   f_o(t) = [f(t) − f(−t)] / 2

e.g.,

f(x) = f_e(x) + f_o(x)
F(u) = R{F(u)} + j I{F(u)}
f_e(x) ⇔ R{F(u)}
f_o(x) ⇔ j I{F(u)}

3-23

Convolution involving the impulse function

f(x) ∗ δ(x − T) = f(x − T)

If g(x) = δ(x + T) + δ(x) + δ(x − T), then

f(x) ∗ g(x) = f(x + T) + f(x) + f(x − T)

i.e., copy f(x) at the location of each impulse.


3-24

3.2.5. Sampling theory

Continuous
function

Dense
sampling

Sparse
sampling

Objective: determine how many samples must be taken so that no information is lost in the sampling process.
3-25

Spatial domain | Frequency domain (figure)

f(x): band-limited function

Sampling function: s(x) = Σ_n δ(x − nΔx)

Sampling: f_s(x) = f(x) s(x);  each sample is f(nΔx) = ∫ f(x) δ(x − nΔx) dx

3-26

Whittaker-Shannon sampling theorem

1-D: Δx ≤ 1 / (2w)
2-D: Δx ≤ 1 / (2u),  Δy ≤ 1 / (2v)
where w, u, v are the band limits (highest frequencies) of f

Interpolation (reconstruction) formula:

f(x) = Σ_n f(x_n) sin[2πw(x − x_n)] / [2πw(x − x_n)] = Σ_n f(x_n) sinc[2w(x − x_n)]

3-27
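A NumPy sketch of the Whittaker-Shannon reconstruction: sample a band-limited test signal at Δx = 1/(2w) and rebuild intermediate values with the (necessarily truncated) sinc interpolation sum. The band limit w and the test signal are arbitrary choices.

```python
import numpy as np

w = 4.0                                   # band limit (highest frequency, in Hz)
dx = 1.0 / (2.0 * w)                      # Whittaker-Shannon sampling step
xn = np.arange(-64, 65) * dx              # sample locations (truncated sum in practice)

def f(x):                                 # band-limited test signal (frequencies 1 and 3 < w)
    return np.sin(2 * np.pi * 1.0 * x) + 0.5 * np.cos(2 * np.pi * 3.0 * x)

samples = f(xn)

def reconstruct(x):
    """f(x) = sum_n f(x_n) sinc[2w (x - x_n)]; np.sinc(t) = sin(pi t)/(pi t)."""
    return np.sum(samples * np.sinc(2 * w * (x - xn)))

x_test = np.linspace(-1.0, 1.0, 11)
err = max(abs(reconstruct(x) - f(x)) for x in x_test)
print(err)                                # small; nonzero only because the sum is truncated
```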

3.2.6. Discrete cosine transform

-- often used in image/video compression, e.g., JPEG, MPEG, FGS, H.261, H.263, JVT

F(u, v) = [2 c(u) c(v) / N] Σ_{m=0}^{N−1} Σ_{n=0}^{N−1} f(m, n) cos[(2m + 1)πu / 2N] cos[(2n + 1)πv / 2N]

where c(k) = 1/√2 for k = 0, 1 otherwise;   u = 0, 1, …, N−1,  v = 0, 1, …, N−1

f(m, n) = (2/N) Σ_{u=0}^{N−1} Σ_{v=0}^{N−1} c(u) c(v) F(u, v) cos[(2m + 1)πu / 2N] cos[(2n + 1)πv / 2N],

where m = 0, 1, …, N−1,  n = 0, 1, …, N−1


3-28
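A direct NumPy sketch of the N × N DCT pair above, written with matrix products (fine for small blocks such as the 8 × 8 blocks used in JPEG); dct2/idct2 are illustrative helper names.

```python
import numpy as np

def dct2(f):
    """F(u,v) = 2 c(u)c(v)/N * sum_m sum_n f(m,n) cos((2m+1)pi u/2N) cos((2n+1)pi v/2N)."""
    N = f.shape[0]
    c = np.full(N, 1.0); c[0] = 1.0 / np.sqrt(2.0)
    m = np.arange(N)
    B = np.cos(np.pi * np.outer(np.arange(N), 2 * m + 1) / (2 * N))  # B[u, m]
    return (2.0 / N) * np.outer(c, c) * (B @ f @ B.T)

def idct2(F):
    """f(m,n) = 2/N * sum_u sum_v c(u)c(v) F(u,v) cos((2m+1)pi u/2N) cos((2n+1)pi v/2N)."""
    N = F.shape[0]
    c = np.full(N, 1.0); c[0] = 1.0 / np.sqrt(2.0)
    B = np.cos(np.pi * np.outer(np.arange(N), 2 * np.arange(N) + 1) / (2 * N))
    return (2.0 / N) * (B.T @ (np.outer(c, c) * F) @ B)

block = np.arange(64, dtype=float).reshape(8, 8)   # a toy 8x8 "image" block
F = dct2(block)
print(np.allclose(idct2(F), block))                # True: the pair is invertible
```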

3.2.7. Wavelet transform


Fourier spectrum provides all the frequencies
present in a signal but does not tell where they
are present.
The windowed Fourier transform suffers from a dilemma:
Small window → poor frequency resolution
Large window → poor localization
Wavelet: a wave that is nonzero only in a small region
Wave

Wavelet
3-29

Types of wavelets:

Haar: ψ(x) = 1 if 0 ≤ x < 1/2,  −1 if 1/2 ≤ x < 1,  0 otherwise

Morlet: ψ(x) = e^{−x²/2} sin x  (a Gaussian-windowed sinusoid)

Mexican hat: DoG, LoG


3-30

Operations on a wavelet ψ(x):
(a) Dilation:
  i) Squashing: ψ(2x)
  ii) Expanding: ψ(x/2)
(b) Translation:
  i) Shift to the right: ψ(x − 2)
  ii) Shift to the left: ψ(x + 2)
(c) Magnitude change:
  i) Amplification: 2ψ(x)
  ii) Minification: (1/2)ψ(x)

Any function can be expressed as a sum of wavelets of the form a_i ψ(b_i x − c_i)

Wavelet transform: decomposes a function into a set of wavelets

W(s, τ) = ∫_R f(t) ψ_{s,τ}(t) dt

where ψ_{s,τ}(t) = (1/√s) ψ((t − τ)/s): wavelets
      ψ(t): mother wavelet
New variables: scale s ∈ R⁺ − {0}, translation τ ∈ R

Inverse wavelet transform: synthesizes a function from wavelet coefficients

f(t) = ∫∫ W(s, τ) ψ_{s,τ}(t) dτ ds

3-33
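A numerical sketch of W(s, τ) = ∫ f(t) ψ_{s,τ}(t) dt as a Riemann sum, assuming NumPy and using the Mexican-hat wavelet as the mother wavelet (any admissible ψ would do); the test signal and scale grid are arbitrary.

```python
import numpy as np

def mexican_hat(t):
    """Mexican-hat (second derivative of a Gaussian) mother wavelet, up to scaling."""
    return (1.0 - t ** 2) * np.exp(-t ** 2 / 2.0)

def cwt(f_samples, t, scales, taus):
    """W(s, tau) = integral f(t) (1/sqrt(s)) psi((t - tau)/s) dt, as a Riemann sum."""
    dt = t[1] - t[0]
    W = np.zeros((len(scales), len(taus)))
    for i, s in enumerate(scales):
        for j, tau in enumerate(taus):
            psi = mexican_hat((t - tau) / s) / np.sqrt(s)
            W[i, j] = np.sum(f_samples * psi) * dt
    return W

t = np.linspace(0, 10, 2000)
f = np.sin(2 * np.pi * 1.0 * t) * (t < 5) + np.sin(2 * np.pi * 4.0 * t) * (t >= 5)
W = cwt(f, t, scales=np.geomspace(0.05, 2.0, 30), taus=np.linspace(0, 10, 100))
print(W.shape)   # (30, 100): small-scale responses are large only where the 4 Hz part lives
```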

Discrete wavelet transform:

Approximation coefficients (cA):
W_φ(k, s) = (1/√N) Σ_t f(t) φ_{k,s}(t),   φ_{k,s}(t): scaling functions

Detail coefficients (cD):
W_ψ(k, s) = (1/√N) Σ_t f(t) ψ_{k,s}(t),   ψ_{k,s}(t): wavelet functions

Inverse discrete wavelet transform:

f(t) = (1/√N) Σ_k W_φ(k, s_0) φ_{k,s_0}(t) + (1/√N) Σ_{s ≥ s_0} Σ_k W_ψ(k, s) ψ_{k,s}(t)

3-34

Multiresolution Analysis views a function at various levels of resolution
h: step size

3-35

Let A^s contain all the functions on the left-hand side and W^s those on the right-hand side.

Function spaces:

A^s = { Σ_k c_{k,s} φ(2^s t − k) : c_{k,s} ∈ R }
W^s = { Σ_k d_{k,s} ψ(2^s t − k) : d_{k,s} ∈ R }

i.e., A^s is generated by the bases
φ_{k,s}(t) = 2^{s/2} φ(2^s t − k), k ∈ Z   (scaling functions)

W^s is generated by the bases
ψ_{k,s}(t) = 2^{s/2} ψ(2^s t − k), k ∈ Z   (wavelet functions)

3-36

Example: Haar wavelet

Scaling function: φ(x) = 1 for 0 ≤ x < 1,  0 otherwise
φ_{ji}(x) = 2^{j/2} φ(2^j x − i),  i = 0, …, 2^j − 1

Wavelet function: ψ(x) = 1 for 0 ≤ x < 1/2,  −1 for 1/2 ≤ x < 1,  0 otherwise
ψ_{ji}(x) = 2^{j/2} ψ(2^j x − i),  i = 0, …, 2^j − 1
3-37

Properties:
i) A^{s+1} = A^s ⊕ W^s   ii) A^s ⊥ W^s   iii) W^s ⊂ A^{s+1}
iv) ⋯ ⊂ A^{−1} ⊂ A^0 ⊂ A^1 ⊂ ⋯ ⊂ L²

Fast Discrete Wavelet Transform:

A discrete signal s(i), i = 0, 1, …, N−1, is decomposed into wavelet coefficients:

Approx. coefficients (cA): c_{k,s} = (1/√N) Σ_i s(i) φ_{k,s}(i)
Detail coefficients (cD):  d_{k,s} = (1/√N) Σ_i s(i) ψ_{k,s}(i)

Suppose scales and positions are based on powers of 2 (dyadic).
3-38

Inverse Discrete Wavelet Transform

s(t) = (1/√N) Σ_k c_{k,s} φ_{k,s}(t) + (1/√N) Σ_k d_{k,s} ψ_{k,s}(t)

3-39

Wavelet transform
Low pass filtering: averaging;  high pass filtering: differencing
Input data: a, b
Average: s = (a + b) / 2   (low pass filtering)
Difference: d = a − s      (high pass filtering)
Wavelet coefficients: (s, d)

Inverse wavelet transform
Addition; subtraction
Wavelet coefficients: (s, d)
Addition: s + d = s + (a − s) = a
Subtraction: s − d = s − (a − s) = 2s − a = b
Input data: (a, b)
3-40

Example:
Input data: 14, 22
(i) Wavelet Transform
Average: s = (14 + 22) / 2 = 18
Difference: d = 14 − 18 = −4
Wavelet coefficients: (18, −4)
(ii) Inverse Wavelet Transform (to recover the input data)
s + d = 18 + (−4) = 14,   s − d = 18 − (−4) = 22
Input data: (14, 22)
3-41
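A NumPy sketch of one level of this averaging/differencing (Haar) transform and its inverse, reproducing the (14, 22) → (18, −4) example; haar_step/inverse_haar_step are illustrative names.

```python
import numpy as np

def haar_step(data):
    """One level: pairwise averages (low pass) followed by differences (high pass)."""
    a, b = data[0::2], data[1::2]
    s = (a + b) / 2.0                 # averages    -> approximation coefficients (cA)
    d = a - s                         # differences -> detail coefficients (cD)
    return np.concatenate([s, d])

def inverse_haar_step(coeffs):
    s, d = np.split(coeffs, 2)
    a = s + d                         # addition recovers the first sample of each pair
    b = s - d                         # subtraction recovers the second
    out = np.empty(2 * len(s))
    out[0::2], out[1::2] = a, b
    return out

print(haar_step(np.array([14.0, 22.0])))                  # [18. -4.]
data = np.array([14.0, 22.0, 30.0, 18.0, 5.0, 7.0, 9.0, 3.0])
coeffs = haar_step(data)
print(np.allclose(inverse_haar_step(coeffs), data))       # True
```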

2-D:

3-42

3-43

3-44

3.2.11. Other Orthogonal Image Transforms


Hadamard-Haar, Slant, Slant-Haar, Discrete sine,
Paley-Walsh, Radon, Hough

3.2.8. Eigen Analysis

Let A be an n × n square matrix. x and λ are a corresponding eigenvector and eigenvalue of A if Ax = λx.

Let e_1, …, e_n: eigenvectors of A
    λ_1, …, λ_n: corresponding eigenvalues
Let P = [e_1 ⋯ e_n]
Then A = P D P⁻¹,  D = P⁻¹ A P = diag(λ_1, …, λ_n)
3-45
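A small NumPy check of the diagonalization A = P D P⁻¹ above (the 2 × 2 matrix is an arbitrary diagonalizable example).

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

lams, P = np.linalg.eig(A)            # eigenvalues and eigenvectors (columns of P)
D = np.diag(lams)

print(np.allclose(A @ P, P @ D))                      # A e_i = lambda_i e_i, columnwise
print(np.allclose(A, P @ D @ np.linalg.inv(P)))       # A = P D P^{-1}
print(np.allclose(D, np.linalg.inv(P) @ A @ P))       # D = P^{-1} A P
```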

3.2.9. Singular Value Decomposition

A_{m×n}: real matrix
There exist a column-orthonormal matrix U_{m×n} (i.e., UᵀU = I_n) and a row-orthonormal matrix V_{n×n} (i.e., VVᵀ = I_n), s.t. A = U W Vᵀ
where W = diag(σ_1, σ_2, …, σ_n)
σ_1, σ_2, …, σ_n: non-negative singular values of A

The squared singular values of A are the eigenvalues of AᵀA, and the corresponding eigenvectors are the columns of V.
3-46

Let e_1, …, e_n: eigenvectors of a symmetric matrix B
    λ_1, …, λ_n: corresponding eigenvalues
Let E = [e_1 ⋯ e_n]
Then B = E D Eᵀ,  D = diag(λ_1, …, λ_n)

AᵀA = (U W Vᵀ)ᵀ (U W Vᵀ) = V W Uᵀ U W Vᵀ = V W² Vᵀ = V diag(σ_1², σ_2², …, σ_n²) Vᵀ

Compare with B = E D Eᵀ:

(a) The eigenvalues σ_i² of AᵀA correspond to the singular values σ_i of A
(b) The eigenvectors of AᵀA are the columns of V
3-47
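A NumPy sketch of the comparison above: the eigenvalues of AᵀA are the squared singular values of A, and its eigenvectors match the columns of V up to sign (the random test matrix is arbitrary).

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))                    # m x n real matrix, m >= n

U, w, Vt = np.linalg.svd(A, full_matrices=False)   # A = U diag(w) V^T
print(np.allclose(A, U @ np.diag(w) @ Vt))         # True
print(np.allclose(U.T @ U, np.eye(4)))             # column-orthonormal U

evals, E = np.linalg.eigh(A.T @ A)                 # eigendecomposition of A^T A
evals, E = evals[::-1], E[:, ::-1]                 # sort to descending order like the SVD

print(np.allclose(evals, w ** 2))                  # eigenvalues = squared singular values
print(np.allclose(np.abs(E), np.abs(Vt.T)))        # eigenvectors = columns of V, up to sign
```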

3.2.10. Principal Component Analysis (PCA)


Karhunen-Loeve (KL) or Hotelling transform
PCA: linearly transforms a number of correlated variables into the same number of uncorrelated variables (principal components)

Data vectors: x_i = (x_1, x_2, …, x_n)ᵀ, i = 1, 2, …, M
Mean vector: m_x = E{x}

Covariance matrix: C_x = E{(x − m_x)(x − m_x)ᵀ}

C_x: n × n real symmetric matrix


3-48

Approximation:

m_x ≈ (1/M) Σ_{i=1}^{M} x_i

C_x ≈ (1/M) Σ_{i=1}^{M} (x_i − m_x)(x_i − m_x)ᵀ
    = (1/M) Σ_{i=1}^{M} (x_i x_iᵀ − x_i m_xᵀ − m_x x_iᵀ + m_x m_xᵀ)
    = (1/M) [ Σ_{i=1}^{M} x_i x_iᵀ − (Σ_{i=1}^{M} x_i) m_xᵀ − m_x (Σ_{i=1}^{M} x_iᵀ) + M m_x m_xᵀ ]
    = (1/M) Σ_{i=1}^{M} x_i x_iᵀ − m_x m_xᵀ − m_x m_xᵀ + m_x m_xᵀ
    = (1/M) Σ_{i=1}^{M} x_i x_iᵀ − m_x m_xᵀ

3-49

Let λ_i and e_i, i = 1, …, n, be the corresponding eigenvalues and eigenvectors of C_x.
Assume λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_n
Construct the matrix A = [e_1 e_2 ⋯ e_n]ᵀ  (rows are the eigenvectors)
y_i = A(x_i − m_x)

The mean of the y's: m_y = 0

The covariance matrix of the y's:

C_y = A C_x Aᵀ = diag(λ_1, λ_2, …, λ_n)

3-50

From y_i = A(x_i − m_x):  x_i = A⁻¹ y_i + m_x = Aᵀ y_i + m_x

Let A_k = [e_1 ⋯ e_k]ᵀ, k < n, and ŷ_i = A_k(x_i − m_x)

Reconstruction: x̂_i = A_kᵀ ŷ_i + m_x

The mean square error between x_i and x̂_i is

e_ms = Σ_{i=1}^{n} λ_i − Σ_{i=1}^{k} λ_i = Σ_{i=k+1}^{n} λ_i

3-51
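A NumPy sketch of PCA as derived above (the random correlated data, n = 5 variables, and k = 2 retained components are arbitrary choices): estimate m_x and C_x, project onto the top-k eigenvectors, reconstruct, and check that the mean squared error equals the sum of the discarded eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(2)
M, n, k = 5000, 5, 2
X = rng.standard_normal((M, n)) @ rng.standard_normal((n, n))   # M correlated samples (rows)

m_x = X.mean(axis=0)                                   # (1/M) sum x_i
C_x = (X - m_x).T @ (X - m_x) / M                      # (1/M) sum x_i x_i^T - m_x m_x^T

lams, E = np.linalg.eigh(C_x)
lams, E = lams[::-1], E[:, ::-1]                       # lambda_1 >= ... >= lambda_n
A_k = E[:, :k].T                                       # rows = top-k eigenvectors

Y = (X - m_x) @ A_k.T                                  # y_i = A_k (x_i - m_x)
X_hat = Y @ A_k + m_x                                  # x_hat_i = A_k^T y_i + m_x

mse = np.mean(np.sum((X - X_hat) ** 2, axis=1))
print(np.isclose(mse, lams[k:].sum()))                 # True: error = sum of discarded lambdas
```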

Eigen faces

3-52
