
Image Processing: The Fundamentals.

Maria Petrou and Panagiota Bosdogianni

Copyright © 1999 John Wiley & Sons Ltd
Print ISBN 0-471-99883-4   Electronic ISBN 0-470-84190-7

Chapter 6

Image Restoration
What is image restoration?

Image restoration is the improvement of an image using objective criteria and prior
knowledge as to what the image should look like.
What is the difference between image enhancement and image restoration?

In image enhancement we try to improve the image using subjective criteria, while in image restoration we are trying to reverse a specific damage suffered by the image, using objective criteria.
Why may an image require restoration?

An image may be degraded because the grey values of individual pixels may be altered, or it may be distorted because the position of individual pixels may be shifted away from their correct position. The second case is the subject of geometric restoration. Geometric restoration is also called image registration because it helps in finding corresponding points between two images of the same region taken from different viewing angles. Image registration is very important in remote sensing when aerial photographs have to be registered against the map, or two aerial photographs of the same region have to be registered with each other.

How may geometric distortion arise?


Geometric distortion may arise because of the lens or because of the irregular movement of the sensor during image capture. In the former case, the distortion looks regular, like the examples shown in Figure 6.1. The latter case arises, for example, when an aeroplane photographs the surface of the Earth with a line scan camera. As the aeroplane wobbles, the captured image may be inhomogeneously distorted, with pixels displaced by as much as 4-5 interpixel distances away from their true positions.


Figure 6.1: Examples of geometric distortions caused by the lens: (a) original, (b) pincushion distortion, (c) barrel distortion.

Figure 6.2: The pixels correspond to the nodes of the grids (corrected image on the left, distorted image on the right). Pixel A of the corrected grid corresponds to inter-pixel position A' of the original image.
How can a geometrically distorted image be restored?

We start by creating an empty array of numbers the same size as the distorted image.
This array will become the corrected image. Our purpose is to assign grey values to
the elements of this array. This can be achieved by performing a two-stage operation:
spatial transformation followed by grey level interpolation.
How do we perform the spatial transformation?

Suppose that the true position of a pixel is (x, y) and the distorted position is (x̂, ŷ) (see Figure 6.2). In general there will be a transformation which leads from one set of coordinates to the other, say:

x̂ = p(x, y),   ŷ = q(x, y)


First we must find to which coordinate position in the distorted image each pixel position of the corrected image corresponds. Here we usually make some assumptions. For example, we may say that the above transformation has the following form:

x̂ = c₁x + c₂y + c₃xy + c₄
ŷ = c₅x + c₆y + c₇xy + c₈

where c₁, c₂, ..., c₈ are some parameters. Alternatively, we may assume a more general form, where squares of the coordinates x and y appear on the right hand sides of the above equations. The values of parameters c₁, ..., c₈ can be determined from the transformation of known points called tie points. For example, in aerial photographs of the surface of the Earth, there are certain landmarks with exactly known positions. There are several such points scattered all over the surface of the Earth. We can use, for example, four such points to find the values of the above eight parameters and assume that these transformation equations with the derived parameter values hold inside the whole quadrilateral region defined by these four tie points.
Then, we apply the transformation to find the position A' of point A of the
corrected image, in the distorted image.
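
To make this step concrete, here is a minimal sketch (hypothetical code, not from the book) of how the eight parameters could be recovered from four tie points, by solving one 4×4 linear system for c₁, ..., c₄ and another for c₅, ..., c₈:

    import numpy as np

    def fit_bilinear_transform(ref_pts, dist_pts):
        # Fit x^ = c1*x + c2*y + c3*x*y + c4 and
        #     y^ = c5*x + c6*y + c7*x*y + c8
        # from four (reference, distorted) tie point pairs.
        A = np.array([[x, y, x * y, 1.0] for (x, y) in ref_pts])
        cx = np.linalg.solve(A, np.array([p[0] for p in dist_pts]))  # c1..c4
        cy = np.linalg.solve(A, np.array([p[1] for p in dist_pts]))  # c5..c8
        return cx, cy

    def to_distorted(cx, cy, x, y):
        # Map pixel (x, y) of the corrected image to position A' in the
        # distorted image.
        basis = np.array([x, y, x * y, 1.0])
        return float(basis @ cx), float(basis @ cy)

With the tie points of Example 6.1 below, this recovers c₁ = 1, c₂ = 1/3, c₅ = 1/3, c₆ = 1 and zeros for the remaining parameters, and maps (2, 2) to (2⅔, 2⅔).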
Why is grey level interpolation needed?

It is likely that point A' will not have integer coordinates even though the coordinates of point A in the (x, y) space are integer. This means that we do not actually know the grey level value at position A'. That is when the grey level interpolation process comes into play. The grey level value at position A' can be estimated from the values at its four nearest neighbouring pixels in the (x̂, ŷ) space, by some method, for example by bilinear interpolation. We assume that inside each little square the grey level value is a simple function of the positional coordinates:

g(x̂, ŷ) = αx̂ + βŷ + γx̂ŷ + δ

where α, ..., δ are some parameters. We apply this formula to the four corner pixels to derive the values of α, β, γ and δ, and then use these values to calculate g(x̂, ŷ) at the position of point A'.

Figure 6.3 below shows in magnification the neighbourhood of point A' in the distorted image, with the four nearest pixels at the neighbouring positions with integer coordinates.
Simpler as well as more sophisticated methods of interpolation may be employed.
For example, the simplest method that can be used is the nearest neighbour method
where A' gets the grey level value of the pixel which is nearest to it. A more sophisticated method is to fit a higher order surface through a larger patch of pixels around
A' and find the value at A' from the equation of that surface.
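
As an illustration, the bilinear estimate can be written in a few lines. The following sketch (a hypothetical helper, not from the book) assumes the distorted image is stored as a NumPy array indexed as img[row, column], i.e. img[ŷ, x̂]:

    import numpy as np

    def bilinear_sample(img, x_hat, y_hat):
        # Estimate the grey value at the non-integer position (x^, y^)
        # from its four nearest pixels, as in Figure 6.3.
        x0, y0 = int(np.floor(x_hat)), int(np.floor(y_hat))  # [x^], [y^]
        tx, ty = x_hat - x0, y_hat - y0  # local coordinates inside the square
        g00 = float(img[y0, x0])          # top left corner
        g10 = float(img[y0, x0 + 1])      # top right corner
        g01 = float(img[y0 + 1, x0])      # bottom left corner
        g11 = float(img[y0 + 1, x0 + 1])  # bottom right corner
        # Equivalent to g = alpha*tx + beta*ty + gamma*tx*ty + delta, with
        # the parameters fitted to the four corner values.
        return (g00 * (1 - tx) * (1 - ty) + g10 * tx * (1 - ty)
                + g01 * (1 - tx) * ty + g11 * tx * ty)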


Figure 6.3: The non-integer position of point A' is surrounded by four pixels at integer positions, with known grey values ([x] means integer part of x).

Example 6.1

In the figure below the grid on the right is a geometrically distorted image and has to be registered with the reference image on the left, using points A, B, C and D as tie points. The entries in the image on the left indicate coordinate positions. Assuming that the distortion within the rectangle ABCD can be modelled by bilinear interpolation, and that the grey level value at an interpixel position can be modelled by bilinear interpolation too, find the grey level value at pixel position (2, 2) in the reference image.

Suppose that the position (x̂, ŷ) of a pixel in the distorted image is given in terms of its position (x, y) in the reference image by:

x̂ = c₁x + c₂y + c₃xy + c₄
ŷ = c₅x + c₆y + c₇xy + c₈

We have the following set of corresponding coordinates between the two grids, using the four tie points:


Pixel    Reference (x, y) coords.    Distorted (x̂, ŷ) coords.
A        (0, 0)                      (0, 0)
B        (3, 0)                      (3, 1)
C        (0, 3)                      (1, 3)
D        (3, 3)                      (4, 4)

We can use these to calculate the values of the parameters c₁, ..., c₈:

Pixel A:   0 = c₄                            0 = c₈
Pixel B:   3 = 3c₁ ⇒ c₁ = 1                 1 = 3c₅ ⇒ c₅ = 1/3
Pixel C:   1 = 3c₂ ⇒ c₂ = 1/3               3 = 3c₆ ⇒ c₆ = 1
Pixel D:   4 = 3 + 3 × (1/3) + 9c₃ ⇒ c₃ = 0
           4 = 3 × (1/3) + 3 + 9c₇ ⇒ c₇ = 0
The distorted coordinates, therefore, of any pixel within the square ABDC are given by:

x̂ = x + y/3,   ŷ = x/3 + y

For x = y = 2 we have x̂ = 2 + 2/3, ŷ = 2/3 + 2. So, the coordinates of pixel (2, 2) in the distorted image are (2⅔, 2⅔). This position is located between pixels in the distorted image, and actually between pixels with the following grey level values:

(the 2×2 block of grey level values surrounding position (2⅔, 2⅔) in the distorted image, as given in the accompanying grid)

We define a local coordinate system (x̃, ỹ), so that the pixel at the top left corner has coordinate position (0, 0), the pixel at the top right corner has coordinates (1, 0), the one at the bottom left (0, 1) and the one at the bottom right (1, 1). Assuming that the grey level value between four pixels can be computed from the grey level values in the four corner pixels with bilinear interpolation, we have:

g(x̃, ỹ) = αx̃ + βỹ + γx̃ỹ + δ

Applying this for the four neighbouring pixels we have:


We recognize now that equation (6.7) is the convolution between the undegraded image f(x, y) and the point spread function, and therefore we can write it in terms of their Fourier transforms:

G(u, v) = F(u, v)H(u, v)    (6.8)

where G, F and H are the Fourier transforms of functions g, f and h respectively.


What form does equation (6.5) take for the case of discrete images?

g(i, j) = Σ_{k=1}^{N} Σ_{l=1}^{N} f(k, l) h(k, i, l, j)    (6.9)

We have shown that equation (6.9) can be written in matrix form (see equation (1.25)):

g = Hf    (6.10)

What is the problem of image restoration?

The problem of image restoration is: given the degraded image g, recover the original
undegraded image f .
How can the problem of image restoration be solved?

The problem of image restoration can be solved if we have prior knowledge of the point
spread function or its Fourier transform (the transfer function) of the degradation
process.
How can we obtain information on the transfer function H(u, v) of the degradation process?

1. From the knowledge of the physical process that caused the degradation. For example, if the degradation is due to diffraction, H(u, v) can be calculated. Similarly, if the degradation is due to atmospheric turbulence or due to motion, it can be modelled and H(u, v) calculated.

2. We may try to extract information on H(u, v) or h(α − x, β − y) from the image itself; i.e. from the effect the process has on the images of some known objects, ignoring the actual nature of the underlying physical process that takes place.


Example 6.2

When a certain static scene was being recorded, the camera underwent planar motion parallel to the image plane (x, y). This motion appeared as if the scene moved in the x, y directions by distances x₀(t) and y₀(t) which are functions of time t. The shutter of the camera remained open from t = 0 to t = T, where T is a constant. Write down the equation that expresses the intensity recorded at pixel position (x, y) in terms of the scene intensity function f(x, y).

The total exposure at any point of the recording medium (say the film) will be T, and we shall have for the blurred image:

g(x, y) = ∫₀ᵀ f(x − x₀(t), y − y₀(t)) dt    (6.11)

Example 6.3

In Example 6.2, derive the transfer function with which you can model the degradation suffered by the image due to the camera motion, assuming that the degradation is linear with a shift invariant point spread function.

Consider the Fourier transform of g(x, y) defined in Example 6.2:

G(u, v) = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} g(x, y) e^{−2πj(ux+vy)} dx dy    (6.12)

If we substitute (6.11) into (6.12) we have:

G(u, v) = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} [∫₀ᵀ f(x − x₀(t), y − y₀(t)) dt] e^{−2πj(ux+vy)} dx dy    (6.13)

We can exchange the order of integrals:

G(u, v) = ∫₀ᵀ {∫_{−∞}^{+∞} ∫_{−∞}^{+∞} f(x − x₀(t), y − y₀(t)) e^{−2πj(ux+vy)} dx dy} dt    (6.14)

The quantity in braces is the Fourier transform of the function f shifted by x₀, y₀ in the directions x, y respectively.


We have shown (see equation (2.67)) that the Fourier transform of a shifted function and the Fourier transform of the unshifted function are related by:

(F.T. of shifted function) = (F.T. of unshifted function) × e^{−2πj(ux₀+vy₀)}

Therefore:

G(u, v) = ∫₀ᵀ F(u, v) e^{−2πj(ux₀(t)+vy₀(t))} dt

where F(u, v) is the Fourier transform of the scene intensity function f(x, y), i.e. the unblurred image. F(u, v) is independent of time, so it can come out of the integral sign:

G(u, v) = F(u, v) ∫₀ᵀ e^{−2πj(ux₀(t)+vy₀(t))} dt

Comparing this equation with (6.8) we conclude that:

H(u, v) = ∫₀ᵀ e^{−2πj(ux₀(t)+vy₀(t))} dt    (6.15)

Example 6.4

Suppose that the motion in Example 6.2 was in the x direction only and with constant speed a/T, so that y₀(t) = 0, x₀(t) = at/T. Calculate the transfer function of the motion blurring caused.

In the result of Example 6.3, equation (6.15), substitute y₀(t) and x₀(t) to obtain:

H(u, v) = ∫₀ᵀ e^{−2πjuat/T} dt = [T/(πua)] sin(πua) e^{−πjua}    (6.16)
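
A quick numerical check of equation (6.16) is straightforward (a sketch with arbitrary parameter values, not from the book):

    import numpy as np

    def motion_blur_H(u, a=10.0, T=1.0):
        # Transfer function of uniform motion over distance a during
        # exposure T, equation (6.16); it does not depend on v.
        # np.sinc(x) computes sin(pi*x)/(pi*x), which also handles u = 0.
        return T * np.sinc(u * a) * np.exp(-1j * np.pi * u * a)

    # H vanishes wherever u*a is a non-zero integer, e.g. u = 0.1 for a = 10:
    print(np.abs(motion_blur_H(np.array([0.0, 0.05, 0.1]))))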


Example 6.5 (B)

It was established that during the time interval T when the shutter was open, the camera moved in such a way that it appeared as if the objects in the scene moved along the positive y axis, with constant acceleration 2α and initial velocity s₀, starting from zero displacement. Derive the transfer function of the degradation process for this case.

In this case x₀(t) = 0 and

d²y₀/dt² = 2α  ⇒  dy₀/dt = 2αt + b  ⇒  y₀(t) = αt² + bt + c

where α is half the constant acceleration and b and c are some integration constants. We have the following initial conditions:

at t = 0, zero shifting, i.e. c = 0
at t = 0, velocity of shifting = s₀, i.e. b = s₀

Therefore:

y₀(t) = αt² + s₀t

We substitute x₀(t) and y₀(t) in equation (6.15) for H(u, v):

H(u, v) = ∫₀ᵀ e^{−2πjv(αt²+s₀t)} dt = ∫₀ᵀ cos[2πvαt² + 2πvs₀t] dt − j ∫₀ᵀ sin[2πvαt² + 2πvs₀t] dt

We may use the following formulae:

∫ cos(ax² + bx + c) dx = √(π/(2a)) [cos((4ac − b²)/(4a)) C((2ax + b)/(2√a)) − sin((4ac − b²)/(4a)) S((2ax + b)/(2√a))]

∫ sin(ax² + bx + c) dx = √(π/(2a)) [sin((4ac − b²)/(4a)) C((2ax + b)/(2√a)) + cos((4ac − b²)/(4a)) S((2ax + b)/(2√a))]

where S(x) and C(x) are defined by

S(x) ≡ √(2/π) ∫₀ˣ sin t² dt,   C(x) ≡ √(2/π) ∫₀ˣ cos t² dt

and they are called Fresnel integrals.


The Fresnel integrals have the following limits:

lim_{x→∞} S(x) = 1/2,   lim_{x→∞} C(x) = 1/2
lim_{x→0} S(x) = 0,     lim_{x→0} C(x) = 0

Therefore, for s₀ → 0 and T → +∞, we have:

C(√(2πv/α)(αT + s₀/2)) → 1/2,   S(√(2πv/α)(αT + s₀/2)) → 1/2
C(√(2πv/α)(s₀/2)) → 0,          S(√(2πv/α)(s₀/2)) → 0
sin(πvs₀²/(2α)) → 0,            cos(πvs₀²/(2α)) → 1

Therefore equation (6.17) becomes:

H(u, v) ≃ (1/(4√(vα))) (1 − j)

Example 6.7

How can we infer the point spread function of the degradation process
from an astronomical image?

We know that by definition the point spread function is the output of the imaging system when the input is a point source. In an astronomical image, a very distant star can be considered as a point source. By measuring then the brightness profile of a star, we immediately have the point spread function of the degradation process this image has been subjected to.


Example 6.8

Suppose that we have an ideal bright straight line in the scene, parallel to the image axis x. Use this information to derive the point spread function of the process that degrades the captured image.

Mathematically, the undegraded image of a bright line can be represented by:

f(x, y) = δ(y)

where we assume that the line actually coincides with the x axis. Then the image of this line will be:

h_l(x, y) = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} h(x − x', y − y') δ(y') dy' dx' = ∫_{−∞}^{+∞} h(x − x', y) dx'

We change variable x̃ ≡ x − x', so that dx' = −dx̃. The limits of x̃ are from +∞ to −∞. Then:

h_l(x, y) = −∫_{+∞}^{−∞} h(x̃, y) dx̃ = ∫_{−∞}^{+∞} h(x̃, y) dx̃    (6.18)

The right hand side of this equation does not depend on x, and therefore the left hand side should not depend on it either; i.e. the image of the line will be parallel to the x axis (or rather coincident with it) and the same all along it:

h_l(x, y) = h_l(y) = ∫_{−∞}^{+∞} h(x̃, y) dx̃    (6.19)

(x̃ is a dummy variable, independent of x.)

The Fourier transform of the image of the line is:

H_l(v) = ∫_{−∞}^{+∞} h_l(y) e^{−2πjvy} dy    (6.20)

The point spread function has as Fourier transform the transfer function, given by:

H(u, v) = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} h(x, y) e^{−2πj(ux+vy)} dx dy    (6.21)

If we set u = 0 in this expression, we obtain:


H(0, v) = ∫_{−∞}^{+∞} [∫_{−∞}^{+∞} h(x, y) dx] e^{−2πjvy} dy    (6.22)

The quantity in square brackets is h_l(y) from (6.19). By comparing equation (6.20) with (6.22) we get:

H(0, v) = H_l(v)    (6.23)

That is, the image of the ideal line gives us the profile of the transfer function along a single direction: the direction orthogonal to the line. This is understandable, as the cross-section of a line orthogonal to its length is no different from the cross-section of a point. By definition, the cross-section of a point is the point spread function of the blurring process. If now we have lots of ideal lines in various directions in the image, we are going to have information as to how the transfer function looks along the directions orthogonal to the lines in the frequency plane. By interpolation, then, we can calculate H(u, v) at any point in the frequency plane.

Example 6.9
It is known that a certain scene contains a sharp edge. How can the
image of the edge be used to infer some information concerning the
point spread function of the imaging device?

Let us assume that the ideal edge can be represented by a step function along the y axis, defined by:

u(y) = 1 for y > 0
u(y) = 0 for y ≤ 0

The image of this function will be:

h_e(x, y) = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} h(x − x', y − y') u(y') dx' dy'

We may define new variables x̃ ≡ x − x', ỹ ≡ y − y'. Obviously dx' = −dx̃ and dy' = −dỹ. The limits of both x̃ and ỹ are from +∞ to −∞. Then:

h_e(x, y) = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} h(x̃, ỹ) u(y − ỹ) dx̃ dỹ


Let us take the partial derivative of both sides of this equation with respect to y:

∂h_e(x, y)/∂y = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} h(x̃, ỹ) [∂u(y − ỹ)/∂y] dx̃ dỹ

It is known that the derivative of a step function with respect to its argument is a delta function, so:

∂h_e(x, y)/∂y = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} h(x̃, ỹ) δ(y − ỹ) dx̃ dỹ = ∫_{−∞}^{+∞} h(x̃, y) dx̃    (6.24)

If we compare (6.24) with equation (6.18), we see that the derivative of the image of the edge is the image of a line parallel to the edge. Therefore, we can derive information concerning the point spread function of the imaging process by obtaining images of ideal step edges at various orientations. Each such image should be differentiated along a direction orthogonal to the direction of the edge. Each resultant derivative image should be treated as the image of an ideal line and used to yield the profile of the point spread function along the direction orthogonal to the line, as described in Example 6.8.
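
A minimal sketch of this recipe for an edge parallel to the x axis (hypothetical code; it assumes the edge image is a NumPy array with rows indexed by y):

    import numpy as np

    def psf_profile_from_edge(edge_img):
        # Differentiate along y, orthogonal to the edge (equation (6.24)):
        # the derivative of the edge image is the image of a line.
        line_img = np.diff(edge_img.astype(float), axis=0)
        # Average the profiles taken along several lines orthogonal to the
        # edge to suppress noise, as done in Example 6.10.
        return line_img.mean(axis=1)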

Example 6.10
Use the methodology of Example 6.9 to derive the point spread function
of an imaging device.
Using a ruler and black ink we create the chart shown in Figure 6.4.

Figure 6.4: A test chart for the derivation of the point spread function
of an imaging device.


This chart can be used to measure the point spread function of our imaging system at orientations 0°, 45°, 90° and 135°. First the test chart is imaged using our imaging apparatus. Then the partial derivative of the image is computed by convolution at orientations 0°, 45°, 90° and 135°, using the Robinson operators. These operators are shown in Figure 6.5.
Figure 6.5: The filters (Robinson operators M0, M1, M2 and M3) used to compute the derivative at orientations 0°, 45°, 90° and 135° respectively.

Figure 6.6: The point spread function (PSF) of an imaging system, shown at two different scales: (a) four profiles of the PSF; (b) zooming into (a); (c) PSF profile for orientations 0° and 90°; (d) PSF profile for orientations 45° and 135°.


The profiles of the resultant images along several lines orthogonal to the original edges are computed and averaged to produce the four profiles for 0°, 45°, 90° and 135° plotted in Figure 6.6a. These are the profiles of the point spread function. In Figure 6.6b we zoom into the central part of the plot of Figure 6.6a. Two of the four profiles of the point spread function plotted there are clearly narrower than the other two. This is because they correspond to orientations 45° and 135°, and the distance of the pixels along these orientations is √2 longer than the distance of pixels along 0° and 90°. Thus, the value of the point spread function that is plotted as being 1 pixel away from the peak is in reality approximately 1.4 pixels away. Indeed, if we take the ratio of the widths of the two pairs of profiles, we find the value of 1.4.

In Figures 6.6c and 6.6d we plot separately the two pairs of profiles and see that the system has the same behaviour along the 45°, 135° and 0°, 90° orientations. Taking into account the √2 correction for the 45° and 135° orientations, we conclude that the point spread function of this imaging system is to a high degree circularly symmetric.

In a practical application these four profiles can be averaged to produce a single cross-section of a circularly symmetric point spread function. The Fourier transform of this 2D function is the system transfer function of the imaging device.

If we know the transfer function of the degradation process, isn't the solution to the problem of image restoration trivial?

If we know the transfer function of the degradation and calculate the Fourier transform of the degraded image, it appears that from equation (6.8) we can obtain the Fourier transform of the undegraded image:

F̂(u, v) = G(u, v)/H(u, v)    (6.25)

Then, by taking the inverse Fourier transform of F̂(u, v), we should be able to recover f(x, y), which is what we want. However, this straightforward approach produces unacceptably poor results.
What happens at points (u, v) where H(u, v) = 0?

H(u, v) probably becomes 0 at some points in the (u, v) plane, and this means that G(u, v) will also be zero at the same points, as seen from equation (6.8). The ratio G(u, v)/H(u, v), as it appears in (6.25), will be 0/0, i.e. undetermined. All this means is that for the particular frequencies (u, v) the frequency content of the original image cannot be recovered. One can overcome this problem by simply omitting the corresponding points in the frequency plane, provided of course that they are countable.


Will the zeroes of H(u, v) and G(u, v) always coincide?

No. If there is the slightest amount of noise in equation (6.8), the zeroes of H(u, v) will not coincide with the zeroes of G(u, v).
How can we take noise into consideration when writing the linear degradation equation?

For additive noise, the complete form of equation (6.8) is:

G(u, v) = F(u, v)H(u, v) + N(u, v)    (6.26)

where N(u, v) is the Fourier transform of the noise field. F̂(u, v) is then given by:

F̂(u, v) = G(u, v)/H(u, v) − N(u, v)/H(u, v)    (6.27)

In places where H(u, v) is zero or even just very small, the noise term may be enormously amplified.
How can we avoid the amplification of noise?

In many cases, |H(u, v)| drops rapidly away from the origin while |N(u, v)| remains more or less constant. To avoid the amplification of noise then, when using equation (6.27), we do not use as filter the factor 1/H(u, v), but a windowed version of it, cutting it off at a frequency before |H(u, v)| becomes too small or before its first zero. In other words we use:

F̂(u, v) = M(u, v)G(u, v) − M(u, v)N(u, v)    (6.28)

where

M(u, v) = 1/H(u, v)   for u² + v² ≤ ω₀²
M(u, v) = 0           for u² + v² > ω₀²    (6.29)

where ω₀ is chosen so that all zeroes of H(u, v) are excluded. Of course, one may use other windowing functions instead of the above window with rectangular profile, to make M(u, v) go smoothly to zero at ω₀.
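
A minimal sketch of such a windowed inverse filter (assuming NumPy, a known transfer function H sampled on the DFT grid of the image, and a cut-off omega0 chosen before the first zero of H; the function name is illustrative only):

    import numpy as np

    def windowed_inverse_filter(g, H, omega0):
        # Restore image g using M(u, v) = 1/H(u, v) inside the disc
        # u^2 + v^2 <= omega0^2 and M(u, v) = 0 outside, equation (6.29).
        G = np.fft.fft2(g)
        v = np.fft.fftfreq(g.shape[0])  # frequencies along rows (cycles/sample)
        u = np.fft.fftfreq(g.shape[1])  # frequencies along columns
        vv, uu = np.meshgrid(v, u, indexing="ij")
        mask = uu ** 2 + vv ** 2 <= omega0 ** 2
        M = np.zeros_like(G)
        M[mask] = 1.0 / H[mask]  # omega0 excludes all zeroes of H
        return np.real(np.fft.ifft2(M * G))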

Example 6.11
Demonstrate the application of inverse filtering in practice by restoring
a motion blurred image.


Let us consider the image of Figure 6.7a. To imitate the way this image would look if it were blurred by motion, we take every 10 consecutive pixels along the x axis, find their average value, and assign it to the tenth pixel. This is what would have happened if, when the image was being recorded, the camera had moved 10 pixels to the left: the brightness of a line segment in the scene with length equivalent to 10 pixels would have been recorded by a single pixel. The result would look like Figure 6.7b. The blurred image g(i, j) in terms of the original image f(i, j) is given by the discrete version of equation (6.11):

g(i, j) = (1/i_T) Σ_{t=0}^{i_T−1} f(i − t, j),   i = 0, 1, ..., N − 1    (6.30)

where i_T is the total number of pixels with their brightness recorded by the same cell of the camera, and N is the total number of pixels in a row of the image. In this example i_T = 10 and N = 128.
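
The blurring of equation (6.30) is easy to reproduce (a sketch, not from the book; it assumes the image is a NumPy array with the blurring x axis as the second array axis, and takes the sum with wrap-around, which matches the periodicity implied by the DFT below):

    import numpy as np

    def motion_blur(f, iT=10):
        # Discrete motion blur, equation (6.30): each pixel becomes the
        # average of iT consecutive pixels along the blurring (x) axis.
        g = np.zeros(f.shape, dtype=float)
        for t in range(iT):
            g += np.roll(f, t, axis=1)  # contributes the term f(i - t, j)
        return g / iT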
The transfer function of the degradation is given by the discrete version of the equation derived in Example 6.4. We shall derive it now here. The discrete Fourier transform of g(i, j) is given by:

G(m, n) = Σ_{k=0}^{N−1} Σ_{l=0}^{N−1} g(k, l) e^{−2πj(mk+nl)/N}    (6.31)

If we substitute g(k, l) from equation (6.30) we have:

G(m, n) = Σ_{k=0}^{N−1} Σ_{l=0}^{N−1} [(1/i_T) Σ_{t=0}^{i_T−1} f(k − t, l)] e^{−2πj(mk+nl)/N}

We rearrange the order of summations to obtain:

G(m, n) = (1/i_T) Σ_{t=0}^{i_T−1} [Σ_{k=0}^{N−1} Σ_{l=0}^{N−1} f(k − t, l) e^{−2πj(mk+nl)/N}]

where the quantity in square brackets is the DFT of f(k, l) shifted by t along the k axis. By applying the property of the Fourier transforms concerning shifted functions, we have:

G(m, n) = (1/i_T) Σ_{t=0}^{i_T−1} F(m, n) e^{−2πjmt/N}

where F(m, n) is the Fourier transform of the original image. As F(m, n) does not depend on t, it can be taken out of the summation:

G(m, n) = F(m, n) (1/i_T) Σ_{t=0}^{i_T−1} e^{−2πjmt/N}


We identify then the Fourier transform of the degradation process as:

H(m, n) = (1/i_T) Σ_{t=0}^{i_T−1} e^{−2πjmt/N}    (6.32)

The sum on the right hand side of this equation is a geometric progression with ratio between successive terms

q ≡ e^{−2πjm/N}

We apply the formula

Σ_{k=0}^{n−1} q^k = (q^n − 1)/(q − 1),   where q ≠ 1

to obtain:

H(m, n) = (1/i_T) (e^{−2πjmi_T/N} − 1)/(e^{−2πjm/N} − 1)

Therefore:

H(m, n) = [sin(πmi_T/N) / (i_T sin(πm/N))] e^{−πjm(i_T−1)/N}    (6.33)

Notice that for m = 0 we have q = 1 and we cannot apply the formula of the geometric progression. Instead we have a sum of 1's in (6.32), which is equal to i_T, and so:

H(0, n) = 1   for 0 ≤ n ≤ N − 1
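
Equations (6.32) and (6.33), including the special case m = 0, can be evaluated directly (a sketch consistent with the notation above):

    import numpy as np

    def motion_blur_H_discrete(N=128, iT=10):
        # Discrete transfer function of the motion blur, equation (6.33);
        # it depends only on the frequency index m along the blurring axis.
        m = np.arange(N)
        H = np.ones(N, dtype=complex)  # H(0, n) = 1 for the m = 0 case
        nz = m != 0
        H[nz] = (np.sin(np.pi * m[nz] * iT / N)
                 / (iT * np.sin(np.pi * m[nz] / N))
                 * np.exp(-1j * np.pi * m[nz] * (iT - 1) / N))
        return H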

It is interesting to compare equation (6.33) with its continuous counterpart, equation (6.16). We can see that there is a fundamental difference between the two equations: in the denominator, equation (6.16) has the frequency u along the blurring axis appearing on its own, while in the denominator of equation (6.33) we have the sine of this frequency appearing. This is because discrete images are treated by the discrete Fourier transform as periodic signals, repeated ad infinitum in all directions.
We can analyse the Fourier transform of the blurred image into its real and imaginary parts:

G(m, n) ≡ G₁(m, n) + jG₂(m, n)
