
FAST EDGE DIRECTED POLYNOMIAL INTERPOLATION

D. Darian Muresan
darian@dmmd.net
Digital Multi-Media Design (DMMD)
Arlington, VA. 22209
ABSTRACT
Image interpolation is a very important topic in digital image processing, especially as consumer digital photography outgrows regular film photography. From enlarging consumer images to creating large artistic prints, interpolation is at the heart of it all. This paper presents a fast and efficient interpolation algorithm that produces good visual results while keeping the computational cost close to that of polynomial interpolation.
1. INTRODUCTION
High interest in better interpolation has produced a plethora of new interpolation algorithms, especially in the consumer market [1, 2]. Convolution-based algorithms are often computationally efficient and work well for small interpolation factors or for decimation, but for factors larger than two it is difficult to notice visual improvement over cubic interpolation. In most cases nonlinear algorithms produce better results. One class of algorithms that has generated a lot of interest is directional interpolation, which tries to first detect edges and then interpolate along edges, avoiding interpolation across them [3, 4]. In this class, there are algorithms that do not require the explicit detection of edges; rather, the edge information is built into the algorithm itself. For example, [5] uses directional derivatives to generate the weights used in estimating the missing pixels from the neighboring pixels. In [6] the local covariance matrix is used in estimating the missing pixels. This interpolation adapts to an arbitrarily oriented edge, although it introduces artifacts in high-frequency regions.
In [7] we generalized the notion of least squares through the
use of optimal recovery and adaptively quadratic (AQua)
signal classes. This allowed us to add additional assumptions about the local image model, such as for example that
the decimated image was obtained by an averaging operation from the high density image. A lot of the interpolation
algorithms that perform well visually also tend to be very
computationally intensive. In this paper we present a directional polynomial interpolation algorithm, which is currently patent pending, and is based on oversimplifying the assumptions of more complex algorithms. Interestingly, the simplified assumptions do not deteriorate the performance of the overall algorithm: the computational cost stays low while the visual results are oftentimes better than those of more complex algorithms.

0-7803-9134-9/05/$20.00 2005 IEEE

2. DIRECTIONAL INTERPOLATION
The algorithms of [6] and [7] assume that the local weights
are determined using least squares or optimal recovery in a
quadratic signal class. In most cases the interpolated values
are weighted sums of all the local neighbors. Considering that, on a rectangular grid, edges at 0, 45, 90, and 135 degrees have the best visual representation, we assume that the interpolated pixels are simply weighted values of the pixels in the 0, 45, 90, or 135 degree direction. Two issues arise: first, how to decide on one of the four directions, and second, how to interpolate in the chosen direction. The answer to both questions is dictated by the rectangular sampling grid used for most digital images.
Before explaining the general procedure for enlargement by any factor K, we focus our attention on the case when the enlargement factor K is an even integer. To better understand the procedure we use a working example where the interpolation factor is K = 6. Later, we show how to generalize the procedure to any real K. Further, our initial assumption is that we are working with a gray scale image. The interpolation steps are based on first determining the directional edge at every pixel, as shown in Fig. 1, and then interpolating in the desired direction. For determining the local direction at each pixel we can use a multitude of approaches. We opted for applying a high-pass filter in the four directions and choosing the direction for which the response is smallest. In the original image, each pixel has one
diagonal and one nondiagonal direction. The diagonal direction can be 45 degrees (also named diagonal-1), 135 degrees (also named diagonal-2), or diagonal-neutral. The nondiagonal direction can be zero degrees (also named horizontal), 90 degrees (also named vertical), or nondiagonal-neutral.

[Fig. 1: (a) the original image feeds a diagonal process and a nondiagonal process, producing a diagonal direction label image and a nondiagonal direction label image; (b) a 3 x 3 neighborhood of original pixels p1 through p9.]
Fig. 1. Determination of local edge direction.

In Fig. 1-b, pixels p1, p2, p3, p4, p5, p6, p7, p8, and p9 are the original image pixels. Given a positive threshold value TH, which can be determined adaptively or preselected, the diagonal and nondiagonal directions of pixel p5 are determined as follows:

1. Calculate the following four differences:

   d1 = abs((p4 + p6)/2 - p5),
   d2 = abs((p3 + p7)/2 - p5),
   d3 = abs((p2 + p8)/2 - p5),
   d4 = abs((p1 + p9)/2 - p5).

2. If abs(d1 - d3) < TH the direction is nondiagonal-neutral,
   Else if d1 < d3 the direction is horizontal,
   Else the direction is vertical.

3. If abs(d2 - d4) < TH the direction is diagonal-neutral,
   Else if d2 < d4 the direction is diagonal-1,
   Else the direction is diagonal-2.
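For illustration, the labeling rule above can be sketched in Python as follows (the function name and the default TH value are illustrative, not from the paper):

```python
def local_directions(p, TH=10.0):
    """Classify the diagonal and nondiagonal edge direction at the
    center pixel p5 of a 3x3 neighborhood p = [p1, ..., p9] (row-major)."""
    p1, p2, p3, p4, p5, p6, p7, p8, p9 = [float(v) for v in p]
    d1 = abs((p4 + p6) / 2 - p5)  # left/right neighbors
    d2 = abs((p3 + p7) / 2 - p5)  # 45-degree (diagonal-1) neighbors
    d3 = abs((p2 + p8) / 2 - p5)  # top/bottom neighbors
    d4 = abs((p1 + p9) / 2 - p5)  # 135-degree (diagonal-2) neighbors

    # Nondiagonal label: the smaller high-pass response wins.
    if abs(d1 - d3) < TH:
        nondiag = "nondiagonal-neutral"
    elif d1 < d3:
        nondiag = "horizontal"
    else:
        nondiag = "vertical"

    # Diagonal label, by the same rule.
    if abs(d2 - d4) < TH:
        diag = "diagonal-neutral"
    elif d2 < d4:
        diag = "diagonal-1"
    else:
        diag = "diagonal-2"
    return diag, nondiag
```

For example, two bright rows over a dark one (a horizontal edge) yield the labels ("diagonal-neutral", "horizontal").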

From the original image we form two new images: the first image, which we call the diagonal direction label image (4), corresponds to the diagonal direction (each pixel has one of three possible labels, namely diagonal-1, diagonal-2, or diagonal-neutral), and the second, which we call the nondiagonal direction label image (5), corresponds to the nondiagonal direction (each pixel has one of three possible labels, namely horizontal, vertical, or nondiagonal-neutral). If we focus our attention on image (4), it is unlikely that in any region of the image we will find an isolated pixel labeled diagonal-1 surrounded by diagonal-2 pixels. The reason is that in most images edge pixels are clustered together. If we do find a diagonal-1 pixel surrounded by diagonal-2 pixels, then most likely the diagonal-1 label was a mistake and the actual label should have been diagonal-2. To further improve the robustness of the labeling algorithm, images (4) and (5) are median or low-pass filtered, and the output images form the new labels for the diagonal and nondiagonal directions.
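The label-smoothing step can be sketched as a 3 x 3 majority vote over the label image, which for categorical labels plays the role of the median filter mentioned above (the function name is illustrative):

```python
from collections import Counter

def smooth_labels(labels):
    """3x3 majority vote over a 2D grid of direction labels; for
    categorical data this approximates median filtering the label image.
    Border pixels are left unchanged for simplicity."""
    h, w = len(labels), len(labels[0])
    out = [row[:] for row in labels]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = [labels[i + di][j + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = Counter(window).most_common(1)[0][0]
    return out
```

An isolated diagonal-1 pixel surrounded by diagonal-2 pixels is relabeled diagonal-2, exactly the correction argued for above.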
Using the directionally labeled images, the interpolation is applied in the horizontal and vertical directions as follows. First, we process the diagonal pixels, which are the gray pixels (11) in Fig. 2-a. To do this, we segment the image into regions of interest of size (K + 1) x (K + 1) (in our case 7 x 7) and focus our attention on one region of interest at a time. In Fig. 2-a one such region of interest is (14), which is depicted more clearly in Fig. 2-b. We label this region of interest as diagonal-1, diagonal-2, or diagonal-neutral based on the majority of the diagonal labels of the nearby original pixels (13). For example, if three of the four nearby pixels are labeled diagonal-1, then the region is labeled diagonal-1. The nearby region has a predefined, a priori fixed size. The smallest nearby region would include at least the original pixels (13) in the region of interest (14). If the nearby region were increased by 6 pixels, it would include all the original pixels (13) shown in Fig. 2-a plus the other original pixels to the left and top of (14) not shown in the figure. Once the region of interest (14) is labeled, use 1-dimensional polynomial interpolation in the determined diagonal direction. If we assume that in Fig. 2-b the direction is diagonal-2, we first interpolate to find all the pixels along diagonal-2, including the center pixel. Next, we interpolate along the diagonal-1 direction. For the diagonal-1 direction the sampling rate is twice as high as for the diagonal-2 direction, since the center pixel is now assumed known. After this step is complete all the diagonal pixels are known, as shown in Fig. 3-a.
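A sketch of the 1-dimensional step, fitting a polynomial through the known samples along the chosen direction and evaluating it at the missing positions (numpy-based; the cubic default is an assumption, as the paper does not fix the polynomial degree):

```python
import numpy as np

def interp_along_line(known_pos, known_val, query_pos, order=3):
    """Fit a 1-D polynomial to the known samples along one direction
    and evaluate it at the missing positions."""
    order = min(order, len(known_pos) - 1)  # avoid an underdetermined fit
    coeffs = np.polyfit(known_pos, known_val, order)
    return np.polyval(coeffs, query_pos)
```

Along diagonal-2 the known samples are the original pixels; along diagonal-1 the just-filled center pixel doubles the sampling rate, so the same routine is simply called with more known positions.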

[Fig. 2: (a) a patch of the enlarged grid with regions (14), (15), (16); (b) the region of interest (14) in detail.]
Fig. 2. Interpolation in the diagonal direction. This patch contains known pixels (10), pixels to be interpolated (11), non-processed pixels (12) and original pixels (13).
Next, we process the horizontal/vertical pixels, which are the gray pixels in Fig. 3-a. To do this, segment the image into (K + 1) x (K + 1) regions of interest. In Fig. 3-a there are four such regions of interest. This time, two of the regions of interest are of type (15) (i.e., known pixels on top and bottom) and two are of type (16) (i.e., known pixels to the left and to the right). Similarly to the previous step, label the regions of interest as horizontal, vertical, or nondiagonal-neutral based on the majority of the nondiagonal labels of the nearby original pixels (13). Once a region of interest (15) or (16) is labeled, use 1-dimensional polynomial interpolation in the determined direction for each row (if the direction is horizontal) or column (if vertical). If the interpolation direction is horizontal, notice that each row will have a different sampling rate. Because the sampling grid is non-uniform, the easiest way to handle this is to apply linear interpolation.
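With numpy, linear interpolation over a non-uniform row reduces to a single call; the positions below are illustrative, standing in for the mix of original pixels and diagonal pixels filled in by the previous step:

```python
import numpy as np

# Known samples on a non-uniform grid along one row.
xp = np.array([0.0, 3.0, 6.0])      # known positions (uneven spacing)
fp = np.array([10.0, 40.0, 70.0])   # known values at those positions
x = np.array([1.0, 2.0, 4.0, 5.0])  # missing positions in this row
row = np.interp(x, xp, fp)          # piecewise-linear fill
```

Because np.interp accepts arbitrary monotone sample positions, each row's different sampling rate needs no special handling.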

Fig. 3. Interpolation in the horizontal and vertical direction. This patch contains known pixels (10), pixels to be interpolated (11), non-processed pixels (12) and original pixels (13).

When the interpolation factor K is an odd integer we proceed in two steps: first interpolate by 2K (notice that 2K is even) and then downsample by two. For example, to interpolate by 3, first interpolate by 6 and then downsample by 2. For K a non-integer factor larger than 1, write K as a fraction of two integers (say K = M/N), then interpolate by M and downsample by N.
Some caution must be taken when applying the algorithm to color images, in particular in the labeling step. First, we can convert the color image to gray scale and use the gray image to determine the local directions. Alternatively, we can convert from RGB to CIELab and use the distance defined in the CIELab space to define d1 through d4. Once the local directions are determined, directional interpolation is applied, as described above, to each color plane separately.
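A common choice for the gray conversion in the first option is the BT.601 luma weighting; the paper does not specify the weights, so these are an assumption:

```python
def to_gray(r, g, b):
    """ITU-R BT.601 luma, used only for the direction-labeling step;
    each color plane is then interpolated separately."""
    return 0.299 * r + 0.587 * g + 0.114 * b
```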

Fig. 4. Comparison of interpolation algorithms: Cubic interpolation on the left (a) and the proposed directional interpolation algorithm on the right (b).

3. RESULTS
We conclude this paper with two examples of the image interpolation algorithm applied to two different images. The first image is a 512 x 512 rings image down-sampled by two, without pre-filtering, and then interpolated back to its original size. The downsampling process introduces aliasing that manifests itself as extra rings running from the top of the image to the lower-right corner. Cubic interpolation (Fig. 4-a) maintains the aliasing introduced by the downsampling process, while our interpolation (Fig. 4-b) removes it very nicely (the aliasing is more noticeable when the image is viewed at arm's length). Notice that the image in Fig. 4-b is also much sharper. The second example is a leaf image interpolated by a factor of 4. Fig. 5-a shows cubic interpolation and Fig. 5-b shows the results of the interpolation presented in this paper. Notice how much sharper and less jagged the proposed interpolation is. The proposed algorithm is integrated in Pictura and Visere Pro (versions higher than 3.0), which are available online from [8]. In Pictura the interpolation algorithm is under the menu item Process/Interpolate/AQua-2.
4. REFERENCES
[1] Interpolation Discussion Website, http://www.interpolatethis.com
[2] Commercial Interpolation Comparison Website, http://www.americaswonderlands.com/digital photo interpolation.htm
[3] J. Allebach and P. W. Wong. Edge-directed interpolation. In Proc. IEEE ICIP, pages 707-710, 1996.
[4] K. Jensen and D. Anastassiou. Subpixel edge localization and the interpolation of still images. IEEE Trans. Image Processing, 4:285-295, 1995.

Fig. 5. Comparison of interpolation algorithms: Cubic interpolation on top (a) and the proposed directional interpolation algorithm on the bottom (b).
[5] R. Kimmel. Demosaicing: Image reconstruction from color CCD samples. IEEE Trans. Image Processing, 8:1221-1228, 1999.
[6] X. Li. New edge-directed interpolation. IEEE Trans. Image Processing, 10:1521-1527, October 2001.
[7] D. D. Muresan and T. W. Parks. Adaptively quadratic (AQua) image interpolation. IEEE Trans. Image Processing, 13(5):690-699, May 2004.
[8] Digital Multi-Media Design (DMMD) Website, http://dmmd.net
