
Unit 1

DIGITAL IMAGE FUNDAMENTALS



1.1 INTRODUCTION:
The term digital image processing generally refers to the processing of a two-dimensional picture by a digital computer. A digital image is an array of real or complex numbers represented by a finite number of bits. An image may be defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y and the amplitude values of f are all finite, discrete quantities, we call the image a digital image.

1.2 FUNDAMENTAL STEPS IN DIGITAL IMAGE PROCESSING:


The fundamental steps involved in digital image processing are summarized in Fig. 1.1. Image acquisition is the first process. It requires an imaging sensor and the capability to digitize the signal produced by the sensor. The sensor could be a monochrome or color TV camera that produces an entire image of the problem domain every 1/30 second, or a line-scan camera that produces a single image line at a time. If the output of the camera or other imaging sensor is not already in digital form, an A/D converter digitizes it.

Image enhancement is among the simplest and most appealing areas of digital image processing. The idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. Image restoration is an area that also deals with improving the appearance of an image, but restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Color image processing is an area that has been gaining importance because of the significant increase in the use of digital images over the internet.

Wavelets are the foundation for representing images in various degrees of resolution. In particular, they are used for data compression and for pyramidal representation, in which images are subdivided successively into smaller regions. Compression deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it; the basis of the reduction process is the removal of redundant data. Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape.

Segmentation partitions an input image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing, and a rugged segmentation procedure brings the process a long way toward a successful solution of an imaging problem.

Fig.1.1: Fundamental steps in digital image processing.

The output of the segmentation stage usually is raw pixel data constituting either the boundary of a region or all the points in the region itself. In either case, conversion of the data to a form suitable for computer processing is necessary. Representation is only a part of the solution for transforming raw data into a form suitable for subsequent computer processing. Description, also called feature selection, deals with extracting features that result in some quantitative information of interest, or features that are basic for differentiating one class of objects from another. Recognition is the process that assigns a label to an object based on the information provided by its descriptors. Knowledge about a problem domain is coded into an image processing system in the form of a knowledge base. This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search to be conducted in seeking that information.

1.3 ELEMENTS OF AN IMAGE PROCESSING SYSTEM:
The basic components comprising a typical general-purpose system for digital image processing are shown in Fig. 1.2, and the function of each component is discussed below. With reference to sensing, two elements are required to acquire digital images. The first is a physical device that is sensitive to the energy radiated by the object we wish to image. The second, called a digitizer, is a device for converting the output of the physical sensing device into digital form.

Specialized image processing hardware usually consists of the digitizer just mentioned, plus hardware that performs other primitive operations, such as an arithmetic logic unit (ALU) that performs arithmetic and logic operations in parallel on entire images (for example, an ALU can be used to average images as quickly as they are digitized, for the purpose of noise reduction). This type of hardware sometimes is called a front-end subsystem, and its most distinguishing characteristic is speed.

Fig.1.2: Components of a general-purpose image processing system.

The computer in an image processing system is a general-purpose computer and can range from a PC to a supercomputer. In dedicated applications, sometimes specially designed computers are used to achieve a required level of performance, but in general-purpose image processing systems almost any well-equipped PC-type machine is suitable for offline image processing tasks. Software consists of specialized modules that perform specific tasks. A well-designed package also includes the capability for the user to write code that, as a minimum, utilizes the specialized modules. Mass storage is an essential requirement in an image processing system. An image of size 1024 × 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space if the image is not compressed. Image displays in use today are mainly color (preferably flat-screen) TV monitors. Monitors are driven by the outputs of image and graphics display cards that are an integral part of the computer
system. In some cases it is necessary to have stereo displays, and these are implemented in the form of headgear containing two small displays embedded in goggles worn by the user. Hardcopy devices for recording images include laser printers, film cameras, heat-sensitive devices, inkjet units, and digital units such as optical and CD-ROM disks.

1.4 NETWORKING:
Because of the large amount of data inherent in image processing applications, the key consideration in image transmission is bandwidth. In a dedicated network this typically is not a problem, but communications with remote sites via the internet are not always as efficient.

1.5 ELEMENTS OF VISUAL PERCEPTION:


It is useful to understand the physical limitations of human vision in terms of factors that are also used when working with digital images. Factors such as how human and electronic imaging compare in terms of resolution and ability to adapt to changes in illumination are not only interesting, they are also important from a practical point of view.

1.5.1 STRUCTURE OF THE HUMAN EYE:
Fig. 1.3 shows a simplified horizontal cross section of the human eye. The eye is nearly a sphere, with an average diameter of approximately 20 mm. Three membranes enclose the eye:

the cornea and sclera outer cover; the choroid; and the retina.


The cornea is a tough, transparent tissue that covers the anterior surface of the eye; continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe. The choroid lies directly below the sclera. This membrane contains a network of blood vessels that serve as the major source of nutrition to the eye. The choroid coat is heavily pigmented and hence helps to reduce the amount of extraneous light entering the eye and the backscatter within the optic globe. At its anterior extreme, the choroid is divided into the ciliary body and the iris diaphragm. The latter contracts or expands to control the amount of light that enters the eye. The central opening of the iris (the pupil) varies in diameter from approximately 2 to 8 mm. The front of the iris contains the visible pigment of the eye, whereas the back contains a black pigment.

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body. It contains 60 to 70% water, about 6% fat, and more protein than any other tissue in the eye. The lens is colored by a slightly yellow pigmentation that increases with age. The lens absorbs approximately 8% of the visible light spectrum, with relatively higher absorption at shorter wavelengths. Both infrared and ultraviolet light are absorbed appreciably by proteins within the lens structure and, in excessive amounts, can damage the eye.

The innermost membrane of the eye is the retina, which lines the inside of the wall's entire posterior portion. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Pattern vision is afforded by the distribution of discrete light receptors: cones and rods. The cones in each eye number between 6 and 7 million. They are located primarily in the central portion of the retina, called the fovea, and are highly sensitive to color. Humans can resolve fine
details with these cones largely because each one is connected to its own nerve end. Cone vision is called photopic or bright-light vision.

Fig.1.3: Simplified diagram of a cross section of the human eye.

The number of rods is much larger: some 75 to 150 million are distributed over the retinal surface. The larger area of distribution, and the fact that several rods are connected to a single nerve end, reduce the amount of detail discernible by these receptors. Rods serve to give a general, overall picture of the field of view. They are not involved in color vision and are sensitive to low levels of illumination.

1.5.2 IMAGE FORMATION IN THE EYE:
The principal difference between the lens of the eye and an ordinary optical lens is that the former is flexible. The radius of curvature of the anterior surface of the lens is greater than the radius of its posterior surface. The shape of the lens is controlled by tension in the fibers of the ciliary body. To focus on distant objects, the controlling muscles cause the lens to be relatively flattened; similarly, these muscles allow the lens to become thicker in order to focus on objects nearer the eye. The distance between the center of the lens and the retina (called the focal length) varies from approximately 17 mm to about 14 mm as the refractive power of the lens increases from its minimum to its maximum.

Unit 1

DIGITAL IMAGE FUNDAMENTALS

When the eye focuses on an object farther away than about 3 m, the lens exhibits its lowest refractive power; when the eye focuses on a nearby object, the lens is most strongly refractive. This information makes it easy to calculate the size of the retinal image of any object. For example, suppose the observer is looking at a tree 15 m high at a distance of 100 m. If h is the height in mm of that object in the retinal image, the geometry yields 15/100 = h/17, or h = 2.55 mm. Perception then takes place by the relative excitation of light receptors, which transform radiant energy into electrical impulses that are ultimately decoded by the brain.
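As a quick check of the similar-triangles relation above, here is a minimal Python sketch; the function name and the 17 mm focal-length default are illustrative choices that simply mirror the worked example.

```python
# Retinal image size from similar triangles: H / d = h / f,
# where H is the object height, d the viewing distance, and
# f ~ 17 mm the lens-to-retina distance for distant focus.

def retinal_image_height_mm(object_height_m, distance_m, focal_length_mm=17.0):
    """Return the height (in mm) of the retinal image of an object."""
    return focal_length_mm * object_height_m / distance_m

if __name__ == "__main__":
    h = retinal_image_height_mm(15.0, 100.0)    # the 15 m tree at 100 m
    print(f"Retinal image height: {h:.2f} mm")  # -> 2.55 mm
```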

Fig.1.4: Graphical representation of the eye looking at a tree. Point C is the optical center of the lens.

1.5.3 BRIGHTNESS ADAPTATION AND DISCRIMINATION:
Digital images are displayed as a discrete set of intensities, so the eye's ability to discriminate between intensity levels is an important consideration in presenting image processing results. The range of light intensity levels to which the human visual system can adapt is enormous, on the order of 10^10, from the scotopic threshold to the glare limit. Experimental evidence indicates that subjective brightness (intensity as perceived by the human visual system) is a logarithmic function of the light intensity incident on the eye. The solid curve in Fig. 1.5 represents the range of intensities to which the visual system can adapt. In photopic vision alone, the range is about 10^6. The transition from scotopic to photopic vision is gradual over the approximate range from 0.001 to 0.1 millilambert (−3 to −1 mL on the log scale), as the double branches of the adaptation curve in this range show.

Unit 1

DIGITAL IMAGE FUNDAMENTALS

Fig.1.5: Intensity vs. subjective brightness.

The visual system cannot operate over such a range simultaneously; rather, it accomplishes these large variations by changes in its overall sensitivity, a phenomenon known as brightness adaptation. The total range of distinct intensity levels the eye can discriminate simultaneously is rather small compared with the total adaptation range; the current sensitivity level of the visual system is called the brightness adaptation level.

Fig.1.6: Some examples of well-known optical illusions, shown in (a), (b) and (c).

Other examples of human perception phenomena are optical illusions, in which the eye fills in non-existing information or wrongly perceives geometrical properties of objects. In Fig. 1.6(a) the outline of a square is seen clearly, in spite of the fact that no lines defining such a figure are part of the image. Fig. 1.6(b) shows the same effect, this time with a circle; note how just a few lines are sufficient to give the illusion of a complete circle. In Fig. 1.6(c) the two horizontal line segments are of the same length, but one appears shorter than the other.

1.6 A SIMPLE IMAGE FORMATION MODEL:


Images are 2-D functions of the form f(x, y), where the value or amplitude of f at spatial coordinates (x, y) is a positive scalar quantity. What exactly does the value of f mean? Definition: an image is a distribution of light energy as a function of spatial position. To see an image of an object, a light source must emit light energy.

Fig.1.7: Image formation: a light source (e.g. the sun) emits light energy; the light reflected from a point P(x, y) on the object, r(x, y), is captured by a scanner or camera and displayed on a monitor as the image f(x, y).
Light energy can do one of three things when incident on an object: it can be absorbed by the object, transmitted through the object, or reflected from the object. By the conservation of energy, the total light energy incident on an object must be conserved:

R + T + A = 1 (conservation of energy)

Here R is the fraction of light reflected, A the fraction absorbed, and T the fraction transmitted. If an object transmits most of the light (T = 1, R and A = 0) it is referred to as a clear or transparent object, and objects behind it can be seen (for example, glass). An opaque object transmits no light (T = 0) and simply reflects (R = 1) or absorbs (A = 1) the incident light. The light reflected from an object's surface is the most important in the formation of an image: light radiating from the sun is reflected off the surface of the tree and received by the camera lens to form an image on the camera film or photograph. The image formed at the camera lens, converted into electronic signals on a monitor or recorded as a photograph, can be expressed mathematically as

f(x, y) = i(x, y) · r(x, y)

where i is the incident illumination and r the reflectance. The intensity of a monochrome image at any coordinates (x0, y0) is called the gray level of the image at that point.

Fig.1.8: Image coordinate convention, with the origin (0, 0) at the top-left corner, x increasing downward and y increasing to the right.

Consider a 2-bit image, where
(0, 0) black
(0, 1) dark gray
(1, 0) light gray
(1, 1) white

For a 3-bit image:
(0, 0, 0) black
(0, 0, 1) very dark gray
(0, 1, 0) dark gray
(0, 1, 1) gray
(1, 0, 0) gray
(1, 0, 1) light gray
(1, 1, 0) very light gray
(1, 1, 1) white

Mathematically, the gray level L = f(x0, y0) represents the gray-level value at the point (x0, y0) in an image.
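The illumination-reflectance model above can be illustrated with a short NumPy sketch; the array sizes and the illumination/reflectance values below are made up for illustration and are not taken from the text.

```python
import numpy as np

# Illumination i(x, y): set by the light source (here a smooth left-to-right
# gradient). Reflectance r(x, y): set by the object surface, values in [0, 1].
M, N = 4, 4
i = np.tile(np.linspace(0.2, 1.0, N), (M, 1))   # illumination component
r = np.full((M, N), 0.5)                        # uniform 50% reflectance
r[1:3, 1:3] = 0.9                               # a brighter object patch

f = i * r                                       # image: f(x, y) = i(x, y) * r(x, y)

# Map the continuous intensities to 8-bit gray levels [0, 255] for display.
gray = np.round(f / f.max() * 255).astype(np.uint8)
print(gray)
```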

1.7 IMAGE SAMPLING AND QUANTIZATION:


Image sampling and quantization is the conversion from a continuous image to a digital image, i.e., the generation of digital images from sensed data. The output of most sensors is a continuous voltage waveform whose amplitude and spatial behavior are related to the physical phenomenon being sensed. To create a digital image we need to convert the continuous sensed data into digital form. This involves two processes: 1) sampling and 2) quantization.

Fig.1.9: Generating a digital image. (a) Continuous image. (b) A scan line from A to B in the continuous image. (c) Sampling and quantization. (d) Digital scan line. From Fig. 1.9 (a)–(d):
Consider a black-and-white image, with white represented by 1 and black by 0, and shades of gray between 0 and 1. The continuous image f(x, y) has continuous coordinates x, y and continuous amplitude f. To convert it into digital form we have to sample both the coordinates x, y and the amplitude f. Digitizing the coordinate values is called sampling; digitizing the amplitude values is called quantization. Figure 1.9(b) shows the waveform of gray-level values of the image along the line AB; the random variations are due to image noise. To sample this function we take equally spaced samples along AB, and the set of discrete locations gives the sampled function. In a 3-bit image the number of gray levels is 8: the gray-level scale is divided into 8 discrete levels, and the vertical marks indicate the specific value assigned to each of the eight gray levels. The continuous gray levels are quantized by assigning one of the 8 discrete gray levels to each sample. Starting at the top of the image and carrying out this procedure line by line produces a 2-D digital image.
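A minimal sketch of sampling and quantization along a single scan line: the "continuous" intensity profile here is a made-up sinusoid, and the 32 samples and 8 (3-bit) gray levels are illustrative choices echoing the example above.

```python
import numpy as np

# A "continuous" intensity profile along the scan line A-B,
# modeled here by a dense array of values in [0, 1].
x_dense = np.linspace(0.0, 1.0, 1000)
profile = 0.5 + 0.4 * np.sin(2 * np.pi * 3 * x_dense)  # stand-in for f along A-B

# Sampling: keep equally spaced samples along the line (digitize x).
num_samples = 32
idx = np.linspace(0, len(profile) - 1, num_samples).astype(int)
samples = profile[idx]

# Quantization: map each sample to one of 8 discrete gray levels (3 bits).
levels = 8
quantized = np.round(samples * (levels - 1)).astype(int)  # values in 0..7

print(quantized)
```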

Fig.1.10 (a), (b).

In a digital image f(x, y), f is the quantized value and (x, y) are the sampled coordinate values. An image f(x, y) is sampled so that the resulting digital image has M rows and N columns. The values of the coordinates (x, y) are discrete quantities, and for convenience we use integer values for these discrete coordinates: (x, y) = (0, 0), (x, y) = (0, 1), and so on (this is just notation, not the actual values of the physical coordinates).

Fig.1.11: Coordinate convention: the first coordinate x runs from 0 to M−1 down the rows, and the second coordinate y runs from 0 to N−1 across the columns.

Converting the M × N digital image into a compact matrix form gives

f(x, y) = [ f(0, 0)      f(0, 1)      ...  f(0, N−1)
            f(1, 0)      f(1, 1)      ...  f(1, N−1)
            ...
            f(M−1, 0)    f(M−1, 1)    ...  f(M−1, N−1) ]

The right-hand side of this equation is, by definition, a digital image.

Each element of this matrix is called an image element, picture element, pixel, or pel. Expressing sampling and quantization in mathematical terms: let Z be the set of integers (positive and negative) and R the set of real numbers. Sampling partitions the xy-plane into a grid.

Fig.1.12: (a) The continuous image f(x, y) in the xy-plane. (b) After sampling, the image is defined only at grid points (x, y) in Z² of the xy-plane.

The coordinates of the center of each grid element are a pair of elements from the Cartesian product Z², the set of all ordered pairs of integers. Note: f(x, y) is a digital image if (x, y) are integers from Z² and f is a function that assigns a gray-level value f ∈ R (the set of real numbers) to each distinct pair of coordinates (x, y); this assignment is the quantization process. If the gray-level values are also integers, then f ∈ Z.

1.7.1 DECISION MAKING POINT:
The digitization process requires decisions about values for M and N (the numbers of rows and columns of pixels) and for the number of discrete gray levels allowed for each pixel. The only requirement on M and N is that they be positive integers. Due to processing, storage and sampling hardware considerations, the number of gray levels is typically L = 2^k, where k is the number of bits per pixel. We assume that the discrete gray levels are equally spaced integers in the interval [0, L−1], the dynamic range of the image. The number of bits required to store a digital image (the file size in bits) is b = M × N × k; when M = N this becomes b = N²k, where k is the bit depth. The number of bytes is b/8.

1.7.2 SPATIAL AND GRAY-LEVEL RESOLUTION:
Resolution is the number of pixels per inch. Spatial resolution refers to resolving the image plane into small pixels. Sampling is the principal factor determining the spatial resolution of an image; spatial resolution is the smallest discernible detail in an image. The higher the sampling frequency, the better the resolution.

Gray-level resolution is the number of gray levels per pixel: the smallest discernible change in gray level, which is purely a subjective process. Due to hardware considerations, the number of gray levels is usually an integer power of 2. For an 8-bit-per-pixel image the number of gray levels is 2^8 = 256.
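The storage calculation b = M × N × k can be checked with a few lines of Python; the 1024 × 1024, 8-bit case mentioned in Section 1.3 is reproduced below (the function name is illustrative).

```python
def image_storage(M, N, k):
    """Return (bits, bytes) needed to store an uncompressed M x N image
    with k bits per pixel, i.e. b = M * N * k."""
    bits = M * N * k
    return bits, bits // 8

if __name__ == "__main__":
    bits, nbytes = image_storage(1024, 1024, 8)
    print(f"{bits} bits = {nbytes} bytes "
          f"(= {nbytes / 2**20:.0f} MB)")      # one megabyte, as stated earlier
    print("Gray levels for k = 8:", 2 ** 8)    # L = 2^k = 256
```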
Fig.1.13: The image plane, with spatial coordinates x and y.

ALIASING AND MOIRÉ PATTERNS:
Shannon sampling theorem (fs > 2·fmax): if a function is sampled at a rate equal to or greater than twice its highest frequency, it is possible to completely recover the original function from its samples. Sampling below this rate introduces additional frequency components into the sampled function, called aliasing frequencies. The sampling rate in an image is the number of samples per inch.

By reducing the high-frequency components in an image we can reduce aliasing (some aliasing is always present in a sampled image). The effect of aliased frequencies can be seen, under the right conditions, in the form of so-called moiré patterns: two identical periodic patterns of equally spaced vertical bars are rotated in opposite directions and then superimposed on each other by multiplying the two images. The moiré pattern is caused by the breakup of the periodicity and appears as a 2-D sinusoidal (aliased) waveform running in the vertical direction.

Fig 1.14
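The sampling theorem and aliasing can also be illustrated numerically with a made-up 1-D example: a 9 Hz cosine sampled at only 10 samples per second (below the required rate of 18 samples per second) produces exactly the same samples as a 1 Hz cosine, its alias.

```python
import numpy as np

fs = 10.0                    # sampling rate: 10 samples per second
n = np.arange(20)            # sample indices
t = n / fs                   # sample instants

f_high = 9.0                 # 9 Hz signal: above the Nyquist limit fs/2 = 5 Hz
f_alias = 1.0                # its alias: |fs - f_high| = 1 Hz

x_high = np.cos(2 * np.pi * f_high * t)
x_alias = np.cos(2 * np.pi * f_alias * t)

# The two sets of samples are identical: once sampled below twice its highest
# frequency, the 9 Hz signal cannot be distinguished from a 1 Hz signal.
print(np.allclose(x_high, x_alias))   # True
```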

1.8 BASIC RELATIONSHIPS BETWEEN PIXELS:


A pixel p at coordinates (x, y) has 8 neighbors: 2 vertical neighbors, 2 horizontal neighbors, and 4 diagonal neighbors. The four horizontal and vertical neighbors of p have coordinates

(x − 1, y), (x + 1, y), (x, y − 1), (x, y + 1)

This set of pixels, {(x − 1, y), (x + 1, y), (x, y − 1), (x, y + 1)}, called the 4-neighbors of p, is denoted by N4(p).
Each of these pixels is at unit distance from (x, y), and some of the neighbors of p lie outside the digital image if (x, y) is on the border of the image. The four diagonal neighbors of p have coordinates

(x − 1, y − 1), (x + 1, y − 1), (x − 1, y + 1), (x + 1, y + 1)

and are denoted by ND(p). The sets N4(p) and ND(p) together are called the 8-neighbors of p, denoted by N8(p). Note: some of the points in ND(p) and N8(p) fall outside the image if (x, y) is on the border of the image.

1.8.1 ADJACENCY, CONNECTIVITY, REGIONS AND BOUNDARIES:
To find out whether two pixels are connected, it must be determined (i) whether they are neighbors and (ii) whether their gray levels satisfy a specified criterion of similarity (for example, whether their gray levels are equal). Let V be the set of gray-level values used to define adjacency. In a binary image V = {1} if we are referring to adjacency of pixels with value 1. For a gray-scale image with 256 gray levels, V could be any subset of these values, for example V = {0, 4, 10, 255}. There are three types of adjacency: 4-adjacency, 8-adjacency, and m-adjacency (mixed adjacency).

4-ADJACENCY: Two pixels p and q with values from V are 4-adjacent if q is in the set N4(p). Example, for a 3 × 3 binary image:

0 0 (p)1

1 1 1

(q)1 0 0
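A minimal sketch of the neighborhood sets N4(p), ND(p), N8(p) and a 4-adjacency test on a 3 × 3 binary example like the one above; the function names and the NumPy representation are illustrative assumptions, not part of the text.

```python
import numpy as np

def n4(p):
    """4-neighbors of pixel p = (x, y)."""
    x, y = p
    return {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)}

def nd(p):
    """Diagonal neighbors of pixel p = (x, y)."""
    x, y = p
    return {(x - 1, y - 1), (x + 1, y - 1), (x - 1, y + 1), (x + 1, y + 1)}

def n8(p):
    """8-neighbors: union of N4(p) and ND(p)."""
    return n4(p) | nd(p)

def four_adjacent(img, p, q, V=frozenset({1})):
    """p and q are 4-adjacent if q is in N4(p) and both values are in V."""
    return q in n4(p) and img[p] in V and img[q] in V

img = np.array([[0, 0, 1],
                [1, 1, 1],
                [1, 0, 0]])
print(four_adjacent(img, (0, 2), (1, 2)))   # True: vertical neighbors, both 1
print(four_adjacent(img, (0, 2), (1, 1)))   # False: diagonal, not in N4(p)
```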
Two pixels p and q are 4-connected provided p is a 4-neighbor of q and their gray levels meet some predetermined criterion. 4-connectivity allows only a vertical or horizontal path to be traced from pixel p to q.

8-ADJACENCY: Two pixels p and q are 8-connected provided p is an 8-neighbor of q and their gray levels meet some predefined criterion. 8-connectivity includes the diagonal neighbors in determining a connected path; the main difference between 4-connectivity and 8-connectivity is that 8-connectivity allows diagonal paths between pixels. The main difficulty with 8-adjacency is that it can produce two possible paths, as shown below.

0 0 (p)1

1 1 0

1(q) 0 0

M-ADJACENCY (mixed adjacency): Two pixels p and q with values from V are m-adjacent if (i) q is in N4(p), or (ii) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V. Equivalently, two pixels p and q are m-connected provided p is an 8-neighbor of q and the 4-neighbor set of p does not intersect the 4-neighbor set of q in a pixel whose value is from V.

5 5 5(p)

5(q)

M-connectivity eliminates the multiple paths by removing the diagonal path when 4-connectivity already exists between the two pixels p and q.

1.8.2 CONNECTIVITY, REGIONS AND BOUNDARIES:
If S is a subset of pixels in an image, then two pixels in S are said to be connected if there exists a path between them whose pixels are all in S. For any pixel in S, the set of pixels connected to it is called a connected component of S. If S has only one connected component, then S is called a connected set. If R is a subset of pixels in an image, R is called a region if R is a connected set. The boundary of a region R is the set of pixels in R that have at least one neighbor that is not in R. If R is the entire image, then its boundary is defined by the pixels in the extreme (first and last) rows and columns of the image.
Edges are formed from pixels whose derivative values exceed a specified threshold [1].

1.8.3 DISTANCE MEASURES:
Definition: a function D of two pixels p and q, D(p, q), is a distance function if
D(p, q) ≥ 0 (with D(p, q) = 0 if and only if p = q),
D(p, q) = D(q, p), and
D(p, q) ≤ D(p, z) + D(z, q) (where z is a third pixel).
Assume that the coordinates of pixels p and q are (x, y) and (s, t) respectively.
Euclidean distance: De(p, q) = [(x − s)² + (y − t)²]^(1/2)
D4 (city-block) distance: D4(p, q) = |x − s| + |y − t|
D8 (chessboard) distance: D8(p, q) = max(|x − s|, |y − t|)
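A short sketch of the three distance measures just defined, for pixels p = (x, y) and q = (s, t); the function names are illustrative. (The Dm distance discussed next depends on the path taken, so it is not included here.)

```python
import math

def d_euclidean(p, q):
    """Euclidean distance De(p, q) = sqrt((x - s)^2 + (y - t)^2)."""
    (x, y), (s, t) = p, q
    return math.hypot(x - s, y - t)

def d4(p, q):
    """City-block distance D4(p, q) = |x - s| + |y - t|."""
    (x, y), (s, t) = p, q
    return abs(x - s) + abs(y - t)

def d8(p, q):
    """Chessboard distance D8(p, q) = max(|x - s|, |y - t|)."""
    (x, y), (s, t) = p, q
    return max(abs(x - s), abs(y - t))

p, q = (0, 0), (3, 4)
print(d_euclidean(p, q), d4(p, q), d8(p, q))   # 5.0  7  4
```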

Dm distance: the previous distance measures depend only on the pixel coordinates and not on the path between the pixels. The Dm distance is defined as the shortest m-path between the points (considering m-adjacency).

1.8.4 IMAGE OPERATIONS ON A PIXEL BASIS:
Images are represented in the form of matrices, so when we refer to an operation like dividing one image by another, the division is carried out between corresponding pixels in the two images. For example, if f and g are images, the first element of the image formed by dividing f by g is simply the first pixel in f divided by the first pixel in g; the assumption is that none of the pixels in g have a value of 0. Other arithmetic and logic operations are similarly defined between corresponding pixels in the images involved.
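A minimal sketch of an image operation carried out on a pixel basis, dividing image f by image g element-wise under the stated assumption that no pixel of g is zero; the small arrays are made up for illustration.

```python
import numpy as np

f = np.array([[10, 20], [30, 40]], dtype=float)
g = np.array([[ 2,  4], [ 5,  8]], dtype=float)

# Pixel-basis division: the first pixel of the result is the first pixel
# of f divided by the first pixel of g, and so on for every coordinate.
assert np.all(g != 0), "division assumes no pixel of g is zero"
result = f / g
print(result)            # [[5. 5.] [6. 5.]]
```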

1.9 IMAGING GEOMETRY:


A geometric operation maps pixel information (i.e., the intensity value at each pixel location (x, y)) in an input image to another location (x′, y′) in an output image.

Fig.1.15: Mapping from the input image f(x, y) to the output image f(x′, y′).

Mathematical equation: the output coordinates are obtained from the input coordinates by a transformation of the form v′ = A·v + B, where v is the vector of input coordinates, A is a transformation matrix and B is a translation vector.

Translate: change image content position. Scale: change image content size. Rotate: change image content orientation. Reflect: flip the image content over. Affine transformation: a general linear geometric transformation of image content. All transformations are expressed in a 3-D Cartesian coordinate system in which a point has coordinates denoted by (X, Y, Z); if 2-D images are involved, we work with the (x, y) image coordinates introduced previously.

1.9.1 TRANSLATION (as shown in the table of Fig. 1.17):
Definition: the translate operator performs a geometric transformation which maps the position of each pixel (picture element) in an input image into a new position in an output image, where the dimensionality of the two images often is, but need not necessarily be, the same. The task is to translate a point with coordinates (X, Y, Z) to a new location by using displacements (X0, Y0, Z0). The translation is easily accomplished by using the equations

X* = X + X0, Y* = Y + Y0, Z* = Z + Z0        (1.1)

where (X*, Y*, Z*) are the coordinates of the new point. In matrix form, using the augmented coordinate vector,

[X*]   [1 0 0 X0] [X]
[Y*] = [0 1 0 Y0] [Y]        (1.2)
[Z*]   [0 0 1 Z0] [Z]
                  [1]

To simplify the notation, we represent Eq. (1.2) in square-matrix form:

[X*]   [1 0 0 X0] [X]
[Y*] = [0 1 0 Y0] [Y]        (1.3)
[Z*]   [0 0 1 Z0] [Z]
[1 ]   [0 0 0 1 ] [1]

The unified matrix representation is

V* = A V        (1.4)

where V is a column vector containing the original (augmented) coordinates, V* is a column vector containing the transformed coordinates, and A is a 4 × 4 transformation matrix. The translation process is accomplished by using

V* = T V        (1.5)

where T is the 4 × 4 translation matrix in Eq. (1.3).

2-D case: an image element located at (x, y) in the original image is shifted to a new position (x*, y*) in the corresponding output image by using the displacement (x0, y0). Application: translation is used to improve the visualization of an image. How it works: the translation operator performs a translation of the form x* = x + x0, y* = y + y0.

Note: the dimensions of the input image are well defined, and the output image is also a discrete space of finite dimension; if the new coordinates (x*, y*) are outside the image, the translate operator will normally ignore them. Guidelines for use: the translate operator takes two arguments (x0, y0) which specify the desired horizontal and vertical displacements. Example: consider a 6 × 6 pixel image whose subject is centered at (3, 3); the task is to translate the subject into the lower-right corner of the image, (x*, y*) = (6, 6), using displacements (x0, y0) = (3, 3).
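A small sketch of the 2-D translate operator on a made-up 6 × 6 binary image: each pixel at (x, y) is moved to (x + x0, y + y0), and pixels mapped outside the image boundary are ignored, which is exactly the information loss noted below Fig. 1.16.

```python
import numpy as np

def translate(img, x0, y0):
    """Shift image content by (x0, y0); pixels mapped outside are ignored."""
    out = np.zeros_like(img)
    rows, cols = img.shape
    for x in range(rows):
        for y in range(cols):
            xs, ys = x + x0, y + y0                 # x* = x + x0, y* = y + y0
            if 0 <= xs < rows and 0 <= ys < cols:   # keep only in-bounds pixels
                out[xs, ys] = img[x, y]
    return out

img = np.zeros((6, 6), dtype=int)
img[2:5, 2:5] = 1                 # a 3 x 3 subject centered near (3, 3)
print(translate(img, 3, 3))       # subject pushed into the lower-right corner;
                                  # the part mapped past the border is lost
```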

Fig 1.16
Information is lost because pixels mapped outside the boundaries defined by the input image were ignored.

1.9.2 SCALING (as shown in the table of Fig. 1.17):
Common names: zoom, shrink, pixel replication, pixel interpolation, sub-sampling. The scale operator performs a geometric transformation which can be used to shrink or zoom the size of an image. Image reduction, commonly known as sub-sampling, is performed by replacing a group of pixel values by one arbitrarily chosen pixel value from within the group, or by interpolating between pixel values in a local neighborhood. Image zooming is achieved by pixel replication or by interpolation (inserting pixels into the image). Application: scaling is used to change the visual appearance of an image and to alter the quantity of information stored in a scene. How it works: scaling compresses or expands an image along the coordinate directions. There are two methods of shrinking: in sub-sampling, one pixel value within a local neighborhood is chosen (for example, at random, or simply the upper-left pixel) to be representative of its surroundings; in interpolation, the pixel values within a neighborhood are replaced by a statistic of the local values, such as their mean. An image (or a region of an image) can be zoomed either through pixel replication or interpolation. Pixel replication simply replaces each original image pixel by a group of pixels with the same value, while interpolation of the values of neighboring pixels in the original image can be performed in order to replace each pixel with an expanded group of pixels.

Mathematically, for 3-D scaling: X* = Sx·X, Y* = Sy·Y, Z* = Sz·Z, which in the unified representation is V* = S V with S = diag(Sx, Sy, Sz, 1).


1.9.3 ROTATION (as shown in the table of Fig. 1.17):
Definition: the rotation operator performs a geometric transform which maps the position (x, y) of a pixel in an input image onto a position (x′, y′) in an output image by rotating it through a user-specified angle θ about an origin. 3-D rotation: to rotate a point about another arbitrary point in space requires three transformations: translate the arbitrary point to the origin, perform the rotation, and translate the point back to its original position.

Rotation of a point about the Z axis by an angle θ is achieved by using the transformation Rθ. The rotation angle θ is measured clockwise when looking at the origin from a point on the +Z axis; this transformation affects only the values of the X and Y coordinates. Rotation of a point about the X axis by an angle α is performed by using the transformation Rα, and rotation about the Y axis by an angle β by using the transformation Rβ (the matrices are given in the table of Fig. 1.17).
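The translation, scaling and Z-axis rotation matrices summarized in the table of Fig. 1.17 can be sketched as 4 × 4 homogeneous-coordinate matrices; the helper names below are illustrative, and the sign convention used for Rθ is an assumption that follows the clockwise-angle convention described above.

```python
import numpy as np

def T(x0, y0, z0):
    """Translation matrix: V* = T V gives X* = X + x0, etc."""
    M = np.eye(4)
    M[:3, 3] = [x0, y0, z0]
    return M

def S(sx, sy, sz):
    """Scaling matrix: X* = sx*X, Y* = sy*Y, Z* = sz*Z."""
    return np.diag([sx, sy, sz, 1.0])

def Rz(theta):
    """Rotation about the Z axis by angle theta (radians); only the X and Y
    coordinates are affected. Sign convention assumed, not from the text."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c,  s, 0, 0],
                     [-s,  c, 0, 0],
                     [ 0,  0, 1, 0],
                     [ 0,  0, 0, 1]])

V = np.array([1.0, 0.0, 2.0, 1.0])   # point (X, Y, Z) in homogeneous form
print(T(5, 5, 0) @ V)                # translated point
print(Rz(np.pi / 2) @ V)             # rotated 90 degrees about Z
```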

Fig.1.17: Table of the basic 4 × 4 transformation matrices.

1.9.4 CONCATENATION AND INVERSE TRANSFORMATIONS (as shown in the table of Fig. 1.17):
The application of several transformations can be represented by a single 4 × 4 transformation matrix.
For example, translation followed by scaling of a point v is given by

V* = S(T v) = A v

where A is the 4 × 4 matrix A = S T. These matrices generally do not commute, so the order of application is important. The ideas discussed above can be extended to transforming a set of points simultaneously by using a single transformation, and we can perform the opposite transformation by the use of inverse matrices.

Perspective transformation: a perspective transformation projects 3-D points onto a plane. Its role is to provide an approximation to the manner in which an image is formed by viewing a 3-D world.
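A short numerical check of concatenation and inverse transformations (using NumPy matrices built directly here rather than any helpers from the text): translation followed by scaling collapses to the single matrix A = S T, the order of the factors matters, and the inverse matrix undoes the combined transformation.

```python
import numpy as np

# Translation by (2, 3, 0) and scaling by (2, 2, 1), as 4x4 homogeneous matrices.
T = np.eye(4); T[:3, 3] = [2, 3, 0]
S = np.diag([2.0, 2.0, 1.0, 1.0])

v = np.array([1.0, 1.0, 0.0, 1.0])         # the point (1, 1, 0)

A = S @ T                                  # single concatenated matrix: A = S T
print(np.allclose(A @ v, S @ (T @ v)))     # True: S(T v) = A v
print(np.allclose(S @ T, T @ S))           # False: the matrices do not commute
print(np.linalg.inv(A) @ (A @ v))          # inverse recovers (1, 1, 0, 1)
```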
