www.sciencepublication.org
ISSN 2347-6788 International Journal of Advances in Computer Science and Communication Engineering (IJACSCE)
Vol 2 Issue2 (June 2014)
the different expressions [1]. The recognition rate obtained for the proposed system is 95%.

2.2 Principal Component Analysis

Principal component analysis (PCA) is a numerical procedure that transforms a set of (possibly) correlated variables into a smaller number of uncorrelated variables called principal components. PCA is a technique for identifying patterns in data and for expressing the data in a way that highlights their similarities and differences.

Akshat Garg and Vishakha Choudhary, in their paper "Facial Expression Recognition Using Principal Component Analysis", use PCA to recognize facial expressions. They find a subset of principal directions (principal components) from the set of training faces, then project faces into this principal-component space to obtain feature vectors. Comparison is performed by calculating the distance between such vectors; generally, face images are compared by computing the Euclidean distance between their feature vectors [2].

2.3 Gabor Wavelet

The next technique introduced is the Gabor wavelet.

Mahesh Kumbhar, Manasi Patil and Ashish Jadhav proposed a paper, "Facial Expression Recognition using Gabor Wavelet", in which they discuss Gabor-filter-based feature extraction combined with a feed-forward neural-network classifier for recognizing four different facial expressions. The recognition process starts by acquiring the image with a capturing device such as a camera. The captured image then has to be preprocessed so that environmental and other variations between images are minimized. The preprocessing steps comprise operations such as image scaling, brightness and contrast adjustment, and other image-enhancement operations. Processing is done on the same image to obtain the best feature representation. Feature points are then selected, and a discrete set of Gabor kernels is applied to the image: the convolution of the real Gabor kernels with the image is taken over the selected fiducial points to generate a feature vector. The length of the feature vector is reduced using PCA, and the reduced feature vector is fed to the neural-network classifier to obtain the results [3]. The results obtained using Gabor wavelets on randomly selected images are around 72.50%.

2.4 Principal Component Analysis with Singular Value Decomposition

The next proposed technique is the PCA-with-SVD algorithm for classifying facial expressions. Ajit P. Gosavi and S. R. Khot implement a hybrid facial expression recognition technique using Principal Component Analysis (PCA) with Singular Value Decomposition (SVD) in their paper "Facial Expression Recognition using Principal Component Analysis with Singular Value Decomposition". They performed experiments on real database images, recognizing the universally accepted five principal emotions (Happy, Disgust, Sad, Angry and Surprise) along with Neutral, and used a Euclidean-distance-based matching classifier to find the closest match. This algorithm can effectively distinguish different expressions by identifying features [4]. The average accuracy of the system is about 89.70%, with a 65.42% average recognition rate over the five principal emotions along with Neutral.

2.5 Independent Component Analysis with Principal Component Analysis

Roman W. Świniarski and Andrzej Skowron present a paper, "Independent Component Analysis, Principal Component Analysis and Rough Sets in Face Recognition".
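The Gabor pipeline of Section 2.3 (a bank of kernels convolved with the image at selected fiducial points) can be sketched as follows. The kernel size, σ, wavelength, orientations and fiducial points below are illustrative assumptions, since the paper does not specify them:

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam):
    """Real part of a Gabor kernel: a Gaussian-windowed cosine grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, points, thetas, size=15, sigma=3.0, lam=6.0):
    """Response of each kernel at each fiducial point, concatenated."""
    half = size // 2
    padded = np.pad(img, half, mode="edge")
    feats = []
    for theta in thetas:
        k = gabor_kernel(size, sigma, theta, lam)
        for (r, c) in points:
            patch = padded[r:r + size, c:c + size]  # window centred on (r, c)
            feats.append(float(np.sum(patch * k)))
    return np.array(feats)

# toy example: random "face" image and hypothetical fiducial points
rng = np.random.default_rng(0)
img = rng.random((64, 64))
points = [(20, 20), (20, 44), (40, 32)]        # e.g. eyes and mouth
thetas = [i * np.pi / 4 for i in range(4)]     # 4 orientations
vec = gabor_features(img, points, thetas)
print(vec.shape)                               # (12,) = 4 kernels x 3 points
```

In the paper's pipeline this vector would then be reduced with PCA and passed to the neural-network classifier.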
79.1% for the 7-class task and 84.5% for the 6-class task [8].
3. Database
4. Classifier
A Euclidean-distance-based classifier is used: the distance is calculated between the image to be tested and the available images taken as training images, and from the resulting set of values the minimum distance can be found.

In testing, for every expression the Euclidean distance (ED) is computed between the eigenvector of the new (testing) image and the eigen-subspaces, and the input image's expression is classified based on the minimum Euclidean distance. For feature vectors p and q of length n, the Euclidean distance is given by:

ED = √( Σᵢ (pᵢ − qᵢ)² ), i = 1, ..., n

The recognition rate for the proposed system is found to be 95%.

To design a class of feed-forward networks with layers, called multilayer perceptrons (MLP), an algorithm called back-propagation is used. The input layer consists of source nodes and the output layer of neurons; these layers connect the network to the outside world. In between are further layers of hidden neurons, so called because they are not directly accessible. The hidden neurons extract features of the input data. For randomly selected images the results are around 72.50%.

4.3 PCA

Concatenating the gray-level pixel values of an image gives a raw feature vector.
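The minimum-Euclidean-distance classification described above can be sketched as follows; the two-dimensional feature vectors and expression labels are toy stand-ins for projected eigen-space images:

```python
import numpy as np

def euclidean_distance(p, q):
    """ED = sqrt(sum_i (p_i - q_i)^2) between two feature vectors."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sqrt(np.sum((p - q) ** 2)))

def classify(test_vec, train_vecs, train_labels):
    """Assign the label of the training vector at minimum Euclidean distance."""
    dists = [euclidean_distance(test_vec, t) for t in train_vecs]
    return train_labels[int(np.argmin(dists))]

# toy feature vectors standing in for images projected into the eigen-subspace
train = [np.array([0.0, 1.0]), np.array([5.0, 5.0]), np.array([9.0, 0.0])]
labels = ["happy", "sad", "angry"]
print(classify(np.array([4.5, 5.5]), train, labels))   # -> sad
```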
Suppose we are given m images with n pixel values each, and let Z be a matrix of size (m, n), where m is the number of images and n is the number of pixels (the length of the raw feature vector). The mean image is subtracted from every image of the training set:

∆ᵢ = Zᵢ − E[Z], i = 1, ..., m

Let M = (∆₁, ∆₂, ..., ∆ₘ)ᵀ be the matrix of the resulting "centered" images. The covariance matrix can then be represented as Ω = M·Mᵀ. Ω is symmetric and can be expressed via the singular value decomposition Ω = U·Λ·Uᵀ, where U is an m × m unitary matrix and Λ = diag(λ₁, ..., λₘ). The vectors U₁, ..., Uₘ form a basis for the m-dimensional subspace. The covariance matrix can now be re-written as [9]:

Ω = Σᵢ λᵢ UᵢUᵢᵀ, i = 1, ..., m

The coordinate ζᵢ, i ∈ {1, 2, ..., m}, is called the i-th principal component; it is the projection of the centered image ∆Z onto the basis vector Uᵢ. The principal components of the training set are the resulting coordinate vectors. After the subspace is constructed, a centered probe image is projected into it for recognition, and the gallery image closest to the probe is selected as the match. Images are cropped and normalized before PCA is applied; the resulting image is of size 130 × 150, which when unwrapped yields a vector of size 19,500. PCA reduces this to a basis of m − 1 vectors, where m is the number of images. In face recognition the PCA approach also drops a few basis vectors to form the face space: usually a small number from the beginning and a larger number from the end.

4.4 Distance Measure

The Mahalanobis Cosine (MahCosine) between two images is the cosine of the angle between their projections in Mahalanobis space [11]. Formally, the MahCosine between images i and j having projections a and b in the Mahalanobis space is computed as:

MahCosine(i, j) = cos(θᵢⱼ) = (a · b) / (‖a‖ ‖b‖)

4.5 Linear Discriminant Analysis (LDA)

A projection that discriminates between different subjects is obtained using LDA. Before applying it, the dimensionality can be reduced with PCA: the first d principal components define a d-dimensional subspace in which the Fisherfaces are constructed [14]. In Fisher's method the projection matrix W is chosen so that its basis vectors maximize the ratio between the determinants of the inter-class scatter matrix S_B and the intra-class scatter matrix S_W:

W = argmax_W ( |Wᵀ S_B W| / |Wᵀ S_W W| )

Suppose the number of subjects is m and the number of images (samples) available for training from subject i is nᵢ. Then S_B and S_W can be defined as:

S_B = Σᵢ nᵢ (µᵢ − µ)(µᵢ − µ)ᵀ, i = 1, ..., m

S_W = Σᵢ Σ_{x ∈ class i} (x − µᵢ)(x − µᵢ)ᵀ, i = 1, ..., m

where µᵢ is the mean vector of the samples belonging to class (subject) i and µ is the mean vector of all samples. When the number of samples is small, S_W may be poorly estimated.
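The scatter matrices and Fisher criterion of Section 4.5 can be sketched as follows, with toy two-dimensional samples standing in for projected face images; for an invertible S_W, the columns of W are the leading eigenvectors of S_W⁻¹S_B:

```python
import numpy as np

def scatter_matrices(X, y):
    """Inter-class (S_B) and intra-class (S_W) scatter matrices per Section 4.5."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    mu = X.mean(axis=0)                      # mean vector of all samples
    n_feat = X.shape[1]
    S_B = np.zeros((n_feat, n_feat))
    S_W = np.zeros((n_feat, n_feat))
    for c in np.unique(y):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)               # class mean mu_i
        d = (mu_c - mu).reshape(-1, 1)
        S_B += len(Xc) * (d @ d.T)           # n_i (mu_i - mu)(mu_i - mu)^T
        S_W += (Xc - mu_c).T @ (Xc - mu_c)   # sum of (x - mu_i)(x - mu_i)^T
    return S_B, S_W

# toy data: 2 subjects, 3 two-dimensional samples each
X = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
     [5.0, 5.0], [6.0, 5.0], [5.0, 6.0]]
y = [0, 0, 0, 1, 1, 1]
S_B, S_W = scatter_matrices(X, y)

# W maximizes |W^T S_B W| / |W^T S_W W|
vals, vecs = np.linalg.eig(np.linalg.inv(S_W) @ S_B)
W = vecs[:, [int(np.argmax(vals.real))]]
print(S_B.shape, S_W.shape, W.shape)         # (2, 2) (2, 2) (2, 1)
```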
References