
ISSN : 2230-7109 (Online) | ISSN : 2230-9543 (Print)    IJECT Vol. 2, Issue 3, Sept. 2011

Object Classification through Perceptron Model using LabVIEW
1Sachin Sharma, 2Gaurav Kumar
1,2Dept. of Instrumentation & Control Engineering, NIT Jalandhar, Punjab, India

Abstract
Artificial neural networks are considered good alternatives to conventional statistical methods for classification and pattern-recognition problems. The perceptron model is the basic model of an ANN and is generally used for binary classification problems. It is a fast and reliable network for the class of problems it can solve. In this paper we implement the perceptron model in LabVIEW and test it on a real-time object classification problem, using a single-layer perceptron to separate two different types of objects.

Keywords
Perceptron Model, Artificial Neural Network, LabVIEW, Back Propagation

I. Introduction
Artificial Intelligence is an alternative approach to processing data and information in the manner of the human brain. Since the start of the 20th century, scientists have studied how the brain functions. Artificial Neural Networks (ANNs) represent one area of expert systems: they are essentially an attempt to simulate the way the human brain processes information. A variety of neural network architectures have been developed over the past few decades, from feed-forward networks and the multi-layer perceptron (MLP) to highly dynamic recurrent ANNs, and different models are also distinguished by their activation function. The perceptron model [1, 3] is a neural network with a symmetrical hard-limit activation function. The perceptron is a fast and reliable network for the class of problems it can solve, and an understanding of its operation provides a good basis for understanding more complex networks. Various algorithms have been developed for training these networks to perform the necessary functions.
In this paper, a new approach is applied to build a perceptron using LabVIEW, virtual instrumentation software developed by National Instruments, USA. LabVIEW is a powerful graphical tool for measurement, instrumentation and control problems; however, it does not provide neuron models in its library. It has been used successfully for data acquisition and control functions, but here it is used to build neural networks, which are presently not among its library functions.
Neural networks are parallel processors, with data flowing along many paths simultaneously. LabVIEW has the unique ability to express data-flow diagrams that are highly parallel in structure, which makes it a very effective environment for building neural networks. We have also implemented a supervised learning algorithm in LabVIEW for training the perceptron, and we show that LabVIEW is a very powerful software tool for building neural networks.

II. Perceptron Model
The perceptron is the most basic architecture of an artificial neural network. A perceptron may consist of a single neuron or of layers of neurons. A single neuron is mainly used to classify data that are linearly separable, while multiple neurons are used to classify data that are not.
The architecture of the single-neuron perceptron with a hard-limit activation function is shown in Fig. 1. Its working is simple: the weighted sum of the input signals is compared to a threshold to determine the neuron output. When the sum is greater than or equal to the threshold, the output is 1; when the sum is less than the threshold, the output is 0.

Fig. 1: Architecture of the perceptron

This architecture is limited to simple binary classification problems that are linearly separable. For class problems that are not linearly separable, we use the multi-layer perceptron model.

III. Perceptron Learning
Every neural network needs to learn the environment in which it has to perform, and it does so via some rule; the perceptron is no exception [4]. By a learning rule we mean a procedure for modifying the weights and biases of a network, also referred to as a training algorithm. The purpose of the learning rule is to train the network to perform some task. There are many types of neural network learning rules, falling into three broad categories: supervised learning, unsupervised learning and reinforcement (or graded) learning.
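As a concrete sketch of the single-neuron perceptron with a hard-limit output, the following Python fragment computes the weighted sum of the inputs and compares it to a threshold (folded here into a bias term). The weights and inputs are illustrative values, not taken from the paper.

```python
def hardlim(n):
    # Hard-limit activation: fire (1) when the weighted sum reaches
    # the threshold, which has been folded into the bias, else 0.
    return 1 if n >= 0 else 0

def perceptron_output(weights, bias, inputs):
    # Weighted sum of the input signals, then threshold comparison.
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return hardlim(s)

# Illustrative neuron that fires only when both inputs are active.
w, b = [1.0, 1.0], -1.5
print(perceptron_output(w, b, [1, 1]))  # 1
print(perceptron_output(w, b, [0, 1]))  # 0
```

Because a single thresholded neuron draws one separating hyperplane, it can only handle linearly separable classes, as noted above.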

w w w. i j e c t. o r g   International Journal of Electronics & Communication Technology  255
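The perceptron's own supervised rule can be sketched in a few lines of Python: after each pattern, the weights move by η·e·x, where e is the target minus the actual output. The three ±1 features mirror the shape/texture/weight sensors discussed later in the paper, but the prototype vectors, their class labels, and the learning rate used here are illustrative assumptions, not the paper's actual training set.

```python
def hardlim(n):
    # Hard-limit activation used by the perceptron.
    return 1 if n >= 0 else 0

def predict(w, b, x):
    # Thresholded weighted sum of the inputs.
    return hardlim(sum(wi * xi for wi, xi in zip(w, x)) + b)

def train(patterns, eta=0.5, epochs=100):
    # Perceptron rule: w <- w + eta*e*x, b <- b + eta*e,
    # with e = target - output (the classification error).
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in patterns:
            e = target - predict(w, b, x)
            w = [wi + eta * e * xi for wi, xi in zip(w, x)]
            b += eta * e
    return w, b

# Hypothetical (shape, texture, weight) prototypes, each feature +1 or -1;
# class 1 stands for "orange", class 0 for "apple" (encodings assumed).
patterns = [((1, -1, 1), 1), ((-1, -1, 1), 1),
            ((1, 1, -1), 0), ((1, 1, 1), 0)]
w, b = train(patterns)
print(predict(w, b, (1, -1, 1)))  # 1: orange-like input
print(predict(w, b, (1, 1, -1)))  # 0: apple-like input
```

For a hard-limit unit the derivative term of the general gradient update does not apply, so the rule above uses the raw error directly; this is the standard simplification for threshold neurons.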



A. Supervised Learning
In supervised learning, the learning rule is provided with a set of examples (the training set) of proper network behaviour: {p1, t1}, {p2, t2}, ..., {pQ, tQ}, where pq is an input to the network and tq is the corresponding correct (target) output. As the inputs are applied to the network, the network outputs are compared to the targets. The learning rule is then used to adjust the weights and biases of the network in order to move the network outputs closer to the targets. The perceptron learning rule falls into this supervised category. In this paper we have implemented supervised learning with the back-propagation algorithm to train the perceptron network.

B. Reinforcement Learning
Reinforcement learning is similar to supervised learning, except that, instead of being provided with the correct output for each network input, the algorithm is only given a grade. The grade (or score) is a measure of the network's performance over some sequence of inputs. It appears to be most suited to control system applications [5-6].

C. Unsupervised Learning
In unsupervised learning, the weights and biases are modified in response to network inputs only; no target outputs are available. Most of these algorithms perform some kind of clustering operation: they learn to categorize the input patterns into a finite number of classes. This is especially useful in applications such as vector quantization [7-8].

IV. Learning Algorithm: Back-Propagation Learning
The back-propagation learning algorithm [9-11] propagates the error backward towards the input and updates the weight vector accordingly, using the steepest-descent approach to find the weight update. Consider one output neuron j. The error at iteration n (the nth pattern) is

e_j(n) = d_j(n) - y_j(n),

where d_j(n) is the target output and y_j(n) is the neuron output. The output of the neuron is given by

y_j(n) = φ(v_j(n)),  with  v_j(n) = Σ_i w_ji(n) x_i(n),

where x_i(n) is the input to the neuron and φ is the activation function. The change (update) in a weight is taken along the negative gradient of the error energy E(n) = (1/2) e_j(n)^2:

Δw_ji(n) = -η ∂E(n)/∂w_ji(n).

By using the chain rule of differentiation, we get the steepest-descent direction (the derivative value):

∂E(n)/∂w_ji(n) = -e_j(n) φ'(v_j(n)) x_i(n).

Hence, the change in the weights is given by

Δw_ji(n) = η e_j(n) φ'(v_j(n)) x_i(n).

This is also called the update equation. The factor η is called the learning rate and is a very important parameter for the convergence of the algorithm. It should be well chosen: too high a learning rate can lead to instability or oscillatory behaviour, because the system may overshoot the minimum; too low a learning rate makes the system response very slow, so it takes considerable time to converge. Apart from tuning the learning rate, a momentum term can sometimes be added to the weight update equation.

V. System Model for Classification
In this paper we take a simple problem: classifying two objects, say an apple and an orange. The objective of the system is to classify objects according to their features. The system has three main components: sensors, a computer and a sorter. Fig. 2 shows the graphical view of the system model.

Fig. 2: System model

The two objects differ in shape, weight and texture, and three different sensors give information about these features. These sensors are somewhat primitive. The shape sensor outputs 1 if the object is approximately round and -1 if it is more elliptical. The texture sensor outputs 1 if the surface of the fruit is smooth and -1 if it is rough. The weight sensor outputs 1 if the fruit weighs more than one pound and -1 if it weighs less. The sensors feed their outputs to the computer.

A. Software Architecture
The computer is equipped with the LabVIEW software and an operating system. An NI DAQ card is connected to the computer for data acquisition and is set to digital input mode, because the sensory information is digital in nature. Inside the computer this information goes to the neural network

model, which is implemented in LabVIEW. The neural network model classifies the objects and generates its output in binary form, either 0 or 1. These values act as commands to the sorter hardware: the network output is passed to an RS232 interface, which provides serial communication between the software and the external hardware. Fig. 3 shows the graphical representation of the software architecture.

Fig. 3: Software architecture

The sorter is a relay-based hardware structure that receives commands from the computer over the RS232 protocol. Each command carries the desired decision for the incoming object, and the sorter then sorts the object into the corresponding bin.

VI. GUI Application
Using LabVIEW, we have developed GUIs both for training and for running the trained perceptron with the weights obtained after training. Fig. 4 shows the front panel for training the perceptron model.

Fig. 4: Front panel for training the perceptron

We used 16 different patterns corresponding to the sensor outputs and trained the network for 100 epochs for better results, with a learning rate of 0.5 chosen for optimal performance.
Fig. 5 shows the front panel of the trained perceptron. We have tested this model in a real-time environment for the classification of the objects.

Fig. 5: Front panel of the trained perceptron

VII. Conclusion
In this paper, a perceptron network architecture has been successfully developed and trained to produce appropriate results using LabVIEW as the application development environment (ADE). It has been shown that LabVIEW is a good platform for developing and testing computational intelligence algorithms. The neural network built in this paper has a very good GUI and is user friendly in terms of both presentation and end results. The end-user can change the variable parameters of the network, for example the learning rate used for updating the hidden and output weights, and the preset error value that serves as the stopping criterion. However, the user cannot change the number of neurons in the hidden or output layer, because such changes would have to be made at various locations in the network.

References
[1] W. McCulloch, W. Pitts, "A logical calculus of the ideas immanent in nervous activity," Bulletin of Mathematical Biophysics, Vol. 5, pp. 115-133, 1943.
[2] M. Minsky, S. Papert, Perceptrons, Cambridge, MA: MIT Press, 1969.
[3] F. Rosenblatt, "The perceptron: A probabilistic model for information storage and organization in the brain," Psychological Review, Vol. 65, pp. 386-408, 1958.
[4] S. I. Gallant, "Perceptron-based learning algorithms," IEEE Transactions on Neural Networks, Vol. 1, No. 2, pp. 179-191, 1990.
[5] A. Barto, R. Sutton, C. Anderson, "Neuron-like adaptive elements can solve difficult learning control problems," IEEE Transactions on Systems, Man and Cybernetics, Vol. 13, No. 5, pp. 834-846, 1983.
[6] D. White, D. Sofge (Eds.), Handbook of Intelligent Control, New York: Van Nostrand Reinhold, 1992.
[7] R. O. Duda, P. E. Hart, D. G. Stork, "Unsupervised Learning and Clustering," Ch. 10 in Pattern Classification (2nd edition), p. 571, New York: Wiley, ISBN 0471056693, 2001.
[8] R. Acharyya, "A New Approach for Blind Source Separation of Convolutive Sources," 2008.
[9] D. E. Rumelhart, G. E. Hinton, R. J. Williams, "Learning representations by back-propagating errors," Nature, Vol. 323, No. 6088, pp. 533-536, 1986.
[10] P. J. Werbos, The Roots of Backpropagation: From Ordered Derivatives to Neural Networks and Political Forecasting, New York: John Wiley & Sons, 1994.
[11] D. E. Rumelhart, G. E. Hinton, R. J. Williams, "Learning representations by back-propagation errors," Nature, Vol. 323, pp. 533-536, 1986.

Sachin Sharma was born in Dadri, Uttar Pradesh, India on 5 July 1987. He is pursuing his M.Tech degree in Instrumentation and Control Engineering at NIT Jalandhar. He received his B.Tech in Electronics and Communication Engineering from GLA Institute of Technology, Mathura. He has published several papers in international conferences. His areas of interest include signal processing, neural networks and system design.

Gaurav Kumar was born in Jalalpur village, Aligarh, U.P., India on 20 May 1989. He is pursuing his M.Tech degree in Instrumentation and Control Engineering at NIT Jalandhar. He received his B.Tech in Electronics and Instrumentation Engineering from Hindustan College of Science and Technology, Mathura. His areas of interest include control and automation.
