An Efficient Implementation of Multi Layer Perceptron Neural Network for Signal Processing




A. Thilagavathy, M.E. VLSI Design (PG Scholar), Srinivasan Engineering College, Perambalur-621 212, Tamilnadu, India. Email: thilagbe@gmail.com


K. Vijaya Kanth, Assistant Professor/ECE, Srinivasan Engineering College, Perambalur-621 212, Tamilnadu, India.


Abstract: Artificial Neural Networks derive their processing capabilities from their parallel architecture. They are widely used in pattern recognition, system identification and control problems. A Multilayer Perceptron is an artificial neural network with one or more hidden layers. This paper presents the digital implementation of a multilayer perceptron neural network using an FPGA (Field Programmable Gate Array) for pattern recognition. If a pattern matches the original, processing continues; otherwise the pattern is rejected. The network was implemented with three types of nonlinear activation function: hardlims, satlins and tansig. The neural network was described in VHDL hardware description language and targeted at a Xilinx XC3S250E-PQ208 FPGA device. The results obtained with the Xilinx Foundation 9.2i software are presented and analyzed in terms of device utilization and time delay.













Keywords: FPGA, Multilayer Perceptron, Neuron, PLAN approximation, Sigmoid Activation.



1. INTRODUCTION

The human brain is probably the most complex and intelligent system in the world. It consists of processing elements called neurons. Each neuron performs a set of simple operations, yet together the neurons exhibit complex global behavior in the network. Artificial neural networks (ANNs) are used for engineering purposes to replicate the brain's activities. ANNs have been used successfully in solving pattern classification and recognition problems, function approximation and prediction. Their processing capabilities are based on their highly parallel and interconnected architecture. As in all biological systems, an ANN is configured for a specific application through a learning process; the learning process requires adjusting the synaptic connections between the neurons.

Fig. 1: Simple Artificial Neuron Model

      An artificial neural network is a massively parallel distributed processor made up of simple processing units (neurons), which has the ability to learn functional dependencies from data. It resembles the brain in two respects:

      • Knowledge is acquired by the network from its environment through a learning process.

• Inter-neuron connection strengths, known as synaptic weights, are used to store the acquired knowledge.

The procedure used to perform the learning process is called a learning algorithm; its function is to modify the synaptic weights of the network in an orderly fashion to attain a desired design objective.

A neuron is a simple processing unit that collects weighted inputs, sums them with a bias and calculates an output to be passed on. Fig. 1 shows the simple artificial neuron model. The inputs to the neuron are P1, P2, P3, and w1, w2, w3 are the corresponding weight values. Each input is multiplied by its weight and the products are summed together; this sum is the input to the activation function block. The function that the neuron uses to calculate its output is called the activation function.
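The weighted-sum-and-activation behaviour of Fig. 1 can be sketched as a small software reference model (the input, weight and bias values below are illustrative, not taken from the paper):

```python
import math

def neuron(inputs, weights, bias, activation):
    """Fig. 1 model: weighted sum of the inputs plus a bias,
    passed through an activation function."""
    n = sum(p * w for p, w in zip(inputs, weights)) + bias
    return activation(n)

# Three inputs P1..P3 with weights w1..w3, tanh as the activation
out = neuron([1.0, 0.5, -0.25], [0.2, -0.4, 0.1],
             bias=0.05, activation=math.tanh)
```

The hardware implements exactly this dataflow: a multiply-accumulate stage feeding an activation-function block.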

An artificial neural network consisting of three or more layers (an input layer and an output layer with one or more hidden layers) of nonlinearly activating nodes is called a multilayer perceptron. Each node in one layer connects with a certain weight Wij to every node in the following layer.

Hardware implementations of ANNs can be realized on Application Specific Integrated Circuits (ASICs) or Field Programmable Gate Arrays (FPGAs). An ASIC design has several drawbacks, such as the ability to run only a specific algorithm and limitations on the size of the network. Hence FPGAs are used to overcome these drawbacks, offering flexibility with appreciable performance. An FPGA maintains the high processing density needed to exploit the parallel computation in an ANN. Every digital module is instantiated concurrently on the FPGA, so the modules operate in parallel and the speed of the network does not depend on its complexity. The main purpose of this work is to design a multilayer perceptron model. The model consists of two stages. The first stage is the multiplication of the parallel inputs by the weight values, and the second stage is the nonlinear activation function for the output signal and the weight update. Three types of nonlinear activation functions were considered: symmetrical hard limiter, symmetric saturating linear and hyperbolic tangent sigmoid.
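The two-stage structure can be mimicked in software with integer arithmetic, as an FPGA datapath would use. The 12-bit fractional word width and the choice of satlins for the second stage are illustrative assumptions, not figures taken from the paper:

```python
FRAC_BITS = 12  # assumed fractional width; the paper does not specify one

def to_fixed(x):
    """Quantize a real value to a signed fixed-point integer with 12 fraction bits."""
    return int(round(x * (1 << FRAC_BITS)))

def mac_stage(inputs, weights):
    """Stage 1: multiply each parallel input by its weight and accumulate.
    The product of two 12-fraction-bit values carries 24 fraction bits,
    so each product is shifted back down before accumulation."""
    acc = 0
    for p, w in zip(inputs, weights):
        acc += (to_fixed(p) * to_fixed(w)) >> FRAC_BITS
    return acc

def satlins_stage(n):
    """Stage 2: symmetric saturating linear activation on the fixed-point sum."""
    one = 1 << FRAC_BITS
    return max(-one, min(one, n))
```

In the FPGA each multiply-accumulate path is a separate hardware module, so the loop in `mac_stage` corresponds to units running concurrently, not sequentially.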


2. MULTILAYER PERCEPTRON NETWORK

When the terms Neural Network (NN) and Artificial Neural Network (ANN) are used without qualification, they usually refer to a Multilayer Perceptron Network. There are many other types of neural networks, including Probabilistic Neural Networks, General Regression Neural Networks, Radial Basis Function Networks, Cascade Correlation, Functional Link Networks, Kohonen Networks, Gram-Charlier Networks, Learning Vector Quantization, Hebb Networks, Adaline Networks, Heteroassociative Networks, Recurrent Networks and Hybrid Networks.

Here we use the most widely used type of neural network: the Multilayer Perceptron Network. A multilayer perceptron (MLP) is a model that maps sets of input data onto a set of appropriate outputs. An MLP is modeled as multiple layers of nodes in a directed graph, with each layer fully connected to the next one. Apart from the input nodes, each node is a neuron with a nonlinear activation function. The MLP utilizes a supervised learning technique called back propagation for training the network.

      Fig 2 shows a three layer perceptron network. This network has an input layer (on the left) with p inputs, one hidden layer (in the middle) with L neurons and an output layer (on the right) with m outputs.

      Fig.2 Three layer perceptron network

For classification problems, only one winning node of the output layer is active for each input pattern. Each layer is connected to its adjacent layers; there are no connections between non-adjacent layers and no recurrent connections. Each of these connections is defined by an associated weight. Each neuron calculates the weighted sum of its inputs and applies an activation function that produces the high or low neuron output. By propagating the outputs of each layer in this way, the MLP generates the specified output vector for each input pattern. The synaptic weights are adjusted using a supervised learning algorithm such as back propagation. Different types of activation functions have been proposed to transform the activity level (the weighted sum of the node inputs) into an output signal.
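The layer-by-layer propagation just described can be sketched as a floating-point reference model. The weight matrices and example values below are illustrative placeholders, not values from the paper:

```python
import math

def layer_forward(inputs, weight_matrix, biases, activation):
    """One fully connected layer: every input feeds every neuron (weights Wij)."""
    return [activation(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weight_matrix, biases)]

def mlp_forward(x, layers):
    """Propagate an input pattern through the network, one layer at a time."""
    for weights, biases, act in layers:
        x = layer_forward(x, weights, biases, act)
    return x

# p = 2 inputs, L = 2 hidden neurons (tanh), m = 1 output (linear),
# matching the shape of the network in Fig. 2
example = [([[0.5, -0.3], [0.1, 0.8]], [0.0, 0.0], math.tanh),
           ([[1.0, 1.0]], [0.0], lambda n: n)]
y = mlp_forward([1.0, 2.0], example)
```

For classification, the winning output node would be the one with the largest value in `y`.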


3. ACTIVATION FUNCTIONS

The activation function may be a linear or nonlinear function of n. A particular nonlinear activation function is chosen to satisfy the specification of the training algorithm that the neural network is intended to run. In this work, three of the most commonly used nonlinear activation functions are implemented on the FPGA: the hard limit activation function, the saturating linear activation function and the hyperbolic tangent sigmoid activation function.

      1. Hard limit activation function

In the hard limit activation function, if the function argument is less than 0 the output of the neuron is 0; if the argument is greater than or equal to 0 the output of the neuron is 1:

a = 0 if n < 0; a = 1 if n >= 0

This function is used to create neurons that classify inputs into two distinct categories.


A neuron that uses the hard limiter activation function is referred to as the McCulloch-Pitts model.

      2. Saturating linear activation function

This type of nonlinear activation function is also referred to as a piecewise linear function. Its saturation limits give the output either a binary or a bipolar range. The mathematical model for the symmetric saturating function is:

a = -1 if n < -1; a = n if -1 <= n <= 1; a = 1 if n > 1


      3. Hyperbolic tangent sigmoid activation function

This function takes any input value between plus and minus infinity and squashes the output into the range -1 to 1, according to the expression

a = tansig(n) = (e^n - e^(-n)) / (e^n + e^(-n))
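As a plain floating-point reference model (the hardware uses the fixed-point approximation discussed below), the three activation functions can be written as:

```python
import math

def hardlim(n):
    """Hard limit: 0 for n < 0, 1 for n >= 0."""
    return 0 if n < 0 else 1

def satlins(n):
    """Symmetric saturating linear: clip n to the range [-1, 1]."""
    return max(-1.0, min(1.0, n))

def tansig(n):
    """Hyperbolic tangent sigmoid: squashes any n into (-1, 1)."""
    return math.tanh(n)
```

Of the three, only tansig is differentiable everywhere, which is why it is the one used with back propagation training.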


Fig.3 a) Simulation waveform of linear neuron without activation function



The tansig activation function is commonly used in multilayer neural networks trained by the back propagation algorithm, since the function is differentiable. The tansig function is not easily implemented in digital hardware because it consists of an infinite exponential series.

A simple second order nonlinear function presented by Kwan can be used as an approximation to the sigmoid function. This nonlinear function can be implemented directly using digital techniques. It is a second order nonlinear function with a tansig-like transition between the upper and lower saturation regions:


where β and g represent the slope and the gain of the nonlinear function f(n) between the saturation regions -L and L.
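Kwan's exact coefficients are not reproduced above, but a quadratic transition of this kind can be sketched as follows. The choice β = 2/L and θ = 1/L², which makes f(±L) = ±1 with zero slope at the saturation boundaries, is an illustrative assumption rather than Kwan's published constants:

```python
import math

L = 2.0               # assumed saturation boundary
BETA = 2.0 / L        # slope term, chosen so f(L) = 1
THETA = 1.0 / L ** 2  # curvature term, chosen so f'(L) = 0

def tansig_approx(n):
    """Second-order (quadratic) approximation with a tansig-like
    transition between the saturation regions -L and L."""
    if n >= L:
        return 1.0
    if n <= -L:
        return -1.0
    if n >= 0:
        return n * (BETA - THETA * n)  # upper half of the transition
    return n * (BETA + THETA * n)      # odd-symmetric lower half
```

With L = 2 this stays within roughly 0.05 of math.tanh over the whole transition region, the kind of accuracy/area trade-off that makes a multiplier-only quadratic attractive in digital hardware.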

4. RESULTS

The digital hardware was modeled using VHDL and simulated with ModelSim 5.7; Xilinx ISE 9.2 was used as the synthesis tool for implementing the designs on a Spartan-3E FPGA.

Fig.3 b) Simulation waveform of linear neuron with hard limit activation function

Fig.3 c) Simulation waveform of linear neuron with saturating linear activation function

Figure 3 shows the simulation waveforms of the neural network with all three activation functions. Tables I and II give the resource usage and performance summary for the 3-layer perceptron network with the three different activation functions.



Fig.3 d) Simulation waveform of linear neuron with hyperbolic tangent sigmoid activation function

Table II: Timing summary for different activation functions.

Neuron type            Max path delay
Hard limit             15.820 ns
Saturating linear      15.231 ns
Hyperbolic tangent     29.598 ns


Table I: Device utilization summary for different activation functions (slice flip-flops, 4-input LUTs, bonded IOBs, GCLKs and multipliers for the hard limit, saturating linear and hyperbolic tangent neurons).

5. CONCLUSION

The results of this work successfully demonstrate the hardware implementation of multilayer perceptron networks with three different activation functions for image recognition. The design was synthesized using the Xilinx 9.2i ISE tool and implemented on a Spartan-3E FPGA. This allows comparisons between hardware realizations of the neuron, which is regarded as the basic building block of artificial neural networks. The operating frequency is very good in all cases and gives a clear idea of the advantages of using FPGAs, since multiple modules can work in parallel with a minimal reduction in performance due to the increased number of interconnections. Finally, the low cost and reprogrammability of FPGA technology make this approach a very powerful option for implementing ANNs. The implemented design can be used in adaptive filters, voice recognition and image recognition.

REFERENCES

Neeraj Chasta, Sarita Chouhan and Yogesh Kumar, "Analog VLSI implementation of neural network architecture for signal processing", VLSICS, 2012.
1. Rafid Ahmed Khalil, Saad Ahmed Al-Kazzaz, "Digital Hardware Implementation of Artificial Neurons Models Using FPGA", Springer U.S., 2012.
2. Manish Panicker, C. Babu, "Efficient FPGA Implementation of Sigmoid and Bipolar Sigmoid Activation Functions", IOSR Journal of Engineering.
3. R. Omondi, C. Rajapakse, "FPGA Implementation of Neural Networks", Springer U.S., 2006.
4. O. Maischberger, V. Salapura, "A Fast FPGA Implementation of a General Purpose Neuron", Technical University, Institute of Informatik, Austria, 2006.
5. Hiroomi Hikawa, "A Digital Hardware Pulse-Mode Neuron With Piecewise Linear Activation Function", IEEE Trans. Neural Networks, vol. 14, no. 5, pp. 1028-1037, Sept. 2003.
6. M. Banuelos-Saucedo et al., "Implementation of a neuron model using FPGAs", Journal of Applied Research and Technology, ISSN 1665-6423, vol. 3 (10), 2003, pp. 248-255.
7. Parasovic A., Latinovic I., "A Neural Network FPGA Implementation", IEEE 5th Seminar on Neural Network Application in Electrical Engineering, September 2000, pp. 117-120.
8. Ranjeet Ranade, Sanjay Bhandari and A. N. Chandorkar, "VLSI Implementation of Artificial Neural Digital Multiplier and Adder", pp. 318-319.
9. Roy Ludvig Sigvartsen, "An Analog Neural Network with On-Chip Learning", Thesis, Department of Informatics, University of Oslo, 1994.
10. D. Nguyen and B. Widrow, "Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights", IEEE First International Joint Conference on Neural Networks, vol. 3, pp. 21-26, 1990.
11. S. Orcioni, G. Biagetti, M. Conti, "A mixed signal fuzzy controller using current mode circuits", Analog Integrated Circuits and Signal Processing, vol. 38, 2004, pp. 215-231.
12. Sahin I., Koyuncu I., "Design and Implementation of Neural Networks Neurons with RadBas, LogSig, and TanSig Activation Functions on FPGA".
13. Nirmaladevi M., Mohankumar N., Arumugam S., "Modeling and Analysis of Neuro-Genetic Hybrid System on FPGA", Electronics and Electrical Engineering, Kaunas: Technologija, 2009, No. 8(96), pp. 69-74.
14. Reyneri L. M., "Implementation Issues of Neuro-Fuzzy Hardware: Going Toward HW/SW Codesign", IEEE Transactions on Neural Networks, 2003, Vol. 14, No. 1.
15. Pauktaitis V., Dosinas A., "Pulsed Neural Networks for Image Processing", Electronics and Electrical Engineering, Kaunas: Technologija, 2009, No. 7(95), pp. 15-20.
16. Rutka G., "Prediction Accuracy of Neural Network Models", Electronics and Electrical Engineering, Kaunas: Technologija, 2008, No. 3(83), pp. 29-32.
17. Raudonis V., Narvydas G., Simutis R., "A Classification of Flash Evoked Potentials Based on Artificial Neural Network", Electronics and Electrical Engineering, Kaunas: Technologija, 2008, No. 1(81), pp. 31-36.
18. Juang J. G., Chien L. H., Lin F., "Automatic Landing Control System Design Using Adaptive Neural Network and Its Hardware Realization", IEEE Systems Journal, 2011, Vol. 5, No. 2.
19. Yonggang W., Junwei D., Zhonghui Z., Yang Y., Lijun Z., Bruyndonckx P., "FPGA Based Electronics for PET Detector Modules With Neural Network Position Estimators", IEEE Trans. on Nuclear Science, 2011, Vol. 58, No. 1.
20. Himavathi S., Anitha D., Muthuramalingam A., "Feedforward Neural Network Implementation in FPGA Using Layer Multiplexing for Effective Resource Utilization", IEEE Trans. on Neural Networks, 2007, Vol. 18, No. 3.
21. Sahin I., "A 32-bit floating-point module design for 3D graphic transformations", 2010, Vol. 5(20), p. 3070.
22. Gomperts A., Ukil A., Zurfluh F., "Development and Implementation of Parameterized FPGA-Based General Purpose Neural Networks for Online Applications", IEEE Trans. on Industrial Informatics, 2011, Vol. 7, No. 1.
23. E. Vittoz et al., "The design of high performance analog circuits on digital CMOS chips", IEEE J. Solid-State Circuits, vol. 20, 1985, pp. 657-665.

Thilagavathy A. received the B.E. degree in Electronics and Communication Engineering from Anna University of Technology, Coimbatore in the year 2011. Currently, she is pursuing the M.E. degree in VLSI Design at Srinivasan Engineering College, which is affiliated to Anna University, Chennai. Her

major research focuses on Neural Networks and VLSI design.

Vijaya Kanth K. received the B.E. degree in Electrical and Electronics Engineering from Dhanalakshmi Srinivasan Engineering College, which is affiliated to Anna University, Chennai, in the year 2006. He then received the M.E. degree in VLSI Design from GCT, Coimbatore in the year 2009. Currently, he is working as an Assistant Professor in the Department of Electronics and Communication Engineering at Srinivasan Engineering

College. His major research interests focus on image processing, neural networks and VLSI.


