Design of Neural Architecture Using WTA for Image Segmentation in 0.35µm Technology Using Analog VLSI

DOI : 10.17577/IJERTV1IS4229



1 Mr. Maulik B. Rami, 2 Prof. H. G. Bhatt, 3 Prof. Y. B. Shukla

1 PG Student (EC), Electronics and Communication Department, S.V.I.T., Vasad, Gujarat.

2 Associate Professor, Electronics and Communication Department, S.P.C.E., Visnagar, Gujarat. 3 Associate Professor, Electronics and Communication Department, S.V.I.T., Vasad, Gujarat.

Abstract

Artificial neural networks are attempts to mimic, at least partially, the structure and function of brains and nervous systems. The human brain contains billions of biological neurons whose manner of interconnection allows us to reason, memorize and compute. Advances in VLSI technology and the demand for intelligent machines have created a strong resurgence of interest in emulating neural systems for real-time applications. Such an artificial neural network can be built from simple analog components such as MOSFET circuits and basic operational-amplifier circuits. WTA circuits are used to identify the most active output among a number of inputs, so with them a neural architecture can be used for image segmentation, especially of X-ray images. This paper describes neuron behaviour, how a neuron takes intelligent decisions, and how it can segment images, especially medical images.

  1. Introduction

Neural networks are used when there is no algorithmic solution to a problem or when a problem is too complicated to be solved by known algorithms. Neural networks can also be used when a precise definition of the problem does not exist but samples of inputs and corresponding outputs are available. If we are ever going to understand intelligence and develop artificially intelligent machines or computers, we need to study the brain and its neurons, and how neurons work together to solve problems. It is not useful to consider neural networks for problems whose analytical solution can easily be found and implemented; in that case, the corresponding neural implementation will generally be larger and less accurate than the direct algorithmic solution. [2]

    Figure.1 Comparison of Neurons [7]

Here we compare a biological neuron with a typical artificial neuron. Both take inputs, use weights and generate an output. Neural networks are composed of a large number of computational nodes operating in parallel. Computational nodes, called neurons, are processing elements with a certain number of inputs and a single output that branches into collateral connections leading to the inputs of other neurons. [2]

Normally they perform a nonlinear function on the sum (or collection) of their inputs. The neurons are highly interconnected via weighted connections; these interconnections are typically called synapses and control the influence of one neuron on the other neurons. [2]

The synaptic processing is typically modelled as a multiplication between a neuron's output and the synaptic weight strength. [3] Each neuron's output level therefore depends on the outputs of the connected neurons and on the synaptic weight strengths of the connections. [3]

Digital technology has the advantages of mature fabrication techniques, weight storage in RAM, and arithmetic operations that are exact within the number of bits of the operands and accumulators. Digital chips are easily embedded into most applications. Digital operations are usually slower than in analog systems, especially in the weight × input multiplication, and analog input must first be converted to digital. [2] Analog neural networks can exploit physical properties to perform network operations and thereby obtain high speed and density. Analog design can be very difficult because of the need to compensate for variations in manufacturing, temperature, etc. Creating an analog synapse involves the complication of analog weight storage and the need for a multiplier that is linear over a wide range. Hybrid design attempts to combine the best of analog and digital techniques: the external inputs/outputs are digital to facilitate integration into digital systems, while internally some or all of the processing is done in analog. There are three types of neural networks:


    1. Non-learning neural networks:

    2. Off-chip learning networks:

    3. On-chip learning networks:[3]

In on-chip learning networks, the neural network performs both the feed-forward phase and the learning phase. The advantages are the high learning speed, due to the analog parallel operations, and the absence of an interface with a host computer for the weight update. On-chip learning networks are suited to implementing adaptive neural systems, i.e. systems that are continuously taught while being used.

  2. Architecture of Neuron

    Figure.2 Architecture of Neuron.

A neural network produces a forecast by taking a weighted average of the predictors. This is just what a regression equation does. Where neural networks go further is to layer this procedure: the first layer of weighted averages (regression equations) produces a hidden layer of intermediate forecasts.

These forecasts are then used as predictors in another regression equation to produce the final forecast. Note that it is possible to have more than one hidden layer, each one taking the forecasts of the previous layer as its predictors. Each layer is made up of nodes, and each node takes a weighted average of the predictors generated by the previous layer. The first layer is an input layer consisting of the predictors; it feeds the first layer of averaging nodes. Each of these averaging nodes feeds a corresponding sigmoid to produce a forecast. You can stack as many of these forecasting layers as you want, each one taking the forecasts of the previous layer as its predictors. The final forecast layer produces the forecast of the quantity you were seeking. This type of neural network is called feed-forward because the information feeds forward from the predictors, through the layers, and on to the final prediction. It is said to be fully connected because every node in one layer connects to every node in the layers above and below it. Feed-forward networks with sigmoid squashing functions are sometimes called perceptrons.

Each averaging node computes

ui = Σj wij xj (1)

where xj is the jth predictor, wij is the weight for that predictor at node i, and ui is the weighted average coming out of the ith node. These weighted averages are then squashed by a non-linear sigmoid (S-shaped) function in order to prevent the occurrence of extreme values:

yi = 1 / (1 + e^(−ui)) (2)

where yi is the forecast generated by the ith node.
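To make equations (1) and (2) concrete, here is a minimal numerical sketch of one feed-forward pass in Python; the layer sizes, weights and predictor values are arbitrary illustrative assumptions, not values from this paper:

    import numpy as np

    def node_forecast(x, w):
        """One node: weighted average (eq. 1) squashed by a sigmoid (eq. 2)."""
        u = np.dot(w, x)                  # u_i = sum_j w_ij * x_j
        return 1.0 / (1.0 + np.exp(-u))   # y_i = 1 / (1 + e^-u_i)

    # Hypothetical pass: 3 predictors -> 2 hidden nodes -> 1 output
    x = np.array([0.25, 0.3, 0.5])            # predictors (example values)
    W_hidden = np.array([[0.4, -0.2, 0.1],    # one weight row per hidden node
                         [0.3, 0.8, -0.5]])
    hidden = np.array([node_forecast(x, w) for w in W_hidden])
    w_out = np.array([1.2, -0.7])
    y = node_forecast(hidden, w_out)          # final forecast
    print(y)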

  3. Analog Neuron Components

The inputs to the neuron, as shown in figure 2, are multiplied by the weight matrix; the resultant outputs are summed and passed through a neuron activation function (NAF). The output obtained from the activation function is passed to the next layer for further processing. The multiplier block, the adder block and the activation function model the artificial neural network. The blocks to be used are as follows:

  1. Multiplication block

  2. Adders

3. NAF (Neuron Activation Function) block

    1. Analog Multiplier

Here I have used the Gilbert multiplier, named after Barrie Gilbert, who designed the circuit in 1968. The circuit combines diode-connected transistors, current mirrors, summing junctions, and differential pairs to multiply two differential signals. Consider two differential pairs that amplify the input with opposite gains:

vout1/vin = −gmRd (3)

vout2/vin = +gmRd (4)

Vout = Vout1 + Vout2 = A1Vin + A2Vin (5)

where A1 and A2 are gains controlled by Vcount1 and Vcount2, respectively. If I1 is zero then Vout = +gmRd·Vin. Vcount is used to vary the currents monotonically; if Vcount1 − Vcount2 is large, then Vout will be at its most positive or most negative. In general Vout = Vin·f(Vcount), where f(Vcount) can be expanded as a Taylor series; retaining the linear term gives

Vout ≈ Vin·Vcount,

which is the multiplication of the two inputs.
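The idealized behaviour described above can be sketched in a few lines of Python; the gm, Rd and tanh steering function below are illustrative modelling assumptions, not parameters extracted from the 0.35 µm design:

    import numpy as np

    # Illustrative parameters (assumed, not from the paper's design)
    gm, Rd = 1e-3, 5e3     # transconductance [S] and load resistance [ohm]
    VT = 0.5               # steering "soft switch" scale [V] (assumption)

    def gilbert_out(vin, vcount):
        """Idealized Gilbert cell: the two differential pairs contribute
        gains of -gm*Rd and +gm*Rd, weighted by how the control voltage
        steers the tail current between them."""
        f = np.tanh(vcount / (2 * VT))   # smooth steering function f(Vcount)
        return gm * Rd * f * vin         # ~ K * Vin * Vcount for small Vcount

    # For small inputs the output is proportional to the product Vin*Vcount
    print(gilbert_out(0.1, 0.05), gilbert_out(0.1, -0.05))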

Figure.3 Gilbert Multiplier.

As shown in figure 3, six N-MOSFETs are used in the design of the Gilbert multiplier. The MOSFETs are designed in 0.35 µm technology. For the given length, the W/L ratios are given in Table 1.

Table 1. W/L ratios of the Gilbert cell.

MOSFET (M)    W (µm)
M1            1.4
M2            1.4
M3            1.4
M4            1.4
M5            0.2
M6            0.2

    2. Operational Amplifier.

      Figure.4 General Block Diagram of OPAMP [8]

This OPAMP is a two-stage OPAMP in which the first stage is a differential amplifier whose differential current output is mirrored into the next stage and converted to a single-ended output through circuitry very similar to the synapse circuit. The outputs of the synapses can easily be summed: the summation is done by connecting all current outputs together. The summed current must then be converted to a voltage by a current-to-voltage (IV) converter.

Figure.5 Schematic of Two-stage OPAMP.

Table 2. W/L ratios of the OPAMP (L = 0.35 µm).

MOSFET (M)    W/L    W (µm)
M1, M2        3      1.05
M3, M4        15     5.25
M5, M8        4.5    1.58
M6            94     32.9
M7            14     4.92

    3. Opamp as Adder.

An adder can be designed using an operational amplifier, so it is first necessary to implement the op-amp. As shown in the figure, the op-amp is designed with eight MOSFETs, of which five are N-MOSFETs and three are P-MOSFETs, all with length L = 0.35 µm.

Figure.6 OPAMP as Adder Circuit.

Table 3. Specification of the adder.

Input           Value
V1              0.25 V
V2              0.3 V
V3              0.5 V
V4              1.0 V
Output (Vout)   −2.0 V
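As a sanity check on Table 3, here is a sketch of an ideal inverting summing amplifier; the unity-gain (equal-resistor) configuration is an assumption, since the paper does not list the adder's resistor values:

    # Ideal inverting summing amplifier: Vout = -(Rf/Rin) * (V1 + V2 + ...)
    def summing_amp(inputs, rf=1.0, rin=1.0):
        """Ideal op-amp adder; equal input resistors assumed."""
        return -(rf / rin) * sum(inputs)

    # Table 3 inputs; with unity gain the ideal output is -2.05 V,
    # close to the roughly -2.0 V reported by the simulation.
    print(summing_amp([0.25, 0.3, 0.5, 1.0]))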

    4. Neuron Activation Function Block.

The neuron activation function designed here is the tan-sigmoid. The design is basically a variation of the differential amplifier with a modification for the differentiation output. The same circuit should be able to output both the neuron activation function and its derivative. Here three designs are considered for the NAF; the NAF can be operated in three regions, as listed below (a combined behavioural sketch of the three variants follows the step function).

      1. Linear function with adjustable threshold

The output voltage of the circuit is determined by Vo = Vi(R2/R1) − Vref(1 + R2/R1), and the threshold is determined by q = Vref(R1 + R2)/R1. The lower set point is determined by Vref(R1 + R2)/R2 − Vcc(R1/R2) and the upper set point by Vref(R1 + R2)/R2 + Vcc(R1/R2).

        Figure.7 Schematic of Linear function with adjustable threshold

Table 4. Parameter values.

Parameter    Value
R1           1000 kΩ
R2           2500 kΩ
R3           1000 kΩ
Vref         1 V

      2. Sigmoid function with fixed gain control.

Another widely used nonlinear function is the sigmoid function, shown in the figure.

        Figure.8 Schematic of Sigmoid Function With Fixed Gain Control

This is the same as the linear threshold function, only Vref is tied to ground, so the upper limit as well as the lower limit is symmetrical about the axis, as shown in its transfer function.

Table 5. Parameter values.

Parameter    Value
Rs           1000 kΩ
Rf           2500 kΩ
R3           1 kΩ
Vref         Ground

      3. Step Function

This is also the same as the sigmoid function; the only difference is that the upper limit and the lower limit are almost the same. As shown in its transfer function, the transition occurs at zero voltage.
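Below is the combined behavioural sketch of the three NAF variants promised above, using the resistor and Vref values of Tables 4 and 5; treating the op-amp as ideal and saturating at ±2.5 V rails is a simplifying assumption:

    import numpy as np

    def naf_linear(vi, r1=1000e3, r2=2500e3, vref=1.0, vcc=2.5):
        """Linear function with adjustable threshold:
        Vo = Vi*(R2/R1) - Vref*(1 + R2/R1), limited by the rails."""
        vo = vi * (r2 / r1) - vref * (1 + r2 / r1)
        return np.clip(vo, -vcc, vcc)    # saturation at the supply rails

    def naf_sigmoid(vi, rs=1000e3, rf=2500e3, vcc=2.5):
        """Sigmoid-like function with fixed gain: the same circuit with
        Vref grounded, so the limits are symmetric about the axis."""
        return np.clip(vi * (rf / rs), -vcc, vcc)

    def naf_step(vi, vcc=2.5):
        """Step function: upper and lower set points coincide at 0 V."""
        return vcc if vi > 0 else -vcc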

4. Winner Takes All (WTA) Circuit

The winner-take-all (WTA) is an important circuit for neural network applications in which the most activated neuron has to be selected or a specific output obtained. Several structures of WTA circuits have been proposed; they can be broadly categorized as current-based and voltage-based structures. The current-based structures use the current signal as the carrier of information. They are easier to implement, since current signals can be added simply by wiring two signal lines together.

Two general types of inhibition mediate activity in neural systems: subtractive inhibition, which sets a zero level for the computation, and multiplicative (nonlinear) inhibition, which regulates the gain of the computation. We report a physical realization of general nonlinear inhibition in its extreme form, known as winner-take-all.

    Figure.9 Schematic of Voltage Based WTA circuit [9]

Any image can in general be described by a two-dimensional function f(x,y), where x and y represent the spatial coordinates and f(x,y) the value at that location. Depending on the type of image, the value f(x,y) can be light intensity, temperature (for thermal images), intensity of X-rays (for X-ray images), intensity of radio waves (for nuclear magnetic resonance (MRI) images), depth (for range images), etc.

Figure.10 Segmentation of a four-colour image

    Segmentation techniques can be placed into three classes:

    • Classical algorithms mostly based on mathematical or statistical methods

    • Artificial Intelligence techniques

    • Other techniques which either crossover or fall into none of the first two categories.

The present survey is intended to be a more comprehensive study of the existing neural-network-based segmentation techniques. Due to the extensive number of segmentation techniques reported in the literature, this survey is a selective one. Because most of the methods in the literature can be applied or extended easily to colour images, in the following discussion we will refer only to grey-level images.

Figure.9 shows the transistor-level schematic of the WTA circuit. VIN[k], k = 1, 2, …, n, are the input signals, and either VOA[k] or VOB[k], k = 1, 2, …, n, can be the outputs. Each cell contains two half differential amplifiers. All cells share the other half differential amplifiers, which are not enclosed by the cell boundaries. In Figure.9, PMOS transistors P1 and PA and NMOS transistors N1 and NA form differential amplifier 1. Likewise, PMOS transistors P2 and PB and NMOS transistors N2 and NB form differential amplifier 2. VB is a bias voltage.
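Functionally, the WTA selects the largest of its inputs and drives only the winning output high. A behavioural sketch of that selection follows; it uses the paper's 2.5 V logic level but does not model the circuit-level dynamics:

    def wta(inputs, v_high=2.5, v_low=0.0):
        """Behavioural winner-take-all: the cell with the largest input
        voltage wins and its output goes high; all others are pulled low."""
        winner = max(range(len(inputs)), key=lambda k: inputs[k])
        return [v_high if k == winner else v_low for k in range(len(inputs))]

    # Two-cell example from the results section: VIN[1] = 1.5 V wins
    print(wta([1.5, 1.0]))   # -> [2.5, 0.0]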

5. Neural Architecture for Image Segmentation

Image segmentation is one of the most important processes in modern computer vision. It involves partitioning the image into meaningful segments. It is the process by which a computation translates the original image description, i.e. an array of grey levels, into segments with uniform and homogeneous characteristics. These should correspond to structural units (objects) in the scene. [15]

Figure.11 (a) Original image; (b) pixels of the original image.

Figure.11 (a) shows the original image that is to be segmented and (b) shows the original image divided into a number of pixels. In X-ray images mainly two colours are used: black and white. So for medical images, if we remove the black pixels (portion) from the original image, then we can easily find the irregularities in a medical X-ray image. For this I have used a neural architecture with a winner-take-all circuit and designed the architecture for the same.

There are many applications of image segmentation. From the medical field to robotics, image segmentation has played and still plays an important role. For instance, for the automated detection of cancerous cells from mammographic images, segmentation followed by recognition or classification is required. Another example is that of automatic non-destructive testing techniques, such as automatic inspection of welding and castings, detection of foreign bodies within food products, etc. Such techniques involve the segmentation of the image and the detection (recognition) of possible anomalies or foreign bodies within it. Therefore the output of such a system is, in most cases, directly dependent on the segmented output of the original image.

Figure.12 Black portion of X-ray image

Figure.13 White portion of X-ray image

As shown in figure 14, first take the image which is to be segmented, then divide the original image into a certain number of pixels; for higher resolution, divide it into a larger number of pixels. Then apply these pixels as inputs to the neurons. Here I have designed the architecture for two neurons: it takes pixels as input, and if the portion of a pixel is black then the output of the neuron goes to logic 0, meaning a low output.

Figure.14 Neural Architecture for X-ray image segmentation

Figure.15 Selection of logic high or low

Otherwise, for the white portion of the image (e.g. of an X-ray image), it shows logic 1, meaning a high output of 2.5 V (Vdd). So the WTA circuit in the architecture gives logic 1 (2.5 V) only to the white portion of the image, and all remaining portions of the image go down to logic 0; at the end, an image with only the white portion is obtained, which is the segmented version of the original image. The result for an X-ray image follows.

Figure.16 Segmentation of X-ray image
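A behavioural sketch of the per-pixel decision just described: dark pixels map to logic 0 and bright pixels to logic 1 (Vdd), so only the white portion of the X-ray survives. The 0.5 grey-level threshold and the sample patch are illustrative assumptions:

    import numpy as np

    def segment_xray(pixels, threshold=0.5, vdd=2.5):
        """Per-pixel decision of the architecture: dark -> logic 0,
        bright -> logic 1 (Vdd), keeping only the white portion."""
        pixels = np.asarray(pixels, dtype=float)   # grey levels in [0, 1]
        return np.where(pixels > threshold, vdd, 0.0)

    # Hypothetical 2x3 grey-level patch (values assumed for illustration)
    patch = [[0.1, 0.8, 0.9],
             [0.05, 0.7, 0.2]]
    print(segment_xray(patch))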

Artificial neural networks have come to be used as a different approach for image segmentation. Their properties, such as graceful degradation in the presence of noise, their ability to be used in real-time applications and the ease of implementing them with VLSI processors, led to a boom in ANN-based methods for segmentation. Almost all types of neural networks have been applied, with different degrees of success, the most used being Kohonen and Hopfield ANNs. The figures below show the image segmentation of medical images.

6. Results and Discussion

    1. Analog Multiplier

Figure.17 DC Transfer Characteristics of the Gilbert cell

Transfer function
Inputs: Vin = −2.5 to 2.5 V; Vcount = −2.5 to 2.5 V
Output: multiplication of the respective inputs.

Figure.18 Modulated Waveform for Multiplier

Inputs: SIN (0 100MV 500) and SIN (0 10MV 300KHZ)
Output: modulated signal

    2. Operational Amplifier (OPAMP)

Figure.19 Transfer characteristic, gain and frequency response of OPAMP

    3. OPAMP as Adder

      Figure.20 Adder Circuit Simulation Result

4. NAF (Neuron Activation Function) block

Figure.21 Transfer Function of Linear Function with Adjustable Threshold

Figure.22 Transfer Function of Sigmoid Function with Fixed Gain Control

Figure.23 Transfer Function of Step Function

    5. Results of Winner Takes All (WTA)

The computer simulation of the presented WTA circuit uses T-Spice. The circuits are simulated in 0.35 µm CMOS technology with NMOS and PMOS transistor models. The simulation results of the WTA circuit with two unit cells are shown in Figure.24. The circuit chooses the largest input, VIN[1] = 1.5 V, by pushing VOB[1] up to high values and pulling VOB[2] down. The total competition time is about 12 ns, including the 5 ns of early competition when D2 is shut off by the clock.

      Figure.24 Simulation Results for Two Cells WTA Circuit


      Figure.25 Transient Responses for Two Cells WTA Circuit [16]

    6. Results for Image Segmentation

Figure.26 Segmentation of X-ray image using neural network

Figure.26 above shows the segmentation of an X-ray image in which the white portion of the image takes logic 1 (+2.5 V, Vdd) and the black portion logic 0, so in the segmented image only the white portion is highlighted, as shown in the figure.

Figure.27 Segmentation of cine-angiographic image of left ventricle

7. Conclusion and Future Work

Neural networks are widely used in real-world interfaces and can be implemented using different methods, but we chose analog VLSI because it is very fast compared to digital VLSI and needs no A/D or D/A converter. One important point is that it can be directly interfaced with physical sensors and actuators, so it is used in pattern recognition (e.g. ECG), image processing and many other applications.

Here I have designed analog neural components such as the (Gilbert cell) multiplier or mixer, a CMOS op-amp, an adder using the op-amp, and the activation functions (sigmoid, step function and linear threshold function) with the help of the T-Spice simulation software. I have also designed a voltage-based CMOS WTA circuit and designed the neural network in 0.35 µm technology by combining all the analog components.

I have also applied the neural network to image segmentation. Finally, I have applied this neural network with WTA for image segmentation; it is especially suitable for X-ray images, as shown in the simulation results. The results have been obtained using the T-Spice simulator, and the simulation results of each module are verified.

References

1. B. Gilbert, "A precise four quadrant multiplier with sub nanosecond response," IEEE Journal of Solid-State Circuits, vol. 3, pp. 365-373, 1968.

2. Ismet Bayraktaroglu, "Circuit Level Simulation Based Training Algorithms for Analog Neural Networks," M.S. Thesis, Electrical and Electronic Engineering, 1996.

3. Bo Gian Marco, "Microelectronic Neural Systems: Analog VLSI for Perception and Cognition," Thesis, November 1998.

4. N. K. Bose, P. Liang, Neural Network Fundamentals with Graphs, Algorithms and Applications, Tata McGraw-Hill, New Delhi, 2002, ISBN 0-07-463529-8.

5. Cyril Prasanna Raj P., "Design and Analog VLSI Implementation of Neural Network Architecture for Signal Processing."

6. Ramraj Gottiparthy, "An Accurate CMOS Four Quadrant Analog Multiplier," 2007.

7. A PPT on "Fundamentals of Neural Networks."

8. P. E. Allen, CMOS Analog Circuit Design, 2005.

9. Janusz A. Starzyk and Ying-Wei Jan, "A Voltage Based Winner Takes All Circuit for Analog Neural Networks."

10. Constantino Carlos Reyes-Aldasoro, Ana Laura Aldeco, "Image Segmentation and Compression Using Neural Networks," Departamento de Sistemas Digitales, Instituto Tecnológico Autónomo de México, Río Hondo No. 1, Tizapán San Ángel, 01000 México D.F.

11. Yu-Cherng Hung, "CMOS Nonlinear Signal Processing Circuits," National Chin-Yi University of Technology, Taiwan, R.O.C.

12. Darryl Davis, Su Linying, Bernadette Sharp, "Neural Networks for X-Ray Image Segmentation," School of Computing, Staffordshire University, Stafford ST16 0DG, UK.

13. Dean K. McNeill, Christian R. Schneider, Howard C. Card, "Analog CMOS Neural Networks Based on Gilbert Multipliers with In-Circuit Learning," Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, Canada R3T 5V6.

14. Catalin Amza, "A Review on Neural Network-Based Image Segmentation Techniques," De Montfort University, Mechanical and Manufacturing Engineering, The Gateway, Leicester, LE1 9BH, United Kingdom.

15. K. L. Baishnab, Amlan Nag, F. A. Talukdar, "A Novel High Precision Low Power Current Mode CMOS Winner-Take-All Circuit," ECE Dept., National Institute of Technology Silchar, Assam, India.

16. Behzad Razavi, Design of Analog CMOS Integrated Circuits, Tata McGraw-Hill, New Delhi, 2002, ISBN 0-07-052903-5.

17. R. Jacob Baker, Harry W. Li, David E. Boyce, CMOS Circuit Design, Layout, and Simulation, Department of Electrical Engineering, Microelectronics Research Center.

18. Mohammad Rashid, PSpice Using OrCAD for Circuits and Electronics, PHI Learning.

19. Phillip E. Allen, Douglas R. Holberg, CMOS Analog Circuit Design, Oxford University Press.
