Image Classification Using Deep Learning

DOI : 10.17577/IJERTV6IS110016


Dr. Vinayak Bharadi Associate Professor, HOD-IT,

Finolex Academy of Management and Technology, Ratnagiri

Arusa Irfan Mukadam Research Scholar

Finolex Academy of Management and Technology, Ratnagiri

Misbah Naimuddin Panchbhai Research Scholar

Finolex Academy of Management and Technology, Ratnagiri

Nikita Narayan Rode Research Scholar

Finolex Academy of Management and Technology, Ratnagiri

Abstract: Image classification is used to narrow the gap between computer vision and human vision so that machines can recognize images in much the same way humans do. It deals with assigning the appropriate class to a given image. We therefore propose a system, Image Classification using Deep Learning, that classifies images using a neural network classifier. The system measures the accuracy of classifying images on a GPU (NVIDIA) and on a CPU. It is designed using Python as the programming language and TensorFlow for creating the neural networks.

Keywords: Python, TensorFlow library, CUDA library, Convolutional Neural Network, Artificial Neural Network.

INTRODUCTION

Neural networks were first proposed in 1944 by Warren McCullough and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952. The inventor of the first neurocomputer, Dr. Robert Hecht-Nielsen, defines a neural network as "…a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs." Neural networks are used for machine learning, in which a system learns to perform tasks by analysing a training set. A neural network consists of thousands or even millions of interconnected processing nodes. Most networks are feed-forward, meaning data moves through them in only one direction. An individual node receives data from several nodes and sends processed data on to other nodes. During training, node weights are initially assigned random values; as training data is fed to the network, the weights are continuously adjusted until the network gives accurate output.

Deep learning has advanced rapidly in the last few years, and many tech giants such as Google, Microsoft, Facebook and Baidu have taken an interest in it. Deep learning is a subfield of machine learning consisting of algorithms that permit software to train itself to perform tasks such as image recognition by exposing multi-layered neural networks to large amounts of data; these networks, inspired by the function of the brain, are called artificial neural networks. Deep learning is good at identifying patterns in images. Unlike classical techniques such as Support Vector Machines, linear classifiers, regression, Bayesian methods, decision trees, clustering and association rules, it applies the same neural-network approach to many different problems. One example of deep learning in the wild is how Facebook can automatically organize photos, identify faces, and suggest which friends to tag.

CUDA is a parallel computing platform and programming model developed by NVIDIA for general-purpose computing on graphics processing units (GPUs). CUDA stands for Compute Unified Device Architecture. CUDA technology harnesses the parallel processing power of NVIDIA GPUs. The CUDA architecture is a parallel computing architecture that delivers the performance of NVIDIA's graphics processor technology to general-purpose GPU computing [5]. Applications that run on the CUDA architecture can take advantage of an installed base of over one hundred million CUDA-enabled GPUs in desktop and notebook computers, professional workstations, and supercomputer clusters. The CUDA Toolkit from NVIDIA consists of GPU-accelerated libraries, development tools, the CUDA runtime and a compiler. Programs can be expressed in C, C++, Python and MATLAB, with parallelism exposed through extensions in the form of a few basic keywords.
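As a quick illustration of how a CUDA-backed GPU is picked up on the software side, the following minimal sketch (assuming a CUDA-enabled TensorFlow 2.x build is installed) lists the visible devices and places a small computation on the GPU when one is available:

```python
import tensorflow as tf

# List the CUDA-enabled GPUs that TensorFlow can see.
gpus = tf.config.list_physical_devices('GPU')
print("Visible GPUs:", gpus)

# Place a small matrix multiplication on the GPU if present,
# otherwise fall back to the CPU.
device = '/GPU:0' if gpus else '/CPU:0'
with tf.device(device):
    a = tf.random.uniform((1024, 1024))
    b = tf.random.uniform((1024, 1024))
    c = tf.matmul(a, b)
print("Computed on", device, "- result shape:", c.shape)
```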

Image classification is the process of assigning the pixels in a digital image to classes of interest. The objective of image classification is to identify the unique features of the image. In order to classify a set of data into different classes or categories, the relationship between the data and the classes into which they are classified must be well understood. Image classification is used in different areas; for example, in satellite remote sensing the features are measurements made by sensors at different wavelengths of the electromagnetic spectrum (visible, infrared, microwave), texture features, and so on.

  1. LITERATURE REVIEW

    Neural networks are essentially mathematical models for solving an optimization problem. They are made of neurons, the basic computational unit of a neural network, and are also called Artificial Neural Networks (ANNs). There are different types of ANN. Modelled on the neurons and network functions of the human brain, an ANN performs tasks in a similar manner. Most artificial neural networks bear some resemblance to their more complex biological counterparts and are very effective at their intended tasks, for example segmentation or classification.

    Feedback ANN: In this type of ANN, the output goes back into the network to achieve the best possible outcome internally. The feedback network feeds information back into itself and is well suited to solving optimization problems. Feed-forward ANN: A feed-forward network is a simple neural network made up of an input layer, an output layer and one or more hidden layers of neurons. By assessing its output with respect to its input, the strength of the network can be observed based on the group behaviour of the connected neurons, and the output is decided.
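The paper does not give an implementation of this, but a feed-forward network of the kind described, built with the Keras API that ships with TensorFlow, might look like the following sketch (the layer sizes are illustrative assumptions, not values from the paper):

```python
import tensorflow as tf

# A simple feed-forward (fully connected) network:
# data flows in one direction from input to output.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),             # input layer: 64 features
    tf.keras.layers.Dense(32, activation='relu'),   # hidden layer of neurons
    tf.keras.layers.Dense(4, activation='softmax')  # output layer: 4 classes
])

model.summary()
```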

  2. PROPOSED SYSTEM

    Our system works according to the system architecture shown in the figure below. It first captures the image through a digital camera or loads it from a database. Every image is normalized to a predefined size for further processing. For dimensionality reduction we use feature extraction methods such as M-BTC (Block Truncation Coding) and Histogram Equalization; feature vectors are created from each image by extracting its features with these methods.

    The processed image is then given to the neural network for the classification process.

    Fig. 1 System Architecture
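The normalization step could be realized along the following lines (a sketch under the assumption that images are resized to 128x128 pixels and scaled to the [0, 1] range; the exact target size is not stated in the paper):

```python
import tensorflow as tf

def preprocess(image_path, size=(128, 128)):
    """Load an image, resize it to a predefined size and scale pixel values."""
    data = tf.io.read_file(image_path)
    image = tf.io.decode_image(data, channels=3, expand_animations=False)
    image = tf.image.resize(image, size)        # normalize to predefined size
    return tf.cast(image, tf.float32) / 255.0   # scale pixels to [0, 1]
```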

    There are many open source frameworks for implementing deep learning; one of the most popular is TensorFlow. Deep learning deals with making computers recognize objects, shapes and speech on their own, and can be seen as a branch of machine learning. In traditional applications, computers are given knowledge of how to recognize the unique features of an object manually by humans, but that is not the case with deep learning. In deep learning we build neural networks that do the task of identifying the features of an image themselves. A neural network has one input layer, n hidden layers and one output layer. Once an image is fed in, it traverses the n hidden layers, each responsible for performing a specific operation, and the result is produced at the output layer. In this way, instead of manually making the system understand how to classify images, we ask the system to learn by itself by finding patterns within different images and assigning the appropriate classes. Our system also deals with creating different types of neural networks that train themselves by observing the patterns in the data. Currently the system focuses on only four classes (indoor, outdoor, cat, dog). The system is developed using Python with the TensorFlow framework for the CPU-based version and the CUDA library for the GPU-based version. The performance on CPU and GPU is measured by evaluating parameters such as execution time and classification accuracy.
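A convolutional network for the four classes could be defined as follows (a minimal TensorFlow/Keras sketch; the layer configuration and the 128x128 input size are illustrative assumptions, since the paper does not list the exact architecture):

```python
import tensorflow as tf

NUM_CLASSES = 4  # indoor, outdoor, cat, dog

# A small convolutional neural network for 4-class image classification.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(NUM_CLASSES, activation='softmax'),
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Training would then be, for example:
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)
```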

  3. METHODOLOGY

Fig. 2 Conceptual multilayer perceptron

Fig. 2 shows a conceptual multilayer perceptron, which consists of three different layers, namely an input, a hidden and an output layer. Each layer comprises neurons, and each neuron holds weights that are updated according to the function associated with that neuron. The inputs to the network, x1, x2, …, xn, are propagated to the next layer. Each layer acts like a door: when you exit the door you become a somewhat different person. Similarly, when an image passes through the different layers, the output of each layer differs from that of the others. The weights of the neural network are adjusted based on the error rate. One of the most significant methods for this is back-propagation, where the weights are adjusted based on the error, i.e. the difference between the actual (target) result and the obtained result.

In this way an image is classified into the desired output class.
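To make the weight adjustment concrete, here is a tiny NumPy sketch of one back-propagation step for a single-hidden-layer perceptron (purely illustrative; the layer sizes, learning rate and sigmoid activation are assumptions, not details taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny network: 3 inputs -> 4 hidden neurons -> 2 outputs.
W1 = rng.normal(size=(3, 4))   # weights start at random values
W2 = rng.normal(size=(4, 2))
lr = 0.1                       # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([[0.5, 0.2, 0.9]])   # one training example
t = np.array([[1.0, 0.0]])        # actual (target) result

# Forward pass.
h = sigmoid(x @ W1)
y = sigmoid(h @ W2)               # obtained result

# Error = actual result - obtained result.
error = t - y

# Back-propagate the error and adjust the weights.
delta_out = error * y * (1 - y)
delta_hid = (delta_out @ W2.T) * h * (1 - h)
W2 += lr * h.T @ delta_out
W1 += lr * x.T @ delta_hid
```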

TABLE I. Training dataset classes

Tiger cat, Zebra, Dog, Horse, Giraffe, Tiger, Lion, Panda, Jaguar

Fig. 3 Results of Classification

Sr. No | Actual Class | Image 1 | Image 2 | Image 3 | Obtained Result Class
1      | Zebra        | (image) | (image) | (image) | Zebra
2      | Tiger        | (image) | (image) | (image) | Tiger
3      | Panda        | (image) | (image) | (image) | Panda
4      | Horse        | (image) | (image) | (image) | Horse

TABLE II. Obtained result after classification

SYSTEM | SPECIFICATIONS | EXECUTION TIME
CPU    | Intel(R) Core(TM) i3-2100 CPU @ 3.10 GHz, RAM: 2.00 GB | 120 sec
GPU    | NVIDIA GeForce Titan X Pascal, built-in memory: 12 GB, NVIDIA CUDA cores: 3072, memory speed: 336.5 GB/sec | 68 sec

TABLE III. Hardware specification
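The benchmarking code is not included in the paper, but CPU versus GPU execution time could be compared with a sketch along these lines (the device names and the matrix-multiplication workload are assumptions chosen for illustration, not the actual classification workload timed above):

```python
import time
import tensorflow as tf

def time_workload(device, n=4096, repeats=10):
    """Time a repeated matrix multiplication on the given device."""
    with tf.device(device):
        a = tf.random.uniform((n, n))
        b = tf.random.uniform((n, n))
        start = time.perf_counter()
        for _ in range(repeats):
            c = tf.matmul(a, b)
        _ = c.numpy()  # force execution to finish before stopping the clock
        return time.perf_counter() - start

print("CPU time:", time_workload('/CPU:0'), "sec")
if tf.config.list_physical_devices('GPU'):
    print("GPU time:", time_workload('/GPU:0'), "sec")
```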

CONCLUSION AND FUTURE WORK

In this paper we proposed a system that uses a convolutional neural network to extract and select the features of a given image and classify the image into the appropriate class. The convolutional neural network can give high accuracy compared to other classifiers. The performance and accuracy were tested on a standard CPU as well as on a GPU. Hence we conclude that convolutional neural networks are a good choice for image classification. Further, this system can be extended to applications such as biometric recognition [7][8][9][10].

As future work, we will consider several algorithms and different weight adjustment functions for deep learning in order to compare the performance enhancement on the GPU platform.

ACKNOWLEDGEMENT

Our special thanks to NVIDIA Corporation for contributing to our project by donating the NVIDIA TITAN X Pascal GPU hardware. This research work is supported by the NVIDIA GPU Grant Program.

REFERENCES

  1. Young Jong Mo, Joongheon Kim, Jong-Kook Kim, Aziz Mohaisen, and Woojoo Lee, "Performance of Deep Learning Computation with TensorFlow Software Library in GPU-Capable Multi-Core Computing Platforms."

  2. Raja Majid Mehmood, Ruoyu Du, and Hyo Jong Lee, "Optimal Feature Selection and Deep Learning Ensembles Method for Emotion Recognition from Human Brain EEG Sensors."

  3. Ju-Seok Shin, Ung-Tae Kim, Deok-Kwon Lee, and Sang Jun Park, "Real-Time Vehicle Detection using Deep Learning Scheme on Embedded System."

  4. Yuan Yuan, Lichao Mou, and Xiaoqiang Lu, "Scene Recognition by Manifold Regularized Deep Learning Architecture."

  5. Rafia Inam, "An Introduction to GPGPU Programming - CUDA Architecture," Malardalen Real-Time Research Centre, Malardalen University, Vasteras, Sweden.

  6. S. S. Dubal and V. A. Bharadi, "Comparative analysis of various approaches for different biometric traits," International Conference & Workshop on Electronics & Telecommunication Engineering (ICWET 2016), Mumbai, 2016, pp. 163-168. doi: 10.1049/cp.2016.1140

  7. A. V. Kartha and V. A. Bharadi, "Face recognition using orthogonal transform coefficients of hyperspectral face images," 2015 International Conference on Information Processing (ICIP), Pune, 2015, pp. 349-354. doi: 10.1109/INFOP.2015.7489406

  8. V. A. Bharadi, P. Mishra and B. Pandya, "Multimodal face recognition using multidimensional clustering on hyperspectral face images," 2014 5th International Conference - Confluence The Next Generation Information Technology Summit (Confluence), Noida, 2014, pp. 582-588. doi: 10.1109/CONFLUENCE.2014.6949048

  9. V. A. Bharadi, B. Pandya and B. Nemade, "Multimodal biometric recognition using iris & fingerprint: By texture feature extraction using hybrid wavelets," 2014 5th International Conference - Confluence The Next Generation Information Technology Summit (Confluence), Noida, 2014, pp. 697-702. doi: 10.1109/CONFLUENCE.2014.6949309

  10. V. A. Bharadi, V. I. Singh and B. Nemade, "Hybrid Wavelets based Feature Vector Generation from Multidimensional Data set for On-line Handwritten Signature Recognition," 2014 5th International Conference - Confluence The Next Generation Information Technology Summit (Confluence), Noida, 2014, pp. 561-568. doi: 10.1109/CONFLUENCE.2014.6949038
