An Overview of Deep Learning

DOI: 10.17577/IJERTCONV9IS05064


S. Sunitha,

Associate Professor, Dept. of Computer Science, Bankatlal Badruka College for Information Technology, Kachiguda, Hyderabad.

Abstract – This paper presents an overview of Deep Learning and its recent advances. Deep Learning (DL) is a subfield of Machine Learning (ML), and Machine Learning is in turn a subfield of Artificial Intelligence (AI). Deep Learning is currently one of the most active research areas across domains, driving significant advances in industry, enterprise software, health care, retail, financial services, automotive applications, and daily life. In Deep Learning, data is processed through neural networks so that a machine can perform tasks much as a human does; its methods have therefore been transformative in many areas, especially within Machine Learning. Machine Learning and Deep Learning technologies rest on algorithms that enable a computer to reason like a human and to make decisions by learning from examples. New deep learning techniques appear regularly, each improving performance on particular problems, making Deep Learning one of the largest and fastest growing parts of Artificial Intelligence.

Keywords – Artificial Intelligence, Neural Networks, Machine Learning, Deep Learning, Technologies

INTRODUCTION

Deep Learning is a realization of Artificial Intelligence (AI). The central aim of artificial intelligence is to build systems that act and think, in all aspects, as humans do. It improves the ability of computers to work automatically by drawing on past experience. Deep Learning (DL) emphasizes the development of programs that can ingest data and learn from it.

Artificial Intelligence (AI) is the field of study concerned with any human-like intelligence exhibited by a computer system. It refers to the ability of a system to reproduce capabilities of the human mind, learning from past examples and experience to identify, understand, respond, make decisions, and solve problems.

Machine learning is an integral part, or subset, of AI in which an application learns by itself. It is the study of giving machines the ability to learn a specific task without being explicitly programmed.

Deep learning is in turn an integral part, or subset, of machine learning (ML) in which an application teaches itself to perform a task without human intervention.

[Figure: Deep Learning as a subset of Machine Learning, which is a subset of Artificial Intelligence]

The term Deep Learning (DL) entered the Machine Learning (ML) literature in 1986; around the year 2000 it began to be applied to Artificial Neural Networks. Deep Learning uses many-layered neural networks to learn levels of abstraction and representation that make sense of data. It draws on many supervised and unsupervised learning methods and representations to achieve state-of-the-art results. Modern Artificial Intelligence (AI) has been shaped by these advanced Deep Learning approaches.

In this paper, before turning to recent advances in Deep Learning, I first discuss related work on deep learning models and approaches; I then cover recent advances in Deep Learning (DL) approaches and deep architectures, i.e. Deep Neural Networks (DNN) and Deep Generative Models (DGM), along with training, regularization, and optimization methods.

RELATED WORK

The table below summarizes related work on deep learning.

| S.No. | Author & Year            | Title                               | Domain                                    |
|-------|--------------------------|-------------------------------------|-------------------------------------------|
| 1     | Young et al., 2017       | DL models and architectures         | Natural Language Processing               |
| 2     | Zhang et al., 2017       | DL techniques                       | Front-end and back-end speech recognition |
| 3     | Zhu, 2017                | DL frameworks                       | Remote Sensing                            |
| 4     | Ronen and Bouquet, 2017  | DL concepts                         | Geometry                                  |
| 5     | Wang, 2017               | DL models in time series            | Neural Networks                           |
| 6     | Goodfellow et al., 2016  | Deep networks and generative models | Machine Learning                          |
| 7     | LeCun et al., 2015       | DL with CNN and RNN                 | Predictive future learning                |
| 8     | Wang and Raj, 2015       | Evolution of DL                     | Artificial Neural Networks                |

HISTORY OF DEEP LEARNING ARCHITECTURE

Artificial Neural Networks (ANN), or Neural Networks (NN), are collections of connected units or nodes called artificial neurons, which transmit signals to one another. An ANN has three kinds of components: neurons (inputs), connections with weights (neuron to neuron), and a propagation function (the weighted sum of a neuron's inputs). Neural networks are trained on examples, each consisting of an input and the desired output. The trained model is a data structure, a directed weighted graph together with its learning algorithms, that maps inputs to the desired outputs. The following figure shows a basic neural network; a minimal sketch of the propagation function appears after it.

Fig. 1.1 Basic Neural Networks
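
To make the propagation function concrete, here is a minimal sketch in Python/NumPy of a single artificial neuron: it computes the weighted sum of its inputs plus a bias and passes the result through a sigmoid activation. The input, weight, and bias values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    # Squashes the weighted sum into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b):
    # Propagation function: weighted sum of inputs plus a bias,
    # followed by a nonlinear activation.
    return sigmoid(np.dot(w, x) + b)

# Illustrative input, weights, and bias (hypothetical values).
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.4, -0.2])
b = 0.05
print(neuron(x, w, b))
```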

Conventional ANNs were built from simple neural layers and used for simple calculations. Back propagation was then introduced into ANNs, after which the Support Vector Machine (SVM) outperformed them on many tasks. Later, a new network called the Boltzmann Machine made learning easier. Machine learning models based on artificial neural networks are capable of supervised learning. In this way, neural networks evolved alongside different technologies.

Deep Learning is a modern technology based on neural networks that attempt to work like the human cortex. A deep learning architecture consists of neural networks organized into several layers that process data into a result. There are three kinds of layers: an input layer, hidden layers, and an output layer.

Input layer: receives the raw data. Hidden layers: where the algorithms process the inputs. Output layer: produces the conclusions and results.

Fig. 1.2 Deep Neural Networks

Deep learning models are based on deep neural networks trained on large sets of labeled data using forward propagation and back propagation. They are capable of both supervised and unsupervised learning. A minimal training sketch follows.
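
To illustrate forward and back propagation, the sketch below trains a tiny two-layer network in Python/NumPy on a toy labeled dataset (XOR). The layer sizes, learning rate, and data are illustrative assumptions, not the paper's own method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled dataset: XOR inputs and targets (illustrative).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two-layer network: 2 inputs -> 4 hidden units -> 1 output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward propagation.
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # network output

    # Back propagation of the squared-error loss.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent parameter updates.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # should approach [0, 1, 1, 0]
```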

Create and Train Deep Learning Models

Fig. 1.3 Deep learning process

There are three ways to create and train a deep learning model (a transfer-learning sketch follows the list):

  1. Training from Scratch

  2. Transfer Learning

  3. Feature Extraction
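
As one common recipe for the second approach, transfer learning, the sketch below (assuming PyTorch and torchvision are available) loads a ResNet-18 pretrained on ImageNet, freezes its feature-extraction layers, and replaces the final layer for a hypothetical 5-class task. This is an illustrative sketch, not the paper's own method.

```python
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 pretrained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature-extraction layers.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer for a new,
# hypothetical 5-class problem; only this layer will train.
model.fc = nn.Linear(model.fc.in_features, 5)
```

The same frozen-backbone setup also serves the third approach, feature extraction, by reading activations from the pretrained layers instead of retraining them.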

Deep learning draws on ten popular techniques (a sketch of stochastic gradient descent with learning-rate decay follows the list):

  1. Back Propagation

  2. Stochastic Gradient Descent

  3. Learning Rate Decay

  4. Dropout

  5. Max Pooling

  6. Batch Normalization

  7. Long Short-Term Memory

  8. Skip-gram

  9. Continuous Bag of Words

  10. Transfer Learning
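
To make techniques 2 and 3 concrete, here is a minimal Python/NumPy sketch of stochastic gradient descent with learning-rate decay on a toy linear-regression problem. The data, decay schedule, and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data: y = 3x + 2 plus noise (illustrative).
X = rng.uniform(-1, 1, size=200)
y = 3 * X + 2 + 0.1 * rng.normal(size=200)

w, b = 0.0, 0.0
lr0, decay = 0.5, 0.01  # initial learning rate and decay rate

for epoch in range(50):
    # Learning-rate decay: shrink the step size over time.
    lr = lr0 / (1 + decay * epoch)
    for i in rng.permutation(len(X)):
        # Stochastic gradient descent: update on one sample at a time.
        err = (w * X[i] + b) - y[i]
        w -= lr * err * X[i]
        b -= lr * err

print(round(w, 2), round(b, 2))  # should approach 3 and 2
```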

ADVANCED DEEP NEURAL NETWORKS

The ten popular techniques above are applied within one or more of the following networks.

  1. RNN – Recurrent Neural Network

    Unlike a feed-forward network, an RNN has internal memory and is used to process sequences of inputs. Its main variants are:

    1.1) One-to-one
    1.2) One-to-many
    1.3) Many-to-one
    1.4) Many-to-many
    1.5) Bidirectional
    1.6) Deep RNN

  2. LSTM – Long Short-Term Memory

    It combines an RNN with a gradient-based learning algorithm. Its variants include:

    2.1) Batch Normalized LSTM
    2.2) Pixel RNN
    2.3) Bidirectional LSTM
    2.4) Variational Bi-LSTM

  3. GRU – Gated Recurrent Unit

    It is a simplified gating architecture in the LSTM family.

  4. CNN – Convolutional Neural Network

    A subclass of DNN, used to analyze images, audio, and video. Its variants include:

    4.1) Flattened CNN
    4.2) Depthwise Separable CNN
    4.3) Grouped CNN
    4.4) Shuffled Grouped CNN

  5. DBN – Deep Belief Network

    It is used for tasks such as image recognition.

  6. DSN – Deep Stacking Network

  7. FNN – Feed-Forward Neural Network

  8. Auto-Encoders

    Networks trained to reproduce their inputs at their outputs. Variants include:

    8.1) Variational Auto-encoders
    8.2) Stacked Denoising Auto-encoders
    8.3) Transforming Auto-encoders

  9. Deep Generative Models

    These DNNs use multiple levels of abstraction and representation. The main techniques are:

    9.1) Boltzmann Machine: models a probability distribution.
    9.2) Restricted Boltzmann Machine: used for document processing.
    9.3) Recurrent Support Vector Machine: applies a discriminative objective at the sequence level.

  10. Training and Optimization Techniques

The following techniques fall under this category (a minimal dropout sketch follows the list).

10.1) Dropout: prevents overfitting in neural networks.
10.2) Maxout: outputs the maximum of its inputs.
10.3) Zoneout: a regularization method for RNNs.
10.4) Batch Normalization: accelerates DNN training.
10.5) Distillation: compresses a neural network into a smaller model.

10.6) Layer Normalization: speeds up neural network training.
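
As a minimal illustration of technique 10.1, the sketch below implements "inverted" dropout in Python/NumPy: at training time each activation is kept with probability keep_prob and the survivors are rescaled so the expected activation is unchanged. The activation values and keep_prob are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def dropout(activations, keep_prob=0.8, training=True):
    # At test time dropout is disabled.
    if not training:
        return activations
    # Randomly zero units, then rescale the survivors so the
    # expected value of each activation stays the same.
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

h = np.array([0.2, 1.5, -0.7, 0.9])  # illustrative hidden activations
print(dropout(h))
```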

DEEP LEARNING FRAMEWORKS

Many open-source libraries and frameworks are available for deep learning. The figure below shows the different frameworks.

Almost all of them are used from the Python programming language; a minimal sketch of defining a model in one such framework follows the figure.

Fig. 1.4 Different Frameworks
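
As an example of framework usage, the sketch below (assuming TensorFlow is installed) defines and compiles a small fully connected classifier with the Keras API. The layer sizes and input shape are illustrative assumptions, not from the paper.

```python
import tensorflow as tf

# A small fully connected classifier for 784-dimensional inputs
# (e.g. flattened 28x28 images) and 10 classes -- illustrative sizes.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),  # regularization (cf. 10.1 above)
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would then be: model.fit(x_train, y_train, epochs=5)
```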

APPLICATIONS OF DEEP LEARNING

DL methods use supervised, semi-supervised, or reinforcement learning. Application areas include:

  • Image classification and recognition, video classification, sequence generation, defect classification

  • Text, speech, image, and video processing: text classification, speech recognition and spoken language understanding, text-to-speech generation, query classification, sentence classification and modeling, word, document, and sentence processing

  • Generating image captions, photographic style transfer, image colorization, generating textures and stylized images

  • Visual and textual question answering, visual recognition and description, visual art processing, object detection, document processing, character motion synthesis and editing

  • Person identification, face recognition and verification, action recognition in videos, human action recognition, classifying and visualizing motion-capture sequences

  • Handwriting generation and prediction, automated machine translation, named entity recognition, mobile vision and advertising, conversational agents

  • Calling genetic variants and bioinformatics, cancer detection, X-ray CT reconstruction, epileptic seizure prediction

  • Hardware acceleration, robotics, speech and audio processing, information retrieval, object recognition and computer vision

  • Financial fraud detection, medical image analysis, military applications, image restoration, customer relationship management, drug discovery and toxicology, multimodal and multi-task learning

DISCUSSION

Deep Learning has proved that it gives good results in almost all areas and domains. Still, a few areas, such as psychology and cognitive science, remain to be explored with these techniques. The main drawbacks of DL models are their need for large sets of labeled data and their poor fit for hierarchical structure.

CONCLUSION

The objective is to make decisions the way human beings do. It is safe to say that solid progress has been made in understanding why and how deep learning models work. Day by day, deep learning models and approaches are becoming more popular for extracting accurate results from data more efficiently. Yet many problems remain, including health problems such as cancer and Covid-19; we hope that, using deep learning, researchers can achieve breakthroughs and deliver accurate results.

ACKNOWLEDGEMENT

The author would like to thank well-wishers and critics for their insightful comments, which helped to improve the paper.

REFERENCES

  1. Charu C. Aggarwal. Neural Networks and Deep Learning. Springer.

  2. Simon Haykin. Neural Networks: A Comprehensive Foundation. PHI edition.

  3. Deep Learning – Wikipedia.

  4. Yoshua Bengio. URL http://www.iro.umontreal.ca/~bengioy/yoshua_en/index.html. MILA, University of Montreal, Quebec, Canada.

  5. Yoshua Bengio. Learning deep architectures for AI. Found. Trends Mach. Learn., 2(1):1–127, January 2009. ISSN 1935-8237. doi: 10.1561/2200000006. URL http://dx.doi.org/10.1561/2200000006.

  6. Yoshua Bengio. Deep learning of representations: Looking forward. CoRR, abs/1305.0445, 2013.

  7. Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell., 35(8):1798–1828, August 2013. ISSN 0162-8828. doi: 10.1109/TPAMI.2013.50. URL http://dx.doi.org/10.1109/TPAMI.2013.50.

  8. Alex Graves. Generating sequences with recurrent neural networks. CoRR, abs/1308.0850, 2013.

  9. Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. CoRR, abs/1410.5401, 2014.

  10. Yu Zhang, Guoguo Chen, Dong Yu, Kaisheng Yao, Sanjeev Khudanpur, and James R. Glass. Highway long short-term memory RNNs for distant speech recognition. CoRR, abs/1510.08983, 2015.

  11. Yu Zhang, Guoguo Chen, Dong Yu, Kaisheng Yao, Sanjeev Khudanpur, and James R. Glass. Highway long short-term memory RNNs for distant speech recognition. In ICASSP, pages 5755–5759. IEEE, 2016.

  12. ML and DL models. https://www.geeksforgeeks.org/exposing-ml-dl-models-as-rest-apis
