Exterminating Computational Limits of Machine Learning with Merits of Serverless

DOI : 10.17577/IJERTV7IS010147


Harpreet Kaur¹, Prabhpreet Kaur²

¹M.Tech Student, Department of CET, Guru Nanak Dev University, Amritsar, Punjab, India

²Assistant Professor, Department of CET, Guru Nanak Dev University, Amritsar, Punjab, India

Abstract – The trend of determining patterns using machine learning, data mining, deep learning or neural networks is limited by the computational power of machines. Sometimes the actual cost heavily exceeds the estimates, or the budget is not sufficient for enterprises to acquire the desired infrastructure. Unexpected computational resource requirements at times limit the accuracy and increase the cost of the infrastructure needed to find the right patterns. Serverless compute may give an optimal solution through the computational resources provided by the cloud. Resources are elastic and can expand and shrink according to the input data sets, resulting in lower computational costs.

Index Terms – AWS (Amazon Web Services) Lambda, Amazon EC2 (Elastic Compute Cloud), Machine Learning, Neural Networks.

  1. INTRODUCTION –

The scaling of silicon behind Moore's law has enabled fantastically powerful processing elements in the last twenty years. Despite this processing power, computing tasks which are elementary for a five-year-old child (semantic image recognition, self-learning, etc.) are still out of reach. Neuromorphic architectures incorporate key features of neurons in the brain and/or process data in a manner analogous to the human brain. Although neuromorphic computing can be classified in many ways, one divisor is whether or not the architecture emulates the neuron, or, in the case of integrated chips, standard chips versus neurochips. Neuron emulation has been extensively studied. For example, in [15], 16 analog chips simulate over a million neurons in real time with a power consumption of 1 W. The practical weak points of [15] are the many required input parameters (such as bias voltages), high sensitivity to process variation, and poor correspondence with current software applications (machine vision, machine learning, etc.).

Neuromorphic computing which does not emulate the neuron is often classified as, or closely resembles, machine learning and/or artificial neural networks. Unlike neuron emulation, this type of neuromorphic computing does not attempt to simulate individual neurons or their connections, but rather models the large-scale hierarchy of the brain [1], [2]. The aim is to automatically learn to identify complicated patterns and make intelligent decisions based on data. Although this type of computing does not necessarily require complex neuron-emulating equations or analog memory, extremely large memory size and high memory bandwidth are still required, making these algorithms inefficient when run on the Von Neumann computational model [8]. It is shown here that machine learning type neuromorphic algorithms have the potential to be realized as single-chip implementations, although some novel breakthroughs in interconnect and memory design are required to achieve area and energy efficiency. Energy efficiency can best be achieved with an ASIC implementation, but all practical realizations require programmability. Thus, subsequent research will concentrate on determining the optimal (in an energy-efficiency sense) programmable architecture. Most of this type of research has been done on vector processors, such as [21]. However, machine learning type neuromorphic algorithms pose different research problems: increased inter-node communication requirements, less mathematical computation due to the absence of non-linear equations, state machines with branches, etc. Consider, for example, the breakdown of a load-store unit of a vector processor versus the breakdown of a single processor in a NoC. The former could render the whole system inoperable, whereas the latter could easily be learned around by the system. There is also much work to be done in achieving energy efficiency by taking advantage of the algorithms' inherent noise robustness. Algorithmic simulations are required to determine how much noise can be tolerated in logic, memory, and communication. These limits can then be used in designing stochastic hardware.

  2. LITERATURE SURVEY

    1. Mining Unstructured Processes: An Exploratory Study on a Distance Learning Domain

Maita et al. [2017] discussed the use of artificial neural networks in the distance learning domain; the results showed that ANN has been neglected in process mining. The various benefits and limitations of using ANN on unstructured processes were discussed.

This paper describes a study aimed at exploring some process mining scenarios, considering the main purpose of applying ANN (Artificial Neural Network) as a technique representative of both the CI/ML fields. The study was carried out based on an unstructured process related to distance learning supported by a Learning Management System (LMS). In particular, the authors address some possible benefits and limitations in mining unstructured processes.

Business process and data mining combined establish the process mining field [18], which applies data mining tasks on process data. This new field aims at extracting knowledge about process execution events, seeking to improve business processes and to discover associations between variables and behavior or misbehavior patterns [18]. Although process mining was introduced in the 1990s and has been considerably improved in recent years, there are some types of processes for which no satisfactory mining technique has been proposed. An example is unstructured processes, i.e., loosely defined processes, also known as spaghetti processes [18].

In their attempts, either Weka presented a lack of computing memory or, although the tool indicated the end of training, the results were in fact not presented. Exploratory work directly on the ANN implementation provided by the tool, manipulating its variables directly, as well as the use of a more robust test environment, could lead to results even for datasets that are very large in terms of attributes, for which the training complexity is very high [10].

    2. Learning How to Communicate in the Internet of Things: Finite Resources and Heterogeneity

Park et al. [2016] provided an overview of the use of learning techniques within the scope of the IoT. The paper discussed the various types of learning frameworks that are able to overcome a number of IoT challenges, and for each learning technique the main challenges, applications and results were shown. To handle the problem of heterogeneity, cognitive hierarchy theory and its applications were used in the IoT.

For a smooth and continuous deployment of the Internet of Things (IoT), there is a need for self-organizing methods to overcome IoT challenges that include data processing, resource management, coexistence with existing wireless networks, and support for IoT-wide event detection. Developing learning mechanisms for the IoT requires coping with unique IoT properties in terms of resource constraints, heterogeneity, and strict quality-of-service requirements. In this paper, a number of learning frameworks suitable for IoT applications are presented. In particular, the advantages, limitations, IoT applications, and key results pertaining to machine learning, sequential learning, and reinforcement learning are studied [12].

The major limitation of applying ML (Machine Learning) in IoT scenarios is that ML requires an extensive data set, such as the information of sensor locations and the corresponding sensor measurements [3], for good performance. Such a data set needs to be quickly processed for the IoT devices to learn the environment, but resource-constrained IoT devices may not be able to store and process the data set given that they have limited resources in terms of computation and memory [5].

Learning Framework | Advantage | Disadvantage
Machine Learning | 1. Diversity of available techniques. 2. Useful in processing the massive data collected in IoT to reduce resource consumption. | 1. Implementation typically centralized. 2. Need for significant computational capabilities. 3. Need for extensive training data.

3. An Android Malware Detection Approach using Community Structures of Weighted Function Call Graphs

Wang et al. [2017] presented a new malware detection method that divides a function call graph into community structures and uses these structures to detect malware. By using machine learning classification, the overall computation time is reduced. The results show that the detection accuracy of this method is higher than that of other antivirus software.

In this paper, a new malware detection method based on the analysis of community structures of function call graphs is proposed. First, the method applies reverse engineering to obtain the function call graph of an Android application. Second, the function call graph is weighted by sensitive permissions and application programming interfaces (APIs). Third, the community structures are extracted from the weighted function call graph. Finally, a machine learning classification process is used to detect malware. In an evaluation of 13,790 Android applications, the method achieves 96.5% accuracy in detecting unknown malware [20]. Though the method achieves good malware detection performance, the experimental results indicate that its run-time performance could be improved for large function call graphs with more computational resources. In future work, more efficient computing machines should be designed for community structure generation and feature extraction.
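To make the pipeline concrete, the sketch below is an illustrative reconstruction rather than Wang et al.'s exact implementation: it builds a call graph weighted by an assumed set of sensitive APIs, extracts community structures with networkx, and feeds simple community-level features (counts and sizes, all assumptions) to a standard classifier.

```python
# Illustrative sketch (not the authors' exact pipeline): weight a function call
# graph by sensitive-API usage, extract community structures, and classify apps
# from simple community-level features. Feature choices here are assumptions.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.ensemble import RandomForestClassifier

SENSITIVE_APIS = {"sendTextMessage", "getDeviceId", "openConnection"}  # assumed examples

def weighted_call_graph(edges):
    """edges: iterable of (caller, callee); edges touching sensitive APIs get a higher weight."""
    g = nx.Graph()
    for caller, callee in edges:
        w = 2.0 if callee in SENSITIVE_APIS else 1.0
        g.add_edge(caller, callee, weight=w)
    return g

def community_features(g):
    """Summarise the community structure of one application's call graph."""
    comms = list(greedy_modularity_communities(g, weight="weight"))
    sizes = [len(c) for c in comms]
    sensitive = sum(1 for c in comms if any(n in SENSITIVE_APIS for n in c))
    return [len(comms), max(sizes), sum(sizes) / len(sizes), sensitive]

# Training: each row is a per-app feature vector, y marks known malware (1) or benign (0).
apps = [
    ([("a", "b"), ("b", "sendTextMessage"), ("a", "getDeviceId")], 1),
    ([("main", "util"), ("util", "log"), ("main", "log")], 0),
]
X = [community_features(weighted_call_graph(edges)) for edges, _ in apps]
y = [label for _, label in apps]
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict(X))
```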

4. Predicting of Job Failure in Compute Cloud Based on Online Extreme Learning Machine: A Comparative Study

Liu et al. [2017] proposed a method based on Online Sequential Extreme Learning Machine (OS-ELM) to predict online job termination status. The method has fast learning speed and good generalization.

The existing machine learning-based methods use an offline working pattern, which cannot be used for online prediction where data arrive sequentially. To solve this problem, a new method based on Online Sequential Extreme Learning Machine (OS-ELM) is used to predict online job termination status. With this method, real-time data are collected in the order in which jobs arrive, the job status is predicted, and the operational model is then updated based on these data. The method, with its online incremental learning strategy, has fast learning speed and good generalization [9].

The method achieves good time and accuracy performance: the OS-ELM model is updated within 0.01 s and predicts job termination with an accuracy rate of 93%. It can reduce the storage space overhead by intelligently identifying job failure, and significantly reduce resource waste in the cloud [9].
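A minimal sketch of the online-update idea behind OS-ELM is shown below, assuming a sigmoid hidden layer and hypothetical hyperparameters; it is not the authors' implementation, but it illustrates why each arriving data chunk can update the model quickly instead of retraining from scratch.

```python
# Minimal OS-ELM sketch (assumed hyperparameters): an initial batch fits the output
# weights, then each arriving chunk updates them recursively, which is what gives
# the fast online learning speed described above.
import numpy as np

class OSELM:
    def __init__(self, n_inputs, n_hidden=20, rng=np.random.default_rng(0)):
        self.W = rng.standard_normal((n_inputs, n_hidden))   # random input weights (fixed)
        self.b = rng.standard_normal(n_hidden)                # random biases (fixed)
        self.P = None                                         # running inverse of H^T H
        self.beta = None                                      # output weights

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid activations

    def init_fit(self, X0, T0):
        H = self._hidden(X0)
        self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))  # ridge term for stability
        self.beta = self.P @ H.T @ T0

    def partial_fit(self, X, T):
        H = self._hidden(X)
        K = np.linalg.inv(np.eye(H.shape[0]) + H @ self.P @ H.T)
        self.P = self.P - self.P @ H.T @ K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy usage: predict a 0/1 "job failed" label from two resource metrics.
rng = np.random.default_rng(1)
X0 = rng.random((50, 2)); T0 = (X0.sum(axis=1) > 1.0).astype(float).reshape(-1, 1)
model = OSELM(n_inputs=2)
model.init_fit(X0, T0)
Xk = rng.random((10, 2)); Tk = (Xk.sum(axis=1) > 1.0).astype(float).reshape(-1, 1)
model.partial_fit(Xk, Tk)                      # online update as new jobs arrive
print((model.predict(Xk) > 0.5).astype(int).ravel())
```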

    5. Machine Learning Based DDoS Attack Detection From Source Side in Cloud

He et al. [2017] proposed a DoS attack detection system based on machine learning techniques in the cloud.

In this paper, machine learning techniques are used to detect DoS (Denial of Service) attacks. Four types of DoS attacks are considered, and the system achieves accuracy of up to 99.7%. Attackers do not use their own physical machines; instead, they rent many virtual machines and use them as bots to attack the outside world. The approach does not degrade performance and can be easily extended to broader DoS attacks.

In future work, different machine learning algorithms could be combined for better performance, more DoS attacks investigated, and their features integrated into the current system [17].
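The sketch below illustrates the general source-side detection idea with a toy feature set (packet rate, burst size, destination-port entropy); these features are assumptions for illustration rather than the features used in [17].

```python
# Hedged sketch of source-side DoS detection: per-window statistics of a VM's
# outbound traffic feed a standard classifier. The features are assumed examples.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def window_features(pkts_per_sec, dst_ports):
    """Summarise one observation window of a VM's outbound traffic."""
    rate = np.mean(pkts_per_sec)
    burst = np.max(pkts_per_sec)
    counts = np.bincount(dst_ports)
    p = counts[counts > 0] / counts.sum()
    port_entropy = -np.sum(p * np.log2(p))        # low entropy ~ many packets to one target
    return [rate, burst, port_entropy]

# Toy training data: benign windows (0) vs attack windows (1).
X = [window_features([5, 7, 6], [80, 443, 22, 80]),
     window_features([900, 950, 980], [80, 80, 80, 80])]
y = [0, 1]
model = make_pipeline(StandardScaler(), SVC()).fit(X, y)
print(model.predict([window_features([870, 930, 1000], [80, 80, 80, 80])]))
```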

    6. Infrastructure Cost Comparison of Running Web Applications in the Cloud using AWS Lambda and Monolithic and Microservice Architectures

Villamizar et al. [2016] compared the infrastructure cost of running web applications in the cloud using AWS Lambda, monolithic and microservice architectures, and showed that microservices reduce infrastructure cost compared to a monolithic architecture. The paper described the application deployments on the AWS infrastructure and also presented performance tests of each architecture in order to compare their infrastructure costs.

This paper compares the cost of web applications developed and deployed with a monolithic architecture, a microservice architecture operated by the cloud customer, and a microservice architecture operated by the cloud provider. Results show that microservices can help reduce infrastructure costs in comparison to standard monolithic architectures. Moreover, the use of services specifically designed to deploy and scale microservices reduces infrastructure costs by 70% or more [19].

The results show that microservices can reduce infrastructure costs by up to 13.42% and AWS Lambda can reduce infrastructure costs by up to 77.08%. For the three architectures, the cost per million requests (CMR) is calculated by dividing the monthly infrastructure cost by the number of requests supported per month. The microservice architecture reduces up to 9.50% of cost, and AWS (Amazon Web Services) Lambda helps to reduce costs by up to 50.43%.

The future work is to evaluate the costs incurred by companies during the process of implementing microservices. The process of defining the number of microservices to implement, and the tools required to automate the deployment of microservices in general, will also be evaluated [19].
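The CMR metric itself is simple to compute; the snippet below shows the calculation with placeholder cost and request figures, not values from the study.

```python
# Cost per million requests (CMR) as described above: monthly infrastructure cost
# divided by the number of millions of requests supported per month. The figures
# below are placeholders, not values from the cited paper.
def cost_per_million_requests(monthly_cost_usd, requests_per_month):
    return monthly_cost_usd / (requests_per_month / 1_000_000)

architectures = {                      # hypothetical monthly cost / supported requests
    "monolithic": (1000.0, 40_000_000),
    "microservices (customer-operated)": (900.0, 40_000_000),
    "AWS Lambda (provider-operated)": (500.0, 40_000_000),
}
for name, (cost, reqs) in architectures.items():
    print(f"{name}: CMR = ${cost_per_million_requests(cost, reqs):.2f}")
```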

    7. Lambda Architecture for Cost-effective Batch and Speed Big Data processing

Kiran et al. [2015] presented an overview of the lambda architecture and how it is used with Apache Storm and Hadoop. They also showed online data collection and processing for multiple router sensors sending data at a constant interval of 30 seconds.

The paper implemented the lambda architecture design on Amazon EC2 (Elastic Compute Cloud), providing higher throughput and minimizing network cost. The authors describe how to optimize the cost of data processing by examining which parts of the data need online or batch processing.

For batch processing implemented on Amazon AWS, the approach reduces cost and finds code errors, although the Amazon cloud charges for a full hour even if the cluster only ran for 2 minutes. For online processing, the Kinesis stream reads data in last-in-first-out order, and the Amazon cloud allocates data to multiple members of a team. The architecture is also capable of handling big data processing problems.

In this paper, a network sensor is used, i.e., the ESnet router production network, to collect router in and out bytes every 30 seconds. The lambda architecture on Amazon AWS was able to provide a proof of concept for processing data as it arrives. The solution allows users to perform cost-optimised processing. Both processes can produce data aggregations that are easier to plot and reduce the time for data processing through the data visualization interface [7].
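The following is a minimal, in-memory sketch of the lambda architecture principle described above (batch layer over the full master dataset, speed layer for recent arrivals, queries merging both views); it is a conceptual illustration rather than the Storm/Hadoop/Kinesis deployment used in [7].

```python
# Conceptual lambda architecture sketch: a batch view is recomputed periodically
# over all history, a speed view keeps running totals for records that arrived
# since the last batch run, and queries merge the two.
from collections import defaultdict

class LambdaArchitecture:
    def __init__(self):
        self.master = []                               # immutable master dataset
        self.batch_view = defaultdict(int)             # recomputed from self.master
        self.speed_view = defaultdict(int)             # incremental, recent data only

    def ingest(self, router, in_bytes):
        self.master.append((router, in_bytes))         # batch layer stores everything
        self.speed_view[router] += in_bytes            # speed layer updates immediately

    def run_batch(self):
        self.batch_view = defaultdict(int)
        for router, in_bytes in self.master:
            self.batch_view[router] += in_bytes
        self.speed_view.clear()                        # recent data is now in the batch view

    def query(self, router):
        return self.batch_view[router] + self.speed_view[router]

la = LambdaArchitecture()
la.ingest("esnet-r1", 1200); la.ingest("esnet-r1", 800)
la.run_batch()
la.ingest("esnet-r1", 500)                             # arrives after the batch run
print(la.query("esnet-r1"))                            # 2500
```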

    8. Be Wary of the Economics of Serverless Cloud Computing

Weinman [2017] defined what serverless is and how AWS uses Lambda for serverless computing. The paper also discussed the total price of a serverless function based on transactions and execution time. AWS Lambda provides the first 1 million hits and 400,000 GB-seconds per month free. The number of hits per second determines both the VM infrastructure cost and the serverless cost: more hits per second means paying more for serverless, while a function that executes fast means paying less for serverless.

This paper shows that for 150 hits per second, the VM infrastructure cost is $200/month and the serverless cost is $167/month. Although serverless might be 3x the cost of on-demand compute, it might save DevOps cost in setting up autoscaling, managing security patches, and debugging issues with load balancers at scale [22].
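A back-of-the-envelope model of this trade-off is sketched below; the per-request and per-GB-second rates, the free tier, and the function memory and duration are assumed, representative figures rather than the exact parameters behind Weinman's numbers.

```python
# Rough model of the serverless vs. VM trade-off discussed above. Pricing constants
# and the workload parameters below are assumptions for illustration.
REQ_PRICE_PER_MILLION = 0.20        # USD per 1M requests (assumed)
GBSEC_PRICE = 0.0000166667          # USD per GB-second (assumed)
FREE_REQUESTS = 1_000_000           # free requests per month
FREE_GBSEC = 400_000                # free GB-seconds per month

def lambda_monthly_cost(hits_per_sec, duration_sec, memory_gb):
    requests = hits_per_sec * 86_400 * 30
    gb_seconds = requests * duration_sec * memory_gb
    req_cost = max(requests - FREE_REQUESTS, 0) / 1_000_000 * REQ_PRICE_PER_MILLION
    compute_cost = max(gb_seconds - FREE_GBSEC, 0) * GBSEC_PRICE
    return req_cost + compute_cost

vm_cost = 200.0                                   # VM figure quoted above
for duration in (0.05, 0.1, 0.2):                 # faster functions pay less, as noted above
    cost = lambda_monthly_cost(150, duration, 0.128)
    print(f"{duration*1000:.0f} ms per hit -> serverless ${cost:,.0f}/month vs VM ${vm_cost:,.0f}/month")
```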

    9. Infra: SLO Aware Elastic Auto Scaling in the Cloud for Cost Reduction

Sidhanta and Mukhopadhyay [2016] presented Infra, an SLO (Service Level Objective) aware elastic auto scaling approach in the cloud for cost reduction. They provide a tool that applies an existing machine learning technique to predict the minimum cluster composition.

The predicted cluster composition is the minimum required to run on the cloud under an SLO deadline. The degree of confidence 100 · (1 − δ)% specifies that the probability of failing to achieve the error threshold is at most δ. The accuracy of prediction of the minimum cluster composition ranges from 93.1% to 97.99% with a 98% degree of confidence [14]. To adapt to changing application workloads, Infra automatically adjusts the cloud infrastructure and, using cloud provisioning tools, scales the cluster up and down. The paper also notes that state-of-the-art auto scaling tools require manual specification of scaling rules and support scaling with only fixed instance types. The minimal cluster compositions predicted by Infra satisfy an error threshold of 0.1 with a confidence parameter δ = 0.02 in approximating the minimum cluster composition [14].
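Restating the guarantee in symbols (the error threshold is written here as ε, an assumed label, and δ is the allowed failure probability):

```latex
% Probabilistic guarantee behind Infra's predictions, restated from the text above.
\[
  \Pr\!\big[\, \mathrm{error}(\hat{C}) > \varepsilon \,\big] \le \delta,
  \qquad \text{degree of confidence} = 100\,(1-\delta)\%.
\]
\[
  \text{Reported setting: } \varepsilon = 0.1,\; \delta = 0.02
  \;\Rightarrow\; 100\,(1-0.02)\% = 98\% \text{ confidence.}
\]
```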

    10. Hardware for Machine Learning: Challenges and Opportunities

Sze et al. [2017] discussed how machine learning has various applications and creates opportunities for hardware design, but noted that during the design it is important to balance accuracy, energy, throughput and cost requirements. Machine learning also transforms the input data into a higher-dimensional space using programmable weights, which increases data movement and energy consumption.

This paper also discussed the various challenges of hardware design for the architecture, hardware-friendly algorithms, mixed-signal circuits and advanced technologies. A DNN (Deep Neural Network) delivers higher accuracy by mapping input data into a higher-dimensional space, which increases complexity.

This paper also compared the energy and accuracy of hand-crafted approaches and DNNs, where the hand-crafted approaches give higher energy efficiency compared to DNNs.

11. Predictive Analytics for Banking User Data using AWS Machine Learning Cloud Service

Ramesh [2017] developed a machine learning model to perform predictive analytics on a banking dataset. The dataset is used to create a binary classification model using the Amazon Web Services (AWS) Machine Learning platform: 70% of the data is used to train the binary classification model and 30% of the dataset is used to test the model [13]. The model is tested using two features of AWS Machine Learning: first, real-time prediction, in which real-time input data is given, and second, batch prediction, in which a set of customer data is used.

Amazon Machine Learning uses powerful algorithms to create machine learning (ML) models by finding patterns in existing data. The service then uses these models to process new data and generate predictions for the application [13]. Several steps are followed in AWS Machine Learning to create a model.
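As a rough local stand-in for this workflow (not the AWS Machine Learning service API), the sketch below performs the 70/30 split, trains a binary classifier, and thresholds the prediction score to 0 or 1; the dataset columns are hypothetical.

```python
# Local stand-in (scikit-learn) for the workflow described above, not the AWS ML
# service itself: 70% of the banking records train a binary classifier, 30% test
# it, and the prediction score is thresholded to 0/1. Column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.DataFrame({                       # toy stand-in for the banking dataset
    "age":      [25, 47, 35, 52, 23, 44, 31, 58, 39, 29],
    "balance":  [200, 5400, 800, 7600, 50, 3100, 950, 8800, 1200, 300],
    "contacts": [1, 3, 2, 4, 1, 2, 1, 5, 2, 1],
    "subscribed": [0, 1, 0, 1, 0, 1, 0, 1, 0, 0],   # target: term deposit yes/no
})
X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="subscribed"), df["subscribed"],
    test_size=0.30, random_state=0, stratify=df["subscribed"])

model = LogisticRegression().fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]           # prediction score per customer
threshold = 0.5                                       # score above threshold -> 1 (yes)
print((scores > threshold).astype(int).tolist(), y_test.tolist())
```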

12. Study on a migration scheme by fuzzy-logic-based learning and decision approach for QoS in cloud computing

Son and Huh [2017] presented QoS metrics of migration and proposed a migration scheme based on fuzzy logic. They determine whether the VMs should wait or should be migrated to other active PMs for continuous service.

The paper also describes how the proposed scheme determines which VMs need to migrate through a fuzzy system and then selects the target machine by considering QoS metrics. The goal of migration is to maximize efficiency and QoS by choosing the target machine through the presented scheme, and the proposed scheme was verified to achieve much better performance and reliability [6]. The major goal of this paper was to apply fuzzy logic and machine learning techniques for advanced migration. Future work is to improve the efficiency of the live migration mechanism for large-scale services and a variety of datacenter environments.
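A hedged sketch of how such a fuzzy migrate-or-wait decision could look is given below; the membership functions, inputs and rule are assumptions for illustration, not the scheme of [6].

```python
# Hedged sketch of a fuzzy-logic migration decision (assumed membership functions
# and rule): given a VM's CPU load and the candidate target's free capacity,
# output a migrate/wait score.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def migration_score(cpu_load, target_free):
    """Fuzzy rule: IF load is high AND target capacity is high THEN migrate."""
    load_high = tri(cpu_load, 0.5, 1.0, 1.5)        # membership degrees in [0, 1]
    target_ok = tri(target_free, 0.4, 1.0, 1.6)
    return min(load_high, target_ok)                 # AND taken as minimum

for vm, load, free in [("vm1", 0.9, 0.8), ("vm2", 0.3, 0.9), ("vm3", 0.95, 0.2)]:
    score = migration_score(load, free)
    print(vm, "migrate" if score > 0.5 else "wait", round(score, 2))
```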

13. Cloud Computing Threats Classification Model Based on the Detection Feasibility of Machine Learning Algorithms

Masetic et al. [2017] proposed that the key to successful threat detection in cloud computing infrastructures is the application of machine learning algorithms. The paper describes a classification along three criteria: type of learning algorithm, input features, and cloud computing level. The first criterion considers two types of learning, supervised and unsupervised; the second criterion considers system performance data and network traffic data, which are input to both types of learning; and the last considers network-specific and cloud-environment-specific threats, which are described by system performance and network traffic data [11].

    14. Machine learning capabilities, limitations and implications

Armstrong [2015] discussed the various capabilities and limitations of machine learning algorithms. A model of the relationship between inputs and outputs is improved by testing its predictions and correcting it when wrong. Machine learning is a set of computerized techniques for recognizing patterns in data.

    Machine learning algorithms are an integral part of driverless cars and will have an increasingly important role in their operational ability. These learning systems are broadly used for tasks such as image recognition or scheduling but learning in noisy real world environments is difficult. Machine learning techniques will be instrumental in providing personalized medicine and finding value in the emerging large medical and personal data sets from genomics and consumer health technologies[4].

3. COMPARISON

TABLE II: ANALYSIS OF TECHNIQUES WITH THEIR OUTCOMES

Author | Index, Year | Dataset | Technique | Purpose | Strength | Limits
Harry Armstrong | Nesta, July 2015 | Games, driverless cars, medicine and healthcare | – | Discusses the capabilities, limitations and implications of machine learning | Useful in processing the massive data collected in IoT to reduce resource consumption | Need for significant computational capabilities; no security
Mariam Kiran, Peter Murphy, Sartaj Singh Baveja | IEEE, Dec 2015 | ESnet network data | Lambda architecture | To implement the lambda architecture on Amazon EC2, with results presented on ESnet network data | DynamoDB costs $0.02 per 100,000 transactions with a maximum storage cost of $0.09; S3 costs $0.005 per 1,000 requests with a storage cost of $0.03 per GB | Loss of data and security
Mario Villamizar, Oscar Garces, Lina Ochoa | IEEE, July 2016 | AWS Lambda, monolithic and microservice | – | Cost comparison of a web application developed and deployed with three different approaches: 1) a monolithic architecture, 2) a microservice architecture operated by the cloud customer, and 3) a microservice architecture operated by the cloud provider | Microservices can reduce infrastructure costs compared to standard monolithic architectures; microservices reduce up to 13.42% of infrastructure costs and AWS Lambda reduces infrastructure costs by up to 77.08% | To evaluate the costs incurred by companies during the process of implementing microservices
Subhajit Sidhanta, Supratik Mukhopadhyay | IEEE, 06 Oct 2016 | Auto scaling | Infra | To predict the minimum cluster composition required to run on a cloud under an SLO | The accuracy of prediction of the minimum cluster composition ranges from 93.1% to 97.99% with a 98% degree of confidence | State-of-the-art auto scaling tools require manual specification of scaling rules and support scaling with only fixed instance types
Taehyeun Park, Nof Abuzainab, Walid Saad | IEEE, 8 Nov 2016 | Cognitive hierarchy theory | Internet of Things (IoT) | Self-organizing solutions for IoT challenges; implementing innovative learning frameworks enables the IoT devices to operate autonomously in a dynamic environment | Determines which learning frameworks are most suitable for learning and the available resources of the IoT devices in the level of cognitive hierarchy | ML in IoT devices may not be able to store and process the data set because they have limited resources in terms of computation and memory
Joe Weinman | IEEE, April 2017 | AWS | – | Describes what serverless is and how AWS uses Lambda | AWS Lambda offers the first 1 million hits and 400,000 GB-seconds per month free; for 150 hits per second, VM infrastructure cost is $200/month and serverless cost is $167/month | Problems due to third-party API system
Chunhong Liu, Jingjing Han, Yanlei Shang | IEEE, May 2017 | Support Vector Machine (SVM), Extreme Learning Machine (ELM) and Online Sequential Support Vector Machine (OS-SVM) | Online Sequential Extreme Learning Machine (OS-ELM) | To solve the problem of the offline working pattern, OS-ELM is used to predict online job termination status | The OS-ELM model is updated within 0.01 s and predicts job termination with an accuracy rate of 93% | Reduces the storage space overhead by identifying job failure and reduces resource waste in the cloud
Z. Masetic, K. Hajdarevic, N. Dogru | IEEE, 26 May 2017 | Cloud threat detection | Machine learning | Cloud computing services became a common target of cyber-attacks, so cloud computing vendors and providers need to implement strong information security protection mechanisms on their cloud infrastructures | 99.4% detection | Accuracy
Ana R. C. Maita, Marcelo Fantinato, Sarajane M. Peres | IEEE, 3 July 2017 | Id, time and activity | ANN (Artificial Neural Network) | ANN technique on an unstructured process related to distance learning supported by a Learning Management System (LMS) | Training complexity is very high | Weka presented a lack of computing memory
Ranjith Ramesh | IEEE, 11 July 2017 | Bank | AWS Machine Learning | To develop a machine learning model to perform predictive analytics on the banking dataset | The prediction score is compared to the threshold and converted to a numerical value 0 or 1: if the predicted value is above the threshold, the numerical value is 1 (yes), otherwise 0 (no) | Only 9% of predicted data is wrong
Zecheng He, Tianwei Zhang, Ruby B. Lee | IEEE, 24 July 2017 | Internet Control Message Protocol (ICMP) | Machine learning techniques | To detect DoS (Denial of Service) attacks, machine learning techniques are used; the system takes information from both the cloud server's hypervisor and the virtual machines to prevent network packages from being sent out to the outside network | DoS attack detection achieves accuracy up to 99.7% | Does not degrade performance and can be easily extended to broader DoS attacks
Vivienne Sze, Yu-Hsin Chen, Joel Emer | IEEE, 27 July 2017 | ImageNet | Machine learning | During the design process, it is important to balance the accuracy, energy, throughput and cost requirements | By reducing the cost of memory access, it helps in more energy-efficient data flows | –
A-young Son, Eui-Nam Huh | IEEE, 27 July 2017 | Migration | Fuzzy logic | Apply fuzzy logic and machine learning techniques for advanced migration | Decrease in migration time of about 9.5% | Improve the efficiency in large-scale service
Yao Du, Junfeng Wang | Springer, 2017 | Android applications and malware samples | Malware detection | New malware detection method based on the analysis of community structures of function call graphs; a machine learning process is used to detect malware | For 13,790 Android applications, the method achieves 96.5% accuracy in detecting unknown malware | The rapid growth of new malware families presents another challenge

  4. GAPS

    1. ML (Machine Learning) in IoT devices may not be able to store and process the data set because the devices have limited resources in terms of computation and memory.

    2. Online Sequential Extreme Learning Machine (OS-ELM) reduces the storage space overhead by identifying job failure and reduces resource waste in the cloud.

    3. The rapid growth of new malware families presents another challenge.

    4. The source-side DoS (Denial of Service) detection approach does not degrade performance and can be easily extended to broader DoS attacks.

    5. By reducing the cost of memory access, machine learning hardware achieves more energy-efficient data flows.

  5. CONCLUSION –

An analysis of the various machine learning techniques was carried out with respect to their computational perspectives. Among the various limitations, the lack of adequate computational resources in the machines was preventing machine learning algorithms from flourishing and performing efficiently. In some cases a large dataset became the limiting factor, while in others small datasets in need of larger computations became the bottleneck.

The advent of cloud computing removed the need to buy expensive physical machines for data analysis, but the cloud infrastructure requires dedicated expertise and regular observation of the virtual machines.

Serverless compute, on the other hand, has become a boon for data scientists in terms of computational power. Lambda functions in AWS automatically expand and shrink according to the needs of the data analysis computations, the datasets and the algorithms. Resource pooling is dynamically leveraged from the cloud without any special expertise, and the results obtained are expected to be more accurate. The cost of analysis for time-specific patterns (yearly/quarterly) will be reduced significantly.

More accurate, quick, varied and cost-effective results from structured or unstructured datasets are expected from the combination of serverless and machine learning techniques.

REFERENCES

  1. Hierarchical temporal memory including HTM cortical learning algorithms. URL http://www.numenta.com/education.php.

  2. Alchemy: open source AI. URL http://alchemy.cs.washington.edu/.

  3. M. A. Alsheikh, S. Lin, D. Niyato, and H. P. Tan. Machine learning in wireless sensor networks: Algorithms, strategies, and applications. IEEE, 16(2):1996-2018, 2014. doi: 10.1109/COMST.2014.2320099.

  4. H. Armstrong. Machine learning capabilities, limitations and implications. Nesta, 2015. URL www.nesta.org.uk.

  5. A. Forster. Machine learning techniques applied to wireless ad-hoc networks: Guide and survey. In Proc. of the 3rd International Conference on Intelligent Sensors, Sensor Networks and Information, page 213, 2007. IEEE. doi: 10.1109/ISSNIP.2007.4496871.

  6. E.-N. Huh and A.-Y. Son. Study on a migration scheme by fuzzy-logic-based learning and decision approach for QoS in cloud computing. IEEE Access, pages 507-512, 2017. doi: 10.1109/ICUFN.2017.7993836.

  7. M. Kiran, P. Murphy, and S. S. Baveja. Lambda architecture for cost-effective batch and speed big data processing. IEEE Access, 2015. doi: 10.1109/BigData.2015.7364082.

  8. L. Koskinen and E. Roverato. Hardware requirements of communication-centric machine learning algorithms. In NASA/ESA Conference on Adaptive Hardware and Systems (AHS-2013), 2013. doi: 10.1109/AHS.2013.6604237.

  9. C. Liu, J. Han, and Y. Shang. Predicting of job failure in compute cloud based on online extreme learning machine: A comparative study. IEEE Access, 2017. doi: 10.1109/ACCESS.2017.2706740.

  10. A. R. C. Maita, M. Fantinato, and S. M. Peres. Mining unstructured processes: An exploratory study on a distance learning domain. In International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, pages 201-213. IEEE, 2017. doi: 10.1109/IJCNN.2017.7966261.

  11. Z. Masetic, K. Hajdarevic, and N. Dogru. Cloud computing threats classification model based on the detection feasibility of machine learning algorithms. IEEE Access, pages 1314-1318, 2017. doi: 10.23919/MIPRO.2017.7973626.

  12. T. Park, N. Abuzainab, and W. Saad. Learning how to communicate in the Internet of Things: Finite resources and heterogeneity. IEEE Access, 4:7063-7073, 2016. doi: 10.1109/ACCESS.2016.2615643.

  13. R. Ramesh. Predictive analytics for banking user data using AWS machine learning cloud service. IEEE Access, pages 210-215, 2017. doi: 10.1109/ICCCT2.2017.7972282.

  14. S. Sidhanta and S. Mukhopadhyay. Infra: SLO aware elastic auto scaling in the cloud for cost reduction. IEEE Access, pages 141-148, 2016. doi: 10.1109/BigDataCongress.2016.25.

  15. D. Sridharan, B. Percival, J. Arthur, and K. Boahen. An in silico neural model of dynamic routing through neuronal coherence. In Proceedings of the Twenty-First Annual Conference on Neural Information Processing Systems, 2007.

  16. V. Sze, Y.-H. Chen, J. Emer, A. Suleiman, and Z. Zhang. Hardware for machine learning: Challenges and opportunities. IEEE Access, pages 1-8, 2017. doi: 10.1109/CICC.2017.7993626.

  17. Z. He, T. Zhang, and R. B. Lee. Machine learning based DDoS attack detection from source side in cloud. IEEE Access, 2017. doi: 10.1109/CSCloud.2017.58.

  18. W. van der Aalst. Process Mining: Discovery, Conformance and Enhancement of Business Processes. Springer, 2011.

  19. M. Villamizar, O. Garces, and L. Ochoa. Infrastructure cost comparison of running web applications in the cloud using AWS Lambda and monolithic and microservice architectures. IEEE Access, 2016. doi: 10.1109/CCGrid.2016.37.

  20. J. Wang, Y. Du, and Q. Li. An Android malware detection approach using community structures of weighted function call graphs. Springer, 14, 2017. doi: 10.1109/ACCESS.2017.2720160.

  21. J. Wawrzynek, K. Asanovic, B. Kingsbury, J. Beck, D. Johnson, and N. Morgan. Spert-II: A vector microprocessor system. IEEE Computer, 29:79-86, 1996. doi: 10.1109/2.485896.

  22. J. Weinman. Be wary of the economics of serverless cloud computing. IEEE Access, 4:6-12, 2017. doi: 10.1109/MCC.2017.32.
