Location Based Minimum Migration Implementation in Cloud Environment


Engineering Research Journal, June 2019

Ms. K. Sathya[1], Dr. S. Rajalakshmi[2]

PG Scholar[1], Associate Professor[2], Department of CSE, Jay Shriram Group of Institutions, Tirupur

Abstract- The proposed method, called Location Based Minimum Migration, monitors all the virtual machines across all cloud location centres. It identifies the state of each virtual machine, whether sleeping, idle, or running (underloaded or overloaded), and also checks the number of tasks it is running and the number of tasks it can still run. A location-based VM selection policy (algorithm) is applied to carry out the selection process. This paper focuses on analyzing the requirements for the resources and the number of resources. Resource assumption takes place, meaning the number of resources and their configuration (processor, RAM, storage, capacity, etc.) must be specified. In task analysis, the task from the user is analyzed and forwarded to the allocation process, and resources are allocated to the user's jobs. Model analysis then determines which tasks are to be migrated, and tasks are migrated based on that analysis.

In this paper:

  1. Tasks are migrated from virtual machines where few tasks are running, since such machines consume extra energy and power.

  2. Tasks are migrated from virtual machines where many tasks are running, since such machines add overhead to system performance.

  3. The smallest number of migrations is selected, to avoid extra processing cost for the migration process.

  4. Virtual machine tasks are migrated to reduce the user's provisioning cost by finishing the user's tasks within their deadlines.

Index Terms— LBMMC, Migration, Virtual Machine, Resource Allocation

I. INTRODUCTION

Cloud computing has been envisioned as the next-generation information technology (IT) architecture for enterprises, due to its long list of advantages unprecedented in the history of IT: on-demand self-service, ubiquitous network access, location-independent resource pooling, rapid resource elasticity, usage-based pricing, and transference of risk. As a disruptive technology with profound implications, cloud computing is transforming the very nature of how businesses use information technology. One fundamental aspect of this paradigm shift is that data are being centralized or outsourced to the cloud. From the users' perspective, including both individuals and IT enterprises, storing data remotely in the cloud in a flexible, on-demand manner brings appealing benefits: relief from the burden of storage management, universal data access with location independence, and avoidance of capital expenditure on hardware, software, and personnel maintenance.

Fig.1 Architecture of Cloud Computing

The primary benefit of moving to Clouds is application scalability. Unlike Grids, the scalability of Cloud resources allows real-time provisioning of resources to meet application requirements. Cloud services such as compute, storage, and bandwidth resources are available at substantially lower costs. Usually tasks are scheduled according to user requirements. New scheduling strategies need to be proposed to overcome the problems posed by network properties between users and resources. Such strategies may merge conventional scheduling concepts with network-aware strategies to provide better and more efficient job scheduling. Cloud applications often require very complex execution environments, which are difficult to create on grid resources.

Virtual machines allow the application developer to create a fully customized, portable execution environment configured specifically for their application. Traditional approaches to scheduling in cloud computing tended to use the direct tasks of users as the overhead application base. The problem is that there may be no relationship between the overhead application base and the way different tasks incur overhead costs on resources in cloud systems. For a large number of simple tasks this increases the cost, while the cost decreases for a small number of complex tasks.

Previous research on the execution of scientific workflows in Clouds either tries to minimize the workflow execution time, ignoring deadlines and budgets, or focuses on minimizing cost while trying to meet the application deadline. Such work implements only limited contingency strategies to correct delays caused by underestimation of task execution times or by fluctuations in the delivered performance of leased public Cloud resources.

To mitigate the effects of resource performance variation on the soft deadlines of workflow applications, the EIPR algorithm was proposed, which uses the idle time of provisioned resources and budget surplus to replicate tasks. Typical Cloud environments do not deliver consistent performance in terms of execution and data transfer times. Fluctuations in the performance of Cloud resources delay task execution, which in turn delays those tasks' successors. If a delayed task is part of the critical path of the workflow, it delays the workflow's completion time and may cause its deadline to be missed. Workflow execution is also subject to delays if one or more of the virtual machines fail during task execution. However, typical Cloud infrastructures offer availability above 99.9 percent; therefore, performance degradation is a more serious concern than resource failures in such environments.

To address the limitations of previous research, the proposed method, called Location Based Minimum Migration, monitors all the virtual machines across all cloud location centres.

II. RELATED WORK

    Shane Canon and Shreyas Cholia et al. noted that cloud computing has seen tremendous growth, particularly for commercial web applications. The on-demand, pay-as-you-go model creates a flexible and cost-effective means to access compute resources.

    Yu-Kwong Kwok and Ishfaq Ahmad observed that static scheduling of a program, represented by a directed task graph, on a multiprocessor system to minimize the program completion time is a well-known problem in parallel processing. The objective of scheduling is to minimize the completion time of a parallel application by properly allocating the tasks to the processors.

    Andrei Tchernykh et al. presented an experimental study of deterministic non-preemptive multiple-workflow scheduling strategies on a Grid. The paper distinguished twenty-five strategies depending on the type and amount of information they require, and analyzed scheduling strategies that consist of two and four stages: labeling, adaptive allocation, prioritization, and parallel machine scheduling.

    Ming Mao and Marty Humphrey noted that the idea of cloud computing is to allocate (and thus pay for) only those cloud resources that are truly needed. To date, cloud practitioners have pursued schedule-based (e.g., time-of-day) and rule-based mechanisms to attempt to automate this matching between computing requirements and computing resources. However, most of these auto-scaling mechanisms support only simple resource utilization indicators and do not specifically consider both user performance requirements and budget concerns.

    Zhiao Shi and Jack J. Dongarra mentioned that efficient scheduling of workflow applications represented by weighted directed acyclic graphs (DAGs) on a set of heterogeneous processors is essential for achieving high performance.

    Walfredo Cirne and Daniel Paranhos discussed how large distributed systems challenge traditional schedulers, as it is often hard to determine a priori how long each task will take to complete on each resource, information that is input to such schedulers. Task replication has been applied in a variety of scenarios as a way to circumvent this problem. Task replication consists of dispatching multiple replicas of a task and using the result from the first replica to finish.

III. OBJECTIVE

    The objective of the project is to allocate the minimum number of virtual machines needed to complete the user's tasks within the deadline, based on migration of tasks. The smallest number of migrations should be selected, to avoid extra processing cost for the migration process. Virtual machine tasks are migrated to reduce the user's provisioning cost by finishing the user's tasks within their deadlines.

IV. PROPOSED SYSTEM

    The aim of the proposed work is to extend the capabilities of the EIPR algorithm by enabling replication of tasks across multiple Clouds. The proposed method is called Location Based Minimum Migration in Cloud (LBMMC). Resource assumption takes place, meaning the number of resources and their configuration must be specified. In task analysis, the task from the user is analyzed and forwarded to the allocation process, and resources are allocated to the user's jobs. The provisioning of these computational resources is controlled by a provider, and resources are allocated in an elastic way according to consumers' needs. To accommodate unforeseen demands on the infrastructure in a scalable and elastic way, the process of allocation and reallocation in Cloud Computing must be dynamic. Trust levels are assigned to each resource, and the user submits the task together with the security demands under which it must run. Based on the user's demanded security, resources are allocated to each user's jobs. Model analysis then determines which tasks are to be migrated. A location-based VM selection policy (algorithm) is applied to carry out the selection process, and the tasks are migrated based on the analysis of the previous step.

    1. Advantages

      The advantages of the proposed system are:

      1. A minimum number of virtual machines is allocated to complete the user's tasks within the deadline.

      2. Migration of tasks between virtual machines takes minimum cost and energy.

      3. Underload and overload of tasks are considered while allocating tasks to virtual machines.

    2. System Architecture

    Fig.2 System Architecture of LBMMC Algorithm

    The above diagram depicts the architecture of the LBMMC algorithm. First, the user's tasks are given as input. The system then analyzes the requirements for the resources and the number of resources. Resource assumption takes place, meaning the number of resources and their configuration must be specified. In task analysis, the task from the user is analyzed and forwarded to the allocation process, and resources are allocated to the user's jobs. Model analysis then determines which tasks are to be migrated, and tasks are migrated based on that analysis. The smallest number of migrations is selected, to avoid extra processing cost for the migration process, and virtual machine tasks are migrated to reduce the user's provisioning cost by finishing the user's tasks within their deadlines.

V. THE LBMMC ALGORITHM

    The goal of the proposed LBMMC algorithm is to increase the likelihood of completing the execution of tasks within a user-defined deadline, at minimum cost and with reduced power consumption. The method monitors all the virtual machines across all cloud location centres. It identifies the state of each virtual machine, whether sleeping, idle, or running (underloaded or overloaded), and also checks the number of tasks it is running and the number of tasks it can still run. When a virtual machine is considered overloaded, its tasks require live migration to one or more VMs in the nearby cloud location centres.

    When a virtual machine is considered underloaded (running a minimal number of tasks), its tasks require live migration to one or more VMs (which are already running other jobs) in the nearby cloud location centres.
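The state check described above can be sketched as follows. This is an illustrative reconstruction, not the paper's code; the `VMState` names and the 0.5 underload watermark are assumptions chosen for the example.

```python
from enum import Enum

class VMState(Enum):
    SLEEP = "sleep"
    IDLE = "idle"
    UNDERLOADED = "underloaded"
    NORMAL = "normal"
    OVERLOADED = "overloaded"

def classify_vm(running_tasks: int, capacity: int, powered_on: bool = True,
                low_watermark: float = 0.5) -> VMState:
    """Classify a VM by how many tasks it runs relative to its capacity.

    The low_watermark (fraction of capacity below which a VM counts as
    underloaded) is an assumed tuning parameter, not from the paper.
    """
    if not powered_on:
        return VMState.SLEEP
    if running_tasks == 0:
        return VMState.IDLE
    if running_tasks > capacity:
        return VMState.OVERLOADED
    if running_tasks / capacity <= low_watermark:
        return VMState.UNDERLOADED
    return VMState.NORMAL
```

Underloaded VMs become migration sources (so they can be powered down), while overloaded VMs shed tasks to VMs in nearby cloud centres.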

    Working samples

    Let us assume VM1, VM2, VM3, VM4, and VM5 are five virtual machines running in the cloud, and that each can run 3 tasks at a time.

    Fig.3 Working Samples of LBMMC Algorithm

    After 13 seconds, only one task remains running in VM2, so task9 and task10 can be migrated from VM4 to VM2. After 31 seconds, two tasks remain running in VM3, so task14 is migrated from VM5 to VM3. Under some conditions two tasks are still running in VM2, and then no migration takes place.
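The first step of the worked example can be reproduced with a small toy model. The dictionary layout and the capacity-check logic are illustrative assumptions; the VM names and task numbers follow Fig. 3.

```python
CAPACITY = 3  # each VM can run 3 tasks at a time, as in the example

def migrate(vms, tasks, src, dst):
    """Move the named tasks from src VM to dst VM if dst has free slots."""
    free = CAPACITY - len(vms[dst])
    if len(tasks) > free:
        raise ValueError(f"{dst} can only accept {free} more task(s)")
    for t in tasks:
        vms[src].remove(t)
        vms[dst].append(t)

# State after 13 s: VM2 runs one task, VM4 runs task9 and task10.
vms = {
    "VM2": ["task4"],
    "VM4": ["task9", "task10"],
}
migrate(vms, ["task9", "task10"], "VM4", "VM2")
print(vms)  # VM4 is now empty and can be powered down
```

The same call, applied to VM5 and VM3 after 31 seconds, consolidates task14 in the same way.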

VI. MODULES DESCRIPTION

    The proposed algorithm performs four steps:

    Step 1. Resource Assumption and Task Analysis
    Step 2. Allocate Resources for Jobs
    Step 3. Model Analysis for Migration
    Step 4. Migration of Tasks

    Resource Assumption and Task Analysis:

    To develop a cloud environment, one has to analyze the requirements for the resources and the number of resources. In this module, resource assumption takes place, meaning the number of resources and their configuration (processor, RAM, storage, capacity, etc.) must be specified. The next step in this module is task analysis: the task from the user is analyzed and forwarded to the allocation process.
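The resource description this step calls for (processor, RAM, storage, capacity) might be declared as plain data structures. The field names and values below are assumptions for illustration, not the paper's schema.

```python
from dataclasses import dataclass

@dataclass
class ResourceConfig:
    name: str
    processor_mips: int   # processing capacity of the VM
    ram_mb: int           # memory
    storage_gb: int       # disk
    task_capacity: int    # concurrent tasks the VM can run

@dataclass
class Task:
    task_id: str
    length_mi: int        # workload size submitted by the user
    deadline_s: float     # user-given deadline

# Assumed configuration: five identical VMs, each able to run 3 tasks,
# matching the working sample in Section V.
resources = [ResourceConfig(f"VM{i}", 1000, 2048, 100, 3) for i in range(1, 6)]
```

Declaring the configuration up front lets the later allocation and migration steps reason about free capacity per VM.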

    Allocate Resource for Jobs

    Resources are allocated to the user's jobs. The provisioning of these computational resources is controlled by a provider, and resources are allocated in an elastic way according to consumers' needs. To accommodate unforeseen demands on the infrastructure in a scalable and elastic way, the process of allocation and reallocation in Cloud Computing must be dynamic. Trust levels are assigned to each resource, and the user submits the task together with the security demands under which it must run. Based on the user's demanded security, resources are allocated to each user's jobs.
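The trust-level matching described above can be sketched as a simple filter: a job is placed only on a resource whose trust level meets the job's security demand. The 0-to-1 trust scale and first-fit placement are assumptions made for this example.

```python
def allocate(jobs, trust_levels):
    """Map each job (job -> security demand) to the first resource whose
    trust level meets that demand, or None if no resource qualifies."""
    allocation = {}
    for job, demand in jobs.items():
        for resource, trust in trust_levels.items():
            if trust >= demand:
                allocation[job] = resource
                break
        else:
            allocation[job] = None  # no resource satisfies the demand
    return allocation

trust = {"VM1": 0.4, "VM2": 0.9}
print(allocate({"job1": 0.8, "job2": 0.3, "job3": 0.95}, trust))
```

A production allocator would also check free task slots and deadlines, but the security-demand filter is the part this module adds.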

    Model Analysis for Migration

    Model analysis takes place for the migration process, determining which tasks are to be migrated. When a virtual machine is considered overloaded, its tasks require live migration to one or more VMs in the nearby cloud location centres. When a virtual machine is considered underloaded (running a minimal number of tasks), its tasks require live migration to one or more VMs (which are already running other jobs) in the nearby cloud location centres. A location-based VM selection policy (algorithm) is applied to carry out the selection process.
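One plausible reading of the location-based selection policy is: among candidate target VMs with spare capacity, prefer the one in the nearest cloud centre, breaking ties by most free slots (to keep the number of future migrations low). The distance table and tie-break rule here are assumptions, not the paper's exact policy.

```python
def select_target(candidates, distances):
    """Pick a migration target.

    candidates: {vm_name: free task slots}
    distances:  {vm_name: distance (km) from the source cloud centre}
    Returns the nearest VM with spare capacity, or None.
    """
    viable = [vm for vm, free in candidates.items() if free > 0]
    if not viable:
        return None
    # Sort key: nearest centre first, then the most free slots.
    return min(viable, key=lambda vm: (distances[vm], -candidates[vm]))

candidates = {"VM2": 2, "VM3": 0, "VM5": 2}
distances = {"VM2": 10, "VM3": 5, "VM5": 10}
print(select_target(candidates, distances))  # prints VM2: VM3 is nearest but full
```

Ranking by location first keeps migration traffic between nearby centres, which is what makes the policy "location based".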

    Migration of Tasks

    Tasks are migrated based on the analysis of the previous module: tasks are migrated from virtual machines where few tasks are running, tasks are migrated from virtual machines where many tasks are running, and the smallest number of migrations is selected. The tasks are migrated from one VM to other VMs.

VII. SCREENSHOTS

    A. Home Screen to Get Input

    Fig.4 Home Screen to Get Input

    1. Users Resource Assumption

      Fig.5 Users Resource Allocation

    2. Users Task Input

      Fig.6 Users Task Input

    3. Calculating Security Demand and Resource Display

      Fig.7 Calculating Security Demand and Resource Display

    4. Users Resource Allocation

    Fig.8 Users Resource Allocation

VIII. CONCLUSION

Workflow execution is subject to delays if one or more of the virtual machines fail during task execution. Existing approaches take a larger number of virtual machines to complete the user's tasks within their deadlines; more energy is consumed and the provider's cost is higher when more virtual machines are allocated. The proposed LBMMC algorithm was presented with resource assumption and task analysis. Resource assumption takes place, meaning the number of resources and their configuration must be specified. In task analysis, the task from the user is analyzed and forwarded to the allocation process, and resources are allocated to the user's jobs. Model analysis determines which tasks are to be migrated, a location-based VM selection policy (algorithm) is applied to carry out the selection process, and the tasks are migrated based on that analysis. A minimum number of virtual machines is allocated to complete the user's tasks within the deadline, based on the migration of tasks. The smallest number of migrations is selected, to avoid extra processing cost for the migration process, and virtual machine tasks are migrated to reduce the user's provisioning cost by finishing the user's tasks within their deadlines.

IX. REFERENCES

[1] R. N. Calheiros and R. Buyya, "Meeting Deadlines of Scientific Workflows in Clouds with Tasks Replication," IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 7, July 2014.

[2] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, "Cloud Computing and Emerging IT Platforms: Vision, Hype, and Reality for Delivering Computing as the 5th Utility," Future Gener. Comput. Syst., vol. 25, no. 6, pp. 599-616, June 2009.

[3] Z. Shi and J. J. Dongarra, "Scheduling Workflow Applications on Processors with Different Capabilities," Future Gener. Comput. Syst., vol. 22, no. 6, pp. 665-675, May 2006.

[4] M. Mao and M. Humphrey, "Auto-Scaling to Minimize Cost and Meet Application Deadlines in Cloud Workflows," in Proc. Int'l Conf. High Perform. Comput., Netw., Storage Anal. (SC), 2011, p. 49.

[5] S. Abrishami, M. Naghibzadeh, and D. Epema, "Deadline-Constrained Workflow Scheduling Algorithms for IaaS Clouds," Future Gener. Comput. Syst., vol. 29, no. 1, pp. 158-169, Jan. 2013.
