Workload Management in Cloud Environment Using Communication-aware Inter-Virtual Machine Scheduling Technique

DOI : 10.17577/IJERTCONV6IS03023




Mr. Baskar K #1, Mrs. Sangeetha S *2, Mrs. Priya S *3

#1 Assistant Professor, Department of Computer Science and Engineering

*2 Assistant Professor, Department of Information Technology

*3 Assistant Professor, Department of Computer Science and Engineering

#1 Kongunadu College of Engineering and Technology, Thottiam, Tamil Nadu, India

*2 Kongunadu College of Engineering and Technology, Thottiam, Tamil Nadu, India

*3 Trinity College for Women, Namakkal, Tamil Nadu, India

Abstract: The core objective of this paper is to present a decentralized approach towards energy-efficient and scalable management of virtual machine (VM) instances that are provisioned by large enterprise clouds. In the proposed approach, the compute nodes of the data center are organized into a hypercube structure. The hypercube seamlessly scales up and down as resources are added or removed in response to changes in the number of provisioned VM instances. Without supervision from any central component, each compute node operates independently and manages its workload by applying a set of distributed load-balancing rules and algorithms. For less homogeneous settings, we plan to add additional parameters to the load-balancing algorithm. To decrease network latency between co-located VMs, we plan to merge the communication-aware inter-VM scheduling (CIVSched) technique into the load-balancing method, allowing a more fine-grained selection of the VM instances to be migrated. Server consolidation in cloud computing environments makes it possible for multiple servers or desktops to run on a single physical server for high resource utilization, low cost, and reduced energy consumption.

Keywords: centralization/decentralization, distributed systems, energy-aware systems, CIVSched.


    Today, a large number of cloud-based services are offered across the infrastructure, platform, and software levels, which reflects the long-term growth of cloud computing. The energy expenditure of service providers' data centers has a negative impact on the environment. A significant part of their power consumption is wasted on over-provisioned and idle resources. It therefore becomes important for cloud service providers to adopt suitable measures for attaining energy-efficient processing and utilization of their computing infrastructure. In computation-oriented data centers, the workload is translated into a number of provisioned virtual machine (VM) instances.

    Dynamic VM consolidation has been developed to address these problems.

    Dynamic VM consolidation [1] is used to reduce the energy consumption of the data center by packing the running VM instances onto as few physical machines (PMs) as possible, and accordingly switching off the unnecessary resources. Combined with live VM migration, which refers to the process of moving a running VM instance between different physical compute nodes without disconnecting the client, VM consolidation has become cost-effective and can significantly improve the energy footprint of cloud data centers. The major purpose is to decrease energy consumption and increase throughput. For less homogeneous settings, we plan to add additional parameters to the load-balancing algorithm. To decrease network latency among co-located VMs, we plan to merge the communication-aware inter-VM scheduling (CIVSched) technique into the load-balancing scheme, allowing a more fine-grained selection of the VM instances to be migrated.
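To make the packing step concrete, here is a minimal Python sketch of consolidation as first-fit-decreasing bin packing. The `Vm`/`Pm` classes and the single normalized CPU dimension are our own illustrative assumptions, not the paper's algorithm:

```python
from dataclasses import dataclass, field

@dataclass
class Vm:
    name: str
    cpu: float  # normalized CPU demand, 0..1

@dataclass
class Pm:
    name: str
    capacity: float = 1.0
    vms: list = field(default_factory=list)

    def load(self):
        return sum(v.cpu for v in self.vms)

def consolidate(vms, pms):
    """Pack VMs onto as few PMs as possible (first-fit decreasing);
    PMs left empty can then be switched off to save energy."""
    for vm in sorted(vms, key=lambda v: v.cpu, reverse=True):
        for pm in pms:
            if pm.load() + vm.cpu <= pm.capacity:
                pm.vms.append(vm)
                break
    active = [p for p in pms if p.vms]
    idle = [p for p in pms if not p.vms]
    return active, idle
```

Every PM left in the `idle` list is a candidate to be switched off, which is where the energy savings described above come from.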


In [22], the authors proposed a mechanism for the dynamic consolidation of VMs onto as few physical machines as possible; their aim is to reduce the energy consumed by a private cloud without jeopardizing the compute nodes' reliability. They use a sliding-window condition detection mechanism and depend on a centralized cloud manager that carries out the VM-to-PM mappings based on the collected information. A power-efficient VM consolidation is developed by Mastroianni et al. [16]. In this approach, the placement and migration of VM instances are driven by probabilistic processes that consider both CPU and RAM utilization. This enables load-balancing decisions to be taken based on local information, but it still depends on a central data center manager for the organization of the VM host servers.

The recently developed Green Cloud computing through VMs is widely utilized, but data center operators are struggling to minimize their energy consumption and their operational costs.

The algorithms are implemented by a Green Cloud computing infrastructure, which introduces an additional layer to the typical cloud architecture. This infrastructure comprises a set of centralized components:

  1. Energy monitor, which observes the energy consumption caused by VMs and physical machines.

  2. VM manager, which is in charge of provisioning new VMs as well as reallocating VMs across physical machines on the basis of the information collected by the energy monitor.

    1. Elimination of under/over-utilized nodes: balancing under-utilized and over-utilized physical machines.

    2. Power consumption: energy costs per hour for the data center.


    • It can be accessed on single and multiple servers [13].

    • A Set-Based Discrete PSO for Cloud Workflow Scheduling with User-Defined QoS Constraints [17]: high data-processing latency and high implementation cost.

    • Load Balancing in Cloud Computing Environment Using Evolutionary and Swarm-Based Algorithms [19]: unsecured data management and complex computing models to implement.

    • Hybrid Particle Swarm Optimization Scheduling for Cloud Computing [20]: high implementation cost and slow data-processing modules.

    • Task Scheduling for Hybrid IaaS Cloud [21]: the database management occupies more memory, with complex modules and high power consumption.


    Dynamic consolidation of VMs in physical machines

        Figure 1: System Architecture

    Dynamic consolidation of VMs in physical machines is used to reduce the energy consumed by a hybrid cloud without jeopardizing compute node reliability. The approach is implemented via a sliding-window condition detection mechanism and relies on the use of a centralized cloud manager that carries out the VM-to-PM mappings. This setting has the following drawbacks:


    • Highly homogeneous setting.

    • Increased Network Latency.


      The proposed system design implements active VM consolidation and relies on live VM migration. Specifically, the physical machines in the data center that are employed to host the VM instances are efficiently self-organized in a highly scalable hypercube overlay network. For less homogeneous settings, we plan to add additional parameters to the load-balancing algorithm. To decrease network latency between co-located VMs, we plan to combine the communication-aware inter-VM scheduling (CIVSched) technique with the load-balancing scheme, allowing a more fine-grained selection of the VM instances to be migrated. The proposed system experiments aim at examining the following main aspects:

      1. Elasticity: adapting to sudden workload changes.

      Login:

      • Used to verify the cloud user.

      • Only registered users can use this application, for security purposes.

      • All information about the users is stored and maintained in the cloud server.

        1. Registration:

          New users register to use this application.

        2. Load balancing:

          Dependent tasks are those whose execution depends on one or more sub-tasks; they can be executed only after the completion of the sub-tasks on which they depend. Consequently, such tasks must be scheduled according to their task dependencies prior to execution. Task dependency is modeled using workflow-based algorithms.
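Dependency-respecting execution of this kind is conventionally obtained by running tasks in topological order. A minimal sketch using Kahn's algorithm (the function name `schedule_order` is illustrative, not from the paper):

```python
from collections import deque

def schedule_order(tasks, deps):
    """Return an execution order in which every task runs only after
    all sub-tasks it depends on (Kahn's topological sort).
    deps maps a task to the list of its prerequisite sub-tasks."""
    indeg = {t: 0 for t in tasks}       # number of unmet prerequisites
    children = {t: [] for t in tasks}   # tasks unlocked by each task
    for t, prereqs in deps.items():
        for p in prereqs:
            indeg[t] += 1
            children[p].append(t)
    ready = deque(t for t in tasks if indeg[t] == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for c in children[t]:
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    return order
```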

        3. Virtual machine:

          The workload essentially gets translated into a number of provisioned virtual machines. VMs are reallocated across physical machines on the basis of the data collected by the energy monitor.

        4. Data centre:

          Identifying the cloud user to keep attackers away, and allocating the party to the cloud user to preserve account and data privacy. Tasks are executed based on the algorithm governing the action.

          A hybrid, enterprise cloud data center usually consists of one or more physical controller nodes, whose purpose is to sustain the overall cloud operating system. Since our goal is to enable decentralized workload management, we organize the data center's compute nodes in an n-dimensional binary hypercube topology.
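In a binary hypercube, each of the 2^n nodes is addressed by an n-bit identifier, and two nodes are neighbors exactly when their identifiers differ in a single bit. A one-function sketch of neighbor computation (the function name is ours):

```python
def neighbors(node_id, dim):
    """Neighbors of a node in a dim-dimensional binary hypercube:
    flip each of the dim bits of the node id in turn."""
    return [node_id ^ (1 << b) for b in range(dim)]
```

This is what keeps the overlay scalable: every node maintains only n links, while any node is reachable in at most n hops.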

        5. Compute Nodes

          Compared to the other resources of a compute node, such as memory and network, the CPU consumes the main part of its power, and its utilization is typically proportional to the overall system load.

          • pidle defines the quantity of power consumed by the compute node when idle, i.e., when the compute node is not hosting any VM instances.

          • pmin defines the level of power utilization below which the compute node should try to transfer all its locally running virtual machines and shut down.

          • pmax defines the critical level of power consumption, above which the compute node's performance is significantly degraded, as its hardware resources, mainly the CPU, are over-utilized.
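A hedged sketch of how these thresholds could drive per-node decisions; the linear power model and the function names are our assumptions, not the paper's:

```python
def power_draw(util, p_idle, p_max):
    """Common linear model: power grows with CPU utilization
    from p_idle (no VMs) to p_max (fully loaded)."""
    return p_idle + (p_max - p_idle) * util

def node_state(power, p_idle, p_min, p_max):
    """Classify a compute node by its current power draw using the
    three thresholds described above."""
    if power <= p_idle:
        return "idle"           # hosting no VMs; candidate to switch off
    if power < p_min:
        return "underutilized"  # migrate local VMs away and shut down
    if power > p_max:
        return "overutilized"   # hardware over-committed; offload VMs
    return "normal"
```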

        6. Initial VM Placement

          The data center clients can demand the creation and provisioning of new VM instances at any time, given that the data center has not exceeded its maximum capacity, i.e., at least one of its compute nodes is not in the over-utilized state. Similarly, VM instances can be terminated at any time. In our approach, the data center is able to place VM instances on its compute nodes in a completely decentralized manner, by leveraging the hypercube topology.
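One way such decentralized placement could work is a breadth-first walk over the hypercube overlay, starting at the node that received the request. This is a sketch under our own assumptions (unit-sized VMs, and a shared `loads` map standing in for per-node state that would really be exchanged over the overlay):

```python
from collections import deque

def place_vm(start, loads, capacity, dim):
    """Breadth-first search over the hypercube overlay for a node with
    spare capacity, starting at the node that received the request.
    Returns the hosting node id, or None if the data center is full."""
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if loads[node] < capacity:
            loads[node] += 1          # place one unit-sized VM here
            return node
        for b in range(dim):
            nb = node ^ (1 << b)      # hypercube neighbor: flip bit b
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return None
```

Because each node only ever talks to its n neighbors, no central placement service is required.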

          1. Clustering:

      The file that the user uploads is clustered in the form of keys, and an indexing technique is applied to distribute the scheduled tasks. This is an easy method to allocate tasks to servers in order to balance the load and decentralize service to the clients.


    This section provides an overview of our approach to decreasing the inter-VM communication latency. First, two design principles that the CIVSched technique must abide by are presented. Thereafter, we provide an overview of the scheduling architecture.

    Low latency for the inter-VM packet. To decrease the scheduling latency of the target VM and achieve short response times between the intercommunicating VMs, CIVSched must observe all of the network packets transmitted between the co-located VMs and preferentially schedule the target VMs.

    Low latency for the inner-VM process. CIVSched must identify the target process within the target VM for a received packet and request the VM's scheduler to schedule that process directly.

    Figure 2: The CIVSched architecture.

    The CIVSched architecture is shown in Fig. 2. It comprises three parts or, to be more specific, five modules. The Dom0 part holds the modules AutoCover and CivMonitor. The modules in Dom0 capture the network data packets and extract the relevant information. Thereafter, the module CivMonitor informs the schedulers running in the other two parts. The module CivScheduler is the core scheduler of CIVSched and is located within the Xen hypervisor. The modules PidTrans and PidScheduler reside in DomU. These two modules are responsible for identifying and scheduling the target process, respectively.


      1. Decentralized load balancing algorithm:

        It depends on a priori information about the applications and static information about the load of the nodes, such as memory and storage capacity and recently measured communication performance.

        Distributed algorithms of this kind are mainly suited to homogeneous settings and work in a master-slave manner.
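A minimal sketch of a decentralized balancing rule in this spirit: each node consults only its hypercube neighbors' loads and decides locally whether to migrate, with no central coordinator. The thresholds follow the pmin/pmax idea described earlier; the function name and the scalar load metric are our assumptions:

```python
def balance_step(node, loads, dim, p_min, p_max):
    """One local balancing decision for `node`. Returns the id of the
    neighbor to migrate a VM to, or None if no migration is needed.
    loads maps node id -> current scalar load."""
    nbrs = [node ^ (1 << b) for b in range(dim)]   # hypercube neighbors
    if loads[node] > p_max:                        # over-utilized: offload
        return min(nbrs, key=lambda n: loads[n])
    if loads[node] < p_min:                        # under-utilized: drain
        # only drain to a neighbor that will not become over-utilized
        candidates = [n for n in nbrs if loads[n] + loads[node] <= p_max]
        if candidates:
            return min(candidates, key=lambda n: loads[n])
    return None                                    # balanced; do nothing
```

Running such a step periodically on every node approximates global consolidation using purely local information.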

      2. Centralized load balancing algorithm:

    The workload is distributed among the processors at runtime. In this method, the master allocates new tasks to the slaves based on the newly gathered data. The work is centralized: one node performs the load balancing process, and the task load is shared among the others.

    7.3. CIVSched Technique:

    For decreased network latency between co-located VMs, we plan to combine communication-aware inter-VM scheduling (CIVSched) into load balancing. Server consolidation in cloud computing environments makes it possible for multiple servers to run on a single physical machine.

    The CIVSched technique takes into account the communication performance between VMs running on the same virtualization platform. It inspects the network packets transmitted between local co-resident domains to identify the target VM and the process that will receive the packets. The CIVSched technique can thereby decrease the average response time of the network.

        1. CIVScheduler in the VMM

          if not_empty(run_queue) and not_empty(shared_ring) then
              /* get destination domid from the shared ring */
              if running_domid != dest_domid then
                  if dest_domid in run_queue then
                      /* if another domain has waited too long, don't preempt */
                      if run_queue_head->dom_wait_time < threshold then
                          /* preempt the running domain */
                          move dest_domid to the head of the run_queue;
                          insert running_domid behind dest_domid;
                      end if
                  end if
              end if
          end if
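For illustration, the pseudocode above can be expressed as a runnable Python sketch. The queue representation and names such as `civ_schedule` are our own assumptions, not the authors' Xen implementation:

```python
def civ_schedule(run_queue, next_dest, running_domid, threshold):
    """Reorder the run queue so the domain about to receive a packet
    (next_dest, read from the shared ring) is dispatched next.
    run_queue is a list of (domid, wait_time) pairs, head first; the
    currently running domain is preempted and re-queued right behind
    the target, unless the current queue head has already waited
    past `threshold`."""
    if not run_queue or next_dest is None:
        return run_queue
    if running_domid != next_dest:
        entry = next((e for e in run_queue if e[0] == next_dest), None)
        if entry is not None and run_queue[0][1] < threshold:
            run_queue.remove(entry)
            run_queue.insert(0, entry)                # target to the head
            run_queue.insert(1, (running_domid, 0))   # preempted domain behind it
    return run_queue
```

The wait-time guard mirrors the pseudocode's fairness check: a long-waiting domain at the head is never starved by repeated preemptions.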


    Figure 3: Use Case Diagram


    Figure 4: Node Creation

    Figure 5: Already work assign to server

    Figure 6: Data user

    Figure 7: CIV Technique

    Table 1: Comparison evaluation


The proposed work includes a hypercube overlay for the organization of the data center's compute nodes, and a set of distributed load-balancing algorithms which rely on live VM migration to transfer workload among nodes, with the dual goal to i) minimize the active resources of the data center, and thereby its energy utilization, and ii) avoid overloading of compute nodes. We conducted a series of simulation-based experiments with the intention of evaluating our proposed approach. The outcomes suggest that our decentralized load balancer is scalable and energy-efficient, as it functions in a similar way regardless of the data center size. Moreover, it enables automated elasticity, as the data center's compute nodes are switched on and off on demand, in response to changes in the data center's overall workload. Our results also demonstrated that the collective cost of live migrations, along with that of switching compute nodes on and off, is negligible compared to the energy savings achieved by our approach. In future work, we plan to implement and integrate our decentralized workload manager in an open-source cloud operating system.


        1. Michael Pantazoglou, Gavriil Tzortzakis, and Alex Delis, "Decentralized and energy-efficient workload management in enterprise clouds," IEEE Trans. Cloud Comput., vol. 4, no. 2, Apr.-Jun. 2016.

        2. R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, "Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility," Future Generation Comput. Syst., vol. 25, no. 6, pp. 599-616, 2009.

        3. Bei Guan, Jingzheng Wu, Yongji Wang, and Samee U. Khan, "CIVSched: A communication-aware inter-VM scheduling technique for decreased network latency between co-located VMs," IEEE Trans. Cloud Comput., DOI 10.1109/TCC.2014.2328582.

        4. L. A. Barroso and U. Holzle, The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines, 1st ed. San Rafael, CA, USA: Morgan and Claypool Publishers, 2009.

        5. L. A. Barroso and U. Holzle, "The case for energy-proportional computing," IEEE Comput., vol. 40, no. 12, pp. 33-37, Dec. 2007; Kulkarni, "Dynamic virtual machine consolidation," Google Patents, US Patent App. 13/604,134, Sep. 5, 2012.

        6. A. Roytman, A. Kansal, S. Govindan, J. Liu, and S. Nath, "PACMan: Performance aware virtual machine consolidation," in Proc. 10th Int. Conf. Autonomic Comput., Jun. 2013, pp. 83-94.

        7. T. C. Ferreto, M. A. Netto, R. N. Calheiros, and C. A. De Rose, "Server consolidation with migration control for virtualized data centers," Future Generation Comput. Syst., vol. 27, no. 8, pp. 1027-1034, 2011.

        8. W. Shi and B. Hong, "Towards profitable virtual machine placement in the data centre," in Proc. 4th IEEE Int. Conf. Utility Cloud Comput., Melbourne, Australia, Dec. 2011, pp. 138-145.

        9. C. Clark, K. Fraser, S. Hand, J. G. Hansen, E. Jul, C. Limpach, I. Pratt, and A. Warfield, "Live migration of virtual machines," in Proc. 2nd Symp. Netw. Syst. Design Implementation, May 2005, vol. 2, pp. 273-286.

        10. K. Tsakalozos, V. Verroios, M. Roussopoulos, and A. Delis, "Time-constrained live VM migration in share-nothing IaaS-clouds," in Proc. 7th IEEE Int. Conf. Cloud Comput., Anchorage, AK, USA, Jun. 2014, pp. 56-63.

        11. W. Voorsluys, J. Broberg, S. Venugopal, and R. Buyya, "Cost of virtual machine live migration in clouds: A performance evaluation," in Proc. 1st Int. Conf. Cloud Comput., Beijing, China, Dec. 2009, pp. 254-265.

        12. T. Hirofuchi, H. Nakada, S. Itoh, and S. Sekiguchi, "Reactive consolidation of virtual machines enabled by postcopy live migration," in Proc. 5th Int. Workshop Virtualization Technol. Distrib. Comput., San Jose, CA, USA, Jun. 2011, pp. 11-18.

        13. M. H. Ferdaus, M. Murshed, R. N. Calheiros, and R. Buyya, "Virtual machine consolidation in cloud data centers using ACO metaheuristic," in Proc. 20th Eur. Int. Conf. Parallel Process., 2014, pp. 306-317.

        14. F. Hermenier, X. Lorca, J.-M. Menaud, G. Muller, and J. Lawall, "Entropy: A consolidation manager for clusters," in Proc. ACM SIGPLAN/SIGOPS Int. Conf. Virtual Execution Environ., Washington, DC, USA, Mar. 2009, pp. 41-50.

        15. E. Feller, L. Rilling, and C. Morin, "Snooze: A scalable and autonomic virtual machine management framework for private clouds," in Proc. 12th IEEE/ACM Int. Symp. Cluster, Cloud Grid Comput., Ottawa, Canada, May 2012, pp. 482-489.

        16. C. Mastroianni, M. Meo, and G. Papuzzo, "Probabilistic consolidation of virtual machines in self-organizing cloud data centers," IEEE Trans. Cloud Comput., vol. 1, no. 2, pp. 215-228, Jul.-Dec. 2013.

        17. Wei-Neng Chen and Jun Zhang, "A set-based discrete PSO for cloud workflow scheduling with user-defined QoS constraints," 2012.

        18. J. Baliga, R. W. Ayre, K. Hinton, and R. Tucker, "Green cloud computing: Balancing energy in processing, storage, and transport," Proc. IEEE, vol. 99, no. 1, pp. 149-167, Jan. 2011.

        19. Madhurima Rana, Saurabh Bilgaiyan, and Utsav Kar, "A study on load balancing in cloud computing environment using evolutionary and swarm based algorithms," 2014.

        20. M. Sridhar and G. Rama Mohan Babu, "Hybrid particle swarm optimization scheduling for cloud computing," 2015.

        21. Xingquan Zuo, Guoxiang Zhang, and Wei Tan, "Self-adaptive learning PSO-based deadline constrained task scheduling for hybrid IaaS cloud," 2014.

        22. Geng Yushui and Yuan Jiaheng, "Cloud data migration method based on PSO algorithm," 2015.

        23. A. M. Sampaio and J. G. Barbosa, "Optimizing energy-efficiency in high-available scientific cloud environments," in Proc. 3rd Int. Conf. Cloud Green Comput., Karlsruhe, Germany, Sep. 2013, pp. 76-83.
