Secure Data Transmission in Cloud Computing

DOI : 10.17577/IJERTCONV6IS04006


Kavitha C1
1Assistant Professor, Department of CSE, R.M.K. College of Engineering and Technology, Chennai, Tamil Nadu

Vishnu Kumar M2
2UG Student, Department of CSE, R.M.K. College of Engineering and Technology, Chennai, Tamil Nadu

Ramji D3
3UG Student, Department of CSE, R.M.K. College of Engineering and Technology, Chennai, Tamil Nadu

Rishi Rathnavel K4
4UG Student, Department of CSE, R.M.K. College of Engineering and Technology, Chennai, Tamil Nadu

Abstract:- Storage-as-a-Service offered by cloud service providers (CSPs) is a paid facility that enables organizations to outsource their sensitive data to be stored on remote servers. In this paper, we propose a cloud-based storage scheme that allows the data owner to benefit from the facilities offered by the CSP and enables indirect mutual trust between them. The proposed scheme has four important features: (i) it allows the owner to outsource sensitive data to a CSP and perform full block-level dynamic operations on the outsourced data, i.e., block modification, insertion, deletion, and append; (ii) it ensures that authorized users (i.e., those who have the right to access the owner's file) receive the latest version of the outsourced data; (iii) it enables indirect mutual trust between the owner and the CSP; and (iv) it allows the owner to grant or revoke access to the outsourced data. We discuss the security of the proposed scheme and evaluate its performance through theoretical analysis and a prototype implementation on the Amazon cloud platform, measuring storage, communication, and computation overheads.

Keywords: cloud, security, encryption, decryption, cheating detection

  1. INTRODUCTION TO CLOUD COMPUTING

    Cloud computing is a colloquial expression used to describe a variety of computing concepts that involve a large number of computers connected through a real-time communication network (typically the Internet). The term has no commonly accepted, unambiguous scientific or technical definition. In science, cloud computing is a synonym for distributed computing over a network and refers to the ability to run a program on many connected computers at the same time. More commonly, the phrase refers to network-based services that appear to be provided by real server hardware but are in fact served by virtual hardware, simulated by software running on one or more real machines. Such virtual servers do not physically exist and can therefore be moved around and scaled up on the fly without affecting the end user.

    The cloud also focuses on maximizing the effectiveness of shared resources. Cloud resources are usually not only shared by multiple users but are also dynamically reallocated on demand. This can work for allocating resources to users in different time zones: for example, a cloud facility that serves European users with a specific application during European business hours can reallocate the same resources to serve North American users with another application during North American business hours.

    Characteristics

    On-Demand Self-Service

    A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.

    Broad Network Access

    Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms.

    Resource Pooling

    The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand.

    Rapid Elasticity

    Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear unlimited and can be appropriated in any quantity at any time.

  2. RELATED WORKS

    Storage outsourcing is a rising trend that prompts a number of interesting security issues, many of which have been extensively investigated in the past. Provable Data Possession (PDP), however, is a topic that has only recently appeared in the research literature. The main issue is how to frequently, efficiently, and securely verify that a storage server is faithfully storing its client's (potentially very large) outsourced data. The storage server is assumed to be untrusted in terms of both security and reliability. One line of work constructs a highly efficient and provably secure PDP technique based entirely on symmetric-key cryptography that does not require any bulk encryption. In contrast with its predecessors, this PDP technique also allows outsourcing of dynamic data, i.e., it efficiently supports operations such as block modification, deletion, and append.
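    As a concrete illustration of this idea (not the construction from the literature itself), the following Python sketch precomputes HMAC challenge tokens before outsourcing and later spot-checks a block. The block size, token count, and function names are illustrative assumptions, and a real scheme would have the server return a short tag rather than the whole block:

    # Minimal sketch of symmetric-key PDP via precomputed challenge tokens.
    # Names and parameters are illustrative, not from the cited scheme.
    import hmac, hashlib, os

    BLOCK = 4096

    def blocks(data):
        return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

    def precompute_tokens(key, data, n_challenges):
        """Owner: before outsourcing, derive one token per future challenge."""
        bs = blocks(data)
        tokens = []
        for c in range(n_challenges):
            # Pseudo-randomly pick which block challenge c will probe.
            idx = int.from_bytes(hmac.new(key, c.to_bytes(4, 'big'),
                                          hashlib.sha256).digest(), 'big') % len(bs)
            tokens.append((idx, hmac.new(key, bs[idx], hashlib.sha256).digest()))
        return tokens

    def server_respond(data, idx):
        """Untrusted server: return proof material for the challenged block."""
        return blocks(data)[idx]

    def owner_verify(key, token, proof_block):
        return hmac.compare_digest(token, hmac.new(key, proof_block,
                                                   hashlib.sha256).digest())

    key = os.urandom(32)
    data = os.urandom(3 * BLOCK)
    tokens = precompute_tokens(key, data, 10)
    idx, tok = tokens[0]
    assert owner_verify(key, tok, server_respond(data, idx))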

    Cloud computing is a unique paradigm that brings about many new security challenges which have not yet been well understood. In particular, consider the task of allowing a third-party auditor (TPA), on behalf of the cloud client, to verify the integrity of dynamic data stored in the cloud. The introduction of a TPA relieves the client of auditing whether the data stored in the cloud is indeed intact, which can be important in achieving economies of scale for cloud computing. Support for data dynamics via the most general forms of data operation, such as block modification, insertion, and deletion, is also a significant step toward practicality, since services in cloud computing are not limited to archive or backup data only. To achieve efficient data dynamics, prior work improves the Proof of Retrievability model by manipulating the classic Merkle Hash Tree (MHT) construction for block tag authentication; extensive security and performance analysis shows that such a scheme is highly efficient and provably secure.
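    The MHT itself is straightforward to sketch. The following minimal Python example, over hypothetical block contents, computes a Merkle root that a verifier can hold in place of the full file; tampering with any block changes the root:

    # Minimal Merkle Hash Tree sketch for block tag authentication.
    import hashlib

    def h(x):
        return hashlib.sha256(x).digest()

    def merkle_root(leaves):
        level = [h(b) for b in leaves]
        while len(level) > 1:
            if len(level) % 2:              # duplicate last node on odd levels
                level.append(level[-1])
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    blocks = [b"block0", b"block1", b"block2", b"block3"]
    root = merkle_root(blocks)
    # After a block update, the prover recomputes the affected path and the
    # verifier checks the new root; this is what keeps dynamic data auditable.
    assert merkle_root([b"block0", b"tampered", b"block2", b"block3"]) != root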

    Traditional access control techniques commonly assume that the data owner and the storage servers reside in the same trust domain. This assumption no longer holds when the data is outsourced to a remote CSP, which takes full charge of the outsourced data management and resides outside the trust domain of the data owner. Conversely, the CSP needs to be safeguarded from a dishonest owner who attempts to obtain illegal compensation by falsely claiming data corruption on the cloud servers. This concern, if not properly handled, can cause the CSP to go out of business.

  3. PROPOSED WORK

    In the proposed system we design and implement a cloud-based storage scheme with the following features:

    (i) It allows a data owner to outsource data to a CSP and perform full dynamic operations at the block level, i.e., it supports operations such as block modification, insertion, deletion, and append; (ii) it ensures the newness property, i.e., authorized users receive the most recent version of the outsourced data; (iii) it establishes indirect mutual trust between the data owner and the CSP, since each party resides in a different trust domain; and (iv) it enforces access control for the outsourced data.

      Fig 1. Proposed Architecture

      System Models

      The system model includes the following.

      • File encryption

      • File upload to Service Providers

      • Dynamic Operations on the Outsourced Data


      • Data Access and Cheating Detection

      • File decryption

    1. File Encryption

      The first module in this project is the file encryption module. It encrypts a file before the file is outsourced to the cloud service providers. The encryption is performed by the data owner to protect the data from unauthorized users. At encryption time, the secret key needed to later decrypt the file is produced, and the owner must keep this key, because data retrieved from the cloud service providers remains in encrypted form.
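      As a minimal sketch of this module, assuming a symmetric cipher (the paper does not fix one), the owner could use the Fernet recipe (AES plus HMAC) from the Python cryptography package; the file name is hypothetical:

      # Owner-side encryption sketch using the "cryptography" package's
      # Fernet recipe; an illustrative stand-in, not the paper's exact cipher.
      from cryptography.fernet import Fernet

      def encrypt_file(path):
          key = Fernet.generate_key()      # secret key the owner must keep
          with open(path, "rb") as f:
              ciphertext = Fernet(key).encrypt(f.read())
          with open(path + ".enc", "wb") as f:
              f.write(ciphertext)
          return key                       # held by the owner, never the CSP

      # key = encrypt_file("report.pdf")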

    2. File Upload to Service Providers

      The data owner cannot upload files directly to the cloud service providers. The owner first uploads the file to the Trusted Third Party (TTP), which in our scheme is a trusted intermediary between the cloud service providers and the data owner. The TTP receives the data from the data owner and forwards the file to the cloud service providers; once the file is received from the TTP, the CSP sends the data owner a confirmation email that the file has been uploaded.
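      The flow can be pictured with the following hypothetical Python sketch; the class and function names, and the in-memory "mail" hook, are illustrative stand-ins for the real transport and mail service:

      # Hypothetical TTP-mediated upload flow: owner -> TTP -> CSP -> email.
      def owner_upload(ttp, file_blob, owner_email):
          ttp.receive(file_blob, owner_email)

      class TrustedThirdParty:
          def __init__(self, csp):
              self.csp = csp
          def receive(self, file_blob, owner_email):
              self.csp.store(file_blob, owner_email)   # forwards, stores no keys

      class CloudServiceProvider:
          def __init__(self):
              self.storage = {}
          def store(self, file_blob, owner_email):
              self.storage[owner_email] = file_blob
              send_confirmation_mail(owner_email)      # acknowledge the upload

      def send_confirmation_mail(addr):
          print(f"mail to {addr}: your file is now stored at the CSP")

      csp = CloudServiceProvider()
      owner_upload(TrustedThirdParty(csp), b"...encrypted bytes...", "owner@example.com")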

    3. Dynamic Operations on the Outsourced Data

      The data owner can modify a file after uploading it to the cloud service providers, performing dynamic operations on the data so that authorized users always access the most recently updated version of the outsourced data. Only the data owner can change the data dynamically: the data can be deleted, updated, or edited by the owner, as sketched below.
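      A minimal sketch of these block-level operations follows, with a version counter standing in for the scheme's newness guarantee (names are illustrative, and a deployed scheme would also re-authenticate the affected block tags after each change):

      # Block-level dynamic operations as plain list manipulation.
      class OutsourcedFile:
          def __init__(self, blocks):
              self.blocks = list(blocks)
              self.version = 0             # lets users detect the newest copy
          def modify(self, i, data):
              self.blocks[i] = data; self.version += 1
          def insert(self, i, data):
              self.blocks.insert(i, data); self.version += 1
          def delete(self, i):
              del self.blocks[i]; self.version += 1
          def append(self, data):
              self.blocks.append(data); self.version += 1

      f = OutsourcedFile([b"b0", b"b1"])
      f.append(b"b2"); f.modify(0, b"b0'"); f.delete(1)
      assert f.blocks == [b"b0'", b"b2"] and f.version == 3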

    4. Data Access and Cheating Detection

      An authorized user sends a data-access request to both the CSP and the TTP to access an outsourced file. The outsourced data can be retrieved only by authorized users, so the TTP must check whether a requester is authorized. To check authorization, the CSP and the TTP verify the secret key of the particular file requested by the user; only if the secret key matches the database can the user download the file and decrypt it. If an unauthorized user tries to access the data, a notification is sent to the TTP.
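      A hedged sketch of the key-matching check follows; the per-file key table and the TTP notification hook are illustrative assumptions rather than the paper's exact mechanism:

      # Joint access check: release the file only on a matching secret key.
      import hmac

      class AccessController:
          def __init__(self):
              self.file_keys = {}          # file_id -> expected secret key

          def check(self, file_id, presented_key):
              expected = self.file_keys.get(file_id)
              if expected is not None and hmac.compare_digest(expected, presented_key):
                  return True              # authorized: encrypted file released
              notify_ttp(file_id)          # unauthorized attempt is reported
              return False

      def notify_ttp(file_id):
          print(f"TTP alert: unauthorized access attempt on file {file_id}")

      ac = AccessController()
      ac.file_keys["file-42"] = b"s3cret"
      assert ac.check("file-42", b"s3cret")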

    5. File Decryption

      The last module in this project is file decryption, in which the encrypted file is returned to its original form. The decryption algorithm needs the key created at encryption time, which the data owner keeps. After the key is entered, the algorithm decrypts the file and returns the data in a readable form that users can understand.
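      As the counterpart to the encryption sketch above, and under the same Fernet assumption, decryption might look as follows:

      # User-side decryption sketch; fails cleanly on a wrong key or
      # tampered ciphertext.
      from cryptography.fernet import Fernet, InvalidToken

      def decrypt_file(path, key):
          with open(path, "rb") as f:
              try:
                  return Fernet(key).decrypt(f.read())
              except InvalidToken:
                  raise ValueError("decryption failed: bad key or corrupted file")

      # plaintext = decrypt_file("report.pdf.enc", key)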

  4. ALGORITHMS

    Identity Based Encryption

    ID-based encryption (identity-based encryption, IBE) is an important primitive of ID-based cryptography. It is a type of public-key encryption in which the public key of a user is some unique information about the user's identity (e.g., the user's email address). The scheme can use the text value of a name or domain name as a key, or the physical IP address it translates to. An early motivation was an email-address-based PKI that allowed users to verify digital signatures using only public information such as the user's identifier.

    ID-based encryption was first proposed by Shamir, who was, however, only able to give an instantiation of identity-based signatures; identity-based encryption itself remained an open problem for many years.

    Identity-based systems allow any party to generate a public key from a known identity value such as an ASCII string. A trusted third party, called the Private Key Generator (PKG), generates the corresponding private keys. To operate, the PKG first publishes a master public key and retains the corresponding master private key (referred to as the master key). Given the master public key, any party can compute a public key corresponding to an identity ID by combining the master public key with the identity value. To obtain the corresponding private key, the party authorized to use the identity ID contacts the PKG, which uses the master private key to generate the private key for identity ID.


    As a result, parties may encrypt messages (or verify signatures) with no prior distribution of keys between individual participants. This is extremely useful where pre-distribution of authenticated keys is inconvenient or infeasible due to technical constraints. However, to decrypt or sign messages, the authorized user must obtain the appropriate private key from the PKG. A caveat of this approach is that the PKG must be highly trusted, as it is capable of generating any user's private key and may therefore decrypt (or sign) messages without authorization.
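    To make the workflow concrete without reproducing the pairing-based mathematics of a real IBE scheme (e.g., Boneh-Franklin), the toy Python sketch below models only the Setup and Extract steps; the HMAC-based key derivation is a placeholder for illustration, not actual IBE:

    # Toy PKG sketch: Setup publishes a master public value, Extract issues
    # per-identity private keys. The HMAC derivation is a stand-in, NOT IBE.
    import hmac, hashlib, os

    class PrivateKeyGenerator:
        def __init__(self):
            self.master_key = os.urandom(32)   # retained master private key
            self.master_public = hashlib.sha256(self.master_key).hexdigest()

        def extract(self, identity: str) -> bytes:
            """Issue the private key for an identity such as an email address."""
            return hmac.new(self.master_key, identity.encode(),
                            hashlib.sha256).digest()

    pkg = PrivateKeyGenerator()
    alice_sk = pkg.extract("alice@example.com")
    # Anyone can address ciphertext to "alice@example.com" using only the
    # master public key plus the identity string; only the PKG can mint
    # alice_sk, which is exactly the key-escrow caveat noted above.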

  5. CONCLUSIONS

We have proposed a scheme for outsourcing the storage of data. The owner is capable not only of archiving and accessing the data stored by the CSP, but also of updating and scaling this data on the remote servers. After an update, authorized users receive the latest version of the data, and a TTP is able to determine which party is dishonest. The outsourced data is protected using broadcast encryption and decryption.

REFERENCES

    1. C. Erway, A. Küpçü, C. Papamanthou, and R. Tamassia, "Dynamic provable data possession," in Proceedings of the 16th ACM Conference on Computer and Communications Security, 2009, pp. 213–222.

    2. Q. Wang, C. Wang, J. Li, K. Ren, and W. Lou, "Enabling public verifiability and data dynamics for storage security in cloud computing," in Proceedings of the 14th European Conference on Research in Computer Security, 2009, pp. 355–370.

    3. A. F. Barsoum and M. A. Hasan, "Provable possession and replication of data over cloud servers," Centre for Applied Cryptographic Research, Report 2010/32, 2010. http://www.cacr.math.uwaterloo.ca/techreports/2010/cacr2010-32.pdf

    4. R. Curtmola, O. Khan, R. Burns, and G. Ateniese, "MR-PDP: Multiple-replica provable data possession," in Proceedings of the 28th IEEE International Conference on Distributed Computing Systems (ICDCS), 2008, pp. 411–420.

    5. A. F. Barsoum and M. A. Hasan, "On verifying dynamic multiple data copies over cloud servers," Cryptology ePrint Archive, Report 2011/447, 2011. http://eprint.iacr.org/

    6. K. D. Bowers, A. Juels, and A. Oprea, "HAIL: A high-availability and integrity layer for cloud storage," in CCS '09: Proceedings of the 16th ACM Conference on Computer and Communications Security, New York, NY, USA, 2009, pp. 187–198.

    7. Y. Dodis, S. Vadhan, and D. Wichs, "Proofs of retrievability via hardness amplification," in Proceedings of the 6th Theory of Cryptography Conference (TCC), 2009.

    8. A. Juels and B. S. Kaliski, "PORs: Proofs of retrievability for large files," in CCS '07: Proceedings of the 14th ACM Conference on Computer and Communications Security, 2007, pp. 584–597.

    9. H. Shacham and B. Waters, "Compact proofs of retrievability," in ASIACRYPT '08, 2008, pp. 90–107.

    10. M. Kallahalla, E. Riedel, R. Swaminathan, Q. Wang, and K. Fu, "Plutus: Scalable secure file sharing on untrusted storage," in Proceedings of FAST '03: File and Storage Technologies, 2003.
