A Comparative Study of Various Lossy Image Compression Techniques

DOI: 10.17577/IJERTCONV2IS03015


Gunjan Mathur1, Rohit Mathur2
Department of Electronics and Communication
Jodhpur Institute of Engineering and Technology

Jodhpur, India

1er_gunjan_mathur@yahoo.com, 2r4rohitmathur@gmail.com

Mridul Kumar Mathur

Department of Computer Science

Lachoo Memorial College of Science and Technology Jodhpur, India

mathur_mridul@yahoo.com

Abstract— In the present scenario, the storage and transmission of digital images is a very challenging issue. Image compression is a field of research in which the size of a digital image is reduced by eliminating certain redundancies. There are two types of image compression techniques: lossless and lossy. In this paper, the authors discuss some lossy image compression techniques and provide a comparative study of these techniques for grayscale image compression.

Keywords— Digital Image; Compression; DCT; DWT; Vector Quantization; Fractal Compression

  1. INTRODUCTION

    The use of digital images is increasing tremendously in this era of growing technology. A digital image is represented with the help of pixels, which can be considered as small dots on the screen. Each pixel of the digital image signifies the color at a single point in the image (for colored images) or the gray level (for monochrome images). A digital image is a rectangular array of pixels sometimes called a bitmap.

    Despite rapid progress in mass-storage density, processor speeds, and digital communication system performance, demand for data storage capacity and data-transmission bandwidth continues to outstrip the capabilities of available technologies.

    The basic aim of image compression is to reduce the number of bits required to store an image and the time taken to transmit it. Every image consists of two main components: information and redundancy. Information is the essential data which must be preserved, as it enables the reconstruction of the original image. Redundancy is the portion of data which may be removed from the image without disturbing that reconstruction. Image compression is achieved by the removal of these data redundancies.

    1. Types of Image Compression Techniques:

      The image compression algorithms can be broadly classified into two categories:

      1. Lossless Compression:

        If the reconstructed image after compression is exactly identical to the original image, then the compression is known as lossless compression. Lossless compression is required in applications with strict fidelity requirements, such as medical imaging.

      2. Lossy Compression:

      If the reconstructed image after compression is not exactly the same as the original image, then the compression is known as lossy compression. In lossy compression, some part of the data is always lost. The amount of compression is higher with lossy techniques than with lossless techniques, but the quality of the reconstructed image is better with lossless compression.

      Lossy compression is extensively used in practice, as the quality of the reconstructed image is acceptable to most users.

    2. PERFORMANCE INDICATORS

      To measure the performance of compression technique, some performance indicators are used as follows:

      1. Compression Ratio

        Compression ratio is the ratio of the number of bits required to represent the original image to the number of bits required to represent the compressed image. As the compression ratio increases, the quality of the image is compromised. Lossy compression techniques achieve higher compression ratios than lossless compression techniques.
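
        For example, if a 256 × 256 pixel, 8-bit grayscale image (524,288 bits) occupies 65,536 bits after compression, the compression ratio is 524,288 / 65,536 = 8:1.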

      2. Mean Square Error:

        Mean square error (MSE) is the cumulative squared error between the compressed image and the original image. For lower distortion and higher output quality, the mean square error must be as low as possible. For an original image I and a reconstructed image K, both of size M × N, it may be calculated using the following expression:

        MSE = \frac{1}{MN} \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \left[ I(i,j) - K(i,j) \right]^2

      3. Peak Signal to Noise Ratio (PSNR):

    Peak signal to noise ratio is the ratio of the maximum possible power of the signal to the power of the distorting noise. Here the signal is the original image and the noise is the error in reconstruction. For better reconstruction quality, the peak signal to noise ratio must be high; an increase in the peak signal to noise ratio, however, results in a decrease in the compression ratio.

    Therefore, a balance must be obtained between the compression ratio and the peak signal to noise ratio for effective compression. For 8-bit images, the peak signal to noise ratio (in dB) may be calculated as:

    PSNR = 10 \log_{10}\left(\frac{MAX_I^2}{MSE}\right) = 20 \log_{10}\left(\frac{255}{\sqrt{MSE}}\right)

    where MAX_I = 255 is the maximum possible pixel value.
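
    As a concrete illustration, the following is a minimal sketch of both measures for 8-bit grayscale images stored as NumPy arrays (the function names are illustrative):

```python
import numpy as np

def mse(original, reconstructed):
    """Mean square error between two equal-sized grayscale images."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, reconstructed, max_value=255.0):
    """Peak signal to noise ratio in dB, for images with peak pixel value max_value."""
    err = mse(original, reconstructed)
    if err == 0:
        return float('inf')  # identical images: no distortion
    return 10.0 * np.log10(max_value ** 2 / err)
```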

  2. LOSSY IMAGE COMPRESSION TECHNIQUES

    1. Discrete Cosine Transform

      A discrete cosine transform (DCT) represents a finite sequence of data points as a sum of cosine functions oscillating at different frequencies. The DCT converts an image from the spatial domain to the frequency domain in order to de-correlate its pixels. Only the important frequency components are retained, and these are used to reconstruct the image. The reconstructed image has distortion up to an acceptable level.

      Fig. 1. Block diagram of a DCT-based encoder

      An image to be compressed is divided into 32×32 pixel blocks, and the DCT of each block's pixel values is computed. The DCT coefficients of the image blocks are then quantized; a larger quantization step provides a larger compression ratio but simultaneously leads to larger losses. The quantized DCT coefficients are then divided into bit-planes, which are coded in order from the higher bits to the lower ones. While coding each successive plane, the bit values of the earlier coded planes are taken into account, and an individual probability model is used for each group of bits during dynamic arithmetic coding.
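
      The bit-plane decomposition and dynamic arithmetic coding stages are omitted in the minimal sketch below, which shows only the transform-and-quantize core of such a coder using NumPy and SciPy; the 32×32 block size follows the description above, while the quantization step q and the function names are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def encode_block(block, q):
    """Forward 2-D DCT of one block, followed by uniform quantization with step q."""
    coeffs = dctn(block.astype(np.float64), norm='ortho')
    return np.round(coeffs / q).astype(np.int32)

def decode_block(qcoeffs, q):
    """Dequantize the coefficients and invert the 2-D DCT, clipping to the 8-bit range."""
    return np.clip(idctn(qcoeffs * float(q), norm='ortho'), 0, 255)

def dct_codec(image, block=32, q=16.0):
    """Compress and immediately reconstruct an image whose sides are multiples of `block`."""
    out = np.empty(image.shape, dtype=np.float64)
    for r in range(0, image.shape[0], block):
        for c in range(0, image.shape[1], block):
            qc = encode_block(image[r:r+block, c:c+block], q)
            out[r:r+block, c:c+block] = decode_block(qc, q)
    return out.astype(np.uint8)
```

      Increasing q mimics the trade-off described above: the quantized coefficients need fewer bits, but the reconstruction error grows.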

      The one-dimensional DCT of a sequence of N numbers p(x), x = 0, 1, 2, ..., N-1, is defined as:

      C(u) = \alpha(u) \sum_{x=0}^{N-1} p(x) \cos\left[\frac{(2x+1)u\pi}{2N}\right], \quad u = 0, 1, ..., N-1

      The inverse discrete cosine transform is given by:

      p(x) = \sum_{u=0}^{N-1} \alpha(u) C(u) \cos\left[\frac{(2x+1)u\pi}{2N}\right]

      Here,

      \alpha(u) = \begin{cases} \sqrt{1/N}, & u = 0 \\ \sqrt{2/N}, & u \neq 0 \end{cases}

      The following equation computes the (i, j)-th entry of the 2-D DCT of an N × N image block p:

      D(i,j) = \alpha(i)\alpha(j) \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} p(x,y) \cos\left[\frac{(2x+1)i\pi}{2N}\right] \cos\left[\frac{(2y+1)j\pi}{2N}\right]
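
      As a sanity check on the definition above, the following NumPy sketch computes a single 2-D DCT coefficient directly from the formula; for the orthonormal DCT it should agree with library routines such as SciPy's scipy.fft.dctn(p, norm='ortho').

```python
import numpy as np

def alpha(u, N):
    """Normalization factor of the orthonormal DCT."""
    return np.sqrt(1.0 / N) if u == 0 else np.sqrt(2.0 / N)

def dct2_entry(p, i, j):
    """(i, j)-th 2-D DCT coefficient of an N x N block p, straight from the definition."""
    N = p.shape[0]
    x = np.arange(N)
    ci = np.cos((2 * x + 1) * i * np.pi / (2 * N))  # row cosine basis
    cj = np.cos((2 * x + 1) * j * np.pi / (2 * N))  # column cosine basis
    return alpha(i, N) * alpha(j, N) * np.sum(p * np.outer(ci, cj))
```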

    2. Discrete Wavelet Transform

      Wavelets are functions defined over a finite interval and having an average value of zero. The main purpose of the wavelet transform is to represent an arbitrary function as a superposition of a set of such wavelets, or basis functions. The discrete wavelet transform of a finite-length signal x(n) with N components is expressed by an N × N matrix.
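
      For instance, assuming the orthonormal Haar wavelet, one level of the DWT of a length-4 signal is multiplication by the 4 × 4 matrix

      W = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 1 & -1 & 0 & 0 \\ 0 & 0 & 1 & -1 \end{pmatrix}

      whose first two rows produce the approximation (average) coefficients and whose last two rows produce the detail (difference) coefficients.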

      For image compression, the DWT is applied to the image and the resulting wavelet coefficients are thresholded. Thresholding discards certain wavelet coefficients, so the value of the threshold affects the quality of the compressed image. Thresholding can be of two types:

      1. Hard threshold:

        If x is a wavelet coefficient and t is the threshold value, then

        D_{hard}(x) = \begin{cases} x, & |x| \geq t \\ 0, & |x| < t \end{cases}

        i.e. all coefficients whose magnitude is less than the threshold t are set to zero.

      2. Soft threshold:

        In this case, all coefficients x with magnitude less than the threshold t are mapped to zero, and t is then subtracted from the magnitude of the remaining coefficients. This is depicted by the following equation:

        D_{soft}(x) = \begin{cases} \operatorname{sign}(x)(|x| - t), & |x| \geq t \\ 0, & |x| < t \end{cases}

        Usually, soft thresholding gives a better peak signal to noise ratio (PSNR) than hard thresholding.
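
        A minimal NumPy sketch of the two thresholding rules is given below; in practice they are applied to coefficients produced by a wavelet library (for example, PyWavelets' pywt.dwt2), which is assumed but not shown here.

```python
import numpy as np

def hard_threshold(coeffs, t):
    """Keep coefficients with magnitude >= t unchanged; zero the rest."""
    return np.where(np.abs(coeffs) >= t, coeffs, 0.0)

def soft_threshold(coeffs, t):
    """Zero small coefficients and shrink the survivors toward zero by t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```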

    3. Vector Quantizer Compression

      The fundamental idea of VQ for image compression is to establish a codebook of code vectors such that each code vector can represent a group of image blocks of size m × m (m = 4 is commonly used). An image, or a set of images, is first partitioned into m × m non-overlapping blocks, which are represented as m²-tuple vectors called training vectors. The number of training vectors can be very large. The goal of codebook design is to derive a small set of representative vectors, called code vectors, from the set of training vectors; codebooks of size 256 or 512 are typical. The encoding procedure then looks up the closest code vector in the codebook for each non-overlapping m × m block of the image to be encoded.
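
      Codebooks of this kind are usually trained with the LBG (generalized Lloyd) algorithm. The following NumPy sketch of codebook training and block encoding is only illustrative: the k-means style update, the helper names, and the default parameters are assumptions, not the exact procedure of any particular coder.

```python
import numpy as np

def train_codebook(vectors, codebook_size=256, iterations=10, seed=0):
    """Crude LBG/k-means training of a codebook on m*m-dimensional training vectors."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)].astype(np.float64)
    for _ in range(iterations):
        # Assign every training vector to its nearest code vector.
        dist = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        nearest = np.argmin(dist, axis=1)
        for k in range(codebook_size):
            members = vectors[nearest == k]
            if len(members):
                codebook[k] = members.mean(axis=0)  # move code vector to the centroid
    return codebook

def encode_blocks(image, codebook, m=4):
    """Map each non-overlapping m x m block to the index of its closest code vector."""
    h, w = (image.shape[0] // m) * m, (image.shape[1] // m) * m
    blocks = (image[:h, :w].astype(np.float64)
              .reshape(h // m, m, w // m, m).swapaxes(1, 2).reshape(-1, m * m))
    dist = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :], axis=2)
    return np.argmin(dist, axis=1)  # one codebook index per block
```

      Only the index map and the codebook need to be stored, so with a 256-entry codebook each 4 × 4 block costs just 8 bits.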

    4. Fractal Compression

    A fractal compression algorithm first partitions an image into non-overlapping 8×8 blocks, called range blocks, and forms a domain pool containing all possibly overlapping 16×16 blocks, each associated with eight isometries from reflections and rotations, called domain blocks. For each range block, it exhaustively searches the domain pool for the best-matched domain block, i.e. the one with the minimum squared error after a contractive affine transform is applied to the domain block. The fractal compressed code for a range block consists of the quantized contractivity coefficient of the affine transform, an offset (the mean of the pixel gray levels in the range block), the position of the best-matched domain block, and its type of isometry. Decoding finds the fixed point of these transforms, the decoded image, by starting from an arbitrary initial image: the stored local affine transform is applied to the domain block corresponding to the position of each range block until all of the decoded range blocks are obtained, and the procedure is repeated iteratively until it converges (usually in no more than 8 iterations).
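
    The following NumPy sketch shows the encoder's exhaustive search for a single 8×8 range block; the eight isometries and the coefficient quantization are omitted for brevity, and the least-squares fit of the scaling s and offset o is one standard way to realize the contractive affine transform, so the sketch is illustrative rather than the exact coder of any reference.

```python
import numpy as np

def shrink(domain):
    """Average 2x2 pixel groups to reduce a 16x16 domain block to range size 8x8."""
    return domain.reshape(8, 2, 8, 2).mean(axis=(1, 3))

def best_domain(range_block, image, step=8):
    """Exhaustively search the domain pool for the best affine match to one range block."""
    r = range_block.astype(np.float64).ravel()
    best = (None, 0.0, 0.0, np.inf)  # (position, scale s, offset o, squared error)
    h, w = image.shape
    for y in range(0, h - 16 + 1, step):
        for x in range(0, w - 16 + 1, step):
            d = shrink(image[y:y+16, x:x+16].astype(np.float64)).ravel()
            dc = d - d.mean()
            var = np.dot(dc, dc)
            s = np.dot(dc, r - r.mean()) / var if var > 0 else 0.0
            s = np.clip(s, -1.0, 1.0)      # keep the transform contractive
            o = r.mean() - s * d.mean()    # offset computed from the block means
            err = np.sum((s * d + o - r) ** 2)
            if err < best[3]:
                best = ((y, x), s, o, err)
    return best
```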

  3. CONCLUSION

TABLE I. COMPARISON OF VARIOUS IMAGE COMPRESSION TECHNIQUES

Algorithm | Compression Ratio | PSNR (dB) | Comments
Wavelet   | >= 32             | 138.7851  | High compression ratio
DCT       | 30.0              | 24.60     | Low image quality
VQ        | < 32              | 29.28     | Not suitable for low bit rate compression
Fractal   | >= 16             | 29.04     | May be used for low bit rate compression

It is concluded from this study that wavelet-based compression techniques provide high compression ratios and are strongly recommended for lower bit rates. Discrete cosine transform based approaches underlie the widely used JPEG standard and may be used with an adaptive quantization table for better compression. The vector quantization approach is simple, but not suitable for low bit rate compression. Fractal approaches may be used for low bit rate compression, particularly when combined with their resolution-free decoding.

