A Research on Image Compression Techniques

DOI : 10.17577/IJERTV6IS080212



Manish Mishra

M.Tech Scholar,

Dr. APJ Abdul Kalam Technical University, Lucknow, UP

Dr. Md. Sanawer Alam

HOD,

Associate Professor (EIC), AIET, Lucknow, UP

Abstract: The advancement of digital technology has led to the development of easy-to-use devices and methods, especially in the field of communications and long-distance data transfer, yielding enormous growth of these technologies in every field. Scientists, researchers and innovators are constantly looking for new fields where the latest technologies in electronics, the internet, etc. can be applied. The increase in image and video content requires advances in media storage technology, compression techniques and the performance of transmission media; without them, the demand for high data storage capacity and faster transmission speeds will exceed the capabilities of current technologies. The main aim is to design a compression technique suitable for image processing, storage and transmission, while keeping the computational complexity tolerable for practical implementation. The basic rule of compression is to minimize the number of bits needed to represent an image. A digital image compression algorithm exploits the redundancy in an image so that the image can be represented using a smaller number of bits while still maintaining an acceptable visual quality. In this paper, a survey of image compression is presented.

Keywords: Image compression, redundancy, storage capacity, digital image

  1. INTRODUCTION

Digital image processing is the subset of the electronics domain wherein an image is converted to an array of small integers, called pixels (derived from "picture element"), representing a physical quantity such as scene radiance, stored in a digital memory, and processed by a computer or other digital hardware. Advancement in digital technology has led to the rise of visual communication. Nowadays we carry smartphones with built-in cameras that capture high-quality images and videos which, only a few years ago, required costly cameras and an expert photographer. The increasing demand for multimedia content such as digital images and video has led to great interest in research into compression techniques. The development of higher-quality and less expensive image acquisition devices has produced steady increases in both image size and resolution, and a consequent greater need for the design of efficient compression systems.

Despite the advancements made in media storage technology, compression techniques and the performance of transmission media, the demand for high data storage capacity and faster transmission speeds continues to exceed the capabilities of current technologies. The main aim is to design a compression technique suitable for image processing, storage and transmission, while keeping the computational complexity tolerable for practical implementation. The basic rule of compression is to minimize the number of bits needed to represent an image. A digital image compression algorithm exploits the redundancy in an image so that the image can be represented using a smaller number of bits while still maintaining an acceptable visual quality.

The various factors that motivate the need for image compression include:

• The network bandwidths currently available for transmission

• The large storage requirements of multimedia data

• Low-power devices such as handheld mobile phones, which have limited storage capacity

• The effect of computational complexity on practical implementation.

Some of the most prominent compression systems use the discrete cosine transform (DCT), as suggested by the Joint Photographic Experts Group (JPEG). The simplest JPEG system is baseline JPEG. The DCT system provides high-quality images at average bit rates, but shows blockiness at low bit rates.

Recently, wavelet-based systems have been extensively studied and developed for image compression applications. The discrete wavelet transform (DWT) is a form of sub-band coding. One implementation of the DWT exploits a filter bank to separate the signal into a number of frequency bands; each band is then quantized and encoded. Different artificial intelligence techniques, such as fuzzy logic and neural networks, have also been studied with respect to image compression. The main aim of this research work is to study the role of the fuzzy logic technique in enhancing the quality of an image obtained from DWT compression.
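To make the filter-bank idea concrete, the following is a minimal sketch, assuming the PyWavelets (pywt) library, which the paper does not prescribe: one level of 2-D DWT sub-band decomposition, a crude thresholding of the detail coefficients, and reconstruction. Quantization and encoding of each band would follow in a real codec.

```python
import numpy as np
import pywt

image = np.random.rand(256, 256)              # stand-in for a grayscale image

# One-level 2-D DWT with the Haar wavelet: approximation band plus
# horizontal, vertical and diagonal detail bands.
cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')

# Discard small detail coefficients (a crude stand-in for quantization).
threshold = 0.1
cH, cV, cD = [np.where(np.abs(b) > threshold, b, 0.0) for b in (cH, cV, cD)]

# Reconstruct the image from the (modified) sub-bands.
reconstructed = pywt.idwt2((cA, (cH, cV, cD)), 'haar')
```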

  2. IMAGE COMPRESSION

Image compression is achieved by removing the redundancy in the image. The redundancies in an image can be classified into three categories: psycho-visual redundancy, inter-pixel (spatial) redundancy and coding redundancy.

Psycho-visual redundancy: Images are normally intended for consumption by human eyes, which do not respond with equal sensitivity to all visual information. The relative importance of different image information components can be exploited to eliminate or reduce the data that is psycho-visually redundant. The process that removes or reduces psycho-visual redundancy is referred to as quantization.

Inter-pixel redundancy: Natural images have a high degree of correlation among their pixels. This correlation is referred to as inter-pixel or spatial redundancy and is removed by either predictive coding or transform coding (a minimal predictive-coding sketch is given after these definitions).

Coding redundancy: Variable-length codes based on the statistical model of an image, or of its processed version, exploit and remove the coding redundancy in the image.
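As an illustration of removing inter-pixel redundancy by predictive coding, the following is a minimal sketch, assuming a simple left-neighbour predictor (DPCM-style); the function names are illustrative, not taken from this paper. The residuals are typically small and would be entropy coded.

```python
import numpy as np

def row_dpcm_residuals(image: np.ndarray) -> np.ndarray:
    """Return prediction residuals: each pixel minus its left neighbour."""
    img = image.astype(np.int16)               # signed type for residuals
    residuals = img.copy()
    residuals[:, 1:] = img[:, 1:] - img[:, :-1]  # first column kept as-is
    return residuals

def row_dpcm_reconstruct(residuals: np.ndarray) -> np.ndarray:
    """Invert the residuals by cumulative summation along each row."""
    return np.cumsum(residuals, axis=1).astype(np.uint8)
```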

Direct transmission of video data requires a high-bit-rate (high-bandwidth) channel. When such a high-bandwidth channel is unavailable or uneconomical, compression techniques have to be used to reduce the bit rate while ideally maintaining the same visual quality. Similar arguments apply to storage media, where the concern is memory space. A video sequence contains a significant amount of redundancy within and between frames, and it is this redundancy that allows video sequences to be compressed. Within each individual frame, the values of neighboring pixels are usually close to one another. This spatial redundancy can be removed from the image without degrading the picture quality using intraframe techniques [7].

    Principles of Image Compression

The principles of image compression are based on information theory. The amount of information that a source produces is its entropy. The amount of information one receives from a source is equivalent to the amount of uncertainty that has been removed.

A source produces a sequence of symbols from a given symbol set. For each symbol, the product of the symbol probability and its logarithm is formed; the entropy is the negative sum of these products over all symbols in the set (see the expression below). Compression algorithms are methods that reduce the number of symbols used to represent the source information, therefore reducing the amount of space needed to store it or the amount of time necessary to transmit it for a given channel capacity. The mapping from the source symbols into fewer target symbols is referred to as compression, and the reverse mapping as decompression.
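In symbols, using the standard information-theory definition, for a source with M symbols of probabilities p_i the entropy is

H = -\sum_{i=1}^{M} p_i \log_2 p_i

measured in bits per symbol. For example, a source emitting two symbols with probabilities 0.9 and 0.1 has H = -(0.9 log2 0.9 + 0.1 log2 0.1) ≈ 0.469 bits per symbol, so it can in principle be coded with far fewer than one bit per symbol on average.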

Image compression refers to the task of reducing the amount of data required to store or transmit an image. At the system input, the image is encoded into its compressed form by the image coder. The compressed image may then be subjected to further digital processing, such as error control coding, encryption or multiplexing with other data sources, before being used to modulate the analog signal that is actually transmitted through the channel or stored in a storage medium. At the system output, the image is processed step by step to undo each of the operations that were performed on it at the system input. At the final step, the image is decoded into its original uncompressed form by the image decoder. If the reconstructed image is identical to the original image, the compression is said to be lossless; otherwise, it is lossy [7][10].

  3. COMPRESSION STANDARDS

Digital images and digital video are normally compressed in order to save space on hard disks and to speed up transmission. There are presently several compression standards used for network transmission of digital signals. Data sent by a camera using video standards contain a still image mixed with data describing changes, so that unchanged data (for instance the background) is not sent in every image. Consequently, the frame rate measured in frames per second (fps) can be much greater [11].

    Image compression techniques

Still images are simple and easy to send. However, it is difficult to obtain single images from a compressed video signal. A video signal uses less data to send or store a moving image, but it is not possible to reduce the frame rate using video compression. Sending single images is easier when using a modem connection or, in general, a narrow bandwidth.

Table 4.1: Compression standards

| Main compression standards for still images | Main compression standards for video signals |
| JPEG                                        | M-JPEG (Motion JPEG)                         |
| Wavelet                                     | H.261, H.263, etc.                           |
| JPEG 2000                                   | MPEG1                                        |
| GIF                                         | MPEG2                                        |
|                                             | MPEG3                                        |
|                                             | MPEG4                                        |

JPEG (Joint Photographic Experts Group)

A popular compression standard used exclusively for still images. Each image is divided into 8 x 8 pixel blocks; each block is then compressed individually. When a very high compression is used, the 8 x 8 blocks can actually be seen in the image. Due to the compression mechanism, the decompressed image is not identical to the image that was compressed; this is because the standard was designed considering the performance limits of human eyes. The degree of detail loss can be varied by adjusting the compression parameters. It can store up to 16 million colors [6].

    Wavelet

Wavelets are functions used to represent data or other functions. They analyze the signal at different frequencies with different resolutions. The standard is optimized for images containing data with sharp discontinuities. Wavelet compression transforms the entire image, unlike JPEG, and is more natural as it follows the shapes of the objects in the picture. Special software is necessary for viewing, since this is a non-standardized compression method.

    JPEG2000

    Based on Wavelet technology. Rarely used.

    GIF (Graphic Interchange Format)

A graphic format used widely for Web images. It is limited to 256 colors and is a good standard for images that are not too complex. It is not recommended for network cameras since the compression ratio is too limited.

M-JPEG (Motion JPEG)

This is not a separate standard but rather a rapid flow of JPEG images that can be viewed at a rate sufficient to give the illusion of motion. Each frame within the video is stored as a complete image in JPEG format. Single images do not interact among themselves. Images are then displayed sequentially at a high frame rate. This method produces high-quality video, but at the cost of large files.

        H.261, H.263 etc.

Standards approved by the ITU (International Telecommunication Union). They are designed for videoconference applications and produce images with a high degree of compression.

        DCT-Based Image Coding Standard

The idea of compressing an image is not new. The discovery of the DCT in 1974 was an important achievement for the research community working on image compression. The DCT can be regarded as a discrete-time version of the Fourier cosine series. It is a close relative of the DFT, a technique for converting a signal into elementary frequency components. Thus the DCT can be computed with a Fast Fourier Transform (FFT)-like algorithm in O(n log n) operations. Unlike the DFT, the DCT is real-valued and provides a better approximation of a signal with fewer coefficients. The DCT of a discrete signal x(n), n = 0, 1, ..., N-1, is defined as [7]:

X(u) = \sqrt{\frac{2}{N}} \, C(u) \sum_{n=0}^{N-1} x(n) \cos\left( \frac{(2n+1) u \pi}{2N} \right), \quad u = 0, 1, \ldots, N-1

where C(u) = 1/\sqrt{2} \approx 0.707 for u = 0 and C(u) = 1 otherwise.
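As a minimal sketch of the definition above, the following Python code computes the 1-D DCT directly from the formula and cross-checks it against SciPy's orthonormal DCT-II; the use of SciPy and the sample values are assumptions of this example, not part of the paper.

```python
import numpy as np
from scipy.fft import dct

def dct_direct(x: np.ndarray) -> np.ndarray:
    """X(u) = sqrt(2/N) * C(u) * sum_n x(n) cos((2n+1) u pi / (2N))."""
    N = len(x)
    n = np.arange(N)
    X = np.empty(N)
    for u in range(N):
        C = 1.0 / np.sqrt(2.0) if u == 0 else 1.0
        X[u] = np.sqrt(2.0 / N) * C * np.sum(
            x * np.cos((2 * n + 1) * u * np.pi / (2 * N)))
    return X

x = np.array([52.0, 55.0, 61.0, 66.0, 70.0, 61.0, 64.0, 73.0])  # one image row
assert np.allclose(dct_direct(x), dct(x, type=2, norm='ortho'))
```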

In 1992, JPEG established the first international standard for still image compression, in which the encoders and decoders are DCT-based. The JPEG standard specifies three modes, namely sequential, progressive and hierarchical, for lossy encoding, and one mode of lossless encoding. The baseline JPEG coder, which is sequential encoding in its simplest form, is briefly discussed here; the key processing steps in such an encoder and decoder for grayscale images are described below. Color image compression can be approximately regarded as compression of multiple grayscale images, which are either compressed entirely one at a time, or are compressed by alternately interleaving 8 x 8 sample blocks from each in turn. In this article, we focus on grayscale images only.

The DCT-based encoder essentially compresses a stream of 8 x 8 blocks of image samples. Each 8 x 8 block makes its way through each processing step and yields output in compressed form into the data stream. Because adjacent image pixels are highly correlated, the forward DCT (FDCT) processing step lays the foundation for achieving data compression by concentrating most of the signal in the lower spatial frequencies; many of the higher-frequency coefficients have zero or near-zero amplitude and need not be encoded. In principle, the DCT introduces no loss to the source image samples; it merely transforms them to a domain in which they can be more efficiently encoded.

After output from the FDCT, each of the 64 DCT coefficients is uniformly quantized in conjunction with a carefully designed 64-element quantization table (QT). At the decoder, the quantized values are multiplied by the corresponding QT elements to recover an approximation of the original unquantized values. After quantization, all of the quantized coefficients are ordered into a zigzag sequence; this ordering helps to facilitate entropy encoding by placing low-frequency non-zero coefficients before high-frequency coefficients. The DC coefficient, which contains a significant fraction of the total image energy, is differentially encoded.
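The following is a minimal sketch of the quantization, dequantization and zigzag-ordering steps just described; the 64-element quantization table used here is illustrative only, not the standard JPEG luminance table.

```python
import numpy as np

QT = np.arange(1, 65).reshape(8, 8)          # illustrative 64-element table

def quantize(dct_block: np.ndarray) -> np.ndarray:
    """Uniform quantization: divide each coefficient by its QT entry and round."""
    return np.round(dct_block / QT).astype(np.int32)

def dequantize(q_block: np.ndarray) -> np.ndarray:
    """Decoder side: multiply back to approximate the unquantized coefficients."""
    return q_block * QT

def zigzag(block: np.ndarray) -> np.ndarray:
    """Order an 8 x 8 block into the zigzag sequence (low to high frequency)."""
    idx = sorted(((r, c) for r in range(8) for c in range(8)),
                 key=lambda rc: (rc[0] + rc[1],
                                 rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))
    return np.array([block[r, c] for r, c in idx])
```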

Entropy coding (EC) achieves additional compression losslessly by encoding the quantized DCT coefficients more compactly based on their statistical characteristics. The JPEG proposal specifies both Huffman coding and arithmetic coding. The baseline sequential codec uses Huffman coding, but codecs with both methods are specified for all modes of operation. Arithmetic coding, though more complex, normally achieves 5-10% better compression than Huffman coding.
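As an illustration of the entropy-coding idea, the sketch below builds a Huffman prefix code from symbol frequencies. It demonstrates the general principle only; the JPEG-specific Huffman tables and symbol categories are more involved.

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a prefix code (symbol -> bit string) from a list of symbols."""
    freq = Counter(symbols)
    # Each heap entry: (frequency, tie-breaker, {symbol: code-so-far}).
    heap = [(f, i, {s: ''}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)       # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + code for s, code in c1.items()}
        merged.update({s: '1' + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes([0, 0, 0, 0, 1, 1, 2, 3])   # e.g. quantized coefficients
# Frequent symbols get shorter codes: {0: '0', 1: '10', 2: '110', 3: '111'}.
```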

  4. PERFORMANCE MEASUREMENT OF IMAGE COMPRESSION

There are three basic measurements for an image compression algorithm.

    1. Compression Efficiency

It is measured by the compression ratio, which is defined as the ratio of the size (number of bits) of the original image data to the size of the compressed image data.

    2. Complexity

The number of data operations required to perform the encoding and decoding processes measures the complexity of an image compression algorithm. The data operations include additions, subtractions, multiplications, divisions and shift operations.

    3. Distortion measurement (DM)

For a lossy compression algorithm, DM is used to measure how much information has been lost when a reconstructed version of a digital image is produced from the compressed data. The most common distortion measure is the Mean Square Error (MSE) between the original data and the reconstructed data. The Signal-to-Noise Ratio is also used to measure the performance of lossy compression algorithms (a minimal sketch of all three measures follows).
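The following is a minimal Python sketch of the three measures above for 8-bit grayscale images, using the standard definitions (compression ratio as a size ratio, MSE, and PSNR = 10 log10(peak^2 / MSE)); the function names are illustrative.

```python
import numpy as np

def compression_ratio(original_bits: int, compressed_bits: int) -> float:
    """Size of the original data divided by the size of the compressed data."""
    return original_bits / compressed_bits

def mse(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Mean square error between the original and reconstructed images."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original: np.ndarray, reconstructed: np.ndarray,
         peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    return 10.0 * np.log10(peak ** 2 / mse(original, reconstructed))
```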

Table 1: Comparison of Various Compression Techniques

| Properties        | DCT    | DWT    | DCT+DWT |
| Compression Ratio | 26.546 | 30.237 | 52.539  |
| PSNR              | 48.248 | 40.232 | 27.592  |

Table 1 shows the comparison of various techniques based on the compression ratio and the Peak Signal to Noise Ratio (PSNR) values.

5. CONCLUSION

Image compression is an important task, and it is becoming increasingly important because of the large amount of digital content available in the form of digital images, video, etc. In this paper, a survey of the area of image compression and the various compression standards has been presented. An algorithm to provide image compression is envisaged as a future extension of the work presented here.

REFERENCES

1. M. J. Weinberger, G. Seroussi and G. Sapiro, The LOCO-I Lossless Image Compression Algorithm: Principles and Standardization into JPEG-LS, IEEE Transactions on Image Processing, Vol. 9, No. 8, 2000, pp. 1309-1324.

2. V. H. Gaidhane, Y. V. Hote and V. Singh, A New Approach for Estimation of Eigenvalues of Images, International Journal of Computer Applications, Vol. 26, No. 9, 2011, pp. 1-6.

3. S.-G. Miaou and C.-L. Lin, A Quality-on-Demand Algorithm for Wavelet-Based Compression of Electrocardiogram Signals, IEEE Transactions on Biomedical Engineering, Vol. 49, No. 3, 2002, pp. 233-239.

4. S. N. Sivanandam, S. Sumathi and S. N. Deepa, Introduction to Neural Networks Using MATLAB 6.0, 2nd Edition, Tata McGraw-Hill, Boston, 2008.

5. http://en.wikipedia.org/wiki/Neuralnetwork.

6. R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison Wesley, Reading, MA, 2004.

7. R. C. Gonzalez, R. E. Woods and S. L. Eddins, Digital Image Processing Using MATLAB, Pearson Education, Dorling Kindersley, London, 2003.

8. S.-T. Bow, B. T. Bow and S. T. Bow, Pattern Recognition and Image Processing, Revised and Expanded, 2nd Edition, CRC Press, Boca Raton, 2002.

9. M. Nixon and A. Aguado, Feature Extraction & Image Processing, 2nd Edition, Academic Press, Cambridge, 2008, pp. 385-398.

10. Pizurica and Philips, Digital Image Processing, Ghent University, Lecture Notes in Computer Science, January 2007.

11. A. Laha, N. R. Pal and B. Chanda, Design of Vector Quantizer for Image Compression Using Self-Organizing Feature Map and Surface Fitting, IEEE Transactions on Image Processing, Vol. 13, No. 10, October 2004, pp. 1291-1303. doi:10.1109/TIP.2004.833107

12. P. Dixit and M. Dixit, Study of JPEG Image Compression Technique Using Discrete Cosine Transformation, International Journal of Interdisciplinary Research and Innovations (IJIRI), Vol. 1, Issue 1, pp. 32-35, October-December 2013.
