Survey of Image Compression using Block Partition Technique and Its FPGA Implementation

DOI : 10.17577/IJERTV11IS040054


Krishna Kant Sharma1, Prof. Satyarth Tiwari2, Prof. Amrita Pahadia3

1M. Tech. Scholar, 2Guide, 3Co-guide

Department of Electronics and Communication, Bhabha Engineering Research Institute, Bhopal

Abstract: Image communication in web applications has become convenient because of highly developed compression tools. A user typically inspects an image preview and carefully adjusts the quality and optimization settings until reaching the point where file size and visual quality are both as good as they can be. This paper surveys a method that uses modified singular value decomposition (SVD) and a block partition method for grayscale and color image compression. This hybrid method uses a modified rank-one-updated SVD as a pre-processing step for the block partition method to increase the quality of the reconstructed image. The high energy compaction of the SVD process offers high image quality with less compression and requires a larger number of bits for reconstruction. The method combines SVD and block truncation for image compression and is tested with several test images without arithmetic coding and JPEG2000. The method improves the quality of reconstruction without altering the compression rates of the SPIHT algorithm.

Keywords: Singular Value Decomposition (SVD), Block Partition Method, Bit Map, Multi-level, Quantization

  1. INTRODUCTION

    In the present scenario of satellite and mobile communication, an enormous amount of audio, image and video data needs to be handled efficiently. It is well known that image and video data require large storage space and high transmission rates during communication [1]. Representing an image with a reduced number of bits will clearly speed up communication; however, representing an image with fewer bits also reduces its perceptual quality. Hence, image compression technology has moved towards the design of hybrid algorithms that achieve high-quality images at low bit-rate transmission.

    Transform-based image compression is becoming more popular than direct and parametric-extraction-based compression techniques because of its flexibility. It transforms the signal into a few highly de-correlated expansion coefficients, which reduces the redundancy in the image representation and increases both the compression and the quality of the reconstructed image [2]. The most popular compression algorithms, JPEG and JPEG2000, introduced by the Joint Photographic Experts Group, use the Discrete Cosine Transform (DCT) and the wavelet transform

    for image compression. DCT-based compression techniques suffer from blocking artifacts, but the multi-resolution and overlapping nature of the wavelet transform alleviates blocking artifacts and achieves superior energy compaction [3]. The wavelet-based Embedded Zero Tree Wavelet (EZW) coding produces unavoidable artifacts during low bit-rate transmission and is a complex process that requires more storage space. To reduce the complexity of EZW, an efficient wavelet method, Set Partitioning In Hierarchical Trees (SPIHT), was introduced. This algorithm produces an embedded bit-stream of wavelet coefficients with decreasing threshold values. It uses three lists to store significant and insignificant sets and then encodes the most significant pixels for reconstruction. Increasing the number of bits in the encoded bit stream while neglecting a few significant bits can also reduce the image quality. Hence, prior scanning of significant coefficients before encoding ensures a gradual improvement in the quality of the image [4].

    A well-known decomposition algorithm, SVD, factorizes the image into the product of three matrices, U S V^T; refactoring the matrix with a small set of singular values allows image compression. Because of its high-quality reconstruction, it has been used as a pre-processing step for many image compression algorithms [5]. A hybrid method of SVD and EZW proposed for ECG signal compression shows significant improvement in compression ratio and excellent reconstruction quality at low bit rates. Another lossy compression algorithm using SVD and Wavelet Difference Reduction (WDR) shows some improvement in quality.
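    As an illustration of how SVD-based low-rank approximation compresses an image, the following minimal sketch keeps only the k largest singular values; it uses plain truncated SVD in NumPy, not the modified rank-one-updated SVD of the surveyed method, and the function name is invented for this example.

import numpy as np

def svd_compress(image, k):
    # Decompose the image as U * S * V^T and keep only the k largest
    # singular values (the energy-compaction step).
    U, s, Vt = np.linalg.svd(image.astype(float), full_matrices=False)
    approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    return np.clip(approx, 0, 255)

# Example: reconstruct a random 64 x 64 "image" from only 8 singular values.
img = np.random.randint(0, 256, (64, 64)).astype(float)
rank8 = svd_compress(img, k=8)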

  2. LITERATURE SURVEY

    Julio Cesar Stacchini de Souza et al. [2017]: Electrical distribution systems have been experiencing many changes in recent times. Advances in metering infrastructure and the deployment of a large number of smart meters in the grid produce a large volume of data that is required for many different applications. Despite the significant investments taking place in the communications infrastructure, it remains a bottleneck for the implementation of some applications. This paper presents a methodology for lossy data compression in smart distribution systems using the singular value decomposition technique. The proposed method is capable of significantly reducing the volume of data to be transmitted through the communications network while accurately reconstructing the original data. These features are illustrated by results from tests carried out using real data collected from metering devices at many different substations.

    Sunwoong Kim et al. [2016]: Color and multispectral image compression using an enhanced block truncation code is proposed. The technique is based on the standard deviation and the mean. It is applied to a satellite image, which is reshaped and divided into various sub-blocks. After the mean value of each sub-block is calculated, all pixels in the sub-block are compared to the mean, and each pixel value is replaced by a binary number accordingly. Finally, the MSE, PSNR and compression ratio are calculated for the enhanced block truncation code applied to the satellite image.

    C. Senthil Kumar et al. [2016]: Image compression plays a vital role in saving memory storage space and saving time when transmitting images over a network. Color and multispectral images are considered as input for the compression. The proposed Enhanced Block Truncation Coding (EBTC) is applied to each component of the color or multispectral image. The component image is divided into various sub-blocks. After evaluating the mean values, the number of bits is reduced by Enhanced Block Truncation Coding. Finally, a compression ratio table is generated using parameters such as MSE, SNR and PSNR. The proposed method is implemented on standard color and multispectral images using MATLAB Version 8.1 (R2013a).

    Jing-Ming Guo et al. [2014]: Block truncation coding (BTC) has been considered a highly efficient compression technique for many years. The proposed dot-diffused BTC (DDBTC) provides excellent processing efficiency by exploiting the inherent parallelism of dot diffusion, and excellent image quality can also be obtained by co-optimizing the class matrix and diffusion matrix of the dot diffusion. According to the experimental results, the proposed DDBTC is superior to the former error-diffused BTC in terms of various objective image quality assessment methods as well as processing efficiency.


    Jayamol Mathews et al. [2013]: With emerging multimedia technology, image data is being generated at high volume. It is therefore important to reduce image file sizes for storage and effective communication. Block Truncation Coding (BTC) is a lossy image compression technique which uses a moment-preserving quantization method for compressing digital gray-level images. Even though this method retains the visual quality of the reconstructed image with a good compression ratio, it shows artifacts such as the staircase effect and raggedness near edges. A set of advanced BTC variants reported in the literature were studied, and it was found that although their compression efficiency is good, the quality of the image has to be improved. A modified Block Truncation Coding using a max-min quantizer (MBTC) is proposed to overcome the above-mentioned drawbacks. In conventional BTC, quantization is based on the mean and standard deviation of the pixel values in each block. In the proposed method, instead of the mean and standard deviation, the average of the maximum, minimum and mean of the block of pixels is taken as the threshold for quantization. Experimental analysis shows an improvement in the visual quality of the reconstructed image by reducing the mean square error between the original and the reconstructed image. Since this method involves a small number of simple computations, the time taken by the algorithm is also much less than that of BTC.
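    For illustration only, the short sketch below contrasts the conventional BTC threshold (the block mean) with the MBTC threshold (the average of the block maximum, minimum and mean); it is not taken from the cited paper, and the function names are invented.

import numpy as np

def btc_threshold(block):
    # Conventional BTC: the threshold is simply the block mean.
    return block.mean()

def mbtc_threshold(block):
    # MBTC: average of the block maximum, minimum and mean.
    return (block.max() + block.min() + block.mean()) / 3.0

block = np.array([[12, 200, 35, 40],
                  [90, 220, 15, 60],
                  [75, 130, 25, 45],
                  [10, 240, 55, 80]], dtype=float)
print(btc_threshold(block))   # plain mean
print(mbtc_threshold(block))  # (max + min + mean) / 3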

    Seddeq E. Ghrare et al. [2013]: With the continuing growth of modern communication technologies, demand for image data compression is increasing rapidly. Techniques for achieving data compression can be divided into two basic approaches: spatial coding and transform coding. This paper presents a method for compressing digital images using a hybrid compression method based on Block Truncation Coding (BTC) and the Walsh-Hadamard Transform (WHT). The objective of this hybrid approach is to achieve a higher compression ratio by applying BTC and WHT together. Several grayscale test images are used to evaluate the coding efficiency and performance of the hybrid method, which is compared with BTC and WHT individually. It is shown that the proposed method generally gives better results.

    Ki-Won Oh et al. [2012]: This paper presents a parallel implementation of hybrid vector-quantizer-based block truncation coding using the Open Computing Language (OpenCL). Processing dependency in the conventional algorithm is removed by partitioning the input image and modifying the neighboring reference pixel configuration. Experimental results show that the parallel implementation drastically reduces processing time, by a factor of 6 to 7, with significant visual quality improvement.

  3. LOSSY IMAGE COMPRESSION SYSTEM

    In transform-based image compression, the image is subjected to a transformation and the transformed data are then encoded to produce the compressed bit stream. The general structure of a transform-based image compression system is shown in Figure 1. There are two versions of transform coding: one is frame based and the other is block based. The block-based approach requires fewer computations and allows adaptive quantization of the coefficients.


    In Figure 1, X represents the original image pixel values and Yi denotes the transformed values of the original image. All the transformed coefficients are then quantized and entropy coded; the coded values are represented by Ci. These compressed bit streams are either transmitted or stored. The reconstructed image can be obtained by decompressing the coded signal. The goal is to design the system so that the coded signal Ci can be represented with fewer bits than the original image X [6].

    Figure 1: Transform-based image compression system

    In the 1980s, almost all transform-based compression approaches used the DCT. Later, the trend moved to compression schemes based on the DWT, which overcomes the blocking artifacts associated with the DCT. Perhaps the most significant change over conventional coding is achieved by the use of arithmetic coders instead of simple Huffman coders, which increases the compression ratio by 5-8%. However, the multimedia content of daily life is growing exponentially, so a performance gain of around 10% over ten years does not satisfy the demand. Researchers have therefore been searching for new solutions to the problem of stagnating image compression performance.

    Finally, the quantized coefficients are coded to produce the compressed bit stream. The coding procedure typically exploits a statistical model in order to code symbols that have a higher probability of occurrence with fewer bits. In doing so, the size of the compressed bit stream is reduced. Assuming that the transform used is truly invertible, the only potential source of information loss is the coefficient quantization, as the quantized coefficients are coded in a lossless manner [7]. The decompression procedure essentially mirrors the procedure used for compression: the compressed bit stream is decoded to obtain the quantized transform coefficients, and then the inverse of the transform used during compression is applied to obtain the reconstructed image.
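    As a rough, illustrative rendering of the block-based transform coding pipeline described above (not the implementation used in this work), the sketch below applies an 8 x 8 2-D DCT to a block, uniformly quantizes the coefficients, and then inverts both steps to reconstruct the block; the block size and quantization step are assumptions made for the example.

import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix of size n x n.
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def encode_block(block, step=16.0):
    # Transform the block (X -> Yi) and uniformly quantize (Yi -> Ci).
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T
    return np.round(coeffs / step)

def decode_block(quantized, step=16.0):
    # Dequantize and apply the inverse transform to reconstruct the block.
    C = dct_matrix(quantized.shape[0])
    return C.T @ (quantized * step) @ C

block = np.random.randint(0, 256, (8, 8)).astype(float)
reconstructed = decode_block(encode_block(block))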

  4. PROPOSED METHODOLOGY

    The encoder and decoder blocks of the proposed multi-level block truncation code algorithm are shown in Figure 2. In the encoder, the original image is divided into three parts, i.e. the R component, the G component and the B component. Each R, G and B component of the image is divided into non-overlapping blocks of equal size, and a threshold value is calculated for each block.

    The threshold value is the average of the maximum value (max) of the k × k pixel block, the minimum value (min) of the k × k pixel block and the mean value (m1) of the k × k pixel block, where k represents the block size of the color image. So the threshold value is:

    T = (max + min + m1) / 3        (1)

    Figure 2: Block Diagram of Proposed Algorithm

    Each threshold value is passed through the quantization block. Quantization is the process of mapping a set of fractional input values to whole numbers: if the fractional part is less than 0.5, the value is replaced by the previous whole number, and if it is greater than 0.5, it is replaced by the next whole number. Each quantized value is then passed through the bit map block, in which each block is represented by a map of 0 and 1 bits: if the threshold value is less than or equal to the input image value, the pixel value of the image is represented by 0, and if the threshold value is greater than the input image value, the pixel value is represented by 1.

    The bit map is directly connected to the high and low components of the proposed multi-level BTC decoder. The high (H) and low (L) components convert the 1 and 0 bit-map values back to high and low pixel values and arrange the entire block:

    L = (1/q) Σ Wi,  for Wi ≤ T        (2)

    H = (1/p) Σ Wi,  for Wi > T        (3)

    where Wi represents the pixels of the input color image block, q is the number of zeros in the bit plane and p is the number of ones in the bit plane. In the combine block of the decoder, the values obtained from the pattern-fitting blocks of the individual R, G and B components are combined, and then all the individual combined blocks are merged into a single block. Finally, the compressed image and all the parameters relating to that image are obtained.
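    To make the per-block flow concrete, the following minimal sketch encodes and decodes a single block using the threshold of equation (1), the bit map, and the L and H values of equations (2) and (3); it is an illustrative Python rendering under the assumptions above, not the authors' MATLAB or FPGA implementation, and the function names are invented.

import numpy as np

def btc_encode_block(block):
    # Threshold from equation (1): average of max, min and mean of the block.
    T = (block.max() + block.min() + block.mean()) / 3.0
    bitmap = (block > T).astype(np.uint8)        # 1 where pixel > T, else 0
    low = block[bitmap == 0]
    high = block[bitmap == 1]
    L = low.mean() if low.size else 0.0          # equation (2)
    H = high.mean() if high.size else 0.0        # equation (3)
    return bitmap, round(L), round(H)

def btc_decode_block(bitmap, L, H):
    # Reconstruct the block by replacing 0s with L and 1s with H.
    return np.where(bitmap == 1, H, L)

block = np.random.randint(0, 256, (4, 4)).astype(float)
bitmap, L, H = btc_encode_block(block)
reconstructed = btc_decode_block(bitmap, L, H)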

  5. IMAGE QUALITY MEASURES

    A major task in evaluating an image compression system is characterizing the amount of degradation in the reconstructed image. In the case of lossy compression, the reconstructed image is only an approximation to the original; the difference between the original and reconstructed signal is referred to as approximation error or distortion. Generally, the performance is evaluated in terms of compression ratio and image fidelity. A good image compression algorithm results in a high compression ratio and high fidelity; unfortunately, both requirements cannot be achieved simultaneously. Although many metrics exist for quantifying distortion, it is most commonly expressed in terms of the mean squared error (MSE) or the peak signal-to-noise ratio (PSNR), defined in equations (4) and (5). It is assumed that the digital image is represented as an N1 × N2 matrix, where N1 and N2 denote the number of rows and columns of the image respectively, and f(i, j) and g(i, j) denote the pixel values of the original image before compression and the degraded image after compression respectively.

    Mean Square Error (MSE):

    MSE = (1 / (N1 N2)) Σj=1..N2 Σi=1..N1 ( f(i, j) - g(i, j) )^2        (4)

    Peak Signal to Noise Ratio (PSNR) in dB:

    PSNR = 10 log10 ( 255^2 / MSE )        (5)

    Clearly, smaller MSE and larger PSNR values correspond to lower levels of distortion. Although these metrics are frequently used, MSE and PSNR do not always correlate well with image quality as perceived by the human visual system. Hence, it is desirable to supplement any objective lossy compression performance measurement with subjective tests, for example the Mean Opinion Score (MOS), to ensure that the objective results are not misleading.

    Sometimes compression is quantified by stating the Bit Rate (BR) achieved by the compression algorithm, expressed in bpp (bits per pixel). Another parameter that measures the amount of compression is the Compression Ratio (CR), which is defined as

    CR = Original image size / Compressed image size        (6)
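    A short sketch, assuming 8-bit images held as NumPy arrays, of how the quality measures of equations (4) to (6) can be computed; it is illustrative only.

import numpy as np

def mse(original, reconstructed):
    # Mean square error between original f and reconstructed g, equation (4).
    diff = original.astype(float) - reconstructed.astype(float)
    return np.mean(diff ** 2)

def psnr(original, reconstructed):
    # Peak signal-to-noise ratio in dB for 8-bit images, equation (5).
    m = mse(original, reconstructed)
    return float('inf') if m == 0 else 10.0 * np.log10(255.0 ** 2 / m)

def compression_ratio(original_size_bytes, compressed_size_bytes):
    # Compression ratio CR = original image size / compressed image size, equation (6).
    return original_size_bytes / compressed_size_bytes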

  6. FPGA IMPLEMENTAION

    Xilinx Platform Studio (XPS) is an IDE used to develop EDK-based system designs. Designers use XPS to organize and assemble the hardware specification of their embedded system, and XPS converts the designer's platform specification into a synthesizable RTL description. VHDL is used to write a set of scripts that automate the implementation of the embedded system from RTL to the bit-stream file. XPS is a GUI that helps to specify the system, that is, which processors, memory blocks and other FPGA peripherals to use, how the different peripherals are connected, and finally the memory map, i.e. the addresses for memory-mapped I/O peripherals. XPS also interfaces with the tools used throughout the whole design flow. Three components are used in this design:

    • MicroBlaze processor

    • UART (serial port)

    • Memory block

    The standard C function printf generates large libraries which will not fit in the memory, so xil_printf is used instead. xil_printf is similar to printf but much smaller and lacks some features, such as floating-point support.

  7. CONCLUSION

As the block size of the image is increased, the performance of the algorithm degrades, i.e. the reconstructed image becomes blurred, but the memory space needed to store the image is much smaller. So if the user can compromise on image quality, a 16 × 16 block size takes the least memory space; if a balance between memory size and image quality is needed, a 4 × 4 block size is the best option. The PSNR is used as a measure of reconstructed image quality when comparing the original and reconstructed images. The results show that this technique gives good compression without degrading the reconstructed image.

REFERENCES

[1] Julio Cesar Stacchini de Souza, Tatiana Mariano Lessa Assis, and Bikash Chandra Pal, Data Compression in Smart Distribution Systems via Singular Value Decomposition, IEEE Transactions on Smart Grid, Vol. 8, NO. 1, January 2017.

[2] Sunwoong Kim and Hyuk-Jae Lee, RGBW Image Compression by Low-Complexity Adaptive Multi-Level Block Truncation Coding, IEEE Transactions on Consumer Electronics, Vol. 62, No. 4, November 2016.

[3] C. Senthil Kumar, Color and Multispectral Image Compression using Enhanced Block Truncation Coding [E-BTC] Scheme, IEEE WiSPNET, pp. 01-06, 2016.

[4] Jing-Ming Guo, Senior Member, IEEE, and Yun-Fu Liu, Member, IEEE, Improved Block Truncation Coding Using Optimized Dot Diffusion, IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 23, NO. 3, MARCH 2014.

[5] Jayamol Mathews, Madhu S. Nair, Modified BTC Algorithm for Gray Scale Images using max-min Quantizer, 978-1- 4673-5090-7/13/$31.00 ©2013 IEEE.

[6] Ki-Won Oh and Kang-Sun Choi, Parallel Implementation of Hybrid Vector Quantizerbased Block Truncation Coding for Mobile Display Stream Compression, IEEE ISCE 2014 1569954165.

[7] Seddeq E. Ghrare and Ahmed R. Khobaiz, Digital Image Compression using Block Truncation Coding and Walsh Hadamard Transform Hybrid Technique, 2014 IEEE 2014 International Conference on Computer, Communication, and Control Technology (I4CT 2014), September 2 – 4, 2014 – Langkawi, Kedah, Malaysia.

[8] M. Brunig and W. Niehsen, Fast full search block matching, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 11, pp. 241-247, 2001.

[9] K. W. Chan and K. L. Chan, Optimisation of multi-level block truncation coding, Signal Processing: Image Communication, Vol. 16, pp. 445-459, 2001.

[10] C. C. Chang and T. S. Chen, New tree-structured vector quantization with closed-coupled multipath searching method, Optical Engineering, Vol. 36, pp. 1713-1720, 1997.
