Review on Compression of Medical Images using Various Techniques

DOI : 10.17577/IJERTV3IS080880


Yashpreet Sain

Final Year M. Tech, Department of Computer Science, Asra College of Engineering & Technology, Bhawanigarh (Sangrur)

Abstract: Images need to be compressed in order to reduce storage space and to minimize transfer time over a network. Compression plays an especially vital role in the field of medical imaging, where a huge amount of storage is required to hold images and retrieve them later for diagnosis. In order to save storage space and network bandwidth, a suitable compression scheme needs to be implemented that achieves optimum compression levels without any loss of valuable information. In this paper I review the concept of lossless compression of medical images and present an introductory survey.

Keywords: Compression, Lossless Compression, Medical Images.


    Image compression is of two types: lossless compression (reversible) and lossy compression (irreversible). A lossless compression method involves no data loss but a much lower compression level. In lossy compression, data is discarded during compression and is not recoverable, but much greater compression is achieved than with lossless techniques. Wavelet coding and higher-compression JPEG are examples of lossy compression. Compression is an important process in medical imaging technologies and applications, which include tele-radiology, tele-consultation, e-health, tele-medicine and statistical medical data analysis. For tele-medicine, medical image compression (MIC) and analysis can be useful and play a vital role in the diagnosis of more sophisticated and complicated images through consultation with specialists.

    The application of digital imaging in the field of medicine is increasing rapidly, so hospitals need to store a large amount of data, of which medical images are an important part. As a result, hospitals hold a great bulk of images and require huge disk space and transmission bandwidth to store and move them. In most cases transmission bandwidth is not sufficient for all the image data. Image compression is the process of encoding information using fewer bits than an unencoded representation would use, through specific encoding schemes. Compression is useful because it helps decrease the consumption of expensive resources, such as hard disk space or transmission bandwidth. On the downside, compressed data must be decompressed before use, and this extra processing may be costly in some applications. For example, a compression method for images may require costly hardware for the image to be decompressed fast enough to be viewed as it is being decompressed. The design of data compression schemes therefore involves trade-offs among various factors, including the amount of compression, the amount of distortion introduced (if using a lossy compression scheme), and the computational resources required to compress and decompress the data. The use of computers and networks makes it easier to distribute the image data among the staff efficiently. As health care is computerized, new methods and applications are developed, among them the MR and CT techniques. MR and CT produce sequences of images (image stacks), each along a cross-section of an object. The amount of data produced by these techniques is enormous, and this might be a problem when sending the data over a network. To overcome this problem, image compression has been introduced in the medical field. The different types of biomedical images are:

    1. X-Ray Image

    2. CT Scan

    3. MRI Image

    4. Ultrasound Image

    5. Optical imaging

    6. Mammography Image

    Image compression is the reduction of the size in bytes of a graphics file without degrading the quality of the image to an unacceptable level. The reduction in file size allows more images to be stored in a given amount of disk or memory space. It also reduces the time needed for images to be sent over the Internet or downloaded from Web sites. There are several different methods by which image files can be compressed. For Internet use, the two most common formats are JPEG and GIF. The JPEG method is more often used for photographs, while the GIF method is usually used for line art and other images that contain geometric shapes. Other techniques for image compression include the use of fractals and wavelets. These methods have not gained widespread acceptance for use on the Internet as of this writing.

    Both of these approaches offer promise, however, because they can achieve higher compression ratios than the JPEG or GIF methods for some types of images. Another new method that may in time replace the GIF format is the PNG format.


    A wavelet is a "small wave": a waveform of limited duration that has an average value of zero. A wave, in itself, refers to a function that is oscillatory. Wavelets are functions used to represent data or other functions in mathematical terms. Furthermore, wavelet analysis can perform local analysis; that is, it can analyse a localized area of a larger signal. The essential steps in wavelet compression are performing a discrete wavelet transform (DWT), quantization of the wavelet-space image sub-bands, and then encoding these sub-bands. Wavelet images by themselves are not compressed images; rather, it is the quantization and encoding stages that perform the compression. Image decompression, or reconstruction, is accomplished by carrying out the above steps in reverse. Wavelets are created from one single function (the local function) called the mother wavelet. There are numerous members of the wavelet family that are generally found to be useful; for example, the Haar wavelet is one of the oldest and simplest. A wavelet can be chosen in light of its shape and its capability to analyse the signal in a particular application.

    1. Haar - This wavelet is discontinuous and resembles a step function.

    2. Coiflets - The wavelet function has 2n moments equal to 0 and the scaling function has 2n-1 moments equal to 0. The two functions have a support of length 6n-1.

    3. Symlets - The symlets are nearly symmetrical wavelets. The properties of the symlet and Daubechies wavelet families are similar.

    4. Meyer - The Meyer wavelet and scaling function are defined in the frequency domain.

    5. Biorthogonal - This family of wavelets exhibits the property of linear phase, which is needed for signal and image reconstruction. By using two wavelets, one for decomposition and the other for reconstruction, instead of the same single one, interesting properties are derived.

    6. Daubechies - Daubechies wavelets are compactly supported orthonormal wavelets that have found application in the DWT. The family has nine members.
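The DWT-quantize-encode pipeline described above can be sketched in pure Python using the simplest member of the list, the Haar wavelet. This is an illustrative single-level 1-D sketch, not a full codec; the signal values are made up for the example.

```python
import math

def haar_dwt(signal):
    """One level of the 1-D Haar DWT: pairwise sums (approximation)
    and differences (detail), scaled to keep the transform orthonormal."""
    s = 1 / math.sqrt(2)
    approx = [(a + b) * s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt: perfect reconstruction from both sub-bands."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) * s, (a - d) * s])
    return out

def quantize(coeffs, step):
    """Uniform scalar quantization: this stage, not the DWT itself,
    is where information is discarded (lossy compression)."""
    return [round(c / step) * step for c in coeffs]

signal = [9, 7, 3, 5, 6, 10, 2, 6]
approx, detail = haar_dwt(signal)
# Lossless round trip: reconstruction matches the input exactly.
exact = haar_idwt(approx, detail)
# Lossy variant: quantizing the detail band shrinks the data to encode.
lossy = haar_idwt(approx, quantize(detail, 4.0))
```

Note that the transform itself is invertible; only the quantization step loses information, which matches the point made above that wavelet images by themselves are not compressed images.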


    Image compression can be lossy or lossless. Lossless compression is preferred for artificial images such as technical drawings, icons or comics, because lossy methods, especially when used at low bit rates, introduce compression artefacts. Lossless compression methods are also preferred for high-value content, such as medical imaging or image scans made for archival purposes. Lossy methods are commonly used for natural images, such as photographs, in applications where minor (sometimes imperceptible) loss of fidelity is acceptable to achieve a substantial reduction in bit rate.


    Lossy image compression: A lossy compression technique is one where compressing data and then decompressing it retrieves data that may differ from the original, but is close enough to be useful in some way [11][13]. Lossy compression is generally used for visual and audio data, especially in applications such as streaming media and internet telephony.

    By contrast, lossless compression is required for text and data files, such as bank records and text articles. Lossy compression formats suffer from generation loss: repeatedly compressing and decompressing the file causes it to progressively lose quality. This is in contrast with lossless data compression.

    Lossless Image Compression: Lossless or reversible compression refers to compression methods in which the reconstructed data exactly matches the original. Such compression systems give the assurance that no pixel difference between the original and the compressed image exceeds a certain limit. Lossless compression finds prospective applications in remote sensing, medical and space imaging, and multispectral image archiving. In these applications the volume of data would otherwise call for lossy compression for practical storage and transmission.

    Another approach to the lossy-lossless dilemma faced in applications such as medical imaging and remote sensing is to use a progressively refinable compression technique that produces a bit stream leading to a progressive reconstruction of the image. Using wavelets, for instance, one can obtain an embedded bit stream from which various levels of rate and distortion can be obtained. Many such techniques have been found for possible use in tele-radiology, where a specialist typically requests regions of an image at enhanced quality while receiving initial versions and insignificant parts at lower quality, thereby lowering the overall bandwidth requirements. Indeed, the new still image coding standard, JPEG 2000, provides such features in its extended form.
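The lossless/lossy distinction can be made concrete with Python's standard zlib module standing in for a real image codec. The pixel data here is synthetic, and the "drop the low bits" scheme is only a stand-in for a proper lossy coder, but the round-trip behaviour it shows is the same.

```python
import zlib

# A synthetic 8-bit "image" row: a smooth ramp plus a repeating texture.
pixels = bytes((i // 4 + (i % 7)) % 256 for i in range(4096))

# Lossless: zlib (DEFLATE) round-trips the data bit-for-bit.
packed = zlib.compress(pixels, level=9)
assert zlib.decompress(packed) == pixels  # no information lost

# A crude lossy scheme: drop the 4 low bits of every pixel before
# entropy coding. The reconstruction is only an approximation ...
coarse = bytes(p & 0xF0 for p in pixels)
packed_lossy = zlib.compress(coarse, level=9)
# ... but the coarser data is more redundant and compresses smaller.
print(len(pixels), len(packed), len(packed_lossy))
```

The generation-loss point above also follows from this sketch: re-running the lossless path any number of times returns identical bytes, while each pass of a lossy quantizer can discard further information.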

    The different types of image compression systems are described below:

      1. Joint Photographic Experts Group (JPEG)

        JPEG stands for Joint Photographic Experts Group, the original name of the committee that wrote the standard. JPEG is intended for compressing full-color or grey-scale images of natural, real-world scenes. It works well on photographs, naturalistic artwork, and similar material; not so well on lettering, simple cartoons, or line drawings. JPEG handles only still images, but there is a related standard called MPEG for motion pictures. JPEG is "lossy," meaning that the decompressed image is not quite the same as the one you started with. JPEG is designed to exploit known limitations of the human eye, notably the fact that small color changes are perceived less accurately than small changes in brightness. Thus, JPEG is intended for compressing images that will be looked at by humans. If you plan to machine-analyse your images, the small errors introduced by JPEG may be a problem for you, even if they are invisible to the eye. A useful property of JPEG is that the degree of information loss can be varied by adjusting the compression parameters. This means that the image maker can trade off file size against output image quality. You can make extremely small files if you don't mind poor quality; this is useful for applications such as indexing image archives. Conversely, if you aren't happy with the output quality at the default compression setting, you can raise the quality until you are satisfied, and accept lesser compression. Another important aspect of JPEG is that decoders can trade off decoding speed against image quality, by using fast but inaccurate approximations to the required calculations. (Encoders can also trade accuracy for speed, but there is usually less reason to do so when writing a file.)
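The quality-versus-size trade-off described above happens at JPEG's quantization stage, which follows an 8x8 block transform. The sketch below implements the orthonormal DCT-II in pure Python to show where that knob sits; it deliberately omits JPEG's actual quantization tables, zig-zag ordering and entropy coding, and the flat test block is a made-up example.

```python
import math

N = 8  # JPEG works on 8x8 pixel blocks

def dct_1d(v):
    """Orthonormal 1-D DCT-II, the transform at the heart of JPEG."""
    out = []
    for k in range(N):
        c = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(c * sum(v[n] * math.cos(math.pi * (n + 0.5) * k / N)
                           for n in range(N)))
    return out

def dct_2d(block):
    rows = [dct_1d(r) for r in block]        # transform each row,
    cols = [dct_1d(c) for c in zip(*rows)]   # then each column
    return [list(r) for r in zip(*cols)]

# A flat mid-grey block: all its energy ends up in the DC coefficient.
flat = [[128] * N for _ in range(N)]
coeffs = dct_2d(flat)

def quantize(coeffs, step):
    """A larger step (lower 'quality') zeroes more high-frequency
    coefficients, which is what shrinks the encoded file."""
    return [[round(c / step) for c in row] for row in coeffs]

quantized = quantize(coeffs, 16)  # only the DC term survives here
```

Raising the quantization step is the coarse-grained equivalent of lowering the JPEG quality setting: more coefficients collapse to zero, and the subsequent entropy coder has less to encode.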

      2. Set Partitioning in Hierarchical Trees (SPIHT)

        SPIHT is a wavelet-based image compression system. It provides maximum image quality, progressive image transmission, a fully embedded coded file, a simple quantization algorithm, fast coding/decoding, complete adaptivity, lossless compression, exact bit rate coding and error protection [6][11]. SPIHT makes use of three lists: the List of Significant Pixels (LSP), the List of Insignificant Pixels (LIP) and the List of Insignificant Sets (LIS). These are coefficient location lists that hold their coordinates. After initialization, the algorithm takes two passes for each threshold level: the sorting pass (in which the lists are organized) and the refinement pass (which does the actual progressive coding transmission). The result is a bit stream.

        It codes four coefficients and then shifts to the next four; consequently, it views the four coefficients as a block. Taking their maximum as the considered threshold reduces the number of inspections, which is related to the distribution of the coefficient matrix. Thus it can clearly reduce the number of comparisons when scanning and coding zero trees. The coefficients in a non-significant block are coded in a later scanning pass rather than in the present one, so the coefficients coded earlier are handled more efficiently than the non-significant ones. Generally, wavelet transform coding for still images uses SPIHT [8]. The algorithm can be illustrated as follows: first, the original image matrix undergoes the wavelet transform; the output wavelet coefficients are then quantized and encoded by the SPIHT coder; after that, the bit stream is obtained (Figure I: Wavelet transform image coding using SPIHT). Traditional SPIHT has the advantages of an embedded code stream structure, high compression rate, low complexity and ease of implementation [9]. However, several imperfections still exist.
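The progressive, embedded nature of the bit stream can be illustrated with a bit-plane coding sketch. This is not the list-based SPIHT algorithm itself (no LIP/LSP/LIS and no zero-trees); it only shows why sending coefficients threshold by threshold lets a truncated stream still reconstruct a usable approximation. The coefficient values are made up.

```python
# Sketch of SPIHT's core idea: coefficients are transmitted bit-plane
# by bit-plane, so truncating the stream at any point still yields a
# coarser but valid reconstruction.
coeffs = [63, -34, 49, 10, 7, 13, -12, 7]

n = max(abs(c) for c in coeffs).bit_length() - 1  # top bit-plane
recon = [0] * len(coeffs)
for plane in range(n, -1, -1):
    threshold = 1 << plane
    for i, c in enumerate(coeffs):
        if abs(c) & threshold:  # this coefficient's bit is set here
            recon[i] += threshold if c >= 0 else -threshold
    # After each pass `recon` is a progressively better approximation;
    # stopping early corresponds to decoding at a lower bit rate.
```

After the last pass the reconstruction is exact, which mirrors SPIHT's lossless mode: coding all bit-planes of the transform recovers every coefficient.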

        Advantages of SPIHT

        The effective wavelet-based image compression strategy is called Set Partitioning in Hierarchical Trees (SPIHT). The SPIHT technique is not a straightforward extension of traditional image compression methods, but represents an important advance in the field. The method deserves special attention because it provides the following:

        1. Highest image quality

        2. Progressive image transmission

        3. Fully embedded coded file

        4. Simple quantization algorithm

        5. Fast coding/decoding

        6. Completely adaptive

        7. Lossless compression

        8. Exact bit rate coding

        9. Error protection

      3. Improved Set Partitioning in Hierarchical Trees (ISPIHT)

        ISPIHT is a wavelet-based image compression method that provides the highest image quality [1]. The improved SPIHT algorithm essentially makes the following changes. SPIHT is capable of recovering the image perfectly (every single bit of it) by coding all bits of the transform. On the other hand, the wavelet transform yields perfect reconstruction only if its coefficients are stored as infinite-precision numbers. Peak signal-to-noise ratio (PSNR) is one of the quantitative measures for image quality evaluation; it is based on the mean square error (MSE) of the reconstructed image. The MSE for an M x N image is given by:

        MSE = (1/MN) * sum_{i=0..m-1} sum_{j=0..n-1} [I(i,j) - K(i,j)]^2

        where I(i,j) is the original image data and K(i,j) is the compressed image value. The formula for PSNR is given by:

        PSNR = 10 log10((255)^2 / MSE)

        Fig 1: Block diagram of ISPIHT

      There are various terms that are used in the calculation of image compression. Some are listed below:

        1. Peak signal to noise ratio

          The phrase peak signal-to-noise ratio, often abbreviated PSNR, is an engineering term for the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation [7][9]. Since many signals have a wide dynamic range, PSNR is typically expressed in terms of the logarithmic decibel scale.

          The PSNR is most commonly used as a measure of the quality of reconstruction in image compression. It is most easily defined through the mean squared error (MSE), which for two m×n monochrome images I and K, where one of the images is considered a noisy approximation of the other, is defined as:

          MSE = (1/mn) * sum_{i=0..m-1} sum_{j=0..n-1} [I(i,j) - K(i,j)]^2

          The PSNR is defined as:

          PSNR = 10 log10(MAX_I^2 / MSE) = 20 log10(MAX_I / sqrt(MSE))

          Here, MAX_I is the maximum possible pixel value of the image. When the pixels are represented using 8 bits per sample, this is 255. More generally, when samples are represented using linear PCM with B bits per sample, MAX_I is 2^B - 1.

          For color images with three RGB values per pixel, the definition of PSNR is the same except that the MSE is the sum over all squared value differences divided by the image size and by three.

          An image identical to the original will yield an undefined PSNR, as the MSE becomes zero due to the absence of error. In this case the PSNR value can be thought of as approaching infinity as the MSE approaches zero; this shows that a higher PSNR value corresponds to a higher image quality. At the other end of the scale, an image that comes out with all zero-valued pixels (black) compared to the original does not give a PSNR of zero [8][10]. This can be seen by observing the form of the MSE equation once again: not all the original values will be a long distance from zero, so the PSNR of the image with all pixels at a value of zero is not the worst possible case.
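The MSE and PSNR definitions above translate directly into code. The tiny 2x2 "images" below are made-up values purely to exercise the formulas, including the undefined (infinite) PSNR for identical images.

```python
import math

def mse(img_i, img_k):
    """Mean squared error between two equal-sized greyscale images,
    given as 2-D lists of pixel values (the I and K of the text)."""
    m, n = len(img_i), len(img_i[0])
    return sum((img_i[r][c] - img_k[r][c]) ** 2
               for r in range(m) for c in range(n)) / (m * n)

def psnr(img_i, img_k, max_i=255):
    """PSNR in dB; undefined (treated as infinite) for identical images."""
    e = mse(img_i, img_k)
    return math.inf if e == 0 else 10 * math.log10(max_i ** 2 / e)

original = [[52, 55], [61, 59]]
degraded = [[52, 57], [60, 59]]   # two pixels off by 2 and by 1
print(round(psnr(original, degraded), 2))  # → 47.16
```

For 8-bit images, typical lossy reconstructions land in the 30-50 dB range, which is why the value above reads as a high-quality approximation.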

        2. Signal-to-noise ratio

          Signal-to-noise ratio (SNR) is an electrical engineering concept, also used in other fields (such as scientific measurements and biological cell signalling), defined as the ratio of a signal power to the power of the noise corrupting the signal. In less technical terms, signal-to-noise ratio compares the level of a desired signal (such as music) to the level of background noise [31]. The higher the ratio, the less obtrusive the background noise is. In engineering, the signal-to-noise ratio is the power ratio between a signal (meaningful information) and the background noise:

          SNR = P_signal / P_noise = (A_signal / A_noise)^2

          Here P is an average power and A is an RMS amplitude. Both signal and noise power (or amplitude) must be measured at the same or equivalent points in a system, and within the same system bandwidth.

          Because many signals have a very wide dynamic range, SNRs are usually expressed in terms of the logarithmic decibel scale. In decibels, the SNR is, by definition, 10 times the logarithm of the power ratio. If the signal and the noise are measured across the same impedance, then the SNR can also be obtained by calculating 20 times the base-10 logarithm of the amplitude ratio:

          SNR(dB) = 10 log10(P_signal / P_noise) = 20 log10(A_signal / A_noise)

          In image processing, the SNR of an image is usually defined as the ratio of the mean pixel value to the standard deviation of the pixel values. Related measures are the "contrast ratio" and the "contrast-to-noise ratio". The connection between optical power and voltage in an imaging system is linear, which usually means that the SNR of the electrical signal is calculated by the 10 log rule [8]. With an interferometry system, however, where interest lies in the signal from one arm only, the field of the electromagnetic wave is proportional to the voltage (assuming that the intensity in the second, reference arm is constant). Therefore the optical power of the measurement arm is directly proportional to the electrical power, and electrical signals from an optical interferometer follow the 20 log rule. The Rose criterion (named after Albert Rose) states that an SNR of at least 5 is needed to distinguish image features with 100% certainty; an SNR less than 5 means less than 100% certainty in identifying image details.

        3. Mean Square Error

          In statistics, the mean square error (MSE) of an estimator is one of many ways to quantify the amount by which an estimator differs from the true value of the quantity being estimated. As a loss function, MSE is called squared error loss. MSE measures the average of the square of the "error" [9][10]. The error is the amount by which the estimator differs from the quantity to be estimated. The difference occurs because of randomness or because the estimator does not account for information that could produce a more accurate estimate [22]. The MSE is the second moment (about the origin) of the error, and thus incorporates both the variance of the estimator and its bias. For an unbiased estimator, the MSE is the variance. Like the variance, MSE has the same units of measurement as the square of the quantity being estimated. In an analogy to standard deviation, taking the square root of MSE yields the root mean square error (RMSE), which has the same units as the quantity being estimated; for an unbiased estimator, the RMSE is the square root of the variance, known as the standard error.

          Definition and basic properties: the MSE of an estimator θ' with respect to the estimated parameter θ is defined as

          MSE(θ') = E[(θ' - θ)^2]

          The MSE can be written as the sum of the variance and the squared bias of the estimator:

          MSE(θ') = Var(θ') + (Bias(θ', θ))^2

          The MSE thus assesses the quality of an estimator in terms of its variation. In a statistical model where the estimate is unknown, the MSE is a random variable whose value must be estimated. This is usually done by the sample mean

          MSE(θ') ≈ (1/n) * sum_{j=1..n} (θ'_j - θ)^2

          with θ'_j being realizations of the estimator of size n.

        4. Medical Image Compression with Region of Interest

          In many medical applications, compression is required for fast interactivity while browsing through large sets of images (e.g. volumetric data sets, time sequences of images, image databases), for searching context-dependent detailed image structures, and/or for quantitative analysis of the measured data. In medical imaging, the loss of any information when storing or transmitting an image is unbearable [10]. There is a broad range of medical image sources, and for most of them discarding small image details that might be an indication of pathology could alter a diagnosis, causing severe human and legal consequences. Data transmission with prioritization of regions of interest (ROIs), and thus inherent support for lossy coding, is just as important. Fast image inspection of large volumes of images transmitted over low-bandwidth channels like ISDN, the public switched telephone network, or satellite networks (traditionally known as teleradiology) requires compression schemes with such progressive transmission capabilities. Moreover, optimal rate-distortion performance over the complete range of bit-rates requested by the application should also be considered. Additionally, the increasing use of three-dimensional imaging modalities, like Magnetic Resonance Imaging (MRI), Computerized Tomography (CT) and Ultrasound (US), triggers the need for efficient techniques to transport and store the related volumetric data. In order to decrease the volume of medical images, a context- and ROI-based approach is introduced. Images can be split into:

          1. Foreground

          2. Background.

      Foreground (PROI)

      The PROI is the main region of an image, which can be annotated manually by a radiologist, as illustrated in Fig. 2, or automatically by a computer-aided detection (CAD) system [10]. The PROI contains important information, so this region should be kept without any loss of information. As described above, measuring the stenosis plays an important role in the diagnosis of PAD. For this reason, in our application, the stenosis is a good candidate for the PROI [10]. Therefore a lossless region-based technique is applied to compress this part of the images.

      Fig 2: PROI

      Background region

      All the area outside the human body is background, as shown in Fig. 3. It does not carry relevant data, so it is compressed with a very high compression rate.

      Fig 3: Background Region
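A minimal sketch of the PROI/background split, assuming a synthetic image and a hand-made rectangular mask (a real PROI would come from a radiologist or a CAD system): the foreground stream round-trips losslessly, while the background keeps only its two most significant bits before entropy coding.

```python
import zlib

SIZE = 16  # a toy 16x16 8-bit greyscale image

# Hypothetical image (2-D list of pixel values) and a binary mask
# marking the diagnostically important region (PROI).
image = [[(r * SIZE + c) % 256 for c in range(SIZE)] for r in range(SIZE)]
mask = [[1 if 4 <= r < 12 and 4 <= c < 12 else 0 for c in range(SIZE)]
        for r in range(SIZE)]

def compress_roi(image, mask):
    """Two-stream sketch: the PROI is kept bit-exact, the background
    is coarsely quantized before entropy coding (zlib stands in for a
    real coder)."""
    fg = bytes(image[r][c] for r in range(SIZE) for c in range(SIZE)
               if mask[r][c])          # lossless foreground stream
    bg = bytes(image[r][c] & 0xC0 for r in range(SIZE) for c in range(SIZE)
               if not mask[r][c])      # keep only the top 2 bits
    return zlib.compress(fg, 9), zlib.compress(bg, 9)

fg_stream, bg_stream = compress_roi(image, mask)
# The foreground round-trips exactly; the background does not need to.
```

A real system would also have to transmit the mask itself (or the parameters of the annotated region) so that the decoder can merge the two streams back into one image.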


Seeing the importance of the concept of compression in medical imaging technologies, I have studied the area and presented this paper. The paper gives a basic introduction to the techniques and to the viewpoints of different researchers working towards the same objective. As future work, I intend to develop code to compress medical images using suitable coding algorithms and to determine the best one after a comparison among them all.


  1. Yumnam Kirani Singh, "ISPIHT - Improved SPIHT: A simplified and efficient subband coding scheme", Centre for Development of Advanced Computing, Plot E-2/1, Block GP, Sector V, Salt Lake Electronics Complex.

  2. H. L. Xu and S. H. Zhong, "Image Compression Algorithm of SPIHT Based on Block-Tree", Journal of Hunan Institute of Engineering, vol. 19(1), pp. 58-61, 2009.

  3. F. W. Wheeler and W. A. Pearlman, "SPIHT Image Compression without Lists", IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP 2000), Istanbul: IEEE, 2000, pp. 2047-2050.

  4. Jianxiong Wang, "Study of the Image Compression based on SPIHT Algorithm", College of Water Resources & Hydropower and Architecture, Yunnan Agriculture University, Kunming.

  5. Min Hu, Changjiang Zhang, Juan Lu and Bo Zhou, "A Multi-ROIs Medical Image Compression Algorithm with Edge Feature Preserving", Zhejiang Normal University.

  6. Jia ZhiGang, Guo XiaoDong and Li LinSheng, "A Fast Image Compression Algorithm Based on SPIHT", College of Electronic and Information Engineering, TaiYuan University of Science and Technology, TaiYuan, ShanXi, China.

  7. Chunlei Jiang, "A Hybrid Image Compression Algorithm Based on Human Visual System", Electrical and Information Engineering College, Northeast Petroleum University, Daqing, Heilongjiang Province.

  8. Stephan Rein and Martin Reisslein, "Performance evaluation of the fractional wavelet filter: A low-memory image wavelet transform for multimedia sensor networks", Telecommunication Networks Group, Technical University Berlin; School of Electrical, Computer, and Energy Engineering, Arizona State University, Tempe, AZ, United States.

  9. J. Jyotheswar and Sudipta Mahapatra, "Efficient FPGA implementation of DWT and modified SPIHT for lossless image compression", Department of Electronics and Electrical Communication Engineering, IIT Kharagpur, Kharagpur 721 302, West Bengal, India.

  10. Peter Schelkens, Adrian Munteanu and Jan Cornelis, "Wavelet-based compression of medical images: Protocols to improve resolution and quality scalability and region-of-interest coding", Department of Electronics and Information Processing (ETRO), Vrije Universiteit Brussel, Pleinlaan.

  11. Lalitha Y. S. and M. V. Latte, "Lossless and Lossy Compression of DICOM images With Scalable ROI", Appa Institute of Engineering & Technology, Gulbarga, Karnataka, India; JSS Institute & Academy, Bangalore, India.
